\subsection{Isolated S(Se)-edge band in MoSe$_{2}$, WS$_{2}$ and WSe$_{2}$ from DFT calculations}
\begin{figure}[tbph]
\centering
\includegraphics[width=0.99\textwidth]{mos2_cmb.eps}\newline
\caption{Left: DFT MoS$_{2}$ zigzag ribbon band structure, without SOC. Right: zoom-in of the S-edge band bottom, with SOC.}
\end{figure}
\begin{figure}[tbph]
\centering
\includegraphics[width=0.99\textwidth]{mose2_cmb.eps}\newline
\caption{Left: DFT MoSe$_{2}$ zigzag ribbon band structure, without SOC. Right: zoom-in of the Se-edge band bottom, with SOC.}
\end{figure}
\begin{figure}[tbph]
\centering
\includegraphics[width=0.99\textwidth]{ws2_cmb.eps}\newline
\caption{Left: DFT WS$_{2}$ zigzag ribbon band structure, without SOC. Right: zoom-in of the S-edge band bottom, with SOC.}
\end{figure}
\end{document}
\section{Introduction} \smallskip The question of the existence of conformal metrics of constant, or more generally prescribed, curvature on Riemannian manifolds is a recurrent problem in differential geometry and geometric analysis. Indeed, a positive or negative answer to such a question has far-reaching consequences for the geometry and topology of the underlying manifold. The Poincar\'e uniformization theorem on closed surfaces, the Yamabe problem on Riemannian manifolds of dimension $n \geq 3$ and Nirenberg's problem on the standard spheres $\mathbb{S}^n$, to name just a few, are well-known and well-studied mathematical problems. \\ A similar question, which goes back to Picard \cite{Picard1, Picard2}, deals with the existence of conformal metrics of constant or prescribed curvature on surfaces with conical singularities. After the pioneering work of Picard at the beginning of the last century, this problem was systematically investigated by Berger \cite{Berger}, McOwen \cite{McOwen, McOwen2} and Troyanov \cite{T,T2}. More recently, Bartolucci-De Marchis-Malchiodi \cite{BdM} used a Morse-theoretical approach to prove further existence and multiplicity results.\\ In this paper, the first of a series, we address the problem of the existence of conformal conical metrics of constant, or more generally prescribed, $Q$-curvature on four-dimensional Riemannian manifolds. In the following we explain the geometric context of this problem in some detail. \\ Given a compact four-dimensional Riemannian manifold $(M,g)$, the $Q$-curvature and the Paneitz operator are defined, respectively, by \begin{equation}\label{Q-curvature} Q_g=-\frac{1}{12}\big(\Delta_g R_g-R_g^2+3|{\rm Ric}_g|^2\big), \end{equation} \begin{equation}\label{Peneitz} P_g \varphi=\Delta_g^2\varphi+{\rm div}_g\Big(\big(\frac{2}{3}R_g g-2{\rm Ric}_g\big)\nabla \varphi\Big), \end{equation} where ${\rm Ric}_g$ is the Ricci tensor and $R_g$ is the scalar curvature of $(M,g)$.
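As a quick illustration of (\ref{Q-curvature}), consider the round sphere $(\mathbb{S}^4,g_0)$ of constant sectional curvature $1$: there $R_{g_0}=12$, ${\rm Ric}_{g_0}=3g_0$ and $|{\rm Ric}_{g_0}|^2=36$, so \begin{equation*} Q_{g_0}=-\frac{1}{12}\big(0-144+108\big)=3, \end{equation*} and, since ${\rm vol}(\mathbb{S}^4)=\frac{8\pi^2}{3}$, the total $Q$-curvature of the round sphere equals $8\pi^2$; this threshold value will appear repeatedly below.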
\\ Similarly to second-order equations, a natural question is the following uniformization statement: \textit{given a four-dimensional Riemannian manifold $(M,g)$, is there a metric $\tilde{g}=e^{2u}g$ in the conformal class of $g$ with constant $Q$-curvature?} Under the conformal change of metric above, the Paneitz operator is conformally covariant: \begin{equation}\label{conformal-P} P_{\tilde{g}}\varphi=e^{-4u}P_g\varphi, \end{equation} and the $Q$-curvature of $\tilde{g}$ is given by \begin{equation}\label{conformal-Q} P_gu+2Q_g=2Q_{\tilde{g}}e^{4u}. \end{equation} From (\ref{conformal-P}) and (\ref{conformal-Q}), the question above is equivalent to the existence of a solution of the fourth-order equation \begin{equation}\label{Q-equation-reg} P_gu+2Q_g=2\bar{Q}e^{4u}, \end{equation} where $\bar{Q}$ is a real constant.\\ Integrating with respect to the volume element ${\rm d}V_g$, we see that \begin{equation*} \kappa_P=\int_{M}Q_g{\rm d}V_g \end{equation*} is a constant in the conformal class of $g$. Here we also point out the Gauss-Bonnet-Chern formula, which links the local curvature to the global topology of $M$: \begin{equation*} \int_{M}\Big(Q_g+\frac{1}{8}|W_g|^2\Big){\rm d}V_g=4\pi^2\chi_M, \end{equation*} where $W_g$ denotes the Weyl tensor of $(M,g)$ and $\chi_M$ is the Euler characteristic of $M$. From this equality and the aforementioned conformal covariance property it is not hard to imagine that $P_g$ and $Q_g$ are related to a number of studies such as Moser-Trudinger type inequalities, log-determinant formulas and the compactification of locally flat manifolds; see \cite{Beckner,Branson-Chang-Yang,Branson-Oersted,Chang-Qing-Yang-Invent,Chang-Qing-Yang-Duke,Chang-Yang}. In many of these studies the kernel of $P_g$ is assumed to consist only of constants: \begin{equation*}\label{P-assumption} {\rm Ker}\,(P_g)=\{{\rm constants}\}.
\leqno(P) \end{equation*} In this paper we consider the following prescribed $Q$-curvature equation involving singular sources: \begin{equation}\label{Q-singular} P_gu+2Q_g=2h e^{4u}-8\pi^2\sum_{j=1}^{N}\gamma_j\Big(\delta_{q_j}-\frac{1}{{\rm vol}_g(M)}\Big), \end{equation} where $h$ is a smooth positive function, $N\in \mathbb N$ is a positive integer, $q_1,\cdots,q_N$ are the distinct points on $M$ at which the Dirac measures $\delta_{q_j}$ are located, and $\gamma_j>-1$ are constants.\\ Solutions to \eqref{Q-singular} have the following geometric interpretation: setting $\tilde{g}:= e^{2u} g$ we obtain a metric conformal to $g$ on $M \setminus \{q_1, \cdots,q_N \}$ which has a conical singularity at each $q_i$. One says that $\tilde{g}$ is represented by the divisor $D:= \sum_{i=1}^{N} \gamma_i q_i$; see Fang-Ma \cite{Fang-Ma}. Furthermore, due to the Gauss-Bonnet-Chern formula for conic four-manifolds, see \cite{Chang-Qing-Yang-Duke}, \cite{Chang-Qing-Yang-Invent}, \cite{BN19}, we have that $$ \tilde{\kappa}_P \, := \int_{M}Q_{\tilde{g}}{\rm d}V_{\tilde{g}} \, = \, \int_{M}Q_g{\rm d}V_g \, + \, 8 \pi^2 \sum_{i=1}^{N} \gamma_i $$ is a conformal invariant.\\ Considerable progress has been made for the regular case of (\ref{Q-singular}), that is, $N=0$. Under the assumption that the kernel of the Paneitz operator contains only constants, Chang-Yang \cite{Chang-Yang} proved existence for $\kappa_P<8\pi^2$, and Djadli-Malchiodi \cite{Djadli-Malchiodi-2008} settled the case $\kappa_P\neq 8\pi^2n$ for any $n\in \mathbb N$. Li-Li-Liu \cite{Li-Li-Liu} gave a necessary condition for existence in the case $\kappa_P = 8 \pi^2$, Ahmedou-Ndiaye \cite{Ahmedou-Ndiaye} developed a Morse theory for the case $\kappa_P=8\pi^2n$, and Ndiaye \cite{Ndiaye-2} combined the celebrated topological argument of Bahri-Coron \cite{Bahri-Coron} with the \emph{critical point theory at infinity} of \cite{Ahmedou-Ndiaye} to derive some existence results.
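We remark that integrating (\ref{Q-singular}) over $M$ makes the singular terms disappear: $P_g$ is self-adjoint with $P_g1=0$, so $\int_M P_gu\,{\rm d}V_g=0$, while $\int_M\big(\delta_{q_j}-\frac{1}{{\rm vol}_g(M)}\big)\,{\rm d}V_g=1-1=0$. Hence every solution of (\ref{Q-singular}) satisfies \begin{equation*} \int_M h e^{4u}\,{\rm d}V_g=\int_M Q_g\,{\rm d}V_g=\kappa_P. \end{equation*}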
We point out that an essential estimate behind the proof in \cite{Djadli-Malchiodi-2008} is the a priori estimate, proved by Malchiodi in \cite{Malchiodi}, valid when $\kappa_P$ stays away from $8\pi^2 \mathbb N$. Later, in \cite{Druet-Robert}, Druet and Robert extended this a priori estimate to the following more general equation in the same class: \begin{equation}\label{Q-equ-gen-reg} P_g u+2b=2he^{4u}, \end{equation} where $b$ is a smooth function; if $b=Q_g$, then $h$ is the $Q$-curvature of the conformal metric $e^{2u}g$. More precisely, assuming $h_k\to h_0$, $h_k\geq c_0>0$ and $b_k \to b_0$, any sequence of solutions $\{u_k\}$ of (\ref{Q-equ-gen-reg}) with $h=h_k$ and $b=b_k$ is uniformly bounded under the condition $\int_{M}b_0{\rm d}V_g\neq8\pi^2n$; see also Malchiodi \cite{Malchiodi}. However, bubbling can occur when $\int_{M}b_0{\rm d}V_g=8\pi^2n$ for some positive integer $n$. The understanding of this bubbling phenomenon is vital for the existence problem. The study of the blow-up profile and of other blow-up phenomena for the Paneitz operator and related elliptic equations has attracted much interest recently, and the references are too numerous to list exhaustively; we mention only a few that, in our humble opinion, are closely related to our article: \cite{Struwe-Robert, Brendle,Chang-Qing-Yang-3,Djadli-Malchiodi-2005,Fefferman,Gursky-Viaclovsky, hyder, Li-Li-Liu,Malchiodi-2,Malchiodi-Struwe,Ndiaye,Qing-Raske,Wei-1996,Wei-Xu,zhang-weinstein}. In particular, in \cite{zhang-weinstein}, the third named author and Weinstein obtained sharp estimates on the difference, near the blow-up points, between a bubbling sequence of solutions to (\ref{Q-equ-gen-reg}) with $h=h_k$ and $b=b_k$ and the standard bubbles, and obtained the vanishing rate without assuming that $(M,g)$ is locally conformally flat.
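In the regular case the standard bubbles mentioned above are explicit: pulling back the round metric of $\mathbb{S}^4$ by stereographic projection and rescaling gives, for $\lambda>0$, \begin{equation*} U_{\lambda}(x)=\log\frac{2\lambda}{1+\lambda^2|x|^2}, \qquad \Delta^2 U_{\lambda}=6e^{4U_{\lambda}} \ \ {\rm in} \ \; \mathbb{R}^4, \qquad \int_{\mathbb{R}^4}6e^{4U_{\lambda}}{\rm d}x=16\pi^2, \end{equation*} so that each bubble carries the quantized amount $16\pi^2$ of the total integral $2\int_M h_ke^{4u_k}{\rm d}V_g$, which is consistent with the critical values $8\pi^2n$ of $\int_Mb_0{\rm d}V_g$.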
\medskip When taking the singularities into account as in (\ref{Q-singular}), we consider the following more general singular equation: \begin{equation}\label{Q-equ-gen-singular} P_gu+2b=2he^{4u}-8\pi^2\sum_{j=1}^{N}\gamma_j\Big(\delta_{q_j}-\frac{1}{{\rm vol}_g(M)}\Big), \end{equation} where $h$ is a positive smooth function on $M$ and $b\in C^1(M)$. Before stating our first main result, we define a {\itshape critical set} $\Gamma$ as follows: \begin{equation*} \Gamma=\Big\{16\pi^2n+16\pi^2\sum_{j\in J}(1+\gamma_j):\ \,n\in\mathbb{N}\cup\{0\}\ \,{\rm and}\ \,J\subset\{1,\cdots,N\}\Big\}. \end{equation*} In order to obtain the a priori estimates and existence results, we mainly study the blow-up phenomena for (\ref{Q-equ-gen-singular}). Let us consider the following equations: \begin{equation}\label{Q-equation-blowup} P_gu_k+2b_k=2h_ke^{4u_k}-8\pi^2\sum_{j=1}^{N}\gamma_j\Big(\delta_{q_j}-\frac{1}{{\rm vol}_g(M)}\Big)\quad {\rm in }\ \, M, \end{equation} with normalized total integration: \begin{equation}\label{volume-normal} \int_M e^{4u_k}{\rm d}V_g=1. \end{equation} Let $\{u_k\}$ be a sequence of solutions of (\ref{Q-equation-blowup}). We say that $p$ is a blowup point of $\{u_k\}$ if there exists a sequence $p_k\to p$ such that $$u_k(p_k)+8\pi^2\sum_{j=1}^N\gamma_jG(p_k,q_j)\to \infty. $$ The sequence $\{u_k\}$ is called a sequence of blowup solutions if it has a blowup point. Here $G(x,p)$ is the Green's function of $P_g$ defined in (\ref{Green-func-expression}). For blowup solutions we assume that the coefficient functions are regular enough to have limits: \begin{equation}\label{assumption-coe} \parallel b_k-b_0\parallel_{C^1(M)}\to 0,\quad \parallel h_k-h_0\parallel_{C^1(M)}\to 0,\quad 0<c_0<h_0<1/c_0.
\end{equation} Without loss of generality, we assume that the integral of $h_ke^{4u_k}$ is $1$. Our first main result asserts that an a priori estimate holds for $u_k$ as long as $2\int_Mb_k{\rm d}V_g$ stays away from the critical set $\Gamma$ defined above. \begin{thm}[A Priori Estimate]\label{thm-apriori-est} Suppose that (P) holds and that $b_k$ and $h_k$ satisfy (\ref{assumption-coe}). If $\{u_k\}$ is a sequence of solutions of (\ref{Q-equation-blowup}) under the restriction (\ref{volume-normal}) and $\int_M2b_0{\rm d}V_g\in\mathbb{R}^+\setminus \Gamma$, then $$\Big|u_k(x)+8\pi^2\sum_{j=1}^N\gamma_j G(x,q_j) \Big|\le C,\quad \forall x\in M $$ for some $C>0$ independent of $k$. \end{thm} In particular, the a priori estimate holds for the singular prescribed $Q$-curvature equation. Indeed, Theorem \ref{thm-apriori-est} is an extension of previous results of Malchiodi \cite{Malchiodi}, Druet-Robert \cite{Druet-Robert} and Fardoun-Regbaoui \cite{fard} for the regular prescribed $Q$-curvature equation. We point out that the argument in the regular case uses in a crucial way the explicit form of the bubble, while our argument uses only the asymptotic behavior of the bubble and is based on a Pohozaev identity for equations in conformal normal coordinates (see \cite{zhang-weinstein}).\\ One indispensable part of the blowup analysis for the $Q$-curvature equation is the classification of global solutions on $\mathbb R^4$. For this purpose we consider the limiting equation used to describe the profile of bubbling solutions: \begin{equation}\label{equ-liou-2} \Delta^2 u(x)=6|x|^{4\gamma}e^{4u(x)}, \quad {\rm in} \ \; \mathbb{R}^4, \qquad |x|^{4\gamma}e^{4u(x)}\in L^1(\mathbb{R}^4).
\end{equation} Clearly if $u$ is a solution of (\ref{equ-liou-2}), so is $u_{\lambda}$ defined by \begin{equation}\label{rescale} u_{\lambda}(x)=u(\lambda x)+(1+\gamma)\log \lambda \end{equation} for any given $\lambda>0$. Our next main result is \begin{thm}\label{thm-classification} Suppose that $u$ is a solution of (\ref{equ-liou-2}) with $\gamma>-1$ and $|u(x)|=o(|x|^2)$ at infinity. Then \begin{itemize} \item [(i)] $\int_{\mathbb{R}^4}6|y|^{4\gamma} e^{4u(y)}{\rm d}y=16\pi^2(1+\gamma)$. \item [(ii)] \begin{equation} \begin{split} u(x)=&\frac{3}{4\pi^2}\int_{\mathbb{R}^4}\log\big(\frac{|y|}{|x-y|}\big)|y|^{4\gamma}e^{4u(y)}{\rm d}y+C_0 \\ =&-2(1+\gamma)\log|x|+c+O(\frac{1}{|x|}), \quad |x|>1 \end{split} \end{equation} for some $C_0,c\in \mathbb R$, \begin{equation} \left\{\begin{array}{ll} -\Delta u(x)=\frac{4(1+\gamma)}{|x|^2}+O(\frac{1}{|x|^{2+\tau}}), \\ -\frac{\partial}{\partial x_i}\Delta u(x)=-8(1+\gamma)\frac{x_i}{|x|^4}+O(\frac{1}{|x|^{3+\tau}}),\\ -\frac{\partial^2}{\partial x_i\partial x_j}\Delta u(x)=O(\frac{1}{|x|^4}), \end{array} \right. \end{equation} where $\tau=1$ if $\gamma>-3/4$, and $\tau\in(0,1)$ if $-1<\gamma\leq-3/4$. \item [(iii)] Furthermore, if $-3/4<\gamma<0$, $u$ is radially symmetric about the origin and is unique up to scaling in (\ref{rescale}). \end{itemize} \end{thm} Theorem \ref{thm-classification} is a quantization result that also determines the asymptotic behavior of $u$ at infinity under a sub-quadratic growth condition. If the $o(|x|^2)$ assumption is removed, we have \begin{thm}\label{thm-classification-2} Let $u$ be a solution of (\ref{equ-liou-2}) with $\gamma>-1$. 
Then after an orthogonal transformation, $u(x)$ can be represented by \begin{equation}\label{rep-u-2} \begin{split} u(x)=&\frac{3}{4\pi^2}\int_{\mathbb{R}^4}\log\big(\frac{|y|}{|x-y|}\big)|y|^{4\gamma}e^{4u(y)}{\rm d}y-\sum_{j=1}^{4}a_j(x_j-x_j^0)^2+c_0 \\ =&-\sum_{j=1}^{4}a_j(x_j-x_j^0)^2-2(1+\gamma)\log|x|+c_0+O(|x|^{-\tau}) \end{split} \end{equation} for some $\tau>0$ and large $|x|$. The function $\Delta u$ satisfies \begin{equation}\label{lap-u-2} \Delta u(x)=-\frac{3}{2\pi^2}\int_{\mathbb{R}^4}\frac{1}{|x-y|^2}|y|^{4\gamma}e^{4u(y)}{\rm d}y-2\sum_{j=1}^{4}a_j, \end{equation} where $a_j\geq 0$ and $c_0$ are constants and $x^0=(x_1^0,\cdots,x_4^0)\in\mathbb{R}^4$. Moreover, if $-3/4<\gamma<0$, $u$ is symmetric with respect to the hyperplane $\{x:x_i=0\}$ whenever $a_i x_i^0=0$. In particular, under the assumption $-3/4<\gamma<0$, if $a_i x_i^0=0$ for all $i=1,\cdots,4$, then $u$ is radially symmetric with respect to the origin. \end{thm} When we were about to finish writing this article, we learned that Jevnikar-Sire-Yang \cite{yang-wen} have been working independently on a similar project and that their results will be posted soon. Here we briefly outline the strategy of the proofs in our paper. For the proof of the classification result for the globally defined singular equation, we follow the argument of Lin \cite{lin-classification}, but we need to take care of all the complications caused by the singular source. In particular, we are able to prove the complete classification for $\gamma\in (-\frac 34,0]$ and a quantization result for all $\gamma>-1$. For blowup solutions we first use a small-energy lemma (Lemma \ref{lem-small-mass-regular}) to prove that there are at most finitely many blowup points. Then we take advantage of a Pohozaev identity established by Weinstein-Zhang \cite{zhang-weinstein} to describe the precise asymptotic behavior of blowup solutions around each blowup point; from this, the total integration as well as the precise asymptotic behavior of the solutions can be further determined.
With this information the critical set $\Gamma$ can be identified, and if the total integral of $b$ does not correspond to a number in the critical set we obtain the a priori estimate. Here it is worth remarking that Theorem \ref{SHT} asserts that if the strength of the singular source is not an integer, the blowup solutions satisfy a \emph{spherical Harnack inequality} around $q$; this is a new phenomenon which we shall explore in a forthcoming work. The idea of the proof of such a spherical Harnack inequality is as follows: if the spherical Harnack inequality is violated, there should be finitely many small bubbling disks around the singular source, all tending to it. Around each tiny bubbling disk there is a Pohozaev identity, and a ``big'' circle that encloses all these tiny disks also carries a Pohozaev identity. The comparison of these Pohozaev identities implies that the strength of the singular source has to be an integer. \bigskip The organization of the remainder of this paper is as follows. In Section \ref{entire} we analyze the globally defined solutions and prove the quantization and classification results stated in Theorem \ref{thm-classification}. Then in Section \ref{preliminaries} we list some useful facts about conformal normal coordinates and the Pohozaev identity, and in Section \ref{blowup-local} we perform a blow-up analysis near singular points. Section \ref{CC-Apriori} is devoted to a concentration-compactness theorem and the a priori estimate for the singular prescribed $Q$-curvature equation on 4-manifolds, while in Section \ref{harnack} we prove our spherical Harnack inequality. Finally, we provide in the appendix a useful estimate of the difference between the geodesic distance and the Euclidean one for nearby points on the manifold.
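For later reference, we record two elementary facts. First, the invariance (\ref{rescale}) can be verified directly: if $u$ solves (\ref{equ-liou-2}), then \begin{equation*} \Delta^2u_{\lambda}(x)=\lambda^4(\Delta^2u)(\lambda x)=6\lambda^{4+4\gamma}|x|^{4\gamma}e^{4u(\lambda x)}=6|x|^{4\gamma}e^{4u_{\lambda}(x)}, \end{equation*} and the change of variables $y=\lambda x$ gives $\int_{\mathbb{R}^4}|x|^{4\gamma}e^{4u_{\lambda}}{\rm d}x=\int_{\mathbb{R}^4}|y|^{4\gamma}e^{4u}{\rm d}y$, so the total integral in Theorem \ref{thm-classification}(i) is scale invariant. Second, we will repeatedly use the elementary identities in $\mathbb{R}^4$ \begin{equation*} \Delta\log|x|=\frac{2}{|x|^2},\qquad \Delta\Big(\frac{1}{|x|^2}\Big)=-4\pi^2\delta_0,\qquad \Delta^2\log|x|=-8\pi^2\delta_0, \end{equation*} which in particular explain the constants appearing in the integral representations below.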
\section[Entire solutions]{Entire solutions of fourth order singular Liouville type equations in $\mathbb{R}^4$}\label{entire} In this section, we follow the argument of Lin \cite{lin-classification} to analyze solutions of (\ref{equ-liou-2}) and prove Theorem \ref{thm-classification} and Theorem \ref{thm-classification-2}. \subsection{Asymptotic behavior of entire solutions} \quad Our argument is progressive in nature: we first obtain a rough estimate of $u$ at infinity. For this purpose we set \begin{equation}\label{v-def} v(x):=\frac{3}{4\pi^2}\int_{\mathbb{R}^4}\log\big(\frac{|x-y|}{|y|}\big)|y|^{4\gamma}e^{4u(y)}{\rm d}y, \end{equation} which is obviously a solution of \begin{equation}\label{v-equ} \Delta^2v(x)=-6|x|^{4\gamma}e^{4u(x)},\quad{\rm in} \ \; \mathbb{R}^4. \end{equation} The asymptotic behavior of $u$ has much to do with that of $v$, so in the first lemma we derive a rough upper bound for $v$. For convenience we set \begin{equation}\label{energy-R4} \alpha=\frac{3}{4\pi^2}\int_{\mathbb R^4}|y|^{4\gamma}e^{4u}dy. \end{equation} \begin{lem}\label{lem-v-upper} Suppose $u$ is a solution of (\ref{equ-liou-2}) and let $\alpha$ be given as in (\ref{energy-R4}). Then \begin{equation}\label{v-upper} v(x)\leq\alpha\log|x|+C \end{equation} for some constant $C$. \end{lem} \begin{proof}[\textbf{Proof}] Since the goal is to describe asymptotic behavior, it is natural to assume $|x|>4$. For such $x$, we decompose $\mathbb{R}^4=A_1\cup A_2$, where \begin{equation*} A_1=\Big\{y:|y-x|\leq\frac{|x|}{2}\Big\},\quad A_2=\Big\{y:|y-x|\geq\frac{|x|}{2}\Big\}. \end{equation*} For $y\in A_1$, $\log\frac{|x-y|}{|y|}\leq 0$ because $|y|\geq|x|-|x-y|\geq\frac{|x|}{2}\geq|x-y|$. Thus \begin{equation*} \int_{A_1}\log\big(\frac{|x-y|}{|y|}\big)|y|^{4\gamma} e^{4u(y)}{\rm d}y\leq 0 \end{equation*} and \begin{equation*} v(x)\leq\frac{3}{4\pi^2}\int_{A_2}\log\big(\frac{|x-y|}{|y|}\big)|y|^{4\gamma} e^{4u(y)}{\rm d}y.
\end{equation*} To evaluate the integral over $A_2$, we first make two trivial observations: $$|x-y|\leq|x|+|y|\leq|x||y|, \quad \mbox{if} \quad |y|\ge 2, $$ $$\log|x-y|\leq\log|x|+C, \quad \mbox{if }\quad |y|\leq 2, $$ where $|x|>4$ is used. Consequently \begin{equation*} \begin{split} v(x)&\leq\frac{3}{4\pi^2}\int_{A_2}\log\big(\frac{|x-y|}{|y|}\big)|y|^{4\gamma} e^{4u(y)}{\rm d}y\\ &\leq\frac{3}{4\pi^2}\Big\{\log|x|\int_{A_2\cap\{|y|\geq2\}}|y|^{4\gamma} e^{4u}{\rm d}y+\int_{A_2\cap\{|y|\leq2\}}\log\big(\frac{|x-y|}{|y|}\big)|y|^{4\gamma} e^{4u}{\rm d}y\Big\}\\ &\leq\frac{3}{4\pi^2}\Big\{\log|x|\int_{A_2}|y|^{4\gamma} e^{4u}{\rm d}y+C\int_{A_2\cap\{|y|\leq2\}}|y|^{4\gamma} e^{4u}{\rm d}y\\ &\qquad\quad-\int_{A_2\cap\{|y|\leq2\}}\big(\log|y|\big)|y|^{4\gamma} e^{4u}{\rm d}y\Big\}\\ &\leq\frac{3}{4\pi^2}\log|x|\int_{\mathbb{R}^4}|y|^{4\gamma} e^{4u}{\rm d}y+C. \end{split} \end{equation*} Lemma \ref{lem-v-upper} is established. \end{proof} Before proving a lower bound for $v(x)$ we derive an expression for $\Delta u(x)$ in integral form. \begin{lem}\label{lem-lap-u} Suppose $u$ is a solution of (\ref{equ-liou-2}). Then there exists a constant $C_1\geq 0$ such that \begin{equation}\label{lap-u} \Delta u(x)=-\frac{3}{2\pi^2}\int_{\mathbb{R}^4}\frac{1}{|x-y|^2}|y|^{4\gamma}e^{4u(y)}{\rm d}y-C_1. \end{equation} \end{lem} \begin{proof}[\textbf{Proof}] Let $w(x)=u(x)+v(x)$. Then from the equations of $u$ and $v$ in (\ref{equ-liou-2}) and (\ref{v-equ}), we have $\Delta^2w=0$ in $\mathbb{R}^4$. Hence, $\Delta w$ is a harmonic function in $\mathbb{R}^4$. By the mean value property of harmonic functions, we have, for any $x_0\in \mathbb{R}^4$ and $r>0$, \begin{equation*} \begin{split} \Delta w(x_0)=\frac{2}{\pi^2 r^4}\int_{B(x_0,r)}\Delta w(y){\rm d}y=\frac{1}{2\pi^2 r^3}\int_{\partial B(x_0,r)}\Delta w(y){\rm d}\sigma, \end{split} \end{equation*} where $\frac{\pi^2}{2}$ and $2\pi^2$ are, respectively, the volume of the unit ball and the area of the unit sphere in $\mathbb{R}^4$.
That is \begin{equation}\label{lap-w} \frac{r}{4}\Delta w(x_0)=\dashint_{|y-x_0|=r}\frac{\partial w}{\partial r}(y){\rm d}\sigma, \end{equation} where $\dashint_{|y-x_0|=r}f(y){\rm d}\sigma=\frac{1}{2\pi^2r^3}\int_{|y-x_0|=r}f(y){\rm d}\sigma$ denotes the integral average of $f$ over $\partial B(x_0,r)$. Then integrating the identity above along $r$, we get \begin{equation*} \frac{r^2}{8}\Delta w(x_0)=\dashint_{|y-x_0|=r}w{\rm d}\sigma-w(x_0). \end{equation*} Therefore, Jensen's inequality implies \begin{equation*} \begin{split} \exp\Big(\frac{r^2}{2}\Delta w(x_0)\Big)&\leq e^{-4w(x_0)}\exp\Big(4\dashint_{|y-x_0|=r} w {\rm d}\sigma\Big)\\ &\leq e^{-4w(x_0)}\dashint_{|y-x_0|=r} e^{4w} {\rm d}\sigma. \end{split} \end{equation*} From Lemma \ref{lem-v-upper}, we have $w(x)=u(x)+v(x)\leq u(x)+\alpha\log|x|+C$, and as a consequence \begin{equation*} \begin{split} &\int_{0}^{\infty}r^{3-4\alpha+4\gamma}\exp\big(\frac{\Delta w(x_0)}{2}r^2\big){\rm d}r \leq\int_{\mathbb{R}^4}|x|^{-4\alpha+4\gamma}e^{-4w(x_0)}e^{4w(x)}{\rm d}x\\ \leq& C\int_{\mathbb{R}^4}|x|^{-4\alpha+4\gamma}e^{4u(x)}|x|^{4\alpha}{\rm d}x =C\int_{\mathbb{R}^4}|x|^{4\gamma}e^{4u}{\rm d}x<+\infty, \end{split} \end{equation*} which means $$r^{3-4\alpha+4\gamma}\exp\Big(\frac{\Delta w(x_0)}{2}r^2\Big)\in L^1\big([1,+\infty)\big).$$ From here we see $\Delta w(x_0)\leq 0$ for all $x_0\in\mathbb{R}^4$. Liouville's theorem then implies that there exists some constant $C_1\geq0$ such that $\Delta w(x)\equiv-C_1$ in $\mathbb{R}^4$. Lemma \ref{lem-lap-u} follows from this and \begin{equation*} \Delta v(x)=\frac{3}{2\pi^2}\int_{\mathbb{R}^4}\frac{1}{|x-y|^2}|y|^{4\gamma}e^{4u(y)}{\rm d}y. \end{equation*} \end{proof} With the help of the representation of $\Delta u$, we can estimate $v(x)$ from below for $|x|$ large. We will use the following result, Lemma 2.3 of \cite{lin-classification}.
\medskip Let $h(x)$ be the solution of \begin{equation*} \left\{\begin{array}{lcl} \Delta^2 h(x)=f(x), && {\rm in} \ \; \Omega, \\ \Delta h(x)=h(x)=0, && {\rm on} \ \; \partial\Omega, \end{array} \right. \end{equation*} where $\Omega$ is a bounded domain of $\mathbb{R}^4$. \begin{lemA}\label{lem-BM}\cite{lin-classification} Suppose $f\in L^1(\bar{\Omega})$. Then for any $\delta\in(0,32\pi^2)$, there exists a constant $C_{\delta}>0$ such that \begin{equation*} \int_{\Omega}\exp\Big(\frac{\delta|h|}{\parallel f\parallel_{L^1}}\Big){\rm d}x\leq C_{\delta}({\rm diam} \; \Omega)^4, \end{equation*} where ${\rm diam} \;\Omega$ denotes the diameter of $\Omega$. \end{lemA} \begin{lem}\label{lem-v-lower} Let $u$ be a solution of (\ref{equ-liou-2}) and let $v$ be as in (\ref{v-def}). Then for any given $\varepsilon>0$, there exists a constant $R=R(\varepsilon)$ depending only on $\varepsilon$ such that \begin{equation}\label{v-lower} v(x)\geq(\alpha-\varepsilon)\log|x|, \quad |x|>R(\varepsilon). \end{equation} \end{lem} \begin{proof}[\textbf{Proof}] We first prove a claim slightly weaker than (\ref{v-lower}): for any $\varepsilon>0$, there exists $R=R(\varepsilon)>0$ such that \begin{equation}\label{v-lower-rough} v(x)\geq(\alpha-\frac{\varepsilon}{2})\log|x| +\frac{3}{4\pi^2}\int_{B(x,1)}(\log|x-y|)|y|^{4\gamma}e^{4u(y)}{\rm d}y. \end{equation} To prove (\ref{v-lower-rough}) we consider $\mathbb R^4$ as a disjoint union of three sets: $\mathbb{R}^4=A_1\cup A_2\cup A_3$, where \begin{align*} A_1&=\{y:|y|<R_0\}, \\ A_2&=\{y:|x-y|\leq |x|/2,|y|\geq R_0\}, \\ A_3&=\{y:|x-y|\geq |x|/2,|y|\geq R_0\}.
\end{align*} Then we choose $R_0=R_0(\varepsilon)$ sufficiently large so that \begin{equation*} \begin{split} &\frac{3}{4\pi^2}\int_{A_1}\log\big(\frac{|x-y|}{|y|}\big)|y|^{4\gamma}e^{4u(y)}{\rm d}y-\alpha \log|x|\\ \geq&\frac{3}{4\pi^2}\log|x|\int_{A_1}\frac{\log|x-y|-\log|x|-\log|y|}{\log|x|}|y|^{4\gamma}e^{4u(y)}{\rm d}y-\frac{\varepsilon}{8} \log|x| \\ \geq& -\frac{\varepsilon}{4} \log|x| \end{split} \end{equation*} for large $|x|$. Thus we have \begin{equation}\label{int-A1} \frac{3}{4\pi^2}\int_{A_1}\log\big(\frac{|x-y|}{|y|}\big)|y|^{4\gamma}e^{4u(y)}{\rm d}y\geq (\alpha-\frac{\varepsilon}{4} )\log|x|. \end{equation} For $y\in A_2$ and $|x|$ large, we have $\frac{|x|}{2}\leq |y|\leq\frac{3}{2}|x|$. Then \begin{equation}\label{int-A2} \begin{split} &\int_{A_2}\log\big(\frac{|x-y|}{|y|}\big)|y|^{4\gamma}e^{4u(y)}{\rm d}y \\ =&\int_{A_2}(\log|x-y|)|y|^{4\gamma}e^{4u(y)}{\rm d}y-\int_{A_2}(\log|y|)|y|^{4\gamma}e^{4u(y)}{\rm d}y \\ \geq& \int_{B(x,1)}(\log|x-y|)|y|^{4\gamma}e^{4u(y)}{\rm d}y-\log(2|x|)\int_{A_2}|y|^{4\gamma}e^{4u(y)}{\rm d}y. \end{split} \end{equation} For $y\in A_3$, we use two trivial inequalities: $|x-y|\geq\frac{|x|}{2}\geq\frac{|y|}{4}$ if $|y|\le 2|x|$, and $|x-y|\geq|y|-|x|\geq\frac{|y|}{2}$ if $|y|\ge 2|x|$. Clearly, in both cases we have \begin{equation*} \frac{|x-y|}{|y|}\geq\frac{1}{4},\quad y\in A_3. \end{equation*} Therefore, \begin{equation}\label{int-A3} \frac{3}{4\pi^2}\int_{A_3}\log\big(\frac{|x-y|}{|y|}\big)|y|^{4\gamma}e^{4u(y)}{\rm d}y\geq \log\frac{1}{4}\int_{A_3}|y|^{4\gamma}e^{4u(y)}{\rm d}y. \end{equation} From (\ref{int-A1}), (\ref{int-A2}), (\ref{int-A3}) and $|y|^{4\gamma}e^{4u(y)}\in L^1(\mathbb{R}^4)$, we obtain (\ref{v-lower-rough}). \medskip Next, we show that \begin{equation}\label{int-B_x1-v} \int_{B(x,1)}(\log|x-y|)|y|^{4\gamma}e^{4u(y)}{\rm d}y\geq-C \end{equation} for some positive constant $C$.
For this purpose we set \begin{equation} \tilde{u}(x)=u(x)+\gamma\log |x|. \end{equation} Then $\tilde{u}$ satisfies \begin{equation}\label{equ-liou} \left\{\begin{array}{lcl} \Delta^2 \tilde{u}(x)=6e^{4\tilde{u}(x)}-8\pi^2\gamma\delta_0, && {\rm in} \ \; \mathbb{R}^4, \\ e^{4\tilde{u}}\in L^1(\mathbb{R}^4). && \quad \end{array} \right. \end{equation} Let $0<\varepsilon_0<\pi^2$ and let $R_0=R_0(\varepsilon_0)$ be sufficiently large such that \begin{equation}\label{energy-B4} 6\int_{B(x,4)}|y|^{4\gamma}e^{4u(y)}{\rm d}y=6\int_{B(x,4)}e^{4\tilde{u}(y)}{\rm d}y\leq\varepsilon_0, \quad {\rm for}\ \, |x|\geq R_0. \end{equation} Then we let $h$ be the solution of \begin{equation*} \left\{\begin{array}{lcl} \Delta^2 h(y)=6e^{4\tilde{u}(y)}, && {\rm in} \ \; B(x,4), \\ h(y)=\Delta h(y)=0, && {\rm on} \ \; \partial B(x,4). \end{array} \right. \end{equation*} From Lemma \ref{lem-BM}, we can see that for $\varepsilon_0>0$ small, \begin{equation}\label{int-h-B4} \int_{B(x,4)}e^{24|h|}{\rm d}y\leq c_1 \end{equation} for some constant $c_1$ independent of $x$. Next we set $q(y)=\tilde{u}(y)-h(y)$ for $y\in B(x,4)$, which clearly satisfies \begin{equation*} \left\{\begin{array}{lcl} \Delta^2 q(y)=0, && {\rm in} \ \; B(x,4), \\ q(y)=\tilde{u}(y),\quad \Delta q(y)=\Delta\tilde{u}(y), && {\rm on} \ \; \partial B(x,4). \end{array} \right. \end{equation*} Let $\tilde{q}(y)=-\Delta q(y)$. Then by Lemma \ref{lem-lap-u}, we see that for $|x|$ large enough and $y\in\partial B(x,4)$, \begin{equation*} \begin{split} \tilde{q}(y)=&-\Delta \tilde{u}(y)=-\Delta u(y)-\gamma\Delta(\log |y|)=-\Delta u(y)-\frac{2\gamma}{|y|^2}. \end{split} \end{equation*} By setting \begin{equation*} \hat{q}(y)=\tilde{q}(y)+\frac{2\gamma}{|y|^2}=-\Delta u(y)=\frac{3}{2\pi^2}\int_{\mathbb{R}^4}\frac{1}{|z-y|^2}|z|^{4\gamma}e^{4u(z)}{\rm d}z+C_1, \end{equation*} we obviously have $\hat{q}(y)>0$ on $\partial B(x,4)$, and hence $\tilde{q}(y)>-2\gamma/|y|^2$ on $\partial B(x,4)$.
Observe that $1/|y|^2$ is, up to a constant multiple, the fundamental solution of $\Delta$ in $\mathbb{R}^4$; in particular it is harmonic away from the origin. Hence, for $|x|$ large, $\hat{q}(y)$ is harmonic in $B(x,4)$ with positive boundary value on $\partial B(x,4)$. The maximum principle implies $\hat{q}>0$ in $B(x,4)$. Thus, by the Harnack inequality and the mean value property of harmonic functions, we have \begin{equation}\label{est-lap-q} \begin{split} \tilde{q}(y)&=\hat{q}(y)-\frac{2\gamma}{|y|^2} \\ &\leq c_2\hat{q}(x)-\frac{2\gamma}{|y|^2} =-c_2\dashint_{\partial B(x,4)}\Delta u{\rm d }\sigma-\frac{2\gamma}{|y|^2}\leq-c_2\dashint_{\partial B(x,4)}\Delta \tilde{u}{\rm d }\sigma+\frac{C}{|x|^2},\quad y\in\overline{B(x,2)}, \end{split} \end{equation} with constants $c_2$ and $C$. \medskip Next, since (\ref{equ-liou}) gives $\Delta^2\tilde{u}=6e^{4\tilde{u}}$ in $B(x,r)$ for $0<r\leq 4$ and $|x|$ large, averaging over spheres centered at $x$ yields \begin{equation*} \frac{{\rm d}}{{\rm d}s}\dashint_{\partial B(x,s)}\Delta \tilde{u}\,{\rm d}\sigma=\frac{1}{2\pi^2 s^3}\int_{B(x,s)}\Delta^2\tilde{u}\,{\rm d}y=\frac{3}{\pi^2 s^3}\int_{B(x,s)}e^{4\tilde{u}}\,{\rm d}y. \end{equation*} Integrating this identity in $s$ from $0$ to $r$ and using Fubini's theorem, we obtain \begin{equation}\label{ave-int-tildeu} \dashint_{\partial B(x,r)}\Delta \tilde{u}{\rm d}\sigma-\Delta\tilde{u}(x)=\frac{3}{2\pi^2}\int_{ B(x,r)}\Big(\frac{1}{|x-y|^2}-\frac{1}{r^2}\Big)e^{4\tilde{u}}{\rm d}y.
\end{equation} Next, by Lemma \ref{lem-lap-u} and (\ref{ave-int-tildeu}), we can see \begin{equation*} \begin{split} &-\dashint_{\partial B(x,r)}\Delta \tilde{u}{\rm d}\sigma=-\Delta\tilde{u}(x)-\frac{3}{2\pi^2}\int_{ B(x,r)}\frac{1}{|x-y|^2}e^{4\tilde{u}}{\rm d}y+ \frac{3}{2\pi^2r^2}\int_{ B(x,r)}e^{4\tilde{u}}{\rm d}y\\ =&-\Delta u(x)-\frac{3}{2\pi^2}\int_{ B(x,r)}\frac{1}{|x-y|^2}e^{4\tilde{u}}{\rm d}y+ \frac{3}{2\pi^2r^2}\int_{ B(x,r)}e^{4\tilde{u}}{\rm d}y-\frac{2\gamma}{|x|^2}\\ =&\frac{3}{2\pi^2}\int_{|x-y|\geq r}\frac{1}{|x-y|^2}e^{4\tilde{u}}{\rm d}y+ \frac{3}{2\pi^2r^2}\int_{ B(x,r)}e^{4\tilde{u}}{\rm d}y-\frac{2\gamma}{|x|^2}+C_1. \end{split} \end{equation*} In particular, for $r=4$ and $|x|$ large, \begin{equation}\label{est-lap-aveu} -\dashint_{\partial B(x,4)}\Delta \tilde{u}{\rm d}\sigma\leq c_3. \end{equation} Hence, from (\ref{est-lap-q}) and (\ref{est-lap-aveu}), we get \begin{equation}\label{est-lap-q-2} \tilde{q}(y)\leq c_4,\,\quad y\in\overline{B(x,2)}, \end{equation} and immediately, since $\hat{q}>0$, $$|\tilde{q}(y)|\leq c_5,\,\quad y\in\overline{B(x,2)}.$$ Since $q$ satisfies \begin{equation*} \left\{\begin{array}{lcl} \Delta q(y)=-\tilde{q}(y), && {\rm in} \ \; B(x,4), \\ q(y)=\tilde{u}(y), && {\rm on} \ \; \partial B(x,4), \end{array} \right. \end{equation*} by estimates for linear elliptic equations we have, for any $p>1$ and $\sigma>2$, \begin{equation}\label{est-sup-q} \sup_{B(x,1)} q\leq c\big(\parallel q^{+}\parallel_{L^p(B(x,2))}+\parallel \tilde{q}\parallel_{L^{\sigma}(B(x,2))}\big), \end{equation} where $c=c(p,\sigma)$. On the other hand, we observe that $q^{+}(y)\leq\tilde{u}^{+}(y)+|h(y)|$ for $y\in B(x,4)$.
Then by (\ref{int-h-B4}), we get \begin{equation*} \begin{split} \int_{ B(x,2)}(q^{+})^p\leq c_6\int_{ B(x,2)}e^{2q^{+}}\leq c_6\Big(\int_{ B(x,2)}e^{4\tilde{u}^{+}}\Big)^{\frac{1}{2}}\Big(\int_{ B(x,2)}e^{4|h|}\Big)^{\frac{1}{2}}\leq c_7\Big(\int_{ B(x,2)}e^{4\tilde{u}^{+}}\Big)^{\frac{1}{2}}. \end{split} \end{equation*} Since $e^{4\tilde{u}^{+}}\leq 1+e^{4\tilde{u}}$, we have $\parallel q^{+}\parallel_{L^p(B(x,2))}\leq c_7$, which together with (\ref{est-lap-q-2}) and (\ref{est-sup-q}) implies \begin{equation}\label{est-sup-q-2} \sup_{B(x,1)} q\leq c_8. \end{equation} In view of $\tilde{u}=h+q$, we now obtain \begin{equation} \tilde{u}(y)=h(y)+q(y)\leq c_8+|h(y)|,\quad y\in\overline{B(x,2)}. \end{equation} Therefore, \begin{equation}\label{int-24u} \int_{ B(x,1)}e^{24\tilde{u}}\leq c_9 \int_{ B(x,1)}e^{24|h|}{\rm d}y\leq c_{10}. \end{equation} Then \begin{equation*} \Big|\int_{ B(x,1)}(\log|x-y|)e^{4\tilde{u}}{\rm d}y\Big|\leq\Big(\int_{ B(x,1)}(\log|x-y|)^2{\rm d}y\Big)^{\frac{1}{2}}\Big(\int_{ B(x,1)}e^{8\tilde{u}(y)}{\rm d}y\Big)^{\frac{1}{2}}\leq c_{11}, \end{equation*} which means \begin{equation}\label{int-minor-term} \Big|\int_{B(x,1)}(\log|x-y|)|y|^{4\gamma}e^{4u(y)}{\rm d}y\Big|\leq c_{11}, \end{equation} where $c_{11}$ is a constant independent of $x$ (for $|x|$ large). As a consequence, (\ref{v-lower-rough}) and (\ref{int-minor-term}) lead to \begin{equation*} v(x)\geq(\alpha-\frac{\varepsilon}{2})\log|x|-c_{11}\geq(\alpha-\varepsilon)\log|x| \end{equation*} for $|x|$ large, which is (\ref{v-lower}). \medskip To estimate $\Delta v$, we observe that \begin{equation*} \Delta v(x)=-\frac{3}{2\pi^2}\int_{\mathbb{R}^4}\frac{1}{|x-y|^2}|y|^{4\gamma}e^{4u(y)}{\rm d}y=-\frac{3}{2\pi^2}\int_{\mathbb{R}^4}\frac{1}{|x-y|^2}e^{4\tilde{u}(y)}{\rm d}y. \end{equation*} For $|x|$ large, we decompose $\mathbb{R}^4=B_1\cup B_2\cup B_3$, where \begin{align*} B_1&=\{y:|x-y|\geq|x|/2\}, \\ B_2&=\{y:1\leq |x-y|\leq|x|/2,|y|\geq R_0\}, \\ B_3&=\{y:|x-y|\leq 1\}. 
\end{align*} Then \begin{equation*} \int_{B_1}\frac{1}{|x-y|^2}e^{4\tilde{u}(y)}{\rm d}y\leq\frac{C}{|x|^2}\to 0,\quad {\rm as}\ \, |x|\to\infty, \end{equation*} \begin{equation*} \int_{B_2}\frac{1}{|x-y|^2}e^{4\tilde{u}(y)}{\rm d}y\leq\int_{B_2}e^{4\tilde{u}(y)}{\rm d}y\to 0,\quad {\rm as}\ \, |x|\to\infty, \end{equation*} and \begin{equation*} \begin{split} &\int_{B_3}\frac{1}{|x-y|^2}e^{4\tilde{u}(y)}{\rm d}y \\ \leq&\Big( \int_{ |x-y|\leq 1}\Big(\frac{1}{|x-y|^2}\Big)^{\frac{8}{5}}{\rm d}y\Big)^{\frac{5}{8}}\Big( \int_{ |x-y|\leq 1}e^{4\tilde{u}}{\rm d}y\Big)^{\frac{1}{4}}\Big( \int_{ |x-y|\leq 1}e^{24\tilde{u}}{\rm d}y\Big)^{\frac{1}{8}} \\ \leq& c\Big( \int_{ |x-y|\leq 1}e^{4\tilde{u}}{\rm d}y\Big)^{\frac{1}{4}}\to 0,\quad {\rm as}\ \, |x|\to\infty, \end{split} \end{equation*} where we have used H\"older's inequality and (\ref{int-24u}). Lemma \ref{lem-v-lower} is established. \end{proof} With the estimates of $v(x)$ near infinity and the expression of $\Delta u$, we can derive an integral representation of $u$ under the condition $|u(x)|=o(|x|^2)$ at $\infty$: \begin{lem}\label{lem-rep-u} Suppose $|u(x)|=o(|x|^2)$ at $\infty$. Then there exists some constant $C_0$ such that \begin{equation}\label{rep-u} u(x)=\frac{3}{4\pi^2}\int_{\mathbb{R}^4}\log\big(\frac{|y|}{|x-y|}\big)|y|^{4\gamma}e^{4u(y)}{\rm d}y+C_0. \end{equation} Furthermore, for any given $\varepsilon>0$, \begin{equation}\label{u-est} -\alpha\log|x|-C\leq u(x)\leq(-\alpha+\varepsilon)\log|x|,\quad |x|\geq R(\varepsilon), \end{equation} where $R(\varepsilon)$ comes from Lemma \ref{lem-v-lower}. \end{lem} \begin{proof}[\textbf{Proof}] We start from the integral expression of $\Delta u$ in Lemma \ref{lem-lap-u}: \begin{equation*} \Delta u(x)=-\frac{3}{2\pi^2}\int_{\mathbb{R}^4}\frac{1}{|x-y|^2}|y|^{4\gamma}e^{4u(y)}{\rm d}y-C_1,\quad C_1\ge 0, \end{equation*} and we first prove $C_1=0$ by contradiction. If $C_1>0$, we have \begin{equation*} \Delta u(x)\leq -C_1<0,\quad |x|\geq R_0, \end{equation*} where $R_0$ is large. 
Let \begin{equation} h(y)=u(y)+\varepsilon|y|^2+A\big(|y|^{-2}-R_0^{-2}\big). \end{equation} Under the assumption $|u(y)|=o(|y|^2)$ at $\infty$, we have $\lim\limits_{|y|\to +\infty}h(y)=+\infty$ for any fixed $\varepsilon>0$ and $A>0$. So we choose $\varepsilon>0$ small to make \begin{equation} \Delta h(y)=\Delta u(y)+8\varepsilon<-\frac{C_1}{2}<0,\quad |y|\geq R_0, \end{equation} and $A$ sufficiently large such that $\inf\limits_{|y|\geq R_0} h(y)$ is achieved at some $y_0\in \mathbb{R}^4$ with $|y_0|>R_0$. Clearly, this interior minimum of the superharmonic function $h$ contradicts the maximum principle. Hence, $C_1=0$ and $u+v$ is harmonic in $\mathbb{R}^4$. \medskip From Lemma \ref{lem-v-upper} and Lemma \ref{lem-v-lower}, we know for $|x|$ large enough \begin{equation*} (\alpha-\varepsilon)\log|x|\leq v(x)\leq \alpha\log|x|+C, \end{equation*} which together with the assumption $|u(x)|=o(|x|^2)$ at $\infty$ indicates \begin{equation} |u(x)+v(x)|=o(|x|^2)\quad {\rm at} \ \,\infty. \end{equation} Since $u+v$ is harmonic, by the gradient estimates for harmonic functions, we have \begin{equation*} u(x)+v(x)=\sum_{j=1}^{4}a_j x_j+a_0 \end{equation*} with some constants $a_j\in\mathbb{R},\,j=0,\cdots,4$. Therefore, for $|x|$ large enough, we get \begin{equation*} e^{4u(x)}=e^{4a_0}e^{-4v(x)}e^{4\sum_{j=1}^4a_j x_j}\geq C|x|^{-4\alpha}e^{4\sum_{j=1}^4 a_j x_j}. \end{equation*} Since $|x|^{4\gamma}e^{4u(x)}\in L^1(\mathbb{R}^4)$, we have $a_j=0$ for $1\leq j\leq 4$. Therefore, \begin{equation*} u(x)=-v(x)+a_0=\frac{3}{4\pi^2}\int_{\mathbb{R}^4}\log\big(\frac{|y|}{|x-y|}\big)|y|^{4\gamma}e^{4u(y)}{\rm d}y+C_0, \end{equation*} and then \begin{equation*} -\alpha\log|x|-C\leq u(x)\leq(-\alpha+\varepsilon)\log|x|, \end{equation*} for $|x|$ large. Lemma \ref{lem-rep-u} is established. \end{proof} Next we need a Pohozaev identity for solutions $u$ of \begin{equation}\label{equ-PI} \Delta^2 u=Q(x)e^{4u}\ \,{\rm in} \ \,\mathbb{R}^4. 
\end{equation} \begin{lem}\label{lem-PI} Suppose $u$ is an entire smooth solution of (\ref{equ-PI}). Then for any bounded smooth domain $\Omega$, we have \begin{equation}\label{PI-omega} \begin{split} &\int_{\Omega}\big(Q+\frac{1}{4}<x,\nabla Q>\big)e^{4u}{\rm d}x \\ =&\frac{1}{4}\int_{\partial\Omega}<x,\nu>Q(x) e^{4u}{\rm d}\sigma+\int_{\partial\Omega}\Big\{\frac{1}{2}|\Delta u|^2<x,\nu>-2\frac{\partial u}{\partial\nu}\Delta u \\ &\quad -<x,\nabla u>\frac{\partial\Delta u}{\partial\nu}-<x,\nabla\Delta u>\frac{\partial u}{\partial \nu}+<x,\nu><\nabla u,\nabla \Delta u>\Big\}{\rm d}\sigma. \end{split} \end{equation} In particular, taking $\Omega=B_R$, we have \begin{equation}\label{PI-BR} \begin{split} &\int_{B_R}Q(x)e^{4u}{\rm d}x+\frac{1}{4}\int_{B_R}<x,\nabla Q>e^{4u}{\rm d}x \\ =&\frac{1}{4}\int_{\partial B_R}|x|Q e^{4u}{\rm d}\sigma+\frac{1}{2}\int_{\partial B_R}|x||\Delta u|^2{\rm d}\sigma-2\int_{\partial B_R}\frac{\partial u}{\partial r}\Delta u{\rm d}\sigma -\int_{\partial B_R}|x|\frac{\partial u}{\partial r}\frac{\partial \Delta u}{\partial r}{\rm d}\sigma. \end{split} \end{equation} \end{lem} \begin{proof}[\textbf{Proof}] Multiplying (\ref{equ-PI}) by $x\cdot\nabla u$ and integrating over $\Omega$, we have \begin{equation}\label{IBP} \int_{\Omega}\Delta^2 u\,(x\cdot\nabla u){\rm d}x=\int_{\Omega}Q(x)e^{4u}(x\cdot\nabla u){\rm d}x. \end{equation} After integrating by parts and direct computation, we get \begin{equation*} \begin{split} ({\rm RHS}) \; {\rm of}\; (\ref{IBP})=\frac{1}{4}\int_{\partial\Omega}<x,\nu>Q(x) e^{4u}{\rm d}\sigma-\int_{\Omega}\big(Q+\frac{1}{4}<x,\nabla Q>\big)e^{4u}{\rm d}x, \end{split} \end{equation*} and \begin{equation*} \begin{split} ({\rm LHS}) \; {\rm of}\; (\ref{IBP})=&-\frac{1}{2}\int_{\partial\Omega}|\Delta u|^2<x,\nu>+2\int_{\partial\Omega}\frac{\partial u}{\partial\nu}\Delta u+\int_{\partial\Omega}<x,\nabla u>\frac{\partial\Delta u}{\partial\nu}\\ &+\int_{\partial\Omega}<x,\nabla\Delta u>\frac{\partial u}{\partial \nu}-\int_{\partial\Omega}<x,\nu><\nabla u,\nabla \Delta u>. 
\end{split} \end{equation*} Thus we establish (\ref{PI-omega}). Taking $\Omega=B_R$, we immediately obtain (\ref{PI-BR}) from (\ref{PI-omega}). \end{proof} From the Pohozaev identity we shall determine the exact value of $\alpha$. \begin{lem}\label{alpha} Let $u$ be a solution of (\ref{equ-liou-2}) and assume $|u(x)|=o(|x|^2)$ at $\infty$. Then $\alpha=2(1+\gamma)$. \end{lem} \begin{proof}[\textbf{Proof}] Taking $Q(x)=6|x|^{4\gamma}$ in (\ref{PI-BR}), we have \begin{equation}\label{PI-BR-2} \begin{split} &6(1+\gamma)\int_{B_R}|x|^{4\gamma}e^{4u}{\rm d}x \\ =&\frac{1}{4}\int_{\partial B_R}6r^{4\gamma+1}e^{4u}{\rm d}\sigma+\frac{1}{2}\int_{\partial B_R}r|\Delta u|^2-2\int_{\partial B_R}\frac{\partial u}{\partial r}\Delta u -\int_{\partial B_R}r\frac{\partial u}{\partial r}\frac{\partial \Delta u}{\partial r}. \end{split} \end{equation} In view of Lemma \ref{lem-rep-u}, we have obtained \begin{equation*} u(x)=\frac{3}{4\pi^2}\int_{\mathbb{R}^4}\log\big(\frac{|y|}{|x-y|}\big)|y|^{4\gamma}e^{4u(y)}{\rm d}y+C_0, \end{equation*} and $e^{4u(y)}\geq C|y|^{-4\alpha}$ for $|y|$ large enough. Then $|y|^{4\gamma}e^{4u(y)}\geq C|y|^{4(\gamma-\alpha)}$. Hence $|y|^{4\gamma}e^{4u(y)}\in L^1(\mathbb{R}^4)$ implies $\alpha>1+\gamma$. 
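To make the last implication explicit (this is a routine check in polar coordinates), recall that $|\partial B_1|=2\pi^2$ in $\mathbb{R}^4$, so that \begin{equation*} \int_{|y|\geq R}|y|^{4(\gamma-\alpha)}{\rm d}y=2\pi^2\int_{R}^{\infty}s^{4(\gamma-\alpha)+3}{\rm d}s, \end{equation*} which is finite if and only if $4(\gamma-\alpha)+3<-1$, that is, $\alpha>1+\gamma$.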
On the other hand, by the representation of $u$ and direct calculations, there hold \begin{equation*} \frac{\partial u}{\partial r}(x)=-\frac{3}{4\pi^2}\int_{\mathbb{R}^4}\frac{x\cdot(x-y)}{|x||x-y|^2}|y|^{4\gamma}e^{4u(y)}{\rm d}y, \end{equation*} \begin{equation*} \Delta u(x)=-\frac{3}{2\pi^2}\int_{\mathbb{R}^4}\frac{1}{|x-y|^2}|y|^{4\gamma}e^{4u(y)}{\rm d}y, \end{equation*} and \begin{equation*} \frac{\partial}{\partial r}\Delta u(x)=\frac{3}{\pi^2}\int_{\mathbb{R}^4}\frac{x\cdot(x-y)}{|x||x-y|^4}|y|^{4\gamma}e^{4u(y)}{\rm d}y. \end{equation*} Recalling the representation of $u$, Lemma \ref{lem-rep-u} and $\alpha>1+\gamma$, we then have \begin{equation}\label{u-limit} \begin{split} &\lim_{r\to +\infty}\frac{\partial u}{\partial r}=0, \\ &\lim_{r\to +\infty}r\frac{\partial u}{\partial r}=-\alpha,\\ &\lim_{r\to +\infty}r^2\Delta u=-2\alpha, \\ &\lim_{r\to +\infty}r^3\frac{\partial }{\partial r}\Delta u=4\alpha, \end{split} \end{equation} where $r=|x|$. Therefore, letting $R\to+\infty$ in the Pohozaev identity (\ref{PI-BR-2}) and using (\ref{u-limit}) together with $|\partial B_R|=2\pi^2R^3$, the four terms on the right-hand side converge to $0$, $4\pi^2\alpha^2$, $-8\pi^2\alpha^2$ and $8\pi^2\alpha^2$ respectively, while the left-hand side tends to $8\pi^2(1+\gamma)\alpha$. Hence \begin{equation*} 8\pi^2(1+\gamma)\alpha=4\pi^2\alpha^2, \end{equation*} which leads to $\alpha=2(1+\gamma)$. \end{proof} Now we can determine the asymptotic behavior of $u$ at infinity using the exact value of $\alpha$. \begin{lem}\label{lem-u-asy} Let $u$ be a solution of (\ref{equ-liou-2}) and suppose $|u(x)|=o(|x|^2)$ at $\infty$. Then there exist constants $c$ and $a_j$ $(j=1,\cdots,4)$ such that for $|x|$ large enough, \begin{equation}\label{u-asy} u(x)=-2(1+\gamma)\log|x|+c+O(\frac{1}{|x|}), \end{equation} and the derivatives of $u$ at infinity are determined in two cases. If $\gamma>-3/4$, then \begin{equation}\label{u-high-asy} \left\{\begin{array}{ll} -\Delta u(x)=\frac{4(1+\gamma)}{|x|^2}+\sum_{j=1}^{4}\frac{a_jx_j}{|x|^4}+O(\frac{1}{|x|^4}), \\ -\frac{\partial}{\partial x_i}\Delta u(x)=-8(1+\gamma)\frac{x_i}{|x|^4}+O(\frac{1}{|x|^4}),\\ -\frac{\partial^2}{\partial x_i\partial x_j}\Delta u(x)=O(\frac{1}{|x|^4}). \end{array} \right. 
\end{equation} If $-1<\gamma\leq -3/4$, then there exists a constant $\tau\in(0,1)$ such that \begin{equation} \left\{\begin{array}{ll} -\Delta u(x)=\frac{4(1+\gamma)}{|x|^2}+O(\frac{1}{|x|^{2+\tau}}), \\ -\frac{\partial}{\partial x_i}\Delta u(x)=-8(1+\gamma)\frac{x_i}{|x|^4}+O(\frac{1}{|x|^{3+\tau}}),\\ -\frac{\partial^2}{\partial x_i\partial x_j}\Delta u(x)=O(\frac{1}{|x|^4}). \end{array} \right. \end{equation} \end{lem} \begin{proof}[\textbf{Proof}] Let $w(x)=u(\frac{x}{|x|^2})-2(1+\gamma)\log|x|$. Then, from the equation of $u$ and the assumption, we see that $w$ satisfies \begin{equation}\label{equ-w} \left\{\begin{array}{ll} \Delta^2 w(x)=6|x|^{4\gamma}e^{4w(x)},\quad {\rm in} \ \, \mathbb{R}^4\setminus\{0\}, \\ |w(x)|=o(\log\frac{1}{|x|}),\quad |\Delta w(x)|=o(\frac{1}{|x|^2}),\quad{\rm as}\ \, |x|\to 0. \end{array} \right. \end{equation} Let $h(x)$ be a solution of \begin{equation} \left\{\begin{array}{lcl} \Delta^2 h(x)=6|x|^{4\gamma}e^{4w(x)},&& {\rm in} \ \, B_1\setminus \{0\}, \\ h(x)=w(x),\quad \Delta h(x)=\Delta w(x),&&{\rm on} \ \, \partial B_1, \end{array} \right. \end{equation} and set $q(x)=w(x)-h(x)$. Then $q(x)$ satisfies \begin{equation} \left\{\begin{array}{lcl} \Delta^2 q(x)=0,&& {\rm in} \ \, B_1, \\ q(x)=\Delta q(x)=0,&& {\rm on} \ \, \partial B_1,\\ |q(x)|=o(\log\frac{1}{|x|}),\quad |\Delta q(x)|=o(\frac{1}{|x|^2}),&& { \rm as}\ \, |x|\to 0. \end{array} \right. \end{equation} First, consider $\Delta q$: since its growth near the singular point is weaker than that of the fundamental solution, the singularity is removable; as $\Delta q$ is then harmonic in $B_1$ and vanishes on $\partial B_1$, we conclude $\Delta q=0$ in $B_1$. For exactly the same reason we further conclude that $q\equiv 0$ in $B_1$. That means $w(x)=h(x)$ in $B_1$, and it suffices to consider the regularity of $h$ in $B_1$. 
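For completeness, let us indicate why $w$ solves the equation in (\ref{equ-w}); this is a standard Kelvin-transform computation. In $\mathbb{R}^4$ the biharmonic Kelvin transform satisfies $\Delta^2\big[u(\frac{x}{|x|^2})\big]=|x|^{-8}(\Delta^2u)(\frac{x}{|x|^2})$, and $\log|x|$ is biharmonic away from the origin, since $\Delta\log|x|=2|x|^{-2}$ and $|x|^{-2}$ is harmonic in $\mathbb{R}^4\setminus\{0\}$. Hence, for $x\neq 0$, \begin{equation*} \Delta^2 w(x)=6|x|^{-8}\Big|\frac{x}{|x|^2}\Big|^{4\gamma}e^{4u(\frac{x}{|x|^2})}=6|x|^{-8-4\gamma}e^{4w(x)+8(1+\gamma)\log|x|}=6|x|^{4\gamma}e^{4w(x)}.\end{equation*}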
Note that \begin{equation*} \begin{split} |x|^{4\gamma}e^{4w(x)}&=|x|^{4\gamma}e^{4u(\frac{x}{|x|^2})-8(1+\gamma)\log|x|} \\ &\sim|x|^{4\gamma}|x|^{-8(1+\gamma)}|x|^{4\alpha} \sim|x|^{4\gamma},\quad {\rm near} \ \, 0, \end{split} \end{equation*} where we used Lemma \ref{lem-rep-u} and Lemma \ref{alpha}. \textbf{Case 1:} $\gamma>-\frac{3}{4}$. In this case, there exists some $p>\frac{4}{3}$ such that $|x|^{4\gamma}e^{4w(x)}\in L^p(\bar{B}_1)$. By standard elliptic estimates and the Sobolev embedding, we have $h\in C^{1,\tau}(B_1)$ for some $0<\tau<1$, and the same holds for $w$. Then it is not difficult to obtain (\ref{u-asy}), and (\ref{u-high-asy}) follows immediately. \textbf{Case 2:} $-1<\gamma\leq-\frac{3}{4}$. In this case, we can still find some $p>1$ (indeed $1<p<-1/\gamma\leq\frac{4}{3}$) such that $|x|^{4\gamma}e^{4w(x)}\in L^p(\bar{B}_1)$. Then, by regularity theory for linear elliptic equations, $h(x)\in C^{0,\tau}(\bar{B}_1)$ for some $0<\tau<1$. In this case, we get \begin{equation} u(x)=-2(1+\gamma)\log|x|+c+O(\frac{1}{|x|^{\tau}}), \end{equation} and \begin{equation} \left\{\begin{array}{ll} -\Delta u(x)=\frac{4(1+\gamma)}{|x|^2}+O(\frac{1}{|x|^{2+\tau}}), \\ -\frac{\partial}{\partial x_i}\Delta u(x)=-8(1+\gamma)\frac{x_i}{|x|^4}+O(\frac{1}{|x|^{3+\tau}}),\\ -\frac{\partial^2}{\partial x_i\partial x_j}\Delta u(x)=O(\frac{1}{|x|^4}). \end{array} \right. \end{equation} \end{proof} \medskip \subsection{Classification of entire solutions for the case $-3/4<\gamma<0$} \quad In this subsection, we will show that, for $-3/4<\gamma<0$, the solution of (\ref{equ-liou}) or (\ref{equ-liou-2}) is radially symmetric and unique up to scaling. Similar to \cite{lin-classification}, we will use the method of moving planes. Suppose that $u$ is a smooth entire solution of (\ref{equ-liou-2}) with $|u(x)|=o(|x|^2)$ at $\infty$. Recall $-\Delta u>0$ in $\mathbb{R}^4$ and (\ref{u-asy})$\sim$(\ref{u-high-asy}), so we will apply the method of moving planes to $-\Delta u$. 
Let $v(x)=-\Delta u(x)$. Then by Lemma \ref{lem-u-asy}, $v(x)$ has the following harmonic asymptotic expansion at $\infty$: \begin{equation}\label{harmonic-asy} \left\{\begin{array}{ll} v(x)=\frac{4(1+\gamma)}{|x|^2}+O(\frac{1}{|x|^3}), \\ v_{x_i}(x)=-8(1+\gamma)\frac{x_i}{|x|^4}+O(\frac{1}{|x|^4}),\\ v_{x_ix_j}(x)=O(\frac{1}{|x|^4}). \end{array} \right. \end{equation} First, we state some conventional notations for the moving planes. For any $\lambda\in\mathbb{R}$, let $T_{\lambda}=\{x\in\mathbb{R}^4:x_1=\lambda\}$, $\Sigma_{\lambda}=\{x:x_1>\lambda\}$, and let $x^{\lambda}=(2\lambda-x_1,x_2,x_3,x_4)$ be the reflection of $x$ with respect to $T_{\lambda}$. In addition, when applying the method of moving planes, we need two lemmas from \cite{CGS}, which can also be found in \cite{lin-classification}. \begin{lemA}\label{CGS-lem-1}\cite{CGS} Let $v$ be a positive function defined in a neighborhood of $\infty$ satisfying the asymptotic expansion (\ref{harmonic-asy}). Then there exist $\bar{\lambda}_0<0$ and $R>0$ such that \begin{equation*} v(x)>v(x^{\lambda}) \end{equation*} for $\lambda\leq\bar{\lambda}_0$, $x\in\Sigma_{\lambda}$ and $|x|\geq R$. \end{lemA} \begin{lemA}\label{CGS-lem-2}\cite{CGS} Suppose $v$ satisfies the assumption of Lemma \ref{CGS-lem-1}, and $v(x)>v(x^{\lambda_0})$ for $x\in\Sigma_{\lambda_0}$. Assume $v(x)-v(x^{\lambda_0})$ is superharmonic in $\Sigma_{\lambda_0}$. Then there exist $\varepsilon>0$ and $S>0$ such that the following hold: \begin{itemize} \item[(i)] $v_{x_1}>0$ in $\{x:|x_1-\lambda_0|<\varepsilon,|x|>S\}$. \item[(ii)] $v(x)>v(x^{\lambda})$ for all $x\in\Sigma_{\lambda}$ with $|x|>S$, for every $\lambda$ with $|\lambda-\lambda_0|<c_0\varepsilon$, where $c_0$ is a small positive number depending only on $\lambda_0$ and $v$. \end{itemize} 
\end{lemA} \begin{proof}[\textbf{Proof of Theorem \ref{thm-classification}}] Lemma \ref{alpha} and Lemma \ref{lem-u-asy} complete the proof of (i) and (ii) in Theorem \ref{thm-classification}, respectively. Next we aim to prove the radial symmetry of solutions by the method of moving planes. \textbf{Step 1:} We start moving planes along the $x_1$-direction. For any $\lambda$, we consider $w_{\lambda}(x)=u(x)-u(x^{\lambda})$ in $\Sigma_{\lambda}$. Then $w_{\lambda}(x)$ satisfies \begin{equation}\label{equ-MP} \left\{\begin{array}{lcl} \Delta^2 w_{\lambda}(x)=b_{\lambda}(x)w_{\lambda}(x), && {\rm in} \ \; \Sigma_{\lambda}, \\ w_{\lambda}(x)=\Delta w_{\lambda}(x)=0, && {\rm on} \ \; T_{\lambda}, \end{array} \right. \end{equation} where \begin{equation*} b_{\lambda}(x)=6\frac{|x|^{4\gamma}e^{4u(x)}-|x^{\lambda}|^{4\gamma}e^{4u(x^{\lambda})}}{u(x)-u(x^{\lambda})}. \end{equation*} By (\ref{harmonic-asy}) and Lemma 2.3 in \cite{CGS}, we have $\Delta w_{\lambda}(x)=v(x^{\lambda})-v(x)<0$ for $x\in \Sigma_{\lambda}$, $\lambda<\bar{\lambda}_0$ and $|x|\geq R$. Since $v(x)>0$ in $\mathbb{R}^4$ and $v(x)\to 0$ as $|x|\to \infty$, there exists $\bar{\lambda}_1<\bar{\lambda}_0$ such that \begin{equation*} v(x^{\lambda})<v(x),\quad |x|\leq R,\ \, \lambda<\bar{\lambda}_1. \end{equation*} Therefore, we have \begin{equation*} \Delta w_{\lambda}(x)<0,\quad x\in \Sigma_{\lambda},\ \,\lambda<\bar{\lambda}_1. \end{equation*} By (\ref{u-asy}), $\lim\limits_{|x|\to +\infty}w_{\lambda}(x)=0$. Applying the maximum principle, we have \begin{equation*} w_{\lambda}(x)>0,\quad x\in \Sigma_{\lambda},\ \,\lambda<\bar{\lambda}_1. \end{equation*} Let \begin{equation}\label{limit-location} \lambda_0:=\sup\big\{\lambda:v(x^{\mu})<v(x)\quad{\rm for\ \,all}\ \,x\in\Sigma_{\mu}\ \,{\rm and}\ \,\mu\leq\lambda\big\}. \end{equation} Since $v(x)\to 0$ as $|x|\to\infty$, we can see that $\lambda_0<+\infty$. 
Next we claim \begin{equation*} u(x)\equiv u(x^{\lambda_0}),\quad {\rm for\ \, all\ \,}x\in\Sigma_{\lambda_0}. \end{equation*} Suppose $w_{\lambda_0}\not\equiv0$ in $\Sigma_{\lambda_0}$. By continuity, $\Delta w_{\lambda_0}(x)\leq 0$ in $\Sigma_{\lambda_0}$. Since $w_{\lambda_0}\to 0$ as $|x|\to+\infty$ and $w_{\lambda_0}\big|_{T_{\lambda_0}}=0$, the strong maximum principle gives \begin{equation}\label{w-0} w_{\lambda_0}(x)>0,\quad x\in \Sigma_{\lambda_0}. \end{equation} From (\ref{equ-liou-2}), we have \begin{equation*} \Delta^2 w_{\lambda_0}(x)=6\big(|x|^{4\gamma}e^{4u(x)}-|x^{\lambda_0}|^{4\gamma}e^{4u(x^{\lambda_0})}\big). \end{equation*} Inequality (\ref{w-0}) implies $u(x)>u(x^{\lambda_0})$ in $\Sigma_{\lambda_0}$. On the other hand, $|x|<|x^{\lambda_0}|$ and $\gamma<0$ imply $|x|^{4\gamma}>|x^{\lambda_0}|^{4\gamma}$. Hence \begin{equation}\label{w-4} \Delta^2 w_{\lambda_0}(x)>0,\quad x\in \Sigma_{\lambda_0}, \end{equation} which means $\Delta w_{\lambda_0}$ is a subharmonic function. Applying the strong maximum principle again, we have \begin{equation}\label{w-2} \Delta w_{\lambda_0}(x)<0,\quad x\in \Sigma_{\lambda_0}. \end{equation} By the definition of $\lambda_0$, there exist $\lambda_n\searrow\lambda_0$ and $x_n\in \Sigma_{\lambda_n}$ such that \begin{equation*} \Delta w_{\lambda_n}(x_n)=\sup_{\Sigma_{\lambda_n}}\Delta w_{\lambda_n}\geq 0. \end{equation*} By Lemma 2.4 in \cite{CGS}, $\{x_n\}$ is bounded. Without loss of generality, we assume $x_0=\lim\limits_{n\to+\infty}x_n$. If $x_0\in\Sigma_{\lambda_0}$, by continuity we have $\Delta w_{\lambda_0}(x_0)\geq 0$, a contradiction to (\ref{w-2}). If $x_0\in T_{\lambda_0}$, then $\nabla\big(\Delta w_{\lambda_0}\big)(x_0)=0$, since each $x_n$ is an interior maximum point of $\Delta w_{\lambda_n}$. However, by (\ref{w-4}) and (\ref{w-2}), $\Delta w_{\lambda_0}$ is a negative subharmonic function in $\Sigma_{\lambda_0}$ vanishing at $x_0\in T_{\lambda_0}$. Thus by the Hopf boundary lemma, we have $\frac{\partial}{\partial\nu}\Delta w_{\lambda_0}(x_0)>0$, which yields a contradiction. At this point, the claim is proved. 
Since the $x_1$-direction can be replaced by any other direction, we finally obtain the radial symmetry. \medskip \textbf{Step 2:} Now we prove the uniqueness of the solution of (\ref{equ-liou-2}) modulo the scaling (\ref{rescale}). Let $w_1$ and $w_2$ be two radial solutions of (\ref{equ-liou-2}). Then $w_i$ satisfies $w_i'(0)=w_i'''(0)=0$. Without loss of generality, we may assume $w_1(0)=w_2(0)$, on account of the scaling invariance. By the uniqueness theory of ODEs, we only need to prove $w_1''(0)=w_2''(0)$. If $w_1''(0)<w_2''(0)$, then $w_1(r)<w_2(r)$ for small $r>0$. We will prove $w_1(r)<w_2(r)$ for all $r>0$. Suppose there exists $r_0>0$ such that $w_1(r_0)=w_2(r_0)$ and $w_1(r)<w_2(r)$ for $0<r<r_0$. Then by (\ref{equ-liou-2}), written radially as $\big(r^3(\Delta w_i)'\big)'=6r^{4\gamma+3}e^{4w_i}$, we have \begin{equation*} \frac{\partial}{\partial r}\Delta (w_1-w_2)(r)=\frac{1}{r^3}\int_{0}^{r}6s^{4\gamma+3}\big(e^{4w_1(s)}-e^{4w_2(s)}\big){\rm d}s<0,\quad 0<r\leq r_0, \end{equation*} which together with $\Delta(w_1-w_2)(0)=4\big(w_1''(0)-w_2''(0)\big)<0$ implies \begin{equation*} \Delta (w_1-w_2)<0,\quad {\rm in} \ \, B(0,r_0). \end{equation*} Since $w_1(r_0)-w_2(r_0)=0$, the maximum principle would give $w_1(r)-w_2(r)\geq 0$ for $0<r< r_0$, which contradicts $w_1(r)<w_2(r)$ for $0<r<r_0$. Thus, $w_1(r)<w_2(r)$ for all $r>0$. Hence $\frac{\partial}{\partial r}\Delta (w_1-w_2)(r)<0$ for all $r>0$, which means $\Delta (w_1-w_2)(r)$ is decreasing in $r$; since it is negative at $r=0$, it stays below a negative constant for $r\geq 1$. Thus $w_1(r)-w_2(r)\leq -c r^2$ as $r\to +\infty$ for some constant $c>0$, which yields a contradiction to the assumption $w_i(r)=o(r^2)$ at $\infty$. Similarly, $w_1''(0)>w_2''(0)$ is impossible. Thus, the radial solution of (\ref{equ-liou-2}) is unique up to the rescaling $u_{\lambda}(x)=u(\lambda x)+(1+\gamma)\log \lambda$, $\lambda>0$, and the same holds for (\ref{equ-liou}) after scaling. \end{proof} \medskip Next, we consider the case without the assumption $|u(x)|=o(|x|^2)$ at $\infty$. The following lemma is similar to Lemma 3.3 in \cite{lin-classification}. 
But we do not require that $u$ be harmonic; in fact, $\Delta u\equiv const.$ is enough. Furthermore, when we apply this lemma in the proof of Theorem \ref{thm-classification-2}, it is not known a priori whether $w$ is harmonic. \begin{lem}\label{lem-poly} Suppose that $\Delta u =a$ in $\mathbb{R}^n$ for a constant $a\in \mathbb{R}$, and that $\exp(u-c|x|^2)\in L^1(\mathbb{R}^n)$ for some $c>0$. Then $u$ is a polynomial of degree at most 2. \end{lem} \begin{proof}[\textbf{Proof}] We only need to prove that for any unit vector $\xi \in\mathbb{R}^n$, there holds \begin{equation*} u_{\xi\xi}(x)\equiv const.\quad {\rm in}\ \, \mathbb{R}^n. \end{equation*} By Liouville's Theorem, it suffices to prove that $u_{\xi\xi}(x)$ is bounded from above by a constant independent of $x$. Without loss of generality, we may consider $x=0$ and $\xi=e_1$. For simplicity, we denote $B(0,r)$ and $\partial B(0,r)$ by $B_r$ and $\partial B_r$ respectively. Since $\Delta u=a$, the function $u_{x_1 x_1}$ is harmonic. Hence for any $r>0$, we have \begin{equation} u_{x_1x_1}(0)=\frac{1}{\sigma_n r^n}\int_{B_r}u_{x_1x_1}(x){\rm d}x=\frac{1}{\sigma_n r^n}\int_{\partial B_r}u_{x_1}\frac{x_1}{|x|}{\rm d}\sigma, \end{equation} where $\sigma_n$ is the volume of the unit ball in $\mathbb{R}^n$. Integrating the identity above from $0$ to $r$, we have \begin{equation}\label{poly-0} \begin{split} \frac{\sigma_n}{n+1}r^{n+1}u_{x_1x_1}(0)=&\int_{ B_r}u_{x_1}\frac{x_1}{|x|}{\rm d}x=-\int_{B_r}u\frac{\partial}{\partial x_1}\big(\frac{x_1}{|x|}\big){\rm d }x+\int_{\partial B_r}u\frac{x_1^2}{|x|^2}{\rm d}\sigma \\ =&-\int_{B_r}\frac{u}{|x|}{\rm d }x+\int_{B_r}u\frac{x_1^2}{|x|^3}{\rm d }x+\int_{\partial B_r}u\frac{x_1^2}{|x|^2}{\rm d}\sigma. \end{split} \end{equation} Note that \begin{equation*} \int_{B_r}\frac{u}{|x|}{\rm d }x=n\sigma_n\int_{0}^{r}\Big(\dashint_{\partial B_s}u{\rm d}\sigma\Big)s^{n-2}{\rm d}s. 
\end{equation*} Since $\Delta u=a$, we have \begin{equation*} \begin{split} a\sigma_n s^n=&\int_{B_s}\Delta u(y){\rm d}y=\int_{\partial B_s}\frac{\partial u}{\partial \nu}{\rm d}\sigma=s^{n-1}\int_{|\omega|=1}\frac{\partial u}{\partial s}(s\omega){\rm d}\sigma_{\omega}\\ =&s^{n-1}\frac{\partial}{\partial s}\int_{|\omega|=1}u(s\omega){\rm d}\sigma_{\omega}. \end{split} \end{equation*} That is, \begin{equation*} \frac{\partial}{\partial s}\int_{|\omega|=1}u(s\omega){\rm d}\sigma_{\omega}=a\sigma_n s. \end{equation*} Integrating the identity above from $0$ to $s$, we obtain \begin{equation*} \int_{|\omega|=1}u(s\omega){\rm d}\sigma_{\omega}=\int_{|\omega|=1}u(0){\rm d}\sigma_{\omega}+\frac{a}{2}\sigma_n s^2, \end{equation*} which means \begin{equation*} \dashint_{\partial B_s}u{\rm d}\sigma=\frac{1}{n\sigma_n}\int_{|\omega|=1}u(s\omega){\rm d}\sigma_{\omega}=u(0)+\frac{a}{2n} s^2. \end{equation*} Then \begin{equation}\label{poly-1} \int_{B_r}\frac{u}{|x|}{\rm d }x=n\sigma_n\int_{0}^{r}\big(u(0)+\frac{a}{2n} s^2\big)s^{n-2}{\rm d}s=\frac{n}{n-1}\sigma_n r^{n-1}u(0)+\frac{a\sigma_n}{2(n+1)} r^{n+1}. \end{equation} On the other hand, by a direct computation, we have \begin{equation}\label{poly-2} \int_{B_r}\frac{x_1^2}{|x|^3}{\rm d }x=\frac{\sigma_n}{n-1} r^{n-1} \end{equation} and \begin{equation}\label{poly-3} \int_{\partial B_r}\frac{x_1^2}{|x|^2}{\rm d}\sigma=\sigma_n r^{n-1}. \end{equation} From (\ref{poly-0})$\sim$(\ref{poly-3}), we have \begin{equation*} \frac{r^2}{n+1}u_{x_1x_1}(0)=-\frac{n}{n-1}u(0)-\frac{a}{2(n+1)}r^2+\frac{1}{n-1}\dashint_{B_r}u\,{\rm d}\mu_1+\dashint_{\partial B_r}u\,{\rm d}\mu_2, \end{equation*} where ${\rm d}\mu_1=\frac{x_1^2}{|x|^3}{\rm d }x$ and ${\rm d}\mu_2=\frac{x_1^2}{|x|^2}{\rm d}\sigma$ on $\partial B_r$. 
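For the reader's convenience, the total masses of the measures ${\rm d}\mu_1$ and ${\rm d}\mu_2$ follow from the symmetry identity $\int x_1^2f(|x|)\,{\rm d}x=\frac{1}{n}\int|x|^2f(|x|)\,{\rm d}x$ together with $|\partial B_s|=n\sigma_n s^{n-1}$: \begin{equation*} \int_{B_r}\frac{x_1^2}{|x|^3}\,{\rm d}x=\frac{1}{n}\int_{B_r}\frac{{\rm d}x}{|x|}=\frac{\sigma_n}{n-1}r^{n-1},\qquad \int_{\partial B_r}\frac{x_1^2}{|x|^2}\,{\rm d}\sigma=\frac{1}{n}\int_{\partial B_r}{\rm d}\sigma=\sigma_n r^{n-1}. \end{equation*}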
By Jensen's inequality, we have \begin{equation}\label{poly-4} \begin{split} &\exp\Big(\frac{r^2}{2(n+1)}u_{x_1x_1}(0)\Big) \\ \leq&\exp\Big(-\frac{n}{2(n-1)}u(0)\Big)\exp\Big(-\frac{a}{4(n+1)}r^2\Big)\Big(\dashint_{ B_r}e^{\frac{u}{2(n-1)}}{\rm d}\mu_1\Big)\Big(\dashint_{\partial B_r}e^{\frac{u}{2}}{\rm d}\mu_2\Big). \end{split} \end{equation} For any $c_1>-\frac{a}{4(n+1)}$, set $\tilde{c}_1=c_1+\frac{a}{4(n+1)}>0$. Then, by the Cauchy--Schwarz inequality and Jensen's inequality again, \begin{equation}\label{poly-5} \begin{split} &\int_{1}^{\infty}\exp\Big[\Big(\frac{u_{x_1x_1}(0)}{2(n+1)}-c_1\Big)r^2\Big]{\rm d}r \leq\exp\Big(-\frac{n}{2(n-1)}u(0)\Big) \\ &\qquad\Big\{\int_{1}^{\infty}\big(\dashint_{B_r}e^{\frac{u}{n-1}}{\rm d}\mu_1\big)e^{-\tilde{c}_1r^2}{\rm d}r\Big\}^{\frac{1}{2}}\Big\{\int_{1}^{\infty}\big(\dashint_{\partial B_r}e^u{\rm d}\mu_2\big)e^{-\tilde{c}_1r^2}{\rm d}r\Big\}^{\frac{1}{2}}. \end{split} \end{equation} By the assumption, we can choose $c_1$ large enough such that the right-hand side of (\ref{poly-5}) is finite. Therefore, we have \begin{equation}\label{poly-6} u_{x_1x_1}(0)\leq2(n+1)c_1. \end{equation} The bound (\ref{poly-6}) is valid with $0$ replaced by any $x\in\mathbb{R}^n$. By Liouville's Theorem, we obtain $u_{x_1x_1}(x)\equiv const.$, which indicates that $u$ is a polynomial of degree at most 2. \end{proof} \begin{proof}[\textbf{Proof of Theorem \ref{thm-classification-2}}] Suppose that $u$ is a solution of (\ref{equ-liou-2}). Let $v$ be defined by (\ref{v-def}) and set $w(x)=u(x)+v(x)$. By Lemma \ref{lem-lap-u}, we have $\Delta w(x)\equiv-C_1\leq 0$ in $\mathbb{R}^4$. Then Lemma \ref{lem-poly} implies that there exist constants $c_0$, $b_k$ and $a_{ij}$ $(i,j,k=1,\cdots,4)$ with $a_{ij}=a_{ji}$ such that \begin{equation*} w(x)=\sum_{i,j=1}^{4}a_{ij}x_ix_j+\sum_{k=1}^{4}b_kx_k+c_0. \end{equation*} After an orthogonal transformation, we may assume \begin{equation*} u(x)=\frac{3}{4\pi^2}\int_{\mathbb{R}^4}\log\big(\frac{|y|}{|x-y|}\big)|y|^{4\gamma}e^{4u(y)}{\rm d}y-\sum_{j=1}^{4}(a_jx_j^2+b_jx_j)+c_0. 
\end{equation*} Since $|x|^{4\gamma}e^{4u}\in L^1(\mathbb{R}^4)$, we have $a_j\geq 0$ for all $j=1,\cdots,4$, and $b_j=0$ if $a_j=0$. Hence, completing the square, we can rewrite $u(x)$ as follows: \begin{equation*} u(x)=\frac{3}{4\pi^2}\int_{\mathbb{R}^4}\log\big(\frac{|y|}{|x-y|}\big)|y|^{4\gamma}e^{4u(y)}{\rm d}y-\sum_{j=1}^{4}a_j(x_j-x_j^0)^2+c_0 \end{equation*} for some $x_j^0\in\mathbb{R}$ (with $x_j^0=0$ whenever $a_j=0$) and a possibly different constant $c_0$, and (\ref{lap-u-2}) holds from the previous argument. \medskip Next, we show the radial symmetry. Let $\hat{u}(x)=u(x)+\sum_{j=1}^{4}a_j(x_j-x_j^0)^2$. Then \begin{equation} \Delta^2\hat{u}(x)=6|x|^{4\gamma}e^{-4\sum_{j=1}^{4}a_j(x_j-x_j^0)^2}e^{4\hat{u}(x)},\quad{\rm in}\ \, \mathbb{R}^4. \end{equation} As in Lemma \ref{lem-u-asy}, we set $\hat{w}(x)=\hat{u}(\frac{x}{|x|^2})-\alpha\log|x|$; then $|\hat{w}(x)|=o(\log\frac{1}{|x|})$ near $0$ by Lemma \ref{lem-v-upper} and Lemma \ref{lem-v-lower}, and $\hat{w}$ satisfies \begin{equation} \Delta^2\hat{w}(x)=6|x|^{4\alpha-4\gamma-8}e^{-4\sum_{j=1}^{4}a_j(\frac{x_j}{|x|^2}-x_j^0)^2}e^{4\hat{w}(x)},\quad{\rm in}\ \, \mathbb{R}^4\setminus\{0\}. \end{equation} Noting that $a_j\geq 0$, we can follow the argument in the proof of Lemma \ref{lem-u-asy} to obtain, for $|x|$ large, \begin{equation} \hat{u}(x)=-\alpha\log|x|+c_0+O(\frac{1}{|x|}), \end{equation} and \begin{equation} \left\{\begin{array}{ll} -\Delta \hat{u}(x)=\frac{2\alpha}{|x|^2}+O(\frac{1}{|x|^3}), \\ -\frac{\partial}{\partial x_i}\Delta \hat{u}(x)=-4\alpha\frac{x_i}{|x|^4}+O(\frac{1}{|x|^4}),\\ -\frac{\partial^2}{\partial x_i\partial x_j}\Delta \hat{u}(x)=O(\frac{1}{|x|^4}). \end{array} \right. \end{equation} At this point, we establish (\ref{rep-u-2}). Finally, as in the proof of Theorem \ref{thm-classification}, we can apply the method of moving planes to show that $\hat{u}(x)$ is symmetric with respect to the hyperplane $\{x:x_i=0\}$ if $a_i x_i^0= 0$. In particular, if $a_i x_i^0= 0$ for all $i=1,\cdots,4$, then $\hat{u}$ is radially symmetric with respect to the origin. 
\end{proof} \section[preliminaries]{Preliminaries for blowup analysis}\label{preliminaries} Let $G:M\times M\setminus {\rm diag}\to\mathbb{R}$ denote the Green's function for the Paneitz operator, determined by \begin{equation}\label{Green-function} f(x)-\bar{f}=\int_M G(x,y)P_g f(y){\rm d}V_g(y),\quad \int_M G(x,y){\rm d}V_g(y)=0, \end{equation} where $\bar{f}=\frac{1}{{\rm vol}_g(M)}\int_M f{\rm d }V_g$ is the average of $f$ over $M$. Then the weak form of (\ref{Green-function}) is \begin{equation}\label{Green-weakequ} P_{g,y}G(x,y)=\delta_x-\frac{1}{{\rm vol}_g(M)}. \end{equation} Let $R$ denote the regular part of the Green's function. Then, by Appendix A in \cite{zhang-weinstein}, for $y$ in a neighborhood of $x$, \begin{equation}\label{Green-func-expression} G(x,y)=-\frac{1}{8\pi^2}\log d_g(x,y) \chi+R(x,y), \end{equation} where $\chi$ is a cut-off function to avoid the cut locus. Using $G$, we can decompose $u_k$ into its regular and singular parts: \begin{equation}\label{decompose-uk} u_k(x)=\tilde{u}_k(x)-8\pi^2\sum_{j=1}^{N}\gamma_j G(x,q_j). \end{equation} Then $\tilde{u}_k$ satisfies \begin{equation}\label{equ-tilde-uk} P_g\tilde{u}_k+2b_k=2H_ke^{4\tilde{u}_k}\quad {\rm in} \ \, M, \end{equation} where \begin{equation}\label{Hk(x)} H_k(x)=h_k(x)\prod_{j=1}^N e^{-32\pi^2\gamma_j G(x,q_j)}. \end{equation} Clearly, (\ref{Q-equation-blowup}) and (\ref{assumption-coe}) imply that \begin{equation}\label{finite-integral-tildeuk} \int_M H_ke^{4\tilde{u}_k}{\rm d}V_g\leq C. \end{equation} We will work with $\tilde{u}_k$ in the later blow-up analysis. As in \cite{zhang-weinstein}, since the metric $g$ may not be locally conformally flat, we will apply {\itshape conformal normal coordinates}, whose existence was proved in \cite{Lee-Parker}. More precisely, for $q\in M$, there exist normal coordinates around $q$ in which $g$ can be conformally deformed to a metric $\mathfrak{g}$ satisfying $\det\,(\mathfrak{g})=1$. We use $R_{ijkl}$ to denote the curvature tensor under $\mathfrak{g}$. 
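It is worth noting a direct consequence of (\ref{Green-func-expression}): near each singular point $q_j$, the factor appearing in (\ref{Hk(x)}) behaves like \begin{equation*} e^{-32\pi^2\gamma_j G(x,q_j)}=e^{4\gamma_j\log d_g(x,q_j)-32\pi^2\gamma_j R(x,q_j)}\sim d_g(x,q_j)^{4\gamma_j}, \end{equation*} so $H_k$ carries precisely the weight $|x|^{4\gamma_j}$ of the model equation studied in the previous sections.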
We will apply the expansions of the metric $\mathfrak{g}$ and its derivatives in the conformal normal coordinates (see Appendix B of \cite{zhang-weinstein}), which are \begin{equation}\label{expansion-g} \begin{split} &\mathfrak{g}_{ab}(x)=\delta_{ab}+\frac{1}{3}R_{aijb}x^ix^j+O(r^3),\\ &\mathfrak{g}^{ab}(x)=\delta_{ab}-\frac{1}{3}R_{aijb}x^ix^j+O(r^3),\\ &\partial_c\mathfrak{g}^{ab}(x)=-\frac{1}{3}(R_{acib}+R_{aicb})x^i+O(r^2), \\ &\partial_{ab}\mathfrak{g}^{ab}(x)=\frac 13 R_{ia,a}x^i+O(r^2). \end{split} \end{equation} In addition, the following Pohozaev identity from Appendix D in \cite{zhang-weinstein} will play an important role when the blow-up analysis is carried out in the conformal normal coordinates. \begin{lemA}\cite{zhang-weinstein} For the equation $P_gu+2b=2he^{4u}$ in $M$ and $\Omega=B(0,r)$, there holds \begin{equation}\label{PI-mfd} \begin{split} &\int_{\Omega}\Big(2he^{4u}+\frac{1}{2}x^i\partial_i he^{4u}\Big) \\ =&\int_{\partial\Omega}\Big(\frac{1}{2}x^i\nu_ihe^{4u}-x^k\nu_jg^{ij}\partial_i(\Delta_g u)\partial_k u +\nu_jg^{ij}\Delta_gu\partial_iu +x^k\nu_jg^{ij}\Delta_gu \partial_{ik} u-\frac{1}{2}x^i\nu_i(\Delta_gu)^2 \Big) \\ &+\int_{\Omega}\Big(\Delta_gu\partial_i g^{ij}\partial_ju+x^k\Delta_gu\partial_{ik} g^{ij}\partial_ju+x^k\Delta_gu\partial_kg^{ij}\partial_{ij}u-2bx^i\partial_iu\Big) \\ &+2\int_{\partial\Omega}\Big(R_{ij,l}(q)x^lx^k\nu_i\partial_ju\partial_ku +O(r^3)|\nabla u|^2\Big) \\ &-\int_{\Omega}\Big(2R_{ij,l}(q)\big(x^l\partial_j u\partial_i u+x^kx^l\partial_j u\partial_{ik}u\big)+O(r^2)|\nabla u|^2+O(r^4)|\nabla^2 u|\Big) \end{split} \end{equation} \end{lemA} Note that we use $B(p,r)$ to denote a ball centered at $p$ with radius $r$. Sometimes, if the center is the origin, we use $B_r$ instead of $B(0,r)$. \section[Blow-up analysis]{Blow-up analysis near the singularity}\label{blowup-local} In this section, we focus on the blow-up analysis near $q_j$, and to simplify the notation, we will omit the superscript $j$. 
Similar to the argument in \cite{zhang-weinstein}, we will work in the conformal normal coordinates near $q$ from \cite{Lee-Parker}. To be specific, we can find some function $w$ defined on $M$, such that in a small neighborhood $B(q,\delta)$ of $q$, $\delta>0$, we have \begin{equation} \det\,(\hat{g})=1 \end{equation} in the normal coordinates of the conformal metric $\hat{g}=e^{2w}g$. For convenience we just use $g$ instead of $\hat g$. Note that in a neighborhood of $q$, \begin{equation}\label{hat-w} w(x)=O(d_g(x,q)^2), \end{equation} where $d_g(x,q)$ stands for the distance between $x$ and $q$ under the metric $g$. Using the conformal covariance property of $P_g$, the function $\mathfrak{u}_k=\tilde{u}_k-w$ satisfies \begin{equation}\label{equ-hat-uk} P_{g}\mathfrak{u}_k+2\mathfrak{b}_k=2H_ke^{4\mathfrak{u}_k}, \end{equation} and \begin{equation}\label{finite-int-hatuk} \int_MH_ke^{4\mathfrak{u}_k}{\rm d}V_{g}\leq C, \end{equation} where $2\mathfrak{b}_k=P_{g}w+2b_k$ and $\tilde{H}_k=H_ke^{4w}$. For simplicity, we still denote $\tilde{H}_k$ by $H_k$. We still use $G$ to denote the Green's function for $P_{g}$. Then we have the following Green's representation formula \begin{equation}\label{Green-formula} \mathfrak{u}_k(x)=\bar{\mathfrak{u}}_k+2\int_MG(x,y)H_k(y)e^{4\mathfrak{u}_k(y)}{\rm d}V_{g}(y)-2\int_MG(x,y)\mathfrak{b}_k(y){\rm d}V_{g}(y), \end{equation} where $\bar{\mathfrak{u}}_k$ is the average of $\mathfrak{u}_k$ over $(M,g)$. Using the expression of $G$ in (\ref{Green-func-expression}), we have \begin{equation}\label{Green-rep-formula} \mathfrak{u}_k(x)=\bar{\mathfrak{u}}_k+2\int_M \Big(-\frac{1}{8\pi^2}\log d_g(x,y)\chi\Big) H_k(y)e^{4\mathfrak{u}_k(y)}{\rm d}V_{g}(y)+\mathfrak{\phi}_k(x), \end{equation} where \begin{equation}\label{hat-phik} \mathfrak{\phi}_k(x)=2\int_MR(x,y)H_k(y)e^{4\mathfrak{u}_k(y)}{\rm d}V_{g}(y)-2\int_MG(x,y)\mathfrak{b}_k(y){\rm d}V_{g}(y). 
\end{equation} Since $\det\,(g)=1$ in $B(q,\delta)$, we have ${\rm d}V_{g}(y)={\rm d}y$ in $B(q,\delta)$. Taking the difference of (\ref{Green-rep-formula}) evaluated at $x$ and $q$, we get \begin{equation} \begin{split} &\mathfrak{u}_k(x)-\mathfrak{u}_k(q)\\ =&\frac{1}{4\pi^2}\int_M\big(\chi(r_q)\log|y-q|-\chi(r_x)\log d_g(x,y)\big)H_k(y)e^{4\mathfrak{u}_k(y)}{\rm d}V_{g}(y)+\mathfrak{\phi}_k(x)-\mathfrak{\phi}_k(q), \end{split} \end{equation} where $r_q=|y-q|$ and $r_x=d_g(x,y)$. Here, since the coordinates are normal, we have $d_g(y,q)=|y-q|$. Thanks to the cut-off function $\chi$, we can replace the integral over $M$ by an integral over $B(q,4\delta)$: \begin{equation}\label{difference-Green-for} \begin{split} &\mathfrak{u}_k(x)-\mathfrak{u}_k(q)\\ =&\frac{1}{4\pi^2}\int_{B(q,4\delta)}\big(\chi(r_q)\log|y-q|-\chi(r_x)\log d_g(x,y)\big)H_k(y)e^{4\mathfrak{u}_k(y)}{\rm d}V_{g}(y)\\ &+\mathfrak{\phi}_k(x)-\mathfrak{\phi}_k(q),\quad x\in B(q,2\delta). \end{split} \end{equation} \medskip We next show that if the mass near $q$ is small, then $\mathfrak{u}_k$ cannot blow up at $q$. Before stating this small-energy lemma, we point out that the function $H_k$ can be written in the neighborhood of a singular source $q$ as \begin{equation}\label{eq:mathcalh} H_k(x) = \mathfrak{h}_k(x)d_g(x,q)^{4\gamma}, \end{equation} with $ \mathfrak{h}_k(q) \neq 0$. Using this notation, our result reads as follows: \begin{lem}\label{lem-small-mass-regular} Let $q$ be a singular source with index $\gamma$. If \begin{equation}\label{smallness-condition} \int_{B(q,2\delta)}2\mathfrak{h}_kd_g(x,q)^{4\gamma}e^{4\mathfrak{u}_k(x)}{\rm d}V_{g}<\min\{8\pi^2,8\pi^2(1+\gamma)\}, \end{equation} where $\mathfrak{h}_k$ is defined in \eqref{eq:mathcalh}, then $\mathfrak{u}_k\leq C$ in $B(q,\delta)$. \end{lem} \begin{proof} Note that \begin{equation*} P_{g}\mathfrak{u}_k=2\mathfrak{h}_kd_g(x,q)^{4\gamma}e^{4\mathfrak{u}_k(x)}-2\mathfrak{b}_k. 
\end{equation*} We write $\mathfrak{u}_k=u_{1k}+u_{2k}$, where $u_{1k}$ is the solution of \begin{equation}\label{smallness-equ} \left\{\begin{array}{lcl} \Delta^2 u_{1k}=2\mathfrak{h}_kd_g(x,q)^{4\gamma}e^{4\mathfrak{u}_k(x)}&& {\rm in} \ \, B(q,2\delta)\\ u_{1k}(x)=\Delta u_{1k}(x)=0 && {\rm on} \ \, \partial B(q,2\delta). \end{array} \right. \end{equation} By Lemma \ref{lem-BM} from \cite{lin-classification} we have \begin{equation}\label{BM-thm-1} \int_{B(q,2\delta)}\exp\Big\{\frac{\tilde{\delta}|u_{1k}|}{\parallel 2H_ke^{4\mathfrak{u}_k} \parallel_{L^1(B(q,2\delta),g)}}\Big\}{\rm d}V_{g}\leq C, \end{equation} with any $\tilde{\delta}\in(0,32\pi^2)$ and some constant $C=C(\tilde{\delta},\delta)$. On the one hand, in $B(q,2\delta)$, \begin{equation}\label{u1k} u_{1k}(x)=\int_{B(q,2\delta)}G_{\delta}(x,\eta)2\mathfrak{h}_k(\eta)d_g(\eta, q)^{4\gamma}e^{4\mathfrak{u}_k(\eta)}{\rm d}\eta, \quad x\in B(q,2\delta), \end{equation} where $G_{\delta}(x,y)$ is the Green's function of $\Delta^2$ on $B(q,2\delta)$: $$G_{\delta}(x,y)=-\frac{1}{8\pi^2}\log |x-y|+R_{\delta}(x,y), $$ with $$ G_{\delta}(x,y)=\Delta_yG_{\delta}(x,y)=0,\quad \mbox{for } x\in B(q,2\delta), y\in \partial B(q,2\delta). $$ In particular, for $x\in B(q, \frac 32\delta)$, $$u_{1k}(x)=-\frac 1{8\pi^2}\int_{B(q,2\delta)}\log |x-\eta|\,2\mathfrak{h}_k(\eta)|\eta -q|^{4\gamma}e^{4\mathfrak{u}_k(\eta)}{\rm d}\eta+O(1),\quad x\in B(q,\frac 32\delta). $$ On the other hand, the Green's representation formula of $\mathfrak{u}_k$ gives $$\mathfrak{u}_k(x)=u_{1k}(x)+u_{2k}(x)=\overline{ \mathfrak{u}_k}+\int_MG(x,\eta)2\mathfrak{h}_k(\eta)d_g(\eta,q)^{4\gamma}e^{4\mathfrak{u}_k(\eta)}{\rm d}V_g(\eta). $$ Since the leading terms of $G$ and $G_{\delta}$ are both $-\frac 1{8\pi^2}\log d_g(x,\eta)$, we have $$u_{2k}(x)=\overline{\mathfrak{u}_k}+O(1). $$ From $\int_Me^{4\mathfrak{u}_k}{\rm d}V_g\le C$ and Jensen's inequality, $$e^{4\bar{\mathfrak{u}}_k}\le C\int_M e^{4\mathfrak{u}_k}{\rm d}V_g\le C.$$ Therefore, $u_{2k}\leq C$ in $B(q,\frac{3}{2}\delta)$. 
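For the reader's convenience, the Jensen step used above reads explicitly: by the convexity of $t\mapsto e^{t}$, \begin{equation*} e^{4\bar{\mathfrak{u}}_k}=\exp\Big(\frac{4}{{\rm vol}_g(M)}\int_M\mathfrak{u}_k\,{\rm d}V_g\Big)\leq \frac{1}{{\rm vol}_g(M)}\int_M e^{4\mathfrak{u}_k}\,{\rm d}V_g\leq C. \end{equation*}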
Now we focus on $u_{1k}$. If $\gamma>0$, (\ref{smallness-condition}) reads \begin{equation*} \int_{B(q,2\delta)}2\mathfrak{h}_kd_g(x,q)^{4\gamma}e^{4\mathfrak{u}_k(x)}{\rm d}V_{g}<8\pi^2. \end{equation*} Since $u_{2k}$ is bounded from above in $B(q,\frac 32\delta)$, we see from (\ref{BM-thm-1}) that there exists some $p>1$ such that $e^{4u_{1k}}\in L^{p}(B(q,2\delta))$: \begin{equation}\label{Lp'-estimate} \parallel 2\mathfrak{h}_kd_g(x,q)^{4\gamma}e^{4\mathfrak{u}_k(x)}\parallel_{ L^{p}(B(q,\frac{3}{2}\delta))}\leq C. \end{equation} The estimate (\ref{Lp'-estimate}) leads to an $L^{\infty}$ bound of $u_{1k}$ in $B(q,\delta)$ for two reasons. First, integrating (\ref{u1k}) gives an $L^1$ bound of $u_{1k}$ in $B(q,\frac 32\delta)$: \begin{equation} \parallel u_{1k} \parallel_{L^1(B(q,\frac{3}{2}\delta))}\leq C. \end{equation} Second, the standard interior regularity results in \cite{Browder-regularity} (Theorem 1 in Section 3 of \cite{Browder-regularity}) give \begin{equation*} \parallel u_{1k}\parallel_{W^{4,p}(B(q,\delta))}\leq C\Big(\parallel 2\mathfrak{h}_kd_g(x,q)^{4\gamma}e^{4\mathfrak{u}_k(x)}\parallel_{ L^{p}(B(q,\frac{3}{2}\delta))}+\parallel u_{1k} \parallel_{L^1(B(q,\frac{3}{2}\delta))}\Big)\leq C. \end{equation*} Thus we have obtained the $L^{\infty}$ bound of $u_{1k}$ in $B(q,\delta)$ by the standard Sobolev embedding theorem. If $-1<\gamma<0$, (\ref{smallness-condition}) reads \begin{equation*} \int_{B(q,2\delta)}2\mathfrak{h}_kd_g(x,q)^{-4|\gamma |}e^{4\mathfrak{u}_k(x)}{\rm d}V_{g}<8\pi^2(1-|\gamma |). \end{equation*} Since $\frac{\tilde \delta}{8\pi^2(1-|\gamma |)}<\frac{4}{1-|\gamma|}$, this strict inequality makes it possible to choose $\tilde{\delta}\in(0,32\pi^2)$ and $p>1$ with \begin{equation*} p<\frac{1}{|\gamma |}\quad {\rm and}\quad p<\frac{\tilde{\delta}}{32\pi^2(1-|\gamma |)}. \end{equation*} Then the H${\rm \ddot{o}}$lder inequality tells us that (\ref{Lp'-estimate}) is also true in this case, and the $L^{\infty}$ bound of $u_{1k}$ over $B(q,\delta)$ follows immediately. 
The combination of the $L^{\infty}$ bound of $u_{1k}$ and the upper bound of $u_{2k}$ implies that $\mathfrak{u}_k\le C$ in $B(q, \delta)$. \end{proof} \begin{rem} \label{rem-small} If $q$ is a regular point, the same proof obviously leads to a similar conclusion: if $B(q,2\delta)$ contains no singular source and $\int_{B(q,2\delta)}2\mathfrak{h}_k e^{4\mathfrak{u}_k}<8\pi^2$, then $\mathfrak{u}_k$ is bounded from above in $B(q,\delta)$. \end{rem} Another immediate consequence of Lemma \ref{lem-small-mass-regular} is that a blow-up sequence can concentrate only at point measures. The following theorem goes one step further to assert that the point measure is quantized and that the bubbling solutions tend to $-\infty$ away from the blow-up points. \begin{thm}[Concentration and Quantization]\label{thm-con-quan} Let $\{\mathfrak{u}_k\}$ be a sequence of solutions to (\ref{equ-hat-uk}) with (\ref{finite-int-hatuk}). Assume that $q$ is the only blow-up point of $\mathfrak{u}_k$ in $B(q,\delta)$. Then as $k\to+\infty$, along a subsequence, there hold \begin{equation}\label{con-neg-infty} \mathfrak{u}_k\to-\infty,\quad uniformly\ \, on \ \, any\ \, compact \ \, set\ \, of\ \, B(q,\delta)\setminus\{q\}, \end{equation} \begin{equation}\label{con-measure} 2 \mathfrak{h}_kd_g(x,q)^{4\gamma}e^{4\mathfrak{u}_k}\to\beta\delta_q,\quad weakly\ \, in \ \, the\ \, sense \ \, of\ \,measure\ \, on\ \,M, \end{equation} where $\beta=16\pi^2(1+\gamma)$ and $\mathfrak{h}_k$ is defined in \eqref{eq:mathcalh}. In particular, along a subsequence, \begin{equation}\label{con-mass} 2 \int_{B(q,\delta)}H_k(x)e^{4\mathfrak{u}_k(x)}{\rm d}V_{g}\to16\pi^2(1+\gamma),\quad as\ \, k\to+\infty. 
\end{equation} \end{thm} \begin{proof}[\textbf{Proof}] Since $q$ is the only blow-up point in $B(q,2\delta)$, we observe that for any given $K\subset\subset B(q,\delta)\setminus\{q\}$ \begin{equation}\label{bd-osi} |\mathfrak{u}_k(x)-\mathfrak{u}_k(y)|\le C(K),\quad x,y\in K, \end{equation} because $|x-q|$ is comparable to $|y-q|$ and the total integral of $\mathfrak{h}_ke^{4\mathfrak{u}_k}$ is bounded. It is easy to obtain (\ref{bd-osi}) from the Green's representation formula. Then it is also immediate to observe that \begin{equation}\label{uk-grad} |\nabla_{g}^j\mathfrak{u}_k(x)|_{g}\leq C(K),\quad {\rm in}\ \,K,\quad j=1,2,3, \end{equation} because when $x\in K$, $$|\nabla^jG(x,y)|\le C|x-y|^{-j}, \quad x,y\in B(q,2\delta), $$ and $\mathfrak{u}_k$ has an upper bound. These two facts imply (\ref{uk-grad}) by trivial estimates. Then the equation for $\mathfrak{u}_k$ further provides the estimate for the fourth order derivatives of $\mathfrak{u}_k$: $$|\nabla^{\alpha}\mathfrak{u}_k(x)|\le C(K), \quad x\in K, \quad |\alpha|=4. $$ Now we prove (\ref{con-neg-infty}) by contradiction. Suppose that there exists a point $x_0\in B(q,\delta)\setminus\{q\}$ such that $\{\mathfrak{u}_k(x_0)\}_{k\in\mathbb{N}}$ is bounded from below. By (\ref{bd-osi}) we see that $\mathfrak{u}_k$ is bounded in the $L^{\infty}$ norm on any compact subset of $B(q,\delta)\setminus\{q\}$. This fact and the gradient estimates of $\mathfrak{u}_k$ guarantee that along a subsequence \begin{equation*} \mathfrak{u}_k\to \mathfrak{u}_0\quad {\rm in}\ \,C_{loc}^{3,\sigma}(B(q,2\delta)\setminus\{q\}), \end{equation*} with some constant $\sigma\in(0,1)$, and the limit function $\mathfrak{u}_0$ solves \begin{equation*} P_{g}\mathfrak{u}_0(x)+2\mathfrak{b}_0(x)=2\mathfrak{h}_0(x)d_g(x,q)^{4\gamma}e^{4\mathfrak{u}_0(x)} \quad {\rm in} \ \,B(q,2\delta)\setminus\{q\}. 
\end{equation*} Around $q$ we use $\beta_k(r)$ and its limits to describe the concentration of energy: \begin{equation*} \begin{split} &\beta_k(r)=\int_{B(q,r)}2\mathfrak{h}_k(x)d_g(x,q)^{4\gamma}e^{4\mathfrak{u}_k(x)}{\rm d}V_{g}, \\ &\beta(r)=\lim_{k\to+\infty}\beta_k(r),\quad \beta=\lim_{r\to 0}\beta(r). \end{split} \end{equation*} From Lemma \ref{lem-small-mass-regular} and Remark \ref{rem-small} we see that if $q$ is a singular point, then $\beta\ge \min\{8\pi^2(1+\gamma),8\pi^2\}$, while if $q$ is a regular point, then $\beta\ge 8\pi^2$. Fixing any small $r>0$, we integrate the equation of $\mathfrak{u}_k$ in $B(q,r)$ to obtain \begin{equation*} \int_{B(q,r)}\big(P_{g}\mathfrak{u}_k+2\mathfrak{b}_k\big){\rm d}V_{g}=\int_{B(q,r)}2H_ke^{4\mathfrak{u}_k}{\rm d}V_{g}=\beta_k(r), \end{equation*} which implies \begin{equation*} \lim_{r\to 0}\int_{B(q,r)}\big(P_{g}\mathfrak{u}_0+2\mathfrak{b}_0\big){\rm d}V_{g}=\beta. \end{equation*} Therefore, $\mathfrak{u}_0$ satisfies, in the distribution sense, \begin{equation*} P_{g}\mathfrak{u}_0(x)+2\mathfrak{b}_0(x)=2\mathfrak{h}_0(x)d_g(x,q)^{4\gamma}e^{4\mathfrak{u}_0(x)}+\beta \delta_q \quad {\rm in} \ \,B(q,2\delta). \end{equation*} Using the Green's representation formula for $\mathfrak{u}_0$, we have \begin{equation}\label{hatu0-decompose} \mathfrak{u}_0(x)=-\frac{\beta}{8\pi^2}\log d_g(x,q)+v(x)+w(x), \end{equation} where the first term comes from the convolution of $-\frac 1{8\pi^2}\log d_g(x,y)\chi$ with $\beta \delta_q$, the second term $v$ comes from the convolution of $-\frac 1{8\pi^2}\log d_g(x,y)\chi $ with $2\mathfrak{h}_0d_g(x,q)^{4\gamma}e^{4\mathfrak{u}_0}$: \begin{equation}\label{hatu0-v} v(x)=-\frac{1}{4\pi^2}\int_{B(q,\delta)}\log d_g(x,y)\mathfrak{h}_0(y)d_g(y,q)^{4\gamma}e^{4\mathfrak{u}_0(y)}{\rm d}V_{g}(y), \end{equation} and $w$ collects the remaining, regular terms: \begin{equation}\label{hatu0-w} w\in C^4(M). 
\end{equation} For $v$ we use (\ref{hatu0-v}) to define $v(x)$ in $B(q,\delta)$, and we extend $v$ so that $v\equiv 0$ on $M\setminus B(q,2\delta)$. Based on the definition of $v$, we now show that $v\in L^{\infty}(B(q,\delta))$. In fact, from (\ref{hatu0-v}) we have the following lower bound of $v$ in $B(q,\delta)$: \begin{equation}\label{v-lb} v(x)\geq \frac{1}{4\pi^2}\log\frac{1}{\delta}\parallel V\parallel_{L^1(M)}\geq C\quad {\rm in} \ \, B(q,\delta), \end{equation} where $$V(x)=\mathfrak{h}_0(x)d_g(x,q)^{4\gamma}e^{4\mathfrak{u}_0(x)}. $$ The lower bounds of $\mathfrak{h}_0$ and $v$ lead to a lower bound for $V(x)$: \begin{equation*} V(x)=\mathfrak{h}_0(x)d_g(x,q)^{4\gamma}e^{4\mathfrak{u}_0(x)}\geq Cd_g(x,q)^{4\gamma-\frac{\beta}{2\pi^2}}e^{4v(x)+4w(x)}\geq\frac{c}{d_g(x,q)^s} \end{equation*} with $s=\frac{\beta}{2\pi^2}-4\gamma$ and a suitable $c>0$. Since $\|V\|_{L^1}<\infty$, we see immediately that $s<4$, which is equivalent to \begin{equation}\label{beta-con-1} \beta<8\pi^2(1+\gamma). \end{equation} Thus there is no way for $\mathfrak{u}_k$ to be bounded from below away from the singular source unless $\gamma>0$: for $\gamma\le 0$, (\ref{beta-con-1}) contradicts the lower bound $\beta\ge\min\{8\pi^2(1+\gamma),8\pi^2\}=8\pi^2(1+\gamma)$. We have proved (\ref{con-neg-infty}) for $\gamma\le 0$. For $\gamma>0$ we have an upper bound for $V(x)$: \begin{equation} V(x)\leq\frac{c}{d_g(x,q)^s}e^{4v(x)}\quad {\rm in} \ \, B(q,\delta), \quad \mbox{if}\quad \gamma>0. \end{equation} To proceed with the proof of $v\in L^{\infty}$, we observe from (\ref{hatu0-v}) and direct computation that \begin{equation*} \left\{\begin{array}{lcl} \Delta^2 v(x)=V(x)+\eta(x)&& {\rm in} \ \, B(q,2\delta)\\ v(x)=\Delta v(x)=0 && {\rm on} \ \, \partial B(q,2\delta), \end{array} \right. \end{equation*} where $\eta$ is smooth in $B(q,2\delta)$. Note that the boundary condition of $v$ on $\partial B(q,2\delta)$ is based on the smooth extension of $v$ mentioned before. Now we employ a standard argument of Brezis-Merle \cite{BM} to obtain $e^{\kappa |v|}\in L^1(B(q,2\delta),g)$ for any constant $\kappa>0$. 
Indeed, let $0<\varepsilon<1/\kappa$ and $V=V_1+V_2$ with $\parallel V_1\parallel_{L^1(B(q,2\delta))}<\varepsilon$ and $V_2\in L^{\infty}(B(q,2\delta))$. Correspondingly we write $v=v_1+v_2$, where $v_1$ solves \begin{equation*} \left\{\begin{array}{lcl} \Delta^2 v_1(x)=V_1(x)&& {\rm in} \ \, B(q,2\delta)\\ v_1(x)=\Delta v_1(x)=0 && {\rm on} \ \, \partial B(q,2\delta), \end{array} \right. \end{equation*} and $v_2$ solves \begin{equation*} \left\{\begin{array}{lcl} \Delta^2 v_2(x)=V_2(x)+\eta(x)&& {\rm in} \ \, B(q,2\delta)\\ v_2(x)=\Delta v_2(x)=0 && {\rm on} \ \, \partial B(q,2\delta). \end{array} \right. \end{equation*} Choosing $\tilde{\delta}=32\pi^2-1$ in (\ref{BM-thm-1}), we find $\int_{B(q,2\delta)}e^{\kappa |v_1|}\leq\int_{B(q,2\delta)}e^{\frac{|v_1|}{\parallel V_1\parallel_{L^1(B(q,2\delta))}}}\leq C$ based on the smallness of $\varepsilon$. Then by standard elliptic regularity theory, we have $v_2\in L^{\infty}(B(q,2\delta))$. Consequently, $e^{\kappa |v|}\in L^1(B(q,2\delta),g)$. Since $\kappa$ can be taken arbitrarily large, it is possible to use the H${\rm \ddot{o}}$lder inequality to obtain $V\in L^{p}(B(q,2\delta),g)$ for some $p\in(1,\frac{4}{s})$ if $s>0$ and $p\in(1,+\infty)$ if $s\leq 0$. Thus we have proved that $v\in L^{\infty}(B(q,2\delta))$. From the $L^{\infty}$ bound of $v$ we can use two positive constants $c_1$ and $c_2$ to bound $V$ from above and below: \begin{equation}\label{range-V} \frac{c_1}{d_g(x,q)^s}\leq V(x)=\mathfrak{h}_0(x)d_g(x,q)^{4\gamma}e^{4\mathfrak{u}_0(x)}\leq \frac{c_2}{d_g(x,q)^s},\quad \mbox{if}\quad \gamma>0. \end{equation} Next, we aim to derive a contradiction by taking advantage of the Pohozaev identity (\ref{PI-mfd}) in Section \ref{preliminaries}. 
Set $h(x)=\mathfrak{h}_0(x)d_g(x,q)^{4\gamma}$ and $\Omega=B(q,r)$ in (\ref{PI-mfd}); here we apply the identity to $\mathfrak{u}_k$ and pass to the limit $k\to+\infty$, using the $C_{loc}^{3,\sigma}$ convergence $\mathfrak{u}_k\to\mathfrak{u}_0$ away from $q$ for the boundary terms and $\beta_k(r)\to\beta(r)$ for the interior mass. Direct computation and (\ref{comparsion-dist}) give rise to \begin{equation*} \int_{\Omega}(x-q)^i\partial_i\big(\mathfrak{h}_0(x)d_g(x,q)^{4\gamma}\big)e^{4\mathfrak{u}_0}=\int_{B(q,r)}\big((x-q)^i\partial_i \mathfrak{h}_0(x)d_g(x,q)^{4\gamma}e^{4\mathfrak{u}_0}+(x-q)^i\mathfrak{h}_0\partial_i d_g(x,q)^{4\gamma}e^{4\mathfrak{u}_0}\big) \end{equation*} and \begin{equation*} \begin{split} \partial_i d_g(x,q)^{4\gamma}=&4\gamma d_g(x,q)^{4\gamma-1}\partial_i d_g(x,q)=4\gamma d_g(x,q)^{4\gamma-1}\big(\partial_i|x-q|+O(|x-q|)\big)\\ =&4\gamma d_g(x,q)^{4\gamma-2}\big((x-q)^i+O(|x-q|^2)\big). \end{split} \end{equation*} Immediately, together with (\ref{comparsion-dist}) we get \begin{equation*} \begin{split} &\int_{\Omega}\frac{1}{2}(x-q)^i\partial_i\big(\mathfrak{h}_0(x)d_g(x,q)^{4\gamma}\big)e^{4\mathfrak{u}_0}\\ =&\int_{B(q,r)}2\gamma \mathfrak{h}_0(x)d_g(x,q)^{4\gamma}e^{4\mathfrak{u}_0}+\int_{B(q,r)}\frac{1}{2}\partial_{\nu} \mathfrak{h}_0 d_g(x,q)^{4\gamma+1}e^{4\mathfrak{u}_0}+O(r). \end{split} \end{equation*} Therefore, as $r\to 0$, \begin{equation}\label{LHS-PI} {\rm (LHS)\ \,of\ \,}(\ref{PI-mfd})=(1+\gamma)\beta(r)+O(r)=(1+\gamma)\beta+o_r(1), \end{equation} where $\lim_{r\to 0}o_r(1)=0$. Denote the four integrals on the right hand side of (\ref{PI-mfd}) by $I_1$, $I_2$, $I_3$ and $I_4$, respectively. 
Thanks to the expansions of $g$, which are in (\ref{expansion-g}), we obtain that \begin{equation*} |I_2|\leq C\int_{B(q,r)}\Big(r|\nabla^2\mathfrak{u}_0||\nabla\mathfrak{u}_0|+|x-q||\nabla^2\mathfrak{u}_0||\nabla\mathfrak{u}_0|+r|x-q||\nabla^2\mathfrak{u}_0|+|x-q||\nabla \mathfrak{u}_0|\Big), \end{equation*} \begin{equation*} |I_3|\leq C\int_{\partial B(q,r)}\Big(|x-q|^2|\nabla\mathfrak{u}_0|^2+O(r^3)|\nabla\mathfrak{u}_0|^2\Big), \end{equation*} \begin{equation*} |I_4|\leq C\int_{B(q,r)}\Big(|x-q||\nabla\mathfrak{u}_0|^2+|x-q|^2|\nabla^2\mathfrak{u}_0||\nabla\mathfrak{u}_0|+O(r^2)|\nabla\mathfrak{u}_0|^2+O(r^4)|\nabla^2\mathfrak{u}_0|\Big). \end{equation*} Next, we shall estimate $|\nabla^j\mathfrak{u}_0|$ in $B(q,r)$ for $j=1,2,3$. Recalling (\ref{hatu0-decompose})$\sim$(\ref{hatu0-w}), it is important to consider $\nabla^j v$ in $B(q,r)$ for $j=1,2,3$. By means of the Green's representation formula, we observe that \begin{equation*} \begin{split} |\nabla^jv(x)|\leq C\int_{B(q,2\delta)}\frac{1}{d_g(x,y)^j}V(y){\rm d}V_{g}(y)+O(1). \end{split} \end{equation*} In order to estimate the integral in the inequality above, we decompose $B(q,2\delta)$ into two parts \begin{equation*} \Omega_1=B(q,2\delta)\cap\Big\{y:\ d_g(x,y)\leq\frac{d_g(x,q)}{2}\Big\},\quad \Omega_2=B(q,2\delta)\setminus \Omega_1. \end{equation*} In this estimate we use the bound $V(y)=O(1)d_g(y,q)^{-s}$ from (\ref{range-V}) and the fact that $d_g(y,q)\geq\frac{d_g(x,q)}{2}$ on $\Omega_1$. Hence \begin{equation}\label{inte-omega1} \int_{\Omega_1}\frac{1}{d_g(x,y)^j}V(y){\rm d}V_{g}(y)\leq \frac{C}{d_g(x,q)^s}\int_{B(x,\frac{d_g(x,q)}{2})}\frac{1}{d_g(x,y)^j}{\rm d}V_{g}(y)\leq C d_g(x,q)^{4-s-j}. \end{equation} Using (\ref{range-V}) again, we obtain that \begin{equation*} \begin{split} \tilde{I}:=\int_{\Omega_2}\frac{1}{d_g(x,y)^j}V(y){\rm d}V_{g}(y)\leq C\int_{\Omega_2}\frac{1}{d_g(x,y)^j}\frac{1}{d_g(y,q)^s}{\rm d}V_{g}(y). 
\end{split} \end{equation*} Fixing some $t$ as follows \begin{equation}\label{t-range} t\in \left\{\begin{array}{lcl} (1,\frac{4}{s}),&& s>0,\\ (1,+\infty),&& s\leq 0, \end{array} \right. \end{equation} we have $-ts>-4$. It follows from the H$\ddot{\rm o}$lder inequality that \begin{equation*} \begin{split} \tilde{I}\leq C\Big(\int_{\Omega_2}\frac{1}{d_g(x,y)^{jt^*}}{\rm d}V_{g}(y)\Big)^{\frac{1}{t^*}}\Big(\int_{\Omega_2}\frac{1}{d_g(y,q)^{st}}{\rm d}V_{g}(y)\Big)^{\frac{1}{t}} \leq C\Big(\int^{\tilde{c}}_{\frac{d_g(x,q)}{2}}\frac{1}{\rho^{jt^*-3}}{\rm d}\rho\Big)^{\frac{1}{t^*}}, \end{split} \end{equation*} where $t^*=\frac{t}{t-1}$ denotes the conjugate of $t$ and $\tilde{c}$ is some positive constant. Then direct computation and the fact $-ts>-4$ imply that \begin{equation}\label{inte-omega2} \tilde{I}=\int_{\Omega_2}\frac{1}{d_g(x,y)^j}V(y){\rm d}V_{g}(y)\leq \left\{\begin{array}{lcl} C|\log d_g(x,q)|^{\frac{1}{t^*}},&& {\rm if} \ \,jt^*=4,\\ Cd_g(x,q)^{\frac{4}{t^*}-j}+C,&& {\rm if} \ \,jt^*\neq 4. \end{array} \right. \end{equation} In view of (\ref{t-range}), we get that \begin{equation*} t^*\in \left\{\begin{array}{lcl} (\frac{4}{4-s},+\infty),&& s>0,\\ (1,+\infty),&& s\leq 0. \end{array} \right. \end{equation*} Hence, there holds $\frac{4}{t^*}-j<4-s-j$. Consequently, from (\ref{inte-omega1}) and (\ref{inte-omega2}) there exists some $\tau>0$ such that for any $r\in(0,\delta)$ \begin{equation}\label{est-dv-inball} |\nabla^j v(x)|\leq Cd_g(x,q)^{\tau-j}+C,\quad j=1,2,3,\quad x\in B(q,r). \end{equation} In fact, we may choose $\tau\in(0,1)$ if $jt^*=4$, and otherwise $\tau=\frac{4}{t^*}$. At this point, we obtain that \begin{equation} |\nabla^j \mathfrak{u}_0(x)|\leq Cd_g(x,q)^{-j},\quad j=1,2,3,\quad x\in B(q,r). 
\end{equation} Thus by virtue of (\ref{comparsion-dist}) and (\ref{est-dv-inball}), we may adjust $\tau>0$ such that on $\partial B(q,r)$ \begin{equation*} \begin{split} &\frac{\partial\mathfrak{u}_0}{\partial r}=-\frac{\beta}{8\pi^2}\frac{1}{|x-q|}+O(r^\tau)\frac{1}{|x-q|}+O(1), \\ &\Delta\mathfrak{u}_0=-\frac{\beta}{4\pi^2}\frac{1}{|x-q|^2}+O(r^\tau)\frac{1}{|x-q|^2}+O(1), \\ &\frac{\partial\Delta\mathfrak{u}_0}{\partial r}=\frac{\beta}{2\pi^2}\frac{1}{|x-q|^3}+O(r^\tau)\frac{1}{|x-q|^3}+O(1). \end{split} \end{equation*} \medskip Therefore, the estimates of $I_2$, $I_3$ and $I_4$ can be improved: \begin{equation*} \begin{split} |I_2|&\leq C\int_{B(q,r)}\big(r\,d_g(x,q)^{-3}+d_g(x,q)^{-2}+1\big){\rm d}V_{g}\leq Cr^2, \\ |I_3|&\leq C\int_{\partial B(q,r)}r^2d_g(x,q)^{-2}{\rm d}\sigma_{g}\leq Cr^3, \\ |I_4|&\leq C\int_{B(q,r)}d_g(x,q)^{-1}{\rm d}V_{g}\leq Cr^3. \\ \end{split} \end{equation*} Finally, for $I_1$ we use the expansions of $g^{ij}$ to obtain \begin{equation*} \begin{split} I_1=&\int_{\partial B(q,r)}\big(-r\nu_i\partial_i(\Delta_{g}\mathfrak{u}_0)\partial_{\nu}\mathfrak{u}_0+\Delta_{g}\mathfrak{u}_0\partial_{\nu}\mathfrak{u}_0+(x-q)^k\nu_i\Delta_{g}\mathfrak{u}_0\partial_{ik}\mathfrak{u}_0-\frac{1}{2}r(\Delta_{g}\mathfrak{u}_0)^2\big)\\ &+O(r)\\ =&\int_{\partial B(q,r)}\Big(-r\nu_i\partial_i(\Delta\mathfrak{u}_0)\partial_{\nu}\mathfrak{u}_0+\Delta\mathfrak{u}_0\partial_{\nu}\mathfrak{u}_0+(x-q)^k\nu_i\Delta\mathfrak{u}_0\partial_{ik}\mathfrak{u}_0-\frac{1}{2}r(\Delta\mathfrak{u}_0)^2\Big)+O(r), \end{split} \end{equation*} where we have used $\det\,(g)=1$ in $B(q,\delta)$ and $\Delta_{g}u=\partial_i(g^{ij}\partial_j u)$. Consequently, \begin{equation}\label{RHS-PI} {\rm (RHS)\ \,of\ \,}(\ref{PI-mfd})=I_1+o_r(1)=\frac{\beta^2}{16\pi^2}+o_r(1). \end{equation} Combining (\ref{LHS-PI}) and (\ref{RHS-PI}), we derive that $\beta=16\pi^2(1+\gamma)$, which yields a contradiction to (\ref{beta-con-1}) in the case $\gamma>0$. 
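For the reader's convenience, we record the leading-order values of the four boundary terms in $I_1$, computed from the expansion $\mathfrak{u}_0=-\frac{\beta}{8\pi^2}\log|x-q|+O(1)$ and its derivatives above, together with $|\partial B(q,r)|=2\pi^2r^3$: \begin{equation*} \begin{split} &\int_{\partial B(q,r)}\big(-r\nu_i\partial_i(\Delta\mathfrak{u}_0)\partial_{\nu}\mathfrak{u}_0\big)=\frac{\beta^2}{8\pi^2}+o_r(1),\qquad \int_{\partial B(q,r)}\Delta\mathfrak{u}_0\partial_{\nu}\mathfrak{u}_0=\frac{\beta^2}{16\pi^2}+o_r(1),\\ &\int_{\partial B(q,r)}(x-q)^k\nu_i\Delta\mathfrak{u}_0\partial_{ik}\mathfrak{u}_0=-\frac{\beta^2}{16\pi^2}+o_r(1),\qquad \int_{\partial B(q,r)}\Big(-\frac{1}{2}r(\Delta\mathfrak{u}_0)^2\Big)=-\frac{\beta^2}{16\pi^2}+o_r(1), \end{split} \end{equation*} and the sum of the four values is $\frac{\beta^2}{16\pi^2}+o_r(1)$, in agreement with (\ref{RHS-PI}).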
Therefore $\mathfrak{u}_k\to-\infty$ uniformly on any compact subset of $B(q,2\delta)\setminus\{q\}$, $2\mathfrak{h}_k(x)d_g(x,q)^{4\gamma}e^{4\mathfrak{u}_k(x)}\to 0$ uniformly on any compact subset of $B(q,2\delta)\setminus\{q\}$ and \begin{equation*} 2\mathfrak{h}_k(x)d_g(x,q)^{4\gamma}e^{4\mathfrak{u}_k(x)}\to \beta\delta_q \ \,{\rm\ \,weakly\ \,in\ \,the\ \,sense\ \,of\ \,measure\ \,on\ \,}M. \end{equation*} In the end, we show that the quantized value $\beta$ is exactly $16\pi^2(1+\gamma)$. To see this, set $c_k=\dashint_{\partial B(q,\delta)}\mathfrak{u}_k{\rm d}\sigma_{g}$ and $\check{u}_k(x)=\mathfrak{u}_k(x)-c_k$. Then we have $c_k\to-\infty$ and $\check{u}_k\to\check{u}$ in $C^4_{loc}(B(q,\delta)\setminus\{q\})$ as $k\to+\infty$. Moreover, there exists a smooth function $\check{v}$ such that $\check{u}(x)=-\frac{\beta}{8\pi^2}\log d_g(x,q)+\check{v}(x)$. Taking advantage of the Pohozaev identity as before, we obtain $\beta=16\pi^2(1+\gamma)$ and \begin{equation*} 2\mathfrak{h}_k(x)d_g(x,q)^{4\gamma}e^{4\mathfrak{u}_k(x)}\to 16\pi^2(1+\gamma)\delta_q. \end{equation*} \end{proof} Based on Theorem \ref{thm-con-quan} and its proof, we immediately obtain the following corollaries. \begin{cor} Suppose that $\mathfrak{u}_k$ satisfies the assumptions in Theorem \ref{thm-con-quan}. Then along a subsequence, there holds \begin{equation*} \mathfrak{u}_k-c_k\to-2(1+\gamma)\log d_g(x,q)+\hat{v},\quad in \ \,C^4_{loc}(B(q,2\delta)\setminus\{q\}), \end{equation*} where $c_k=\dashint_{\partial B(q,\delta)}\mathfrak{u}_k{\rm d}\sigma_{g}\to-\infty$ and $\hat{v}$ is a smooth function in $B(q,2\delta)$. \end{cor} \begin{cor} Suppose that $\{\mathfrak{u}_k\}$ satisfies $$P_{g} \mathfrak{u}_k(x)+2\mathfrak{b}_k=2H_ke^{4\mathfrak{u}_k}\quad in \ \, M$$ with $\int_{B(q_j,2\delta)}2H_ke^{4\mathfrak{u}_k}{\rm d}V_{g}\to \rho_j<16\pi^2(1+\gamma_j)$ for some $j\in\{1,\cdots,N\}$. Then $\{\mathfrak{u}_k\}$ is uniformly bounded from above on any compact subset of $B(q_j,2\delta)$. In particular, $\{\mathfrak{u}_k\}$ cannot blow up in $B(q_j,2\delta)$. 
\end{cor} \section[A Priori Estimate]{Concentration-Compactness Result and A Priori Estimate}\label{CC-Apriori} In this section, we aim to establish the concentration-compactness principle and the a priori estimate based on the results in Section \ref{blowup-local}. Indeed, we will derive the following concentration-compactness type result for the regular part of $\{u_k\}$. Set \begin{equation*} \rho_k=\int_M 2H_ke^{4\mathfrak{u}_k}{\rm d}V_{g}. \end{equation*} \begin{thm}[Concentration-Compactness]\label{thm-concentration-compactness} Let $\{\tilde{u}_k\}$ be a sequence of solutions to (\ref{equ-tilde-uk}) satisfying (\ref{finite-integral-tildeuk}) with $\rho_k\to\rho$. Then there exists a subsequence, still denoted by $\{\tilde{u}_k\}$, for which one of the following alternatives holds: \begin{itemize} \item[(i)] $\sup_{\varSigma}|\tilde{u}_k|\leq C_{\varSigma}$, for any $\varSigma\subset\subset M$. \item[(ii)] $\sup_{\varSigma}\tilde{u}_k\to-\infty$, for any $\varSigma\subset\subset M$. \item[(iii)] There exist a finite set $S=\{p^1,\cdots,p^m\}\subset M$ with $m\in \mathbb{N}$, and sequences of points $\{x_k^1\}_{k\in\mathbb{N}},\cdots,\{x_k^m\}_{k\in\mathbb{N}}\subset M$, such that for all $i=1,\cdots,m$ \begin{equation*} x_k^i\to p^i,\quad \sup_{\varSigma}\tilde{u}_k\to-\infty \ for\ \, any\ \,\varSigma\subset\subset M\setminus S \end{equation*} and \begin{equation*} 2H_ke^{4\tilde{u}_k}\to \sum_{i=1}^m\beta_i\delta_{p^i} \ weakly\ \,in\ \,the\ \,sense\ \,of\ \,measures\ \,in\ \,M. \end{equation*} Furthermore, $\beta_i\in 16\pi^2\mathbb{N}$ if $p^i\notin\{q_1,\cdots,q_N\}$, and $\beta_i=16\pi^2(1+\gamma_j)$ if $p^i=q_j$ for some $j\in\{1,\cdots,N\}$. 
\end{itemize} \end{thm} \begin{proof}[\textbf{Proof of Theorem \ref{thm-concentration-compactness}}] We define $S$ to be the set of blow-up points of $\tilde{u}_k$ in $M$, that is, \begin{equation*} S=\{x\in M:\ \,\exists x_k\in M,\ \,{\rm s.t.}\ \,x_k\to x\ \, {\rm and}\ \,\tilde{u}_k(x_k)\to+\infty\ \,{\rm as}\ \,k\to+\infty\}. \end{equation*} We distinguish two cases. \textbf{Case 1:} $S\neq\emptyset$. For $p\in S$, Lemma \ref{lem-small-mass-regular} and Remark \ref{rem-small} say that the mass of $\{\tilde{u}_k\}$ near $p$ is bounded below by a positive constant. Then the finite integral assumption $\int_M H_ke^{4\tilde{u}_k}{\rm d}V_{g}\leq C$ implies ${\rm card}\,(S)\leq C$. We may denote $S=\{p^1,\cdots,p^m\}$ with some $m\in\mathbb{N}$. Therefore, there exists $r_0\in (0,1)$ such that for any $p^i\in S$, $p^i$ is the only blow-up point of $\tilde{u}_k$ in $B(p^i,r_0)$. Then, from the results in \cite{Druet-Robert} and Theorem \ref{thm-con-quan}, we obtain alternative (iii). \textbf{Case 2:} $S=\emptyset$. In this case, we have $\sup_M\tilde{u}_k\leq C$, which implies that $H_ke^{4\tilde{u}_k}$ is uniformly bounded in $M$. Taking into account the Green's representation formula, \begin{equation*} \tilde{u}_k(x)-\bar{\tilde{u}}_k=\int_M G(x,y)H_k(y)e^{4\tilde{u}_k}{\rm d}V_{g}(y)=O(1). \end{equation*} Hence, after taking a subsequence, alternative (i) occurs if $\limsup_{k\to+\infty}\int_M\tilde{u}_k{\rm d}V_{g}>-\infty$, while alternative (ii) holds if $\lim_{k\to+\infty}\int_M\tilde{u}_k{\rm d}V_{g}=-\infty$. \end{proof} Immediately, we derive the following two corollaries from Theorem \ref{thm-concentration-compactness}. \begin{cor} If $\tilde{u}_k$ satisfies the assumption in Theorem \ref{thm-concentration-compactness} and alternative (iii) occurs, then $\rho\in\Gamma$. \end{cor} \begin{cor} Suppose that $\tilde{u}_k$ satisfies the assumption in Theorem \ref{thm-concentration-compactness}. 
Then for every $\rho\in\mathbb{R}^+\setminus\Gamma$, there exists a constant $C$ depending only on $\rho$, such that $\tilde{u}_k\leq C$. In particular, if we additionally assume $\int_M \tilde{u}_k{\rm d}V_g=0$ or $\int_M H_k e^{4\tilde{u}_k}{\rm d}V_g\geq c$ for some constant $c>0$, then \begin{equation*} \parallel\tilde{u}_k\parallel_{L^{\infty}(M)}\leq C. \end{equation*} \end{cor} The following result explains that $\Gamma$ is a critical set for (\ref{Q-equ-gen-singular}) or (\ref{Q-equation-blowup}). \begin{prop}[Critical Set]\label{critical-set} Suppose {\rm Ker}\,$(P_g)=\{{\rm constants}\}$ and that $\{u_k\}$ is a sequence of solutions to (\ref{Q-equation-blowup})$\sim$(\ref{volume-normal}) with the coefficients satisfying (\ref{assumption-coe}). If the blow-up phenomenon occurs, then $\int_M2b{\rm d}V_g\in\Gamma$. \end{prop} \begin{proof}[\textbf{Proof of Proposition \ref{critical-set} and Theorem \ref{thm-apriori-est}}] Since $\int_MH_ke^{4\mathfrak{u}_k}{\rm d}V_{g}=\int_Mh_ke^{4u_k}{\rm d}V_g=\int_Mb_k{\rm d}V_g$, Proposition \ref{critical-set} and Theorem \ref{thm-apriori-est} obviously follow from the two corollaries above. \end{proof} \section{A spherical Harnack inequality}\label{harnack} As a corollary of the Concentration-Compactness theorem, we derive a spherical Harnack inequality near a singular source $q_j$ when its strength $\gamma_j$ is not an integer. Recall that if $q_j$ is a blow-up point of $\mathfrak{u}_k$, then there exist $\delta>0$ and a sequence $\{x_k^j\}_{k\in\mathbb{N}}$ such that $\mathfrak{u}_k(x_k^j)=\max_{B(q_j,\delta)}\mathfrak{u}_k\to+\infty$ and $S\cap B(q_j,\delta)=\{q_j\}$. Thus, we have \begin{equation*} \varepsilon_k^j=\exp\Big(-\frac{\mathfrak{u}_k(x_k^j)}{1+\gamma_j}\Big)\to 0,\quad {\rm as}\;k\to+\infty. \end{equation*} \begin{cor}\label{lem-simple-blowup} Suppose that $\tilde{u}_k$ satisfies the assumption in Theorem \ref{thm-concentration-compactness} and $q_j$ is a blow-up point of $\tilde{u}_k$. 
If $\gamma_j\notin \mathbb{N}$, then $d_{g}(x_k^j,q_j)=O(\varepsilon_k^j)$, that is, \begin{equation*} y_k^j:=\frac{x_k^j-q_j}{\varepsilon_k^j}\to y^0\in\mathbb{R}^4, \end{equation*} and $\tilde{u}_k$ has only one bubble at $q_j$. \end{cor} \begin{proof}[\textbf{Proof}] Otherwise, we assume that \begin{equation*} \frac{d_{g}(x_k^j,q_j)}{\varepsilon_k^j}\to+\infty\quad {\rm as }\ \,k\to+\infty. \end{equation*} For simplicity, we omit the superscript $j$ and work with $\mathfrak{u}_k$ in conformal normal coordinates. Let $r_k=d_{g}(x_k,q)$ and define the map $\varphi_k:B(0,\delta r_k^{-1})\to B(q,\delta)$ by \begin{equation*} \varphi_k:y\mapsto r_ky+q, \end{equation*} where on the right hand side we are using the conformal normal coordinates on $B(q,\delta)$. We use the notation $\breve{f}=\varphi_{k*}f=f\circ \varphi_k$ to denote the pull-back of a function $f$ defined on $B(q,\delta)$, and we let $g_k=r_k^{-2}\varphi_{k*}g$ be the blow-up metric, i.e., a rescaling of the pull-back metric. We define \begin{equation*} v_k(y)=\breve{\mathfrak{u}}_k(y)+(1+\gamma)\log r_k,\quad y\in B(0,\delta/r_k). \end{equation*} Then $v_k$ satisfies \begin{equation*} \left\{\begin{array}{lcl} P_{g_k}v_k+2\breve{\mathfrak{b}}_k=2\breve{\mathfrak{h}}_k(y)d_{g_k}(y,0)^{4\gamma}e^{4v_k}, \quad {\rm in} \ \, B(0,\delta/r_k),\\ \int_{B(0,\delta/r_k)}\breve{\mathfrak{h}}_k(y)d_{g_k}(y,0)^{4\gamma}e^{4v_k}{\rm d}V_{g_k}\leq C, \end{array} \right. \end{equation*} where $\breve{\mathfrak{b}}_k(y)=\mathfrak{b}_k(r_ky+q)$ and $\breve{\mathfrak{h}}_k(y)=\mathfrak{h}_k(r_ky+q)$. Denote $e_k=\frac{x_k-q}{r_k}$; then $e_k\to e_0\in S^3\subset\mathbb{R}^4$ and $e_0$ is a blow-up point of $v_k$. 
Applying Theorem \ref{thm-concentration-compactness}, there exists a finite blow-up set $\tilde{S}=\{y^1,\cdots,y^l\}$ such that $v_k\to-\infty$ uniformly on any compact subset of $\mathbb{R}^4\setminus \tilde{S}$ and \begin{equation*} 2\breve{\mathfrak{h}}_k(y)d_{g_k}(y,0)^{4\gamma}e^{4v_k}\to \sum_{i=1}^l\alpha_i\delta_{y^i},\quad {\rm weakly\ \,in\ \,the\ \,sense\ \,of\ \,measure}, \end{equation*} where \begin{equation*} \alpha_i=\lim_{k\to+\infty}\int_{B(y^i,r_0)}2\breve{\mathfrak{h}}_k(y)d_{g_k}(y,0)^{4\gamma}e^{4v_k(y)}{\rm d}V_{g_k}\quad{\rm and}\quad B(y^i,r_0)\cap \tilde{S}=\{y^i\}. \end{equation*} First, we claim that $0$ is not a blow-up point of $v_k$. Suppose $0\in\tilde{S}$; then by means of Theorem \ref{thm-concentration-compactness} we have \begin{equation*} \lim_{k\to+\infty}\int_{B(0,r_0)}2\breve{\mathfrak{h}}_k(y)d_{g_k}(y,0)^{4\gamma}e^{4v_k(y)}{\rm d}V_{g_k}=16\pi^2(1+\gamma). \end{equation*} On the other hand, $e_0\in S^3$ is also a blow-up point of $v_k$ in $B(0,R)$ for $R$ large. Then \begin{equation*} \begin{split} 16\pi^2(1+\gamma)=&\lim_{k\to+\infty}\int_{B(q,\delta)}2\mathfrak{h}_k(x)d_{g}(x,q)^{4\gamma}e^{4\mathfrak{u}_k(x)}{\rm d}V_{g} \\ =&\lim_{k\to+\infty}\int_{B(0,\delta/r_k)}2\breve{\mathfrak{h}}_k(y)d_{g_k}(y,0)^{4\gamma}e^{4v_k(y)}{\rm d}V_{g_k} \\ \geq & \lim_{k\to+\infty}\int_{B(0,r_0)\cup B(e_0,r_0)}2\breve{\mathfrak{h}}_k(y)d_{g_k}(y,0)^{4\gamma}e^{4v_k(y)}{\rm d}V_{g_k} \\ =&16\pi^2(1+\gamma)+16\pi^2, \end{split} \end{equation*} which yields a contradiction. Thus, $0\notin \tilde{S}$. As a consequence, we have \begin{equation*} 16\pi^2(1+\gamma)=16\pi^2l, \end{equation*} which is impossible, since $\gamma\notin \mathbb{N}$ while $l$ is an integer. In other words, if $\gamma$ is not an integer, then $d_{g}(x_k,q)=O(\varepsilon_k)$ and $\mathfrak{u}_k$ has only one bubble at $q$. 
\end{proof} \begin{thm}[A Spherical Harnack Inequality ($\gamma\notin\mathbb{N}$)]\label{SHT} Suppose that $\tilde{u}_k$ satisfies the assumption in Theorem \ref{thm-concentration-compactness} and $q_j$ is a blow-up point of $\tilde{u}_k$. If $\gamma_j\notin \mathbb{N}$, then near $q_j$ there holds the following spherical Harnack inequality: \begin{equation}\label{spherical-Harnack} \max_{x\in B(q_j,\delta)}\{\tilde{u}_k(x)+(1+\gamma_j)\log|x-q_j|\}\leq C \end{equation} with some constant $C$. \end{thm} \begin{proof}[\textbf{Proof}] Suppose that (\ref{spherical-Harnack}) fails. Then there exists a sequence $\{x_k\}\subset B(q_j,\delta)$ such that $\max_{x\in B(q_j,\delta)}\{\tilde{u}_k(x)+(1+\gamma_j)\log|x-q_j|\}\to +\infty$. By means of the selection process and quantization property in \cite{Malchiodi,Druet-Robert}, we obtain that $\int_{B(q_j,\delta)}H_k(x)e^{4\tilde{u}_k(x)}{\rm d}V_{g}\to16\pi^2n$ for some positive integer $n$. However, from Theorem \ref{thm-concentration-compactness}, we know that \begin{equation} \int_{B(q_j,\delta)}H_k(x)e^{4\tilde{u}_k(x)}{\rm d}V_{g}\to16\pi^2(1+\gamma_j),\quad {\rm as}\ \, k\to+\infty, \end{equation} which is impossible. \end{proof} \section*{Appendix: Comparison between $d_g(x,q)$ and $|x-q|$} \setcounter{equation}{0} \setcounter{subsection}{0} \renewcommand{\theequation}{A.\arabic{equation}} \renewcommand{\thesubsection}{A.\arabic{subsection}} In this appendix, we establish the comparison between the distance $d_g(x,q)$ and its derivatives and their Euclidean counterparts, as in Appendix B of \cite{zhang-weinstein}. We follow the argument in \cite{zhang-weinstein} and give the details for completeness. We claim that for $j=0,1,2,3$, there holds \begin{equation}\label{comparsion-dist} \nabla^j\big(\log|x-y|-\log d_g(x,y)\big)=O(r^{2-j}),\quad x\in B(q,2r)\setminus B(q,r/2). 
\end{equation} We recall that $g$ is the conformal normal metric centered at $q$, and we identify $x,y\in T_qM$ with $\exp_qx$ and $\exp_qy$ respectively, where $\exp_q$ is the exponential map at $q$ with respect to the metric $g$. Thus, \begin{equation*} d_g(x,y)=d(\exp_qx,\exp_qy), \quad x,y\in B(q,\delta). \end{equation*} Also, we denote $\nabla_{g}$ by $\nabla$ for convenience. First, let us note that the following simple estimates on $d_g$ hold: \begin{equation}\label{rough-comparsion-dist} \big|\nabla^j(\log d_g(x,y))\big|\leq C|x-y|^{-j},\quad j=1,2,3,4. \end{equation} Set \begin{equation*} f(x)=\log|x-y|-\log d_g(x,y),\quad x\in B(q,2r)\setminus B(q,r/2). \end{equation*} We aim to show that \begin{equation*} \big|\nabla^jf(x)\big|\leq C|x-q|^{2-j},\quad j=0,1,2,3,\quad x\in B(q,2r)\setminus B(q,r/2). \end{equation*} Let $R$, $R_{ij}$ and $R_{ijkl}$ respectively denote the scalar, Ricci and Riemann curvature of $g$. From the definitions of $g$ and $R^i_{jkl}$, we obtain that \begin{equation*} \nabla^sR^i_{jkl}(x)=O(1),\quad s=1,2. \end{equation*} In conformal normal coordinates, there holds $R(q)=R_{ij}(q)=|\nabla R(q)|=0$. As a consequence, we further have \begin{equation*} R(x)=O(r^2),\quad R_{ij}(x)=O(r). \end{equation*} We shall derive an estimate on $\Delta_{g}^2f(x)$. By the definition of $g$ and (A.4) in \cite{zhang-weinstein}, that is, \begin{equation*} P_{g,y}\Big(-\frac{1}{8\pi^2}\chi(r)\log d_g(x,y)\Big)=\delta_x+E(x,y),\quad {\rm with \ \,}E \ \,{\rm bounded}, \end{equation*} we have that \begin{equation} P_{g}\log d_g(x,q)=O(r^4),\quad x\in B(q,2r)\setminus B(q,r/2). \end{equation} In view of the rough estimates (\ref{rough-comparsion-dist}), we can estimate the term: \begin{equation*} \big(P_{g}-\Delta_{g}^2\big)\log d_g(x,q)=\partial_m\Big(g^{mi}\big(\frac{2}{3}R(x)g_{ij}-2R_{ij}(x)\big)g^{lj}\partial_l\big(\log d_g(x,q)\big)\Big)=O(r^{-1}). 
\end{equation*} Therefore, we obtain \begin{equation*} \Delta_{g}^2\big(\log d_g(x,q)\big)=O(r^{-1}),\quad x\in B(q,2r)\setminus B(q,r/2). \end{equation*} Finally, we consider the term $\Delta_{g}^2\big(\log |x-q|\big)$. Since $\Delta^2\big(\log |x-q|\big)=0$ for $x\neq q$, it suffices to estimate $\Delta_{g}^2-\Delta^2$. For any function $u$, a direct computation leads to \begin{equation}\label{expansion-4-order} \begin{split} \Delta_{g}^2u=&g^{ab}g^{ij}\partial_{ijab}u+2\partial_{ija}u\big(\partial_bg^{ab}g^{ij}+g^{ab}\partial_bg^{ij}\big) \\ &+\partial_{ij}u\big(\partial_ag^{ab}\partial_bg^{ij}+2g^{ai}\partial_{ab}g^{bj}+g^{ab}\partial_{ab}g^{ij}+\partial_ag^{ia}\partial_bg^{bj}\big) \\ &+\partial_ju\big(\partial_ag^{ab}\partial_{ib}g^{ij}+g^{ab}\partial_{iab}g^{ij}\big), \end{split} \end{equation} where we have used $\det\,(g)=1$. Using the expansion of $g^{ab}$: \begin{equation*} g^{ab}(x)=\delta_{ab}-\frac{1}{3}R_{mabl}(q)x^mx^l+O(|x-q|^3), \end{equation*} and replacing $u$ by $\log|x-q|$ in (\ref{expansion-4-order}) above, we obtain that \begin{equation*} \Delta_{g}^2\big(\log|x-q|\big)=O(r^{-2}). \end{equation*} Consequently, \begin{equation}\label{app-equ} \Delta_{g}^2 f(x)=O(r^{-2}),\quad x\in B(q,2r)\setminus B(q,r/2). \end{equation} An estimate on the $L^{\infty}$-norm of $f(x)$ can easily be obtained as follows: \begin{equation} d_g(x,q)=\int_{0}^{1}\sqrt{g_{ij}(x(t))\,x_i'(t)x_j'(t)}\,{\rm d}t=|x-q|\big(1+O(r^2)\big), \end{equation} where $x(t)=q+t(x-q)$ is the radial geodesic from $q$ to $x$, which implies \begin{equation}\label{app-C0} f(x)=O(r^2),\quad x\in B(q,2r)\setminus B(q,r/2). \end{equation} Applying the elliptic theory to (\ref{app-equ}) and (\ref{app-C0}), we obtain the claim (\ref{comparsion-dist}) and \begin{equation}\label{0-comparsion} d_g(x,q)=|x-q|(1+O(r^2)). \end{equation}
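To make the last elliptic step explicit, one can argue by scaling. The following is a sketch; the rescaled metric $g_r=r^{-2}\varphi_r^{*}g$ with $\varphi_r(y)=q+ry$, and the annuli $A=B(0,2)\setminus B(0,1/2)$, $A'=B(0,3/2)\setminus B(0,3/4)$, are our notation. Setting $f_r(y)=f(q+ry)$, the estimates (\ref{app-equ}) and (\ref{app-C0}) become

```latex
\begin{equation*}
\Delta_{g_r}^2f_r(y)=r^4\big(\Delta_{g}^2f\big)(q+ry)=O(r^2),\qquad
\|f_r\|_{L^{\infty}(A)}=O(r^2),
\end{equation*}
so interior elliptic estimates on the fixed annulus $A$ yield
\begin{equation*}
\|\nabla^jf_r\|_{L^{\infty}(A')}\leq
C\big(\|\Delta_{g_r}^2f_r\|_{L^{\infty}(A)}+\|f_r\|_{L^{\infty}(A)}\big)=O(r^2),
\qquad j=0,1,2,3.
\end{equation*}
```

Scaling back gives $|\nabla^jf(x)|\leq Cr^{2-j}$ on $B(q,3r/2)\setminus B(q,3r/4)$, and covering $B(q,2r)\setminus B(q,r/2)$ by finitely many such annuli with comparable radii recovers (\ref{comparsion-dist}).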
\section{Supplemental Material: \\ Sign-changing photon-mediated atomic interactions in multimode cavity QED} \subsection{Spectrum of a confocal cavity} Within paraxial optics, the beam inside a Fabry-Perot cavity is described by Hermite-Gaussian modes. A mode $\Phi_{Q,l,m}$ is labeled by one longitudinal index $Q$ and two transverse indices $l$ and $m$. These indices count the number of field nodes along their respective axes. For a symmetric two-mirror cavity of length $L$, with $R$ as the mirror radius of curvature, the frequency of a given mode is \begin{equation} f_{Qlm}=\frac{c}{2L}\big[ Q + \frac{l+m+1}{\pi} \arccos{ g} \big], \end{equation} where $c$ is the speed of light inside the cavity, $g = 1-L/R$, and $c/2L$ is the free spectral range of the cavity. The term proportional to $\arccos{g}$ captures the effect of the additional Gouy phase shifts on higher-order transverse modes, which involve terms proportional to $(l+m+1)\psi(z)$, where $\psi(z) = \mathrm{arctan} (z/z_R)$ is the Gouy phase and $z_R = \pi w_0^2/\lambda$ is the Rayleigh range. In general, different transverse modes are resonant at different frequencies; however, degenerate cavities with special geometries can support a family of transverse modes, each with a distinct spatial profile, at a single frequency. In particular, a confocal cavity has $L=R$ and thus $g = 0$. Therefore, all modes that satisfy the condition \begin{equation} Q + \frac{1}{2}(l + m + 1) = Q_0 + \frac{\eta+1}{2} \label{rescondition} \end{equation} are resonant at the same frequency $c(2 Q_0+\eta+1)/4L$, where $Q_0$ is a positive integer and $\eta=0\,(1)$ for even (odd) families. At every half free spectral range, the transverse mode content alternates between all even modes, $l+m~\mathrm{mod}~2 = 0$, and all odd modes, $l+m~\mathrm{mod}~2 = 1$. Within a degenerate resonance, to satisfy Eq.~\eqref{rescondition}, different transverse modes must carry different longitudinal indices. 
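The counting in Eq.~\eqref{rescondition} can be checked by brute-force enumeration. The sketch below is our own illustration (the function names and the choice $Q_0=1000$ are arbitrary, not experimental values); frequencies are in units of the free spectral range $c/2L$.

```python
# Enumerate Hermite-Gaussian modes of an ideal confocal cavity (arccos g = pi/2)
# that are degenerate with a chosen reference mode, using f_Qlm ~ Q + (l+m+1)/2.

def mode_frequency(Q, l, m):
    """Mode frequency in units of the free spectral range c/2L."""
    return Q + 0.5 * (l + m + 1)

def degenerate_family(Q0, eta, max_order=6):
    """All (Q, l, m) with l+m <= max_order satisfying
    Q + (l+m+1)/2 = Q0 + (eta+1)/2, i.e. Eq. (rescondition)."""
    target = mode_frequency(Q0, eta, 0)
    family = []
    for n in range(max_order + 1):      # transverse order n = l + m
        Q = Q0 + (eta - n) // 2         # longitudinal index compensating the Gouy shift
        for l in range(n + 1):
            if mode_frequency(Q, l, n - l) == target:
                family.append((Q, l, n - l))
    return family

even = degenerate_family(Q0=1000, eta=0)
# Every member of an even family has even transverse order l+m, and Q drops by 1
# each time l+m grows by 2 -- which cycles the longitudinal standing wave through
# +cos(k_r z), -sin(k_r z), -cos(k_r z), +sin(k_r z).
```

Within a family the comparisons are exact, since half-integer frequencies are represented exactly in floating point.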
This causes the longitudinal profile of sequential transverse modes within a degenerate resonance to cycle between $+\cos{k_r z}$, $-\sin{k_r z}$, $-\cos{k_r z}$, and $+\sin{k_r z}$, as described in Fig.~\ref{fig1}(a) of the main text. \subsection{Experimental apparatus} This work employs an $R=1$-cm radius-of-curvature confocal cavity. The length of the multimode cavity is adjustable~\cite{Kollar2015}, though in this work we set $L=R$. We trap within this cavity a BEC of $2.5(3) {\times} 10^5$ $^{87}$Rb atoms in the $|F=1,m_F=-1 \rangle$ state. See Ref.~\cite{Kollar2015} for the BEC preparation procedure and Fig.~\ref{fig1} for an illustration of the experiment. The BEC is confined in a crossed optical dipole trap (ODT) formed by a pair of $1064$-nm laser beams propagating along $\hat{x}$ and $\hat{y}$ with waists of $40$~$\mu$m in the $xy$-plane and $80$~$\mu$m along $\hat{z}$. The resulting trap frequencies of $(\omega_x,\omega_y,\omega_z) = 2 \pi \times [189(2),134(1),90(1)]$~Hz create a compact BEC with Thomas-Fermi radii $(R_x, R_y, R_z) = [4.2(1), 5.8(3), 8.9(1)]$~$\mu$m that are significantly smaller than the $w_0 = 35$~$\mu$m waist of the TEM$_{0,0}$ cavity mode. Acousto-optic deflectors (AODs) placed in the path of each ODT control the intensity and location of the ODTs, allowing us to translate the BEC to any point in the $xy$-plane with an uncertainty of $0.9~\mu$m. In the experiments of Figs.~\ref{fig3} and~\ref{fig4}, we use dynamic trap shaping~\cite{Henderson09} to produce two smaller BECs of $2.0(3) {\times} 10^5$ atoms each, with a population imbalance uncertainty of ${<}10$\%. The relative position of these BECs along $\hat{x}$ is controlled by the AODs. Both the local oscillator beam (used for holographic imaging of the cavity emission) and the transverse pump are derived from the same laser but pass through different acousto-optic modulators (AOMs) for intensity stabilization. 
To maintain the relative phase stability between the two beams, both AOMs are driven by signals from the same multichannel direct digital synthesizer. This synthesizer is synced to a stable Rb frequency reference. Due to path-length drift, the relative phase between the pump and the local oscillator is stable only within the same experimental realization. \subsection{Holographic imaging} The holographic imaging method employed here is described in detail in Ref.~\cite{Kroeze:2018wd} and is similar to that reported in Ref.~\cite{Schine:2018ui}. Briefly, a portion of the pump field---serving as a local oscillator (LO)---is directed onto the same EMCCD camera onto which the cavity emission is imaged. The cavity field $E_c(\mathbf{r}) = |E_c(\mathbf{r})|e^{i \phi_c(\mathbf{r})}$ and the LO field $E_\text{LO}(\mathbf{r}) = |E_\text{LO}(\mathbf{r})|e^{i \phi_\text{LO}(\mathbf{r})}$ interfere to form a spatial heterodyne image $I_h(\mathbf{r})$. The interference fringes of this image encode the phase and amplitude of the cavity field: \begin{equation} I_{h}(\mathbf{r}) \propto |E_c(\mathbf{r})E_\text{LO}(\mathbf{r})| \cos \left[ \Delta \mathbf{k} \cdot \mathbf{r} + \Delta\phi(\mathbf{r}) \right], \label{hologram} \end{equation} where the phase difference between the cavity and LO wavefronts is $\Delta\phi(\mathbf{r}) =\phi_c(\mathbf{r}) -\phi_\text{LO}(\mathbf{r}) $. The amplitude and phase of the fringes thus provide a measure of $|E_c(\mathbf{r})|$ and $\phi_c(\mathbf{r})$. Demodulating this image at the fringe wavevector $\Delta \mathbf{k}$ provides a holographic reconstruction of $|E_c(\mathbf{r})|$ and $\phi_c(\mathbf{r})$. Accurate extraction of these images requires correcting for the LO intensity and phase variations. To do so for the confocal cavity, we perform a least-squares fit to the cavity emission intensity pattern using the exact theory result from Ref.~\cite{Vaidya:2018fp}. 
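The demodulation step can be illustrated on synthetic one-dimensional data (the actual reconstruction is two-dimensional and includes the LO corrections described above). All field profiles, the fringe wavevector, and the low-pass cutoff below are arbitrary choices for this sketch, not the experimental values.

```python
# Sketch of spatial-heterodyne demodulation at the fringe wavevector, in 1D.
import numpy as np

x = np.linspace(-1.0, 1.0, 4096, endpoint=False)
E_c = np.exp(-x**2) * np.exp(1j * 0.8 * np.sin(2 * np.pi * x))  # toy cavity field
E_lo = 1.5 * np.ones_like(x)                                    # flat LO, known phase
dk = 2 * np.pi * 60.0                                           # fringe wavevector

# Heterodyne image: fringes carry the cavity amplitude and phase (Eq. hologram).
I_h = np.abs(E_c * E_lo) * np.cos(dk * x + np.angle(E_c))

# Demodulate: shift the fringe sideband to DC, then low-pass in Fourier space.
shifted = I_h * np.exp(-1j * dk * x)
spec = np.fft.fft(shifted)
freqs = np.fft.fftfreq(x.size, d=x[1] - x[0])
spec[np.abs(freqs) > 30.0] = 0.0     # remove the mirror sideband at -2*dk
E_rec = 2 * np.fft.ifft(spec)        # ~ |E_c E_LO| exp(i phi_c)

amp_err = np.max(np.abs(np.abs(E_rec) - np.abs(E_c * E_lo)))
```

The factor of 2 restores the amplitude lost in splitting the real-valued cosine into its two sidebands; `amp_err` stays at the sub-percent level for these parameters.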
We extract the LO phase variation from the difference between the measured phase and the expected phase. \subsection{Effective Hamiltonian} The Green's function for the cavity-mediated interaction in a perfect confocal cavity near an even degenerate resonance can be written as a sum of the contributions from the two classes of longitudinal modes~\cite{Vaidya:2018fp,GouyPRA2018}: \begin{align} 4\mathcal{D}^{+}(\mathbf{x},\mathbf{x}') = 4\mathcal{D}^{+} (\mathbf{r},\mathbf{r^\prime},z,z^\prime) &=D_c (\mathbf{r},\mathbf{r^\prime})\cos{k_r z} \cos{k_r z^\prime} \nonumber \\ &+ D_{s} (\mathbf{r},\mathbf{r^\prime}) \sin{k_r z} \sin{k_r z^\prime}, \end{align} with \begin{equation} \begin{cases} D_{c} = \delta\Big(\frac{\sqrt{2}(\mathbf{r} - \mathbf{r^\prime})}{w_0}\Big) + \delta\Big(\frac{\sqrt{2}(\mathbf{r} + \mathbf{r^\prime})}{w_0}\Big) + \frac{1}{\pi} \cos\big(\frac{2 \mathbf{r} \cdot \mathbf{r^\prime}}{w^2_0}\big) \\ D_{s} = \delta\Big(\frac{\sqrt{2}(\mathbf{r} - \mathbf{r^\prime})}{w_0}\Big) + \delta\Big(\frac{\sqrt{2}(\mathbf{r} + \mathbf{r^\prime})}{w_0}\Big) - \frac{1}{\pi} \cos\big(\frac{2 \mathbf{r} \cdot \mathbf{r^\prime}}{w^2_0}\big). \end{cases} \end{equation} To allow for the full phase freedom in the atomic density wave, the atomic profile is expanded as \begin{eqnarray} \Psi(\mathbf{x}) &&=\sqrt{\rho(\mathbf{r})}\times \\\nonumber &&\big[ \psi_0 +\sqrt{2}\cos{k_rx}( \psi_c \cos{k_rz} + \psi_s \sin{k_rz}) \big], \end{eqnarray} where, for simplicity, we assume a $\delta$-function transverse atomic profile $\rho(\mathbf{r}) = \delta(\mathbf{r} - \mathbf{r}_0)$, since the Thomas-Fermi radius of the BEC is much smaller than the cavity waist $w_0$. Here, $\mathbf{r}_0$ is the location of the atoms in the cavity transverse plane, $\psi_0$ is the ground-state fraction of the gas, which has a uniform density profile (compared to the $\lambda$-scale), and $\psi_{c (s)}$ is the amplitude of the excited atomic density wave in the $\cos{k_rz}~(\sin{k_rz})$ pattern. 
The Hamiltonian is then \begin{align}\label{Ham} H= E_0 \int &d^3\mathbf{x} d^3\mathbf{x^\prime} \cos(k_r x) \cos(k_r x') \times \nonumber \\ &|\Psi(\mathbf{x})|^2 \mathcal{D}^{+}(\mathbf{x},\mathbf{x^\prime}) |\Psi(\mathbf{x^\prime})|^2 \equiv -E_0 \mathcal{H}, \end{align} where $E_0$ is a positive constant prefactor, and the $\cos(k_r x) \cos(k_r x')$ term is due to the standing-wave pump. Focusing only on the terms involving $\cos(2 \mathbf{r} \cdot \mathbf{r^\prime}/w^2_0)$ in $\mathcal{D}^{+}(\mathbf{x},\mathbf{x}')$, the effective Hamiltonian $\mathcal{H}$ can then be evaluated as \begin{equation} \mathcal{H} = -\frac{1}{8 \pi}\left[ |\psi_0 \psi^{*}_{c} + \psi^{*}_0 \psi_{c}|^2 - |\psi_0 \psi^{*}_{s} + \psi^{*}_0 \psi_{s}|^2 \right] \cos\left(\frac{2 r^2_0}{w^2_0}\right). \end{equation} Defining the order parameters \begin{align} \chi_c &= \frac{\psi_0 \psi^{*}_{c} + \psi^{*}_0 \psi_{c}}{N} \nonumber \\ \chi_s &= \frac{\psi_0 \psi^{*}_{s} + \psi^{*}_0 \psi_{s}}{N}, \end{align} where $N$ is the total atom number, and ignoring the numerical prefactor, we recover the effective Hamiltonian in the main text. For two BECs, the cross term in the integral in Eq.~\ref{Ham} gives rise to the interaction term \begin{equation} H_{12} \propto -J_{12} (\chi_{c1} \chi_{c2} - \chi_{s1} \chi_{s2}), \end{equation} where $J_{12} = 2N\cos\left({2\mathbf{r_1} \cdot \mathbf{r_2}}/{w^2_0}\right)$. \end{document}
\section{Introduction} \label{sec:1} Active Galactic Nuclei (AGN) and black hole X-ray binaries (XRBs) are both powered by gas accretion on to central black holes. Their observational properties are thought to be determined mainly by the fundamental parameters of the black hole, such as mass and spin, together with the mass accretion rate. AGN and XRBs are both X-ray emitters. As their X-rays are believed to be predominantly emitted from a region close to the black hole, the study of X-rays can provide a wealth of information about the black hole and its surroundings. In the case of XRBs, the X-rays come mainly from the disc and the corona, and the relative dominance depends on the spectral/accretion state the system is actually in. In particular, the low/hard state shows a hard spectrum with a power-law shape thought to arise from Comptonized emission, while in the high/soft state the X-ray flux is dominated by thermal radiation from the accretion disc; the `very high state' has a very high flux together with a very soft spectrum (see the review by \citealt{Remillard2006}). In contrast, the X-ray emission of AGN is thought to originate mainly from the hot corona above the accretion disc, while the disc itself has a much lower temperature and mainly radiates in the optical and UV bands. A long-standing idea is that AGN are the scaled-up counterparts of XRBs, and that the accretion processes in the two systems are comparable. In the past decades, many studies have been devoted to the comparison of AGN and XRBs. For instance, \citet{Merloni2003} suggested a common `fundamental plane' formed by the radio and X-ray luminosities of AGN and XRBs. \citet{McHardy06} proposed an anti-correlation between the high-frequency break of the X-ray power spectral density (PSD) and the black hole mass, which extends from XRBs to AGN. 
\citet{McHardy2007b} reported two Lorentzian components in the broadband X-ray PSD of the narrow-line Seyfert 1 (NLS1) galaxy Ark 564, similar to the PSDs of XRBs, which also consist of multiple Lorentzian components (e.g. \citealt{Axelsson2005}). \citet{Zhou2015} found that the frequencies of quasi-periodic oscillations (QPOs) follow a good inverse scaling relation with the black hole mass, extending from XRBs to AGN. \citet{Jin2021} reported that the X-ray QPO signal in the NLS1 RE J1034+396 has a similar rms spectrum and phase-lag spectrum to the 67 Hz QPO of the black hole XRB GRS 1915+105. Moreover, it was reported that there exists a linear rms-flux correlation in the X-ray light curves of both XRBs and AGN (e.g. \citealt{Gaskell2004}; \citealt{Uttley2005}). The correlation between the hard X-ray photon index and the Eddington ratio was also found to be similar between AGN and XRBs (\citealt{Yang2015}). Therefore, although the two types of accretion systems have masses differing by a factor of $10^{5-9}$, many similar properties have been found between them. Our current understanding of the state transitions of XRBs is much deeper than that of AGN. An important observation is the co-evolution of the energy spectrum and PSD of XRBs during the transition, which is crucial for understanding the disc-corona geometry and its evolution with the accretion rate. It is known that the X-ray PSD of XRBs is not stationary, but evolves with the accretion state. As the spectral state evolves from the hard state to the soft state, the components in the PSD all move towards higher frequencies, and eventually disappear at a roughly constant frequency (e.g. \citealt{Axelsson2005}; \citealt{Gierlinski2008}). This is qualitatively well explained by the truncated disc model (see the review by \citealt{Done2007}). In comparison, the relationship between the X-ray spectral and PSD evolution of AGN is still unclear. 
The variability timescale of AGN is at least $10^{5}$ times longer than that of XRBs, so it is more difficult to monitor the state transition of AGN. Recent attempts have mainly focused on the rapid spectral variation of rare changing-look AGN (e.g. \citealt{Lamassa2015}). For instance, \citet{Noda2018} studied the evolution of the broadband spectral energy distribution (SED) of the AGN Mrk 1018, and found that its SED evolution was similar to that of XRBs, although the effect of radiation pressure may be more important in AGN. \citet{Ruan2019b} collected a sample of changing-look AGN, and discovered that the correlation formed by their UV-to-X-ray spectral indices and Eddington ratios is remarkably similar to the state transition of XRBs. However, there have been few studies on the link between the X-ray spectrum and PSD of normal AGN. It is known that the PSDs of some AGN are variable. For instance, \citet{Jin2021} found that the X-ray PSD of RE J1034+396 does not show a QPO signal when the soft X-ray excess becomes stronger. \citet{Gonzalez-Martin2018} reported a correlation between the X-ray absorption column density ($N_{\rm H}$) and the high-frequency break in the PSDs of AGN. Since they only used the 2-6 keV spectrum to determine $N_{\rm H}$, their result may also be ascribed to a correlation between the observed X-ray spectral slope and the break frequency, i.e. the harder the spectrum, the higher the break frequency. It is also worth noting that the light curve observed in a single observation of an AGN is only a random realization of the intrinsic variability (\citealt{Timmer1995}). Hence, even if the intrinsic PSD is stationary, the observed periodogram, which is a realization of the intrinsic power spectrum, would still vary from one observation to another (e.g. \citealt{Vaughan2005, Vaughan2010}). The X-ray rms amplitude can vary significantly in different segments of a single light curve from a stationary process (\citealt{Vaughan2003b}). 
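The scatter of single-realization periodograms can be illustrated with the simulation method of \citet{Timmer1995}: even for a fixed, stationary input PSD, the periodogram at each frequency is $\chi^2_2$-distributed about the intrinsic spectrum, i.e. it scatters by roughly 100 per cent. The sketch below is our own toy example (the power-law slope and light-curve length are arbitrary; normalization constants are omitted, since only the ratio to the input PSD matters).

```python
# Red-noise light curves simulated with the Timmer & Koenig (1995) method,
# and their periodograms compared with the intrinsic input PSD.
import numpy as np

rng = np.random.default_rng(0)
n = 4096
freqs = np.fft.rfftfreq(n)[1:]       # positive Fourier frequencies (arbitrary units)
psd_true = freqs**-2.0               # intrinsic red-noise PSD

def simulate_lc():
    """One Gaussian realization with the prescribed PSD."""
    amp = np.sqrt(psd_true / 2.0)
    four = rng.normal(size=freqs.size) * amp + 1j * rng.normal(size=freqs.size) * amp
    return np.fft.irfft(np.concatenate(([0.0 + 0.0j], four)), n=n)

def periodogram(lc):
    """Unnormalized periodogram at the positive frequencies."""
    return np.abs(np.fft.rfft(lc)[1:])**2

# Ratio of observed periodogram to intrinsic PSD for many realizations
# (Nyquist bin excluded); each ratio is exponentially distributed with mean 1.
ratios = np.array([periodogram(simulate_lc())[:-1] / psd_true[:-1]
                   for _ in range(200)])
```

Averaging `ratios` over realizations recovers the input PSD, while any single row scatters by order unity at every frequency, which is why a change between two observed periodograms is not, by itself, evidence for an intrinsic PSD change.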
Therefore, when comparing two PSDs or quantifying PSD evolution, one must properly account for random fluctuations. \citet{Vaughan2010} proposed a Bayesian test for the existence of a QPO signal in the PSD of AGN (also see \citealt{Jin2020}). In this work, we generalize this method to measure and compare PSD shapes. Moreover, variations of spectral properties such as the flux and slope generally reflect intrinsic variations of physical processes and parameters. Therefore, we use a sample of radio-quiet AGN to explore the link between the variability of their X-ray spectra and PSDs. Radio-quiet sources are chosen to avoid complexities due to jet emission. Since the AGN in this sample all exhibit \edit1{{significant}} spectral variation, their PSD variation is more likely to be intrinsic rather than due to random fluctuations. We will show that there is indeed evidence to support the existence of such a link. This paper is organized as follows. First, we describe the AGN sample, observations and data reduction in Section \ref{sec:2}. Then we describe the methodology of our spectral fitting and PSD analyses in Section \ref{sec:3}. This is followed by the investigation of the potential link between the states of the energy spectra and PSDs in Section \ref{sec:4}. In Section \ref{sec:5} we discuss the variability of the PSD and its possible implications and explanations, as well as the impact of PSD variation on various correlations used to estimate the black hole mass. Section \ref{sec:6} summarizes the results of this work. \begin{table*} \renewcommand\arraystretch{1.2} \begin{center} \caption{List of sources and \textit{XMM-Newton} observations used in this paper. 
\label{tab:obs}} \setlength\tabcolsep{4.0pt} \begin{tabular}{@{}llcccccccc@{}} \hline Source Name & Type & Obs ID & Date & Duration & Pile-up & bin & $\Gamma_{\rm 2-10~keV}$ & $F_{\rm 0.3-1~keV}$ & $F_{\rm 2-10~keV}$ \\ & & & & (ks) & ($''$) & (s) & & (erg cm$^{-2}$ s$^{-1}$) & (erg cm$^{-2}$ s$^{-1}$)\\ \hline Mrk 335 & NLS1 & 0306870101 & 2006-01 & 110.4 & -- & 50 & $2.05\pm 0.01$ & 2.27$\times 10^{-11}$ & 1.85$\times 10^{-11}$ \\ & & 0600540601 & 2009-06(1) & 100.0 & 7.5 & 50 & $1.48\pm 0.02$ & 2.76$\times 10^{-12}$ & 5.50$\times 10^{-12}$ \\ & & 0600540501 & 2009-06(2) & 80.7 & 9.0 & 50 & $1.65\pm 0.02$ & 3.91$\times 10^{-12}$ & 5.20$\times 10^{-12}$ \\ \hline Mrk 766 & NLS1 & 0109141301 & 2001-05 & 80.1 & 2.5 & 50 & $2.05\pm 0.01$ & 2.55$\times 10^{-11}$ & 2.61$\times 10^{-11}$ \\ & & 0304030101 & 2005-05(1) & 70.1 & 1.5 & 50 & $1.31\pm 0.01$ & 4.45$\times 10^{-12}$ & 7.63$\times 10^{-12}$ \\ & & 0304030501 & 2005-05(2) & 70.1 & 2.5 & 50 & $1.95\pm 0.01$ & 1.33$\times 10^{-11}$ & 1.75$\times 10^{-11}$ \\ \hline 1H 0707-495 & NLS1 & 0148010301 & 2002-10 & 76.4 & 10.0 & 200 & $2.27\pm 0.05$ & 4.71$\times 10^{-12}$ & 1.25$\times 10^{-12}$ \\ & & 0506200301 & 2007-05 & 38.6 & 10.0 & 200 & $1.94\pm 0.10$ & 2.53$\times 10^{-12}$ & 6.51$\times 10^{-13}$ \\ & & 0653510301 & 2010-09 & 99.8 & 13.0 & 200 & $2.08\pm 0.06$ & 5.01$\times 10^{-12}$ & 1.02$\times 10^{-12}$ \\ \hline IRAS 13224-3809 & NLS1 & 0110890101 & 2002-01 & 35.0 & 10.5 & 200 & $1.66\pm 0.13$ & 2.31$\times 10^{-12}$ & 4.10$\times 10^{-13}$ \\ & & 0673580401 & 2011-07 & 109.8 & 11.0 & 200 & $2.20\pm 0.07$ & 2.62$\times 10^{-12}$ & 4.72$\times 10^{-13}$ \\ & & 0780561301 & 2016-07 & 119.8 & 11.0 & 200 & $2.23\pm 0.06$ & 3.25$\times 10^{-12}$ & 4.35$\times 10^{-13}$ \\ \hline NGC 1365 & S1.8 & 0505140401 & 2007-07 & 120.1 & 1.0 & 50 & $-0.29\pm 0.03$ & 4.66$\times 10^{-13}$ & 1.89$\times 10^{-12}$ \\ & & 0692840201 & 2012-07 & 100.0 & 1.0 & 50 & $-0.52\pm 0.01$ & 4.31$\times 10^{-13}$ & 7.81$\times 10^{-12}$ 
\\ & & 0692840501 & 2013-02 & 113.3 & 0.5 & 50 & $0.10\pm 0.01$ & 4.69$\times 10^{-13}$ & 1.34$\times 10^{-11}$ \\ \hline NGC 3516 & S1.5 & 0107460701 & 2001-11 & 80.0 & 3.0 & 50 & $0.87\pm 0.01$ & 1.60$\times 10^{-12}$ & 1.66$\times 10^{-11}$ \\ & & 0401210601 & 2006-10 & 50.1 & 1.0 & 50 & $1.36\pm 0.01$ & 7.15$\times 10^{-12}$ & 3.52$\times 10^{-11}$ \\ \hline NGC 4051 & NLS1 & 0109141401 & 2001-05 & 101.4 & 2.5 & 50 & $1.85\pm 0.01$ & 2.86$\times 10^{-11}$ & 2.49$\times 10^{-11}$ \\ & & 0830430801 & 2018-11 & 42.8 & 1.0 & 50 & $1.65\pm 0.01$ & 8.67$\times 10^{-12}$ & 1.20$\times 10^{-11}$ \\ \hline NGC 4395 & S1.8 & 0142830101 & 2003-11 & 89.9 & 6.0 & 50 & $0.98\pm 0.01$ & 4.44$\times 10^{-13}$ & 5.92$\times 10^{-12}$ \\ & & 0744010101 & 2014-12 & 51.8 & -- & 50 & $0.003\pm 0.021$ & 4.23$\times 10^{-14}$ & 4.66$\times 10^{-12}$ \\ & & 0824610401 & 2019-01 & 74.8 & -- & 50 & $0.78\pm 0.01$ & 5.27$\times 10^{-14}$ & 5.09$\times 10^{-12}$ \\ \hline PG 1211+143 & NLS1 & 0112610101 & 2001-06 & 53.1 & 6.0 & 50 & $1.72\pm 0.03$ & 3.86$\times 10^{-12}$ & 3.16$\times 10^{-12}$ \\ & & 0502050101 & 2007-12 & 45.1 & 9.0 & 50 & $2.03\pm 0.03$ & 7.69$\times 10^{-12}$ & 4.01$\times 10^{-12}$ \\ \hline \end{tabular} \end{center} \vspace{-0.3cm} \tablecomments{Type: AGN classifications from \citet{Veron-Cetty2010} and \citet{Alston2019}; Duration: total length of light curve used for variability analysis from each observation; Pile-up: radii of removed central area for suppressing photon pile-up; bin: time bins of different light curves; $\Gamma_{\rm 2-10~keV}$: 2-10 keV photon indices with 90\% confidence intervals derived from {\sc xspec} fitting; $F_{\rm 0.3-1~keV}$ and $F_{\rm 2-10~keV}$: the fluxes in 0.3-1 and 2-10 keV obtained by integrating the energy spectrum.} \vspace{0.6cm} \end{table*} \section{Sample Selection and Data Reduction} \label{sec:2} \subsection{The AGN Sample} The AGN sample of this work is based on the larger sample in \citet{Kara2016b} which, in turn, 
comes from a sample of 104 variable AGN presented originally by \citet{Gonzalez-Martin2012}. The merit of \citet{Kara2016b}'s sample is that its AGN are strongly variable in X-rays, so that possible evolution of the X-ray timing properties can be investigated. Since our objective is to explore the link between the X-ray energy spectrum and PSD, we inspected the spectra of each source in \citet{Kara2016b} to identify those with multiple \textit{XMM-Newton} observations exhibiting \edit1{{significant}} spectral variation, i.e. the 2-10 keV photon indices differ by more than 0.2 between two observations of the same source. Then we selected sources having at least 2 \textit{XMM-Newton} observations of more than 40 ks each, which ensures that their PSDs can be compared down to $\sim10^{-5}$ Hz. Moreover, we only use observations which are not severely affected by background flares. The final sample consists of nine highly variable AGN with 24 \textit{XMM-Newton} observations. Details of these observations are listed in Table \ref{tab:obs}, where the AGN classifications are taken from the literature (e.g. \citealt{Veron-Cetty2010}, \citealt{Alston2019}). The sample consists of one Seyfert 1.5 galaxy, two Seyfert 1.8 galaxies and six narrow-line Seyfert 1 galaxies (NLS1s). \edit1{{According to the AGN unified model, the difference between Seyfert 1.5, 1.8 and 1 galaxies is mainly a difference in viewing angle \citep{Antonucci1993, Urry1995}, and their X-ray variations may all originate from intrinsic changes in the nuclear region and/or the absorption in the line of sight \citep{Hernandez-Garcia2017}, so we can study them in one sample and compare their properties.}} \subsection{Data Reduction} We use the data from the European Photon Imaging Camera (EPIC) pn camera of \textit{XMM-Newton}, as its data quality is the best among the three EPIC cameras. The data were processed using the \textit{XMM-Newton} Science Analysis System (SAS v. 17.0.0) with the latest calibration files. 
The calibrated EPIC event files were produced from the Observation Data Files (ODFs) using the {\tt \string epproc} command. For each source, we extracted a single-event 10-12 keV light curve from a source-free region to identify intervals of background flares. Generally, this was done by removing the high background periods at the beginning and end of the observation data, while the {\tt \string RATE\textless=0.4} criterion was also used when background flares appeared in the middle of the observing window. We created a good-time-interval (GTI) file, and filtered the event file to include data only in the low-background periods. Only single and double pixel events were used (i.e. {\tt \string PATTERN\textless=4}) in the following analysis. For most observations, we used 50 s as the binning time. For observations when the source appears relatively faint (i.e. in observations of 1H 0707-495 and IRAS 13224-3809), we adjusted the binning time to 200 s to reduce the number of zero-count bins in the light curves. The source and background light curves and spectra were extracted from the filtered event file using the {\tt \string evselect} command. In most cases, the radii of the source and background regions were 40 arcsec and 50 arcsec, respectively. Only if the observation mode was {\tt Small Window} (e.g. the observation of Mrk 335 in 2006), the two radii were both set to 20 arcsec. We also checked every observation for the pile-up effect \edit1{{using the {\tt\string epatplot} task}}, and the core area of the point-spread-function (PSF) was masked when necessary to reduce pile-up. \edit1{{We make sure that the observed pattern distributions as calculated by the {\tt\string epatplot} task are consistent with the models, and the energy fraction lost is less than 3.5\% in all the observations.}} The radii of the masked inner regions are listed in Table \ref{tab:obs}.
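The flare screening described above reduces, in essence, to thresholding a high-energy background light curve. Below is a minimal sketch of that logic in Python with numpy (the bin values are purely illustrative; in practice the screening is performed with the SAS tasks, and only the 0.4 counts/s threshold is taken from the text above):

```python
import numpy as np

def good_time_mask(rate, threshold=0.4):
    """Flag low-background bins of a 10-12 keV light curve,
    mimicking the RATE<=0.4 screening criterion (counts/s)."""
    return np.asarray(rate) <= threshold

# illustrative light curve: 20 bins of 50 s with a flare in the middle
time = np.arange(20) * 50.0
rate = np.full(20, 0.1)
rate[8:12] = 1.5                      # background flare
mask = good_time_mask(rate)
clean_time = time[mask]               # bins that would enter the GTI file
```

In the real pipeline the surviving intervals are written to a GTI file and applied to the event list; here the boolean mask simply selects the low-background bins.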
The redistribution matrix files (RMFs) and ancillary files (ARFs) are generated using the {\tt\string rmfgen} and {\tt\string arfgen} commands, respectively. \section{Methodologies of X-ray Spectral and PSD Analysis} \label{sec:3} \subsection{Spectral Analysis} To quantify the spectral shape, the 2-10 keV spectra were fitted with an absorbed power law model (the {\sc tbabs$\times$powerlaw} in XSPEC v12.10.1). The abundances were adopted from \citet{Wilms2000}, and the photoelectric absorption cross-sections were adopted from \citet{Verner1996}. The Galactic hydrogen column density $N_{\rm H}$ of each source was frozen at the value provided by the {\tt NHtot} tool\footnote{\url{https://www.swift.ac.uk/analysis/nhtot/index.php}} (Willingale et al. 2013). The free parameters are the photon index and the normalization of the power law. The best-fit photon indices are listed in Table \ref{tab:obs}, with errors representing the 1$\sigma$ confidence intervals. Since our study focuses on the PSD, these parameters are sufficient to make a simple distinction between different spectral states. \edit3{{The folded spectra are plotted in Figures \ref{fig:spec&PSD_s} and \ref{fig:spec&PSD_h}, as well as in Figures \ref{fig:spec&PSD_unused_s} and \ref{fig:spec&PSD_unused_h}. These spectra include various instrumental features (e.g. effective area and detector's response), but they are model-independent. Meanwhile, for the ease of comparison with previous works such as \citet{Kara2016b}, we follow the prescription in \citet{Vaughan2011} to plot the `fluxed' 0.3-10 keV spectra in Figure \ref{fig:unfoldspec}. These spectra are unfolded through a power law model with photon index equal to 0 and normalization equal to 1. This method largely removes the effects of the energy-dependent effective area, so the resultant `fluxed' spectra can better reflect the difference of the intrinsic spectral states. 
Significant spectral variation is found for every source.}} \subsection{PSD Analysis} The light curves of the soft (0.3-1 keV) and hard (2-10 keV) X-ray bands were extracted separately. For the short-period data gaps resulting from the removal of background flares, we followed the method of \citet{Ashton2021} and filled them with linear interpolation. We applied the Fast Fourier Transform (FFT) to the light curves to generate periodograms. We also compared the periodograms thus obtained with those produced by filling data gaps with the mean count rates, and found that they are not significantly different. Then the periodograms were fitted with two models. Model-A is a single power law model for the red noise continuum, plus a constant to account for Poisson noise: \begin{equation} P(f)=Af^{-\alpha}+C, \end{equation} where the free parameters include the normalization $A$, the power law slope index $\alpha$, and the constant $C$. Model-B is a broken power law plus a constant: \begin{equation} P(f)=\frac{Af^{-\alpha_{\rm L}}}{1+(f/f_{\rm B})^{\alpha_{\rm H}-\alpha_{\rm L}}}+C, \end{equation} where the slope below the break frequency $f_{\rm B}$ is fixed as $\alpha_{\rm L}=1$, leaving only four free parameters: the normalization $A$, the break frequency $f_{\rm B}$, the slope $\alpha_{\rm H}$ after the break, and the constant $C$. We fitted the periodograms using the method described in \citet{Vaughan2010} with the two models using the maximum likelihood estimation. \edit1{{Note that this method avoids the need to bin the periodogram to achieve a certain level of signal-to-noise.}} For a given model, the fitting process is equivalent to minimizing the following deviance function: \begin{equation} D=2\sum_{j=1}^{N/2}\{\frac{I_j}{S_j}+\log S_j\} \end{equation} which is twice the minus logarithm of the likelihood. $I_j$ and $S_j$ are the periodogram data and the model PSD at frequency $f_j$, respectively.
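Equations (1)-(3) translate directly into code. The following sketch (Python/numpy; not the fitting code used for the paper, and the parameter values in the final check are arbitrary) illustrates the two PSD models and the deviance:

```python
import numpy as np

def model_a(f, A, alpha, C):
    """Eq. (1): red-noise power law plus a Poisson-noise constant."""
    return A * f**(-alpha) + C

def model_b(f, A, f_b, alpha_h, C, alpha_l=1.0):
    """Eq. (2): broken power law with the low-frequency slope fixed at 1."""
    return A * f**(-alpha_l) / (1.0 + (f / f_b)**(alpha_h - alpha_l)) + C

def deviance(I, S):
    """Eq. (3): twice the minus log-likelihood of periodogram ordinates I
    for model PSD S (each ordinate is distributed as S_j * chi^2_2 / 2)."""
    I, S = np.asarray(I, float), np.asarray(S, float)
    return 2.0 * np.sum(I / S + np.log(S))

# sanity check: per ordinate, I/S + log S is minimized at S = I,
# so the "perfect" model attains the smallest possible deviance
I_demo = np.array([1.0, 2.0, 4.0])
assert deviance(I_demo, I_demo) < deviance(I_demo, 2.0 * I_demo)
```

Because each periodogram ordinate scatters about the model as $S_j\chi^2_2/2$, the per-ordinate term $I_j/S_j+\log S_j$ is minimized at $S_j=I_j$, which is why the fit can be done on the raw, unbinned periodogram.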
The summation was performed over the entire frequency range below the Nyquist frequency. To obtain the confidence limits of the PSD parameters, we follow the Bayesian analysis method detailed in \citet{Vaughan2010} and use the Markov Chain Monte Carlo (MCMC) method to obtain posterior distributions of the model parameters. We use 100 walkers, each taking more than 1000 steps to ensure that the chains pass the burn-in period. The median value of the posterior distribution is used as the fitting result of each parameter. \begin{table}[t!] \centering \caption{LRT1 \textit{p}-values calculated from 5000 datasets simulated under Model-A. {\it soft} and {\it hard} indicate the 0.3-1 keV and 2-10 keV bands. \label{tab:LRT1}} \begin{tabular}{lccc} \hline Source & Obs & {\it p} (soft) & {\it p} (hard) \\ \hline Mrk 335 & 2006 & 0.0000 & 0.0000 \\ & 2009(1) & 0.0352 & 0.2542 \\ & 2009(2) & 0.0456 & 0.8408 \\ Mrk 766 & 2001 & 0.0000 & 0.0006 \\ & 2005(1) & 0.0998 & 0.8642 \\ & 2005(2) & 0.0000 & 0.0000 \\ 1H 0707-495 & 2002 & 0.0022 & 0.0144 \\ & 2007 & 0.0494 & 0.0398 \\ & 2010 & 0.0006 & 0.0024 \\ IRAS 13224-3809 & 2002 & 0.1090 & 0.0672 \\ & 2011 & 0.1716 & 0.8642 \\ & 2016 & 0.6278 & 0.0052 \\ NGC 1365 & 2007 & 0.9446 & 0.7968 \\ & 2012 & 0.9354 & 0.0286 \\ & 2013 & 0.5166 & 0.0036 \\ NGC 3516 & 2001 & 0.2948 & 0.3798 \\ & 2006 & 0.5906 & 0.8336 \\ NGC 4051 & 2001 & 0.0000 & 0.0900 \\ & 2018 & 0.0000 & 0.0052 \\ NGC 4395 & 2003 & 0.0178 & 0.1948 \\ & 2014 & 0.1918 & 0.0764 \\ & 2019 & 0.0194 & 0.0026 \\ PG 1211+143 & 2001 & 0.0056 & 0.0198 \\ & 2007 & 0.0472 & 0.0594 \\ \hline \end{tabular} \vspace{-0.3cm} \vspace{0.2cm} \end{table} \subsection{Comparison of PSD models} For an observed PSD, the two models were compared using the likelihood ratio test (LRT) statistic. As described in \citet{Vaughan2010}, the calculation of the LRT statistic is equivalent to the difference between the deviance of the two models: \begin{equation} T_{\rm LRT}=D_{\rm min}(A) -D_{\rm min}(B).
\end{equation} By using datasets simulated under one PSD model, the LRT statistical \textit{p}-value can be calculated, which gives the proportion of all simulated data where the LRT statistic is greater than the observed value \citep{Vaughan2010}. For the comparison of PSD models, we simulate 5000 periodograms following \citet{Timmer1995} from the posterior distribution of Model-A and calculate the LRT statistics (hereafter LRT1). Then \textit{p}-values for the real data can be calculated. For LRT1, a smaller \textit{p}-value indicates a more significant deviation of the observed PSD from a single power law model. We take \textit{p} \textless\ 0.01 as the criterion that Model-B is selected over Model-A. \edit1{{Although this is just a crude check as in \citet{Gonzalez-Martin2012}, the selection is generally consistent with the results of visual inspection, in which the observations with \textit{p} \textless\ 0.01 do show breaks in their PSDs. Besides, given that our sample contains only 24 observations, this criterion is reasonable, as the uncertainty implied by the \textit{p}-value would not change the result for more than one observation.}} According to the results of LRT1 statistics, four objects in our sample show \textit{p}-values lower than 0.01 in more than one dataset (different energy bands of one observation are treated as different datasets), which are Mrk 335, Mrk 766, 1H 0707-495, and NGC 4051. Besides, IRAS 13224-3809, NGC 1365, NGC 4395 and PG 1211+143 give a low \textit{p}-value in only one observation (i.e. 2-10 keV in the 2016 observation for IRAS 13224-3809, 2-10 keV in the 2013 observation for NGC 1365, 2-10 keV in the 2019 observation for NGC 4395, and 0.3-1 keV in the 2001 observation for PG 1211+143). After checking the PSDs of these observations, we consider that they do not show a significant break and the low LRT1 \textit{p}-values are more likely due to the power fluctuations at low frequencies.
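The LRT \textit{p}-value calculation can be sketched as follows (Python with numpy/scipy). This is a simplification, not the paper's pipeline: a single plug-in best-fit null model replaces the posterior draws, amplitudes are fitted in log space to keep the model PSD positive, and the mock periodograms are drawn directly as $S_j\chi^2_2/2$ ordinates, which is the distribution the \citet{Timmer1995} light-curve route yields; all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

def model_a(f, ln_a, alpha, ln_c):           # Eq. (1), amplitudes in log space
    return np.exp(ln_a) * f**(-alpha) + np.exp(ln_c)

def model_b(f, ln_a, ln_fb, alpha_h, ln_c):  # Eq. (2), alpha_L fixed at 1
    ratio = f / np.exp(ln_fb)
    return np.exp(ln_a) / f / (1.0 + ratio**(alpha_h - 1.0)) + np.exp(ln_c)

def deviance(I, S):                          # Eq. (3)
    return 2.0 * np.sum(I / S + np.log(S))

def best_fit(model, x0, f, I):
    """Minimize the deviance (maximum-likelihood fit, as in Vaughan 2010)."""
    res = minimize(lambda th: deviance(I, model(f, *th)), x0,
                   method="Nelder-Mead")
    return res.fun, res.x

def lrt_pvalue(f, I_obs, n_sim=200):
    """p-value of T_LRT = D_min(A) - D_min(B) (Eq. 4) under Model-A."""
    x0a = [np.log(np.median(I_obs)), 1.5, np.log(np.median(I_obs))]
    x0b = x0a[:1] + [np.log(np.median(f)), 2.5] + x0a[2:]
    d_a, th_a = best_fit(model_a, x0a, f, I_obs)
    d_b, _ = best_fit(model_b, x0b, f, I_obs)
    t_obs = d_a - d_b
    s_null = model_a(f, *th_a)               # best-fit null PSD
    t_sim = np.empty(n_sim)
    for i in range(n_sim):
        # periodogram ordinates of a Gaussian process scatter as S*chi^2_2/2
        I_sim = s_null * rng.chisquare(2, size=f.size) / 2.0
        t_sim[i] = (best_fit(model_a, x0a, f, I_sim)[0]
                    - best_fit(model_b, x0b, f, I_sim)[0])
    return np.mean(t_sim >= t_obs)           # small p favours Model-B
```

In the paper, the null-model parameters are drawn from the MCMC posterior and 5000 simulations are used; the plug-in best fit and a smaller `n_sim` here only keep the sketch short.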
For those observations which have an LRT1 \textit{p}-value lower than 0.01 (i.e. data that can be described better by Model-B than by Model-A), MCMC gives relatively tight posterior distributions for the break frequency. These break frequencies are shown in Table \ref{tab:parameters}. \edit1{{The other PSD fit parameters of Model A and Model B are shown in Tables \ref{tab:parameters_modelA} and \ref{tab:parameters_modelB}, respectively.}} \begin{table}[t!] \begin{center} \caption{Break frequencies from the MCMC method. Error-bars represent 1$\sigma$ confidence intervals. \label{tab:parameters}} \setlength\tabcolsep{5.8pt} \begin{tabular}{cccc} \hline Source & Obs & $\log f_{\rm B}$ & $\log f_{\rm B}$ \\ & & (0.3-1 keV) & (2-10 keV) \\ \hline Mrk 335 & 2006 & $-3.88_{-0.17}^{+0.13}$ & $-3.63_{-0.13}^{+0.10}$ \\ & 2009(1) & -- & -- \\ & 2009(2) & -- & -- \\ Mrk 766 & 2001 & $-3.38_{-0.13}^{+0.10}$ & $-3.14_{-0.24}^{+0.11}$ \\ & 2005(1) & -- & -- \\ & 2005(2) & $-3.31_{-0.13}^{+0.09}$ & $-3.16_{-0.06}^{+0.04}$ \\ 1H 0707-495 & 2002 & $-3.54_{-0.30}^{+0.16}$ & -- \\ & 2007 & -- & -- \\ & 2010 & $-3.64_{-0.14}^{+0.10}$ & $-3.47_{-0.07}^{+0.07}$ \\ NGC 4051 & 2001 & $-3.61_{-0.24}^{+0.18}$ & -- \\ & 2018 & $-2.94_{-0.12}^{+0.09}$ & $-2.80_{-0.19}^{+0.12}$ \\ \hline \end{tabular} \end{center} \vspace{-0.3cm} \vspace{0.4cm} \end{table} \subsection{Comparison of Periodograms} \label{sec-psd-compare} Since a periodogram is only a single realization of the intrinsic PSD, the difference between two periodograms can be attributed to either the random fluctuation or the intrinsic PSD variation, and so it is not trivial to tell if two periodograms are intrinsically different. However, it is known that the probability distribution for the random fluctuation of the PSD follows the $\chi^2$ distribution with two degrees of freedom (\citealt{Timmer1995}). Besides, the posterior probability distribution can be derived for every parameter of the PSD model (\citealt{Vaughan2010}).
Hence, one can run MCMC simulations to assess the level of model uncertainty and random fluctuation, thereby identifying intrinsic PSD variation. Another issue in comparing two periodograms is the different levels of Poisson noise, because if a frequency band is dominated by the Poisson noise power, then it is difficult to retrieve the PSD shape in this band. Therefore, it is also not trivial to compare two periodograms if their Poisson noise power is different. To overcome these issues, we apply the following method to compare two periodograms. For each source, we first obtain the distributions of Model-B parameters for the observation (i.e. the reference observation) showing a significant PSD break, and then adopt them as the reference \edit1{{PSD models}} for the other observations (i.e. the comparing observations) of the same source, except that the Poisson noise is kept at the level of the comparing observation. This step is to avoid the influence of different Poisson noise power between the reference and comparing observations. Similarly, \edit1{{for each comparing observation,}} we run 5000 MCMC simulations using \edit1{{the Poisson noise distribution from this comparing observation and the other parameters from the reference observation}} in Model-B to produce a set of simulated PSDs \edit1{{based on the frequency baseline of the comparing observation}}. Then these mock PSDs are fitted again with Model-A and Model-B to derive a set of $T_{\rm LRT}$ statistics (hereafter LRT2). Based on the distribution of LRT2 statistics, the $T_{\rm LRT}^{\rm obs}$ from the real \edit1{{comparing}} observation is used to determine the \textit{p}-value. In this case, a larger \textit{p}-value indicates a larger difference between the shape of the comparing PSD and that of the reference PSD (i.e. the broken power law shape). \begin{table}[t!] \begin{center} \caption{LRT2 \textit{p}-values calculated from 5000 datasets simulated under Model-B.
The model parameters adopt the distributions obtained from the broken PSD models of the reference observations (Ref Obs), while the Poisson terms are obtained from the comparing observations (Comp Obs). {\it soft} and {\it hard} indicate the 0.3-1 keV and 2-10 keV bands. \label{tab:LRT2}} \begin{tabular}{lcccc} \hline Source & Comp Obs & Ref Obs & {\it p} (soft) & {\it p} (hard) \\ \hline Mrk 335 & 2009(1) & 2006 & 0.7876 & 0.7710 \\ & 2009(2) & 2006 & 0.7216 & 0.9318 \\ Mrk 766 & 2005(1) & 2001 & 0.9218 & 0.9398 \\ & 2005(1) & 2005(2) & 0.9524 & 0.9860 \\ 1H 0707-495 & 2002 & 2010 & -- & 0.5224 \\ & 2007 & 2010 & 0.7884 & 0.5998 \\ & 2007 & 2002 & 0.5134 & -- \\ NGC 4051 & 2001 & 2018 & -- & 0.9264 \\ \hline \end{tabular} \end{center} \vspace{-0.3cm} \vspace{0.2cm} \end{table} \section{Relations between the X-ray Power Spectra and Energy Spectra} \label{sec:4} Based on the methods described above, we investigate the variability of X-ray PSDs and their potential co-variation with the energy spectra. \edit3{{The folded energy spectra are shown in Figures \ref{fig:spec&PSD_s} and \ref{fig:spec&PSD_h}, along with the corresponding PSDs in the soft (0.3-1 keV) and hard (2-10 keV) X-ray bands. The unfolded spectra are plotted in Figure \ref{fig:unfoldspec}.}} For observations with significant breaks in the PSDs, the best-fit broken power law model (Model-B) curves are plotted; for the rest, the PSDs are shown with the best-fit single power law model (Model-A) curves. The same x-axis and y-axis scales are adopted so that the difference between observations can be compared directly. As described in the previous section, we calculated the LRT1 statistics based on the simulation of the best-fit Model-A from the same observation. The distributions of LRT1 statistics for Mrk 335 in the 0.3-1 keV band are shown in Figure \ref{fig:mrk335_LRTs_erect}. The LRT1 \textit{p}-values for all sources are shown in Table \ref{tab:LRT1}.
The calculation of LRT2 distribution is based on another set of simulation using the best-fit Model-B from a reference observation. The results of LRT2 simulation for Mrk 335 in the 0.3-1 keV band are shown in Figure \ref{fig:mrk335_LRTs_bendtest_erect}. The LRT2 \textit{p}-values for all sources are shown in Table \ref{tab:LRT2}. \begin{figure*}[hp!] \begin{center} \gridline{ \fig{pic//Spec&PSD//mrk335_s.pdf}{1.0\textwidth}{} } \vspace{-0.3cm} \gridline{ \fig{pic//Spec&PSD//mrk766_s.pdf}{1.0\textwidth}{} } \vspace{-0.3cm} \gridline{ \fig{pic//Spec&PSD//1h0707_s.pdf}{1.0\textwidth}{} } \vspace{-0.3cm} \gridline{ \fig{pic//Spec&PSD//ngc4051_s.pdf}{1.0\textwidth}{} } \vspace{-0.3cm} \caption{The folded energy spectra and 0.3-1 keV PSDs of the four AGN which have two or more detections of a high-frequency break. Black lines are the best-fit Model-B (i.e. broken power law) results for observations which report significant breaks, and are the best-fit Model-A (i.e. single power law) results for observations which do not report significant breaks. The dashed lines and the dotted lines represent red noise components and Poisson terms, respectively. The LRT1 \textit{p}-values of the Model-A simulations are shown at the bottom of each panel. Spectra and PSDs are shown in different colors according to the observation date. The PSDs are arranged in order of flux. \label{fig:spec&PSD_s}} \end{center} \end{figure*} \begin{figure*}[hp!] \begin{center} \gridline{ \fig{pic//Spec&PSD//mrk335_h.pdf}{1.0\textwidth}{} } \vspace{-0.3cm} \gridline{ \fig{pic//Spec&PSD//mrk766_h.pdf}{1.0\textwidth}{} } \vspace{-0.3cm} \gridline{ \fig{pic//Spec&PSD//1h0707_h.pdf}{1.0\textwidth}{} } \vspace{-0.3cm} \gridline{ \fig{pic//Spec&PSD//ngc4051_h.pdf}{1.0\textwidth}{} } \vspace{-0.3cm} \caption{Similar to Figure~\ref{fig:spec&PSD_s}, but for the 2-10 keV band. \label{fig:spec&PSD_h}} \end{center} \end{figure*} \begin{figure}[t!]
\begin{center} \includegraphics[width=0.48\textwidth]{pic//ModelTest//mrk335_LRTs_erect.pdf} \caption{Distributions of LRT1 statistics of Mrk 335 in the 0.3-1 keV band with 5000 datasets simulated under Model-A. The black vertical line is the observed value $T_{\rm LRT}^{\rm obs}$, with the corresponding \textit{p}-value at the top left. \label{fig:mrk335_LRTs_erect}} \end{center} \vspace{0.4cm} \end{figure} \begin{figure}[t!] \begin{center} \includegraphics[width=0.48\textwidth]{pic//bendtest//mrk335_LRTs_bendtest_erect.pdf} \caption{Distributions of LRT2 statistics of Mrk 335 in the 0.3-1 keV band with 5000 datasets simulated under Model-B. The model parameters are based on the posterior distribution for the 2006 observation, while the Poisson term uses the value of the observations in 2009. The black vertical line is the observed value $T_{\rm LRT}^{\rm obs}$, with the corresponding \textit{p}-value at the top left. \label{fig:mrk335_LRTs_bendtest_erect}} \end{center} \vspace{0.4cm} \end{figure} \subsection{Results for Individual Sources} \subsubsection{Mrk 335} The first row of panels in Figure \ref{fig:spec&PSD_s} shows the results for Mrk 335. The 0.3-1 keV periodogram of Mrk 335 from the observation of 2006 shows a significant break, which does not appear in the observations of 2009. This is confirmed by the distribution of LRT1 statistics shown in Figure \ref{fig:mrk335_LRTs_erect}. Among the three observations investigated, only the \textit{p}-value of the observation of 2006 is lower than 0.01. This means that the high-frequency break is significant in the observation of 2006, but not in the two observations of 2009. However, Figure \ref{fig:spec&PSD_s} also shows that the Poisson noise power in the two observations of 2009 is much stronger, and so it dominates the power at lower frequencies, close to the break frequency observed in 2006. Hence the Poisson noise power can reduce the significance of the PSD break signal or even overwhelm it. 
To overcome this issue, we use the observation in 2006 as the reference observation, and apply the Bayesian MCMC method as described in Section~\ref{sec-psd-compare} to compare these periodograms. As shown by Figure~\ref{fig:mrk335_LRTs_bendtest_erect}, the simulated LRT2 statistics are systematically larger than the observed value. The corresponding \textit{p}-values are 0.79 and 0.72 for the two observations in 2009 (see Table~\ref{tab:LRT2}), which are both much larger than the \textit{p}-values based on Model-A in Table~\ref{tab:LRT1}. This clearly indicates that the difference between the periodograms observed in 2009 and 2006 reflects an intrinsic difference of the PSD, rather than the difference in Poisson noise power or random fluctuation. Meanwhile, we can see from Figure \ref{fig:spec&PSD_s} that the source flux in the observation of 2006 is much higher than the other two observations, and the spectral shape is also much steeper. The differences between the spectral flux and slope can also be found in Table \ref{tab:obs} quantitatively. Therefore, it seems that for Mrk 335 a high-flux and steeper energy spectrum is associated with the emergence of a high-frequency break in the PSD. Similarly, as shown in the first panel of Figure \ref{fig:spec&PSD_h}, the 2-10 keV periodogram of Mrk 335 in 2006 is also different from those in 2009, showing a significant high-frequency break similar to that found in 0.3-1 keV. The result is robust, as verified by the same Bayesian MCMC method (see Table~\ref{tab:LRT2}). Besides, the break frequency in the 2-10 keV band appears to be higher than that in the 0.3-1 keV band, although the difference is not sufficiently significant if the PSD random fluctuation is considered. It is also interesting to note that the PSD normalization in the observation of 2006 is lower than that in the two observations of 2009. This suggests that as the flux increases, the relative variability amplitude of Mrk 335 decreases.
\subsubsection{Mrk 766} The PSDs of Mrk 766 for the 0.3-1 keV and 2-10 keV bands are shown in the second row of panels in Figures \ref{fig:spec&PSD_s} and \ref{fig:spec&PSD_h}, respectively, with the energy spectra on the left. The spectrum in 2001, which has the highest flux and steepest slope among the three observations, is associated with the presence of a significant PSD break in the two energy bands. The other two observations, both of which were made in 2005, are significantly different from each other in terms of their energy spectra and PSDs. The first observation in 2005 does not show a high-frequency break in either the 0.3-1 keV or the 2-10 keV band when the spectral flux is low and the spectrum has a hard shape, which is similar to the observations of Mrk 335 in 2009. The second observation in 2005 shows a relatively higher flux and steeper spectrum, accompanied by the emergence of a PSD break in both energy bands. We have also applied the same method to ensure that the non-detectability of the PSD break is not due to the influence of higher Poisson noise power (see Table~\ref{tab:LRT2}). Interestingly, these results are all similar to those found in Mrk 335. Likewise, for the two observations which show a high-frequency break, the break frequencies of 2-10 keV are higher than those of 0.3-1 keV, which can be seen in Table \ref{tab:parameters}. The break frequencies detected in the two observations are also statistically consistent with each other. The PSD normalization of Mrk 766 also decreases as the spectral flux increases. Therefore, regarding the flux state, the emergence of the PSD break and the PSD normalization, the results of Mrk 766 are the same as those of Mrk 335. \subsubsection{1H 0707-495} As shown in the third row of panels in Figures \ref{fig:spec&PSD_s} and \ref{fig:spec&PSD_h}, the spectral slopes and fluxes of 1H 0707-495 during the observations that we selected have relatively smaller differences than those of the other sources, but different PSDs are still observed.
The PSD in 2007 does not show a high-frequency break in the two energy bands when the energy spectrum has the hardest shape and lowest flux. In comparison, the observations in 2002 and 2010 show high-frequency breaks in the 0.3-1 keV PSDs, and the two break frequencies are consistent with each other. Moreover, only the observation in 2010 shows a significant high-frequency break in 2-10 keV. The break does not emerge in the 2-10 keV PSD in 2002, even though the spectral flux and slope are the highest among the three observations. Likewise, the non-detectability of the PSD break has been checked against the influence of different Poisson noise power (see Table~\ref{tab:LRT2}). The PSD normalization of 1H 0707-495 is also found to be anti-correlated with the spectral flux. Therefore, for 1H 0707-495 the results in the 0.3-1 keV band are the same as Mrk 335 and Mrk 766. The results in the 2-10 keV band are a little different. A possible explanation is that the break in 2-10 keV in the observation of 2010 is caused by the random fluctuation of the underlying noise. \subsubsection{NGC 4051} For NGC 4051, only two observations met the requirements that we set in Section~\ref{sec:2}. The spectra and PSDs are shown in the bottom row of panels in Figures \ref{fig:spec&PSD_s} and \ref{fig:spec&PSD_h}. The spectrum in 2001 is steeper than that in 2018, and the flux in 2001 is also higher. But in the 0.3-1 keV band, the break frequency in 2001 is much lower than that in 2018, which is opposite to the behavior of the break frequency in other objects. Meanwhile, the 2-10 keV PSD does not show a high-frequency break in the observation of 2001, which has also been confirmed by taking into account the influence of Poisson noise power (see Table~\ref{tab:LRT2}). Interestingly, the PSD normalization in the observation of 2001 is higher than that in 2018, apart from the lower Poisson noise in 2001 due to the higher count rate.
This suggests that as the flux of NGC 4051 increases, its relative variability amplitude also increases. Therefore, the link between the PSD normalization and the energy spectrum of NGC 4051 is different from that of the previous three sources. \subsubsection{IRAS 13224-3809, NGC 1365, NGC 3516, \\NGC 4395 and PG 1211+143} The other five sources also show highly variable energy spectra, but do not show significant high-frequency breaks in more than one dataset. Their energy spectra and PSDs are shown in Figures \ref{fig:spec&PSD_unused_s} and \ref{fig:spec&PSD_unused_h}, while the LRT1 \textit{p}-values used to distinguish PSD breaks are given in Table \ref{tab:LRT1}. We find four PSD breaks in these observations. They are in the 2-10 keV band in the observation of 2016 for IRAS 13224-3809, the 2-10 keV band in the observation of 2013 for NGC 1365, the 2-10 keV band in the observation of 2019 for NGC 4395, and the 0.3-1 keV band in the observation of 2001 for PG 1211+143. The LRT1 \textit{p}-values are lower than 0.01 for these four PSDs. However, for these datasets, the MCMC runs did not converge except for the 2019 observation of NGC 4395. This result suggests that these PSD breaks are poorly constrained. Although breaks may exist in these observations, it is difficult to compare their PSD shapes in more detail given the lack of other break detections. Moreover, for NGC 3516 and NGC 4395 we notice that as the spectral flux decreases, the intrinsic variability decreases as well. \subsection{Overall Results for the Sample} \edit1{{ All our sources are included in the sample of \citet{Gonzalez-Martin2012}, which is also the parent sample of \citet{Kara2016b}. Our fitting results are generally consistent with those in \citet{Gonzalez-Martin2012}. However, our analyses go further given that we focus on PSD changes between observations of each source.
For example, two observations close in time were analyzed together in \citet{Gonzalez-Martin2012}, while we analyzed them separately and paid attention to the variation of the PSD when the spectrum changed significantly (e.g. two observations of Mrk 766 in 2005). The break frequencies found in this work are generally consistent with those reported by \citet{Gonzalez-Martin2012}, although some differences do exist. Specifically, our results are slightly higher for Mrk 766 in 2001, Mrk 766 in 2005, and 1H 0707-495 in 2010. For NGC 4051 in 2001, \citet{Gonzalez-Martin2012} report a PSD break in 2-10 keV at $\log(f_{\mathrm{B}})\sim -3.99$ with a large uncertainty, but this is not detected by our analysis. These differences are likely caused by different methods of PSD analysis. The observation of NGC 4051 in 2018 is not included in \citet{Gonzalez-Martin2012}, yet studies of this source have shown that it has unique spectral variability beyond the ``softer-when-brighter'' trend, which can be explained by a geometry including an extended corona \citep{Wu2020}. Timing analysis of these sources may provide more clues in addition to the spectral properties. }} We compare the relations between the PSD break frequencies and the spectral slopes and fluxes for Mrk 335, Mrk 766, 1H 0707-495 and NGC 4051 in Figures \ref{fig:logf_slope} and \ref{fig:logf_flux}. These AGN showed significant PSD breaks in at least two \textit{XMM-Newton} observations. For observations in which no PSD break is detected, we use a dashed line to indicate the observed frequency range of the PSD in which there is no detectable break. As we pointed out before, for Mrk 335, Mrk 766 and 1H 0707-495, the PSD break is detectable only when the spectral flux is high and the shape is steep (the photon index is $\gtrsim$ 2.0). A possible explanation is that their low-flux spectra are dominated by additional processes.
Indeed, these sources all have very high mass accretion rates, and it has been proposed that their drastic spectral variability may be caused by the absorption of a strong disc wind (e.g. \citealt{Done2016a}; \citealt{Hagino2016}). Hence a natural explanation would be the absorption of wind clumps, which may introduce extra variability and lead to the undetectability of the PSD break during the low-flux state. However, the situation of NGC 4051 is somewhat different. Both the high and low-flux spectral states show PSDs with a high-frequency break, and the break frequency is higher when the flux is lower. Therefore, we conclude that in a low-flux state, the PSD shows either no break or a break shifted to a higher frequency, where it is hard to detect because the PSD at high frequencies is dominated by white noise. Physically, the main difference for NGC 4051 is that its mass accretion rate is much lower than those of the other three sources, hence the origin of its spectral variability is probably different. Then we investigate if there is any correlation between the variations of spectra and PSDs for all four sources. As shown in Figures \ref{fig:spec&PSD_s} and \ref{fig:spec&PSD_h}, for observations showing drastically different states in flux or spectral shape, their PSDs also show significant differences. This is confirmed by the changes in the LRT1 \textit{p}-values between observations of the same source, and by the difference between the LRT1 and LRT2 \textit{p}-values, which are based on the same observed $T_{\rm LRT}^{\rm obs}$ but on two different simulation sets. Therefore, our result hints at a possible correlation between the spectral state and PSD, although the sample size is too small to draw a general conclusion for AGN. \begin{figure*}[t!]
\begin{center} \gridline{\fig{pic//panels//sum//mrk335_logf_slope.pdf}{0.23\textwidth}{}\fig{pic//panels//sum//mrk766_logf_slope.pdf}{0.23\textwidth}{}\fig{pic//panels//sum//1h0707_logf_slope.pdf}{0.23\textwidth}{}\fig{pic//panels//sum//ngc4051_logf_slope.pdf}{0.242\textwidth}{}} \vspace{-0.6cm} \caption{Logarithm of the PSD break frequencies against spectral slopes. The values and errors are calculated using MCMC. Dashed lines mark the observed frequency ranges of the PSDs for observations that do not exhibit a significant break. \label{fig:logf_slope}} \end{center} \end{figure*} \begin{figure*}[t!] \begin{center} \gridline{\fig{pic//panels//sum//mrk335_logf_flux.pdf}{0.46\textwidth}{} \fig{pic//panels//sum//mrk766_logf_flux.pdf}{0.46\textwidth}{} } \vspace{-0.6cm} \gridline{\fig{pic//panels//sum//1h0707_logf_flux.pdf}{0.46\textwidth}{} \fig{pic//panels//sum//ngc4051_logf_flux.pdf}{0.46\textwidth}{} } \vspace{-0.6cm} \caption{Logarithm of the PSD break frequencies against fluxes. The values and errors are calculated using MCMC. Dashed lines mark the observed frequency ranges of the PSDs for observations that do not exhibit a significant break. \label{fig:logf_flux}} \end{center} \vspace{0.4cm} \end{figure*} \section{Discussion} \label{sec:5} \subsection{Variation of the X-ray PSD in AGN} It is well known that there is a coevolution between the spectral state and the PSD shape for XRBs. However, the variability of AGN PSDs is not well explored. In this work, we attempt to look for evidence of more general correlations between AGN energy spectra and PSDs. Sources in our sample all exhibit drastic spectral variation, which increases the possibility of observing PSD changes caused by changes in the underlying physical processes rather than by random fluctuation. Indeed, we find that the PSD changes in both the shape (the detectability and frequency of the high-frequency break) and normalization (i.e. the rms amplitude).
\edit1{{In our sample, observations of individual sources with similar energy spectra tend to show similar PSDs, while a large difference in the spectral shape is also accompanied by a large difference in the corresponding PSD shape.}} This indicates that in AGN there is also a tentative link between the X-ray spectral state and the PSD. \edit1{{However, this potential correlation is limited by the sample size. A larger sample is needed to verify it.}} Moreover, we find that for Mrk 335, Mrk 766 and 1H 0707-495, the high-frequency break only appears in the high-flux state \edit1{{in the observed frequency range}}. For NGC 4051, the break frequency in the low-flux state is higher than in the high-flux state. These results are different from the coevolution of energy spectrum and PSD observed in XRBs. Therefore, it is likely that the high and low-flux spectral states observed in our AGN sample do not correspond to different accretion states as in XRBs. Indeed, previous studies of 1H 0707-495 have shown that its X-ray flux can change by more than one order of magnitude while no significant variation of the accretion rate is observed in the outer disc (e.g. \citealt{Done2016a}). This is also consistent with the expectation that the timescale of accretion state transitions is much longer than the variability timescales covered by individual \textit{XMM-Newton} observations. Therefore, the driver behind the correlation between the PSD and the energy spectrum is not likely to be the accretion rate. Another possible scenario for the observed correlation is that the changes in the energy spectrum are mainly caused by absorption from intervening material along the line of sight (e.g. \citealt{Connolly2014}), and/or reflection from a disc wind or outflowing material (e.g. \citealt{Miller2010}; \citealt{Done2016a}; \citealt{Hagino2016}). The low-flux state suffers more severe absorption, which suppresses the intrinsic X-ray variability and causes the PSD break to disappear. 
The result of NGC 4051 is also consistent with the positive correlation between $N_{\rm H}$ and $f_{\rm B}$ found by \citet{Gonzalez-Martin2018}, in which a harder spectral shape and a lower flux correspond to a higher break frequency. A simple scenario is that the X-ray corona has an extended structure in which soft X-rays come from larger radii than hard X-rays, so their corresponding variability timescales are longer (e.g. \citealt{Jin2013, Jin2017}). Therefore, as more soft X-rays are absorbed, the observed energy spectrum becomes harder and the observed variability timescale becomes shorter. This hypothesis, which is supported by the fact that the break frequency in the hard X-ray band is higher than in the soft X-ray band, should be tested with more careful studies including detailed spectral fitting and variability analysis. The additional variability caused by reflection off the absorbing material may also provide an explanation (\citealt{Miller2010}), but this is also related to the disc wind or outflowing material, whose properties and distributions may differ from one source to another. More detailed spectral analysis for each source is required, but it is beyond the scope of this work. \edit1{{We also note that the Eddington ratios of most of our sources are higher than a few per cent (see Table \ref{tab:mass_of_sources}), and so their accretion flows should behave like a standard sub-Eddington thin disc (e.g. \citealt{Shakura1973}) or a super-Eddington disc (e.g. \citealt{Poutanen2007}). Only NGC 4395 shows an Eddington ratio lower than 0.01. However, given that the uncertainty of the black hole mass estimate is large, the calculated Eddington ratio may not be accurate. Hence the possibility that the accretion rate of NGC 4395 is higher than estimated and that a normal disc exists cannot be ruled out. In comparison, XRBs can also evolve to a low/hard state when their Eddington ratios drop below $\sim~0.02$ (e.g. 
\citealt{Maccarone2003}) and become radiatively inefficient (e.g. \citealt{Narayan1995}). Therefore, our AGN sample cannot be mapped to every type of accretion state of XRBs.}} \edit1{{For XRBs, the frequency range used to study the PSD evolution with other parameters spans 4-5 decades (e.g. \citealt{Gierlinski2008}). It was found that all the characteristic frequencies increase as the mass accretion rate increases (e.g. \citealt{vanderKlis2004, Axelsson2005, Remillard2006}), which can be understood in the context of the truncated disc model (e.g. \citealt{Done2003, Done2007}). In this work, the frequency range of AGN PSDs available for comparison spans only two decades, i.e. $10^{-5}-10^{-3}$~Hz, before Poisson noise dominates at higher frequencies. Hence, it is possible that the break frequency of the observations without a break detection is located outside this frequency window. The fitting results of Model A shown in Table \ref{tab:parameters_modelA}, with slope $\alpha$ significantly larger than 1, may hint at the existence of such a break at lower frequencies. Indeed, the 2-10 keV PSD of NGC 3516 was reported to have a high-frequency break at $f_\mathrm{B}\sim 2.00\times 10^{-6}~\mathrm{Hz}$ by \citet{Markowitz2003}, although those data sets may also be affected by red-noise leak \citep{Gonzalez-Martin2012}. }} \edit1{{ The motivation for fitting Model B, a broken power law changing from a slope of $-1$ (i.e. $\alpha_{\mathrm{L}}=1$) to a slope steeper than $-2$ (i.e. $\alpha_{\mathrm{H}}\textgreater 2$), comes from the PSDs of XRBs in the soft states and has been tested in many long-term studies of AGN (e.g. \citealt{Uttley2002, Markowitz2003, McHardy06}). 
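The Model B shape described above, a broken power law with the low-frequency slope fixed at $-1$, can be sketched as follows. The parameterization (normalization defined at the break) is an illustrative choice for this sketch, not necessarily the exact form used in our fits.

```python
import numpy as np

def model_b_psd(f, norm, f_break, alpha_h):
    """Broken power-law PSD (Model B): slope -1 below f_break and
    -alpha_h above it, continuous at the break where P(f_break) = norm."""
    f = np.asarray(f, dtype=float)
    below = norm * (f / f_break) ** -1.0      # low-frequency branch, fixed slope -1
    above = norm * (f / f_break) ** -alpha_h  # high-frequency branch, slope -alpha_h
    return np.where(f < f_break, below, above)
```

For $\alpha_{\mathrm{H}}=2$, the power drops by a factor of 100 per decade above the break, compared with a factor of 10 per decade below it, which is what makes the break detectable before Poisson noise takes over.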
Although the choice of $\alpha_{\mathrm{L}}$ for Model B can change the fitted break frequency, as we have tested for some sources in our sample, a fixed slope at lower frequencies is more reasonable considering the scarcity of power spectral data points and the window effect from the division into time bins at low frequencies. }} \subsection{Estimating Black Hole Mass Using X-ray Variability} It is generally believed that the hard X-ray corona of AGN is located near the black hole's event horizon, and so the properties of the corona can be used to infer those of the black hole. Indeed, it has been found that the black hole mass ($M_{\rm BH}$) is significantly correlated with the hard X-ray excess variance ($\sigma^2_{\rm rms}$) \citep{Zhou2010, Ponti2012, Pan2015}. It is also known that the black hole mass is anti-correlated with the break frequency ($f_{\rm B}$) of the X-ray PSD \citep{McHardy06, Gonzalez-Martin2012, Gonzalez-Martin2018}. These two correlations are important as they are widely used to obtain black hole mass estimates independent of optical estimators (e.g. the virial mass based on the gas dynamics of the broad line region: \citealt{Peterson2004}). \edit1{{ The normalized excess variance is calculated following \citet{Nandra1997} using the following equation: \begin{equation} \sigma^2_{\mathrm{rms}}=\frac{1}{N\mu^2}\sum_{i=1}^N[(X_i-\mu)^2-\sigma_i^2], \end{equation} where $N$ is the number of time bins of the light curve, $\mu$ is the unweighted mean count rate, and $X_i$ and $\sigma_i$ are the count rate and its error in each time bin; the $\mu^2$ normalization makes $\sigma^2_{\mathrm{rms}}$ dimensionless and consistent with the fractional variability $F_{\mathrm{var}}$ defined below. Following \citet{Ponti2012}, every light curve is binned in 250 s. The selection criterion of our sample requires exposure times longer than $40$ ks, so we can divide the light curves into segments of 20 ks length and calculate the excess variance for every segment. The mean over segments is taken if an observation can be divided into more than one segment. 
The uncertainty of the excess variance is calculated following \citet{Vaughan2003a}, \begin{equation} \Delta \sigma^2_{\mathrm{rms}}=\sqrt{\left(\sqrt{\frac{2}{N}}\frac{\langle \sigma^2_i \rangle}{\mu^2}\right)^2 + \left(\sqrt{\frac{\langle \sigma^2_i \rangle}{N}}\frac{2F_{\mathrm{var}}}{\mu}\right)^2}, \end{equation} where $\langle \sigma^2_i \rangle$ is the mean of the squared count rate errors, and $F_{\mathrm{var}}=\sqrt{\sigma^2_{\mathrm{rms}}}$ is the fractional variability. }} However, significant dispersions have been found in these two correlations. For example, \citet{Ponti2012} reported a dispersion of $\sim$ 0.7 dex for the $\sigma^2_{\rm rms}$-$M_{\rm BH}$ relation. Since $\sigma^2_{\rm rms}$ is the integral of the X-ray PSD, variability of the PSD naturally leads to variability of $\sigma^2_{\rm rms}$ for the same AGN. This is a possible origin of the dispersion observed by \citet{Ponti2012}, which can be tested with our sample. We plot the $\sigma^2_{\rm rms}$ and $M_{\rm BH}$ of our sample in Figure \ref{fig:rms20_h} using the black hole masses \edit1{{and errors}} listed in Table \ref{tab:mass_of_sources}. Our data points are generally consistent with the linear correlation reported by \citet{Ponti2012} for the 20 ks segments, which is shown by the dotted line. Only the points of PG 1211+143 show a relatively large deviation, which is probably caused by the poorly constrained black hole mass of this source \citep{Peterson2004}. Our sample also shows significant variation in $\sigma^2_{\rm rms}$, with variations as large as $\sim$ 1.0 dex seen in NGC 1365, NGC 3516 and PG 1211+143, corresponding to $\sim$ 0.8 dex in the black hole mass estimate. Hence, the variation of $\sigma^2_{\rm rms}$ can indeed contribute to the dispersion observed in the $\sigma^2_{\rm rms}$-M$_{\rm BH}$ relation, but it should not be the only source of dispersion, because some dispersion must also be caused by the measurement uncertainty of the black hole mass. 
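The two expressions above (Nandra et al. 1997; Vaughan et al. 2003) can be sketched for a single segment as follows. Note that consistency between the error formula and $F_{\mathrm{var}}=\sqrt{\sigma^2_{\mathrm{rms}}}$ requires normalizing the excess variance by $\mu^2$, as in Nandra et al. (1997); the 20 ks segmentation and averaging over segments are left out of this sketch.

```python
import numpy as np

def excess_variance(rates, errors):
    """Normalized excess variance of one light-curve segment and its
    uncertainty, following Nandra et al. (1997) and Vaughan et al. (2003)."""
    x = np.asarray(rates, dtype=float)
    sig = np.asarray(errors, dtype=float)
    n = x.size
    mu = x.mean()                  # unweighted mean count rate
    mean_err2 = np.mean(sig ** 2)  # <sigma_i^2>
    # mu**2 normalization keeps sigma2 dimensionless, so F_var = sqrt(sigma2)
    sigma2 = np.sum((x - mu) ** 2 - sig ** 2) / (n * mu ** 2)
    fvar = np.sqrt(max(sigma2, 0.0))
    dsigma2 = np.sqrt((np.sqrt(2.0 / n) * mean_err2 / mu ** 2) ** 2
                      + (np.sqrt(mean_err2 / n) * 2.0 * fvar / mu) ** 2)
    return sigma2, dsigma2
```

In the analysis described above, this would be evaluated on each 20 ks segment of a 250 s binned light curve and the segment values averaged.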
In comparison, the $f_{\rm B}$-$M_{\rm BH}$ relation appears more dispersed. It has been reported that this relation may also depend on the bolometric luminosity ($L_{\rm bol}$, \citealt{McHardy06}) or the neutral absorption column ($N_{\rm H}$, \citealt{Gonzalez-Martin2018}). The latter dependence is such that as $N_{\rm H}$ increases, $f_{\rm B}$ also increases. In fact, $N_{\rm H}$ in \citet{Gonzalez-Martin2018} is derived from fitting the 2-6 keV spectrum, so it can also be understood as an indicator of the hard X-ray spectral shape; a larger $N_{\rm H}$ then indicates a smaller photon index, for which a larger $f_{\rm B}$ is expected. We note that NGC 4051 does follow this trend, but the other sources in our sample do not. We also plot our sample in Figure \ref{fig:plane_mchardy} using the PSD breaks detected in both 0.3-1 keV and 2-10 keV listed in Table \ref{tab:parameters} and the parameters listed in Table \ref{tab:mass_of_sources}. \edit1{{Our bolometric luminosities are taken directly from previous works (e.g. \citealt{Woo2002, Zhou2005a, Vasudevan2010, Meyer-Hofmeister2011}). It must be noted that the systematic uncertainty of the bolometric luminosity could be large, as most of the energy of an AGN spectral energy distribution is emitted in the far-UV band, which is not observable (e.g. \citealt{Jin2012}). For ease of comparison with previous works (e.g. \citealt{McHardy2016}), we did not consider the uncertainty of the bolometric luminosity in our analysis. Besides, although the X-ray flux changes in different spectral states, it only contributes a small fraction of the bolometric luminosity (e.g. \citealt{Vasudevan2010, Jin2012}), and so it should not affect our results significantly.}} As shown by Figure \ref{fig:plane_mchardy}, our data points are roughly consistent with the relation reported by \citet{McHardy06}, which is shown as the dotted line. 
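As a quick cross-check, the Eddington ratios listed in Table \ref{tab:mass_of_sources} can be reproduced from the tabulated masses and bolometric luminosities; a minimal sketch, assuming the standard $L_{\rm Edd} \approx 1.26\times10^{38}\,(M_{\rm BH}/{\rm M_{\bigodot}})$ erg s$^{-1}$:

```python
import math

def eddington_ratio(log_m_bh, log_l_bol):
    """Eddington ratio from log10(M_BH / M_sun) and log10(L_bol / erg s^-1),
    using L_Edd ~ 1.26e38 * (M_BH / M_sun) erg/s."""
    log_l_edd = math.log10(1.26e38) + log_m_bh
    return 10.0 ** (log_l_bol - log_l_edd)
```

For example, Mrk 335 ($\log M_{\rm BH}=7.23$, $\log L_{\rm bol}=44.69$) gives $\approx 0.23$ and NGC 4395 ($5.4$, $41.0$) gives $\approx 0.0032$, matching the tabulated values.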
The relatively large variation of the break frequency shown by NGC 4051 brings a $\sim$ 0.3 dex uncertainty to the black hole mass estimate, without considering the changes of other parameters. However, \edit1{{although we only fit the spectra with a fixed Galactic absorption $N_{\mathrm{H}}$, given that this value is constant for different observations of the same object,}} few sources in our sample show significant absorption when fitting the 2-6 keV spectrum with an \edit1{{unfixed}} absorbed power law model, which makes it hard to test the relation reported by \citet{Gonzalez-Martin2018}. \edit1{{Besides, although the choice of the 2-6 keV band is more conservative considering the effect of reflection components such as the Fe line on the spectrum and variability, the 2-6 keV light curves and PSDs do not show significant differences from those in 2-10 keV.}} \edit1{{In summary, our results suggest that the variation of the PSD, which is linked to the spectral state of AGN, can account for a significant fraction of the uncertainty of these scaling relations, so it is recommended to consider the intrinsic variation of the PSD when using AGN X-ray variability parameters (e.g. the rms and the high-frequency break) to estimate the black hole mass.}} In principle, it is possible to take spectral variability into account and reduce the dispersion of the existing correlations, similar to the work done by \citet{Gonzalez-Martin2018}, but the complexity of AGN X-ray emission and the lack of a large sample with high-quality data make this difficult. For black hole mass estimates, it is perhaps more appropriate to use observations taken when the source is in its high-flux state, for which the X-ray spectrum is generally {\it simple} (\citealt{Gallo2006}) and the X-ray variability mainly reflects the properties of the corona. \begin{table}[t!] 
\begin{center} \caption{\edit1{{Black hole masses, luminosities and Eddington ratios of our sample.}} \label{tab:mass_of_sources}} \begin{tabular}{lccr} \hline Source Name & $\log M_{{\rm BH}}$ & $\log L_{{\rm bol}}$ & $L_{\rm bol}/L_{\rm Edd}$\\ \hline Mrk 335 & $7.23\pm 0.04^{\rm a}$ & $44.69^{\rm f}$ & 0.23 \\ Mrk 766 & $6.2\pm 0.3^{\rm a}$ & $44.08^{\rm g}$ & 0.60 \\ 1H 0707-495 & $6.3\pm 0.5^{\rm a}$ & $44.43^{\rm h}$ & 1.1 \\ IRAS 13224-3809 & $6.28\pm 0.05^{\rm b}$ & $45.55^{\rm h}$ & 15 \\ NGC 1365 & $6.6\pm 0.3^{\rm c}$ & $43.8 ^{\rm i}$ & 0.13 \\ NGC 3516 & $7.40\pm 0.05^{\rm a}$ & $44.29^{\rm f}$ & 0.062 \\ NGC 4051 & $6.1\pm 0.1^{\rm a}$ & $43.56^{\rm f}$ & 0.23 \\ NGC 4395 & $5.4\pm 0.2^{\rm d}$ & $41.0^{\rm d}$ & 0.0032 \\ PG 1211+143 & $7.61\pm 0.15^{\rm e}$ & $45.81^{\rm f}$ & 1.3 \\ \hline \end{tabular} \end{center} \vspace{-0.3cm} \tablecomments{The BH masses of all the sources in our sample, and the bolometric luminosities \edit1{{and calculated Eddington ratios}} of the sources with at least one detection of a high-frequency break, are listed. The values \edit1{{and errors}} of the BH mass estimates in units of ${\rm M_{\bigodot}}$ come from (a)~\citet{Gonzalez-Martin2018} (which refers to \citet{Bian2003}, \citet{Bentz2009}, and \citet{Zu2011}), (b)~\citet{Alston2020a}, (c)~\citet{Combes2019}, (d)~\edit1{{\citet{Brum2019}}}, and (e)~\edit1{{\citet{Kaspi2000}}}. The values of the bolometric luminosities in units of ${\rm erg\ s^{-1}}$ are from (f)~\citet{Woo2002}, \edit1{{(d)~\citet{Brum2019}}}, (h)~\citet{Zhou2005a}, and (i)~\citet{Vasudevan2010}.} \vspace{0.2cm} \end{table} \begin{table}[t!] 
\begin{center} \caption{\edit1{{List of $\sigma_{\mathrm{rms}}^2$ computed in the 2–10 keV band with 20 ks intervals.}}\label{tab:rms20}} \begin{tabular}{lcr} \hline Source & Obs & $\sigma_{\mathrm{rms,\ 20ks}}^2$ \\ \hline Mrk 335 & 2006 & $0.011\pm 0.001$ \\ & 2009(1) & $0.017\pm 0.006$ \\ & 2009(2) & $0.020\pm 0.005$ \\ Mrk 766 & 2001 & $0.023\pm 0.002$ \\ & 2005(1) & $0.032\pm 0.004$ \\ & 2005(2) & $0.030\pm 0.002$ \\ 1H 0707-495 & 2002 & $0.15\pm 0.04$ \\ & 2007 & $0.23\pm 0.06$ \\ & 2010 & $0.23\pm 0.07$ \\ IRAS 13224-3809 & 2002 & $0.19\pm 0.09$ \\ & 2011 & $0.16\pm 0.10$ \\ & 2016 & $0.25\pm 0.09$ \\ NGC 1365 & 2007 & $0.007\pm 0.007$ \\ & 2012 & $0.035\pm 0.004$ \\ & 2013 & $0.045\pm 0.003$ \\ NGC 3516 & 2001 & $(1.5\pm 1.0) \times 10^{-3}$ \\ & 2006 & $(7.0\pm 0.8) \times 10^{-3}$ \\ NGC 4051 & 2001 & $0.105\pm 0.003$ \\ & 2018 & $0.041\pm 0.003$ \\ NGC 4395 & 2003 & $0.16\pm 0.02$ \\ & 2014 & $0.13\pm 0.01$ \\ & 2019 & $0.12\pm 0.01$ \\ PG 1211+143 & 2001 & $(2.2\pm 3.9) \times 10^{-3}$ \\ & 2007 & $0.024\pm 0.006$ \\ \hline \end{tabular} \end{center} \vspace{-0.3cm} \vspace{0.2cm} \end{table} \begin{figure}[t!] \begin{center} \includegraphics[width=0.47\textwidth]{pic//rms//rms20_h_witherr.pdf} \caption{Excess variance calculated using 20 ks segments of light curves against the black hole mass. The sizes of the points indicate the relative 2-10 keV flux of the corresponding observations. The best-fit relation of \citet{Ponti2012} is marked as the black dotted line, \edit1{{and the 1-$\sigma$ error region is shown as a gray shaded area, obtained by random sampling from the distributions of the slope and normalisation of \citet{Ponti2012}}}. \label{fig:rms20_h}} \end{center} \vspace{0.4cm} \end{figure} \begin{figure}[t!] 
\begin{center} \includegraphics[width=0.47\textwidth]{pic//plane//logf_logMlogL_mchardy_witherr.pdf} \caption{The relation between the PSD break frequency ($f_{\rm B}$), the black hole mass ($M_{\rm BH}$) and the bolometric luminosity ($L_{\rm bol}$). The values on the horizontal axis are calculated using the expression given in \citet{McHardy06} (shown as the diagonal black dotted line), while the break frequencies are those measured in our sample. \edit1{{The horizontal errors derive from the uncertainties of the mass estimates. The gray shaded areas display $\pm 0.7$ dex regions around the model, as the more recent study of \citet{Gonzalez-Martin2018} found standard deviations around this value.}} \edit1{{The related parameters}} are listed in Tables \ref{tab:parameters} and \ref{tab:mass_of_sources}. \label{fig:plane_mchardy}} \end{center} \vspace{0.4cm} \end{figure} \section{Conclusions} \label{sec:6} In this work, we aim to explore the potential link between the X-ray PSD and the spectral state of AGN. By making use of high-quality \textsl{XMM-Newton} data, we studied the spectral-timing properties of nine AGN exhibiting drastic spectral variation. We applied the Bayesian MCMC method to the PSD modelling, and generalized it to the comparison between different periodograms to identify intrinsic PSD variations. Our results are summarized below. \begin{itemize} \itemsep0em \item We found that, out of the nine objects, four showed a high-frequency break in their PSDs in at least two of the observations. Among them, in three objects, namely 1H 0707-495, Mrk 335 and Mrk 766, the break was found to exist only in a state with a high flux and a steep spectrum, but not in the low-flux state. Their rms variations were also found to be smaller in the high-flux state than in the low-flux state. Interestingly, these sources are all NLS1s with high mass accretion rates near/over the Eddington limit. 
\item We found that NGC 4051 showed a PSD break in both the high and low-flux states, and that the break frequency in the low-flux state was higher. \item Based on the observations of the four AGN in which more than one PSD break is detected, we \edit1{{suggested}} a tendency for a large difference in the spectral flux and shape to be associated with a large difference in the PSD shape. This result suggests a possible link between PSDs and spectral states in AGN. \item We investigated the effect of the PSD variation on black hole mass estimates based on X-ray variability for AGN. We found that the PSD variation can introduce as much as 1.0 dex variation in the X-ray rms, corresponding to $\sim$ 0.8 dex in the black hole mass estimate, although it is not the only source of the dispersion observed in the $\sigma^2_{\rm rms}$-$M_{\rm BH}$ relation of \citet{Ponti2012}. Moreover, the change of the high-frequency break of the PSD results in a large scatter in the $f_{\rm B}$-$M_{\rm BH}$ relation, since the frequency $f_{\rm B}$ likely depends on the spectral state. \edit1{{The correlation between the timing properties and the spectral state can thus account for a significant fraction of the uncertainty of these scaling relations when they are used to estimate black hole masses for AGN.}} \end{itemize} It should be noted that our result is tentative given the limited sample size. \edit1{{Besides, the Eddington ratios of most of our sources are higher than a few per cent, and the frequency range investigated in this work spans only two decades. Hence it is not possible to conduct a systematic comparison between our AGN sample and all the accretion states of XRBs.}} More observational data for a larger sample, especially taken at truly different accretion states, are essential to confirm this result. Besides, this work only covers the high-frequency break of AGN PSDs above $10^{-5}$ Hz. 
The study of the low-frequency break, like the work done for Ark 564 (\citealt{McHardy2007b}), requires long-term monitoring data covering timescales from days to years. Future X-ray instruments with high sensitivity and a large field of view, such as the Einstein Probe mission (\citealt{Yuan2016}; \citealt{Yuan2018}), can provide monitoring data on timescales from days to years for hundreds of nearby AGN. These future observations, combined with deep observations by \textsl{XMM-Newton} and \textsl{eROSITA} (\citealt{Merloni2012}; \citealt{Predehl2016}), will provide new datasets with which to improve our understanding of the link between the PSD and the spectral state of AGN. \acknowledgments CJ acknowledges the support of the National Natural Science Foundation of China through grant 11873054, as well as the support of the Strategic Pioneer Program on Space Science, Chinese Academy of Sciences, through grant XDA15052100. This work has made use of observations conducted by \textit{XMM-Newton}, an ESA science mission with instruments and contributions directly funded by ESA Member States and the USA (NASA). This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. \facilities{{\it XMM-Newton}} \software{ emcee \citep{Foreman-Mackey2013}, HEAsoft \citep{NASA2014}, matplotlib \citep{Hunter2007}, numpy \citep{vanderWalt2011}, scipy \citep{Virtanen2020}, SAOImage DS9 \citep{Joye2003}, SAS \citep{Gabriel2004}, XSPEC \citep{Arnaud1996} } \newpage
\section{Introduction} Atmospheric neutrinos have contributed significantly to our understanding of neutrino properties and, in the emerging field of neutrino astronomy, they constitute the main source of background. A high-precision calculation of their flux is, therefore, paramount to achieve the ambitious physics goals of future large-volume neutrino detectors, such as Hyper-Kamiokande and IceCube-Gen2. Hadronic interactions play a crucial role in flux calculations. Of particular importance is the modeling of light meson production in the very forward phase-space, {\it i.e.}~of secondary particles produced at very small scattering angles that would usually escape detection at modern colliders due to the presence of a beam pipe. Fixed-target experiments can shed light on some specific regions of this phase-space, but they can only be performed for a limited number of targets and energies. Theoretically, this phase-space is the domain of non-perturbative QCD, implying the absence of robust computational methods that would allow one to predict the required particle spectra from first principles. There is, however, a number of phenomenological models implemented as Monte Carlo event generators in use, and in the past decades they have evolved into sophisticated but still imperfect tools. The different ideologies of these models, or of purely data-driven methods \cite{Engel:2001qi}, constitute to a large extent the main source of uncertainty in atmospheric lepton flux calculations \cite{Barr:2006it,Fedynitch:2018vfe}, and this uncertainty cannot be easily reduced due to the absence of more constraining data. One possible way to reduce its impact on neutrino flux calculations is to exploit the correlation with the flux of atmospheric muons that originate in the same decays of charged pions and kaons as the neutrinos. 
In comparison to neutrinos, the flux of $\mu^+$ and $\mu^-$ can be measured relatively precisely with small spectrometers, and it can therefore serve as a calibration source for the models involved in the flux computations. This method \cite{Honda:2006qj} has been employed in the most successful neutrino flux calculation, by Honda et al.~\cite{Honda:2015fha}. A large number of experiments have performed measurements of both the flux and the charge ratio $(\mu^+/\mu^-)$. The measurements are affected by specific experimental conditions, ranging from atmospheric effects to the magnet alignment for charge identification, which makes their comparison complicated. Here, for the first time, we present a comprehensive analysis of publicly available atmospheric muon data by combining several experiments with a detailed treatment of their systematic uncertainties. Our aim is to derive corrections for hadronic interaction models and test their impact on the resulting {\it calibrated} neutrino fluxes. \section{Flux model} \label{sec:flux_computation} We employ the cascade code {\sc MCEq}{} \cite{Fedynitch:2015zma}\footnote{https://github.com/afedynitch/MCEq} to perform the calculations of the lepton fluxes. Despite its high computational speed, the code is not fast enough to be directly involved in a minimization. Therefore, we pre-compute a database of fluxes, individually taking into account the conditions of each measurement. These conditions include the reported zenith angles, the altitude, the atmosphere (averaged over the duration of data-taking) using NRLMSISE-00 \cite{Picone:2002go}, and the variable in which the spectrum is reported (momentum or energy). {\sc MCEq}{} contains functions that modify the multiplicity distributions of pions and kaons for proton-air and neutron-air interactions, starting from a baseline interaction model. 
Currently, we use \sibyll{-2.3c} \cite{Riehn:2017mfm} and \dpmjet{-III-19.1} \cite{Fedynitch:2015kcn} in combination with GSF as the model for the cosmic ray flux at the top of the atmosphere \cite{Dembinski:2017zsh}. GSF is a fit to a large set of cosmic ray data relying on a minimal set of assumptions and comes with error estimates. These have not yet been incorporated into this work and are the subject of a future study. To parameterize the impact of hadronic uncertainties, we subdivide the particle production phase-space into discrete regions in projectile energy and secondary energy fraction, exactly following Barr et al.~(see Fig.~3 in \cite{Barr:2006it}). While the full scheme contains $11 \times 2$ parameters $\mathcal{B}_i$ for pions and kaons of both charges in proton-air interactions, the number of contributing parameters at energies $>20$ GeV is eight (G, H, W and Y for each charge sign). To propagate the effect of the modified particle yields, we use an original scheme developed for the fast propagation of model errors impacting the lepton flux calculations \cite{Fedynitch:2018vfe}. In this scheme a Jacobian ``matrix'' is constructed by computing the first-order term of a Taylor expansion of the muon flux $\Phi(E_\mu)$ under a perturbation $\delta$ with respect to the variation of a single parameter $\mathcal{B}_i$: \begin{equation} \frac{\partial \Phi(E_\mu)}{\partial \mathcal{B}_i} = \frac{\Phi(E_\mu,\mathcal{B}_i=1 + \delta) - \Phi(E_\mu,\mathcal{B}_i= 1 - \delta)}{2 \delta}. \end{equation} These Jacobians are interpolated and stored together with the unperturbed flux. The flux for each experimental site with the corrections, $\mathcal{B}_i$, applied can then be computed easily from \begin{equation} \Phi(E_\mu, \mathcal{B}_a, \mathcal{B}_b, \dots) = \Phi(E_\mu) + \sum_i \mathcal{B}_i \frac{\partial \Phi(E_\mu)}{\partial \mathcal{B}_i}. 
\end{equation} Since the coupled cascade equations (see for instance \cite{Gaisser:2016uoy}) solved by {\sc MCEq}{} are linear, this approach is exact and does not require higher-order terms. The resulting database of fluxes and Jacobians for each experimental site is fast to evaluate for arbitrary combinations of $\mathcal{B}_i$ and can be directly used by minimizers. \section{Muon flux and charge ratio data} We conducted a comprehensive literature survey to identify suitable measurements of muon fluxes and charge ratios. In order for a measurement to be included, it must have been published in a peer-reviewed journal with a detailed description of the measurement conditions and the systematic uncertainties. An incomplete description of the systematic uncertainties was the most frequent reason for discarding a measurement. Table \ref{table:experiments} shows the list of experiments that were selected for the analysis. \begin{table}[!t] \centering \begin{adjustbox}{width=1\textwidth} \begin{tabular}{@{}lllllll@{}} \toprule Experiment & Energy (GeV) & Measurements & Reported unit & Location & Altitude & Zenith range \\ \midrule AMS-02 & 0.1-2500 & Flux \& charge ratio & rigidity & 28.57$^\circ$N, 80.65$^\circ$W & 5 m (sea level) & \\ BESS-TeV & 0.6-400 & Flux & momentum & 36.2$^\circ$N, 140.1$^\circ$E & 30 m & 0-25.8$^\circ$ \\ CMS & 5-1000 & Charge ratio & momentum & 46.31$^\circ$N, 6.071$^\circ$E & 420 m & $p\cos\theta_z$ \\ L3+C & 20-3000 & Flux \& charge ratio & momentum & 46.25$^\circ$N, 6.02$^\circ$E & 450 m & 0-58$^\circ$ \\ MINOS & 1000-7000 & Charge ratio & total energy & 47.82$^\circ$N, 92.24$^\circ$W & 5 m (sea level) & unfolded \\ OPERA & 891-7079 & Charge ratio & total energy & 42.42$^\circ$N, 13.51$^\circ$E & 5 m (sea level) & $E\cos\theta^*$ \\ \bottomrule \end{tabular} \end{adjustbox} \caption{List of measurements used for calibration. Most data are taken at vertical incidence angles or are corrected to vertical through model-dependent unfolding. 
At this stage, we are using data taken near sea level. A few more data sets are available from high-altitude balloon flights and near-horizontal directions, which we aim to include in the future.} \label{table:experiments} \end{table} We include muon fluxes from L3+cosmic \cite{Achard:2004ws}, Bess-TeV \cite{Haino:2004nq} and AMS-02\footnote{While the recent AMS-02 reference is not a peer-reviewed journal publication, we were curious about the impact of this measurement, which comes with small errors, a large energy range and detailed systematic uncertainties.} \cite{Duranti:2012css}. Charge ratio measurements come from CMS \cite{Khachatryan:2010mw}, OPERA \cite{Agafonova:2014mzx}, MINOS \cite{Adamson:2007ww}, and also from L3+cosmic and AMS-02. The collection of data covers an energy range from below 1~GeV to 7~TeV, and altitudes range from sea level to 450~m. Only L3+C reports measurements in various zenith ranges. The remaining experiments report either at almost vertical angles or have unfolded their angular distribution, reporting a vertical-equivalent measurement. We note that this practice, while appealing, introduces an additional model dependence into the data. Reporting the measurement prior to this correction would greatly benefit later analyses. \section{Data analysis and calibration scheme} \label{sec:scheme} We created a dedicated data analysis framework that enables us to accurately include the detailed systematic uncertainties reported by each experiment in conjunction with the database of individual fluxes and their correction functions for each measurement. In the code, each experiment is implemented as a set of functions for retrieving the published data with errors; correcting the data given a set of systematic functions provided by the experiments (if available); and obtaining a flux expectation and comparing it with the corrected data using a $\chi^2$ function. The function includes penalty terms for deviations of the 24 systematic functions considered. 
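A rough sketch of such a figure of merit, combining the linear Jacobian corrections of Section~\ref{sec:flux_computation} with Gaussian penalty terms, is given below. The data layout, parameter names and penalty normalization are illustrative assumptions for this sketch, not the actual framework code.

```python
import numpy as np

def chi2(params, data):
    """Sketch of the fit figure of merit: the model flux is the precomputed
    baseline plus a linear Jacobian correction for each yield-scaling
    parameter (e.g. c_pi+, c_pi-, c_K+, c_K-), compared with the measured
    fluxes; nuisance (systematic) pulls enter as Gaussian penalty terms."""
    yields = params["yields"]        # parameter name -> deviation from baseline
    nuisances = params["nuisances"]  # systematic name -> pull in units of sigma
    total = 0.0
    for exp in data:
        model = exp["baseline"].copy()
        for name, value in yields.items():
            # Phi = Phi_0 + sum_i c_i * dPhi/dc_i (linear, exact for MCEq)
            model += value * exp["jacobian"][name]
        resid = (exp["flux"] - model) / exp["error"]
        total += np.sum(resid ** 2)
    total += sum(pull ** 2 for pull in nuisances.values())  # penalty terms
    return total
```

A minimizer such as iMinuit would then vary the yield parameters and nuisance pulls simultaneously to minimize this function over all experiments.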
The minimization is performed using the code i{\sc Minuit} \cite{iminuit}. The flux database is generated as described in Section~\ref{sec:flux_computation}. A scheme that adds sufficient degrees of freedom to the flux calculations and captures all hadronic uncertainties is a topic of active discussion. We originally started following the breakdown into $\mathcal{B}_i$ parameters adopted from Barr et al.~\cite{Barr:2006it}. In the initial studies, we attempted to fit the calculated muon fluxes to the combined data using the high-energy parameters (G, H, W and Y), and found their effects to be strongly correlated. Muon flux and charge ratio data are not sufficient to separate the impact of different regions. Particularly problematic are the parameter combinations related to the same projectile energy (such as G and H). We followed multiple steps to simplify the scheme, which resulted in only four parameters that scale the total yield of each charged meson independently ($c_{\pi^+}$, $c_{\pi^-}$, $c_{\text{K}^+}$ and $c_{\text{K}^-}$). This parameterization was found to be more stable against small variations in the input data and, more importantly, is able to describe the data as well as the complicated scheme with twice the number of free parameters. \section{Experimental considerations} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{err_vs_e.pdf} \caption{Expected precision of the calibration parameters as a function of the energy threshold. Clearly, the low-energy data is very valuable in constraining the contribution of each of the parameters.} \label{fig:err_vs_e} \end{figure} The experiments used cover an energy range from a few GeV to several TeV. We find that the low-energy data plays a crucial role in the precise determination of the correction values, as well as in breaking correlations between them.
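Because the cascade equations are linear, evaluating the corrected flux for a given set of meson-yield scalings reduces to a linear combination of the precomputed baseline flux and the Jacobians from the database. A minimal sketch, with array names that are assumptions for illustration:

```python
import numpy as np

def corrected_flux(baseline, jacobians, c):
    """Muon flux for yield scalings c = (c_pi+, c_pi-, c_K+, c_K-).

    `baseline` is the unmodified flux on an energy grid and `jacobians[i]`
    the precomputed derivative of the flux with respect to scaling i
    (both hypothetical stand-ins for the database entries)."""
    return baseline + sum(ci * Ji for ci, Ji in zip(c, jacobians))
```

Since no cascade has to be re-run, this evaluation is cheap enough to be called repeatedly inside the minimization loop.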
At the same time, we expect the impact of the accurate modeling of the cosmic ray flux, including the geomagnetic cutoff and solar modulation, to become relevant at very low energies. As demonstrated in \figu{err_vs_e}, a value of 5\,GeV for the energy cutoff below which we discard data is an optimal trade-off where data and calculations can reach an agreement and the modifications introduced are stable. With the energy threshold fixed to 5\,GeV we also tested the constraints that single experiments could provide on their own. We find that pion yields are mainly constrained by Bess-TeV and L3+cosmic data, while for the kaons, L3+cosmic, MINOS and OPERA are the main contributors to the fit. By removing one experiment at a time from the analysis, we find that L3+cosmic has the largest impact, most likely because it covers regions that no other single experiment can access in energy and arrival direction. The same exercise was conducted on the data. We find that the inclusion or exclusion of single experiments can significantly change the overall agreement between the calculations and data. The main contributors to these changes are AMS-02, with rather small errors below 50~GeV and a poor agreement with data above that energy, and CMS, which has data points that significantly deviate from models and experiments below 20~GeV. \section{Results} \begin{figure} \centering \includegraphics[width=\textwidth]{results.pdf} \caption{Left panel: Muon flux for the sum of both charges compared to the \sibyll{-2.3c} and GSF flux calculation with corrections from Table \ref{table:results} applied. For the two most vertical zenith angles the data from AMS-02 and Bess-TeV overlap with L3+cosmic. Some tension occurs between AMS-02 and L3+cosmic above 50~GeV, while for Bess-TeV the agreement is excellent within the errors.
There are signs of a systematic disagreement between the calculated and observed angular dependence of the spectrum that is not covered by the systematic uncertainties provided by the experiment. Right panel: Muon charge ratio. As mentioned in the text, most data are unfolded to strictly vertical zenith angles. The agreement across the entire energy range is excellent, except for one of the three CMS measurements. The zenith dependence from L3+cosmic is described well by the corrected model.} \label{fig:model_vs_exp} \end{figure} The result of the fit is demonstrated in \figu{model_vs_exp}. For vertical directions the corrected model matches the data very well. The zenith dependence, presently only available from L3+cosmic data, shows some systematic deviation. We speculate that this may come from a systematic uncertainty not covered by the available correction functions, since none of the performed checks and variations of the fit conditions resulted in a better description of this aspect. The muon charge ratio is also very well described, with the exception of one of the three CMS measurements. The charge ratio is the most relevant in constraining the $c_{\text{K}^+}$ parameter. \begin{figure} \centering \includegraphics[width=0.9\textwidth]{corr_full.pdf} \caption{Correlation matrix for the calibration and systematic parameters involved in the fit. Different systematic uncertainties of the same experiment carry the same label for the sake of clarity. As the lower left block demonstrates, the calibration parameters possess sizable correlations, which are nonetheless smaller than in schemes with eight or more parameters or in fits with a higher energy threshold.} \label{fig:corr} \end{figure} As we argued in Sec.~\ref{sec:scheme}, the fit parameters typically have strong correlations. This is apparent in the lower left block of the correlation matrix in Fig.~\ref{fig:corr}.
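A correlation matrix such as the one shown is obtained by normalizing the covariance matrix returned by the minimizer, $\rho_{ij} = C_{ij}/\sqrt{C_{ii}C_{jj}}$. For reference, a one-line sketch:

```python
import numpy as np

def correlation_from_covariance(cov):
    """Normalize a fit covariance matrix to a correlation matrix:
    rho_ij = C_ij / sqrt(C_ii * C_jj)."""
    sigma = np.sqrt(np.diag(cov))          # parameter standard deviations
    return cov / np.outer(sigma, sigma)    # element-wise normalization
```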
\begin{table} \centering \begin{tabular}{@{}lll@{}} \toprule Parameter & Best fit & Error \\ \midrule $c_{\pi^-}$ & +0.141 & $\pm$0.017 \\ $c_{\pi^+}$ & +0.116 & $\pm$0.016 \\ $c_{\text{K}^-}$ & +0.402 & $\pm$0.073 \\ $c_{\text{K}^+}$ & +0.583 & $\pm$0.055 \\ \bottomrule \end{tabular} \caption{Values and errors for the determined correction parameters for the combination \sibyll{-2.3c} and GSF. For \dpmjet{-III-19.1} the pion parameters are similar, but the kaon parameters are slightly higher.} \label{table:results} \end{table} The obtained values for the calibration parameters $c_i$ are listed in Table \ref{table:results}. The pion corrections are at the level of 10-15\%. The kaon parameters suggest much higher corrections at the level of 50\%. These values are determined with a precision of about 10\% and 10-20\%, respectively. \section{Discussion and Outlook} In this work, we are developing a calibration scheme for high-precision atmospheric neutrino flux calculations. Our aim is to reduce the uncertainties arising from the modeling of hadronic interactions and cosmic ray fluxes by deriving corrections from high-quality atmospheric muon data. Initially, it was envisioned to obtain constraints on the sub-divisions of the particle production phase space according to \cite{Barr:2006it}. However, the many parameters turned out to be strongly correlated and not well constrained by the world's atmospheric muon data. A drastic simplification to only four parameters resulted in smaller correlations, a more stable fit and a similarly good description of the data. With this simplified scheme, the derived corrections suggest an increase of roughly 10\% for pions, and an increase of 50\% for kaons. These values correspond to corrections of the particle production in the specific phase space relevant for atmospheric lepton fluxes ($x_{\rm lab} > 0.1$) and do not necessarily imply an increase of central multiplicities by the same amount.
While the errors of these values are small, the current best fit is most likely dominated by systematic effects that are not yet considered, such as uncertainties in the cosmic ray flux. It is also noteworthy that some regions in the experimental data cannot be described, regardless of modifications to their correction functions and our model. We will further investigate the impact of these disagreements and reconsider using parts of the data in the future. Furthermore, we aim to study the impact of measurements made at high altitude by balloon experiments and of data taken at near-horizontal directions. \bibliographystyle{ICRC}
\section*{Acknowledgements} We thank Livio Baldini Soares, Kenton Lee, Tom Kwiatkowski, Ilya Eckstein and others at Google Research for insightful discussions. This work is partially supported by NSF Awards IIS-1513966/ 1632803/1833137, CCF-1139148, DARPA Awards\#: FA8750-18-2-0117, FA8750-19-1-0504, DARPA-D3M - Award UCB-00009528, Google Research Awards, gifts from Facebook and Netflix, and ARO\# W911NF-12-1-0241 and W911NF-15-1-0484. \section{Pre-training} \label{sec:appendix:pretraining} We train on English Wikipedia, processed with the entity linking and named entity recognition tools from the Google Cloud NLP API\footnote{\url{https://cloud.google.com/natural-language/docs/basics\#entity_analysis}}. We use existing hyperlinks in Wikipedia as additional entity annotations. All models are pre-trained on 128 TPUs using the AdamW optimizer \citep{adamw} with learning rate 1e-4 and batch size of 4096. Each passage in the batch has length $T=128$, excluding entity tokens. The Mention Encoder\xspace and \textsc{batch-tome}\xspace are pre-trained for 1 million steps with 50k warmup steps, and \textsc{tome}\xspace is trained for 500k additional steps with 25k warmup steps after initialization from \textsc{batch-tome}\xspace. Both models are trained with linear learning rate decay. Mention Encoder\xspace and \textsc{batch-tome}\xspace share Transformer weights during Mention Encoder\xspace pre-training. We apply gradient clipping with a norm of 1.0 and weight decay of 0.01. Weight decay is applied to all weights except layer norm and bias weights. \textsc{batch-tome}\xspace and \textsc{tome}\xspace are trained with weight 0.85 on the MLM objective and 0.15 on the entity coreference resolution objective. We mask 20\% of whole entity mentions and 10\% of other tokens. We limit the coreference resolution objective to mentions of the 1 million most frequent Wikipedia entities. We use 24 mentions per sample, with a batch size of 32 samples per TPU.
We subsample mentions uniformly if the average number of annotated mentions on a TPU exceeds 24. Key mention encodings have dimension $d_K=128$ and value and coreference mention encodings have dimension $d_V = d_C = 512$. \paragraph{Disallowed same passage retrieval for Mention Encoder\xspace.} We want the model to use memory as a source of additional information for processing a passage. Therefore, we explicitly set attention weights to 0 for memories generated from the same passage as the current one. \subsection{Mention Encoder\xspace data generation} \label{sec:appendix:pretrain_mention_encoder} We pre-train Mention Encoder\xspace to produce mention encodings that are useful for \textsc{batch-tome}\xspace. In order to provide \textsc{batch-tome}\xspace with an incentive to use the memory, we need to ensure that mentions from different samples within a batch are relevant to each other. We achieve this by batching passages from the same or related Wikipedia articles. We generate clusters of 256 passages from Wikipedia articles using a greedy method. First, we create a cluster from the longest unused Wikipedia article and add related articles until the cluster consists of 256 passages. In particular, at each step we add the article with the largest Jaccard similarity between its entity set and the entity set of articles in the current cluster. \subsection{Coreference resolution loss} \label{sec:appendix:coref_loss} For every linked mention $m$ in the batch we compute a mention encoding $z_m$ by applying a separate $\mathtt{SpanEncodingLayer}$ on the output of \textsc{batch-tome}\xspace. First, we compute the loss for every linked mention $m$ in the batch. To this end, we denote linked mentions in \textit{every other passage} in the batch as positive, $\mathcal{P}^+(m)$, if they have the same entity ID as $m$ and negative, $\mathcal{P}^-(m)$, otherwise.
The loss per mention is the average of the cross-entropy losses over all positive mentions $m^+ \in \mathcal{P}^+(m)$: \begin{align*} \mathcal{L}_{coref}(m) = -\frac{1}{|\mathcal{P}^+(m)|} \sum_{m^+ \in \mathcal{P}^+(m)} \log \frac{\exp(z_m^T z_{m^+})}{\exp(z_m^T z_{m^+}) + \sum_{m^- \in \mathcal{P}^-(m)} \exp(z_m^Tz_{m^-})} \end{align*} The total loss is the average of the losses over linked mentions that have at least one positive mention (i.e., the set $\mathcal{P}^+(m)$ is not empty). \section{Experiments} \label{sec:appendix:evaluation} \subsection{Fine-tuning setup} \textsc{tome}\xspace is fine-tuned on 32 TPUs using the Adam optimizer with a learning rate of 1e-5 and total batch size 32. In contrast to pre-training, we set max mentions to 32 per sample for fine-tuning. We use 1000 warmup steps and linear learning rate decay. Gradient clipping and weight decay are the same as during pre-training. We take the highest scoring checkpoint on dev sets and evaluate it on the test set. We use the spaCy noun chunker to detect noun phrases and treat these as claim/question entity mentions. The model can be fine-tuned with full memory on a server of 8 A100 GPUs or 16 v3/v4 TPUs. A model with half memory (75M mentions) can be fine-tuned on 8 V100s/P100s or 8 TPUs. \subsection{Baselines} Following \cite{realm}, we used REALM to perform extractive question answering on the TriviaQA and ComplexWebQuestions datasets. We also adapted the model to the classification setting in order to apply it to claim verification tasks. Given an input claim $X$, we compute the probability of a prediction $Y$ (whether the claim holds true or not) as a marginalization over the retrieval $Z$: \begin{align*} \mathtt{Pr}(Y | X) = \sum_{z \in Z} \mathtt{Pr}(Y | X, Z = z) \cdot \mathtt{Pr}(Z = z | X) \end{align*} where $\mathtt{Pr}(Y | X, Z = z)$ is the output probability produced by the reader model and $\mathtt{Pr}(Z = z | X)$ is produced by the retrieval model.
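The marginalization above is a weighted sum of the reader's class probabilities, with the retrieval probabilities as weights. A toy numpy sketch (array shapes are assumptions for illustration):

```python
import numpy as np

def marginal_class_prob(p_y_given_xz, p_z_given_x):
    """Pr(Y|X) = sum_z Pr(Y|X,z) * Pr(z|X).

    p_y_given_xz: [num_retrievals, num_classes] reader probabilities.
    p_z_given_x:  [num_retrievals] retrieval probabilities (sum to 1)."""
    return p_z_given_x @ p_y_given_xz
```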
\subsection{Claim verification} \label{sec:appendix:evaluation:results} See Table~\ref{table:claim_full} for the results on the development and test splits of the claim verification datasets. Additionally, Table~\ref{table:fm2_original} compares our FM2 results to the original dataset baselines. \subsection{Question Answering} We report additional results on the EntityQuestions dataset from \citet{eq}. The dataset consists of questions involving rare entities, making it especially challenging for modern retrieval methods such as DPR. Evaluation results for \textsc{tome}\xspace models and baselines are shown in Table~\ref{table:eq_recall} and Table~\ref{table:eq_accuracy}. Following \citet{eq}, we report recall at 20 as an evaluation metric. Since \textsc{tome}\xspace retrieves mentions rather than passages, a direct comparison is difficult. We evaluate \textsc{tome}\xspace conservatively, treating recall at 20 as successful if one of the 20 highest scoring mentions belongs to the correct entity (in contrast to DPR, for which the correct answer only has to be somewhere in the retrieved 100-word document). \textsc{tome}\xspace sets the state of the art on this dataset and outperforms DPR by a very large margin. REALM cannot be fairly compared to DPR due to longer retrieved passages (100 vs 288 tokens). Therefore, we perform a separate experiment using accuracy with REALM as a baseline, showing large performance gains over REALM as well. \begin{table}[t!] \centering \caption{Accuracy on claim verification datasets. \#Encoded refers to the number of passages encoded by a BERT reader to answer a single question. EaE stands for the Entities as Experts model. \label{table:claim_full}} \input{results/factverification_full} \end{table} \begin{table}[t!] \centering \caption{Accuracy on FM2 compared with the original dataset baselines. Oracle refers to oracle retrieval followed by a BERT-Base reader.
\label{table:fm2_original}} \input{results/fm2_orig} \end{table} \begin{figure} \centering \begin{minipage}{0.48\textwidth} \centering \captionof{table}{EntityQuestions recall@20}{ \label{table:eq_recall}} \input{results/eq_recall} \end{minipage}\hfill \begin{minipage}{0.48\textwidth} \centering \captionof{table}{EntityQuestions top-1 accuracy \label{table:eq_accuracy}} \input{results/eq_accuracy} \end{minipage} \end{figure} \subsection{Importance of pre-training objectives} \begin{table}[t] \centering \caption{Performance ablations for pre-training objectives experiments. \label{table:ablations}} \input{results/ablations} \end{table} We perform several ablation experiments for the pre-training procedure (see Table~\ref{table:ablations}). First, results show that the entity prediction objective (c.f. Section~\ref{sec:pretrain_tome}) is not essential for TOME pre-training. Performance on claim verification datasets (FEVER and HoVer) is not affected by whether we use entity prediction for pre-training. More surprisingly, removing this objective only slightly decreases the performance on entity question answering datasets (TriviaQA and ComplexWebQuestions). We predict entities for question-answering in the same way as we do for the entity prediction objective during pre-training (c.f. Equation~\ref{eq:entity_prediction}), so we expected the entity prediction auxiliary loss to be important. On the other hand, a related entity coreference objective (c.f. Section~\ref{sec:appendix:pretrain_mention_encoder} and Appendix~\ref{sec:appendix:coref_loss}) is crucial for Batch-TOME and Mention Encoder pre-training. That is consistent with our intuition that semantically challenging tasks incentivize the model to store useful information in memory. \subsection{\textsc{tome}\xspace initialization} We initialize \textsc{tome}\xspace model with a pre-trained \textsc{batch-tome}\xspace model which we find to be especially important for warming up retrieval. 
If \textsc{tome}\xspace is initialized from scratch (or even from BERT weights), \textsc{tome}\xspace does not learn to use the memory. In fact, \textsc{tome}\xspace has to be initialized from the same \textsc{batch-tome}\xspace used to generate the memory. This implies that multi-stage training is a vital ingredient for \textsc{tome}\xspace to succeed. Our explanation for why \textsc{tome}\xspace is sensitive to initialization is that \textsc{tome}\xspace needs to learn two skills: first, to effectively use retrieved mentions for its predictions, and second, to retrieve relevant mentions. Learning both capabilities end to end gives rise to a mutual dependence: to get a signal for learning how to use retrieved mentions, the retrieved mentions have to be useful, and to learn to retrieve useful mentions, the model needs to utilize retrieved mentions. If initialized from scratch, the model is not able to learn both skills simultaneously. The pre-training stage with the smaller in-batch memory functions as a curriculum to address that problem. \section{Nearest neighbor search} \label{sec:appendix:ann} Nearest neighbor search is an extremely common problem, and there exist numerous approaches and packages for fast approximate nearest neighbor search (ANNS)~\citep{scann, faiss}. Most approaches employ two methods for fast search: 1) compress the search table by projecting to a lower dimension and quantizing, and perform comparisons in this compressed space, and 2) divide the search table into buckets of similar items, and search only a subset of the buckets. Retrieve-and-read models use ANNS packages to search for related passages~\citep{realm, rag}. Applying such packages to \textsc{tome}\xspace is slightly trickier, as \textsc{tome}\xspace needs to perform ANNS inside the model. One viable route is to compute queries on-device, transmit them to a separate ANNS server and then transmit the results back.
We would recommend this approach for GPU accelerators, with faster host-device communication and slower device-device communication. As we are using TPU accelerators, we decided to use on-device ANNS, which does not require coordinating additional servers and will potentially allow for backpropagating through memory in future work. \subsection{On-device nearest neighbor search} We shard the \textsc{Mention Memory} \xspace over all TPU devices. We perform search by distributing each query to all devices and retrieving top-K search results from each local memory shard. Then, the results are distributed back to the original devices and the local search results are aggregated through another, global top-K. \subsubsection{Dot-product} The first method we describe is naive dot-product search, taking advantage of matrix multiplication capacity of TPU accelerators. In this method we perform search over local shards by taking the dot product between the query and the local memory shard and performing an approximate top-k operation over the results. Dot-product search is easy to implement and fast for smaller memory sizes (up to 10 million entries). We implemented this method first due to its simplicity and our primary experimental results employ this search method. \subsubsection{ANNS} To speed up search we implemented method 2) from standard CPU-based ANNS, bucketing the search table and searching only a subset of buckets. In particular we perform k-means clustering to divide the \textsc{Mention Memory} \xspace into clusters, and perform dot-product search over the top $n_s$ clusters on each device. \subsubsection{Overhead} While the \textsc{Mention Memory} \xspace is stored on-device, memory overhead is negligible as the memory table is sharded. For pre-training the \textsc{Mention Memory} \xspace took up 2.2\% of available device memory. 
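The two-stage sharded search described above (a local top-K on each device's memory shard, followed by a global top-K over the pooled candidates) can be sketched with plain numpy. This is a single-query simplification, with the cross-device communication replaced by a loop over shards:

```python
import numpy as np

def sharded_topk_search(query, shards, k):
    """Two-stage nearest-neighbor search over a sharded memory.

    Each shard is scored by dot product against the query; the top-k
    candidates per shard are pooled and reduced by a global top-k.
    Returns (scores, global_indices) of the k best memory entries."""
    cand_scores, cand_ids = [], []
    offset = 0
    for shard in shards:                      # local search per "device"
        scores = shard @ query                # [shard_size]
        local = np.argsort(scores)[::-1][:k]  # local top-k
        cand_scores.append(scores[local])
        cand_ids.append(local + offset)       # map to global indices
        offset += shard.shape[0]
    cand_scores = np.concatenate(cand_scores)
    cand_ids = np.concatenate(cand_ids)
    best = np.argsort(cand_scores)[::-1][:k]  # global top-k over candidates
    return cand_scores[best], cand_ids[best]
```

The clustered variant only changes the local step: instead of scoring the full shard, each device scores the centroids first and searches only the top $n_s$ clusters.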
Table \ref{table:ann_overhead} shows the percentage of time spent on ANNS in \textsc{tome}\xspace-1 pre-training for different reader architectures. The relative overhead of search becomes smaller with reader size, and ANNS overhead in particular becomes negligible for BERT-Large and up. We did not measure standard CPU ANNS overhead, but it should be comparable to or faster than our ANNS numbers. \begin{table}[h!] \centering \caption{Proportion of time spent on ANNS for the \textsc{tome}\xspace-1 pre-training setting. \label{table:ann_overhead}} \begin{tabular}{lcc} \toprule Model & Dot-product & ANNS \\ \midrule BERT-Base & 0.79 & 0.22 \\ BERT-Large & 0.48 & 0.07 \\ T5-11B Encoder & 0.17 & 0.02 \\ \bottomrule \end{tabular} \end{table} \subsubsection{Hyperparameters} For ANNS in $\mathtt{TOMEBlocks}$ we take the top-2 search results from each local memory shard, and apply top-128 over the retrieved results. For ANNS in the entity prediction layer we take the top-32 search results from each local shard, and aggregate across shards without applying an additional top-K operation. \section{Retrieval examples} \label{sec:appendix:retrieval} \begin{table}[h!] \centering \caption{\textsc{tome}\xspace-2 retrievals for the third HoVer dev sample. We show top-1 retrieval results for the first ($\longrightarrow_{1}$) memory attention layer for the passage mentions ``the novel'', ``the movie'' and ``the album''. Memory mentions are in brackets. We can see that the model can retrieve relevant mentions for non-named passage mentions, and generally understands it is looking for mentions related to music. However, while the best retrieval for ``album'' is from a passage that mentions sampling the Shining, it is quite far removed and it is likely the retrieval is not sufficiently accurate here.
\label{table:hover_2}} \input{results/hover_second} \end{table} \section{Conclusion} We introduced \textsc{tome}\xspace, a Transformer model that performs attention over a semi-parametric representation of the entire Wikipedia text corpus. This representation, or \textsc{Mention Memory} \xspace, consists of a dense encoding for each entity mention in Wikipedia. \textsc{tome}\xspace can retrieve information from multiple sources without supervision, aggregate information within the Transformer, and reason over the retrieved information. \textsc{tome}\xspace leads to strong improvements on multiple open-domain claim verification and entity-based question answering tasks. \section{Experiments} \subsection{Experimental setup} The Mention Encoder\xspace is based on a BERT-base model with two final $\mathtt{SpanEncodingLayers}$ that produce key and value encodings. Mention Encoder\xspace and \textsc{batch-tome}\xspace share Transformer weights during Mention Encoder\xspace pre-training. The \textsc{Mention Memory} \xspace consists of mention encodings for $N=150$ million linked Wikipedia entity mentions. Transformer layers in \textsc{tome}\xspace and \textsc{batch-tome}\xspace models are equivalent to those in the BERT-base model. The \textsc{tome}\xspace $\mathtt{InitialTransformerBlock}$ contains 4 Transformer layers. \textsc{tome}\xspace-1 has a single $\mathtt{TOMEBlock}$ with 8 Transformer layers, and \textsc{tome}\xspace-2 has two $\mathtt{TOMEBlocks}$ with 4 Transformer layers each. Therefore, the number of trainable parameters in \textsc{tome}\xspace-1 and \textsc{tome}\xspace-2 is approximately the same as in BERT-base. We use a smaller \textsc{Mention Memory} \xspace containing 38m uniformly sampled memories for \textsc{tome}\xspace pre-training. During fine-tuning and evaluation we utilize the full \textsc{Mention Memory} \xspace. Appendix \ref{sec:appendix:pretraining} contains more details. 
\subsection{Baselines} We compare \textsc{tome}\xspace with existing methods that utilize textual information from a corpus in a language model. These can be divided into generative LLMs (T5), entity embedding retrieval (Entities as Experts, OPQL), extractive retrieve-and-read (REALM) and generative retrieve-and-read (RAG, Fusion-in-Decoder). \textsc{tome}\xspace occupies a novel position in the space of retrieval models, being more fine-grained than entity embedding retrieval methods, but performing all its reasoning with a single BERT read, unlike retrieve-and-read methods. The most closely comparable models are Entities as Experts and REALM, and we use these as our primary baselines. We report other baselines for reference, with the caveat that these results are not apples-to-apples: RAG and Fusion-in-Decoder have large decoders and retrievers and encode a large number of passages with a BERT reader for each question compared to \textsc{tome}\xspace's single read. Fusion-in-Decoder and RAG\footnote{RAG is initialized from DPR which is trained with gold retrieval passages for TriviaQA.} also use ground-truth supervision for retrieval. We mark the number of parameters and BERT applications for each baseline in the result tables. Consistent with retrieve-and-read, we count the parameters of the Mention Encoder\xspace and \textsc{tome}\xspace, but not the size of the non-trainable and sparsely accessed \textsc{Mention Memory} \xspace. \subsection{Claim verification} \label{sec:claim} \textbf{Data.} Our first set of experiments evaluates \textsc{tome}\xspace on the claim verification tasks FEVER \citep{fever}, HoVer \citep{hover}, and FM2 \citep{fm2} in which the model is provided with a claim and has to determine whether the claim is supported by the Wikipedia corpus. FEVER is a larger dataset with 186k claims for which most of the claims can be verified with a single Wikipedia passage. 
In contrast, HoVer is smaller with 26k claims, but is explicitly constructed to require evidence from multiple sources and multiple reasoning steps. FM2 is also smaller and is constructed through an adversarial game that leads to more challenging retrieval. The claim verification training data contains gold evidence passages, but unlike most published results \textit{we do not use these}, leaving only the accuracy of the claim verification to guide the retrieval. \textbf{Results.} Table \ref{table:claim} contains our claim verification results. \textsc{tome}\xspace outperforms both Entities as Experts and REALM, especially on HoVer and FM2. This is consistent with the properties of \textsc{tome}\xspace: HoVer requires combining detailed information from multiple sources, which \textsc{tome}\xspace is especially well equipped to do compared to aggregate entity-based or retrieve-and-read models. FM2 features generally challenging retrieval and may benefit from contextualizing retrieved evidence. \begin{table}[t!] \centering \caption{Accuracy on claim verification datasets. \#Encoded refers to the number of passages encoded by a BERT reader to answer a single question. \label{table:claim}} \input{results/factverification} \end{table} \subsection{Question Answering} \label{sec:qa} \textbf{Data.} In a second set of experiments we evaluate \textsc{tome}\xspace on TriviaQA (TQA) \citep{triviqa}, ComplexWebQuestions (CWQ) \citep{complexqa} and EntityQuestions (EQ) \citep{eq}, open-domain QA tasks for which most answers are Wikipedia entities. We approach these datasets as entity-linking tasks, as in \citet{eae}. We append a mask token to each question, which is marked as a question mention. The probability for each candidate entity is predicted as the aggregate attention weight on mentions of the entity (Section~\ref{sec:pretrain_tome}). Questions with answers that do not correspond to entities in our entity vocabulary are marked as answered incorrectly. 
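The answer-prediction step described above, where the probability of a candidate entity is the aggregate attention weight on that entity's retrieved mentions, can be sketched as a segment sum over entity IDs. A minimal numpy sketch with hypothetical inputs:

```python
import numpy as np

def entity_probs(attention_weights, mention_entity_ids, num_entities):
    """Aggregate a question mention's attention over retrieved memory
    mentions into per-entity answer probabilities.

    Assumes `attention_weights` are already normalized over the retrieved
    mentions and `mention_entity_ids[i]` is the entity of mention i."""
    probs = np.zeros(num_entities)
    np.add.at(probs, mention_entity_ids, attention_weights)  # segment sum
    return probs
```

The predicted answer is then simply the entity with the largest aggregated weight.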
TQA consists of 96k trivia questions, for which 84\% of answers correspond to a Wikipedia entity. We use the open-domain setting without gold evidence passages. In order to compare head-to-head performance, we also report results on a subset of TQA with only questions with Wikipedia entities as an answer. CWQ consists of 35k complex questions (compositions, conjunctions, etc.) for which 94\% of answers correspond to a Wikipedia entity. EQ contains challenging questions involving rare entities, with Wikipedia entities as answers. \textbf{Results.} Table \ref{table:qa} contains the results for the TQA, CWQ and EQ experiments. Like \textsc{tome}\xspace, Entities as Experts and OPQL treat the above datasets as entity-linking tasks. REALM performs extractive QA, while T5, RAG and Fusion-in-Decoder generate the answer. We note a similar pattern of results as for claim verification. \textsc{tome}\xspace strongly outperforms Entities as Experts on all tasks. \textsc{tome}\xspace performs slightly better than REALM on a simple task like TriviaQA (entity subset) and strongly outperforms REALM on more challenging tasks that require multiple (CWQ) or challenging (EQ) retrieval. \begin{table}[t!] \centering \caption{Accuracy on the open-domain QA datasets TriviaQA (TQA), ComplexWebQuestions (CWQ) and EntityQuestions (EQ). \#Encoded refers to the number of passages encoded by a BERT reader to answer a question. TQA\textsubscript{e-dev} corresponds to TQA with train and dev samples limited to those with a Wikipedia entity as an answer. See Appendix~\ref{sec:appendix:evaluation:results} for full results. \label{table:qa}} \input{results/questionanswering} \end{table} \subsection{Qualitative properties of \textsc{tome}\xspace} \textbf{What memories does \textsc{tome}\xspace retrieve?} Given that \textsc{tome}\xspace retrieval is unsupervised, it is natural to ask what memories it learns to retrieve.
First, we observe that \textsc{batch-tome}\xspace and \textsc{tome}\xspace trained on just the MLM objective learn to attend to memories of the same entity as the passage linked mention (55\% and 41\% average attention score). This is promising, as entity mentions from the same entity often contain mutually relevant information. Quantitative evaluation of downstream retrieval is challenging, as \textsc{tome}\xspace often retrieves mentions that are not part of the gold passages but are equally informative. Instead, we provide \textsc{tome}\xspace retrievals on \textit{the first three} samples of the HoVer dev set to demonstrate its retrieval behavior without cherry-picking. Table \ref{table:hover_simple} demonstrates a successful simple retrieval, while Table \ref{table:mention_retrieval_samples} displays interesting multi-hop retrieval. The last is found in Appendix \ref{sec:appendix:retrieval}. \begin{table}[t!] \centering \caption{\textsc{tome}\xspace-2 retrievals for the second HoVer dev sample. We show top-1 retrieval results for the first ($\longrightarrow_{1}$) memory attention layer for two passage mentions. Memory mentions are in brackets. \label{table:hover_simple}} \input{results/simple_hover_example} \end{table} \begin{table}[t!] \centering \caption{\textsc{tome}\xspace-2 retrievals for the first HoVer dev sample. We show top-1 retrieval results for the first ($\longrightarrow_{1}$) and the second ($\longrightarrow_{2}$) memory attention layers for passage mentions ``Life Goes On'' and ``Hungry''\protect\footnotemark. Memory mentions are in brackets. The first retrieval for the ``Life Goes On'' is a different song with the same name and the first retrieval for ``Hungry'' is related but not useful. However, the second retrieval for ``Life Goes On'' identifies the correct song and describes its position on the album while the second retrieval for ``Hungry'' captures its position relative to ``Life Goes On''.
\label{table:mention_retrieval_samples} } \input{results/multihop_example} \end{table} \begin{figure} \centering \begin{minipage}{0.48\textwidth} \centering \input{results/claim_verification_vs_memory_size} \caption{Claim verification accuracy as a function of fine-tuning memory size (in millions). \label{fig:memory_ablation}} \end{minipage}\hfill \begin{minipage}{0.48\textwidth} \centering \captionof{table}{Accuracy on held-out subset of TriviaQA and ComplexWebQuestions (CWQ) questions. \textsc{tome}\xspace-1-unseen was pre-trained and fine-tuned with memory without entities from held-out set and evaluated with full memory. Note that performance is considerably lower than on the full dev set as answers in the held-out set (which are in dev but not train) are more likely to be rare entities. \label{table:unseen}} \input{results/unseen} \end{minipage} \end{figure} \textbf{Importance of memory size.} Figure \ref{fig:memory_ablation} shows claim verification performance as a function of memory size during fine-tuning (pre-training memory size is held constant). For smaller memory sizes, entries in memory are uniformly sampled from the full \textsc{Mention Memory} \xspace. Performance increases smoothly with memory size. Larger memory size yields diminishing returns, perhaps reflecting that entity mentions may contain overlapping information. \textbf{Zero-shot transfer to unseen entities.} An important advantage of memory architectures is that the behavior of the model can be steered by deciding what to include in the memory. Here we show that the \textsc{tome}\xspace model can use information that was not present in memory during training. We sample questions in the TQA and CWQ dev sets, and generate a subset of the memory without any mentions corresponding to the answer entities for those questions. Then we pre-train and fine-tune a model on this smaller memory, which we call \textsc{tome}\xspace-unseen.
We evaluate \textsc{tome}\xspace-unseen on the sampled questions \textit{using the full memory for evaluation only}, and compare to standard \textsc{tome}\xspace. Table \ref{table:unseen} shows that using full memory only during evaluation does not lower performance. \footnotetext{We replaced the original song title with the song ``Hungry'' as the original may be inappropriate.} \section{Introduction} Neural models have greatly advanced the state of the art in natural language processing and generation tasks. Accordingly, there has been increasing interest in applying neural language models to tasks which require extensive world knowledge to solve~\citep{kilt}. Much of this world knowledge can be found distributed over text corpora, which raises the question of whether language models pre-trained on text corpora capture this information. Recent work suggests that while language models may successfully predict facts about the world~\citep{llm_kb}, such knowledge is superficial and unreliable~\citep{llm_nokb}. Our goal is to reliably incorporate information from across a text corpus into a language model. Recent work has represented the information present in a text corpus explicitly by constructing a virtual knowledge base (\textsc{vkb}\xspace)~\citep{drkit, opql}. A \textsc{vkb}\xspace consists of dense representations of entity mentions in the text, designed to reflect the property or relation expressed by the entity mention. We propose to incorporate a \textsc{vkb}\xspace into a language model by using it as an external memory, performing attention over the entire \textsc{vkb}\xspace \textit{within} a Transformer model. In this way the model can synthesise and reason over many disparate sources of information from the text corpus. We refer to the \textsc{vkb}\xspace used in such a way as \textsc{Mention Memory} \xspace, and the model as \textsc{tome}\xspace (Transformer Over Mention Encodings\xspace).
We first pre-train a mention encoder to specifically encourage mention representations that are useful for a Transformer model, and construct a \textsc{Mention Memory} \xspace from 150 million entity mentions in English Wikipedia. Then we train a \textsc{tome}\xspace model with attention layers over the Mention Memory, which is kept frozen (see Figure \ref{fig:model_overview}). We argue that the \textsc{Mention Memory} \xspace approach has several appealing properties. \textit{First}, \textsc{tome}\xspace retrieves entity mention representations corresponding to specific entity attributes or relations described in the corpus. This retrieval is much more fine-grained than aggregate entity retrieval methods such as Entities as Experts (EaE)~\citep{eae}, and we show large improvements in accuracy over EaE on tasks that require detailed entity information, such as claim verification and entity-based question answering. The fine-grained retrieval also allows potential users to see more precisely what knowledge the model's predictions are based on (see Table \ref{table:mention_retrieval_samples}). \textit{Second}, \textsc{tome}\xspace retrieves dense representations, which are easy to incorporate into a Transformer model without reprocessing the input, unlike raw text. Therefore, \textsc{tome}\xspace is able to retrieve, assimilate and reason over information from many different sources within a \textit{single} Transformer model, allowing for multi-source and multi-hop reasoning without the beam search machinery that is required for multi-hop retrieve-and-read~\citep{beamdr}. This also makes \textsc{tome}\xspace much more scalable: retrieve-and-read approaches have to read many retrieved passages, which becomes expensive with larger reader models, while the cost of memory layers does not scale with reader size and is negligible for larger readers. \textit{Third}, the retrieval is latent, without direct or distant supervision on the retrieved results.
We show that, even without supervision, the model learns to retrieve highly specific and informative entity attributes and perform multiple reasoning steps. \textit{Finally}, the memory table is semi-parametric, so knowledge can be added or updated by applying the mention encoder to new text without retraining. In order to verify the model's capacity to capture accurate factual information in the corpus, we start by evaluating \textsc{tome}\xspace on the HoVer~\citep{hover}, FEVER~\citep{fever} and FM2~\citep{fm2} claim verification datasets, on which it strongly improves performance over entity aggregate and comparable retrieve-and-read baselines. We demonstrate that the model learns to attend to informative mentions for verifying claims using only the verification accuracy as a signal. Ablations show the memory is crucial for performance, and that the model can effectively use larger memory than it was pre-trained on. In a second set of experiments we evaluate \textsc{tome}\xspace on question-answering benchmarks TriviaQA~\citep{triviqa}, ComplexWebQuestions~\citep{complexqa} and EntityQuestions~\citep{eq}, improving performance over comparable baselines. Finally we show that the model can be adapted to generalize to new unseen entities by updating the memory, without retraining. \section{Method} \begin{figure}[t!] \centering \includegraphics[width=1.0\textwidth]{figures/overview_v3.png} \caption{Overview of \textsc{Mention Memory} \xspace. A pre-trained mention encoder is used to generate dense representations for each entity mention in Wikipedia (approximately 150 million total) which are stored in a table. The \textsc{tome}\xspace model takes a passage annotated with entity mention boundaries as input, and applies a Transformer block. Next, the \textsc{tome}\xspace model applies one or more $\mathtt{TOMEBlocks}$. 
Each $\mathtt{TOMEBlock}$ contains a memory attention layer and a Transformer block.} \label{fig:model_overview} \end{figure} \vspace{-4pt} Our method represents knowledge in a corpus as a collection of ``\textit{mention encodings}'' -- dense vector representations for every entity mention that appears in the corpus. Every time an entity appears in a passage -- ``\textit{[Barack Obama] was elected president in 2008}'' -- some property of the entity or its relation to other entities is described. The first component of our method, the Mention Encoder\xspace model, is responsible for distilling information from entity mentions in the corpus into high-dimensional mention encodings. We use the Mention Encoder\xspace to encode each entity mention in English Wikipedia and gather encodings into a \textit{\textsc{Mention Memory} \xspace}. The purpose of the \textsc{Mention Memory} \xspace is to capture all knowledge contained in the corpus in a way that can be easily integrated into a Transformer. The second component of our method, the \textsc{tome}\xspace model, applies sparse attention over the \textsc{Mention Memory} \xspace to incorporate external information from the corpus into a Transformer model. An overview of the whole method is shown in Figure \ref{fig:model_overview}. Jointly training the Mention Encoder\xspace and \textsc{tome}\xspace models is computationally costly, since it would require backpropagating through the Mention Encoder for each attended mention. Consequently, we propose to train the models in two stages. First, we pre-train the Mention Encoder\xspace and generate the \textsc{Mention Memory} \xspace. Second, we pre-train the \textsc{tome}\xspace model while keeping the \textsc{Mention Memory} \xspace frozen: the gradient does not propagate through it and the memories are not modified.
Mention Encoder\xspace pre-training is specifically designed such that mention encodings capture relevant contextual information about each mention and are useful for \textsc{tome}\xspace even without joint training. We formally define these models in sections \ref{sec:constructing_mm} and \ref{sec:tome}, and their pre-training procedures in \ref{sec:pretrain_mention_encoder} and \ref{sec:pretrain_tome}. \textbf{Notation.} An input to the model is a passage $\mathbf{x} = x_1, \ldots, x_T$ of length $T$. We assume that each passage has been annotated with an NER system. Following \cite{mtb} we use special entity markers to highlight entity mentions in the passage. We introduce tokens [$E_{start}$] and [$E_{end}$] to the vocabulary and insert them before and after each mention in the passage. For example, the original passage ``\textit{What is the nationality of the hero who killed Medusa}'' turns into ``\textit{What is the [$E_{start}$] nationality [$E_{end}$] of the [$E_{start}$] hero [$E_{end}$] who killed [$E_{start}$] Medusa [$E_{end}$]}''. Each mention $m$ in a passage is described by a tuple $(s, e)$, where $s$ and $e$ are start and end positions of the mention. We consider entity markers to be part of the corresponding mention, so that $x_s = [E_{start}]$ and $x_e = [E_{end}]$. Representations of these tokens are later used to generate mention encodings. \subsection{Constructing mention memory from corpus} \label{sec:constructing_mm} \subsubsection{Mention Encoder\xspace} \label{sec:me} Let $H \in \mathbb{R}^{T \times d}$ be token representations where $d$ is the hidden dimension, such that $H_i \in \mathbb{R}^{d}$ is the contextualized embedding for the $i$-th token. 
Following \cite{eae} we compute the encoding of a span ($s$, $e$) as a learnable linear projection W of the concatenation of its start and end token representations $H_s$ and $H_e$ \begin{align} \mathtt{SpanEncodingLayer}(H, (s, e)) &= W [H_s; H_e] \end{align} The Mention Encoder\xspace is a Transformer model with two final $\mathtt{SpanEncodingLayers}$ that produce \textit{key} and \textit{value} mention encodings. \textit{Value mention encodings} store context-level information about each mention and are used as inputs to the \textsc{tome}\xspace model. \textit{Key mention encodings} identify the type of information stored in the value encodings and serve as attention keys for the memory layer. These two $\mathtt{SpanEncodingLayers}$ do not share weights. \vspace{-2pt} \subsubsection{Mention memory} After the Mention Encoder\xspace is pre-trained (see section \ref{sec:pretrain_mention_encoder}), we use it to generate a \textsc{Mention Memory} \xspace from entity mentions in Wikipedia. While we could include encodings of any corpus mention in the \textsc{Mention Memory} \xspace, we focus on grounded mentions which can be linked to Wikipedia entities. We denote these as \textit{linked} mentions, which we hypothesize contain information that can be retrieved and grounded. We gather mention encodings into matrices $\mathtt{MemKey} \in \mathbb{R}^{N \times d_K}$ and $\mathtt{MemValue} \in \mathbb{R}^{N \times d_V}$, where $N$ is the total number of linked entity mentions in English Wikipedia (approximately 150 million) and $d_K$ and $d_V$ are dimensions of key and value encodings. Additionally, we record entity (Wikipedia) IDs of mentions in $\mathtt{MemEnt} \in \mathbb{R}^{N}$, which we use as \textit{labels for auxiliary losses}, not as inputs to the model or supervision on retrieval. $\mathtt{MemKey}(i), \mathtt{MemValue}(i), \mathtt{MemEnt}(i)$ correspond to the key encoding, value encoding and entity ID for the $i$-th linked mention in Wikipedia. 
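To make the table-building step concrete, here is a toy NumPy sketch of the two $\mathtt{SpanEncodingLayers}$ and the resulting $\mathtt{MemKey}$, $\mathtt{MemValue}$ and $\mathtt{MemEnt}$ tables. The dimensions, the random stand-ins for the learned projections, and the three-mention ``corpus'' are illustrative placeholders, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

d = 64             # Transformer hidden size (placeholder)
d_K, d_V = 32, 48  # key / value encoding dimensions (placeholders)

# Random stand-ins for the learnable projections of the two
# (non-weight-sharing) SpanEncodingLayers.
W_key = rng.normal(size=(d_K, 2 * d))
W_value = rng.normal(size=(d_V, 2 * d))

def span_encoding(H, span, W):
    """W [H_s; H_e]: project the concatenated start/end token states."""
    s, e = span
    return W @ np.concatenate([H[s], H[e]])

# Toy corpus: (token representations, mention spans, entity IDs) per passage.
passages = [
    (rng.normal(size=(20, d)), [(2, 5), (9, 12)], [7, 3]),
    (rng.normal(size=(15, d)), [(1, 4)], [7]),
]

mem_key, mem_value, mem_ent = [], [], []
for H, spans, entity_ids in passages:
    for span, ent in zip(spans, entity_ids):
        mem_key.append(span_encoding(H, span, W_key))
        mem_value.append(span_encoding(H, span, W_value))
        mem_ent.append(ent)

MemKey = np.stack(mem_key)      # (N, d_K)
MemValue = np.stack(mem_value)  # (N, d_V)
MemEnt = np.array(mem_ent)      # (N,) entity IDs, used only for auxiliary losses
```

In the real system the token representations come from the Mention Encoder\xspace Transformer and $N$ is roughly 150 million; three toy mentions suffice here to show the table shapes.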
\vspace{-2pt} \subsection{\textsc{tome}\xspace model} \label{sec:tome} The \textsc{tome}\xspace model incorporates information from a text corpus into a Transformer by applying sparse attention over the \textsc{Mention Memory} \xspace. The model consists of one or more $\mathtt{TOMEBlocks}$, each containing a memory attention layer followed by a post-processing Transformer block. Memory attention layers retrieve and attend to relevant ``memories'' for every mention in the input passage. The model then processes the retrieval-augmented representation with the Transformer block, allowing it to access and combine information from multiple sources in the corpus. Finally, multiple $\mathtt{TOMEBlocks}$ enable the model to refine retrievals and perform multi-hop reasoning. More formally, a $\mathtt{TOMEBlock}$ receives the output representation of the previous layer $H$ and produces new representations $H'$ \begin{align} M &= \mathtt{MemoryAttention}(H),\\ H' &= \mathtt{TransformerBlock}(M) \end{align} The \textsc{tome}\xspace model encodes input passages $\mathbf{x}$ with the word embedding layer and initial Transformer block and then applies one or more $\mathtt{TOMEBlocks}$ \begin{align} H^0 &= \mathtt{InitialTransformerBlock}(\mathtt{TokenEmbedding}(\mathbf{x})),\\ H^{l} &= \mathtt{TOMEBlock}_{l}(H^{l - 1}), \ l=1 \ldots L \end{align} In this work we consider two configurations of the \textsc{tome}\xspace model: \textsc{tome}\xspace-1 and \textsc{tome}\xspace-2, with one and two $\mathtt{TOMEBlocks}$ respectively. Each $\mathtt{TOMEBlock}$ of \textsc{tome}\xspace-2 contains half as many Transformer layers as in \textsc{tome}\xspace-1 to hold the total number of Transformer layers fixed between models. 
\subsubsection{$\mathtt{MemoryAttention}$} Each memory attention layer is implemented as a sparse dot-product attention layer that takes the output $H$ of the previous Transformer block, incorporates information from the \textsc{Mention Memory} \xspace, and returns a representation $M$ (omitting layer indices). Consider a mention $m$ that starts at position $s$ and ends at position $e$. We start by computing its \textit{query mention encoding} $\mathtt{Query}(m)$ by applying a $\mathtt{SpanEncodingLayer}$ \begin{align} \mathtt{Query}(m) &= \mathtt{SpanEncodingLayer}(H, (s, e)), \end{align} Query mention encodings are used to retrieve relevant memories from the Mention Memory table\xspace. However, applying standard attention over 150 million mention encodings is infeasible. Instead, we first perform approximate nearest neighbor search to retrieve the top-$K$ mentions with the largest dot product between query $\mathtt{Query}(m)$ and key mention encoding from $\mathtt{MemKey}$. We denote the set of these memories as $\mathtt{TopMem}(\mathtt{Query}(m))$. We compute attention over these memories and incorporate the result into the token contextual representation at position $s$ \begin{align} \alpha_i &\propto \exp(\mathtt{Query}(m) \cdot \mathtt{MemKey}(i)), \ i \in \mathtt{TopMem}(\mathtt{Query}(m)) \\ \mathtt{Value}(m) &= \sum_{i \in \mathtt{TopMem}(\mathtt{Query}(m))} \alpha_i \cdot \mathtt{MemValue}(i)\\ M_s &= \mathtt{LayerNorm}(H_s + W_U \mathtt{Value}(m)) \end{align} where $W_U$ is a learnable matrix of shape $d \times d_V$. \subsubsection{Sparse large-scale retrieval} Approximate nearest neighbor search (ANNS) can be performed cheaply using one of multiple ANNS libraries, for example ScaNN~\citep{scann}. We implemented two on-device search methods to avoid the engineering complexity of real-time communication with an ANNS server, though we have verified this is also viable. 
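As an illustration, the memory attention equations above can be sketched for a single mention in NumPy. This is a simplified stand-in, not the actual implementation: exact top-$K$ selection by dot product replaces the approximate nearest-neighbor search, and the $\mathtt{LayerNorm}$ omits the learned scale and bias:

```python
import numpy as np

def memory_attention(H, span, MemKey, MemValue, W_q, W_U, k=2):
    """Sparse memory attention for one mention (s, e): compute Query(m),
    take TopMem by dot product, attend over those memories, and fold the
    result back into the token state at position s."""
    s, e = span
    query = W_q @ np.concatenate([H[s], H[e]])  # Query(m) = SpanEncodingLayer
    scores = MemKey @ query                     # dot products with all keys
    top = np.argpartition(-scores, k - 1)[:k]   # TopMem(Query(m)), exact top-k
    alpha = np.exp(scores[top] - scores[top].max())
    alpha = alpha / alpha.sum()                 # attention weights alpha_i
    value = alpha @ MemValue[top]               # Value(m)
    m_s = H[s] + W_U @ value                    # residual update at position s
    return (m_s - m_s.mean()) / m_s.std()       # LayerNorm without affine params
```

In the full model this update is applied at each mention's start position $s$, and $\mathtt{TopMem}$ is computed with ANNS over roughly 150 million memory entries rather than exact search.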
The first naively computes a simple dot-product between passage queries and memory keys, and was used in our main experiments as it was easiest to implement. We also implemented and will be releasing a much faster version based on CPU ANNS methods. The memory is sharded over devices, so that the device-memory overhead is negligible. Holding the number of entries in memory fixed, the compute cost of retrieval from memory does not grow with the size of the reader or the dimensionality of the memory values, so that the relative cost of the memory layer becomes smaller with reader size. In particular, the overhead from the memory used in our pre-training setting is small for BERT-Large and up. More details on ANNS implementation and overhead can be found in Appendix \ref{sec:appendix:ann}. \subsection{Mention encoder pre-training} \label{sec:pretrain_mention_encoder} While backpropagating through a Wikipedia-scale mention memory is challenging, it is possible to train smaller-scale memory architectures end-to-end. We take an approach inspired by \textsc{Marge}\xspace~\citep{marge} and ReadTwice~\citep{readtwice}, which apply cross-attention over documents within a batch. In particular, we process passages in each batch twice. As a first step, the Mention Encoder\xspace model generates mention encodings from each passage and aggregates the mention encodings into a batch-wide memory table. In the second step, we apply a \textsc{tome}\xspace architecture that attends to the batch memory, which we call \textsc{batch-tome}\xspace. Note that \textsc{batch-tome}\xspace is just used for pre-training the Mention Encoder\xspace and not evaluated on any downstream tasks. Mention Encoder\xspace and \textsc{batch-tome}\xspace are jointly trained end-to-end so that the Mention Encoder\xspace is encouraged to produce mention encodings that contain useful information for \textsc{batch-tome}\xspace.
We want to make sure the batch memory contains relevant mentions, so we pre-train the models on batches of passages constructed from related Wikipedia articles with high entity overlap. Appendix~\ref{sec:appendix:pretrain_mention_encoder} provides more details on Mention Encoder\xspace data generation. We use the pre-trained Mention Encoder\xspace to construct the Mention Memory table\xspace from the corpus, and use the \textsc{batch-tome}\xspace model as the initialization point for \textsc{tome}\xspace-specific pre-training (described in Section \ref{sec:pretrain_tome}). \textbf{Masked language model.} Our primary pre-training objective is the standard masked language modeling task, with the loss computed based on the output of the second read (\textsc{batch-tome}\xspace). To encourage the model to rely on memory, we increase the task's difficulty relative to standard BERT pre-training by masking entity mention tokens more aggressively. \textbf{Coreference resolution.} We wish to encourage the Mention Encoder\xspace to represent the entity attributes expressed by entity mentions, so we also apply an entity-oriented pre-training task to the output of \textsc{batch-tome}\xspace, for which such attribute information is likely to be especially helpful. Unlike Entities as Experts~\citep{eae}, \textsc{batch-tome}\xspace does not use entity embeddings, so we cannot use the entity linking task. Instead, we apply a related entity coreference resolution objective, which asks the model to predict whether two linked mentions correspond to the same entity based on the similarity of their encodings. Given that entity surface forms are frequently masked, the model needs to instead use the properties of other mentions in the batch to determine which entity it is most compatible with, incentivizing the Mention Encoder\xspace to encode such properties.
We compute a coreference mention encoding for every linked mention in the batch by applying a separate $\mathtt{SpanEncodingLayer}$ on the output of \textsc{batch-tome}\xspace. The loss is implemented using cross-entropy over dot-product similarity scores. See Appendix \ref{sec:appendix:coref_loss} for details. \subsection{\textsc{tome}\xspace pre-training} \label{sec:pretrain_tome} As \textsc{tome}\xspace attends to the full \textsc{Mention Memory} \xspace instead of in-batch memory, we do not employ the batching procedure from Mention Encoder\xspace pre-training, instead sampling Wikipedia passages randomly. For the same reason, we replace the in-batch entity coreference objective by \textsc{Mention Memory} \xspace entity coreference, in which the model has to predict which mentions from the \textsc{Mention Memory} \xspace share an entity with the input mention. The goal of this auxiliary objective is to incentivize the model to learn to retrieve informative mention encodings to solve the semantically challenging task. \textsc{Mention Memory} \xspace entity coreference also allows us to solve tasks like TriviaQA or ComplexWebQA without a decoder by directly predicting the answer entity. \textbf{Entity prediction.} Analogous to batch coreference resolution loss we compute mention encoding $z_m$ using the output of the \textsc{tome}\xspace model. As in section \ref{sec:tome}, $\mathtt{TopMem}(z_m)$ returns the top $K$ memories with the largest dot product between the mention encodings $z_m$ and key mention encodings $\mathtt{MemKey}$ from the \textsc{Mention Memory} \xspace. The score $\mathtt{EntProb}(m, j)$ of entity $j$ equals the sum of attention weights of memories corresponding to this entity. 
\begin{align} \mathtt{EntProb}(m, j) &= \frac{\sum_{i \in \mathtt{TopMem}(z_m)} \exp(z_m \cdot \mathtt{MemKey}(i)) \cdot \mathbb{1}\{\mathtt{MemEnt}(i) = j\} }{\sum_{i \in \mathtt{TopMem}(z_m)} \exp(z_m \cdot \mathtt{MemKey}(i))} \label{eq:entity_prediction} \end{align} The final entity prediction is $\argmax_j \mathtt{EntProb}(m, j)$. Entity prediction loss $\mathcal{L}_{ep}(m)$ for a mention $m$ of entity $\mathtt{Ent}(m)$ is $\mathcal{L}_{ep}(m) = -\log \mathtt{EntProb}(m, \mathtt{Ent}(m))$. Total loss equals the average loss over linked input mentions for which at least one memory of the same entity is retrieved. \textbf{Disallowed same passage retrieval.} For each passage in the pre-training corpus, there exist memories corresponding to mentions in the passage generated from the unmasked version of the same passage. In order to prevent the model from `cheating' by attending to such memories, we set the attention weight for all memories from the same passage to zero. \section{Related Work} Our approach lies at the intersection of three lines of work: i) knowledge-augmented language models, ii) employing a text corpus as a virtual knowledge base, iii) retrieve-and-read methods. \textbf{Knowledge-augmented language models.} Entities as Experts (EaE)~\citep{eae} injects information into a Transformer model with an intermediate attention layer over trainable entity embeddings, which serve as an aggregate representation of entity information in a text corpus. In contrast, \textsc{tome}\xspace attends to a much larger table of mention encodings, allowing for retrieval of more fine-grained information. Attending to mentions as opposed to entity representations also enables \textsc{tome}\xspace to generalize to unseen entities. FiLM~\citep{fae} extends EaE by adding an attention layer over facts from a KB on the output of the Transformer.
The fact attention layer enables more fine-grained queries but still retrieves aggregate entity embeddings as values, which are also not reasoned over by the Transformer. KnowBERT~\citep{knowbert} is similar to EaE, but with entity embeddings generated from a KB instead of trained end-to-end with a text corpus. \textsc{Marge}\xspace~\citep{marge} and ReadTwice~\citep{readtwice} incorporate dense representations from other passages within the same batch into a Transformer through sparse top-k attention. The first pre-training stage of our method for training the Mention Encoder\xspace is similar to \textsc{Marge}\xspace and ReadTwice. However, \textsc{tome}\xspace performs global attention over a full corpus, rather than a single batch. Furthermore, \textsc{tome}\xspace attends to a \textsc{Mention Memory} \xspace consisting of pre-computed dense representations. Therefore \textsc{tome}\xspace is not limited to downstream tasks with batches of relevant documents, and does not need to apply an expensive reader model to an entire batch of documents for each input. \textbf{Text corpus as virtual knowledge base.} DrKIT \citep{drkit} performs multi-hop question answering by using a text corpus as a virtual knowledge base. Similar to \textsc{tome}\xspace, the authors apply a mention encoder to convert the corpus into a table of mention encodings. A Transformer model encodes the question into dense queries, which are compared with the mention encodings to \textit{traverse} the \textsc{vkb}\xspace. Conversely, \textsc{tome}\xspace \textit{retrieves} mention encodings, and then jointly processes them \textit{inside} the Transformer. In follow-up work to DrKIT, OPQL \citep{opql} uses a FiLM-like approach to access a memory of relation mentions, which are encoded with a self-supervised relation encoder.
However, the relation mention encoding combines a mention-specific relation representation with EaE-like entity encodings, so they are less fine-grained than \textsc{tome}\xspace's encodings. Unlike \textsc{tome}\xspace, OPQL also lacks a sparse large-scale retrieval mechanism, and relies on \textit{ad hoc} heuristics to limit the size of the memory.\footnote{It should be noted that absent heuristics, the number of potential relation mentions (i.e., entity mention pairs) is much larger than the number of entity mentions.} MOLEMAN \citep{moleman} compares a passage mention encoding with mention encodings from a corpus to perform entity linking, but does not retrieve the mentions. \textbf{Retrieve-and-read methods.} REALM \citep{realm} learns to retrieve relevant passages from a text corpus in a self-supervised manner. Retrieved passages are concatenated to the input passage which is then re-encoded by a Transformer model to perform a task. The key difference between retrieve-and-read approaches~\citep{realm, dpr, rag, fid} and \textsc{tome}\xspace is that \textsc{tome}\xspace retrieves dense representations, as opposed to text. That means that \textsc{tome}\xspace only applies a reader model once to a single input, while retrieve-and-read approaches have to apply an expensive BERT reader to many different passages. In addition, Transformer models can only process relatively short sequences, which imposes a binding constraint on the number of retrieved text passages that can be processed \textit{together}, whereas \textsc{tome}\xspace can retrieve and reason over information from many sources inside the same reader. Generative models like RAG~\citep{rag} or FiD~\citep{fid} attend to different retrieved documents in the decoder, but still have to apply a BERT read for every retrieved document, do not consider interaction between retrievals while encoding the question, and cannot perform iterative retrieval.
\section{Introduction} In profiling radar systems, range resolution is determined by the transmitted signal bandwidth. The synthetic bandwidth technique provides high-range-resolution (HRR) capability by transmitting a series of pulses with various carrier frequencies. Within each pulse, the bandwidth is small. The benefit of this method is to achieve a large signal bandwidth while retaining low system complexity. After collecting all the received pulses, the `stretch' algorithm \cite{Einstein1984} can be applied to derive HRR profiles \cite{wehner1995}. In synthetic HRR profiling, the back-scattering property of the target is discussed in detail in \cite{peikang2005}. Theoretical analysis and experimental results have demonstrated that target backscatter can be modeled as distributed point scatterers with individual amplitudes. This approximation of the target, known as the `point-scatterer model', is widely adopted in HRR profiling and imaging. The synthetic bandwidth technique is vulnerable to jammers and interference, owing to the large bandwidth occupied by the transmitted signal. When signal frequencies clash with environmental electromagnetic interference, not all of the scattered signal can be correctly acquired by the radar receiver. If the polluted signal is left unattended, synthetic HRR profiling is degraded. This frequency-domain interference problem in the synthetic profiling process can be modeled as a `missing data' problem. Various methods have been proposed to interpolate the missing data \cite{Babu2010} \cite{wang2006} \cite{Stoica2009}. These fitting algorithms are based on assumptions about signal models or properties. Then, `stretch' is applied to the processed data. However, if the missing parts are too long, the profiling result degrades due to inaccurate interpolation \cite{Babu2010}. For a large amount of missing data, the recently developed compressed sensing technique \cite{Baraniuk2007} \cite{Shah2009} is a solution.
However, when the missing pattern is block-like, compressed sensing results also deteriorate. Autocovariance is a second-order property of a signal. When part of the signal is missing, the autocovariance function (ACF) at various lags can still be estimated from the valid data. In this paper, we demonstrate a new approach for the missing data case, in which the autocovariance matrix is estimated using the limited available data. A subspace decomposition method is then applied to the estimated matrix to obtain scatterer range information. We present both simulation results and real data from radar systems to validate the new method. The paper is organized as follows. Section II describes the signal model. Section III presents the new method. Simulation results are presented in Section IV, and results derived from real radar data are presented in Section V. Section VI concludes the paper. \section{Signal Model of HRR Profiling} In the synthetic HRR profiling process, the radar system transmits a pulse train of $N$ pulses with various carrier frequencies. For the $n$th pulse, the carrier frequency is $f_n=f_0+n \Delta f$, where $f_0$ is the starting frequency and $\Delta f$ is the frequency step. The transmitted pulse at the $n$th frequency is $p_n(t)=A(t)\exp\{j2\pi f_n t\}$, in which $A(t)$ is the signal envelope. The total bandwidth is $N \Delta f$. Suppose a single reflecting point with scattering amplitude $\alpha$ is positioned at time delay $\tau$ (corresponding to range $c \tau /2$). At the receiver, the received signal is $r_n(t)=\alpha A(t-\tau)\exp\{j2\pi f_n (t-\tau)\}$. In synthetic radar HRR profiling, the point-scatterer model describes a target as a series of individual reflecting points \cite{peikang2005} with amplitudes $[\alpha_1,\alpha_2,\ldots,\alpha_K]$ and time delays $[\tau_1,\tau_2,\ldots,\tau_K]$. These scatterers compose a linear, time-invariant system.
The received signal is the superposition of the waves reflected from all the scatterers, plus noise: \begin{equation} r_n(t)=\sum_{k=1}^K \alpha_k A(t-\tau_k)\exp \{j2\pi f_n (t-\tau_k)\}+e(t) \end{equation} After quadrature demodulation and sampling, the complex sample of the $n$th pulse is \begin{equation} \label{eq:rx} y_n=\sum_{k=1}^K \alpha_k e^{-j2\pi f_n \tau_k}+e_n \end{equation} The sampled data are thus a superposition of multiple sinusoids. In a synthetic bandwidth system, the samples are collected as equally spaced frequency-domain samples. With these samples, synthetic HRR profiling is essentially the estimation of three parameters: the number of scatterers, their time delays, and their amplitudes. MUSIC \cite{stoica1997} and other subspace methods have been used in HRR profiling \cite{Kim2002}. The key step in traditional MUSIC, covariance matrix estimation, operates on the full data. For a synthetic bandwidth signal, due to interference or jamming at the receiver, some complex samples may not be obtainable. Suppose pulses $P_I=[m_1,m_2,\ldots,m_A]$ are available and $P_N=[m_{A+1},m_{A+2},\ldots,m_N]$ are polluted by interference. We designate \begin{IEEEeqnarray*}{C} I(k)= \begin{cases} 1 & \text{if } k \in P_I, \\ 0 & \text{if } k \in P_N. \end{cases} \end{IEEEeqnarray*} Then $I(k)$ is an indicator of the availability of the $k$th pulse. The observations in (\ref{eq:rx}) reduce to: \begin{equation} \label{eq:rxP} y_{m_i}=\sum_{k=1}^K \alpha_k e^{-j2\pi f_{m_i} \tau_k}+e_{m_i} \end{equation} In the following section, we present a new algorithm, named Missing-MUSIC or M-MUSIC, modified for the missing data case. \section{Algorithm Description} \subsection{Autocovariance Estimation} We start with an analysis of the signal autocovariance function (ACF). For a second-order stationary process, the autocovariances at different lags do not depend on the time shift.
In the full data case, the ACF of continuously sampled data $X_1,X_2,\ldots,X_N$ with zero mean is estimated by the unbiased estimator \begin{equation} \hat{c}(h)=\frac{1}{N-h}\sum_{i=1}^{N-h}X_{i+h} X_i^* \end{equation} where $h$ is the lag of the ACF and $(\cdot)^*$ denotes the complex conjugate. Note that for lag $h$, we have $N-h$ couples of sampled data to be averaged. For a sufficiently long series, the estimate is asymptotically consistent. However, when the sampled data are polluted by jammers, the number of available couples decreases as the polluted sample size increases. For the ACF at lag $h$, the number of useful couples is \begin{equation} Q(h)=\sum_{i=1}^{N-h} I(i)I(i+h). \end{equation} Each couple of samples in which both samples are valid is accounted for in the ACF estimation. A series of $Q(h)$ for different lags $h$ is calculated; in theory, $Q(h)$ lies in the range $[0,N-h]$. The ACF can then be estimated by averaging all available couples of sampled data in the following manner: \begin{equation} \hat{c}(h)=\frac{1}{Q(h)}\sum_{i=1}^{N-h} I(i) I(i+h) X_{i+h} X_i^*. \end{equation} Stated simply, we drop the couples involving polluted samples and average over the `clean' couples that remain. \subsection{Covariance Matrix Forming} In order to proceed with subspace identification, a covariance matrix has to be formed. For a second-order stationary process, elements on the same diagonal are equal. Thus the covariance matrix is a Toeplitz matrix, with each diagonal equal to the ACF at the corresponding lag: \begin{equation} \hat{C}=\left[ \begin{array}{cccc} \hat{c}(0) & \hat{c}(1) & \ldots & \hat{c}(L-1) \\ \hat{c}(-1) & \hat{c}(0) & \ldots & \hat{c}(L-2) \\ \vdots & \vdots & \ddots & \vdots \\ \hat{c}(1-L) & \hat{c}(2-L) & \ldots & \hat{c}(0) \end{array} \right]. \end{equation} Here $L$ is the size of the covariance matrix. The matrix is both Hermitian and Toeplitz. The choice of $L$ is important in this method.
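The missing-data ACF estimator and the Toeplitz matrix construction above can be sketched in a few lines of numpy (a minimal illustration; the function names are ours, not part of any standard toolbox):

```python
import numpy as np

def missing_acf(x, valid, h):
    """Unbiased ACF estimate at lag h, averaging only the Q(h)
    couples (i, i+h) in which both samples are valid."""
    pairs = [i for i in range(len(x) - h) if valid[i] and valid[i + h]]
    if not pairs:
        raise ValueError(f"no valid couples at lag {h}")
    return sum(x[i + h] * np.conj(x[i]) for i in pairs) / len(pairs)

def toeplitz_cov(x, valid, L):
    """L x L Hermitian Toeplitz matrix with each diagonal equal to
    the ACF estimate at the corresponding lag."""
    c = [missing_acf(x, valid, h) for h in range(L)]
    C = np.empty((L, L), dtype=complex)
    for r in range(L):
        for s in range(L):
            C[r, s] = c[s - r] if s >= r else np.conj(c[r - s])
    return C
```

The lower triangle uses $\hat{c}(-h)=\hat{c}(h)^*$, which holds for a stationary process.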
Too small a size reduces the accuracy and resolution of the subspace method \cite{Stoica1989} \cite{Stoica1990}. On the other hand, the ACF estimates are based on limited samples, and the missing data condition reduces the sample size further. Thus the size of $\hat{C}$ should be limited, or the ACF estimates at large lags will degrade the quality of the covariance matrix. A rule of thumb is to choose the largest $L$ subject to \begin{equation} \forall h<L, \quad Q(h) \ge \frac{N-h}{2} \end{equation} \subsection{Number of Scatterers and Range Estimation} All eigenvalues of the estimated covariance matrix $\hat{C}$ are real, since $\hat{C}$ is Hermitian. Let $\lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_L$ denote the eigenvalues of $\hat{C}$ listed in decreasing order. In subspace methods such as MUSIC, the eigenspace is divided into a signal subspace and a noise subspace. The signal subspace is the span of the eigenvectors corresponding to the large eigenvalues. In HRR profiling problems, the number of scatterers is usually unknown. However, significant scatterers with strong reflectivity determine the characteristics of the radar target. We may determine the number of scatterers from the eigenvalues above some `noise level'. Let $T$ be the threshold for significant eigenvalues; the estimated number of scatterers $\hat{K}$ is the number of eigenvalues with $\lambda_k > T$. $\hat{K}$ is also the dimension of the signal subspace, and the noise subspace has dimension $L-\hat{K}$. Suppose $\{ s_1,s_2,\ldots,s_{\hat{K}} \}$ are the $\hat{K}$ orthogonal eigenvectors associated with the $\hat{K}$ largest eigenvalues of $\hat{C}$. These vectors span the signal subspace $S$, and $\{ g_{\hat{K}+1},g_{\hat{K}+2},\ldots,g_L \}$ are the $L-\hat{K}$ eigenvectors spanning the noise subspace $G$. The range, or time delay, of the scatterers is determined by Root-MUSIC \cite{Barabell1983}. We define the polynomial \begin{equation} g_k (z)=\sum_{l=1}^L g_{lk} z^{-(l-1)} \end{equation} where $g_{lk}$ is the $l$th element of the noise eigenvector $g_k$.
We then solve for the roots of the polynomial \begin{equation} D(z)=\sum_{k=\hat{K}+1}^L g_k(z) g_k^* (1/z^*) \end{equation} Each root has the complex form $\hat{z}_i=|\hat{z}_i| e^{j \hat{\omega}_i}$. We retain the $\hat{K}$ complex roots $\{ \hat{z}_1, \hat{z}_2, \ldots, \hat{z}_{\hat{K}} \}$ that are closest to the unit circle in the complex plane. The time delay in (\ref{eq:rx}) is \begin{equation} \hat{\tau}_i=\frac{\hat{\omega}_i}{2\pi}. \end{equation} \subsection{Amplitude Estimation and Profile Forming} Substituting the estimated scatterer number $\hat{K}$ and time delays $\hat{\tau}_i$ into equation (\ref{eq:rxP}): \begin{equation} y_{m_i}=\sum_{k=1}^{\hat{K}} \alpha_k e^{-j2\pi f_{m_i} \hat{\tau}_k}+e_{m_i} \end{equation} or, in matrix form, \begin{equation} \mathbf{Y}=\mathbf{\hat{F}} \alpha + \mathbf{E} \end{equation} Each estimated scatterer contributes a steering vector to the observations. The reflective amplitudes of the $\hat{K}$ scatterers are simply the least-squares solution of the equations above: \begin{equation} \hat{\alpha}=(\mathbf{\hat{F}}^* \mathbf{\hat{F}})^{-1} \mathbf{\hat{F}}^* \mathbf{Y} \end{equation} At this point, all scatterer parameters are determined. To generate an HRR profile, we simply arrange the reflectivities of the scatterers according to their time delays, or equivalently their ranges. Note that the estimated amplitudes are complex; in HRR profiles, amplitudes are usually converted to absolute values and displayed on a log scale. \section{Simulation Result} To demonstrate the proposed technique, we simulated a synthetic bandwidth system with a total bandwidth of $960$MHz, in which $512$ pulses are equally spaced by $\Delta f=1.875$MHz. The theoretical range resolution of this system is about $0.15$ meters. We place four scatterers down range. The locations and amplitudes of the scatterers are drawn in figure \ref{pic:simutgt}. \begin{figure}[htbp] \centering \includegraphics[width=0.48\textwidth]{SimulationTargets.jpg} \caption{Target simulation of four scatterers with different reflectivity.
Closest scatterers are separated by 1 meter.} \label{pic:simutgt} \end{figure} \begin{table}[!t] \caption{Parameter Table for Missing Data Simulation} \label{tab:simupara} \centering \renewcommand{\arraystretch}{1.1} \begin{tabular}{c c} \hline \hline Parameter & Value\\ \hline Radio-frequency & X band\\ Frequency Step Size $(\Delta f)$ & 1.875MHz\\ Total Pulse Number & 512\\ Full Bandwidth & 960MHz \\ Valid Pulse Number & 300\\ SNR at the Receiver & 15dB\\ \hline \end{tabular} \end{table} \subsection{Random Missing Data} \begin{figure}[htbp] \centering \includegraphics[width=0.48\textwidth]{RandomMissing} \caption{Randomly distributed missing data pattern. Blue circles denote valid sampling frequencies. Red crosses are polluted frequencies; samples at these frequencies are not used. } \label{pic:lackrand} \end{figure} In this case, the interference is randomly distributed over the carrier frequencies, as drawn in figure \ref{pic:lackrand}. Simulation parameters are listed in Table \ref{tab:simupara}. Within the total bandwidth of 960MHz composed of 512 pulses, 212 pulses are polluted. The unavailable frequencies are randomly distributed within the total bandwidth. We compare the results of compressed sensing and of our new method. Both methods incorporate the Akaike information criterion to determine the number of scatterers. As shown in figure \ref{pic:resrand}, both methods prove capable of profiling the scatterers: the four recovered scatterers appear at the correct ranges, and the amplitudes reflect the original reflectivities. However, a closer look at the profiling results reveals some advantages of the new method. The compressed sensing result shows spurious scatterers surrounding the original ones, a phenomenon due to the `grid-mismatch' problem of that method \cite{Chi2011}. The new method, in contrast, resolves scatterer ranges via the roots of a polynomial, so the estimated ranges are not confined to pre-defined grids.
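The root-based range estimation of Section III, which avoids such grid mismatch, can be sketched as follows (a simplified numpy illustration using the standard Root-MUSIC noise-subspace polynomial; the function name and the normalized-frequency output are our choices):

```python
import numpy as np

def root_music_freqs(C, K):
    """Estimate K normalized frequencies from an L x L covariance
    matrix C by rooting the noise-subspace polynomial."""
    L = C.shape[0]
    _, V = np.linalg.eigh(C)               # eigenvalues in ascending order
    G = V[:, :L - K]                       # noise-subspace eigenvectors
    M = G @ G.conj().T
    # D(z) = sum_{m,n} M[m,n] z^(m-n); the diagonal sums of M, taken
    # from offset -(L-1) to L-1, give the coefficients of z^(2L-2)
    # down to z^0 of the (de-fractionalized) polynomial.
    coeffs = [np.trace(M, offset=o) for o in range(-(L - 1), L)]
    r = np.roots(coeffs)
    r = r[np.abs(r) < 1]                   # keep one root of each mirror pair
    r = r[np.argsort(-np.abs(r))][:K]      # the K roots closest to the unit circle
    return np.sort(np.angle(r) / (2 * np.pi) % 1)
```

The returned values are the normalized frequencies $\hat{\omega}_i/2\pi$; rescaling by the frequency step recovers time delays, i.e., ranges.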
\begin{figure}[htbp] \centering \subfigure[]{\includegraphics[width=0.48\textwidth]{RandResOMP} \label{pic:randomp}} \subfigure[]{\includegraphics[width=0.48\textwidth]{RandResMUSIC} \label{pic:randmusic}} \caption{(a) HRR profile generated by compressed sensing. (b) HRR profile generated by M-MUSIC. 212 pulses out of 512 transmitted pulses are polluted and discarded. Polluted frequencies are randomly distributed over the total bandwidth.} \label{pic:resrand} \end{figure} \subsection{Block Missing Data} We now consider interference over the total bandwidth that is block-shaped, as in figure \ref{pic:lackblock}. Simulation parameters are the same as in the previous subsection; only the frequency distribution of the interference differs. This interference pattern is common in real environments: signals transmitted by jammers or other radars occupy a continuous part of the bandwidth. \begin{figure}[htbp] \centering \includegraphics[width=0.48\textwidth]{BlockMissing.jpg} \caption{Block-shaped missing data pattern. Blue circles denote valid sampling frequencies. Red crosses are polluted frequencies; samples at these frequencies are not used. } \label{pic:lackblock} \end{figure} \begin{figure}[htbp] \centering \subfigure[]{\includegraphics[width=0.48\textwidth]{BlockResOMP} \label{pic:blockomp}} \subfigure[]{\includegraphics[width=0.48\textwidth]{BlockResMUSIC} \label{pic:blockmusic}} \caption{(a) HRR profile generated by compressed sensing. (b) HRR profile generated by M-MUSIC. 212 pulses within two bands are interfered with.} \label{pic:resblock} \end{figure} The compressed sensing result deteriorates further, owing to the properties of the sensing matrix under this missing pattern. The new method, displaying a stable profiling result, exhibits a clear advantage over the traditional method. \section{Real Radar Data Result} In order to verify the new method on a real system, a synthetic bandwidth radar was placed at the shore of a lake.
In this environment, clutter energy and other unwanted interference are sufficiently low. Experimental I/Q-channel data were collected from the baseband of the radar receiver. The signal parameters are listed in Table \ref{tab:real}. Two corner reflectors separated by about 4 meters were set above the water, as in figure \ref{pic:targetview}. The target is 5km away from the radar. In the profiling result, we expect two spikes of strong reflection. 200 pulses are deliberately jammed using electronic jammers, in a block-like pattern. \begin{table}[!t] \caption{Parameter Table for Real System} \label{tab:real} \centering \renewcommand{\arraystretch}{1.1} \begin{tabular}{c c} \hline \hline Parameter & Value\\ \hline Radio-frequency & X band\\ Frequency Step Size $(\Delta f)$ & 1.875MHz\\ Intra-pulse Bandwidth & 6MHz \\ Total Pulse Number & 512\\ Full Bandwidth & 960MHz \\ \hline \end{tabular} \end{table} \begin{figure}[htbp] \centering \includegraphics[width=0.48\textwidth]{LakeTarget} \caption{Two corner reflectors were placed above water. The range between the reflectors was roughly 4 meters. Two point scatterers are expected in the profile output.} \label{pic:targetview} \end{figure} \begin{figure}[htbp] \centering \subfigure[]{\includegraphics[width=0.48\textwidth]{RealTargetsOMP} \label{pic:realomp}} \subfigure[]{\includegraphics[width=0.48\textwidth]{RealTargets} \label{pic:realmusic}} \caption{Two reflectors are resolved at a separation of 3.95 meters. (a) HRR profile by OMP; note the multiple spurious reflectors surrounding each main reflector, caused by `grid mismatch'. (b) Result by M-MUSIC; two spikes suffice to represent the received signal.} \label{pic:realres} \end{figure} We compare compressed sensing and our new method on the HRR profile results. In Figure \ref{pic:realomp}, two scatterer points are resolved; however, multiple spikes are required to represent each reflector. This is the same phenomenon that appeared in the simulation.
The result of the new method in Figure \ref{pic:realmusic} clearly shows two scatterers. They are separated by 3.95 meters, correctly describing the positions of the two reflectors. \section{Conclusion} In this paper, we introduced a new HRR profiling algorithm for synthetic bandwidth signals. The new approach correctly resolves scatterer ranges and amplitudes under missing data conditions. Based on the sampled autocovariance, the signal covariance matrix is formed; subspace decomposition is then applied to resolve scatterer ranges, and amplitudes are obtained by least squares. Simulation and real data results show that the new method has an advantage over the compressed sensing method, which is widely used in the missing data case. Further applications of this method may include sinusoid decomposition and other radar imaging areas. \bibliographystyle{plain}
\section{Introduction} Principal pivot transform (PPT, or simply pivot) is a matrix transformation operation capable of partially (component-wise) inverting a given matrix. PPT was originally motivated by the well-known linear complementarity problem \cite{tucker1960}, and is applied in many other settings such as mathematical programming and numerical analysis, see \cite{Tsatsomeros2000151} for an overview. A natural restriction of pivot is to graphs (possibly with loops), i.e., symmetric matrices over $\mathbb{F}_2$. For graphs, each pivot operation can be decomposed into a sequence of elementary pivots. There are two types of elementary pivot operations, (frequently) called local complementation and edge complementation. These two graph operations are also (in fact, originally) defined for simple graphs. The operations are similar for graphs and simple graphs; however, for simple graphs, applicability is less restrictive. Local and edge complementation for simple graphs, introduced in \cite{kotzig1968} and \cite{bouchet1988} respectively, were originally motivated by the study of Euler circuits in 4-regular graphs and by the study of circle graphs (also called overlap graphs), as they model natural transformations of the underlying circle segments. Many other application domains for these operations have since appeared, e.g., quantum computing \cite{PhysRevA.69.022316}, the formal theory of gene assembly in ciliates \cite{GeneAssemblyBook} (a research area within computational biology), and the study of interlace polynomials, initiated in \cite{Arratia_InterlaceP_SODA}. In many contexts where local and edge complementation have been used, PPT has originally appeared in disguise (we briefly discuss some examples in the paper). In this paper we show that the pivot operator on matrices $A$ (over some field) and the symmetric difference operator on sets $Y$ have an equivalent effect w.r.t. the nullity value of the principal submatrices $A\sub{Y}$ of $A$.
We subsequently show that this nullity invariant can be formulated in terms of (a sequence of) set systems. Furthermore we discuss its consequences for pivot on graphs and in particular apply it to the interlace polynomial. It was shown in \cite{ArratiaBS04} that the interlace polynomial, which is defined for graphs, fulfills a characteristic recursive relation. We generalize the notion of interlace polynomial and its recursive relation to square matrices in general. In this way, we simplify the proof of the (original) recursive relation for interlace polynomials of graphs. Also, in Section~\ref{sec:background}, we recall a motivation of pivot applied to overlap graphs, and relate it to the nullity invariant. \section{Notation and Terminology} A \emph{set system} (over $V$) is a tuple $M = (V,D)$ with $V$ a finite set, called the \emph{domain} of $M$, and $D \subseteq 2^V$ a family of subsets of $V$. To simplify notation we often write $X \in M$ to denote $X \in D$. Moreover, we often simply write $V$ to denote the domain of the set system under consideration. We denote by $\oplus$ the logical exclusive-or (i.e., addition in $\mathbb{F}_2$), and we carry this operator over to sets: for sets $A, B \subseteq V$, $A \oplus B$ is the set defined by $x \in A \oplus B$ iff $(x \in A) \oplus (x \in B)$ for $x \in V$. For sets, the $\oplus$ operator is called symmetric difference. We consider matrices and vectors indexed by a finite set $V$. For a vector $v$ indexed by $V$, we denote the element of $v$ corresponding to $i \in V$ by $v[i]$. Also, we denote the nullity (dimension of the null space) and the determinant of a matrix $A$ by $n(A)$ and $\det(A)$ respectively. For $X \subseteq V$, the principal submatrix of $A$ w.r.t. $X$ is denoted by $A\sub{X}$. We consider undirected graphs without parallel edges, however we do allow loops. 
Hence a graph $G=(V,E)$ can be considered a symmetric $V \times V$-matrix $A = \left(a_{u,v}\right)$ over $\mathbb{F}_2$ (the field having two elements): for $u \in V$, $\{u\} \in E$ (i.e., $u$ has a loop in $G$) iff $a_{u,u} = 1$, and for $u,v \in V$ with $u \not=v$, $\{u,v\} \in E$ iff $a_{u,v} = 1$. We denote the set of edges of $G$ by $E(G)$. We often make no distinction between $G$ and its matrix representation $A$. Thus, e.g., we write $n(G) = n(A)$, and, for $X \subseteq V$, $G\sub{X} = A\sub{X}$, which consequently is the subgraph of $G$ induced by $X$. Note that as $G$ is represented by a matrix $A$ over $\mathbb{F}_2$, $n(G)$ is computed over $\mathbb{F}_2$. Also, for $Y \subseteq V$, we define $G \setminus Y = G \sub{V\setminus Y}$. In case $Y = \{v\}$ is a singleton, to simplify notation, we also write $G \setminus Y = G \setminus v$. As with set systems, we often write $V$ to denote the vertex set of the graph under consideration. \section{Background: Nullity and Counting Closed Walks} \label{sec:background} In this section we briefly and informally discuss an application of principal pivot transform where nullity plays an important role. In \cite{DBLP:journals/jct/CohnL72} a first connection between counting cycles and the nullity of a suitable matrix was established. It is shown in that paper that the number of cycles obtained as the result of applying disjoint transpositions to a cyclic permutation is described by the nullity of a corresponding ``interlace matrix''. It has been recognized in \cite{LT/BinNullity/09} that the result of \cite{DBLP:journals/jct/CohnL72} has an interpretation in terms of 2-in, 2-out digraphs (i.e., directed graphs with 2 incoming and 2 outgoing edges for each vertex), linking it to the interlace polynomial \cite{Arratia2004199}. We now discuss this interpretation in terms of 2-in, 2-out digraphs and subsequently show the connection to the pivot operation.
\begin{figure}[t] \begin{center} \unitlength 0.4mm% \begin{picture}(80,40) \gasset{AHnb=0,Nw=1.5,Nh=1.5,Nframe=n,Nfill=y} \gasset{AHnb=0,Nw=8,Nh=8,Nframe=y,Nfill=n} \node(1)(0,0){$1$} \node(2)(0,40){$2$} \node(3)(40,0){$3$} \node(5)(40,40){$5$} \node(6)(80,0){$6$} \node(4)(80,40){$4$} \drawedge(1,2){} \drawedge(1,3){} \drawedge(2,5){} \drawedge(3,5){} \drawedge(5,4){} \drawedge(5,6){} \drawedge(3,6){} \drawedge(4,6){} \end{picture} \end{center} \caption{The overlap graph of $s = 146543625123$.} \label{fig:overlap_graph} \end{figure} Let $V = \{1,2,3,4,5,6\}$ be an alphabet and let $s = 146543625123$ be a double occurrence string (i.e., each letter of the string occurs precisely twice) over $V$. The \emph{overlap graph} $O_s$ corresponding to $s$ has $V$ as the set of vertices and an edge $\{u,v\}$ precisely when $u$ and $v$ overlap: the vertices $u$ and $v$ appear either in order $u,v,u,v$ or in order $v,u,v,u$ in $s$. The overlap graph $O_s$ is given in Figure~\ref{fig:overlap_graph}. One may verify that the nullity of $O_s$ is $n(O_s) = 0$. Consider now the subgraph $O_s[X]$ of $O_s$ induced by $X = \{3,4,5,6\}$. Then it can be verified that $n(O_s[X]) = 2$. \begin{figure}[t] \begin{center} \unitlength 0.6mm% \begin{picture}(80,60) \gasset{AHLength=4,Nw=8,Nh=8,Nframe=y,Nfill=n} \node(1)(0,0){$1$} \node(2)(40,60){$2$} \node(3)(80,0){$3$} \node(4)(40,12){$4$} \node(5)(32,26){$5$} \node(6)(48,26){$6$} \drawedge(1,2){} \drawedge(3,1){} \drawedge(2,3){} \drawedge(5,4){} \drawedge(4,6){} \drawedge(6,5){} \drawedge(1,4){} \drawedge(5,1){} \drawedge(2,5){} \drawedge(6,2){} \drawedge(3,6){} \drawedge(4,3){} \end{picture} \end{center} \caption{A 2-in, 2-out digraph.} \label{fig:4_reg_graph} \end{figure} We discuss now the link with 2-in, 2-out digraphs (only in this section we consider digraphs). Let $G$ be the 2-in, 2-out digraph of Figure~\ref{fig:4_reg_graph} with $V = \{1,2,3,4,5,6\}$ as the set of vertices. 
Although our example graph does not have parallel edges, there is no objection to considering such ``2-in, 2-out multidigraphs''. Notice that the double occurrence string $s = 146543625123$ considered earlier corresponds to an Euler circuit $C$ of $G$. We now consider partitions $P$ of the edges of $G$ into closed walks (i.e., cycles where repeated vertices are allowed). Note that there are $2^{|V|}$ such partitions: if in a walk passing through vertex $v$ we go \emph{from} incoming edge $e$ of $v$ \emph{to} outgoing edge $e'$ of $v$, then necessarily we also walk in $P$ from the other incoming edge of $v$ to the other outgoing edge of $v$. Hence for each vertex there are two ``routes''. Let $P$ now be the partition of the edges of $G$ into $3$ closed walks indicated in Figure~\ref{fig:4_reg_graph_closed_walks} using three line thicknesses. Then $P$ follows the same route as the Euler circuit corresponding to $s$ in the vertices $\{1,2\}$, while in the other vertices $X = \{3,4,5,6\}$ it follows the other route. We say that $P$ is \emph{induced} by $X$ in $s$.
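The two-routes rule is easy to make executable; the following sketch (our own notation: $s$ as a string, $X$ as a set of symbols) counts the closed walks of the partition induced by $X$ in $s$:

```python
def closed_walks(s, X):
    """Number of closed walks in the partition of the edges induced
    by X in the double occurrence string s."""
    n = len(s)
    first, other = {}, {}
    for p, v in enumerate(s):
        if v in first:
            other[p], other[first[v]] = first[v], p
        else:
            first[v] = p
    # Edge e_i runs from s[i] to s[(i+1) % n]. After traversing e_i we
    # arrive at the head vertex at position p and leave via e_p (the
    # circuit route), or via the out-edge at the vertex's other
    # occurrence when the route is flipped at that vertex.
    succ = {}
    for i in range(n):
        p = (i + 1) % n
        succ[i] = other[p] if s[p] in X else p
    seen, walks = set(), 0
    for i in range(n):
        if i not in seen:
            walks += 1
            while i not in seen:
                seen.add(i)
                i = succ[i]
    return walks
```

For $s = 146543625123$ it returns $1$ for $X = \emptyset$ (the Euler circuit itself) and $3$ for $X = \{3,4,5,6\}$, matching Figure~\ref{fig:4_reg_graph_closed_walks}.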
\begin{figure}[t] \begin{center} \unitlength 0.6mm% \begin{picture}(80,60) \gasset{AHLength=6,Nw=8,Nh=8,Nframe=y,Nfill=n} \node(1)(0,0){$1$} \node(2)(40,60){$2$} \node(3)(80,0){$3$} \node(4)(40,12){$4$} \node(5)(32,26){$5$} \node(6)(48,26){$6$} \drawedge[linewidth=1.5](1,2){} \drawedge[linewidth=0.9](3,1){} \drawedge[linewidth=1.5](2,3){} \drawedge(5,4){} \drawedge(4,6){} \drawedge[linewidth=1.5](6,5){} \drawedge[linewidth=0.9](1,4){} \drawedge[linewidth=1.5](5,1){} \drawedge(2,5){} \drawedge(6,2){} \drawedge[linewidth=1.5](3,6){} \drawedge[linewidth=0.9](4,3){} \end{picture} \end{center} \caption{Partition of the edges of a 2-in, 2-out digraph into three closed walks.} \label{fig:4_reg_graph_closed_walks} \end{figure} Theorem~1 in \cite{DBLP:journals/jct/CohnL72} now states (applying it to the context of 2-in, 2-out digraphs) that the number of closed walks of a partition $P$ of edges induced by $X$ in $s$ is $n(O_s[X])+1$. In our case we have indeed $|P| = 3$ and $n(O_s[X])=2$. The pivot operation, which is recalled in the next section, has the property that it can map $O_{s_1}$ into $O_{s_2}$ for any two double occurrence strings $s_1$ and $s_2$ that correspond to Euler circuits of a 2-in, 2-out digraph $G$, see, e.g., the survey section of \cite{DBLP:journals/ejc/Bouchet01}. For example, the partition of edges induced by $\{1,3\}$ in $s$ corresponds to a single closed walk which may be described by the double occurrence string $s' = 123625146543$. It then holds that $O_{s'}$ is obtained from $O_{s}$ by pivot on $\{1,3\}$, denoted by $O_{s'} = O_{s}*\{1,3\}$. We notice that the partition induced by $\{1,3\} \oplus \{3,4,5,6\} = \{1,4,5,6\}$ in $s'$ is equal to the partition $P$ induced by $\{3,4,5,6\}$ in $s$ depicted in Figure~\ref{fig:4_reg_graph_closed_walks}. Hence $n(O_{s}*Y[Y \oplus X]) = n(O_s[X])$ for $X = \{3,4,5,6\}$ and $Y = \{1,3\}$. 
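The identity just observed can be checked mechanically over $\mathbb{F}_2$. The sketch below (helper names are ours) builds $O_s$ from the string, performs the pivot over $\mathbb{F}_2$ (where minus signs vanish mod 2), and compares nullities:

```python
import numpy as np

def overlap_graph(s):
    """Adjacency matrix over GF(2) of the overlap graph of a double
    occurrence string s; returns the matrix and a symbol->index map."""
    syms = sorted(set(s))
    idx = {v: i for i, v in enumerate(syms)}
    pos = {v: [p for p, c in enumerate(s) if c == v] for v in syms}
    A = np.zeros((len(syms), len(syms)), dtype=int)
    for u in syms:
        for v in syms:
            (a, b), (c, d) = pos[u], pos[v]
            if a < c < b < d or c < a < d < b:   # u and v interleave
                A[idx[u], idx[v]] = 1
    return A, idx

def gf2_nullity(M):
    """Nullity over GF(2) by Gaussian elimination."""
    M, r = M.copy() % 2, 0
    for c in range(M.shape[1]):
        piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        r += 1
    return M.shape[1] - r

def gf2_pivot(A, X):
    """Principal pivot transform over GF(2); assumes A[X] invertible."""
    Xc = [i for i in range(A.shape[0]) if i not in X]
    P, Q = A[np.ix_(X, X)], A[np.ix_(X, Xc)]
    R, S = A[np.ix_(Xc, X)], A[np.ix_(Xc, Xc)]
    n = len(X)                 # invert P via elimination on [P | I]
    M = np.concatenate([P % 2, np.eye(n, dtype=int)], axis=1)
    for c in range(n):
        piv = next(i for i in range(c, n) if M[i, c])
        M[[c, piv]] = M[[piv, c]]
        for i in range(n):
            if i != c and M[i, c]:
                M[i] ^= M[c]
    Pi = M[:, n:]
    B = np.zeros_like(A)
    B[np.ix_(X, X)] = Pi
    B[np.ix_(X, Xc)] = (Pi @ Q) % 2      # minus signs vanish mod 2
    B[np.ix_(Xc, X)] = (R @ Pi) % 2
    B[np.ix_(Xc, Xc)] = (S + R @ Pi @ Q) % 2
    return B
```

For $s = 146543625123$, $X = \{3,4,5,6\}$ and $Y = \{1,3\}$ this reproduces $n(O_s) = 0$, $n(O_s[X]) = 2$, and $n(O_{s}*Y[Y \oplus X]) = 2$.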
In Theorem~\ref{thm:nullity_invariant} below we prove this property for arbitrary $X$ and $Y$ and for arbitrary square matrices (over some field) instead of restricting to overlap graphs $O_{s}$. \section{Pivot} \label{sec:def_pivots} In this section we recall principal pivot transform (pivot for short) for square matrices over an arbitrary field in general, see also \cite{Tsatsomeros2000151}. Let $A$ be a $V \times V$-matrix (over an arbitrary field), and let $X \subseteq V$ be such that the corresponding principal submatrix $A\sub{X}$ is nonsingular, i.e., $\det A\sub{X} \neq 0$. The \emph{pivot} of $A$ on $X$, denoted by $A*X$, is defined as follows. If $P = A\sub{X}$ and $A = \left( \begin{array}{c|c} P & Q \\ \hline R & S \end{array} \right)$, then \begin{eqnarray}\label{pivot_def} A*X = \left( \begin{array}{c|c} P^{-1} & -P^{-1} Q \\ \hline R P^{-1} & S - R P^{-1} Q \end{array} \right). \end{eqnarray} Matrix $S - R P^{-1} Q$ is called the \emph{Schur complement} of $P$ in $A$. The pivot can be considered a partial inverse, as $A$ and $A*X$ are related by the following equality, where the vectors $x_1$ and $y_1$ correspond to the elements of $X$. This equality is characteristic as it is sufficient to define the pivot operation, see \cite[Theorem~3.1]{Tsatsomeros2000151}. \begin{eqnarray} \label{pivot_def_reverse} A \left( \begin{array}{c} x_1 \\ x_2 \end{array} \right) = \left(\begin{array}{c} y_1 \\ y_2 \end{array} \right) \mbox{ iff } A*X \left( \begin{array}{c} y_1 \\ x_2 \end{array} \right) = \left(\begin{array}{c} x_1 \\ y_2 \end{array} \right) \end{eqnarray} Note that if $\det A \not= 0$, then $A * V = A^{-1}$. Also note by Equation~(\ref{pivot_def_reverse}) that the pivot operation is an involution (operation of order $2$), and more generally, if $(A*X)*Y$ is defined, then it is equal to $A*(X \oplus Y)$. 
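Definition~(\ref{pivot_def}) translates directly into a few lines of numpy; the sketch below (the function name is ours) also exercises the involution and composition properties just mentioned on a small test matrix of our choosing:

```python
import numpy as np

def ppt(A, X):
    """Principal pivot transform A*X of Eq. (1); assumes A[X] nonsingular."""
    X = sorted(X)
    Xc = [i for i in range(A.shape[0]) if i not in X]
    Pi = np.linalg.inv(A[np.ix_(X, X)])                  # P^{-1}
    B = np.empty_like(A, dtype=float)
    B[np.ix_(X, X)] = Pi
    B[np.ix_(X, Xc)] = -Pi @ A[np.ix_(X, Xc)]            # -P^{-1} Q
    B[np.ix_(Xc, X)] = A[np.ix_(Xc, X)] @ Pi             # R P^{-1}
    B[np.ix_(Xc, Xc)] = (A[np.ix_(Xc, Xc)]               # Schur complement
                         - A[np.ix_(Xc, X)] @ Pi @ A[np.ix_(X, Xc)])
    return B

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])
assert np.allclose(ppt(ppt(A, [0, 1]), [0, 1]), A)       # involution
assert np.allclose(ppt(ppt(A, [0, 1]), [1, 2]),          # composition:
                   ppt(A, [0, 2]))                       # (A*X)*Y = A*(X xor Y)
```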
\section{Nullity Invariant} \label{sec:nullity_invar} It is well known that any Schur complement in a matrix $A$ has the same nullity as $A$ itself, see, e.g., \cite[Section~6.0.1]{SchurBook2005}. See moreover \cite[Chapter~0]{SchurBook2005} for a detailed historical account of the Schur complement. We can rephrase the nullity property of the Schur complement in terms of pivot as follows. \begin{Proposition}[Nullity of Schur complement] \label{prop:Schur} Let $A$ be a $V \times V$-matrix, and let $X \subseteq V$ such that $A\sub{X}$ is nonsingular. Then $n(A*X[V \backslash X]) = n(A[V])$. \end{Proposition} The following result is known from \cite{tucker1960} (see also \cite[Theorem~4.1.2]{cottle1992}). \begin{Proposition}\label{prop:tucker} Let $A$ be a $V\times V$-matrix, and let $X\subseteq V$ be such that $A\sub{X}$ is nonsingular. Then, for $Y \subseteq V$, $$ \det (A*X)\sub{Y} = \det A\sub{X \oplus Y} / \det A\sub{X}. $$ \end{Proposition} As a consequence of Proposition~\ref{prop:tucker} we have the following result. \begin{Corollary}\label{cor:tucker} Let $A$ be a $V\times V$-matrix, and let $X\subseteq V$ be such that $A\sub{X}$ is nonsingular. Then, for $Y \subseteq V$, $(A*X)\sub{Y}$ is nonsingular iff $A\sub{X \oplus Y}$ is nonsingular. \end{Corollary} We will now combine and generalize Proposition~\ref{prop:Schur} and Corollary~\ref{cor:tucker} to obtain Theorem~\ref{thm:nullity_invariant} below. We denote by $A \sharp X$ the matrix obtained from $A$ by replacing every row $v_x^T$ of $A$ belonging to $x \in V \setminus X$ by $i_x^T$ where $i_x$ is the vector having value $1$ at element $x$ and $0$ elsewhere. \begin{Lemma} \label{lem:nullity_principal_submatrix} Let $A$ be a $V \times V$-matrix and $X \subseteq V$. Then $n(A \sharp X) = n(A\sub{X})$. \end{Lemma} \begin{Proof} By rearranging the elements of $V$, $A$ is of the following form $\left(\begin{array}{c|c} P & Q \\ \hline R & S \end{array}\right)$ where $A\sub{X} = P$. 
Now $A \sharp X$ is $\left(\begin{array}{c|c} P & Q \\ \hline 0 & I \end{array}\right)$ where $I$ is the identity matrix of suitable size. We have $n(P) = n(A \sharp X)$. \end{Proof} We are now ready to prove the following result, which we refer to as the \emph{nullity invariant}. \begin{Theorem}\label{thm:nullity_invariant} Let $A$ be a $V\times V$-matrix, and let $X\subseteq V$ be such that $A\sub{X}$ is nonsingular. Then, for $Y \subseteq V$, $n((A*X)\sub{Y}) = n(A\sub{X \oplus Y})$. \end{Theorem} \begin{Proof} We follow the same line of reasoning as the proof of Parsons \cite{ParsonsTDP70} of Proposition~\ref{prop:tucker} (see also \cite[Theorem 4.1.1]{cottle1992}). Let $Ax = y$. Then $$ ((A \sharp X)x)[i] = \begin{cases} y[i] & \mbox{if } i \in X,\\ x[i] & \mbox{otherwise}. \end{cases} $$ As, by Equation~(\ref{pivot_def_reverse}), $$ ((A*X)(A \sharp X)x)[i] = \begin{cases} x[i] & \mbox{if } i \in X,\\ y[i] & \mbox{otherwise}, \end{cases} $$ we have, by considering each of the four cases depending on whether or not $i \in X$ and $i \in Y$ separately, $$ (((A*X) \sharp Y)(A \sharp X)x)[i] = \begin{cases} y[i] & \mbox{if } i \in X \oplus Y,\\ x[i] & \mbox{otherwise}. \end{cases} $$ Thus we have $((A*X) \sharp Y) (A \sharp X) = A \sharp (X \oplus Y)$. By Lemma~\ref{lem:nullity_principal_submatrix}, $n(A \sharp X) = n(A\sub{X}) = 0$, and therefore $A \sharp X$ is invertible. Therefore $n((A*X) \sharp Y) = n(A \sharp (X \oplus Y))$, and the result follows by Lemma~\ref{lem:nullity_principal_submatrix}. \end{Proof} By Theorem~\ref{thm:nullity_invariant}, we see that the pivot operator $*X$ on matrices and the symmetric difference operator $\oplus X$ on sets have an equivalent effect on the nullity values of principal submatrices. Note that Theorem~\ref{thm:nullity_invariant} generalizes Corollary~\ref{cor:tucker}, as a matrix is nonsingular iff its nullity is $0$ (the empty matrix is nonsingular by convention).
One can immediately see that Theorem~\ref{thm:nullity_invariant} generalizes Proposition~\ref{prop:Schur}. Also note that by replacing $Y$ by $V \setminus Y$ in Theorem~\ref{thm:nullity_invariant}, we also have, equivalently, $n((A*X)\sub{X \oplus Y}) = n(A\sub{Y})$. The ``Nullity Theorem'' \cite[Theorem~2]{Fiedler1986}, restricted to \emph{square} principal submatrices, states that if $A$ is an invertible $V\times V$-matrix, then, for $Y \subseteq V$, $n(A^{-1}\sub{Y}) = n(A\sub{V \setminus Y})$. Note that this is implied by Theorem~\ref{thm:nullity_invariant} as $A * V = A^{-1}$. \begin{Example} \label{ex:matrix_nullity_invar} Let $V = \{a,b,c\}$ and let $A$ be the $V \times V$-matrix $\left(\begin{array}{ccc} 1 & 2 & 5\\ 1 & 4 & 2\\ 3 & 2 & 1 \end{array} \right)$ over $\mathbb{Q}$ where the columns and rows are indexed by $a,b,c$ respectively. We see that $A\sub{\{b,c\}} = \left(\begin{array}{cc} 4 & 2\\ 2 & 1 \end{array} \right)$ and therefore $n(A\sub{\{b,c\}})=1$. Moreover, for $X = \{a,b\}$, the columns of $A\sub{X}$ are independent and thus $\det A\sub{X} \not= 0$. We have therefore that $A*X$ is defined, and it is given below. $$ A*X = \left(\begin{array}{ccc} 2 & -1 & -8\\ -\frac{1}{2} & \frac{1}{2} & \frac{3}{2}\\ 5 & -2 & -20 \end{array} \right) $$ By Theorem~\ref{thm:nullity_invariant}, we have $n(A\sub{\{b,c\}}) = n(A*X[X \oplus \{b,c\}]) = n(A*X[\{a,c\}])$. Therefore $n(A*X[\{a,c\}]) = 1$. This can easily be verified given $A*X[\{a,c\}]= \left(\begin{array}{cc} 2 & -8\\ 5 & -20 \end{array} \right) $ \end{Example} It is easy to verify from the definition of pivot that $A*X$ is skew-symmetric whenever $A$ is. In particular, if $G$ is a graph (i.e., a symmetric matrix over $\mathbb{F}_2$), then $G*X$ is also a graph. For graphs, all matrix computations, including the determinant, will be over $\mathbb{F}_2$. 
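Example~\ref{ex:matrix_nullity_invar} can be verified numerically; the following sketch simply instantiates the block formula of Equation~(\ref{pivot_def}) and compares nullities via the matrix rank:

```python
import numpy as np

A = np.array([[1., 2., 5.],       # rows/columns indexed a, b, c
              [1., 4., 2.],
              [3., 2., 1.]])

# Pivot on X = {a, b}: A*X = [[P^-1, -P^-1 Q], [R P^-1, S - R P^-1 Q]]
P, Q = A[:2, :2], A[:2, 2:]
R, S = A[2:, :2], A[2:, 2:]
Pi = np.linalg.inv(P)
B = np.block([[Pi, -Pi @ Q],
              [R @ Pi, S - R @ Pi @ Q]])

expected = np.array([[2., -1., -8.],
                     [-0.5, 0.5, 1.5],
                     [5., -2., -20.]])
assert np.allclose(B, expected)   # the matrix A*X from the example

def nullity(M):
    return M.shape[0] - np.linalg.matrix_rank(M)

assert nullity(A[np.ix_([1, 2], [1, 2])]) == 1    # n(A[{b,c}]) = 1
assert nullity(B[np.ix_([0, 2], [0, 2])]) == 1    # n((A*X)[{a,c}]) = 1
```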
\newcommand{\pivotorbitmacro}{ \gasset{AHLength=0,Nw=8,Nh=8,Nframe=y,Nfill=n} \node(1)(0,30){$1$} \node(2)(30,30){$2$} \node(3)(0,0){$3$} \node(4)(30,0){$4$} } \begin{figure}[t] \begin{center} \begin{picture}(90,30)(0,0) \unitlength 1mm% \node[Nw=25,Nh=25,Nframe=n](I)(15,15){% \unitlength 0.4mm% \begin{picture}(30,30) \pivotorbitmacro \drawedge(2,3){} \drawedge(2,4){} \drawedge(1,3){} \drawedge(3,4){} \drawloop[loopangle=90,loopdiam=7](2){} \drawloop[loopangle=-90,loopdiam=7](3){} \end{picture} } \node[Nw=25,Nh=25,Nframe=n](II)(75,15){% \unitlength 0.4mm% \begin{picture}(30,30) \pivotorbitmacro \drawedge(2,1){} \drawedge(2,4){} \drawedge(1,3){} \drawloop[loopangle=90,loopdiam=7](2){} \drawloop[loopangle=-90,loopdiam=7](4){} \end{picture} } \drawedge[AHnb=1,ATnb=1](I,II){$*\{1,2,3\}$} \end{picture} \end{center} \caption{Graphs $G$ and $G*X$ of Example~\ref{ex:graph_nullity_invar}.} \label{fig:ex_graph_nullity_invar} \end{figure} \begin{Example} \label{ex:graph_nullity_invar} Let $G$ be the graph given on the left-hand side of Figure~\ref{fig:ex_graph_nullity_invar}. Let $X = \{1,2,3\}$. Then the $X \times X$-matrix belonging to $G\sub{X}$ is $\left(\begin{array}{ccc} 0 & 0 & 1\\ 0 & 1 & 1\\ 1 & 1 & 1 \end{array} \right)$ where the columns and rows represent vertices $1,2,3$, respectively. We see that the columns of $G\sub{X}$ are independent (over $\mathbb{F}_2$) and therefore $\det G\sub{X} = 1$. Consequently $G*X$ is defined and the graph is given on the right-hand side of Figure~\ref{fig:ex_graph_nullity_invar}. Let now $Y = \{1,4\}$. We see that $G\sub{Y}$ is a discrete graph (i.e., the graph has no edges). Therefore $n(G\sub{Y}) = 2$. Now by Theorem~\ref{thm:nullity_invariant}, we have $n(G\sub{Y}) = n(G*X[X \oplus Y]) = n(G*X[\{2,3,4\}])$. One may verify that removing vertex $1$ from $G*X$ indeed yields a graph of nullity $2$. \end{Example} \section{Set Systems} \label{sec_seq_set_systems} Let $A$ be a $V\times V$-matrix.
Let $\mathcal{M}_A = (V,D)$ be the set system with $X \in D$ iff $A\sub{X}$ is nonsingular. Set system $\mathcal{M}_A$ turns out to fulfill a specific exchange axiom if $A$ is (skew-)symmetric, making it in this case a delta-matroid \cite{bouchet1987} (we will not recall its definition here as we do not use this notion explicitly). Let $M = (V,D)$ be a set system. We define, for $X \subseteq V$, the \emph{pivot} (often called \emph{twist}) of $M$ on $X$, denoted $M*X$, by $(V,D*X)$ where $D*X = \{Y \oplus X \mid Y \in D\}$. By Corollary~\ref{cor:tucker} it is easy to verify, see \cite{Geelen97}, that the operations of pivot on set systems and matrices match, i.e., $\mathcal{M}_{A}*X = \mathcal{M}_{A*X}$ if the right-hand side is defined (i.e., if $X \in \mathcal{M}_{A}$). Theorem~\ref{thm:nullity_invariant} now allows for a generalization of this result from the set system $\mathcal{M}_{A}$ of nullity $0$ to a ``sequence of set systems'' $\mathcal{P}_A$ for each possible nullity $i$. We formalize this as follows. For a finite set $V$, we call a sequence $P = (P_0,P_1,\ldots,P_n)$ with $n = |V|$ and $P_i \subseteq 2^V$ for all $i \in \{0,\ldots,n\}$ a \emph{partition sequence} (over $V$) if the nonempty $P_i$'s form a partition of $2^V$. Regarding $P$ as a vector indexed by $\{0,\ldots,n\}$, we denote $P_i$ by $P[i]$. Moreover, we define for partition sequence $P$ and $X \subseteq V$, the \emph{pivot} of $P$ on $X$, denoted by $P*X$, to be the partition sequence $(P_0*X,P_1*X,\ldots,P_n*X)$. Also, we call the vector $(|P_0|,|P_1|,\ldots,|P_n|)$ of dimension $n+1$, denoted by $\|P\|$, the \emph{norm} of $P$. Clearly, $\|P\| = \|P*X\|$, i.e., the norm of $P$ is invariant under pivot. For a $V \times V$-matrix $A$ we denote by $\mathcal{P}_A$ the partition sequence over $V$ where $X \in \mathcal{P}_A[i]$ iff $n(A\sub{X}) = i$.
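For small $V$ the partition sequence can be enumerated directly. As an illustration of the norm invariance $\|P\| = \|P*X\|$, the following sketch (plain Python over $\mathbb{F}_2$; helper names are ours, not from the paper) computes the norm $\|\mathcal{P}_G\|$ by brute force and confirms that the two pivot-related graphs of Figure~\ref{fig:ex_graph_nullity_invar} have the same norm.

```python
import itertools

def rank_gf2(M):
    """Gaussian elimination over GF(2); returns the rank of a 0/1 matrix."""
    M = [row[:] for row in M]
    rank = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((r for r in range(rank, len(M)) if M[r][c]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for r in range(len(M)):
            if r != rank and M[r][c]:
                M[r] = [(a + b) % 2 for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

def norm(G):
    """The norm ||P_G||: entry i counts the subsets S with n(G[S]) = i."""
    n = len(G)
    counts = [0] * (n + 1)
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            sub = [[G[i][j] for j in S] for i in S]
            counts[len(S) - rank_gf2(sub)] += 1
    return counts

# Adjacency matrices (diagonal entries = loops) of G and G*{1,2,3} from
# the figure of Example ex:graph_nullity_invar, vertices renumbered 0..3.
G  = [[0, 0, 1, 0], [0, 1, 1, 1], [1, 1, 1, 1], [0, 1, 1, 0]]
GX = [[0, 1, 1, 0], [1, 1, 0, 1], [1, 0, 0, 0], [0, 1, 0, 1]]
assert norm(G) == norm(GX) == [8, 7, 1, 0, 0]
```

The computed norm $(8,7,1,0,0)$ agrees with the partition sequences listed in the example for these graphs further below.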
As nullity $0$ corresponds to a non-zero determinant (this holds also for $\emptyset$ as $\det A\sub{\emptyset} = 1$ by convention), we have $\mathcal{M}_A = (V,\mathcal{P}_A[0])$. We now have the following consequence of Theorem~\ref{thm:nullity_invariant}. Note that $X \in \mathcal{P}_A[0]$ iff $A*X$ is defined. \begin{Theorem} \label{thm:twist_gen_matroid} Let $A$ be a $V\times V$-matrix, and $X \in \mathcal{P}_A[0]$. Then $\mathcal{P}_{A*X} = \mathcal{P}_A*X$. \end{Theorem} \begin{Proof} By Theorem~\ref{thm:nullity_invariant} we have for all $i\in\{0,\ldots,n\}$, $Y \in \mathcal{P}_{A*X}[i]$ iff $n((A*X)[Y]) = i$ iff $n(A[X \oplus Y]) = i$ iff $X \oplus Y \in \mathcal{P}_{A}[i]$ iff $Y \in \mathcal{P}_{A}[i]*X$. \end{Proof} Since the norm of a partition sequence is invariant under pivot, we have by Theorem~\ref{thm:twist_gen_matroid}, $\|\mathcal{P}_A\| = \|\mathcal{P}_{A*X}\|$. Therefore, for each $i \in \{0,\ldots,n\}$, the number of principal submatrices of $A$ of nullity $i$ is equal to the number of principal submatrices of $A*X$ of nullity $i$. For $X \subseteq V$, it is easy to see that $\mathcal{P}_{A[X]}$ is obtained from $\mathcal{P}_A$ by removing all $Y \in \mathcal{P}_A[i]$ containing at least one element outside $X$: $\mathcal{P}_{A[X]}[i] = \{Z \subseteq X \mid Z \in \mathcal{P}_A[i]\}$ for all $i \in \{0,\ldots,|X|\}$. \begin{Example} For matrix $A$ from Example~\ref{ex:matrix_nullity_invar}, we have $\mathcal{P}_A = (P_0,P_1,P_2,P_3)$ with $P_0 = 2^V\setminus \{\{b,c\}\}$, $P_1 = \{\{b,c\}\}$, and $P_2 = P_3 = \emptyset$.
\end{Example} \begin{Example} For graph $G$ from Example~\ref{ex:graph_nullity_invar}, depicted on the left-hand side of Figure~\ref{fig:ex_graph_nullity_invar}, we have $\mathcal{P}_G = (P_0,P_1,P_2,P_3,P_4)$ with \begin{eqnarray*} P_0 &=& \{\emptyset, \{2\}, \{3\}, \{1,3\}, \{2,4\}, \{3,4\}, \{1,2,3\}, \{1,2,3,4\}\}, \\ P_1 &=& \{\{1\}, \{4\}, \{1,2\}, \{2,3\}, \{1,2,4\}, \{1,3,4\}, \{2,3,4\}\}, \\ P_2 &=& \{\{1,4\}\}, P_3 = P_4 = \emptyset. \end{eqnarray*} By Theorem~\ref{thm:twist_gen_matroid} we have for $G*X$ with $X = \{1,2,3\}$, depicted on the right-hand side of Figure~\ref{fig:ex_graph_nullity_invar}, $\mathcal{P}_{G*X} = (P'_0,P'_1,P'_2,P'_3,P'_4)$ where \begin{eqnarray*} P'_0 &=& \{\emptyset, \{2\}, \{4\}, \{1,2\}, \{1,3\}, \{1,2,3\}, \{1,2,4\}, \{1,3,4\}\}, \\ P'_1 &=& \{\{1\}, \{3\}, \{1,4\}, \{2,3\}, \{2,4\}, \{3,4\}, \{1,2,3,4\}\}, \\ P'_2 &=& \{\{2,3,4\}\}, P'_3 = P'_4 = \emptyset. \end{eqnarray*} We have $\|\mathcal{P}_G\| = \|\mathcal{P}_{G*X}\| = (8,7,1,0,0)$. \end{Example} \begin{Example} \label{ex:part_seq_overlap_graph} In the context of Section~\ref{sec:background}, where matrix $A$ is the overlap graph $O_s$ for some double occurrence string $s$, we have that $\|\mathcal{P}_{O_s}\|[i]$ is the number of partitions of the edges of the 2-in, 2-out digraph $D$ corresponding to $s$ into closed walks of $D$, such that the number of closed walks is precisely $i+1$. The value $\|\mathcal{P}_{O_s}\|[0]$ is therefore the number of Euler circuits in $D$. \end{Example} \section{Elementary Pivots on Graphs} \label{sec:pivots_graphs} From now on we consider pivot on graphs (i.e., symmetric $V \times V$-matrices over $\mathbb{F}_2$), and thus all matrix computations will be over $\mathbb{F}_2$. Hence for graph $G$, $\mathcal{M}_G = (V,D_G)$ is the set system with $X \in D_G$ iff $\det(G\sub{X}) = 1$. Also, $G$ can be (re)constructed given $\mathcal{M}_G$.
Indeed, $\{u\}$ is a loop in $G$ iff $\{u\} \in D_G$, and $\{u,v\}$ is an edge in $G$ iff $(\{u,v\} \in D_G) \oplus ((\{u\} \in D_G) \wedge (\{v\} \in D_G))$, see \cite[Property~3.1]{Bouchet_1991_67}. Therefore, the function $\mathcal{M}_{(\cdot)}$ assigning to each graph $G$ the set system $\mathcal{M}_G$ is an injective function from the family of graphs to the family of set systems. In this way, the family of graphs may be regarded as a subclass of the family of set systems. Note that $\mathcal{M}_G*X$ is defined for all $X \subseteq V$, while pivot on graphs $G*X$ is defined only if $X \in \mathcal{M}_G$ (or equivalently, $\emptyset \in \mathcal{M}_G*X$). In this section we recall from \cite{Geelen97} that the pivot operation on graphs can be defined as compositions of two graph operations: local complementation and edge complementation. The pivots $G*X$ where $X$ is a minimal element of $\mathcal{M}_G \backslash \{\emptyset\}$ w.r.t. inclusion are called \emph{elementary}. It is noted in \cite{Geelen97} that an elementary pivot $X$ on graphs corresponds to either a loop, $X = \{u\} \in E(G)$, or to an edge, $X = \{u,v\} \in E(G)$, where both vertices $u$ and $v$ are non-loops. Thus for $Y \in \mathcal{M}_G$, if $G[Y]$ has elementary pivot $X_1$, then $Y \setminus X_1 = Y \oplus X_1 \in \mathcal{M}_{G*X_1}$. In this way, each $Y \in \mathcal{M}_G$ can be partitioned as $Y = X_1 \cup \cdots\cup X_n$ such that $G*Y = G*(X_1 \oplus \cdots\oplus X_n) = (\cdots(G*X_1)\cdots * X_n)$ is a composition of elementary pivots. Consequently, a direct definition of the elementary pivots on graphs $G$ is sufficient to define the (general) pivot operation on graphs. The elementary pivot $G*\{u\}$ on a loop $\{u\}$ is called \emph{local complementation}.
It is the graph obtained from $G$ by complementing the edges in the neighbourhood $N_G(u) = \{ v \in V \mid \{u,v\} \in E(G), u \not= v \}$ of $u$ in $G$: for each $v,w \in N_G(u)$, $\{v,w\}\in E(G)$ iff $\{v,w\} \not\in E(G*\{u\})$, and $\{v\}\in E(G)$ iff $\{v\} \not\in E(G*\{u\})$ (the case $v=w$). The other edges are left unchanged. The elementary pivot $G*\pair uv$ on an edge $\pair uv$ between distinct non-loop vertices $u$ and $v$ is called \emph{edge complementation}. For a vertex $x$ consider its closed neighbourhood $N'_G(x)= N_G(x)\cup \{x\}$. The edge $\pair uv$ partitions the vertices of $G$ connected to $u$ or $v$ into three sets $V_1 = N'_G(u) \setminus N'_G(v)$, $V_2 = N'_G(v) \setminus N'_G(u)$, $V_3 = N'_G(u) \cap N'_G(v)$. Note that $u,v \in V_3$. \begin{figure}[t] \centerline{\unitlength 1.0mm \begin{picture}(55,42)(0,1) \drawccurve(02,28)(25,21)(48,32)(25,39) \drawccurve(00,10)(10,00)(20,10)(10,20) \drawccurve(30,10)(40,00)(50,10)(40,20) \gasset{AHnb=0,Nw=1.5,Nh=1.5,Nframe=n,Nfill=y} \gasset{ExtNL=y,NLdist=1.5,NLangle=90} \put(10,02){\makebox(0,0)[cc]{$V_1$}} \put(40,02){\makebox(0,0)[cc]{$V_2$}} \put(25,36){\makebox(0,0)[cc]{$V_3$}} \node(u)(09,28){$u$} \node(v)(20,30){$v$} \node(uu)(29,32){} \node(vv)(41,28){} \node(u1)(7,14){} \node(u2)(14,7){} \node(v1)(38,7){} \node(v2)(43,14){} \drawedge(u,v){} \drawedge(u,u1){} \drawedge(u,u2){} \drawedge(v,v1){} \drawedge(v,v2){} \drawedge(u1,v2){} \drawedge(u2,v1){} \drawedge[dash={1}0](v1,v2){} \drawedge[dash={1}0](u1,u2){} \drawedge[dash={1}0](uu,vv){} \drawedge(uu,u1){} \drawedge(vv,u2){} \drawedge(uu,v1){} \drawedge(vv,v2){} \end{picture} \begin{picture}(55,42)(0,1) \drawccurve(02,28)(25,21)(48,32)(25,39) \drawccurve(00,10)(10,00)(20,10)(10,20) \drawccurve(30,10)(40,00)(50,10)(40,20) \gasset{AHnb=0,Nw=1.5,Nh=1.5,Nframe=n,Nfill=y} \gasset{ExtNL=y,NLdist=1.5,NLangle=90} \put(10,02){\makebox(0,0)[cc]{$V_1$}} \put(40,02){\makebox(0,0)[cc]{$V_2$}} \put(25,36){\makebox(0,0)[cc]{$V_3$}} 
\node(u)(09,28){$u$} \node(v)(20,30){$v$} \node(uu)(29,32){} \node(vv)(41,28){} \node(u1)(7,14){} \node(u2)(14,7){} \node(v1)(38,7){} \node(v2)(43,14){} \drawedge(u,v){} \drawedge(v,u1){} \drawedge(v,u2){} \drawedge(u,v1){} \drawedge(u,v2){} \drawedge(u1,v1){} \drawedge(u2,v2){} \drawedge[dash={1}0](v1,v2){} \drawedge[dash={1}0](u1,u2){} \drawedge[dash={1}0](uu,vv){} \drawedge(uu,u2){} \drawedge(vv,u1){} \drawedge(uu,v2){} \drawedge(vv,v1){} \end{picture}% } \caption{Pivoting on an edge $\pair uv$ in a graph with both $u$ and $v$ non-loops. Connection $\pair xy$ is toggled iff $x\in V_i$ and $y\in V_j$ with $i\neq j$. Note that $u$ and $v$ are connected to all vertices in $V_3$; these edges are omitted in the diagram. The operation does not affect edges adjacent to vertices outside the sets $V_1,V_2,V_3$, nor does it change any of the loops.}% \label{fig:pivot} \end{figure} The graph $G*\pair uv$ is constructed by ``toggling'' all edges between different $V_i$ and $V_j$: for $\pair xy$ with $x\in V_i$, $y\in V_j$ and $i\neq j$: $\pair xy \in E(G)$ iff $\pair xy \notin E(G*\pair uv)$, see Figure~\ref{fig:pivot}. The other edges remain unchanged. Note that, as a result of this operation, the neighbours of $u$ and $v$ are interchanged.
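Both elementary pivots are a few lines of code on an adjacency matrix whose diagonal entries encode loops. The sketch below (plain Python over $\mathbb{F}_2$; function names are ours, not from the paper) follows the descriptions above; composing the two operations reproduces the pivot $G*\{1,2,3\}$ of Figure~\ref{fig:ex_graph_nullity_invar} (vertices renumbered $0$--$3$).

```python
def local_complement(G, u):
    """G*{u} for a loop at u: toggle all pairs (v, w) with v, w in N(u);
    the case v == w toggles the loop at v. Other entries are untouched."""
    H = [row[:] for row in G]
    nbrs = [v for v in range(len(G)) if v != u and G[u][v]]
    for v in nbrs:
        for w in nbrs:
            H[v][w] ^= 1
    return H

def edge_complement(G, u, v):
    """G*{u,v} for an edge {u,v} with u, v both non-loops: toggle every
    pair lying in two different classes V1, V2, V3. The interchange of
    the neighbourhoods of u and v falls out automatically (u, v in V3)."""
    n = len(G)
    Nu = {x for x in range(n) if G[u][x]} | {u}   # closed neighbourhoods
    Nv = {x for x in range(n) if G[v][x]} | {v}
    V1, V2, V3 = Nu - Nv, Nv - Nu, Nu & Nv
    H = [row[:] for row in G]
    for P, Q in ((V1, V2), (V1, V3), (V2, V3)):
        for x in P:
            for y in Q:
                H[x][y] ^= 1
                H[y][x] ^= 1
    return H

# G from Figure fig:ex_graph_nullity_invar (vertices 1..4 -> 0..3):
# G*{0,1,2} decomposes into the loop pivot *{1} followed by the edge
# pivot *{0,2}, and agrees with the right-hand graph of that figure.
G = [[0, 0, 1, 0], [0, 1, 1, 1], [1, 1, 1, 1], [0, 1, 1, 0]]
H = edge_complement(local_complement(G, 1), 0, 2)
assert H == [[0, 1, 1, 0], [1, 1, 0, 1], [1, 0, 0, 0], [0, 1, 0, 1]]
```

Since within-class pairs are never toggled and $x \neq y$ always holds, the loops and the edge $\{u,v\}$ itself are left unchanged, as required.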
\renewcommand{\pivotorbitmacro}{ \gasset{AHLength=0,Nw=8,Nh=8,Nframe=y,Nfill=n} \node(p)(30,30){$p$} \node(q)(0,0){$q$} \node(r)(30,0){$r$} } \begin{figure}[t] \unitlength 1mm% \begin{center} \begin{picture}(100,48)(-10,-05) \node[Nw=25,Nh=25,Nframe=n](I)(00,35){% \unitlength0.4mm \begin{picture}(30,30) \pivotorbitmacro \drawedge(p,q){} \drawedge(p,r){} \drawloop[loopangle=-135,loopdiam=7](q){} \end{picture} } \node[Nw=25,Nh=25,Nframe=n](II)(40,35){% \unitlength0.4mm \begin{picture}(30,30) \pivotorbitmacro \drawedge(p,q){} \drawedge(p,r){} \drawloop[loopangle=45,loopdiam=7](p){} \drawloop[loopangle=-135,loopdiam=7](q){} \end{picture} } \node[Nw=25,Nh=25,Nframe=n](III)(80,17){% \unitlength0.4mm \begin{picture}(30,30) \pivotorbitmacro \drawedge(p,q){} \drawedge(p,r){} \drawedge(q,r){} \drawloop[loopangle=45,loopdiam=7](p){} \drawloop[loopangle=-45,loopdiam=7](r){} \end{picture} } \node[Nw=25,Nh=25,Nframe=n](IV)(40,00){% \unitlength0.4mm \begin{picture}(30,30) \pivotorbitmacro \drawedge(p,r){} \drawedge(q,r){} \drawloop[loopangle=-135,loopdiam=7](q){} \drawloop[loopangle=-45,loopdiam=7](r){} \end{picture} } \node[Nw=25,Nh=25,Nframe=n](V)(00,00){% \unitlength0.4mm \begin{picture}(30,30) \pivotorbitmacro \drawedge(p,r){} \drawedge(q,r){} \drawloop[loopangle=-135,loopdiam=7](q){} \end{picture} } \drawedge[AHnb=1,ATnb=1](I,II){$*\{q\}$} \drawedge[AHnb=1,ATnb=1](II,III){$*\{p\}$} \drawedge[AHnb=1,ATnb=1](III,IV){$*\{r\}$} \drawedge[AHnb=1,ATnb=1](IV,V){$*\{q\}$} \drawedge[AHnb=1,ATnb=1](V,I){$*\{p,r\}$} \end{picture} \end{center} \caption{The orbit of a graph under pivot. Only the elementary pivots are shown.}\label{fig:pivot_space} \end{figure} \begin{Example} Figure~\ref{fig:pivot_space} depicts an orbit of graphs under pivot. The figure also shows the applicable elementary pivots (i.e., local and edge complementation) of the graphs within the orbit. \end{Example} Interestingly, in many contexts, principal pivot transform originally appeared in disguise. 
For example, PPT was recognized in \cite{GlantzP06} as the operation underlying the recursive definition of the interlace polynomial, introduced in \cite{Arratia_InterlaceP_SODA}. We will consider the interlace polynomial in the next section. Also, e.g., the graph model defined in \cite{Equiv_String_Graph_2} within the formal theory of (intramolecular) gene assembly in ciliates turns out to be exactly the elementary pivots, as noted in \cite{BHH/PivotsDetPM/09}. Furthermore, the proof of the result from \cite{DBLP:journals/jct/CohnL72}, connecting nullity to the number of cycles in permutations, as mentioned in Section~\ref{sec:background}, implicitly uses the Schur complement (which is an essential part of PPT). \section{The Interlace Polynomial} The interlace polynomial is a graph polynomial introduced in \cite{Arratia_InterlaceP_SODA,Arratia2004199}. We follow the terminology of \cite{ArratiaBS04}. The single-variable interlace polynomial (simply called interlace polynomial in \cite{Arratia2004199}) for a graph $G$ (possibly with loops) is defined by $$ q(G) = \sum_{S\subseteq V} (y-1)^{n(G\sub{S})}. $$ It is shown in \cite{ArratiaBS04} that the interlace polynomial fulfills an interesting recursive relation, cf. Proposition~\ref{prop:arratia_recursion} below, which involves local and edge complementation. As we consider here its generalization, principal pivot transform, it makes sense now to define the interlace polynomial for $V \times V$-matrices (over some arbitrary field) in general. Therefore, we define the \emph{interlace polynomial} for $V \times V$-matrix $A$ as $$ q(A) = \sum_{S\subseteq V} (y-1)^{n(A\sub{S})}. $$ We may (without loss of information) change variables $y := y-1$ in the definition of the interlace polynomial to obtain $$ q'(A) = \sum_{S\subseteq V} y^{n(A\sub{S})}.
$$ As $q(A)$ (and $q'(A)$) deals with nullity values for (square) matrices in general, one can argue that the \emph{nullity polynomial} is a more appropriate name for these polynomials. We see that the coefficient $a_i$ of the term $a_i y^i$ in $q'(A)$ is equal to $\|\mathcal{P}_{A}\|[i]$ (the element of $\|\mathcal{P}_{A}\|$ corresponding to $i$) for all $i \in \{0,\ldots,n\}$. Therefore, we have for matrices $A$ and $A'$, $q(A) = q(A')$ iff $q'(A) = q'(A')$ iff $\|\mathcal{P}_{A}\| = \|\mathcal{P}_{A'}\|$. \begin{Example} \label{ex:int_poly_overlap_graph} Let $O_s$ be the overlap graph for some double occurrence string $s$, and let $a_i$ be the coefficient of the term $a_i y^i$ in $q'(O_s)$. We have, see Example~\ref{ex:part_seq_overlap_graph}, that $a_i$ is equal to the number of partitions of the edges of the 2-in, 2-out digraph $D$ corresponding to $s$ into closed walks of $D$, such that the number of closed walks is precisely $i+1$. More specifically, $a_0$ is the number of Euler circuits in $D$. The interlace polynomial was originally motivated by the computation of these coefficients $a_i$ of 2-in, 2-out digraphs, see \cite{Arratia2004199}. \end{Example} It is shown in \cite{Arratia2004199} that the interlace polynomial is invariant under edge complementation. By Theorem~\ref{thm:twist_gen_matroid} we see directly that this holds for pivot in general: $\|\mathcal{P}_{A*X}\| = \|\mathcal{P}_{A}\|$ and equivalently $q(A*X) = q(A)$. Furthermore, we show that $q(A)$ fulfills the following recursive relation. \begin{Theorem} \label{thm:disjoint_sum} Let $A$ be a $V \times V$-matrix (over some field), let $X \subseteq V$ with $A[X]$ nonsingular, and let $u \in X$. We have $q(A) = q(A\setminus u) + q(A*X\setminus u)$. \end{Theorem} \begin{Proof} Let $\mathcal{P}_A = (P_0,P_1,\ldots,P_n)$. Since $X$ is nonempty and $A[X]$ is nonsingular, $P_n = \emptyset$. Let $R = (P_0,P_1,\ldots,P_{n-1})$. Let $Z \in P_i$ for $i \in \{0,1,\ldots,n-1\}$.
We have $Z \subseteq V$ \emph{does not} appear in $\mathcal{P}_{A\setminus u}$ iff $u \in Z$ iff $u \not\in Z \oplus X$ iff $Z \oplus X$ \emph{does} appear in $\mathcal{P}_{A*X\setminus u}$. Hence $\|R\| = \|\mathcal{P}_{A\setminus u}\| + \|\mathcal{P}_{A*X\setminus u}\|$ (point-wise addition of the two vectors), and the statement holds. \end{Proof} The recursive relation for the single-variable interlace polynomial in \cite{ArratiaBS04} is now easily obtained from Theorem~\ref{thm:disjoint_sum} by restricting to the case of elementary pivots on graphs.\footnote{We use here the fact observed in \cite{GlantzP06} that the operations in the recursive relations of \cite{ArratiaBS04} are exactly the elementary pivots of Section~\ref{sec:pivots_graphs}, assuming that the neighbours of $u$ and $v$ are interchanged after applying the ``pivot'' operation of \cite{ArratiaBS04} on edge $\{u,v\}$.} \begin{Proposition}[\cite{ArratiaBS04}] \label{prop:arratia_recursion} Let $G$ be a graph. Then $q(G)$ fulfills the following conditions. \begin{enumerate} \item $q(G) = q(G\setminus u) + q(G*\{u,v\}\setminus u)$ for edge $\{u,v\}$ in $G$ where neither $u$ nor $v$ has a loop, \item $q(G) = q(G\setminus u) + q(G*\{u\}\setminus u)$ if $u$ has a loop in $G$, and \item $q(G) = y^n$ if $G$ is a discrete graph with $n$ vertices. \end{enumerate} \end{Proposition} \begin{Proof} Conditions (1) and (2) follow from Theorem~\ref{thm:disjoint_sum} where $A$ is a graph $G$, and $X$ is an elementary pivot (i.e., $X = \{u\}$ is a loop in $G$ or $X = \{u,v\}$ is an edge in $G$ where neither $u$ nor $v$ has a loop, see Section~\ref{sec:pivots_graphs}). Finally, if $G$ is a discrete graph with $n$ vertices, then, for all $Y \subseteq V$, $Y \in P_{|Y|}$. Consequently, $|P_i| = {n \choose i}$. Thus, $q'(G) = (y+1)^n$ and therefore $q(G) = y^n$.
\end{Proof} \section{Discussion} We have shown that the pivot operator $*X$ on matrices $A$ and the symmetric difference operator $\oplus X$ on sets $Y$ have an equivalent effect w.r.t. the nullity value of the principal submatrices $A\sub{Y}$ of $A$. This nullity invariant may be described in terms of partition sequences $\mathcal{P}_A$, where the sets $Y \subseteq V$ are arranged according to the nullity value of $A\sub{Y}$. We notice that the interlace polynomial of a graph $G$ corresponds to the norm $\|\mathcal{P}_G\|$ of the partition sequence of $G$ (where $G$ is considered as a matrix). Hence we (may) naturally consider interlace polynomials for square matrices in general, and obtain a recursive relation for these generalized interlace polynomials. In this way, we simplify the proof of the (original) recursive relation for interlace polynomials of graphs. \subsection*{Acknowledgements} R. Brijder is supported by the Netherlands Organization for Scientific Research (NWO), project ``Annotated graph mining''.
\section{Introduction} \IEEEPARstart{F}{orthcoming} 5G and beyond wireless network architectures promise a several-fold increase in transmission capacity and much lower latency compared to current wireless networks, which will be achieved with the introduction of new radio spectrum, including millimeter wave, and a bundle of emerging technologies such as massive multiple-input multiple-output (MIMO), small cells, edge computing, device centric architecture and new channel coding techniques \cite{Boccardi1}, \cite{Coll1}. The expected improvements in the physical layer, apart from the new spectrum usage, depend largely on the control of the propagation environment with techniques such as employing antenna diversity in transmitters and receivers, cooperative relaying schemes, and intelligent surfaces. Among these, the reconfigurable intelligent surface (RIS), which may also be referred to as a software defined surface (SDS) or large intelligent surface (LIS) in the literature, is a brand new concept which aims to enable every surface in a propagation environment to be used effectively to control the environment for efficient transmission and even to exploit the scattering waves \cite{Basar1,Basar2}. A RIS is a thin surface of multiple reflecting elements made of low-cost passive electronic devices to alter the incident waves' phase angle. This concept finds its roots in \cite{Subrt1} as early as 2012, where intelligent walls were proposed for controlling the propagation environment in indoor scenarios thanks to an active frequency selective surface covering the walls inside a building. In \cite{Hu1}, LIS is proposed to form a contiguous surface of electromagnetically active material as an alternative to massive MIMO.
Although intelligent surfaces were initially intended mainly for indoor environments \cite{Subrt1,Hu1,Huang2,Wu1}, outdoor usage was also conceived in the form of a passive intelligent mirror (PIM) for multi-user multiple-input single-output (MISO) downlink communications \cite{Huang1}, where the PIM could be deployed around building exteriors, practically replacing base stations (BS) serving mobile terminals. The outdoor usage scenarios can be diversified with the progress in material technology. As a RIS comprises low-cost passive reflectors, one of its significant benefits is that it can be deployed at low cost within an otherwise high-cost next-generation network. The ability to be applied on various surfaces gives way to diverse use cases. One use case, covered in \cite{Basar3}, deploys an RF transmitter in tandem with a RIS to modulate the RF signal, effectively transforming the RIS into an RF modulator. Furthermore, various applications of RIS were investigated, such as using RIS to create a virtual line-of-sight (LOS) link between BS and user where it is not feasible to establish a direct reliable link, improving the physical layer security in a wireless network, or realizing simultaneous wireless information and power transfer (SWIPT) in an Internet-of-things (IoT) network \cite{Wu1}, \cite{Wu2}. Additionally, RIS-based space shift keying (SSK), spatial modulation (SM) and media-based modulation (MBM) schemes were proposed in \cite{SALAN2021153713, Onal2021}. In current and future wireless network architectures, using relay-based cooperative communication techniques to create link or route diversity is an emerging research area for achieving the desired coverage and throughput capacities \cite{Aydin1,Aydin2,cogen1}. In a typical cooperative system, single or multiple relays are deployed between source ($S$) and destination ($D$) nodes to create alternative routes for signal transmission \cite{Renzo1}.
Consequently, for these types of networks, one can wonder about the possibility of using RIS as a relay ($R$), as both devices have a very similar function, e.g., creating route diversity or turning a non-LOS link into a LOS link. This is one of the active research areas where the performance and cost of both methods are being investigated and compared. Although relaying is a well-established method to conceive this, requirements can get too steep to be applied in the field, as each relay requires a dedicated power supply and front-end circuitry, consequently increasing the total power requirements, cost, and implementation complexity. This is one of the main issues discussed extensively in \cite{Renzo1}, \cite{Bjornson1} for wireless networks operating in high-frequency bands. The power consumption and energy efficiency of decode-and-forward (DF) and amplify-and-forward (AF) relays were compared against those of RISs, and it was shown that sufficiently large RISs could outperform relay supported systems in terms of data rate, with a reduced implementation complexity \cite{Huang33}. Although RISs and relays may be regarded as two competing devices to be deployed in a wireless network, it has been shown in various studies that hybrid RIS and $R$ assisted systems have their benefits against single $R$ or single RIS utilized systems \cite{Abdullah1}, \cite{ying2020relay}. In \cite{Abdullah1}, deploying a DF relay and a RIS together in a hybrid system achieved the same data rate performance as a RIS-only or $R$-only system, saving a great number of reflecting elements with no additional power cost. It is also noteworthy that a single RIS system is found to be optimal when the SNR is very high and the number of reflecting elements on the RIS is extremely large \cite{Abdullah1}. Besides this, utilizing an $R$ as a combining element is another approach discussed in \cite{ying2020relay}, where two side-by-side RISs were connected through a full-duplex relay in various configurations.
Using deep learning (DL) or machine learning methods in optimizing RIS utilization in wireless networks is a brand new approach, finding its place in several studies, some of which focus on subjects such as symbol estimation \cite{khan2020deeplearningaided}, cooperative communication \cite{Akdemir1}, indoor signal focusing \cite{huang2019indoor} or securing wireless communication \cite{ying2020relay}, \cite{Yang2021}. The results achieved with deep learning techniques often show performance better than or at least comparable to conventional methods while significantly reducing the computational complexity. These results are encouraging enough to apply DL in several new applications, one of which is presented in this work. \vspace{-0.3cm} \subsection{Contributions} In this work, we investigate the possibility of a novel usage scenario: using RIS as the sole relaying element in a cooperative network, where multiple DL optimized RISs are utilized in a cooperative configuration between $S$ and $D$. Our work focuses exclusively on the outdoor use case with building facades covered with RISs in a dense urban environment with slow fading channels. In our proposed deep learning assisted cooperative RIS models (DNN$_R$\:-\:CRIS and DNN$_{R, D}$\:-\:CRIS), RISs are optimized through a deep neural network (DNN) which is trained with the channel state information (CSI) obtained for incident and reflected signals to estimate the phase adjustments required on the RISs for optimal signal transmission. Furthermore, in the DNN$_{R, D}$\:-\:CRIS model, instead of using a conventional maximum-likelihood (ML) detector at $D$, we deploy another DNN to estimate the received symbols. Using multiple DNNs on different parts of the network, it finally becomes possible to get an approximation of an end-to-end DNN reinforced communication network and to compare its performance against conventional network architectures.
The performances of the DNN$_R$\:-\:CRIS/DNN$_{R, D}$\:-\:CRIS models are analyzed in terms of bit error rate (BER) for an $M$-ary quadrature amplitude modulation (QAM) scheme using various relay configurations with path loss effects. Simulating and measuring the effects of multiple RISs on the received signal in a dense urban environment are the main goals of our work, consequently exhibiting the potential of DNN assisted RIS usage in a next-generation wireless network. As a result of the analyses, we can summarize the contributions of our paper as follows. \begin{enumerate} \item In the proposed DNN$_R$\:-\:CRIS and DNN$_{R, D}$\:-\:CRIS models, RIS deployment as a relaying element in a cooperative network is presented. For the RIS-based relaying model, it will be possible to select the best performing $R$ among multiple relay configurations between $S$ and $D$ with a choice of conventional relay selection methods. \item In both models, RIS configurations are optimized with a DNN. A single DNN is deployed for controlling all RISs in the network, optimizing the phase adjustments in real time. \item In the DNN$_{R, D}$\:-\:CRIS model, another DNN is deployed at the destination for symbol detection instead of an ML detector. The performances of the ML detector and the DNN at the destination are compared against each other in terms of BER. Also, through this model, DNN assisted phase and symbol detection are collectively deployed and analyzed in a cooperative network. \item For RIS-based multiple relay usage, the effect of the relay locations on receiver performance with path loss effects is also investigated. The distance of the relays to $S$ or $D$ in conjunction with fading channels has a critical impact on BER performance, and this is demonstrated extensively in our work. \end{enumerate} \begin{figure}[t!]
\centering \includegraphics[width=0.48\textwidth]{organization_v2.jpg} \caption{Organization of the paper.} \label{org_1} \vspace{-0.3cm} \end{figure} \vspace{-0.4cm} \subsection{Organization and Notations} The organization of this paper is given in Fig. \ref{org_1}. The system and channel models are presented in Section II, with the theoretical background of transmission and the proposed DNN based relay and receiver models. In Section III, the architecture of the proposed model, the basic structure of the proposed DNNs, training data generation, and the training processes are thoroughly covered for both DNNs. Section IV provides the simulation setup and performance results for a series of RIS configurations, including a detailed performance comparison of the proposed model against various parameters. Finally, Section V concludes the article. \begin{figure}[t!] \centering \includegraphics[width=0.48\textwidth]{sistem_model_v6.jpg} \caption{System model of the DNN$_R$\:-\:CRIS/DNN$_{R, D}$\:-\:CRIS.} \label{system_model} \vspace{-0.4cm} \end{figure} \textit{Notations}: The following notations are used throughout this paper. \textit{i}) Bold lower/upper case symbols represent vectors/matrices; \textit{ii}) $(\cdotp)^T$ and $\left\|\cdotp \right\|_F $ denote transpose and Frobenius norm operators, respectively; $\Re(.)$ and $\Im(.)$ are the real and imaginary parts of a complex-valued quantity. \vspace{-0.2cm} \section{Transmission Through DNN$_R$\:-\:CRIS/DNN$_{R, D}$\:-\:CRIS: System Model } In this section, we provide an overview of the generic model of the proposed DNN$_R$\:-\:CRIS and DNN$_{R, D}$\:-\:CRIS schemes. The system model under consideration will be named DNN$_R$\:-\:CRIS if a DNN is used only in $R$, and DNN$_{R, D}$\:-\:CRIS if DNNs are used in both $R$ and $D$. The system model for the proposed scheme is shown in Fig. \ref{system_model}, where $S$ and $D$ are equipped with a single transmit antenna and a single receive antenna, respectively.
In the considered scenario, assuming no LOS link between $S$ and $D$, cooperative transmission occurs only through multiple RIS elements. We assume that each RIS has the form of a reflect-array comprising $N$ simple and reconfigurable reflector elements and acts as a relay $R_\ell$ in a cooperative network directing the transmission from $S$ to $D$, and all RISs are individually configured by a control software which adjusts the phase shifts for each reflector. The software is based on a DNN architecture which will be covered in detail in subsequent sections. At the destination end, two scenarios are considered. First, an ML detector is deployed to estimate the received symbols (DNN$_R$\:-\:CRIS). Second, another DNN, implemented in the receiver hardware, is used to estimate the received symbols, effectively replacing the ML detector (DNN$_{R, D}$\:-\:CRIS). We evaluate and compare the performances of both detection schemes in the simulations. \vspace{-0.4cm} \subsection{Path Loss and Channel Model for the Proposed System} For the transmission model, we assume that $S$, $D$ and each $R_\ell$ are positioned in a triangle formation in a two-dimensional space as shown in Fig. \ref{S_model1}, where $d_{SD}$, $d_{SR_\ell}$ and $d_{R_{\ell}D}$ represent the distances of source to destination ($S \to D$), source to $\ell^{th}$ relay ($S \to R_\ell$) and $\ell^{th}$ relay to destination ($R_\ell \to D$), respectively, where $\ell=1,2,\ldots,L$ and $L$ is the number of relays. In this configuration, $S$ and $D$ are located at either end of the triangle base and $R_\ell$ is positioned at the top corner such that all three corners are kept within the half-circle shown in Fig. \ref{S_model1}, assuming $ \pi/2<\theta_\ell<\pi$ where $\theta_\ell$ is the angle between $d_{SR_\ell}$ and $d_{R_{\ell} D}$. In this formation, $R_\ell$ lies on the circle arc when $\theta_\ell=\pi/2$, and lies on the line $S \to D$ when $\theta_\ell=\pi$. In all cases, $d_{SD}$ is equal to the diameter.
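As a quick numerical check of this geometry (a sketch under the normalization $d_{SD}=1$; the function name is an illustrative choice of ours), solving the triangle for $d_{R_{\ell}D}$ given $d_{SR_\ell}$ and $\theta_\ell$ reproduces the two limiting cases above: $R_\ell$ on the circle arc at $\theta_\ell=\pi/2$ and on the $S \to D$ line at $\theta_\ell=\pi$.

```python
import numpy as np

def relay_to_destination_distance(d_sr, theta, d_sd=1.0):
    """Solve d_SD^2 = d_SR^2 + d_RD^2 - 2 d_SR d_RD cos(theta) for d_RD.

    theta is the angle at the relay between the S->R and R->D legs;
    for pi/2 < theta <= pi the positive root below is the valid one.
    """
    # Quadratic in d_RD: d_RD^2 - 2 d_SR cos(theta) d_RD + (d_SR^2 - d_SD^2) = 0
    return d_sr * np.cos(theta) + np.sqrt(d_sd**2 - (d_sr * np.sin(theta))**2)

# theta = pi/2: relay on the circle arc, so d_SR^2 + d_RD^2 = d_SD^2
print(relay_to_destination_distance(0.6, np.pi / 2))  # ~ 0.8
# theta = pi: relay on the S->D line, so d_SR + d_RD = d_SD
print(relay_to_destination_distance(0.3, np.pi))      # ~ 0.7
```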
\begin{figure}[t!] \centering \includegraphics[width=0.48\textwidth]{RIS_coop_v2.pdf} \vspace{-0.3cm} \caption{Channel model for DNN-CRIS system.} \label{S_model1} \vspace{-0.4cm} \end{figure} In the proposed channel model, all links are assumed to be exposed to both long-term free space path loss and short-term Rayleigh fading. Here, the path loss is proportional to $d^{-c}$, where $d$ is the propagation distance and $c$ is the path loss exponent. As the path loss of the channel $S \to D$ is assumed to be unity, the relative gains of $S \to R_\ell$ and $R_\ell \to D$ can be defined as $G_{SR_\ell}=(d_{SD}/d_{SR_\ell})^c$ and $G_{R_{\ell}D}=(d_{SD}/d_{R_{\ell}D})^c$, respectively. Here, $G_{SR_{\ell}}$ and $G_{R_{\ell}D}$ are related by the law of cosines, that is, \begin{eqnarray}\label{eq_cosines} G_{SR_{\ell}}^{-2/c}+G_{R_{\ell}D}^{-2/c}-2G_{SR_{\ell}}^{-1/c}G_{R_{\ell}D}^{-1/c}\cos\theta_{\ell}=1. \end{eqnarray} Through (\ref{eq_cosines}), assuming $d_{SR_{\ell}}$ and $\theta_{\ell}$ are given and $d_{SD}=1$, $d_{R_{\ell}D}$ can be computed in order to apply the path loss to the complex channel coefficients between $S \to R_{\ell}$ and $R_{\ell} \to D$, which can be expressed as $\textbf{h}_\ell= \big[h_{\ell,1}, h_{\ell,2},\ldots,h_{\ell,N}\big] \in \mathbb{C}^{1 \times N}$ and $\textbf{g}_\ell= \big[g_{\ell,1}, g_{\ell,2},\ldots,$ $g_{\ell,N}\big] \in \mathbb{C}^{1 \times N}$, where $\ell=1,2,\ldots,L$ and $\mathbb{C}$ denotes the set of complex numbers. Channel vector elements can be expressed in terms of channel amplitudes and phases as $ h_{i,j} = \alpha_{i,j}e^{-j\varphi_{i,j}}$ and $ g_{i,j} = \beta_{i,j}e^{-j\theta_{i,j}}$, respectively, where $i\in\big\{ 1,2,\ldots,L \big\}$ and $j\in\big\{ 1,2,\ldots,N \big\}$. In the considered system structure, all channel coefficients are assumed to follow a Rayleigh fading distribution.
Under this assumption, we have $h_{i,j}, g_{i,j} \sim \mathcal{CN}(0,1)$, where $\mathcal{CN}(0,\sigma^2)$ stands for the complex Gaussian distribution with zero mean and variance $\sigma^2$. With this model, it is possible to observe the effect of various relay positionings and evaluate their performances. \vspace{-0.3cm} \subsection{Intelligent Transmission Model} The noisy received baseband signal reflected through the $\ell^{th}$ relay with $N$ passive elements can be expressed at $D$ as follows: \begin{eqnarray}\label{eq_y_l} y_\ell &=& \sqrt{G_{SR_{\ell}}G_{R_{\ell}D}}\Bigg( \sum_{n=1}^{N}h_{\ell,n}e^{j \phi_{\ell,n}}g_{\ell,n} \Bigg)x + w_\ell \nonumber \\ &=& \sqrt{G_{SR_{\ell}}G_{R_{\ell}D}} \bigg(\textbf{h}_\ell\boldsymbol{\Phi}_\ell\textbf{g}_\ell^T\bigg)x + w_\ell, \end{eqnarray} where $x$ is the modulated $M$-QAM signal, and $w_\ell \sim \mathcal{CN}(0,\sigma^2 )$ represents the additive white Gaussian noise (AWGN) with zero mean and variance $\mathcal{N}_0/2$ per dimension at the receiver. $\boldsymbol{\Phi}_\ell \triangleq \mathtt{diag}\big(e^{j\phi_{\ell,1}},e^{j\phi_{\ell,2}},\ldots,e^{j\phi_{\ell,N}}\big) \in \mathbb{C}^{N \times N}$ is the diagonal phase matrix of each RIS, containing the phase angles adjusted by the RIS reflecting elements, such that \begin{equation}\label{eq_phi_mtrx} \boldsymbol{\Phi}_\ell=\begin{bmatrix} e^{j\phi_{\ell,1}} & 0 &\cdots &0 \\ 0 & e^{j\phi_{\ell,2}} & \cdots & 0\\ \vdots & &\ddots &\\ 0 & \cdots & 0& e^{j\phi_{\ell,N}} \end{bmatrix}, \end{equation} where we can define the phase vector of the $\ell^{th}$ RIS as $\boldsymbol{\phi}_\ell=\Big[\phi_{\ell,1},\phi_{\ell,2},\dots,\phi_{\ell,N}\Big]$. The vectorial representation of the noisy baseband signals reflected through all relays with $N$ passive elements can be expressed at $D$ as follows: \begin{eqnarray}\label{eq_y_vec} \textbf{y} &=& \Big[y_1, y_2,\ldots,y_L\Big]^T \in \mathbb{C}^{L \times 1}.
\end{eqnarray} From (\ref{eq_y_l}), we can find the instantaneous signal-to-noise ratio (SNR) at $D$ for the reflected signals through the $\ell^{th}$ relay with $N$ passive elements as \begin{equation}\label{eq_SNR} \gamma_\ell = \frac{{{{\left| {\mathop \sum \nolimits_{i = 1}^N {\alpha_{\ell,i}}{\beta_{\ell,i}}{e^{j\left( {{\phi_{\ell,i}} - {\varphi_{\ell,i}} - {\theta_{\ell,i}}} \right)}}} \right|}^2}{G_{SR_{\ell}}G_{R_{\ell}D}E_s}}}{{{\mathcal{N}_0}}}, \end{equation} where $E_s$ is the average transmitted energy per symbol. The adjustment of the phase angles by the RIS is based on the fact that the maximum signal strength at the receiver is achieved when \({\phi _{\ell,i}} = {\varphi _{\ell,i}} + {\theta _{\ell,i}}\) for \(i = 1, \ldots ,N\), assuming the channel phases are known at the RIS. This can be verified by the identity \({\Big| {\mathop \sum_{i = 1}^N {v_i}{e^{j{\xi _i}}}} \Big|^2} = \mathop \sum_{i = 1}^N v_i^2 + 2\mathop \sum_{i = 1}^N \mathop \sum_{t = i + 1}^N {v_i}{v_t}\cos \left( {{\xi _i} - {\xi _t}} \right)\), whose maximum value is achieved when \({\xi _i} = {\xi _t}\) for all $i$ and $t$. Using this fact for the phase adjustment at each reflecting element, the maximum instantaneous SNR is achieved at $D$ for the reflected signals through the $\ell^{th}$ relay with $N$ passive elements as \begin{eqnarray}\label{eq_SNR_max} \gamma_\ell &=& \frac{{{{\left| {\mathop \sum \nolimits_{i = 1}^N {\alpha_{\ell,i}}{\beta_{\ell,i}}} \right|}^2}{G_{SR_{\ell}}G_{R_{\ell}D}E_s}}}{{{\mathcal{N}_0}}} \nonumber \\ &=& \frac{{{\chi^2}{G_{SR_{\ell}}G_{R_{\ell}D}E_s}}}{{{\mathcal{N}_0}}}, \end{eqnarray} where $\chi= \big| {\mathop \sum \nolimits_{i = 1}^N{\alpha_{\ell,i}}{\beta_{\ell,i}}} \big|$. The total instantaneous SNR at $D$ for the reflected signals through all relays with $N$ passive elements can be written as $\gamma_{T} = \sum_{\ell=1}^{L}\gamma_{\ell}$.
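The phase-alignment argument above is easy to verify numerically (a sketch; the variable names are ours): with $h_{\ell,i}=\alpha_{\ell,i}e^{-j\varphi_{\ell,i}}$ and $g_{\ell,i}=\beta_{\ell,i}e^{-j\theta_{\ell,i}}$, choosing $\phi_{\ell,i}=\varphi_{\ell,i}+\theta_{\ell,i}$ turns the reflected sum into $\sum_i \alpha_{\ell,i}\beta_{\ell,i}$, and no other phase choice can exceed it.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 32
# Rayleigh-faded links: h, g ~ CN(0, 1)
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

# h_i = alpha_i e^{-j phi_i}  =>  phi_i = -arg(h_i); likewise for g.
# Optimal RIS phase: phi_i + theta_i = -(arg h_i + arg g_i).
phi_ris = -(np.angle(h) + np.angle(g))
aligned = np.abs(np.sum(h * np.exp(1j * phi_ris) * g))
chi = np.sum(np.abs(h) * np.abs(g))   # sum of alpha_i * beta_i
assert np.isclose(aligned, chi)

# Any mismatched phase configuration can only lose SNR.
mismatched = np.abs(np.sum(h * np.exp(1j * rng.uniform(0, 2 * np.pi, N)) * g))
assert mismatched <= aligned
```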
Finally, the achievable throughput of the considered DNN$_R$\:-\:CRIS/DNN$_{R, D}$\:-\:CRIS communication systems is given by $\mathcal{R}:= \log_2(1+\gamma_{T} )$. \subsubsection{Combining Techniques at the Destination} The combining techniques used at $D$ are as follows. \textit{i) Transmitting Through the Best Performing Relay}: In this technique, the relay having the lowest BER value is selected among the $L$ relays for transmission. In this case, the best relay selection criterion is \begin{eqnarray}\label{eq_BER_rs} \widetilde{BER}_{p^*}=\arg\min_p\{BER_p\},\quad p=1,2,\ldots,L. \end{eqnarray} Here, $p^*$ denotes the relay which has the lowest BER value among the $L$ relays. \textit{ii) Maximum Ratio Combining (MRC)}: Using the MRC technique for combining the signals received through multiple relays, the combined overall signal at $D$ can be written as \begin{eqnarray}\label{eq_y_MRC} y_{MRC}=\beta_1 y_1 + \beta_2 y_2+\ldots+\beta_L y_L, \end{eqnarray} where $y_\ell$ is defined in (\ref{eq_y_l}), and the combining coefficient $\beta_\ell$ for the $\ell^{th}$ relay is \begin{eqnarray}\label{eq_w_l} \beta_\ell &=& \Bigg(\sum_{n=1}^Nh_{\ell,n}g_{\ell,n}\Bigg)^* = (\mathbf{h}_\ell\mathbf{g}^T_\ell)^*. \end{eqnarray} After MRC based combining is carried out at the receiver, ML detection applied on the combined signal can be expressed as \begin{eqnarray}\label{eq_x_MRC} \hat{x}_{MRC} &=& \mathrm{arg}\ \underset{\upsilon \in \{1,2,\ldots,M\}}{\min}\!\Biggl\{\Bigg|y_{MRC}\!-\!\Bigg(\!\beta_1 \! \! \sum_{n=1}^{N}h_{1,n}e^{j\hat{\phi}_{1,n}}g_{1,n} \nonumber \\ \! \!\! \!\! \!\! \!\! \!\! \!\! \!\! \!\! \!\! \!\! \!\! \!\! \!\! \!\! \!\! \!\! \!\! \!\! \!\! \!\! \!& &\! \!\! \!\! \!\! \!\! \!\! \! \!\! \!\! \!\! \!\! \!\! \!\! \!\! \!\! \! \!\! \!\!\! \!+\beta_2\sum_{n=1}^{N}h_{2,n}e^{j\hat{\phi}_{2,n}}g_{2,n}+\ldots+\beta_L \sum_{n=1}^{N}h_{L,n}e^{j\hat{\phi}_{L,n}}g_{L,n}\Bigg)x_\upsilon\Bigg|^2\Biggr\}.
\end{eqnarray} Here, $M$ is the modulation order and $x_\upsilon$ is the transmitted symbol, where $\upsilon\in\{1,2,\ldots,M\}$. \textit{iii) Maximum Likelihood (ML) Detector}: For the first scenario in the proposed system model, an ML detector is deployed at $D$ for symbol detection. Using the ML detector, the estimated symbol at the receiver can be defined as \begin{eqnarray}\label{eq_x_ML} \hat{x}_{ML} &=& \mathrm{arg}\ \underset{\upsilon \in \{1,2,\ldots,M\}}{\min} \Biggl\{\Bigg{\lVert}\mathbf{y}-\Bigg[ \sum_{n=1}^{N}h_{1,n}e^{j\hat{\phi}_{1,n}}g_{1,n}\nonumber \\ & & \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\sum_{n=1}^{N}h_{2,n}e^{j\hat{\phi}_{2,n}}g_{2,n}\ldots \sum_{n=1}^{N}h_{L,n}e^{j\hat{\phi}_{L,n}}g_{L,n}\Bigg]^Tx_\upsilon\Bigg{\rVert}^2\Biggr\}. \end{eqnarray} \begin{figure*}[t!] \centering \includegraphics[width=0.8\textwidth]{DNN_yapisi_v6_model3.jpg} \caption{The training and transmission phases of the proposed deep learning assisted CRIS scheme.} \label{DNN_model1} \vspace{-0.4cm} \end{figure*} \vspace{-0.3cm} \subsection{Proposed DNN-Based Relay and Receiver Model} This subsection explains how the DNN assisted RIS technique can be implemented in a RIS-assisted cooperative communication scenario. Artificial neural networks (ANNs) are mathematical structures inspired by biological neural networks, consisting of interconnected neurons that form a network which can be trained to perform desired tasks. Like its biological counterpart, an ANN is not programmed in the classical sense; it learns from past experiences and samples. A typical ANN contains an input layer, an output layer, and a hidden layer in between. Each layer comprises several neurons that process the input signal with an activation function and forward the processed signal to the neuron(s) in the following layer. A DNN is an ANN with multiple hidden layers, which has a much greater function approximation ability than a single-hidden-layer network.
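As a toy illustration of this layered structure, a fully connected forward pass can be sketched in numpy. The layer sizes and random weights below are placeholders, not the trained parameters of the proposed networks:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def forward(u0, params):
    """Fully connected forward pass: u_k = ReLU(W_k @ u_{k-1} + b_k)."""
    u = u0
    for W, b in params:
        u = relu(W @ u + b)
    return u

rng = np.random.default_rng(0)
layer_sizes = [4, 256, 256, 256, 256, 2]   # input, 4 hidden layers, output
params = [(0.1 * rng.standard_normal((m, n)), np.zeros(m))
          for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]
out = forward(rng.standard_normal(4), params)
print(out.shape)  # (2,)
```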
In the proposed model, deep neural networks are deployed at two critical locations: at the relays to adjust the phase shifts of each reflector, and at $D$ to estimate the received symbols. \subsubsection{DNN$_R$\:-\:CRIS Model} This model employs a DNN in each $R$ which has been pre-trained with a sample set of channel coefficients and the respective phase adjustment information $\big(\textbf{h}_\ell,\textbf{g}_\ell,\boldsymbol{\phi}_\ell\big)$ based on the theoretical framework given in the next section. The objective of the proposed DNN based system is to compute the optimal phase vector that maximizes the reflected signals from the relays. The trained DNN dynamically updates the phase adjustments on each RIS reflecting element according to the time-varying channel information. Hence, the whole process is divided into the two following distinct phases: \begin{itemize} \item [i)]\emph{Training Phase:} As shown in Fig. \ref{DNN_model1}, during the training phase the training data set is created, containing the $S \to R_\ell$ and $R_\ell \to D$ channel coefficients and the corresponding optimum phase configurations $\big($i.e., $\textbf{h}_\ell,\textbf{g}_\ell,\boldsymbol{\phi}_\ell$, where $\ell=1,2,\ldots,L \big)$. Here, the channel information is assumed to be perfectly obtained through pilot signals. After the database is created, its elements are used to train the DNNs to find the optimum RIS phase vectors for each $R$, and consequently, the weight and bias coefficients ($\textbf{W}_k,\textbf{b}_k$) of the DNN are updated for the respective data set. In this model, we assume that the data set and the DNN are located at $R$ for convenience, so that the storage for the data set and the transmission interface of the DNN unit can be connected easily to the RIS of each $R$.
\item [ii)]\emph{Transmission Phase:} During the actual transmission, the DNN receives the new $S \to R_\ell$ and $R_\ell \to D$ channel coefficients, computes the adjusted phase vector and forwards it to the RIS. The RIS adopts the computed phase configuration to assist in the transmission until new channel coefficients arrive. \end{itemize} \subsubsection{DNN$_{R, D}$\:-\:CRIS Model} In the second scenario, another DNN is deployed instead of an ML detector to detect the received symbols at $D$. Like the preceding DNN at the relay stage, the process is divided into training and transmission phases, where the network is first trained using the appropriate training data set. The trained network then carries out symbol detection based on the noisy signals received at $D$ during the transmission phase. \begin{itemize} \item [i)]\emph{Training Phase:} The training data set contains the noisy and path-faded signals received at $D$ as feature vectors and the respective modulated symbols as outputs. The objective of the training process is to update the $\textbf{W}_k$ and $\textbf{b}_k$ parameters of the DNN such that, for every received signal, the DNN converges to the desired symbol. The DNN is trained for a single relay position and noise level to optimize the training period and reduce the system complexity. \item [ii)]\emph{Transmission Phase:} The DNN at the destination, with the updated parameters $\textbf{W}_k$ and $\textbf{b}_k$, estimates the correct symbols from the noisy signals reaching $D$ in real time during the actual transmission. \end{itemize} Here, the detection process at $D$ is defined as a multiclass classification problem such that the elements of the symbol (constellation) set determined by the modulation order are mapped to classes to be estimated by the DNN. The details of the process are covered in the following sections. \begin{figure*}[t!]
\centering \includegraphics[width=0.85\textwidth]{DNN_model3.jpg} \vspace{-0.3cm} \caption{Proposed DNN architectures (a) for the relay and (b) for the destination, for the DNN$_R$\:-\:CRIS/DNN$_{R, D}$\:-\:CRIS scheme. } \label{DNN architecture} \vspace{-0.3cm} \end{figure*} \vspace{-0.3cm} \section{DNN$_R$\:-\:CRIS/DNN$_{R, D}$\:-\:CRIS Architecture} This section presents the architectural details of the proposed DNN schemes used to find the optimal phase adjustments for the RIS-based relays and to detect the symbols at $D$. The proposed scenario is based on a double DNN architecture as shown in Fig. \ref{DNN_model1}, where the first DNN is deployed at $R$ for estimating the optimal phase vectors using the $S \to R_\ell$ and $R_\ell \to D$ channel coefficients, and the second DNN is deployed at $D$ to detect the received symbols using the path-faded and noisy signals at $D$. The subsections below cover the structure, training procedure, and activation functions of the proposed system. As the model is based on a double DNN architecture, both the common and the specific structural properties of the two DNNs are pointed out below for convenience. \vspace{-0.3cm} \subsection{Basic Structure of the Proposed DNNs} Let $\Theta \triangleq \bigl\{\boldsymbol{\theta}_1,\boldsymbol{\theta}_2,\ldots,\boldsymbol{\theta}_{\mathcal{K}}\bigr\} $ contain $\mathcal{K}$ sets of parameters, where $\mathcal{K}$ is the number of layers.
A feedforward DNN structure with $\mathcal{K}$ layers is defined as a mapping $\mathcal{F}\bigl(\textbf{u}_0,\boldsymbol{\theta}\bigr): \mathbb{R}^{N_0 \times 1}\mapsto \mathbb{R}^{N_\mathcal{K} \times 1}$ of the input vector $\textbf{u}_0 \in \mathbb{R}^{N_0 \times 1}$ to an output vector in $\mathbb{R}^{N_\mathcal{K} \times 1}$ through $\mathcal{K}$ iterative processing steps, expressed as follows: \begin{eqnarray}\label{eq_u_k} \textbf{u}_k \triangleq f_k\Bigl( \textbf{u}_{k-1}; \boldsymbol{\theta}_k \Bigr), \ k=1,2,\ldots,\mathcal{K} \end{eqnarray} where $ f_k\bigl( \textbf{u}_{k-1}; \boldsymbol{\theta}_k \bigr):\mathbb{R}^{N_{k-1} \times 1}\mapsto \mathbb{R}^{N_k \times 1}$ expresses the mapping implemented by the $k^{th}$ DNN layer. This mapping depends on both the set of parameters $\boldsymbol{\theta}_k $ and the output vector $\textbf{u}_{k-1}$ of the previous $(k-1)^{th}$ layer. The mapping function can be a function of random variables; hence, $f_k \bigl( .;.\bigr)$ may have a stochastic structure. Since all neurons in the proposed DNN architectures are fully connected to all neurons of the following layer, the mapping function $ f_k\bigl( \textbf{u}_{k-1}; \boldsymbol{\theta}_k \bigr)$ can be written via $\boldsymbol{\theta}_k \triangleq \bigl\{\textbf{W}_k,\textbf{b}_k\bigr\}$ as follows: \begin{eqnarray}\label{eq_f_k} f_k\Bigl( \textbf{u}_{k-1}; \boldsymbol{\theta}_k \Bigr) = \sigma\Big( \textbf{W}_k \textbf{u}_{k-1} + \textbf{b}_k \Big) \end{eqnarray} where $\textbf{W}_k \in \mathbb{R}^{N_k \times N_{k-1}} $ and $\textbf{b}_k \in \mathbb{R}^{N_k \times 1} $ refer to the neurons' weight matrix and bias vector, respectively, and $\sigma(.)$ denotes an activation function. It should be stated that this basic mapping structure and these processing steps are valid for both DNNs deployed in the system. The proposed DNN architectures are shown in Fig.
\ref{DNN architecture}, where both DNNs consist of $4$ hidden layers, one input and one output layer, and each hidden layer has $256$ neurons with a fully connected structure. In the proposed models, both the rectified linear unit (ReLU) and the hyperbolic tangent (tanh) activation functions are considered for deployment at the hidden layers. The ReLU and tanh functions can be defined as \begin{eqnarray}\label{eq_relu} \sigma_{ReLU}(z)=\max(0,z), \end{eqnarray} \begin{eqnarray}\label{eq_tanh} \sigma_{tanh}(z) = \frac{\sinh{z}}{\cosh{z}} = \frac{e^z-e^{-z}}{e^z+e^{-z}}, \end{eqnarray} for a neuron with input $z$ \cite{khan2020deeplearningaided}, respectively. For both DNNs, we applied the ReLU and tanh activation functions in turn in order to evaluate the accuracy and convergence speed for the particular tasks in Table \ref{table:1}. It is observed that ReLU converged nearly 5 times faster on average than tanh during the training phase, while showing the same or better accuracy levels. \begin{table} \centering \caption{ReLU and tanh Training Performance Comparison.} \label{table:1} \begin{tabular}{| c | c | c |} \hline \hline \textbf{} & \textbf{ReLU} & \textbf{tanh} \\ \hline\hline Mini-batch Loss & 0.0385 & 0.0391 \\ \hline Mini-batch RMSE & 0.028 & 0.028 \\ \hline No. of Iterations & 60 & 250 \\ \hline Time Elapsed & 5'23'' & 25'35'' \\ \hline \hline \end{tabular} \vspace{-0.5cm} \end{table} As the inputs of both DNNs in the model exhibit sequence data characteristics, the input layers of both DNNs are configured as sequence input layers, so that the channel coefficients and the received symbols can be used as input sequences for the DNNs deployed at the relay and the destination, respectively, albeit evaluated as different data types. On the other hand, the most significant difference between the two DNNs is the output layer implementation, which characterizes the network response behavior.
The output layer of the DNN at the relay is configured as a regression layer, as its output consists of the phase adjustment estimations $\hat{\boldsymbol{\phi}}_\ell$. In contrast, the output layer of the DNN at the destination is configured as a classification layer for the $M$-QAM modulated symbol estimations $\hat{x}$. In a typical classification network, a classification layer follows a softmax layer with the activation function defined as \begin{eqnarray}\label{eq_softmax} \sigma_{softmax}(\textbf{z})_i = \frac{e^{z_i}}{\sum^K_{j=1}e^{z_j}}. \end{eqnarray} Here, $\textbf{z}$ denotes the input vector from the previous layer and $z_i$ its elements, where $i=1,\ldots,K$ and $\textbf{z}=(z_1,\ldots,z_K)\in\mathbb{R}^K$ \cite{bridle-softmax}. The objective of the softmax function is to convert the $K$ input values into $K$ output values whose sum equals 1. In this way, the values are normalized and mapped onto a probability distribution, and the entry with the highest probability is passed to the next layer for classification. Overfitting is a common problem in machine learning, where the network performs very well on the training data set but performs poorly when it encounters new data. It is a generalization problem that can be reduced with various methods. In the considered system, to reduce overfitting, a dropout layer is applied to the DNN at the relay with a probability of 8\%, which randomly ``\emph{drops out}'' (i.e., sets to zero) some layer outputs, effectively reducing the number of nodes in a layer. This helps the proposed model generalize better to new data. Also, 10\% of the training data set is split off and reserved as validation data to evaluate the model during the training process.
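The softmax normalization in (\ref{eq_softmax}) can be sketched as follows; the max-subtraction is a standard numerical-stability trick added here, not part of the definition, and it does not change the result:

```python
import numpy as np

def softmax(z):
    """Map K scores to K probabilities summing to 1."""
    e = np.exp(z - np.max(z))   # shift by max(z): same output, no overflow
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))
print(p.sum())          # ~ 1.0
print(int(p.argmax()))  # 0: the largest score gets the largest probability
```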
\vspace{-0.3cm} \subsection{Training Data Generation and the Training Process} \subsubsection{DNN$_R$\:-\:CRIS Model} Training of the proposed DNN requires feature vector samples consisting of the CSI data of the incoming and outgoing channels, $h_{\ell,n}$ and $g_{\ell,n}$, respectively, and the accordingly adjusted phase angles $\boldsymbol{\phi}_{\ell}$. Because of the processing constraints of the DNN, the complex channel coefficients are separated into real and imaginary parts, as the DNN accepts only real-valued input. Preservation of the complete channel information is crucial, so the imaginary part should be processed along with the real part, consequently doubling the number of features per sample. Thus, the resulting feature vector becomes \begin{eqnarray}\label{eq_F_train} \mathbf{F}_{\text{train}}^i = \Big[ \Re\big(h_{\ell,n} \big) \ \Im\big(h_{\ell,n}\big) \ \Re\big(g_{\ell,n}\big) \ \Im\big(g_{\ell,n}\big)\Big]_{1\times4}. \end{eqnarray} Here, $\mathbf{F}_\text{train}\triangleq\big\{\mathbf{F}_\text{train}^1,\mathbf{F}_\text{train}^2,\ldots,\mathbf{F}_\text{train}^s\big\}$ for $s$ samples, $i=1,2,\ldots,s$ and $\mathbf{F}_{\text{train}}^i\in\mathbf{F}_{\text{train}}$. Likewise, for each feature vector, the corresponding phase vector $\boldsymbol{\phi}_{\ell,n}$ is separated into real and imaginary parts, forming the output vector \begin{eqnarray}\label{eq_O_train} \mathbf{O}_{\text{train}}^i = \Big[ \Re\big(\boldsymbol{\phi}_{\ell,n}\big) \ \Im\big(\boldsymbol{\phi}_{\ell,n}\big)\Big]_{1\times2} \end{eqnarray} where $\mathbf{O}_\text{train}\triangleq\big\{\mathbf{O}_\text{train}^1,\mathbf{O}_\text{train}^2,\ldots,\mathbf{O}_\text{train}^s\big\}$ for $i=1,2,\ldots,s$ and $\mathbf{O}_{\text{train}}^i\in\mathbf{O}_{\text{train}}$.
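A single $(\mathbf{F}_{\text{train}}^i,\mathbf{O}_{\text{train}}^i)$ pair can be sketched as below. Since a phase angle is itself real-valued, we assume here that the ``real and imaginary parts'' in (\ref{eq_O_train}) refer to $\Re(e^{j\phi})$ and $\Im(e^{j\phi})$; the helper name is illustrative:

```python
import numpy as np

def training_pair(h, g, phi):
    """Build one feature/output sample from scalar channel taps and a phase."""
    F = np.array([h.real, h.imag, g.real, g.imag])   # 1x4 feature vector
    O = np.array([np.cos(phi), np.sin(phi)])         # Re/Im of e^{j phi}, 1x2
    return F, O

F, O = training_pair(0.3 - 0.4j, -1.2 + 0.5j, np.pi / 3)
print(F)                      # [ 0.3 -0.4 -1.2  0.5]
print(np.hypot(O[0], O[1]))   # ~ 1.0: e^{j phi} lies on the unit circle
```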
Finally, the corresponding feature and output vector elements are concatenated to create the training data set \begin{eqnarray}\label{eq_D_train} D_{\text{train}}\!=\!\big\{\!\{\mathbf{F}_\text{train}^1,\!\mathbf{O}_\text{train}^1\},\{\mathbf{F}_\text{train}^2,\!\mathbf{O}_\text{train}^2\},\dots,\{\mathbf{F}_\text{train}^s,\!\mathbf{O}_\text{train}^s\}\!\big\}. \end{eqnarray} The objective of the training process of the proposed DNN algorithm is to optimize the weight ($\textbf{W}_k$) and bias ($\textbf{b}_k$) parameters of the DNN so that the difference between the actual output value of the DNN and the target value is minimized. To achieve this, the weight and bias vectors of the hidden layers of the DNN are updated at each iteration during training. $\textbf{W}_k$ and $\textbf{b}_k$ are the weight and bias vectors of the $k^{th}$ layer with $k=1,2,\ldots,\mathcal{K}$, where $\mathcal{K}=5$. According to the proposed model, the DNN output $\hat{\boldsymbol{\phi}}_{\ell}$ for each relay is a function of $\textbf{F}_{\text{train}}$ and $\Theta \triangleq \bigl\{\boldsymbol{\theta}_1,\boldsymbol{\theta}_2,\ldots,\boldsymbol{\theta}_{\mathcal{K}}\bigr\}$ with $\boldsymbol{\theta}_k \triangleq \bigl\{\textbf{W}_k,\textbf{b}_k\bigr\}$, where $\Theta$ denotes the training parameter set consisting of the $\textbf{W}_k$ and $\textbf{b}_k$ values. The difference between the actual output and target values is determined by the loss function, which is defined for the proposed DNN architecture as \begin{eqnarray}\label{eq_loss_relay} \mathcal{L}(\Theta) = \frac{1}{2}\sum_{i=1}^s \left \| \boldsymbol{\phi}_{\ell,n}^i- \hat{\boldsymbol{\phi}}_{\ell,n}^i \big(\Theta\big) \right \|^2 \end{eqnarray} where $\boldsymbol{\phi}_{\ell,n}^i$ is the target output value, $\hat{\boldsymbol{\phi}}_{\ell,n}^i$ is the actual DNN output and $i$ is the sample index.
At each iteration of the network, the optimization algorithm compares the loss value of each respective DNN output and optimizes the $\textbf{W}_k$ and $\textbf{b}_k$ values in each layer to converge to the desired value. The optimization at each iteration is based on the Adam (adaptive moment estimation) optimizer \cite{kingma2017adam}, whose update rule can be written as \begin{eqnarray}\label{eq_adam} \Theta_{{\mu}+1}=\Theta_{\mu}-\frac{\eta m_{\mu}}{\sqrt{v_{\mu}+\epsilon}} \end{eqnarray} where $\eta$ is the learning rate determining the step size used to update the weights, $\epsilon$ is a smoothing term to avoid division by zero, and $\mu$ is the iteration index. The Adam optimizer updates the training parameter $\Theta$ at each iteration based on element-wise moving averages of the parameter gradients and their squared values, $m_{\mu}$ and $v_{\mu}$, respectively, which can be expressed as \begin{eqnarray}\label{eq_adam_m} m_{\mu}=\delta_1m_{{\mu}-1}+(1-\delta_1)\nabla\mathcal{L}(\Theta), \end{eqnarray} \begin{eqnarray}\label{eq_adam_v} v_{\mu}=\delta_2v_{{\mu}-1}+(1-\delta_2)[\nabla\mathcal{L}(\Theta)]^2. \end{eqnarray} Here, $\delta_1$ and $\delta_2$ are the decay rates of the moving averages, and $\nabla\mathcal{L}(\Theta)$ is the gradient of the loss function given in (\ref{eq_loss_relay}). If the gradients in (\ref{eq_adam_m}) and (\ref{eq_adam_v}) stay close over several iterations, the moving averages help the parameter updates gain momentum in a certain direction. On the other hand, if the gradients contain a great amount of noise, the parameter updates become smaller as the moving averages of the gradients shrink. This mechanism of Adam is crucial for our case, since the received signals contain a great deal of channel noise and are exposed to path loss and occasionally mismatched phase angles at the relay \cite{Song2021}.
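A minimal sketch of the update in (\ref{eq_adam})--(\ref{eq_adam_v}) on a toy scalar loss $\mathcal{L}(\theta)=\theta^2$ is given below. The decay-rate defaults $\delta_1=0.9$, $\delta_2=0.999$ follow common practice, and the bias-correction terms of the original Adam formulation are omitted, matching the equations above:

```python
import numpy as np

def adam_step(theta, grad, m, v, eta=0.003, d1=0.9, d2=0.999, eps=1e-8):
    """One iteration: moving averages of the gradient and its square,
    then a parameter step of size ~eta in the averaged direction."""
    m = d1 * m + (1 - d1) * grad
    v = d2 * v + (1 - d2) * grad**2
    theta = theta - eta * m / np.sqrt(v + eps)
    return theta, m, v

theta, m, v = 1.0, 0.0, 0.0
for _ in range(3000):                    # minimize L(theta) = theta^2
    theta, m, v = adam_step(theta, 2.0 * theta, m, v)
# theta settles in a small neighborhood of the minimum at 0
```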
\subsubsection{DNN$_{R, D}$\:-\:CRIS Model} As the proposed DNN is deployed at the destination, its input data are the noisy, path-faded and modulated signals reaching the destination ($y_\ell$), and its outputs are the estimates of the originally transmitted symbols ($\hat{x}$). The estimation problem in this case is a multiclass classification problem such that, for every received signal $y_\ell$, the DNN based estimator should converge to the estimate $\hat{x}$ closest to the original symbol, chosen from a pre-determined symbol set. The symbol set and the number of classes used in the classification process are determined by the modulation order, such that for modulation order $M$ there are $M$ symbols in the symbol set and consequently $M$ classes. In this respect, for order $M$, the symbol set and the respective classes can be defined as \begin{eqnarray}\label{eq_S} \mathcal{S}=\Big\{x_{\mathcal{S}_1},x_{\mathcal{S}_2}\ldots,x_{\mathcal{S}_M}\Big\}, \end{eqnarray} \begin{eqnarray}\label{eq_C} \mathcal{C}=\Big\{\mathbb{X}_1,\mathbb{X}_2,\ldots,\mathbb{X}_M\Big\}. \end{eqnarray} Here, $\mathcal{C}$ is the set of class labels assigned to the members of the symbol set $\mathcal{S}$, where $x_{\mathcal{S}_v}\in\mathcal{S}$ and $\mathbb{X}_v\in\mathcal{C}$ for $v=1,2,\ldots,M$. Each element of $\mathcal{C}$ can be mapped to the respective element in $\mathcal{S}$ and vice versa, which ultimately yields the estimate $\hat{x}_s$ as seen in Fig. \ref{DNN_model1}. This relation between the symbol set and the classes can be represented by $\big\{x_{\mathcal{S}_v}\longleftrightarrow\mathbb{X}_v\big\}\longrightarrow{\hat{x}_s}$.
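For $M=4$, the mapping $\{x_{\mathcal{S}_v}\longleftrightarrow\mathbb{X}_v\}$ can be sketched with a QPSK-style constellation. The constellation points below are illustrative and not necessarily those used in the simulations:

```python
import numpy as np

M = 4
# Illustrative 4-QAM constellation with unit average symbol energy
symbol_set = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
class_set = np.arange(M)            # class label v assigned to x_{S_v}

def to_class(y):
    """Map a (possibly noisy) received point to the nearest class label."""
    return int(np.argmin(np.abs(symbol_set - y)))

def to_symbol(v):
    """Inverse mapping: class label back to the symbol estimate."""
    return symbol_set[v]

noisy = symbol_set[2] + 0.05 * (1 + 1j)   # small perturbation of x_{S_3}
print(to_class(noisy))                    # 2
print(to_symbol(to_class(noisy)) == symbol_set[2])  # True
```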
In this case, the feature vector and the corresponding output vector of the proposed DNN can be stated as \begin{eqnarray}\label{eq_F_train_dest} \mathbf{F}_{\text{train}}^i = \Big[ \Re(\mathbf{y}_{\ell,n}), \Im(\mathbf{y}_{\ell,n})\Big]_{1\times2}, \end{eqnarray} \begin{eqnarray}\label{eq_O_train_dest} \mathbf{O}_{\text{train}}^i = \Big[\mathbb{X}_v^i\Big]_{1\times1}, \quad v=1,2,\ldots,M. \end{eqnarray} Here, it should be noted that $\mathbf{y}_{\ell,n}$ is separated into real and imaginary parts as in (\ref{eq_F_train}) for $i=1,2,\ldots,s$, and $\mathbb{X}_v$ is a class category for $\mathbf{O}_{\text{train}}^i$. In this case, the training data set can be generated by concatenating the feature vectors and the corresponding symbol/class pairs over the sample space $s$ as shown in (\ref{eq_D_train}). For the multiclass classification problem, the loss function applied in the process can be defined as \begin{eqnarray}\label{eq_loss_dest} \mathcal{L}(\Theta) = -\frac{1}{s}\sum_{i=1}^s\sum_{v=1}^M \mathbb{X}_v^i\ln{\Hat{\mathbb{X}}_v^i}\big(\Theta\big) \end{eqnarray} where $s$ is the number of samples, $M$ is the number of classes, $\Hat{\mathbb{X}}_v^i$ is the actual output value for the $v^{th}$ class at the $i^{th}$ sample, and $\mathbb{X}_v^i$ is the target value for the same sample. As the classification layer follows the softmax layer, the $\Hat{\mathbb{X}}_v^i\big(\Theta\big)$ value is the output of the softmax layer. (\ref{eq_loss_dest}) is also known as the cross-entropy loss between the actual and target outputs \cite{bridle-softmax}. \begin{algorithm}[t!]
\caption{Theoretical Training Algorithm for DNN @ $R$} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE CSI data - $h_{\ell,n}, g_{\ell,n}$ \ENSURE Trained DNN at $R$ - $\textbf{W}_k$, $\textbf{b}_k$ \\ \textit{Initialisation}: The DNN parameters $\textbf{W}_k$, $\textbf{b}_k$ and the loss $\mathcal{L}(\Theta)$ are set to zero. \\ \textit{LOOP Process} \FOR {$i = 1$ to $s$} \STATE Pre-process the CSI data - Real and imaginary parts of $h_{\ell,n}$ and $g_{\ell,n}$ are separated and rearranged as in (\ref{eq_F_train}), generating the feature vector $\mathbf{F}_{\text{train}}^i$. \STATE Compute the theoretical adjusted phase vector $\boldsymbol{\phi}_{\ell,n}$ maximizing the instantaneous SNR as in (\ref{eq_SNR_max}). \STATE Pre-process the phase data found in step 3 - Real and imaginary parts of $\boldsymbol{\phi}_{\ell,n}$ are separated and rearranged as in (\ref{eq_O_train}), generating the output vector $\mathbf{O}_{\text{train}}^i$. \ENDFOR \STATE Generate the training data - Feature and output vectors are concatenated to form $D_{\text{train}}$ as in (\ref{eq_D_train}). \STATE Train the DNN until $\mathcal{L}(\Theta)$ is minimized with respect to (\ref{eq_loss_relay}) and (\ref{eq_adam}). \RETURN Trained DNN at $R$ \end{algorithmic} \end{algorithm} \begin{algorithm}[t!] \caption{Theoretical Training Algorithm for DNN @ $D$} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE Original transmitted symbols ($x$) and noisy, path-faded and modulated signals received at the destination ($\mathbf{y}_{\ell,n}$). \ENSURE Trained DNN at $D$ - $\textbf{W}_k$, $\textbf{b}_k$ \\ \textit{Initialisation}: The DNN parameters $\textbf{W}_k$, $\textbf{b}_k$ and the loss $\mathcal{L}(\Theta)$ are set to zero.
\\ \textit{LOOP Process} \FOR {$i = 1$ to $s$} \STATE Pre-process the received signals at the destination - Real and imaginary parts of $\mathbf{y}_{\ell,n}$ are separated and rearranged as in (\ref{eq_F_train_dest}), generating feature vector $\mathbf{F}_{\text{train}}^i$. \STATE Extract the transmitted symbols and convert the symbol set into a class set as in (\ref{eq_S}) and (\ref{eq_C}), respectively. \STATE Generate output vectors - For every feature vector, the corresponding class is designated as an output vector $\mathbf{O}_{\text{train}}^i$ as in (\ref{eq_O_train_dest}). \ENDFOR \STATE Generate the training data - Feature and output vectors are concatenated to form $D_{\text{train}}$ as in (\ref{eq_D_train}). \STATE Train the DNN until $\mathcal{L}(\Theta)$ is minimized with respect to (\ref{eq_loss_dest}) and (\ref{eq_adam}). \RETURN Trained DNN at $D$ \end{algorithmic} \end{algorithm} For optimizing the loss function in (\ref{eq_loss_dest}), we again deploy Adam, following the same blueprint as for the previous DNN in (\ref{eq_adam}), (\ref{eq_adam_m}) and (\ref{eq_adam_v}). Based on the training processes presented so far, the theoretical algorithms for both DNNs are summarized in Algorithm 1 and Algorithm 2. \begin{table} \caption{Training Parameters.} \vspace{-0.2cm} \begin{center} \begin{tabular}{|c|c|c|} \hline \hline \textbf{Training Parameter} & \textbf{DNN@Relay} & \textbf{DNN@Destination} \\ \hline\hline {Modulation Degree ($M$)} &\multicolumn{2}{c|}{4/8} \\ \hline {Num. of Transmit Antennas} &\multicolumn{2}{c|}{1} \\ \hline {Num. of Reflect. Elements ($N$)} &\multicolumn{2}{c|}{8/16/32/64} \\ \hline {Pathloss Exponent ($c$)} &\multicolumn{2}{c|}{4} \\ \hline {Num. of Hidden Layers} &\multicolumn{2}{c|}{4} \\ \hline {Batchsize} &\multicolumn{2}{c|}{256} \\ \hline {Validation Split} &\multicolumn{2}{c|}{10\%} \\ \hline {Num. 
of Samples ($s$)} & 360000 & 45000 \\ \hline {Iteration Steps} & 300 & 150 \\ \hline {Learning Rate ($\eta$)} & 0.003 & 0.003 \\ \hline {Dropout Rate} & 8\% & - \\ \hline \hline \end{tabular} \label{table:2} \end{center} \vspace{-0.3cm} \end{table} \vspace{-0.4cm} \section{Numerical Results and Discussion} \begin{figure}[t!] \centering \includegraphics[width=0.43\textwidth]{trainingfig-eps-converted-to.pdf} \vspace{-0.2cm} \caption{Training Progress of the DNN for the relay, in terms of RMSE and Loss using ReLU.} \label{RMSE for relu} \vspace{-0.3cm} \end{figure} \vspace{-0.2cm} \subsection{Simulation Parameters for DNN$_R$\:-\:CRIS/DNN$_{R, D}$\:-\:CRIS} The simulation is based on the channel and system model described in Section II, where $S$, $R_{\ell}$, and $D$ have been positioned as depicted in Fig. \ref{S_model1}. All relays are kept within the half-circle area by choosing $ \pi/2<\theta_\ell<\pi$, with the $d_{SD}$, $d_{SR_{\ell}}$, and $d_{R_{\ell}D}$ distances normalized with respect to $d_{SD}=1$. The SNR used in the simulations herein is defined as $\mathrm{SNR (dB)}=10\log_{10}(E_s/N_0)$. The number of reflecting elements on the RIS-based relays varies in the range 8--64 to observe its effect on relay configurations and BER performances. All channels are subject to Rayleigh fading and path loss effects with the path loss exponent $c$ chosen as $4$, reflecting a densely populated urban environment with building obstructions \cite{Rappaport1992}. The SNR range is assumed to be between $-40$ dB and $0$ dB in $4$ dB steps for all BER performance measurements. We use MATLAB and its toolboxes for the simulation, with the basic training parameters given in Table \ref{table:2}. As stated in Section III.A, the ReLU activation function is used in the hidden layers of both DNN models in all simulation phases. 
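For concreteness, the Adam update used to train both DNNs (cf. (\ref{eq_adam})) follows the standard rule; the sketch below is illustrative only -- the decay rates $\beta_1=0.9$ and $\beta_2=0.999$ are assumed defaults not stated in the text, while the learning rate $\eta=0.003$ is taken from Table \ref{table:2}.

```python
import math

def adam_step(w, grad, m, v, t, eta=0.003, beta1=0.9, beta2=0.999, eps=1e-8):
    """One standard Adam update for a scalar parameter w.

    m, v: running first/second moment estimates; t: 1-based step index.
    """
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias corrections
    v_hat = v / (1 - beta2 ** t)
    w = w - eta * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

# Minimizing f(w) = w^2 (gradient 2w) drives w toward 0.
w, m, v = 1.0, 0.0, 0.0
for t in range(1, 1001):
    w, m, v = adam_step(w, 2 * w, m, v, t)
print(abs(w) < 0.1)  # True
```

The bias correction matters early in training, when the raw moment estimates $m$, $v$ are still biased toward their zero initialization.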
The batch size determines the size of the data batch (also called a mini-batch) processed by the network on each iteration, which is a subset of the training data set. During training, $10\%$ of the training data set is reserved for validation by the validation split, and the validation frequency is set to $10$ for both DNNs. \vspace{-0.3cm} \subsection{DNN Performance Analysis} The performances of the DNNs deployed in the proposed model have been evaluated using a number of tools during the training and transmission phases. Fig. \ref{RMSE for relu} and Fig. \ref{Accuracy for relu} show the training progress plots for the relay and destination DNNs, respectively, providing information on the training and validation performance in terms of root mean square error (RMSE) and loss values for the relay DNN, and accuracy percentage for the destination DNN. These metrics are calculated during training for each mini-batch processed by the network on each iteration. \begin{figure}[t!] \centering \includegraphics[width=0.4\textwidth]{trainingfig2-eps-converted-to.pdf} \caption{Training Progress of the DNN for the destination, in terms of Accuracy (\%) and Loss using ReLU.} \label{Accuracy for relu} \vspace{-0.3cm} \end{figure} \begin{figure}[t!] \centering \begin{subfigure}[a]{0.3\textwidth} \centering \includegraphics[width=1\textwidth,height=1\textwidth]{confusion_m40_v2-eps-converted-to.pdf} \subcaption{} \label{Confusion -40dB} \end{subfigure} \begin{subfigure}[a]{0.3\textwidth} \centering \includegraphics[width=1\textwidth,height=1\textwidth]{confusion_m20_v2-eps-converted-to.pdf} \subcaption{} \label{Confusion -20dB} \end{subfigure} \caption{Confusion matrices for DNN at $D$ with $M=4$ for (a) SNR=$-40$ dB and (b) SNR=$-20$ dB } \label{Confusion -20dB -40dB} \vspace{-0.2cm} \end{figure} Loss values for the proposed DNNs at the relay and destination can be directly computed by the functions defined in (\ref{eq_loss_relay}) and (\ref{eq_loss_dest}), respectively. 
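As a minimal illustration of how the classification loss in (\ref{eq_loss_dest}) is evaluated per mini-batch (the numbers below are toy values, not from the simulations):

```python
import math

def cross_entropy_loss(targets_onehot, softmax_outs):
    """Mini-batch cross-entropy, as in the loss above:
    L = -(1/s) * sum_i sum_v X_v^i * ln(Xhat_v^i)."""
    s = len(targets_onehot)
    total = 0.0
    for X, Xhat in zip(targets_onehot, softmax_outs):
        total += sum(x * math.log(p) for x, p in zip(X, Xhat))
    return -total / s

# Two toy samples for M = 4 classes (one-hot targets vs. softmax outputs).
targets = [[0, 1, 0, 0], [1, 0, 0, 0]]
outputs = [[0.1, 0.7, 0.1, 0.1], [0.6, 0.2, 0.1, 0.1]]
print(round(cross_entropy_loss(targets, outputs), 4))  # 0.4338
```

Only the softmax probability assigned to the true class contributes, since the one-hot target zeroes out the other terms.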
On the other hand, the RMSE for a typical regression network, the DNN for the relay in our case, can be defined as, \begin{eqnarray}\label{eq_RMSE} RMSE=\sqrt{\frac{1}{s}\sum_{i=1}^s(t_i-\hat{t}_i)^2}. \end{eqnarray} Here, $t_i$ is the target output, $\hat{t}_i$ is the actual output, and $s$ denotes the number of samples or responses in general terms \cite{Kay97}. The RMSE and loss curves in a regression network follow a similar path, since the loss function is the half mean squared error computed by the network for optimizing the parameters. The accuracy of the DNN for the destination can also be monitored during the transmission phase, using confusion matrices. The confusion matrix for classification networks shows the percentage of the actual DNN outputs matching the target outputs for each prediction cycle of the network. Fig. \ref{Confusion -40dB} and \ref{Confusion -20dB} show the confusion matrices for the DNN deployed at the destination for $M=4$ and SNR values of $-40$ dB and $-20$ dB, respectively. Here, the horizontal axis shows the target output and the vertical axis the actual output. The cells on the green diagonal show the percentage of the successfully matched symbols for every symbol class. In the proposed model, it can be seen that the accuracy of the DNN increases significantly with high SNR values, as expected. During simulation, the DNN output accuracy has a direct impact on the BER performance, such that a high estimation accuracy for symbol detection improves BER values considerably. Therefore, to achieve consistent results, $80\%$ accuracy for the destination DNN is taken as a lower limit for acceptable performance levels at all distances and SNR values. \begin{figure}[t!] 
\centering \includegraphics[width=0.4\textwidth]{Fig1_R2_N8_v3-eps-converted-to.pdf} \caption{Performance comparisons of DNN$_R$\:-\:CRIS/DNN$_{R, D}$\:-\:CRIS and CRIS systems for $M=4, L=2$ and $N_1=N_2=8$.} \label{BER R2N8} \vspace{-0.3cm} \end{figure} \vspace{-0.3cm} \subsection{BER Analysis of DNN$_R$\:-\:CRIS/DNN$_{R, D}$\:-\:CRIS Schemes} In this section, we evaluate the BER performances of the DNN$_R$\:-\:CRIS and DNN$_{R, D}$\:-\:CRIS models using various relay and receiver configurations, including non-DNN RIS-based relay configurations with ML-based receivers at the destination as a benchmark for all BER performance comparisons. \begin{figure}[t!] \centering \includegraphics[width=0.4\textwidth]{Fig1_R2_N16_v3-eps-converted-to.pdf} \caption{BER comparisons of DNN$_R$\:-\:CRIS/DNN$_{R, D}$\:-\:CRIS and CRIS systems for $M=4, L=2$ and $N_1=N_2=16$.} \label{BER R2N16} \vspace{-0.3cm} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=0.4\textwidth]{Fig1_R2_N32_v4-eps-converted-to.pdf} \caption{BER comparisons of DNN$_R$\:-\:CRIS/DNN$_{R, D}$\:-\:CRIS and CRIS systems for $M=4, L=2$ and $N_1=N_2=32$.} \label{BER R2N32} \vspace{-0.3cm} \end{figure} We start by evaluating the BER performance of a two-relay cooperative system, as shown in Fig. \ref{BER R2N8}, over a 4-QAM modulation scheme. Each RIS-based relay contains 8 reflecting elements, with relay distances $d_{SR_{1}}=0.2$, $d_{R_1D}=0.98$ and $d_{SR_2}=0.5$, $d_{R_2D}=0.86$ for $\theta_1=\theta_2=\pi/2$. Here, each relay's individual BER performance can be observed along with the combined MRC performance at the destination for both DNN-based and non-DNN-based configurations. It can be seen that the performance of the DNN$_R$\:-\:CRIS model is very close to that of the non-DNN-based configurations for both relay distances. 
The addition of a DNN-based destination into the system (DNN$_{R, D}$\:-\:CRIS) introduces a slight impact on BER performance as expected, with increased degradation occurring particularly for $R_1$: at a BER level of $10^{-3}$, the SNR loss is about $6$ dB for $R_1$ whereas it is $4$ dB for $R_2$. It is also noteworthy that the DNN$_{R, D}$\:-\:CRIS with MRC receivers shows comparable performance with the non-DNN MRC system, even outperforming the non-DNN receiver-ML single-relay configurations. In Fig. \ref{BER R2N16} and Fig. \ref{BER R2N32}, the number of reflecting elements is increased to 16 and 32, respectively, with all remaining parameters retained from the previous scenario. The immediate effect of increasing the number of reflecting elements is observed as an SNR improvement of about $10$ dB for $N=16$ and $7$ dB for $N=32$ at the BER of $10^{-3}$ or below. We see that configurations with the DNN$_R$\:-\:CRIS model show nearly identical performance to the non-DNN reference configurations, and the impact of the DNN-based destination on SNR remains at similar levels. Increasing the number of reflecting elements on the relays thus improves the BER performances of the DNN-based systems as a whole, while preserving their performance characteristics. \begin{figure}[t!] \centering \includegraphics[width=0.4\textwidth]{Fig2_R4_N16_v4-eps-converted-to.pdf} \caption{BER comparisons of DNN$_R$\:-\:CRIS/DNN$_{R, D}$\:-\:CRIS and CRIS systems for $M=4, L=4$ and $N_1=N_2=8$.} \label{BER R4N64} \vspace{-0.3cm} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=0.4\textwidth]{Fig_M4_8fig_v4-eps-converted-to.pdf} \caption{BER comparisons of DNN$_R$\:-\:CRIS/DNN$_{R, D}$\:-\:CRIS and CRIS systems for $M=4,8, L=2$ and $N_1=N_2=8$.} \label{BER R2N8_v2} \vspace{-0.3cm} \end{figure} Fig. 
\ref{BER R4N64} shows a four-relay system with relay distances of $d_{SR_{1}}=0.4$, $d_{R_1D}=0.91$, $d_{SR_2}=0.55$, $d_{R_2D}=0.83$, $d_{SR_{3}}=0.65$, $d_{R_3D}=0.76$, $d_{SR_4}=0.2$, and $d_{R_4D}=0.98$, each relay having 64 reflecting elements. We continue to observe that the DNN$_R$\:-\:CRIS scenarios show nearly identical BER performances to the non-DNN configurations, as in the previous case. Here, the path loss effects become clearer as the relays are distributed throughout the model layout, such that configurations incorporating relays closer to the source or destination show a better BER performance, as the signals undergo less attenuation. It is also found that the worst-performing relay is $R_3$, whose distance is nearly midway between the source and destination. The most significant outcome here is that the BER value is found to be maximum when $d_{SR_\ell}=d_{R_{\ell}D}$, since the path loss becomes less severe as the relay gets closer to either $S$ or $D$. Finally, we also compared the BER performances for the $M=4$ and $M=8$ schemes as shown in Fig. \ref{BER R2N8_v2} with relay distances of $d_{SR_{1}}=0.9$, $d_{R_1D}=0.43$, $d_{SR_{2}}=0.35$ and $d_{R_2D}=0.93$. It can be seen that for both modulation schemes, the performance characteristics remain mostly unchanged, as the best performance is achieved for the relay closest to the destination ($R_1$), and the DNN-based destination introduces a certain amount of SNR loss for both relays. The only remarkable difference is the general performance setback for $M=8$, as expected, such that at the BER of $10^{-3}$, the SNR loss is about $5$ dB for the DNN$_R$\:-\:CRIS and $8$ dB for the DNN$_{R, D}$\:-\:CRIS models. \vspace{-0.2cm} \section{Conclusion and Future Works} In this work, we have realized a deep-learning-assisted RIS-based cooperative communications system with a DNN-based symbol detector at the destination. 
The effect of the path loss on the DNN$_R$\:-\:CRIS and DNN$_{R, D}$\:-\:CRIS models is demonstrated in terms of BER performances. We have incorporated DNN performance analysis techniques during the training and/or transmission phases for evaluating the individual DNN performances, and BER analysis to evaluate the total system performance for all deployment scenarios. Simulation results showed that DNN-assisted RIS-based relays can effectively maximize the reflected signals, exhibiting a very satisfactory performance against the reference non-DNN configurations in general. Combining DNN-assisted relays with the DNN-based symbol detectors at the destination also showed acceptable performance, especially in conjunction with RIS-based relays carrying a high number of reflectors. Moreover, using MRC at the destination to combine multi-relay signals for the dual-DNN scenarios produced promising results, where practicing a number of relay selection schemes for the final model is also possible. In the context of this work, it is assumed that the CSI is perfectly known at the relay. It will be a broad and challenging field of study to consider similar scenarios for a totally CSI-blind system, where more advanced deep learning techniques and DNN architectures will presumably be required. Another interesting approach to investigate could be to optimize the RIS for a content-aware system that goes beyond a signal-maximization point of view, detecting the type of data transmitted and optimizing the network parameters accordingly. \ifCLASSOPTIONcaptionsoff \newpage \fi \vspace{-0.3cm} \bibliographystyle{ieeetr}
\section{Introduction} \label{sec:intro} Over the last few years, it has been realized that combining the power of dispersion relations and crossing symmetry leads to powerful constraints on the low energy expansion of 2-2 scattering \cite{snowsmat1,snowsmat2}. At the same time, to understand scattering of massless particles in four spacetime dimensions, the program bearing the name of ``Celestial amplitudes'' \cite{Pasterski:2021raf, McLoughlin:2022ljp} has been an active area of research. Celestial amplitudes represent S-matrix elements in a basis where the external particles are in boost eigenstates. In this basis, $4$-$d$ scattering amplitudes manifestly transform as $2$-$d$ conformal correlation functions \cite{Pasterski:2016qvg,Pasterski:2017kqt,Banerjee:2018gce}. Due to this feature, celestial amplitudes have emerged as a central object of interest in the context of flat space holography, where recent developments on the connection between soft theorems and asymptotic symmetries suggest that the holographic dual of quantum gravity in asymptotically flat spacetimes is a $2$-$d$ celestial conformal field theory (CCFT) defined on the celestial sphere at null infinity \cite{Strominger:2017zoo, Kapec:2014opa, Kapec:2016jld}. The celestial formalism has led to several fascinating recent insights, particularly for scattering of massless particles (see \cite{Pasterski:2021raf, McLoughlin:2022ljp,Pasterski:2021rjz,Raclariu:2021zjz} for recent reviews). For example, soft theorems in gravity and gauge theories in $4$-dimensions have been shown to imply the existence of infinite dimensional current algebra symmetries acting on the $2$-$d$ celestial sphere \cite{Guevara:2021abz, Strominger:2021mtt, Banerjee:2020vnt}. These symmetries impose powerful constraints on the operator product expansion (OPE) in CCFT, which in turn is related to collinear limits of scattering amplitudes \cite{Fan:2019emx, Pate:2019lpp, Banerjee:2020kaa}. 
Quite remarkably, these infinite-dimensional celestial symmetries can be used to completely determine tree-level MHV amplitudes in Yang-Mills theory and Einstein gravity \cite{Banerjee:2020vnt, Banerjee:2020zlg}. In this paper, we wish to understand what insights one can obtain about the S-matrix bootstrap program using ideas from Celestial amplitudes. A main recent development is the derivation of two-sided bounds on ratios of Wilson coefficients\footnote{Taylor expansion coefficients arising in the low energy expansion of 2-2 scattering amplitudes, which in turn are related to contact vertices in the effective action.} \cite{snowsmat1,snowsmat2}. The primary tool in this area of research has been the fixed-$t$ dispersion relation. Since 2-2 scattering is a function of the two Mandelstam invariants $s,t$, historically much attention has focused on a dispersion relation where one of these variables (typically $t$) is held fixed. In the case of scattering of identical particles, a penalty that one has to pay is the loss of manifest crossing symmetry, which then has to be imposed as a constraint. Such constraints have been dubbed ``null constraints'' in \cite{sch1}. Using these constraints, and linear programming, one numerically finds two-sided bounds on Wilson coefficients. In \cite{efthedron}, a geometric picture was put forward where it was argued that, as a consequence of the constraints arising from locality and unitarity, the space of Wilson coefficients is forced to lie inside a geometric region called the EFThedron. In the early 1970s, Auberson and Khuri had looked at a dispersion relation with manifest crossing symmetry (CSDR). This line of research lay dormant for many years. Recently, this dispersion relation was resurrected in \cite{Sinha:2020win,Gopakumar:2021dvg}. Since there is inbuilt crossing symmetry from the outset, the penalty one pays to have a dispersion relation is the loss of manifest locality. Namely, one finds spurious poles in the partial waves. 
Cancellation of these spurious poles is needed to have a local low-energy expansion. This can only happen after summing over spins. The role of ``null constraints'' is played by these ``locality constraints'' in this program. In \cite{Sinha:2020win, Gopakumar:2021dvg}, the equivalence of the two sets of constraints was shown. The partial waves in the CSDR with spurious singularities were termed ``Dyson blocks'' in \cite{Sinha:2020win}. There is another version of the partial waves, closer in spirit to Feynman diagrams, which is free of such singularities and resembles exchange Feynman diagrams supplemented with specific contact diagrams. These were called ``Feynman blocks'' in \cite{Sinha:2020win}. As we will see, these Feynman blocks in the Celestial variables have remarkable properties. One of the main advantages of working with the CSDR is that it leads to a fascinating connection with an area of mathematics called Geometric Function Theory (GFT) \cite{hsz, rs1, rs2}. The origin of two-sided bounds on Wilson coefficients is related to the famous Bieberbach conjecture (de Branges' theorem). The main property of the amplitude that enables this connection is what is called ``typically real''-ness or ``Herglotz''. A function $f(z)$ is typically real if it satisfies $\mathrm{Im}\, f(z)\, \mathrm{Im}\, z> 0$ whenever $\mathrm{Im}\, z \neq 0$. If the function is regular inside the unit disk, then the Taylor expansion coefficients of the function satisfy two-sided bounds called Bieberbach-Rogosinski bounds \cite{rs1}. The function is also allowed to have simple poles on the real axis. When this happens, the two-sided bounds get modified to the so-called Goodman bounds, in which the gap between the origin and the nearest pole controls the two-sided bounds. These mathematical facts are reviewed in \cite{rs1}. It is naturally tempting to build on \cite{Atanasov:2021cje} using the CSDR. First, the crossing kernel (to be reviewed below) bears a resemblance to the tree-level $\phi^3$ theory. 
In the CSDR, the kernel is dressed with the Legendre (Gegenbauer) polynomials, which carry information about the spin. For spin-0, the results of \cite{Atanasov:2021cje} can be readily imported. With some more effort, we will be able to calculate for any spin. In a companion paper \cite{toappear}, we will present what can be learnt for the Celestial CFT (CCFT) from EFTs. In this paper, we will explain what insights can be obtained for EFTs using CCFT techniques. We will use the Celestial variable $z$, in terms of which the Mandelstam variables for the scattering of 2-2 massless particles are written as \begin{equation} s=\omega^2,\quad t=-\omega^2 z,\quad u=-\omega^2(1-z)\,. \end{equation} For fixed $z$, this choice of variables enables one to study fixed-angle scattering. The Celestial amplitude is obtained as a Mellin transform of the four-dimensional scattering amplitude. The Mellin variable is $\beta$. We will show how to repackage the information about the null/locality constraints systematically. Next, we will examine the properties of the Feynman blocks in the Celestial variable. Specifically, we will be interested in the residues of the Celestial amplitude at $\beta=-2n$, i.e., negative even integers, since these contain information about the low-energy expansion coefficients in the momentum space amplitude \cite{Arkani-Hamed:2020gyp, Chang:2021wvv}. For each $n$, we can write an explicit expression for the amplitude in terms of a sum over Feynman blocks. Quite remarkably, we will find that beyond a certain critical spin $J=J_T$, all the Feynman blocks are typically real! This enables us to put two-sided bounds on the truncated partial wave sum $J<J_T$. Furthermore, GFT techniques lead to novel two-sided bounds on the Wilson coefficients themselves in terms of the $J<J_T$ partial wave moments. Let us give a brief overview of the key results. 
We have been able to: \begin{itemize} \item Show that there is a new kind of positivity exhibited by amplitudes in the $\rho=-1-2z(z-1)$ variable. \item Obtain a representation for the $4$-point celestial amplitude of massless scalars using the crossing symmetric dispersive representation of the momentum space amplitude \eqref{mtildebetaz1} for generic $\beta$ and, by specializing to $\beta=-2n, ~n \in \mathbb{Z}_+$ relevant for low-energy physics \eqref{betaresrhovar}, systematically analyze the implications of the locality constraints \eqref{locconstr}. \item Obtain bounds on partial wave moments as a direct consequence of the above-mentioned locality constraints \eqref{locconstr}, using which we quantify the phenomenon of low spin dominance (LSD) and argue that the $\rho$-positivity is tied to spin-0 dominance \eqref{lsd}. \item Show that, as a function of $\rho$, the Feynman blocks for large enough spins are typically-real polynomials. Using this, we put non-projective bounds on the low-energy Wilson coefficients \eqref{n4bounds},\eqref{pw51} in terms of a few low-spin partial wave moments. \item Obtain bounds for the case with the graviton pole by using Goodman bounds for typically real functions in the $\tilde{\rho}=\rho+1$ variable. \end{itemize} A question worth asking at this point is whether one could have obtained these results without appealing to CCFTs. The key players in our story are the Celestial variable $\rho$ and GFT methods relying on typically-realness in this variable. It is unclear why one would be interested in analysing such properties in this variable without having the motivation to understand CCFTs, which is why we feel that the CCFT formalism has been the key player leading to the S-matrix insights obtained in this paper. The paper is organized as follows. In section 2, we begin by introducing the celestial-inspired $\rho$-variable, in which known amplitudes curiously seem to exhibit a hitherto unknown kind of positivity. 
In section 3, starting from the CSDR, we obtain a representation of the celestial amplitude for generic $\beta$. In section 4, by specializing to $\beta=-2n$, we analyze the locality constraints, which imply certain bounds on partial wave moments, LSD and a connection between $\rho$-positivity and spin-0 dominance. In section 5, we show that there is a connection between the Feynman blocks and typically-real polynomials in the unit disk $|\rho|<1$ and use techniques from GFT to obtain two-sided bounds on low-energy Wilson coefficients $\mathcal{W}_{pq}$ in terms of lower-spin partial waves. We conclude in section 6 with a discussion on the possible future directions of interest. The appendices supplement the material in the main text with proofs, closed-form expressions and tables of data. \section{Celestial insight 1: A curious observation} \label{sec:newpositivity} In this section, we wish to point out an interesting feature of the low-energy expansion of 2-2 scattering in many theories. We will start with string theory. Consider the following two fully crossing symmetric amplitudes \cite{nimayutinstring}. \begin{equation} \begin{split}\label{CBamplitude} \mathcal{M}_{CB}(s,t)= \frac{\Gamma(-s-1)\Gamma(-t-1)\Gamma(-u-1)}{\Gamma(2+s)\Gamma(2+t)\Gamma(2+u)}\,, \quad s+t+u=-4 \end{split} \end{equation} \begin{equation} \label{type2} \begin{split} M_{II}(s,t) = -x^2 \mathcal{M}_{II}(s,t)\equiv x^2\frac{\Gamma(-s)\Gamma(-t)\Gamma(-u)}{\Gamma(1+s)\Gamma(1+t)\Gamma(1+u)}\,,\quad s+t+u=0 \end{split} \end{equation} Here we have defined $x= -(st+tu+su)$. The first amplitude is the 2-2 tree-level scattering of tachyons in closed bosonic string theory, while the second one is the 2-2 tree-level scattering of dilatons in type-II string theory. For type-II, we can also consider the 2-2 graviton scattering amplitude ${\mathcal R}^4 \mathcal{M}_{II}(s,t)$. \vskip 3pt We wish to expand both amplitudes in a manifestly crossing symmetric manner. 
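Both expressions are fully crossing symmetric, i.e., invariant under any permutation of $(s,t,u)$; this can be checked numerically from the Gamma-function representations above at an arbitrary pole-free sample point (an illustrative check, not part of the original text):

```python
from math import gamma, isclose

def M_CB(s, t):
    """Closed bosonic string tachyon amplitude; here u = -4 - s - t."""
    u = -4 - s - t
    return (gamma(-s - 1) * gamma(-t - 1) * gamma(-u - 1)
            / (gamma(2 + s) * gamma(2 + t) * gamma(2 + u)))

def M_II(s, t):
    """Type-II dilaton amplitude factor; here u = -s - t."""
    u = -s - t
    return (gamma(-s) * gamma(-t) * gamma(-u)
            / (gamma(1 + s) * gamma(1 + t) * gamma(1 + u)))

s, t = 0.11, 0.23
# Swapping s <-> t, or substituting (s,t) -> (u,t), permutes (s,t,u).
print(isclose(M_CB(s, t), M_CB(t, s)) and isclose(M_CB(s, t), M_CB(-4 - s - t, t)))  # True
print(isclose(M_II(s, t), M_II(t, s)) and isclose(M_II(s, t), M_II(-s - t, t)))      # True
```

The sample point only needs to avoid the poles of the Gamma functions (non-positive integer arguments).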
To this end, we introduce \begin{eqnarray} s_1&=&s+\frac{4}{3}\,,\quad s_2=t+\frac{4}{3}\,, \quad s_3=u+\frac{4}{3}\,, \quad{\rm for~ }\mathcal{M}_{CB}\\ s_1&=& s\,,\quad s_2=t\,,\quad s_3=u\,, \quad {\rm for~} \mathcal{M}_{II}\,. \end{eqnarray} In both cases $s_1+s_2+s_3=0$. Now we introduce the celestial variables \begin{equation} s_1=\omega^2\,,\quad s_2=-\omega^2 z\,,\quad s_3=-\omega^2(1-z)\,. \end{equation} Further, for later convenience, we introduce \begin{equation} \rho=-1-2z^2+2z\,. \end{equation} The relation between the $\rho$ variable and the $z$ variable is indicated in Fig. \ref{fig:pic1a}. \begin{figure}[ht] \centering \includegraphics[width=0.5\linewidth]{pic1.png} \caption{The $\rho$-variable. We have marked the range of interest. Green indicates the physical region, while red indicates the unphysical region.} \label{fig:pic1a} \end{figure} \vskip 4pt In passing, we also note the following useful relations ($s_1+s_2+s_3=0$): \begin{eqnarray} x&\equiv &-(s_1 s_2+s_1 s_3+s_2 s_3)=\frac{\omega^4}{2}(1-\rho)\,,\\ y&\equiv &-s_1 s_2 s_3=-\frac{\omega^6}{2}(1+\rho)\,. \end{eqnarray} Then expanding the amplitudes around $\omega^2=0$ (i.e., the crossing symmetric point), we find\footnote{One cross-check about the overall sign is that if we write the expansion as $\sum {\mathcal W}_{pq}x^p y^q$, then using unitarity, one can show that $\mathcal{W}_{n,0}\geq 0$. In other words, the coefficients of $\omega^{4n}(1-\rho)^n$ are guaranteed to be positive.} \begin{equation} \begin{split}\label{cbtype2} & \mathcal{M}_{CB} \approx 7.74+27.22 \omega^4(1-\rho)+121 \omega^6(1+\rho)+121.56 \omega^8 (1-\rho)^2+911.18\omega^{10}(1-\rho^2) \\ & \hspace{1.4cm} +546.77 \omega^{12}\left[(1-\rho)^3+3(1+\rho)^2\right]+O(\omega^{14})\,,\\ \\ &\mathcal{M}_{II}-\frac{2}{\omega^6(1+\rho)}\approx 2.40+1.04\omega^4(1-\rho)+1.44\omega^6(1+\rho)+0.50\omega^8(1-\rho)^2 \\ & \hspace{3.4cm} +1.25 \omega^{10}(1-\rho^2) +0.25\omega^{12}\left[(1-\rho)^3+2.98(1+\rho)^2\right]+O(\omega^{14})\,. 
\end{split} \end{equation} For type II, we have put the graviton pole on the left. Now consider two more cases. First, the run-of-the-mill $\phi^2\psi$ theory, where massless $\phi$'s scatter at tree level via the exchange of a massive $\psi$. The amplitude for this is: \begin{eqnarray} \mathcal{M}_{\phi^2 \psi}&=& g^2\left(\frac{1}{m^2-s}+\frac{1}{m^2-t} +\frac{1}{m^2-u} \right) \nonumber\\ &=&\frac{3 g^2}{m^2}+\frac{g^2 \omega^4}{m^6}(1-\rho)+\frac{3 g^2 \omega^6}{2 m^8}(1+\rho)+\frac{g^2\omega^8}{2m^{10}}(1-\rho)^2+\frac{5 g^2\omega^{10}}{4 m^{12}}(1-\rho^2)\nonumber\\&+&\frac{g^2 \omega^{12}}{4m^{14}}\left[(1-\rho)^3+3(1+\rho)^2\right]+O(\omega^{14})\,. \end{eqnarray} Finally, consider the same theory at one loop, which is given in terms of the Appell function $F_3$ \cite{davdychev} \begin{eqnarray} \mathcal{M}_{\phi^2 \psi}&=& \frac{\pi^2}{6 m^4}\left( F_3 \left({{1,1,1,1}\atop{\frac{5}{2}}}; \frac{s}{4 m^2},\frac{t}{4 m^2} \right)+(s\rightarrow t, t\rightarrow u)+(s\rightarrow u, t\rightarrow s) \right) \nonumber \\ &=&\frac{\pi^2}{6 m^4}\left( \sum_{p,q=0}^{\infty} \frac{p!~q!}{\left(\frac{5}{2}\right)_{p+q}(4~m^2)^{p+q}}\left(s^p t^q+t^p u^q+u^p s^q\right)\right) \nonumber\\ &\approx& \frac{\pi^2}{6 m^4}\bigg(3+\frac{\omega^4}{40 m^4}(1-\rho)+\frac{\omega^6}{168 m^6}(1+\rho)+\frac{\omega^8}{2520 m^8}(1-\rho)^2+\frac{\omega^{10}}{5280 m^{10}}(1-\rho^2)\nonumber\\ &+& \frac{\omega^{12}}{128128 m^{12}}\left[(1-\rho)^3+2.91(1+\rho)^2\right]\bigg)+O(\omega^{14}) \end{eqnarray} Now all of these expansions have the following startling feature in common. \smallskip {\it All these expansions up to any fixed order in $\omega$ are positive polynomials in $\rho$ in the interval $\rho\in(-1,1)$. In order to be concise, we will refer to this positivity as ${\mathcal P}_\rho$.} \smallskip A positive polynomial $p(x)$ on an interval $(a,b)$ is one that satisfies $p(x) \ge 0 ~\forall~ x\in(a,b)$. 
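The change of variables above, the relations for $x$ and $y$, and the quoted constant term of $\mathcal{M}_{CB}$ (all Mandelstams at the crossing point $-4/3$) can be verified numerically; a quick illustrative check, at an arbitrary $(\omega, z)$ point:

```python
from math import gamma, isclose

omega, z = 0.7, 0.3
s1, s2, s3 = omega**2, -omega**2 * z, -omega**2 * (1 - z)
rho = -1 - 2 * z**2 + 2 * z

x = -(s1 * s2 + s1 * s3 + s2 * s3)
y = -s1 * s2 * s3

print(isclose(s1 + s2 + s3, 0.0, abs_tol=1e-12))   # True
print(isclose(x, omega**4 * (1 - rho) / 2))        # True
print(isclose(y, -omega**6 * (1 + rho) / 2))       # True

# Constant term of M_CB at the crossing point s = t = u = -4/3.
print(round((gamma(1 / 3) / gamma(2 / 3))**3, 2))  # 7.74
```

At the crossing point every Gamma-function argument reduces to $1/3$ (numerator) or $2/3$ (denominator), which is why the constant collapses to the simple ratio above.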
A nice characterization of such polynomials \cite{powers} on $(-1,1)$ is that they can be expanded in terms of the so-called {\it Bernstein basis} $p(x)=\sum_{i=0}^m c_{i}(1+x)^{m-i} (1-x)^i$ such that $c_{i}\ge 0$; however, we may need $m\ge d$, where $d$ is the degree of the polynomial. The smallest such $m$ that guarantees $c_i \ge 0$ for all degrees $n\ge m$ is called the Bernstein degree of the polynomial \cite{powers}. The Bernstein degree requires knowledge of the maximum and minimum values of the polynomial $p(x)$. In appendix \ref{sec:positivityproofs}, we will derive these positivity properties directly using the known expressions for the amplitudes. Now some of the positivity features can be explained quite straightforwardly using a dispersion relation. For instance, the coefficient of the $\omega^4(1-\rho)$ term can be shown to be positive using partial wave unitarity. The full positivity in the $\rho\in(-1,1)$ interval, however, is harder to explain. One of the main purposes of this paper is to find analytic conditions under which such positivity can hold. Our main tool will be to use the crossing symmetric dispersion relation (CSDR) \cite{Auberson:1972prg, Sinha:2020win}, which we will review next. \section{Essential technicalities: Dispersion relations} As mentioned in the introduction, our focus in this paper will be the use of the crossing symmetric dispersion relation (CSDR). Many of the analytic properties will be transparent using the CSDR \footnote{It should be possible to use the fixed-$t$ dispersion relations to find numerical evidence for these properties, but we will leave this as an open problem.}. We begin with a lightning review of the CSDR. For further details, we refer the reader to \cite{Auberson:1972prg,Sinha:2020win}. \subsection{CSDR: A quick review} \label{subsec:csdrreview} Consider $\mathbf{M}(s,t)$ to be the $2$-$2$ scattering amplitude of identical massless scalars in four spacetime dimensions. 
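As a concrete instance of such a Bernstein certificate (our own decomposition, not quoted in the text): the $\omega^{12}$ polynomial $(1-\rho)^3+3(1+\rho)^2$ appearing in the expansions above can be written with the nonnegative Bernstein coefficients $(3/2,\,3/2,\,0,\,1)$ already at degree $m=3$, which certifies its positivity on $(-1,1)$:

```python
def p(r):
    """The omega^12 coefficient polynomial from the expansions above."""
    return (1 - r) ** 3 + 3 * (1 + r) ** 2

def bernstein(r, c=(1.5, 1.5, 0.0, 1.0)):
    """Degree-3 Bernstein form sum_i c_i (1+r)^(3-i) (1-r)^i, all c_i >= 0;
    uses (1+r)^2 = [(1+r)^3 + (1+r)^2 (1-r)] / 2."""
    return sum(ci * (1 + r) ** (3 - i) * (1 - r) ** i for i, ci in enumerate(c))

grid = [-0.99 + 0.01 * k for k in range(199)]
print(all(abs(p(r) - bernstein(r)) < 1e-12 for r in grid))  # True: decomposition holds
print(all(p(r) > 0 for r in grid))                          # True: P_rho on (-1, 1)
```

Since every basis element $(1+r)^{3-i}(1-r)^i$ is nonnegative on $[-1,1]$, nonnegative coefficients immediately imply $p(\rho)\ge 0$ there.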
$\mathbf{M}(s,t)$ admits a crossing symmetric dispersive representation given by \cite{Auberson:1972prg,Sinha:2020win} \begin{equation} \label{csdr} \begin{split} \mathbf{M}(s,t) = c_{0}+ \frac{1}{\pi} \int_{\delta_{0}}^{\infty} \frac{ds'}{s'} \hspace{0.1cm} \mathcal{A}(s', a) H(s'; s,t,u) \end{split} \end{equation} where \begin{equation} \label{adef} \begin{split} a = \frac{s t u}{st + t u + u s}\equiv \frac{y}{x}\,. \end{split} \end{equation} Here $\delta_0$ is the location of the cut (or in the case of string theory, the first massive pole) and $c_0=\mathbf{M}(0,0)$, which arises as we have assumed two subtractions while writing down the dispersion relation \cite{Auberson:1972prg,Sinha:2020win}. $\mathcal{A}(s, a)$ is the $s$-channel discontinuity of the amplitude and $H(s'; s,t,u) $ denotes the following crossing symmetric kernel \begin{equation} \label{kernel} \begin{split} H(s'; s,t,u) = \frac{s}{s'-s} + \frac{t}{s'-t}+ \frac{u}{s'-u}\,. \end{split} \end{equation} The discontinuity $\mathcal{A}(s, a)$ can be expanded in terms of Legendre polynomials as \begin{equation} \label{absp} \begin{split} \mathcal{A}(s', a) = 32\pi \sum_{J=0}^{\infty} (2J+1) \ \alpha_{J}(s') \ P_{J}\left( \sqrt{\frac{s'+3a}{s'-a}}\right)\,. \end{split} \end{equation} where $\alpha_{J}(s')$ are the partial wave coefficients. In the sum over spins in \eqref{absp}, only even spins contribute since we are considering here the amplitude for identical scalars. The conventions are chosen so that unitarity leads to $0\leq \alpha_J(s)\leq 1$. Now the nontrivial form of the argument of the Legendre polynomial is to be noted. When the theory is gapped, it is known \cite{Auberson:1972prg}, that the partial wave expansion converges over a range of the parameter $a$, which allows for Taylor expanding around $a\sim 0$. Since $a$ involves inverse powers of $x$, this would lead to negative powers of $x$ in a particular partial wave. In a local theory, these inverse powers of $x$ should be absent. 
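As a simple aside (our own sanity check of \eqref{csdr}): for the tree-level exchange amplitude $g^2\sum 1/(m^2-s_i)$ the absorptive part is, schematically, a delta function at $s'=m^2$, so the $s'$-integral collapses and \eqref{csdr} reduces to $c_0 + (g^2/m^2)\,H(m^2;s,t,u)$ with $c_0=3g^2/m^2$; this indeed reproduces the amplitude identically:

```python
import random

def amp(s, t, u, g=1.0, m=1.0):
    # tree-level exchange amplitude g^2 * sum over channels of 1/(m^2 - ch)
    return g**2*(1/(m**2 - s) + 1/(m**2 - t) + 1/(m**2 - u))

def kernel_H(sprime, s, t, u):
    # crossing symmetric kernel of eq. (kernel)
    return s/(sprime - s) + t/(sprime - t) + u/(sprime - u)

random.seed(1)
for _ in range(100):
    s = random.uniform(-0.4, 0.4)
    t = random.uniform(-0.4, 0.4)
    u = -s - t                            # massless on-shell constraint
    csdr = 3.0 + kernel_H(1.0, s, t, u)   # c_0 = 3 g^2/m^2, pole term collapses
    assert abs(amp(s, t, u) - csdr) < 1e-12
```

The agreement is exact because $1/(m^2-s) = 1/m^2 + (s/m^2)/(m^2-s)$ channel by channel, which is precisely the two-subtraction structure of \eqref{csdr}.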
This means that when we sum over the spins, such inverse powers should cancel. This leads to what we call ``locality'' constraints. In \cite{Sinha:2020win}, it was shown that these are equivalent to the so-called ``null constraints'' which arise on imposing crossing symmetry on the fixed-$t$ dispersion relation \cite{sch1, tolley1}. \subsection{Dispersion relation in celestial basis} \label{subsec:mellincsdr} In this section we consider the $4$-point celestial amplitude for identical massless scalars in four spacetime dimensions and evaluate it using the crossing symmetric dispersive representation of the momentum space amplitude given in section \ref{subsec:csdrreview}. In order to write down the celestial amplitude, the null four-momenta of the external particles can be parametrized as \begin{equation} \label{nullmompar} \begin{split} p^{\mu}_{k} = \epsilon_{k} \omega_{k} (1+ z_{k} \bar{z}_{k}, z_{k}+ \bar{z}_{k}, -i (z_{k}-\bar{z}_{k}), 1-z_{k}\bar{z}_{k}), \quad k=1,2,3,4 \end{split} \end{equation} where $\epsilon_{k} =\pm 1$ for an outgoing (incoming) particle. $\omega_{k}$ is the energy of the $k$-th particle. $(z_{k},\bar{z}_{k})$ specify the directions of null-momenta of the asymptotic states in the S-matrix and hence can be regarded as stereographic coordinates on the $2$-$d$ celestial sphere. Throughout the rest of this paper, we take $\epsilon_{1}=\epsilon_{2}= -1$ and $\epsilon_{3}=\epsilon_{4}=1$ corresponding to particles $(1,2)$ incoming and $(3,4)$ outgoing. 
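A quick check (ours) that the parametrization \eqref{nullmompar} is null for arbitrary $(\omega, z)$, taking $\bar{z}=z^{*}$ on the Lorentzian celestial sphere and the mostly-minus metric:

```python
def p_mu(eps, omega, z):
    """Momentum of the parametrization (nullmompar), with zbar = conjugate(z)."""
    zb = z.conjugate()
    return [eps*omega*(1 + z*zb), eps*omega*(z + zb),
            eps*omega*(-1j*(z - zb)), eps*omega*(1 - z*zb)]

def mink_sq(p):
    # mostly-minus signature (+, -, -, -)
    return p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2

for eps, omega, z in [(-1, 0.7, 0.3 + 0.4j), (-1, 1.3, 2.0 - 1.0j),
                      (1, 2.1, -1.2 + 0.9j), (1, 0.4, 0.0 + 0.0j)]:
    assert abs(mink_sq(p_mu(eps, omega, z))) < 1e-12
```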
\vskip 4pt The $4$-point celestial amplitude is then given by \begin{equation} \label{csamp} \begin{split} \mathcal{M}(\Delta_{i}, z_{i},\bar{z}_{i}) & = \int_{0}^{\infty} \prod_{i=1}^{4} d\omega_{i} \ \omega_{i}^{\Delta_{i}-1} \ \mathbf{M}(\omega_{i},z_{i},\bar{z}_{i}) \ \delta^{(4)}\left(\sum_{i=1}^{4}p^{\mu}_{i}(\omega_{i},z_{i},\bar{z}_{i})\right) \end{split} \end{equation} where $\mathbf{M}(\omega_{i},z_{i},\bar{z}_{i})$ is the momentum space amplitude with the external momenta parametrized as in \eqref{nullmompar}. Under the action of the Lorentz group which acts as $SL(2,C)$ on the $(z_{i},\bar{z}_{i})$ variables, the celestial amplitude $\mathcal{M}(\Delta_{i}, z_{i},\bar{z}_{i})$ transforms as a $4$-point correlation function of quasi-primary operators with scaling dimension $\Delta_{i}$ in a $2$-$d$ CFT which in this context is referred to as Celestial CFT (CCFT). Now $\mathcal{M}(\Delta_{i}, z_{i},\bar{z}_{i})$ can be further expressed as \cite{Arkani-Hamed:2020gyp}\footnote{See Appendix \ref{sec:csampreview} for a review of the derivation of \eqref{csamp1}.} \begin{equation} \label{csamp1} \begin{split} & \mathcal{M}\left( \Delta_{i}, z_{i},\bar{z}_{i} \right) = \prod_{i<j}^{4} ( z_{ij}\bar{z}_{ij})^{\frac{1}{2}\left(\frac{\Delta}{3}-\Delta_{i}-\Delta_{j}\right)} \ 2^{-\beta-2} \ |z(1-z)|^{\frac{(\beta+4)}{6}} \ \delta(z-\bar{z}) \ \widetilde{\mathbf{M}}(\beta, z) \end{split} \end{equation} where \begin{equation} \label{KandXdef} \begin{split} & \Delta= \sum_{i=1}^{4}\Delta_{i} ; \quad \beta= \Delta-4 \end{split} \end{equation} $z,\bar{z}$ denote the cross ratios \begin{equation} \label{crs} \begin{split} z =\frac{z_{13}z_{24}}{z_{12}z_{34}}; \quad \bar{z} = \frac{\bar{z}_{13}\bar{z}_{24}}{\bar{z}_{12}\bar{z}_{34}} \end{split} \end{equation} and \begin{equation} \label{Mtildedef} \begin{split} \widetilde{\mathbf{M}}(\beta, z) = \int_{0}^{\infty} d\omega \ \omega^{\beta-1} \ \mathbf{M}(\omega^{2}, - z \omega^{2}) \end{split} \end{equation} 
$\widetilde{\mathbf{M}}(\beta, z)$ is the Mellin transform of the $2$-$2$ momentum space amplitude $\mathbf{M}(s,t)$ where the Mandelstam invariants have been parametrized as \begin{equation} \label{stuzvar} \begin{split} s = \omega^{2}; \quad t= -z\omega^{2}; \quad u= (z-1) \omega^{2} \end{split} \end{equation} Here $z = - t/s $ is related to the scattering angle $\theta$ in the $s$-channel via $z= \frac{1}{2}(1-\cos \theta)$. For physical $s$-channel kinematics we thus have $z \in [0,1]$. In this paper one of the central objects of interest is $\widetilde{\mathbf{M}}(\beta, z)$. Since the kinematic prefactors in \eqref{csamp1} will be irrelevant for our purposes here, we will refer to $\widetilde{\mathbf{M}}(\beta, z)$ simply as the celestial or Mellin amplitude in the rest of this paper. Let us now determine the Mellin amplitude using the representation of $\mathbf{M}(s,t)$ given by the crossing symmetric dispersion relation \eqref{csdr}. For this, we use the partial wave expansion \eqref{absp} and also apply the celestial parametrization \eqref{stuzvar}.
The Mellin integral over $\omega$ can then be performed and we obtain \begin{equation} \label{mtildebetaz1} \begin{split} & \widetilde{\mathbf{M}}(\beta, z) \\ &= 2\pi c_{0} \delta(\beta) + \frac{ \pi }{2 \sin\left( \frac{\pi \beta}{2}\right)} \sum_{J=0}^{\infty} (2J+1) \ \widetilde{\alpha}_{J}(\beta,\delta_{0}) \bigg[ e^{- i\pi \beta/2} P_{J}\left( 1-2 z \right) + z^{-\beta/2} P_{J}\left( \frac{z-2}{z} \right) \\ & + (1-z)^{-\beta/2} P_{J}\left( \frac{z+1}{z-1} \right) + (z(1-z))^{-\beta/2-J} (z^{2}-z+1)^{\beta/2+3} \mathcal{Q}_{J}(\beta,z) \bigg] \end{split} \end{equation} where $\widetilde{\alpha}_{J}(\beta,\delta_{0})$ is given by \begin{equation} \label{alphatdef} \begin{split} \widetilde{\alpha}_{J}(\beta,\delta_{0}) = 32\int_{\delta_{0}}^{\infty} ds' \ s'^{\beta/2-1} \ \alpha_{J}(s') \,, \end{split} \end{equation} which for $\beta=-2n$ reduces to the partial-wave moments defined below. $\mathcal{Q}_{J}(\beta,z)$ is a polynomial in $\beta, z$ and is given by \begin{equation} \label{Qjdef} \begin{split} \mathcal{Q}_{J}(\beta, z) = (z(1-z))^J (z^2-z+1)^{-3} \sum\limits_{i=1}^3 \mathcal{R}_{J}(\beta,x_{i}) \end{split} \end{equation} where \begin{equation} \label{Rjdef} \begin{split} \mathcal{R}_{J}(\beta, x_{i}) = \frac{e^{- i \pi \beta/2}}{\left(\frac{J}{2}-1\right)!} \ \frac{d^{J/2-1}}{dx^{J/2-1}}\bigg[ \frac{x^{\beta/2} (1+x)^{J/2}}{(x-x_{i})} P_{J}\left( \sqrt{\frac{1-3x}{1+x}}\right) \bigg] \bigg|_{x=-1} \end{split} \end{equation} with $x_{1} = -z(z-1)(z^{2}-z+1)^{-1}, \ x_{2} = (z-1)(z^{2}-z+1)^{-1}, \ x_{3} = -z(z^{2}-z+1)^{-1}$. \vskip 4pt \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{complexz.png} \caption{The analytic structure for each spin-$J$ partial wave. The blue circles are potential singularities at $z=0,1$ while the red crosses are potential singularities at $z^2-z+1=0$.} \label{fig:pic2a} \end{figure} \vskip 5pt We refer the reader to the Appendix, section \ref{sec:formulas} where a closed form expression for $\mathcal{Q}_{J}(\beta,z)$ is given.
For illustrative purposes, we note below some explicit examples of $\mathcal{Q}_{J}(\beta,z)$ for spin $J=0,2,4,6$. \begin{equation} \label{Qjexamples} \begin{split} & \mathcal{Q}_{0}(\beta, z) = 0, \quad \mathcal{Q}_{2}(\beta, z)=-6, \quad \mathcal{Q}_{4}(\beta, z) = \frac{35}{4} (1+\rho )^3+ \frac{1}{4} (35 \beta+190) (1+\rho)^2-70, \\ & \mathcal{Q}_{6}(\beta, z) = -\frac{21}{16} (11 \beta +57) (\rho +1)^5 -\frac{21}{32} \left(11 \beta ^2+136 \beta +424\right) (\rho +1)^4 \\ & -\frac{21}{16} (132 \beta +508) (\rho +1)^3-\frac{21}{8} (44 \beta +564) (\rho +1)^2-\frac{231}{16} (\rho +1)^6+2772 (\rho +1)+924 \end{split} \end{equation} where $\rho =-1-2z(z-1)$. The analytic structure in the complex-$z$ plane is indicated in the figure for $\beta \in -2 \mathbb{Z}$. The $z=0,1$ poles \footnote{For generic $\beta$ the singularity structure is more complicated, since from the expression \eqref{mtildebetaz1} it is manifest that there are branch cuts.} in each channel are cancelled for each $J$ when the crossing symmetric combination is used. The $z^2-z+1=0$ or $\rho=1$ singularities are what will lead to the locality constraints discussed below. The expression for $\widetilde{\mathbf{M}}(\beta, z)$ given by \eqref{mtildebetaz1} provides a representation of the $4$-point celestial amplitude for massless scalars in terms of the partial wave expansion of the momentum space amplitude. In the following sections we will primarily focus on the residues at the poles of $\widetilde{\mathbf{M}}(\beta, z)$ with respect to the parameter $\beta$ for negative integer values of $\beta$. In \eqref{mtildebetaz1}, these poles arise from the $\sin(\pi \beta/2)$ factor. The residues at these poles encode the Wilson coefficients in the low energy expansion of the amplitude in momentum space \cite{Arkani-Hamed:2020gyp}.
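The quoted value $\mathcal{Q}_{2}=-6$ can be reproduced directly from \eqref{Rjdef}; a sketch of ours, noting that for $J=2$ no $x$-derivative is needed and the phase $e^{-i\pi\beta/2}\,x^{\beta/2}\big|_{x=-1}$ equals $1$ on the branch used here:

```python
import sympy as sp

x, z = sp.symbols('x z')
D = z**2 - z + 1
x1, x2, x3 = -z*(z - 1)/D, (z - 1)/D, -z/D     # the x_i appearing in R_J

# (1 + x) P_2(sqrt((1 - 3x)/(1 + x))) is secretly a polynomial in x:
piece = sp.cancel((1 + x)*sp.legendre(2, sp.sqrt((1 - 3*x)/(1 + x))))
assert sp.expand(piece - (1 - 5*x)) == 0

# R_2(beta, x_i) = (1 - 5x)/(x - x_i) evaluated at x = -1 (phase set to 1)
R = sum(((1 - 5*x)/(x - xi)).subs(x, -1) for xi in (x1, x2, x3))

Q2 = sp.cancel((z*(1 - z))**2/D**3*R)
print(Q2)   # -6, independent of beta and z, as quoted in the text
assert Q2 == -6
```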
\section{Celestial insight 2: Moment bounds}\label{sec4} In section \ref{subsec:RMTapp} below, we use Ramanujan's master theorem to relate the Wilson coefficients in the low energy expansion of $2$-$2$ scalar amplitudes to the residues of the corresponding Mellin amplitude $\widetilde{\mathbf{M}}(\beta,z)$ at negative integer values of $\beta$. We then discuss locality constraints in the context of the CSDR in section \ref{subsec:lc} and use them to obtain bounds on partial wave moments in section \ref{subsec:pwmomentbounds}. In section \ref{subsec:dnmpos} we derive sufficient conditions for the positivity properties mentioned in section \ref{sec:newpositivity} to hold. \subsection{Applying Ramanujan's master theorem} \label{subsec:RMTapp} We now consider the low energy expansion of the amplitude $\mathbf{M}(s,t)$. If we do not include loop-level contributions from the exchange of massless particles, then $\mathbf{M}(s,t)$ can be expanded around low energies as \begin{equation} \label{eftexp} \begin{split} \mathbf{M}(s,t) = \sum_{p,q=0}^{\infty} \mathcal{W}_{p,q} \hspace{0.04cm} x^{p}y^{q} \end{split} \end{equation} where $x=-(st +tu +su), y=-s t u$ with $s+t+u=0$, and $\mathcal{W}_{p,q}$ denote the Wilson coefficients. Let us now employ the change of variables $s=\omega^{2}, t=-z\omega^{2}, u=(z-1)\omega^{2}$. Then \eqref{eftexp} becomes \begin{equation} \label{eftexp1} \begin{split} \mathbf{M}(\omega^{2}, -\omega^{2} z) & = \sum_{n=0}^{\infty} \widetilde{\mathcal{W}}(n, z) \ \omega^{2n} \end{split} \end{equation} where we have defined \begin{equation} \label{wzntilde} \begin{split} \widetilde{\mathcal{W}}(n, z) = \sum_{\substack{p,q \\ 2p+3q=n}} \mathcal{W}_{p,q} \hspace{0.05cm} z^{q}(z-1)^{q} (z^{2}-z+1)^{p} \end{split} \end{equation} Now let us evaluate the Mellin amplitude $\widetilde{\mathbf{M}}(\beta,z)$ using the representation of the amplitude $\mathbf{M}(\omega^{2}, -\omega^{2} z)$ given by \eqref{eftexp1}.
This can be done using Ramanujan's master theorem (RMT) for obtaining the Mellin transform of a function given its Taylor series expansion coefficients. See section \ref{sec:RMT} for further details of this theorem. Then according to RMT, the Mellin transform of \eqref{eftexp1} is given by \begin{equation} \label{mteftexp} \begin{split} \widetilde{\mathbf{M}}(\beta,z) & = \int_{0}^{\infty} d\omega \ \omega^{\beta-1}\ \mathbf{M}(\omega^{2}, -\omega^{2} z) = \frac{\pi e^{-i\pi \beta/2} }{2 \sin\left(\frac{\pi \beta}{2}\right)} \hspace{0.07cm} \widetilde{\mathcal{W}}(-\beta/2,z) \end{split} \end{equation} In writing the above expression, we have assumed that $\widetilde{\mathcal{W}}(n,z)$ can be analytically continued away from integer values of $n$. Now note that the $(1/\sin(\pi \beta/2))$ factor in \eqref{mteftexp} has poles when $\beta= -2n, n \in \mathbb{Z}_{\ge 0}$\footnote{$\widetilde{\mathbf{M}}(\beta,z)$ also has poles at $\beta =2n, n \in \mathbb{Z}_{\ge 0}$. In \cite{Arkani-Hamed:2020gyp} it was pointed out that the residues at these poles encode the coefficients in the high-energy expansion of the amplitude. However in this paper we will not consider this, since we are mainly interested in the low-energy expansion of the amplitude. }. The residue at these poles is \begin{equation} \label{betares} \begin{split} \mathrm{Res}_{\beta= -2 n} \left[ \widetilde{\mathbf{M}}(\beta, z) \right] & = \widetilde{\mathcal{W}}(n,z) = \sum_{\substack{p,q \\ 2p+3q=n}} \mathcal{W}_{p,q} z^{q}(z-1)^{q} (z^{2}-z+1)^{p} \end{split} \end{equation} Thus the residues of the Mellin amplitude at $\beta=-2n$ encode the Wilson coefficients appearing in the low energy expansion of the momentum space amplitude\cite{Arkani-Hamed:2020gyp}. 
\vskip 4pt Now we can also write \eqref{betares} in terms of the ``crossing-symmetric'' variable\footnote{We refer to this as ``crossing-symmetric'' since it is invariant under $z\rightarrow 1-z$.} $\rho =-1-2z^2+2z$, introduced in section \ref{sec:newpositivity}. Then \eqref{betares} takes the form \begin{equation} \label{betaresrhovar} \begin{split} \mathrm{Res}_{\beta= -2 n} \left[ \widetilde{\mathbf{M}}(\beta, \rho) \right] & = \widetilde{\mathcal{W}}(n,\rho) = \sum_{\substack{p,q \\ 2p+3q=n}}\frac{(-1)^q}{2^{p+q}} \mathcal{W}_{p,q} (1+\rho)^{q} (1-\rho)^{p} \\ & \equiv\sum_{m=0}^{[\frac{n}{2}]} d_m^{(n)}(1+\rho)^m(1- \rho)^{[\frac{n}{2}]-m} \end{split} \end{equation} \subsubsection*{Physical interpretation of the $\rho$ variable} Noting that $z^2-z=- t u/s^2$, and $\cos\theta=1+2t/s$, where $\theta$ is the scattering angle, we can express $\rho$ as \begin{equation} \rho= -1-2z^2+2z = -\frac{1+\cos^2\theta}{2}\,. \end{equation} Therefore, $\rho=-1$ corresponds to either $z=0$ or $z=1$, i.e., $\theta=0$ or $\theta=\pi$, while $\rho=1$ corresponds to the roots of $z^2-z+1=0$, which are $z = (-1)^{1/3}, -(-1)^{2/3}$. Then clearly $\rho=1$ maps to unphysical (analytically continued) values of $\theta$. \subsection{Locality Constraints} \label{subsec:lc} In a local EFT, the low energy expansion of the amplitude \eqref{eftexp} only contains non-negative powers of $x$. This implies that in \eqref{wzntilde} $\widetilde{\mathcal{W}}(n,z)$ should be non-singular at $z = (-1)^{1/3}, -(-1)^{2/3}$, which are the roots of $z^{2}-z+1=0$. In terms of the $\rho$ variable these points correspond to $\rho=1$ as mentioned in the previous subsection. Consequently such singularities are not allowed in the residues of the celestial amplitude at $\beta=-2n, n \in \mathbb{Z}^{+}$ for a local theory\footnote{The singularities at $z = (-1)^{1/3}, -(-1)^{2/3}$ lie outside the domain of physical kinematics where $z\in (0,1)$. }.
However the absence of these singularities is not manifest when the celestial amplitude is evaluated using the crossing symmetric dispersive representation of the momentum space amplitude. This is essentially because the crossing symmetric dispersion relation makes crossing symmetry manifest at the expense of locality. Let us now see this a bit more explicitly as follows. \vskip 4pt Taking the residue at $\beta= -2n, n \in \mathbb{Z}^{+}$ on the R.H.S. of \eqref{mtildebetaz1} we get \begin{equation} \label{betarescsdr} \begin{split} & \mathrm{Res}_{\beta= -2 n} \left[ \widetilde{\mathbf{M}}(\beta, z) \right] = \widetilde{\mathcal{W}}(n,z) \\ & = (-1)^{n} \sum_{J=0}^{\infty} (2J+1) \ \widetilde{\alpha}_{J}(n,\delta_{0}) \bigg[ (-1)^{n} P_{J}\left( |1-2 z| \right) + z^{n} P_{J}\left( \left| \frac{z-2}{z}\right| \right) + (1-z)^{n} P_{J}\left( \left| \frac{z+1}{z-1}\right| \right) \\ & + (z(1-z))^{n-J} (z^{2}-z+1)^{3-n} \mathcal{Q}_{J}(-2n,z) \bigg] \end{split} \end{equation} where \begin{equation} \label{pwmoments} \begin{split} \widetilde{\alpha}_{J}(n, \delta_{0}) = 32\int_{\delta_{0}}^{\infty} \frac{ds}{s^{n+1}} \ \alpha_{J}(s) \end{split} \end{equation} The $(z^{2}-z+1)^{3-n}$ factor in the second line of \eqref{betarescsdr} is singular at $z = (-1)^{1/3}, -(-1)^{2/3}$ for $n\ge 4$. It is also worth noting that there are apparent divergences at $z=0,1$ in \eqref{betarescsdr}. But it can be easily checked that the $z=0,1$ singularities cancel for any fixed $J$. \vskip 4pt Thus for a local theory we need to impose on \eqref{betarescsdr} the constraint that the $z = (-1)^{1/3}, -(-1)^{2/3}$ singularities cancel upon performing the sum over spins $J$ in the partial wave expansion. In order to study the implications of these constraints, which will henceforth be referred to as the locality or null constraints, it again turns out to be convenient to use the $\rho$ variable.
Then it can be shown that \eqref{betarescsdr} takes the following form \begin{equation} \label{betarescsdrrho} \begin{split} & \mathrm{Res}_{\beta=-2n} \left[ \widetilde{\mathbf{M}}(\beta,\rho)\right] = \sum_{J=0}^{\infty}(2J+1) \widetilde{\alpha}_{J}(n, \delta_{0}) \bigg[ \sum_{k=1}^{n-3} \frac{c_{k}(n,J)}{(\rho-1)^{k}} + \mathcal{F}_{B}(n, J, \rho) \bigg] \end{split} \end{equation} where \begin{equation} \label{cknJdef} \begin{split} c_{k}(n,J) & = - \frac{ 2^{J-3}}{(n-3-k)!}\ \lim_{\rho \to 1} \frac{d^{n-3-k}}{d\rho^{n-3-k}}\bigg[ (1+\rho)^{n-J}\mathcal{Q}_{J}(-2n,\rho)\bigg] \end{split} \end{equation} and $\mathcal{F}_{B}(n, J, \rho) $ is a polynomial in $\rho$ of degree $[n/2]$. We will refer to this as the Feynman block. See the Appendix, section \ref{sec:formulas} for their closed form expressions. In section \ref{feynmanblock}, where we analyse the properties of Feynman blocks, we will present some explicit examples of these blocks for a few values of $n$. Now demanding that the singularities at $\rho=1$ cancel for a local theory, we get \begin{equation} \label{locconstr} \begin{split} \sum_{J=2}^{\infty} (2J+1) \ c_{k}(n,J) \ \widetilde{\alpha}_{J}(n,\delta_{0}) =0 \quad \quad \forall k=1,\cdots, n-3 \end{split} \end{equation} where the sum above runs only over even spins $J$. Also note that this sum starts from $J=2$, since $c_{k}(n,J=0) =0$. In section \ref{subsec:pwmomentbounds} we will use the above locality constraint equations to derive analytic bounds on partial wave moments. In section \ref{subsec:dnmpos}, equation \eqref{locconstr} will also play a crucial role in analysing the novel positivity properties of the low-energy expansion of the amplitude mentioned before. For this it is useful to relate the coefficients $d^{(n)}_{m}$, which are in turn related to the Wilson coefficients via \eqref{betaresrhovar}, to the partial wave moments $\widetilde{\alpha}_{J}(n,\delta_{0})$.
In order to obtain this relation, we impose the locality constraints in \eqref{betarescsdrrho} and compare with \eqref{betaresrhovar}. This yields, \begin{equation} \label{dalphatrel} \begin{split} & d^{(n)}_{m} = \sum_{J=0}^{\infty}(2J+1)\ \chi^{(n)}_{m}(J) \ \widetilde{\alpha}_{J}(n, \delta_{0}) , \quad \quad m=0,1,\cdots, \left[\frac{n}{2}\right] \end{split} \end{equation} Explicit expressions for the coefficients $\chi^{(n)}_{m}(J)$ can be obtained using the results given in Appendix \ref{sec:formulas}. \subsection{Bounds on partial wave moments} \label{subsec:pwmomentbounds} In this section we show that the locality constraint equations \eqref{locconstr} can be used to derive lower bounds on the moments of partial wave coefficients. We first consider the case $n=4$ in \eqref{locconstr} which yields \begin{equation} \label{locn4} \begin{split} \sum_{J=2}^{\infty} (2J+1) \ c_{1}(4,J) \ \widetilde{\alpha}_{J}(4,\delta_{0})=0 \end{split} \end{equation} Using \eqref{cknJdef} it can be shown that $c_{1}(4,J)$ is given by \begin{equation} \label{cn4J} \begin{split} & c_{1}(4,J) = J (J+1)(J^{2}+J-8) \end{split} \end{equation} From \eqref{cn4J} it is clear that $c_{1}(4,2) <0 $ and $ c_{1}(4,J) >0, \ \forall J \ge 4$. Then let us write \eqref{locn4} as \begin{equation} \label{locn41} \begin{split} 60\hspace{0.05cm} \widetilde{\alpha}_{2}(4,\delta_{0})= \sum_{J=4}^{\infty} J (J+1) (2J+1)(J^{2}+J-8) \ \widetilde{\alpha}_{J}(4,\delta_{0}) \end{split} \end{equation} Now in a unitary theory, the partial waves are non-negative and this implies $\widetilde{\alpha}_{J}(n,\delta_{0}) \ge 0$. Therefore each term in the sum on the R.H.S. of \eqref{locn41} is a positive quantity. 
As a result we get for any $J \ge 4$ \begin{equation} \label{alphatn4bound1} \begin{split} \frac{\widetilde{\alpha}_{2}(4,\delta_{0})}{\widetilde{\alpha}_{J}(4,\delta_{0})} > \frac{1}{60} \ J(J+1)(2J+1)(J^{2}+J-8), \quad J\ge 4 \end{split} \end{equation} For example considering $J=4,6,8$, the above inequality implies \begin{equation} \label{pwn4boundsample} \begin{split} \frac{\widetilde{\alpha}_{2}(4,\delta_{0})}{\widetilde{\alpha}_{4}(4,\delta_{0})} > 36, \quad \frac{\widetilde{\alpha}_{2}(4,\delta_{0})}{\widetilde{\alpha}_{6}(4,\delta_{0})} > 309.4, \quad \frac{\widetilde{\alpha}_{2}(4,\delta_{0})}{\widetilde{\alpha}_{8}(4,\delta_{0})} > 1305. \end{split} \end{equation} As a comparison, we quote the values obtained from the dilaton amplitude in type II string theory:\footnote{In obtaining \eqref{typeiivals} we have performed the partial wave expansion in terms of Legendre polynomials as in the case of the massless scalar amplitude in four spacetime dimensions in \eqref{absp}.} \begin{equation} \label{typeiivals} \begin{split} \frac{\widetilde{\alpha}_{2}(4,\delta_{0})}{\widetilde{\alpha}_{4}(4,\delta_{0})} \approx 62, \quad \frac{\widetilde{\alpha}_{2}(4,\delta_{0})}{\widetilde{\alpha}_{6}(4,\delta_{0})} \approx 1258, \quad \frac{\widetilde{\alpha}_{2}(4,\delta_{0})}{\widetilde{\alpha}_{8}(4,\delta_{0})} \approx 13708. \end{split} \end{equation} Evidently the type II dilaton amplitude satisfies the bounds obtained in \eqref{pwn4boundsample}. Similarly we can also obtain analytic bounds for partial wave moments with $n > 4$ using the locality constraints. We present below a sampling of the results for $n=5,6,7$. 
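The $n=4$ numbers quoted in \eqref{pwn4boundsample} are simple arithmetic consequences of \eqref{alphatn4bound1}; a quick check of ours, which also records the exact $J=8$ value $1305.6$:

```python
from fractions import Fraction

def n4_bound(J):
    # right-hand side of the n = 4 bound: J(J+1)(2J+1)(J^2 + J - 8)/60
    return Fraction(J*(J + 1)*(2*J + 1)*(J**2 + J - 8), 60)

assert n4_bound(4) == 36
assert n4_bound(6) == Fraction(1547, 5)   # = 309.4
assert n4_bound(8) == Fraction(6528, 5)   # = 1305.6, quoted rounded as 1305
# J = 2 carries the lone negative coefficient in c_1(4, J),
# which is why alpha_2 can be solved for in eq. (locn41):
assert 2*(2 + 1)*(2**2 + 2 - 8) < 0
```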
\subsubsection*{$n=5$ : } \begin{equation} \label{pwn5boundsample} \begin{split} \frac{\widetilde{\alpha}_{2}(5,\delta_{0})}{\widetilde{\alpha}_{4}(5,\delta_{0})} > 15, \quad \frac{\widetilde{\alpha}_{2}(5,\delta_{0})}{\widetilde{\alpha}_{6}(5,\delta_{0})} > 946.4, \quad \frac{\widetilde{\alpha}_{2}(5,\delta_{0})}{\widetilde{\alpha}_{8}(5,\delta_{0})} > 8411. \end{split} \end{equation} \subsubsection*{$n=6$ : } \begin{equation} \label{pwn6boundsample} \begin{split} & \frac{\widetilde{\alpha}_{2}(6,\delta_{0})}{\widetilde{\alpha}_{6}(6,\delta_{0})} + \frac{6\hspace{0.05cm} \widetilde{\alpha}_{4}(6,\delta_{0})}{\widetilde{\alpha}_{6}(6,\delta_{0})} > 1183, \quad \frac{\widetilde{\alpha}_{2}(6,\delta_{0})}{\widetilde{\alpha}_{8}(6,\delta_{0})} + \frac{6\hspace{0.05cm} \widetilde{\alpha}_{4}(6,\delta_{0})}{\widetilde{\alpha}_{8}(6,\delta_{0})} > 26214 \end{split} \end{equation} Note that in the $n=6$ case, from \eqref{locconstr} we can only obtain bounds on the sum of ratios of partial wave coefficients as given in \eqref{pwn6boundsample}. This is because there happens to be only one independent locality constraint equation for $n=6$ and the coefficients $c_{1}(6, J=2), c_{1}(6,J=4)$ are both positive, while for $J\ge 6$ we have $c_{1}(6, J) <0$. \subsubsection*{$n=7$ : } \begin{equation} \label{pwn7boundsample1} \begin{split} & \frac{\widetilde{\alpha}_{2}(7,\delta_{0})}{\widetilde{\alpha}_{6}(7,\delta_{0})} > 200.2, \quad \frac{\widetilde{\alpha}_{2}(7,\delta_{0})}{\widetilde{\alpha}_{8}(7,\delta_{0})} > 38283.5, \quad \frac{\widetilde{\alpha}_{4}(7,\delta_{0})}{\widetilde{\alpha}_{6}(7,\delta_{0})} > 30.33, \quad \frac{\widetilde{\alpha}_{4}(7,\delta_{0})}{\widetilde{\alpha}_{8}(7,\delta_{0})} > 338.38 \end{split} \end{equation} The inequalities in \eqref{pwn7boundsample1} can be derived by noting that for $n=7$, we get two independent null constraint equations from \eqref{locconstr}.
For both these equations, we have $c_{k}(7,2), c_{k}(7,4) <0$ and $c_{k}(7,J) >0$ for $J\ge 6$. We have also checked that all the inequalities quoted above are satisfied by the type II string dilaton amplitude. The inequalities in eqs.~\eqref{pwn4boundsample}-\eqref{pwn7boundsample1} demonstrate the phenomenon of {\it low spin dominance} (LSD) and we will say more about this in section \ref{sec:lsd}. \subsection{Investigating ${\mathcal P}_\rho$} \label{subsec:dnmpos} In this section we further explore the $\mathcal{P}_{\rho}$ positivity property of the low energy expansion of the $2$-$2$ amplitude of massless scalars in four spacetime dimensions, by considering the relation \eqref{dalphatrel} between the coefficients $d^{(n)}_{m}$ and the partial wave moments obtained using the crossing symmetric dispersion relation. The above mentioned positivity property implies that we should have $d^{(n)}_{m} \ge 0$. For $n=2k, k\in \mathbb{Z}$ and $m=0$, this can be shown to follow from the fact that in a unitary theory the partial waves are positive. However in general, unitarity alone does not imply $d^{(n)}_{m} \ge 0$ for any $n$. Here we argue that if the spin $J=0$ contribution to the partial wave decomposition of the amplitude dominates over the contribution from higher spins, then the positivity feature holds. We shall illustrate this below for $n=5$ and derive the sufficient condition for $d^{(5)}_{m} \ge 0$ to hold. Further examples for other values of $n$ are considered in the Appendix, section \ref{sec:dposexamples}. We begin by considering \eqref{dalphatrel} for $n=5$ which is given by \begin{equation} \label{dn5m} \begin{split} & d^{(5)}_{m} = \sum_{J=0}^{\infty}(2J+1)\ \chi^{(5)}_{m}(J) \ \widetilde{\alpha}_{J}(5, \delta_{0}) , \quad \quad m=0,1,2 \end{split} \end{equation} For $m=0$ and $m=2$, it can be easily checked that the R.H.S. of \eqref{dalphatrel} is identical to the locality constraint equations for $n=5$.
This immediately gives $d^{(5)}_{0} =d^{(5)}_{2} =0$. The only non-trivial case here is then $n=5,m=1$. In terms of the Wilson coefficients $\mathcal{W}_{p,q}$'s we have $d^{(5)}_{1} = -\mathcal{W}_{1,1}/4$. \vskip 4pt Now for $m=1$, it can be shown that $\chi^{(5)}_{m}(J)$ is given by \begin{equation} \label{chin5m1} \begin{split} \chi^{(5)}_{1}(J) & = \left( \frac{5}{4}- \frac{1}{2} J(J+1) \right) - \frac{5}{24} J (J+1) ( J (J+1) (2 J (J+1)-43)+150) \end{split} \end{equation} Let us note that for $n=5$ the locality constraint equation takes the form \begin{equation} \label{lcn5} \begin{split} & \sum_{J=2}^{\infty}(2J+1) \ J (J+1) (J (J+1) (2 J (J+1)-43)+150) \ \widetilde{\alpha}_{J}(5 , \delta_{0}) =0 \end{split} \end{equation} Then substituting \eqref{chin5m1} in \eqref{dn5m} with $m=1$ and using \eqref{lcn5} we get \begin{equation} \label{dn5m1a} \begin{split} & d^{(5)}_{1} = \frac{1}{4} \bigg[ 5 \hspace{0.05cm} \widetilde{\alpha}_{0}(5, \delta_{0}) - \sum_{J=2}^{\infty}(2J+1) \left(2 J(J+1) - 5 \right) \ \widetilde{\alpha}_{J}(5, \delta_{0}) \bigg] \end{split} \end{equation} {\it This readily implies that unless spin-0 is present, positivity in $\rho\in (-1,1)$ cannot hold.} \vskip 4pt Now we can derive a sufficient condition for $d^{(5)}_{1} \ge 0$ to hold as follows. We use \eqref{lcn5} to eliminate $\widetilde{\alpha}_{4}(5,\delta_{0})$ from \eqref{dn5m1a}. 
This yields \begin{equation} \label{dn5m1e} \begin{split} d^{(5)}_{1}& = \frac{1}{4} \bigg[ 5 \hspace{0.05cm} \widetilde{\alpha}_{0}(5, \delta_{0}) - 56 \hspace{0.05cm} \widetilde{\alpha}_{2}(5, \delta_{0}) \\ & + \frac{1}{360} \sum_{J=6}^{\infty}(2J+1) (J-4) (J+5) \left(14 J^4+28 J^3-7 J^2-21 J-90\right) \ \widetilde{\alpha}_{J}(5, \delta_{0}) \bigg] \end{split} \end{equation} Since all terms in the second line of \eqref{dn5m1e} are positive, we see that for $d^{(5)}_{1} \ge 0$ to hold, it suffices to have \begin{equation} \label{dn5m1f} \begin{split} \widetilde{\alpha}_{0}(5, \delta_{0}) \ge 11.2 \hspace{0.05cm} \widetilde{\alpha}_{2}(5, \delta_{0}) \end{split} \end{equation} We can obtain similar inequalities for higher values of $n$ as well. For example, for $n=6,7,8$ we find \begin{equation} \label{spin0boundsn678} \begin{split} \widetilde{\alpha}_{0}(6, \delta_{0}) \ge 25 \hspace{0.05cm} \widetilde{\alpha}_{2}(6, \delta_{0}), \quad \widetilde{\alpha}_{0}(7, \delta_{0}) \ge 10.72 \hspace{0.05cm} \widetilde{\alpha}_{2}(7, \delta_{0})\,,\quad \widetilde{\alpha}_{0}(8, \delta_{0}) ~\ge~ 13.75 \hspace{0.05cm} \widetilde{\alpha}_{2}(8, \delta_{0})\,. \end{split} \end{equation} \subsection*{How common is this positivity?} {\bf In the presence of spin 0}: Using the locality constraints we had already obtained conditions that suggested spin-2 dominance, for instance via equations \eqref{pwn4boundsample} and \eqref{pwn5boundsample}. In the positivity analysis above, we saw that if there is spin-0 dominance, then there is a novel positivity which was alluded to in section 1. These considerations enable us to make the following observations. The type II string tree level suggests that $\widetilde \alpha_2(n,\delta_0=1)\lesssim 0.04 \exp(-n/\sqrt{2})$. Consider for instance the $n=6$ case. This would give $\widetilde \alpha_0(6,1)\gtrapprox 0.01$.
Now if $\widetilde \alpha_0(6,1)$ takes on values between $0$ and $1$ (the string answer is approximately 1.001), then we conclude that for random values of $\widetilde \alpha_0(6,1)$ in this range, there is a $99\%$ probability of finding $d_m^{(6)}\geq 0$. Thus the question becomes, what range of $\widetilde \alpha_0(6,\delta_0)$ is typical? The discussion above suggests that whenever there is spin-0 dominance of the form \begin{equation} \label{lsd} \frac{ \widetilde \alpha_0(n,\delta_0)}{\widetilde \alpha_2(n,\delta_0)}\gtrsim O(10)\,, \end{equation} we will obtain positivity for this class of theories. \noindent{\bf In the absence of spin-0}: A counterexample to the ${\mathcal P}_\rho$ positivity is the following toy amplitude: \begin{eqnarray} \label{1by} M(s,t,u)= \frac{m^4}{(m^2-s)(m^2-t)(m^2-u)} -\frac{4}{3}~ \tanh^{-1}\left({\frac{1}{3}}\right)\left(\frac{1}{m^2-s}+\frac{1}{m^2-t}+\frac{1}{m^2-u}\right) \end{eqnarray} where the second term has been chosen to make the spin-$0$ partial wave contribution vanish. One can easily check that all the higher spin partial waves and all their moments are positive in this case. However, we know of no local Lagrangian description which could give rise to this amplitude. Further, this amplitude seems to necessarily indicate the existence of an infinite tower of massive higher spin particles, all of which have the same mass-squared $m^2$. Theories with an {\it accumulation point} in the spectrum such as the one above seem to play a role in the S-matrix bootstrap \footnote{An analogous example in the case of Polyakov bootstrap is the 2-d Ising Mellin amplitude which also exhibits similar behaviour due to the presence of twist-0.}, though it is not clear if they can be ruled out by other considerations \cite{sch1,yutintriple}.
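The vanishing of the spin-$0$ partial wave of \eqref{1by} is easy to confirm numerically (our own sketch, $m=1$): on the $s$-channel pole, with $t=-(1-\cos\theta)/2$, the residue is $4/(9-\cos^{2}\theta)-\frac{4}{3}\tanh^{-1}(1/3)$; its $J=0$ Legendre projection vanishes, while the higher even-$J$ projections stay positive.

```python
import math
import mpmath as mp

ATANH13 = math.atanh(1/3)            # = (1/2) log 2

def residue(c):
    # residue of eq. (1by) at s = m^2 = 1; here c = cos(theta)
    return 4/(9 - c**2) - mp.mpf(4)/3*ATANH13

def projection(J):
    # (unnormalized) spin-J Legendre projection of the residue
    return mp.quad(lambda c: mp.legendre(J, c)*residue(c), [-1, 1])

assert abs(projection(0)) < 1e-12    # spin-0 partial wave vanishes
assert projection(2) > 0             # ... while spin 2, 4, ... survive
assert projection(4) > 0
```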
\noindent We can then look at the low energy expansion of the amplitude (setting $m=1$ for brevity) \begin{eqnarray} M(s,t,u)&=& -0.386294 + 0.0758038~ x + 0.0758038~ x^2 + 0.0758038 ~x^3 + 0.0758038~ x^4 \nonumber\\ &+& 0.0758038 ~x^5 + 0.0758038 ~x^6 + 0.386294 ~y + 0.310491~ x~ y \nonumber\\ &+& 0.234687 ~x^2 ~y + 0.158883~ x^3 ~y + 0.0830793~ x^4 ~y + 0.0072756 ~x^5 ~y \end{eqnarray} and notice that $\mathcal{W}_{1,1},\mathcal{W}_{3,1},\mathcal{W}_{5,1} \ge 0$. When the spin-0 partial wave is absent, it can be shown generally using results of our previous section that \begin{equation} d_1^{(2k+1)}\le 0 , \end{equation} which in terms of $\mathcal{W}_{pq}$ reads: \begin{equation} \mathcal{W}_{2k+1,1}\ge 0\,, \quad \forall k=(0,1,2\cdots). \end{equation} \subsection{Comments about LSD}\label{sec:lsd} The inequalities on the ratio of partial wave moments that we find in \eqref{alphatn4bound1}, and other such inequalities in eqs.~\eqref{pwn4boundsample}-\eqref{pwn7boundsample1}, demonstrate the phenomenon of {\it low spin dominance} (LSD), previously considered in the context of gravitational EFTs \cite{sasha,rs2,yutin3} for spins $J\ge 2$, namely \begin{equation}\label{degoflsd} \frac{\widetilde{\alpha}_{2}(n,\delta_{0})}{\widetilde{\alpha}_{J}(n,\delta_{0})} \ge \lambda\,. \end{equation} Since spin-$0$ does not directly enter the locality constraints \eqref{locconstr}, we cannot quantify $ \frac{\widetilde{\alpha}_{0}(n,\delta_{0})}{\widetilde{\alpha}_{J}(n,\delta_{0})}$ directly from locality. 
However, as we have argued in the previous section, if there is spin-$0$ dominance of the form given in \eqref{lsd}, namely $ \frac{\widetilde{\alpha}_{0}(n,\delta_{0})}{\widetilde{\alpha}_{2}(n,\delta_{0})} \ge \beta \approx 10$, then we can readily translate this to obtain: \begin{equation} \frac{\widetilde{\alpha}_{0}(n,\delta_{0})}{\widetilde{\alpha}_{J}(n,\delta_{0})} \ge 10\,\lambda \end{equation} Since these follow directly from the locality constraints \eqref{locconstr}, we can conclude that \\ {\it Low spin dominance (LSD) for $J\ge 2$ in scalar low energy EFTs is a consequence of locality.}\\ \noindent Furthermore, this also allows us to quantify\footnote{ This parameter is usually denoted by $\alpha$ in the literature, but to avoid confusion with the partial wave moments we denote it by $\lambda$ in this work.} the {\it degree of LSD}, as the parameter $\lambda$ is referred to in the literature \cite{sasha,yutin3,yutin4}. Our analysis generically gives us an $n$-dependent $\lambda \equiv \lambda(n)$; for instance, $\lambda(4)=62$, $\lambda(5)=15$, $\lambda(7)=6.6$. We leave a more complete analysis of the properties of $\lambda(n)$ for future work. \section{Celestial insight 3: Typical Realness}\label{sec5} \subsection{Feynman blocks in $\rho$} \label{feynmanblock} We shall now discuss the positivity properties of the Feynman block $\mathcal{F}_B(n,J,\rho)$ with respect to $\rho$ for fixed $\beta=-2n$, with $n$ a positive integer. The Feynman block is a polynomial in $\rho$ of degree $\lfloor \frac{n}{2} \rfloor$: \begin{equation}\label{fbclosedform} \mathcal{F}_B(n,J,\rho) = \sum_{i=0}^{\lfloor \frac{n}{2}\rfloor} c^{(n)}_{i}(J)\,\rho^{i} \end{equation} \noindent A closed form expression for the Feynman block $\mathcal{F}_B(n,J,\rho)$ is given by \eqref{fbclosed} in appendix \ref{sec:formulas}. 
For the first few values of $n$ these are as follows: \begin{eqnarray} \mathcal{F}_B(1,J,\rho)&=&0 \nonumber\\ \mathcal{F}_B(2,J,\rho)&=& 1-\rho \nonumber\\ \mathcal{F}_B(3,J,\rho)&=& \frac{-1}{2}(2 J^2+2 J-3)(1+\rho) \nonumber\\ \mathcal{F}_B(4,J,\rho)&=& \frac{1}{4}((-3 J^4-6 J^3+21 J^2+24 J+2)+\left(-J^4-2 J^3+7 J^2+8 J-4\right) \rho +2 \rho^2) \nonumber\\ \mathcal{F}_B(5,J,\rho)&=& \frac{1}{72}\left((90 - 786 J - 571 J^2 + 420 J^3 + 185 J^4 - 30 J^5 - 10 J^6) \right.\nonumber\\ &+& \left.(-J (1 + J) (150 - 43 J - 41 J^2 + 4 J^3 + 2 J^4)) \rho +18 (-5 + 2 J + 2 J^2) \rho ^2 \right) \nonumber\\ \end{eqnarray} One can verify that the tree-level type-II string amplitude can be expanded in these blocks, and that the convergence in spin is fast. Further, it can be readily checked that for sufficiently large values of $J$ and for any value $n \ge 3 $ the Feynman block $\mathcal{F}_B(n,J,\rho)$ is positive for real $\rho$ in the interval $-1\leq \rho\leq 1$. In particular, for any value of $n\ge 3$ there exists a critical value $J=J_c(n)$ such that for $J \le J_c(n)$ we have $\mathcal{F}_B(n,J,\rho)<0 $ and for $J > J_c(n)$ we have $\mathcal{F}_B(n,J,\rho)>0 $. This can be seen from the grid plot below. \begin{figure}[H] \centering \includegraphics[width=\linewidth]{Rhopositivity.png} \caption{The grid plot above shows the signs of the Feynman block for $-1\le \rho\le 1$, for $0\le J \le 16$ and $1 \le n\le 20$. Red and blue indicate $J<J_c(n)$ and $J\ge J_c(n)$, corresponding to negative and positive signs respectively.} \label{fig:pic1} \end{figure} \subsection{Geometric function theory (GFT) techniques} We shall now discuss the connection of the amplitude in the $\rho$ variable with typical realness. 
In \cite{rs1}, it was shown that the amplitude, for an appropriate range of the parameter\footnote{The parameter is given by $a=\frac{s t u}{s t+ tu +u s}$ where the Mandelstam variables were parametrized via $$ s=a \left(1-\frac{(\zeta-1)^3}{\zeta^3-1}\right), t=a \left(1-\frac{(\zeta-e^{2\pi i/3})^3}{\zeta^3-1}\right)\,. $$ Here $e^{2\pi i/3}$ is one of the cube-roots of unity. The $a$ works out to be $a=\frac{\omega'{}^2}{9}(\zeta^3-1)(1-\frac{1}{\zeta})^3\,$, which can curiously be related to the $z=-t/s$ variable using an $SL(2,\mathbb{C})$ transformation.} $a$, was a typically real function of $\zeta$ in the unit disk $|\zeta|<1$, and this connection proved quite fruitful for getting bounds on the Wilson coefficients. We now briefly introduce the necessary background on typically real functions, and refer the interested reader to \cite{rs1,tatarczak} and references therein for further details. A typically real function $f(z)$ on a domain $\Omega \subset \mathbb{C}$ which contains part of the real line $\mathbb{R}$ is defined by: \begin{equation}\label{TR} \Im f(z) \Im z >0 ~~{\rm for ~all}~~ z~~ {\rm such ~that}~~ \Im z \neq0\,, \end{equation} where $\Im f(z)$ denotes the imaginary part of the function $f(z)$. It follows directly from the above definition that a typically real function $f(z)$ satisfies the following (see \cite{Wigner,goodman,robertson} and \cite{rs1} for a recent review): \begin{enumerate} \item All poles of $f(z)$ lie on the real axis. \item All poles of $f(z)$ are simple. \item The residue at any pole of $f(z)$ is negative. \item Linear combinations of typically real functions with positive coefficients are typically real. 
\end{enumerate} Let us look at three physical examples of amplitudes that are typically real: \begin{itemize} \item Consider the $\phi^2 \psi$ tree level amplitude of massless scalars $\phi$ with massive exchange $\psi$ \footnote{The $\alpha$,$\beta$,$\gamma$ and $\lambda$ used on this page are stand-alone symbols, not to be confused with symbols in other sections.} of mass $m$: \begin{eqnarray} M^{\phi^2 \psi}(\omega^2,\rho)&=& \left(\frac{1}{s-m^2}+ \frac{1}{t-m^2}+\frac{1}{u-m^2}\right)\,, \nonumber\\ &=& \frac{1}{m^2}\left(\frac{1}{(\lambda-1)}- \frac{2(2+\lambda)}{2+2 \lambda+\lambda^2 (1+\rho)} \right)\,,\nonumber\\ \end{eqnarray} where $\lambda=\frac{\omega^2}{m^2}$. We see that the above is just a simple pole at $\rho=-1-\frac{2}{\lambda}(1+\frac{1}{\lambda})$ with negative residue for $\lambda>0$. We can check directly that it is typically real: with $\rho=r e^{i \theta}$, \begin{equation} \Im M^{\phi^2 \psi}(\omega^2,\rho)~ \Im \rho = \frac{2 r^2 \lambda^2 (\lambda+2) \sin^2 (\theta )}{r^2 \lambda^4 \sin ^2(\theta )+\left(\lambda^2 (1+r \cos (\theta ))+2 \lambda+2\right)^2} >0 \,, \end{equation} for $\lambda>0$ and $\omega^2,m^2>0$. Thus by the properties described above this is a typically real function for any value of $\omega^2$ except $\omega^2=m^2$, in an arbitrarily large disk around the origin in the $\rho$ variable. 
\item Consider next the type-II string amplitude eq.~\eqref{type2}; in terms of the $\rho$ variable we can rewrite it, using the infinite product representation of the gamma function, as: \begin{equation} \mathcal{M}_{II} = \lambda_1(\omega^2) ~\frac{ \prod_{i=1}^{\infty} \left( 1+ \alpha_i ~\rho \right)} {\prod_{i=1}^{\infty} \left(1- \beta_i \rho \right)}\, \end{equation} where $\alpha_n(\omega^2)=-\frac{\omega ^4 \left(n-\omega ^2\right)}{\left(n+\omega ^2\right) \left(2 n^2-2 n \omega ^2+\omega ^4\right)}$ , $\beta_n(\omega^2)=-\frac{\omega ^4 \left(n+\omega ^2-1\right)}{\left(-n+\omega ^2+1\right) \left(2 n^2+2 n \omega ^2-4 n+\omega ^4-2 \omega ^2+2\right)}$ and $\lambda_1(\omega^2) >0$. In fact, one can check numerically on Mathematica that for $\omega^2<2$ the amplitude is typically real, directly by using \eqref{TR} for $|\rho|<1$! \item The closed bosonic string amplitude eq.~\eqref{CBamplitude} can also be similarly written in terms of the $\rho$ variable as: \begin{equation} \mathcal{M}_{CB} = \lambda_2(\omega^2) ~\frac{ \prod_{i=1}^{\infty} \left( 1+ \gamma_i ~\rho \right)} {\prod_{i=1}^{\infty} \left(1- \delta_i \rho \right)} \end{equation} where \begin{eqnarray} \gamma_n(\omega^2)&=&\frac{9 \omega ^4 \left(-3 n+3 \omega ^2+1\right)}{\left(3 n+3 \omega ^2-1\right) \left(18 n^2-18 n \omega ^2-12 n+9 \omega ^4+6 \omega ^2+2\right)}\,, \nonumber\\ \delta_n(\omega^2)&=&-\frac{9 \omega ^4 \left(3 n+3 \omega ^2-2\right)}{\left(-3 n+3 \omega ^2+2\right) \left(18 n^2+18 n \omega ^2-24 n+9 \omega ^4-12 \omega ^2+8\right)} \nonumber \, \end{eqnarray} and $\lambda_2(\omega^2) >0$. In this case as well, one can check numerically on Mathematica that for large ranges of $\omega^2$ the amplitude is typically real, directly by using \eqref{TR} for $|\rho|<1$! \end{itemize} The above motivates one to investigate positivity and typical realness in $\rho$ more carefully. 
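These checks do not require Mathematica. Below is a minimal grid test of definition \eqref{TR} in Python (the grid resolution is an arbitrary choice), applied to the $\phi^2\psi$ closed form above and to two sample polynomial terms:

```python
# Grid test of typical realness: Im f(rho) * Im rho > 0 for Im rho != 0.
import cmath
import math

def min_tr_product(f, n_r=20, n_th=40):
    """Minimum of Im f(rho) * Im rho over a grid in the open unit disk.
    For functions with real Taylor coefficients, theta in (0, pi) suffices."""
    best = float("inf")
    for i in range(1, n_r):
        r = i / n_r
        for j in range(1, n_th):
            rho = r * cmath.exp(1j * math.pi * j / n_th)
            best = min(best, f(rho).imag * rho.imag)
    return best

def M_phi2psi(rho, lam):
    """Closed form quoted above, with m = 1 and lam = omega^2/m^2."""
    return 1/(lam - 1) - 2*(2 + lam)/(2 + 2*lam + lam**2*(1 + rho))

# The exchange amplitude passes for a range of couplings ...
for lam in (0.5, 2.0, 10.0):
    assert min_tr_product(lambda z: M_phi2psi(z, lam)) > 0
# ... as does -(1 - rho)^2, while -(1 - rho)^3 already fails:
assert min_tr_product(lambda z: -(1 - z)**2) > 0
assert min_tr_product(lambda z: -(1 - z)**3) < 0
```

The last two checks illustrate how quickly typical realness constrains individual polynomial terms: the second power of $(1-\rho)$ survives while the third does not.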
To see the power of typical realness, let us look at a generic crossing symmetric monomial $x^p y^q= \frac{(-1)^q}{2^{p+q}}\omega^{4 p+ 6 q}(1-\rho)^p(1+\rho)^q$ that could appear in the low energy expansion of the amplitude. We could ask when such a term is typically real in the disk $|\rho|<r$? \noindent The possible terms fall into one of 4 categories $x^p y^q$: either $p,q \ge 0$, or $p\ge 0, q\le 0$, or $q\ge0 ,p\le 0$, or $p,q \le 0$. The latter two cases correspond to having poles at $z=(-1)^{1/3},-(-1)^{2/3}$ and can be ruled out if we assume locality, as we explained in section \ref{subsec:lc}. Analyzing the first two cases, which we shall call (R)egular and (S)ingular respectively, by applying the definition \eqref{TR}, we get the possibilities listed in the table below: \begin{center} \begin{tabular}{| c | c | c| c|} \hline Radius & Allowed & R & S\\ \hline & & & \\ & &\(\displaystyle -(1-\rho )^n,-(\rho +1)^n,\) for & \(\displaystyle -\frac{(1-\rho)^n}{(1+\rho)^m}, \)~ for \\ & $$ \color{green} \CheckmarkBold $$& \(\displaystyle ~ 1\le n \le 2 \) & \(\displaystyle ~0\le n \le 2, 1\le m\le 2 \) \\ $r=1$ & & &\\\cline{2-4} & & & \\ &$$ \color{red} \XSolidBold $$& \(\displaystyle \pm(1-\rho)^n,\pm(1+\rho)^n,~ \forall n\ge 3\) & \(\displaystyle -\frac{(1-\rho)^n}{(1+\rho)^m},~ \forall n \ge 3 \)\\ & & & \\ \hline & & & \\ & &\(\displaystyle -(1-\rho )^n,-(\rho +1)^n,\) for & \(\displaystyle -\frac{(1-\rho)^n}{(1+\rho)^m}, \)~ for \\ & $$ \color{green} \CheckmarkBold $$ & \(\displaystyle ~ 1\le n \le 2 \) & \(\displaystyle ~0\le n \le 1, m=1 \) \\ $r=2$ & & &\\\cline{2-4} & & & \\ &$$ \color{red} \XSolidBold $$& \(\displaystyle \pm(1-\rho)^n,\pm(1+\rho)^n,~ \forall n\ge 3\) & \(\displaystyle -\frac{(1-\rho)^n}{(1+\rho)^m},~ \forall m \ge 2 \)\\ & & & \\ \hline \end{tabular} \end{center} We can also shift to the $\tilde{\rho}=\rho+1$ variable, as $\tilde{\rho}=0$ corresponds to either $t=0$ or $u=0$, which is a low energy limit. 
Thus in the new variable we can ask whether the monomials are typically real in the disk $|\tilde{\rho}|<r$; the possibilities are as below \begin{center} \begin{tabular}{| c | c | c| c|} \hline Radius & Allowed & R & S\\ \hline & & & \\ & &\(\displaystyle \tilde{\rho},~-(2-\tilde{\rho} )^n,\) for & \(\displaystyle -\frac{(2-\tilde{\rho})^n}{\tilde{\rho}}, \)~ for \\ & $$ \color{green} \CheckmarkBold $$& \(\displaystyle ~ 1\le n \le 6 \) & \(\displaystyle ~0\le n \le 3 \) \\ $r=1$ & & &\\\cline{2-4} & & & \\ &$$ \color{red} \XSolidBold $$& \(\displaystyle \tilde{\rho}^{n-4},~-(2-\tilde{\rho} )^{n+1},\forall n\ge 6\) & \(\displaystyle -\frac{(2-\tilde{\rho})^n}{\tilde{\rho}^m}, ~\forall m \ge 2 \)\\ & & & \\ \hline & & & \\ & &\(\displaystyle \tilde{\rho},~-(2-\tilde{\rho} )^n,\) for & \(\displaystyle -\frac{(2-\tilde{\rho})^n}{\tilde{\rho}}, \)~ for \\ & $$ \color{green} \CheckmarkBold $$& \(\displaystyle ~ 1\le n \le 2 \) & \(\displaystyle ~0\le n \le 2 \) \\ $r=2$ & & &\\\cline{2-4} & & & \\ &$$ \color{red} \XSolidBold $$& \(\displaystyle \tilde{\rho}^{n},~-(2-\tilde{\rho} )^{n+1},\forall n\ge 2\) & \(\displaystyle -\frac{(2-\tilde{\rho})^n}{\tilde{\rho}^m}, ~\forall m \ge 2 \)\\ & & & \\ \hline \end{tabular} \end{center} Thus in the disk $|{\tilde{\rho}}|<2$ there are precisely 6 allowed possibilities. Of these, three are regular and correspond\footnote{The elements of class R usually come from the dispersive representation of the amplitude and are not expected to be individually typically real, only the sum as a whole; so the above analysis does not rule out other regular terms.} to $y,x,x^2$. The three singular cases are $\frac{x^n}{y}$ for $0\le n \le 2$. The singular $n=0$ case is related to the massless version of the tree level amplitude considered in \eqref{1by}, and the singular cases corresponding to $n=1,2$ are respectively the exchange of massless spin-1 and spin-2 particles. 
Since the above cases are all typically real, any sum of these with positive coefficients is typically real too. \subsection{Typical realness and $\widetilde{\mathbf{M}}(-2n,\rho)$} We shall now discuss the connection between typical realness and $\widetilde{\mathbf{M}}(\beta,\rho)$ at $\beta=-2n$. As discussed in \eqref{betaresrhovar} and \eqref{betarescsdrrho}, we have \begin{eqnarray}\label{fbex} \mathrm{Res}_{\beta= -2 n} \left[ \widetilde{\mathbf{M}}(\beta, \rho) \right] &=& \sum_{J=0}^{\infty} (2J+1)~ \tilde{\alpha}_J(n,\delta_0) ~\mathcal{F}_B(n,J,\rho) \nonumber\\ &=&\sum_{\substack{p,q \\ 2p+3q=n}}\frac{(-1)^q}{2^{p+q}} \mathcal{W}_{p,q} (1+\rho)^{q} (1-\rho)^{p} \end{eqnarray} The Feynman blocks $\mathcal{F}_B(n,J,\rho)$ are polynomials of degree $\lfloor \frac{n}{2} \rfloor$ in $\rho$. By looking at $-\mathcal{F}_B(2,J,\rho)=\rho-1$ and $-\mathcal{F}_B(3,J,\rho)= \frac{1}{2}(2J^2+2J-3) (1+\rho)$, from the discussion in the previous section it is immediately obvious that these are typically real for any $J$ and for $J\ge 2$ respectively. Thus, it is natural to ask whether $\mathcal{F}_B(n,J,\rho)$ (possibly up to an overall sign) is typically real for other values of $n,J$ inside the disk $|\rho|<1$. Since we have a closed form expression for the Feynman block $\mathcal{F}_B(n,J,\rho)$ for any $n,J$, namely \eqref{fbclosed} in the notation of \eqref{fbclosedform}, this can be readily checked up to sufficiently high values of $n,J$. 
Rather remarkably, we find that the answer to the above question is in the affirmative, and the result is as follows: \\ \noindent {\it The Feynman block $\mathcal{F}_B(n,J,\rho)$ (with appropriate sign) is a typically real polynomial of degree $\lfloor\frac{n}{2} \rfloor$ in $\rho$ inside the unit disk $|\rho|<1$ for any value of $J\ge J_T$, with $J_T=n+2+\frac{1-(-1)^n}{2}$ for $n\ge 10$.}\\ \noindent For lower $n$, $J_T(n)$ can be read off from the table below:\\\\ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $n$ & 4& 5& 6& 7&8&9&10&11&12&13&14&15&19&20&21&99&100 \\ \hline $J_T(n)$ &4 & 4 & 6 & 8& 8& 10 & 12& 14& 14&16 & 16 & 18 & 22 &22& 24&102&102\\ \hline \end{tabular} \\ \noindent In fact, a well-known family of typically real polynomials is given by the {\it Suffridge polynomials} $S_{N,j}(\rho)$ \cite{suffridge1,suffridge2,suffridge3,shaffer1}, and in all cases we have checked, $(-1)^n \mathcal{F}_B(n,J,\rho)$ for $n>3,J>J_T$ is a positive linear combination of these. 
The Suffridge polynomials are defined as: \begin{eqnarray} S_{N,j}(\rho)&=&\sum_{k=1}^N A_k(N,j) \rho^k, ~~~{\rm with} \nonumber \\ A_k(N,j)&=& \frac{N-k+1}{N} \frac{\sin \left(\frac{k j \pi}{N+1}\right)}{\sin \left(\frac{ j \pi}{N+1}\right)} \end{eqnarray} A few examples are as follows: \begin{eqnarray} S_{2,4}&=&\rho -\frac{\rho ^2}{2} \nonumber\\ S_{3,1}&=&\frac{\rho ^3}{3}+\frac{2 \sqrt{2} \rho ^2}{3}+\rho \nonumber\\ S_{5,2}&=& -\frac{\rho ^5}{5}-\frac{2 \rho ^4}{5}+\frac{4 \rho ^2}{5}+\rho \nonumber\\ S_{7,3}&=&\frac{\rho^7}{7}+\frac{3}{7} \rho ^5 \cot \left(\frac{\pi }{8}\right)+\frac{5}{7} \rho ^3 \cot \left(\frac{\pi }{8}\right)-\frac{1}{7} \sqrt{2} \rho ^6 \csc \left(\frac{\pi }{8}\right)-\frac{4}{7} \rho ^4 \csc \left(\frac{\pi }{8}\right)-\frac{3}{7} \sqrt{2} \rho ^2 \csc \left(\frac{\pi }{8}\right)+\rho \nonumber \end{eqnarray} Some examples of Feynman blocks for $J \ge J_T$ written as linear combinations of Suffridge polynomials with positive coefficients are: \begin{eqnarray} \mathcal{F}_B(3,10,\rho)&=&\frac{217}{2}S_{1,1}+\frac{217}{2} S_{2,2}\nonumber\\ \mathcal{F}_B(4,6,\rho)&=&\frac{359}{2}S_{2,1}+\frac{357}{2} S_{2,2} \end{eqnarray} We quote the following results about typically real polynomials that will be useful for our purposes. The reader may refer to \cite{suffridge1,suffridge2,suffridge3,shaffer1,brandt} for details. Let $f(z)=z+\sum_{i=2}^N a_i z^i$ be a typically real polynomial of degree $N$ in the disk $|z|<1$, which we denote by $f(z) \in T^N$, and let $R(\cos{\theta}) =\frac{\Im f(e^{i \theta})}{\sin {\theta}}$. Then the following are true: \begin{enumerate} \item $f(z)\in T^N$ if and only if $R(\cos{\theta})= 1+\sum_{j=2}^N b_j \frac{\sin{j \theta}}{\sin{\theta}}\ge 0$. 
\item Let $b_j \in \mathbb{R}$ with $b_{N-1}=1$; if $\sum_{j=0}^{N-1} b_j u^j$ has a fixed sign for all $-1 \le u \le 1$, then there exist unique $a_j$ with $1\le j\le N$ and $a_1=1$ such that $$R(u=\cos{\theta})=2^{N-1}a_N \sum_{j=0}^{N-1} b_j u^j$$ and $f(z)=z+\sum_{i=2}^{N} a_i z^i \in T^N$. \item If $f(z) \in T^N$ with $a_N \neq 0$ and, for some $1\le k\le N$, $a_k$ assumes an extreme (max/min) value, then all the zeros of $R(u)$ with $u=\cos{\theta}$ are real and \begin{subequations}\label{suffridgebounds} \begin{align} R(u)= 2^{N-1} a_N \begin{dcases*} \prod_{j=1}^{\frac{N-1}{2}}(u-\gamma_j)^2 & if $N$ is odd, $a_N > 0$\,, \\ (u^2-1)\prod_{j=1}^{\frac{N-3}{2}}(u-\gamma_j)^2 & if $N$ is odd, $a_N < 0$\,, \\ (u+1) \prod_{j=1}^{\frac{N-2}{2}}(u-\gamma_j)^2 & if $N$ is even, $a_N > 0$\,, \\ (u-1) \prod_{j=1}^{\frac{N-2}{2}}(u-\gamma_j)^2 & if $N$ is even, $a_N < 0$\,, \end{dcases*} \end{align} \end{subequations} where $-1\le \gamma_j\le 1$. \noindent In other words, the coefficient body $\{(a_2,\cdots,a_N): z+a_2 z^2+\cdots+a_N z^N\}$ has extreme points that live on a manifold of dimension $\frac{N-1}{2},\frac{N-2}{2}$ for odd, even $N$ when $a_N>0$, and on a manifold of dimension $\frac{N-3}{2},\frac{N-2}{2}$ for odd, even $N$ when $a_N<0$, respectively.\\ Once one has the extreme points, the allowed coefficient region is the convex hull of these extreme points, since the set of typically real polynomials is a convex set and the Krein-Milman theorem \cite{rs1} applies. We work out the first couple of cases for illustrative purposes: {\bf N=2:} From eq.~\eqref{suffridgebounds}, assuming $a_2>0$ or $a_2<0$, we get $ 2 a_2+2 a_2 u = 1+ 2 a_2 u$ or $-2 a_2+2 a_2 u = 1+ 2 a_2 u$ respectively, which can be solved to get the extreme points $a_2=\pm\frac{1}{2}$; the convex hull of the extreme points yields the line $|a_2|\le \frac{1}{2}$. 
{\bf N=3:} From eq.~\eqref{suffridgebounds}, assuming $a_3>0$ or $a_3<0$, we get $ 1-a_3+2 a_2 u+ 4 a_3 u^2 = 4 a_3 u^2- 8 a_3 \gamma_1 u +4 a_3 \gamma_1^2$ or $ 1-a_3+2 a_2 u+ 4 a_3 u^2 = 4 a_3 u^2-4 a_3$ respectively, for $|\gamma_1|\le 1$, which can be solved to obtain $(a_2,a_3) =\left(\frac{-4 \gamma_1}{1+4 \gamma_1^2},\frac{1}{1+4 \gamma_1^2} \right)$ and $(a_2,a_3) =(0,-\frac{1}{3})$ respectively. Thus for $a_3>0$ we get part of the ellipse $(2a_3-1)^2+a_2^2=1$ with $a_3\ge \frac{1}{5}$, and for $a_3<0$ we get the point $\left(0,-\frac{1}{3}\right)$; the convex hull of these yields the 2-d region below.\\ \noindent As is obvious, this procedure will always give us a finite region, which has implications for the $\mathcal{W}_{p,q}$ bounds we get: all the $\mathcal{W}_{p,q}$'s will be bounded on both sides, as was shown using different methods in \cite{rs1}. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{suff.png} \caption{The coefficient region for $N=3$ in $(a_2,a_3)$ space.} \label{fig:pic4} \end{figure} \item For any $z$ inside the disk \cite{michel} we have \begin{equation}\label{suffridgedistortion} |f(z)| \le \frac{1}{4}\csc^2{\frac{\pi}{2(N+2)}}\,. 
\end{equation} \end{enumerate} The following are some plots of both Suffridge polynomials and Feynman blocks: \begin{figure}[H] \centering \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{FBJ10n8.png} \end{subfigure}% \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{FBJ18n16.png} \end{subfigure} \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{FBJ42n40.png} \end{subfigure}% \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{S51.png} \end{subfigure}% \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{S72.png} \end{subfigure} \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{S7638.png} \end{subfigure}% \caption[short]{Some plots of Feynman blocks for different $n,J$ and Suffridge polynomials for different $N,j$.} \end{figure} \noindent The above discussion motivates us to consider the following: \begin{eqnarray} \mathrm{Res}_{\beta= -2 n} \left[ \widetilde{\mathbf{M}}(\beta, z) \right] &=& \sum_{J=0}^{J_T-2} (2J+1)~ \tilde{\alpha}_J(n,\delta_0) ~\mathcal{F}_B(n,J,\rho)+\underbrace{\sum_{J=J_T}^{\infty} (2J+1)~ \tilde{\alpha}_J(n,\delta_0) ~\mathcal{F}_B(n,J,\rho)}_{R_T(n,\rho)} \nonumber \end{eqnarray} The quantity $R_T(n,\rho)$ above, being a positive sum of typically real polynomials, is typically real. Thus we have \begin{eqnarray} \label{Rtn1} R_T(n,\rho) &=& \mathrm{Res}_{\beta= -2 n} \left[ \widetilde{\mathbf{M}}(\beta, z) \right] -\sum_{J=0}^{J_T-2} (2J+1)~ \tilde{\alpha}_J(n,\delta_0) ~\mathcal{F}_B(n,J,\rho) \end{eqnarray} \noindent An important observation is the following. Since the Feynman block $\mathcal{F}_B(n,J,\rho)$ for a particular $n$ is always a polynomial of the same degree $\lfloor\frac{n}{2}\rfloor$ for all $J$, both the LHS and RHS of equation \eqref{Rtn1} are polynomials of the same degree in $\rho$. 
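The Suffridge coefficient formula quoted above can be checked against the listed examples directly; a minimal sketch (only the stated $A_k(N,j)$ formula is used):

```python
# Coefficients A_k(N, j) of the Suffridge polynomial S_{N,j}(rho).
import math

def suffridge_coeffs(N, j):
    """Return [A_1, ..., A_N], A_k = (N-k+1)/N * sin(k j pi/(N+1)) / sin(j pi/(N+1))."""
    s = math.sin(j * math.pi / (N + 1))
    return [(N - k + 1) / N * math.sin(k * j * math.pi / (N + 1)) / s
            for k in range(1, N + 1)]

# S_{2,4} = rho - rho^2/2  and  S_{5,2} = rho + (4/5) rho^2 - (2/5) rho^4 - (1/5) rho^5
for got, want in zip(suffridge_coeffs(2, 4), [1.0, -0.5]):
    assert abs(got - want) < 1e-12
for got, want in zip(suffridge_coeffs(5, 2), [1.0, 0.8, 0.0, -0.4, -0.2]):
    assert abs(got - want) < 1e-12
```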
This, combined with the fact that $R_T(n,\rho)$ is a typically real polynomial, allows one to get three kinds of constraints: \begin{enumerate} \item {\bf Sign patterns $\mathcal{S}(n)$:} These follow since the Feynman blocks have a fixed sign pattern, as we discussed earlier in sec.~(\ref{feynmanblock}). The sign of the coefficient of $\rho^k$ in $\mathcal{F}_B(n,J,\rho)$ is the same for all $J\ge J_T(n)$, and the sign of each term in $R_T(n,\rho)$ (being a positive sum of $\mathcal{F}_B(n,J,\rho)$ for $J\ge J_T(n)$) is the same as that of $\mathcal{F}_B(n,J_T,\rho)$. Comparing this with the RHS of \eqref{Rtn1} we get constraints on the coefficients. We shall denote these constraints by $\mathcal{S}(n)$. \item{\bf Suffridge bounds $TR(n)$:} These follow as $\tilde{R}_T(n,\rho)=\frac{R_T(n,\rho)-R_T(n,0)}{\partial_{\rho}R_T(n,\rho)|_{\rho=0}}$ is a typically real polynomial and thus obeys the coefficient bounds arising from \eqref{suffridgebounds}. These depend on $n$ and can be worked out case by case for each $n$, as we did for $N=2,3$ above. We list the first few cases in the table below: {\scriptsize{\begin{tabular}{|c|c|c|c|c|c|} \hline $N=2$& $N=3$& $N=4$& $N=5$ & $N$ odd & $N$ even \\ \hline & $|a_2|\le1,$ & $|a_2|\le\frac{1+\sqrt{7}}{3},$& $|a_2|\le \sqrt{2},$ & $|a_2|\le 2\cos{\frac{2\pi}{N+3}}$ & $|a_2|\le 2\cos{\alpha}$ \\ $|a_2|\le \frac{1}{2}$ & $ -1/3 \le a_3\le1$ & $ -1/3 \le a_3\le1$ & $ -\frac{\sqrt{5}-1}{2} \le a_3\le \frac{1-\sqrt{5}}{2}$ & \vdots & \vdots \\ & & $ |a_4| \le 2/3$ & $ |a_4| \le 1$ & $|a_{N-1}|\le 1$ & $-\frac{N-2}{N+2}\le a_{N-1}\le 1$ \\ & & & $ -\frac{1}{2}\le a_5\le 1$ & $-\frac{N-1}{N+3}\le a_{N}\le 1$ & $-\frac{N}{N+2}\le a_{N}\le \frac{N}{N+2}$ \\ \hline \end{tabular} }} \\ where $\alpha \in \left(\frac{2 \pi}{N+4},\frac{2\pi}{N+2} \right)$ satisfies $(N+4)\sin{\left(\frac{N+2}{2} \alpha\right)}+(N+2)\sin{\left(\frac{N+4}{2} \alpha\right)}=0$, with $N=\lfloor n/2 \rfloor$. 
\item {\bf Distortion constraints $\mathcal{D}(n)$:} These follow from \eqref{suffridgedistortion}: \begin{eqnarray} |\tilde{R}_T(n,\rho)| & \le & \frac{1}{4} \csc^2{\frac{\pi}{2(N+2)}},~~ {\rm with}~~N=\lfloor n/2 \rfloor \end{eqnarray} The $\mathcal{S}(n)$,~$TR(n)$,~$\mathcal{D}(n)$ are respectively the polynomial analogues of the $PB_c$, $TR_U$ and distortion constraints used in \cite{rs1} for typically real functions. \end{enumerate} Let us consider the case $n=4$, which has $J_T=4$; in this case we get: \begin{eqnarray} R_T(4,\rho)&=& \frac{\mathcal{W}_{2,0}(1-\rho)^2}{4} -\sum_{J=0}^{2}(2J+1)\tilde{\alpha}_J(4) \mathcal{F}_B(4,J,\rho) \nonumber\\ &=& -\frac{1}{4} (2 \tilde{\alpha}_0(4)+190 \tilde{\alpha}_2(4)-\mathcal{W}_{2,0})+\frac{1}{2}\left(2\tilde{\alpha}_0(4)-20\tilde{\alpha}_2(4)-\mathcal{W}_{2,0} \right)\rho \nonumber\\&& - \frac{1}{4}\left(2\tilde{\alpha}_0(4)+10 \tilde{\alpha}_2(4)-\mathcal{W}_{2,0} \right) \rho^2 \,, \\ \tilde{R}_T(4,\rho)&=& \rho -\frac{\left(2\tilde{\alpha}_0(4)+10 \tilde{\alpha}_2(4)-\mathcal{W}_{2,0}\right)}{2 \left(2\tilde{\alpha}_0(4)-20\tilde{\alpha}_2(4)-\mathcal{W}_{2,0} \right)}\rho^2 \end{eqnarray} We get the following: \begin{eqnarray} \mathcal{S}(4)&:&~~~~~~~ 2 \tilde{\alpha}_0(4)+10 \tilde{\alpha}_2(4)\le \mathcal{W}_{2,0} \le 2 \tilde{\alpha}_0(4)+190 \tilde{\alpha}_2(4)\\ TR(4)&:&~~~~~~~ \mathcal{W}_{2,0} \ge 2 \tilde{\alpha}_0(4)-5 \tilde{\alpha}_2(4) \\ \mathcal{D}(4)&:& ~~~~~~~ |\tilde{R}_T(4,\rho)|\le 1+\frac{1}{\sqrt{2}}\approx 1.707 \\ &&~~~~~~~ \mathcal{W}_{2,0} \ge 2 \tilde{\alpha}_0(4)-20 \tilde{\alpha}_2(4) \end{eqnarray} Combining all of the above we get: \begin{equation}\label{n4bounds} 2 \tilde{\alpha}_0(4)+10 \tilde{\alpha}_2(4) \le \mathcal{W}_{2,0} \le 2 \tilde{\alpha}_0(4)+190 \tilde{\alpha}_2(4) \end{equation} For the string case, from appendix~(\ref{sec:stringexamples}), this means $2.0164 \le \mathcal{W}_{2,0}\le 2.214$, and the string value is $ \mathcal{W}_{2,0}=2.016$ which is very 
close to the lower limit. Using $\tilde{\alpha}_J(4) \le \frac{1}{4 M^8}$ and assuming $\frac{\tilde{\alpha}_0(4)}{\tilde{\alpha}_2(4)}>10$ we get the non-projective bound: \begin{equation} \frac{1.5}{ M^8}\le \mathcal{W}_{2,0} \le \frac{10.5}{M^8} \end{equation} A novel feature of the above is the existence of a lower bound $>0$. Let us now consider the $n=5$ case, with $J_T=4$ and \begin{eqnarray} R_T(5)&=& - \mathcal{W}_{1,1} \frac{(1-\rho^2)}{4}-\sum_{J=0}^{2}(2J+1)\tilde{\alpha}_J(5) \mathcal{F}_B(5,J,\rho) \nonumber\\ &=&-\frac{1}{4} \left(5\tilde{\alpha}_0(5) +265 \tilde{\alpha}_2(5)+\mathcal{W}_{1,1} \right) -15 \tilde{\alpha}_2(5) \rho + \frac{1}{4}\left(5\tilde{\alpha}_0(5)-35\tilde{\alpha}_2(5)+\mathcal{W}_{1,1} \right) \rho^2 \,, \nonumber \\ \tilde{R}_T(5,\rho)&=& \rho- \frac{\mathcal{W}_{1,1}+5 \tilde{\alpha}_0(5)-35\tilde{\alpha}_2(5)}{60\tilde{\alpha}_2(5) } \rho^2 \,. \end{eqnarray} We find \begin{eqnarray} ~~\mathcal{S}(5)&:&~~~~~~~ \mathcal{W}_{1,1} \ge-5 \tilde{\alpha}_0(5)+35 \tilde{\alpha}_2(5) \\ TR(5)&:&~~~~~~~ -5 \tilde{\alpha}_0(5)+5 \tilde{\alpha}_2(5) \le \mathcal{W}_{1,1} \le -5\tilde{\alpha}_0(5)+65 \tilde{\alpha}_2(5) \\ \mathcal{D}(5)&:&~~~~~~~|\tilde{R}_T(5,\rho)|\le 1+\frac{1}{\sqrt{2}}\approx 1.707 \end{eqnarray} Combining all of the above we get \begin{equation}\label{pw51} -5 \tilde{\alpha}_0(5)+35 \tilde{\alpha}_2(5) \le \mathcal{W}_{1,1} \le -5 \tilde{\alpha}_0(5)+65 \tilde{\alpha}_2(5)\,. \end{equation} Using the locality constraints, which lead to \eqref{dn5m1e}, we get the following slightly stronger upper bound: \begin{equation}\label{pw52} \mathcal{W}_{1,1} \le -5 \tilde{\alpha}_0(5)+ 56 \tilde{\alpha}_2(5)\,. \end{equation} The type-II string values for this case are $$\tilde{\alpha}_0(5)=1.0013,\quad \tilde{\alpha}_2(5)=0.000538, \quad \mathcal{W}_{1,1}=-4.9857\,. $$ In comparison, our bounds \eqref{pw51}, \eqref{pw52} give \begin{equation}-4.9878\le\mathcal{W}_{1,1} \le-4.9765\,, \end{equation} which is a very narrow range, and it is respected! 
Notice that this is a bound on the Wilson coefficient itself and not on a ratio. Using the fact that $\alpha_J(s)\leq 1$, so that $32\int_{2M^2}^\infty \frac{ds}{s^{6}}\alpha_J(s)\leq \frac{1}{5 M^{10}}$, we have the non-projective bound \begin{equation} -\frac{1}{M^{10}}\leq \mathcal{W}_{1,1}\leq \frac{56}{5M^{10}}\,. \end{equation} Note that the type-II tree level $\alpha_J(s)$'s are to be thought of as distributions (since they have delta-function support only), which leads to the different constraint given above. \noindent Similarly, the $n=6$ case has $J_T=6$, and \begin{eqnarray} R_T(6)&=& \mathcal{W}_{3,0} \frac{(1-\rho)^3}{8} + \mathcal{W}_{0,2} \frac{(1+\rho)^2}{4}-\sum_{J=0}^{4}(2J+1)\tilde{\alpha}_J(6) \mathcal{F}_B(6,J,\rho) \nonumber\\ &=&\frac{1}{8} \left(2 \mathcal{W}_{0,2}+\mathcal{W}_{3,0}-8\tilde{\alpha}_0(6)-700\tilde{\alpha}_2(6)-6552\tilde{\alpha}_4(6) \right) \nonumber\\ &+&\frac{1}{8} \left(4 \mathcal{W}_{0,2}-3\mathcal{W}_{3,0}-6\tilde{\alpha}_0(6)+210\tilde{\alpha}_2(6)-3654\tilde{\alpha}_4(6) \right) \rho \nonumber\\&+& \frac{1}{8} \left(2 \mathcal{W}_{0,2}+3 \mathcal{W}_{3,0}-12\tilde{\alpha}_0(6)+120\tilde{\alpha}_2(6)-1548\tilde{\alpha}_4(6) \right) \rho^2 \nonumber \\ &+&\frac{1}{8} \left(-\mathcal{W}_{3,0}+2\tilde{\alpha}_0(6)+10\tilde{\alpha}_2(6)+18\tilde{\alpha}_4(6) \right) \rho^3 \,, \end{eqnarray} gives \begin{eqnarray} \mathcal{S}(6)&:& 3 \tilde{\alpha} _0(6)-75 \tilde{\alpha} _2(6)+747 \tilde{\alpha} _4(6)\leq \mathcal{W}_{0,2} \le 3\tilde{\alpha} _0(6)-45 \tilde{\alpha} _2(6)+927 \tilde{\alpha} _4(6), \nonumber\\ && \left(2 \tilde{\alpha} _0(6)+10 \tilde{\alpha}_2(6)+18 \tilde{\alpha} _4(6)\right) \le \mathcal{W}_{3,0} \le \left(-2 \mathcal{W}_{0,2}+8 \tilde{\alpha} _0(6)+700\tilde{\alpha} _2(6)+6552 \tilde{\alpha} _4(6) \right) \nonumber\\ TR(6)&:& 3 \tilde{\alpha} _0(6)-75 \tilde{\alpha} _2(6)+747 \tilde{\alpha} _4(6)\leq \mathcal{W}_{0,2} \le 3\tilde{\alpha} _0(6)-55 \tilde{\alpha} _2(6)+867 \tilde{\alpha} _4(6), \nonumber\\ && 
\mathcal{W}_{3,0} \ge \frac{1}{3}\left(2 \mathcal{W}_{0,2}+120 \tilde{\alpha} _2(6)-1800\tilde{\alpha} _4(6) \right) \nonumber\\ \mathcal{D}(6)&:&|\tilde{R}_T(6,\rho)|\le \frac{1}{4} \left(1+\sqrt{5}\right)^2\approx 2.618 \end{eqnarray} All of these are obeyed by the string case. Thus, we can get {\bf non-projective bounds} \cite{yutin3,yutin4}, such as the ones described above, for any $\mathcal{W}_{p,q}$ with $2p+3q\ge 2$. We shall briefly address the two cases, namely $\mathcal{W}_{1,0},\mathcal{W}_{0,1}$, where the bounds do not follow from typical realness, since $\tilde{R}_T(\rho)=\rho$ in these cases. From \eqref{fbex} it follows that: \begin{eqnarray} \mathcal{W}_{1,0} &=& 2\sum_{J=0}^{\infty} (2J+1) \tilde{\alpha}_J(2) \ge 2 \tilde{\alpha}_0(2)+ 10 \tilde{\alpha}_2(2) \ge 0 \nonumber\\ \mathcal{W}_{0,1} &=& \sum_{J=0}^{\infty} (2J+1) \tilde{\alpha}_J(3) (2 J^2+2J-3) \ge -3 \tilde{\alpha}_0(3) \end{eqnarray} One can obtain the upper bounds by using the facts $\tilde{\alpha}_J(n)\ge \tilde{\alpha}_J(n+1)$ and $\tilde{\alpha}_0(4)> \lambda \tilde{\alpha}_J(4)$, where\footnote{This follows from \eqref{lsd} for $J=0$ and \eqref{alphatn4bound1} for $J\ge2$.} $\lambda=10$. This gives $2 \tilde{\alpha}_0(2)+ 10 \tilde{\alpha}_2(2) \le \mathcal{W}_{1,0} \le \tilde{\alpha}_0(1) \sum_{J=0}^{\infty} (2J+1)(0.1)^{J/2} \le \frac{25.67}{M^4} $ and $-3 \tilde{\alpha}_0(3) \le \mathcal{W}_{0,1} \le \tilde{\alpha}_0(2) \sum_{J=0}^{\infty} (2J+1)(2J^2+2J-3)(0.1)^{J/2} \le \frac{24.71}{M^6} $. Thus we have $$ 0 \le \mathcal{W}_{1,0}\le \frac{25.67}{M^4}\,,\qquad -\frac{4}{M^6} \le \mathcal{W}_{0,1}\le \frac{24.71}{M^6}\,. $$ \noindent One can also readily get projective bounds by comparing partial wave moments at different orders, since $\tilde{\alpha}_J(n) > \tilde{\alpha}_J(n+1)$, identically to the way we obtained upper bounds for $\mathcal{W}_{1,0},\mathcal{W}_{0,1}$ above. The interested reader may look at eq.~(\ref{w20w11}) in app.~(\ref{sec:dposexamples}) for an example. 
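The coefficient combinations entering $R_T(4,\rho)$ and $R_T(5,\rho)$ above can be cross-checked directly from the closed-form blocks of section \ref{feynmanblock}; a minimal sketch, in which random sample values stand in for the moments $\tilde{\alpha}_J$ and the Wilson coefficients:

```python
# Reproduce the rho-coefficients of R_T(4, rho) and R_T(5, rho) from the
# closed-form Feynman blocks F_B(4, J, rho) and F_B(5, J, rho).
import random

def fb4(J):
    """[rho^0, rho^1, rho^2] coefficients of F_B(4, J, rho)."""
    return [(-3*J**4 - 6*J**3 + 21*J**2 + 24*J + 2) / 4,
            (-J**4 - 2*J**3 + 7*J**2 + 8*J - 4) / 4,
            2 / 4]

def fb5(J):
    """[rho^0, rho^1, rho^2] coefficients of F_B(5, J, rho)."""
    return [(90 - 786*J - 571*J**2 + 420*J**3 + 185*J**4 - 30*J**5 - 10*J**6) / 72,
            -J*(1 + J)*(150 - 43*J - 41*J**2 + 4*J**3 + 2*J**4) / 72,
            18*(-5 + 2*J + 2*J**2) / 72]

random.seed(1)
a0, a2, W20, W11 = (random.random() for _ in range(4))

# n = 4:  R_T = W20 (1 - rho)^2 / 4 - sum_{J=0,2} (2J+1) alpha_J F_B(4, J, rho)
rt4 = [W20*g/4 - (a0*f0 + 5*a2*f2)
       for g, f0, f2 in zip([1, -2, 1], fb4(0), fb4(2))]
exp4 = [-(2*a0 + 190*a2 - W20)/4, (2*a0 - 20*a2 - W20)/2, -(2*a0 + 10*a2 - W20)/4]

# n = 5:  R_T = -W11 (1 - rho^2) / 4 - sum_{J=0,2} (2J+1) alpha_J F_B(5, J, rho)
rt5 = [-W11*g/4 - (a0*f0 + 5*a2*f2)
       for g, f0, f2 in zip([1, 0, -1], fb5(0), fb5(2))]
exp5 = [-(5*a0 + 265*a2 + W11)/4, -15*a2, (5*a0 - 35*a2 + W11)/4]

assert all(abs(u - v) < 1e-9 for u, v in zip(rt4 + rt5, exp4 + exp5))
```

The same comparison, run with the string values quoted earlier in place of the random samples, reproduces the numbers used in the bounds.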
We can also estimate the error made when the partial wave expansion is truncated to $J<J_T$, and bound this quantity; this is most easily done at the level of \eqref{Rtn1}. We demonstrate this for the $n=4,5$ cases below: \begin{eqnarray} {\bf n=4:}&& \Bigg|\frac{R_T(4,\rho)+\frac{1}{4} (2 \tilde{\alpha}_0(4)+190 \tilde{\alpha}_2(4)-\mathcal{W}_{2,0})}{\frac{1}{2}\left(2\tilde{\alpha}_0(4)-20\tilde{\alpha}_2(4)-\mathcal{W}_{2,0} \right)}\Bigg|\le 1.707 \nonumber\\ &\implies& \bigg|\sum_{J=J_T}^{\infty} (2J+1)\tilde{\alpha}_J(4) \mathcal{F}_B(4,J,\rho)\bigg|\le f_1(\tilde{\alpha}_0(4),\tilde{\alpha}_2(4),\mathcal{W}_{2,0}) \\ {\bf n=5:}&& \Bigg|\frac{R_T(5,\rho)+\frac{1}{4} (5 \tilde{\alpha}_0(5)+265 \tilde{\alpha}_2(5)+\mathcal{W}_{1,1})}{\frac{1}{4}\left(5\tilde{\alpha}_0(5)-35\tilde{\alpha}_2(5)+\mathcal{W}_{1,1} \right)}\Bigg|\le 1.707 \nonumber\\ &\implies& \bigg|\sum_{J=J_T}^{\infty} (2J+1)\tilde{\alpha}_J(5) \mathcal{F}_B(5,J,\rho)\bigg|\le f_2(\tilde{\alpha}_0(5),\tilde{\alpha}_2(5),\mathcal{W}_{1,1}) \end{eqnarray} where, \begin{eqnarray} f_1(\tilde{\alpha}_0(4),\tilde{\alpha}_2(4),\mathcal{W}_{2,0})&=&Max \left(1.207\tilde{\alpha}_0(4) - 64.57 \tilde{\alpha}_2(4) + 1.103 \mathcal{W}_{2,0}, \right. \nonumber\\ && \left. 2.207 \tilde{\alpha}_0(4) + 30.43 \tilde{\alpha}_2(4) +0.6035 \mathcal{W}_{2,0} \right)\,,\\f_2(\tilde{\alpha}_0(5),\tilde{\alpha}_2(5),\mathcal{W}_{1,1})&=& Max \left(0.883\tilde{\alpha}_0(5) - 81.1863 \tilde{\alpha}_2(5) + 0.176 \mathcal{W}_{1,1}, \right. \nonumber\\ && \left. 3.384 \tilde{\alpha}_0(5) + 51.314 \tilde{\alpha}_2(5) +0.676 \mathcal{W}_{1,1} \right)\,. \end{eqnarray} For the string case the above values are $f_1=3.46351$, $f_2=0.0457$. \subsection{The graviton pole} We consider the $\frac{1}{s t u}$ SUGRA pole in 10D along with the $R^4$ term.
\begin{eqnarray} M(\tilde{\rho}) &=& - 8 \pi G_N \frac{x^2}{y}+ g_0 x^2+ \cdots \nonumber \\ &=& 4 \pi G_N \omega^2 \frac{(2-\tilde{\rho})^2}{\tilde{\rho}}+ g_0 \omega^8 \frac{(2-\tilde{\rho})^2}{4} \end{eqnarray} We could address this case by considering the class of meromorphic typically real polynomials $TM^N$, which are the polynomial analogues of the Goodman class \cite{goodman} $TM^*$ considered in \cite{rs1}, and are of the form $f(z)=\frac{1}{z}+\sum_{i=0}^N a_i z^i$. However, this is beyond the scope of the current paper, and for now we take the simplified approach of looking at the coefficient bound one obtains for $g_0$ by assuming that the rest of the low energy expansion is typically real. To do this, we first scale the amplitude and change variables to $\tilde{\rho}= \alpha z$, which gives \begin{eqnarray} \frac{-\alpha M(z)}{ 16 \pi G_N \omega^2} &=& -\frac{1}{z}+ \left(\alpha -\frac{1}{2} \alpha ~ \hat{g_0} ~\omega^6\right)+\left( \frac{1}{2} \alpha ^2 ~\hat{g_0} ~\omega^6 -\frac{\alpha ^2}{4}\right) z -\frac{1}{8} \alpha ^3 ~\hat{g_0}~ \omega^6 z^2+ \cdots \end{eqnarray} where $\hat{g_0}=\frac{g_0}{8\pi G_N}$. The change of variables to $z$ is needed since the Goodman bounds hold for a disc of unit radius. As noted earlier, $M(\tilde\rho)$ to $O(\omega^8)$ is TR in a disc with $\alpha=2$. Strictly speaking, this is a weak-coupling result since we have ignored terms proportional to $\log(s)$ in the expansion. However, let us now assume that the full amplitude $M(\tilde\rho)$, even at strong coupling, is TR inside $|z|<1$ and examine the consequences.
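The $z$-expansion above can be reproduced symbolically; a minimal sketch, using the identification $\hat{g_0}=g_0/8\pi G_N$ and the form of $M(\tilde\rho)$ stated above:

```python
import sympy as sp

z, a, g, w = sp.symbols('z alpha ghat omega', positive=True)
rho = a * z  # change of variables: tilde-rho = alpha * z

# -alpha M/(16 pi G_N omega^2) for
# M = 4 pi G_N omega^2 (2-rho)^2/rho + g0 omega^8 (2-rho)^2/4, with ghat = g0/(8 pi G_N)
f = a * (-(2 - rho)**2 / (4 * rho) - g * w**6 * (2 - rho)**2 / 8)

p = sp.expand(f * z)  # polynomial in z; coefficient of z^k here is the z^(k-1) term of f
coeffs = [sp.simplify(p.coeff(z, k)) for k in range(4)]
```

The four entries of `coeffs` are the coefficients of $z^{-1},z^0,z^1,z^2$ in the expansion, matching the displayed series term by term.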
$M(\tilde\rho)$ above is an element of the Goodman class $TM^{*}$ as discussed in \cite{rs1} and hence satisfies: \begin{eqnarray} && \left( \frac{1}{2} \alpha ^2 ~\hat{g_0} ~\omega^6 -\frac{\alpha ^2}{4}\right) \ge -1 \nonumber \\ &\implies& \hat{g_0} \ge \frac{(\alpha^2-4)}{2 ~\omega^6 ~\alpha^2} \end{eqnarray} Furthermore, as discussed in \cite{rs1}, we can convert this to an element of the Goodman class $TM$ and, by assuming that there are no poles inside the disk $|z|<p$, we get the following coefficient bound: \begin{equation} \hat{g_0} \le \frac{2}{\omega^6 \alpha} \left(p+\frac{1}{p}+\alpha \right) \end{equation} Thus we have \begin{equation} \boxed{\frac{(\alpha^2-4)}{2 ~\omega^6 ~\alpha^2}\le \hat{g_0} \le \frac{2}{\omega^6 \alpha} \left(p+\frac{1}{p}+\alpha \right)}\,. \end{equation} This is a two-sided bound on $\hat g_0$ and does not explicitly depend on the spacetime dimension. Notice that the upper bound depends on both $p$ and $\alpha$, while the lower bound depends only on $\alpha$. So far in the literature \cite{sch2, rs1}, the lower bound in weakly coupled theories is 0. Unitarity requires $\hat g_0\geq 0$ \cite{pedro}, which implies $\alpha\geq 2$. The derivation we have presented above only assumes TR-ness of the full amplitude up to a certain value $\omega^2$. The values $p=1$ and $\alpha=2$ give us the bound obtained in \cite{rs1}, namely $0\le \hat{g_0}\le 4$. The values $p=1$ and $\alpha=2.32$ give us the bound $0.13 \lesssim \hat{g_0}\lesssim 3.7$, where the lower bound coincides with the strong string coupling result obtained in \cite{pedro}. This suggests that the size of the disc $\alpha$ where TR-ness holds is sensitive to the string coupling, with $\alpha(g_s=0)=2$ while\footnote{The location of the nearest pole to the origin $p$ in the above formula presumably goes to zero as the string coupling increases---this removes the upper bound at strong string coupling \cite{sch2}. } $\alpha(g_s=\infty)\approx 2.32$.
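The boxed bound is straightforward to evaluate; a quick numerical sketch (in units where $\omega=1$) reproducing the quoted numbers:

```python
def g0_bounds(alpha, p, omega=1.0):
    """Two-sided bound on ghat_0: lower bound from the Goodman class TM*,
    upper bound assuming no poles inside the disk |z| < p."""
    lower = (alpha**2 - 4.0) / (2.0 * omega**6 * alpha**2)
    upper = (2.0 / (omega**6 * alpha)) * (p + 1.0 / p + alpha)
    return lower, upper

print(g0_bounds(2.0, 1.0))   # (0.0, 4.0), the bound of [rs1]
print(g0_bounds(2.32, 1.0))  # approximately (0.128, 3.724)
```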
\section{Discussion} We will now conclude with a brief discussion of promising future directions of research. \begin{itemize} \item {\bf Celestial OPE:} It is natural to ask what one can learn about celestial conformal field theories (CCFT's) from the 4d S-matrix. In particular, one would wonder if there is a relation between the OPE coefficients of the CCFT's and the partial wave coefficients of the momentum space amplitude. It turns out that this is indeed the case and will be the topic of our upcoming work \cite{toappear}. \item {\bf External gravitons/spinning particles:} One would also like to extend the analysis of this paper to cases with external spinning particles. In particular, the case of external gravitons is of special interest in the context of CCFT's. It will also be interesting to see the implications of null constraints for LSD in this case and compare with the existing literature \cite{sasha}. This is ongoing work using techniques developed in \cite{rs2}, and we hope to report progress on this front soon. \item {\bf External massive:} It is also of great interest to see if the techniques of this paper can be extended to external massive particles, both from the CCFT perspective and from the perspective of the 4d S-matrix, since it would be fascinating to see if the analogues of the partial wave moment bounds we obtained in this work could help address the presence/absence of LSD in this context (see \cite{rs2} for a related discussion). We leave this for future work. \item {\bf Analytic structure for general $\beta$:} It is also interesting to consider the analytic structure of the celestial amplitude for general $\beta$, since we have focused mainly on the low energy regime, i.e., $\beta=-2 n$, in this work. It would be interesting to see if the high energy regime $\beta= 2 n$ can be addressed similarly \cite{Arkani-Hamed:2020gyp}.
We can ask if the positivity properties we have considered persist for general values of $\beta$ and if these lead to any non-trivial consequences for the CCFT. This merits further study. \item {\bf Positivity:} Finally, it is also a fascinating mathematical question to better explore the connection between the notions of positivity introduced in this work and those studied in the maths literature, such as Toeplitz positivity. The Feynman blocks introduced in this work were a family of typically-real polynomials, and it would be interesting to better understand the connection between the Suffridge polynomials and the Feynman blocks. \end{itemize} \section*{Acknowledgements} We thank Faizan Bhat, Parthiv Haldar and Ahmadullah Zahed for discussions. We thank Shamik Banerjee for valuable comments on the draft. S.G. is supported by a Raman postdoctoral position of IISc while P.R. is supported by an IOE endowed postdoctoral position at IISc. A.S. acknowledges support from MHRD, Govt. of India, through a SPARC grant P315 and from DST through the SERB core grant CRG/2021/000873.
\section{Introduction} Galaxies can have a complex kinematic structure, due to the superimposition of multiple components that have different kinematics and stellar populations, such as bulge and disk (in lenticulars and spirals), host spheroid and polar ring (in polar ring galaxies), and counter-rotating stellar disks or kinematically decoupled cores (mainly, but not only, in early-type galaxies). The interesting case of large-scale counter-rotating galaxies, i.e., those that host two stellar components of comparable sizes that rotate along opposite directions, is the topic of this work. The prototype of this class of objects is the E7/S0 galaxy NGC 4550, whose counter-rotating nature was first discovered by \citet{Rubin+92}. \section{Spectral decomposition} The spectral decomposition technique by \citet{Coccato+11} is an implementation of the penalized pixel fitting code by \citet{Cappellari+04}. For a given spectrum, the code builds two synthetic stellar templates (one for each stellar component) as a linear combination of stellar spectra from an input stellar library. It then convolves these two stellar templates with two Gaussian line-of-sight velocity distributions (LOSVDs), which are characterized by different velocities and velocity dispersions. The input galaxy and stellar spectra are normalized to their continuum level; therefore, the contribution of each component to the observed galaxy spectrum is measured in terms of light. Gaussian functions are added to the convolved synthetic templates to account for ionized-gas emission lines. Multiplicative Legendre polynomials are included in the fit to match the shape of the galaxy continuum, and are set to be the same for the two synthetic templates. This accounts for the effects of dust extinction and variations in the instrument transmission. These steps are embedded in an iterative $\chi^2$ minimization loop.
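To illustrate the idea behind the decomposition (a toy sketch only, not the actual penalized pixel fitting implementation; all wavelengths, velocities, and light fractions here are invented), one can recover the kinematics of two simulated components built from a single continuum-normalized absorption line:

```python
import numpy as np
from scipy.optimize import curve_fit

lam = np.linspace(5150.0, 5250.0, 800)   # wavelength grid [Angstrom], invented
C = 299792.458                           # speed of light [km/s]
LINE0 = 5200.0                           # rest wavelength of the toy line [Angstrom]

def component(lam, v, sigma, frac):
    """One stellar component: the template line Doppler-shifted by v and
    broadened by a Gaussian LOSVD of dispersion sigma (both in km/s)."""
    center = LINE0 * (1.0 + v / C)
    width = LINE0 * sigma / C
    return frac * (1.0 - 0.5 * np.exp(-0.5 * ((lam - center) / width) ** 2))

def model(lam, v1, s1, v2, s2, frac1):
    """Two-component galaxy spectrum: light-weighted sum of the components."""
    return component(lam, v1, s1, frac1) + component(lam, v2, s2, 1.0 - frac1)

rng = np.random.default_rng(0)
truth = (150.0, 60.0, -150.0, 40.0, 0.65)   # two counter-rotating components
spec = model(lam, *truth) + rng.normal(0.0, 0.003, lam.size)

popt, _ = curve_fit(model, lam, spec, p0=(100.0, 50.0, -100.0, 50.0, 0.5))
```

Here `curve_fit` plays the role of the $\chi^2$ loop; the real code instead fits linear combinations of library spectra and includes the gas emission lines and multiplicative polynomials described above.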
The spectral decomposition code returns the spectra of the two best-fit synthetic stellar templates and the ionized-gas emissions, along with the best-fitting parameters of light contribution, velocity, and velocity dispersion. The line strengths of the Lick indices of the two counter-rotating components are then extracted from the two best-fit synthetic templates, and used to infer the properties of the stellar populations of the two components. Figure \ref{fig:decomposition} shows some examples of results of the spectral decomposition. \begin{figure} \includegraphics[width=1.0\columnwidth]{coccato_fig1.ps} \caption{Example of the fit results of the spectral decomposition technique, applied to disentangle the contribution of two counter-rotating stellar disks (left panel, from \citealt{Coccato+11}), and the contribution of the stars in the bulge and in the disk (right panel, from \citealt{Fabricius+14}). Black: observed galaxy spectrum; green: best fit model; blue and red: spectra of the two stellar components; purple: gas emission lines.}\label{fig:decomposition} \end{figure} \section{Applications and results} The technique has been successfully applied to disentangle (i) counter-rotating stellar components in galaxies \citep{Coccato+11, Coccato+13, Pizzella+14}, (ii) the kinematics of stars in bulge and disk \citep{Fabricius+14}, and (iii) the kinematics of the host galaxy and orthogonal disk in polar disk galaxies \citep{Coccato+14}. Here, we describe the results obtained for a sample of galaxies that host counter-rotating stellar disks of comparable size: NGC 3593, NGC 4138, NGC 4550, and NGC 5719. Observations were obtained with the VIMOS integral field unit at the VLT (Chile), except for NGC 4138, which was observed with the B\&C spectrograph at the 1.22-m telescope at Padova University (Italy).
In all the studied galaxies, we detect the presence of an extended secondary stellar component, which is counter-rotating with respect to the more luminous and massive stellar component. In addition, the secondary component rotates along the same direction as the ionized gas. The measurements of the equivalent widths of the absorption line features (H$\beta$, Mgb, Fe5270, and Fe5335) reveal that the secondary stellar component is always younger and more metal poor than the main stellar component. Such a difference in stellar population is particularly pronounced in NGC 5719 (Figure \ref{fig:5719}). In the case of IFU observations, the spectral decomposition also allows one to investigate the morphology of the decoupled structures by comparing their light contribution to the reconstructed image of the galaxy. It is therefore possible to unveil the real extension and geometrical properties, such as orientation, ellipticity, and scale length, of the decoupled disks. In Figure \ref{fig:surf_br} we compare the reconstructed images and the surface brightness radial profiles of the counter-rotating disks observed in NGC 3593. The secondary component outshines the main stellar disk within $\sim 10''$. A single-component fit would reveal the presence of the secondary component only in the inner regions, through the kinematic signature of a kinematically decoupled core (KDC, e.g. \citealt{Bender88}). The combination of the luminosity profile and stellar mass-to-light ratio, as inferred from the stellar population analysis, shows that the secondary component has a lower mass density radial profile than the main component, also in the inner regions where it dominates in light. \begin{figure} \centering \hspace{-0.5cm} \includegraphics[width=1.0\columnwidth]{coccato_fig2.ps} \caption{Top left panel: image of the Sa galaxy NGC 5719; the yellow region indicates the footprint of the VIMOS/IFU field of view.
Top central and right panels: line strengths of the Lick indices measured on the main (red) and secondary (blue) stellar components: H$\beta$, combined magnesium-iron index [MgFe]' \citep{Gorgas+90}, Mgb, and average iron index $<$Fe$>$ \citep{Thomas+03}. The black lines show the prediction of single stellar population models \citep{Thomas+11}. The bottom panels show the measured two-dimensional velocity fields of the main (left) and secondary (central) stellar components, and the ionized gas component (right). Adapted from \citet{Coccato+11}.}\label{fig:5719} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{coccato_fig3.ps} \caption{Maps (left and central panels) and radial profiles (right panel) of the surface brightness of the main and the secondary stellar components of NGC 3593. The black, red, and blue solid lines correspond to the total surface brightness and to the surface brightness of the main and the secondary stellar components, respectively. The dashed red and dashed blue lines correspond to the best-fitting exponential disks to the surface brightness of the main and secondary stellar components, respectively. Dashed ellipses represent the boundaries of the elliptical annuli where the median surface brightnesses were computed. Adapted from \citet{Coccato+13}.}\label{fig:surf_br} \end{figure} \section{Conclusions} The stellar population properties and kinematics of the counter-rotating stellar disks are consistent with the formation scenario in which a pre-existing galaxy acquired external gas on retrograde orbits from a companion galaxy or the external medium (e.g. \citealt{Thakar+96}; \citealt{Pizzella+04}). Depending on the amount of acquired material, the new gas removes the pre-existing gas and settles onto a disk in retrograde rotation. The new gas forms the stars of the counter-rotating secondary component. In the case of NGC 5719 we have direct evidence of the on-going acquisition process \citep{Vergani+07}.
Despite their discovery more than 30 years ago, counter-rotating galaxies still represent a challenging subject (see \citealt{Bertola+99, Corsini+14} for reviews). We have shown that the spectral decomposition allows one to measure the properties of the stellar populations of both components, and to probe the predictions of the different formation scenarios. Forthcoming large integral field spectroscopic surveys, such as MaNGA \citep{Bundy+14}, will make it possible to increase the statistics on these objects, providing fundamental clues to constrain their formation scenarios.
\section{Introduction} Thermoelectric systems are heat engines which, by using the electron gas formed by their conduction electrons as a working fluid, directly convert a heat flux into electrical power and vice-versa, depending on the desired operation mode. In the linear regime, the coupling between heat and electricity is embodied in a parameter called the thermopower or Seebeck coefficient, $S$. The principal virtues of thermoelectric heat engines are that their conversion efficiency is not size-dependent and that their operation does not rely on moving parts; the main drawback is their modest energy conversion efficiency, which is at best of the order of 10 \% of the efficiency of the ideal Carnot thermodynamic cycle. Since the first works reported by Seebeck \cite{Seebeck}, Oersted \cite{Oersted}, and Peltier \cite{Peltier}, general interest in thermoelectricity has varied much over time. Some significant milestones in this field are Thomson's thermodynamic analysis \cite{Thomson}, Callen's \cite{Callen} adaptation of Onsager's formalism \cite{Onsager}, and Ioffe's works on device operation \cite{Ioffe}. In particular, Ioffe related the thermoelectric conversion efficiency to a dimensionless parameter denoted $ZT$, where $T$ is the average temperature across the device and $Z$ is a parameter that depends on the material's properties, namely the thermopower $S$, and the electrical and thermal conductivities, $\sigma$ and $\kappa$ respectively: $ZT=\sigma S^2 T/\kappa$ ($\kappa$ entails both electron and lattice thermal conductivities, $\kappa_{\rm e}$ and $\kappa_{\rm lat}$, but we neglect the latter throughout the present work since we are mainly interested in the electronic transport properties \cite{remark1}). Though $ZT$ has a profound meaning on the thermodynamic level, this parameter is widely used as a measure of how well a device operates in terms of energy conversion efficiency.
A good performance is associated with high values of $ZT$ (typically around 1). Enhancement of the performance can be obtained following three routes: $i/$ optimization of the thermoelectric material's properties \cite{handbook}; $ii/$ optimization of device working conditions through appropriate design \cite{handbook,Heikes1961} and electrical and thermal impedance matchings \cite{Apertet1}; and $iii/$ design of new nano- and mesostructures \cite{Dresselhaus,Shakouri,Trocha}. The first route, which essentially consists of finding means to decrease $\kappa$ and increase $\sigma$, has produced a number of very interesting results, but further significant progress seems out of reach \cite{Vinning}, one of the difficulties being that $\sigma$ and the electronic part $\kappa_{\rm e}$ of the thermal conductivity $\kappa$ are tightly linked; this link is the Wiedemann-Franz law for metals \cite{Jonson}. The second line of work is based on the observation that the highest performances are usually obtained in a very limited temperature range for a given material, so that devices must be designed in such a way that they may operate over a large temperature range, which is usually achieved by segmentation; in addition, when seeking maximum efficiency at maximum output power for a thermoelectric generator, one should ensure that the conditions for both thermal and electrical impedance matchings are fulfilled. The third route, which we explore in this paper, was permitted thanks to the rapid and tremendous progress in the field of fabrication of nano- and mesoscopic artificial structures.
The interest in going down to such scales stems from a number of effects that truly enrich the physics of thermoelectricity and pave the way to possibly better device performance; we can mention quantum confinement \cite{Dresselhaus,Balandin,Yuan} and the related modification of the carriers' density of states, mesoscopic fluctuations, breakdown of the Fermi liquid picture \cite{Kane,Wakeham,Benenti}, and limits of validity of standard thermodynamic approaches due to finite-size effects. Further, since for macroscopic systems an increase of thermopower impacts the values of the thermal and electrical conductivities and may not necessarily result in a significant increase of $ZT$, it is also interesting to see how, at the mesoscopic level, the quantum effects pertaining to electron confinement and quantization of the transport coefficients \cite{Wees,Schwab} may modify the interplay between $S$, $\kappa$, and $\sigma$. In this paper we see how to achieve high figures of merit with systems having large thermopower and low thermal conductivity at the same time. We focus on a one-dimensional (1D) system in the presence of off-channel cavities, which perturb the electrical conductivity $\sigma$ or, equivalently, the energy-dependent transmission function ${\mathcal T}$. This system may be fabricated from structures which contain two-dimensional electron gases. The main property of the mesoscopic system we study is the cancellation of the transmission function at a given energy, which we may choose and vary using an external gate; other characteristics are described in Section 2.
Now, the main idea underlying the present work comes from the Cutler-Mott formula \cite{Mott}, which gives the Seebeck coefficient at low temperature: \begin{equation} \label{Mott} S=-\frac{\pi^2}{3} \frac{k_{\rm B}^2 T}{e} \left(\frac{\partial \ln(\sigma)}{\partial E}\right)_{E=E_{\rm F}} \end{equation} \noindent where the derivative of the electrical conductivity is taken at the Fermi energy $E_{\rm F}$. Equation \eqref{Mott}, which is valid for metallic systems such as that under consideration in this paper, shows that when the transmission vanishes, the Seebeck coefficient tends to infinity. In general, when the transmission vanishes, the derivative $\partial\sigma/\partial E$ vanishes too, but the logarithmic derivative in Eq.~\eqref{Mott} nevertheless diverges: if $\sigma \sim (E-E_0)^2$ near a transmission zero $E_0$, then $\partial \ln(\sigma)/\partial E \sim 2/(E-E_0)$. Therefore, if the conduction band contains a transmission-cancelling energy level, the coupling between heat and electrical fluxes is enhanced in the vicinity of this energy, which in turn may yield a significant increase of the figure of merit $ZT$. As will become obvious with the definitions of the transport coefficients given further below, a symmetric transmission function would tend to cancel the contributions from higher and lower energy terms with respect to the Fermi level, and hinder transport, so we seek configurations that make the transmission function profile asymmetric. Indeed, the transmission function should be asymmetric around the Fermi energy so that the hot-reservoir electrons may leave their high energy states and proceed against the applied source-drain bias through the waveguide to the cold reservoir's available states, as noted in Ref.~\cite{Linke}. In this work, we investigate the effects of electron-phonon coupling on the transmission profile, and see whether this coupling impacts positively on the transport coefficients and figure of merit.
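The low-temperature statement can be checked against the exact linear-response integrals given in Section 2; a minimal sketch in units $k_{\rm B}=e=1$ (sign conventions for the electron charge aside), with an invented smooth transmission profile:

```python
import numpy as np
from scipy.integrate import trapezoid

a, mu, Temp = 0.2, 0.1, 0.005         # invented smoothing scale, Fermi level, temperature
E = np.linspace(mu - 0.5, mu + 0.5, 20001)
trans = 1.0 / (1.0 + np.exp(-E / a))  # smooth model transmission, sigma(E) ~ T(E)

# "Exact" S from the Boltzmann integrals of Section 2
w = trans / (4.0 * Temp * np.cosh((E - mu) / (2.0 * Temp)) ** 2)  # T(E) * (-df/dE)
sigma = trapezoid(w, E)
S_exact = trapezoid((E - mu) * w, E) / (Temp * sigma)

# Cutler-Mott / Sommerfeld estimate at the Fermi level
i0 = np.argmin(np.abs(E - mu))
S_mott = (np.pi ** 2 / 3.0) * Temp * np.gradient(np.log(trans), E)[i0]
```

At this temperature the two estimates agree to better than a percent; the agreement degrades, as expected, once $k_{\rm B}T$ becomes comparable to the scale over which the transmission varies.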
\begin{figure} \begin{center} \includegraphics[scale=0.3]{./Cavity.eps} \end{center} \caption{(color online) An off-channel cavity connected to a 1D electron waveguide. The shape of the cavity and its coupling to the waveguide are controlled by external gates.} \label{Cavity} \end{figure} Our phenomenological model is based on a number of assumptions. First, our main framework is that of single-particle theory, so that the ballistic electron transport in the waveguide may be described by a single-band effective mass Hamiltonian. We consider the linear response only, which implies that the temperature gradient across the system is small; incidentally, this assumption provides a small energy scale for the model. The temperature $T$ is sufficiently small so that the Cutler-Mott formula \eqref{Mott} remains valid. Effects of the couplings of the waveguide to the off-channel cavities and the phonons are studied with the nonequilibrium Green's function approach \cite{Datta}, adapting to our purposes recent works on dephasing effects on quantum transport \cite{Cresti,Datta2,Cresti2,Cresti3}. We assume that the system operates in a steady-state regime, and that the two reservoirs, to which the electron waveguide is connected, are perfect. The Fermi energy $E_{\rm F}$ in the whole system is given by the highest level occupied by an electron in either reservoir at zero temperature. The phonons are assumed to be in equilibrium. The article is organized as follows. In section II, we present in detail the basic system under consideration. The system Hamiltonian, the thermoelectric transport coefficients and the transmission function are defined and discussed considering the presence of only one off-channel cavity coupled to one site of the lattice. In section III, we introduce the effects of the coupling of parts of the 1D lattice to a bath of dephasing phonons; for this, we redefine our basic chain model using a decimation procedure.
In section IV, we study the response of the system's thermoelectric transport coefficients to the presence of an additional off-channel cavity. The article ends with concluding remarks, followed by the Appendix, where some detail on the calculation of the self-energies using the decimation procedure is given. \section{Model of the basic system} We consider a 1D electron waveguide, which we describe as a 1D lattice; electron conduction is characterized by hopping between the lattice sites. The waveguide is first connected to one off-channel cavity by coupling to one of the lattice sites. We also assume a small cavity size, so that it contains only one energy level accessible to the conduction electrons \cite{remark2} and, in the absence of the off-channel cavity, the system is uniform and has full transmission: ${\mathcal T}(E)=1, ~\forall E$, at any Fermi energy. The cavity is controlled by external gates which handle its coupling to the electron waveguide and its shape, as shown in Fig.~\ref{Cavity}. \subsection{Hamiltonian} The Hamiltonian of the system can be written in a tight binding model as the sum of three contributions: \begin{equation} H=H_{\ell}+H_{\rm d}+H_{\rm c} \end{equation} \noindent where $H_{\ell}$ is the electron waveguide Hamiltonian, which describes the transport of an electron from $-\infty$ to $+\infty$ in the absence of the off-channel cavity: \begin{equation} H_{\ell}=\sum_{x=-\infty}^{\infty} -t\left(|x\rangle\langle x+1|+|x+1\rangle\langle x|\right) +\epsilon_0 |0\rangle\langle0| \end{equation} \noindent where $|x\rangle$ is a Wannier state at location $x$ along the 1D lattice, and $t$ is the hopping coefficient, which we take as $t = \hbar^2/2m^{\star}a^2$ in the effective mass approximation ($m^{\star}$ is the electron effective mass, and $a$ the lattice spacing). The energy $\epsilon_0$ is that of the site at the center of the lattice ($x=0$).
The dispersion relation derived from the Hamiltonian $H_{\ell}$ reads $E=-2t\cos(k)$, leading to a conduction band $E\in [-2t,+2t]$. The term $H_{\rm d}$ characterizes the off-channel cavity with a single level: \begin{equation} H_{\rm d} =V_{\rm d} |d\rangle\langle d| \end{equation} \noindent where the energy $V_{\rm d}$ of the state $|d\rangle$ can be controlled by an external gate. The term $H_{\rm c}$ characterizes the coupling between the cavity and the site at the center of the electron waveguide to which it is coupled: \begin{equation} H_{\rm c}=t_{\rm c}|0\rangle\langle d|+t_{\rm c}|d\rangle\langle 0| \end{equation} \noindent where the coupling parameter is noted $t_{\rm c}$. \noindent This simple model may represent a multilevel quantum dot when the level spacing is large compared to the range of energies contributing to the transport due to temperature smearing. In all that follows, the hopping coefficient in the leads is taken as $t=1$, such that all the energies in our study are expressed in units of $t$. \subsection{Thermoelectric transport} We consider three transport coefficients: the electrical conductivity $\sigma$, the thermopower $S$, and the electronic thermal conductivity $\kappa_0$, which is related to the zero-electric-current thermal conductivity $\kappa_{\rm e}$ \emph{via}: \begin{equation} \kappa_0=\kappa_{\rm e}+T\sigma S^2 \end{equation} \noindent These three coefficients are given by the solutions of the Boltzmann equation \cite{Mahan,Kim}: \begin{equation}\label{conductance} \sigma= \int \left(-\frac{\partial f}{\partial E}\right) \mathcal{T}(E){\rm d}E \end{equation} \begin{equation}\label{Seebeck} T\sigma S= \int(E-\mu)\left(-\frac{\partial f}{\partial E}\right) \mathcal{T}(E){\rm d}E \end{equation} \begin{equation}\label{Kappa} T\kappa_0= \int (E-\mu)^2 \left(-\frac{\partial f}{\partial E}\right) \mathcal{T}(E){\rm d}E \end{equation} \noindent where $\mu$ is the electrochemical potential of the electron system.
The conductivity $\sigma$ is expressed in units of $e^2/h$, with $h$ being the Planck constant and $e$ the electron charge. The important quantity that determines all the necessary physical quantities for our work is the transmission function $\mathcal{T}$. Knowledge of the three transport coefficients thus permits an analysis of the figure of merit $ZT$, which for a standard system where heat is transported by electrons and the lattice, may be rewritten as: \begin{equation}\label{ZTe} ZT=\frac{\displaystyle \sigma T S^2/\kappa_0}{\displaystyle 1-\sigma T S^2/\kappa_0} \times \frac{1}{1+\kappa_{\rm lat}/\kappa_{\rm e}} \end{equation} \noindent Note that in the present work, we focus on the electronic thermal conductivity $\kappa_0$ because the 1D system is made from 2D electronic gases. So the figure of merit we compute and analyze in this article is the electronic part of $ZT$ given in Eq.~\eqref{ZTe}, neglecting the contribution of $\kappa_{\rm lat}$. For ease of notation, we retain $ZT$ for the electronic part of the figure of merit. Equations \eqref{conductance}, \eqref{Seebeck}, and \eqref{Kappa} show that the electron waveguide, \emph{via} its transmission function ${\mathcal T}$ which we define below, acts as an energy-selective filter \cite{Linke}. As mentioned in the Introduction, for electron transport to take place, it is necessary that the transmission function be asymmetric, and this is clear from Eq.~(\ref{Seebeck}): since $(E-\mu)$ changes sign around the Fermi energy, a symmetric transmission function would yield cancellation. Also note that $(E-\mu)$ corresponds to the heat involved in the transport process, so a high energy conversion efficiency may be obtained on the condition that the transmission profile permits a high electrical flux from hot to cold reservoirs with a minimal heat flux.
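The filtering argument can be made quantitative; a minimal numerical sketch of Eqs. \eqref{conductance}-\eqref{Kappa} (units $k_{\rm B}=e=1$, $\sigma$ in $e^2/h$), with invented transmission profiles, showing that a symmetric transmission gives $S\approx 0$ while an asymmetric one does not:

```python
import numpy as np
from scipy.integrate import trapezoid

def transport(trans, mu, Temp, Emin=-2.0, Emax=2.0, n=20001):
    """sigma, S, kappa_0 and the electronic figure of merit ZT from the
    Boltzmann integrals, for a transmission function trans(E) (k_B = e = 1)."""
    E = np.linspace(Emin, Emax, n)
    w = trans(E) / (4.0 * Temp * np.cosh((E - mu) / (2.0 * Temp)) ** 2)  # T(E)(-df/dE)
    sigma = trapezoid(w, E)
    S = trapezoid((E - mu) * w, E) / (Temp * sigma)
    kappa0 = trapezoid((E - mu) ** 2 * w, E) / Temp
    kappa_e = kappa0 - Temp * sigma * S ** 2   # zero-current thermal conductivity
    ZT = sigma * Temp * S ** 2 / kappa_e
    return sigma, S, kappa0, ZT

flat = lambda E: np.ones_like(E)                   # symmetric about mu = 0
step = lambda E: 1.0 / (1.0 + np.exp(-E / 0.05))   # asymmetric energy filter
_, S_flat, _, _ = transport(flat, mu=0.0, Temp=0.025)
_, S_step, _, ZT_step = transport(step, mu=0.0, Temp=0.025)
```

The flat (symmetric) transmission yields a vanishing thermopower by cancellation of the $(E-\mu)$ integrand, while the step profile yields a finite $S$ and a non-zero electronic $ZT$.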
\subsection{Transmission function} The energy-dependent transmission function may be obtained from the Fisher-Lee formula \cite{FisherLee}: \begin{eqnarray}\label{Landauer} \mathcal{T}(E)=\Gamma_{\rm L} G \Gamma_{\rm R} G^\dagger \end{eqnarray} \noindent where $G$ is the retarded single-particle Green's function of the central site (Wannier state $|0\rangle$ with energy $\epsilon_0$), taking into account the effect of the leads and the off-channel cavity by means of their self-energies: \begin{equation}\label{GF} G=\frac{1}{E-\epsilon_0-\Sigma_{\rm L}-\Sigma_{\rm R}-\frac{\displaystyle t_{\rm c}^2}{\displaystyle E-V_{\rm d}}} \end{equation} \noindent where the self-energies of the left ($\Sigma_{\rm L}$) and the right ($\Sigma_{\rm R}$) leads are taken equal: $\Sigma_{\rm R}=\Sigma_{\rm L}=\Sigma$, and have the following energy dependence \cite{Sasada}: \begin{equation}\label{SelfEnergy} \Sigma=\frac{E}{2}-i\sqrt{1-\left(\frac{E}{2}\right)^2} \end{equation} By ``central'' region, we mean the site $|0\rangle$ connected to the cavity and the leads at the same time. In order to have full transparency ($\mathcal{T}=1$) at all Fermi energies, the energy $\epsilon_0$ should be equal to that of the lead sites: $\epsilon_0=0$. Note that we kept $\epsilon_0$ explicit in Eq.~\eqref{GF} despite its vanishing value, in order to better see the partition of the system and define the left and right self-energies. The quantities $\Gamma_{\rm L}$ and $\Gamma_{\rm R}$ in \eqref{Landauer} characterize the coupling of the central region to the leads, and may be interpreted as linewidths. Their expressions are given by the imaginary part of the self-energy of the leads: \begin{equation}\label{couplingLR} \Gamma_{{\rm L},{\rm R}}=-2\Im \Sigma_{{\rm L},{\rm R}} \end{equation} This partition into a central system and the surrounding leads and cavity is optional and not unique \cite{Darencet}. 
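The closed forms \eqref{SelfEnergy} and \eqref{couplingLR} are easy to check numerically. The sketch below is ours (valid inside the band $|E|\leq 2$, with $t=1$); it also verifies the full transparency $\mathcal{T}=1$ obtained from Eq.~\eqref{GF} when $\epsilon_0=0$ and the cavity is absent:

```python
import numpy as np

def lead_self_energy(E):
    """Retarded self-energy of a semi-infinite 1D lead; valid for |E| <= 2, t = 1."""
    return E / 2.0 - 1j * np.sqrt(1.0 - (E / 2.0) ** 2)

def linewidth(E):
    """Coupling Gamma = -2 Im Sigma."""
    return -2.0 * lead_self_energy(E).imag

def bare_transmission(E):
    """Fisher-Lee transmission without the cavity (epsilon_0 = 0, t_c = 0)."""
    G = 1.0 / (E - 2.0 * lead_self_energy(E))  # scalar Green's function of site |0>
    return linewidth(E) * abs(G) ** 2 * linewidth(E)
```

As expected, the bare waveguide is fully transparent at every energy inside the band.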
Our purpose is to obtain a scalar Green's function, which greatly simplifies the expression of the transmission function. Notice in the expression \eqref{GF} of the Green's function that the off-channel cavity is taken into account through its effects on the central site, which can be understood from the decimation procedures \cite{Grosso,Lambert}; these effects may be encapsulated in a self-energy: \begin{equation} \Sigma_{\rm d}=\frac{t_{\rm c}^2}{E-V_{\rm d}} \end{equation} Substitution of the above quantities into the Fisher-Lee formula \eqref{Landauer} yields the following simple expression for the transmission function of the system: \begin{eqnarray}\label{Transmission} \mathcal{T}(E)=\frac{1}{1+\left[t_{\rm c}^2/\Gamma(E-V_{\rm d})\right]^2} \end{eqnarray} \begin{figure} \begin{center} \includegraphics[scale=0.85]{./Figures_for_articleT.eps} \end{center} \caption{(color online) Transmission profile in the presence of the off-channel cavity (orange curve) and without the cavity (dashed blue). The presence of the cavity drastically changes the transmission of the system and can even lead to cancellation when the Fermi energy equals the cavity level (here $V_{\rm d}=-1.8$).} \label{FigTransmission} \end{figure} The profile of the transmission is given in Fig. \ref{FigTransmission}. We notice that the presence of the cavity drastically changes the profile of the transmission and can even lead to its cancellation, which can be viewed as a quantum destructive interference. The value at which this cancellation appears is directly obtained from the transmission function, Eq.~\eqref{Transmission}: as $E\rightarrow V_{\rm d}$, ${\mathcal T} \rightarrow 0$, and the width of the antiresonance is controlled by the coupling parameter $t_{\rm c}$. 
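The antiresonance of Eq.~\eqref{Transmission} may be sketched as follows (our own code; the default values $V_{\rm d}=-1.8$ and $t_{\rm c}=0.1$ are illustrative):

```python
import numpy as np

def cavity_transmission(E, V_d=-1.8, t_c=0.1):
    """Transmission of the waveguide with one off-channel cavity:
    an antiresonance of width ~ t_c^2 / Gamma at E = V_d."""
    Gamma = 2.0 * np.sqrt(1.0 - (E / 2.0) ** 2)  # lead linewidth, |E| < 2
    return 1.0 / (1.0 + (t_c ** 2 / (Gamma * (E - V_d))) ** 2)
```

Away from the cavity level the waveguide stays essentially transparent, while the transmission vanishes as $E\to V_{\rm d}$.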
A leading-order expansion near the energy $E=V_{\rm d}$ shows that the transmission function is parabolic: \begin{equation} \mathcal{T}(E)\sim \left(\frac{\Gamma}{t_{\rm c}^2}\right)^2 (E-V_{\rm d})^2 \end{equation} \noindent and it becomes clear that the weaker the coupling to the cavity ($t_{\rm c}^2\ll\Gamma$), the sharper the antiresonance. To recover the full transmission of the system (as without the cavity) one may increase the potential $V_{\rm d}$ to infinity, since this implies that the cavity region becomes forbidden to the electrons moving across the waveguide. Decreasing the coupling $t_{\rm c}$ down to $0$ makes the antiresonance narrower but does not eliminate it; and setting $t_{\rm c}=0$ is a rather abrupt way to take the cavity off. We have just seen the effects of the coupling parameter $t_{\rm c}$ and the cavity level $V_{\rm d}$ on the transmission function, and clearly they have no influence on the symmetry of the profile. In order to induce some asymmetry we need to introduce an additional physical process, hence a new parameter. In what follows, we allow the electrons that flow through the waveguide to interact with phonons within two small regions located on both sides of the site coupled to the cavity. \section{Effects of dephasing phonons} \begin{figure*} \includegraphics[scale=0.6]{./GraphDecimation.eps} \caption{(color online) Sketch of the decimation procedure $(a)\longrightarrow(b)$.} \label{Graphdecimation} \end{figure*} In this section, to study the effects of electron-phonon interaction on the transport coefficients, we consider the presence of external phonon baths symmetrically located on both sides of the cavity and connected to $n=2\times N$ sites. The system remains essentially the same as that of the previous section where we considered phase-coherent transport, but we now assume that in some parts of the left and right leads, the electrons are interacting with phonons as shown in Fig. \ref{Graphdecimation}. 
The description of the new situation is done using a decimation procedure, which is described further below. \subsection{Electron-phonon interaction} Following Ref. \cite{Datta}, we make several approximations to keep the calculations tractable: we assume that the phonons, which form a bath of independent oscillators, are in equilibrium; we consider only one-phonon scattering processes (self-consistent Born approximation); the electron-phonon interaction is local and has no effect on the hopping coefficient \cite{Lake}. Further, we simplify our analysis by considering single-mode phonons, with energy $\hbar\omega_0$. The electron-phonon interaction Hamiltonian thus assumes a simple form and reads: \begin{equation} H_{\rm e-ph}=\lambda \sum_{x} |x\rangle \langle x| (b+b^\dagger) \end{equation} \noindent where $b^\dagger$ and $b$ respectively are the second-quantized phonon creation and annihilation operators, and $\lambda$ is the electron-phonon interaction strength, considered here as site-independent. The approach used to obtain the transmission function through the device in the presence of dephasing phonons starts with expressing the retarded (advanced) and lesser (greater) self-energies as done in Refs.~\cite{Cresti,Cresti2,Cresti3,Caroli71,Caroli72}. These quantities are obtained in a closed form within the self-consistent Born approximation~\cite{Cresti,Haug}, which ensures current conservation \cite{Cresti,Wacker}. 
\subsection{Decimation procedure} To simplify the description of the system connected to the phonon baths on both sides of the central site, which is connected to the off-channel cavity, we use a decimation procedure \cite{Grosso,Lambert} as depicted in Figs.~\ref{Graphdecimation}$(a)$ and \ref{Graphdecimation}$(b)$: the red sites are connected to the phonon baths, and the green site is at the energy level of the off-channel cavity; the system is thus formed of two parts: the interacting region connected on its left and right sides to semi-infinite perfect 1D leads. The decimation consists in taking out the central part (five grey sites and the green one), and renormalizing the sites (and their links) to which it was attached. The renormalized sites are shown in blue. This procedure is exact in the sense that both systems $(a)$ and $(b)$ are equivalent for transmission since they share the same Green's function submatrix related to the remaining sites. The sites are renormalized as follows: we denote by $h$ the Hamiltonian of the part we take out and we define the Green's function $g=\frac{1}{E-h}$; then the Hamiltonian of the sites directly coupled to the removed part is: \begin{equation}\label{DecimationEq} h_{\rm r}=h_{\rm r}^{(0)}+\tau^\dagger \frac{1}{E-h}\tau \end{equation} \noindent where $h_{\rm r}$ is the new Hamiltonian of the renormalized sites (a sub-Hamiltonian of the total one), $h_{\rm r}^{(0)}$ is the Hamiltonian of these sites in the absence of coupling to the central part, and $\tau$ is the coupling matrix between the central part and its surrounding sites ($\tau^\dagger$ is its Hermitian conjugate). Equation (\ref{DecimationEq}) clearly shows that the Hamiltonian $h_{\rm r}$ is energy-dependent, and we recognize the form of a self-energy: $\tau^\dagger \frac{1}{E-h}\tau$. 
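For finite blocks, Eq.~\eqref{DecimationEq} amounts to a few lines of linear algebra. In the sketch below (ours), decimating a single cavity site of energy $V_{\rm d}$ coupled by $t_{\rm c}$ reproduces the self-energy $\Sigma_{\rm d}=t_{\rm c}^2/(E-V_{\rm d})$ introduced above:

```python
import numpy as np

def decimate(E, h_r0, h_removed, tau):
    """Fold a removed block into the sites it was coupled to.

    h_r0      : Hamiltonian of the surviving sites without the coupling
    h_removed : Hamiltonian h of the block being taken out
    tau       : coupling matrix between the removed block and the surviving sites
    """
    g = np.linalg.inv(E * np.eye(h_removed.shape[0]) - h_removed)
    return h_r0 + tau.conj().T @ g @ tau
```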
Actually, the decimation procedure may be seen as an application of the concept of self-energy in the Landauer-B\"uttiker formalism to take into account the effects of the reservoirs, or infinite leads (an application of such a procedure may also be found in Refs.~\cite{Adel,Adel2}). Here, since the part of the system which is taken out is finite, the self-energy is real-valued. The aim of the decimation procedure is to lower the dimension of the Green's matrix since it is numerically obtained using a recursive algorithm. Another advantage of this procedure is that it simplifies the equations describing the phonon self-energy $\Sigma^{\rm ph}$, entering the definition of the retarded Green's function: \begin{equation}\label{GreenFunction} G=\frac{1}{E-H-\Sigma^{\rm leads}-\Sigma^{\rm ph}} \end{equation} \noindent where, after application of the decimation procedure, the Hamiltonian $H$ takes the form of an $n \times n$ matrix. The leads are characterized by their self-energy matrix $\Sigma^{\rm leads}$, which has only two nonzero elements: $\Sigma^{\rm leads}_{11} = \Sigma^{\rm leads}_{nn}=\Sigma$. The phonon self-energy is more complicated and it is obtained self-consistently with the Green's function. Within the Born approximation and the assumption that the effect of the phonons is local (their effect is restricted to single sites and does not change the hopping coefficients) the phonon self-energy reads: \begin{equation}\label{SigmaPhonons} \Sigma^{\rm ph}=\gamma^2 \mathcal{D}G \end{equation} \noindent where $\mathcal{D}$ is a superoperator whose application to a matrix sets all its elements, except the diagonal ones, to zero, thus returning a diagonal matrix \cite{Cresti}. The parameter $\gamma$ characterizes the coupling between the phonon baths and the system, hence the dephasing process. 
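The self-consistent loop formed by Eqs.~\eqref{GreenFunction} and \eqref{SigmaPhonons} may be sketched as follows. This is our own minimal implementation, not the recursive algorithm used for the figures: a mask singles out the sites coupled to the phonon baths (only the $2N$ red sites carry a nonzero $\Sigma^{\rm ph}$ in the paper), and a plain fixed-point iteration is assumed to converge, which holds for moderate couplings $\gamma$.

```python
import numpy as np

def scba_green(E, H, Sigma_leads, gamma2, mask, tol=1e-12, max_iter=1000):
    """Solve G = (E - H - Sigma_leads - Sigma_ph)^(-1) self-consistently,
    with Sigma_ph = gamma^2 * D[G] restricted to the sites where mask = 1."""
    n = H.shape[0]
    Sigma_ph = np.zeros((n, n), dtype=complex)
    for _ in range(max_iter):
        G = np.linalg.inv(E * np.eye(n) - H - Sigma_leads - Sigma_ph)
        # the superoperator D keeps only the diagonal of G
        Sigma_new = gamma2 * np.diag(np.diag(G) * mask)
        if np.max(np.abs(Sigma_new - Sigma_ph)) < tol:
            return G, Sigma_new
        Sigma_ph = Sigma_new
    raise RuntimeError("SCBA iteration did not converge")
```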
Note that the Green's matrix \eqref{GreenFunction} and the phonon self-energy \eqref{SigmaPhonons} must be computed using a recursive process starting from an initial value of $\Sigma^{\rm ph}$, the computation being continued until convergence is reached to good precision. \subsection{Transmission function} As explained in the works of Cresti and co-workers \cite{Cresti,Cresti2,Cresti3}, the transmission $\mathcal{T}$ at energy $E$ may be written as the sum of two contributions, which are called the coherent and incoherent terms: \begin{eqnarray} \mathcal{T}(E)=\mathcal{T}_{\rm coh}(E)+\mathcal{T}_{\rm inc}(E) \label{TransiInc} \end{eqnarray} \noindent The coherent term originates in the leads' lesser self-energies \cite{Cresti} and assumes the following form: \begin{equation}\label{Tcoh} \mathcal{T}_{\rm coh}(E)=\Gamma_{\rm L} Q_{1n}\Gamma_{\rm R} \end{equation} \noindent and the incoherent term, originating in the phonons' lesser self-energies \cite{Cresti}, assumes a similar form: \begin{equation}\label{Tinc} \mathcal{T}_{\rm inc}(E)=\Gamma_{\rm L} B_{1n} \Gamma_{\rm R} \end{equation} \noindent In Eqs. \eqref{Tcoh} and \eqref{Tinc}, the couplings $\Gamma_{\rm L}$ and $\Gamma_{\rm R}$ are those defined in Eq.~\eqref{couplingLR}, and the matrix elements $Q_{1n}$ and $B_{1n}$ (in the notation of Cresti \cite{Cresti}) are defined as follows. 
The matrix elements of $Q$ are: \begin{equation}\label{TransiInc2} Q_{ij}=|G_{ij}|^2 \end{equation} \noindent and the matrix $B$ reads: \begin{equation}\label{TransiInc3} B=\gamma^2 \frac{Q^2}{{\mathds 1}-\gamma^2 Q} \end{equation} \begin{figure*} \begin{center}$ \begin{array}{ccc} \includegraphics[width=2.in]{./1606.eps} & \includegraphics[width=2.in]{./1610.eps} & \includegraphics[width=2.in]{./1614.eps}\\ \includegraphics[width=2.in]{./2606.eps} & \includegraphics[width=2.in]{./2610.eps} & \includegraphics[width=2.in]{./2614.eps}\\ \includegraphics[width=2.in]{./3606.eps} & \includegraphics[width=2.in]{./3610.eps} & \includegraphics[width=2.in]{./3614.eps}\\ \end{array}$ \end{center} \caption{(color online) Transmission of the system for various lengths of the electron-phonon interacting regions and various distances between them. From left to right $l$ changes as $6a$, $10a$ and $14a$, and from top to bottom the distance $L$ changes as $L=16a$, $26a$, $36a$, $a$ being the lattice step. For all the figures, $V_{\rm d}=-1.8$ and $\gamma=0.81$. The dashed curves represent the transmission of the system in the presence of phonons but without the off-channel cavity.} \label{Dephasing} \end{figure*} We may now analyze the effects of the coupling of the electron waveguide to phonons on the transmission of electrons through the cavity. The numerical simulations based on an iterative procedure yield the results plotted in Fig.~\ref{Dephasing}. We first discuss the case without connection to the off-channel cavity. Comparison of Figs. \ref{FigTransmission} and \ref{Dephasing} shows that the presence of phonons drastically perturbs the transmission function: the amplitude is significantly reduced and oscillations appear. 
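Given a converged Green's matrix, Eqs.~\eqref{TransiInc}--\eqref{TransiInc3} combine into a few lines (our sketch; sites $1$ and $n$ of the text correspond to Python indices $0$ and $n-1$):

```python
import numpy as np

def total_transmission(G, Gamma_L, Gamma_R, gamma2):
    """Total transmission T = T_coh + T_inc from the Green's matrix G."""
    n = G.shape[0]
    Q = np.abs(G) ** 2                                          # Q_ij = |G_ij|^2
    B = gamma2 * Q @ Q @ np.linalg.inv(np.eye(n) - gamma2 * Q)  # incoherent kernel
    return Gamma_L * (Q[0, n - 1] + B[0, n - 1]) * Gamma_R
```

With $\gamma=0$ the incoherent part vanishes and the Fisher-Lee result is recovered.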
An increase of the size $l$ of the interacting regions reduces the transmission amplitude of the system, which reflects the dephasing effect; an increase of the distance $L$ between the two interacting regions induces more oscillations in the transmission profile: the system mimics the behavior of a Fabry-Perot interferometer. Indeed, the presence of phonons renormalizes the potentials of the lattice sites with which they interact, via the real part of the self-energy $\Sigma^{\rm ph}$, and since there are two distant electron-phonon interacting regions, the system shows oscillations of its transmission function. In the presence of an off-channel cavity, the transmission profile remains essentially the same, except near the cavity level where the transmission is canceled. Note that the transmission always cancels at the cavity level: the electron-phonon coupling, as described in our model, does not affect the location of the antiresonance. The most interesting result we observe is the asymmetry that the transmission profile acquires around the Fermi energy in the presence of dephasing phonons. The transmission of either high- or low-energy electrons can be reduced depending on whether the cancellation occurs at the local maximum or minimum of an oscillation: the system then acts as an energy-dependent carrier filter. \begin{figure*} \begin{center}$ \begin{array}{cccc} \includegraphics[width=1.5in]{./sigman26.eps} & \includegraphics[width=1.5in]{./Ken26.eps} & \includegraphics[width=1.5in]{./Sn26.eps}& \includegraphics[width=1.5in]{./ZTn26.eps} \\ \includegraphics[width=1.5in]{./sigman16.eps} & \includegraphics[width=1.5in]{./Ken16.eps} & \includegraphics[width=1.5in]{./Sn16.eps}& \includegraphics[width=1.5in]{./ZTn16.eps} \\ \end{array}$ \end{center} \caption{(color online) Temperature dependence of the transport coefficients for two different distances between the electron-phonon interacting regions. 
Top panels: $L=26 a$; bottom panels: $L=16a$; and for both cases $l=6a$. The related transmission profiles are shown in Fig.~\ref{Dephasing}.} \label{ThermoelectricFig} \end{figure*} \subsection{Thermoelectric coefficients in the presence of phonons} Once the profile of the transmission is known, the transport coefficients may be directly obtained by numerical integration of Eqs. \eqref{conductance}, \eqref{Seebeck}, and \eqref{Kappa}; these coefficients are shown as functions of temperature in Fig.~\ref{ThermoelectricFig}. We choose to analyze two different situations: one corresponds to a system more transparent to high-energy electrons, and the other is more transparent to low-energy electrons. In both cases we notice that the conductivity $\sigma$ increases with temperature (note that in the linear regime, the temperature variation must remain small). This can be understood as follows: since the antiresonance at the Fermi energy is extremely narrow, a temperature increase allows electrons with energies within a $k_{\rm B}T$ range around the Fermi energy to be transmitted. The behavior of the electronic thermal conductivity $\kappa_{\rm e}$ is similar for the same reasons. Both profiles of thermopower $S$ as functions of temperature $T$ show an extremum. The difference of signs is due to the location of the antiresonance of the transmission function, which corresponds either to a minimum or a maximum of the oscillations around the Fermi energy: in the upper row of Fig.~\ref{ThermoelectricFig}, the system is more transparent to electrons with energy $E < E_{\rm F}$ whereas, in the lower row, it is more transparent to electrons with energy $E > E_{\rm F}$. This change of sign for the thermopower could also be anticipated from the formula \eqref{Seebeck}. Recall that this asymmetry is due to the modulation created by the two electron-phonon interacting regions. 
Note that while the values of the figure of merit $ZT$ are modest in both cases, their shapes indicate that with appropriate adjustments of the energy location of the peak and of the passband, very interesting developments are possible since the energy spectrum of the thermoelectric system is tunable. Indeed, this is a key point as regards the thermodynamic coupling of the system to its environment or other systems: minimization of the mismatch of the respective energy spectra of two systems permits a lowering of entropy production as these two systems get coupled (this is well known in the case of monochromatic photons interacting with a black body \cite{DeVos}). \section{System connected to two off-channel cavities} The analysis so far has concerned various situations for which the asymmetry induced by the electron-phonon interaction is typically at most of the order of the amplitude of the oscillations. We now examine whether a stronger asymmetry is possible by simply connecting an additional off-channel cavity to the electron waveguide, still in the presence of dephasing phonons as in the previous case. The two cavities are assumed to be small, and each contains only one energy level that is tunable with external gates. The presence of a second off-channel cavity adds a new parameter to the problem. When connected to the 1D system, the additional cavity also induces a cancellation of the transmission at an energy equal to that of its level. We now assume that the Fermi energy is equal to the energy level of one of the cavities and let the other level vary freely. 
The Hamiltonian of the new system is quite similar to that of the previous case: $H=H_{\ell} + H_{\rm c} +H_{\rm d} + H_{\rm e-ph}$, with \begin{eqnarray} H_{\ell} &=& \sum_{x=-\infty}^{+\infty} -t(|x\rangle\langle x+1|+|x+1\rangle\langle x|)\\ H_{\rm d} &=& V_{\rm d} |d\rangle\langle d|+\epsilon_{\rm d} |d^\prime\rangle\langle d^\prime|\\ H_{\rm c} &=& t_{\rm c}|0\rangle\langle d|+t_{\rm c}|l\rangle\langle d^\prime|+ \mbox{h.c.} \end{eqnarray} \noindent where $H_{\ell}$ is the Hamiltonian of the electron waveguide, $H_{\rm d}$ is the Hamiltonian of the two cavities with $V_{\rm d}$ and $\epsilon_{\rm d}$ being their levels, and $ H_{\rm c}$ is the Hamiltonian that characterizes the coupling between the waveguide and the two cavities. For simplicity, we choose the same coupling parameter $t_{\rm c}$ for both cavities. Here $l$ represents the index of the site to which the second level is connected (the first cavity is connected to the site with index $0$). The transmission function is obtained from Eqs. \eqref{TransiInc}, \eqref{TransiInc2}, and \eqref{TransiInc3}. By varying the level $\epsilon_{\rm d}$, we change the profile of the transmission ${\mathcal T}$ and obtain situations where low- or high-energy electrons in the vicinity of the Fermi level cannot be transmitted due to the overlap of the two antiresonances, as can be seen in Fig. \ref{ProfileOfTrans}. The simulation is performed for a system with a non-interacting central region of $n=67$ sites at a Fermi energy $E_{\rm F}=-1.8$ (equal to the cavity level $V_{\rm d}$). \begin{figure} \begin{center} \includegraphics[scale=0.35]{Trans2cavites.eps} \end{center} \caption{(color online) Transmission profile in the presence of two cavities. The level of the first cavity is chosen to be $V_{\rm d} = E_{\rm F}=-1.8$. 
The energy level of the other cavity is $\epsilon_{\rm d}$ such that: $(\epsilon_{\rm d}-E_{\rm F})/t_{\rm c}^2 = -2.24$ (blue curve); $(\epsilon_{\rm d}-E_{\rm F})/t_{\rm c}^2=-0.64$ (green curve); and $(\epsilon_{\rm d}-E_{\rm F})/t_{\rm c}^2 = 0.32$ (red curve); all with $t_{\rm c}=0.025$. The overlap of the two cancellations yields a large zero-transmission region. The green curve corresponds to the maximum of $ZT$ in Fig.~\ref{ThermoelectricCoef}. The central noninteracting region contains $n=67$ sites and $\gamma=0.81$. The distance between the two cavities is $50 a$.} \label{ProfileOfTrans} \end{figure} In order to reach significant values of $ZT$, we investigate the situations corresponding to various differences between the two cancellation energies, $\epsilon_{\rm d}-V_{\rm d}$, and we plot the figure of merit $ZT$ as a function of the temperature $T$. The numerical simulations are based on the same procedure as for the previous section. We only have to replace the Hamiltonian of the central system by the new one including the additional cavity and iterate the algorithm in order to obtain in a self-consistent fashion the Green's matrix $G$ and the phonon self-energy $\Sigma^{\rm ph}$. The results are shown in Fig.~\ref{ThermoelectricCoef}. The figure of merit $ZT$ has a maximum at $ZT=5.38$ obtained for $(\epsilon_{\rm d}-E_{\rm F})/t_{\rm c}^2=-0.64$. This maximum and its location depend on the system's characteristics (the central region and the phonon regions). We notice the presence of another local (but lower) maximum in the $ZT$ curve. We also computed the thermopower $S$ as a function of the same parameters ($\epsilon_{\rm d}-E_{\rm F}$ and $T$). The Seebeck coefficient has a maximum which corresponds to the maximum of $ZT$ and a minimum which corresponds to the local maximum of the $ZT$ factor. 
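In the phase-coherent limit ($\gamma=0$), the overlap of the two antiresonances can be sketched by folding both cavity levels into the energy-dependent on-site potentials $t_{\rm c}^2/(E-V_{\rm d})$ and $t_{\rm c}^2/(E-\epsilon_{\rm d})$, in the spirit of the decimation procedure of the previous section. The code below is ours; the chain length and the site indices are illustrative, not those of the simulations.

```python
import numpy as np

def two_cavity_transmission(E, V_d, eps_d, t_c=0.025, n=51, sites=(0, 50)):
    """Phase-coherent transmission of a chain with two decimated cavity levels."""
    # nearest-neighbor chain, t = 1
    H = -(np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)).astype(complex)
    H[sites[0], sites[0]] += t_c ** 2 / (E - V_d)    # first cavity
    H[sites[1], sites[1]] += t_c ** 2 / (E - eps_d)  # second cavity
    Sigma = E / 2.0 - 1j * np.sqrt(1.0 - (E / 2.0) ** 2)  # lead self-energy, |E| < 2
    H[0, 0] += Sigma
    H[n - 1, n - 1] += Sigma
    G = np.linalg.inv(E * np.eye(n) - H)
    Gamma = -2.0 * Sigma.imag
    return Gamma * abs(G[0, n - 1]) ** 2 * Gamma
```

The transmission vanishes at both cavity levels and stays close to unity elsewhere, reproducing the two cancellations of Fig.~\ref{ProfileOfTrans} in the absence of phonons.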
The sign of the Seebeck coefficient changes as $\epsilon_{\rm d}$ varies with respect to the Fermi energy: the thermopower is positive when low-energy electrons in the vicinity of the Fermi level are not transmitted (see Fig. \ref{ThermoelectricCoef}) whereas it becomes negative when high-energy electrons are not transmitted. Note that it is also possible that, for the same configuration of the system, the Seebeck coefficient changes sign because of temperature smearing, which tends to give more weight to either lower- or higher-energy electrons as the temperature increases. \begin{figure*} \begin{center}$ \begin{array}{ccc} \includegraphics[width=2.in]{./2cavities_ZT.eps}& \includegraphics[width=2.in]{./2cavities_Sk.eps}& \includegraphics[width=2.in]{./2cavities_Ke.eps} \end{array}$ \end{center} \caption{(color online) Transport coefficients as functions of the energy difference $\epsilon_{\rm d} - E_{\rm F}$, and temperature $T$.} \label{ThermoelectricCoef} \end{figure*} \section{Summary and concluding remarks} Our study of the thermoelectric transport coefficients of a 1D electron waveguide connected to one and then two off-channel cavities, in the presence of dephasing phonons, is based on the Landauer-B\"uttiker formalism and nonequilibrium Green's function techniques. We showed how to obtain a strongly asymmetric shape for the transmission function: first by coupling parts of the waveguide connected to an off-channel cavity to phonons, and then by coupling this system to an additional off-channel cavity. In our case, the transmission profile is characterized by oscillations, which trigger changes of sign of the thermopower depending on whether the location of the antiresonance of the transmission function corresponds to a minimum or a maximum of these oscillations around the Fermi energy. 
This sign change shows that the system acts as an energy-selective filter: either low-energy electrons in the vicinity of the Fermi level are not transmitted (the case of positive thermopower), or higher-energy electrons are not transmitted (the case of negative thermopower). As the transmission function vanishes we observe an enhancement of the thermoelectric coefficients. This result is in accordance with that of Ref.~\cite{Trocha}, where a strong enhancement of thermoelectric coefficients due to Coulomb correlations and destructive interference effects (hence a vanishing of the transmission) was found in a double quantum dot system. It is also interesting to see that while an enhancement of $ZT$ was found for transmission profiles that exhibit a sharp antiresonance, a significant enhancement is also found for a narrow transmission resonance \cite{Linke}, which is the opposite configuration. To some extent, this recalls Babinet's principle in optics. Finally, as pointed out in the Introduction, from a thermodynamic viewpoint a thermoelectric system is a thermal engine to which a $ZT$-dependent energy conversion efficiency is associated. Recently, some aspects of the question of efficiency at maximum power of low-dimensional thermoelectric systems have been discussed in terms of performance for quantum dots and 1D ballistic conductors \cite{Linke,Linke2,Grifoni}. These works pertain to the more general framework of finite-time thermodynamics \cite{Andresen1,Andresen2,Andresen3,Tu}, a field which contains a number of interesting open questions at the macroscopic level \cite{vandenBroeck1,Apertet2,Apertet3,vandenBroeck2}, which must also be addressed at the mesoscopic level; these questions are related to the location of irreversibility sources in the system as a whole and the impact on the efficiency at maximum power, two particular cases being those of endoreversible and exoreversible engines. 
The question of irreversibility at the mesoscopic level is certainly not trivial since this requires a careful characterization of the coupling of a system to its environment. \begin{acknowledgments} We are pleased to thank Y. Apertet for a careful reading of the manuscript and insightful comments. This work was supported by a grant from the R\'egion Basse Normandie. We also acknowledge partial funding from the LabEx EMC3. \end{acknowledgments}
\section{Rediscovery of Schr\"oder's integral formula} In a recent article in the {\it Bulletin of the Korean Mathematical So\-ci\-ety} \cite{qi_02}, several results concerning the Bernoulli numbers of the second kind were presented. We recall that these numbers (OEIS \seqnum{A002206} and \seqnum{A002207}), which we denote below by $G_n$, are rational \begin{equation}\notag \begin{array}{llll} \displaystyle G_1\,=\,+\dfrac{1}{2} \,,\quad& G_2\,=\,-\dfrac{1}{12} \,,\quad& G_3\,=\,+\dfrac{1}{24} \,,\quad& G_4\,=\,-\dfrac{19}{720}\,, \\[5mm] G_5\,=\,+\dfrac{3}{160} \,,\quad& G_6\,=\,-\dfrac{863}{60480} \,,\quad& G_7\,=\,+\dfrac{275}{24192} \,,\quad& G_8\,=\,-\dfrac{33953}{3628800} \,,\quad \ldots \end{array} \end{equation} \vskip 0.07in \noindent and were introduced by the Scottish mathematician and astronomer James Gregory in 1670 in the context of an area-interpolation formula. Subsequently, they were rediscovered by many famous mathematicians, including Gregorio Fontana, Lorenzo Mascheroni, Pierre-Simon Laplace, Augustin-Louis Cauchy, Jacques Binet, Ernst Schr\"oder, Oskar Schl\"omilch, Charles Hermite and many others. Because of these numerous rediscoveries, these numbers do not have a standard name, and in the literature they are also referred to as {\it Gregory's coefficients}, {\it (reciprocal) logarithmic numbers}, {\it Bernoulli numbers of the second kind}, normalized {\it generalized Bernoulli numbers} $B_n^{(n-1)}$ and normalized {\it Cauchy numbers of the first kind} $C_{1,n}$. Usually, these numbers are defined either via their generating function \begin{eqnarray} \label{eq32} \frac{u}{\ln(1+u)} = 1+\sum _{n=1}^\infty G_n \, u^n ,\qquad|u|<1\,, \end{eqnarray} or explicitly \begin{eqnarray}\notag G_n\,=\,\frac{C_{1,n}}{n!}\,=\lim_{s\to n}\frac{-B_s^{(s-1)}}{\,(s-1)\,s!\,}= \,\frac{1}{n!}\! \int\limits_0^1\! x\,(x-1)\,(x-2)\cdots(x-n+1)\, dx\,,\qquad n\in\mathbb{N}\,. 
\end{eqnarray} It is well known that $G_n$ are alternating $\,G_n=(-1)^{n-1}|G_n|\,$ and decreasing in absolute value; they behave as $\,\big(n\ln^2 n\big)^{-1}\,$ at $n\to\infty$ and may be bounded from below and from above accordingly to formulas (55)--(56) from \cite{iaroslav_08}. For more information about these important numbers, see \cite[pp.\ 410--415]{iaroslav_08}, \cite[p.\ 379]{iaroslav_09}, and the literature given therein (nearly 50 references). \begin{figure}[!t] \centering \includegraphics[width=0.83\textwidth]{Schroder-integral-formula.eps} \caption{A fragment of p.\ 112 from Schr\"oder's paper \cite{schroder_01}. Schr\"oder's $C_n^{(-1)}$ are exactly our $G_n$.} \label{hkgcwuih} \end{figure} Now, the first main result of \cite[p.\ 987]{qi_02} is Theorem 1.\footnote{Our $G_n$ are exactly $b_n$ from \cite{qi_02} and $\frac{c_{n,1}^{(1)}}{n!}$ from \cite[Sect.\ 5]{chikhi_01}. Despite a venerable history, these numbers still lack a standard notation and various authors may use different notation for them.} It states: {\it the Bernoulli numbers of the second kind may be represented as follows} \begin{equation} G_n\,=\,(-1)^{n+1}\!\int\limits_1^\infty \!\!\frac{dt}{\big(\ln^2 (t-1)+\pi^2\big)\,t^n}\,,\qquad n\in\mathbb{N}\,. \end{equation} The same representation appears in a slightly different form\footnote{Put $t=1+u$.} \begin{equation}\label{jm23nd2d} G_n\,=\,(-1)^{n+1}\!\int\limits_0^\infty \!\!\frac{du}{\big(\ln^2 u +\pi^2\big)\,(u+1)^n}\,,\qquad n\in\mathbb{N}\,, \end{equation} in \cite[pp.\ 473--474]{coffey_02} and \cite[Sect.\ 5]{chikhi_01}, and is called {\it Knessl's representation} and {\it the Qi integral representation} respectively. Furthermore, various internet sources provide the same (or equivalent) formula, either with no references to its author or with references to different modern writers and/or their papers. 
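Representation \eqref{jm23nd2d} is in any case easy to verify numerically against the recurrence that follows from the generating function \eqref{eq32}, namely $G_0=1$ and $\sum_{k=0}^{n}(-1)^k G_{n-k}/(k+1)=0$ for $n\geq1$. The Python sketch below is ours; the substitution $u=e^v$ and the truncation $|v|\leq 60$ are our choices, accurate for $n\geq2$.

```python
import numpy as np
from fractions import Fraction

def gregory(n_max):
    """Gregory coefficients G_n, exactly, from the generating-function
    recurrence: G_0 = 1, G_n = sum_{k=1}^{n} (-1)^(k+1) G_{n-k}/(k+1)."""
    G = [Fraction(1)]
    for n in range(1, n_max + 1):
        G.append(sum(Fraction((-1) ** (k + 1), k + 1) * G[n - k]
                     for k in range(1, n + 1)))
    return G

def schroder_integral(n, vmax=60.0, npts=400001):
    """The Schroder/Knessl/Qi integral representation of G_n, after
    substituting u = exp(v); a plain Riemann sum suffices for n >= 2."""
    v, dv = np.linspace(-vmax, vmax, npts, retstep=True)
    f = np.exp(v) / ((v ** 2 + np.pi ** 2) * (1.0 + np.exp(v)) ** n)
    return (-1) ** (n + 1) * np.sum(f) * dv
```

For instance, the computed integral for $n=2$ agrees with $G_2=-1/12$ to high precision.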
However, the integral representation in question is not novel and is due neither to Knessl nor to Qi and Zhang; in fact, this representation is a rediscovery of an old result. In a little-known paper of the German mathematician Ernst Schr\"oder \cite{schroder_01}, written in 1879, one may easily find exactly the same integral representation on p.\ 112; see Fig.\ \ref{hkgcwuih}. Moreover, since this result is not difficult to obtain, it is possible that the same integral representation was obtained even earlier. \section{Simple derivation of Schr\"oder's integral formula by means of complex integration} Schr\"oder's integral formula \cite[p.\ 112]{schroder_01} may, of course, be derived in various ways. Below, we propose a simple derivation of this formula based on the method of contour integration. If we set $u=-z-1$, then equality \eqref{eq32} may be written as \begin{eqnarray}\notag \frac{z+1}{\ln z - \pi i} = -1+\sum _{n=1}^\infty \big|G_n\big| \, (z+1)^n ,\qquad|z+1|<1\,. \end{eqnarray} Now considering the following line integral along a contour $C$ (see Fig.\ \ref{kc30jfd}), \begin{figure}[!t] \centering \includegraphics[width=0.35\textwidth]{contour.eps} \caption{Integration contour $C$ ($r$ and $R$ are radii of the small and big circles respectively, where $r\ll1$ and $R\gg1$).} \label{kc30jfd} \end{figure} where $n\in\mathbb{N}$, and then letting $R\to\infty\,$, $r\to0$, we have by the residue theorem \begin{eqnarray} && \displaystyle \ointctrclockwise\limits_{C} \frac{\,dz\,}{\,(1+z)^n \, (\ln z - \pi i)\,}\,=\, \int\limits_{r}^R \! \ldots \, dz \, + \int\limits_{C_R} \!\ldots \, dz \, + \int\limits_{R}^r \!\ldots \, dz \, + \int\limits_{C_r} \! \ldots \, dz \, \stackrel{\substack{R\to\infty \\ r\to0}}{=} \notag\\[2mm] && \displaystyle \quad = \, \int\limits_0^\infty \!\left\{\frac{1}{\ln x - \pi i} - \frac{1}{\ln x + \pi i} \right\}\cdot\frac{dx}{(1+x)^n} \, =\,2\pi i \! \int\limits_0^\infty \!! 
\frac{1}{\,(1+x)^n} \cdot\frac{dx}{\,\ln^2 x +\pi^2\,}\,= \qquad\notag\\[3mm] && \displaystyle \quad \notag =\,2\pi i \! \res\limits_{z=-1}\! \frac{1}{\,(1+z)^n \, (\ln z - \pi i)\,} \,=\,\frac{2\pi i}{n!}\cdot \left.\frac{d^n}{dz^n}\frac{z+1}{\,\ln z - \pi i\,}\right|_{z=-1} \!\!\! = \,2\pi i \, \big|G_n\big|\,, \end{eqnarray} since \begin{eqnarray} & \displaystyle \left| \,\int\limits_{C_R} \! \frac{\,dz\,}{\,(1+z)^n \, (\ln z - \pi i)\,} \, \right| \,=\, O\left(\!\frac{1}{\,R^{n-1}\ln R\,}\!\right)=o(1)\,, \qquad & R\to\infty\,,\qquad n\geq1\,,\notag \\[5mm] & \displaystyle \left| \,\int\limits_{C_r} \! \frac{\,dz\,}{\,(1+z)^n \, (\ln z - \pi i)\,} \, \right| \,=\, O\left(\!\frac{r}{\,\ln r\,}\!\right)=o(1)\,, \qquad & r\to0\,, \notag \end{eqnarray} and because at $z=-1$ the integrand of the contour integral has a pole of the $(n+1)$th order. This completes the proof. Note that the above derivations are valid only for $n\geq1$, and so is Schr\"oder's integral formula, which may also be regarded as one of the generalizations of $G_n$ to continuous values of $n$. \section{Several remarks on the asymptotics for the Bernoulli numbers of the second kind} The first-order asymptotics $\,|G_n|\sim\big(n\ln^2 n\big)^{-1}\,$ at $n\to\infty$ are usually credited to Johan F.\ Steffensen \cite[pp.\ 2--4]{steffensen_01}, \cite[pp.\ 106--107]{steffensen_02}, \cite[p.\ 29]{norlund_01}, \cite[p.\ 14, Eq.\ (14)]{davis_02}, \cite{nemes_01}, who found it in 1924.\footnote{The same first-order asymptotics also appear in \cite[p.\ 294]{comtet_01}, but without the source of the formula.} However, in our recent work \cite[p.\ 415]{iaroslav_08} we noted that exactly the same result appears in Schr\"oder's work written 45 years earlier, see Fig.\ \ref{hg28g3dc}, and the order of magnitude of the closely related numbers is contained in a work of Jacques Binet dating back to 1839 \cite{binet_01}.\footnote{By the ``closely related numbers'' we mean the so-called {\it Cauchy numbers of 
the second kind} (OEIS \seqnum{A002657} and \seqnum{A002790}), and numbers $I'(k)$, see \cite[pp.\ 410--415, 428--429]{iaroslav_08}. The comment going just after Eq.\ (93) \cite[p.\ 429]{iaroslav_08} is based on the statements from \cite[pp.\ 231, 339]{binet_01}. The Cauchy numbers of the second kind $C_{2,n}$ and Gregory's coefficients $G_n$ are related to each other via the relationship $\,nC_{2,n-1}-C_{2,n}=n!\,|G_n|\,$, see \cite[p.\ 430]{iaroslav_08}.} In 1957 Davis \cite[p.\ 14, Eq.\ (14)]{davis_02} improved this first-order approximation slightly by showing that $\,|G_n|\sim\Gamma(1+\xi) \big(n\ln^2 n + n\pi^2\big)^{-1}\,$ at $\,n\to\infty\,$ for some $\,\xi\in[0,1]\,$, without noticing that 7 years earlier S.\ C.\ Van Veen had already obtained the complete asymptotics for them \cite[p.\ 336]{van_veen_01}, \cite[p.\ 29]{norlund_01}. Equivalent complete asymptotics were recently rediscovered in slightly different forms by Charles Knessl \cite[p.\ 473]{coffey_02}, and later by Gerg\H{o} Nemes \cite{nemes_01}. An alternative demonstration of the same result was also presented by the author \cite[p.\ 414]{iaroslav_08}. \begin{figure}[!t] \centering \includegraphics[width=0.5\textwidth]{Schroder-asymptotic-formula.eps} \caption{A fragment of p.\ 115 from Schr\"oder's paper \cite{schroder_01}.} \label{hg28g3dc} \vskip 0.25in \end{figure} \section{Acknowledgments} The author is grateful to Jacqueline Lorfanfant from the University of Strasbourg for sending a scanned version of \cite{schroder_01}. \vskip 0.25in
\section{Superradiant phase transition in the quantum Rabi model} In the limit of $\Omega/\omega\rightarrow \infty$, the Rabi Hamiltonian (hereafter, $\hbar=1$) \begin{align}\label{eqS1} H_{R}=\omega a^{\dag}a\left(|e\rangle\langle e|+|g\rangle\langle g|\right)+\Omega |e\rangle\langle e|+g(a^{\dag}+a)(|e\rangle\langle g|+|g\rangle\langle e|), \end{align} can be diagonalized using a Schrieffer-Wolff transformation \cite{Scully1997Book,Agarwal2012Book}. After applying the unitary operator \begin{align}\label{S1} U_{\rm{SW}}=\exp{\left[\frac{g}{\Omega}\left(a^{\dag}+a\right)(|e\rangle\langle g|-|g\rangle\langle e|)\right]}, \end{align} Eq.~(\ref{eqS1}) becomes \begin{align} H_{\rm{np}}=\omega a^{\dag}a\left(|e\rangle\langle e|+|g\rangle\langle g|\right)+{\Omega}|e\rangle\langle e|+\frac{\omega g_{c}^{2}}{4}\left(a+a^{\dag}\right)^{2}(|e\rangle\langle e|-|g\rangle\langle g|)+O(g_{c}^{2}\omega/\Omega), \end{align} where $g_{c}=2g/\sqrt{\Omega\omega}$ is a normalized coupling parameter and $O(g_{c}^{2}\omega/\Omega)$ denotes higher-order qubit-resonator coupling terms that can be neglected in the limit of $\Omega/\omega\rightarrow \infty$. The Hamiltonian $H_{\rm{np}}$ provides a faithful description of the system ground state in the normal phase of the model. Performing the projection $\langle g|H_{\rm np}|g\rangle$ for $g_{c}<1$, we obtain \begin{align} H_{\rm{np}}=\omega a^{\dag}a-\frac{\omega g_{c}^{2}}{4}\left(a+a^{\dag}\right)^{2}, \end{align} which can be diagonalized to $S^{\dag}(r_{\rm{np}})H_{\rm{np}}S(r_{\rm{np}})=\varepsilon_{\rm{np}}a^{\dag}a$ in the squeezed-light frame with \begin{align} S(r_{\rm{np}})=\exp\left[\left(r_{\rm{np}}/2\right)\left(a^{\dag 2}-a^{2}\right)\right],\ \ \ \ \ \ \ \ \ \ {\rm and} \ \ \ \ \ \ \ \ \ r_{\rm{np}}=-(1/4)\ln\left(1-g_{c}^{2}\right). \end{align} Hence, we obtain the ground eigenstate of the Hamiltonian $H_{\rm{np}}$ as $S(r_{\rm{np}})|0\rangle$ and the excitation energy $\varepsilon_{\rm{np}}=\omega\sqrt{1-g_{c}^{2}}$. 
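These normal-phase predictions are easy to check by exact diagonalization. The sketch below (an editorial illustration; the Fock cutoff $N$ and the finite ratio $\Omega/\omega=100$ are our own choices, so agreement holds only up to finite-frequency corrections) compares the lowest gap of the truncated Rabi Hamiltonian with $\omega\sqrt{1-g_c^2}$:

```python
import numpy as np

def rabi_gap(gc, ratio=100.0, w=1.0, N=60):
    """Lowest excitation gap of the truncated Rabi Hamiltonian
    H = w a^dag a + W |e><e| + g (a + a^dag) sigma_x, with g_c = 2g/sqrt(W w)."""
    W = ratio * w
    g = 0.5 * gc * np.sqrt(W * w)
    a = np.diag(np.sqrt(np.arange(1, N)), 1)      # truncated annihilation operator
    I2, IN = np.eye(2), np.eye(N)
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    ee = np.array([[1.0, 0.0], [0.0, 0.0]])       # |e><e|
    H = w * np.kron(I2, a.T @ a) + W * np.kron(ee, IN) + g * np.kron(sx, a + a.T)
    E = np.linalg.eigvalsh(H)
    return E[1] - E[0]

gap = rabi_gap(0.5)
pred = np.sqrt(1 - 0.5 ** 2)   # eps_np / w for g_c = 0.5
```

At $\Omega/\omega=100$ the numerical gap already agrees with $\varepsilon_{\rm np}$ to within a few percent.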
At the critical point $g_{c}=1$, the energy spectrum of the system collapses (see the left panel of Fig.~\ref{figS0}). For clarity, we show, in the right panel of Fig.~\ref{figS0}, the energy spectrum near the critical point. Because of the finite frequencies $\Omega$ and $\omega$, there are still small energy gaps among the eigenlevels. In the limit $\Omega/\omega\rightarrow \infty$, these gaps vanish. Correspondingly, the ground eigenstate of the Rabi Hamiltonian $H_{R}$ for $g_{c}<1$ in the laboratory frame is \begin{align} |\psi_{\rm np}\rangle=S(r_{\rm np})|0\rangle|g\rangle. \end{align} Here, the excitation energy $\varepsilon_{\rm{np}}$ is a positive real number for $g_{c}<1$, i.e., in the normal phase, and vanishes at $g_{c}=1$. \begin{figure} \centering \scalebox{0.55}{\includegraphics{Energy_spectrum.pdf}} \caption{Energy spectrum of the quantum Rabi Hamiltonian $H_{R}$ in Eq.~(\ref{eqS1}). We set the eigenvalue of the ground eigenstate $|E_{0}\rangle$ to $E_{0}=0$. The qubit frequency is $\Omega=10^{6}\omega$. } \label{figS0} \end{figure} For $g_{c}>1$, the number of photons in the cavity field becomes proportional to the ratio $\Omega/\omega$ of the atomic transition and cavity frequencies. In this case, the subspace $\left\{|0\rangle|g\rangle,|1\rangle|g\rangle,\ldots\right\}$ is no longer the low-energy subspace of the quantum Rabi Hamiltonian. 
To capture the physics of the superradiant phase, we displace the bosonic mode in the Rabi Hamiltonian $H_{R}$ with the displacement operators \begin{align}\label{S2} D(\pm\alpha)=\exp{\left[\alpha\left(a^{\dag}-a\right)\right]},\ \ \ \ \ \ \ {\rm{with}} \ \ \ \ \ \ \ \ \ \alpha=\frac{1}{2}\sqrt{\frac{\Omega}{\omega}\left(g_{c}^{2}-g_{c}^{-2}\right)}, \end{align} and obtain \begin{align}\label{S3} H'_{R}(\pm\alpha)=&D^{\dag}(\pm\alpha)H_{R}D({\pm\alpha})\cr =&\omega(a^{\dag}\pm\alpha)(a\pm\alpha)\left(|e\rangle\langle e|+|g\rangle\langle g|\right)-g(a+a^{\dag})(|g\rangle\langle e|+|e\rangle\langle g|)\cr&+\Omega|e\rangle\langle e|\mp2g\alpha\left(|g\rangle\langle e|+|e\rangle\langle g|\right).\cr \end{align} The terms in Eq.~(\ref{S3}) containing only the atomic part, i.e., \begin{align} H_{\pm}=\Omega|e\rangle\langle e|\mp2g\alpha(|g\rangle\langle e|+|e\rangle\langle g|), \end{align} can be diagonalized with the eigenstates \begin{align} |\uparrow\rangle_{\pm}=\cos(\theta)|e\rangle\pm\sin(\theta)|g\rangle, \ \ \ \ \ \ \ |\downarrow\rangle_{\pm}=\mp\sin(\theta)|e\rangle+\cos(\theta)|g\rangle, \end{align} corresponding to the eigenvalues \begin{align} \mathcal{E}_{\uparrow}^{\pm}=\frac{\Omega}{2}+ \frac{1}{2}\sqrt{\Omega^2+16g^2\alpha^2}=\frac{\Omega}{2}\left(1+g_{c}^2\right)\ \ \ \ \ \ \ \ {\rm and}\ \ \ \ \ \ \ \ \mathcal{E}_{\downarrow}^{\pm}=\frac{\Omega}{2}- \frac{1}{2}\sqrt{\Omega^2+16g^2\alpha^2}=\frac{\Omega}{2}\left(1-g_{c}^2\right), \end{align} where \begin{align} \theta=\frac{1}{2}\arctan\left(\frac{2g\alpha}{\Omega}\right). \end{align} Then, we can define a set of Pauli matrices using the new atomic basis $\left\{|\uparrow\rangle_{\pm},|\downarrow\rangle_{\pm}\right\}$: \begin{align} \sigma_{z}^{\pm}=|\uparrow\rangle_{\pm}\langle\uparrow|-|\downarrow\rangle_{\pm}\langle\downarrow|, \ \ \ \ \sigma_{x}^{\pm}=|\uparrow\rangle_{\pm}\langle\downarrow|+|\downarrow\rangle_{\pm}\langle\uparrow|. 
\end{align} The Hamiltonian in Eq.~(\ref{S3}) becomes \begin{align} H'_{R}=\omega a^{\dag}a\left(|\uparrow\rangle_{\pm}\langle \uparrow|+|\downarrow\rangle_{\pm}\langle\downarrow|\right)-g\cos(2\theta)(a^{\dag}+a)\sigma_{x}^{\pm}+\frac{\Omega g_{c}^2}{2}\sigma_{z}^{\pm}+\frac{\Omega}{2}+\omega\alpha^2, \end{align} where a block-diagonal perturbation term is neglected using $\alpha$ defined in Eq.~(\ref{S2}). Employing the same procedure used to derive $H_{\rm np}$, we obtain \begin{align}\label{eqS16} H_{\rm sp}=\omega a^{\dag}a\left(|\uparrow\rangle_{\pm}\langle \uparrow|+|\downarrow\rangle_{\pm}\langle\downarrow|\right)+\frac{\omega}{4g_{c}^{4}}\left(a^{\dag }+a\right)^{2}{\sigma}^{\pm}_{z}+\frac{\Omega g_{c}^{2}}{4}\left(g_{c}^{2}+g_{c}^{-2}\right){\sigma}^{\pm}_{z}+\frac{\Omega}{2}+\omega\alpha^2. \end{align} After the projection $_{\pm}\langle \downarrow|H_{\rm sp}|\downarrow \rangle_{\pm}$ and keeping only terms up to second order in the qubit-resonator coupling strength, Eq.~(\ref{eqS16}) becomes \begin{align} H_{\rm sp}=\omega a^{\dag}a-\frac{\omega}{4g_{c}^{4}}\left(a^{\dag }+a\right)^{2}+\frac{\Omega}{4}\left(1-g_{c}^{4}\right)+\omega\alpha^2, \end{align} whose excitation energy is found to be \begin{align} \varepsilon_{\rm sp}=\omega\sqrt{1-g_{c}^{-4}}. \end{align} The ground eigenstate of the Hamiltonian $H_{\rm{sp}}$ is $S(r_{\rm{sp}})|0\rangle$, where $r_{\rm sp}=-\frac{1}{4}\ln\left(1-g_{c}^{-4}\right)$. Correspondingly, the ground eigenstates of the quantum Rabi Hamiltonian $H_{R}$ for $g_{c}>1$ in the laboratory frame are \begin{align} |\psi_{\rm sp}\rangle =D(\pm\alpha)S(r_{\rm sp})|0\rangle|\downarrow\rangle_{\pm}, \end{align} which are degenerate. 
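The superradiant-phase predictions, i.e., the quasi-degenerate ground doublet and the gap $\omega\sqrt{1-g_c^{-4}}$, can likewise be checked by exact diagonalization. In the sketch below (an editorial illustration) the cutoff and the finite ratio $\Omega/\omega=100$ are our own choices:

```python
import numpy as np

# For g_c > 1 the two lowest levels of the truncated Rabi Hamiltonian form a
# quasi-degenerate doublet (tunneling splitting ~ exp(-2 alpha^2) is negligible
# here, since alpha^2 ~ 45), and the next excitation sits at
# eps_sp = w*sqrt(1 - g_c^{-4}) above the ground state.
w, ratio, gc, N = 1.0, 100.0, 1.5, 200
W = ratio * w
g = 0.5 * gc * np.sqrt(W * w)
a = np.diag(np.sqrt(np.arange(1, N)), 1)
I2, IN = np.eye(2), np.eye(N)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
ee = np.array([[1.0, 0.0], [0.0, 0.0]])
H = w * np.kron(I2, a.T @ a) + W * np.kron(ee, IN) + g * np.kron(sx, a + a.T)
E = np.linalg.eigvalsh(H)
splitting = E[1] - E[0]          # quasi-degeneracy of the ground doublet
gap = E[2] - E[0]                # should approach eps_sp
eps_sp = w * np.sqrt(1 - gc ** (-4))
```

The large cutoff $N=200$ is needed because the displaced ground states carry $\sim\alpha^2\approx45$ photons in the laboratory frame.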
\section{Effective Hamiltonian for the effective multi-photon down-conversion}\label{SecS2} The total Hamiltonian of the whole system contains two parts: \begin{align}\label{eqS17} H_{\rm{tot}}&=H_{0}+H_{D}, \cr H_{0}&=\omega a^{\dag}a\left(|e\rangle\langle e|+|g\rangle\langle g|+|\mu\rangle\langle \mu|\right)+{\Omega}|e\rangle\langle e|+\omega_{\mu}|\mu\rangle\langle\mu|-g(a+a^{\dag})(|g\rangle\langle e|+|e\rangle\langle g|),\cr H_{{D}}&=\left[\Omega_{p}\cos(\omega_{p}t)+\Omega_{s}\cos(\omega_{s}t)\right]\left(|\mu\rangle\langle g|+|g\rangle\langle\mu|\right). \end{align} The Hamiltonian $H_{0}$ can be diagonalized as \begin{align} H_{0}=\sum_{m=0}^{\infty}E_{m}|E_{m}\rangle\langle E_{m}|+\sum_{n=0}^{\infty}\left(\omega_{\mu}+n\omega\right)|\mu_{n}\rangle\langle \mu_{n}|, \end{align} where $|E_{m}\rangle$ ($E_{m}$) is the $m$th eigenstate (eigenvalue) of the Rabi Hamiltonian $H_{R}$ and $|\mu_{n}\rangle=|n\rangle|\mu\rangle$ is the $n$th eigenstate of the noninteracting term $\left(\omega_{\mu}+\omega a^{\dag}a\right)\otimes|\mu\rangle\langle\mu|$. When $\Omega_{p,(s)}\ll |E_{m}-E_{m'}|, \omega$ for $m'\neq m$, we can perform the unitary transformation \begin{align}\label{S11} H'_{D}=&\exp\left({iH_{0}t}\right)H_{D}\exp\left({-iH_{0}t}\right) \cr =&\left[\Omega_{p}\cos(\omega_{p}t)+\Omega_{s}\cos(\omega_{s}t)\right]\sum_{n}\sum_{m}\left\{c_{n}^{(m)}\exp{\left[i(\omega_{\mu}+n\omega)t-iE_{m}t\right]|\mu_{n}\rangle\langle E_{m}|+{\rm{h.c.}}}\right\}, \end{align} where $c_{n}^{(m)}=\langle g|\langle n|E_{m}\rangle$ are probability amplitudes of the states $|n\rangle|g\rangle$ in the eigenstate $|E_{m}\rangle$. 
Then, when choosing \begin{align} \omega_{p}=E_{0}-\omega_{\mu}, \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm and} \ \ \ \ \ \ \ \ \ \ \ \ \ \omega_{s}=E_{0}-\omega_{\mu}-2l\omega, \ \ \ \ \ \ \ \ \ \ \ \ (l=1,2,3\ldots) \end{align} we can obtain the resonant transitions $|\mu_{0}\rangle\leftrightarrow|E_{0}\rangle\leftrightarrow|\mu_{2l}\rangle$ described by \begin{align} H_{1}=\frac{1}{2}\left[c_{0}^{(0)}\Omega_{p}|\mu_{0}\rangle+c_{2l}^{(0)}\Omega_{s}|\mu_{2l}\rangle\right]\langle E_{0}|+{\rm{h.c.}} \end{align} The remaining terms in Eq.~(\ref{S11}) can be neglected under the rotating-wave approximation. \begin{figure} \centering \scalebox{0.56}{\includegraphics{Model_SM.pdf}} \caption{(a) Effective level transitions described by the effective Hamiltonian $H_{\rm{eff}}$. (b) Energy spectrum of the effective Hamiltonian $H_{\rm{eff}}$. The spectrum collapses at $g_{c}=1$, which is the critical point determined by the Rabi Hamiltonian $H_{R}$. We choose $\Omega=10^{6}\omega$, $\omega_{\mu}=E_{0}-4.25\omega$, $\Omega_{p}=0.005(E_{2}-E_{0})$, and $\Omega_{s}=2\Omega_{p}$ to satisfy the required conditions. The eigenvalues $E_{0}$ and $E_{2}$ can be numerically calculated. } \label{figS1} \end{figure} For simplicity, we can ignore the superscript of the coefficient $c_{n}^{(0)}$ when considering only the eigenstate $|E_{0}\rangle$ in the effective dynamics. Thus, the effective Hamiltonian of Eq.~(6) in the main text is obtained, i.e., \begin{align} H_{\rm{eff}}=H_{1}=\frac{1}{2}\left[c_{0}\Omega_{p}|\mu_{0}\rangle+c_{2l}\Omega_{s}|\mu_{2l}\rangle\right]\langle E_{0}|+{\rm{h.c.}} \end{align} The effective level transitions of the system described by $H_{\rm{eff}}$ are shown in Fig.~\ref{figS1}(a). 
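The effective dynamics generated by $H_{\rm eff}$ can be checked directly in the three-dimensional basis $\{|\mu_0\rangle,|E_0\rangle,|\mu_{2l}\rangle\}$. In the sketch below (an editorial illustration) the values of $c_0\Omega_p$ and $c_{2l}\Omega_s$ are arbitrary test numbers, not the parameters used in the figures:

```python
import numpy as np
from scipy.linalg import expm

wp, ws = 0.3, 0.4                      # stand-ins for c_0*Omega_p and c_2l*Omega_s
# H_eff in the basis ordering (|mu_0>, |E_0>, |mu_2l>)
H = 0.5 * np.array([[0, wp, 0],
                    [wp, 0, ws],
                    [0, ws, 0]], dtype=complex)
Xi = 0.5 * np.hypot(wp, ws)            # effective Rabi frequency
theta = np.arctan2(wp, ws)             # mixing angle, tan(theta) = wp/ws
tau = np.pi / Xi
psi = expm(-1j * H * tau) @ np.array([1, 0, 0], dtype=complex)
# Analytic prediction: |phi(tau)> = cos(2 theta)|mu_0> - sin(2 theta)|mu_2l>
expected = np.array([np.cos(2 * theta), 0.0, -np.sin(2 * theta)])
```

With $\tan\theta=3/4$ this gives $\cos2\theta=0.28$ and $\sin2\theta=0.96$, and the numerically propagated state matches the closed-form result, with no residual population in $|E_0\rangle$ at $t=\tau$.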
When the initial state of the system is $|\mu_{0}\rangle$, the effective level transitions in Fig.~\ref{figS1}(a) can be understood as a multi-photon down-conversion process: the pump pulse of frequency $\omega_{p}$ is converted into a Stokes pulse of frequency $\omega_{s}$ and $2l$ cavity photons of frequency $\omega$. At the critical point $g_{c}=1$, the energy spectrum of $H_{\rm{eff}}$ collapses [see Fig.~\ref{figS1}(b)] because $c_{0}\simeq c_{2l}\simeq 0$. In this case, the down-conversion process is suppressed. \begin{figure}[htp] \centering \scalebox{0.6}{\includegraphics{Populations_wigner.pdf}} \caption{(a) Populations of the vacuum state $|\mu_{0}\rangle$ and the four-photon state $|\mu_{4}\rangle$ at the time $\tau=\pi/\Xi$ in the evolution governed by $H_{\rm{tot}}$ when $g_{c}$ is fixed to different values. (b--g) Wigner functions $W(\beta)$ defined in Eq.~(\ref{eqS26}) for some specific values of $g_{c}$. We choose $\Omega=10^{6}\omega$, $\omega_{\mu}=E_{0}-4.25\omega$, $\Omega_{p}=0.005(E_{2}-E_{0})$, and $\Omega_{s}=2\Omega_{p}$ to satisfy the required conditions. For $\Xi\rightarrow 0$, we impose $\Xi=0.01\omega$ to avoid an infinite evolution time $\tau\rightarrow\infty$. The eigenvalues $E_{0}$ and $E_{2}$ can be numerically calculated. For a fixed $g_{c}$ in the normal phase ($g_{c}<1$), the system at the time $\tau$ is the superposition state described by Eq.~(\ref{eqS25}), while in the superradiant phase ($g_{c}>1$), the system remains in the vacuum state $|\mu_{0}\rangle$. A collapse of the Wigner function occurs at the critical point. 
} \label{figS2} \end{figure} When the initial state is $|\mu_{0}\rangle$, the evolution of the system can be solved: \begin{align}\label{eqS21} |\phi(t)\rangle \approx&\exp\left(-iH_{\rm{eff}}t\right)|\mu_{0}\rangle\cr =&\left[\cos(\theta)^2+\cos(\Xi t)\sin(\theta)^2\right]|\mu_{0}\rangle-i\sin(\Xi t)\sin(\theta)|E_{0}\rangle+\frac{1}{2}\sin(2\theta)\left[\cos(\Xi t)-1\right]|\mu_{2l}\rangle, \end{align} where \begin{align}\label{eqS24} \Xi=\frac{1}{2}\sqrt{\left(c_{0}\Omega_{p}\right)^2+\left(c_{2l}\Omega_{s}\right)^{2}}, \ \ \ \ \ \ \ {\rm{and}} \ \ \ \ \ \ \ \ \theta=\arctan\left[c_{0}\Omega_{p}/\left(c_{2l}\Omega_{s}\right)\right]. \end{align} Obviously, the population of the state $|\mu_{2l}\rangle$ reaches its maximum at the time $t=\tau=\pi/\Xi$. The system state at time $\tau$ becomes \begin{align}\label{eqS25} |\phi(\tau)\rangle\approx\cos(2\theta)|\mu_{0}\rangle-\sin(2\theta)|\mu_{2l}\rangle =\left[\cos(2\theta)|{0}\rangle-\sin(2\theta)|{2l}\rangle\right]\otimes|\mu\rangle, \end{align} which is a separable state. Note that the photons in the eigenstate $|E_{0}\rangle$ are \textit{virtual} because they are bound to the atom and cannot correspond to radiation. In this case, the maximum mean photon number of the system is \begin{align} \bar{n}_{\rm{max}}=\langle X^{+}X^{-}\rangle|_{t=\tau}=2l\sin^{2}(2\theta). \end{align} Here, \begin{align} X^{-}=\sum_{j}\sum_{j'<j}\langle \xi_{j'}|\left(a+a^{\dag}\right)|\xi_{j}\rangle|\xi_{j'}\rangle\langle \xi_{j}|,\ \ \ \ {\rm and} \ \ \ \ X^{+}=\left(X^{-}\right)^{\dag}, \end{align} which describe that photon emission from the cavity is associated with the transition from a high-energy eigenstate $|\xi_{j}\rangle$ to a low-energy eigenstate $|\xi_{j'}\rangle$ of $H_{0}$, where $\{|\xi_{j}\rangle\}=\{|E_{m}\rangle,|\mu_{n}\rangle\}$. 
In Fig.~\ref{figS2}(a), we show the maximum populations \begin{align} \mathcal{P}_{2k}^{\rm{max}}={\rm{max}}\left[|\langle \mu_{2k}|\phi(\tau)\rangle|^2\right] \ \ \ \ \ \ \ \ \ \ \ \ \ (k=0,2), \end{align} of the state $|\mu_{2k}\rangle$, at the time $\tau$, versus different choices of $g_{c}$. Also, we show, in Figs.~\ref{figS2}(b--g), the Wigner function $W(\beta)$ of the system at the time $\tau$ for different $g_{c}$. Here, the Wigner function $W(\beta)$ is defined by \begin{align}\label{eqS26} W(\beta)=2{\rm{Tr}}\left[D^{\dag}(\beta)|\phi(\tau)\rangle\langle\phi(\tau)|D(\beta)\exp(i\pi a^{\dag}a)\right], \ \ \ \ \ {\rm with} \ \ \ \ \ D(\beta)=\exp{\left(\beta a^{\dag}-\beta^{*}a\right)}, \end{align} where the atomic state can be ignored because $|\phi(\tau)\rangle$ is a separable state. As shown in Figs.~\ref{figS2}(b--g), when the Rabi Hamiltonian is in the normal phase (i.e., $g_{c}<1$), the system state at time $\tau$ is a superposition state of even Fock states. Increasing the parameter $g_{c}$ results in an increase of the weight of the $2l$ Fock state $|\mu_{2l}\rangle$. When $g_{c}$ approaches $1$ from the left, the component of the vacuum state $|\mu_{0}\rangle$ vanishes, leaving only the $2l$ Fock state $|\mu_{2l}\rangle$. When $g_{c}$ crosses the critical point, we can see a collapse of the Wigner function [see Figs.~\ref{figS2}(f) and (g)], i.e., suddenly, the $2l$ Fock state $|\mu_{2l}\rangle$ vanishes, leaving only the vacuum state $|\mu_{0}\rangle$ in the system at the time $\tau$. This is understood from the collapse of the energy spectrum of the effective Hamiltonian $H_{\rm{eff}}$ [see Fig.~\ref{figS1}(b)] at the critical point of the quantum Rabi Hamiltonian. \section{Possible implementations using superconducting quantum circuits} \begin{figure} \centering \scalebox{0.55}{\includegraphics{model_2.pdf}} \caption{Schematic representation of a qubit (green) coupled to (a) an $LC$ resonator, (b) an array of dc SQUIDs. 
} \label{figM2} \end{figure} {\renewcommand\arraystretch{1.4} \begin{table}[b] \centering \caption{Superconducting experiments that have achieved the ultrastrong light-matter coupling. Abbreviations are FQ=flux qubit, TR=transmon qubit, TL=transmission line resonator, and LE=lumped-element resonator.} \label{tabS2} \begin{tabular}{p{3cm}<{\centering}p{2cm}<{\centering}p{2cm}<{\centering}p{2.5cm}<{\centering}p{2.5cm}<{\centering}p{2.5cm}<{\centering}} \hline \hline {Year \& Ref.} & {Qubit} & Cavity & $g/2\pi$ (MHz) & $\omega/2\pi$ (GHz) & $g/\omega$ \\ \hline 2010 \cite{Niemczyk2010NP} & FQ & TL & 636 & 5.357 & 0.12 \\ 2010 \cite{Forn2010PRL} & FQ & LE & 810 & 8.13 & 0.1 \\ 2017 \cite{Yoshihara2016NP} & FQ & LE & 7630 & 5.711 & 1.34 \\ 2017 \cite{Yoshihara2017PRA} & FQ & LE & 5310 & 6.203 & 0.86 \\ 2017 \cite{Bosman2017Njpqi} & TR & TL & 897 & 4.268 & 0.19 \\ 2018 \cite{Yoshihara2018PRL} & FQ & LE & 7480 & 6.335 & 1.18 \\ \hline \hline \end{tabular} \end{table}} The past decade has seen a rapid increase in light-matter couplings \cite{Gu2017PR,Kockum2019NRP}. Some experimental observations of the ultrastrong light-matter coupling in superconducting quantum circuits are listed in Table~\ref{tabS2}. Generally, the circuits in the experiments \cite{Niemczyk2010NP,Forn2010PRL,Yoshihara2016NP,Yoshihara2017PRA,Bosman2017Njpqi,Yoshihara2018PRL} can be simplified as an artificial qubit coupled to an $LC$ resonator [see Fig.~\ref{figM2}(a)]. The Hamiltonian of the $LC$ resonator is given by \begin{align} H_{LC}=4E_{C}\hat{q}^{2}+\frac{E_{L}}{2}\hat{\varphi}^{2}, \end{align} where $E_{C}=e^{2}/(2C)$ and $E_{L}=1/(4e^2L)$. The dimensionless charge and flux operators $\hat{q}$ and $\hat{\varphi}$ obey the commutation relation $[\hat{\varphi},\hat{q}]=i$. 
Following the standard quantization procedure for circuits \begin{align} \hat{\varphi}=\left({\frac{2E_{C}}{E_{L}}}\right)^{\frac{1}{4}}\left(a^{\dag}+a\right), \ \ \ \ \ \ \ {\rm{and} }\ \ \ \ \ \ \ \ \hat{q}=i\left(\frac{E_{L}}{32E_{C}}\right)^{\frac{1}{4}}\left(a^{\dag}-a\right), \end{align} we can diagonalize the Hamiltonian $H_{LC}$ as \begin{align} H_{LC}=\omega \left(a^{\dag}a+\frac{1}{2}\right), \end{align} where $\omega=\sqrt{8E_{C}E_{L}}=1/\sqrt{LC}$. Therefore, in order to reduce the cavity frequency $\omega$ to satisfy the $\Omega/\omega\rightarrow \infty$ limit, one needs to choose a very large $C$ and $L$, which could be difficult in current experiments. To overcome this problem, we use an array of dc superconducting quantum interference devices (SQUIDs) to replace the $LC$ resonator, as shown in Fig.~\ref{figM2}(b). The Hamiltonian of this SQUID array reads \begin{align} H_{A}=4E_{C}\hat{n}^{2}-N_{0}E_{J}(\Phi)\cos\left(\frac{\hat{\varphi}}{N_{0}}\right). \end{align} Here, $\hat{n}$ and $\hat{\varphi}$ satisfying $[\hat{\varphi},\hat{n}]=i$ are the number of Cooper pairs and the overall phase across the junction array, respectively; $E_{J}(\Phi)$ is the Josephson energy of a single SQUID, which can be adjusted by the external magnetic flux $\Phi$; and $N_{0}$ is the total number of SQUIDs in the array. 
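The frequency relations above are simple to verify numerically. The sketch below (an editorial illustration; the inductance and capacitance are arbitrary test values, and the array parameters are those quoted later in this section) checks in SI units that $\hbar\omega=\sqrt{8E_C E_L}$ reproduces $\omega=1/\sqrt{LC}$, and that the array frequency $\omega=\sqrt{8E_C E_J(\Phi)/N_0}$ gives $2\pi\times5$~MHz:

```python
import numpy as np
from scipy.constants import e, hbar

# LC resonator: E_C = e^2/(2C) and, in SI units, E_L = hbar^2/(4 e^2 L)
L, C = 1e-9, 1e-12                       # 1 nH, 1 pF (arbitrary test values)
E_C = e ** 2 / (2 * C)
E_L = hbar ** 2 / (4 * e ** 2 * L)
omega_quant = np.sqrt(8 * E_C * E_L) / hbar
omega_LC = 1 / np.sqrt(L * C)

# SQUID array: omega = sqrt(8 E_C E_J / N_0), here in units of 2pi x MHz
EC, N0 = 100.0, 400                      # E_C/(2 pi) = 100 MHz
EJ = EC / 8                              # E_J/(2 pi) = 12.5 MHz
omega_array = np.sqrt(8 * EC * EJ / N0)  # expected: 5 (i.e., 2 pi x 5 MHz)
```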
For $N_{0}\gg \hat{\varphi}$, $H_{A}$ can be simplified as \begin{align} H_{A}\approx 4E_{C}\hat{n}^{2}-N_{0}E_{J}(\Phi)\left[1-\frac{1}{2}\left(\frac{\hat{\varphi}}{N_{0}}\right)^{2}\right]=4E_{C}\hat{n}^{2}+\frac{E_{J}(\Phi)}{2N_{0}}\hat{\varphi}^{2}-N_{0}E_{J}(\Phi). \end{align} Using the quantization procedure \begin{align} \hat{\varphi}=\left[{\frac{2N_{0}E_{C}}{E_{J}(\Phi)}}\right]^{\frac{1}{4}}\left(a^{\dag}+a\right), \ \ \ \ \ \ \ {\rm{and} }\ \ \ \ \ \ \ \ \hat{n}=i\left[\frac{E_{J}(\Phi)}{32N_{0}E_{C}}\right]^{\frac{1}{4}}\left(a^{\dag}-a\right), \end{align} the Hamiltonian $H_{A}$ is diagonalized as \begin{align} H_{A}\approx\omega \left(a^{\dag}a+\frac{1}{2}\right)-N_{0}E_{J}(\Phi), \end{align} where the cavity frequency \begin{align} \omega=\sqrt{\frac{8E_{C}E_{J}(\Phi)}{N_{0}}}, \end{align} is adjustable with the parameters $\Phi$ and $N_{0}$. This allows us to reduce the frequency $\omega$ to reach the $\Omega/\omega>10^{3}$ limit and the critical point $g_{c}=1$. For instance, we can choose \begin{align} E_{C}=2\pi\times 100{~\rm{MHz}},\ \ \ \ \ E_{J}(\Phi)=E_{C}/8=2\pi\times12.5{~\rm{MHz}},\ \ \ \ \ N_{0}=400, \end{align} so that $\omega=2\pi\times 5$~{MHz}. The qubit frequency can be chosen as $\Omega\sim 2\pi\times 50$~GHz, which could be possible using, e.g., flux qubits \cite{Gu2017PR,Kockum2019NRP,Krantz2019APR,Kjaergaard2020Arc,Kwon2021Jap}. These parameters allow us to reach $\Omega/\omega=10^{4}$. In this case, the coupling strength to reach the critical point $g_{c}=1$ is $g=50\omega\simeq 2\pi\times 250$~MHz, which is possible in current experiments. \section{Quantum phase transition in a simulated quantum Rabi model} \begin{figure} \centering \scalebox{0.6}{\includegraphics{En_pro_amp.pdf}} \caption{(a) Energy spectrum of the anisotropic Rabi Hamiltonian $H'_{0}$ in Eq.~(\ref{eqS32}). We set the eigenvalue of the ground eigenstate to $0$. 
(b) Probability amplitudes $|c_{2k}|=|\langle g|\langle 2k|S^{\dag}(r_{\rm{nl}})|E_{0}\rangle|$ of the states $S(r_{\rm{nl}})|2k\rangle|g\rangle$ in the ground eigenstate of $H'_{0}$ in Eq.~(\ref{eqS32}) calculated for different $g_{c}$. We choose parameters $r_{\rm{nl}}=0.5$ and $\delta_{a}=10^{4}\delta_{c}{\rm sech}(2r_{\rm{nl}})\approx 6.481\times10^{3}\delta_{c}$, which correspond to the frequency ratio $\Omega/\omega=10^{4}$. With these parameters, the collapse of the energy spectrum is shifted to $g_{c}\simeq1.002$ due to the finite-frequency effect. } \label{figS4} \end{figure} Another possible way to verify our proposal is to use a simulated quantum Rabi model. Here, we present a possible implementation of our method in a simulated quantum Rabi model induced by a parametric (squeezing) drive \cite{Qin2018PRL,Leroux2018PRL}. Assuming that the three-level atom weakly couples to a cavity with coupling strength $\eta$, the system can be described by the Jaynes-Cummings Hamiltonian under the rotating-wave approximation: \begin{align} H_{\rm{JC}}=\omega_{b} b^{\dag}b+\omega_{e}|e\rangle\langle e|+\omega_{\mu}|\mu\rangle\langle\mu|-\eta(b^{\dag}|g\rangle\langle e|+b|e\rangle\langle g|), \end{align} where $\omega_{b}$ is the bare frequency of the cavity $b$ and $\omega_{e}$ is the level frequency of the state $|e\rangle$. The cavity is subjected to a two-photon (i.e., parametric) drive with amplitude $\Omega_{\rm{nl}}$ and frequency $\omega_{\rm{nl}}$: \begin{align} H_{\rm{nl}}=-\frac{\Omega_{\rm{nl}}}{2}\left[b^{\dag 2}\exp\left({-i\omega_{\rm{nl}}t}\right)+b^{2}\exp\left(i\omega_{\rm{nl}}t\right)\right]. 
\end{align} Then, working in a frame rotating at half of the parametric drive frequency $\omega_{\rm{nl}}/2$, the system Hamiltonian becomes: \begin{align}\label{eqS30} H'_{0}=H_{\rm{JC}}+H_{\rm{nl}}=\delta_{c} b^{\dag} b+\delta_{a} |e\rangle\langle e|+\omega_{\mu}|\mu\rangle\langle \mu|-\frac{\Omega_{\rm{nl}}}{2}\left(b^{\dag 2}+b^{2}\right)-\eta(b^{\dag}|g\rangle\langle e|+b|e\rangle\langle g|), \end{align} where $\delta_{c}=\omega_{b}-\omega_{\rm{nl}}/2$ and $\delta_{a}=\omega_{e}-\omega_{\rm{nl}}/2$ are detunings. Upon introducing the Bogoliubov squeezing transformation \begin{align} b_{s}=S^{\dag}(r_{\rm nl})bS(r_{\rm nl})=\cosh(r_{\rm{nl}})b-\sinh(r_{\rm{nl}})b^{\dag}, \end{align} the Hamiltonian $H'_{0}$ in Eq.~(\ref{eqS30}) becomes \begin{align}\label{eqS32} H'_{0}=&\delta_{c}{\rm sech}(2r_{\rm{nl}}) b_{s}^{\dag}b_{s}+\delta_{a}|e\rangle\langle e|+\omega_{\mu}|\mu\rangle\langle\mu|\cr &-\left[\eta\cosh(r_{\rm{nl}})b_{s}+\eta\sinh(r_{\rm{nl}})b_{s}^{\dag}\right]|e\rangle\langle g|-\left[\eta\cosh(r_{\rm{nl}})b_{s}+\eta\sinh(r_{\rm{nl}})b_{s}^{\dag}\right]|g\rangle\langle e|, \end{align} where \begin{align} r_{\rm{nl}}=\frac{1}{4}\ln\left(\frac{\delta_{c}+\Omega_{\rm{nl}}}{\delta_{c}-\Omega_{\rm nl}}\right). \end{align} Note that $b_{s}$ and $b_{s}^{\dag}$ satisfy $[b_{s},b_{s}^{\dag}]=1$, i.e., $b_{s}$ is also a bosonic mode. For this bosonic mode, the ground state is the squeezed vacuum state $S(r_{\rm{nl}})|0\rangle_{b}$, where the state $|n=0\rangle_{b}$ is the vacuum state of the cavity mode $b$. The Hamiltonian $H'_{0}$ describes the anisotropic Rabi model, which can also exhibit a superradiant quantum phase transition in the $\delta_{a}/\left[\delta_{c} {\rm sech}(2r_{\rm{nl}})\right]\rightarrow \infty$ limit \cite{Shen2017PRA}. 
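The squeezing (Bogoliubov) transformation above can be checked numerically on the quadratic part of $H'_{0}$: the spectrum of $\delta_c b^\dag b-(\Omega_{\rm nl}/2)(b^2+b^{\dag2})$ should be a harmonic ladder with spacing $\delta_c\,{\rm sech}(2r_{\rm nl})=\sqrt{\delta_c^2-\Omega_{\rm nl}^2}$. In the sketch below (an editorial illustration) the detuning, drive amplitude, and cutoff are arbitrary test values:

```python
import numpy as np

dc, Onl, N = 1.0, 0.6, 80
b = np.diag(np.sqrt(np.arange(1, N)), 1)           # truncated annihilation operator
H = dc * b.T @ b - 0.5 * Onl * (b @ b + b.T @ b.T)  # quadratic (parametric) part
E = np.linalg.eigvalsh(H)
r = 0.25 * np.log((dc + Onl) / (dc - Onl))          # r_nl from the text
spacing_pred = dc / np.cosh(2 * r)                  # = sqrt(dc^2 - Onl^2)
spacing_num = E[1] - E[0]
```

For these values $r_{\rm nl}=\tfrac14\ln4$ and the predicted spacing is $0.8\,\delta_c$, which the truncated diagonalization reproduces to high accuracy.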
The critical point for $H'_{0}$ is at \begin{align} g_{c}=\frac{\eta\cosh(r_{\rm nl})+\eta\sinh(r_{\rm nl})}{\sqrt{\delta_{a}\delta_{c} {\rm sech}(2r_{\rm{nl}})}}=\frac{\eta\exp(r_{\rm{nl}})}{{\sqrt{\delta_{a}\delta_{c} {\rm sech}(2r_{\rm{nl}})}}}=1. \end{align} That is, the quantum phase transition exhibited by this anisotropic Rabi Hamiltonian is the same as that exhibited by the standard quantum Rabi Hamiltonian \cite{Shen2017PRA}: \begin{align} H'_{R}\simeq \delta_{c}{\rm sech}(2r_{\rm{nl}}) b_{s}^{\dag}b_{s}+\delta_{a}|e\rangle\langle e|+\omega_{\mu}|\mu\rangle\langle\mu|-\frac{\eta}{2}\exp({r_{\rm{nl}}})(b_{s}+b_{s}^{\dag})(|g\rangle\langle e|+|e\rangle\langle g|). \end{align} Figure \ref{figS4}(a) shows the energy spectrum of the anisotropic Rabi Hamiltonian $H'_{0}$. We can see that the energy spectrum nearly collapses when $g_{c}\simeq1$. Meanwhile, the probability amplitudes $|c_{2k}|$ of different photonic-state components also collapse, as shown in Fig.~\ref{figS4}(b). These indicate that the quantum phase transition exhibited by the anisotropic Rabi Hamiltonian in Eq.~(\ref{eqS32}) is the same as that exhibited by the standard quantum Rabi Hamiltonian in Eq.~(\ref{eqS1}). \begin{figure} \centering \scalebox{0.6}{\includegraphics{Phi_out_S.pdf}} \caption{(a) Populations of the squeezed-vacuum state $S(r_{\rm{nl}})|0\rangle|\mu\rangle$ and the squeezed-four-photon state $S(r_{\rm{nl}})|{4}\rangle|\mu\rangle$ at the time $\tau=\pi/\Xi$ in the evolution governed by $H'_{\rm{tot}}$ when $g_{c}$ is fixed to different values. We choose driving amplitudes $\Omega_{p}=0.005(E_{2}-E_{0})$ and $\Omega_{s}=2\Omega_{p}$. (b) Steady-state output rate $\Phi_{\rm{out}}^{\rm{ss}}$ defined in Eq.~(\ref{eqS43}). We choose relatively strong driving fields, i.e., $\Omega_{p}=0.05(E_{2}-E_{0})$ and $\Omega_{s}=2\Omega_{p}$, to achieve relatively large output photon rates. 
Strong driving fields may cause small errors (via counter-rotating effects) in obtaining the effective Hamiltonian $H_{\rm{eff}}$, leading to oscillations in $\Phi_{\rm{out}}^{\rm{ss}}$ in the normal phase, i.e., $g_{c}<1$. The dissipation rates are $\kappa=\gamma_{1}=\gamma_{2}=10^{-3}\delta_{c}{\rm sech}(2r_{\rm{nl}})$. Other parameters are the same as those in Fig.~\ref{figS4}. } \label{figS3} \end{figure} For simplicity, we can choose the parameters \begin{align} \omega=\delta_{c}{\rm sech}(2r_{\rm{nl}}), \ \ \ \ \ \ \ \ \Omega=\delta_{a},\ \ \ \ \ \ \ \ \ g=\frac{\eta}{2}\exp({r_{\rm{nl}}}), \ \ \ \ \ \ \ \ a=b_{s}, \end{align} so that $H_{0}=H_{R}=H'_{R}$, where $H_{0}$ is given in Eq.~(\ref{eqS17}). Therefore, by driving the atomic transition $|g\rangle\leftrightarrow|\mu\rangle$ with the Hamiltonian $H_{D}$ in Eq.~(\ref{eqS17}), the system dynamics is the same as that discussed in Sec.~\ref{SecS2}. The difference is that the Fock state $|n\rangle$ discussed in Sec.~\ref{SecS2} should be replaced with the squeezed Fock state $S(r_{\rm{nl}})|n\rangle_{b}$. The total Hamiltonian for the system in the lab frame becomes \begin{align}\label{eqS37} H'_{\rm{tot}}=H_{\rm{JC}}+H_{\rm{nl}}+H_{D}. \end{align} For simplicity, in the following discussions we ignore the subscript $b$ for the cavity mode, i.e., $|n\rangle_{b}$ is simplified to $|n\rangle$. According to the derivation in Sec.~\ref{SecS2}, the evolution governed by Eq.~(\ref{eqS37}) is \begin{align} |\phi'(t)\rangle \approx&\left[\cos(\theta)^2+\cos(\Xi t)\sin(\theta)^2\right]S(r_{\rm nl})|\mu_{0}\rangle-i\sin(\Xi t)\sin(\theta)S(r_{\rm{nl}})|E_{0}\rangle \cr &+\frac{1}{2}\sin(2\theta)\left[\cos(\Xi t)-1\right]S(r_{\rm{nl}})|\mu_{2l}\rangle, \end{align} where the parameters are defined in Sec.~\ref{SecS2}. 
Choosing possible experimental parameters: \begin{align} r_{\rm{nl}}=0.5, \ \ \ \ \ \ \ \delta_{c}/2\pi=1~{\rm{MHz}},\ \ \ \ \ \ \ \delta_{a}/2\pi\simeq 6.481~{\rm{GHz}}, \end{align} and the initial state $S(r_{\rm {nl}})|0\rangle|\mu\rangle$, for different $g_{c}$, the populations for the states $S(r_{\rm{nl}})|0\rangle|\mu\rangle$ and $S(r_{\rm{nl}})|4\rangle|\mu\rangle$ at the time $\tau$ are shown in Fig.~\ref{figS3}(a). Similar to Fig.~\ref{figS2}(a), we can find a sudden change of these populations near the critical point in Fig.~\ref{figS3}(a), indicating a sudden change of the photon number distributions. Note that the system governed by the Hamiltonian $H'_{\rm tot}$ contains only real cavity photons. We can use the standard input-output theory to study the output cavity photon rate. The master equation for this system is \begin{align} \dot{\rho}=i\left[\rho,H'_{\rm{tot}}\right]+\kappa\mathcal{D}[b]\rho+\gamma_{1}\mathcal{D}[|g\rangle\langle e|]\rho+\gamma_{2}\mathcal{D}[|\mu\rangle\langle g|]\rho, \end{align} where $\kappa$ is the decay rate of the cavity mode $b$ and $\gamma_{1(2)}$ is the spontaneous emission rate of the transition $|e\rangle\rightarrow|g\rangle$ ($|g\rangle\rightarrow|\mu\rangle$). The steady-state output photon rates, defined by \begin{align}\label{eqS43} \Phi_{\rm out}^{\rm ss}=\Phi_{\rm out}|_{t\rightarrow \infty}=\kappa {\rm{Tr}}\left[b^{\dag}b\rho(t\rightarrow\infty)\right], \end{align} are shown in Fig.~\ref{figS3}(b). We can find that the output photon rate $\Phi_{\rm out}^{\rm ss}$ suddenly vanishes when $g_{c}$ is tuned across the critical point. This phenomenon is the same as that discussed for the standard quantum Rabi model in the main text. It is worth noting that the relatively strong drivings chosen for Fig.~\ref{figS3}(b) induce counter-rotating effects. These may excite the system to the higher levels of $H'_{R}$ at some specific values of $g_{c}$, leading to the increase of $\Phi_{\rm{out}}^{\rm{ss}}$ at those values.
\section{Introduction} The representation theory of the $0$-Hecke algebra (also called \emph{degenerate Hecke algebra}) was first studied by P.-N.~Norton~\cite{Norton.1979} in type A and expanded to other types by Carter~\cite{Carter.1986}. Using an analogue of Young symmetrizers, they describe the simple and indecomposable projective modules together with the Cartan matrix. An interesting combinatorial application was then found by Krob and Thibon~\cite{Krob_Thibon.NCSF4.1997} who explained how induction and restriction of these modules give an interpretation of the products and coproducts of the Hopf algebras of noncommutative symmetric functions and quasi-symmetric functions. Two other important steps were further made by Duchamp--Hivert--Thibon~\cite{Duchamp_Hivert_Thibon.2002} for type $A$ and Fayers~\cite{Fayers.2005} for other types, using the Frobenius structure to get more results, including a description of the Ext-quiver. More recently, a family of minimal orthogonal idempotents was described in~\cite{Denton.2010.FPSAC,Denton.2010}. Through divided differences (Demazure operators), the $0$-Hecke algebra plays a central role in Schubert calculus and has also appeared in connection with $K$-theory \cite{Demazure.1974,Lascoux.2001,Lascoux.2003,Miller.2005, Buch_Kresch_Shimozono.2008,Lam_Schilling_Shimozono.2010}. Like several algebras whose representation theory was studied in recent years in the algebraic combinatorics community (such as degenerate left regular bands, Solomon-Tits algebras, ...), the $0$-Hecke algebra is the algebra of a finite monoid endowed with special properties. 
Yet this fact was seldom used (including by the authors), despite a large body of literature on finite semi-groups, including representation theory results~\cite{Putcha.1996,Putcha.1998,Saliola.2007,Saliola.2008,Margolis_Steinberg.2008,Schocker.2008,Steinberg.2006.Moebius,Steinberg.2008.MoebiusII,AlmeidaMargolisVolkov05,Almeida_Margolis_Steinberg_Volkov.2009,Ganyushkin_Mazorchuk_Steinberg.2009,Izhakian.2010.SemigroupsRepresentationsOverSemirings}. From these, one can see that much of the representation theory of a semi-group algebra is combinatorial in nature (provided the representation theory of groups is known). One can expect, for example, that for aperiodic semi-groups (semi-groups which contain only trivial subgroups) most of the numerical information (dimensions of the simple/projective indecomposable modules, induction/restriction constants, Cartan matrix) can be computed without using any linear algebra. In a monoid with partial inverses, one finds (non-trivial) local groups, and an understanding of the representation theory of these groups is necessary for the full representation theory of the monoid. In this sense, the notion of aperiodic monoids is orthogonal to that of groups as they contain only trivial group-like structure (there are no elements with partial inverses). By the same token, their representation theory is orthogonal to that of groups. The main goal of this paper is to complete this program for the class of $\mathcal{J}$-trivial monoids (a monoid $M$ is \emph{$\mathcal{J}$-trivial} provided that there exists a partial ordering $\leq$ on $M$ such that for all $x,y\in M$, one has $xy \leq x$ and $xy \leq y$). In this case, we show that most of the combinatorial data of the representation theory, including the Cartan matrix and the quiver, can be expressed by counting particular elements in the monoid itself. 
A second goal is to provide a self-contained introduction to the representation theory of finite monoids, targeted at the algebraic combinatorics audience, and focusing on the simple yet rich case of $\mathcal{J}$-trivial monoids. The class of $\mathcal{J}$-trivial monoids is by itself an active subject of research (see e.g.~\cite{Straubing.Therien.1985,Henckell_Pin.2000,Vernitski.2008}), and contains many monoids of interest, starting with the $0$-Hecke monoid. Another classical $\mathcal{J}$-trivial monoid is that of nondecreasing parking functions, or monoid of order preserving regressive functions on a chain. Hivert and Thiéry~\cite{Hivert_Thiery.HeckeSg.2006,Hivert_Thiery.HeckeGroup.2007} showed that it is a natural quotient of the $0$-Hecke monoid and used this fact to derive its complete representation theory. It is also a quotient of Kiselman's monoid which is studied in~\cite{Kudryavtseva_Mazorchuk.2009} with some representation theory results. Ganyushkin and Mazorchuk~\cite{Ganyushkin_Mazorchuk.2010} pursued a similar line with a larger family of quotients of both the $0$-Hecke monoid and Kiselman's monoid. The extension of the program to larger classes of monoids, like $\mathcal{R}$-trivial or aperiodic monoids, is the topic of a forthcoming paper. Some complications necessarily arise since the simple modules are not necessarily one-dimensional in the latter case. The approach taken there is to suppress the dependence upon specific properties of orthogonal idempotents. Following a complementary line, Berg, Bergeron, Bhargava, and Saliola~\cite{Berg_Bergeron_Bhargava_Saliola.2010} have very recently provided a construction for a decomposition of the identity into orthogonal idempotents for the class of $\mathcal{R}$-trivial monoids. The paper is arranged as follows. 
In Section~\ref{sec:bgnot} we recall the definition of a number of classes of monoids, including the $\mathcal{J}$-trivial monoids, define some running examples of $\mathcal{J}$-trivial monoids, and establish notation. In Section~\ref{sec:jrep} we establish the promised results on the representation theory of $\mathcal{J}$-trivial monoids, and illustrate them on several examples including the $0$-Hecke monoid. We describe the radical, construct combinatorial models for the projective and simple modules, give a lifting construction to obtain orthogonal idempotents, and describe the Cartan matrix and the quiver, with an explicit labelling of the edges of the latter. We briefly comment on the complexity of the algorithms to compute the various pieces of information, and their implementation in \texttt{Sage}. All the constructions and proofs involve only combinatorics in the monoid or linear algebra with unitriangular matrices. Due to this, the results do not depend on the ground field $\mathbb{K}$. In fact, we have checked that all the arguments pass to $\mathbb{K}=\mathbb{Z}$ and therefore to any ring (note however that the definition of the quiver that we took comes from~\cite{Auslander.Reiten.Smaloe.1997}, where it is assumed that $\mathbb{K}$ is a field). It seems likely that the theory would apply mutatis mutandis to semi-rings, in the spirit of~\cite{Izhakian.2010.SemigroupsRepresentationsOverSemirings}. Finally, in Section~\ref{sec:NPDF}, we examine the monoid of order preserving regressive functions on a poset $P$, which generalizes the monoid of nondecreasing parking functions on the set $\{1, \ldots, N\}$. We give combinatorial constructions for idempotents in the monoid and also prove that the Cartan matrix is upper triangular. In the case where $P$ is a meet semi-lattice (or, in particular, a lattice), we establish an idempotent generating set for the monoid, and present a conjectural recursive formula for orthogonal idempotents in the algebra. 
\subsection{Acknowledgments} We would like to thank Chris Berg, Nantel Bergeron, Sandeep Bhargava, Sara Billey, Jean-Éric Pin, Franco Saliola, and Benjamin Steinberg for enlightening discussions. We would also like to thank the referee for detailed reading and many remarks that improved the paper. This research was driven by computer exploration, using the open-source mathematical software \texttt{Sage}~\cite{Sage} and its algebraic combinatorics features developed by the \texttt{Sage-Combinat} community~\cite{Sage-Combinat}, together with the \texttt{Semigroupe} package by Jean-Éric Pin~\cite{Semigroupe}. TD and AS would like to thank the Universit\'e Paris Sud, Orsay for hospitality. NT would like to thank the Department of Mathematics at UC Davis for hospitality. TD was in part supported by NSF grants DMS--0652641, DMS--0652652, by VIGRE NSF grant DMS--0636297, and by a Chateaubriand fellowship from the French Embassy in the US. FH was partly supported by ANR grant 06-BLAN-0380. AS was in part supported by NSF grants DMS--0652641, DMS--0652652, and DMS--1001256. NT was in part supported by NSF grants DMS--0652641, DMS--0652652. \section{Background and Notation} \label{sec:bgnot} A \emph{monoid} is a set $M$ together with a binary operation $\cdot : M\times M \to M$ such that we have \emph{closure} ($x \cdot y\in M$ for all $x,y \in M$), \emph{associativity} ($(x\cdot y) \cdot z = x \cdot ( y \cdot z)$ for all $x,y,z \in M$), and the existence of an \emph{identity} element $1\in M$ (which satisfies $1\cdot x = x \cdot 1 = x$ for all $x\in M$). In this paper, unless explicitly mentioned, all monoids are \emph{finite}. We use the convention that $A\subseteq B$ denotes $A$ a subset of $B$, and $A\subset B$ denotes $A$ a proper subset of $B$. 
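For a finite monoid given explicitly, these axioms can be checked by exhaustive enumeration. Here is a minimal Python sketch (the helper \texttt{is\_monoid} and the two examples are ours, purely illustrative):

```python
from itertools import product

def is_monoid(elements, mul, one):
    """Check closure, associativity, and the identity axiom for a finite magma."""
    closed = all(mul(x, y) in elements for x, y in product(elements, repeat=2))
    assoc = all(mul(mul(x, y), z) == mul(x, mul(y, z))
                for x, y, z in product(elements, repeat=3))
    unit = all(mul(one, x) == x == mul(x, one) for x in elements)
    return closed and assoc and unit

# Example: truncated addition on {0,1,2,3}; the identity is 0.
E = {0, 1, 2, 3}
trunc_add = lambda x, y: min(x + y, 3)
print(is_monoid(E, trunc_add, 0))

# Absolute difference has an identity (0) but is not associative:
print(is_monoid({0, 1, 2}, lambda x, y: abs(x - y), 0))
```

The same brute-force pattern (enumerating products over all pairs or triples) underlies the ideal and $\mathcal{J}$-class computations used throughout this section.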
Monoids come with a far richer diversity of features than groups, but collections of monoids can often be described as \emph{varieties} satisfying a collection of algebraic identities and closed under subquotients and finite products (see e.g.~\cite{Pin.1986,Pin.2009} or~\cite[Chapter VII]{Pin.2009}). Groups are an example of a variety of monoids, as are all of the classes of monoids described in this paper. In this section, we recall the basic tools for monoids, and describe in more detail some of the varieties of monoids that are relevant to this paper. A summary of those is given in Figure~\ref{fig.monoids}. \begin{figure} \includegraphics[scale=0.75]{monoidClasses.pdf} \caption{Classes of finite monoids, with examples} \label{fig.monoids} \end{figure} In 1951 Green introduced several preorders on monoids which are essential for the study of their structures (see for example~\cite[Chapter V]{Pin.2009}). Let $M$ be a monoid and define $\le_\mathcal{R}, \le_\mathcal{L}, \le_\mathcal{J}, \le_\mathcal{H}$ for $x,y\in M$ as follows: \begin{equation*} \begin{split} &x \le_\mathcal{R} y \quad \text{if and only if $x=yu$ for some $u\in M$}\\ &x \le_\mathcal{L} y \quad \text{if and only if $x=uy$ for some $u\in M$}\\ &x \le_\mathcal{J} y \quad \text{if and only if $x=uyv$ for some $u,v\in M$}\\ &x \le_\mathcal{H} y \quad \text{if and only if $x\le_\mathcal{R} y$ and $x\le_\mathcal{L} y$.} \end{split} \end{equation*} These preorders give rise to equivalence relations: \begin{equation*} \begin{split} &x \; \mathcal{R} \; y \quad \text{if and only if $xM = yM$}\\ &x \; \mathcal{L} \; y \quad \text{if and only if $Mx = My$}\\ &x \; \mathcal{J} \; y \quad \text{if and only if $MxM = MyM$}\\ &x \; \mathcal{H} \; y \quad \text{if and only if $x \; \mathcal{R} \; y$ and $x \; \mathcal{L} \; y$.} \end{split} \end{equation*} We further add the relation $\le_\mathcal{B}$ (and its associated equivalence relation $\mathcal{B}$) defined as the finest preorder such that 
$x\leq_\mathcal{B} 1$, and \begin{equation} \label{equation.bruhat} \text{$x\le_\mathcal{B} y$ implies that $uxv\le_\mathcal{B} uyv$ for all $x,y,u,v\in M$.} \end{equation} (One can view $\le_\mathcal{B}$ as the intersection of all preorders with the above property; there exists at least one such preorder, namely $x\le y$ for all $x,y\in M$). Beware that $1$ is the largest element of these (pre)-orders. This is the usual convention in the semi-group community, but is the converse convention from the closely related notions of left/right/Bruhat order in Coxeter groups. \begin{definition} A monoid $M$ is called $\mathcal{K}$-\emph{trivial} if all $\mathcal{K}$-classes are of cardinality one, where $\mathcal{K}\in \{\mathcal{R},\mathcal{L},\mathcal{J},\mathcal{H},\mathcal{B}\}$. \end{definition} An equivalent formulation of $\mathcal{K}$-triviality is given in terms of \emph{ordered} monoids. A monoid $M$ is called: \begin{equation*} \begin{aligned} & \text{\emph{right ordered}} && \text{if $xy\le x$ for all $x,y\in M$}\\ & \text{\emph{left ordered}} && \text{if $xy\le y$ for all $x,y\in M$}\\ & \text{\emph{left-right ordered}} && \text{if $xy\leq x$ and $xy\leq y$ for all $x,y\in M$}\\ & \text{\emph{two-sided ordered}} && \text{if $xy=yz \leq y$ for all $x,y,z\in M$ with $xy=yz$}\\ & \text{\emph{ordered with $1$ on top}} && \text{if $x\leq 1$ for all $x\in M$, and $x\le y$}\\ & && \text{implies $uxv\le uyv$ for all $x,y,u,v\in M$} \end{aligned} \end{equation*} for some partial order $\le$ on $M$. \begin{proposition} \label{proposition.ordered} $M$ is right ordered (resp. left ordered, left-right ordered, two-sided ordered, ordered with $1$ on top) if and only if $M$ is $\mathcal{R}$-trivial (resp. $\mathcal{L}$-trivial, $\mathcal{J}$-trivial, $\mathcal{H}$-trivial, $\mathcal{B}$-trivial). 
When $M$ is $\mathcal{K}$-trivial for $\mathcal{K}\in \{\mathcal{R},\mathcal{L},\mathcal{J},\mathcal{H},\mathcal{B}\}$, then $\leq_\mathcal{K}$ is a partial order, \emph{called $\mathcal{K}$-order}. Furthermore, the partial order $\le$ is finer than $\le_\mathcal{K}$: for any $x, y\in M$, $x \le_\mathcal{K} y$ implies $x \leq y$. \end{proposition} \begin{proof} We give the proof for right-order as the other cases can be proved in a similar fashion. Suppose $M$ is right ordered and that $x,y\in M$ are in the same $\mathcal{R}$-class. Then $x=ya$ and $y=xb$ for some $a,b\in M$. This implies that $x\leq y$ and $y\leq x$ so that $x=y$. Conversely, suppose that all $\mathcal{R}$-classes are singletons. Then $x\le_\mathcal{R} y$ and $y\le_\mathcal{R} x$ imply that $x=y$, so that the $\mathcal{R}$-preorder turns into a partial order. Hence $M$ is right ordered using $xy \le_\mathcal{R} x$. \end{proof} \subsection{Aperiodic and $\mathcal{R}$-trivial monoids} The class of $\mathcal{H}$-trivial monoids coincides with that of \emph{aperiodic} monoids (see for example~\cite[Proposition 4.9]{Pin.2009}): a monoid is called \emph{aperiodic} if for any $x\in M$, there exists some positive integer $N$ such that $x^{N}=x^{N+1}$. The element $x^\omega := x^{N}=x^{N+1}=x^{N+2}=\cdots$ is then an idempotent (the idempotent $x^\omega$ can in fact be defined for any element of any monoid~\cite[Chapter VI.2.3]{Pin.2009}, even infinite monoids; however, the period $k$ such that $x^N = x^{N+k}$ need no longer be $1$). We write $\idempMon := \{x^\omega \mid x\in M\}$ for the set of idempotents of $M$. Our favorite example of a monoid which is aperiodic, but not $\mathcal{R}$-trivial, is the biHecke monoid studied in~\cite{Hivert_Schilling_Thiery.BiHeckeMonoid.2010,Hivert_Schilling_Thiery.BiHeckeMonoidRepresentation.2010}. 
This is the submonoid of functions from a finite Coxeter group $W$ to itself generated simultaneously by the elementary bubble sorting and antisorting operators $\overline{\pi}_i$ and $\pi_i$ \begin{equation} \label{equation.bihecke} \biheckemonoid(W) := \langle \pi_1, \pi_2, \ldots, \pi_n, \overline{\pi}_1, \overline{\pi}_2, \ldots, \overline{\pi}_n \rangle\,. \end{equation} See~\cite[Definition 1.1]{Hivert_Schilling_Thiery.BiHeckeMonoid.2010} and~\cite[Proposition 3.8]{Hivert_Schilling_Thiery.BiHeckeMonoid.2010}. The smaller class of $\mathcal{R}$-trivial monoids coincides with the class of so-called \emph{weakly ordered monoids} as defined by Schocker~\cite{Schocker.2008}. Also, via the right regular representation, any $\mathcal{R}$-trivial monoid can be represented as a monoid of regressive functions on some finite poset $P$ (a function $f: P\to P$ is called \emph{regressive} if $f(x) \le x$ for every $x\in P$); reciprocally any such monoid is $\mathcal{R}$-trivial. We now present an example of a monoid which is $\mathcal{R}$-trivial, but not $\mathcal{J}$-trivial. \begin{example} Take the free left regular band $\mathcal{B}$ generated by two idempotents $a,b$. Multiplication is given by concatenation taking into account the idempotent relations, and then selecting only the two left factors (see for example~\cite{Saliola.2007}). So $\mathcal{B} = \{1,a,b,ab,ba\}$ and $1\mathcal{B}=\mathcal{B}$, $a\mathcal{B} = \{a,ab\}$, $b\mathcal{B} =\{b, ba\}$, $ab \mathcal{B} = \{ab\}$, and $ba\mathcal{B} = \{ba\}$. This shows that all $\mathcal{R}$-classes consist of only one element and hence $\mathcal{B}$ is $\mathcal{R}$-trivial. On the other hand, $\mathcal{B}$ is not $\mathcal{L}$-trivial since $\{ab,ba\}$ forms an $\mathcal{L}$-class since $b\cdot ab = ba$ and $a\cdot ba = ab$. Hence $\mathcal{B}$ is also not $\mathcal{J}$-trivial. \end{example} \subsection{$\mathcal{J}$-trivial monoids} The most important for our paper is the class of $\mathcal{J}$-trivial monoids. 
In fact, our main motivation stems from the fact that the submonoid $M_1=\{ f\in M \mid f(1) =1\} $ of the biHecke monoid $M$ in~\eqref{equation.bihecke} of functions that fix the identity, is $\mathcal{J}$-trivial (see~\cite[Corollary~4.2]{Hivert_Schilling_Thiery.BiHeckeMonoid.2010} and~\cite{Hivert_Schilling_Thiery.BiHeckeMonoidRepresentation.2010}). \begin{example} \label{example.J_trivial} The following example of a $\mathcal{J}$-trivial monoid is given in~\cite{Straubing.Therien.1985}. Take $M=\{1,x,y,z,0\}$ with relations $x^2=x$, $y^2=y$, $xz=zy=z$, and all other products are equal to $0$. Then $M1M = M$, $MxM=\{x,z,0\}$, $MyM=\{y,z,0\}$, $MzM=\{z,0\}$, and $M0M=\{0\}$, which shows that $M$ is indeed $\mathcal{J}$-trivial. Note also that $M$ is left-right ordered with the order $1>x>y>z>0$, which by Proposition~\ref{proposition.ordered} is equivalent to $\mathcal{J}$-triviality. \end{example} \subsection{Ordered monoids (with $1$ on top)} Ordered monoids $M$ with $1$ on top form a subclass of $\mathcal{J}$-trivial monoids. To see this suppose that $x,y\in M$ are in the same $\mathcal{R}$-class, that is $x=ya$ and $y=xb$ for some $a,b\in M$. Since $a\le 1$, this implies $x=ya \le y$ and $y=xb\le x$ so that $x=y$. Hence $M$ is $\mathcal{R}$-trivial. By analogous arguments, $M$ is also $\mathcal{L}$-trivial. Since $M$ is finite, this implies that $M$ is $\mathcal{J}$-trivial (see~\cite[Chapter V, Theorem 1.9]{Pin.2009}). The next example shows that ordered monoids with 1 on top form a proper subclass of $\mathcal{J}$-trivial monoids. \begin{example} The monoid $M$ of Example~\ref{example.J_trivial} is not ordered. To see this suppose that $\le$ is an order on $M$ with maximal element $1$. The relation $y\le 1$ implies $0=z^2\le z = xzy \le xy =0$ which contradicts $z\neq 0$. 
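Computations like the ideal description in Example~\ref{example.J_trivial} are easy to verify mechanically. The following Python sketch (the multiplication table is hard-coded from the relations of Example~\ref{example.J_trivial}; helper names are ours) recomputes the two-sided ideals $MxM$ and confirms that all $\mathcal{J}$-classes are singletons:

```python
from itertools import product

E = ['1', 'x', 'y', 'z', '0']

def mul(a, b):
    """Product in M = {1,x,y,z,0}: x^2=x, y^2=y, xz=zy=z, all other products 0."""
    if a == '1': return b
    if b == '1': return a
    if (a, b) in {('x', 'x'), ('y', 'y')}: return a
    if (a, b) in {('x', 'z'), ('z', 'y')}: return 'z'
    return '0'

def ideal(a):
    """Two-sided principal ideal M a M."""
    return frozenset(mul(mul(u, a), v) for u, v in product(E, repeat=2))

# The ideals stated in Example above:
assert ideal('x') == frozenset('xz0')
assert ideal('y') == frozenset('yz0')
assert ideal('z') == frozenset('z0')

# J-trivial: distinct elements generate distinct two-sided ideals.
assert len({ideal(a) for a in E}) == len(E)
print("M is J-trivial")
```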
\end{example} It was shown by Straubing and Th\'erien~\cite{Straubing.Therien.1985} and Henckell and Pin~\cite{Henckell_Pin.2000} that every $\mathcal{J}$-trivial monoid is a quotient of an ordered monoid with $1$ on top. In the next two subsections we present two important examples of ordered monoids with $1$ on top: the $0$-Hecke monoid and the monoid of regressive order preserving functions, which generalizes nondecreasing parking functions. \subsection{$0$-Hecke monoids} Let $W$ be a finite Coxeter group. It has a presentation \begin{equation} W = \langle\, s_i \; \text{for} \; i\in I\ \mid\ (s_is_j)^{m(s_i,s_j)},\ \forall i,j\in I\,\rangle\,, \end{equation} where $I$ is a finite set, $m(s_i,s_j) \in \{1,2,\dots,\infty\}$, and $m(s_i,s_i)=1$. The elements $s_i$ with $i\in I$ are called \emph{simple reflections}, and the relations can be rewritten as: \begin{equation} \begin{alignedat}{2} s_i^2 &=1 &\quad& \text{ for all $i\in I$}\,,\\ \underbrace{s_is_js_is_js_i \cdots}_{m(s_i,s_j)} &= \underbrace{s_js_is_js_is_j\cdots}_{m(s_i,s_j)} && \text{ for all $i,j\in I$}\, , \end{alignedat} \end{equation} where $1$ denotes the identity in $W$. An expression $w=s_{i_1}\cdots s_{i_\ell}$ for $w\in W$ is called \emph{reduced} if it is of minimal length $\ell$. See~\cite{bjorner_brenti.2005, humphreys.1990} for further details on Coxeter groups. The Coxeter group of type $A_{n-1}$ is the symmetric group $\sg[n]$ with generators $\{s_1,\dots,s_{n-1}\}$ and relations: \begin{equation} \begin{alignedat}{2} s_i^2 & = 1 & & \text{ for } 1\leq i\leq n-1\,,\\ s_i s_j & = s_j s_i & & \text{ for } |i-j|\geq2\,, \\ s_i s_{i+1} s_i & = s_{i+1} s_i s_{i+1} &\quad& \text{ for } 1\leq i\leq n-2\,; \end{alignedat} \end{equation} the last two relations are called the \emph{braid relations}. 
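These relations can be checked directly on permutations. The sketch below (plain Python, names ours) realizes the $s_i$ as adjacent transpositions in one-line notation, verifies the three families of relations for $\sg[4]$, and confirms that the $s_i$ generate all $4!=24$ permutations:

```python
from itertools import permutations

n = 4

def s(i):
    """Adjacent transposition s_i in one-line notation."""
    p = list(range(1, n + 1))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def compose(p, q):
    """Composition (p q)(k) = p(q(k)) of permutations of {1,...,n}."""
    return tuple(p[q[k] - 1] for k in range(n))

e = tuple(range(1, n + 1))
# s_i^2 = 1, commutation for |i-j| >= 2, and the braid relations:
assert all(compose(s(i), s(i)) == e for i in range(1, n))
assert compose(s(1), s(3)) == compose(s(3), s(1))
assert all(compose(compose(s(i), s(i + 1)), s(i))
           == compose(compose(s(i + 1), s(i)), s(i + 1))
           for i in range(1, n - 1))

# Closure under the generators recovers the whole symmetric group.
group, frontier = {e}, {e}
while frontier:
    frontier = {compose(g, s(i)) for g in group for i in range(1, n)} - group
    group |= frontier
assert len(group) == 24
print("Coxeter presentation of S_4 verified")
```

Replacing $s_i^2=1$ by the idempotent relation $\pi_i^2=\pi_i$ yields the $0$-Hecke monoid defined next.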
\begin{definition}[\textbf{$0$-Hecke monoid}] The $0$-Hecke monoid $H_0(W) = \langle \pi_i \mid i \in I \rangle$ of a Coxeter group $W$ is generated by the \emph{simple projections} $\pi_i$ with relations \begin{equation} \begin{alignedat}{2} \pi_i^2 &=\pi_i &\quad& \text{ for all $i\in I$,}\\ \underbrace{\pi_i\pi_j\pi_i\pi_j\cdots}_{m(s_i,s_{j})} &= \underbrace{\pi_j\pi_i\pi_j\pi_i\cdots}_{m(s_i,s_{j})} && \text{ for all $i,j\in I$}\ . \end{alignedat} \end{equation} Thanks to these relations, the elements of $H_0(W)$ are canonically indexed by the elements of $W$ by setting $\pi_w := \pi_{i_1}\cdots\pi_{i_k}$ for any reduced word $i_1 \dots i_k$ of $w$. \end{definition} \emph{Bruhat order} is a partial order defined on any Coxeter group $W$ and hence also the corresponding $0$-Hecke monoid $H_0(W)$. Let $w=s_{i_1} s_{i_2} \cdots s_{i_\ell}$ be a reduced expression for $w\in W$. Then, in Bruhat order $\le_B$, \begin{equation*} u\le_B w \quad \begin{array}[t]{l} \text{if there exists a reduced expression $u = s_{j_1} \cdots s_{j_k}$}\\ \text{where $j_1 \ldots j_k$ is a subword of $i_1 \ldots i_\ell$.} \end{array} \end{equation*} In Bruhat order, $1$ is the minimal element. Hence, it is not hard to check that, with reverse Bruhat order, the $0$-Hecke monoid is indeed an ordered monoid with $1$ on top. In fact, the orders $\le_\mathcal{L}$, $\le_\mathcal{R}$, $\le_\mathcal{J}$, $\le_\mathcal{B}$ on $H_0(W)$ correspond exactly to the usual (reversed) left, right, left-right, and Bruhat order on the Coxeter group $W$. \subsection{Monoid of regressive order preserving functions} For any partially ordered set $P$, there is a particular $\mathcal{J}$-trivial monoid which has some very nice properties and that we investigate further in Section~\ref{sec:NPDF}. Notice that we use the right action in this paper, so that for $x\in P$ and a function $f:P\to P$ we write $x.f$ for the value of $x$ under $f$. 
\begin{definition}[\textbf{Monoid of regressive order preserving functions}] Let $(P, \leq_P)$ be a poset. The set $\mathcal{OR}(P)$ of functions $f: P \to P$ which are \begin{itemize} \item \emph{order preserving}, that is, for all $x,y\in P,\ x\leq_P y$ implies $x.f\leq_P y.f$ \item \emph{regressive}, that is, for all $x\in P$ one has $x.f \leq_P x$ \end{itemize} is a monoid under composition. \end{definition} \begin{proof} It is trivial that the identity function is order preserving and regressive and that the composition of two order preserving and regressive functions is as well. \end{proof} According to~\cite[14.5.3]{Ganyushkin_Mazorchuk.2009}, not much is known about these monoids. When $P$ is a chain on $N$ elements, we obtain the monoid ${\operatorname{NDPF}}_N$ of nondecreasing parking functions on the set $\{1, \ldots, N\}$ (see e.g.~\cite{Solomon.1996}; it also is described under the notation $\mathcal C_n$ in e.g.~\cite[Chapter~XI.4]{Pin.2009} and, together with many variants, in~\cite[Chapter~14]{Ganyushkin_Mazorchuk.2009}). The unique minimal set of generators for ${\operatorname{NDPF}}_N$ is given by the family of idempotents $(\pi_i)_{i\in\{1,\dots,n-1\}}$, where each $\pi_i$ is defined by $(i+1).\pi_i:=i$ and $j.\pi_i:=j$ otherwise. The relations between those generators are given by: \begin{gather*} \pi_i\pi_j = \pi_j\pi_i \quad \text{ for all $|i-j|>1$}\,,\\ \pi_i\pi_{i-1}=\pi_i\pi_{i-1}\pi_i=\pi_{i-1}\pi_i\pi_{i-1}\,. \end{gather*} It follows that ${\operatorname{NDPF}}_n$ is the natural quotient of $H_0(\sg[n])$ by the relation $\pi_i\pi_{i+1}\pi_i = \pi_{i+1}\pi_i$, via the quotient map $\pi_i\mapsto \pi_i$~\cite{Hivert_Thiery.HeckeSg.2006,Hivert_Thiery.HeckeGroup.2007, Ganyushkin_Mazorchuk.2010}. Similarly, it is a natural quotient of Kiselman's monoid~\cite{Ganyushkin_Mazorchuk.2010,Kudryavtseva_Mazorchuk.2009}. 
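These monoids are small enough to enumerate by brute force. The following Python sketch (function and variable names are ours, chosen for illustration) lists all order preserving regressive functions on the chain $\{1<\dots<N\}$, confirms that $|{\operatorname{NDPF}}_N|$ is the Catalan number $C_N=\frac{1}{N+1}\binom{2N}{N}$, and checks the stated relations among the generators $\pi_i$:

```python
from itertools import product
from math import comb

def ndpf(N):
    """All order preserving regressive functions on the chain 1 < ... < N,
    encoded as tuples (f(1), ..., f(N))."""
    return [f for f in product(*[range(1, i + 1) for i in range(1, N + 1)])
            if all(f[i] <= f[i + 1] for i in range(N - 1))]

# |NDPF_N| is the Catalan number C_N.
for N in range(1, 7):
    assert len(ndpf(N)) == comb(2 * N, N) // (N + 1)

# Generators pi_i send i+1 to i and fix everything else; check the relations.
N = 5
def pi(i):
    return tuple(i if j == i + 1 else j for j in range(1, N + 1))

def mul(f, g):                   # right action: x.(fg) = (x.f).g
    return tuple(g[x - 1] for x in f)

for i in range(1, N):
    assert mul(pi(i), pi(i)) == pi(i)                      # idempotent
for i, j in product(range(1, N), repeat=2):
    if abs(i - j) > 1:
        assert mul(pi(i), pi(j)) == mul(pi(j), pi(i))      # commutation
for i in range(2, N):
    lhs = mul(pi(i), pi(i - 1))
    assert lhs == mul(lhs, pi(i)) == mul(mul(pi(i - 1), pi(i)), pi(i - 1))
print("NDPF relations verified")
```

For $N=3$ the enumeration returns exactly five functions, matching the five relations displayed in the remark on $\mathcal U_3$ below.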
To see that $\mathcal{OR}(P)$ is indeed a subclass of ordered monoids with $1$ on top, note that we can define a partial order by saying $f\le g$ for $f,g\in \mathcal{OR}(P)$ if $x.f \le_P x.g$ for all $x\in P$. By regressiveness, this implies that $f\le \operatorname{id}$ for all $f\in \mathcal{OR}(P)$ so that indeed $\operatorname{id}$ is the maximal element. Now take $f,g,h \in \mathcal{OR}(P)$ with $f\le g$. By definition $x.f \le_P x.g$ for all $x\in P$ and hence by the order preserving property $(x.f).h \le_P (x.g).h$, so that $fh\le gh$. Similarly since $f\le g$, $(x.h).f \le_P (x.h).g$ so that $hf\le hg$. This shows that $\mathcal{OR}(P)$ is ordered. The submonoid $M_1$ of the biHecke monoid~\eqref{equation.bihecke}, and $H_0(W)\subset M_1$, are submonoids of the monoid of regressive order preserving functions acting on the Bruhat poset. \subsection{Monoid of unitriangular Boolean matrices} Finally, we define the $\mathcal{J}$-trivial monoid $\mathcal U_n$ of \emph{unitriangular Boolean matrices}, that is of $n\times n$ matrices $m$ over the Boolean semi-ring which are unitriangular: $m[i,i]=1$ and $m[i,j]=0$ for $i>j$. Equivalently (through the adjacency matrix), this is the monoid of the binary reflexive relations contained in the usual order on $\{1,\dots,n\}$ (and thus antisymmetric), equipped with the usual composition of relations. Ignoring loops, it is convenient to depict such relations by acyclic digraphs admitting $1,\dots,n$ as linear extension. The product of $g$ and $h$ contains the edges of $g$, of $h$, as well as the transitivity edges $i\!\!\rightarrow\!\! k$ obtained from one edge $i\!\!\rightarrow\!\! j$ in $g$ and one edge $j\!\!\rightarrow\!\! k$ in $h$. Hence, $g^2=g$ if and only if $g$ is transitively closed. The family of monoids $(\mathcal U_n)_n$ (resp. $({\operatorname{NDPF}}_n)_n$) plays a special role, because any $\mathcal{J}$-trivial monoid is a subquotient of $\mathcal U_n$ (resp. 
${\operatorname{NDPF}}_n$) for $n$ large enough~\cite[Chapter~XI.4]{Pin.2009}. In particular, ${\operatorname{NDPF}}_n$ itself is a natural submonoid of $\mathcal U_n$. \begin{remark} We now demonstrate how ${\operatorname{NDPF}}_n$ can be realized as a submonoid of relations. For simplicity of notation, we consider the monoid $\mathcal{OR}(P)$ where $P$ is the reversed chain $\{1>\dots>n\}$. Otherwise said, $\mathcal{OR}(P)$ is the monoid of functions on the chain $\{1<\dots<n\}$ which are order preserving and extensive ($x.f\geq x$). Obviously, $\mathcal{OR}(P)$ is isomorphic to ${\operatorname{NDPF}}_n$. The monoid $\mathcal{OR}(P)$ is isomorphic to the submonoid of the relations $A$ in $\mathcal U_n$ such that $i\!\!\rightarrow\!\! j \in A$ implies $k\!\!\rightarrow\!\! l\in A$ whenever $i\geq k\geq l\geq j$ (in the adjacency matrix: $(k,l)$ is to the south-west of $(i,j)$ and both are above the diagonal). The isomorphism is given by the map $A \mapsto f_A\in \mathcal{OR}(P)$, where \begin{equation*} u\cdot f_A := \max\{v\mid u\,\!\!\rightarrow\!\!\,v\in A\}\,. \end{equation*} The inverse bijection $f\in \mathcal{OR}(P) \mapsto A_f\in \mathcal U_n$ is given by \begin{equation*} u\,\!\!\rightarrow\!\!\,v \in A_f \text{ if and only if } u\cdot f \leq v\,. 
\end{equation*} For example, here are the elements of $\mathcal{OR}(\{1>2>3\})$ and the adjacency matrices of the corresponding relations in $\mathcal U_3$: \begin{equation*} \begin{array}{ccccc} \begin{tikzpicture}[->,baseline=(current bounding box.east)] \matrix (m) [matrix of math nodes, row sep=.5em, column sep=1.5em]{ 1 & 1 \\ 2 & 2 \\ 3 & 3 \\ }; \draw (m-1-1) -> (m-1-2); \draw (m-2-1) -> (m-2-2); \draw (m-3-1) -> (m-3-2); \end{tikzpicture}& \begin{tikzpicture}[->,baseline=(current bounding box.east)] \matrix (m) [matrix of math nodes, row sep=.5em, column sep=1.5em]{ 1 & 1 \\ 2 & 2 \\ 3 & 3 \\ }; \draw (m-1-1) -> (m-2-2); \draw (m-2-1) -> (m-2-2); \draw (m-3-1) -> (m-3-2); \end{tikzpicture}& \begin{tikzpicture}[->,baseline=(current bounding box.east)] \matrix (m) [matrix of math nodes, row sep=.5em, column sep=1.5em]{ 1 & 1 \\ 2 & 2 \\ 3 & 3 \\ }; \draw (m-1-1) -> (m-1-2); \draw (m-2-1) -> (m-3-2); \draw (m-3-1) -> (m-3-2); \end{tikzpicture}& \begin{tikzpicture}[->,baseline=(current bounding box.east)] \matrix (m) [matrix of math nodes, row sep=.5em, column sep=1.5em]{ 1 & 1 \\ 2 & 2 \\ 3 & 3 \\ }; \draw (m-1-1) -> (m-2-2); \draw (m-2-1) -> (m-3-2); \draw (m-3-1) -> (m-3-2); \end{tikzpicture}& \begin{tikzpicture}[->,baseline=(current bounding box.east)] \matrix (m) [matrix of math nodes, row sep=.5em, column sep=1.5em]{ 1 & 1 \\ 2 & 2 \\ 3 & 3 \\ }; \draw (m-1-1) -> (m-3-2); \draw (m-2-1) -> (m-3-2); \draw (m-3-1) -> (m-3-2); \end{tikzpicture}\\\\ \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\\ \end{pmatrix} & \begin{pmatrix} 1 & 1 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\\ \end{pmatrix} & \begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 1\\ 0 & 0 & 1\\ \end{pmatrix} & \begin{pmatrix} 1 & 1 & 0\\ 0 & 1 & 1\\ 0 & 0 & 1\\ \end{pmatrix} & \begin{pmatrix} 1 & 1 & 1\\ 0 & 1 & 1\\ 0 & 0 & 1\\ \end{pmatrix} \; . 
\end{array} \end{equation*} \end{remark} \section{Representation theory of $\mathcal{J}$-trivial monoids} \label{sec:jrep} In this section we study the representation theory of $\mathcal{J}$-trivial monoids $M$, using the $0$-Hecke monoid $H_0(W)$ of a finite Coxeter group as running example. In Section~\ref{ss.simple.radical} we construct the simple modules of $M$ and derive a description of the radical $\operatorname{rad}\mathbb{K}M$ of the monoid algebra of $M$. We then introduce a star product on the set $\idempMon$ of idempotents in Theorem~\ref{theorem.star} which makes it into a semi-lattice, and prove in Corollary~\ref{corollary.triangular-radical} that the semi-simple quotient of the monoid algebra $\mathbb{K}M/\operatorname{rad} \mathbb{K}M$ is the monoid algebra of $(\idempMon,\star)$. In Section~\ref{ss.orthogonal.idempotents} we construct orthogonal idempotents in $\mathbb{K}M/\operatorname{rad} \mathbb{K}M$ which are lifted to a complete set of orthogonal idempotents in $\mathbb{K}M$ in Theorem~\ref{theorem.idempotents.lifting} in Section~\ref{ss.lifting}. In Section~\ref{ss.cartan} we describe the Cartan matrix of $M$. We study several types of factorizations in Section~\ref{ss.factorizations}, derive a combinatorial description of the quiver of $M$ in Section~\ref{ss.quiver}, and apply it in Section~\ref{ss.quiver.examples} to several examples. Finally, in Section~\ref{subsection.implementation_complexity}, we briefly comment on the complexity of the algorithms to compute the various pieces of information, and their implementation in \texttt{Sage}. \subsection{Simple modules, radical, star product, and semi-simple quotient} \label{ss.simple.radical} The goal of this subsection is to construct the simple modules of the algebra of a $\mathcal{J}$-trivial monoid $M$, and to derive a description of its radical and its semi-simple quotient. The proof techniques are similar to those of Norton~\cite{Norton.1979} for the $0$-Hecke algebra. 
However, putting them in the context of $\mathcal{J}$-trivial monoids makes the proofs more transparent. In fact, most of the results in this section are already known and admit natural generalizations in larger classes of monoids ($\mathcal{R}$-trivial, ...). For example, the description of the radical is a special case of Almeida-Margolis-Steinberg-Volkov~\cite{Almeida_Margolis_Steinberg_Volkov.2009}, and that of the simple modules of~\cite[Corollary~9]{Ganyushkin_Mazorchuk_Steinberg.2009}. Also, the description of the semi-simple quotient is often derived alternatively from the description of the radical, by noting that it is the algebra of a monoid which is $\mathcal{J}$-trivial and idempotent (which is equivalent to being a semi-lattice; see e.g.~\cite[Chapter VII, Proposition 4.12]{Pin.2009}). \begin{proposition}\label{proposition.simple} Let $M$ be a $\mathcal{J}$-trivial monoid and $x\in M$. Let $S_x$ be the $1$-dimensional vector space spanned by an element $\epsilon_x$, and define the right action of any $y\in M$ by \begin{equation} \epsilon_x y = \begin{cases} \epsilon_x & \text{if $xy=x$,}\\ 0 & \text{otherwise.} \end{cases} \end{equation} Then $S_x$ is a right $M$-module. Moreover, any simple module is isomorphic to $S_x$ for some $x \in M$ and is in particular one-dimensional. \end{proposition} Note that some $S_x$ may be isomorphic to each other, and that the $S_x$ can be similarly endowed with a left $M$-module structure. \begin{proof} Recall that, if $M$ is $\mathcal{J}$-trivial, then $\leq_\mathcal{J}$ is a partial order called $\mathcal{J}$-order (see Proposition~\ref{proposition.ordered}). Let $(x_1, x_2, \ldots, x_n)$ be a linear extension of $\mathcal{J}$-order, that is an enumeration of the elements of $M$ such that $x_i \leq_\mathcal{J} x_j$ implies $i\leq j$. For $0<i\leq n$, define $F_i = \mathbb{K}\{x_j \mid j\leq i\}$ and set $F_0=\{0_\mathbb{K}\}$. 
Clearly the $F_i$'s are ideals of $\mathbb{K}M$ such that the sequence \begin{equation*} F_0 \subset F_1 \subset F_2 \subset \cdots \subset F_{n-1} \subset F_n \end{equation*} is a composition series for the regular representation $F_n=\mathbb{K}M$ of $M$. Moreover, for any $i>0$, the quotient $F_i/F_{i-1}$ is a one-dimensional $M$-module isomorphic to $S_{x_i}$. Since any simple $M$-module must appear in any composition series for the regular representation, it has to be isomorphic to $F_i/F_{i-1}\cong S_{x_i}$ for some $i$. \end{proof} \begin{corollary} Let $M$ be a $\mathcal{J}$-trivial monoid. Then, the quotient of its monoid algebra $\mathbb{K}M$ by its radical is commutative. \end{corollary} Note that the radical $\operatorname{rad} \mathbb{K}M$ is not necessarily generated as an ideal by $\{gh - hg \mid g,h\in M\}$. For example, in the commutative monoid $\{1, x, 0\}$ with $x^2=0$, the radical is $\mathbb{K} (x - 0)$. However, as a consequence of the results below, this does hold when $M$ is generated by idempotents (see Corollary~\ref{corollary.rad.idemp}). The following proposition gives an alternative description of the radical of $\mathbb{K}M$. \begin{proposition}\label{proposition.basis.radical} Let $M$ be a $\mathcal{J}$-trivial monoid. Then \begin{equation} \{ x-x^\omega \mid x\in M\backslash\idempMon \} \end{equation} is a basis for $\operatorname{rad}\mathbb{K}M$. Moreover $(S_e)_{e \in \idempMon}$ is a complete set of pairwise non-isomorphic representatives of isomorphism classes of simple $M$-modules. \end{proposition} \begin{proof} For any $x,y \in M$, either $yx=y$ and then $yx^\omega =y$, or $yx <_\mathcal{J} y$ and then $yx^\omega <_\mathcal{J} y$. Therefore $x-x^\omega$ is in $\operatorname{rad}\mathbb{K}M$ because for any $y$ the product $\epsilon_y(x-x^\omega)$ vanishes.
Since $x^\omega \leq_\mathcal{J} x$, by triangularity with respect to $\mathcal{J}$-order, the family \begin{equation*} \{ x-x^\omega \mid x\in M\backslash \idempMon\} \cup \idempMon \end{equation*} is a basis of $\mathbb{K}M$. It remains to show that the radical is of dimension at most the number of non-idempotents in $M$, which we do by showing that the simple modules $(S_e)_{e\in \idempMon}$ are not pairwise isomorphic. Assume that $S_e$ and $S_f$ are isomorphic. Then, since $\epsilon_e e = \epsilon_e$, it must be that $\epsilon_e f = \epsilon_e$ so that $ef=e$. Similarly $fe=f$, so that $e$ and $f$ are in the same $\mathcal{J}$-class and therefore equal. \end{proof} The following theorem elucidates the structure of the semi-simple quotient of the monoid algebra $\mathbb{K}M$. \begin{theorem} \label{theorem.star} Let $M$ be a $\mathcal{J}$-trivial monoid. Define a product $\star$ on $\idempMon$ by: \begin{equation} e \star f := (e f)^\omega\,. \end{equation} Then, the restriction of $\leq_\mathcal{J}$ to $\idempMon$ is a lattice such that \begin{equation} e \wedge_\mathcal{J} f = e \star f\,, \end{equation} where $e \wedge_\mathcal{J} f$ is the meet or infimum of $e$ and $f$ in the lattice. In particular $(\idempMon, \star)$ is an idempotent commutative $\mathcal{J}$-trivial monoid. \end{theorem} We start with two preliminary easy lemmas (which are consequences of e.g.~\cite[Chapter VII, Proposition~4.10]{Pin.2009}). \begin{lemma} \label{lemma.idem_factor} If $e \in \idempMon$ is such that $e = ab$ for some $a,b\in M$, then \[ e=ea=be=ae=eb\,. \] \end{lemma} \begin{proof} For $e\in \idempMon$, one has $e=e^3$ so that $e=eabe$. As a consequence, $e\leq_\mathcal{J} ea\leq_\mathcal{J} e$ and $e\leq_\mathcal{J} be\leq_\mathcal{J} e$, so that $e=ea=be$. In addition $e=e^2=eab=eb$ and $e=e^2=abe=ae$.
\end{proof} \begin{lemma}\label{lemma.j.idemp} For $e\in \idempMon$ and $y \in M$, the following three statements are equivalent: \begin{equation} e \leq_\mathcal{J} y, \qquad\qquad e = ey, \qquad\qquad e = ye \;. \end{equation} \end{lemma} \begin{proof} Suppose that $e,y$ are such that $e \leq_\mathcal{J} y$. Then $e=ayb$ for some $a,b\in M$. Applying Lemma~\ref{lemma.idem_factor} we obtain $e=ea=be$ so that $eye = eaybe = eee = e$ since $e\in \idempMon$. A second application of Lemma~\ref{lemma.idem_factor} shows that $ey = eye =e$ and $ye = eye = e$. The converse implications hold by the definition of $\leq_\mathcal{J}$. \end{proof} \begin{proof}[Proof of Theorem~\ref{theorem.star}] We first show that, for any $e,f \in \idempMon$, the product $e\star f$ is the greatest lower bound $e\wedge_\mathcal{J} f$ of $e$ and $f$; in particular the latter exists. It is clear that $(ef)^\omega \leq_\mathcal{J} e$ and $(ef)^\omega \leq_\mathcal{J} f$. Take now $z\in \idempMon$ satisfying $z\leq_\mathcal{J} e$ and $z\leq_\mathcal{J} f$. Applying Lemma~\ref{lemma.j.idemp}, $z = ze = zf$, and therefore $z = z(ef)^\omega$. Applying Lemma~\ref{lemma.j.idemp} backward, $z\leq_\mathcal{J} (ef)^\omega$, as desired. Hence $(\idempMon, \leq_\mathcal{J})$ is a meet semi-lattice with a greatest element which is the unit of $M$. It is therefore a lattice (see e.g.~\cite{Stanley.1999.EnumerativeCombinatorics1,Wikipedia.Lattice}). Since taking greatest lower bounds is a commutative and associative operation, $(\idempMon, \star)$ is a commutative idempotent monoid. \end{proof} We can now state the main result of this section. \begin{corollary} \label{corollary.triangular-radical} Let $M$ be a $\mathcal{J}$-trivial monoid. Then, $(\mathbb{K}\idempMon, \star)$ is isomorphic to $\mathbb{K}M/\operatorname{rad}\mathbb{K}M$ and $\phi: x \mapsto x^\omega$ is the canonical algebra morphism associated to this quotient.
\end{corollary} \begin{proof} Denote by $\psi\ :\ \mathbb{K}M \to\mathbb{K}M/\operatorname{rad}\mathbb{K}M$ the canonical algebra morphism. It follows from Proposition~\ref{proposition.basis.radical} that, for any $x$ (idempotent or not), $\psi(x) = \psi(x^\omega)$ and that $\{\psi(e)\mid e\in \idempMon\}$ is a basis for the quotient. Finally, $\star$ coincides with the product in the quotient: for any $e,f\in \idempMon$, \begin{displaymath} \psi(e)\psi(f) = \psi(ef) = \psi((ef)^\omega) = \psi(e\star f)\,.\qedhere \end{displaymath} \end{proof} \begin{corollary} \label{corollary.rad.idemp} Let $M$ be a $\mathcal{J}$-trivial monoid generated by idempotents. Then the radical $\operatorname{rad}\mathbb{K}M$ of its monoid algebra is generated as an ideal by \begin{equation} \{gh - hg \mid g,h\in M\}\,. \end{equation} \end{corollary} \begin{proof} Denote by $\mathcal{C}$ the ideal generated by $\{gh - hg \mid g,h\in M\}$. Since the quotient $\mathbb{K}M/\operatorname{rad}\mathbb{K}M$ is commutative, $\mathcal{C}\subseteq\operatorname{rad}\mathbb{K}M$. Conversely, since $\operatorname{rad}\mathbb{K}M$ is the linear span of $(x-x^\omega)_{x\in M}$, it is sufficient to show that for any $x\in M$ one has $x\equiv x^2 \pmod \mathcal{C}$ (and hence, by iteration, $x\equiv x^\omega \pmod \mathcal{C}$). Now write $x=e_1\cdots e_n$ where the $e_i$ are all idempotent. Then, \begin{equation*} x \equiv e_1^2\cdots e_n^2 \equiv e_1\cdots e_ne_1\cdots e_n \equiv x^2 \pmod \mathcal{C}\,.\qedhere \end{equation*} \end{proof} \begin{example}[Representation theory of $H_0(W)$] \label{example.zero.hecke} Consider the $0$-Hecke monoid $H_0(W)$ of a finite Coxeter group $W$, with index set $I=\{1, 2, \ldots, n\}$. For any $J \subseteq I$, we can consider the parabolic submonoid $H_0(W_J)$ generated by $\{\pi_i \mid i\in J\}$. Each parabolic submonoid contains a unique longest element $\pi_J$. The collection $\{\pi_J \mid J\subseteq I\}$ is exactly the set of idempotents in $H_0(W)$.
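This can be verified directly on a small case. The following minimal Python sketch (a hypothetical stand-in for the \texttt{Sage} implementation discussed in Section~\ref{subsection.implementation_complexity}; all function names and the encoding are ours) realizes $H_0(\sg[3])$ as a monoid of functions on permutations and recovers its $2^{|I|}=4$ idempotents among its $|W|=6$ elements.

```python
from itertools import permutations

# Illustrative sketch (not the authors' Sage code): realize H_0(S_3) by
# letting pi_i act on the right of a permutation w, applying the simple
# transposition s_i only when that increases the length of w.
perms = sorted(permutations((1, 2, 3)))

def act(w, i):
    # w . pi_i = w s_i if l(w s_i) > l(w), i.e. if w(i) < w(i+1); else w
    if w[i - 1] < w[i]:
        v = list(w)
        v[i - 1], v[i] = v[i], v[i - 1]
        return tuple(v)
    return w

def as_function(word):
    # encode the element pi_{i_1} ... pi_{i_k} by its graph on all of S_3
    images = []
    for w in perms:
        for i in word:
            w = act(w, i)
        images.append(w)
    return tuple(images)

# close {pi_1, pi_2} under right multiplication by the generators
elements = {as_function(()): ()}    # function -> one word realizing it
frontier = [()]
while frontier:
    word = frontier.pop()
    for i in (1, 2):
        f = as_function(word + (i,))
        if f not in elements:
            elements[f] = word + (i,)
            frontier.append(word + (i,))

idempotents = sorted(word for f, word in elements.items()
                     if as_function(word + word) == f)
print(len(elements), idempotents)
# prints: 6 [(), (1,), (2,), (2, 1, 2)]
```

The four idempotent words found realize $\pi_\emptyset=1$, $\pi_{\{1\}}$, $\pi_{\{2\}}$, and the longest element $\pi_{\{1,2\}}$, as predicted.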
For each $i\in I$, we can construct the \emph{evaluation maps} $\Phi_i^+$ and $\Phi_i^-$ defined on generators by: \begin{eqnarray*} \Phi_i^+ &:& \mathbb{C}H_0(W) \rightarrow \mathbb{C}H_0(W_{I\setminus \{i\}}) \\ \Phi_i^+(\pi_j) &=& \begin{cases} 1 & \text{if $i=j$,}\\ \pi_j & \text{if $i \neq j$,} \end{cases} \end{eqnarray*} and \begin{eqnarray*} \Phi_i^- &:& \mathbb{C}H_0(W) \rightarrow \mathbb{C}H_0(W_{I\setminus \{i\}}) \\ \Phi_i^-(\pi_j) &=& \begin{cases} 0 & \text{if $i=j$,}\\ \pi_j & \text{if $i \neq j$.} \end{cases} \end{eqnarray*} One can easily check that these assignments extend to algebra morphisms from $\mathbb{C}H_0(W)$ to $\mathbb{C}H_0(W_{I\setminus \{i\}})$. For any $J$, define $\Phi_J^+$ as the composition of the maps $\Phi_i^+$ for $i\in J$, and define $\Phi_J^-$ analogously (the map $\Phi_J^+$ is the \emph{parabolic map} studied by Billey, Fan, and Losonczy~\cite{Billey_Fan_Lsonczy.1999}). Then, the simple representations of $H_0(W)$ are given by the maps $\lambda_J = \Phi_J^+ \circ \Phi_{\hat{J}}^-$, where $\hat{J}=I\setminus J$. Each $\lambda_J$ is clearly a one-dimensional representation. \end{example} \subsection{Orthogonal idempotents} \label{ss.orthogonal.idempotents} We describe here a decomposition of the identity of the semi-simple quotient into minimal orthogonal idempotents. We include a proof for the sake of completeness, though the result is classical. It appears for example in a combinatorial context in~\cite[Section 3.9]{Stanley.1999.EnumerativeCombinatorics1} and in the context of semi-groups in~\cite{Solomon.1967,Steinberg.2006.Moebius}. For $e\in \idempMon$, define \begin{equation} \label{equation.g} g_e:=\sum_{e'\leq_\mathcal{J} e} \mu_{e',e} e'\,, \end{equation} where $\mu$ is the M\"obius function of $\leq_\mathcal{J}$, so that \begin{equation} \label{equation.e} e=\sum_{e'\leq_\mathcal{J} e} g_{e'}\,.
\end{equation} \begin{proposition} The family $\{g_e\mid e\in \idempMon\}$ is the unique maximal decomposition of the identity into orthogonal idempotents in $(\mathbb{K}\idempMon, \star) \cong \mathbb{K}M/\operatorname{rad}\mathbb{K}M$. \end{proposition} \begin{proof} First note that $1_M = \sum_e g_e$ by~\eqref{equation.e}. Consider now the new product $\bullet$ on $\mathbb{K}\idempMon = \mathbb{K}\{g_e \mid e\in \idempMon\}$ defined by $g_u\bullet g_v = \delta_{u,v} g_u$. Then, \begin{displaymath} u \bullet v = \sum_{u'\leq_\mathcal{J} u} g_{u'} \bullet \sum_{v'\leq_\mathcal{J} v} g_{v'} = \sum_{w'\leq_\mathcal{J} u \wedge_\mathcal{J} v} g_{w'} = u \wedge_\mathcal{J} v = u \star v\,. \end{displaymath} Hence the product $\bullet$ coincides with $\star$. Uniqueness follows from semi-simplicity and the fact that all simple modules are one-dimensional. \end{proof} \subsection{Lifting the idempotents} \label{ss.lifting} In the following we will need a decomposition of the identity in the algebra of the monoid with some particular properties. The goal of this section is to construct such a decomposition. The idempotent lifting is a well-known technique (see~\cite[Chapter 7.7]{curtis_reiner.1962}); however, we prove the result from scratch in order to obtain a lifting with these particular properties. Moreover, the proof provided here is very constructive. \begin{theorem} \label{theorem.idempotents.lifting} Let $M$ be a $\mathcal{J}$-trivial monoid. There exists a family $(f_e)_{e\in \idempMon}$ of elements of $\mathbb{K}M$ such that \begin{itemize} \item $(f_e)$ is a decomposition of the identity of $\mathbb{K}M$ into orthogonal idempotents: \begin{equation} 1 = \sum_{e\in \idempMon} f_e \qquad\text{with}\qquad f_ef_{e'} = \delta_{e,e'} f_e\,.
\end{equation} \item $(f_e)$ is compatible with the semi-simple quotient: \begin{equation} \phi(f_e) = g_e \quad \text{with $\phi$ as in Corollary~\ref{corollary.triangular-radical}.} \end{equation} \item $(f_e)$ is uni-triangular with respect to the $\mathcal{J}$-order of $M$: \begin{equation} f_e = e + \sum_{x<_\mathcal{J} e} c_{x,e} x \end{equation} for some scalars $c_{x,e}$. \end{itemize} \end{theorem} This theorem will follow directly from Proposition~\ref{proposition.unitriangular} below. In the proof, we will use the following proposition: \begin{proposition}\label{proposition.idempotents.lifting} Let $A$ be a finite-dimensional $\mathbb{K}$-algebra and $\phi$ the canonical algebra morphism from $A$ to $A/\operatorname{rad} A$. Let $x\in A$ be such that $e=\phi(x)$ is idempotent. Then, there exists a polynomial $P\in x\mathbb{Z}[x]$ (i.e. without constant term) such that $y=P(x)$ is idempotent and $\phi(y) = e$. Moreover, one can choose $P$ so that it only depends on the dimension of $A$ (and not otherwise on $x$ or $A$). \end{proposition} Let us start with two lemmas, where we keep the same assumptions as in Proposition~\ref{proposition.idempotents.lifting}, namely $x\in A$ such that $\phi(x)=e$ is an idempotent: \begin{lemma}\label{lemma.x.nilpotent} $x(x-1)$ is nilpotent: $(x(x-1))^u = 0$ for some $u$. \end{lemma} \begin{proof} $e=\phi(x)$ is idempotent so that $e(e-1) = 0$. Hence $x(x-1)\in\operatorname{rad} A$ and is therefore nilpotent. \end{proof} For any number $a$, denote by $\lceil a\rceil$ the smallest integer greater than or equal to $a$. \begin{lemma} Suppose that $(x(x-1))^u=0$ and define $y := 1 - (1-x^2)^2 = 2x^2-x^4$. Then $(y(y-1))^v=0$ with $v=\lceil\frac{u}{2}\rceil$. \end{lemma} \begin{proof} It suffices to expand and factor $y(y-1) = x^2 (x - 1)^2 (x + 1)^2 (x^2 - 2)$. Therefore $(y(y-1))^v$ is divisible by $(x(x-1))^{2v}$, hence by $(x(x-1))^u$ since $2v\geq u$, and must vanish.
\end{proof} \begin{proof}[Proof of Proposition~\ref{proposition.idempotents.lifting}] Define $y_0:=x$ and $y_{n+1}:= 1-(1-y_n^2)^2$. Then by Lemma~\ref{lemma.x.nilpotent} there is a $u_0$ such that $(y_0(y_0-1))^{u_0} = 0$. Define $u_{n+1}=\lceil\frac{u_n}{2}\rceil$. Clearly there is an $N$ such that $u_N = 1$. Then let $y=y_N$. Clearly $y$ is a polynomial in $x$ and $y(y-1) = 0$ so that $y$ is idempotent. Finally if $\phi(y_n) = e$ then \begin{equation} \phi(y_{n+1}) = \phi(1-(1-y_n^2)^2) = 1 -(1-e^2)^2 = 1-(1-e^2) = e\,, \end{equation} so that $\phi(y) = e$ by induction. Note that the nilpotency order $u_0$ is at most the dimension of the algebra. Hence the choice $N=\lceil\log_2(\dim(A))\rceil$ is correct for all $x\in A$. \end{proof} In practical implementations, the given bound is much too large. A better method is to test during the iteration of $y_{n+1}:= 1-(1-y_n^2)^2$ whether $y_n^2=y_n$ and to stop if it holds. For a given $\mathcal{J}$-trivial monoid, we choose $P$ according to the size of the monoid and therefore, for a given $x$, denote by $P(x)$ the corresponding idempotent. Recall that in the semi-simple quotient, Equation~\eqref{equation.g} defines a maximal decomposition of the identity $1 = \sum_{e\in\idempMon} g_e$ using the M\"obius function. Furthermore, $g_e$ is uni-triangular, and by Lemma~\ref{lemma.j.idemp}, $g_e = eg_e=g_ee$. \medskip Now pick an enumeration (that is, a total ordering) of the set of idempotents: \begin{equation} \idempMon = \{e_1, e_2, \dots, e_k\} \qquad \text{and} \qquad g_i := g_{e_i}\,. \end{equation} Then define recursively \begin{gather} f_1 := P(g_1),\quad f_2 := P\left((1-f_1)g_2(1-f_1)\right),\quad \dots \\ \text{and for $i>1$,}\quad f_i := P\left((1-\sum_{j<i} f_j)g_i(1-\sum_{j<i} f_j)\right).
\end{gather} We are now in a position to prove Theorem~\ref{theorem.idempotents.lifting}: \begin{proposition} \label{proposition.unitriangular} The $f_i$ defined above form a uni-triangular decomposition of the identity compatible with the semi-simple quotient. \end{proposition} \begin{proof} First it is clear that the $f_i$ are pairwise orthogonal idempotents. Indeed, since $P$ has no constant term one can write $f_i$ as \begin{equation} f_i = (1-\sum_{j<i} f_j)U \end{equation} for some $U\in\mathbb{K}M$. Now, assuming that the $(f_j)_{j<i}$ are orthogonal, the product $f_k f_i$ with $k<i$ must vanish since $f_k(1-\sum_{j<i} f_j) = f_k - f_k = 0$. Therefore one obtains by induction that for all $j< i$, $f_j f_i = 0$. The same reasoning shows that $f_i f_j = 0$ with $j<i$. Next, assuming that $\phi(f_j) = g_j$ holds for all $j<i$, one has \begin{equation} \phi\left((1-\sum_{j<i} f_j)g_i(1-\sum_{j<i} f_j)\right) = (1-\sum_{j<i} g_j)g_i(1-\sum_{j<i} g_j) = g_i\,. \end{equation} As a consequence $\phi(f_i) = \phi(P(g_i)) = P(\phi(g_i)) = g_i$. Hence, by induction, $\phi(f_i) = g_i$ holds for all $i$. Now $\phi(\sum_i f_i) = \sum_i g_i = 1$. As a consequence $1 - \sum_i f_i$ lies in the radical and must therefore be nilpotent. But, by orthogonality of the $f_i$, it must be idempotent as well: \begin{multline} (1 - \sum_i f_i)^2 = 1 - 2\sum_i f_i + (\sum_i f_i)^2 = 1 - 2\sum_i f_i + \sum_i f_i^2 =\\ 1 - 2\sum_i f_i + \sum_i f_i = 1 - \sum_i f_i\,. \end{multline} The only possibility is that $1 - \sum_i f_i = 0$. It remains to show triangularity. Since the polynomial $P$ has no constant term, $f_i$ is of the form $f_i = Ag_iB$ for $A,B\in \mathbb{K}M$. One can therefore write $f_i = Ae_ig_iB$. By definition of the $\mathcal{J}$-order, any element of the monoid appearing with a nonzero coefficient in $f_i$ must be smaller than or equal to $e_i$.
Finally, using $\phi$ one shows that the coefficient of $e_i$ in $f_i$ must be $1$: the coefficient of $e_i$ in $g_i$ is $1$, and if $x <_\mathcal{J} e_i$ then $\phi(x) = x^{\omega}<_\mathcal{J} e_i$. \end{proof} \subsection{The Cartan matrix and indecomposable projective modules} \label{ss.cartan} In this subsection, we give a combinatorial description of the Cartan invariants of a $\mathcal{J}$-trivial monoid as well as its left and right indecomposable projective modules. The main ingredients are the notions of $\operatorname{lfix}$ and $\operatorname{rfix}$, which generalize left and right descent classes in $H_0(W)$. \begin{proposition} \label{proposition.aut} For any $x\in M$, the set \begin{equation} \raut{x}:=\{u\in M \mid xu=x\} \end{equation} is a submonoid of $M$. Moreover, its $\mathcal{J}$-smallest element $\rfix{x}$ is the unique idempotent such that \begin{equation} \raut{x} = \{u\in M \mid \rfix{x} \leq_\mathcal{J} u\}\,. \end{equation} The same holds for the left: there exists a unique idempotent $\lfix{x}$ such that \begin{equation} \label{eq.laut} \laut{x} := \{u\in M \mid ux=x\} = \{u\in M \mid \lfix{x} \leq_\mathcal{J} u\}\,. \end{equation} \end{proposition} \begin{proof} The reasoning is clearly the same on the left and on the right; we treat the right case. The fact that $\raut{x}$ is a submonoid is clear. Pick an arbitrary order on $\raut{x}$ and define \begin{equation} r := \left(\prod_{u\in\raut{x}} u\right)^\omega\,. \end{equation} Clearly, $r$ is an idempotent which belongs to $\raut{x}$. Moreover, by the definition of $r$, for any $u\in\raut{x}$, the inequality $r\leq_\mathcal{J} u$ holds. Hence $\rfix{x} = r$ exists. Finally it is unique by antisymmetry of $\leq_\mathcal{J}$ (since $M$ is $\mathcal{J}$-trivial).
\end{proof} Note that, by Lemma~\ref{lemma.j.idemp}, \begin{align} \label{eq.rfix} \rfix{x} &= \min \{e\in \idempMon \mid xe=x\}\,, \\ \label{eq.lfix} \lfix{x} & = \min \{e\in \idempMon \mid ex=x\}\,, \end{align} the $\min$ being taken with respect to the $\mathcal{J}$-order. These are called the \textit{right} and \textit{left symbol} of $x$, respectively. We recover some classical properties of descents: \begin{proposition} \label{proposition.lfix.decreasing} $\operatorname{lfix}$ is decreasing for the $\mathcal{R}$-order. Similarly, $\operatorname{rfix}$ is decreasing for the $\mathcal{L}$-order. \end{proposition} \begin{proof} By definition, $\lfix{a}ab = ab$, so that $\lfix{a}\in\laut{ab}$. One concludes that $\lfix{ab}\le_\mathcal{R}\lfix{a}$. \end{proof} \bigskip \subsubsection{The Cartan matrix} We can now state the key technical lemma toward the construction of the Cartan matrix and indecomposable projective modules. \begin{lemma} For any $x\in M$, the tuple $(\lfix{x}, \rfix{x})$ is the unique tuple $(i,j)$ in $\idempMon \times \idempMon$ such that $x$ appears with a nonzero coefficient in both $f_i x$ and $x f_j$. \end{lemma} \begin{proof} By Proposition~\ref{proposition.simple}, for any $y\in\mathbb{K}M$, the coefficient of $x$ in $xy$ is the same as the coefficient of $\epsilon_x$ in $\epsilon_xy$. Now since $S_x$ is a simple module, the action of $y$ on it is the same as the action of $\phi(y)$. As a consequence, $\epsilon_x f_{\rfix{x}} = \epsilon_x g_{\rfix{x}}$. Now $\epsilon_x \rfix{x} = \epsilon_x$, and $\epsilon_x e=0$ for any $e<_\mathcal{J}\rfix{x}$, so that $\epsilon_x g_{\rfix{x}} = \epsilon_x$ and $\epsilon_x f_{\rfix{x}} = \epsilon_x$. It remains to prove the uniqueness of $j$. We need to prove that for any $e\neq\rfix{x}$, the coefficient of $x$ in $x f_e$ is zero.
Since this coefficient is equal to the coefficient of $\epsilon_x$ in $\epsilon_x f_e$, it must be zero because $\epsilon_x f_e = \epsilon_x f_{\rfix{x}} f_e = \epsilon_x \cdot 0 = 0$ by the orthogonality of the $f_i$. The same reasoning applied on the left proves the uniqueness of $i$. \end{proof} During the proof, we have seen that the coefficient is actually $1$: \begin{corollary}\label{corollary.bx.triang} For any $x\in M$, we denote $b_x := f_{\lfix{x}} x f_{\rfix{x}}$. Then, \begin{equation} b_x = x + \sum_{y <_\mathcal{J} x} c_y y\,, \end{equation} with $c_y\in \mathbb{K}$. Consequently, $(b_x)_{x\in M}$ is a basis for $\mathbb{K}M$. \end{corollary} \begin{theorem} The Cartan matrix of $\mathbb{K}M$ defined by $c_{i,j} := \dim(f_i \mathbb{K}M f_j)$ for $i,j\in \idempMon$ is given by $c_{i,j} = |C_{i,j}|$, where \begin{equation} C_{i,j} := \{x\in M \mid i=\lfix{x}\text{ and } j=\rfix{x} \}\,. \end{equation} \end{theorem} \begin{proof} For any $i,j\in \idempMon$ and $x\in C_{i,j}$, it is clear that $b_x$ belongs to $f_i\mathbb{K}M f_j$. Now because $(b_x)_{x\in M}$ is a basis of $\mathbb{K}M$ and since $\mathbb{K}M =\bigoplus_{i,j\in \idempMon} f_i\mathbb{K}M f_j$, it must be true that $(b_x)_{x\in C_{i,j}}$ is a basis for $f_i\mathbb{K}M f_j$. \end{proof} \begin{example}[Representation theory of $H_0(W)$, continued] Recall that the left and right descent sets and content of $w\in W$ can be respectively defined by: \begin{eqnarray*} D_L(w) &=& \{ i\in I \mid \ell(s_iw) < \ell(w) \}, \\ D_R(w) &=& \{ i\in I \mid \ell(w s_i) < \ell(w) \}, \\ \operatorname{cont}(w) &=& \{i \in I \mid \text{$s_i$ appears in some reduced word for $w$} \}, \end{eqnarray*} and that the above conditions on $s_iw$ and $ws_i$ are respectively equivalent to $\pi_i \pi_w =\pi_w$ and $\pi_w\pi_i = \pi_w$. Furthermore, writing $w_J$ for the longest element of the parabolic subgroup $W_J$, so that $\pi_J=\pi_{w_J}$, one has $\operatorname{cont}(w_J)=D_L(w_J)$, or equivalently $\operatorname{cont}(w_J)=D_R(w_J)$.
Then, for any $w\in W$, we have $\pi_w^\omega=\pi_{\operatorname{cont}(w)}$, $\lfix{\pi_w}=\pi_{D_L(w)}$, and $\rfix{\pi_w}=\pi_{D_R(w)}$. Thus, the entry $c_{J,K}$ of the Cartan matrix is given by the number of elements $w\in W$ with $D_L(w)=J$ and $D_R(w)=K$. \end{example} \bigskip \subsubsection{Projective modules} By the same reasoning we have the following corollary: \begin{corollary} The family $\{b_x \mid \lfix{x} = e\}$ is a basis for the right projective module associated to $S_e$. \end{corollary} Actually one can be more precise: the projective modules are combinatorial. \begin{theorem} \label{theorem.projective_modules} For any idempotent $e$ denote by $R(e) = eM$, \begin{equation*} R_=(e) = \{x\in eM\mid \lfix{x} = e\} \quad\text{and}\quad R_<(e) = \{x\in eM\mid \lfix{x} <_\mathcal{R} e\} \, . \end{equation*} Then, the projective module $P_e$ associated to $S_e$ is isomorphic to $\mathbb{K} R(e)/\mathbb{K} R_<(e)$. In particular, the projective module $P_e$ is combinatorial: taking as basis the image of $R_=(e)$ in the quotient, the action of $m\in M$ on $x\in R_=(e)$ is given by: \begin{equation} x \cdot m = \begin{cases} xm &\text{if $\lfix{xm} = e$,}\\ 0 & \text{otherwise}. \end{cases} \end{equation} \end{theorem} \begin{proof} By Proposition~\ref{proposition.lfix.decreasing}, $R(e)$ and $R_<(e)$ are right ideals in the monoid, so that $A := \mathbb{K} R(e)/\mathbb{K} R_<(e)$ is a right $M$-module. In order to show that $A$ is isomorphic to $P_e$, we first show that $A/\operatorname{rad} A$ is isomorphic to $S_e$ and then use projectivity and dimension counting to conclude the statement. We claim that \begin{equation} \label{equation.radA} \mathbb{K} (R_=(e) \backslash \{e\})\subseteq \operatorname{rad} A\,. \end{equation} Indeed, take $x\in R_=(e) \backslash \{e\}$. Then, $x^\omega$ is in $\mathbb{K} R_<(e)$ since $\lfix{x^\omega}=x^\omega \leq_\mathcal{R} x <_\mathcal{R} e$.
It follows that, in $A$, $x=x-x^\omega=e(x-x^\omega)$ which, by Proposition~\ref{proposition.basis.radical}, is in $\operatorname{rad} A$. Since $\operatorname{rad} A\subset A$, the inclusion in \eqref{equation.radA} is in fact an equality, and $A / \operatorname{rad} A$ is isomorphic to $S_e$. Then, by the definition of projectivity, any isomorphism from $S_e = P_e/\operatorname{rad} P_e$ to $A/\operatorname{rad} A$ extends to a surjective morphism from $P_e$ to $A$ which, by dimension count, must be an isomorphism. \end{proof} \begin{figure} \includegraphics[width=\textwidth]{h0s4-proj.pdf} \caption{The decomposition of $H_0(\sg[4])$ into indecomposable right projective modules. This decomposition follows the partition of $\sg[4]$ into left descent classes, each labelled by its descent set $J$. The blue, red, and green lines indicate the action of $\pi_1, \pi_2,$ and $\pi_3$ respectively. The darker circles indicate idempotent elements of the monoid. } \label{h0s4.projectives} \end{figure} \begin{example}[Representation theory of $H_0(W)$, continued] \label{example.zero.hecke.projectives} The right projective modules of $H_0(W)$ are combinatorial, and described by the decomposition of the right order along left descent classes, as illustrated in Figure~\ref{h0s4.projectives}. Namely, let $P_J$ be the right projective module of $H_0(W)$ corresponding to the idempotent $\pi_J$. Its basis $\{b_w\}$ is indexed by the elements $w$ of $W$ having $J$ as left descent set. The action of $\pi_i$ coincides with the usual right action, except that $b_w.\pi_i=0$ if $w.\pi_i$ has a strictly larger left descent set than $w$. Here we reproduce Norton's construction of $P_J$~\cite{Norton.1979}, as it is close to an explicit description of the isomorphism in the proof of Theorem~\ref{theorem.projective_modules}. First, notice that the elements $\{\pi_i^-=(1-\pi_i) \mid i\in I\}$ are idempotent and satisfy the same Coxeter relations as the $\pi_i$.
Thus, the set $\{\pi_i^-\}$ generates a monoid isomorphic to $H_0(W)$. For each $J\subseteq I$, let $\pi_J^-$ be the longest element of the parabolic submonoid generated by $\{\pi_i^- \mid i\in J\}$, and $\pi_J^+=\pi_J$. For each subset $J\subseteq I$, let $\hat{J}=I\setminus J$. Define $f_J=\pi_{\hat{J}}^-\pi_J^+$. Then, $f_J \pi_w=0$ if $J\subset D_L(w)$. It follows that the right module $f_J H_0(W)$ is isomorphic to $P_J$ and its basis $\{ f_J\pi_w\mid D_L(w)=J\}$ realizes the combinatorial module of $P_J$. One should notice that the elements $\pi_{\hat{J}}^-\pi_J^+$ are, in general, neither idempotent nor orthogonal. Furthermore, $\pi_{\hat{J}}^-\pi_J^+H_0(W)$ is not a submodule of $\pi_J H_0(W)$ as in the proof of Theorem~\ref{theorem.projective_modules}. The description of left projective modules is symmetric. \end{example} \subsection{Factorizations} \label{ss.factorizations} It is well-known that the notions of factorization $x=uv$ and of irreducibility play an important role in the study of $\mathcal{J}$-trivial monoids $M$. For example, the irreducible elements of $M$ form the unique minimal generating set of $M$~\cite{Doyen.1984,Doyen.1991}. In this section, we further refine these notions, in order to obtain in the next section a combinatorial description of the quiver of the algebra of $M$. \bigskip Let $x$ be an element of $M$, and $e:=\lfix{x}$ and $f:=\rfix{x}$. By Proposition~\ref{proposition.aut}, if $x=uv$ is a factorization of $x$ such that $eu=e$ (or equivalently $e \leq_\mathcal{J} u$), then $u\in\laut{x}$, that is $ux=x$. Similarly on the right side, $vf = f$ implies that $xv = x$. The existence of such trivial factorizations for any element of $M$, beyond the usual $x=1x=x1$, motivates the introduction of refinements of the usual notion of proper factorization. \begin{definition} Take $x\in M$, and let $e:=\lfix{x}$ and $f:=\rfix{x}$.
A factorization $x=uv$ is \begin{itemize} \item \emph{proper} if $u \neq x$ and $v \neq x$; \item \emph{non-trivial} if $eu \neq e$ and $vf \neq f$ (or equivalently $e \not\le_\mathcal{J} u$ and $f \not \le_\mathcal{J} v$, or $u\notin \laut{x}$ and $v\notin \raut{x}$); \item \emph{compatible} if $u$ and $v$ are non-idempotent and \begin{equation*} \lfix{u} = e\,,\quad \rfix{v} = f\,\quad\text{and}\quad \rfix{u} =\lfix{v}\,. \end{equation*} \end{itemize} \end{definition} \begin{example} Among the factorizations of $\pi_2\pi_1\pi_3\pi_2$ in $H_0(\sg[4])$, the following are non-proper and trivial: \begin{equation*} (\operatorname{id}, \pi_2\pi_1\pi_3\pi_2)\quad (\pi_2, \pi_2\pi_1\pi_3\pi_2)\quad (\pi_2\pi_1\pi_3\pi_2, \operatorname{id})\quad (\pi_2\pi_1\pi_3\pi_2, \pi_2)\,. \end{equation*} The two following factorizations are proper and trivial: \begin{equation*} (\pi_2, \pi_1\pi_3\pi_2)\quad (\pi_2\pi_1\pi_3, \pi_2)\,. \end{equation*} Here are the non-trivial and incompatible factorizations: \begin{equation*} \begin{array}{ccc} (\pi_2\pi_1, \pi_3\pi_2) & (\pi_2\pi_3, \pi_1\pi_2) & (\pi_2\pi_1, \pi_1\pi_3\pi_2) \\ (\pi_2\pi_3, \pi_1\pi_3\pi_2) & (\pi_2\pi_1\pi_3, \pi_1\pi_2) & (\pi_2\pi_1\pi_3, \pi_3\pi_2)\,. \end{array} \end{equation*} The only non-trivial and compatible factorization is: \begin{equation*} (\pi_2\pi_1\pi_3, \pi_1\pi_3\pi_2)\,. \end{equation*} \end{example} \begin{lemma} \label{lemma.factorization.nontrivial.proper} Any non-trivial factorization is also proper. \end{lemma} \begin{proof} Indeed by contraposition, if $x=xv$ then $v\in\raut{x}$ and therefore $\rfix{x} \le_\mathcal{J} v$. The case $x=vx$ can be proved similarly. \end{proof} \begin{lemma} If $x$ is an idempotent, $x$ admits only trivial factorizations. \end{lemma} \begin{proof} Indeed if $x$ is idempotent then $x=\rfix{x}=\lfix{x}$. Then from $x=uv$, one obtains that $x=xuv$. Therefore $x \le_\mathcal{J} xu \le_\mathcal{J} x$ and therefore $x=xu$. 
\end{proof} \begin{lemma} Any compatible factorization is non-trivial. \end{lemma} \begin{proof} Let $x=uv$ be a compatible factorization. Then $\lfix{u} = e$ implies that $eu=u$. Since $u$ is not idempotent it cannot be equal to $e$ so that $eu \neq e$. The same holds on the other side. \end{proof} We order the factorizations of $x$ by the product $\mathcal{J}$-order: Suppose that $x=uv=u'v'$. Then we write $(u, v) \leq_\mathcal{J} (u', v')$ if and only if $u\leq_\mathcal{J} u'$ and $v\leq_\mathcal{J} v'$. \begin{lemma} If $x = uv$ is a non-trivial factorization which is minimal for the product $\mathcal{J}$-order, then it is compatible. \end{lemma} \begin{proof} Let $x = uv$ be a minimal non-trivial factorization. Then $(eu, vf)$ with $e=\lfix{x}$ and $f=\rfix{x}$ is a factorization of $x$ which is also clearly non-trivial. By minimality we must have that $u=eu$ and $v=vf$. On the other hand, $\lfix{u}x = \lfix{u}uv = uv = x$, so that $e=\lfix{x}\leq_\mathcal{J}\lfix{u}$ and therefore $e=\lfix{u}$. This in turn implies that $u$ is non-idempotent since it is different from its left fix. The same holds on the right side. It remains to show that $\rfix{u} =\lfix{v}$. If $g$ is an idempotent such that $ug=u$, then $x=u(gv)$ is a non-trivial factorization, because $gvf\leq_\mathcal{J} vf <_\mathcal{J} f$ so that $gvf \neq f$. Therefore by minimality, $gv=v$. By symmetry $ug=u$ is equivalent to $gv=v$. \end{proof} Putting together these two last lemmas we obtain: \begin{proposition} Take $x\in M$. Then the following are equivalent: \begin{enumerate} \item $x$ admits a non-trivial factorization; \item $x$ admits a compatible factorization. \end{enumerate} \end{proposition} \begin{definition} An element is called \emph{irreducible} if it admits no proper factorization. The set of all irreducible elements of a monoid $M$ is denoted by $\irr$. An element is called \emph{c-irreducible} if it admits no non-trivial factorization. 
The set of all c-irreducible elements of a monoid $M$ is denoted by $\cirr$. We also denote by $\quiv$ the set of c-irreducible non-idempotent elements. \end{definition} \begin{remark} \label{remark.factorization.irr.cirr} By Lemma~\ref{lemma.factorization.nontrivial.proper}, $\irr\subseteq \cirr$. In particular $\cirr$ generates $M$. \end{remark} \bigskip \subsection{The Ext-quiver} \label{ss.quiver} The goal of this section is to give a combinatorial description of the quiver of the algebra of a $\mathcal{J}$-trivial monoid. We start by recalling some well-known facts about algebras and quivers.\medskip Recall that a quiver $Q$ is a directed graph where loops and multiple arrows between two vertices are allowed. The path algebra $\mathbb{K} Q$ of $Q$ is defined as follows. A path in $Q$ is a sequence of arrows $a_n a_{n-1} \cdots a_3 a_2 a_1$ such that the head of $a_{i+1}$ is equal to the tail of $a_i$. The product of the path algebra is defined by concatenating paths if tail and head match, and by zero otherwise. Let $F$ denote the ideal in $\mathbb{K} Q$ generated by the arrows of $Q$. An ideal $I \subseteq \mathbb{K} Q$ is said to be \emph{admissible} if there exists an integer $m\geq2$ such that $F^m \subseteq I \subseteq F^2$. An algebra $A$ is called \emph{split basic} if all the simple $A$-modules are one-dimensional. The relevance of quivers comes from the following theorem: \begin{theorem}[See e.g.~\cite{Auslander.Reiten.Smaloe.1997}] For any finite-dimensional split basic algebra $A$, there is a unique quiver $Q$ such that $A$ is isomorphic to $\mathbb{K} Q/I$ for some admissible ideal $I$. \end{theorem} In other words, the quiver $Q$ can be seen as a first-order approximation of the algebra $A$. Note however that the ideal $I$ is not necessarily unique.\medskip The quiver of a split basic $\mathbb{K}$-algebra $A$ can be computed as follows: Let $\{f_i\mid i\in E\}$ be a complete system of primitive orthogonal idempotents.
There is one vertex $v_i$ in $Q$ for each $i\in E$. If $i,j \in E$, then the number of arrows in $Q$ from $v_i$ to $v_j$ is $\dim f_i\big(\operatorname{rad} A/\mathord{\operatorname{rad}^2 A}\big)f_j$. This construction does not depend on the chosen system of idempotents. \begin{theorem} Let $M$ be a $\mathcal{J}$-trivial monoid. The quiver of the algebra of $M$ is the following: \begin{itemize} \item There is one vertex $v_e$ for each idempotent $e\in\idempMon$. \item There is an arrow from $v_{\lfix{x}}$ to $v_{\rfix{x}}$ for every c-irreducible element $x\in\quiv$. \end{itemize} \end{theorem} This theorem follows from Corollary~\ref{corollary.quiver_arrow} below. \begin{lemma} \label{lemma.factorization_ex} Let $x\in \quiv$ and set $e=\lfix{x}$ and $f=\rfix{x}$. Recall that, by definition, whenever $x = uv$, then either $eu = e$ or $vf = f$. Then, \begin{equation} [x,e[_\mathcal{R} = \{ u \in M \mid eu = u \ne e \text{ and } uf = x \}. \end{equation} \end{lemma} \begin{proof} Obviously, $\{ u \in M \mid eu = u \ne e \text{ and } uf = x \} \subseteq [x,e[_\mathcal{R}$. Now take $u\in [x,e[_\mathcal{R}$. Then, $u=ea$ for some $a\in M$ and hence $eu=eea =ea= u\ne e$. Furthermore, we can choose $v$ such that $x = uv$ with $vf = v$. Since $x$ admits no non-trivial factorization, we must have $v=f$. \end{proof} \begin{proposition} Take $x\in\quiv$ and let $e:=\lfix{x}$ and $f:=\rfix{x}$. Then, there exists a combinatorial module $M_x$ with basis $\epsilon=\epsilon_x, \xi=\xi_x$ and action given by \begin{align} \epsilon \cdot m &:= \begin{cases} \epsilon & \text{if $m\in [e,1]_\mathcal{R}$}\\ \xi & \text{if $m\in [x,1]_\mathcal{R} \setminus [e,1]_\mathcal{R}$}\\ 0 & \text{otherwise,} \end{cases} \qquad \text{and}\\ \xi \cdot m &:= \begin{cases} \xi & \text{if $m\in [f,1]_\mathcal{R}$}\\ 0 & \text{otherwise.} \end{cases} \end{align} This module of dimension $2$ is indecomposable, with composition factors given by $[e] + [f]$. 
\end{proposition} \begin{proof} We give a concrete realization of $M_x$. Let $I_x := eM \setminus [x,e]_\mathcal{R}$. This is a right ideal, and we endow the interval $[x,e]_\mathcal{R}$ with the quotient structure of $eM / I_x$. The second step is to further quotient this module by identifying all elements in $[x,e[_\mathcal{R}$. Namely, define \begin{equation} \begin{split} \Theta: [x,e]_\mathcal{R} &\rightarrow M_x\\ e &\mapsto \epsilon\\ u &\mapsto \xi \quad \text{for $u\in [x,e[_\mathcal{R}.$} \end{split} \end{equation} It remains to prove that this map is compatible with the right action of $M$. This boils down to checking that, for $u\in [x,e[_\mathcal{R}$ and $y\in M$: \begin{equation} uy \in [x,e[_\mathcal{R} \quad \Longleftrightarrow \quad y \in [f,1]_\mathcal{R} \; . \end{equation} Recall that, by Lemma~\ref{lemma.factorization_ex}, $uf=x$. Hence, for $y\in [f,1]_\mathcal{R}$, $uy \ge_\mathcal{R} uf = x$. Also, since $u\in [x,e[_\mathcal{R}$ we have that $uy<_\mathcal{R} e$. Now take $y$ such that $uy\in [x,e[_\mathcal{R}$, and let $v=yf$. Then $uv = uyf = x$, while $v=vf$. Therefore, since $x$ is c-irreducible, $v=f$. \end{proof} \begin{corollary}\label{corollary.irr.free} The family $(x-x^\omega)_{x\in\quiv}$ is free modulo $\operatorname{rad}^2\mathbb{K}M$. \end{corollary} \begin{proof} We use a triangularity argument: if some $y\in\mathbb{K}M$ lies in $\operatorname{rad}^2\mathbb{K}M$, it must act by zero on every module annihilated by the square of the radical. In particular it must act by zero on all $2$-dimensional modules. Suppose that \begin{equation} \sum_{x\in\quiv} c_x(x-x^\omega) \end{equation} with $c_x\in\mathbb{K}$ acts by zero on all the previously constructed modules $M_x$. Suppose that some $c_x$ is nonzero and choose such an $x_0$ maximal in $\mathcal{J}$-order. Consider the module $M := M_{x_0}$. Since $x_0\in\quiv$, $x_0$ is not idempotent so that $x_0^\omega \leq_\mathcal{J} x_0 <_\mathcal{J} \rfix{x_0}$. 
As a consequence \begin{equation} \epsilon_{x_0}\cdot x_0 = \xi_{x_0} \qquad\text{and}\qquad \epsilon_{x_0}\cdot x_0^{\omega} = 0\,. \end{equation} Moreover, if $x$ is not bigger than $x_0$ in $\mathcal{J}$-order, then $x$ is also not bigger than $x_0$ in $\mathcal{R}$-order, so that $\epsilon_{x_0}\cdot x = 0$. Therefore \begin{equation} \epsilon_{x_0} \cdot \left(\sum_{x\in\quiv} c_x(x-x^\omega)\right) = c_{x_0} \xi_{x_0} \end{equation} which must vanish in contradiction with the assumption. \end{proof} We now show that the square radical $\operatorname{rad}^2\mathbb{K}M$ is at least as large as the number of factorizable elements: \begin{proposition} Suppose that $x=uv$ is a non-trivial factorization of $x$. Then \begin{equation} (u-u^\omega)(v-v^\omega) = x + \sum_{y<_\mathcal{J} x} c_y y \end{equation} for some scalars $c_y \in \mathbb{K}$. \end{proposition} \begin{proof} We need to show that $u^\omega v$ and $uv^\omega$ are both different from $x$. Suppose that $u^\omega v = x$. Then $u^\omega x=x$ so that $\lfix{x} \leq_\mathcal{J} u^\omega$. Since $u \lfix{x} \in \laut{x}$, we have $\lfix{x} \le_\mathcal{J} u \lfix{x} \le_\mathcal{J} \lfix{x}$. Thus $u \lfix{x} = \lfix{x}$ contradicting the non-triviality of the factorization $uv$. The same reasoning shows that $uv^\omega <_\mathcal{J} x$. \end{proof} \begin{corollary} \label{corollary.quiver} The family $(x-x^\omega)_{x\in\quiv}$ is a basis of $\operatorname{rad}\mathbb{K}M/\operatorname{rad}^2\mathbb{K}M$. \end{corollary} \begin{proof} By Corollary~\ref{corollary.irr.free} we know that $\operatorname{rad}\mathbb{K}M/\operatorname{rad}^2\mathbb{K}M$ is at least of dimension $\operatorname{Card}(\quiv)$. We just showed that $\operatorname{rad}^2\mathbb{K}M$ is at least of dimension $\operatorname{Card}(M) - \operatorname{Card}(\idempMon) - \operatorname{Card}(\quiv)$. Therefore all those inequalities must be equalities. 
\end{proof} We conclude with an explicit description of the arrows of the quiver as elements of the monoid algebra. \begin{corollary} \label{corollary.quiver_arrow} For all idempotents $i,j\in\idempMon$, the family $(f_i(x-x^\omega)f_j)$ where $x$ runs over the set of non-idempotent c-irreducible elements such that $\lfix{x}=i$ and $\rfix{x} = j$ is a basis for $f_i \operatorname{rad} \mathbb{K}M f_j$ modulo $\operatorname{rad}^2\mathbb{K}M$. \end{corollary} \begin{proof} By Corollary~\ref{corollary.bx.triang}, one has $(f_i x f_j) = x + \sum_{y <_\mathcal{J} x} c_y y$. Since $x^\omega <_\mathcal{J} x$, such a triangularity must also hold for $(f_i(x-x^\omega)f_j)$. \end{proof} \begin{remark} By Remark~\ref{remark.factorization.irr.cirr}, a $\mathcal{J}$-trivial monoid $M$ is generated by (the labels of) the vertices and the arrows of its quiver. \end{remark} \begin{lemma} \label{lemma.quiver} If $x$ is in the quiver, then it is of the form $x=epf$ with $p$ irreducible, $e = \lfix{x}$, and $f = \rfix{x}$. Furthermore, if $p$ is idempotent, then $x=ef$. \end{lemma} \begin{proof} Since $x=ex=xf$, one can always write $x$ as $x=eyf$. Assume that $y$ is not irreducible, and write $y=uv$ with $u,v<_\mathcal{J} y$. Then, since $x$ is in the quiver, one has either $eu=e$ or $vf=f$, and therefore $x=euf$ or $x=evf$. Repeating the process inductively eventually leads to $x=epf$ with $p$ irreducible. Assume further that $p$ is an idempotent. Then, $x = (e p) (p f)$ and therefore $ep=e$ or $pf=f$. In both cases, $x=ef$. \end{proof} \begin{corollary} \label{corollary.jidem_quiver} In a $\mathcal{J}$-trivial monoid generated by idempotents, the quiver is given by a subset of all products $ef$ with $e$ and $f$ idempotents such that $e$ and $f$ are respectively the left and right symbols of $ef$. 
\end{corollary} \subsection{Examples of Cartan matrices and quivers} \label{ss.quiver.examples} We now use the results of the previous sections to describe the Cartan matrix and quiver of several monoids. Along the way, we discuss briefly some attempts at describing the radical filtration, and illustrate how certain properties of the monoids (quotients, (anti)automorphisms, ...) can sometimes be exploited. \subsubsection{Representation theory of $H_0(W)$ (continued)} We start by recovering the description of the quiver of the $0$-Hecke algebra of Duchamp-Hivert-Thibon~\cite{Duchamp_Hivert_Thibon.2002} in type $A$ and of Fayers~\cite{Fayers.2005} in general type. We further refine it by providing a natural indexing of the arrows of the quiver by certain elements of $H_0(W)$. \begin{proposition} \label{proposition.quiver.hecke} The quiver elements $x\in\quiv$ are exactly the products $x=\pi_J\pi_K$ where $J$ and $K$ are two incomparable subsets of $I$ such that, for any $j\in J\setminus K$ and $k\in K\setminus J$, the generators $\pi_j$ and $\pi_k$ do not commute. \end{proposition} \begin{proof} Recall that the idempotents of $H_0(W)$ are exactly the $\pi_J$ for all subsets $J$ and that by Corollary~\ref{corollary.jidem_quiver}, the c-irreducible elements are among the products $\pi_J\pi_K$. First of all if $J\subseteq K$ then $\pi_J\pi_K=\pi_K\pi_J=\pi_K$ so that, for $\pi_J\pi_K$ to be c-irreducible, $J$ and $K$ have to be incomparable. Now suppose that there exists some $j\in J\setminus K$ and $k\in K\setminus J$ such that $\pi_j\pi_k=\pi_k\pi_j$. Then \begin{equation} \pi_J\pi_K=\pi_J\pi_j\pi_k\pi_K=\pi_J\pi_k\pi_j\pi_K\,. \end{equation} But since $k\notin J$, one has $\pi_J\pi_k\neq\pi_J$. Similarly, $\pi_j\pi_K\neq\pi_K$. This implies that $(\pi_J\pi_k,\pi_j\pi_K)$ is a non-trivial factorization of $\pi_J\pi_K$. Conversely, suppose that there exists a non-trivial factorization $\pi_J\pi_K = uv$. 
Since $\pi_Ju\neq\pi_J$, there must exist some $k \in K\backslash J$ such that $u\le_\mathcal{J}\pi_k$ (or equivalently $\pi_k$ appears in some, and therefore any, reduced word for $u$). Similarly, one can find some $j\in J\backslash K$ such that $v\le_\mathcal{J}\pi_j$. Then, for $<_\mathcal{B}$ as defined in~\eqref{equation.bruhat}, that is, the reversed Bruhat order, we have \begin{displaymath} \pi_J\pi_K = \pi_Juv\pi_K \leq_\mathcal{B} \pi_J\pi_k\pi_j\pi_K \leq_\mathcal{B} \pi_J\pi_K\, , \end{displaymath} and therefore $\pi_J\pi_k\pi_j\pi_K = \pi_J\pi_K$. Hence the left-hand side of this equation can be rewritten into its right-hand side using a sequence of applications of the relations of $H_0(W)$. Notice that using $\pi_i^2=\pi_i$ or any non-trivial braid relation preserves the condition that there exists some $\pi_k$ to the left of some $\pi_j$. Hence rewriting $\pi_J\pi_k\pi_j\pi_K$ into $\pi_J\pi_K$ requires using the commutation relation $\pi_k\pi_j=\pi_j\pi_k$ at some point, as desired. \end{proof} \subsubsection{About the radical filtration} Proposition~\ref{proposition.quiver.hecke} suggests searching for a natural indexing by elements of the monoid not only of the quiver, but of the full \emph{Loewy filtration}. \begin{problem} \label{problem.radical.filtration} Find some statistic $r(m)$ for $m \in M$ such that, for any two idempotents $i,j$ and any integer $k$, \begin{multline} \dim f_i\big(\operatorname{rad}^k A /\operatorname{rad}^{k+1} A\big) f_j =\\ \operatorname{Card}\{m\in M\mid r(m) = k,\ \lfix{m}=i,\ \rfix{m}=j\}\,. 
\end{multline} \end{problem} \begin{table} \begin{equation*} \begin{array}{|c|c|} \hline \text{Type} & \text{Generating series} \\ \hline A_1 & 2 \\ A_2 & 2q + 4 \\ A_3 & 6q^2 + 10q + 8 \\ A_4 & 10q^4 + 24q^3 + 38q^2 + 32q + 16 \\ A_5 & 14q^7 + 48q^6 + 72q^5 + 144q^4 + 172q^3 + 150q^2 + 88q + 32 \\ \hline B_2 & 2q^2 + 2q + 4 \\ B_3 & 6q^4 + 10q^3 + 14q^2 + 10q + 8 \\ B_4 & 12q^8 + 24q^7 + 46q^6 + 60q^5 + 76q^4 + 64q^3 + 54q^2 + 32q + 16 \\ \hline D_2 & 4 \\ D_3 & 6q^2 + 10q + 8 \\ D_4 & 6q^6 + 12q^5 + 20q^4 + 38q^3 + 62q^2 + 38q + 16 \\ \hline H_3 & 6q^8 + 10q^7 + 14q^6 + 18q^5 + 22q^4 + 18q^3 + 14q^2 + 10q + 8\\ \hline I_5 & 2q^3 + 2q^2 + 2q + 4 \\ I_6 & 2q^4 + 2q^3 + 2q^2 + 2q + 4 \\ I_n & 2q^{n-2} + \dots + 2q^2 + 2q + 4\\ \hline \end{array} \end{equation*} \caption{The generating series $\sum_k \dim \big(\operatorname{rad}^k A/\operatorname{rad}^{k+1}A\big)\ q^k$ for the $0$-Hecke algebras $A=\mathbb{K} H_0(W)$ of the small Coxeter groups.} \label{table.radical_filtration} \end{table} Such a statistic is not known for $H_0(W)$, even in type $A$. The expected generating series for small Coxeter groups are shown in Table~\ref{table.radical_filtration}. Note that all the coefficients appearing there are even. This is a general fact: \begin{proposition} Let $W$ be a Coxeter group and $H_0(W)$ its $0$-Hecke monoid. Then, for any $k$, the dimension $d^k := \dim \operatorname{rad}^k \mathbb{K} H_0(W)$ is an even number. \end{proposition} \begin{proof} This is a consequence of the involutive algebra automorphism $\theta\ :\ \pi_i \mapsto 1-\pi_i$. This automorphism exchanges the eigenvalues 0 and 1 for the idempotent $\pi_i$. Therefore it exchanges the projective module $P_J$ associated to the descent set $J$ (see Example~\ref{example.zero.hecke} for the definition of $P_J$) with the projective module $P_{\overline J}$ associated to the complementary descent set $\overline J = I \setminus J$. 
As a consequence it must exchange $\operatorname{rad}^k P_J$ and $\operatorname{rad}^k P_{\overline J}$, which therefore have the same dimensions. Since there is no self-complementary descent set, $d^k = \sum_{J\subset I} \dim \operatorname{rad}^k P_J$ must be even. \end{proof} Also, as suggested by Table~\ref{table.radical_filtration}, Problem~\ref{problem.radical.filtration} admits a simple solution for $H_0(I_n)$. \begin{proposition} Let $W$ be the $n$-th dihedral group (type $I_n$) and $\mathbb{K} H_0(W)$ its $0$-Hecke algebra. Define $a_k = \pi_1\pi_2\pi_1\pi_2\cdots$ and $b_k = \pi_2\pi_1\pi_2\pi_1\cdots$ where both words are of length $k$. Recall that the longest element of $H_0(W)$ is $\omega = a_n = b_n$. Then, for all $k>0$, the set \begin{equation} R_k := \{a_i - \omega,\ b_i - \omega\mid k<i<n\} \end{equation} is a basis for $\operatorname{rad}^k \mathbb{K} H_0(W)$. In particular, defining the statistic $r(w):=\ell(w)-1$, one obtains that the family $\{a_{k+1}-\omega,\ b_{k+1}-\omega\}$ for $0<k<n-1$ is a basis of $\operatorname{rad}^k \mathbb{K} H_0(W)/\operatorname{rad}^{k+1} \mathbb{K} H_0(W)$. \end{proposition} Note that if $k<n-1$ then $\omega$ belongs to $\operatorname{rad}^{k+1} \mathbb{K} H_0(W)$. One can therefore take $\{a_{k+1},b_{k+1}\}$ as a basis. \begin{proof} The case $k=1$ follows from Proposition~\ref{proposition.basis.radical}, and by Corollary~\ref{corollary.quiver} the quiver is given by $a_2-\omega$ and $b_2-\omega$. The other cases are then proved by induction, using the following relations: \begin{alignat*}{2} (a_2 - \omega)(a_j - \omega) &= a_{j+2} - \omega&\qquad (a_2 - \omega)(b_j - \omega) &= a_{j+1} - \omega\\ (b_2 - \omega)(b_j - \omega) &= b_{j+2} - \omega& (b_2 - \omega)(a_j - \omega) &= b_{j+1} - \omega.\qedhere \end{alignat*} \end{proof} A natural approach to defining such a statistic $r(m)$ is to use iterated compatible factorizations. 
For example, one can define a new product $\bullet$, called the \emph{compatible product} on $M\cup\{0\}$, as follows: \begin{equation*} x \bullet y = \begin{cases} xy & \text{if $\lfix{x} = \lfix{xy}$ and $\rfix{y} = \rfix{xy}$ and $\rfix{x}=\lfix{y}$,}\\ 0 & \text{otherwise.} \end{cases} \end{equation*} However this product is usually not associative. Take for example $x = \pi_{14352}$, $y = \pi_{31254}$ and $z = \pi_{25314}$ in $H_0(\sg[5])$. Then, $xy= \pi_{41352}$, $yz= \pi_{35214}$ and $xyz = \pi_{45312}$. The following table shows the left and right descents of those elements: \begin{equation*} \begin{array}{r|c|c} & \text{left} & \text{right} \\\hline x=\pi_{14352} & \{2,3\} & \{2,4\} \\ y=\pi_{31254} & \{2,4\} & \{1,4\} \\ z=\pi_{25314} & \{1,4\} & \{2,3\} \\ xy=\pi_{41352} & \{2,3\} & \{1,4\} \\ yz=\pi_{35214} & \{1,2,4\} & \{2,3\} \\ xyz=\pi_{45312} & \{2,3\} & \{2,3\} \\ \end{array} \end{equation*} Consequently $(x\bullet y)\bullet z = (xy) \bullet z = xyz$ whereas $y\bullet z = 0$ and therefore $x\bullet (y\bullet z) = 0$. Due to the lack of associativity there is no immediate definition for $r(m)$ as the ``length of the longest compatible factorization'', and our various attempts to define this concept all failed for the $0$-Hecke algebra in type $D_4$. \bigskip \subsubsection{Nondecreasing parking functions} We present, without proof, how the description of the Cartan matrix of ${\operatorname{NDPF}}_n$ in~\cite{Hivert_Thiery.HeckeSg.2006,Hivert_Thiery.HeckeGroup.2007} fits within the theory, and derive its quiver from that of $H_0(\sg[n])$. \begin{proposition} The idempotents of ${\operatorname{NDPF}}_n$ are characterized by their image sets, and there is one such idempotent for each subset of $\{1,\dots,n\}$ containing $1$. 
For $f$ an element of ${\operatorname{NDPF}}_n$, $\rfix f$ is given by the image set of $f$, whereas $\lfix f$ is given by the set consisting of the lowest point of each fiber of $f$; furthermore, $f$ is completely characterized by $\lfix f$ and $\rfix f$. The Cartan matrix has entries in $\{0,1\}$, with $c_{I,J}=1$ if $I=\{i_1<\cdots<i_k\}$ and $J=\{j_1<\cdots<j_k\}$ are two subsets of the same cardinality $k$ with $i_l\leq j_l$ for all $l$. \end{proposition} \begin{proposition} Let $M$ be a $\mathcal{J}$-trivial monoid generated by idempotents. Suppose that $N$ is a quotient of $M$ such that $\idempMon[N] = \idempMon$. Then, the quiver of $N$ is a subgraph of the quiver of $M$. \end{proposition} Note that the hypothesis implies that $M$ and $N$ have the same generating set. \begin{proof} It is easy to see that $\operatorname{lfix}$ and $\operatorname{rfix}$ are the same in $M$ and $N$. Moreover, any compatible factorization in $M$ is still a compatible factorization in $N$. \end{proof} As a consequence one recovers the quiver of ${\operatorname{NDPF}}_n$: \begin{proposition} The quiver elements of ${\operatorname{NDPF}}_n$ are the products $\pi_{J\cup \{i\}}\pi_{J\cup \{i+1\}}$ where $J\subset\{1,\dots, n-1\}$ and $i,i+1\notin J$. \end{proposition} \begin{proof} Recall that ${\operatorname{NDPF}}_n$ is the quotient of $H_0(\sg[n])$ by the relation $\pi_i\pi_{i+1}\pi_i = \pi_{i+1}\pi_i$, via the quotient map $\pi_i\mapsto \pi_i$. For $J$ a subset of $\{1,\dots,n-1\}$, define accordingly $\pi_J$ in ${\operatorname{NDPF}}_n$ as the image of $\pi_J$ in $H_0(\sg[n])$. 
Specializing Proposition~\ref{proposition.quiver.hecke} to type $A_{n-1}$, one obtains that there are four types of quiver elements: \begin{itemize} \item $\pi_{J\cup \{i\}}\pi_{J\cup \{i+1\}}$ where $J\subset\{1,\dots, n-1\}$ and $i,i+1\notin J$, \item $\pi_{J\cup \{i+1\}}\pi_{J\cup \{i\}}$ where $J\subset\{1,\dots, n-1\}$ and $i,i+1\notin J$, \item $\pi_{K\cup \{i, i+2\}}\pi_{K\cup \{i+1\}}$ where $K\subset\{1,\dots, n-1\}$ and $i,i+1,i+2\notin K$, \item $\pi_{K\cup \{i+1\}}\pi_{K\cup \{i, i+2\}}$ where $K\subset\{1,\dots, n-1\}$ and $i,i+1,i+2\notin K$. \end{itemize} One can easily check that the three following factorizations are non-trivial: \begin{itemize} \item $\pi_{J\cup \{i+1\}}\pi_{J\cup \{i\}} = (\pi_{J\cup \{i+1\}}\pi_i,\ \pi_{i+1}\pi_{J\cup \{i\}})$, \item $\pi_{K\cup \{i, i+2\}}\pi_{K\cup \{i+1\}} = (\pi_{K\cup \{i, i+2\}}\pi_{i+1},\ \pi_{i+2}\pi_{K\cup \{i+1\}})$, \item $\pi_{K\cup \{i+1\}}\pi_{K\cup \{i, i+2\}} = (\pi_{K\cup \{i+1\}}\pi_{i},\ \pi_{i+1}\pi_{K\cup \{i, i+2\}})$. \end{itemize} Conversely, any non-trivial factorization of $\pi_{J\cup \{i\}}\pi_{J\cup \{i+1\}}$ in ${\operatorname{NDPF}}_n$ would have been non-trivial in the Hecke monoid. \end{proof} \bigskip \subsubsection{The incidence algebra of a poset} We show now that we can recover the well-known representation theory of the incidence algebra of a partially ordered set. Let $(P, \leq)$ be a partially ordered set. Recall that the incidence algebra of $P$ is the algebra $\mathbb{K} P$ whose basis is the set of pairs $(x,y)$ of comparable elements $x\leq y$ with the product rule \begin{equation}\label{equation.incidence} (x,y) (z,t) = \begin{cases} (x,t) & \text{if $y = z$,}\\ 0 & \text{otherwise.} \end{cases} \end{equation} The incidence algebra is very close to the algebra of a monoid except that $0$ and $1$ are missing. 
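For concreteness, the product rule~\eqref{equation.incidence} can be sketched in a few lines of Python (our illustration only, not part of the formal development); the string sentinel \texttt{ZERO} is an arbitrary stand-in for the absorbing element that will be adjoined below:

```python
ZERO = "Zero"  # sentinel standing in for the absorbing element (assumption of this sketch)

def incidence_product(p, q):
    """Product rule of the incidence algebra basis:
    (x, y)(z, t) = (x, t) if y == z, and zero otherwise."""
    (x, y), (z, t) = p, q
    return (x, t) if y == z else ZERO

# On the chain 1 < 2 < 3, composable intervals concatenate:
# incidence_product((1, 2), (2, 3)) == (1, 3)
# while non-composable pairs vanish:
# incidence_product((1, 2), (1, 3)) == ZERO
```

Note that the diagonal pairs $(x,x)$ are idempotent for this product.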
We therefore build a monoid by adding $0$ and $1$ artificially and removing them at the end: \newcommand{\operatorname{Zero}}{\operatorname{Zero}} \newcommand{\operatorname{One}}{\operatorname{One}} \begin{definition} Let $(P, \leq)$ be a partially ordered set. Let $\operatorname{Zero}$ and $\operatorname{One}$ be two elements not in $P$. The incidence monoid of $P$ is the monoid $M(P)$, whose underlying set is \begin{equation*} M(P) := \{(x,y)\in P \mid x\leq y\} \cup \{\operatorname{Zero}, \operatorname{One}\}\,, \end{equation*} with the product rule given by Equation~\ref{equation.incidence} plus $\operatorname{One}$ being neutral and $\operatorname{Zero}$ absorbing. \end{definition} \begin{proposition} Define an order $\preceq$ on $M(P)$ by \begin{equation} (x,y) \preceq (z,t) \quad\text{if and only if}\quad x\leq z\leq t\leq y\,, \end{equation} and $\operatorname{One}$ and $\operatorname{Zero}$ being the largest and the smallest element, respectively. The monoid $M(P)$ is left-right ordered for $\preceq$ and thus $\mathcal{J}$-trivial. \end{proposition} \begin{proof} This is trivial by the product rule. \end{proof} One can now use all the results on $\mathcal{J}$-trivial monoids to obtain the representation theory of $M(P)$. One gets back to $\mathbb{K} P$ thanks to the following result. \begin{proposition} As an algebra, $\mathbb{K} M(P)$ is isomorphic to $\mathbb{K}\operatorname{One}\ \oplus\ \mathbb{K} P\ \oplus\ \mathbb{K} \operatorname{Zero}$. \end{proposition} \begin{proof} In the monoid algebra $\mathbb{K} M(P)$, the elements $(x,x)$ are orthogonal idempotents. Thus $e:=\sum_{x\in P} (x,x)$ is itself an idempotent and it is easily seen that $\mathbb{K} P$ is isomorphic to $e(\mathbb{K} M(P))e$. \end{proof} One can then easily deduce the representation theory of $\mathbb{K} P$: \begin{proposition} Let $(P, \leq)$ be a partially ordered set and $\mathbb{K} P$ its incidence algebra. 
Then the Cartan matrix $C = (c_{x,y})_{x,y\in P}$ of $\mathbb{K} P$ is indexed by $P$ and given by \begin{equation*} c_{x,y} = \begin{cases} 1 & \text{if $x\leq y\,,$}\\ 0 & \text{otherwise.} \end{cases} \end{equation*} The arrows of the quiver are $x\to y$ whenever $(x,y)$ is a cover in $P$, that is, $x < y$ and there is no $z$ such that $x< z< y$. \end{proposition} \begin{proof} Clearly $\lfix{x,y} = (x,x)$ and $\rfix{x,y} = (y,y)$. Moreover, the compatible factorizations of $(x,y)$ are exactly $(x,z)(z,y)$ with $x<z<y$. \end{proof} \bigskip \subsubsection{Unitriangular Boolean matrices} Next we consider the monoid of unitriangular Boolean matrices $\mathcal U_n$. \begin{remark} The idempotents of $\mathcal U_n$ are in bijection with the posets admitting $1,\dots,n$ as linear extension (sequence A006455 in~\cite{Sloane}). Let $m\in \mathcal U_n$ and $g$ be the corresponding digraph. Then $m^\omega$ is the transitive closure of $g$, and $\lfix g$ and $\rfix g$ are given respectively by the largest ``prefix'' and ``postfix'' of $g$ which are posets: namely, $\lfix g$ (resp. $\rfix g$) corresponds to the subgraph of $g$ containing the edges $i\!\!\rightarrow\!\! j$ (resp. $j\!\!\rightarrow\!\! k$) of $g$ such that $i\!\!\rightarrow\!\! k$ is in $g$ whenever $j\!\!\rightarrow\!\! k$ (resp. $i\!\!\rightarrow\!\! j$) is. \end{remark} Figure~\ref{fig:unitribool4} displays the Cartan matrix and quiver of $\mathcal U_4$; as expected, their nodes are labelled by the 40 subposets of the chain. This figure further suggests that they are acyclic and enjoy a certain symmetry, properties which we now prove in general. \begin{figure} \centering \fbox{\includegraphics[width=\textwidth]{Fig/UnitriangularBoleanMatrixMonoid4-cartan-graph}} \fbox{\includegraphics[width=\textwidth]{Fig/UnitriangularBoleanMatrixMonoid4-quiver}} \caption{On top, the Cartan matrix (drawn as a graph) and at the bottom the quiver of $\mathcal U_4$. 
The edge labels have not been drawn for readability; for the quiver, they can be recovered as the product of two vertices. % Those pictures have been produced automatically by \texttt{Sage}, \texttt{dot2tex}, and \texttt{graphviz}, up to a manual reorganization of the connected components using \texttt{inkscape}.} \label{fig:unitribool4} \end{figure} The monoid $\mathcal U_n$ admits a natural antiautomorphism $\phi$; it maps an upper triangular Boolean matrix to its transpose along the second diagonal or, equivalently, relabels the vertices of the corresponding digraph by $i\mapsto n-i$ and then takes the dual. \begin{proposition} The Cartan matrix of $\mathcal U_n$, seen as a graph, and its quiver are preserved by the non-trivial antiautomorphism induced by $\phi$. \end{proposition} \begin{proof} Remark that any antiautomorphism $\phi$ flips $\operatorname{lfix}$ and $\operatorname{rfix}$: \begin{displaymath} \lfix{\phi(x)} = \rfix x \qquad \text{and} \qquad \rfix{\phi(x)}=\lfix x\,, \end{displaymath} and that the definition of $c$-irreducible is symmetric. \end{proof} \begin{comment} Note however that the monoid algebra of $\mathcal U_n$ is not Frobenius. \begin{sageexample} sage: from sage.combinat.boolean_matrix_monoids import * sage: b = UnitriangularBooleanMatricesPosetMonoid(Posets.ChainPoset(3)) sage: ba = b.algebra(QQ) sage: frob = lambda a, b: (a*b).coefficient(ba.basis().keys().list()[-1]) sage: mat = matrix([[frob(a,b) for a in ba.basis()] for b in ba.basis()]) sage: mat.rank() 7 sage: mat.kernel() Vector space of degree 8 and dimension 1 over Rational Field Basis matrix: [ 0 0 1 0 0 0 -1 0] sage: ba.basis().list()[2] B[[1 0 0] [0 1 1] [0 0 1]] sage: ba.basis().list()[-2] B[[1 0 1] [0 1 1] [0 0 1]] \end{sageexample} \end{comment} Fix an ordering of the pairs $(i,j)$ with $i<j$ such that $(i,j)$ always comes before $(j,k)$ (for example using lexicographic order). 
Compare two elements of $\mathcal U_n$ lexicographically by writing them as bit vectors along the chosen enumeration of the pairs $(i,j)$. \begin{proposition} The Cartan matrix of $\mathcal U_n$ is unitriangular with respect to the chosen order, and therefore its quiver is acyclic. \end{proposition} \begin{proof} We prove that, if $e=\lfix g$ and $f=\rfix g$, then $e\leq f$, with equality if and only if $g$ is idempotent. If $g$ is idempotent, then $e=f=g$, and we are done. Assume now that $g$ is not idempotent, so that $e\ne g$ and $f\ne g$. Take the smallest edge $j\!\!\rightarrow\!\! k$ which is in $g$ but not in $f$. Then, there exists $i<j$ such that $i\!\!\rightarrow\!\! k$ is not in $g$ but $i\!\!\rightarrow\!\! j$ is. Therefore $i\!\!\rightarrow\!\! j$ is not in $e$, whereas by minimality it is in $f$. Hence, $f>e$, as desired. \end{proof} Looking further at Figure~\ref{fig:unitribool4} suggests that the quiver is obtained as the transitive reduction of the Cartan matrix; we checked on computer that this property still holds for $n=5$ and $n=6$. \begin{comment} \begin{sageexample} sage: M = semigroupe.UnitriangularBooleanMatrixSemigroup(6) sage: f = M.quiver(edge_labels=False); g = M.cartan_matrix_as_graph(); h=g.transitive_reduction() sage: set(f.vertices()) == set(h.vertices()) True \end{sageexample} Looking at $n=4,5$ also suggest that, if `x` is a quiver element, then its edges are the union of the edges of the edges of its left and right symbol. That's not an if and only if condition though: \begin{sageexample} sage: def is_quiver(e,f): ... ef = e * f ... res = e.value+f.value; ... for pos in res._nonzero_positions_by_row(): ... res[pos] = 1 ... 
return e != f and ef.symbol("left") == e and ef.symbol("right") == f and res == ef.value sage: U = UnitriangularBooleanMatricesPosetMonoid(Posets.ChainPoset(4)) sage: set(U.quiver_elements()).issubset(set( e*f for e,f in CartesianProduct(U.idempotents(),U.idempotents()) if is_quiver(e,f))) True sage: set(U.quiver_elements()) == set( e*f for e,f in CartesianProduct(U.idempotents(),U.idempotents()) if is_quiver(e,f)) False \end{sageexample} One of the two counterexamples is: \begin{sageexample} sage: e [1 0 0 0] [0 1 1 1] [0 0 1 1] [0 0 0 1] sage: f [1 1 1 0] [0 1 1 0] [0 0 1 0] [0 0 0 1] sage: e*f [1 1 1 0] [0 1 1 1] [0 0 1 1] [0 0 0 1] \end{sageexample} \end{comment} \bigskip \subsubsection{$\mathcal{J}$-trivial monoids built from quivers} We conclude with a collection of examples showing in particular that any quiver can be obtained as the quiver of a finite $\mathcal{J}$-trivial monoid. \begin{example} Consider a finite commutative idempotent $\mathcal{J}$-trivial monoid, that is, a finite lattice $L$ endowed with its meet operation. Denote accordingly by $0$ and $1$ the bottom and top elements of $L$. Extend $L$ by a new generator $p$, subject to the relations $pep=0$ for all $e$ in $L$, to get a $\mathcal{J}$-trivial monoid $M$ with elements given by $L\uplus \{ e p f \mid e,f \in L \}$. Then, the quiver of $M$ is a complete digraph: its vertices are the elements of $L$, and between any two elements $e$ and $f$ of $L$, there is a single edge, which is labelled by $epf$. \end{example} \begin{example} Consider any finite quiver $G=(E,Q)$, that is, a digraph, possibly with loops, cycles, or multiple edges, and with distinct labels on all edges. We denote by ${e\!\stackrel{l}{\rightarrow}\!\! f}$ an edge in $Q$ from $e$ to $f$ with label $l$. 
Define a monoid $M(G)$ on the set $E\uplus Q \uplus \{0,1\}$ by the following product rules: \begin{xalignat*}{2} e^2 &\ =\ e && \text{for all $e \in E$,}\\ e\ {e\!\stackrel{l}{\rightarrow}\!\! f} &\ =\ {e\!\stackrel{l}{\rightarrow}\!\! f} && \text{for all ${e\!\stackrel{l}{\rightarrow}\!\! f} \in Q$,}\\ {e\!\stackrel{l}{\rightarrow}\!\! f}\ f &\ =\ {e\!\stackrel{l}{\rightarrow}\!\! f} && \text{for all ${e\!\stackrel{l}{\rightarrow}\!\! f} \in Q$,} \end{xalignat*} together with the usual product rule for $1$, and all other products being $0$. In other words, this is the quotient of the path monoid $P(G)$ of $G$ (which is $\mathcal{J}$-trivial) obtained by setting $p=0$ for all paths $p$ of length at least two. Then, $M(G)$ is a $\mathcal{J}$-trivial monoid, and its quiver is $G$ with $0$ and $1$ added as extra isolated vertices. Those extra vertices can be eliminated by considering instead the analogous quotient of the path algebra of $G$ (i.e. setting $0_{M(G)}=0_\mathbb{K}$ and $1_{M(G)}=\sum_{g\in E} g$). \end{example} \begin{example} Choose further a lattice structure $L$ on $E\cup \{0,1\}$. Define a $\mathcal{J}$-trivial monoid $M(G,L)$ on the set $E\uplus Q \uplus \{0,1\}$ by the following product rules: \begin{xalignat*}{2} e f &\ =\ e\vee_L f && \text{for all $e,f \in E$,}\\ {e\!\stackrel{l}{\rightarrow}\!\! f}\ f' &\ =\ {e\!\stackrel{l}{\rightarrow}\!\! f} && \text{for all ${e\!\stackrel{l}{\rightarrow}\!\! f}\in Q$ and $f'\in E$ with $f\leq_L f'$,}\\ e'\ {e\!\stackrel{l}{\rightarrow}\!\! f} &\ =\ {e\!\stackrel{l}{\rightarrow}\!\! f} && \text{for all ${e\!\stackrel{l}{\rightarrow}\!\! f}\in Q$ and $e'\in E$ with $e\leq_L e'$,} \end{xalignat*} together with the usual product rule for $1$, and all other products being $0$. Note that the monoid $M(G)$ of the previous example is obtained by taking for $L$ the lattice where the vertices of $G$ form an antichain. 
Then, the semi-simple quotient of $M(G,L)$ is $L$ and its quiver is $G$ (with $0$ and $1$ added as extra isolated vertices). \end{example} \begin{example} We now assume that $G=(E,Q)$ is a simple quiver. Namely, there are no loops, and between two distinct vertices $e$ and $f$ there is at most one edge, which we denote by ${e\,\!\!\rightarrow\!\! f}$ for short. Define a monoid structure $M'(G)$ on the set $E\uplus Q\uplus \{0,1\}$ by the following product rules: \begin{xalignat*}{2} e^2 &\ =\ e && \text{for all $e \in E$,}\\ e f &\ =\ {e\,\!\!\rightarrow\!\! f} && \text{for all ${e\,\!\!\rightarrow\!\! f} \in Q$,}\\ e\ {e\,\!\!\rightarrow\!\! f} &\ =\ {e\,\!\!\rightarrow\!\! f} && \text{for all ${e\,\!\!\rightarrow\!\! f} \in Q$,}\\ {e\,\!\!\rightarrow\!\! f}\ f &\ =\ {e\,\!\!\rightarrow\!\! f} && \text{for all ${e\,\!\!\rightarrow\!\! f} \in Q$,} \end{xalignat*} together with the usual product rule for $1$, and all other products being $0$. Then, $M'(G)$ is a $\mathcal{J}$-trivial monoid generated by the idempotents in $E$ and its quiver is $G$ (with $0$ and $1$ added as extra isolated vertices). \end{example} \begin{exercise} Let $L$ be a lattice structure on $E\cup \{0,1\}$. Find compatibility conditions between $G$ and $L$ for the existence of a $\mathcal{J}$-trivial monoid generated by idempotents having $L$ as semi-simple quotient and $G$ (with $0$ and $1$ added as extra isolated vertices) as quiver. \end{exercise} \subsection{Implementation and complexity} \label{subsection.implementation_complexity} The combinatorial description of the representation-theoretic properties of a $\mathcal{J}$-trivial monoid (idempotents, Cartan matrix, quiver) translates straightforwardly into algorithms. 
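As an illustration of how directly the definitions translate into code, here is a self-contained brute-force Python sketch (our illustration, not the authors' \texttt{Sage} implementation) on the toy example $M={\operatorname{NDPF}}_3$, encoded as nondecreasing regressive functions on $\{1,2,3\}$; the choice of left-to-right composition $(f*g)(i)=g(f(i))$ is an assumption of this sketch, made so that $\rfix f$ is the idempotent whose image is the image set of $f$:

```python
from itertools import product as cartesian

# NDPF_3: tuples (f(1), f(2), f(3)) with f nondecreasing and f(i) <= i.
M = [f for f in cartesian(range(1, 4), repeat=3)
     if all(f[i] <= i + 1 for i in range(3))       # f(i) <= i
     and all(f[i] <= f[i + 1] for i in range(2))]  # nondecreasing

def mul(f, g):
    """Left-to-right composition: (f*g)(i) = g(f(i))."""
    return tuple(g[f[i] - 1] for i in range(3))

idempotents = [e for e in M if mul(e, e) == e]

def leq_J(x, y):
    """x <=_J y iff x lies in the two-sided ideal M y M."""
    return any(mul(mul(u, y), v) == x for u in M for v in M)

def lfix(x):
    """Smallest idempotent e in J-order with e*x == x."""
    cands = [e for e in idempotents if mul(e, x) == x]
    return next(e for e in cands if all(leq_J(e, c) for c in cands))

def rfix(x):
    """Smallest idempotent f in J-order with x*f == x."""
    cands = [e for e in idempotents if mul(x, e) == x]
    return next(e for e in cands if all(leq_J(e, c) for c in cands))

def cartan_matrix():
    """c_{e,f} counts the elements x with lfix(x) = e and rfix(x) = f."""
    C = {(e, f): 0 for e in idempotents for f in idempotents}
    for x in M:
        C[lfix(x), rfix(x)] += 1
    return C

def quiver_elements():
    """Non-idempotents all of whose factorizations x = u*v are trivial."""
    out = []
    for x in M:
        if mul(x, x) == x:
            continue
        e, f = lfix(x), rfix(x)
        if all(mul(e, u) == e or mul(v, f) == f
               for u in M for v in M if mul(u, v) == x):
            out.append(x)
    return out
```

Running this reproduces the expected picture for ${\operatorname{NDPF}}_3$: four idempotents (one per subset of $\{1,2,3\}$ containing $1$) and a single quiver element, the unique non-idempotent $(1,1,2)$.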
Those algorithms have been implemented by the authors, in the open-source mathematical system \texttt{Sage}~\cite{Sage}, in order to support their own research. The code is publicly available from the \texttt{Sage-Combinat} patch server~\cite{Sage-Combinat}, and is being integrated into the main \texttt{Sage} library and generalized to larger classes of monoids in collaboration with other \texttt{Sage-Combinat} developers. It is also possible to delegate all the low-level monoid calculations (Cayley graphs, $\mathcal{J}$-order, \dots) to the blazingly fast \texttt{C} library \texttt{Semigroupe} by Jean-Éric Pin~\cite{Semigroupe}. We start with a quick overview of the complexity of the algorithms. \begin{proposition} In the statements below, $M$ is a $\mathcal{J}$-trivial monoid of cardinality $n$, constructed from a set of $m\leq n$ generators $s_1,\dots,s_m$ in some ambient monoid. The product in the ambient monoid is assumed to be $O(1)$. All complexity statements are upper bounds, with no claim for optimality. In practice, the number of generators is usually small; however, the number of idempotents, which conditions the size of the Cartan matrix and of the quiver, can be as large as $2^m$. \begin{enumerate}[(a)] \item \label{complexity.cayley} Construction of the left / right Cayley graph: $O(nm)$ (in practice it usually requires little more than $n$ operations in the ambient monoid); \item \label{complexity.sorting} Sorting of elements according to $\mathcal{J}$-order: $O(nm)$; \item \label{complexity.idempotents} Selection of idempotents: $O(n)$; \item \label{complexity.symbols} Calculation of all left and right symbols: $O(nm)$; \item \label{complexity.cartan_matrix} Calculation of the Cartan matrix: $O(nm)$; \item \label{complexity.quiver} Calculation of the quiver: $O(n^2)$.
\end{enumerate} \end{proposition} \begin{proof} \ref{complexity.cayley}: See~\cite{Froidure_Pin.1997}. \ref{complexity.sorting}: This is a topological sort calculation for the two-sided Cayley graph, which has $n$ nodes and $2nm$ edges. \ref{complexity.idempotents}: Brute-force selection. For each of the following steps, we propose a simple algorithm satisfying the claimed complexity. \ref{complexity.symbols}: Construct, for each element $x$ of the monoid, two bit-vectors $l(x)=(l_1,\dots,l_m)$ and $r(x)=(r_1,\dots,r_m)$ with $l_i=\delta_{s_ix,x}$ and $r_i=\delta_{xs_i,x}$. This information is trivial to extract in $O(nm)$ from the left and right Cayley graphs, and could typically be constructed as a side effect of \ref{complexity.cayley}. Those bit-vectors uniquely determine $\laut{x}$ and $\raut{x}$. From that, one can recover all $\lfix{x}$ and $\rfix{x}$ in $O(nm)$: as a precomputation, run through all idempotents $e$ of $M$ to construct a binary prefix tree $T$ which maps $l(e)=r(e)$ to $e$; then, for each $x$ in $M$, use $T$ to recover $\lfix{x}$ and $\rfix{x}$ from the bit vectors $l(x)$ and $r(x)$. \ref{complexity.cartan_matrix}: Obviously $O(n)$ once all left and right symbols have been calculated; so $O(nm)$ altogether. \ref{complexity.quiver}: A crude algorithm is to compute all products $xy$ in the monoid, check whether the factorization is compatible, and if so cross the result out of the quiver (brute-force sieve). This can be improved by running only through the products $xy$ with $\rfix{x}=\lfix{y}$; however this does not change the worst-case complexity (consider a monoid with only $2$ idempotents $0$ and $1$, like $\mathbb{N}^m$ truncated by any ideal containing all but $n-2$ elements, so that $\lfix{x}=\rfix{x}=1$ for all $x\ne 0$).
\end{proof} We conclude with a sample session illustrating typical calculations, using \texttt{Sage 4.5.2} together with the \texttt{Sage-Combinat} patches, running on \texttt{Ubuntu Linux 10.5} on a \texttt{Macbook Pro 4.1}. Note that the interface is subject to minor changes before the final integration into Sage. The authors will gladly provide help in using the software. We start by constructing the $0$-Hecke monoid of the symmetric group $W=\sg[4]$, through its action on $W$: \begin{sageexample} sage: W = SymmetricGroup(4) sage: S = semigroupe.AutomaticSemigroup(W.simple_projections(), W.one(), ... by_action = True, category=FiniteJTrivialMonoids()) sage: S.cardinality() 24 \end{sageexample} We check that it is indeed $\mathcal{J}$-trivial, and compute its $8$ idempotents: \begin{sageexample} sage: S._test_j_trivial() sage: S.idempotents() [[], [1], [2], [3], [1, 3], [1, 2, 1], [2, 3, 2], [1, 2, 1, 3, 2, 1]] \end{sageexample} Here is its Cartan matrix and its quiver: \begin{sageexample} sage: S.cartan_matrix_as_graph().adjacency_matrix(), S.quiver().adjacency_matrix() ( [0 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 0] [0 0 1 0 1 1 0 0] [0 0 1 0 1 1 0 0] [0 1 0 0 1 0 0 0] [0 1 0 0 0 0 0 0] [0 0 0 0 0 0 0 0] [0 0 0 0 0 0 0 0] [0 1 1 0 0 0 0 0] [0 1 0 0 0 0 0 0] [0 1 0 0 0 0 1 1] [0 1 0 0 0 0 1 1] [0 0 0 0 0 1 0 1] [0 0 0 0 0 1 0 0] [0 0 0 0 0 1 1 0], [0 0 0 0 0 1 0 0] ) \end{sageexample} In the following example, we check that, for any of the $318$ posets $P$ on $6$ vertices, the Cartan matrix $m$ of the monoid $\mathcal{OR}(P)$ of order preserving nondecreasing functions on $P$ is unitriangular. To this end, we check that the digraph having $m-1$ as adjacency matrix is acyclic. \begin{sageexample} sage: from sage.combinat.j_trivial_monoids import * sage: @parallel ...def check_cartan_matrix(P): ... 
return DiGraph(NDPFMonoidPoset(P).cartan_matrix()-1).is_directed_acyclic() sage: time all(res[1] for res in check_cartan_matrix(list(Posets(6)))) CPU times: user 5.68 s, sys: 2.00 s, total: 7.68 s Wall time: 255.53 s True \end{sageexample} Note: the calculation was run in parallel on two processors, and the displayed CPU time is just that of the master process, which is not very meaningful. The same calculation on an eight-processor machine takes about 71 seconds. We conclude with the calculation of the representation theory of a larger example (the monoid $\mathcal U_n$ of unitriangular Boolean matrices). The current implementation is far from optimized: in principle, the cost of calculating the Cartan matrix should be of the same order of magnitude as generating the monoid. Yet, this implementation makes it possible to explore routinely, if not instantly, large Cartan matrices or quivers that were completely out of reach using general-purpose representation theory software. \begin{sageexample} sage: M = semigroupe.UnitriangularBooleanMatrixSemigroup(6) Loading Sage library. Current Mercurial branch is: combinat sage: time M.cardinality() CPU times: user 0.14 s, sys: 0.02 s, total: 0.16 s Wall time: 0.16 s 32768 sage: time M.cartan_matrix() CPU times: user 27.50 s, sys: 0.09 s, total: 27.59 s Wall time: 27.77 s 4824 x 4824 sparse matrix over Integer Ring sage: time M.quiver() CPU times: user 512.73 s, sys: 2.81 s, total: 515.54 s Wall time: 517.55 s Digraph on 4824 vertices \end{sageexample} Figure~\ref{fig:unitribool4} displays the results in the case $n=4$. \section{Monoid of order preserving regressive functions on a poset $P$} \label{sec:NPDF} In this section, we discuss the monoid $\mathcal{OR}(P)$ of order preserving regressive functions on a poset $P$. Recall that this is the monoid of functions $f$ on $P$ such that for any $x\leq y \in P$, $x.f\leq x$ and $x.f\leq y.f$.
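On very small posets, $\mathcal{OR}(P)$ can be enumerated by brute force directly from this definition. A minimal Python sketch (toy encoding, not the authors' code): $P$ is the chain $0<1<2$, functions are tuples $f$ with $x.f=f[x]$, and the enumeration recovers the $5$ elements of ${\operatorname{NDPF}}_3$:

```python
from itertools import product

# Brute-force enumeration of OR(P) for a tiny poset.  Here P is the
# chain 0 < 1 < 2, so OR(P) = NDPF_3; a function f is the tuple
# (0.f, 1.f, 2.f).  Illustrative sketch only.
P = range(3)

def leq(x, y):          # the order relation of the chain
    return x <= y

def is_OR(f):
    regressive = all(leq(f[x], x) for x in P)
    preserving = all(leq(f[x], f[y]) for x in P for y in P if leq(x, y))
    return regressive and preserving

OR = [f for f in product(P, repeat=len(P)) if is_OR(f)]
print(len(OR))          # 5 = the Catalan number C_3
```

The same sketch applies verbatim to any finite poset once \texttt{leq} is replaced by its comparison relation; only the (exponential) cost of the enumeration limits the size.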
In Section~\ref{sub:combIdem}, we discuss constructions for idempotents in $\mathcal{OR}(P)$ in terms of the image sets of the idempotents, as well as methods for obtaining $\lfix f$ and $\rfix f$ for any given function $f$. In Section~\ref{sub:cartNDPF}, we show that the Cartan matrix for $\mathcal{OR}(P)$ is upper uni-triangular with respect to the lexicographic order associated to any linear extension of $P$. In Section~\ref{sub:ndpfSemis}, we specialize to $\mathcal{OR}(L)$ where $L$ is a meet semi-lattice, describing a minimal generating set of idempotents. Finally, in Section~\ref{sub:ndpfOrthIdem}, we describe a simple construction for a set of orthogonal idempotents in ${\operatorname{NDPF}}_N$, and present a conjectural construction for orthogonal idempotents for $\mathcal{OR}(L)$. \subsection{Combinatorics of idempotents} \label{sub:combIdem} The goal of this section is to describe the idempotents in $\mathcal{OR}(P)$ using order considerations. We begin by giving the definition of joins, even in the setting where the poset $P$ is not a lattice. \begin{definition} Let $P$ be a finite poset and $S\subseteq P$. Then $z\in P$ is called \emph{a join} of $S$ if $x \leq z$ holds for any $x\in S$, and $z$ is minimal with that property. We denote by $\operatorname{Joins}(S)$ the set of joins of $S$, and write $\operatorname{Joins}(x,y)$ for short if $S=\{x,y\}$. If $\operatorname{Joins}(S)$ (resp. $\operatorname{Joins}(x,y)$) is a singleton (for example because $P$ is a lattice) then we denote by $\bigvee S$ (resp. $x\vee y$) the unique join. Finally, we define $\operatorname{Joins}(\emptyset)$ to be the set of minimal elements in $P$. \end{definition} \begin{lemma} \label{lemma.fix} Let $P$ be some poset, and $f\in \mathcal{OR}(P)$. If $x$ and $y$ are fixed points of $f$, and $z$ is a join of $x$ and $y$, then $z$ is a fixed point of $f$. \end{lemma} \begin{proof} Since $x\leq z$ and $y\leq z$, one has $x=x.f \leq z.f$ and $y=y.f \leq z.f$.
Since furthermore $z.f\le z$, by minimality of $z$ the equality $z.f = z$ must hold. \end{proof} \begin{lemma} \label{lemma.sup} Let $I$ be a subset of $P$ which contains all the minimal elements of $P$ and is stable under joins. Then, for any $x\in P$, the set $\{y\in I\mid y\leq x\}$ admits a unique maximal element which we denote by $\sup_I(x)\in I$. Furthermore, the map $\sup_I: x\mapsto \sup_I(x)$ is an idempotent in $\mathcal{OR}(P)$. \end{lemma} \begin{proof} For the first statement, suppose that for some $x \not \in I$ there are two maximal elements $y_1$ and $y_2$ in $\{y\in I\mid y\leq x\}$. Since $x$ is an upper bound of $\{y_1,y_2\}$, there exists a join $z$ of $y_1$ and $y_2$ with $z\leq x$; furthermore $z\neq x$, since otherwise $x$ would be a join of $y_1$ and $y_2$, and thus $x\in I$ since $I$ is join-closed. But then $z\in I$, $y_1\leq z$, and $y_2\leq z$, which contradicts the maximality of $y_1$ and $y_2$; so the first statement holds. Using that $\sup_I(x)\leq x$ and $\sup_I(x)\in I$, $e:=\sup_I$ is a regressive idempotent by construction. Furthermore, it is order preserving: for $x\leq z$, $x.e$ and $z.e$ must be comparable or else there would be two maximal elements in $I$ under $z$. Since $z.e$ is maximal under $z$, we have $z.e\geq x.e$.
\end{enumerate} \end{lemma} \begin{proof} Statement~\eqref{item.min} follows from the fact that $x.f\le x$ so that minimal elements must be fixed points and hence in $\operatorname{im}(f)$. For any $x=a.f$, if $x$ is not a fixed point then $x.f=(a.f).f\neq a.f$, contradicting the idempotence of $f$. Thus, the second statement holds. Statement~\eqref{item.join} follows directly from the second statement and Lemma~\ref{lemma.fix}. If $y\in \operatorname{im}(f)$ and $y\leq x$ then $y = y.f \leq x.f$. Since this holds for every element of $\{y \in \operatorname{im}(f) \mid y\leq x \}$ and $x.f$ is itself in this set, statement~\eqref{item.image} holds. \end{proof} Thus, putting together Lemmas~\ref{lemma.sup} and~\ref{lemma.idem_image} one obtains a complete description of the idempotents of $\mathcal{OR}(P)$. \begin{proposition} \label{proposition.idempotents.OO} The idempotents of $\mathcal{OR}(P)$ are given by the maps $\sup_I$, where $I$ ranges through the subsets of $P$ which contain the minimal elements and are stable under joins. \end{proposition} For $f\in \mathcal{OR}(P)$ and $y\in P$, let $f^{-1}(y)$ be the fiber of $y$ under $f$, that is, the set of all $x\in P$ such that $x.f=y$. \begin{definition} Given $S$ a subset of a finite poset $P$, set $C_0(S)=S$ and $C_{i+1}(S)=C_i(S) \cup \{x\in P \mid x \text{ is a join of some elements in } C_i(S) \}$. Since $P$ is finite, there exists some $N$ such that $C_N(S)=C_{N+1}(S)$. The \emph{join closure} is defined as this stable set, and denoted $C(S)$. A set is \emph{join-closed} if $C(S)=S$. Define \[ F(f) := \bigcup_{y\in P} \{ x\in f^{-1}(y) \mid x \text{ minimal in } f^{-1}(y) \} \] to be the collection of minimal points in the fibers of $f$. \end{definition} \begin{corollary} Let $X$ be the join-closure of the set of minimal points of $P$. Then $X$ is fixed by every $f\in \mathcal{OR}(P)$. 
\end{corollary} \begin{lemma}[Description of left and right symbols] For any $f\in \mathcal{OR}(P)$, there exists a minimal idempotent $f_r$ whose image set is $C(\operatorname{im}(f))$, and $f_r=\rfix{f}$. There also exists a minimal idempotent $f_l$ whose image set is $C(F(f))$, and $f_l = \lfix{f}$. \end{lemma} \begin{proof} The idempotent $\rfix{f}$ must fix every element of $\operatorname{im}(f)$, and the image of $\rfix{f}$ must be join-closed by Lemma~\ref{lemma.idem_image}. $f_r$ is the smallest idempotent satisfying these requirements, and is thus equal to $\rfix{f}$. Likewise, $\lfix{f}$ must fix the minimal elements of each fiber of $f$, and so must fix all of $C(F(f))$. For any $y \not \in F(f)$, find $x\leq y$ such that $x.f=y.f$ and $x\in F(f)$. Then $x= x.f_l \leq y.f_l \leq y$. For any $z$ with $x\leq z \leq y$, we have $x.f\leq z.f \leq y.f=x.f$, so $z$ is in the same fiber as $y$. Then we have $(y.f_l).f =y.f$, so $f_l$ fixes $f$ on the left. Minimality then ensures that $f_l=\lfix{f}$. \end{proof} Let $P$ be a poset, and $P'$ be the poset obtained by removing a maximal element $x$ of $P$. Then, the following rule holds: \begin{proposition}[Branching of idempotents] Let $e=\sup_I$ be an idempotent in $\mathcal{OR}(P')$. If $I\subseteq P$ is still stable under joins in $P$, then there exist two idempotents in $\mathcal{OR}(P)$ with respective image sets $I$ and $I\cup \{x\}$. Otherwise, there exists an idempotent in $\mathcal{OR}(P)$ with image set $I\cup \{x\}$. Every idempotent in $\mathcal{OR}(P)$ is uniquely obtained by this branching. \end{proposition} \begin{proof} This follows from straightforward reasoning on the subsets $I$ which contain the minimal elements and are stable under joins, in $P$ and in $P'$. \end{proof} \subsection{The Cartan matrix for $\mathcal{OR}(P)$ is upper uni-triangular} \label{sub:cartNDPF} We have seen that the left and right fix of an element of $\mathcal{OR}(P)$ can be identified with the subsets of $P$ closed under joins.
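These descriptions are easy to experiment with; the following Python sketch (a toy encoding of the poset $0<1$, $0<2$ with $1$ and $2$ incomparable, not the authors' code) enumerates the subsets of $P$ which contain the minimal elements and are stable under joins, and builds the corresponding idempotents $\sup_I$ of Proposition~\ref{proposition.idempotents.OO}:

```python
from itertools import combinations

# Idempotents of OR(P) as the maps sup_I, with I running over the
# join-closed subsets of P containing the minimal elements.
# Toy poset: 0 < 1 and 0 < 2, with 1, 2 incomparable.
P = [0, 1, 2]
rel = {(0, 0), (1, 1), (2, 2), (0, 1), (0, 2)}
leq = lambda x, y: (x, y) in rel

def joins(S):
    # minimal elements among the upper bounds of S
    ub = [z for z in P if all(leq(x, z) for x in S)]
    return [z for z in ub if not any(leq(w, z) and w != z for w in ub)]

minimals = {x for x in P if not any(leq(y, x) and y != x for y in P)}

def join_closed(I):
    return all(set(joins(S)) <= set(I)
               for k in range(1, len(I) + 1)
               for S in combinations(I, k))

valid = [I for k in range(len(P) + 1) for I in combinations(P, k)
         if minimals <= set(I) and join_closed(I)]

def sup_I(I):
    # x -> the unique maximal element of I below x (Lemma above)
    def sup(x):
        below = [y for y in I if leq(y, x)]
        return [y for y in below
                if not any(leq(y, w) and w != y for w in below)][0]
    return tuple(sup(x) for x in P)

print(valid)                      # [(0,), (0, 1), (0, 2), (0, 1, 2)]
print([sup_I(I) for I in valid])  # [(0,0,0), (0,1,0), (0,0,2), (0,1,2)]
```

On this poset the four maps $\sup_I$ are exactly the idempotents of $\mathcal{OR}(P)$, in bijection with the four valid subsets $I$.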
We put a total order $\leq_{\operatorname{lex}}$ on such subsets by writing them as bit vectors along a linear extension $p_1,\dots,p_n$ of $P$, and comparing those bit vectors lexicographically. \begin{proposition} Let $f\in \mathcal{OR}(P)$. Then, $\operatorname{im}(\lfix f) \leq_{\operatorname{lex}} \operatorname{im}(\rfix f)$, with equality if and only if $f$ is an idempotent. \end{proposition} \begin{proof} Let $n=|P|$ and let $p_1,\ldots,p_n$ be a linear extension of $P$. For $k\in \{0,\dots,n\}$ set $L_k=\operatorname{im}(\lfix f) \cap \{p_1,\dots,p_k\}$ and $R_k = \operatorname{im}(\rfix f) \cap \{p_1,\dots,p_k\}$, respectively. As a first step, we prove the property $(H_k)$: if $L_k=R_k$ then $f$ restricted to $\{p_1,\dots,p_k\}$ is an idempotent with image set $R_k$. Obviously, $(H_0)$ holds. Now take $k>0$ such that $L_k=R_k$; then $L_{k-1}=R_{k-1}$ and we may use $(H_{k-1})$ by induction. Case 1: $p_k\in F(f)$, and is thus the smallest point in its fiber. This implies that $p_k\in L_k$, and by assumption, $L_k=R_k$. Assuming $p_k.f<_{\operatorname{lex}} p_k$ gives a contradiction: by $(H_{k-1})$, $p_k.f\in R_{k-1}$, so that $p_k.f$ is in the same fiber as $p_k$, contradicting the minimality of $p_k$ in its fiber. Hence $p_k.f=p_k$. Case 2: $p_k \in C(F(f))=\operatorname{im}(\lfix{f})$, but $p_k \not \in F(f)$. Then $p_k$ is a join of two smaller elements $x$ and $y$ of $L_k=R_k$; in particular, $p_k\in R_k$. By induction, $x$ and $y$ are fixed by $f$, and therefore $p_k.f=p_k$ by Lemma~\ref{lemma.fix}. Case 3: $p_k\not \in C(F(f))=\operatorname{im}(\lfix f)$; then $p_k$ is not a minimal element in its fiber; taking $p_i<_{\operatorname{lex}} p_k$ in the same fiber, we have $(p_k.f).f = (p_i.f).f = p_i.f = p_k.f$. Furthermore, $R_k=R_{k-1} = \{p_1,\dots,p_{k-1}\}.f = \{p_1,\dots,p_k\}.f$. In all three cases above, we deduce that $f$ restricted to $\{p_1,\dots,p_k\}$ is an idempotent with image set $R_k$, as desired. If $L_n=R_n$, we are done. Otherwise, take $k$ minimal such that $L_k\ne R_k$.
Assume first that $p_k\in L_k$ but not in $R_k$. In particular, $p_k$ is not a join of two elements $x$ and $y$ in $L_{k-1}=R_{k-1}$; hence $p_k$ is minimal in its fiber, and by the same argument as in Case 3 above, we get a contradiction. Therefore $p_k\in R_k$ but $p_k\not\in L_k$, so that $\operatorname{im}(\lfix f) <_{\operatorname{lex}} \operatorname{im}(\rfix f)$, as desired. \end{proof} \begin{corollary} The Cartan matrix of $\mathcal{OR}(P)$ is upper uni-triangular with respect to the lexicographic order associated to any linear extension of $P$. \end{corollary} \begin{problem} Find larger classes of monoids where this property still holds. Note that this fails for the $0$-Hecke monoid which is a submonoid of an $\mathcal{OR}(B)$ where $B$ is Bruhat order. \end{problem} \subsection{Restriction to meet semi-lattices} \label{sub:ndpfSemis} For the remainder of this section, let $L$ be a \emph{meet semi-lattice}, and consider the monoid $\mathcal{OR}(L)$. Recall that $L$ is a meet semi-lattice if every pair of elements $x,y\in L$ has a unique meet. For $a\geq b$, define an idempotent $e_{a,b}$ in $\mathcal{OR}(L)$ by: \[ x.e_{a,b} = \begin{cases} x\wedge b & \text{if $x\leq a$,}\\ x & \text{otherwise.} \end{cases} \] \begin{remark} \label{remark.oo.eab} The function $e_{a,b}$ is the (pointwise) largest element of $\mathcal{OR}(L)$ such that $a.f=b$. For $a\geq b\geq c$, $e_{a,b} e_{b,c} = e_{a,c}$. In the case where $L$ is a chain, that is $\mathcal{OR}(L)={\operatorname{NDPF}}_{|L|}$, those idempotents further satisfy the following braid-like relation: $e_{b,c} e_{a,b} e_{b,c} = e_{a,b} e_{b,c} e_{a,b} = e_{a,c}$. \end{remark} \begin{proof} The first statement is clear. Now take $a\geq b\geq c$ in a meet semi-lattice. For any $x\leq a$, we have $x.e_{a,b}=x\wedge b \leq b$, so $x.(e_{a,b}e_{b,c}) = x\wedge b \wedge c = x \wedge c$, since $b\ge c$. On the other hand, $x.e_{a,c} = x\wedge c$. For $x\not\leq a$, we also have $x\not\leq b$ since $b\leq a$, so that $x$ is fixed by $e_{a,b}$, $e_{b,c}$ and $e_{a,c}$; this proves the desired equality. Now consider the braid-like relation in ${\operatorname{NDPF}}_{|L|}$.
Using the previous result, one gets that $e_{b,c} e_{a,b} e_{b,c}=e_{b,c} e_{a,c}$ and $e_{a,b} e_{b,c} e_{a,b}=e_{a,c} e_{a,b}$. For $x> a$, $x$ is fixed by $e_{a,c}$, $e_{a,b}$ and $e_{b,c}$, and is thus fixed by the composition. The other cases can be checked analogously. \end{proof} \begin{proposition} The family $(e_{a,b})_{a,b}$, where $(a,b)$ runs through the covers of $L$, minimally generates the idempotents of $\mathcal{OR}(L)$. \end{proposition} \begin{proof} Given $f$ idempotent in $\mathcal{OR}(L)$, we can factorize $f$ as a product of the idempotents $e_{a,b}$. Take a linear extension of $L$, and recursively assume that $f$ is the identity on all elements above some least element $a$ of the linear extension. Then define a function $g$ by: \[ x.g= \begin{cases} a & \text{if $x=a$,} \\ x.f & \text{otherwise.} \end{cases} \] We claim that $f = ge_{a,a.f}$ and that $g \in \mathcal{OR}(L)$. There are a number of cases that must be checked: \begin{itemize} \item Suppose $x<a$. Then $x.ge_{a,a.f} = (x.f).e_{a,a.f} = x.f \wedge a.f = x.f$, since $x<a$ implies $x.f\leq a.f$. \item Suppose $x>a$. Then $x.ge_{a,a.f} = (x.f).e_{a,a.f} = x.e_{a,a.f} = x=x.f$, since $x$ is fixed by $f$ by assumption. \item Suppose $x$ not related to $a$, and $x.f\leq a.f$. Then $x.ge_{a,a.f} = (x.f).e_{a,a.f} = x.f$. \item Suppose $x$ not related to $a$, and $a.f\leq x.f\leq a$. By the idempotence of $f$ we have $a.f=a.f.f\le x.f.f\le a.f$, so $x.f=a.f$, which reduces to the previous case. \item Suppose $x$ not related to $a$, but $x.f\leq a$. Then by idempotence of $f$ we have $x.f=x.f.f\leq a.f$, reducing to a previous case. \item For $x$ not related to $a$, and $x.f$ not related to $a$ or $x.f>a$, we have $x.f$ fixed by $e_{a,a.f}$, which implies that $x.ge_{a,a.f}=x.f$. \item Finally for $x=a$ we have $a.ge_{a,a.f} = a.e_{a,a.f}=a\wedge a.f=a.f$. \end{itemize} Thus, $f = ge_{a,a.f}$. For all $x\le a$, we have $x.f\le a.f\le a$, so that $x.g\le a.g=a$.
For all $x>a$, we have $x$ fixed by $g$ by assumption, and for all other $x$, the $\mathcal{OR}(L)$ conditions are inherited from $f$. Thus $g$ is in $\mathcal{OR}(L)$. For all $x\neq a$, we have $x.g=x.f=x.f.f$. Since all $x>a$ are fixed by $f$, there is no $y$ such that $y.f=a$. Then $x.f.f=x.g.g$ for all $x\neq a$. Finally, $a$ is fixed by $g$, so $a=a.g.g$. Thus $g$ is idempotent. Applying this procedure recursively gives a factorization of $f$ into a composition of functions $e_{a,a.f}$. We can further refine this factorization using Remark~\ref{remark.oo.eab} on each $e_{a,a.f}$ by $e_{a,a.f}=e_{a_0,a_1}e_{a_1,a_2}\cdots e_{a_{k-1},a_k}$, where $a_0=a$, $a_k=a.f$, and $a_{i-1}$ covers $a_i$ for each $i$. Then we can express $f$ as a product of functions $e_{a,b}$ where $a$ covers $b$. This set of generators is minimal because $e_{a,b}$ where $a$ covers $b$ is the pointwise largest function in $\mathcal{OR}(L)$ mapping $a$ to $b$. \end{proof} As a byproduct of the proof, we obtain a canonical factorization of any idempotent $f\in \mathcal{OR}(L)$. \begin{example} The set of functions $e_{a,b}$ does not in general generate $\mathcal{OR}(L)$. Let $L$ be the Boolean lattice on three elements. Label the nodes of $L$ by triples $ijk$ with $i,j,k\in \{0,1\}$, and $abc\geq ijk$ if $a\leq i, b\leq j, c\leq k$. Define $f$ by $f(000)=000$, $f(100)=110, f(010)=011, f(001)=101$, and $f(x)=111$ for all other $x$. Simple inspection shows that $f\neq ge_{a,a.f}$ for any choice of $g$ and $a$. \end{example} \subsection{Orthogonal idempotents} \label{sub:ndpfOrthIdem} For $\{1,2,\ldots, N\}$ a chain, one can explicitly write down orthogonal idempotents for ${\operatorname{NDPF}}_N$. Recall that the minimal generators for ${\operatorname{NDPF}}_N$ are the elements $\pi_i =e_{i+1,i}$ and that ${\operatorname{NDPF}}_N$ is the quotient of $H_0(\sg[N])$ by the extra relation $\pi_i\pi_{i+1}\pi_i = \pi_{i+1}\pi_i$, via the quotient map $\pi_i\mapsto \pi_i$.
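On a small chain, these idempotents and relations can be checked numerically. A Python sketch (0-indexed chain, so that the $e_{i+1,i}$ of the text becomes \texttt{e(i, i-1)}; illustrative only) verifies the composition rule of Remark~\ref{remark.oo.eab}, the braid-like relation, and the quotient relation $\pi_i\pi_{i+1}\pi_i=\pi_{i+1}\pi_i$:

```python
# The idempotents e_{a,b} on the chain 0 < 1 < ... < N-1, and the
# minimal generators pi_i of NDPF_N; products use the right action
# x.(fg) = (x.f).g, i.e. f is applied first.
N = 4
chain = range(N)

def e(a, b):
    # largest function of NDPF_N mapping a to b (for a >= b)
    return tuple(min(x, b) if x <= a else x for x in chain)

def compose(f, g):
    return tuple(g[f[x]] for x in chain)

pi = {i: e(i, i - 1) for i in range(1, N)}   # pi_i = e_{i+1,i}, 0-indexed

# e_{a,b} e_{b,c} = e_{a,c} (Remark above), with a=3, b=2, c=1:
assert compose(e(3, 2), e(2, 1)) == e(3, 1)
# braid-like relation e_{b,c} e_{a,b} e_{b,c} = e_{a,b} e_{b,c} e_{a,b} = e_{a,c}:
lhs = compose(compose(e(2, 1), e(3, 2)), e(2, 1))
rhs = compose(compose(e(3, 2), e(2, 1)), e(3, 2))
assert lhs == rhs == e(3, 1)
# quotient relation pi_i pi_{i+1} pi_i = pi_{i+1} pi_i:
assert compose(compose(pi[1], pi[2]), pi[1]) == compose(pi[2], pi[1])
print("all relations hold")
```
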
By analogy with the $0$-Hecke algebra, set $\pi_i^+=\pi_i$ and $\pi_i^-=1-\pi_i$. We observe the following relations, which are easily checked. \begin{lemma}\label{lem:ndpfrels} The following relations hold: \begin{enumerate} \item $\pi_{i-1}^+ \pi_i^+ \pi_{i-1}^+ = \pi_i^+ \pi_{i-1}^+$, \item $\pi_{i-1}^- \pi_i^- \pi_{i-1}^- = \pi_{i-1}^- \pi_i^-$, \item $\pi_i^+ \pi_{i-1}^- \pi_i^+ = \pi_i^+ \pi_{i-1}^-$, \item $\pi_i^- \pi_{i-1}^+ \pi_i^- = \pi_{i-1}^+ \pi_i^-$, \item $\pi_{i-1}^+ \pi_i^- \pi_{i-1}^+ = \pi_i^- \pi_{i-1}^+$, \item $\pi_{i-1}^- \pi_i^+ \pi_{i-1}^- = \pi_{i-1}^- \pi_i^+$. \end{enumerate} \end{lemma} \begin{definition} Let $D$ be a \emph{signed diagram}, that is, an assignment of a $+$ or $-$ to each of the generators of ${\operatorname{NDPF}}_N$. By abuse of notation, we will write $i\in D$ if the generator $\pi_i$ is assigned a $+$ sign. Let $P=\{ P_1, P_2, \ldots, P_k\}$ be the partition of the generators such that adjacent generators with the same sign are in the same set, and generators with different signs are in different sets. Set $\epsilon(P_i)\in \{+,-\}$ to be the sign of the subset $P_i$. Let $\pi_{P_i}^{\epsilon(P_i)}$ be the longest element in the generators in $P_i$, according to the sign in $D$. Define: \begin{itemize} \item $L_D := \pi_{P_1}^{\epsilon(P_1)}\,\pi_{P_2}^{\epsilon(P_2)}\,\cdots \,\pi_{P_k}^{\epsilon(P_k)}$, \item $R_D := \pi_{P_k}^{\epsilon(P_k)}\,\pi_{P_{k-1}}^{\epsilon(P_{k-1})}\,\cdots \,\pi_{P_1}^{\epsilon(P_1)}$, \item and $C_D := L_DR_D$. \end{itemize} \end{definition} \begin{example} \label{example.D} Let $D=++++---++$. Then $P=\{ \{1,2,3,4\}, \{5,6,7\}, \{8,9\} \}$, and the associated long elements are: $\pi_{P_1}^+=\pi_4^+ \pi_3^+ \pi_2^+ \pi_1^+$, $\pi_{P_2}^-=\pi_5^- \pi_6^- \pi_7^-$, and $\pi_{P_3}^+=\pi_9^+ \pi_8^+$.
Then \begin{equation*} \begin{split} L_D &= \pi_{P_1}^+ \pi_{P_2}^- \pi_{P_3}^+ = (\pi_4^+ \pi_3^+ \pi_2^+ \pi_1^+) (\pi_5^- \pi_6^- \pi_7^-) (\pi_9^+ \pi_8^+),\\ R_D &= \pi_{P_3}^+ \pi_{P_2}^- \pi_{P_1}^+ = (\pi_9^+ \pi_8^+) (\pi_5^- \pi_6^- \pi_7^-) (\pi_4^+ \pi_3^+ \pi_2^+ \pi_1^+). \end{split} \end{equation*} \end{example} The elements $C_D$ are the images, under the natural quotient map from the $0$-Hecke algebra, of the \emph{diagram demipotents} constructed in~\cite{Denton.2010.FPSAC,Denton.2010}. An element $x$ of an algebra is \emph{demipotent} if there exists some finite integer $n$ such that $x^n=x^{n+1}$; such an $x^n$ is then idempotent. It was shown in~\cite{Denton.2010.FPSAC,Denton.2010} that raising the diagram demipotents to the power $N$ yields a set of primitive orthogonal idempotents for the $0$-Hecke algebra. It turns out that, under the quotient to ${\operatorname{NDPF}}_N$, these elements $C_D$ are immediately orthogonal idempotents, as we now prove. \begin{remark} \label{remark.fix_i} Fix $i$, and assume that $f$ is an element in the monoid generated by $\pi^-_{i+1},\dots,\pi^-_N$ and $\pi^+_{i+1},\dots,\pi^+_N$. Then, repeatedly applying Lemma~\ref{lem:ndpfrels} yields $$\pi^-_i f \pi^-_i = \pi^-_i f \qquad \text{and} \qquad \pi^+_i f \pi^+_i = f \pi^+_i\,.$$ \end{remark} The following proposition states that the elements $C_D$ are also the images of Norton's generators of the projective modules of the $0$-Hecke algebra through the natural quotient map to ${\operatorname{NDPF}}_N$. \begin{proposition} \label{proposition.norton_ndpf} Let $D$ be a signed diagram. Then, \begin{displaymath} C_D = \prod_{i=1,\dots,n,\ i \not\in D} \pi^-_i \quad \prod_{i=n,\dots,1,\ i\in D} \pi^+_i\,.
\end{displaymath} In other words, $C_D$ reduces to one of the following two forms: \begin{itemize} \item $C_D = (\pi_{P_1}^-\pi_{P_3}^-\cdots \pi_{P_{2k\pm 1}}^-) (\pi_{P_2}^+\pi_{P_4}^+\cdots \pi_{P_{2k}}^+)$, or \item $C_D = (\pi_{P_2}^-\pi_{P_4}^-\cdots \pi_{P_{2k}}^-) (\pi_{P_1}^+\pi_{P_3}^+\cdots \pi_{P_{2k\pm 1}}^+)$. \end{itemize} \end{proposition} \begin{proof} Let $D$ be a signed diagram. If it is of the form $-E$, where $E$ is a signed diagram for the generators $\pi_2,\dots,\pi_{N-1}$, then using Remark~\ref{remark.fix_i}, $$C_D = \pi^-_1 C_E \pi^-_1 = \pi^-_1 C_E\,.$$ Similarly, if it is of the form $+E$, then: $$C_D = \pi^+_1 C_E \pi^+_1 = C_E \pi^+_1\,.$$ Using induction on the isomorphic copy of ${\operatorname{NDPF}}_{N-1}$ generated by $\pi_2,\dots,\pi_{N-1}$ yields the desired formula. \end{proof} \begin{proposition} \label{proposition.idempotents.ndpf} The collection of all $C_D$ forms a complete set of orthogonal idempotents for ${\operatorname{NDPF}}_N$. \end{proposition} \begin{proof} First note that $C_D$ is never zero; for example, it is clear from Proposition~\ref{proposition.norton_ndpf} that the full expansion of $C_D$ has coefficient $1$ on $\prod_{i=n,\dots,1,\ i\in D} \pi^+_i$. Now take two signed diagrams $D$ and $D'$. If they differ in the first position, it is clear that $C_D C_{D'} =0$. Otherwise, write $D = \epsilon E$, and $D' = \epsilon E'$. Then, using Remark~\ref{remark.fix_i} and induction, \begin{equation*} \begin{split} C_D C_{D'} & = \pi^\epsilon_1 C_E \pi^\epsilon_1 \pi^\epsilon_1 C_{E'} \pi^\epsilon_1 = \pi^\epsilon_1 C_E \pi^\epsilon_1 C_{E'} \pi^\epsilon_1\\ &= \pi^\epsilon_1 C_E C_{E'} \pi^\epsilon_1 = \pi^\epsilon_1 \delta_{E,E'} C_E \pi^\epsilon_1 = \delta_{D,D'} C_D\,. \end{split} \end{equation*} Therefore, the $C_D$'s form a collection of $2^{N-1}$ nonzero orthogonal idempotents, which must be complete by cardinality.
\end{proof} One can interpret the diagram demipotents for ${\operatorname{NDPF}}_N$ as branching from the diagram demipotents for ${\operatorname{NDPF}}_{N-1}$ in the following way. For any $C_D=L_DR_D$ in ${\operatorname{NDPF}}_{N-1}$, the leading term of $C_D$ will be the longest element in the generators marked by plus signs in $D$. This leading idempotent has an image set which we will denote $\operatorname{im}(D)$ by abuse of notation. Now in ${\operatorname{NDPF}}_N$ we can associate two `children' to $C_D$: \[ C_{D+}=L_D\pi_{N}^+R_D \text{ and } C_{D-}=L_D\pi_{N}^-R_D. \] Then we have $C_{D+}+C_{D-}=C_D$, $\operatorname{im}(D+)=\operatorname{im}(D)$ and $\operatorname{im}(D-)=\operatorname{im}(D)\cup \{N\}$. \bigskip We now generalize this branching construction to any meet semi-lattice to derive a conjectural recursive formula for a decomposition of the identity into orthogonal idempotents. This construction relies on the branching rule for the idempotents of $\mathcal{OR}(L)$, and the existence of the maximal idempotents $e_{a,b}$ of Remark~\ref{remark.oo.eab}. \medskip Let $L$ be a meet semi-lattice, and fix a linear extension of $L$. For simplicity, we assume that the elements of $L$ are labelled $1,\dots,N$ along this linear extension. Recall that, by Proposition~\ref{proposition.idempotents.OO}, the idempotents are indexed by the subsets of $L$ which contain the minimal elements of $L$ and are stable under joins. In order to distinguish subsets of $\{1,\dots,N\}$ and subsets of, say, $\{1,\dots,N-1\}$, even if they have the same elements, it is convenient to identify them with $+-$ diagrams as we did for ${\operatorname{NDPF}}_N$. The \emph{valid diagrams} are those corresponding to subsets which contain the minimal elements and are stable under joins. A prefix of length $k$ of a valid diagram is still a valid diagram (for $L$ restricted to $\{1,\dots,k\}$), and they are therefore naturally organized in a binary prefix tree.
Let $D$ be a valid diagram, and let $e=\sup_D$ be the corresponding idempotent. If $L$ is empty, $D=\{\}$, and we set $L_{\{\}}=R_{\{\}}=1$. Otherwise, let $L'$ be the meet semi-lattice obtained by restriction of $L$ to $\{1,\dots,N-1\}$, and $D'$ the restriction of $D$ to $\{1,\dots,N-1\}$. \begin{itemize} \item[Case 1] $N$ is the join of two elements of $\operatorname{im}(D')$ (and in particular, $N\in \operatorname{im}(D)$). Then, set $L_D=L_{D'}$ and $R_D=R_{D'}$. \item[Case 2] $N\in \operatorname{im}(D)$. Then, set $L_D=L_{D'} \pi_{N, N.e}$ and $R_D=\pi_{N, N.e}R_{D'}$. \item[Case 3] $N\not \in \operatorname{im}(D)$. Then, set $L_D=L_{D'} (1-\pi_{N, N.e})$ and $R_D=(1-\pi_{N, N.e})R_{D'}$. \end{itemize} Finally, set $C_D=L_D R_D$. \begin{remark}[Branching rule] Now fix a valid diagram $D'$ for $L'$. If $N$ is the join of two elements of $\operatorname{im}(D')$, then $C_{D'}=C_{D'+}$. Otherwise $C_{D'}=C_{D'-} + C_{D'+}$. Hence, in the prefix tree of valid diagrams, the sums of all $C_D$'s at depth $k$ and at depth $k+1$ coincide. Branching recursively all the way back to the root of the prefix tree, it follows that the elements $C_D$ form a decomposition of the identity. Namely, \begin{equation*} 1 = \sum_{D \text{ valid diagram}} C_D\,. \end{equation*} \end{remark} \begin{conjecture} \label{conjecture.demi} Let $L$ be a meet semi-lattice. Then, the set $\{C_D\mid D \text{ valid diagram}\}$ forms a set of demipotent elements for $\mathcal{OR}(L)$ which, each raised to a sufficiently high power, yield a set of primitive orthogonal idempotents. \end{conjecture} This conjecture is supported by Proposition~\ref{proposition.idempotents.ndpf}, as well as by computer exploration on all $1377$ meet semi-lattices with at most $8$ elements and on a set of meet semi-lattices of larger size which the authors considered likely to be problematic.
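Such computer checks are straightforward to set up. For instance, the following toy Python sketch (integer coefficients, hypothetical encoding, not the authors' implementation) verifies idempotency, orthogonality, and the decomposition of the identity for the elements $C_D$ in the chain case ${\operatorname{NDPF}}_3$ of Proposition~\ref{proposition.idempotents.ndpf}:

```python
# Verification for N = 3, in the monoid algebra of NDPF_3 over the
# integers.  Algebra elements are dicts mapping function-tuples to
# coefficients; toy sketch only.
N = 3
chain = range(N)
one = tuple(chain)

def e(a, b):
    return tuple(min(x, b) if x <= a else x for x in chain)

def compose(f, g):                     # right action: x.(fg) = (x.f).g
    return tuple(g[f[x]] for x in chain)

pi = {1: e(1, 0), 2: e(2, 1)}          # the two generators of NDPF_3

def mul(u, v):                         # product in the algebra
    out = {}
    for f, c in u.items():
        for g, d in v.items():
            h = compose(f, g)
            out[h] = out.get(h, 0) + c * d
    return {f: c for f, c in out.items() if c}

def plus(i):
    return {pi[i]: 1}                  # pi_i^+

def minus(i):
    return {one: 1, pi[i]: -1}         # pi_i^- = 1 - pi_i

def C(D):
    # C_D = (pi_i^-, i not in D, increasing) (pi_i^+, i in D, decreasing)
    u = {one: 1}
    for i in (1, 2):
        if i not in D:
            u = mul(u, minus(i))
    for i in (2, 1):
        if i in D:
            u = mul(u, plus(i))
    return u

diagrams = [set(), {1}, {2}, {1, 2}]   # the 2^{N-1} signed diagrams
cs = [C(D) for D in diagrams]
assert all(mul(c, c) == c for c in cs)                      # idempotent
assert all(mul(cs[i], cs[j]) == {}                          # orthogonal
           for i in range(4) for j in range(4) if i != j)
total = {}
for c in cs:
    for f, k in c.items():
        total[f] = total.get(f, 0) + k
assert {f: k for f, k in total.items() if k} == {one: 1}    # complete
print("C_D form a complete set of orthogonal idempotents for NDPF_3")
```
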
In all cases, the demipotents were directly idempotents, which might suggest that Conjecture~\ref{conjecture.demi} could be strengthened to state that the collection $\{C_D\mid D \text{ valid diagram}\}$ forms directly a set of primitive orthogonal idempotents for $\mathcal{OR}(L)$. \bibliographystyle{web-alpha}
\chapter{Acronyms} \label{chap:abbrv} \vspace{-0.5cm} \begin{longtable}[l]{p{80pt} p{300pt}} \toprule \textbf{Acronym} & \textbf{Full Form} \\ \midrule AR & Acceptance Rate \\ BCL & Bilateral Convolutional Layer\\ BI & Bilateral Inception\\ BNN & Bilateral Neural Network\\ BP & Belief Propagation \\ CMP & Consensus Message Passing \\ CNN & Convolutional Neural Network\\ CPU & Central Processing Unit \\ CRF & Conditional Random Field \\ DDMCMC & Data Driven Markov Chain Monte Carlo\\ EP & Expectation Propagation \\ FC & Fully Connected \\ GPU & Graphics Processing Unit \\ IoU & Intersection over Union \\ INF-INDMH & Informed Independent Metropolis Hastings\\ INF-MIXMH & Informed Mixture Metropolis Hastings\\ INF-MH & Informed Metropolis Hastings\\ KDE & Kernel Density Estimation \\ MAP & Maximum a Posteriori \\ MCMC & Markov Chain Monte Carlo\\ MF & Mean-Field \\ MH & Metropolis Hastings\\ MHWG & Metropolis Hastings Within Gibbs \\ MP & Message Passing \\ PGM & Probabilistic Graphical Model \\ PSNR & Peak Signal to Noise Ratio \\ PSRF & Potential Scale Reduction Factor \\ PT & Parallel Tempering \\ REG-MH & Regenerative Metropolis Hastings \\ RF & Random Forest \\ RMSE & Root Mean Square Error \\ VMP & Variational Message Passing \\ VPN & Video Propagation Network \\ \bottomrule \end{longtable} \chapter{Abstract} \label{chap:abstract} Computer vision can be understood as the ability to perform \emph{inference} on image data. Breakthroughs in computer vision technology are often marked by advances in inference techniques, as even the model design is often dictated by the complexity of inference in them. This thesis proposes learning based inference schemes and demonstrates applications in computer vision. We propose techniques for inference in both generative and discriminative computer vision models. 
Despite their intuitive appeal, the use of generative models in vision is hampered by the difficulty of posterior inference, which is often too complex or too slow to be practical. We propose techniques for improving inference in two widely used frameworks: Markov Chain Monte Carlo (MCMC) sampling and message-passing inference. Our inference strategy is to learn separate discriminative models that assist Bayesian inference in a generative model. Experiments on a range of generative vision models show that the proposed techniques accelerate the inference process and/or converge to better solutions. A main complication in the design of discriminative models is the inclusion of prior knowledge in a principled way. For better inference in discriminative models, we propose techniques that modify the original model itself, as inference is simple evaluation of the model. We concentrate on convolutional neural network (CNN) models and propose a generalization of standard spatial convolutions, which are the basic building blocks of CNN architectures, to bilateral convolutions. First, we generalize the existing use of bilateral filters and then propose new neural network architectures with learnable bilateral filters, which we call `Bilateral Neural Networks'. We show how the bilateral filtering modules can be used for modifying existing CNN architectures for better image segmentation and propose a neural network approach for temporal information propagation in videos. Experiments demonstrate the potential of the proposed bilateral networks on a wide range of vision tasks and datasets. In summary, we propose learning based techniques for better inference in several computer vision models ranging from inverse graphics to freely parameterized neural networks. In generative vision models, our inference techniques alleviate some of the crucial hurdles in Bayesian posterior inference, paving new ways for the use of model based machine learning in vision. 
In discriminative CNN models, the proposed filter generalizations aid in the design of new neural network architectures that can handle sparse high-dimensional data as well as provide a way for incorporating prior knowledge into CNNs. \chapter{Acknowledgements} \label{chap:ack} First, my heartfelt thanks to my PhD advisor Prof. Peter Gehler without whom this thesis work would not have materialized. His suggestions helped me work on very interesting research problems in computer vision and machine learning, whereas stimulating discussions with him helped me propose compelling techniques for those problems. He also taught me how to be critical about various aspects of a research project. I thank Dr. Sebastian Nowozin for being a co-advisor in my first PhD research project and teaching me how to be rigorous in research. Thanks to Prof.~Michael Black for allowing me to be a part of his vibrant research group in T\"ubingen, where I conducted my PhD research. Special thanks to Prof. Bipin Indurkhya for teaching me how to be mature in my reasoning and thinking while doing research. Thanks to my master's advisor Prof.~Jayanthi Sivaswamy for her guidance during the start of my research career. Thanks to Dr. Pushmeet Kohli, Dr. Ali Eslami, Dr. John Winn and Dr. Daniel Tarlow for giving me a great internship opportunity at Microsoft Research that helped me gain a broader perspective on the fields of computer vision and machine learning. Thanks to Prof. Jan Kautz for giving me a wonderful opportunity at Nvidia to further pursue my research interests after my PhD. I am thankful to Prof. Hendrik Lensch, Prof. Peter Gehler, Prof. Felix Wichmann and Prof. Matthias Bethge for being a part of my thesis committee, and thanks to Prof. Hendrik Lensch and Prof. Peter Gehler for providing reviews of this thesis work. Special thanks to Dr. Iasonas Kokkinos for providing an additional review for this thesis. Thanks to Thomas Nestmeyer, Dr. 
Laura Sevilla, Angjoo Kanazawa, Fatma G\"uney, Jonas Wulff and Christoph Lassner for their helpful feedback and comments on this thesis manuscript. I am fortunate to have collaborated with several great researchers during my PhD: Dr. Martin Kiefel, Raghudeep Gadde, Dr. Sebastian Nowozin, Dr. Ali Eslami, Dr. John Winn, Dr. Pushmeet Kohli, Dr. Matthew Loper, Prof. Michael Black, Dr. Laura Sevilla, Dr. Deqing Sun, Daniel Kappler, Dr. Renaud Marlet and Dr. Daniel Tarlow. I am very grateful to them. Special thanks to Raghudeep Gadde and Dr. Laura Sevilla for trusting me and letting me be a part of their research projects. I am highly thankful to Raghudeep Gadde for the countless hours we spent together on brainstorming various research questions and techniques. I want to express my gratitude to my wonderful colleagues at Max-Planck Institute Thomas Nestmeyer, Raghudeep Gadde, Abhilash Srikantha, Dr. Laura Sevilla, Dr. Silvia Zuffi, Jonas Wulff, Fatma G\"uney, Dr. Martin Kiefel, Dr. Andreas Lehrmann, Christoph Lassner, Sergey Prokudin, Dr.~Osman Ulusoy, Dr. Federica Bogo, Daniel Kappler, Prof. Michael Black, Dr. Andreas Geiger, Prof. J\"urgen Gall, Dr. Chaohui Wang, Dr. Javier Romero, Dr. Hueihan Jhuang, Dr. Dimitris Tzionas, Dr. Gerard Pons-Moll, Dr. Cristina Cifuentes, Dr. Naejin Kong, Naureen Mohamed, Angjoo Kanazawa, Dr. Ijaz Akhter, Dr. S{\o}ren Hauberg, Dr. Matthew Loper, Dr. Aggeliki Tsoli, Dr. Sergi Pujades, Siyu Tang, Dr. Si Yong Yeo, Joel Janai, Yiyi Liao and others for their insightful discussions, technical and non-technical conversations and making my road to obtaining my PhD less bumpy. Thanks to Melanie Feldhofer, Jon Anning and Nicole Overbaugh for their great support in administrative matters. 
I am highly indebted to my friends Srikar Ravulapalli, Srujan Sunkoju, Srinivas Sunkara, Vamshi Velagapuri, Chandrasekhar Varkala, Sunil Soni, Rohit Gernapudi, Raghudeep Gadde, Seetharamakrishna Madamsetti, Tenzin Bety, Tsering Bhuti, Phurbu Thackchoe, Varsha Eluri and Dharani Kodali for always supporting, helping and encouraging me in times of need. Thanks to Dr. Sowmya Vajjala and Dr. Lorenzo Ducci for the fun times we had in and around T\"ubingen and for patiently listening to my ravings about both research and life in general. Finally, I would like to thank my family members who have given me great freedom and support to pursue my interests. I am very thankful to my parents Dr. Chandrasekhar Jampani and Sudharani Kodali, my brother Prasanth Jampani, my sister-in-law Neelima Kamani, and my grandmother Krishnakumari Kodali for their great support during all these years. Special thanks to my late grandfather and my favorite person Anjaneyulu Kodali for understanding me so well and encouraging me in different aspects of life. He will always be in my heart. Lastly, but importantly, I am very grateful to my wife Dr. Yuyu Liu for her love and continued support over the last few years and for sharing the ups and downs of my life. \chapter{Baseline Results and Analysis for Informed Sampler} \label{appendix:chap3} In this Appendix, we give an in-depth performance analysis of the various samplers and the effect of their hyperparameters. We choose hyperparameters with the lowest PSRF value after $10k$ iterations, for each sampler individually. If the PSRF values are not significantly different among multiple settings, we choose the one that has the highest acceptance rate. \section{Experiment: Estimating Camera Extrinsics} \label{appendix:chap3:room} \subsection{Parameter Selection} \paragraph{Metropolis Hastings (MH\xspace)} Figure~\ref{fig:exp1_MH} shows the median acceptance rates and PSRF values corresponding to various proposal standard deviations of plain MH\xspace~sampling. Mixing gets better and the acceptance rate gets worse as the standard deviation increases. The value $0.3$ is selected as the proposal standard deviation for this sampler. \paragraph{Metropolis Hastings Within Gibbs (MHWG\xspace)} As mentioned in Section~\ref{sec:room}, the MHWG\xspace~sampler with one-dimensional updates did not converge for any value of the proposal standard deviation. This problem exhibits high correlation between the camera parameters and is multi-modal in nature, both of which this sampler struggles with. \paragraph{Parallel Tempering (PT\xspace)} For PT\xspace~sampling, we took the best performing MH\xspace~sampler and used different temperature chains to improve the mixing of the sampler. Figure~\ref{fig:exp1_PT} shows the results corresponding to different combinations of temperature levels. The sampler with temperature levels of $[1,3,27]$ performed best in terms of both mixing and acceptance rate. 
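The selection rule stated at the beginning of this appendix — lowest PSRF after $10k$ iterations, with near-ties broken by the acceptance rate — can be sketched as follows. The tolerance and the candidate triples are illustrative, not values from the thesis.

```python
# Hedged sketch of the hyperparameter-selection rule: take the lowest
# PSRF and, when several settings are not significantly different,
# prefer the highest acceptance rate.
def select_hyperparameter(candidates, tol=0.05):
    """candidates: list of (setting, psrf, acceptance_rate) triples."""
    best_psrf = min(psrf for _, psrf, _ in candidates)
    # settings whose PSRF is within `tol` of the best count as ties
    near_best = [c for c in candidates if c[1] <= best_psrf + tol]
    # break ties by the highest acceptance rate
    return max(near_best, key=lambda c: c[2])[0]

# illustrative (setting, PSRF, AR) triples
candidates = [(0.1, 1.30, 0.70), (0.3, 1.05, 0.48), (0.5, 1.06, 0.55)]
chosen = select_hyperparameter(candidates)   # -> 0.5
```

Here the setting $0.1$ is excluded because its PSRF is significantly worse, and among the two near-ties the one with the higher acceptance rate wins.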
\paragraph{Effect of Mixture Coefficient in Informed Sampling (INF-MH\xspace)} Figure~\ref{fig:exp1_alpha} shows the effect of the mixture coefficient ($\alpha$) on the informed sampling INF-MH\xspace. Since there is no significant difference in PSRF values for $0 \le \alpha \le 0.7$, we chose $0.7$ due to its high acceptance rate. \begin{figure}[h] \centering \subfigure[MH]{% \includegraphics[width=.48\textwidth]{figures/supplementary/camPose_MH.pdf} \label{fig:exp1_MH} } \subfigure[PT]{% \includegraphics[width=.48\textwidth]{figures/supplementary/camPose_PT.pdf} \label{fig:exp1_PT} } \\ \subfigure[INF-MH]{% \includegraphics[width=.48\textwidth]{figures/supplementary/camPose_alpha.pdf} \label{fig:exp1_alpha} } \mycaption{Results of the `Estimating Camera Extrinsics' experiment}{PSRFs and Acceptance rates corresponding to (a) various standard deviations of MH\xspace, (b) various temperature level combinations of PT\xspace sampling and (c) various mixture coefficients of INF-MH\xspace sampling.} \end{figure} \begin{figure}[!t] \centering \subfigure[MH\xspace]{% \includegraphics[width=.48\textwidth]{figures/supplementary/occlusionExp_MH.pdf} \label{fig:exp2_MH} } \subfigure[BMHWG\xspace]{% \includegraphics[width=.48\textwidth]{figures/supplementary/occlusionExp_BMHWG.pdf} \label{fig:exp2_BMHWG} } \\ \subfigure[MHWG\xspace]{% \includegraphics[width=.48\textwidth]{figures/supplementary/occlusionExp_MHWG.pdf} \label{fig:exp2_MHWG} } \subfigure[PT\xspace]{% \includegraphics[width=.48\textwidth]{figures/supplementary/occlusionExp_PT.pdf} \label{fig:exp2_PT} } \\ \subfigure[INF-BMHWG\xspace]{% \includegraphics[width=.5\textwidth]{figures/supplementary/occlusionExp_alpha.pdf} \label{fig:exp2_alpha} } \mycaption{Results of the `Occluding Tiles' experiment}{PSRFs and Acceptance rates corresponding to various standard deviations of (a) MH\xspace, (b) BMHWG\xspace, (c) MHWG\xspace, (d) various temperature level combinations of PT\xspace~sampling and (e) various mixture coefficients 
of our informed INF-BMHWG\xspace sampling.} \end{figure} \section{Experiment: Occluding Tiles} \label{appendix:chap3:tiles} \subsection{Parameter Selection} \paragraph{Metropolis Hastings (MH\xspace)} Figure~\ref{fig:exp2_MH} shows the results of MH\xspace~sampling. Results show poor convergence for all proposal standard deviations and a rapid decrease of AR with increasing standard deviation. This is due to the high-dimensional nature of the problem. We selected a standard deviation of $1.1$. \paragraph{Blocked Metropolis Hastings Within Gibbs (BMHWG\xspace)} The results of BMHWG\xspace are shown in Figure~\ref{fig:exp2_BMHWG}. In this sampler we update only one block of tile variables (of dimension four) in each sampling step. Results show much better performance compared to plain MH\xspace. The optimal proposal standard deviation for this sampler is $0.7$. \paragraph{Metropolis Hastings Within Gibbs (MHWG\xspace)} Figure~\ref{fig:exp2_MHWG} shows the result of MHWG\xspace sampling. This sampler is better than BMHWG\xspace and converges much more quickly. Here a standard deviation of $0.9$ is found to be best. \paragraph{Parallel Tempering (PT\xspace)} Figure~\ref{fig:exp2_PT} shows the results of PT\xspace sampling with various temperature combinations. Results show no improvement in AR over plain MH\xspace sampling, and again $[1,3,27]$ temperature levels are found to be optimal. \paragraph{Effect of Mixture Coefficient in Informed Sampling (INF-BMHWG\xspace)} Figure~\ref{fig:exp2_alpha} shows the effect of the mixture coefficient ($\alpha$) on the blocked informed sampling INF-BMHWG\xspace. Since there is no significant difference in PSRF values for $0 \le \alpha \le 0.8$, we chose $0.8$ due to its high acceptance rate. 
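Convergence throughout these comparisons is monitored with the PSRF. For reference, the standard Gelman–Rubin computation for a scalar parameter can be sketched as below; this is an illustrative implementation, not the code used for the experiments.

```python
import numpy as np

def psrf(chains):
    """Gelman-Rubin potential scale reduction factor.

    chains: array of shape (m, n) -- m parallel chains, n samples each.
    """
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)          # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()    # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled variance estimate
    return float(np.sqrt(var_hat / W))

rng = np.random.default_rng(0)
well_mixed = rng.standard_normal((4, 2000))  # four chains, same target
r = psrf(well_mixed)                          # close to 1.0
```

Values close to $1$ indicate that the between-chain variance is small relative to the within-chain variance, i.e.\ the chains are well mixed; separated chains give values well above $1$.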
\section{Experiment: Estimating Body Shape} \label{appendix:chap3:body} \subsection{Parameter Selection} \paragraph{Metropolis Hastings (MH\xspace)} Figure~\ref{fig:exp3_MH} shows the result of MH\xspace~sampling with various proposal standard deviations. The value of $0.1$ is found to be best. \paragraph{Metropolis Hastings Within Gibbs (MHWG\xspace)} For MHWG\xspace sampling we select a proposal standard deviation of $0.3$. Results are shown in Fig.~\ref{fig:exp3_MHWG}. \paragraph{Parallel Tempering (PT\xspace)} As before (results in Fig.~\ref{fig:exp3_PT}), the temperature levels were selected to be $[1,3,27]$ due to the slightly higher AR. \paragraph{Effect of Mixture Coefficient in Informed Sampling (INF-MH\xspace)} Figure~\ref{fig:exp3_alpha} shows the effect of $\alpha$ on PSRF and AR. Since there are no significant differences in PSRF values for $0 \le \alpha \le 0.8$, we choose $0.8$. \begin{figure}[t] \centering \subfigure[MH\xspace]{% \includegraphics[width=.48\textwidth]{figures/supplementary/bodyShape_MH.pdf} \label{fig:exp3_MH} } \subfigure[MHWG\xspace]{% \includegraphics[width=.48\textwidth]{figures/supplementary/bodyShape_MHWG.pdf} \label{fig:exp3_MHWG} } \\ \subfigure[PT\xspace]{% \includegraphics[width=.48\textwidth]{figures/supplementary/bodyShape_PT.pdf} \label{fig:exp3_PT} } \subfigure[INF-MH\xspace]{% \includegraphics[width=.48\textwidth]{figures/supplementary/bodyShape_alpha.pdf} \label{fig:exp3_alpha} } \\ \mycaption{Results of the `Body Shape Estimation' experiment}{PSRFs and Acceptance rates corresponding to various standard deviations of (a) MH\xspace, (b) MHWG\xspace; (c) various temperature level combinations of PT\xspace sampling and (d) various mixture coefficients of the informed INF-MH\xspace sampling.} \end{figure} \section{Results Overview} Figure~\ref{fig:exp_summary} shows the summary results of all three experimental studies related to the informed sampler. \begin{figure*}[h!] 
\centering \subfigure[Results for: Estimating Camera Extrinsics]{% \includegraphics[width=0.9\textwidth]{figures/supplementary/camPose_ALL.pdf} \label{fig:exp1_all} } \subfigure[Results for: Occluding Tiles]{% \includegraphics[width=0.9\textwidth]{figures/supplementary/occlusionExp_ALL.pdf} \label{fig:exp2_all} } \subfigure[Results for: Estimating Body Shape]{% \includegraphics[width=0.9\textwidth]{figures/supplementary/bodyShape_ALL.pdf} \label{fig:exp3_all} } \label{fig:exp_summary} \mycaption{Summary of the statistics for the three experiments}{Shown are the acceptance rates (left), PSRFs (middle), and RMSE values (right) for several baseline methods and the informed samplers. All results are median results over multiple test examples.} \end{figure*} \section{Additional Qualitative Results} \subsection{Occluding Tiles} In Figure~\ref{fig:exp2_visual} more qualitative results of the occluding tiles experiment are shown. The informed sampling approach (INF-BMHWG\xspace) performs better than the best baseline (MHWG\xspace). This is still a very challenging problem, since the posterior over the parameters of occluded tiles is flat over a large region. Some of the posterior variance of the occluded tiles is already captured by the informed sampler. \begin{figure*}[h!] \begin{center} \centerline{\includegraphics[width=0.95\textwidth]{figures/supplementary/occlusionExp_Visual.pdf}} \mycaption{Additional qualitative results of the occluding tiles experiment} {From left to right: (a) Given image, (b) Ground truth tiles, (c) OpenCV heuristic and most probable estimates from 5000 samples obtained by (d) MHWG sampler (best baseline) and (e) our INF-BMHWG sampler. 
(f) Posterior expectation of the tiles boundaries obtained by INF-BMHWG sampling (the first 2000 samples are discarded as burn-in).} \label{fig:exp2_visual_more} \end{center} \end{figure*} \subsection{Body Shape} Figure~\ref{fig:exp3_bodyMeshes} shows additional results of 3D mesh reconstruction using posterior samples obtained by our informed sampling INF-MH\xspace. \begin{figure*}[t] \begin{center} \centerline{\includegraphics[width=0.75\textwidth]{figures/supplementary/bodyMeshResults.pdf}} \mycaption{Qualitative results for the body shape experiment} {Shown are the 3D mesh reconstruction results with the first 1000 samples obtained using the INF-MH\xspace informed sampling method (blue indicates small values and red indicates high values).} \label{fig:exp3_bodyMeshes} \end{center} \end{figure*} \chapter{Additional Results for Consensus Message Passing} \section{Additional results on the face problem} \subsection{Qualitative results} Figure~\ref{fig:shading-qualitative-multiple-subjects} shows inference results for reflectance maps, normal maps and lights for randomly chosen test images, and Figure~\ref{fig:shading-qualitative-same-subject} shows reflectance estimation results on multiple images of the same subject produced under different illumination conditions. Consensus message passing\@\xspace is able to produce reflectance estimates that are closer to the photometric stereo groundtruth across subjects and across different illumination conditions. \subsection{Quantitative results} Figure~\ref{fig:shading-quantitative-reflectance} shows quantitative results for both real images from the `Yale B' and `Extended Yale B' datasets \citep{Georghiades2001, Lee2005} and synthetic shadowless images. The synthetic shadowless images were created using the same light, reflectance and normal map statistics as those of the images in the real dataset (as estimated using photometric stereo~\citep{Queau2013}). 
Subject recognition results indicate superior performance of CMP\@\xspace in comparison to other baselines in both real and synthetic image settings. Figure~\ref{fig:shading-quantitative-light} shows the quantitative results of light inference using the different inference techniques. We use the cosine angle distance between the estimated light and the photometric stereo groundtruth ($\textrm{error} = \cos^{-1}(\hat{\mathbf{l}}_\textrm{est}\cdot\hat{\mathbf{l}}_\textrm{ps})$) as an error metric. Here, $\hat{\mathbf{l}}_\textrm{est}$ is a unit vector in the same direction as the mean of the posterior light estimate of CMP\@\xspace and $\hat{\mathbf{l}}_\textrm{ps}$ is a unit vector in the same direction as the corresponding photometric stereo groundtruth. Again, these results indicate the superior performance of CMP\@\xspace in comparison to other baselines in both real and synthetic image settings. \begin{figure*}[h] \centering \setlength\fboxsep{0.2mm} \setlength\fboxrule{0pt} \begin{tikzpicture} \node at (-7.3, 2.9) {\small (a) Observed}; \node at (-4.2, 2.9) {\small (b) Reflectance}; \node at (-0.58, 2.9) {\small (c) Variance}; \node at (2.35, 2.9) {\small (d) Light}; \node at (6.4, 2.9) {\small (e) Normal}; \node at (-4.15, 2.6) {\tiny $\color{gray}{\overbrace{\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad}}$}; \node at (2.45, 2.6) {\tiny $\color{gray}{\overbrace{\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad}}$}; \node at (6.58, 2.6) {\tiny $\color{gray}{\overbrace{\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\,\,\,}}$}; \matrix at (0, 0) [matrix of nodes, nodes={anchor=east}, column sep=-0.15cm, row sep=-0.2cm] { \fbox{\includegraphics[width=1cm]{figures/sample_2_5_X.png}} & \hspace{0.12cm} & \fbox{\includegraphics[width=1cm]{figures/sample_2_5_R.png}} & 
\fbox{\includegraphics[width=1cm]{figures/sample_2_5_biswasR.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_5_vmpR.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_5_forestR.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_5_cmpR.png}} & \hspace{0.12cm} & \fbox{\includegraphics[width=1cm]{figures/sample_2_5_cmpvarR.png}} & \hspace{0.12cm} & \fbox{\includegraphics[width=1cm]{figures/sample_2_5_L.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_5_vmpL.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_5_forestL.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_5_cmpL.png}} & \hspace{0.12cm} & \fbox{\includegraphics[width=1cm]{figures/sample_2_5_N.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_5_vmpN.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_5_cmpN.png}} \\ \fbox{\includegraphics[width=1cm]{figures/sample_2_6_X.png}} & \hspace{0.12cm} & \fbox{\includegraphics[width=1cm]{figures/sample_2_6_R.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_6_biswasR.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_6_vmpR.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_6_forestR.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_6_cmpR.png}} & \hspace{0.12cm} & \fbox{\includegraphics[width=1cm]{figures/sample_2_6_cmpvarR.png}} & \hspace{0.12cm} & \fbox{\includegraphics[width=1cm]{figures/sample_2_6_L.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_6_vmpL.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_6_forestL.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_6_cmpL.png}} & \hspace{0.12cm} & \fbox{\includegraphics[width=1cm]{figures/sample_2_6_N.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_6_vmpN.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_6_cmpN.png}} \\ \fbox{\includegraphics[width=1cm]{figures/sample_2_7_X.png}} & \hspace{0.12cm} & \fbox{\includegraphics[width=1cm]{figures/sample_2_7_R.png}} & 
\fbox{\includegraphics[width=1cm]{figures/sample_2_7_biswasR.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_7_vmpR.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_7_forestR.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_7_cmpR.png}} & \hspace{0.12cm} & \fbox{\includegraphics[width=1cm]{figures/sample_2_7_cmpvarR.png}} & \hspace{0.12cm} & \fbox{\includegraphics[width=1cm]{figures/sample_2_7_L.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_7_vmpL.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_7_forestL.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_7_cmpL.png}} & \hspace{0.12cm} & \fbox{\includegraphics[width=1cm]{figures/sample_2_7_N.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_7_vmpN.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_7_cmpN.png}} \\ \fbox{\includegraphics[width=1cm]{figures/sample_2_8_X.png}} & \hspace{0.12cm} & \fbox{\includegraphics[width=1cm]{figures/sample_2_8_R.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_8_biswasR.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_8_vmpR.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_8_forestR.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_8_cmpR.png}} & \hspace{0.12cm} & \fbox{\includegraphics[width=1cm]{figures/sample_2_8_cmpvarR.png}} & \hspace{0.12cm} & \fbox{\includegraphics[width=1cm]{figures/sample_2_8_L.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_8_vmpL.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_8_forestL.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_8_cmpL.png}} & \hspace{0.12cm} & \fbox{\includegraphics[width=1cm]{figures/sample_2_8_N.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_8_vmpN.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_2_8_cmpN.png}} \\ }; \node at (-6.4, -2.7) {\small GT}; \node at (-5.25, -2.7) {\small BU}; \node at (-4.1, -2.7) {\small MP}; \node at (-3.0, -2.7) {\small Forest}; 
\node at (-1.87, -2.7) {\small \textbf{CMP}}; \node at (-0.55, -2.7) {\small \textbf{CMP}}; \node at (0.75, -2.7) {\small GT}; \node at (1.9, -2.7) {\small MP}; \node at (3, -2.7) {\small Forest}; \node at (4.15, -2.7) {\small \textbf{CMP}}; \node at (5.5, -2.7) {\small GT}; \node at (6.6, -2.7) {\small MP}; \node at (7.72, -2.7) {\small \textbf{CMP}}; \end{tikzpicture} \vspace{-0.4cm} \mycaption{A visual comparison of inference results}{For 4 randomly chosen test images, we show inference results obtained by competing methods. (a)~Observed images. (b)~Inferred reflectance maps. \textit{GT} is the photometric stereo groundtruth, \textit{BU} is the Biswas \textit{et~al.}\@\xspace (2009) reflectance estimate and \textit{Forest} is the consensus prediction. (c)~The variance of the inferred reflectance estimate produced by CMP\@\xspace (normalized across rows). High variance regions correlate strongly with cast shadows. (d)~Visualization of inferred light directions. (e)~Inferred normal maps.} \label{fig:shading-qualitative-multiple-subjects-supp} \end{figure*} \begin{figure*}[H] \centering \setlength\fboxsep{0.2mm} \setlength\fboxrule{0pt} \begin{tikzpicture} \matrix at (0, 0) [matrix of nodes, nodes={anchor=east}, column sep=-0.15cm, row sep=-0.2cm] { \fbox{\includegraphics[width=1cm]{figures/sample_3_1_X.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_1_GT.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_1_BISWAS.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_1_VMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_1_FOREST.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_1_CMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_1_CMPVAR.png}} & \hspace{0.25cm} & \fbox{\includegraphics[width=1cm]{figures/sample_3_4_X.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_4_GT.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_4_BISWAS.png}} & 
\fbox{\includegraphics[width=1cm]{figures/sample_3_4_VMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_4_FOREST.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_4_CMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_4_CMPVAR.png}} \\ \fbox{\includegraphics[width=1cm]{figures/sample_3_2_X.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_2_GT.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_2_BISWAS.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_2_VMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_2_FOREST.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_2_CMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_2_CMPVAR.png}} & \hspace{0.25cm} & \fbox{\includegraphics[width=1cm]{figures/sample_3_5_X.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_5_GT.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_5_BISWAS.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_5_VMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_5_FOREST.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_5_CMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_5_CMPVAR.png}} \\ \fbox{\includegraphics[width=1cm]{figures/sample_3_3_X.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_3_GT.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_3_BISWAS.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_3_VMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_3_FOREST.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_3_CMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_3_CMPVAR.png}} & \hspace{0.25cm} & \fbox{\includegraphics[width=1cm]{figures/sample_3_6_X.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_6_GT.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_6_BISWAS.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_6_VMP.png}} & 
\fbox{\includegraphics[width=1cm]{figures/sample_3_6_FOREST.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_6_CMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_6_CMPVAR.png}} \\ }; \node at (-7.47, -2.0) {\small Observed}; \node at (-6.33, -2.0) {\small `GT'}; \node at (-5.2, -2.0) {\small BU}; \node at (-4.1, -2.0) {\small VMP}; \node at (-2.95, -2.0) {\small Forest}; \node at (-1.87, -2.0) {\small \textbf{CMP}}; \node at (-0.75, -2.0) {\small Variance}; \node at (0.75, -2.0) {\small Observed}; \node at (1.87, -2.0) {\small `GT'}; \node at (2.95, -2.0) {\small BU}; \node at (4.1, -2.0) {\small VMP}; \node at (5.25, -2.0) {\small Forest}; \node at (6.35, -2.0) {\small \textbf{CMP}}; \node at (7.48, -2.0) {\small Variance}; \end{tikzpicture} \mycaption{Robustness to varying illumination}{Reflectance estimation on two subject images with varying illumination. Left to right: observed image, photometric stereo estimate which is used as a proxy for groundtruth, bottom-up estimate of \cite{Biswas2009}, VMP result, consensus forest estimate, CMP mean, and CMP variance.} \label{fig:shading-qualitative-same-subject-supp} \end{figure*} \chapter{Supplementary Material for Chapter-5} This supplementary material contains a more detailed overview of the permutohedral lattice convolution in Section~\ref{sec:permconv}, more experiments in Section~\ref{sec:addexps} and additional results with experiment protocols for the experiments presented before in Section~\ref{sec:addresults}. \section{General Permutohedral Convolutions} \label{sec:permconv} A core technical contribution of this work is the generalization of the Gaussian permutohedral lattice convolution proposed in~\cite{adams2010fast} to the full non-separable case with the ability to perform backpropagation. Although, conceptually, there are minor differences between non-Gaussian and general parameterized filters, there are non-trivial practical differences in terms of the algorithmic implementation. 
The Gauss filters belong to the separable class and can thus be decomposed into multiple sequential one-dimensional convolutions. We are interested in general filter convolutions, which cannot be decomposed. Performing a general permutohedral convolution at a lattice point therefore requires computing the inner product with the neighboring elements in all directions of the high-dimensional space. Here, we give more details on the implementation differences between separable and non-separable filters. In the following, we explain the scalar case first.
Recall that the forward pass of the general permutohedral convolution involves 3 steps: \textit{splatting}, \textit{convolving} and \textit{slicing}. We follow the same splatting and slicing strategies as in~\cite{adams2010fast} since these operations do not depend on the filter kernel. The main difference between our work and the existing implementation of~\cite{adams2010fast} is the way the convolution operation is executed. It proceeds by constructing a \emph{blur neighbor} matrix $K$ that stores, for every lattice point, all values of the lattice neighbors that are needed to compute the filter output.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{figures/supplementary/lattice_construction}
\mycaption{Illustration of 1D permutohedral lattice construction}
{A $4\times 4$ $(x,y)$ grid lattice is projected onto the plane defined by the normal vector $(1,1)^{\top}$. This grid has $s+1=4$ points per side and $d=2$ dimensions, hence $(s+1)^{d}=4^2=16$ elements. In the projection, all points of the same color are projected onto the same points in the plane. The number of elements of the projected lattice is $t=(s+1)^d-s^d=4^2-3^2=7$, that is, the $(4\times 4)$ grid minus the size of a lattice that is $1$ smaller along each side, in this case a $(3\times 3)$ lattice (the upper right $(3\times 3)$ elements).
}
\label{fig:latticeconstruction}
\end{figure}
The blur neighbor matrix is constructed by traversing all populated lattice points and their neighboring elements. This is done recursively to share computations: for any lattice point, the neighbors that are $n$ hops away are the direct neighbors of the points that are $n-1$ hops away. The size of a $d$ dimensional spatial filter with width $s+1$ is $(s+1)^{d}$ (\textit{e.g.}\@\xspace, a $3\times 3$ filter, $s=2$ in $d=2$, has $3^2=9$ elements), and this size grows exponentially in the number of dimensions $d$. The permutohedral lattice is constructed by projecting a regular grid onto the plane spanned by the $d$ dimensional normal vector ${(1,\ldots,1)}^{\top}$. See Fig.~\ref{fig:latticeconstruction} for an illustration of the 1D lattice construction. Many corners of a grid filter are projected onto the same point; in total, $t = {(s+1)}^{d} - s^{d}$ elements remain in the permutohedral filter with $s$ neighborhood in $d-1$ dimensions. If the lattice has $m$ populated elements, the matrix $K$ has size $t\times m$. Note that, since the input signal is typically sparse, only a few lattice corners are populated in the \textit{splatting} step. We use a hash-table to keep track of these points and traverse only the populated lattice points for this neighborhood matrix construction.
Once the blur neighbor matrix $K$ is constructed, we can perform the convolution by the matrix vector multiplication
\begin{equation}
l' = BK,
\label{eq:conv}
\end{equation}
where $B$ is the $1 \times t$ filter kernel (whose values we will learn) and $l'\in\mathbb{R}^{1\times m}$ is the result of the filtering at the $m$ lattice points. In practice, we found that the matrix $K$ is sometimes too large to fit into GPU memory, so we divide $K$ into smaller pieces and compute Eq.~\ref{eq:conv} sequentially. In the general multi-dimensional case, the signal $l$ is of $c$ dimensions.
The kernel $B$ is then of size $c\times t$ and $K$ stores the $c$-dimensional vectors accordingly. When the input and output points are different, we splat only the input points and slice only at the output points.
\section{Additional Experiments}
\label{sec:addexps}
In this section we discuss further use cases of the learned bilateral filters: one use case of BNNs and two single-filter applications, image denoising and 3D mesh denoising.
\subsection{Recognition of subsampled MNIST}\label{sec:app_mnist}
One of the strengths of the proposed filter convolution is that it does not require the input to lie on a regular grid. The only requirement is to define a distance between features of the input signal. We highlight this with the following experiment on the classical MNIST ten-class classification problem~\cite{lecun1998mnist}. We sample a sparse set of $N$ points $(x,y)\in [0,1]\times [0,1]$ uniformly at random in the input image, use their interpolated values as the signal and the \emph{continuous} $(x,y)$ positions as features. This mimics sub-sampling of a high-dimensional signal. To compare against a spatial convolution, we interpolate the sparse set of values at the grid positions.
We take a reference implementation of LeNet~\cite{lecun1998gradient} that is part of the Caffe project~\cite{jia2014caffe} and compare it against the same architecture with the first convolutional layer replaced by a bilateral convolution layer (BCL). The filter size and numbers are adjusted to get a comparable number of parameters ($5\times 5$ for LeNet, $2$-neighborhood for BCL).
The results are shown in Table~\ref{tab:all-results}. We see that training on the original MNIST data (column Original, LeNet vs. BNN) leads to a slight decrease in performance of the BNN (99.03\%) compared to LeNet (99.19\%). The BNN can be trained and evaluated on sparse signals, and we resample the image as described above for $N=$ 100\%, 60\% and 20\% of the total number of pixels.
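The resampling described above can be sketched as follows; the helper and its nearest-neighbour lookup are illustrative simplifications, not the experiment code:

```python
import numpy as np

def subsample_image(img, frac, rng):
    """Sample `frac` of the pixel count at continuous random positions in
    [0, 1]^2 and read off intensities (nearest-neighbour lookup stands in
    for the interpolation used in the experiments). Hypothetical helper."""
    h, w = img.shape
    n = int(frac * h * w)
    xy = rng.random((n, 2))                      # features: continuous (x, y)
    rows = np.minimum((xy[:, 1] * h).astype(int), h - 1)
    cols = np.minimum((xy[:, 0] * w).astype(int), w - 1)
    return xy, img[rows, cols]                   # (features, signal values)

rng = np.random.default_rng(0)
img = rng.random((28, 28))                       # stand-in for an MNIST digit
feats, sig = subsample_image(img, 0.2, rng)      # 20% subsampling
print(feats.shape, sig.shape)                    # (156, 2) (156,)
```

The BCL then consumes `sig` directly with `feats` as its continuous lattice positions, whereas the CNN baseline first interpolates `sig` back onto the pixel grid.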
The methods are also evaluated on test images that are subsampled in the same way. Note that we can train and test with different subsampling rates. We introduce an additional bilinear interpolation layer for the LeNet architecture to train on the same data. In essence, both models perform a spatial interpolation, and thus we expect them to yield a similar classification accuracy. Once the data is higher dimensional, the permutohedral convolution is faster, since it hashes only the sparse input points, and it is also less memory demanding than a naive application of a spatial convolution to interpolated values.
\begin{table}[t]
\begin{center}
\footnotesize
\centering
\begin{tabular}[t]{lllll}
\toprule
& & \multicolumn{3}{c}{Test Subsampling} \\
Method & Original & 100\% & 60\% & 20\%\\
\midrule
LeNet & \textbf{0.9919} & 0.9660 & 0.9348 & \textbf{0.6434} \\
BNN & 0.9903 & \textbf{0.9844} & \textbf{0.9534} & 0.5767 \\
\hline
LeNet 100\% & 0.9856 & 0.9809 & 0.9678 & \textbf{0.7386} \\
BNN 100\% & \textbf{0.9900} & \textbf{0.9863} & \textbf{0.9699} & 0.6910 \\
\hline
LeNet 60\% & 0.9848 & 0.9821 & 0.9740 & 0.8151 \\
BNN 60\% & \textbf{0.9885} & \textbf{0.9864} & \textbf{0.9771} & \textbf{0.8214}\\
\hline
LeNet 20\% & \textbf{0.9763} & \textbf{0.9754} & 0.9695 & 0.8928 \\
BNN 20\% & 0.9728 & 0.9735 & \textbf{0.9701} & \textbf{0.9042}\\
\bottomrule
\end{tabular}
\end{center}
\vspace{-.2cm}
\caption{Classification accuracy on MNIST. We compare the LeNet~\cite{lecun1998gradient} implementation that is part of Caffe~\cite{jia2014caffe} to the network with the first layer replaced by a bilateral convolution layer (BCL). Both are trained on the original image resolution (first two rows). Three more BNN and CNN models are trained with randomly subsampled images (100\%, 60\% and 20\% of the pixels). An additional bilinear interpolation layer samples the input signal on a spatial grid for the CNN model.
}
\label{tab:all-results}
\vspace{-.3cm}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{figures/supplementary/sample_body_data.jpg}
\mycaption{Sample data for 3D mesh denoising}
{(top) Some 3D body meshes sampled from~\cite{SMPL:2015} and (bottom) the corresponding noisy meshes used in denoising experiments.}
\label{fig:samplebody}
\end{figure}
\subsection{Image Denoising}
The main application that inspired the development of the bilateral filtering operation is image denoising~\cite{aurich1995non}, originally using a single Gaussian kernel. Our development allows this kernel function to be learned from data, and we explore how much a \emph{single} but more general bilateral filter improves denoising. We use the Berkeley segmentation dataset (BSDS500)~\cite{arbelaezi2011bsds500} as a test bed. The color images in the dataset are converted to gray-scale and corrupted with Gaussian noise with a standard deviation of $\frac {25} {255}$.
We compare the performance of four different filter models on the denoising task. The first baseline model (``Spatial'' in Table~\ref{tab:denoising}, $25$ weights) uses a single spatial filter with a kernel size of $5$ and predicts the scalar gray-scale value at the center pixel. The next model (``Gauss Bilateral'') applies a bilateral \emph{Gaussian} filter to the noisy input, using position and intensity features $\mathbf{f}=(x,y,v)^\top$. The third setup (``Learned Bilateral'', $65$ weights) takes a Gauss kernel as initialization and fits all filter weights on the ``train'' image set to minimize the mean squared error with respect to the clean images. Finally, we run a combination of spatial and permutohedral convolutions on spatial and bilateral features (``Spatial + Bilateral (Learned)'') to check whether the two convolutions are complementary.
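For reference, the ``Gauss Bilateral'' baseline can be sketched as a brute-force numpy implementation on a grayscale image; the experiments use the permutohedral lattice implementation instead, and the bandwidths below are assumed defaults, not the tuned values:

```python
import numpy as np

def gauss_bilateral(img, sigma_s=2.0, sigma_v=0.1, radius=2):
    """Brute-force Gaussian bilateral filter for a grayscale image in [0, 1].
    Spatial (sigma_s) and intensity (sigma_v) bandwidths are illustrative."""
    h, w = img.shape
    num = np.zeros_like(img)
    den = np.zeros_like(img)
    pad = np.pad(img, radius, mode='edge')
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            nb = pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            # weight = spatial Gaussian * intensity (range) Gaussian
            wgt = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2)
                         - (nb - img) ** 2 / (2 * sigma_v ** 2))
            num += wgt * nb
            den += wgt
    return num / den

# Denoising a noisy step edge: the filter averages within flat regions but
# (mostly) not across the edge.
rng = np.random.default_rng(0)
clean = np.zeros((16, 16)); clean[:, 8:] = 1.0
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
denoised = gauss_bilateral(noisy)
```

The learned variant replaces the fixed Gaussian weights by free parameters fitted to minimize the reconstruction error.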
\label{sec:exp:denoising}
\begin{table}[!h]
\begin{center}
\footnotesize
\begin{tabular}[t]{lr}
\toprule
Method & PSNR \\
\midrule
Noisy Input & $20.17$ \\
Spatial & $26.27$ \\
Gauss Bilateral & $26.51$ \\
Learned Bilateral & $26.58$ \\
Spatial + Bilateral (Learned) & $26.65$ \\
\bottomrule
\end{tabular}
\end{center}
\vspace{-0.5em}
\caption{PSNR results of a denoising task using the BSDS500 dataset~\cite{arbelaezi2011bsds500}}
\vspace{-0.5em}
\label{tab:denoising}
\end{table}
The PSNR scores evaluated on the full images of the ``test'' image set are shown in Table~\ref{tab:denoising}. We find that an untrained bilateral filter already performs better than a trained spatial convolution ($26.27$ to $26.51$). A learned convolution further improves the performance slightly. We chose this simple one-kernel setup to validate the advantage of the generalized bilateral filter. A competitive denoising system would employ RGB color information and would also need a properly adjusted network size. Multi-layer perceptrons have obtained state-of-the-art denoising results~\cite{burger12cvpr}, and the permutohedral lattice layer can readily be used in such an architecture; this is intended future work.
\subsection{3D Mesh Denoising}\label{sec:mesh_denoising}
Permutohedral convolutions can naturally be extended to higher ($>2$) dimensional data. To highlight this, we use the proposed convolution for the task of denoising 3D meshes.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{figures/supplementary/isomap_features.jpg}
\mycaption{4D isomap features for 3D human bodies}
{Visualization of 4D isomap features for a sample 3D mesh. Isomap feature values are overlaid onto mesh vertices.}
\label{fig:isomap}
\end{figure}
We sample 3D human body meshes using a generative 3D body model from~\cite{SMPL:2015}. To the clean meshes, we add Gaussian random noise displacements along the surface normal at each vertex location.
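This corruption step can be sketched as follows; the noise magnitude and the stand-in geometry are assumptions for illustration:

```python
import numpy as np

def add_normal_noise(verts, normals, sigma, rng):
    """Displace each vertex along its (unit-normalized) surface normal by a
    Gaussian random amount. `sigma` is an assumed parameter; the magnitude
    used to build the actual dataset is not restated here."""
    unit = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    offsets = sigma * rng.standard_normal((len(verts), 1))
    return verts + offsets * unit

rng = np.random.default_rng(0)
verts = rng.random((10, 3))              # stand-in vertex positions
normals = rng.standard_normal((10, 3))   # stand-in (unnormalized) normals
noisy_verts = add_normal_noise(verts, normals, sigma=0.01, rng=rng)
```

Each displacement is parallel to the vertex normal, so the noise perturbs the surface geometry without shuffling the mesh connectivity.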
Figure~\ref{fig:samplebody} shows some sample 3D meshes from~\cite{SMPL:2015} and the corresponding noisy meshes. The task is to take the noisy meshes as input and recover the original 3D body meshes. We create 1000 training, 200 validation and another 500 testing examples for the experiments.
\paragraph{Mesh Representation:}
The 3D human body meshes from~\cite{SMPL:2015} are represented by 3D vertex locations and the edge connections between the vertices. We found that this signal representation, using global 3D coordinates, is not suitable for denoising with bilateral filtering. Therefore, we first smooth the noisy mesh using mean smoothing applied to the face normals~\cite{yagou2002mesh} and represent the noisy mesh vertices as 3D vector displacements with respect to the corresponding smoothed mesh. The task thus becomes denoising the 3D vector displacements with respect to the smoothed mesh.
\paragraph{Isomap Features:}
To apply the permutohedral convolution, we need to define features at each input vertex. We use a 4-dimensional isomap embedding~\cite{tenenbaum2000global} of the given 3D mesh graph as features. The mesh is converted into a weighted edge graph, with edge weights set to the Euclidean distance between connected vertices and to infinity between non-connected vertices. A 4-dimensional isomap embedding is then computed for this weighted edge graph using a publicly available implementation~\cite{isomap_code}. Fig.~\ref{fig:isomap} shows the visualization of the isomap features on a sample 3D mesh.
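The feature computation can be sketched with scipy geodesics followed by classical MDS, standing in for the public isomap implementation cited above; function and argument names are illustrative:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

def isomap_features(verts, edges, dim=4):
    """Isomap embedding of a mesh graph: edge weights are Euclidean edge
    lengths, geodesic distances come from Dijkstra, followed by classical
    MDS on the squared geodesic distances. A sketch, not the cited code."""
    n = len(verts)
    i, j = edges[:, 0], edges[:, 1]
    w = np.linalg.norm(verts[i] - verts[j], axis=1)
    graph = csr_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])), shape=(n, n))
    geo = shortest_path(graph, method='D', directed=False)  # geodesics
    # Classical MDS: double-center the squared distances, then eigendecompose.
    c = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * c @ (geo ** 2) @ c
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:dim]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Toy mesh graph: five collinear vertices chained by unit-length edges.
verts = np.array([[i, 0.0, 0.0] for i in range(5)])
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 4]])
feat = isomap_features(verts, edges, dim=4)
```

On this toy chain the embedding reproduces the graph geodesics exactly; on a body mesh it yields pose-robust intrinsic coordinates for the lattice.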
\setlength{\tabcolsep}{2pt}
\newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}b{#1}}
\newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}b{#1}}
\newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}b{#1}}
\begin{table}[t]
\scriptsize
\centering
\begin{tabular}{C{2.0cm} C{1.5cm} C{1.5cm} C{1.5cm} C{1.5cm}}
\toprule
& \textbf{Noisy Mesh} & \textbf{Normal Smoothing} & \textbf{Gauss Bilateral} & \textbf{Learned Bilateral} \\ [0.1cm]
\midrule
\textbf{Vertex Distance (RMSE)} & 5.774 & 3.183 & 2.872 & \textbf{2.825} \\
\textbf{Normal Angle Error} & 19.680 & 19.707 & 19.357 & \textbf{19.207} \\
\bottomrule \\
\end{tabular}
\vspace{-0.2cm}
\mycaption{Body Denoising}
{Vertex distance RMSE values and normal angle error (in degrees) corresponding to different denoising strategies averaged over 500 test meshes.}
\label{tbl:bodydenoise}
\end{table}
\paragraph{Experimental Results:}
Mesh denoising with a bilateral filter proceeds by splatting the input 3D mesh vectors (displacements with respect to the smoothed mesh) into the 4D isomap feature space, filtering the signal in this 4D space, and then slicing back into the original 3D input space. Table~\ref{tbl:bodydenoise} shows quantitative results (RMSE) for the different denoising strategies. Normal smoothing~\cite{yagou2002mesh} already reduces the RMSE. The Gauss bilateral filter results in a significant improvement over normal smoothing, and learning the filter weights improves the result further. A visual result is shown in Figure~\ref{fig:bodyresult}.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{figures/supplementary/body_sample_result.jpg}
\mycaption{Sample Denoising Result}
{Ground truth mesh (left), the corresponding noisy mesh (middle) and the denoised result (right) using the learned bilateral filter.}
\label{fig:bodyresult}
\end{figure}
\section{Additional results}
\label{sec:addresults}
This section contains more qualitative results for the experiments of the main paper.
\begin{figure*}[th!]
\centering
\includegraphics[width=2\columnwidth,trim={5cm 2.5cm 5cm 4.5cm},clip]{figures/supplementary/lattice_viz.pdf}
\mycaption{Visualization of the Permutohedral Lattice}
{Sample lattice visualizations for different feature spaces. All pixels falling in the same simplex cell are shown with the same color. $(x,y)$ features correspond to image pixel positions, and $(r,g,b) \in [0,255]$ correspond to the red, green and blue color values.}
\label{fig:latticeviz}
\end{figure*}
\subsection{Lattice Visualization}
Figure~\ref{fig:latticeviz} shows sample lattice visualizations for different feature spaces.
\begin{table}[t]
\scriptsize
\centering
\begin{tabular}{c c c c c c}
\toprule
& & \multicolumn{4}{c}{Test Factor} \\ [0.1cm]
& & \textbf{2$\times$} & \textbf{4$\times$} & \textbf{8$\times$} & \textbf{16$\times$} \\ [0.15cm]
\parbox[t]{3mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{Train Factor}}}
& \textbf{2$\times$} & \textbf{38.45} & 36.12 & 34.06 & 32.43 \\ [0.1cm]
& \textbf{4$\times$} & 38.40 & \textbf{36.16} & \textbf{34.08} & 32.47 \\ [0.1cm]
& \textbf{8$\times$} & 38.40 & 36.15 & \textbf{34.08} & 32.47 \\ [0.1cm]
& \textbf{16$\times$} & 38.26 & 36.13 & 34.06 & \textbf{32.49} \\
\bottomrule \\
\end{tabular}
\vspace{-0.2cm}
\mycaption{Color Upsampling with different train and test upsampling factors}
{PSNR values corresponding to different upsampling factors used at train and test times on the 2 megapixel image dataset, using our learned bilateral filters.}
\label{tbl:crossupsample}
\vspace{-0.4cm}
\end{table}
\subsection{Color Upsampling}\label{sec:color_upsampling}
In addition to the experiments discussed in the main paper, we performed a cross-factor analysis of training and testing at different upsampling factors. Table~\ref{tbl:crossupsample} shows the PSNR results for this analysis. Although, in terms of PSNR, it is optimal to train and test at the same upsampling factor, the differences are small when the training and testing upsampling factors differ. Some images of the upsampling for the Pascal VOC12 dataset are shown in Fig.~\ref{fig:Colour_upsample_visuals}. Especially the low-level image details are better preserved with a learned bilateral filter compared to the Gaussian case.
\begin{figure*}[t!]
\centering \subfigure{% \raisebox{2.0em}{ \includegraphics[width=.12\columnwidth]{figures/supplementary/2007_004969.jpg} } } \subfigure{% \includegraphics[width=.36\columnwidth]{figures/supplementary/2007_004969_gray.pdf} } \subfigure{% \includegraphics[width=.36\columnwidth]{figures/supplementary/2007_004969_gt.pdf} } \subfigure{% \includegraphics[width=.36\columnwidth]{figures/supplementary/2007_004969_bicubic.pdf} } \subfigure{% \includegraphics[width=.36\columnwidth]{figures/supplementary/2007_004969_gauss.pdf} } \subfigure{% \includegraphics[width=.36\columnwidth]{figures/supplementary/2007_004969_learnt.pdf} }\\ \subfigure{% \raisebox{2.0em}{ \includegraphics[width=.12\columnwidth]{figures/supplementary/2007_003106.jpg} } } \subfigure{% \includegraphics[width=.36\columnwidth]{figures/supplementary/2007_003106_gray.pdf} } \subfigure{% \includegraphics[width=.36\columnwidth]{figures/supplementary/2007_003106_gt.pdf} } \subfigure{% \includegraphics[width=.36\columnwidth]{figures/supplementary/2007_003106_bicubic.pdf} } \subfigure{% \includegraphics[width=.36\columnwidth]{figures/supplementary/2007_003106_gauss.pdf} } \subfigure{% \includegraphics[width=.36\columnwidth]{figures/supplementary/2007_003106_learnt.pdf} }\\ \setcounter{subfigure}{0} \subfigure[Input]{% \raisebox{2.0em}{ \includegraphics[width=.12\columnwidth]{figures/supplementary/2007_006837.jpg} } } \subfigure[Gray Guidance]{% \includegraphics[width=.36\columnwidth]{figures/supplementary/2007_006837_gray.pdf} } \subfigure[Ground Truth]{% \includegraphics[width=.36\columnwidth]{figures/supplementary/2007_006837_gt.pdf} } \subfigure[Bicubic Interpolation]{% \includegraphics[width=.36\columnwidth]{figures/supplementary/2007_006837_bicubic.pdf} } \subfigure[Gauss Bilateral]{% \includegraphics[width=.36\columnwidth]{figures/supplementary/2007_006837_gauss.pdf} } \subfigure[Learned Bilateral]{% \includegraphics[width=.36\columnwidth]{figures/supplementary/2007_006837_learnt.pdf} } \mycaption{Color 
Upsampling}{Color $8\times$ upsampling results using different methods (best viewed on screen).}
\label{fig:Colour_upsample_visuals}
\end{figure*}
\subsection{Depth Upsampling}
Figure~\ref{fig:depth_upsample_visuals} presents more qualitative results comparing bicubic interpolation, Gauss bilateral and learned bilateral upsampling on NYU depth dataset images~\cite{silberman2012indoor}.
\subsection{Character Recognition}\label{sec:app_character}
Figure~\ref{fig:nnrecognition} shows the schematics of the different layers of the network architectures for LeNet-7~\cite{lecun1998mnist} and DeepCNet(5, 50)~\cite{ciresan2012multi,graham2014spatially}. For the BNN variants, the first-layer filters are replaced with learned bilateral filters and are learned end-to-end.
\subsection{Semantic Segmentation}\label{sec:app_semantic_segmentation}
More visual results for semantic segmentation are shown in Figure~\ref{fig:semantic_visuals}. These include the underlying DeepLab CNN~\cite{chen2014semantic} result (DeepLab), the 2-step mean-field result with Gaussian edge potentials (+2stepMF-GaussCRF) and the corresponding results with learned edge potentials (+2stepMF-LearnedCRF). In general, we observe that mean-field inference in the learned CRF leads to slightly dilated classification regions compared to the Gaussian CRF, thereby filling in false-negative pixels and also correcting some mis-classified regions.
\begin{figure*}[t!]
\centering \subfigure{% \raisebox{2.0em}{ \includegraphics[width=.12\columnwidth]{figures/supplementary/2bicubic} } } \subfigure{% \includegraphics[width=.36\columnwidth]{figures/supplementary/2given_image} } \subfigure{% \includegraphics[width=.36\columnwidth]{figures/supplementary/2ground_truth} } \subfigure{% \includegraphics[width=.36\columnwidth]{figures/supplementary/2bicubic} } \subfigure{% \includegraphics[width=.36\columnwidth]{figures/supplementary/2gauss} } \subfigure{% \includegraphics[width=.36\columnwidth]{figures/supplementary/2learnt} }\\ \subfigure{% \raisebox{2.0em}{ \includegraphics[width=.12\columnwidth]{figures/supplementary/32bicubic} } } \subfigure{% \includegraphics[width=.36\columnwidth]{figures/supplementary/32given_image} } \subfigure{% \includegraphics[width=.36\columnwidth]{figures/supplementary/32ground_truth} } \subfigure{% \includegraphics[width=.36\columnwidth]{figures/supplementary/32bicubic} } \subfigure{% \includegraphics[width=.36\columnwidth]{figures/supplementary/32gauss} } \subfigure{% \includegraphics[width=.36\columnwidth]{figures/supplementary/32learnt} }\\ \setcounter{subfigure}{0} \subfigure[Input]{% \raisebox{2.0em}{ \includegraphics[width=.12\columnwidth]{figures/supplementary/41bicubic} } } \subfigure[Guidance]{% \includegraphics[width=.36\columnwidth]{figures/supplementary/41given_image} } \subfigure[Ground Truth]{% \includegraphics[width=.36\columnwidth]{figures/supplementary/41ground_truth} } \subfigure[Bicubic Interpolation]{% \includegraphics[width=.36\columnwidth]{figures/supplementary/41bicubic} } \subfigure[Gauss Bilateral]{% \includegraphics[width=.36\columnwidth]{figures/supplementary/41gauss} } \subfigure[Learned Bilateral]{% \includegraphics[width=.36\columnwidth]{figures/supplementary/41learnt} } \mycaption{Depth Upsampling}{Depth $8\times$ upsampling results using different upsampling strategies.} \label{fig:depth_upsample_visuals} \end{figure*} \subsection{Material 
Segmentation}\label{sec:app_material_segmentation}
In Fig.~\ref{fig:material_visuals}, we present visual results comparing 2-step mean-field inference with Gaussian and learned pairwise CRF potentials. In general, we observe that pixels belonging to the dominant classes in the training data are classified more accurately with the learned CRF. This leads to a significant improvement in overall pixel accuracy. It also results in a slight decrease in accuracy for pixels of the less frequent classes, thereby slightly reducing the average class accuracy with learning. We attribute this to the type of annotation available for this dataset, which covers not the entire image but only some segments of it, so there are too few images of the infrequent classes to counteract this behaviour during training.
\subsection{Experiment Protocols}
\label{sec:protocols}
Table~\ref{tbl:parameters} shows the experiment protocols of the different experiments.
\begin{figure*}[t!]
\centering
\subfigure[LeNet-7]{
\includegraphics[width=1.5\columnwidth]{figures/supplementary/lenet_cnn_network}
}\\
\subfigure[DeepCNet]{
\includegraphics[width=2\columnwidth]{figures/supplementary/deepcnet_cnn_network}
}
\mycaption{CNNs for Character Recognition}
{Schematic of (top) LeNet-7~\cite{lecun1998mnist} and (bottom) DeepCNet(5,50)~\cite{ciresan2012multi,graham2014spatially} architectures used in the Assamese character recognition experiments.}
\label{fig:nnrecognition}
\end{figure*}
\definecolor{voc_1}{RGB}{0, 0, 0}
\definecolor{voc_2}{RGB}{128, 0, 0}
\definecolor{voc_3}{RGB}{0, 128, 0}
\definecolor{voc_4}{RGB}{128, 128, 0}
\definecolor{voc_5}{RGB}{0, 0, 128}
\definecolor{voc_6}{RGB}{128, 0, 128}
\definecolor{voc_7}{RGB}{0, 128, 128}
\definecolor{voc_8}{RGB}{128, 128, 128}
\definecolor{voc_9}{RGB}{64, 0, 0}
\definecolor{voc_10}{RGB}{192, 0, 0}
\definecolor{voc_11}{RGB}{64, 128, 0}
\definecolor{voc_12}{RGB}{192, 128, 0}
\definecolor{voc_13}{RGB}{64, 0, 128}
\definecolor{voc_14}{RGB}{192, 0, 128}
\definecolor{voc_15}{RGB}{64, 128, 128} \definecolor{voc_16}{RGB}{192, 128, 128} \definecolor{voc_17}{RGB}{0, 64, 0} \definecolor{voc_18}{RGB}{128, 64, 0} \definecolor{voc_19}{RGB}{0, 192, 0} \definecolor{voc_20}{RGB}{128, 192, 0} \definecolor{voc_21}{RGB}{0, 64, 128} \definecolor{voc_22}{RGB}{128, 64, 128} \begin{figure*}[t] \centering \fcolorbox{white}{voc_1}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Background~~ \fcolorbox{white}{voc_2}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Aeroplane~~ \fcolorbox{white}{voc_3}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Bicycle~~ \fcolorbox{white}{voc_4}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Bird~~ \fcolorbox{white}{voc_5}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Boat~~ \fcolorbox{white}{voc_6}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Bottle~~ \fcolorbox{white}{voc_7}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Bus~~ \fcolorbox{white}{voc_8}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Car~~ \\ \fcolorbox{white}{voc_9}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Cat~~ \fcolorbox{white}{voc_10}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Chair~~ \fcolorbox{white}{voc_11}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Cow~~ \fcolorbox{white}{voc_12}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Dining Table~~ \fcolorbox{white}{voc_13}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Dog~~ \fcolorbox{white}{voc_14}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Horse~~ \fcolorbox{white}{voc_15}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Motorbike~~ \fcolorbox{white}{voc_16}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Person~~ \\ \fcolorbox{white}{voc_17}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Potted Plant~~ \fcolorbox{white}{voc_18}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Sheep~~ \fcolorbox{white}{voc_19}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Sofa~~ \fcolorbox{white}{voc_20}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Train~~ \fcolorbox{white}{voc_21}{\rule{0pt}{6pt}\rule{6pt}{0pt}} TV monitor~~ \\ \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2007_001423_given.jpg} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2007_001423_gt.png} } \subfigure{% 
\includegraphics[width=.4\columnwidth]{figures/supplementary/2007_001423_cnn.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2007_001423_gauss.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2007_001423_learnt.png} }\\ \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2007_001430_given.jpg} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2007_001430_gt.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2007_001430_cnn.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2007_001430_gauss.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2007_001430_learnt.png} }\\ \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2007_007996_given.jpg} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2007_007996_gt.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2007_007996_cnn.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2007_007996_gauss.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2007_007996_learnt.png} }\\ \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2010_002682_given.jpg} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2010_002682_gt.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2010_002682_cnn.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2010_002682_gauss.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2010_002682_learnt.png} }\\ \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2010_004789_given.jpg} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2010_004789_gt.png} } \subfigure{% 
\includegraphics[width=.4\columnwidth]{figures/supplementary/2010_004789_cnn.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2010_004789_gauss.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2010_004789_learnt.png} }\\ \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2007_001311_given.jpg} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2007_001311_gt.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2007_001311_cnn.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2007_001311_gauss.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2007_001311_learnt.png} }\\ \setcounter{subfigure}{0} \subfigure[Input]{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2010_003531_given.jpg} } \subfigure[Ground Truth]{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2010_003531_gt.png} } \subfigure[DeepLab]{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2010_003531_cnn.png} } \subfigure[+2stepMF-GaussCRF]{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2010_003531_gauss.png} } \subfigure[+2stepMF-LearnedCRF]{% \includegraphics[width=.4\columnwidth]{figures/supplementary/2010_003531_learnt.png} } \mycaption{Semantic Segmentation}{Example results of semantic segmentation. 
(c) depicts the unary results before application of MF, (d) after two steps of MF with Gaussian edge CRF potentials, (e) after two steps of MF with learned edge CRF potentials.} \label{fig:semantic_visuals} \end{figure*} \definecolor{minc_1}{HTML}{771111} \definecolor{minc_2}{HTML}{CAC690} \definecolor{minc_3}{HTML}{EEEEEE} \definecolor{minc_4}{HTML}{7C8FA6} \definecolor{minc_5}{HTML}{597D31} \definecolor{minc_6}{HTML}{104410} \definecolor{minc_7}{HTML}{BB819C} \definecolor{minc_8}{HTML}{D0CE48} \definecolor{minc_9}{HTML}{622745} \definecolor{minc_10}{HTML}{666666} \definecolor{minc_11}{HTML}{D54A31} \definecolor{minc_12}{HTML}{101044} \definecolor{minc_13}{HTML}{444126} \definecolor{minc_14}{HTML}{75D646} \definecolor{minc_15}{HTML}{DD4348} \definecolor{minc_16}{HTML}{5C8577} \definecolor{minc_17}{HTML}{C78472} \definecolor{minc_18}{HTML}{75D6D0} \definecolor{minc_19}{HTML}{5B4586} \definecolor{minc_20}{HTML}{C04393} \definecolor{minc_21}{HTML}{D69948} \definecolor{minc_22}{HTML}{7370D8} \definecolor{minc_23}{HTML}{7A3622} \definecolor{minc_24}{HTML}{000000} \begin{figure*}[t] \centering \fcolorbox{white}{minc_1}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Brick~~ \fcolorbox{white}{minc_2}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Carpet~~ \fcolorbox{white}{minc_3}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Ceramic~~ \fcolorbox{white}{minc_4}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Fabric~~ \fcolorbox{white}{minc_5}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Foliage~~ \fcolorbox{white}{minc_6}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Food~~ \fcolorbox{white}{minc_7}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Glass~~ \fcolorbox{white}{minc_8}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Hair~~ \\ \fcolorbox{white}{minc_9}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Leather~~ \fcolorbox{white}{minc_10}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Metal~~ \fcolorbox{white}{minc_11}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Mirror~~ \fcolorbox{white}{minc_12}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Other~~ \fcolorbox{white}{minc_13}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Painted~~ 
\fcolorbox{white}{minc_14}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Paper~~ \fcolorbox{white}{minc_15}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Plastic~~ \fcolorbox{white}{minc_16}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Polished Stone~~ \\ \fcolorbox{white}{minc_17}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Skin~~ \fcolorbox{white}{minc_18}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Sky~~ \fcolorbox{white}{minc_19}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Stone~~ \fcolorbox{white}{minc_20}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Tile~~ \fcolorbox{white}{minc_21}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Wallpaper~~ \fcolorbox{white}{minc_22}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Water~~ \fcolorbox{white}{minc_23}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Wood~~ \\ \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000010868_given.jpg} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000010868_gt.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000010868_cnn.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000010868_gauss.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000010868_learnt.png} }\\[-2ex] \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000006011_given.jpg} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000006011_gt.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000006011_cnn.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000006011_gauss.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000006011_learnt.png} }\\[-2ex] \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000008553_given.jpg} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000008553_gt.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000008553_cnn.png} } \subfigure{% 
\includegraphics[width=.4\columnwidth]{figures/supplementary/000008553_gauss.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000008553_learnt.png} }\\[-2ex] \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000009188_given.jpg} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000009188_gt.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000009188_cnn.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000009188_gauss.png} } \subfigure{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000009188_learnt.png} }\\[-2ex] \setcounter{subfigure}{0} \subfigure[Input]{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000023570_given.jpg} } \subfigure[Ground Truth]{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000023570_gt.png} } \subfigure[DeepLab]{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000023570_cnn.png} } \subfigure[+2stepMF-GaussCRF]{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000023570_gauss.png} } \subfigure[+2stepMF-LearnedCRF]{% \includegraphics[width=.4\columnwidth]{figures/supplementary/000023570_learnt.png} } \mycaption{Material Segmentation}{Example results of material segmentation. 
(c) depicts the unary results before application of MF, (d) after two steps of MF with Gaussian edge CRF potentials, (e) after two steps of MF with learned edge CRF potentials.} \label{fig:material_visuals} \end{figure*} \begin{table*}[h] \scriptsize \centering \begin{tabular}{L{2.5cm} L{2.8cm} C{1.6cm} C{0.8cm} C{0.6cm} C{0.7cm} C{0.7cm} C{0.7cm} C{1.6cm} C{0.7cm} C{0.7cm} C{0.7cm}} \toprule & & & & & \multicolumn{3}{c}{\textbf{Data Statistics}} & \multicolumn{4}{c}{\textbf{Training Protocol}} \\ \textbf{Experiment} & \textbf{Feature Types} & \textbf{Feature Scales} & \textbf{Filter Size} & \textbf{Filter Nbr.} & \textbf{Train} & \textbf{Val.} & \textbf{Test} & \textbf{Loss Type} & \textbf{LR} & \textbf{Batch} & \textbf{Epochs} \\ \midrule \multicolumn{2}{c}{\textbf{Single Bilateral Filter Applications}} & & & & & & & & & \\ \textbf{2$\times$ Color Upsampling} & Position$_{1}$, Intensity (3D) & 0.13, 0.17 & 65 & 2 & 10581 & 1449 & 1456 & MSE & 1e-06 & 200 & 94.5\\ \textbf{4$\times$ Color Upsampling} & Position$_{1}$, Intensity (3D) & 0.06, 0.17 & 65 & 2 & 10581 & 1449 & 1456 & MSE & 1e-06 & 200 & 94.5\\ \textbf{8$\times$ Color Upsampling} & Position$_{1}$, Intensity (3D) & 0.03, 0.17 & 65 & 2 & 10581 & 1449 & 1456 & MSE & 1e-06 & 200 & 94.5\\ \textbf{16$\times$ Color Upsampling} & Position$_{1}$, Intensity (3D) & 0.02, 0.17 & 65 & 2 & 10581 & 1449 & 1456 & MSE & 1e-06 & 200 & 94.5\\ \textbf{Depth Upsampling} & Position$_{1}$, Color (5D) & 0.05, 0.02 & 665 & 2 & 795 & 100 & 654 & MSE & 1e-07 & 50 & 251.6\\ \textbf{Mesh Denoising} & Isomap (4D) & 46.00 & 63 & 2 & 1000 & 200 & 500 & MSE & 100 & 10 & 100.0 \\ \midrule \multicolumn{2}{c}{\textbf{DenseCRF Applications}} & & & & & & & & &\\ \multicolumn{2}{l}{\textbf{Semantic Segmentation}} & & & & & & & & &\\ \textbf{- 1step MF} & Position$_{1}$, Color (5D); Position$_{1}$ (2D) & 0.01, 0.34; 0.34 & 665; 19 & 2; 2 & 10581 & 1449 & 1456 & Logistic & 0.1 & 5 & 1.4 \\ \textbf{- 2step MF} & Position$_{1}$, Color (5D); 
Position$_{1}$ (2D) & 0.01, 0.34; 0.34 & 665; 19 & 2; 2 & 10581 & 1449 & 1456 & Logistic & 0.1 & 5 & 1.4 \\ \textbf{- \textit{loose} 2step MF} & Position$_{1}$, Color (5D); Position$_{1}$ (2D) & 0.01, 0.34; 0.34 & 665; 19 & 2; 2 &10581 & 1449 & 1456 & Logistic & 0.1 & 5 & +1.9 \\ \\ \multicolumn{2}{l}{\textbf{Material Segmentation}} & & & & & & & & &\\ \textbf{- 1step MF} & Position$_{2}$, Lab-Color (5D) & 5.00, 0.05, 0.30 & 665 & 2 & 928 & 150 & 1798 & Weighted Logistic & 1e-04 & 24 & 2.6 \\ \textbf{- 2step MF} & Position$_{2}$, Lab-Color (5D) & 5.00, 0.05, 0.30 & 665 & 2 & 928 & 150 & 1798 & Weighted Logistic & 1e-04 & 12 & +0.7 \\ \textbf{- \textit{loose} 2step MF} & Position$_{2}$, Lab-Color (5D) & 5.00, 0.05, 0.30 & 665 & 2 & 928 & 150 & 1798 & Weighted Logistic & 1e-04 & 12 & +0.2\\ \midrule \multicolumn{2}{c}{\textbf{Neural Network Applications}} & & & & & & & & &\\ \textbf{Tiles: CNN-9$\times$9} & - & - & 81 & 4 & 10000 & 1000 & 1000 & Logistic & 0.01 & 100 & 500.0 \\ \textbf{Tiles: CNN-13$\times$13} & - & - & 169 & 6 & 10000 & 1000 & 1000 & Logistic & 0.01 & 100 & 500.0 \\ \textbf{Tiles: CNN-17$\times$17} & - & - & 289 & 8 & 10000 & 1000 & 1000 & Logistic & 0.01 & 100 & 500.0 \\ \textbf{Tiles: CNN-21$\times$21} & - & - & 441 & 10 & 10000 & 1000 & 1000 & Logistic & 0.01 & 100 & 500.0 \\ \textbf{Tiles: BNN} & Position$_{1}$, Color (5D) & 0.05, 0.04 & 63 & 1 & 10000 & 1000 & 1000 & Logistic & 0.01 & 100 & 30.0 \\ \textbf{LeNet} & - & - & 25 & 2 & 5490 & 1098 & 1647 & Logistic & 0.1 & 100 & 182.2 \\ \textbf{Crop-LeNet} & - & - & 25 & 2 & 5490 & 1098 & 1647 & Logistic & 0.1 & 100 & 182.2 \\ \textbf{BNN-LeNet} & Position$_{2}$ (2D) & 20.00 & 7 & 1 & 5490 & 1098 & 1647 & Logistic & 0.1 & 100 & 182.2 \\ \textbf{DeepCNet} & - & - & 9 & 1 & 5490 & 1098 & 1647 & Logistic & 0.1 & 100 & 182.2 \\ \textbf{Crop-DeepCNet} & - & - & 9 & 1 & 5490 & 1098 & 1647 & Logistic & 0.1 & 100 & 182.2 \\ \textbf{BNN-DeepCNet} & Position$_{2}$ (2D) & 40.00 & 7 & 1 & 5490 & 1098 & 1647 & 
Logistic & 0.1 & 100 & 182.2 \\ \bottomrule \\
\end{tabular}
\mycaption{Experiment Protocols}{Experiment protocols for the different experiments presented in this work. \textbf{Feature Types}: Feature spaces used for the bilateral convolutions. Position$_1$ corresponds to un-normalized pixel positions whereas Position$_2$ corresponds to pixel positions normalized to $[0,1]$ with respect to the given image. \textbf{Feature Scales}: Cross-validated scales for the features used. \textbf{Filter Size}: Number of elements in the filter that is being learned. \textbf{Filter Nbr.}: Half-width of the filter. \textbf{Train}, \textbf{Val.} and \textbf{Test} correspond to the number of train, validation and test images used in the experiment. \textbf{Loss Type}: Type of loss used for back-propagation. ``MSE'' corresponds to the Euclidean mean squared error loss and ``Logistic'' corresponds to the multinomial logistic loss. ``Weighted Logistic'' is the class-weighted multinomial logistic loss; we weighted the loss with the inverse class probability for the material segmentation task due to the small amount of training data and its class imbalance. \textbf{LR}: Fixed learning rate used in stochastic gradient descent. \textbf{Batch}: Number of images used in one parameter update step. \textbf{Epochs}: Number of training epochs. In all the experiments, we used a fixed momentum of 0.9 and a weight decay of 0.0005 for stochastic gradient descent. ``Color Upsampling'' experiments in this table correspond to those performed on Pascal VOC12 dataset images. For all experiments using Pascal VOC12 images, we use the extended training segmentation dataset available from~\cite{hariharan2011moredata}, and the standard validation and test splits from the main dataset~\cite{voc2012segmentation}.}
\label{tbl:parameters}
\end{table*}
\chapter{Supplementary Material}
\label{appendix}
In this appendix, we present supplementary material for the techniques and experiments presented in the main text.
\section{Baseline Results and Analysis for Informed Sampler}
\label{appendix:chap3}
Here, we give an in-depth performance analysis of the various samplers and the effect of their hyperparameters. For each sampler, we choose the hyperparameters with the lowest PSRF value after $10k$ iterations. If the PSRF values are not significantly different among multiple candidate values, we choose the one with the highest acceptance rate.
\subsection{Experiment: Estimating Camera Extrinsics}
\label{appendix:chap3:room}
\subsubsection{Parameter Selection}
\paragraph{Metropolis Hastings (MH\xspace)}
Figure~\ref{fig:exp1_MH} shows the median acceptance rates and PSRF values corresponding to various proposal standard deviations of plain MH\xspace~sampling. Mixing gets better and the acceptance rate gets worse as the standard deviation increases. A standard deviation of $0.3$ is selected for this sampler.
\paragraph{Metropolis Hastings Within Gibbs (MHWG\xspace)}
As mentioned in Section~\ref{sec:room}, the MHWG\xspace~sampler with one-dimensional updates did not converge for any value of the proposal standard deviation. The camera parameters in this problem are highly correlated and the posterior is multi-modal, both of which this sampler handles poorly.
\paragraph{Parallel Tempering (PT\xspace)}
For PT\xspace~sampling, we took the best performing MH\xspace~sampler and used different temperature chains to improve the mixing of the sampler. Figure~\ref{fig:exp1_PT} shows the results corresponding to different combinations of temperature levels. The sampler with temperature levels of $[1,3,27]$ performed best in terms of both mixing and acceptance rate.
\paragraph{Effect of Mixture Coefficient in Informed Sampling (INF-MH\xspace)}
Figure~\ref{fig:exp1_alpha} shows the effect of the mixture coefficient ($\alpha$) on the informed sampling INF-MH\xspace. Since there is no significant difference in PSRF values for $0 \le \alpha \le 0.7$, we chose $0.7$ due to its high acceptance rate.
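The PSRF used for this selection is the Gelman--Rubin potential scale reduction factor, computed over several independent chains; values close to $1$ indicate good mixing. A minimal sketch of the computation (our own code, run on synthetic chains):

```python
import numpy as np

def psrf(chains):
    """Gelman-Rubin potential scale reduction factor.

    chains: array of shape (m, n) -- m independent chains of n samples
    each, for a single scalar parameter.
    """
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    grand_mean = chain_means.mean()
    # Between-chain variance B and mean within-chain variance W.
    B = n / (m - 1) * np.sum((chain_means - grand_mean) ** 2)
    W = chains.var(axis=1, ddof=1).mean()
    # Pooled posterior variance estimate and PSRF.
    var_hat = (n - 1) / n * W + B / n
    return np.sqrt(var_hat / W)

rng = np.random.default_rng(0)
# Well-mixed chains: all target the same distribution, PSRF close to 1.
good = rng.normal(0.0, 1.0, size=(4, 5000))
# Poorly mixed chains: offset means inflate the between-chain variance.
bad = good + np.arange(4)[:, None]
print(psrf(good))  # close to 1.0
print(psrf(bad))   # substantially larger than 1.0
```

A hyperparameter search as described above would run the sampler for each candidate value, compute `psrf` per parameter, and keep the candidate with the lowest value, breaking near-ties by acceptance rate.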
\begin{figure}[h]
\centering
\subfigure[MH]{%
\includegraphics[width=.48\textwidth]{figures/supplementary/camPose_MH.pdf}
\label{fig:exp1_MH}
}
\subfigure[PT]{%
\includegraphics[width=.48\textwidth]{figures/supplementary/camPose_PT.pdf}
\label{fig:exp1_PT}
} \\
\subfigure[INF-MH]{%
\includegraphics[width=.48\textwidth]{figures/supplementary/camPose_alpha.pdf}
\label{fig:exp1_alpha}
}
\mycaption{Results of the `Estimating Camera Extrinsics' experiment}{PSRFs and acceptance rates corresponding to (a) various standard deviations of MH\xspace, (b) various temperature level combinations of PT\xspace sampling and (c) various mixture coefficients of INF-MH\xspace sampling.}
\end{figure}
\begin{figure}[!t]
\centering
\subfigure[MH\xspace]{%
\includegraphics[width=.48\textwidth]{figures/supplementary/occlusionExp_MH.pdf}
\label{fig:exp2_MH}
}
\subfigure[BMHWG\xspace]{%
\includegraphics[width=.48\textwidth]{figures/supplementary/occlusionExp_BMHWG.pdf}
\label{fig:exp2_BMHWG}
} \\
\subfigure[MHWG\xspace]{%
\includegraphics[width=.48\textwidth]{figures/supplementary/occlusionExp_MHWG.pdf}
\label{fig:exp2_MHWG}
}
\subfigure[PT\xspace]{%
\includegraphics[width=.48\textwidth]{figures/supplementary/occlusionExp_PT.pdf}
\label{fig:exp2_PT}
} \\
\subfigure[INF-BMHWG\xspace]{%
\includegraphics[width=.5\textwidth]{figures/supplementary/occlusionExp_alpha.pdf}
\label{fig:exp2_alpha}
}
\mycaption{Results of the `Occluding Tiles' experiment}{PSRFs and acceptance rates corresponding to various standard deviations of (a) MH\xspace, (b) BMHWG\xspace, (c) MHWG\xspace; (d) various temperature level combinations of PT\xspace~sampling; and (e) various mixture coefficients of our informed INF-BMHWG\xspace sampling.}
\end{figure}
\subsection{Experiment: Occluding Tiles}
\label{appendix:chap3:tiles}
\subsubsection{Parameter Selection}
\paragraph{Metropolis Hastings (MH\xspace)}
Figure~\ref{fig:exp2_MH} shows the results of MH\xspace~sampling.
Results show poor convergence for all proposal standard deviations and a rapid decrease of the AR with increasing standard deviation. This is due to the high-dimensional nature of the problem. We selected a standard deviation of $1.1$.
\paragraph{Blocked Metropolis Hastings Within Gibbs (BMHWG\xspace)}
The results of BMHWG\xspace are shown in Figure~\ref{fig:exp2_BMHWG}. In this sampler we update only one block of tile variables (of dimension four) in each sampling step. Results show much better performance compared to plain MH\xspace. The optimal proposal standard deviation for this sampler is $0.7$.
\paragraph{Metropolis Hastings Within Gibbs (MHWG\xspace)}
Figure~\ref{fig:exp2_MHWG} shows the result of MHWG\xspace sampling. This sampler is better than BMHWG\xspace and converges much more quickly. Here a standard deviation of $0.9$ is found to be best.
\paragraph{Parallel Tempering (PT\xspace)}
Figure~\ref{fig:exp2_PT} shows the results of PT\xspace sampling with various temperature combinations. Results show no improvement in AR over plain MH\xspace sampling, and again the temperature levels $[1,3,27]$ are found to be optimal.
\paragraph{Effect of Mixture Coefficient in Informed Sampling (INF-BMHWG\xspace)}
Figure~\ref{fig:exp2_alpha} shows the effect of the mixture coefficient ($\alpha$) on the blocked informed sampling INF-BMHWG\xspace. Since there is no significant difference in PSRF values for $0 \le \alpha \le 0.8$, we chose $0.8$ due to its high acceptance rate.
\subsection{Experiment: Estimating Body Shape}
\label{appendix:chap3:body}
\subsubsection{Parameter Selection}
\paragraph{Metropolis Hastings (MH\xspace)}
Figure~\ref{fig:exp3_MH} shows the result of MH\xspace~sampling with various proposal standard deviations. The value of $0.1$ is found to be best.
\paragraph{Metropolis Hastings Within Gibbs (MHWG\xspace)}
For MHWG\xspace sampling we select a proposal standard deviation of $0.3$. Results are shown in Fig.~\ref{fig:exp3_MHWG}.
\paragraph{Parallel Tempering (PT\xspace)}
As before, the temperature levels $[1,3,27]$ were selected due to their slightly higher AR (results in Fig.~\ref{fig:exp3_PT}).
\paragraph{Effect of Mixture Coefficient in Informed Sampling (INF-MH\xspace)}
Figure~\ref{fig:exp3_alpha} shows the effect of $\alpha$ on PSRF and AR. Since there are no significant differences in PSRF values for $0 \le \alpha \le 0.8$, we choose $0.8$.
\begin{figure}[t]
\centering
\subfigure[MH\xspace]{%
\includegraphics[width=.48\textwidth]{figures/supplementary/bodyShape_MH.pdf}
\label{fig:exp3_MH}
}
\subfigure[MHWG\xspace]{%
\includegraphics[width=.48\textwidth]{figures/supplementary/bodyShape_MHWG.pdf}
\label{fig:exp3_MHWG}
} \\
\subfigure[PT\xspace]{%
\includegraphics[width=.48\textwidth]{figures/supplementary/bodyShape_PT.pdf}
\label{fig:exp3_PT}
}
\subfigure[INF-MH\xspace]{%
\includegraphics[width=.48\textwidth]{figures/supplementary/bodyShape_alpha.pdf}
\label{fig:exp3_alpha}
} \\
\mycaption{Results of the `Body Shape Estimation' experiment}{PSRFs and acceptance rates corresponding to various standard deviations of (a) MH\xspace and (b) MHWG\xspace; (c) various temperature level combinations of PT\xspace sampling; and (d) various mixture coefficients of the informed INF-MH\xspace sampling.}
\end{figure}
\subsection{Results Overview}
Figure~\ref{fig:exp_summary} shows the summary results of all three experimental studies related to the informed sampler.
\begin{figure*}[h!]
\centering
\subfigure[Results for: Estimating Camera Extrinsics]{%
\includegraphics[width=0.9\textwidth]{figures/supplementary/camPose_ALL.pdf}
\label{fig:exp1_all}
}
\subfigure[Results for: Occluding Tiles]{%
\includegraphics[width=0.9\textwidth]{figures/supplementary/occlusionExp_ALL.pdf}
\label{fig:exp2_all}
}
\subfigure[Results for: Estimating Body Shape]{%
\includegraphics[width=0.9\textwidth]{figures/supplementary/bodyShape_ALL.pdf}
\label{fig:exp3_all}
}
\mycaption{Summary of the statistics for the three experiments}{Shown are the acceptance rates (left), PSRFs (middle), and RMSE values (right) for several baseline methods and the informed samplers. All results are median results over multiple test examples.}
\label{fig:exp_summary}
\end{figure*}
\subsection{Additional Qualitative Results}
\subsubsection{Occluding Tiles}
Figure~\ref{fig:exp2_visual_more} shows more qualitative results of the occluding tiles experiment. The informed sampling approach (INF-BMHWG\xspace) is better than the best baseline (MHWG\xspace). This is still a very challenging problem, since the posterior over the parameters of occluded tiles is flat over a large region. Some of the posterior variance of the occluded tiles is already captured by the informed sampler.
\begin{figure*}[h!]
\begin{center}
\centerline{\includegraphics[width=0.95\textwidth]{figures/supplementary/occlusionExp_Visual.pdf}}
\mycaption{Additional qualitative results of the occluding tiles experiment}{From left to right: (a) Given image, (b) Ground truth tiles, (c) OpenCV heuristic and most probable estimates from 5000 samples obtained by (d) MHWG sampler (best baseline) and (e) our INF-BMHWG sampler.
(f) Posterior expectation of the tile boundaries obtained by INF-BMHWG sampling (the first 2000 samples are discarded as burn-in).}
\label{fig:exp2_visual_more}
\end{center}
\end{figure*}
\subsubsection{Body Shape}
Figure~\ref{fig:exp3_bodyMeshes} shows some more results of 3D mesh reconstruction using posterior samples obtained by our informed sampling INF-MH\xspace.
\begin{figure*}[t]
\begin{center}
\centerline{\includegraphics[width=0.75\textwidth]{figures/supplementary/bodyMeshResults.pdf}}
\mycaption{Qualitative results for the body shape experiment}{Shown are the 3D mesh reconstruction results for the first 1000 samples obtained using the INF-MH\xspace informed sampling method (blue indicates small values and red indicates high values).}
\label{fig:exp3_bodyMeshes}
\end{center}
\end{figure*}
\clearpage
\section{Additional Results on the Face Problem with CMP}
Figure~\ref{fig:shading-qualitative-multiple-subjects-supp} shows inference results for reflectance maps, normal maps and lights for randomly chosen test images, and Fig.~\ref{fig:shading-qualitative-same-subject-supp} shows reflectance estimation results on multiple images of the same subject produced under different illumination conditions. CMP is able to produce estimates that are closer to the groundtruth across different subjects and illumination conditions.
\begin{figure*}[h]
\begin{center}
\centerline{\includegraphics[width=1.0\columnwidth]{figures/face_cmp_visual_results_supp.pdf}}
\vspace{-1.2cm}
\end{center}
\mycaption{A visual comparison of inference results}{(a)~Observed images. (b)~Inferred reflectance maps. \textit{GT} is the photometric stereo groundtruth, \textit{BU} is the Biswas \textit{et~al.}\@\xspace (2009) reflectance estimate and \textit{Forest} is the consensus prediction. (c)~The variance of the inferred reflectance estimate produced by CMP\@\xspace (normalized across rows). (d)~Visualization of inferred light directions.
(e)~Inferred normal maps.} \label{fig:shading-qualitative-multiple-subjects-supp} \end{figure*} \begin{figure*}[h] \centering \setlength\fboxsep{0.2mm} \setlength\fboxrule{0pt} \begin{tikzpicture} \matrix at (0, 0) [matrix of nodes, nodes={anchor=east}, column sep=-0.05cm, row sep=-0.2cm] { \fbox{\includegraphics[width=1cm]{figures/sample_3_4_X.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_4_GT.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_4_BISWAS.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_4_VMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_4_FOREST.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_4_CMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_4_CMPVAR.png}} \\ \fbox{\includegraphics[width=1cm]{figures/sample_3_5_X.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_5_GT.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_5_BISWAS.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_5_VMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_5_FOREST.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_5_CMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_5_CMPVAR.png}} \\ \fbox{\includegraphics[width=1cm]{figures/sample_3_6_X.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_6_GT.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_6_BISWAS.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_6_VMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_6_FOREST.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_6_CMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_6_CMPVAR.png}} \\ }; \node at (-3.85, -2.0) {\small Observed}; \node at (-2.55, -2.0) {\small `GT'}; \node at (-1.27, -2.0) {\small BU}; \node at (0.0, -2.0) {\small MP}; \node at (1.27, -2.0) {\small Forest}; \node at (2.55, -2.0) {\small \textbf{CMP}}; \node at (3.85, -2.0) {\small Variance}; \end{tikzpicture} \mycaption{Robustness to 
varying illumination}{Reflectance estimation on images of a subject under varying illumination. Left to right: observed image, photometric stereo estimate (GT) which is used as a proxy for groundtruth, bottom-up estimate of \cite{Biswas2009}, VMP result, consensus forest estimate, CMP mean, and CMP variance.}
\label{fig:shading-qualitative-same-subject-supp}
\end{figure*}
\clearpage
\section{Additional Material for Learning Sparse High Dimensional Filters}
\label{sec:appendix-bnn}
This part of the supplementary material contains a more detailed overview of the permutohedral lattice convolution in Section~\ref{sec:permconv}, more experiments in Section~\ref{sec:addexps}, and additional results with protocols for the experiments presented in Chapter~\ref{chap:bnn} in Section~\ref{sec:addresults}.
\vspace{-0.2cm}
\subsection{General Permutohedral Convolutions}
\label{sec:permconv}
A core technical contribution of this work is the generalization of the Gaussian permutohedral lattice convolution proposed in~\cite{adams2010fast} to the full non-separable case with the ability to perform back-propagation. Although there are only minor conceptual differences between Gaussian and general parameterized filters, there are non-trivial practical differences in the algorithmic implementation. The Gauss filters belong to the separable class and can thus be decomposed into multiple sequential one-dimensional convolutions. We are interested in general filter convolutions, which cannot be decomposed. Thus, performing a general permutohedral convolution at a lattice point requires the computation of the inner product with the neighboring elements in all directions of the high-dimensional space. Here, we give more details on the implementation differences between separable and non-separable filters. In the following, we explain the scalar case first.
Recall that the forward pass of the general permutohedral convolution involves 3 steps: \textit{splatting}, \textit{convolving} and \textit{slicing}. We follow the same splatting and slicing strategies as in~\cite{adams2010fast}, since these operations do not depend on the filter kernel. The main difference between our work and the existing implementation of~\cite{adams2010fast} is the way the convolution operation is executed. It proceeds by constructing a \emph{blur neighbor} matrix $K$ that stores, for every lattice point, all values of the lattice neighbors that are needed to compute the filter output.
\begin{figure}[t!]
\centering
\includegraphics[width=0.6\columnwidth]{figures/supplementary/lattice_construction}
\mycaption{Illustration of 1D permutohedral lattice construction}{A $4\times 4$ $(x,y)$ grid lattice is projected onto the plane defined by the normal vector $(1,1)^{\top}$. This grid has $s+1=4$ points per dimension and $d=2$ dimensions, hence $(s+1)^{d}=4^2=16$ elements. In the projection, all points of the same color are projected onto the same points in the plane. The number of elements of the projected lattice is $t=(s+1)^d-s^d=4^2-3^2=7$, that is, the $(4\times 4)$ grid minus a lattice that is $1$ smaller in each dimension, in this case the $(3\times 3)$ lattice formed by the upper right $(3\times 3)$ elements.}
\label{fig:latticeconstruction}
\end{figure}
The blur neighbor matrix is constructed by traversing through all the populated lattice points and their neighboring elements. This is done recursively to share computations: for any lattice point, the neighbors that are $n$ hops away are the direct neighbors of the points that are $n-1$ hops away. The size of a $d$-dimensional spatial filter with width $s+1$ is $(s+1)^{d}$ (\textit{e.g.}\@\xspace, a $3\times 3$ filter, $s=2$ in $d=2$, has $3^2=9$ elements) and this size grows exponentially in the number of dimensions $d$.
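The element counts above can be checked with a few lines. A minimal sketch (plain Python, function names our own); note that the filter sizes $65$ and $665$ appearing in the protocol table are instances of the $t=(s+1)^d-s^d$ formula with $s=2$ and $d$ one more than the feature dimension:

```python
def grid_filter_size(s, d):
    # A spatial filter of width s+1 in d dimensions has (s+1)^d taps.
    return (s + 1) ** d

def permutohedral_filter_size(s, d):
    # Elements remaining after projecting the grid onto the plane
    # with normal (1, ..., 1): t = (s+1)^d - s^d.
    return (s + 1) ** d - s ** d

# The lattice-construction example from the figure: s=3, d=2.
print(grid_filter_size(3, 2))           # 16
print(permutohedral_filter_size(3, 2))  # 7

# Sizes matching the protocol table (s=2, feature dim + 1):
print(permutohedral_filter_size(2, 4))  # 65  (3D features)
print(permutohedral_filter_size(2, 6))  # 665 (5D features)
```

The exponential gap between `grid_filter_size` and `permutohedral_filter_size` for growing `d` is exactly why the permutohedral parameterization stays tractable in high dimensions.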
The permutohedral lattice is constructed by projecting a regular grid onto the hyperplane with the $d$ dimensional normal vector ${(1,\ldots,1)}^{\top}$. See Fig.~\ref{fig:latticeconstruction} for an illustration of the 1D lattice construction. Many corners of a grid filter are projected onto the same point; in total, $t = {(s+1)}^{d} - s^{d}$ elements remain in the permutohedral filter with $s$ neighborhood in $d-1$ dimensions. If the lattice has $m$ populated elements, the matrix $K$ has size $t\times m$. Note that, since the input signal is typically sparse, only a few lattice corners are populated in the \textit{splatting} step. We use a hash-table to keep track of these points and traverse only through the populated lattice points for this neighborhood matrix construction. Once the blur neighbor matrix $K$ is constructed, we can perform the convolution by the matrix vector multiplication
\begin{equation}
\ell' = BK,
\label{eq:conv}
\end{equation}
where $B$ is the $1 \times t$ filter kernel (whose values we will learn) and $\ell'\in\mathbb{R}^{1\times m}$ is the result of the filtering at the $m$ lattice points. In practice, we found that the matrix $K$ is sometimes too large to fit into GPU memory and we divided the matrix $K$ into smaller pieces to compute Eq.~\ref{eq:conv} sequentially. In the general multi-dimensional case, the signal $\ell$ is of $c$ dimensions. Then the kernel $B$ is of size $c \times t$ and $K$ stores the $c$ dimensional vectors accordingly. When the input and output points are different, we splat only the input points and slice only at the output points.
\subsection{Additional Experiments}
\label{sec:addexps}
In this section, we discuss more use-cases for the learned bilateral filters: one use-case of BNNs and two single-filter applications for image and 3D mesh denoising.
\subsubsection{Recognition of subsampled MNIST}\label{sec:app_mnist} One of the strengths of the proposed filter convolution is that it does not require the input to lie on a regular grid. The only requirement is to define a distance between features of the input signal. We highlight this feature with the following experiment using the classical MNIST ten class classification problem~\cite{lecun1998mnist}. We sample a sparse set of $N$ points $(x,y)\in [0,1]\times [0,1]$ uniformly at random in the input image, use their interpolated values as signal and the \emph{continuous} $(x,y)$ positions as features. This mimics sub-sampling of a high-dimensional signal. To compare against a spatial convolution, we interpolate the sparse set of values at the grid positions. We take a reference implementation of LeNet~\cite{lecun1998gradient} that is part of the Caffe project~\cite{jia2014caffe} and compare it against the same architecture but replacing the first convolutional layer with a bilateral convolution layer (BCL). The filter size and numbers are adjusted to get a comparable number of parameters ($5\times 5$ for LeNet, $2$-neighborhood for BCL). The results are shown in Table~\ref{tab:all-results}. We see that training on the original MNIST data (column Original, LeNet vs. BNN) leads to a slight decrease in performance of the BNN (99.03\%) compared to LeNet (99.19\%). The BNN can be trained and evaluated on sparse signals, and we resample the image as described above for $N=$ 100\%, 60\% and 20\% of the total number of pixels. The methods are also evaluated on test images that are subsampled in the same way. Note that we can train and test with different subsampling rates. We introduce an additional bilinear interpolation layer for the LeNet architecture to train on the same data. In essence, both models perform a spatial interpolation and thus we expect them to yield a similar classification accuracy. 
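The random subsampling and bilinear interpolation described above can be sketched as follows (a synthetic image stands in for MNIST; the helper name is our own):

```python
import numpy as np

def subsample_image(img, frac, rng):
    """Sample frac*H*W random continuous (x, y) positions in [0,1]^2
    and return their bilinearly interpolated intensities together with
    the positions, which serve as features for the bilateral layer."""
    h, w = img.shape
    n = int(frac * h * w)
    xy = rng.random((n, 2))
    # Continuous coordinates in pixel units.
    x = xy[:, 0] * (w - 1)
    y = xy[:, 1] * (h - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Bilinear interpolation of the four surrounding pixels.
    vals = (img[y0, x0] * (1 - fx) * (1 - fy)
            + img[y0, x1] * fx * (1 - fy)
            + img[y1, x0] * (1 - fx) * fy
            + img[y1, x1] * fx * fy)
    return vals, xy

rng = np.random.default_rng(0)
img = rng.random((28, 28))          # stand-in for a 28x28 MNIST digit
vals, feats = subsample_image(img, 0.2, rng)
print(vals.shape, feats.shape)
```

The bilateral layer consumes `vals` as the signal and the continuous `feats` as lattice features directly; the CNN baseline instead needs the extra interpolation step back onto a regular grid.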
Once the data is of higher dimension, the permutohedral convolution will be faster, owing to the hashing of the sparse input points, and also less memory-demanding than a naive application of a spatial convolution to interpolated values.
\begin{table}[t]
\begin{center}
\footnotesize
\centering
\begin{tabular}[t]{lllll}
\toprule
& & \multicolumn{3}{c}{Test Subsampling} \\
Method & Original & 100\% & 60\% & 20\%\\
\midrule
LeNet & \textbf{0.9919} & 0.9660 & 0.9348 & \textbf{0.6434} \\
BNN & 0.9903 & \textbf{0.9844} & \textbf{0.9534} & 0.5767 \\
\hline
LeNet 100\% & 0.9856 & 0.9809 & 0.9678 & \textbf{0.7386} \\
BNN 100\% & \textbf{0.9900} & \textbf{0.9863} & \textbf{0.9699} & 0.6910 \\
\hline
LeNet 60\% & 0.9848 & 0.9821 & 0.9740 & 0.8151 \\
BNN 60\% & \textbf{0.9885} & \textbf{0.9864} & \textbf{0.9771} & \textbf{0.8214}\\
\hline
LeNet 20\% & \textbf{0.9763} & \textbf{0.9754} & 0.9695 & 0.8928 \\
BNN 20\% & 0.9728 & 0.9735 & \textbf{0.9701} & \textbf{0.9042}\\
\bottomrule
\end{tabular}
\end{center}
\vspace{-.2cm}
\caption{Classification accuracy on MNIST. We compare the LeNet~\cite{lecun1998gradient} implementation that is part of Caffe~\cite{jia2014caffe} to the network with the first layer replaced by a bilateral convolution layer (BCL). Both are trained on the original image resolution (first two rows). Three more BNN and CNN models are trained with randomly subsampled images (100\%, 60\% and 20\% of the pixels). An additional bilinear interpolation layer samples the input signal on a spatial grid for the CNN model. }
\label{tab:all-results}
\vspace{-.5cm}
\end{table}
\subsubsection{Image Denoising}
The main application that inspired the development of the bilateral filtering operation is image denoising~\cite{aurich1995non}, there using a single Gaussian kernel. Our development allows us to learn this kernel function from data, and we explore how denoising improves when using a \emph{single} but more general bilateral filter.
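For reference, the classical single-Gaussian-kernel bilateral filter mentioned above can be sketched as a brute-force implementation (our own minimal version on a synthetic image, not the permutohedral implementation used in this work):

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force Gaussian bilateral filter on a gray-scale image.

    Each output pixel is a weighted mean of its spatial neighbors, with
    weights that fall off in both position and intensity difference."""
    h, w = img.shape
    pad = np.pad(img, radius, mode='edge')
    acc = np.zeros_like(img)
    wsum = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy: radius + dy + h,
                          radius + dx: radius + dx + w]
            # Spatial Gaussian times range (intensity) Gaussian.
            weight = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2)
                            - (shifted - img) ** 2 / (2 * sigma_r ** 2))
            acc += weight * shifted
            wsum += weight
    return acc / wsum

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 32), (32, 1))      # smooth ramp image
noisy = clean + rng.normal(0, 25 / 255, clean.shape)
denoised = bilateral_filter(noisy)
# Whether filtering reduced the mean absolute error.
print(np.abs(denoised - clean).mean() < np.abs(noisy - clean).mean())
```

The learned variant described next keeps the same filtering structure but replaces the fixed Gaussian weights with free parameters fitted on training data.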
We use the Berkeley segmentation dataset (BSDS500)~\cite{arbelaezi2011bsds500} as a test bed. The color images in the dataset are converted to gray-scale and corrupted with Gaussian noise with a standard deviation of $\frac{25}{255}$. We compare the performance of four different filter models on a denoising task. The first baseline model (`Spatial' in Table \ref{tab:denoising}, $25$ weights) uses a single spatial filter with a kernel size of $5$ and predicts the scalar gray-scale value at the center pixel. The next model (`Gauss Bilateral') applies a bilateral \emph{Gaussian} filter to the noisy input, using position and intensity features $\mathbf{f}=(x,y,v)^\top$. The third setup (`Learned Bilateral', $65$ weights) takes a Gauss kernel as initialization and fits all filter weights on the train set to minimize the mean squared error with respect to the clean images. We also run a combination of spatial and permutohedral convolutions on spatial and bilateral features (`Spatial + Bilateral (Learned)') to check whether the two convolutions are complementary.
\label{sec:exp:denoising}
\begin{table}[!h]
\begin{center}
\footnotesize
\begin{tabular}[t]{lr}
\toprule
Method & PSNR \\
\midrule
Noisy Input & $20.17$ \\
Spatial & $26.27$ \\
Gauss Bilateral & $26.51$ \\
Learned Bilateral & $26.58$ \\
Spatial + Bilateral (Learned) & $\mathbf{26.65}$ \\
\bottomrule
\end{tabular}
\end{center}
\vspace{-0.5em}
\caption{PSNR results of a denoising task using the BSDS500 dataset~\cite{arbelaezi2011bsds500}}
\vspace{-0.5em}
\label{tab:denoising}
\end{table}
\vspace{-0.2em}
The PSNR scores evaluated on full images of the test set are shown in Table \ref{tab:denoising}. We find that an untrained bilateral filter already performs better than a trained spatial convolution ($26.51$ vs.\ $26.27$). A learned convolution further improves the performance slightly. We chose this simple one-kernel setup to validate the advantage of the generalized bilateral filter.
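As a sanity check on the table: for Gaussian noise of standard deviation $25/255$ on images in $[0,1]$, the expected PSNR of the noisy input is $20\log_{10}(255/25)\approx 20.2$\,dB, which matches the `Noisy Input' row. A minimal sketch of the metric (our own helper, on synthetic data):

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    # Peak signal-to-noise ratio in dB for images scaled to [0, peak].
    mse = np.mean((reference - estimate) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

# Gaussian noise with std 25/255, as in the experiment; clipping to
# the valid range pushes the PSNR slightly above the theoretical value.
rng = np.random.default_rng(0)
clean = rng.random((256, 256))
noisy = np.clip(clean + rng.normal(0, 25 / 255, clean.shape), 0, 1)
print(round(psnr(clean, noisy), 2))
```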
A competitive denoising system would employ RGB color information, and its network size would need to be adjusted accordingly. Multi-layer perceptrons have obtained state-of-the-art denoising results~\cite{burger12cvpr}, and the permutohedral lattice layer can readily be used in such an architecture; we leave this to future work.
\subsection{Additional results}
\label{sec:addresults}
This section contains more qualitative results for the experiments presented in Chapter~\ref{chap:bnn}.
\begin{figure*}[th!]
\centering
\includegraphics[width=\columnwidth,trim={5cm 2.5cm 5cm 4.5cm},clip]{figures/supplementary/lattice_viz.pdf}
\vspace{-0.7cm}
\mycaption{Visualization of the Permutohedral Lattice} {Sample lattice visualizations for different feature spaces. All pixels falling in the same simplex cell are shown with the same color. $(x,y)$ features correspond to image pixel positions, and $(r,g,b) \in [0,255]$ correspond to the red, green and blue color values.}
\label{fig:latticeviz}
\end{figure*}
\subsubsection{Lattice Visualization}
Figure~\ref{fig:latticeviz} shows sample lattice visualizations for different feature spaces.
\newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}b{#1}}
\newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}b{#1}}
\newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}b{#1}}
\subsubsection{Color Upsampling}\label{sec:color_upsampling}
\label{sec:col_upsample_extra}
Example upsampling results for the Pascal VOC12 dataset are shown in Fig.~\ref{fig:Colour_upsample_visuals}. In particular, low-level image details are better preserved with a learned bilateral filter than in the Gaussian case.
\begin{figure*}[t!]
\centering \subfigure{% \raisebox{2.0em}{ \includegraphics[width=.06\columnwidth]{figures/supplementary/2007_004969.jpg} } } \subfigure{% \includegraphics[width=.17\columnwidth]{figures/supplementary/2007_004969_gray.pdf} } \subfigure{% \includegraphics[width=.17\columnwidth]{figures/supplementary/2007_004969_gt.pdf} } \subfigure{% \includegraphics[width=.17\columnwidth]{figures/supplementary/2007_004969_bicubic.pdf} } \subfigure{% \includegraphics[width=.17\columnwidth]{figures/supplementary/2007_004969_gauss.pdf} } \subfigure{% \includegraphics[width=.17\columnwidth]{figures/supplementary/2007_004969_learnt.pdf} }\\ \subfigure{% \raisebox{2.0em}{ \includegraphics[width=.06\columnwidth]{figures/supplementary/2007_003106.jpg} } } \subfigure{% \includegraphics[width=.17\columnwidth]{figures/supplementary/2007_003106_gray.pdf} } \subfigure{% \includegraphics[width=.17\columnwidth]{figures/supplementary/2007_003106_gt.pdf} } \subfigure{% \includegraphics[width=.17\columnwidth]{figures/supplementary/2007_003106_bicubic.pdf} } \subfigure{% \includegraphics[width=.17\columnwidth]{figures/supplementary/2007_003106_gauss.pdf} } \subfigure{% \includegraphics[width=.17\columnwidth]{figures/supplementary/2007_003106_learnt.pdf} }\\ \setcounter{subfigure}{0} \small{ \subfigure[Inp.]{% \raisebox{2.0em}{ \includegraphics[width=.06\columnwidth]{figures/supplementary/2007_006837.jpg} } } \subfigure[Guidance]{% \includegraphics[width=.17\columnwidth]{figures/supplementary/2007_006837_gray.pdf} } \subfigure[GT]{% \includegraphics[width=.17\columnwidth]{figures/supplementary/2007_006837_gt.pdf} } \subfigure[Bicubic]{% \includegraphics[width=.17\columnwidth]{figures/supplementary/2007_006837_bicubic.pdf} } \subfigure[Gauss-BF]{% \includegraphics[width=.17\columnwidth]{figures/supplementary/2007_006837_gauss.pdf} } \subfigure[Learned-BF]{% \includegraphics[width=.17\columnwidth]{figures/supplementary/2007_006837_learnt.pdf} } } \vspace{-0.5cm} \mycaption{Color Upsampling}{Color 
$8\times$ upsampling results using different methods, from left to right, (a)~Low-resolution input color image (Inp.), (b)~Gray-scale guidance image, (c)~Ground-truth color image; Upsampled color images with (d)~Bicubic interpolation, (e)~Gauss bilateral upsampling and (f)~Learned bilateral upsampling (best viewed on screen).}
\label{fig:Colour_upsample_visuals}
\end{figure*}
\subsubsection{Depth Upsampling}
\label{sec:depth_upsample_extra}
Figure~\ref{fig:depth_upsample_visuals} presents some more qualitative results comparing bicubic interpolation, Gauss bilateral and learned bilateral upsampling on NYU depth dataset images~\cite{silberman2012indoor}.
\subsubsection{Character Recognition}\label{sec:app_character}
Figure~\ref{fig:nnrecognition} shows the schematic of the different layers of the network architectures for LeNet-7~\cite{lecun1998mnist} and DeepCNet(5, 50)~\cite{ciresan2012multi,graham2014spatially}. For the BNN variants, the first-layer filters are replaced with bilateral filters and are learned end-to-end.
\subsubsection{Semantic Segmentation}\label{sec:app_semantic_segmentation}
\label{sec:semantic_bnn_extra}
Some more visual results for semantic segmentation are shown in Figure~\ref{fig:semantic_visuals}. These include the underlying DeepLab CNN~\cite{chen2014semantic} result (DeepLab), the 2-step mean-field result with Gaussian edge potentials (+2stepMF-GaussCRF) and the corresponding results with learned edge potentials (+2stepMF-LearnedCRF). In general, we observe that mean-field inference in the learned CRF leads to slightly dilated classification regions compared to the Gaussian CRF, thereby filling in false-negative pixels and also correcting some misclassified regions.
\begin{figure*}[t!]
\centering \subfigure{% \raisebox{2.0em}{ \includegraphics[width=.06\columnwidth]{figures/supplementary/2bicubic} } } \subfigure{% \includegraphics[width=.17\columnwidth]{figures/supplementary/2given_image} } \subfigure{% \includegraphics[width=.17\columnwidth]{figures/supplementary/2ground_truth} } \subfigure{% \includegraphics[width=.17\columnwidth]{figures/supplementary/2bicubic} } \subfigure{% \includegraphics[width=.17\columnwidth]{figures/supplementary/2gauss} } \subfigure{% \includegraphics[width=.17\columnwidth]{figures/supplementary/2learnt} }\\ \subfigure{% \raisebox{2.0em}{ \includegraphics[width=.06\columnwidth]{figures/supplementary/32bicubic} } } \subfigure{% \includegraphics[width=.17\columnwidth]{figures/supplementary/32given_image} } \subfigure{% \includegraphics[width=.17\columnwidth]{figures/supplementary/32ground_truth} } \subfigure{% \includegraphics[width=.17\columnwidth]{figures/supplementary/32bicubic} } \subfigure{% \includegraphics[width=.17\columnwidth]{figures/supplementary/32gauss} } \subfigure{% \includegraphics[width=.17\columnwidth]{figures/supplementary/32learnt} }\\ \setcounter{subfigure}{0} \small{ \subfigure[Inp.]{% \raisebox{2.0em}{ \includegraphics[width=.06\columnwidth]{figures/supplementary/41bicubic} } } \subfigure[Guidance]{% \includegraphics[width=.17\columnwidth]{figures/supplementary/41given_image} } \subfigure[GT]{% \includegraphics[width=.17\columnwidth]{figures/supplementary/41ground_truth} } \subfigure[Bicubic]{% \includegraphics[width=.17\columnwidth]{figures/supplementary/41bicubic} } \subfigure[Gauss-BF]{% \includegraphics[width=.17\columnwidth]{figures/supplementary/41gauss} } \subfigure[Learned-BF]{% \includegraphics[width=.17\columnwidth]{figures/supplementary/41learnt} } } \mycaption{Depth Upsampling}{Depth $8\times$ upsampling results using different upsampling strategies, from left to right, (a)~Low-resolution input depth image (Inp.), (b)~High-resolution guidance image, (c)~Ground-truth depth; Upsampled 
depth images with (d)~Bicubic interpolation, (e)~Gauss bilateral upsampling and (f)~Learned bilateral upsampling (best viewed on screen).}
\label{fig:depth_upsample_visuals}
\end{figure*}
\subsubsection{Material Segmentation}\label{sec:app_material_segmentation}
\label{sec:material_bnn_extra}
In Fig.~\ref{fig:material_visuals-app2}, we present visual results comparing 2-step mean-field inference with Gaussian and learned pairwise CRF potentials. In general, we observe that pixels belonging to the dominant classes in the training data are classified more accurately with the learned CRF. This leads to a significant improvement in overall pixel accuracy. It also results in a slight decrease in accuracy for pixels of less frequent classes, thereby slightly reducing the average class accuracy with learning. We attribute this to the type of annotation available for this dataset, which covers not the entire image but only some segments in it. Too few images of the infrequent classes are available to counteract this behaviour during training.
\subsubsection{Experiment Protocols}
\label{sec:protocols}
Table~\ref{tbl:parameters} lists the protocols of the different experiments.
\begin{figure*}[t!]
\centering \subfigure[LeNet-7]{ \includegraphics[width=0.7\columnwidth]{figures/supplementary/lenet_cnn_network} }\\ \subfigure[DeepCNet]{ \includegraphics[width=\columnwidth]{figures/supplementary/deepcnet_cnn_network} } \mycaption{CNNs for Character Recognition} {Schematic of (top) LeNet-7~\cite{lecun1998mnist} and (bottom) DeepCNet(5,50)~\cite{ciresan2012multi,graham2014spatially} architectures used in Assamese character recognition experiments.} \label{fig:nnrecognition} \end{figure*} \definecolor{voc_1}{RGB}{0, 0, 0} \definecolor{voc_2}{RGB}{128, 0, 0} \definecolor{voc_3}{RGB}{0, 128, 0} \definecolor{voc_4}{RGB}{128, 128, 0} \definecolor{voc_5}{RGB}{0, 0, 128} \definecolor{voc_6}{RGB}{128, 0, 128} \definecolor{voc_7}{RGB}{0, 128, 128} \definecolor{voc_8}{RGB}{128, 128, 128} \definecolor{voc_9}{RGB}{64, 0, 0} \definecolor{voc_10}{RGB}{192, 0, 0} \definecolor{voc_11}{RGB}{64, 128, 0} \definecolor{voc_12}{RGB}{192, 128, 0} \definecolor{voc_13}{RGB}{64, 0, 128} \definecolor{voc_14}{RGB}{192, 0, 128} \definecolor{voc_15}{RGB}{64, 128, 128} \definecolor{voc_16}{RGB}{192, 128, 128} \definecolor{voc_17}{RGB}{0, 64, 0} \definecolor{voc_18}{RGB}{128, 64, 0} \definecolor{voc_19}{RGB}{0, 192, 0} \definecolor{voc_20}{RGB}{128, 192, 0} \definecolor{voc_21}{RGB}{0, 64, 128} \definecolor{voc_22}{RGB}{128, 64, 128} \begin{figure*}[t] \centering \small{ \fcolorbox{white}{voc_1}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Background~~ \fcolorbox{white}{voc_2}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Aeroplane~~ \fcolorbox{white}{voc_3}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Bicycle~~ \fcolorbox{white}{voc_4}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Bird~~ \fcolorbox{white}{voc_5}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Boat~~ \fcolorbox{white}{voc_6}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Bottle~~ \fcolorbox{white}{voc_7}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Bus~~ \fcolorbox{white}{voc_8}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Car~~ \\ \fcolorbox{white}{voc_9}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Cat~~ 
\fcolorbox{white}{voc_10}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Chair~~ \fcolorbox{white}{voc_11}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Cow~~ \fcolorbox{white}{voc_12}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Dining Table~~ \fcolorbox{white}{voc_13}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Dog~~ \fcolorbox{white}{voc_14}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Horse~~ \fcolorbox{white}{voc_15}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Motorbike~~ \fcolorbox{white}{voc_16}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Person~~ \\ \fcolorbox{white}{voc_17}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Potted Plant~~ \fcolorbox{white}{voc_18}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Sheep~~ \fcolorbox{white}{voc_19}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Sofa~~ \fcolorbox{white}{voc_20}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Train~~ \fcolorbox{white}{voc_21}{\rule{0pt}{6pt}\rule{6pt}{0pt}} TV monitor~~ \\ } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2007_001423_given.jpg} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2007_001423_gt.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2007_001423_cnn.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2007_001423_gauss.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2007_001423_learnt.png} }\\ \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2007_001430_given.jpg} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2007_001430_gt.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2007_001430_cnn.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2007_001430_gauss.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2007_001430_learnt.png} }\\ \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2007_007996_given.jpg} } \subfigure{% 
\includegraphics[width=.18\columnwidth]{figures/supplementary/2007_007996_gt.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2007_007996_cnn.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2007_007996_gauss.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2007_007996_learnt.png} }\\ \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2010_002682_given.jpg} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2010_002682_gt.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2010_002682_cnn.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2010_002682_gauss.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2010_002682_learnt.png} }\\ \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2010_004789_given.jpg} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2010_004789_gt.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2010_004789_cnn.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2010_004789_gauss.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2010_004789_learnt.png} }\\ \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2007_001311_given.jpg} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2007_001311_gt.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2007_001311_cnn.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2007_001311_gauss.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2007_001311_learnt.png} }\\ \setcounter{subfigure}{0} \subfigure[Input]{% 
\includegraphics[width=.18\columnwidth]{figures/supplementary/2010_003531_given.jpg} } \subfigure[Ground Truth]{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2010_003531_gt.png} } \subfigure[DeepLab]{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2010_003531_cnn.png} } \subfigure[+GaussCRF]{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2010_003531_gauss.png} } \subfigure[+LearnedCRF]{% \includegraphics[width=.18\columnwidth]{figures/supplementary/2010_003531_learnt.png} } \vspace{-0.3cm} \mycaption{Semantic Segmentation}{Example results of semantic segmentation. (c)~depicts the unary results before application of MF, (d)~after two steps of MF with Gaussian edge CRF potentials, (e)~after two steps of MF with learned edge CRF potentials.} \label{fig:semantic_visuals} \end{figure*} \definecolor{minc_1}{HTML}{771111} \definecolor{minc_2}{HTML}{CAC690} \definecolor{minc_3}{HTML}{EEEEEE} \definecolor{minc_4}{HTML}{7C8FA6} \definecolor{minc_5}{HTML}{597D31} \definecolor{minc_6}{HTML}{104410} \definecolor{minc_7}{HTML}{BB819C} \definecolor{minc_8}{HTML}{D0CE48} \definecolor{minc_9}{HTML}{622745} \definecolor{minc_10}{HTML}{666666} \definecolor{minc_11}{HTML}{D54A31} \definecolor{minc_12}{HTML}{101044} \definecolor{minc_13}{HTML}{444126} \definecolor{minc_14}{HTML}{75D646} \definecolor{minc_15}{HTML}{DD4348} \definecolor{minc_16}{HTML}{5C8577} \definecolor{minc_17}{HTML}{C78472} \definecolor{minc_18}{HTML}{75D6D0} \definecolor{minc_19}{HTML}{5B4586} \definecolor{minc_20}{HTML}{C04393} \definecolor{minc_21}{HTML}{D69948} \definecolor{minc_22}{HTML}{7370D8} \definecolor{minc_23}{HTML}{7A3622} \definecolor{minc_24}{HTML}{000000} \begin{figure*}[t] \centering \small{ \fcolorbox{white}{minc_1}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Brick~~ \fcolorbox{white}{minc_2}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Carpet~~ \fcolorbox{white}{minc_3}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Ceramic~~ \fcolorbox{white}{minc_4}{\rule{0pt}{6pt}\rule{6pt}{0pt}} 
Fabric~~ \fcolorbox{white}{minc_5}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Foliage~~ \fcolorbox{white}{minc_6}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Food~~ \fcolorbox{white}{minc_7}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Glass~~ \fcolorbox{white}{minc_8}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Hair~~ \\ \fcolorbox{white}{minc_9}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Leather~~ \fcolorbox{white}{minc_10}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Metal~~ \fcolorbox{white}{minc_11}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Mirror~~ \fcolorbox{white}{minc_12}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Other~~ \fcolorbox{white}{minc_13}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Painted~~ \fcolorbox{white}{minc_14}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Paper~~ \fcolorbox{white}{minc_15}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Plastic~~\\ \fcolorbox{white}{minc_16}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Polished Stone~~ \fcolorbox{white}{minc_17}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Skin~~ \fcolorbox{white}{minc_18}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Sky~~ \fcolorbox{white}{minc_19}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Stone~~ \fcolorbox{white}{minc_20}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Tile~~ \fcolorbox{white}{minc_21}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Wallpaper~~ \fcolorbox{white}{minc_22}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Water~~ \fcolorbox{white}{minc_23}{\rule{0pt}{6pt}\rule{6pt}{0pt}} Wood~~ \\ } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000010868_given.jpg} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000010868_gt.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000010868_cnn.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000010868_gauss.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000010868_learnt.png} }\\[-2ex] \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000006011_given.jpg} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000006011_gt.png} } 
\subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000006011_cnn.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000006011_gauss.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000006011_learnt.png} }\\[-2ex] \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000008553_given.jpg} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000008553_gt.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000008553_cnn.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000008553_gauss.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000008553_learnt.png} }\\[-2ex] \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000009188_given.jpg} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000009188_gt.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000009188_cnn.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000009188_gauss.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000009188_learnt.png} }\\[-2ex] \setcounter{subfigure}{0} \subfigure[Input]{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000023570_given.jpg} } \subfigure[Ground Truth]{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000023570_gt.png} } \subfigure[DeepLab]{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000023570_cnn.png} } \subfigure[+GaussCRF]{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000023570_gauss.png} } \subfigure[+LearnedCRF]{% \includegraphics[width=.18\columnwidth]{figures/supplementary/000023570_learnt.png} } \mycaption{Material Segmentation}{Example results of material segmentation. 
(c)~depicts the unary results before application of MF, (d)~after two steps of MF with Gaussian edge CRF potentials, (e)~after two steps of MF with learned edge CRF potentials.} \label{fig:material_visuals-app2} \end{figure*} \begin{table*}[h] \tiny \centering \begin{tabular}{L{2.3cm} L{2.25cm} C{1.5cm} C{0.7cm} C{0.6cm} C{0.7cm} C{0.7cm} C{0.7cm} C{1.6cm} C{0.6cm} C{0.6cm} C{0.6cm}} \toprule & & & & & \multicolumn{3}{c}{\textbf{Data Statistics}} & \multicolumn{4}{c}{\textbf{Training Protocol}} \\ \textbf{Experiment} & \textbf{Feature Types} & \textbf{Feature Scales} & \textbf{Filter Size} & \textbf{Filter Nbr.} & \textbf{Train} & \textbf{Val.} & \textbf{Test} & \textbf{Loss Type} & \textbf{LR} & \textbf{Batch} & \textbf{Epochs} \\ \midrule \multicolumn{2}{c}{\textbf{Single Bilateral Filter Applications}} & & & & & & & & & \\ \textbf{2$\times$ Color Upsampling} & Position$_{1}$, Intensity (3D) & 0.13, 0.17 & 65 & 2 & 10581 & 1449 & 1456 & MSE & 1e-06 & 200 & 94.5\\ \textbf{4$\times$ Color Upsampling} & Position$_{1}$, Intensity (3D) & 0.06, 0.17 & 65 & 2 & 10581 & 1449 & 1456 & MSE & 1e-06 & 200 & 94.5\\ \textbf{8$\times$ Color Upsampling} & Position$_{1}$, Intensity (3D) & 0.03, 0.17 & 65 & 2 & 10581 & 1449 & 1456 & MSE & 1e-06 & 200 & 94.5\\ \textbf{16$\times$ Color Upsampling} & Position$_{1}$, Intensity (3D) & 0.02, 0.17 & 65 & 2 & 10581 & 1449 & 1456 & MSE & 1e-06 & 200 & 94.5\\ \textbf{Depth Upsampling} & Position$_{1}$, Color (5D) & 0.05, 0.02 & 665 & 2 & 795 & 100 & 654 & MSE & 1e-07 & 50 & 251.6\\ \textbf{Mesh Denoising} & Isomap (4D) & 46.00 & 63 & 2 & 1000 & 200 & 500 & MSE & 100 & 10 & 100.0 \\ \midrule \multicolumn{2}{c}{\textbf{DenseCRF Applications}} & & & & & & & & &\\ \multicolumn{2}{l}{\textbf{Semantic Segmentation}} & & & & & & & & &\\ \textbf{- 1step MF} & Position$_{1}$, Color (5D); Position$_{1}$ (2D) & 0.01, 0.34; 0.34 & 665; 19 & 2; 2 & 10581 & 1449 & 1456 & Logistic & 0.1 & 5 & 1.4 \\ \textbf{- 2step MF} & Position$_{1}$, Color (5D); 
Position$_{1}$ (2D) & 0.01, 0.34; 0.34 & 665; 19 & 2; 2 & 10581 & 1449 & 1456 & Logistic & 0.1 & 5 & 1.4 \\ \textbf{- \textit{loose} 2step MF} & Position$_{1}$, Color (5D); Position$_{1}$ (2D) & 0.01, 0.34; 0.34 & 665; 19 & 2; 2 &10581 & 1449 & 1456 & Logistic & 0.1 & 5 & +1.9 \\ \\ \multicolumn{2}{l}{\textbf{Material Segmentation}} & & & & & & & & &\\ \textbf{- 1step MF} & Position$_{2}$, Lab-Color (5D) & 5.00, 0.05, 0.30 & 665 & 2 & 928 & 150 & 1798 & Weighted Logistic & 1e-04 & 24 & 2.6 \\ \textbf{- 2step MF} & Position$_{2}$, Lab-Color (5D) & 5.00, 0.05, 0.30 & 665 & 2 & 928 & 150 & 1798 & Weighted Logistic & 1e-04 & 12 & +0.7 \\ \textbf{- \textit{loose} 2step MF} & Position$_{2}$, Lab-Color (5D) & 5.00, 0.05, 0.30 & 665 & 2 & 928 & 150 & 1798 & Weighted Logistic & 1e-04 & 12 & +0.2\\ \midrule \multicolumn{2}{c}{\textbf{Neural Network Applications}} & & & & & & & & &\\ \textbf{Tiles: CNN-9$\times$9} & - & - & 81 & 4 & 10000 & 1000 & 1000 & Logistic & 0.01 & 100 & 500.0 \\ \textbf{Tiles: CNN-13$\times$13} & - & - & 169 & 6 & 10000 & 1000 & 1000 & Logistic & 0.01 & 100 & 500.0 \\ \textbf{Tiles: CNN-17$\times$17} & - & - & 289 & 8 & 10000 & 1000 & 1000 & Logistic & 0.01 & 100 & 500.0 \\ \textbf{Tiles: CNN-21$\times$21} & - & - & 441 & 10 & 10000 & 1000 & 1000 & Logistic & 0.01 & 100 & 500.0 \\ \textbf{Tiles: BNN} & Position$_{1}$, Color (5D) & 0.05, 0.04 & 63 & 1 & 10000 & 1000 & 1000 & Logistic & 0.01 & 100 & 30.0 \\ \textbf{LeNet} & - & - & 25 & 2 & 5490 & 1098 & 1647 & Logistic & 0.1 & 100 & 182.2 \\ \textbf{Crop-LeNet} & - & - & 25 & 2 & 5490 & 1098 & 1647 & Logistic & 0.1 & 100 & 182.2 \\ \textbf{BNN-LeNet} & Position$_{2}$ (2D) & 20.00 & 7 & 1 & 5490 & 1098 & 1647 & Logistic & 0.1 & 100 & 182.2 \\ \textbf{DeepCNet} & - & - & 9 & 1 & 5490 & 1098 & 1647 & Logistic & 0.1 & 100 & 182.2 \\ \textbf{Crop-DeepCNet} & - & - & 9 & 1 & 5490 & 1098 & 1647 & Logistic & 0.1 & 100 & 182.2 \\ \textbf{BNN-DeepCNet} & Position$_{2}$ (2D) & 40.00 & 7 & 1 & 5490 & 1098 & 1647 & 
Logistic & 0.1 & 100 & 182.2 \\
\bottomrule
\\
\end{tabular}
\mycaption{Experiment Protocols} {Experiment protocols for the different experiments presented in this work. \textbf{Feature Types}: Feature spaces used for the bilateral convolutions. Position$_1$ corresponds to un-normalized pixel positions whereas Position$_2$ corresponds to pixel positions normalized to $[0,1]$ with respect to the given image. \textbf{Feature Scales}: Cross-validated scales for the features used. \textbf{Filter Size}: Number of elements in the filter that is being learned. \textbf{Filter Nbr.}: Half-width of the filter. \textbf{Train}, \textbf{Val.} and \textbf{Test} correspond to the number of train, validation and test images used in the experiment. \textbf{Loss Type}: Type of loss used for back-propagation. ``MSE'' corresponds to the Euclidean mean squared error loss and ``Logistic'' corresponds to the multinomial logistic loss. ``Weighted Logistic'' is the class-weighted multinomial logistic loss. We weighted the loss with the inverse class probability for the material segmentation task because only little, class-imbalanced training data is available. \textbf{LR}: Fixed learning rate used in stochastic gradient descent. \textbf{Batch}: Number of images used in one parameter update step. \textbf{Epochs}: Number of training epochs. In all experiments, we used a fixed momentum of 0.9 and a weight decay of 0.0005 for stochastic gradient descent. ``Color Upsampling'' experiments in this table correspond to those performed on Pascal VOC12 dataset images.
For all experiments using Pascal VOC12 images, we use the extended training segmentation dataset available from~\cite{hariharan2011moredata}, and the standard validation and test splits from the main dataset~\cite{voc2012segmentation}.}
\label{tbl:parameters}
\end{table*}
\clearpage
\section{Parameters and Additional Results for Video Propagation Networks}
In this section, we present experiment protocols and additional qualitative results for the experiments on video object segmentation, semantic video segmentation and video color propagation. Table~\ref{tbl:parameters_supp} shows the feature scales and other parameters used in the different experiments. Figure~\ref{fig:video_seg_pos_supp} shows some qualitative results on video object segmentation, with some failure cases in Fig.~\ref{fig:video_seg_neg_supp}. Figure~\ref{fig:semantic_visuals_supp} shows some qualitative results on semantic video segmentation and Fig.~\ref{fig:color_visuals_supp} shows results on video color propagation.
\newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}b{#1}}
\newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}b{#1}}
\newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}b{#1}}
\begin{table*}[h]
\tiny
\centering
\begin{tabular}{L{3.0cm} L{2.4cm} L{2.8cm} L{2.8cm} C{0.5cm} C{1.0cm} L{1.2cm}}
\toprule
\textbf{Experiment} & \textbf{Feature Type} & \textbf{Feature Scale-1, $\Lambda_a$} & \textbf{Feature Scale-2, $\Lambda_b$} & \textbf{$\alpha$} & \textbf{Input Frames} & \textbf{Loss Type} \\
\midrule
\textbf{Video Object Segmentation} & ($x,y,Y,Cb,Cr,t$) & (0.02,0.02,0.07,0.4,0.4,0.01) & (0.03,0.03,0.09,0.5,0.5,0.2) & 0.5 & 9 & Logistic\\
\midrule
\textbf{Semantic Video Segmentation} & & & & & \\
\textbf{with CNN1~\cite{yu2015multi}-NoFlow} & ($x,y,R,G,B,t$) & (0.08,0.08,0.2,0.2,0.2,0.04) & (0.11,0.11,0.2,0.2,0.2,0.04) & 0.5 & 3 & Logistic \\
\textbf{with CNN1~\cite{yu2015multi}-Flow} & ($x+u_x,y+u_y,R,G,B,t$) &
(0.11,0.11,0.14,0.14,0.14,0.03) & (0.08,0.08,0.12,0.12,0.12,0.01) & 0.65 & 3 & Logistic\\ \textbf{with CNN2~\cite{richter2016playing}-Flow} & ($x+u_x,y+u_y,R,G,B,t$) & (0.08,0.08,0.2,0.2,0.2,0.04) & (0.09,0.09,0.25,0.25,0.25,0.03) & 0.5 & 4 & Logistic\\ \midrule \textbf{Video Color Propagation} & ($x,y,I,t$) & (0.04,0.04,0.2,0.04) & No second kernel & 1 & 4 & MSE\\ \bottomrule \\ \end{tabular} \mycaption{Experiment Protocols} {Experiment protocols for the different experiments presented in this work. \textbf{Feature Types}: Feature spaces used for the bilateral convolutions, with position ($x,y$) and color ($R,G,B$ or $Y,Cb,Cr$) features $\in [0,255]$. $u_x$, $u_y$ denotes optical flow with respect to the present frame and $I$ denotes grayscale intensity. \textbf{Feature Scales ($\Lambda_a, \Lambda_b$)}: Cross-validated scales for the features used. \textbf{$\alpha$}: Exponential time decay for the input frames. \textbf{Input Frames}: Number of input frames for VPN. \textbf{Loss Type}: Type of loss used for back-propagation. ``MSE'' corresponds to Euclidean mean squared error loss and ``Logistic'' corresponds to multinomial logistic loss.} \label{tbl:parameters_supp} \end{table*} \begin{figure}[th!] \begin{center} \centerline{\includegraphics[width=0.7\textwidth]{figures/video_seg_visuals_supp_positive.pdf}} \mycaption{Video Object Segmentation} {Shown are the different frames in example videos with the corresponding ground truth (GT) masks, predictions from BVS~\cite{marki2016bilateral}, OFL~\cite{tsaivideo}, VPN (VPN-Stage2) and VPN-DLab (VPN-DeepLab) models.} \label{fig:video_seg_pos_supp} \end{center} \vspace{-1.0cm} \end{figure} \begin{figure}[th!] 
\begin{center} \centerline{\includegraphics[width=0.7\textwidth]{figures/video_seg_visuals_supp_negative.pdf}} \mycaption{Failure Cases for Video Object Segmentation} {Shown are the different frames in example videos with the corresponding ground truth (GT) masks, predictions from BVS~\cite{marki2016bilateral}, OFL~\cite{tsaivideo}, VPN (VPN-Stage2) and VPN-DLab (VPN-DeepLab) models.} \label{fig:video_seg_neg_supp} \end{center} \vspace{-1.0cm} \end{figure} \begin{figure}[th!] \begin{center} \centerline{\includegraphics[width=0.9\textwidth]{figures/supp_semantic_visual.pdf}} \mycaption{Semantic Video Segmentation} {Input video frames and the corresponding ground truth (GT) segmentation together with the predictions of CNN~\cite{yu2015multi} and with VPN-Flow.} \label{fig:semantic_visuals_supp} \end{center} \vspace{-0.7cm} \end{figure} \begin{figure}[th!] \begin{center} \centerline{\includegraphics[width=\textwidth]{figures/colorization_visuals_supp.pdf}} \mycaption{Video Color Propagation} {Input grayscale video frames and corresponding ground-truth (GT) color images together with color predictions of Levin et al.~\cite{levin2004colorization} and VPN-Stage1 models.} \label{fig:color_visuals_supp} \end{center} \vspace{-0.7cm} \end{figure} \clearpage \section{Additional Material for Bilateral Inception Networks} \label{sec:binception-app} In this section of the Appendix, we first discuss the use of approximate bilateral filtering in BI modules (Sec.~\ref{sec:lattice}). Later, we present some qualitative results using different models for the approach presented in Chapter~\ref{chap:binception} (Sec.~\ref{sec:qualitative-app}). \subsection{Approximate Bilateral Filtering} \label{sec:lattice} The bilateral inception module presented in Chapter~\ref{chap:binception} computes a matrix-vector product between a Gaussian filter $K$ and a vector of activations $\mathbf{z}_c$. 
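To make this operation concrete, a brute-force construction of such a filter matrix $K$ over superpixel features can be sketched as follows (NumPy; the feature scales, the row normalization and the random inputs are illustrative placeholders, not values or choices taken from our experiments):

```python
import numpy as np

def gauss_filter_matrix(f, lam):
    # K[i, j] = exp(-||lam * (f_i - f_j)||^2 / 2) for superpixel features
    # f of shape (n, d) and per-dimension feature scales lam of shape (d,);
    # rows are normalized so that each output is a convex combination.
    g = f * lam
    d2 = ((g[:, None, :] - g[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-0.5 * d2)
    return K / K.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
f = rng.random((200, 5)) * 255          # e.g. (x, y, r, g, b) per superpixel
z_c = rng.standard_normal(200)          # one channel of activations
K = gauss_filter_matrix(f, lam=np.array([0.01, 0.01, 0.05, 0.05, 0.05]))
z_filtered = K @ z_c                    # the brute-force matrix-vector product
```

With only a few hundred superpixels, $K$ is small enough that this explicit product is cheap, which is why the brute-force variant is viable here.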
Bilateral filtering is an important operation, and many algorithmic techniques have been proposed to speed up this operation~\cite{paris2006fast,adams2010fast,gastal2011domain}. In Chapter~\ref{chap:binception} we opted to implement what can be considered the brute-force variant: explicitly constructing $K$ and then using BLAS to compute the matrix-vector product. This results in an operation that takes only a few milliseconds. The explicit computation is only feasible due to the reduction to superpixels; it would not work, for example, for DenseCRF variants that operate on the full image resolution. Here, we present experiments in which we use the fast approximate bilateral filtering algorithm of~\cite{adams2010fast}, which is also used in Chapter~\ref{chap:bnn} for learning sparse high-dimensional filters. This choice allows for larger matrix-vector multiplications. The reason for choosing the explicit multiplication in Chapter~\ref{chap:binception} was that it was computationally faster: for the small sizes of the involved matrices and vectors, the explicit computation is sufficient, and we had no GPU implementation of an approximate technique that matched this runtime. It is also conceptually simpler, and the gradient with respect to the feature transformations ($\Lambda \mathbf{f}$) is obtained using standard matrix calculus.
\subsubsection{Experiments}
We modified the existing segmentation architectures analogously to Chapter~\ref{chap:binception}. The main difference is that, here, the inception modules use the lattice approximation~\cite{adams2010fast} to compute the bilateral filtering. Using the lattice approximation did not allow us to back-propagate through the feature transformations ($\Lambda$), and thus we used hand-specified feature scales, as explained later. Specifically, we take the CNN architectures from the works of~\cite{chen2014semantic,zheng2015conditional,bell2015minc} and insert the BI modules between the spatial FC layers.
We use superpixels from~\cite{DollarICCV13edges} for all the experiments with the lattice approximation. Experiments are performed using the Caffe neural network framework~\cite{jia2014caffe}. \begin{table} \small \centering \begin{tabular}{p{5.5cm}>{\raggedright\arraybackslash}p{1.4cm}>{\centering\arraybackslash}p{2.2cm}} \toprule \textbf{Model} & \emph{IoU} & \emph{Runtime}(ms) \\ \midrule DeepLab & 68.9 & 145\\ \midrule \bi{7}{2}-\bi{8}{10}& \textbf{73.8} & +600 \\ \midrule DeepLab-CRF~\cite{chen2014semantic} & 72.7 & +830\\ DeepLab-MSc-CRF~\cite{chen2014semantic} & \textbf{73.6} & +880\\ DeepLab-EdgeNet~\cite{chen2015semantic} & 71.7 & +30\\ DeepLab-EdgeNet-CRF~\cite{chen2015semantic} & \textbf{73.6} & +860\\ \bottomrule \\ \end{tabular} \mycaption{Semantic Segmentation using the DeepLab model} {IoU scores on the Pascal VOC12 segmentation test dataset with different models and our modified inception model. Also shown are the corresponding runtimes in milliseconds. Runtimes include superpixel computation (300 ms with Dollar superpixels~\cite{DollarICCV13edges}).} \label{tab:largefovresults} \end{table} \paragraph{Semantic Segmentation} The experiments in this section use the Pascal VOC12 segmentation dataset~\cite{voc2012segmentation} with 21 object classes; the images have a maximum resolution of 0.25 megapixels. For all experiments on VOC12, we train using the extended training set of 10581 images collected by~\cite{hariharan2011moredata}. We modified the DeepLab network architecture of~\cite{chen2014semantic} and the CRFasRNN architecture of~\cite{zheng2015conditional}, which uses a CNN with deconvolution layers followed by a DenseCRF, trained end-to-end. \paragraph{DeepLab Model}\label{sec:deeplabmodel} We experimented with the \bi{7}{2}-\bi{8}{10} inception model. Results using the DeepLab model are summarized in Tab.~\ref{tab:largefovresults}.
Although we get similar improvements with inception modules as with the explicit kernel computation, using the lattice approximation is slower. \begin{table} \small \centering \begin{tabular}{p{6.4cm}>{\raggedright\arraybackslash}p{1.8cm}>{\raggedright\arraybackslash}p{1.8cm}} \toprule \textbf{Model} & \emph{IoU (Val)} & \emph{IoU (Test)}\\ \midrule CNN & 67.5 & - \\ DeconvNet (CNN+Deconvolutions) & 69.8 & 72.0 \\ \midrule \bi{3}{6}-\bi{4}{6}-\bi{7}{2}-\bi{8}{6}& 71.9 & - \\ \bi{3}{6}-\bi{4}{6}-\bi{7}{2}-\bi{8}{6}-\gi{6}& 73.6 & \href{http://host.robots.ox.ac.uk:8080/anonymous/VOTV5E.html}{\textbf{75.2}}\\ \midrule DeconvNet-CRF (CRF-RNN)~\cite{zheng2015conditional} & 73.0 & 74.7\\ Context-CRF-RNN~\cite{yu2015multi} & ~~ - ~ & \textbf{75.3} \\ \bottomrule \\ \end{tabular} \mycaption{Semantic Segmentation using the CRFasRNN model}{IoU scores of different models on the Pascal VOC12 reduced validation / test segmentation dataset. The reduced validation set consists of 346 images, as used in~\cite{zheng2015conditional}, from which we adapted the model.} \label{tab:deconvresults-app} \end{table} \paragraph{CRFasRNN Model}\label{sec:deepinception} We add BI modules after the score-pool3, score-pool4, \fc{7} and \fc{8} $1\times1$ convolution layers, resulting in the \bi{3}{6}-\bi{4}{6}-\bi{7}{2}-\bi{8}{6} model, and also experimented with a variant in which $BI_8$ is followed by another inception module, G$(6)$, with 6 Gaussian kernels. Note that here, too, we discarded both the deconvolution and DenseCRF parts of the original model~\cite{zheng2015conditional}, inserted the BI modules in the base CNN, and found similar improvements compared to the inception modules with explicit kernel computation. See Tab.~\ref{tab:deconvresults-app} for results on the CRFasRNN model. \paragraph{Material Segmentation} Table~\ref{tab:mincresults-app} shows the results on the MINC dataset~\cite{bell2015minc} obtained by modifying the AlexNet architecture with our inception modules.
We observe similar improvements as with the explicit kernel construction. For this model, we do not provide a learned setup due to the very limited segment training data. The weights for combining the outputs in the bilateral inception layer are found on the validation set. \begin{table}[t] \small \centering \begin{tabular}{p{3.5cm}>{\centering\arraybackslash}p{4.0cm}} \toprule \textbf{Model} & Class / Total accuracy\\ \midrule AlexNet CNN & 55.3 / 58.9 \\ \midrule \bi{7}{2}-\bi{8}{6}& 68.5 / 71.8 \\ \bi{7}{2}-\bi{8}{6}-G$(6)$& 67.6 / 73.1 \\ \midrule AlexNet-CRF & 65.5 / 71.0 \\ \bottomrule \\ \end{tabular} \mycaption{Material Segmentation using AlexNet}{Pixel accuracy of different models on the MINC material segmentation test dataset~\cite{bell2015minc}.} \label{tab:mincresults-app} \end{table} \paragraph{Scales of Bilateral Inception Modules} \label{sec:scales} Unlike the explicit kernel technique presented in Chapter~\ref{chap:binception}, the approximate bilateral filter technique does not allow us to back-propagate through the feature transformation ($\Lambda$). The feature scales are therefore hand-specified and validated, as follows. The optimal scale values for the \bi{7}{2}-\bi{8}{2} model, found by validating for the best performance, are $\sigma_{xy}$ = (0.1, 0.1) for the spatial (XY) kernel and $\sigma_{rgbxy}$ = (0.1, 0.1, 0.1, 0.01, 0.01) for the color and position (RGBXY) kernel. As more kernels are added to \bi{8}{2}, we set the scales to $\alpha$*($\sigma_{xy}$, $\sigma_{rgbxy}$), with $\alpha$ chosen as 1, 0.5, 0.1, 0.05, 0.1 at uniform intervals for the \bi{8}{10} bilateral inception module. \subsection{Qualitative Results} \label{sec:qualitative-app} In this section, we present more qualitative results obtained using the BI modules with the explicit kernel computation technique presented in Chapter~\ref{chap:binception}.
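A bilateral inception module with several Gaussian kernels at different feature scales can be sketched in the same explicit-kernel spirit; the per-kernel combination weights and the simple division by the scaled $\sigma$ are illustrative assumptions of this sketch, not the exact implementation:

```python
import numpy as np

def gauss_filter(f, z):
    # explicit Gaussian filtering of activations z given scaled features f
    d2 = ((f[:, None, :] - f[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2)
    return (K / K.sum(axis=1, keepdims=True)) @ z

def bi_module(features, z, alphas, sigma, weights):
    # one filtering per kernel, at feature scale alpha_k * sigma,
    # combined with per-kernel weights w_k (here fixed, not learned)
    return sum(w * gauss_filter(features / (a * sigma), z)
               for a, w in zip(alphas, weights))
```

Each kernel sees the same activations but a differently scaled feature space, so the module mixes filter responses of different effective bandwidths.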
Results on the Pascal VOC12 dataset~\cite{voc2012segmentation} using the DeepLab-LargeFOV model are shown in Fig.~\ref{fig:semantic_visuals-app}, followed by the results on the MINC dataset~\cite{bell2015minc} in Fig.~\ref{fig:material_visuals-app} and on the Cityscapes dataset~\cite{Cordts2015Cvprw} in Fig.~\ref{fig:street_visuals-app}. \definecolor{voc_1}{RGB}{0, 0, 0} \definecolor{voc_2}{RGB}{128, 0, 0} \definecolor{voc_3}{RGB}{0, 128, 0} \definecolor{voc_4}{RGB}{128, 128, 0} \definecolor{voc_5}{RGB}{0, 0, 128} \definecolor{voc_6}{RGB}{128, 0, 128} \definecolor{voc_7}{RGB}{0, 128, 128} \definecolor{voc_8}{RGB}{128, 128, 128} \definecolor{voc_9}{RGB}{64, 0, 0} \definecolor{voc_10}{RGB}{192, 0, 0} \definecolor{voc_11}{RGB}{64, 128, 0} \definecolor{voc_12}{RGB}{192, 128, 0} \definecolor{voc_13}{RGB}{64, 0, 128} \definecolor{voc_14}{RGB}{192, 0, 128} \definecolor{voc_15}{RGB}{64, 128, 128} \definecolor{voc_16}{RGB}{192, 128, 128} \definecolor{voc_17}{RGB}{0, 64, 0} \definecolor{voc_18}{RGB}{128, 64, 0} \definecolor{voc_19}{RGB}{0, 192, 0} \definecolor{voc_20}{RGB}{128, 192, 0} \definecolor{voc_21}{RGB}{0, 64, 128} \definecolor{voc_22}{RGB}{128, 64, 128} \begin{figure*}[!ht] \small \centering \fcolorbox{white}{voc_1}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Background~~ \fcolorbox{white}{voc_2}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Aeroplane~~ \fcolorbox{white}{voc_3}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Bicycle~~ \fcolorbox{white}{voc_4}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Bird~~ \fcolorbox{white}{voc_5}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Boat~~ \fcolorbox{white}{voc_6}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Bottle~~ \fcolorbox{white}{voc_7}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Bus~~ \fcolorbox{white}{voc_8}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Car~~\\ \fcolorbox{white}{voc_9}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Cat~~ \fcolorbox{white}{voc_10}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Chair~~ \fcolorbox{white}{voc_11}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Cow~~ \fcolorbox{white}{voc_12}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Dining Table~~
\fcolorbox{white}{voc_13}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Dog~~ \fcolorbox{white}{voc_14}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Horse~~ \fcolorbox{white}{voc_15}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Motorbike~~ \fcolorbox{white}{voc_16}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Person~~\\ \fcolorbox{white}{voc_17}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Potted Plant~~ \fcolorbox{white}{voc_18}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Sheep~~ \fcolorbox{white}{voc_19}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Sofa~~ \fcolorbox{white}{voc_20}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Train~~ \fcolorbox{white}{voc_21}{\rule{0pt}{4pt}\rule{4pt}{0pt}} TV monitor~~\\ \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2008_001308_given.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2008_001308_sp.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2008_001308_gt.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2008_001308_cnn.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2008_001308_crf.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2008_001308_ours.png} }\\[-2ex] \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2008_001821_given.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2008_001821_sp.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2008_001821_gt.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2008_001821_cnn.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2008_001821_crf.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2008_001821_ours.png} }\\[-2ex] \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2008_004612_given.png} } \subfigure{% 
\includegraphics[width=.15\columnwidth]{figures/supplementary/2008_004612_sp.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2008_004612_gt.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2008_004612_cnn.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2008_004612_crf.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2008_004612_ours.png} }\\[-2ex] \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2009_001008_given.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2009_001008_sp.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2009_001008_gt.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2009_001008_cnn.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2009_001008_crf.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2009_001008_ours.png} }\\[-2ex] \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2009_004497_given.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2009_004497_sp.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2009_004497_gt.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2009_004497_cnn.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2009_004497_crf.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2009_004497_ours.png} }\\[-2ex] \setcounter{subfigure}{0} \subfigure[\scriptsize Input]{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2010_001327_given.png} } \subfigure[\scriptsize Superpixels]{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2010_001327_sp.png} } \subfigure[\scriptsize GT]{% 
\includegraphics[width=.15\columnwidth]{figures/supplementary/2010_001327_gt.png} } \subfigure[\scriptsize Deeplab]{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2010_001327_cnn.png} } \subfigure[\scriptsize +DenseCRF]{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2010_001327_crf.png} } \subfigure[\scriptsize Using BI]{% \includegraphics[width=.15\columnwidth]{figures/supplementary/2010_001327_ours.png} } \mycaption{Semantic Segmentation}{Example results of semantic segmentation on the Pascal VOC12 dataset. (d)~depicts the DeepLab CNN result, (e)~CNN + 10 steps of mean-field inference, (f)~result obtained with bilateral inception (BI) modules (\bi{6}{2}+\bi{7}{6}) between \fc~layers.} \label{fig:semantic_visuals-app} \end{figure*} \definecolor{minc_1}{HTML}{771111} \definecolor{minc_2}{HTML}{CAC690} \definecolor{minc_3}{HTML}{EEEEEE} \definecolor{minc_4}{HTML}{7C8FA6} \definecolor{minc_5}{HTML}{597D31} \definecolor{minc_6}{HTML}{104410} \definecolor{minc_7}{HTML}{BB819C} \definecolor{minc_8}{HTML}{D0CE48} \definecolor{minc_9}{HTML}{622745} \definecolor{minc_10}{HTML}{666666} \definecolor{minc_11}{HTML}{D54A31} \definecolor{minc_12}{HTML}{101044} \definecolor{minc_13}{HTML}{444126} \definecolor{minc_14}{HTML}{75D646} \definecolor{minc_15}{HTML}{DD4348} \definecolor{minc_16}{HTML}{5C8577} \definecolor{minc_17}{HTML}{C78472} \definecolor{minc_18}{HTML}{75D6D0} \definecolor{minc_19}{HTML}{5B4586} \definecolor{minc_20}{HTML}{C04393} \definecolor{minc_21}{HTML}{D69948} \definecolor{minc_22}{HTML}{7370D8} \definecolor{minc_23}{HTML}{7A3622} \definecolor{minc_24}{HTML}{000000} \begin{figure*}[!ht] \small \centering \fcolorbox{white}{minc_1}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Brick~~ \fcolorbox{white}{minc_2}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Carpet~~ \fcolorbox{white}{minc_3}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Ceramic~~ \fcolorbox{white}{minc_4}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Fabric~~ \fcolorbox{white}{minc_5}{\rule{0pt}{4pt}\rule{4pt}{0pt}}
Foliage~~ \fcolorbox{white}{minc_6}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Food~~ \fcolorbox{white}{minc_7}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Glass~~ \fcolorbox{white}{minc_8}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Hair~~\\ \fcolorbox{white}{minc_9}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Leather~~ \fcolorbox{white}{minc_10}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Metal~~ \fcolorbox{white}{minc_11}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Mirror~~ \fcolorbox{white}{minc_12}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Other~~ \fcolorbox{white}{minc_13}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Painted~~ \fcolorbox{white}{minc_14}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Paper~~ \fcolorbox{white}{minc_15}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Plastic~~\\ \fcolorbox{white}{minc_16}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Polished Stone~~ \fcolorbox{white}{minc_17}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Skin~~ \fcolorbox{white}{minc_18}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Sky~~ \fcolorbox{white}{minc_19}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Stone~~ \fcolorbox{white}{minc_20}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Tile~~ \fcolorbox{white}{minc_21}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Wallpaper~~ \fcolorbox{white}{minc_22}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Water~~ \fcolorbox{white}{minc_23}{\rule{0pt}{4pt}\rule{4pt}{0pt}} Wood~~\\ \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000008468_given.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000008468_sp.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000008468_gt.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000008468_cnn.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000008468_crf.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000008468_ours.png} }\\[-2ex] \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000009053_given.png} } \subfigure{% 
\includegraphics[width=.15\columnwidth]{figures/supplementary/000009053_sp.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000009053_gt.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000009053_cnn.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000009053_crf.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000009053_ours.png} }\\[-2ex] \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000014977_given.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000014977_sp.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000014977_gt.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000014977_cnn.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000014977_crf.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000014977_ours.png} }\\[-2ex] \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000022922_given.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000022922_sp.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000022922_gt.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000022922_cnn.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000022922_crf.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000022922_ours.png} }\\[-2ex] \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000025711_given.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000025711_sp.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000025711_gt.png} } \subfigure{% 
\includegraphics[width=.15\columnwidth]{figures/supplementary/000025711_cnn.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000025711_crf.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000025711_ours.png} }\\[-2ex] \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000034473_given.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000034473_sp.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000034473_gt.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000034473_cnn.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000034473_crf.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000034473_ours.png} }\\[-2ex] \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000035463_given.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000035463_sp.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000035463_gt.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000035463_cnn.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000035463_crf.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000035463_ours.png} }\\[-2ex] \setcounter{subfigure}{0} \subfigure[\scriptsize Input]{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000035993_given.png} } \subfigure[\scriptsize Superpixels]{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000035993_sp.png} } \subfigure[\scriptsize GT]{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000035993_gt.png} } \subfigure[\scriptsize AlexNet]{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000035993_cnn.png} } \subfigure[\scriptsize +DenseCRF]{% 
\includegraphics[width=.15\columnwidth]{figures/supplementary/000035993_crf.png} } \subfigure[\scriptsize Using BI]{% \includegraphics[width=.15\columnwidth]{figures/supplementary/000035993_ours.png} } \mycaption{Material Segmentation}{Example results of material segmentation. (d)~depicts the AlexNet CNN result, (e)~CNN + 10 steps of mean-field inference, (f)~result obtained with bilateral inception (BI) modules (\bi{7}{2}+\bi{8}{6}) between \fc~layers.} \label{fig:material_visuals-app} \end{figure*} \definecolor{city_1}{RGB}{128, 64, 128} \definecolor{city_2}{RGB}{244, 35, 232} \definecolor{city_3}{RGB}{70, 70, 70} \definecolor{city_4}{RGB}{102, 102, 156} \definecolor{city_5}{RGB}{190, 153, 153} \definecolor{city_6}{RGB}{153, 153, 153} \definecolor{city_7}{RGB}{250, 170, 30} \definecolor{city_8}{RGB}{220, 220, 0} \definecolor{city_9}{RGB}{107, 142, 35} \definecolor{city_10}{RGB}{152, 251, 152} \definecolor{city_11}{RGB}{70, 130, 180} \definecolor{city_12}{RGB}{220, 20, 60} \definecolor{city_13}{RGB}{255, 0, 0} \definecolor{city_14}{RGB}{0, 0, 142} \definecolor{city_15}{RGB}{0, 0, 70} \definecolor{city_16}{RGB}{0, 60, 100} \definecolor{city_17}{RGB}{0, 80, 100} \definecolor{city_18}{RGB}{0, 0, 230} \definecolor{city_19}{RGB}{119, 11, 32} \begin{figure*}[!ht] \small \centering \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00000_016005_given.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00000_016005_sp.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00000_016005_gt.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00000_016005_cnn.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00000_016005_ours.png} }\\[-2ex] \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00000_004617_given.png} } \subfigure{% 
\includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00000_004617_sp.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00000_004617_gt.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00000_004617_cnn.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00000_004617_ours.png} }\\[-2ex] \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00000_020880_given.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00000_020880_sp.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00000_020880_gt.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00000_020880_cnn.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00000_020880_ours.png} }\\[-2ex] \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00001_007285_given.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00001_007285_sp.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00001_007285_gt.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00001_007285_cnn.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00001_007285_ours.png} }\\[-2ex] \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00001_059789_given.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00001_059789_sp.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00001_059789_gt.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00001_059789_cnn.png} } \subfigure{% 
\includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00001_059789_ours.png} }\\[-2ex] \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00001_068208_given.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00001_068208_sp.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00001_068208_gt.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00001_068208_cnn.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00001_068208_ours.png} }\\[-2ex] \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00001_082466_given.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00001_082466_sp.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00001_082466_gt.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00001_082466_cnn.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/frankfurt00001_082466_ours.png} }\\[-2ex] \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/lindau00033_000019_given.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/lindau00033_000019_sp.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/lindau00033_000019_gt.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/lindau00033_000019_cnn.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/lindau00033_000019_ours.png} }\\[-2ex] \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/lindau00052_000019_given.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/lindau00052_000019_sp.png} } \subfigure{% 
\includegraphics[width=.18\columnwidth]{figures/supplementary/lindau00052_000019_gt.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/lindau00052_000019_cnn.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/lindau00052_000019_ours.png} }\\[-2ex] \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/lindau00027_000019_given.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/lindau00027_000019_sp.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/lindau00027_000019_gt.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/lindau00027_000019_cnn.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/supplementary/lindau00027_000019_ours.png} }\\[-2ex] \setcounter{subfigure}{0} \subfigure[\scriptsize Input]{% \includegraphics[width=.18\columnwidth]{figures/supplementary/lindau00029_000019_given.png} } \subfigure[\scriptsize Superpixels]{% \includegraphics[width=.18\columnwidth]{figures/supplementary/lindau00029_000019_sp.png} } \subfigure[\scriptsize GT]{% \includegraphics[width=.18\columnwidth]{figures/supplementary/lindau00029_000019_gt.png} } \subfigure[\scriptsize Deeplab]{% \includegraphics[width=.18\columnwidth]{figures/supplementary/lindau00029_000019_cnn.png} } \subfigure[\scriptsize Using BI]{% \includegraphics[width=.18\columnwidth]{figures/supplementary/lindau00029_000019_ours.png} } \mycaption{Street Scene Segmentation}{Example results of street scene segmentation. (d)~depicts the DeepLab results, (e)~the result obtained by adding bilateral inception (BI) modules (\bi{6}{2}+\bi{7}{6}) between \fc~layers.} \label{fig:street_visuals-app} \end{figure*} \chapter{Introduction} \label{chap:intro} Computer vision is the task of inferring properties of the world from observed visual data.
The observed visual data can originate from a variety of sensors, such as color or depth cameras, laser scanners, etc. The properties of the world range from low-level material properties such as reflectance to high-level object properties such as 3D shape and pose. The field of computer vision encompasses a broad range of problems involving a variety of sensor data and world properties. Example problems include inferring the 3D pose and shape of an object from a depth image, or inferring the actions of persons in a video. Computer vision is hard because of the variability in lighting, shape and texture in a scene. Moreover, there is sensor noise, and image formation is non-additive due to occlusion. Vision problems are inherently ambiguous, as the sensor data is an incomplete representation of the richer 3D world. As a result, probabilistic frameworks are typically employed to deal with such ambiguity. Following~\cite{prince2012computer}, there are three main components in any vision system: a \textit{model}, a \textit{learning} algorithm and an \textit{inference} algorithm. The \textit{model} forms the core of any vision system, describing the mathematical relationship between the observed data and the desired properties of the world. The parameters and/or structure of this mathematical model are learned using a \textit{learning} algorithm. Once the model is learned, an \textit{inference} algorithm is used to predict the world properties from a given observation. Since models form the core of any vision system, let us briefly discuss the two broad categories of computer vision models, \textit{generative} and \textit{discriminative} models, which can be viewed as complementary and inverse to each other. Let us denote the observed data as a vector of random variables $\mathbf{x} \in \mathbb{R}^k$ and the target world properties as another vector of random variables $\mathbf{y} \in \mathbb{R}^l$.
For example, $\mathbf{x}$ can be a vectorized representation of image pixels and $\mathbf{y}$ a vector representing the parameterized shape of an object in the image. Generative models characterize the probability of the observed data given the world properties, $P(\mathbf{x}|\mathbf{y},\mathbf{\theta})$ (the `likelihood'), as well as a prior on the target variables, $P(\mathbf{y})$, where $\mathbf{\theta}$ denotes the parameters of the model. Example generative models include graphics systems and probabilistic graphical models. In this thesis, we use the term `generative' loosely, in the sense that any model which characterizes the likelihood $P(\mathbf{x}|\mathbf{y},\mathbf{\theta})$ and/or a prior over the target variables $P(\mathbf{y})$ is considered `generative'. Discriminative models characterize the probability of the world properties given the observed data, $P(\mathbf{y}|\mathbf{x},\mathbf{\theta})$ (the `posterior distribution'). In other words, generative models describe the image formation process as a function of the world parameters, whereas discriminative approaches model the target world parameters as a function of the given image. Once the model is defined, a learning algorithm is used to learn the model parameters $\theta$, and an inference algorithm is then used to predict the posterior distribution $P(\mathbf{y}|\mathbf{x},\mathbf{\theta})$. In Chapter~\ref{chap:models}, we discuss these models in more detail, along with inference and learning in them. Depending on the type of model, a specialized learning or inference algorithm may not be required. For example, in the case of manually specified generative models (e.g.\ a graphics system or a fully specified Bayesian network), there is no need for a specialized learning algorithm since all the model parameters are already hand-specified, but specialized inference techniques are required to invert such models.
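The two model families are linked by Bayes' rule: inverting a generative model means computing the posterior that a discriminative model represents directly,
\begin{equation*}
P(\mathbf{y}\,|\,\mathbf{x},\mathbf{\theta}) = \frac{P(\mathbf{x}\,|\,\mathbf{y},\mathbf{\theta})\,P(\mathbf{y})}{\int P(\mathbf{x}\,|\,\mathbf{y}',\mathbf{\theta})\,P(\mathbf{y}')\,\mathrm{d}\mathbf{y}'},
\end{equation*}
and it is the normalizing integral over all possible world states $\mathbf{y}'$ that generally makes exact posterior inference in generative models intractable.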
In the case of discriminative models, where the posterior distribution $P(\mathbf{y}|\mathbf{x},\mathbf{\theta})$ is directly modeled (e.g.\ neural networks or random forests), the inference mechanism reduces to a simple evaluation of the model. This dissertation focuses on improving \textit{inference} in prominent computer vision models. Inference plays a crucial role in any vision system, as it produces the desired end result for a given observation. Breakthroughs in computer vision are often marked by advances in inference techniques, since even model design is often dictated by the complexity of inference. In the end, the inference result is what matters: a high-fidelity model with no feasible or practical inference scheme (for instance, recent photo-realistic graphics systems) is of little use for addressing vision problems. Thus, better inference techniques not only improve existing computer vision systems but also help to develop better models. This thesis proposes techniques for better inference in existing and widely used computer vision models. \section{Thesis Overview} In this section, we discuss the objective of this thesis, followed by the organization and contributions of the various chapters. \subsection{Objective} The main aim of the work presented in this thesis is to improve the performance of inference algorithms used in different computer vision models. Inference is highly inter-linked with model design, and depending on the type of model, we propose different techniques to improve inference. Generative models characterize the image formation process, and inference in them is typically performed via Bayesian inference techniques. Despite their intuitive appeal, the use of generative models in vision is hampered by the difficulty of posterior inference~(estimating $P(\mathbf{y}|\mathbf{x},\mathbf{\theta})$).
Existing inference techniques are often either too complex or too slow to be practical. In this thesis, we aim to alleviate some of these inference challenges in generative models. Specifically, we concentrate on improving two different inference schemes for generative vision models: the first is Markov chain Monte Carlo (MCMC) inference for inverting graphics engines and the second is message passing inference in layered graphical models. A common strategy that we follow to improve inference in generative models is to learn a new discriminative model that is separate from the given generative model and to propose modified inference schemes that make use of this new discriminative model for better inference. Discriminative models directly model the posterior distribution $P(\mathbf{y}|\mathbf{x},\mathbf{\theta})$ of the desired world parameters given the observed data. Thus inference amounts to a simple evaluation of the model. One of the main limitations of inference in discriminative models is the lack of principled techniques to incorporate prior knowledge about the task. This is especially the case with the prominent convolutional neural network (CNN) models. In this thesis, we concentrate on CNN models and propose techniques to improve inference in them. We modify the original CNN models and make them more amenable to the incorporation of prior knowledge. In summary, the aim of this thesis is to improve inference in general computer vision models. We do this by leveraging machine learning techniques to learn a new model for inference that is either separate from the original model (in the case of generative models) or a modification of the original model itself (in the case of discriminative models). The work in this thesis deals with the construction and learning of such inference models and how they can be integrated into the original vision models for better inference. 
We propose techniques for inference in diverse computer vision models ranging from hand-specified graphics systems to freely-parameterized neural network models. We concentrate on three types of models which are prevalent in modern computer vision systems: 1.~graphics systems; 2.~layered graphical models; and 3.~convolutional neural networks. \subsection{Organization and Contributions} Since models form the core of any vision system and this thesis involves the construction of new models for inference, in Chapter~\ref{chap:models} we give an overview of different computer vision models along with the learning and inference mechanisms that are usually employed in them. In addition, we review some existing techniques that aim to improve inference in vision models by combining generative and discriminative approaches. \vspace{-0.3cm} \paragraph{Part I: Inference in Generative Vision Models} In Part I (Chapters~\ref{chap:infsampler} and~\ref{chap:cmp}) of the thesis, we propose techniques for inference in generative computer vision models. \vspace{-0.3cm} \paragraph{Chapter~\ref{chap:infsampler} - The Informed Sampler} In this chapter, we propose a new sampling technique for inference in complex generative models such as graphics systems. Markov chain Monte Carlo (MCMC) sampling is one of the most widely used and most generic inference schemes for such complex generative models. Although generic, in practice MCMC sampling suffers from slow convergence unless the posterior is a unimodal low-dimensional distribution. By leveraging discriminative learning techniques with ancillary clustering and random forest models, we devise a mixture sampling technique that mixes faster without sacrificing the acceptance rate. We call this the `Informed Sampler' and demonstrate it using challenging generative graphics models and a popular model of human bodies~\cite{hirshberg2012coregistration}. 
Our method is similar in spirit to `Data-Driven Markov Chain Monte Carlo' methods~\cite{zhu2000integrating}. \vspace{-0.3cm} \paragraph{Chapter~\ref{chap:cmp} - Consensus Message Passing} In this chapter, we concentrate on layered and loopy graphical models that are prevalent in computer vision applications. When the factors (relationships between the variables) in such graphical models are from a pre-defined family of distributions, inference is generally performed using standard message passing techniques such as `expectation propagation'~\cite{Minka2001} and `variational message passing'~\cite{Winn2005}. We observe that these inference techniques fail to converge, or even diverge, when the graphical model is loopy with a large number of variables. The failure of these inference techniques can be attributed to the algorithm's inability to determine the values of a relatively small number of influential variables which we call global variables (e.g.\ the light in a scene). Without accurate estimation of these global variables, it can be very difficult for message passing to make meaningful progress on the other variables in the model. As a remedy, we exploit the layered structure of the model and train ancillary random forest models that predict these influential variables, and use them for better message passing inference. We call this method `Consensus Message Passing' (CMP) and demonstrate it on a variety of layered vision models. Experiments show that CMP leads to significantly more accurate inference results whilst preserving the computational efficiency of standard message passing. \vspace{-0.3cm} \paragraph{Part II: Inference in Discriminative Vision Models} In Part II (Chapters~\ref{chap:bnn},~\ref{chap:vpn} and~\ref{chap:binception}) of the thesis, we focus on inference in discriminative CNN models. \vspace{-0.3cm} \paragraph{Chapter~\ref{chap:bnn} - Learning Sparse High Dimensional Filters} 2D spatial convolutions form the basic unit of CNN models. 
Spatial convolutions are perhaps the simplest, fastest and most used way of propagating information across pixels. Despite their staggering success in a wide range of vision tasks, spatial convolutions have several drawbacks: there are no well-established ways of incorporating prior knowledge into spatial filters; spatial convolutions quickly become intractable when filtering data of increasing dimensionality; and the receptive fields of the filters are image-agnostic. Spatial convolutions are usually confined to a local neighborhood of pixels, and thus many deep layers of spatial convolutions or post-processing conditional random field (CRF) formulations are required for long-range propagation of information across pixels. Bilateral filtering~\cite{aurich1995non, tomasi1998bilateral}, on the other hand, provides a simple yet powerful framework for long-range information propagation across pixels. But the traditional use of bilateral filtering is confined to a manually chosen parametric form, usually a Gaussian filter. In this chapter, we generalize the bilateral filter parameterization using a sparse high-dimensional linear approximation and derive a gradient descent algorithm, so the filter parameters can be learned from data. We demonstrate the use of learned bilateral filters in several diverse applications where Gaussian bilateral filters are traditionally employed: color up-sampling, depth up-sampling~\cite{kopf2007joint} and 3D mesh denoising~\cite{fleishman2003bilateral}. The ability to learn generic high-dimensional sparse filters allows us to stack several parallel and sequential filters, as in convolutional neural networks (CNN), resulting in a generalization of 2D CNNs which we call `Bilateral Neural Networks' (BNN). We demonstrate the use of BNNs on an illustrative segmentation problem and sparse character recognition. 
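To make the fixed parametric form concrete, the following is a minimal brute-force sketch of a Gaussian bilateral filter on a grayscale image, the baseline that this chapter generalizes. The window radius, bandwidths and test image are illustrative choices, not values from this thesis.

```python
import numpy as np

def gaussian_bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.2):
    """Brute-force Gaussian bilateral filter on a 2D grayscale image.

    Each output pixel is a weighted average of its neighbours, where the
    weight decays with both spatial distance (sigma_s) and intensity
    difference (sigma_r) -- so noise is smoothed but edges are preserved.
    """
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(i - radius, 0), min(i + radius + 1, h)
            j0, j1 = max(j - radius, 0), min(j + radius + 1, w)
            patch = img[i0:i1, j0:j1]
            yy, xx = np.mgrid[i0:i1, j0:j1]
            spatial = ((yy - i) ** 2 + (xx - j) ** 2) / (2 * sigma_s ** 2)
            rangew = (patch - img[i, j]) ** 2 / (2 * sigma_r ** 2)
            wgt = np.exp(-spatial - rangew)
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

# A noisy step edge: the filter reduces the noise but keeps the edge sharp.
rng = np.random.default_rng(0)
img = np.hstack([np.zeros((8, 8)), np.ones((8, 8))])
img = img + 0.05 * rng.standard_normal((8, 16))
smoothed = gaussian_bilateral_filter(img)
```

With the range bandwidth `sigma_r` small relative to the edge height, pixels on opposite sides of the edge receive near-zero weight for each other, which is what distinguishes this from a plain Gaussian blur.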
Gaussian bilateral filters are also employed for mean-field inference in fully connected conditional random fields (DenseCRF)~\cite{krahenbuhl2012efficient}. Existing works on DenseCRFs are confined to using Gaussian pairwise potentials due to the traditional use of Gaussian kernels in bilateral filtering. By learning bilateral filters, we remove the need to confine ourselves to Gaussian pairwise potentials, which has the added advantage of directly learning the pairwise potentials for a given task. We showcase the use of learned edge potentials in DenseCRF with experiments on semantic segmentation and material segmentation. In summary, we propose a general technique for learning sparse high-dimensional filters that improves the model and inference in DenseCRF models and also generalizes 2D CNNs. \vspace{-0.3cm} \paragraph{Chapter~\ref{chap:vpn} - Video Propagation Networks} Videos carry redundant information across frames, and information propagation across video frames is valuable for many computer vision applications such as video segmentation and color propagation. In this chapter, we propose a novel neural network architecture for video information propagation. We leverage the learnable bilateral filters developed in the previous chapter and propose a `Video Propagation Network' (VPN) that processes video frames in an adaptive manner. The model is applied online: it propagates information forward without the need to access future frames. In particular, we combine two components: a temporal bilateral network for dense, video-adaptive filtering, followed by a spatial network to refine features and increase flexibility. We present experiments on video object segmentation and semantic video segmentation and show improved performance compared to the best previous task-specific methods, while having a favorable runtime. Additionally, we demonstrate our approach on an example regression task of propagating color in a grayscale video. 
\vspace{-0.3cm} \paragraph{Chapter~\ref{chap:binception} - Bilateral Inception Networks} In this chapter, we propose a new CNN module, which we call the `Bilateral Inception' (BI) module, that can be inserted into \emph{existing} segmentation CNN models. BI modules help in image-adaptive long-range information propagation across intermediate CNN units at multiple scales. We show empirically that this alleviates some of the need for standard post-processing inference techniques such as DenseCRF.~In addition, our module helps in recovering the full resolution of the segmentation result, which is generally lost due to max-pooling and striding. Experiments on different base segmentation networks and datasets show that our BI modules result in reliable performance gains in terms of both speed and accuracy in comparison to traditionally employed DenseCRF/Deconvolution techniques and also recently introduced dense pixel prediction techniques. \section{List of Publications} \nobibliography* The contributions in this thesis mainly comprise work from the following publications~\cite{jampani2014,jampani15aistats,kiefel15bnn,arxivpaper,gadde16bilateralinception, jampani16vpn}: \begin{itemize} \item \bibentry{jampani2014}.~\cite{jampani2014} \item \bibentry{jampani15aistats}.~\cite{jampani15aistats} \item \bibentry{kiefel15bnn}.~\cite{kiefel15bnn} \item \bibentry{arxivpaper}.~\cite{arxivpaper} \item \bibentry{gadde16bilateralinception}.~\cite{gadde16bilateralinception} \item \bibentry{jampani16vpn}.~\cite{jampani16vpn} \end{itemize} The following publications~\cite{jampani15wacv,sevilla:CVPR:2016,gadde2016efficient} are a part of my PhD research but are outside the scope of this thesis: \begin{itemize} \item \bibentry{jampani15wacv}.~\cite{jampani15wacv} \item \bibentry{sevilla:CVPR:2016}.~\cite{sevilla:CVPR:2016} \item \bibentry{gadde2016efficient}.~\cite{gadde2016efficient} \end{itemize} \chapter{Models and Inference in Computer Vision} \label{chap:models} Models play a central 
role in the study of both biological and artificial vision. Helmholtz, in the 19th century, popularized the idea that human vision is the result of unconscious psychological inference in learned models~\cite{frey2003advances,cahan1993hermann}, as opposed to purely native processing in the lower visual system or the eyes. With the advent of computers in the 20th century, researchers became able to formulate, learn and evaluate several computational models of vision. More recently, with powerful parallel computing hardware like graphics processing units (GPUs), researchers are able to learn and do inference in highly complex models with millions of parameters. In this chapter, we present an overview of different computer vision models and discuss the inference and learning techniques therein. Since this thesis mainly constitutes the development of new inference techniques, we emphasize the difficulty of inference in different models and discuss several remedies proposed in the literature. \section{Models in Computer Vision} Models describe the mathematical relationship between the observed data and the desired properties of the world. Computer vision models are often probabilistic in nature due to the inherent ambiguity in vision problems. Due to the broad range of problems in computer vision, there is no single model that works well across vision tasks. Depending on the nature of the problem and the availability of data, different models work well in different scenarios. Visual data is complex, with variability arising due to world properties such as occlusion, lighting, texture, geometry, depth ordering etc. It is very difficult to model the relationship between all aspects of the world and the visual data, and to do inference therein. Vision models are usually highly specific to one or a few aspects of the world. 
As briefly mentioned in Chapter~\ref{chap:intro}, computer vision models can be broadly classified into two types: \textit{Generative} and \textit{Discriminative} models, which can be viewed as complementary and inverse to each other. Generative models characterize the probability of observed data given the world properties $P(\mathbf{x}|\mathbf{y},\mathbf{\theta})$ and discriminative models characterize the probability of world properties given the observed data $P(\mathbf{y}|\mathbf{x},\mathbf{\theta})$, where $\mathbf{\theta}$ denotes the parameters of the model. In other words, generative models describe the image formation process as a function of world parameters, whereas discriminative approaches model the desired world parameters as a function of the given image. As mentioned in Chapter~\ref{chap:intro}, we use the term `generative' more loosely in the sense that any model which characterizes the likelihood $P(\mathbf{x}|\mathbf{y},\mathbf{\theta})$ and/or a prior over target variables $P(\mathbf{y})$ is considered a `generative' model. Next, we give an overview of these two complementary model classes, discussing the advantages and disadvantages of both. \begin{figure}[th!] 
\centering \subfigure[Graphics Renderings]{ \setlength\fboxsep{-0.3mm} \setlength\fboxrule{0pt} \parbox[b]{5cm}{ \centering \small \fbox{\includegraphics[width=5cm]{figures/cryengine.jpg}}\\ \vspace{4.5mm} \fbox{\includegraphics[width=5cm]{figures/cryengine_2.jpg}}\\ \vspace{4.5mm} \fbox{\includegraphics[width=5cm]{figures/lumberyard.png}} \\ } \label{fig:sample-graphics} } \hspace{1cm} \subfigure[A face generative model]{ \begin{tikzpicture} % \node[obs] (x) {$x_i$}; % \node[latent, above=of x] (z) {$z_i$}; % \factor[above=of z, yshift=10mm] {times} {below left: $\times$} {} {}; % \node[latent, above=of times, yshift=-5mm] (s) {$s_i$}; % \factor[above=of x, yshift=1mm] {noise1} {left:Gaussian} {} {}; % \node[latent, left=of times] (r) {$r_i$}; % \factor[above=of r] {pr} {} {} {}; % \factor[above=of s, yshift=10mm] {inner} {left:Product} {} {}; % \node[latent, above=of inner, yshift=-5mm] (n) {$\mathbf{n}_i$}; % \node[latent, right=of inner] (l) {$\mathbf{l}$}; % \factor[above=of l] {pl} {} {} {}; % \factor[above=of n] {pn} {} {} {}; % \factoredge {z} {noise1} {x}; % \factoredge {s} {times} {z}; % \factoredge {r} {times} {}; % \factoredge {n} {inner} {s}; % \factoredge {l} {inner} {}; % \factoredge {} {pn} {n}; % \factoredge {} {pl} {l}; % \factoredge {} {pr} {r}; % \plate {} {(pn) (x) (r)} {}; % % % % \node[yshift=0.35cm, xshift=-0.83cm] at (inner) { Inner }; \end{tikzpicture} \label{fig:face-model} } \subfigure[Example face data]{ \setlength\fboxsep{-0.3mm} \setlength\fboxrule{0pt} \parbox[b]{3.4cm}{ \centering \small \fbox{\includegraphics[width=1.3cm]{figures/sample_N.png}}\\ Normal $\{\mathbf{n}_i \}$ \vspace{4.5mm} \\ \fbox{\includegraphics[width=1.3cm]{figures/sample_S.png}} \\ Shading $\{ s_i \}$ \vspace{4.5mm} \\ \fbox{\includegraphics[width=1.3cm]{figures/sample_R.png}} \\ Reflectance $\{ r_i \}$ \vspace{4.5mm} \\ \fbox{\includegraphics[width=1.3cm]{figures/sample_X.png}} \\ Observed image $\{ x_i \}$\\ } \label{fig:face-data} } \mycaption{Sample 
generative models in computer vision} {(a)~Sample renderings from modern graphics engines~\cite{cryengine,lumberyard}. Modern graphics engines provide renderings with a stunning level of realism, and vision problems can be approached as inverting such graphics systems. Images courtesy of the official websites of CryEngine~\cite{cryengine} and Lumberyard~\cite{lumberyard}. (b)~A layered graphical model (factor graph) for faces (explained in Sec.~\ref{sec:pgm}). Here, the vision problem could be inferring the reflectance map, normal map and light direction from a given face image. (c)~Sample face data from the Yale B face dataset~\cite{Georghiades2001,Lee2005}.} \label{fig:sample-gen-models} \end{figure} \section{Generative Vision Models} \label{sec:gen-models} A conceptually elegant view of computer vision is to consider a generative model of the physical image formation process. The observed image becomes a function of unobserved variables of interest (for instance, the presence and positions of objects) and nuisance variables (for instance, light sources and shadows). When building such a generative model, we can think of a scene description $\mathbf{y}$ that produces an image $\mathbf{x}=G(\mathbf{y},\theta)$ using a deterministic rendering engine $G$ with parameters $\mathbf{\theta}$, or, more generally, results in a distribution over images, $P(\mathbf{x}|\mathbf{y},\mathbf{\theta})$. Generative models provide a powerful framework for probabilistic reasoning and are applicable across a wide variety of domains, including computational biology, natural language processing, and computer vision. For example, in computer vision, one can use graphical models to express the process by which a face is lit and rendered into an image, incorporating knowledge of surface normals, lighting and even the approximate symmetry of human faces. 
Models that make effective use of this information will generalize well, and they will require less labelled training data than their discriminative counterparts (e.g.\ random forests or neural networks) in order to make accurate predictions. Given an image observation $\mathbf{x}$ and a prior over scenes $P(\mathbf{y})$, we can then perform \textit{Bayesian inference} to obtain the posterior distribution over the target variables $P(\mathbf{y} | \mathbf{x},\mathbf{\theta})$ (also called `posterior inference'): \begin{equation} P(\mathbf{y} | \mathbf{x},\mathbf{\theta}) = \frac{P(\mathbf{x}|\mathbf{y},\mathbf{\theta})P(\mathbf{y})}{P(\mathbf{x})} = \frac{P(\mathbf{x}|\mathbf{y},\mathbf{\theta})P(\mathbf{y})}{\sum_{\mathbf{y}'} P(\mathbf{x}|\mathbf{y}',\mathbf{\theta})P(\mathbf{y}')}. \label{eqn:bayes} \end{equation} The summation in the denominator runs over all possible values of the variable $\mathbf{y}$ and becomes an integral in the case of continuous variables. $P(\mathbf{x}|\mathbf{y},\mathbf{\theta})$ is the \emph{likelihood} of the observed data. Inspecting the above Bayesian formula shows that it is straightforward to compute the numerator, as that involves simply evaluating the generative model. The key difficulty with Bayesian inference is computing the denominator in Eq.~\ref{eqn:bayes}. Often, it is not feasible to evaluate the summation over all possible target variable values, even for slightly non-trivial models. This makes it difficult to obtain closed-form solutions for posterior inference, resulting in the development and use of several approximate inference techniques such as Markov chain Monte Carlo (MCMC) sampling and variational inference, which we briefly discuss later. There are different types of generative models with varying model fidelity and complexity of inference therein. 
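For intuition, in a fully discrete toy model the posterior in Eq.~\ref{eqn:bayes} can be computed by directly enumerating the denominator. All numbers below are illustrative, not taken from any model in this thesis.

```python
import numpy as np

# Toy discrete model: scene class y in {0,1,2}, binary observation x in {0,1}.
prior = np.array([0.5, 0.3, 0.2])        # P(y)
likelihood = np.array([[0.9, 0.1],       # P(x | y=0)
                       [0.4, 0.6],       # P(x | y=1)
                       [0.2, 0.8]])      # P(x | y=2)

x = 1                                    # observed value
numerator = likelihood[:, x] * prior     # P(x|y) P(y) for every y
evidence = numerator.sum()               # denominator: sum over all y'
posterior = numerator / evidence         # P(y | x)
```

With three states the sum in the denominator is trivial; the difficulty described in the text arises because in realistic models this sum (or integral) ranges over an exponentially large or continuous space of scenes.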
In general, generative models which accurately model the image formation process (e.g.\ graphics engines) involve complex non-linear functions, resulting in a challenging inference task. Building a good generative model for a given task involves finding a good trade-off between model fidelity and inference complexity. Figure~\ref{fig:sample-gen-models} shows some sample generative models in computer vision. Advances in graphics and the physics of light transport have resulted in generative models with high fidelity, as is evident in modern computer games and animation movies. But it is difficult to invert such complex models. Probabilistic graphical models provide the widely adopted framework for generative computer vision models. Although graphical models have lower fidelity and only model one or two aspects of the world, they are generally preferred over graphics systems as inference in them is faster and more reliable. Next, we discuss these two prominent generative vision models: \textit{Inverse graphics} and \textit{Probabilistic graphical models}. \subsection{Inverse Graphics} \label{sec:inv-graphics} Modern graphics engines (e.g., game engines like CryEngine~\cite{cryengine} and Lumberyard~\cite{lumberyard}) leverage dedicated hardware setups and provide real-time renderings with a stunning level of realism. Some sample renderings from modern game engines~\cite{cryengine,lumberyard} are shown in Fig.~\ref{fig:sample-graphics}. Vision problems can be tackled via posterior inference (Eq.~\ref{eqn:bayes}) in such accurate computer graphics systems. This approach to solving vision problems can be understood as \textit{Inverse Graphics}~\cite{baumgart1974inversegraphics}. The target variables $\mathbf{y}$ correspond to the input to the graphics system and the observation variables are the output $\mathbf{x}=G(\mathbf{y},\theta)$. 
The deterministic graphics system $G$ can be converted into a probabilistic generative model by defining a prior $P(\mathbf{y})$ over the target variables (the input to the graphics system) and also defining an approximate likelihood function $P(\mathbf{x}|G(\mathbf{y},\theta))$ characterizing the model imperfections. If the model imperfections are neglected, the likelihood can be given by the \emph{delta} function $P(\mathbf{x}|\mathbf{y}) = \delta(\mathbf{x}-G(\mathbf{y},\theta))$. An example `inverse graphics' problem, which we tackle in the next chapter, is depicted in Fig.~\ref{fig:invgraphicsteaser}, where the graphics engine renders a depth image given a 3D body mesh and camera parameters, and a vision problem would be the inverse estimation of the body shape given a depth image. \begin{figure}[t] \begin{center} \centerline{\includegraphics[width=0.7\columnwidth]{figures/invGraphicsDemo.pdf}} \mycaption{An example `inverse graphics' problem}{A graphics engine renders a 3D body mesh and a depth image using an artificial camera. By Inverse Graphics we refer to the process of estimating the posterior probability over possible bodies given the depth image.} \label{fig:invgraphicsteaser} \end{center} \end{figure} Modern graphics engines are based on physics principles and thus most of the rendering parameters $\mathbf{\theta}$ are set to mimic real-world physics. Learning in graphics-based generative models mainly involves learning the prior $P(\mathbf{y})$ over the target world properties which are input to the graphics system. Depending on the type of model, several learned priors have been proposed in the literature. An example is the SCAPE model~\cite{anguelov2005scape,hirshberg2012coregistration} for modeling the prior over human body shape and pose for the example shown in Fig.~\ref{fig:invgraphicsteaser}. 
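A minimal sketch of this setup, with a hypothetical scalar `renderer', a uniform grid prior, and a Gaussian likelihood standing in for the delta function to account for model imperfections. The forward map, noise level and grid are illustrative assumptions, not the models used in this thesis.

```python
import numpy as np

# Hypothetical scalar "graphics engine": a known nonlinear forward map G(y).
def render(y):
    return np.sin(y) + 0.5 * y

y_true = 1.2
x_obs = render(y_true)                    # observed "image"

sigma = 0.1                               # assumed observation noise level
grid = np.linspace(-3.0, 3.0, 1201)       # discretized support of P(y)
prior = np.full_like(grid, 1.0 / grid.size)

# Gaussian likelihood around the renderer output, replacing the delta
# function so that imperfect renderings still receive some probability.
log_lik = -0.5 * ((x_obs - render(grid)) / sigma) ** 2
post = np.exp(log_lik) * prior
post /= post.sum()                        # grid approximation of the posterior
y_map = grid[post.argmax()]               # posterior mode
```

Grid evaluation is only feasible because $\mathbf{y}$ is one-dimensional here; for high-dimensional scene descriptions this enumeration is exactly what becomes intractable, motivating the sampling methods discussed next.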
Since modern rendering engines involve complex non-linear functions, it is usually not feasible to obtain a closed-form solution for the posterior distribution (Eq.~\ref{eqn:bayes}). Even several approximate inference techniques, like variational optimization techniques~\cite{koller2009probabilistic}, cannot be easily employed for posterior inference in complex graphics systems. `Monte Carlo' sampling provides a generic inference technique for such complex models and can be used even when the internals of the graphics system are not known. The aim of sampling methods is to characterize the posterior distribution with \emph{independent and identically distributed} samples. There exist many Monte Carlo sampling strategies such as uniform sampling, rejection sampling, importance sampling~\cite{hammersley1954poor,rosenbluth1955monte} etc. Here, we limit our discussion to the widely used Markov chain Monte Carlo (MCMC) sampling techniques. \subsubsection*{MCMC Sampling} MCMC sampling~\cite{metropolis1953} is a particular instance of sampling methods that generates a sequence of target variables (samples) by simulating a \textit{reversible} Markov chain: \begin{equation} \mathbf{y}_1 \to \mathbf{y}_2 \to \mathbf{y}_3 \to \dots \to \mathbf{y}_n \end{equation} This Markov chain of samples approximates the target distribution (the posterior distribution in the case of `inverse graphics'). The Markov property states that at every sequence step $t$, given the present sample $\mathbf{y}_t$ in the sequence, the next sample $\mathbf{y}_{t+1}$ is independent of all the previous samples: $P(\mathbf{y}_{t+1}|\mathbf{y}_t,\dots,\mathbf{y}_1) = P(\mathbf{y}_{t+1}|\mathbf{y}_t)$. Let us denote the target distribution by $\pi(\cdot)$. In the inverse graphics setting, the target distribution is the posterior $\pi(\mathbf{y})=P(\mathbf{y} | \mathbf{x},\mathbf{\theta})$. 
Further, let us denote the transition probability between two states (samples) in the Markov chain by $T(\mathbf{y}_{t+1}|\mathbf{y}_{t})$ or $T(\mathbf{y}_t\to\mathbf{y}_{t+1}) \in [0,1]$. One way to ensure that the Markov chain is \emph{reversible} and converges to the target distribution is to check whether the following \emph{detailed balance condition} holds for any two states $\mathbf{y}_t$ and $\bar{\mathbf{y}}$~\cite{koller2009probabilistic}: \begin{equation} \pi(\mathbf{y}_t) T(\mathbf{y}_t \to \bar{\mathbf{y}}) = \pi(\bar{\mathbf{y}}) T(\bar{\mathbf{y}} \to \mathbf{y}_t) \end{equation} Note that the above detailed balance condition is trivially satisfied when the transition probability distribution itself equals the target distribution. Since we do not know the target distribution, designing transition probability distributions that satisfy the detailed balance condition is difficult. The Metropolis-Hastings (MH) algorithm~\cite{metropolis1953}, instead of devising special transition probabilities, introduces an \emph{acceptance probability} for each Markov chain transition, $A(\mathbf{y}_t \to \bar{\mathbf{y}}) \in [0,1]$. 
The detailed balance condition then becomes: \begin{equation} \pi(\mathbf{y}_t) T(\mathbf{y}_t \to \bar{\mathbf{y}}) A(\mathbf{y}_t\to\bar{\mathbf{y}}) = \pi(\bar{\mathbf{y}}) T(\bar{\mathbf{y}} \to \mathbf{y}_t) A(\bar{\mathbf{y}}\to\mathbf{y}_t) \end{equation} It can be verified~\cite{koller2009probabilistic} that the following acceptance probability satisfies the above detailed balance condition: \begin{equation} A(\mathbf{y}_t\to\bar{\mathbf{y}}) = \min\left(1,\frac{\pi(\bar{\mathbf{y}}) T(\bar{\mathbf{y}} \to \mathbf{y}_t)}{\pi(\mathbf{y}_t) T(\mathbf{y}_t \to \bar{\mathbf{y}})} \right) \end{equation} With the use of the above acceptance rule, instead of designing task-specific transition probability distributions, any transition probability distribution $T$ with non-zero probability over the range of all target variables can be used for MH sampling. $T$ is also called `proposal distribution' since it is used to propose the next sample in the Markov chain, which is then either accepted or rejected based on the acceptance probability. Below, we summarize the Metropolis-Hastings (MH) MCMC algorithm. \paragraph{Metropolis-Hastings (MH) MCMC:} Sampling from $\pi(\cdot)$ consists of repeating the following two steps~\cite{liu2001montecarlo}: \begin{enumerate} \item Propose a transition using a \textit{proposal distribution $T$} and the current state $\mathbf{y}_t$ \begin{equation*} \bar{\mathbf{y}} \sim T(\cdot|\mathbf{y}_t) \end{equation*} \item Accept or reject the transition based on Metropolis Hastings (MH) acceptance rule: \begin{equation*} \mathbf{y}_{t+1} = \left\{ \begin{array}{cl} \bar{\mathbf{y}}, & \textrm{rand}(0,1) < \min\left(1,\frac{\pi(\bar{\mathbf{y}}) T(\bar{\mathbf{y}} \to \mathbf{y}_t)}{\pi(\mathbf{y}_t) T(\mathbf{y}_t \to \bar{\mathbf{y}})} \right), \\ \mathbf{y}_t, & \textrm{otherwise.} \end{array} \right. \end{equation*} \end{enumerate} Different MCMC techniques mainly differ in the type of the proposal distribution $T$. 
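The two steps above translate almost directly into code. The following is a minimal random-walk Metropolis-Hastings sketch; the bimodal target density, proposal width and chain length are illustrative assumptions, not models from this thesis.

```python
import numpy as np

def metropolis_hastings(log_target, y0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings sampler.

    With a symmetric Gaussian proposal, T(y'|y) = T(y|y'), so the proposal
    terms cancel in the acceptance ratio and only the ratio of
    (unnormalized) target densities is needed.
    """
    rng = np.random.default_rng(seed)
    samples = np.empty(n_steps)
    y, logp = y0, log_target(y0)
    n_accept = 0
    for t in range(n_steps):
        y_prop = y + step * rng.standard_normal()    # step 1: propose
        logp_prop = log_target(y_prop)
        # step 2: Metropolis-Hastings acceptance rule, in log space
        if np.log(rng.uniform()) < logp_prop - logp:
            y, logp = y_prop, logp_prop
            n_accept += 1
        samples[t] = y                               # rejected -> repeat y_t
    return samples, n_accept / n_steps

# Unnormalized bimodal target: equal mixture of Gaussians at -2 and +2.
log_target = lambda y: np.logaddexp(-0.5 * (y - 2.0) ** 2,
                                    -0.5 * (y + 2.0) ** 2)
samples, acc_rate = metropolis_hastings(log_target, y0=0.0, n_steps=20000)
```

Note that only differences of log densities enter the acceptance test, so the normalization constant of the target never has to be computed, which is the property exploited in the inverse graphics setting below.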
Note that we do not need to compute the target (posterior) probabilities themselves, but only the \textit{ratio} of posterior probabilities $\frac{\pi(\bar{\mathbf{y}})}{\pi(\mathbf{y}_t)}$. This makes MCMC sampling suitable for the inverse graphics setting, where it is not feasible to get a closed-form solution for the normalization constant of the posterior distribution (the denominator in Eq.~\ref{eqn:bayes}). A key aspect of MH sampling is the number of steps it requires until it converges to the target distribution. If the proposal distribution is very different from the target distribution, samples tend to be frequently rejected, resulting in a long wait for convergence. In practice, it is difficult to measure the convergence of any sampler since we do not know the target distribution. In Chapter~\ref{chap:infsampler}, we discuss several diagnostic measures that indicate the convergence of MCMC sampling. In the case of `inverse graphics', each forward rendering step takes a considerable amount of time and we would like the sampler to accept as many samples as possible. A key to improving MH sampling efficiency is to design proposal distributions that match the target posterior distribution. In Chapter~\ref{chap:infsampler}, we devise such a technique by leveraging discriminative learning to learn the proposal distribution. Refer to~\cite{liu2001montecarlo,koller2009probabilistic} for more details on MCMC sampling. In Chapter~\ref{chap:infsampler}, we study the behavior of MCMC sampling and its variants for inverting graphics engines and propose techniques for improving sampling efficiency. \subsection{Probabilistic Graphical Models} \label{sec:pgm} Probabilistic graphical models (PGMs) provide a rigorous mathematical framework, based on probability and graph theory, for modeling the relationship between world and image properties. 
PGMs have been popular not only in computer vision but also in related fields such as natural language processing, speech processing, etc. Several model representations, learning and inference schemes have been developed in the PGM literature and even a concise description of them would be a daunting task that is outside the scope of this thesis. Refer to~\cite{koller2009probabilistic} for a comprehensive overview of PGMs. PGMs generally represent input-output relationships with factorized functions and are typically confined to a restricted domain so that efficient inference techniques can be applied. PGMs are popular models of choice when the joint distribution of all the target and observation variables can be factorized into independent distributions, each involving a subset of variables. This factorization of the joint distribution is represented by the structure of the graph, where each node represents a subset of variables and edges between the nodes represent the joint or conditional distributions between the corresponding node variables. 
\begin{figure}[t] \centering \begin{tikzpicture} \node[latent] (var_a) {a}; \node[latent, right=of var_a, xshift=4mm] (var_f) {f}; \node[latent, right=of var_f, xshift=4mm] (var_ce) {c,e}; \node[latent, below=of var_f, yshift=-4mm] (var_b) {b}; \node[latent, below=of var_ce, yshift=-4mm] (var_d) {d}; \factor[left=of var_a, xshift=-2mm] {factor_a} {$i$} {} {}; \factor[left=of var_f, xshift=-2mm] {factor_f} {$m$} {} {}; \factor[below right=of var_a, xshift=5mm, yshift=-5mm] {factor_ab} {$j$} {} {}; \factor[above right=of var_b, xshift=4mm, yshift=4mm] {factor_fceb} {$l$} {} {}; \factor[right =of var_b, xshift=2mm] {factor_bd} {$k$} {} {}; \factor[right =of var_f, xshift=2mm] {factor_fce} {$n$} {} {}; \edge[-] {factor_a} {var_a}; \edge[-] {factor_f} {var_f}; \edge[-] {var_a} {var_b}; \edge[-] {var_b} {var_ce}; \edge[-] {var_f} {factor_fceb}; \edge[-] {var_b} {var_d}; \edge[-] {var_f} {var_ce}; \edge[->, transform canvas={xshift=8.4mm,yshift=-2.4mm,scale=0.8},black!60!green] {var_ce} {factor_fceb}; \edge[->, transform canvas={xshift=4mm,yshift=-2mm,scale=0.8},black!60!green] {var_f} {factor_fceb}; \edge[->, transform canvas={xshift=6.5mm,yshift=-4mm,scale=0.8},red] {factor_fceb} {var_b}; \edge[->, transform canvas={xshift=0.56cm,yshift=-5.7mm,scale=0.8},black!60!green] {var_b} {factor_bd}; \edge[->, transform canvas={xshift=0.2cm,yshift=-4mm,scale=0.8},red] {factor_ab} {var_b}; \end{tikzpicture} \mycaption{An example factor graph}{Variable nodes (circles) and factor nodes (squares) representing the factorized function $P(a,b,c,d,e,f)=\psi_i(a)\psi_j(a,b)\psi_k(b,d)\psi_l(b,c,e,f)\psi_m(f)\psi_n(c,e,f)$. Also shown are sample variable-to-factor messages (green arrows) and factor-to-variable messages (red arrows).} \label{fig:example-graph} \end{figure} \subsubsection{Factor Graph Representation} Factor graphs provide a useful visualization and mathematical formalization for representing probabilistic graphical models. 
Factor graphs are \emph{bipartite} graphs whose nodes are divided into two types: `variable' nodes, represented as circles, and `factor' nodes, represented as squares. Variable nodes represent the random variables in the graphical model, and factor nodes represent the statistical relationships between the variable nodes they are connected to. Every edge in a factor graph connects a variable node to a factor node; that is, there are no \emph{direct} edges among variable nodes or among factor nodes. An example factor graph is shown in Fig.~\ref{fig:example-graph}, where we denote variable nodes as $a,b,c,\cdots$ and factor nodes as $i,j,k,\cdots$. The factor function associated with a factor node $i$ is denoted $\psi_i$, and the states of the variables associated with a variable node $a$ are denoted $x_a$. The joint function in Fig.~\ref{fig:example-graph}, $P(a,b,c,d,e,f)$, is factorized into six independent factors $\psi_i(a)\psi_j(a,b)\psi_k(b,d)\psi_l(b,c,e,f)\psi_m(f)\psi_n(c,e,f)$. Each factor in the graph represents a function, and the product of all factor functions makes up the joint function. In the context of probabilistic generative models, these functions are probability distributions. The edges in a factor graph can be either directed or undirected, representing conditional and joint distributions respectively. In the case of discrete variables, the factor functions are probability tables assigning probabilities to all states of the factor variables; in the case of continuous variables, they are probability density functions. Another example of a factor graph, representing a generative model of faces, is shown in Fig.~\ref{fig:face-model}; we discuss this face model later in this section. Graphical representations like factor graphs have several advantages. They provide an intuitive way of representing the statistical dependencies between variables. 
Given the factor graph, it is easy to read off \emph{conditional independence} between variables. For a factor graph with undirected edges, variables $s$ and $t$ are independent given a set of variables $\nu$ ($s \mathrel{\text{\scalebox{1}{$\perp\mkern-10mu\perp$}}} t | \nu$) if every path between $s$ and $t$ contains some node $v\in\nu$. For instance, in the factor graph shown in Fig.~\ref{fig:example-graph}, variable $d$ is independent of $a,c,e$ and $f$ when $b$ is observed: $d \mathrel{\text{\scalebox{1}{$\perp\mkern-10mu\perp$}}} a,c,e,f | b$. Perhaps the most important advantage of graphical representations like factor graphs is that Bayesian inference can be performed by exploiting the graph structure. For example, the widely used \emph{message-passing} inference is performed by passing messages between factor and variable nodes. \subsubsection{Message Passing} Message passing (MP) inference forms a general class of algorithms for estimating factor graph distributions, i.e., computing marginals of or maximizing the joint distribution ($P(a,b,c,d,e,f)$ in Fig.~\ref{fig:example-graph}). MP inference proceeds by passing messages (distributions or density functions) between variable and factor nodes. Here, we describe message passing in the well-known `Sum-Product' belief propagation (BP) algorithm. Messages are probability distributions that represent the beliefs over the variables to/from which the messages are sent. MP inference in factor graphs has two types of messages: messages from variable to factor nodes $\mu_{a\rightarrow i}$ (green arrows in Fig.~\ref{fig:example-graph}) and messages from factor to variable nodes $\mu_{i\rightarrow a}$ (red arrows in Fig.~\ref{fig:example-graph}). In the case of Sum-Product BP, these messages are defined as follows. 
\textit{Variable-to-Factor Message:} A message from a variable node to a factor node is the product of all the messages that the variable node receives from its neighboring factor nodes, except the recipient factor node. In Fig.~\ref{fig:example-graph}, the message $\mu_{b \rightarrow j}$ from variable $b$ to factor $j$ is the product of the other messages that $b$ receives, i.e., $\mu_{k \rightarrow b} \mu_{l \rightarrow b}$. In general, a message from a variable node $g$ to a factor node $r$ is defined as: \begin{equation} \mu_{g \rightarrow r}(x_g) = \prod_{\hat{r}\in N(g)\setminus \{r\}} \mu_{\hat{r} \rightarrow g}(x_g), \end{equation} where $N(g)$ is the set of nodes neighboring $g$ and $x_g$ represents the states of $g$. \textit{Factor-to-Variable Message:} A message from a factor node to a variable node is first computed as the product of the factor function with all the incoming messages from variables except the target variable. The resulting product is then marginalized over all variables except the target variable. For example, in Fig.~\ref{fig:example-graph}, the message from $l$ to $b$ is computed by marginalizing the product $\psi_l(b,c,e,f)\mu_{c,e\rightarrow l}\mu_{f\rightarrow l}$ over the non-target variables $c,e,f$. In general, a message from a factor node $r$ to a variable node $g$ is defined as: \begin{equation} \mu_{r\to g}(x_g)=\sum_{x': V_{r}\setminus \{g\}} \psi_{r}(x')\prod_{\hat{g}\in N(r)\setminus \{g\}}\mu_{\hat{g}\to r}(x_{\hat{g}}), \end{equation} where $N(r)$ denotes the nodes neighboring $r$ and $V_r$ denotes all the variables attached to $r$. The summation in the above equation becomes an integral when dealing with continuous variables. In a typical message passing run, the above messages are repeatedly computed and sent between factor and variable nodes; depending on the structure of the factor graph, different message-update schedules are used. 
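The two message types can be exercised on a tiny sub-chain $a$--$j$--$b$--$k$--$d$ of the example graph (a hypothetical toy with made-up binary factor tables; leaf variables send uniform messages), checking the resulting marginal at $b$ against brute-force summation of the joint:

```python
from itertools import product

# Toy sub-chain of the example graph: psi_i(a) psi_j(a,b) psi_k(b,d),
# all variables binary; the factor tables below are made up.
psi_i = {0: 0.4, 1: 0.6}
psi_j = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 3.0, (1, 1): 1.0}
psi_k = {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 4.0}

def normalize(m):
    z = sum(m.values())
    return {k: v / z for k, v in m.items()}

# Factor-to-variable messages into b. The message a -> j is the product of
# a's other incoming messages, here just mu_{i->a}(a) = psi_i(a).
mu_j_to_b = {b: sum(psi_i[a] * psi_j[(a, b)] for a in (0, 1)) for b in (0, 1)}
# d is a leaf, so mu_{d->k} is uniform and mu_{k->b} just marginalizes psi_k.
mu_k_to_b = {b: sum(psi_k[(b, d)] for d in (0, 1)) for b in (0, 1)}

# Marginal at b: normalized product of all its incoming messages.
marg_b = normalize({b: mu_j_to_b[b] * mu_k_to_b[b] for b in (0, 1)})

# Brute-force check against direct summation of the joint over a and d.
brute = normalize({b: sum(psi_i[a] * psi_j[(a, b)] * psi_k[(b, d)]
                          for a, d in product((0, 1), repeat=2))
                   for b in (0, 1)})
```

On this loop-free chain the message-passing marginal matches the exhaustive sum exactly, which is the defining property of Sum-Product BP on trees.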
Upon convergence, or after passing messages for a pre-defined number of iterations, the marginal distribution at each variable node is computed as the product of all its incoming messages from the neighboring factor nodes: \begin{equation} P(x_g) \propto \prod_{\hat{r}\in N(g)} \mu_{\hat{r} \rightarrow g}(x_g). \end{equation} Messages represent the probability of each possible state of a variable. In the case of discrete variables with a small number of states, it is easy to represent messages as probability tables. In the case of continuous variables, however, the messages are full functions of the variables, and some approximation is typically required to represent them. Different message passing algorithms differ in the way the messages are approximated. Some MP algorithms assume a Gaussian form for the messages, either by restricting the type of factor functions~\cite{weiss2001correctness} so that messages always remain Gaussian, or by projecting the computed messages onto a Gaussian form~\cite{Minka2001}. Other MP algorithms use mixtures of Gaussians~\cite{sudderth2010nonparametric} or sets of particles/samples~\cite{ihler2009particle} to represent messages. \subsubsection{Variational Inference} Except for small graphical models, exact Bayesian inference (say, with the above-mentioned Sum-Product BP) in PGMs is usually not possible. This is especially true for computer vision models, which typically involve several hundreds or thousands of variables. Variational Bayesian inference is one of the most widely used techniques for performing inference in PGMs. These methods try to find an approximation $Q(\mathbf{y})$ to the true posterior distribution $P(\mathbf{y}|\mathbf{x})$ via optimization. The approximate distribution is usually taken from a known, simpler family of distributions (such as the exponential family), and the optimization is performed over the space of that family of distributions. 
For example, for the above joint distribution $P(a,b,c,d,e,f)$, an approximation could be the factorized distribution $Q = Q_1(a) Q_2(b) Q_3(c) Q_4(d) Q_5(e) Q_6(f)$, with each factor from an exponential family of distributions. The most commonly used objective is the Kullback-Leibler (KL) divergence of $P$ from $Q$, which is minimized in order to find a close approximation to $P$: \begin{equation} D_{KL}(Q||P) = \underbrace{\sum_{\mathbf{y}} Q(\mathbf{y}) \log\frac{Q(\mathbf{y})}{P(\mathbf{y},\mathbf{x})}}_{-\mathcal{L}(Q)} + \log P(\mathbf{x}) \label{eqn:kl} \vspace{-0.2cm} \end{equation} Minimizing the above KL divergence translates to maximizing the variational lower bound $\mathcal{L}(Q)$, since $\log P(\mathbf{x})$ is constant. $\mathcal{L}(Q)$ is also called the \textit{energy functional} or \textit{variational free energy}, as it can be written as the sum of the energy $\mathbb{E}_Q[\log P(\mathbf{y},\mathbf{x})]$ and the entropy of $Q$. $Q$ is chosen in such a way that $\mathcal{L}(Q)$ becomes tractable and can be maximized. In general, the energy functional is maximized by passing messages between the nodes of the factor graph (e.g., the factor graphs in Figs.~\ref{fig:example-graph} and~\ref{fig:face-model}). Following~\cite{koller2009probabilistic}, depending on the type of approximations made to $Q$ and to the energy functional, methods for maximizing $\mathcal{L}(Q)$ can be categorized into three types. The first category optimizes approximate versions of the energy functional by passing messages on simplified versions of the given factor graph; this includes the \textit{loopy belief propagation}~\cite{frey1998revolution,weiss2000correctness} algorithm. The second category tries to maximize the exact energy functional but uses approximate message propagation steps, for example approximating complex messages with distributions from the exponential family, which is equivalent to imposing relaxed consistency constraints on $Q$. 
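As a minimal sketch of this optimization (a hypothetical toy model, not one used in this thesis), the following runs mean-field coordinate ascent on a pairwise binary model and tracks $D_{KL}(Q\|P)$, which each exact coordinate update can only decrease:

```python
from math import exp, log

# Toy pairwise model over binary a, b (values made up):
# P(a, b) proportional to exp(ta*a + tb*b + tab*a*b).
ta, tb, tab = 0.5, -0.3, 1.0

def logp_tilde(a, b):
    return ta * a + tb * b + tab * a * b

Z = sum(exp(logp_tilde(a, b)) for a in (0, 1) for b in (0, 1))

def kl(q1, q2):
    """D_KL(Q || P) for the fully factorized Q(a, b) = q1(a) q2(b)."""
    return sum(q1[a] * q2[b] * log(q1[a] * q2[b] / (exp(logp_tilde(a, b)) / Z))
               for a in (0, 1) for b in (0, 1))

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

# Mean-field coordinate ascent: each update is the exact maximizer of L(Q)
# with the other factor held fixed, so the KL divergence never increases.
q1, q2 = {0: 0.5, 1: 0.5}, {0: 0.5, 1: 0.5}
kls = [kl(q1, q2)]
for _ in range(20):
    m1 = sigmoid(ta + tab * q2[1]); q1 = {0: 1.0 - m1, 1: m1}
    m2 = sigmoid(tb + tab * q1[1]); q2 = {0: 1.0 - m2, 1: m2}
    kls.append(kl(q1, q2))
```

Because the factorized $Q$ cannot represent the correlation induced by the pairwise term, the KL divergence converges to a strictly positive value, illustrating the approximation gap of mean-field inference.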
This class of methods is also known as \textit{expectation propagation} (EP)~\cite{Minka2001}. The third commonly used category maximizes the exact energy functional but restricts $Q$ to simple factorized distributions, which is called the \textit{mean-field} approximation. One of the most commonly used message passing techniques for optimizing the energy functional is \textit{variational message passing} (VMP)~\cite{Winn2005}. In Chapter~\ref{chap:cmp}, we show how EP and VMP fail to converge for inference in the model shown in Fig.~\ref{fig:face-model} and propose a remedy. Several other inference techniques, such as MCMC sampling, can also be used for inference in PGMs. Refer to~\cite{fox2012tutorial} for a tutorial on variational Bayesian inference and to~\cite{koller2009probabilistic,wainwright2008graphical} for a comprehensive review of inference techniques in PGMs. \subsubsection{Two Example Models} \label{sec:example_modesl_crf} Here, we give a brief overview of two popular types of PGMs in vision that are used later in this thesis: layered graphical models and fully connected CRFs. \paragraph{Layered Graphical Models:} Several vision models are hierarchical in nature and can be naturally expressed as layered graphical models. Figure~\ref{fig:face-model} shows an example layered \textit{factor graph} model for faces. Here, a vision task could be: given an observation of pixels $\mathbf{x} = \{x_i\}$, infer the reflectance value $r_i$ and the normal vector $\mathbf{n_i}$ for each pixel $i$ (see Fig.~\ref{fig:face-data}). The model shown in Fig.~\ref{fig:face-model} represents the following approximate image formation process: $x_i = (\mathbf{n_i} \cdot \mathbf{l}) \times r_i + \epsilon$, thereby assuming Lambertian reflection and an infinitely distant directional light source with variable intensity. 
Each factor in the graph is a conditional probability distribution providing the factorization for the joint distribution: \begin{equation} P(\mathbf{x}, \mathbf{z}, \mathbf{r}, \mathbf{s}, \mathbf{n}, \mathbf{l}) = P(\mathbf{n}) P(\mathbf{l}) P(\mathbf{r}) \prod_i P(x_i | z_i) P(z_i | r_i, s_i) P(s_i | \mathbf{n}_i, \mathbf{l}), \end{equation} where $s_i$ and $z_i$ represent the intermediate shading and noise-free image observation variables. We omit the model parameters $\mathbf{\theta}$ in the above equation for the sake of simplicity. A vision task could be to estimate the posterior distribution $P(\mathbf{r}, \mathbf{s}, \mathbf{n}, \mathbf{l} | \mathbf{x})$. Note that this generative model is only a crude approximation of the true image formation process (e.g., each pixel is modeled independently, and shadows and specularities are not accounted for). Such approximations are customary in PGMs, as several PGM inference techniques cannot be applied to models with complex non-linear factors. Note that even for a relatively small image of size $96 \times 84$, the face model contains over 48,000 latent variables and 56,000 factors, and, as we show in Chapter~\ref{chap:cmp}, standard message passing routinely fails to converge to accurate solutions. 
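The approximate image formation process above can be simulated directly; the following is a minimal sketch (function and variable names are illustrative, and modeling the noise $\epsilon$ as Gaussian is an assumption not stated in the text):

```python
import random

def render_pixel(normal, light, reflectance, noise_std=0.0, rng=None):
    """x_i = (n_i . l) * r_i + eps, as in the approximate image formation
    process above; eps is assumed Gaussian here (illustrative choice)."""
    shading = sum(n * l for n, l in zip(normal, light))   # s_i = n_i . l
    eps = rng.gauss(0.0, noise_std) if rng is not None else 0.0
    return shading * reflectance + eps

light = (0.0, 0.0, 1.0)                                  # distant directional light
x_front = render_pixel((0.0, 0.0, 1.0), light, reflectance=0.8)  # lit head-on
x_graze = render_pixel((1.0, 0.0, 0.0), light, reflectance=0.8)  # grazing light
```

A pixel facing the light receives full shading ($x = r$), while a pixel whose normal is orthogonal to the light direction receives none, which is exactly the Lambertian assumption encoded by the $P(s_i | \mathbf{n}_i, \mathbf{l})$ factors.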
\definecolor{city_1}{RGB}{128, 64, 128} \definecolor{city_2}{RGB}{244, 35, 232} \definecolor{city_3}{RGB}{70, 70, 70} \definecolor{city_4}{RGB}{102, 102, 156} \definecolor{city_5}{RGB}{190, 153, 153} \definecolor{city_6}{RGB}{153, 153, 153} \definecolor{city_7}{RGB}{250, 170, 30} \definecolor{city_8}{RGB}{220, 220, 0} \definecolor{city_9}{RGB}{107, 142, 35} \definecolor{city_10}{RGB}{152, 251, 152} \definecolor{city_11}{RGB}{70, 130, 180} \definecolor{city_12}{RGB}{220, 20, 60} \definecolor{city_13}{RGB}{255, 0, 0} \definecolor{city_14}{RGB}{0, 0, 142} \definecolor{city_15}{RGB}{0, 0, 70} \definecolor{city_16}{RGB}{0, 60, 100} \definecolor{city_17}{RGB}{0, 80, 100} \definecolor{city_18}{RGB}{0, 0, 230} \definecolor{city_19}{RGB}{119, 11, 32} \begin{figure}[t] \scriptsize \centering \fcolorbox{white}{city_1}{\rule{0pt}{1pt}\rule{1pt}{0pt}} Road~~ \fcolorbox{white}{city_2}{\rule{0pt}{1pt}\rule{1pt}{0pt}} Sidewalk~~ \fcolorbox{white}{city_3}{\rule{0pt}{1pt}\rule{1pt}{0pt}} Building~~ \fcolorbox{white}{city_4}{\rule{0pt}{1pt}\rule{1pt}{0pt}} Wall~~ \fcolorbox{white}{city_5}{\rule{0pt}{1pt}\rule{1pt}{0pt}} Fence~~ \fcolorbox{white}{city_6}{\rule{0pt}{1pt}\rule{1pt}{0pt}} Pole~~ \fcolorbox{white}{city_7}{\rule{0pt}{1pt}\rule{1pt}{0pt}} Traffic Light~~ \fcolorbox{white}{city_8}{\rule{0pt}{1pt}\rule{1pt}{0pt}} Traffic Sign~~\\ \fcolorbox{white}{city_9}{\rule{0pt}{1pt}\rule{1pt}{0pt}} Vegetation~~ \fcolorbox{white}{city_10}{\rule{0pt}{1pt}\rule{1pt}{0pt}} Terrain~~ \fcolorbox{white}{city_11}{\rule{0pt}{1pt}\rule{1pt}{0pt}} Sky~~ \fcolorbox{white}{city_12}{\rule{0pt}{1pt}\rule{1pt}{0pt}} Person~~ \fcolorbox{white}{city_13}{\rule{0pt}{1pt}\rule{1pt}{0pt}} Rider~~ \fcolorbox{white}{city_14}{\rule{0pt}{1pt}\rule{1pt}{0pt}} Car~~ \fcolorbox{white}{city_15}{\rule{0pt}{1pt}\rule{1pt}{0pt}} Truck~~\\ \fcolorbox{white}{city_16}{\rule{0pt}{1pt}\rule{1pt}{0pt}} Bus~~ \fcolorbox{white}{city_17}{\rule{0pt}{1pt}\rule{1pt}{0pt}} Train~~ 
\fcolorbox{white}{city_18}{\rule{0pt}{1pt}\rule{1pt}{0pt}} Motorcycle~~ \fcolorbox{white}{city_19}{\rule{0pt}{1pt}\rule{1pt}{0pt}} Bicycle~~\\ \subfigure[Sample Image]{% \includegraphics[width=.45\columnwidth]{figures/frankfurt00000_008206_given.png} }\hspace{4.5mm} \subfigure[Ground Truth Semantics]{% \includegraphics[width=.45\columnwidth]{figures/frankfurt00000_008206_gt.png} } \mycaption{Illustration of semantic segmentation task}{A sample image from Cityscapes street scene dataset~\cite{Cordts2015Cvprw} and the corresponding ground truth semantic labels.} \label{fig:example-seg} \end{figure} \paragraph{Fully Connected CRFs:} Fully connected conditional random fields, also known as DenseCRFs, are CRF models in which every variable in the image is connected to every other variable via pairwise edge potentials. For illustration, consider the task of semantic segmentation, i.e., labelling each pixel of a given image with a semantic class; see Fig.~\ref{fig:example-seg} for an illustration. For the segmentation problem, DenseCRFs are generally used to encode prior knowledge about the problem: pixels that are spatially and photometrically similar are more likely to have the same label. For an image $\mathbf{x}$ with $n$ pixels, the semantic segmentation task is to produce a labelling $\mathbf{y}$ with discrete values $\{y_1,\dots,y_n\}$ in the label space $y_i\in\{1,\ldots,\mathcal{L}\}$. The DenseCRF model has unary potentials $\psi_u(y_i)\in\mathbb{R}$, which can be, e.g., the outputs of a CNN. The pairwise potentials, as introduced in~\cite{krahenbuhl2012efficient}, are of the form $\psi^{ij}_p(y_i,y_j) = \mu(y_i,y_j) k(\mathbf{f}_i,\mathbf{f}_j)$, where $\mu$ is a label compatibility matrix, $k$ is a Gaussian kernel $k(\mathbf{f}_i,\mathbf{f}_j)=\exp(-(\mathbf{f}_i-\mathbf{f}_j)^{\top}\Sigma^{-1}(\mathbf{f}_i-\mathbf{f}_j))$, and the vectors $\mathbf{f}_i$ are feature vectors at each point. 
Commonly used features are the position and color values of the pixels (e.g., $\mathbf{f}=(x,y,r,g,b)^\top$). The DenseCRF model thus defines the following distribution over labellings of an image $\mathbf{x}$: \begin{equation} P(\mathbf{y}|\mathbf{x})\propto\exp(-\sum_i\psi_u(y_i)-\sum_{i>j}\psi^{ij}_p(y_i,y_j)). \end{equation} Because of the dense connectivity, exact MAP or marginal inference is intractable. The main result of~\cite{krahenbuhl2012efficient} is to derive the mean-field approximation for this model and to relate it to bilateral filtering, which enables tractable approximate inference. As described above, the mean-field approximation is a type of variational inference where the approximate distribution $Q$ is fully factorized across pixels: $Q=\prod_{i} Q_i(y_i)$. Variational inference then solves for $Q$ by minimizing the KL divergence of $P$ from $Q$ (see Eq.~\ref{eqn:kl}). The work of~\cite{krahenbuhl2012efficient} showed that this inference can be performed with efficient bilateral filtering~\cite{aurich1995non, smith97ijcv, tomasi1998bilateral, adams2010fast} operations. Specifically, mean-field inference results in a fixed-point equation which can be solved iteratively, $t=0,1,\ldots$, to update the marginal distributions $Q_i$: \begin{equation} Q^{t+1}_i(y_i) = \frac{1}{Z_i} \exp(-\psi_u(y_i) - \sum_{l \in \mathcal{L}}\underbrace{\sum_{j \ne i} \psi^{ij}_p(y_i,l) Q^{t}_j(l)}_\text{bilateral filtering}), \label{eq:mfupdate} \end{equation} where $Z_i$ denotes a normalization constant, which is easily computed since $Q_i$ is a one-dimensional distribution. Although we use the semantic segmentation task for illustration, DenseCRFs have been shown to be useful for other tasks such as material segmentation~\cite{bell2015minc}, optical flow estimation~\cite{sun2013fully} and intrinsic image decomposition~\cite{bell2014intrinsic}. 
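A drastically simplified sketch of the update in Eq.~\ref{eq:mfupdate} is given below (pure Python, a handful of `pixels', a Potts compatibility $\mu(l,l')=[l\neq l']$, and an explicit $O(n^2)$ kernel sum in place of fast bilateral filtering; all names and values are illustrative):

```python
from math import exp

def mf_step(Q, unary, feats, sigma=1.0):
    """One parallel mean-field update for a Potts DenseCRF,
    mu(l, l') = [l != l'], with an explicit O(n^2) Gaussian kernel sum."""
    n, L = len(unary), len(unary[0])
    k = [[exp(-sum((a - b) ** 2 for a, b in zip(feats[i], feats[j])) / sigma ** 2)
          for j in range(n)] for i in range(n)]
    newQ = []
    for i in range(n):
        logits = []
        for l in range(L):
            # Kernel-weighted mass that neighbours place on *other* labels.
            pair = sum(k[i][j] * (1.0 - Q[j][l]) for j in range(n) if j != i)
            logits.append(-unary[i][l] - pair)
        m = max(logits)
        exps = [exp(v - m) for v in logits]
        z = sum(exps)
        newQ.append([v / z for v in exps])
    return newQ

# Three 'pixels' with 2 labels: the middle pixel has an uninformative unary
# but is photometrically close to the first, so the pairwise term decides.
feats = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]
unary = [[0.0, 2.0], [1.0, 1.0], [2.0, 0.0]]
Q = [[0.5, 0.5] for _ in range(3)]
for _ in range(5):
    Q = mf_step(Q, unary, feats)
```

After a few iterations the middle pixel is pulled toward the label of its similar neighbor, which is precisely the prior the DenseCRF encodes: similar pixels get similar labels.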
One of the fundamental limitations of existing uses of DenseCRFs is the confinement of the pairwise potentials $\psi^{ij}_p(y_i,y_j)$ to a Gaussian form, since bilateral filtering is traditionally implemented with a Gaussian kernel. In Chapter~\ref{chap:bnn}, we show how to learn a more general form of bilateral filter and apply this technique to learning pairwise edge potentials in DenseCRFs. \subsection{Advantages and Limitations} \label{sec:gen-adv-limits} The generative modeling view is appealing, as it is relatively easy to incorporate our knowledge of physics and light transport into the models, and it has been advocated since the late 1970s~\cite{horn1977imageintensities, grenander1976patterntheory,zhu1997learning,mumford2010patterntheory, mansinghka2013approximate,yuille2006vision}. For example, knowledge of how light reflects off objects with different material properties, or of how roads and buildings are structured, is relatively easy to incorporate into generative models. Owing to the strong prior knowledge incorporated into them, generative models usually work better when little or no data is available for a particular problem. Since a single generative model can capture different world and image characteristics, it can be used for many different applications. In addition, it is easier to diagnose flaws in a generative model, as most of the model is manually designed. Despite their intuitive appeal and advantages, in practice generative models are used for only a few vision problems. The few successes of the idea have been in limited settings. 
In the successful examples, either the generative model was restricted to a few high-level latent variables, e.g.,~\cite{oliver2000humaninteractions}; or restricted to a set of image transformations in a fixed reference frame, e.g.,~\cite{black2000imageappearance}; or it modeled only a limited aspect such as object shape masks~\cite{eslami2012shapeboltzmann}; or the generative model was merely used to generate training data for a discriminative model~\cite{shotton2011kinect,gaidon2016virtual, ros2016synthia,richter2016playing,shafaei2016play}. With all its intuitive appeal, beauty and simplicity, it is fair to say that the track record of generative models in computer vision is poor. As a result, the field of computer vision is now dominated by efficient but data-hungry discriminative models, the use of empirical risk minimization for learning, and energy minimization on heuristic objective functions for inference. Why did generative models not succeed? Two key problems need to be addressed: the design of an accurate generative model, and the inference therein. The first problem, the design of an accurate generative model, is partly addressed by recent advances in graphics. Although modern graphics engines provide rendering with a stunning level of realism, priors over world parameters are difficult to characterize. Accurate generative models therefore require complex priors together with complex forward models, which in turn make inference difficult. This brings us to the second key problem in the generative world view: the difficulty of posterior inference at test time. This difficulty stems from a number of reasons: \emph{first}, the target variable $\mathbf{y}$ is typically high-dimensional and so is the posterior. \emph{Second}, given $\mathbf{y}$, the image formation process realizes complex and \emph{dynamic} dependency structures, for example when objects occlude or self-occlude each other. 
These intrinsic ambiguities result in multi-modal posterior distributions. \emph{Third}, while most renderers are real-time, each simulation of the forward process is expensive and prevents exhaustive enumeration. Overall, the limitations of generative approaches have so far outweighed their advantages, preventing their success in practical computer vision systems. Despite these limitations, we still believe in the usefulness of generative models in computer vision, but argue that we need to leverage existing discriminative or even heuristic computer vision methods to alleviate some of the difficulties in posterior inference. The inference techniques proposed in this thesis are steps in this direction. \section{Discriminative Vision Models} With advances in the internet and image capturing technology, there has been an explosive growth of visual data in the last few years. Moreover, crowd-sourcing platforms like `Amazon Mechanical Turk'~\cite{mturk} make it easy for millions of people to annotate large amounts of data. Discriminative models directly model the contingency of world properties on the observed data, $P(\mathbf{y}|\mathbf{x})$. Unlike generative models, discriminative models are task-specific, and learning takes the central role in defining the model. Discriminative models comprise functions directly approximating the posterior distribution $P(\mathbf{y}|\mathbf{x}, \mathbf{\theta})$, where $\mathbf{\theta}$ denotes the parameters of the model. Supervised learning with annotated training data is usually employed to fit the model parameters to the given task. Since discriminative models directly characterize the posterior distribution, inference reduces to a simple evaluation of the model. Due to the availability of large amounts of training data and the computing power to handle rich high-capacity models, discriminative models have been very successful on many vision problems. 
In addition, inference is fast, since it involves a simple evaluation of the model. This makes discriminative models particularly attractive for practical applications. Many mathematical functions that are rich enough to capture the relationship between the observation and target variables can serve as discriminative models, and many types of discriminative models have accordingly been used in the computer vision literature. Discriminative models are traditionally partitioned into two modules: \textit{feature extraction} and \textit{prediction}. Before the advent of modern convolutional neural networks (CNNs), these two components were studied separately in the literature. We briefly discuss each of them. \paragraph{Feature Extraction:} Depending on the type of vision task, features are extracted either at all pixels (points) in the observed data or only at some key points. For example, registering two images taken from different viewpoints requires finding corresponding (key) points in each image and then matching them; for such tasks, an additional step of key point detection is required before feature computation. For image classification, a single feature vector is extracted for the entire image. An ideal feature representation should be compact, efficient to compute, and invariant to specific transformations. As an example, for semantic segmentation, features should be invariant to intra-class variations such as illumination, scale, rotation, object articulation, etc., while being sensitive to changes across semantic categories. Several feature extraction schemes have been proposed in the vision literature, most of them hand-crafted. Some popular choices include SIFT~\cite{lowe1999object}, HoG~\cite{dalal2005histograms}, SURF~\cite{bay2006surf}, DAISY~\cite{tola2008fast}, etc. Models for feature extraction and prediction are plentiful, and discussing all of them is outside the scope of this thesis. 
With the recent advances in CNNs, feature extraction is coupled with prediction, and both are learned together end-to-end. \paragraph{Prediction:} Once the image features are extracted, the task is to estimate the posterior distribution $P(\mathbf{y}|\mathbf{f}(\mathbf{x}))$, where $\mathbf{f}(\mathbf{x})$ denotes the features. A common strategy is to learn a rich parametric or non-parametric model with supervised learning techniques, which makes the availability of training data crucial for discriminative approaches. Several learning-based prediction models have become popular for tackling vision tasks, including support vector machines (SVMs)~\cite{cortes1995support}, boosting~\cite{schapire1990strength, freund1995desicion}, random forests~\cite{breiman2001random,ho1995random} and deep convolutional neural networks~\cite{lecun1998gradient}. Refer to~\cite{friedman2001elements} for a review of different prediction techniques. Next, we briefly review random forests and CNN models, as we either make use of or propose extensions to these models in this thesis. \subsection{Random Forests} \label{sec:forests} Random forests~\cite{breiman2001random,ho1995random} are ensembles of $K$ randomly trained prediction (classification or regression) trees, where each tree $T(\mathbf{y}|\mathbf{f}(\mathbf{x}),\mathbf{\theta}^k)$ represents a non-linear function approximating the posterior $P(\mathbf{y}|\mathbf{f}(\mathbf{x}))$. The trees are typically binary and can be viewed as performing a discriminative hierarchical clustering of the feature space, with a simple model (e.g., a linear model) fit in each cluster. Trees are grown incrementally from the root node to the leaves, and each node represents a partition of the feature space. These partitions can be any linear or non-linear functions, but simple axis-aligned partitions are the most commonly used due to their simplicity and efficiency. For simplicity, let us assume the partition functions are axis-aligned. 
At each node, a feature $\kappa$ and its split value $\tau$ are chosen to split the feature space so as to minimize an energy function $E$. Consider training the $j^{th}$ node in the $k^{th}$ tree. Let $\mathcal{S}_j$ be the set of data points falling into that node (due to the splits of its ancestor nodes), and let $\mathcal{T}_j$ denote the discrete set of \textit{randomly} selected split candidates $\{(\kappa,\tau)_i\}$ (feature indices and their corresponding split values) for node $j$. Training the $j^{th}$ node corresponds to choosing, among the randomly chosen candidates, the optimal split $\theta^k_j \in \mathcal{T}_j$ that minimizes the energy function $E$: \begin{equation} \theta^k_j = \operatornamewithlimits{argmin}_{\gamma \in \mathcal{T}_j} E(\mathcal{S}_j,\gamma). \end{equation} Depending on the type of task and data, different energy functions $E$ are used. Each split $\gamma$ partitions the training data $\mathcal{S}_j$ of node $j$ into two parts, $\mathcal{S}^L_j$ and $\mathcal{S}^R_j$, which are assigned to the left and right child nodes respectively. A common energy function measures how well a regression/classification model fits the data in each of the left and right child nodes created by a split $\gamma$: \begin{equation} E(\mathcal{S}_j,\gamma) = - (M(\mathcal{S}^L_j, \beta) + M(\mathcal{S}^R_j, \beta)), \label{eqn:forest_energy} \end{equation} where $M(\mathcal{S}_j,\beta)$ denotes the model likelihood, i.e., how well the model with parameters $\beta$ explains the data $\mathcal{S}_j$. For example, for regression tasks $M$ can be the likelihood of a linear regression fit, and for classification (as in semantic segmentation) $M$ can be the classification accuracy. In this manner, the trees are grown recursively by splitting the leaf nodes into left and right children. The set of all node splits $\theta^k=\{\theta^k_j\}_{j=1,\cdots,J}$ constitutes the parameters of the $k^{th}$ tree. 
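A minimal sketch of this node-training step is given below (illustrative, not the thesis implementation; class purity is used as a simple stand-in for the model fit $M$):

```python
import random

def split(S, kappa, tau):
    """Axis-aligned split of labelled samples S = [(features, label), ...]."""
    left = [s for s in S if s[0][kappa] <= tau]
    right = [s for s in S if s[0][kappa] > tau]
    return left, right

def purity_fit(S):
    """A simple stand-in for the model fit M: size of the majority class."""
    if not S:
        return 0
    labels = [y for _, y in S]
    return max(labels.count(c) for c in set(labels))

def train_node(S, n_candidates=20, seed=0):
    """Pick theta = (kappa, tau) minimizing E = -(M(S_L) + M(S_R)) over a
    randomly sampled candidate set, mirroring the argmin above."""
    rng = random.Random(seed)
    dim = len(S[0][0])
    best, best_E = None, float("inf")
    for _ in range(n_candidates):
        kappa = rng.randrange(dim)
        tau = rng.choice(S)[0][kappa]        # threshold taken from a data point
        left, right = split(S, kappa, tau)
        E = -(purity_fit(left) + purity_fit(right))
        if E < best_E:
            best, best_E = (kappa, tau), E
    return best

# Toy 2D data that is separable along the first feature axis.
S = [((0.1, 5.0), 'A'), ((0.2, -3.0), 'A'), ((0.9, 4.0), 'B'), ((0.8, -2.0), 'B')]
kappa, tau = train_node(S)
```

Restricting the search to a random candidate set, rather than all possible splits, is exactly the per-node randomization described above, and it is what decorrelates the trees of a forest.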
Once a tree is trained, a simple prediction model is fitted to the data in each leaf node. A deep tree may overfit the data, while a shallow tree may underfit and miss important structure. Restricting the tree size corresponds to regularizing the model, and the size should be chosen adaptively based on the training data. Common stopping criteria for training include a maximum tree depth, a minimum number of data points per node, or a threshold on the energy function $E$. Random forests are distinguished from other tree-based supervised learning techniques, such as boosted decision trees, by the way the different trees in a forest are trained. Each tree in a random forest is trained independently, and randomness is added either by choosing a random subset of the training data for each tree (called \textit{bagging}~\cite{breiman1996bagging}) and/or by randomly choosing the split candidates (feature indices and their values) at each node. Typically, the estimates across the trees $T(\mathbf{y}|\mathbf{f}(\mathbf{x}),\mathbf{\theta}^k)$ are averaged to obtain the final model $P(\mathbf{y}|\mathbf{f}(\mathbf{x}))$: \begin{equation} P(\mathbf{y}|\mathbf{f}(\mathbf{x})) = \frac{1}{K}\sum_{k} T(\mathbf{y}|\mathbf{f}(\mathbf{x}),\mathbf{\theta}^k). \end{equation} Due to the randomness, the trees are identically distributed, and averaging their estimates yields a low-variance final estimate. Random forests are highly flexible, and many different models are conceivable using combinations of different splitting criteria. Due to their simplicity and flexibility, random forests have become a popular choice for supervised learning in vision: they are easy to implement and train, and they can be adapted to a wide range of classification and regression tasks with relatively simple changes to the model. 
Moreover, they are non-parametric in nature and able to consume large amounts of training data. Random forests have been successfully applied to vision tasks such as human pose estimation~\cite{shotton2011kinect}, semantic segmentation~\cite{shotton2008semantic}, etc. In the case of semantic segmentation, a popular model is to extract TextonBoost features~\cite{shotton2006textonboost} at each pixel and then train a random forest classifier to predict the class label of each pixel. One of the crucial advantages of random forests with respect to neural networks is that the loss function $E$ need not be differentiable. In Chapter~\ref{chap:infsampler}, we use random forest models to improve inference in inverse graphics via our informed sampler approach. In Chapter~\ref{chap:cmp}, we use random forests for predicting messages, resulting in improved variational inference in layered graphical models. Refer to~\cite{Criminisi2013} for a comprehensive overview of random forests and their applications in computer vision and medical image analysis. \subsection{Convolutional Neural Networks} \label{sec:cnn} Neural networks are a class of models in which complex parametric non-linear functions relate the input $\mathbf{x}$ to the target $\mathbf{y}$. The complex non-linear function is usually realized by stacking a series of simple, differentiable linear and non-linear functions: \begin{equation} P(\mathbf{y}|\mathbf{x},\mathbf{\theta}) = \mathcal{F}_1(\mathcal{F}_2(\cdots \mathcal{F}_k(\mathbf{x},\theta_k)\cdots, \mathbf{\theta}_2),\mathbf{\theta}_1). \end{equation} Learning involves finding the parameters $\{\mathbf{\theta}_1, \mathbf{\theta}_2, \cdots, \mathbf{\theta}_k\}$ that best approximate the desired relationship between the input and target variables.
The component functions are usually simple linear functions such as convolutions $\mathcal{F}(\mathbf{s},\theta)=\mathbf{W(\mathbf{\theta})}\mathbf{s}+b$ (where $\mathbf{s} \in \mathbb{R}^q$, $\mathbf{W} \in \mathbb{R}^{p \times q}$) interleaved with simple non-linear functions such as rectified linear units (ReLU), $\mathcal{F}(\mathbf{s})=\max(0,\mathbf{s})$. A linear function together with a non-linearity is usually called a single layer in the network. Intermediate layers in a neural network are also called \textit{hidden} layers, and the number of units in the intermediate layers determines the \textit{width} of the network. A theoretical result~\cite{csaji2001approximation,hornik1991approximation} states that any continuous function can be approximated by a simple two-layer neural network, given a sufficient number of intermediate units (i.e., sufficient width). From a practical point of view, neural networks are attractive because of their fast inference (a simple forward pass through the network) and their end-to-end prediction capability (going from input to output variables without intermediate handcrafted feature extraction). Convolutional neural networks (CNNs) are a special class of neural networks tailored for processing 2D or higher-dimensional visual data on a grid. The main characteristic of CNNs is the use of spatial convolutions instead of \textit{fully-connected} matrix-vector multiplications for building linear functions. This greatly reduces the number of parameters due to parameter sharing across different spatial locations and speeds up network computation and training. One of the main hurdles for the success of CNNs was the lack of computational resources required to train models with millions of parameters. The recent availability of large datasets, together with efficient model and training implementations on GPUs, has made it possible to successfully apply CNNs to real-world vision tasks.
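The stacked composition of linear maps and ReLU non-linearities described above can be sketched in a few lines (a toy two-layer network; the weights used below are arbitrary illustrations):

```python
import numpy as np

def linear(s, W, b):
    """Linear component function F(s, theta) = W s + b."""
    return W @ s + b

def relu(s):
    """Rectified linear unit: F(s) = max(0, s), applied element-wise."""
    return np.maximum(0.0, s)

def two_layer_net(x, W1, b1, W2, b2):
    """The stacked composition F1(F2(x, theta_2), theta_1): a linear map,
    a ReLU non-linearity, then a second linear map."""
    return linear(relu(linear(x, W1, b1)), W2, b2)
```

For instance, with identity weights $W_1$, bias $b_1 = (-1, 0)$, $W_2 = (1, 1)$ and $b_2 = 0.5$, the input $(2, -3)$ maps through $(1, -3) \rightarrow (1, 0) \rightarrow 1.5$.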
Since CNNs typically have millions of parameters, they are highly prone to overfitting the training data. Advances in simple yet powerful regularization techniques (such as DropOut~\cite{srivastava2014dropout}) are another reason for the successful deployment of CNN models. Currently, CNNs are state-of-the-art in many traditional vision problems such as image classification~\cite{he2015deep,krizhevsky2012imagenet}, object detection~\cite{girshick2014rich,redmon2015you,ren2015faster}, semantic segmentation~\cite{long2014fully,chen2016deeplab}, etc. \begin{figure}[t!] \centering \includegraphics[width=0.9\columnwidth]{figures/supplementary/lenet_cnn_network} \mycaption{Sample CNN architecture for character recognition} {LeNet-7~\cite{lecun1998mnist} architecture generally used for character recognition (used for Assamese character recognition in Chapter~\ref{chap:bnn}). `C$n$' corresponds to a convolution layer with $n\times n$ filters; `MP$n$' corresponds to a max-pooling layer with window size $n$; `IP' corresponds to a fully-connected inner-product layer; `ReLU' and `TanH' correspond to rectified linear unit and tanh non-linear layers; and the `Softmax' layer produces probabilities for 183 output classes.} \label{fig:lenet7} \end{figure} CNN architectures are typically composed of the following layers: convolution, pooling, non-linearity, fully-connected (FC) and loss layers. Convolution layers are simple spatial convolutions, pooling layers perform spatial downsampling, and FC layers connect each output unit to all input units. Non-linear layers are simple yet important functions that introduce non-linearities into the CNN model; popular choices include ReLU, TanH, the sigmoid function, etc. Loss layers are problem-specific layers used at the end of the network; they implement the differentiable empirical loss $\sigma(\hat{\mathbf{y}}, \mathbf{y}^*)$ between the predicted target $\hat{\mathbf{y}}$ and the ground-truth target $\mathbf{y}^*$.
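For illustration, naive single-channel reference implementations of some of these layer operations (a valid-mode convolution `C$n$', a max-pooling `MP$n$' and a softmax) might look as follows; real CNN libraries use heavily optimized versions of these:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """'Cn' layer: valid-mode 2-D correlation of a single-channel image
    with an n x n filter."""
    H, W = img.shape
    n = kernel.shape[0]
    out = np.empty((H - n + 1, W - n + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + n, j:j + n] * kernel)
    return out

def max_pool(img, n):
    """'MPn' layer: spatial downsampling by taking the max over n x n windows."""
    H, W = img.shape
    return img[:H - H % n, :W - W % n].reshape(H // n, n, W // n, n).max(axis=(1, 3))

def softmax(s):
    """'Softmax' layer: turns a score vector into class probabilities."""
    e = np.exp(s - np.max(s))
    return e / e.sum()
```

On a $4\times4$ input, a $2\times2$ all-ones filter yields a $3\times3$ output, and $2\times2$ max-pooling halves each spatial dimension.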
Figure~\ref{fig:lenet7} shows a sample CNN architecture generally used for character recognition. The input is a grayscale image $\mathbf{x}$ of size $96\times96$ and the output is a vector of probabilities for each class, $P(\mathbf{y}|\mathbf{x})$. Also shown are the sizes of the intermediate CNN representations. Training the parameters $\mathbf{\theta}_k$ of each layer involves back-propagating the empirical loss from the loss layers backwards to the early CNN layers. To avoid over-fitting to the training data, the loss is usually augmented with a regularization over the network parameters. The optimization objective for a given dataset with $m$ training instances is given as an average loss $L$: \begin{equation} L(\mathbf{\theta}) = \frac{1}{m} \sum_{i=1}^{m} \sigma(\hat{\mathbf{y}_i}, \mathbf{y}_i^{*}) + \lambda r(\mathbf{\theta}), \end{equation} where $r$ denotes the regularization over the parameters $\mathbf{\theta}$ with weight $\lambda$. The parameters $\theta$ are then updated using gradient-descent methods such as stochastic gradient descent (SGD), AdaDelta~\cite{zeiler2012adadelta}, Adam~\cite{kingma2014adam}, etc. The update steps for a single parameter $\mathbf{\theta}^i \in \mathbf{\theta}$ in SGD are given as: \begin{align} \begin{split} v_{t+1} &= \mu v_t - \gamma \nabla L(\mathbf{\theta}^i_t) \\ \mathbf{\theta}^i_{t+1} &= \mathbf{\theta}^i_t + v_{t+1} \end{split} \end{align} where $v, \gamma, \mu \in \mathbb{R}$, $v_t$ denotes the parameter update in the previous step, and $\nabla L(\mathbf{\theta}^i_t)$ denotes the gradient of the loss $L$ with respect to the parameter $\mathbf{\theta}^i$. Thus the parameter update $v_{t+1}$ is a weighted combination of the previous update and the negative gradient of the loss $L$. The weights $\mu$ and $\gamma$ are called the \textit{momentum} and the \textit{learning rate} respectively, and are generally chosen to obtain good performance on given validation data.
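The SGD-with-momentum update above can be sketched and run on a toy quadratic loss $L(\theta) = \frac{1}{2}(\theta-3)^2$, whose gradient is simply $\theta - 3$ (the learning-rate and momentum values below are arbitrary illustrations):

```python
import numpy as np

def sgd_momentum_step(theta, v, grad, lr=0.1, momentum=0.9):
    """One update: v_{t+1} = mu * v_t - gamma * grad L(theta_t),
    then theta_{t+1} = theta_t + v_{t+1}."""
    v = momentum * v - lr * grad
    return theta + v, v

# Minimize L(theta) = 0.5 * (theta - 3)^2; the iterates should
# converge to the minimizer theta = 3.
theta, v = 0.0, 0.0
for _ in range(500):
    theta, v = sgd_momentum_step(theta, v, grad=theta - 3.0)
```

With momentum the iterates overshoot and oscillate around the minimum before settling, which is the expected heavy-ball behavior.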
In practice, since the size of the dataset $m$ is large, only a small subset (batch) of the dataset is used for computing the loss and updating the parameters in each step. Instead of computing the gradients of the loss $L$ with respect to all the network parameters in one step, the gradients are back-propagated across the layers. Thus, one of the fundamental requirements for a component function in CNNs (except the first layer) is that it be differentiable with respect to both its inputs and its parameters. Once the network is trained, inference is a simple forward pass through the network. As in many discriminative models, inference in CNNs amounts to an evaluation of the model. In Chapter~\ref{chap:bnn}, we generalize the standard spatial convolutions found in CNNs to sparse high-dimensional filters, and in Chapter~\ref{chap:binception}, we propose an efficient CNN module for long-range spatial propagation of information across intermediate CNN representations. There are several other types of neural networks, such as recurrent neural networks, that are currently being used in the computer vision community. Refer to~\cite{Goodfellow-et-al-2016-Book,888} for more details on CNNs and neural networks in general. \subsection{Advantages and Limitations} Discriminative models are mainly data-driven methods where inference amounts to a simple evaluation of the model. In general, discriminative models are fast, robust to model mismatch, and high performing when trained with sufficient amounts of data. Discriminative models are attractive because of their practical utility and also their flexibility, in terms of being able to use the same model architecture and training procedure for different vision tasks. \begin{figure}[t!]
\centering \includegraphics[width=0.9\columnwidth]{figures/seg_net_illustration.pdf} \mycaption{Sample CNN architecture for semantic segmentation} {Semantic segmentation CNN architectures typically consist of convolution (Conv.), pooling (Pool) and $1\times1$ convolution layers (FC) interleaved with non-linearities (ReLU). The use of pooling results in a lower-resolution CNN output, which is generally up-sampled with interpolation, deconvolution and/or CRF techniques. CRF techniques also help in incorporating prior knowledge about semantic segmentation.} \label{fig:seg-net} \end{figure} On the other hand, discriminative models also have several limitations. \begin{itemize} \item Discriminative models are data hungry and typically fail to work where little data is available. The availability of large datasets for several vision tasks and of online annotation tools like Amazon Mechanical Turk~\cite{mturk} helps in mitigating this limitation. \item Since discriminative models tend to have a large number of parameters which are learned directly from data, it is difficult, when the performance is not as expected, to find the cause of the problem and modify the model accordingly. As a result, there are no guaranteed ways to find the right model architecture for a given task. These problems are generally handled either with regularization of model complexity or with a trial-and-error strategy over various model architectures. \item One of the fundamental limitations of discriminative models is the lack of standard approaches to inject prior knowledge into the models. This is especially true in the case of end-to-end trained models like CNNs. In the case of hand-crafted features, we can inject some prior context in the form of image or pixel features, but it is not easy to inject the knowledge of generative models into discriminative approaches such as CNNs.
For example, in the case of semantic segmentation, post-processing steps such as DenseCRFs are generally employed to model the relationship (prior knowledge) between the pixels and the output labels. Figure~\ref{fig:seg-net} shows a prominent CNN architecture for semantic segmentation. \item Discriminative models are task-specific, and a single trained model cannot easily be transferred to other vision problems. Although several transfer learning techniques~\cite{pan2010survey} exist that can help transfer knowledge across different discriminative models, they are generally task-specific and have had limited success. One of the main advantages of CNNs, in comparison to other discriminative models, is that a CNN trained on an image classification task has been shown to perform reasonably well on other related tasks, such as semantic segmentation and object detection, with only minor adaptations to the new task. \end{itemize} Recently, hybrid models combining generative and discriminative models have been proposed to alleviate some of the limitations of both and to make use of their complementary advantages. We will discuss these models in the next section. \section{Combining Generative and Discriminative Models} \label{sec:gen-disc-comb} As we have argued in the previous sections, generative and discriminative models have complementary advantages and limitations. Generative models have the advantage of incorporating prior knowledge but are slow, whereas discriminative models are fast and robust but make it difficult to incorporate prior knowledge. Typically, generative models suffer from high bias due to model mismatch, whereas discriminative models suffer from higher variance. In general, discriminative models work well and are robust to model mismatch when the available annotated training data is large. If the available data is small in comparison to the required model complexity, we need ways to constrain the model parameters through prior knowledge.
Generative models provide principled ways to incorporate such prior knowledge and can even make use of unlabelled data, which is generally abundant. The work of~\cite{jordan2002discriminative} is one of the first comparative studies of generative and discriminative models, leading to the common advice to use discriminative models when data is abundant and generative approaches otherwise. We hypothesize that combining generative and discriminative models can leverage the advantages of both. At the same time, combining these complementary models can also bring forward the limitations of both. Generative and discriminative models are often studied in isolation, but during the past decade several synergistic combinations of the two have been proposed. There are three ways in which generative and discriminative models can be combined: 1.~using a generative model to improve the model or the inference in a discriminative model (indicated as `Generative $\rightarrow $ Discriminative'); 2.~using a discriminative model to improve the model and/or the inference in a generative model (indicated as `Discriminative $\rightarrow $ Generative'); and 3.~hybrid generative and discriminative models (indicated as `Generative $\leftrightarrow $ Discriminative'). \subsubsection{Generative $\rightarrow $ Discriminative} One way to use generative models for improving discriminative models is feature extraction using generative models. The work of~\cite{jaakkola1999exploiting} showed that the gradients of a generative model can be used as features in discriminative models. These gradients are called `Fisher vectors' and are particularly useful for building kernel functions (Fisher kernels) that can be used in kernel-based techniques such as SVMs. Another popular way to incorporate generative prior knowledge is to impose prior constraints while training discriminative models.
For instance, CNN models can be trained with extra loss layers encoding the prior relationship between the output variables. The overall training loss is then a combination of the discriminative prediction loss and a generative prior loss. A related strategy for training CNNs is to first train a discriminative CNN using a generative prior loss with large amounts of unlabelled data, and then fine-tune the network using a discriminative prediction loss with limited labelled data. Instead of training a single discriminative model with prior constraints,~\cite{tu2010auto} proposed to train a sequence of discriminative predictors, each taking as input not only the input features but also features computed from the previous stage's predictions. This way, it is easy to incorporate prior constraints on the output target variables using features extracted from the predictions. This technique is called `Auto-Context'~\cite{tu2010auto}, and the sequence of predictors is usually trained with the stacked generalization method~\cite{wolpert1992stacked}. Despite being simple, the auto-context method has been shown to be powerful and useful in many vision problems (e.g.,~\cite{tu2010auto,jiang2009efficient,jampani15wacv}). More recently, structured prediction layers~\cite{ionescu2015matrix,zheng2015conditional,schwing2015fully,chandra2016fast} have been introduced into discriminative CNN frameworks. These layers are mainly adapted from the models and inference techniques of generative models. For example, in the case of semantic segmentation CNNs, mean-field inference in fully-connected CRFs can be formulated as recurrent neural network modules~\cite{domke2013learning,zheng2015conditional} and used to augment existing CNN architectures, resulting in better performance. The work of~\cite{chandra2016fast} proposed a way to incorporate Gaussian CRFs into end-to-end trained semantic segmentation CNNs.
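The auto-context idea discussed above can be sketched as a cascade of simple predictors, each consuming the raw features together with the previous stage's predictions (here, logistic regressors with hand-set weights stand in for the boosted classifiers used in the original work):

```python
import numpy as np

def stage_predict(feats, w):
    """A single discriminative stage; a simple logistic regressor
    (hypothetical stand-in for a boosted classifier)."""
    return 1.0 / (1.0 + np.exp(-feats @ w))

def auto_context(X, stage_weights):
    """Each stage sees the raw input features plus the previous stage's
    predictions, so later stages can exploit context in the outputs."""
    pred = np.zeros(len(X))
    for w in stage_weights:
        feats = np.column_stack([X, pred])  # input features + context
        pred = stage_predict(feats, w)
    return pred
```

With a positive context weight in the second stage, confident first-stage predictions reinforce the second stage's outputs, pushing them further towards 0 or 1.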
In Chapter~\ref{chap:bnn}, we generalize the standard spatial convolutions in CNNs to sparse high-dimensional filters and show that this can be used to incorporate structured prior knowledge into CNNs, resulting in a better model and inference. In Chapter~\ref{chap:binception}, we propose a specialized structured prediction module to be used in CNNs for dense pixel prediction tasks such as semantic segmentation. \subsubsection{Discriminative $\rightarrow $ Generative} Although generative models provide an elegant formalism for encoding prior knowledge about a problem, their use is mainly hampered by the difficulty of posterior inference. Since discriminative models directly characterize the posterior distribution, they have the potential to be useful for inference in a corresponding generative model. Since many generative models require approximate Bayesian inference for estimating the posterior distribution, some components of the Bayesian inference can be completely replaced with discriminative models. \textit{Inference machines}~\cite{Ross2011} are a successful example of such a technique. Inference machines pose message-passing inference in a given generative model as a sequence of computations that can be performed efficiently by trained discriminative models such as random forests. Instead of learning complex potential functions and computing messages between the variables, discriminative predictors are trained to directly pass the messages. This technique has been shown to perform well on real-world tasks~\cite{Ross2011,ramakrishna2014pose,shapovalov2013spatial} such as human pose estimation, 3D surface layout estimation and 3D point cloud estimation. Inference machines help bridge the gap between message-passing and random forest techniques. A similar technique~\cite{Heess2013} has also been shown to be useful for predicting messages in expectation propagation~\cite{Minka2001} inference in generative models.
More recently,~\cite{lin2015deeply} proposed to use discriminative deep learning models for predicting messages in message-passing inference. By completely replacing components of Bayesian inference with discriminative predictors, we lose the theoretical guarantees of the original inference techniques. However, discriminative models can still be used to improve the inference process. Data-driven Markov chain Monte Carlo (DDMCMC)~\cite{tu2002image} methods leverage discriminative models to speed up MCMC inference. DDMCMC methods have been used in image segmentation~\cite{tu2002image}, object recognition~\cite{zhu2000integrating}, and human pose estimation~\cite{lee2004proposal}. In Chapters~\ref{chap:infsampler} and~\ref{chap:cmp}, we propose principled techniques for leveraging discriminative models for Bayesian inference in inverse graphics and layered graphical models respectively. Another way of using discriminative models to improve generative approaches is to use a discriminative prediction loss for training the generative model parameters. This is called `discriminatively training generative models'~\cite{bouchard2004tradeoff,holub2005discriminative,yakhnenko2005discriminatively} and is akin to using a generative prior loss for training discriminative models. Such models are also called hybrid models~\cite{lasserre2006principled} (discussed further below) if different parameters are used for defining the discriminative and generative models. \subsubsection{Generative $\leftrightarrow $ Discriminative} It is possible to define both a discriminative and a generative model for the same task and train them together. This synergistic training can lead to a better model fit in both. With such hybrid models, it is possible to train with both unlabelled and labelled data together~\cite{lasserre2006principled,minka2005discriminative}.
Recent advances in deep learning have shown that neural network models can also serve as good approximators for generative models of images (e.g.,~\cite{dosovitskiy2015learning,gregor2015draw,theis2015generative}). Thus, it is possible to define a hybrid model with different neural networks approximating the corresponding generative and discriminative models for a task, and then train them together. One popular model in this category is `Auto-encoding variational Bayes'~\cite{kingma2013auto,rezende2014stochastic}. Here, a generative model with exponential family distributions is approximated with a neural network. At the same time, variational Bayesian posterior inference in that model is approximated with a different neural network. Both the generative and the discriminative (inference) networks are trained by minimizing the variational lower bound (Eq.~\ref{eqn:kl}). The work of~\cite{eslami2016attend} uses such hybrid models with recurrent neural networks and an attention mechanism to tackle vision problems involving an unknown number of objects in an image. These models have been shown to perform well on small-scale vision problems such as character recognition, but have not yet been scaled to mainstream vision problems. The formulation of such hybrid models is elegant and has the potential to be useful for many vision tasks. Very recently, in a similar spirit to auto-context,~\cite{saining16} proposed to learn a top-down CNN for capturing the contextual relationships between the target variables. The top-down generative CNN learns to predict the target variables from the surrounding target variables (context). This top-down CNN is then coupled with the original discriminative CNN to serve as top-down constraints on intermediate CNN representations. As an advantage over the auto-context framework, where different models are learned at different stages, a single discriminative model is learned and shown to be sufficient.
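The variational lower bound that jointly trains the generative and inference networks in such auto-encoding models can be sketched as follows, assuming a diagonal Gaussian approximate posterior, a standard normal prior, and (as an illustration) a Gaussian observation model, dropping additive constants:

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """Closed-form KL( N(mu, sigma^2) || N(0, I) ) for a diagonal
    Gaussian approximate posterior with log-variances logvar."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

def elbo(x, x_recon, mu, logvar):
    """Variational lower bound: a reconstruction term (generative network)
    minus the KL term (inference network), up to additive constants."""
    recon_loglik = -0.5 * np.sum((x - x_recon) ** 2)
    return recon_loglik - gaussian_kl(mu, logvar)
```

The bound is tight (zero here, up to the dropped constants) when the reconstruction is perfect and the approximate posterior matches the prior; any posterior mismatch or reconstruction error decreases it.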
Hybrid generative and discriminative CNN models are a very active area of research, with different architectures being proposed frequently; we have only discussed a few model architectures here. It is plausible that in the near future hybrid generative and discriminative models will dominate the field of computer vision. \section{Discussion and Conclusions} In this chapter, we have discussed various generative and discriminative computer vision models. Models form the core of any computer vision system; we chose to discuss some prominent models in relation to this thesis while highlighting the advantages and limitations of popular generative and discriminative models. Generative models are a conceptually elegant way to incorporate prior knowledge about a task, but their use is mainly hampered by the difficulty of posterior inference. Discriminative models, on the other hand, are fast and robust to model mismatch, but they require large amounts of labelled data, and there is a lack of standard approaches for incorporating prior knowledge into them. Hybrid generative and discriminative models, discussed in the previous section, try to bridge the gap between these two complementary approaches. The main aim of this thesis is to improve inference in various computer vision models. In Part I of the thesis, we concentrate on improving inference in generative vision models. We do this by learning separate discriminative models, and propose algorithms for better inference in two prominent classes of generative vision models, namely inverse graphics models (Chapter~\ref{chap:infsampler}) and layered graphical models (Chapter~\ref{chap:cmp}). In Part II of the thesis, we concentrate on improving inference in discriminative vision models. Since inference in discriminative models is a simple evaluation of the model, we propose techniques for modifying the model itself, enabling the introduction of prior knowledge into CNN models.
Specifically, we generalize the standard spatial convolutions in prominent CNN models to sparse high-dimensional filtering (Chapter~\ref{chap:bnn}) and then propose a neural network approach for propagating information across video frames (Chapter~\ref{chap:vpn}). In Chapter~\ref{chap:binception}, we propose a new CNN module that can be added to existing segmentation CNN architectures and that helps in image-adaptive filtering of intermediate CNN units. \chapter{The Informed Sampler} \label{chap:infsampler} \newcommand{INF-INDMH\xspace}{INF-INDMH\xspace} \newcommand{INF-MH\xspace}{INF-MH\xspace} \newcommand{INF-BMHWG\xspace}{INF-BMHWG\xspace} \newcommand{REG-MH\xspace}{REG-MH\xspace} \newcommand{INF-RFMH\xspace}{INF-RFMH\xspace} \newcommand{MH\xspace}{MH\xspace} \newcommand{MHWG\xspace}{MHWG\xspace} \newcommand{PT\xspace}{PT\xspace} \newcommand{BMHWG\xspace}{BMHWG\xspace} \newcommand{\hat{I}}{\hat{I}} \newcommand{\bar{\theta}}{\bar{\theta}} In the previous chapter, we briefly discussed generative models (Section~\ref{sec:gen-models}) along with their advantages and limitations. With all their intuitive appeal, beauty and simplicity, it is fair to say that the track record of generative models in computer vision is poor, which is mainly due to the difficulty of posterior inference. As a result, the computer vision community has favored efficient discriminative approaches. We still believe in the usefulness of generative models in computer vision, but argue that we need to leverage existing discriminative or even heuristic computer vision methods. In this chapter, we implement this idea in a principled way with an \emph{informed sampler}, a mixture sampling technique, and in careful experiments demonstrate its use on challenging generative models which contain renderer programs as components. We concentrate on posterior inference in `inverse graphics' models, briefly described in Section~\ref{sec:inv-graphics}.
With experiments on diverse `inverse graphics' models, we show that the informed sampler, using simple discriminative proposals based on existing computer vision technology, achieves significant improvements in inference. \section{Introduction} \label{sec:introduction} As discussed in Section~\ref{sec:inv-graphics}, modern computer graphics systems that leverage dedicated hardware produce a stunning level of realism at high frame rates. We believe that these systems will find their way into the design of generative models and will open up exciting modeling opportunities. This observation motivates the research question of this chapter: the design of a general technique for efficient posterior inference in accurate computer graphics systems. As such, it can be understood as an instance of \emph{Inverse Graphics}~\cite{baumgart1974inversegraphics}, which is briefly discussed in Section~\ref{sec:inv-graphics} and illustrated in Fig.~\ref{fig:teaser} with one of our applications. \begin{figure}[t] \begin{center} \centerline{\includegraphics[width=0.7\columnwidth]{figures/invGraphicsDemo.pdf}} \mycaption{An example `inverse graphics' problem}{A graphics engine renders a 3D body mesh and a depth image using an artificial camera. By `inverse graphics', we refer to the process of estimating the posterior probability over possible bodies given the depth image.} \label{fig:teaser} \end{center} \end{figure} The key problem in `inverse graphics' is the difficulty of posterior inference at test time. This difficulty stems from a number of reasons, as outlined in Section~\ref{sec:gen-adv-limits}. Our aim in this work is to devise an inference technique that is general and allows reuse across several different models and novel scenarios, while maintaining the correctness of the probabilistic estimates it produces.
One way to improve inference efficiency in generative models is to leverage existing computer vision features and discriminative models. In this chapter, we propose the \emph{informed sampler}, a Markov chain Monte Carlo (MCMC) method with discriminative proposal distributions. It can be understood as an instance of a data-driven MCMC method~\cite{zhu2000integrating}, and our aim is to design a method general enough to be applied across different problems rather than tailored to a particular application. During sampling, the informed sampler leverages computer vision features and discriminative models to make informed proposals for the state of the latent variables, and these proposals are accepted or rejected based on the generative model. The informed sampler is simple and easy to implement, but it enables inference in generative models that were out of reach for current \emph{uninformed} samplers. We demonstrate this claim on challenging models that incorporate rendering engines, object occlusion, ill-posedness, and multi-modality. We carefully assess convergence statistics for the samplers to verify the correctness of the probabilistic estimates they produce. Our informed sampler uses existing computer vision technology, such as histograms-of-oriented-gradients features (HOG)~\cite{dalal2005histograms} and the OpenCV library~\cite{bradski2008opencv}, to produce informed proposals. Likewise, one of our models is an existing computer vision model, the \emph{BlendSCAPE} model, a parametric model of human bodies~\cite{hirshberg2012coregistration}. In Section~\ref{sec:related-chap3}, we discuss related work, and we explain our informed sampler approach in Section~\ref{sec:model-chap3}. Section~\ref{sec:experiments-chap3} presents baseline methods and the experimental setup.
We then present an experimental analysis of the informed sampler on three diverse problems: estimating camera extrinsics (Section~\ref{sec:room}), occlusion reasoning (Section~\ref{sec:tiles}) and estimating body shape (Section~\ref{sec:bodyshape}). We conclude with a discussion of future work in Section~\ref{sec:discussion-chap3}. \section{Related Work} \label{sec:related-chap3} This work stands at the intersection of computer vision, computer graphics, and machine learning; it builds on the previous approaches discussed below. There is a vast literature on approaches that solve computer vision problems by means of generative models. We mention some works that likewise use an accurate graphics process as a generative model. This includes applications such as indoor scene understanding~\cite{del2012bayesian}, human pose estimation~\cite{lee2004proposal}, hand pose estimation~\cite{de2008model} and many more. Most of these works are, however, interested in inferring MAP solutions rather than the full posterior distribution. Our method is similar in spirit to \emph{Data-Driven Markov Chain Monte Carlo} (DDMCMC) methods, which use a bottom-up approach to help the convergence of MCMC sampling. DDMCMC methods have been used in image segmentation~\cite{tu2002image}, object recognition~\cite{zhu2000integrating}, and human pose estimation~\cite{lee2004proposal}. The idea of making Markov samplers data-dependent is very general, but in the works mentioned above it led to highly problem-specific implementations, mostly using approximate likelihood functions. Since these methods provide specialized solutions for a particular problem, they are not easily transferable to new problems. In contrast, we aim to provide a simple yet efficient and general inference technique for problems where an accurate generative model exists. Because our method is general, we believe it is easy to adapt to a variety of new models and tasks.
The idea to invert graphics~\cite{baumgart1974inversegraphics} in order to understand scenes also has roots in the computer graphics community under the term `inverse rendering'. The goal of inverse rendering, however, is to derive a direct mathematical model for the forward light transport process and then to analytically invert it. The work of~\cite{ramamoorthi2001signal} falls in this category. The authors formulate the light reflection problem as a convolution and then cast the inverse light transport problem as a deconvolution. While this is a very elegant way to pose the problem, it requires a specification of the inverse process, a requirement that generative modeling approaches try to circumvent. Our approach can also be viewed as an instance of a probabilistic programming approach. In the recent work of~\cite{mansinghka2013approximate}, the authors combine graphics modules in a probabilistic programming language to formulate an approximate Bayesian computation. Inference is then implemented using Metropolis-Hastings (MH\xspace) sampling. This approach is appealing in its generality and elegance; however, we show that for our graphics problems a plain MH\xspace sampling approach is not sufficient to achieve reliable inference, and that our proposed informed sampler achieves robust convergence in these challenging models. The work of~\cite{stuhlmueller2013nips} is similar to our proposed inference method in that knowledge about the forward process is learned as ``stochastic inverses'' and then applied for MCMC sampling in a Bayesian network. In the present work, we devise an MCMC sampler that works both for a multi-modal problem and for inverting an existing piece of image rendering code. In summary, our method can be understood in a similar context as the above-mentioned papers, including~\cite{mansinghka2013approximate}. 
\section{The Informed Sampler} \label{sec:model-chap3} In general, inference about the posterior distribution is challenging because for a complex model $p(\mathbf{x}|\mathbf{y})$ no closed-form simplifications can be made. This is especially true in the case that we consider, where $p(\mathbf{x}|\mathbf{y})$ corresponds to a graphics engine rendering images. Despite this apparent complexity, we observe the following: for many computer vision applications there exist well-performing discriminative approaches, which, given the image, predict some target variables $\mathbf{y}$ or distributions thereof. These do not correspond to the posterior distribution that we are interested in, but, \emph{intuitively}, the availability of discriminative inference methods should make the task of inferring $p(\mathbf{y}|\mathbf{x})$ easier. \emph{Furthermore}, a physically accurate generative model can be used in an offline stage prior to inference to generate as many samples as we would like or can afford computationally. Again, \emph{intuitively}, this should allow us to prepare and summarize useful information about the distribution in order to accelerate the test-time inference. Concretely, in our case we will use a discriminative method to provide a global density $T_G(\mathbf{y}|\mathbf{x})$, which we then use in a valid MCMC inference method. 
The standard Metropolis-Hastings Markov chain Monte Carlo (MCMC) algorithm is already described in Section~\ref{sec:inv-graphics} of the previous chapter: in each time-step, a proposal is made with a proposal distribution and is then either accepted or rejected based on the acceptance probability: \begin{enumerate} \item Propose a transition using a \textit{proposal distribution $T$} and the current state $\mathbf{y}_t$ \begin{equation*} \bar{\mathbf{y}} \sim T(\cdot|\mathbf{y}_t) \end{equation*} \item Accept or reject the transition based on the Metropolis-Hastings (MH) acceptance rule: \begin{equation*} \mathbf{y}_{t+1} = \left\{ \begin{array}{cl} \bar{\mathbf{y}}, & \textrm{rand}(0,1) < \min\left(1,\frac{\pi(\bar{\mathbf{y}}) T(\bar{\mathbf{y}} \to \mathbf{y}_t)}{\pi(\mathbf{y}_t) T(\mathbf{y}_t \to \bar{\mathbf{y}})} \right), \\ \mathbf{y}_t, & \textrm{otherwise.} \end{array} \right. \end{equation*} \end{enumerate} Refer to Section~\ref{sec:inv-graphics} for more details about MCMC sampling. Different MCMC techniques mainly differ in the type of the proposal distribution $T$. Next, we describe the informed proposal distribution that we use in standard Metropolis-Hastings sampling, resulting in our proposed `Informed Sampler' technique. \subsection{Informed Proposal Distribution} We use a common mixture kernel for Metropolis-Hastings (MH) sampling. Given the present target sample $\mathbf{y}_t$, the informed proposal distribution for MH sampling is given as: \begin{equation} T_{\alpha}(\cdot | \mathbf{x}, \mathbf{y}_t) = \alpha \: T_L(\cdot | \mathbf{y}_t) + (1-\alpha) \: T_G(\cdot | \mathbf{x}). \label{eq:proposal} \end{equation} Here $T_L$ is an ordinary \emph{local} proposal distribution, for example a multivariate normal distribution centered around the current sample $\mathbf{y}_t$, and $T_G$ is a \emph{global} proposal distribution independent of the current state. 
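As a minimal sketch of the MH acceptance rule above, consider a symmetric Gaussian random-walk proposal, for which the $T$-ratio in the acceptance probability cancels. The unnormalized log-target here is a toy standard normal, a stand-in for the rendering-based posteriors of this chapter:

```python
import numpy as np

def metropolis_hastings(log_target, y0, n_steps, step_size, rng):
    """Random-walk Metropolis: the proposal is symmetric, so the
    T-ratio in the MH acceptance probability cancels."""
    y = np.atleast_1d(np.asarray(y0, dtype=float))
    samples, n_accept = [], 0
    log_pi = log_target(y)
    for _ in range(n_steps):
        y_prop = y + step_size * rng.standard_normal(y.shape)
        log_pi_prop = log_target(y_prop)
        # accept with probability min(1, pi(y_prop)/pi(y))
        if np.log(rng.uniform()) < log_pi_prop - log_pi:
            y, log_pi = y_prop, log_pi_prop
            n_accept += 1
        samples.append(y.copy())
    return np.array(samples), n_accept / n_steps

rng = np.random.default_rng(0)
# unnormalized standard normal target
samples, ar = metropolis_hastings(lambda y: -0.5 * float(y @ y),
                                  y0=[3.0], n_steps=20000,
                                  step_size=1.0, rng=rng)
```

With the informed proposal of Eq.~\ref{eq:proposal}, the global component is asymmetric, so the $T$-ratio no longer cancels and must be evaluated explicitly.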
We inject knowledge by conditioning the global proposal distribution $T_G$ on the image observation $\mathbf{x}$. We learn the informed proposal $T_G(\cdot | \mathbf{x})$ discriminatively in an offline training stage using a non-parametric density estimator described below. The mixture parameter $\alpha \in [0,1]$ controls the contribution of each proposal; for $\alpha=1$ we recover MH\xspace. For $\alpha=0$ the proposal $T_{\alpha}$ is identical to $T_G(\cdot | \mathbf{x})$ and the resulting sampler is a valid Metropolized independence sampler~\cite{liu2001montecarlo}; we call this baseline method `Informed Independent MH\xspace' (INF-INDMH\xspace). For intermediate values, $\alpha\in(0,1)$, we combine local with global moves in a valid Markov chain. We call this method `Informed Metropolis Hastings' (INF-MH\xspace). \subsection{Discriminatively Learning $T_G$} The key step in the construction of $T_G$ is to include discriminative information about the sample $\mathbf{x}$. Ideally, we would hope for $T_G$ to propose global moves that improve mixing and even allow jumps between multiple modes, whereas the local proposal $T_L$ is responsible for exploring the density locally. To see that this is possible in principle, consider the case of a perfect global proposal where $T_G$ matches the true posterior distribution, that is, $T_G(\mathbf{y} | \mathbf{x})=P(\mathbf{y} | \mathbf{x})$. In this case, with $\alpha=0$, we would get independent samples because every proposal is accepted. In practice $T_G$ is only an approximation to the true posterior $P(\mathbf{y}|\mathbf{x})$. If the approximation is good enough, then the mixture of local and global proposals will have a high acceptance rate and explore the density rapidly. In principle, we can use any conditional density estimation technique for learning a proposal $T_G$ from samples. 
Typically, high-dimensional density estimation is difficult, and even more so in the conditional case; in our case, however, we have the true generating process available to provide example pairs $(\mathbf{y},\mathbf{x})$. Therefore we use a simple but scalable non-parametric density estimation method based on clustering a feature representation of the observed image, $v(\mathbf{x}) \in \mathbb{R}^d$. For each cluster we then estimate an unconditional density over $\mathbf{y}$ using kernel density estimation (KDE). We chose this simple setup since it can easily be reused in many different scenarios; in the experiments we solve diverse problems using the same method. This method yields a valid transition kernel for which detailed balance holds. In addition to the KDE estimate for the global transition kernel, we also experimented with a random forest approach that maps the observations to transition kernels $T_G$. More details will be given in Section~\ref{sec:bodyshape}. \begin{algorithm}[t] \caption{Learning a global proposal $T_G(\mathbf{y}|\mathbf{x})$} \label{alg:training} \begin{algorithmic} \STATE 1. Simulate $\{(\mathbf{y}^{(i)},\mathbf{x}^{(i)})\}_{i=1,\dots,n}$ from $p(\mathbf{x}|\mathbf{y}) \: p(\mathbf{y})$ \STATE 2. Compute a feature representation $v(\mathbf{x}^{(i)})$ \STATE 3. Perform k-means clustering of $\{v(\mathbf{x}^{(i)})\}_i$ \STATE 4. 
For each cluster $C_j \subset \{1,\dots,n\}$, fit a kernel density estimate $\textrm{KDE}(C_j)$ to the vectors $\mathbf{y}^{\{C_j\}}$ \end{algorithmic} \end{algorithm} \begin{algorithm}[t] \caption{INF-MH\xspace (Informed Metropolis-Hastings)} \label{alg:sampling} \begin{algorithmic} \STATE \textbf{Input:} observed image $\mathbf{x}$ \STATE $T_L$ $\leftarrow$ Local proposal distribution (Gaussian) \STATE $C$ $\leftarrow$ cluster for $v(\mathbf{x})$ \STATE $T_G$ $\leftarrow KDE(C)$ (as obtained by Alg.~\ref{alg:training}) \STATE $T = \alpha T_L + (1-\alpha) T_G$ \STATE $\pi(\mathbf{y}|\mathbf{x})$ $\leftarrow$ Posterior distribution $P(\mathbf{y}|\mathbf{x})$ \STATE Initialize $\mathbf{y}_1$ \FOR{$t=1$ {\bfseries to} $N-1$} \STATE 1. Sample $\bar{\mathbf{y}} \sim T(\cdot)$ \STATE 2. $\gamma = \min\left(1,\frac{\pi(\bar{\mathbf{y}}|\mathbf{x}) T(\bar{\mathbf{y}} \to \mathbf{y}_t)}{\pi(\mathbf{y}_t|\mathbf{x}) T(\mathbf{y}_t \to \bar{\mathbf{y}})} \right)$ \IF{rand$(0,1) < \gamma$} \STATE $\mathbf{y}_{t+1} = \bar{\mathbf{y}}$ \ELSE \STATE $\mathbf{y}_{t+1} = \mathbf{y}_t$ \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} For the feature representation, we leverage successful discriminative features and heuristics developed in the computer vision community. Different task specific feature representations can be used in order to provide invariance to small changes in $\mathbf{y}$ and to nuisance parameters. The main inference method remains the same across all problems. We construct the KDE for each cluster and we use a relatively small kernel bandwidth in order to accurately represent the high probability regions in the posterior. This is similar in spirit to using only high probability regions as ``darts'' in the \emph{Darting Monte Carlo} sampling technique of~\cite{sminchisescu2011generalized}. We summarize the offline training in Algorithm~\ref{alg:training}. 
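The offline training of Algorithm~\ref{alg:training} can be sketched as follows. This is a toy illustration with a hand-rolled Lloyd's k-means and a simple Gaussian KDE per cluster; the forward model and feature map below are stand-ins for the problem-specific simulators and features used in the experiments:

```python
import numpy as np

def kmeans(X, k, n_iter, rng):
    """Plain Lloyd's algorithm; returns centers and assignments."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    return centers, assign

def fit_global_proposal(Y, V, k, bandwidth, rng):
    """Cluster feature vectors V; store the y's of each cluster.
    Sampling the cluster's Gaussian KDE = pick a stored y at
    random and add N(0, bandwidth^2) noise."""
    centers, assign = kmeans(V, k, n_iter=20, rng=rng)
    clusters = [Y[assign == j] for j in range(k)]

    def sample_T_G(v, rng):
        j = ((centers - v) ** 2).sum(-1).argmin()  # cluster of v(x)
        ys = clusters[j]
        y = ys[rng.integers(len(ys))]
        return y + bandwidth * rng.standard_normal(y.shape)

    return sample_T_G

# Toy forward model: the feature is a noisy copy of y itself,
# with two well-separated regimes of y.
rng = np.random.default_rng(1)
Y = np.concatenate([rng.normal(-5, 0.1, (200, 1)),
                    rng.normal(+5, 0.1, (200, 1))])
V = Y + 0.05 * rng.standard_normal(Y.shape)
sample_T_G = fit_global_proposal(Y, V, k=2, bandwidth=0.1, rng=rng)
y_draw = sample_T_G(np.array([5.0]), rng)
```

A small bandwidth keeps the per-cluster KDE concentrated on the high-probability regions, as discussed above.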
At test time, this method has the advantage that, given an image $\mathbf{x}$, we only need to identify the corresponding cluster once using $v(\mathbf{x})$ in order to sample efficiently from the kernel density $T_G$. We show the full procedure in Algorithm~\ref{alg:sampling}. This method yields a transition kernel that is a mixture of a reversible symmetric Metropolis-Hastings kernel and a Metropolized independence sampler. The combined transition kernel $T$ is hence also reversible. Because the measure of each kernel dominates the support of the posterior, the kernel is ergodic and has the correct stationary distribution~\cite{brooks2011mcmchandbook}. This ensures correctness of the inference, and in the experiments we investigate the efficiency of the different methods in terms of convergence statistics. \section{Setup and Baseline Methods} \label{sec:experiments-chap3} In the remainder of the chapter we demonstrate the proposed method in three different experimental setups. For all experiments, we use four parallel chains initialized at different random locations sampled from the prior. The reported numbers are median statistics over multiple test images except when noted otherwise. \subsection{Baseline Methods} \paragraph{Metropolis Hastings (MH\xspace)} Described in Section~\ref{sec:inv-graphics}; it corresponds to $\alpha=1$, and we use a symmetric diagonal Gaussian proposal distribution centered at $\mathbf{y}_t$. \paragraph{Metropolis Hastings within Gibbs (MHWG\xspace)} We use a Metropolis-Hastings scheme within a Gibbs sampler, that is, we draw from one-dimensional conditional distributions for proposing moves and the Markov chain is updated along one dimension at a time. We further use a blocked variant of this MHWG\xspace sampler, where we update blocks of dimensions at a time, and denote it by BMHWG\xspace. 
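The MHWG\xspace baseline just described can be sketched as follows; a correlated 2D Gaussian serves as a toy stand-in for the rendering-based posteriors:

```python
import numpy as np

def mh_within_gibbs(log_target, y0, n_sweeps, step_size, rng):
    """Update one coordinate at a time with a 1D Metropolis step."""
    y = np.array(y0, dtype=float)
    samples = []
    for _ in range(n_sweeps):
        for d in range(len(y)):  # sweep over dimensions
            y_prop = y.copy()
            y_prop[d] += step_size * rng.standard_normal()
            if np.log(rng.uniform()) < log_target(y_prop) - log_target(y):
                y = y_prop
        samples.append(y.copy())
    return np.array(samples)

# correlated 2D Gaussian target (rho = 0.5), unnormalized
prec = np.linalg.inv(np.array([[1.0, 0.5], [0.5, 1.0]]))
log_target = lambda y: -0.5 * y @ prec @ y
rng = np.random.default_rng(2)
chain = mh_within_gibbs(log_target, [2.0, -2.0], 5000, 1.0, rng)
```

As the camera extrinsics experiment below shows, such one-dimensional updates work on mildly correlated targets but stall on strongly correlated, multi-modal posteriors.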
\paragraph{Parallel Tempering (PT\xspace)} We use Parallel Tempering to address the problem of sampling from multi-modal distributions~\cite{swendsen1986replicamontecarlo,geyer1991paralleltempering}. This technique is also known as ``replica exchange MCMC sampling''~\cite{hukushima1996exchange}. We run several parallel chains at different temperatures $T$, sampling $\pi(\cdot)^{\frac{1}{T}}$, and at each sampling step propose to exchange two randomly chosen chains. In our experiments we run three chains at temperature levels $T\in\{1,3,27\}$, which were found to work best out of all combinations in $\{1,3,9,27\}$ for each experiment individually. The highest temperature level corresponds to an almost flat distribution. \paragraph{Regeneration Sampler (REG-MH\xspace)} We implemented a regenerative MCMC method \cite{mykland1995regeneration} that performs adaption~\cite{gilks1998adaptive} of the proposal distribution during sampling. Adapting the proposal distribution with existing MCMC samples is not straightforward, as this would potentially violate the Markov property of the chain~\cite{atchade2005adaptivemcmc}. One approach is to identify \textit{times of regeneration} at which the chain can be restarted and the proposal distribution can be adapted using the samples drawn previously. Several approaches to identify good regeneration times in a general Markov chain have been proposed~\cite{athreya1978new, nummelin1978splitting}. We build on~\cite{mykland1995regeneration}, who proposed two \textit{splitting} methods for finding the regeneration times; here, we briefly describe the method that we implemented in this study. Let the present state of the sampler be $\mathbf{y}_t$ and let the independent global proposal distribution be $T_G$. 
When $\bar{\mathbf{y}} \sim T_G$ is accepted according to the MH acceptance rule, the probability of a regeneration is given by: \begin{equation} r(\mathbf{y}_t,\bar{\mathbf{y}}) = \left\{ \begin{array}{ll} \max\{ \frac{c}{w(\mathbf{y}_t)}, \frac{c}{w(\bar{\mathbf{y}})} \},& \textrm{if $w(\mathbf{y}_t)>c$ and $w(\bar{\mathbf{y}})>c$},\\ \max\{ \frac{w(\mathbf{y}_t)}{c}, \frac{w(\bar{\mathbf{y}})}{c} \},& \textrm{if $w(\mathbf{y}_t)<c$ and $w(\bar{\mathbf{y}})<c$},\\ 1, & \textrm{otherwise}, \end{array} \right. \label{reg_eq} \end{equation} where $c > 0$ is an arbitrary constant and $w(\mathbf{y}_t) = \frac{\pi(\mathbf{y}_t)}{T_G(\mathbf{y}_t)}$. The value of $c$ can be set to maximize the regeneration probability. At every sampling step, if a sample from the independent proposal distribution is accepted, we compute the regeneration probability using Equation~\ref{reg_eq}. If a regeneration occurs, the present sample is discarded and replaced with one from the independent proposal distribution $T_G$. We use the mixture kernel (Eq.~\ref{eq:proposal}) as proposal distribution and adapt only the global part $T_G(\cdot|\mathbf{x})$: it is initialized as the prior over target variables $P(\mathbf{y})$, and at times of regeneration we fit a KDE to the already drawn samples, which becomes the new adapted distribution $T_G$. For comparison we use the same mixture coefficient $\alpha$ as for INF-MH\xspace. Refer to \cite{mykland1995regeneration} for more details of this regeneration technique; in the work of~\cite{ahn2013distributed}, it is used with success in a Darting Monte Carlo sampler. 
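Equation~\ref{reg_eq} translates directly into code; here the importance weights $w(\mathbf{y}) = \pi(\mathbf{y})/T_G(\mathbf{y})$ are assumed to be supplied by the caller:

```python
def regeneration_prob(w_curr, w_prop, c):
    """Regeneration probability of Eq. (reg_eq), where
    w(y) = pi(y) / T_G(y) is the importance weight of a state
    and c > 0 is an arbitrary constant."""
    if w_curr > c and w_prop > c:
        return max(c / w_curr, c / w_prop)
    if w_curr < c and w_prop < c:
        return max(w_curr / c, w_prop / c)
    return 1.0  # the "otherwise" case, including w = c
```

For example, with $c=1$ and weights $2$ and $4$ the regeneration probability is $\max(1/2, 1/4) = 0.5$, while mixed weights on either side of $c$ always trigger a regeneration attempt with probability one.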
\subsection{MCMC Diagnostics}\label{sec:mcmcdiagnostics} We use established methods for monitoring the convergence of our MCMC method~\cite{kass1998roundtable,flegal2008mcmcdiagnostics}. In particular, we report the following diagnostics. We compare the different samplers with respect to the number of iterations instead of runtime; the forward graphics process significantly dominates the runtime, and therefore the iterations in our experiments correspond linearly to the runtime. \paragraph{Acceptance Rate (AR)} The ratio of accepted samples to the total Markov chain length. The higher the acceptance rate, the fewer samples we need to approximate the posterior. The acceptance rate indicates how well the proposal distribution approximates the true distribution \emph{locally}. \paragraph{Potential Scale Reduction Factor (PSRF)} The PSRF diagnostic~\cite{gelman1992psrf,brooks1998convergence} is derived by comparing within-chain variances with between-chain variances of sample statistics. It requires independent runs of multiple chains (four in our case) in parallel. Because our sample $\mathbf{y}$ is multi-dimensional, we estimate the PSRF for each parameter dimension separately and take the maximum as the final PSRF value. A value close to one indicates that all chains characterize the same distribution. This does not imply convergence, as the chains may all collectively miss a mode. However, a PSRF value much larger than one is a certain sign of the lack of convergence. The PSRF also indicates how well the sampler visits the different modes of a multi-modal distribution. \paragraph{Root Mean Square Error (RMSE)} During our experiments we have access to the input parameters $\mathbf{y}^*$ that generated the image. To assess whether the posterior distribution recovers the \emph{correct} value, we report the RMSE between the posterior expectation $\mathbb{E}_{P(\mathbf{y} |\mathbf{x})}[G(\mathbf{y})]$ and the value $G(\mathbf{y}^*)$ of the generating input. 
Since noise is added to the observation, we do not have access to the ground-truth posterior expectation, and therefore this measure is only an indicator. Under convergence, all samplers would agree on the same value. \subsection{Parameter Selection} For each sampler we individually selected the hyper-parameters that gave the best PSRF value after $10k$ iterations. In case the PSRF did not differ for multiple values, we chose the one with the highest acceptance rate. We include a detailed analysis of the baseline samplers and parameter selection in Appendix~\ref{appendix:chap3}. \section{Experiments} We study the use of the informed sampler in three different problem scenarios: estimating camera extrinsics (Section~\ref{sec:room}), occlusion reasoning (Section~\ref{sec:tiles}), and estimating body shape (Section~\ref{sec:bodyshape}). \subsection{Estimating Camera Extrinsics} \label{sec:room} We implement the following simple graphics scenario to create a challenging multi-modal problem. We render a cubical room of edge length 2 with a point light source in the center of the room, $(0,0,0)$, from a camera somewhere inside the room. The camera is described by its $(x,y,z)$-position and its orientation, specified by yaw, pitch, and roll angles. The inference process consists of estimating the posterior over these 6D camera parameters $\mathbf{y}$. See Fig.~\ref{fig:room} for two example renderings. Posterior inference is a highly multi-modal problem because the room is cubical and thus symmetric. There are 24 different camera parameter settings that result in the same image. This is also shown in Fig.~\ref{fig:room}, where we plot the position and orientation (but not camera roll) of all camera parameters that create the same image. A rendering of a $200\times 200$ image at $32$-bit precision using a single core of an Intel Xeon 2.66GHz machine takes about $11\textrm{ms}$ on average. 
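The target density for this experiment (specified precisely below: a Gaussian image likelihood with uniform priors over position and angles) can be sketched as follows. Here `toy_render` is a hypothetical smooth stand-in for the actual graphics engine $G$, and angle wrapping is ignored for simplicity:

```python
import numpy as np

SIGMA = 0.02  # observation noise standard deviation, as in the text

def toy_render(y, size=16):
    """Hypothetical stand-in for the renderer G: a smooth,
    deterministic image as a function of the 6 camera parameters."""
    t = np.linspace(0.0, 1.0, size * size)
    return np.sin(10.0 * np.outer(y, t)).sum(axis=0)

def log_posterior(y, x):
    """log p(y|x), up to a constant: Gaussian likelihood times
    uniform priors on position ([-1,1]) and angles ([-pi,pi])."""
    pos, ang = y[:3], y[3:]
    if np.any(np.abs(pos) > 1.0) or np.any(np.abs(ang) > np.pi):
        return -np.inf  # outside the uniform prior support
    r = x - toy_render(y)
    return -0.5 * (r @ r) / SIGMA ** 2

y_true = np.array([0.2, -0.3, 0.5, 0.1, 0.0, -0.2])
x_obs = toy_render(y_true)  # noise-free observation for this check
```

The true parameters maximize this sketch's log-posterior, while parameters outside the prior box score $-\infty$; every sampler in this section evaluates exactly this kind of unnormalized density.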
\begin{figure}[!h] \begin{center} \centerline{\includegraphics[width=.7\columnwidth]{figures/camPoses.pdf}} \mycaption{Sample images for the experiment on `Estimating Camera Extrinsics'} {Two rendered room images with possible camera positions and headings that produce the same image. Not shown are the orientations; in the left example all six headings can be rolled by 90, 180, and 270 degrees for the same image.} \label{fig:room} \end{center} \end{figure} A small amount of isotropic Gaussian noise is added to the rendered image $G(\mathbf{y})$, using a standard deviation of $\sigma=0.02$. The posterior distribution we try to infer is then: $p(\mathbf{y}|\mathbf{x}) \propto p(\mathbf{x}|\mathbf{y})p(\mathbf{y}) = \mathcal{N}(\mathbf{x}|G(\mathbf{y}),\sigma^2) \: \textrm{Uniform}(\mathbf{y})$. The uniform prior over the location parameters ranges between $-1.0$ and $1.0$, and the prior over the angle parameters is modeled with a wrapped uniform distribution over $[-\pi,\pi]$. To learn the informed part of the proposal distribution from data, we computed a histogram of oriented gradients (HOG) descriptor~\cite{dalal2005histograms} from the image, using 9 gradient orientations and cells of size $20\times20$, yielding a feature vector $v(\mathbf{x})\in\mathbb{R}^{900}$. We generated $300k$ training images using a uniform prior over the camera extrinsic parameters, and performed k-means clustering using $5k$ cluster centers based on the HOG feature vectors. For each cluster cell, we then computed and stored a KDE for the 6-dimensional camera parameters, following the steps in Algorithm~\ref{alg:training}. As test data, we create 30 images using extrinsic parameters sampled uniformly at random over their range. \subsubsection{Results} \label{sec:roomresults} We show results in Fig.~\ref{fig:camPose_ALL}. We observe that both MH\xspace and PT\xspace yield low acceptance rates (AR) compared to the other methods. 
However, parallel tempering appears to overcome the multi-modality better and improves over MH\xspace in terms of convergence. The same holds for the regeneration technique: we observe many regenerations, good convergence, and a good acceptance rate. Both INF-INDMH\xspace and INF-MH\xspace converge quickly. In this experimental setup, we have access to the 24 exact modes. We analyze how quickly the samplers visit the modes and whether or not they capture all of them. For every instance, the pairwise distances between the modes change; we therefore define `visiting a mode' in the following way. We compute a Voronoi tessellation with the modes as centers. A mode is visited if a sample falls into its corresponding Voronoi cell, that is, if the sample is closer to this mode than to any other. Sampling uniformly at random would quickly find the modes (depending on the cell sizes) but is not a valid sampler that characterizes the desired posterior distribution. We also experimented with balls of different radii around the modes and found a similar behavior to the one we report here. Figure~\ref{fig:camPose_ALL} (right) shows results for the various samplers. We find that INF-MH\xspace discovers the different modes more quickly than the baseline samplers. Sampling only from the global proposal distribution (INF-INDMH\xspace) initially visits more modes, as it is not held back by local steps, but is dominated by INF-MH\xspace over some range. This indicates that the mixture kernel takes advantage of both local and global moves; either one alone explores more slowly. In most examples, all samplers miss some modes under our definition; the average number of discovered modes is 21 for INF-MH\xspace and even lower for MH\xspace. Figure~\ref{fig:exp1_alpha:chap3} shows the effect of the mixture coefficient ($\alpha$) on the informed sampler INF-MH\xspace. 
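The mode-visit diagnostic described above can be sketched directly: a sample `visits' the mode whose Voronoi cell it falls into, i.e. its nearest mode.

```python
import numpy as np

def count_visited_modes(samples, modes):
    """Assign each sample to its nearest mode (the Voronoi cell it
    falls into) and count how many distinct modes are visited."""
    # pairwise squared distances: shape (n_samples, n_modes)
    d = ((samples[:, None, :] - modes[None, :, :]) ** 2).sum(axis=-1)
    visited = np.unique(d.argmin(axis=1))
    return len(visited)

modes = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
samples = np.array([[0.1, -0.2],   # near mode 0
                    [4.8, 0.3],    # near mode 1
                    [0.2, 0.1]])   # near mode 0 again
```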
Since there is no significant difference in PSRF values for $0 \le \alpha \le 0.7$, we chose $0.7$ due to its high acceptance rate. Likewise, the parameters of the baseline samplers were chosen based on the PSRF and acceptance rate metrics. See Appendix~\ref{appendix:chap3:room} for the analysis of the baseline samplers and the parameter selection. \begin{figure*}[!t] \begin{center} \centerline{\includegraphics[width=\textwidth]{figures/camPose_ALL.pdf}} \mycaption{Results of the `Estimating Camera Extrinsics' experiment} {Acceptance Rates (left), PSRFs (middle), and average number of modes visited (right) for different sampling methods. We plot the median/average statistics over 30 test examples.} \label{fig:camPose_ALL} \vspace{-0.3cm} \end{center} \end{figure*} \begin{figure*}[h] \centerline{\includegraphics[width=.9\textwidth]{figures/supplementary/camPose_alpha.pdf}} \mycaption{Role of the mixture coefficient} {PSRFs and acceptance rates corresponding to various mixture coefficients ($\alpha$) of INF-MH\xspace sampling in the `Estimating Camera Extrinsics' experiment.} \label{fig:exp1_alpha:chap3} \end{figure*} \begin{figure*}[!t] \begin{center} \centerline{\includegraphics[width=0.9\textwidth]{figures/camPose_AC.pdf}} \mycaption{Independence of obtained samples}{Auto-correlation of samples obtained by different sampling techniques in the camera extrinsics experiment, for each of the six extrinsic camera parameters.} \label{fig:camPose_AC} \vspace{-0.3cm} \end{center} \end{figure*} We also tested the MHWG\xspace sampler and found that it did not converge even after $100k$ iterations, with a PSRF value around 3. This is to be expected, since single-variable updates cannot traverse the multi-modal posterior distribution fast enough due to the high correlation of the camera parameters. In Fig.~\ref{fig:camPose_AC}, we plot the median auto-correlation of the samples obtained by the different sampling techniques, separately for each of the six extrinsic camera parameters. 
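The per-parameter auto-correlation underlying Fig.~\ref{fig:camPose_AC} can be computed as, for example:

```python
import numpy as np

def autocorrelation(chain, max_lag):
    """Normalized auto-correlation of a 1D chain up to max_lag."""
    x = chain - chain.mean()
    var = x @ x / len(x)
    return np.array([(x[:len(x) - k] @ x[k:]) / (len(x) * var)
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(3)
iid = rng.standard_normal(20000)   # independent samples
ar1 = np.empty(20000)              # strongly correlated AR(1) chain
ar1[0] = 0.0
for t in range(1, 20000):
    ar1[t] = 0.95 * ar1[t - 1] + rng.standard_normal()
acf_iid, acf_ar1 = autocorrelation(iid, 10), autocorrelation(ar1, 10)
```

A rapidly decaying auto-correlation indicates nearly independent samples; a slowly decaying one indicates a poorly mixing chain.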
The informed sampling approaches (INF-MH\xspace~and INF-INDMH\xspace) produce samples that are more independent than those of the baseline samplers. As expected, some knowledge of the multi-modal structure of the posterior needs to be available for the sampler to perform well. The methods INF-INDMH\xspace and INF-MH\xspace have this information and perform better than the baseline methods and REG-MH\xspace. \subsection{Occluding Tiles} \label{sec:tiles} In a second experiment, we render images depicting a fixed number of six square tiles, each placed at a random location $(x, y)$ in the image, at a random depth $z$, and with a random orientation $\theta$. We blur the image and add a small amount of Gaussian noise ($\sigma = 0.02$). An example is depicted in Fig.~\ref{fig:occlusion_main}(a); note that all the tiles are of the same size, but farther away tiles look smaller. A rendering of one $200 \times 200$ image takes about $25\textrm{ms}$ on average. As prior, we again use the uniform distribution over the 3D cube for the tile location parameters, and a wrapped uniform distribution over $[-\frac{\pi}{4},\frac{\pi}{4}]$ for the tile orientation angle. To avoid label switching issues, each tile is given a fixed color, which is not changed during inference. We chose this experiment such that it resembles the `dead leaves model' of~\cite{lee2001occlusion}, because it has properties that are commonplace in computer vision. It is a scene composed of several objects that are independent, except for occlusion, which complicates the problem. If occlusion did not exist, the task could be readily solved using a standard OpenCV~\cite{bradski2008opencv} rectangle-fitting algorithm ($\textrm{minAreaRect}$). The output of such an algorithm can be seen in Fig.~\ref{fig:occlusion_main}(c), and we use this algorithm as a discriminative source of information. This problem is higher dimensional than the previous one (24 dimensions, due to 6 tiles with 4 parameters each). 
Inference becomes more challenging in higher dimensions, and our approach without modification does not scale well with increasing dimensionality. One way to approach this problem is to factorize the joint distribution into blocks and learn informed proposals separately. In the present experiment, we observed that both the baseline samplers and the plain informed sampler fail when proposing all parameters jointly. Since the tiles are independent except for occlusion, we can approximate the full joint distribution as a product of block distributions, where each block corresponds to the parameters of a single tile. To estimate the full posterior distribution, we learn a global proposal distribution for each block separately and use a block-Gibbs-like scheme in our sampler, where we propose changes to one tile at a time, alternating between tiles. \begin{figure*}[th] \begin{center} \centerline{\includegraphics[width=\textwidth]{figures/OcclusionExp_pic.pdf}} \mycaption{A visual result in the `Occluding Tiles' experiment} {(a) A sample rendered image, (b) ground truth squares, (c) rectangle fitting with the OpenCV MinAreaRect algorithm, and most probable estimates from 5000 samples obtained by (d) the MHWG\xspace sampler (best baseline) and (e) the INF-BMHWG\xspace sampler. (f) Posterior expectation of the square boundaries obtained by INF-BMHWG\xspace sampling (the first 2000 samples are discarded as burn-in).} \label{fig:occlusion_main} \end{center} \end{figure*} The experimental protocol is the same as before: we render $500k$ images, apply the OpenCV MinAreaRect algorithm to fit rectangles, and take the four fitted parameters of each tile as features for clustering ($10k$ clusters). Again, a KDE distribution is fit to each cluster, and at test time we assign the observed image to its corresponding cluster. The KDE of the chosen cluster determines the global sampler $T_G$ for that tile. We then use $T_G$ to propose an update to all 4 parameters of the tile. 
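The block-update scheme can be sketched as follows. For simplicity this sketch uses symmetric Gaussian block proposals and a toy factorized target; in the actual method, the learned per-tile KDE proposals replace these, with the corresponding proposal-density terms appearing in the acceptance ratio:

```python
import numpy as np

def blocked_mh(log_target, y0, blocks, n_sweeps, step_size, rng):
    """Propose an update to one block of dimensions at a time and
    accept/reject with the usual MH rule (symmetric proposal)."""
    y = np.array(y0, dtype=float)
    samples = []
    for _ in range(n_sweeps):
        for block in blocks:  # e.g. the 4 parameters of one tile
            y_prop = y.copy()
            y_prop[block] += step_size * rng.standard_normal(len(block))
            if np.log(rng.uniform()) < log_target(y_prop) - log_target(y):
                y = y_prop
        samples.append(y.copy())
    return np.array(samples)

# Two "tiles" of 4 parameters each; independent Gaussian toy target
log_target = lambda y: -0.5 * y @ y
blocks = [np.arange(0, 4), np.arange(4, 8)]
rng = np.random.default_rng(4)
chain = blocked_mh(log_target, np.full(8, 2.0), blocks, 4000, 0.8, rng)
```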
We refer to this procedure as INF-BMHWG\xspace. Empirically we find $\alpha = 0.8$ to be optimal for INF-BMHWG\xspace sampling. An analysis of the various samplers is presented in Appendix~\ref{appendix:chap3:tiles}. \subsubsection{Results} An example visual result is shown in Fig.~\ref{fig:occlusion_main}. We find that the MH\xspace and INF-MH\xspace samplers fail entirely on this problem. Both use a proposal distribution over the entire state, and due to the high dimensionality there is almost no acceptance ($< 1\%$), so they do not reach convergence. The MHWG\xspace sampler, updating one dimension at a time, is found to be the best among the baseline samplers, with an acceptance rate of around $42\%$, followed by a block sampler that samples each tile separately. The OpenCV algorithm produces a reasonable initial guess but fails in occlusion cases. The median diagnostic curves for 10 test examples are shown in Fig.~\ref{fig:occlusion_ALL}. The block-wise informed sampler INF-BMHWG\xspace converges more quickly, with higher acceptance rates ($\approx 53\%$) and by far the lowest reconstruction error. In Fig.~\ref{fig:occlusion_main}(f), the posterior distribution is visualized: fully visible tiles are well localized, while the position and orientation of occluded tiles are more uncertain. Figure~\ref{fig:exp2_visual_more} in Appendix~\ref{appendix:chap3} shows more visual results. Although the model is relatively simple, all the baseline samplers perform poorly, and discriminative information is crucial to enable accurate inference. Here the discriminative information is provided by a readily available heuristic in the OpenCV library. \begin{figure}[t] \begin{center} \centerline{\includegraphics[width=\textwidth]{figures/occlusionExp_ALL.pdf}} \mycaption{Results of the `Occluding Tiles' experiment} {Acceptance Rates (left), PSRFs (middle), and RMSEs (right) for different sampling methods. 
Median results for 10 test examples.} \label{fig:occlusion_ALL} \end{center} \end{figure} This experiment illustrates a variation of the informed sampling strategy that can be applied to sampling from high-dimensional distributions. Inference in general high-dimensional distributions is an active area of research and intrinsically difficult. The occluding tiles experiment is simple but illustrates this point, namely that all non-block baseline samplers fail. Block sampling is a common strategy in such scenarios, and many computer vision problems have such a block structure. Again the informed sampler improves in convergence speed over the baseline method. Other techniques that produce better fits to the conditional (block-) marginals should give faster convergence. \subsection{Estimating Body Shape} \label{sec:bodyshape} The last experiment is motivated by a real-world problem: estimating the 3D body shape of a person from a single static depth image. With the recent availability of cheap active depth sensors, the use of RGBD data has become ubiquitous in computer vision~\cite{shao2013rgbd,han2013computervisionkinect}. To represent a human body, we use the \emph{BlendSCAPE} model~\cite{hirshberg2012coregistration}, which updates the originally proposed SCAPE model~\cite{anguelov2005scape} with better training and blend weights. This model produces a 3D mesh of a human body, as shown in Fig.~\ref{fig:meshVariances}, as a function of shape and pose parameters. The shape parameters allow us to represent bodies of many builds and sizes, and include a statistical characterization (being roughly Gaussian). These parameters control directions in deformation space, which were learned via PCA from a corpus of roughly 2000 3D mesh models registered to scans of human bodies. The pose parameters are joint angles which indirectly control local orientations of predefined parts of the model. 
Our model uses 57 pose parameters and any number of shape parameters to produce a 3D mesh with 10,777 vertices. We use the first 7 SCAPE components to represent the shape of a person; the camera viewpoint, orientation, and pose of the person are held fixed. Thus the rendering process takes $\mathbf{y}\in\mathbb{R}^7$, generates a 3D mesh representation of it, and projects it through a virtual depth camera to create a depth image of the person $\mathbf{x}$. This can be done at various resolutions; we chose $430\times 260$ with depth values represented as $32$-bit numbers in the interval $[0,4]$. On average, a full render pass takes about $28\textrm{ms}$. We add Gaussian noise with standard deviation of $0.02$ to the created depth image. See Fig.~\ref{fig:meshVariances}(left) for an example. We use very simple low-level features for the feature representation. In order to learn the global proposal distribution we compute depth histogram features on a $15\times 10$ grid on the image. For each cell we record the mean and variance of the depth values. Additionally we add the height and the width of the body silhouette as features, resulting in a feature vector $v(\mathbf{x})\in\mathbb{R}^{302}$. As normalization, each feature dimension is divided by the maximum value in the training set. We used $400k$ training images sampled from the standard normal prior distribution and $10k$ clusters to learn the KDE proposal distributions in each cluster cell. For this experiment, we also experimented with a different conditional density estimation approach using a forest of random regression trees~\cite{breiman1984cart,breiman2001random}. See Section~\ref{sec:forests} for a brief overview of random forests. In the previous experiments, utilizing the KDE estimates, the discriminative information entered through the feature representation.
If there were no relation between some observed features and the variables we are trying to infer, we would require a large number of samples to reliably estimate the densities in the different clusters. The regression forest can adaptively partition the parameter space based on observed features and is able to ignore uninformative features, and may thus lead to better fits of the conditional densities. It can be understood as an adaptive version of the k-means clustering technique, which relies solely on the chosen metric (Euclidean in our case). In particular, we use the same features as for k-means clustering but grow the regression trees using a mean square error criterion for scoring the split functions. A forest of 10 binary trees with a depth of 15 is grown, with the constraint of having a minimum of 40 training points per leaf node. Then, for each of the leaf nodes, a KDE is trained as before. At test time the regression forest yields a mixture of KDEs as the global proposal distribution. We denote this method as INF-RFMH\xspace in the experiments. Instead of using one KDE model for each cluster, we could also explore a regression approach, for example using a discriminative linear regression model to map observations into proposal distributions. By using informative covariates in the regression model, one should be able to overcome the curse of dimensionality. Such a semi-parametric approach would allow us to capture explicit parametric dependencies of the variables (for example linear dependencies) and combine them with non-parametric estimates of the residuals. We leave the exploration of this technique to future work. \begin{figure*}[t] \begin{center} \centerline{\includegraphics[width=\columnwidth]{figures/meshVariances_new.pdf}} \mycaption{Inference of body shape from a depth image} {A sample test result showing the result of 3D mesh reconstruction with the first 1000 samples obtained using our INF-MH\xspace sampling method.
We visualize the angular error (in degrees) between the estimated and ground truth edge and project onto the mesh.} \label{fig:meshVariances} \end{center} \end{figure*} Again, we chose parameters for all samplers individually, based on empirical mixing rates. For the informed samplers, we chose $\alpha = 0.8$ and a local proposal standard deviation of 0.05. The full analysis for all samplers is included in Appendix~\ref{appendix:chap3:body}. \subsubsection{Results} \label{sec:bodyresults} We tested the different approaches on 10 test images that are generated from shape parameters drawn from the standard normal prior distribution. Figure~\ref{fig:bodyShape_ALL} summarizes the results of the sampling methods. We make the following observations. The baseline methods MH\xspace, MHWG\xspace, and PT\xspace show inferior convergence results, and MH\xspace and PT\xspace also suffer from low acceptance rates. Just sampling from the distribution of the discriminative step (INF-INDMH\xspace) is not enough, because the low acceptance rate indicates that the global proposals do not represent the correct posterior distribution. However, combined with a local proposal in a mixture kernel, we achieve a higher acceptance rate, faster convergence, and a decrease in RMSE. The regression forest approach has slower convergence than INF-MH\xspace. In this example, the regeneration sampler REG-MH\xspace does not improve over the simpler baseline methods. We attribute this to rare regenerations, which may be improved with more specialized methods. We believe that our simple choice of depth image representation can also be significantly improved. For example, features can be computed from identified body parts, something that the simple histogram features do not take into account.
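The depth features used in this experiment are simple to compute. The following sketch (with an illustrative image layout, and $0$ assumed to mark background pixels) shows the per-cell depth means and variances on a $15\times 10$ grid plus the silhouette height and width, giving $15 \cdot 10 \cdot 2 + 2 = 302$ dimensions; the training-set normalization is omitted for brevity:

```python
import numpy as np

def depth_features(depth, grid=(15, 10), background=0.0):
    """302-dim feature vector: per-cell depth mean/variance + silhouette size."""
    h, w = depth.shape
    gy, gx = grid
    feats = []
    for i in range(gy):
        for j in range(gx):
            cell = depth[i * h // gy:(i + 1) * h // gy,
                         j * w // gx:(j + 1) * w // gx]
            feats += [cell.mean(), cell.var()]
    # silhouette bounding box: pixels in front of the background plane
    ys, xs = np.nonzero(depth != background)
    height = ys.max() - ys.min() + 1 if ys.size else 0
    width = xs.max() - xs.min() + 1 if xs.size else 0
    return np.array(feats + [height, width])

img = np.zeros((430, 260))              # resolution used in the text
img[100:300, 80:180] = 2.5              # a crude stand-in "person" at 2.5m depth
v = depth_features(img)
assert v.shape == (302,)                # 15 * 10 * 2 + 2
```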
In the computer vision literature, some discriminative approaches for pose estimation do exist, the most prominent being the influential work on pose recovery in parts for the Kinect Xbox system~\cite{shotton2011kinect}. In future work, we plan to use similar methods to deal with pose variation and complicated dependencies between parameters and observations. \begin{figure}[h] \begin{center} \centerline{\includegraphics[width=\columnwidth]{figures/bodyShape_ALL.pdf}} \mycaption{Results of the `Body Shape' experiment}{Acceptance Rates (left), PSRFs (middle), and RMSEs (right) for different sampling methods in the body shape experiment. Median results over 10 test examples.} \label{fig:bodyShape_ALL} \end{center} \end{figure} \subsubsection{3D Mesh Reconstruction} In Fig.~\ref{fig:meshVariances}, we show a sample 3D body mesh reconstruction result using the INF-MH\xspace sampler after only 1000 iterations. We visualize the difference between the posterior mean and the ground truth 3D mesh in terms of mesh edge directions. One can observe that most differences are in the belly region and the feet of the person. The retrieved posterior distribution allows us to assess the model uncertainty. To visualize the posterior variance we record the standard deviation of the edge directions over all mesh edges. This is back-projected to achieve the visualization in Fig.~\ref{fig:meshVariances}(right). We see that the posterior variance is higher in regions of higher error, that is, our model predicts its own uncertainty correctly~\cite{dawid1982calibration}. In a real-world body scanning scenario, this information will be beneficial; for example, when scanning from multiple viewpoints or in an experimental design scenario, it helps in selecting the next best pose and viewpoint to record. Fig.~\ref{fig:exp3_bodyMeshes}, in the appendix, shows more 3D mesh reconstruction results using our sampling approach.
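The uncertainty visualization described above reduces, per mesh edge, to the angular spread of the sampled edge directions around their mean direction. A minimal sketch on synthetic edge samples (the mesh data here is made up purely for illustration):

```python
import numpy as np

def edge_direction_std(edge_samples):
    """edge_samples: (S, E, 3) edge vectors from S posterior mesh samples.
    Returns per-edge std (degrees) of the angle to the mean edge direction."""
    unit = edge_samples / np.linalg.norm(edge_samples, axis=-1, keepdims=True)
    mean_dir = unit.mean(axis=0)
    mean_dir /= np.linalg.norm(mean_dir, axis=-1, keepdims=True)
    cos_ang = np.clip((unit * mean_dir).sum(-1), -1.0, 1.0)   # (S, E)
    return np.degrees(np.arccos(cos_ang)).std(axis=0)

rng = np.random.default_rng(0)
fixed = np.tile([1.0, 0.0, 0.0], (500, 1, 1))                 # certain edge
noisy = rng.normal([0.0, 1.0, 0.0], 0.2, size=(500, 1, 3))    # uncertain edge
spread = edge_direction_std(np.concatenate([fixed, noisy], axis=1))
assert spread[0] < spread[1]     # the jittered edge shows higher uncertainty
```

Back-projecting such per-edge values onto the mesh yields the kind of heat map shown in Fig.~\ref{fig:meshVariances}(right).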
\begin{figure}[!t] \begin{center} \centerline{\includegraphics[width=0.9\columnwidth]{figures/bodyMeasurements.pdf}} \mycaption{Body measurements with quantified uncertainty} {Box plots of three body measurements for three test subjects, computed from the first $10k$ samples obtained by the INF-MH\xspace sampler. Dotted lines indicate measurements corresponding to ground truth SCAPE parameters.} \label{bodyMeasurements} \end{center} \end{figure} \subsubsection{Body Measurements} Predicting body measurements has many applications, including clothing, sizing, and ergonomic design. Given pixel observations, one may wish to infer a distribution over measurements (such as height and chest circumference). Fortunately, our original shape training corpus includes a host of 47 different per-subject measurements, obtained by professional anthropometrists; this allows us to relate shape parameters to measurements. Among many possible forms of regression, regularized linear regression~\cite{zou2005regularization} was found to best predict measurements from shape parameters. This linear relationship allows us to transform any posterior distribution over SCAPE parameters into a posterior over measurements, as shown in Fig.~\ref{bodyMeasurements}. We report results for three randomly chosen subjects (S1, S2, and S3) on three of the 47 measurements. The dotted lines correspond to ground truth values. Our estimate not only faithfully recovers the true value but also yields a characterization of the full conditional posterior. \subsubsection{Incomplete Evidence} Another advantage of using a generative model is the ability to reason with missing observations. We perform a simple experiment by occluding a portion of the observed depth image. We use the same inference and learning code, with the same parametrization and features as in the non-occlusion case, but retrain the model to account for the changes in the forward graphics process.
The result of INF-MH\xspace, computed on the first $10k$ samples, is shown in Fig.~\ref{fig:occlusionMeshes}. The 3D reconstruction is reasonable even under large occlusion; the error and the edge direction variance did increase, as expected. \section{Discussion and Conclusions} \label{sec:discussion-chap3} This work proposes a method to incorporate discriminative methods into Bayesian inference in a principled way. We augment a sampling technique with discriminative information to enable inference with globally accurate generative models. Empirical results on three challenging and diverse computer vision experiments are discussed. We carefully analyze the convergence behavior of several different baselines and find that the informed sampler performs well across all scenarios. This sampler is applicable to general scenarios, and in this work we leverage the accurate forward process for offline training, a setting frequently found in computer vision applications. Our main focus is the generality of the approach: the inference technique should be applicable to many different problems rather than tailored to a particular problem. \begin{figure}[!t] \begin{center} \centerline{\includegraphics[width=1.0\columnwidth]{figures/occlusionMeshes_new.pdf}} \mycaption{Inference with incomplete evidence} {Mean 3D mesh and corresponding errors and uncertainties (std.\ deviations) in mesh edge directions, for the same test case as in Fig.~\ref{fig:meshVariances}, computed from the first $10k$ samples of our INF-MH\xspace sampling method with (bottom row) an occlusion mask in the image evidence. (Blue indicates small values and red indicates high values.)} \label{fig:occlusionMeshes} \end{center} \end{figure} We show that even for very simple scenarios, most baseline samplers perform poorly or fail completely. By including a global image-conditioned proposal distribution that is informed through discriminative inference we can improve sampling performance.
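The mixture kernel at the heart of the informed sampler can be illustrated in a toy one-dimensional setting. Here a hand-chosen Gaussian stands in for the learned image-conditioned KDE proposal, and the Metropolis--Hastings correction evaluates the full mixture density in both directions so that the chain targets the correct distribution even when the global proposal is imperfect; all densities and the target are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
log_p = lambda y: -0.5 * ((y - 2.0) / 0.5) ** 2          # unnormalized target
g_mu, g_sig = 1.8, 0.8                                   # imperfect "informed" fit
log_q_g = lambda y: -0.5 * ((y - g_mu) / g_sig) ** 2 - np.log(g_sig)
ALPHA, LOCAL_SIG = 0.8, 0.05                             # values from the text

def log_q(y_to, y_from):
    """Density of the full mixture kernel, needed for a valid MH ratio."""
    log_loc = -0.5 * ((y_to - y_from) / LOCAL_SIG) ** 2 - np.log(LOCAL_SIG)
    return np.logaddexp(np.log(ALPHA) + log_q_g(y_to),
                        np.log(1.0 - ALPHA) + log_loc)

y, chain = 0.0, []
for _ in range(20000):
    if rng.random() < ALPHA:
        prop = rng.normal(g_mu, g_sig)                   # global informed move
    else:
        prop = rng.normal(y, LOCAL_SIG)                  # local random walk
    log_acc = log_p(prop) + log_q(y, prop) - log_p(y) - log_q(prop, y)
    if np.log(rng.random()) < log_acc:
        y = prop
    chain.append(y)
post = np.array(chain[2000:])
assert abs(post.mean() - 2.0) < 0.15                     # recovers target mean
```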
We deliberately use simple learning techniques (KDEs on k-means cluster cells and a forest of regression trees) to enable easy reuse in other applications. Using stronger and more tailored discriminative models should lead to better performance. We see this as a way to combine top-down inference with bottom-up proposals in a probabilistic setting. There are several avenues for future work; we understand this method as an initial step in the direction of general inference techniques for accurate generative computer vision models. Identifying conditional dependence structure should improve results; recently, for example, \cite{stuhlmueller2013nips} used structure in Bayesian networks to identify such dependencies. One assumption in our work is that we use an accurate generative model. Relaxing this assumption to allow for more general scenarios, where the generative model is known only approximately, is important future work. In particular, for high-level computer vision problems such as scene or object understanding, there are no accurate generative models available yet, but there is a clear trend towards physically more accurate 3D representations of the world. This more general setting is different from the one we consider in this chapter, but we believe that some of the ideas can be carried over. For example, we could create the informed proposal distributions from manually annotated data that is readily available in many computer vision data sets. Another problem domain is trans-dimensional models, which require different sampling techniques such as reversible jump MCMC methods~\cite{green1995reversible,brooks2011mcmchandbook}. We are investigating general techniques to \emph{inform} this sampler in similar ways as described in this chapter.
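The `KDEs on k-means cluster cells' construction mentioned above can be sketched end-to-end on synthetic data. Cluster the feature vectors, store the training parameters falling into each cell, and propose by drawing a stored parameter from the cell nearest to the observed features and perturbing it with kernel noise. The k-means routine is hand-rolled and all details (feature map, bandwidth, cluster count) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm (keeps the old centroid if a cluster empties)."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[lab == j].mean(0) if (lab == j).any() else C[j]
                      for j in range(k)])
    return C, lab

# Synthetic "renders": parameter theta and a noisy feature map v(x)
theta = rng.uniform(-3, 3, size=(2000, 1))
feats = np.hstack([np.sin(theta), theta ** 2]) + rng.normal(0, 0.05, (2000, 2))

K, BANDWIDTH = 20, 0.1
centers, labels = kmeans(feats, K)
cells = [theta[labels == j] for j in range(K)]    # parameters per cluster cell

def propose(v, n=1):
    """Sample from the KDE attached to the cell nearest to features v."""
    j = np.argmin(((centers - v) ** 2).sum(-1))
    pts = cells[j][rng.choice(len(cells[j]), n)]  # KDE sample = data point ...
    return pts + rng.normal(0, BANDWIDTH, pts.shape)  # ... plus kernel noise

samples = propose(np.array([np.sin(1.0), 1.0]), n=500)
assert abs(samples.mean() - 1.0) < 0.5            # proposals near true theta
```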
We believe that generative models are useful in many computer vision scenarios and that the interplay between computer graphics and computer vision is a prime candidate for studying probabilistic inference and probabilistic programming~\cite{mansinghka2013approximate}. However, current inference techniques need to be improved on many fronts: efficiency, ease of use, and generality. Our method is a step in this direction: the informed sampler leverages the power of existing discriminative and heuristic techniques to enable a principled Bayesian treatment in rich generative graphics models. Our emphasis is on generality; we aimed to create a method that can be easily reused in other scenarios with existing code bases. The presented results are a successful example of the inversion of an involved rendering pass. In the future, we plan to investigate ways to combine existing computer vision techniques with principled generative models, with the aim of being general rather than problem specific.
\chapter{Consensus Message Passing} \label{chap:cmp} \tikzset{ cluster/.style={rectangle, minimum height=0.4cm, minimum width=0.4cm, inner sep=0.05cm, draw}, mylatent/.style={circle, minimum height=0.5cm, minimum width=0.5cm, inner sep=0.05cm, draw}, myfactor/.style={rectangle, minimum height=0.05cm, minimum width=0.05cm, fill=black, scale=0.75}, myinitfactor/.style={rectangle, minimum height=0.1cm, minimum width=0.1cm, fill=red, scale=0.75}, } \newcommand{Consensus Message Passing\@\xspace}{Consensus Message Passing\@\xspace} \newcommand{Consensus message passing\@\xspace}{Consensus message passing\@\xspace} \newcommand{consensus message passing\@\xspace}{consensus message passing\@\xspace} \newcommand{CMP\@\xspace}{CMP\@\xspace} \makeatletter \renewcommand{\thesubfigure}{\alph{subfigure}} \renewcommand{\@thesubfigure}{(\thesubfigure)\hskip\subfiglabelskip} \makeatother In the last chapter, we proposed a technique to speed up the sampling process for inverse graphics. Despite the faster convergence achieved by techniques like the informed sampler, sampling-based inference is often too slow for practical applications. An alternative inference approach in vision, which is often faster, is message-passing in factor graph models (see Section~\ref{sec:pgm}). Generative models in computer vision tend to be large, loopy, and layered, as discussed in Section~\ref{sec:pgm}. We find that widely-used, general-purpose message passing inference algorithms such as Expectation Propagation (EP) and Variational Message Passing (VMP) fail on the simplest of vision models. With these models in mind, we introduce a modification to message passing that learns to exploit their layered structure by passing \textit{consensus} messages that guide inference towards good solutions.
Experiments on a variety of problems show that the proposed technique leads to significantly more accurate inference results, not only when compared to standard EP and VMP, but also when compared to competitive bottom-up discriminative models. Refer to Section~\ref{sec:pgm} for an overview of factor graphs and message-passing inference. \section{Introduction} \label{sec:introduction} As discussed in Section~\ref{sec:gen-adv-limits}, perhaps the most significant challenge of the generative modeling framework is that inference can be very hard. Sampling-based methods run the risk of slow convergence, while message passing-based methods (which are the focus of this chapter) can converge slowly, converge to bad solutions, or fail to converge at all. Whilst significant efforts have been made to improve the accuracy of message passing algorithms (\textit{e.g.}\@\xspace by using structured variational approximations), many challenges remain, including difficulty of implementation, the problem of computational cost, and the question of how the structured approximation should be chosen. The work in this chapter aims to alleviate these problems for general-purpose message-passing algorithms. Our initial observation is that general-purpose message passing inference algorithms (\textit{e.g.}\@\xspace EP and VMP; \cite{Minka2001,Winn2005}) fail on even the simplest of computer vision models. We claim that in these models the failure can be attributed to the algorithms' inability to determine the values of a relatively small number of influential variables, which we call `global' variables. Without accurate estimation of these global variables, it can be very difficult for message passing to make meaningful progress on the other variables in the model. Latent variables in vision models are often organized in a layered structure (also discussed in Section~\ref{sec:pgm}), where the observed image pixels are at the bottom and high-level scene parameters are at the top.
Additionally, knowledge about the values of the variables at layer $l$ is sufficient to reason about any global variable at layer $l+1$. With these properties in mind, we develop a method called \emph{Consensus Message Passing\@\xspace} (CMP\@\xspace) that learns to exploit such layered structures and estimate global variables during the early stages of inference. Experimental results on a variety of problems show that CMP\@\xspace leads to significantly more accurate inference results while preserving the computational efficiency of standard message passing. The implication of this work is twofold. First, it adds a useful tool to the toolbox of techniques for improving general-purpose inference, and second, in doing so it overcomes a bottleneck that has restricted the use of model-based machine learning in computer vision. This chapter is organized as follows. In Section~\ref{sec:related-work-chap4} we discuss related work, and in Section~\ref{sec:method-chap4} we explain our CMP approach. We then present an experimental analysis of CMP with three diverse generative vision models in Section~\ref{sec:experiments-chap4}. We conclude with a discussion in Section~\ref{sec:discussion-chap4}. \section{Related Work} \label{sec:related-work-chap4} Inspiration for CMP\@\xspace stems from the kinds of distinctions that have been made for decades between so-called `intuitive', bottom-up, fast inference techniques, and iterative `rational' inference techniques~\citep{Hinton1990}. CMP\@\xspace can be seen as an implementation of such ideas in the context of message passing, where the consensus messages form the `intuitive' part of inference and the following standard message passing forms the `rational' part. Analogues to intuitive and rational inference also exist for sampling, where bottom-up techniques are used to compute proposals for MCMC, leading to significant speedup in inference~\citep{tu2002image, stuhlmueller2013nips} (Chapter~\ref{chap:infsampler}).
The works of~\cite{rezende2014stochastic} and~\cite{kingma2013auto} proposed techniques for learning the parameters of both the generative model and the corresponding recognition model. The idea of `learning to infer' also has a long history. Early examples include~\cite{Hinton1995}, where a dedicated set of `recognition' parameters are learned to drive inference. In more modern instances of such ideas \citep{Munoz2010, Ross2011, Domke2011, shapovalov2013spatial, Munoz2013}, message passing is performed by a sequence of predictions defined by a graphical model, and the predictors are jointly trained to ensure that the system produces coherent labelings. However, in these techniques, the resulting inference procedure no longer corresponds to the original (or perhaps to any) graphical model. An important distinction of CMP\@\xspace is that the predictors fit completely within the framework of message passing and final inference results correspond to valid fixed points in the original model of interest. Finally, we note recent works of~\cite{Heess2013} and~\cite{Eslami2014} that make use of regressors (neural networks and random forests, respectively) to learn to pass EP messages. These works are concerned with reducing the computational cost of computing individual messages and do not make any attempt to change the accuracy or rate of convergence in message passing inference as a whole. In contrast, CMP\@\xspace learns to pass messages specifically with the aim of reducing the total number of iterations required for accurate inference in a given generative model. \section{Consensus Message Passing\@\xspace} \label{sec:method-chap4} Consensus message passing\@\xspace exploits the layered characteristic of vision models in order to overcome the aforementioned inference challenges. For illustration, two layers of latent variables of such a model are shown in Fig.~\ref{fig:types-a} using factor graph notation (black). 
Here the latent variables below ($\mathbf{h}^b = \{ h^b_k\}$) are a function of the latent variables above ($\mathbf{h}^a = \{ h^a_k\}$) and the global variables $p$ and $q$, where $k$ ranges over pixels (in this case $|k|=3$). As we will see in the experiments that follow, this is a recurring pattern that appears in many models of interest in vision. For example, in the case of face modeling, the $\mathbf{h}^a$ variables correspond to the normals $\mathbf{n}_i$, the global variable $p$ to the light vector $\mathbf{l}$, and $\mathbf{h}^b$ to the shading intensities $s_i$ (see Fig.~\ref{fig:shading-model}). Our reasoning follows a recursive structure. Assume for a moment that in Fig.~\ref{fig:types-a}, the messages from the layer below to the inter-layer factors (blue) are both informative and accurate (\textit{e.g.}\@\xspace\ due to being close to the observed pixels). We will refer to these messages collectively as \textit{contextual messages}. It would be desirable, for purposes of both speed and accuracy, that we could ensure that the messages sent to the layer above ($\mathbf{h}^a$) are also accurate and informative. If we had access to an oracle that could give us the correct belief for the global variables ($p$ and $q$) for the image, we could send accurate initial messages from $p$ and $q$ and then compute informative and accurate messages from the inter-layer factors to the layer above. In practice, however, we do not have access to such an oracle. In this work we train regressors to \textit{predict} the values of the global variables given all the messages from the layer below. Should this prediction be good enough, the messages to the layer above will be informative and accurate, and the inductive argument will hold for further layers (if any) above in the factor graph. We describe how these regressors are trained in Section~\ref{sec:training-chap4}. 
The approach consists of the following two components: \begin{enumerate} \item Before inference, for each global variable in different layers of the model, we train a regressor to predict some oracle's value for the target variable given the values of all the messages from the layer below (\textit{i.e.}\@\xspace the \textit{contextual messages}, Fig.~\ref{fig:types-a}, blue), \item During inference, each regressor sends this belief in the form of a \textit{consensus message} (Fig.~\ref{fig:types-a}, red) to its target variable. \end{enumerate} In some models, it will be useful to employ a second type of CMP\@\xspace, illustrated graphically in Fig.~\ref{fig:types-b}, where global layer variables are absent and loops in the graphical model are due to global variables in other layers. In this case, a consensus message is sent to each variable in the latent layer above, given all the contextual messages. \begin{figure}[t] \centering \subfigure[Type A]{ \begin{tikzpicture} \matrix at (0, 0.2) [matrix, column sep=0.1cm, row sep=0.21cm,ampersand replacement=\&] { \node { }; \& \node (y1e) { $\vdots$ }; \& \node (y2e) { $\vdots$ }; \& \node (y3e) { $\vdots$ }; \& \node { }; \\ \node (a) [mylatent] { $p$ }; \& \node (y1) [mylatent] { $h^a_1$ }; \& \node (y2) [mylatent] { $h^a_2$ }; \& \node (y3) [mylatent] { $h^a_3$ }; \& \node (b) [mylatent] { $q$ }; \\ \& \& \& \& \\ \node { }; \& \node (cl1) [myfactor] { }; \& \node (cl2) [myfactor] { }; \& \node (cl3) [myfactor] { }; \& \node { }; \& \\ \& \& \& \& \\ \& \& \& \& \\ \& \& \& \& \\ \node { }; \& \node (x1) [mylatent] { $h^b_1$ }; \& \node (x2) [mylatent] { $h^b_2$ }; \& \node (x3) [mylatent] { $h^b_3$ }; \& \node { }; \\ \node { }; \& \node (x1e) { $\vdots$ }; \& \node (x2e) { $\vdots$ }; \& \node (x3e) { $\vdots$ }; \& \node { }; \\ }; \draw[red!20] (-1.9, 0) -- (1.8, 0); \draw[red, dotted] (-1.9, 0) -- (1.8, 0); \fill (a.south) ++ (0, -1.1) circle (3pt) [fill=red] { }; \fill (b.south) ++ (0, -1.1) circle (3pt) 
[fill=red] { }; \node[yshift=-1.45cm] at (a.south) { $\textcolor{red}{\Delta^p}$ }; \node[yshift=-1.45cm] at (b.south) { $\textcolor{red}{\Delta^q}$ }; \draw[-stealth, cyan] (0.1, -0.2) -- (0.1, 0.2); \draw[-stealth, cyan] (0.99, -0.2) -- (0.99, 0.2); \draw[-stealth, cyan] (-0.8, -0.2) -- (-0.8, 0.2); \fill (0.1, 0) circle (1pt) [fill=cyan] { }; \fill (0.99, 0) circle (1pt) [fill=cyan] { }; \fill (-0.8, 0) circle (1pt) [fill=cyan] { }; \draw[-stealth, red] (1.6, 0.25) -- (1.6, 0.9); \draw[-stealth, red] (-1.68, 0.25) -- (-1.68, 0.9); \draw (y1.north) -- (y1e); \draw (y2.north) -- (y2e); \draw (y3.north) -- (y3e); \draw (a.south) -- (cl1.north); \draw (a.south) -- (cl2.north); \draw (a.south) -- (cl3.north); \draw (b.south) -- (cl1.north); \draw (b.south) -- (cl2.north); \draw (b.south) -- (cl3.north); \draw (y1.south) -- (cl1); \draw (y2.south) -- (cl2); \draw (y3.south) -- (cl3); \draw [->] (cl1) -- (x1.north); \draw [->] (cl2) -- (x2.north); \draw [->] (cl3) -- (x3.north); \draw (x1.south) -- (x1e); \draw (x2.south) -- (x2e); \draw (x3.south) -- (x3e); \end{tikzpicture} \label{fig:types-a} } \subfigure[Type B]{ \begin{tikzpicture} \matrix at (0, 0.2) [matrix, column sep=0.4cm, row sep=0.217cm,ampersand replacement=\&] { \node (y1e) { $\vdots$ }; \& \node (y2e) { $\vdots$ }; \& \node (y3e) { $\vdots$ }; \\ \node (y1) [mylatent] { $h^a_1$ }; \& \node (y2) [mylatent] { $h^a_2$ }; \& \node (y3) [mylatent] { $h^a_3$ }; \\ \& \& \\ \node (cl1) [myfactor] { }; \& \node (cl2) [myfactor] { }; \& \node (cl3) [myfactor] { }; \\ \& \& \\ \& \& \\ \& \& \\ \node (x1) [mylatent] { $h^b_1$ }; \& \node (x2) [mylatent] { $h^b_2$ }; \& \node (x3) [mylatent] { $h^b_3$ }; \\ \node (x1e) { $\vdots$ }; \& \node (x2e) { $\vdots$ }; \& \node (x3e) { $\vdots$ }; \\ }; \draw[red!20] (-1.9, 0) -- (1.6, 0); \draw[red, dotted] (-1.9, 0) -- (1.6, 0); \fill (y1.south) ++ (-0.5, -0.98) circle (3pt) [fill=red] { }; \fill (y2.south) ++ (-0.5, -0.98) circle (3pt) [fill=red] { }; \fill (y3.south) 
++ (-0.5, -0.98) circle (3pt) [fill=red] { }; \node[xshift=-0.5cm, yshift=-1.3cm] at (y1.south) { $\textcolor{red}{\Delta^1}$ }; \node[xshift=-0.5cm, yshift=-1.3cm] at (y2.south) { $\textcolor{red}{\Delta^2}$ }; \node[xshift=-0.5cm, yshift=-1.3cm] at (y3.south) { $\textcolor{red}{\Delta^3}$ }; \draw[-stealth, cyan] (0.17, -0.2) -- (0.17, 0.2); \draw[-stealth, cyan] (1.35, -0.2) -- (1.35, 0.2); \draw[-stealth, cyan] (-1.03, -0.2) -- (-1.03, 0.2); \fill (0.17, 0) circle (1pt) [fill=cyan] { }; \fill (1.35, 0) circle (1pt) [fill=cyan] { }; \fill (-1.03, 0) circle (1pt) [fill=cyan] { }; \draw[-stealth, red] (-0.46, 0.23) -- (-0.13, 0.87); \draw[-stealth, red] (-1.65, 0.23) -- (-1.32, 0.87); \draw[-stealth, red] (0.71, 0.23) -- (1.04, 0.87); \draw (y1.north) -- (y1e); \draw (y2.north) -- (y2e); \draw (y3.north) -- (y3e); \draw (y1.south) -- (cl1); \draw (y2.south) -- (cl2); \draw (y3.south) -- (cl3); \draw [->] (cl1) -- (x1.north); \draw [->] (cl2) -- (x2.north); \draw [->] (cl3) -- (x3.north); \draw (x1.south) -- (x1e); \draw (x2.south) -- (x2e); \draw (x3.south) -- (x3e); \end{tikzpicture} \label{fig:types-b} } \mycaption{Consensus message passing\@\xspace}{Vision models tend to be large, layered and loopy. (a)~Two adjacent layers of the latent variables of a model of this kind (black). In CMP\@\xspace, consensus messages (red) are computed from contextual messages (blue) and sent to global variables ($p$ and $q$), guiding inference in the layer. (b)~Consensus message passing\@\xspace of a different kind for situations where loops in the graphical model are due to global variables in other layers.} \label{fig:types} \end{figure} Any message passing schedule can be used subject to the constraint that the consensus messages are given maximum priority within a layer and that they are sent bottom up. Naturally, a consensus message can only be sent after its contextual messages have been computed. 
It is desirable to be able to ensure that the fixed point (result at convergence) reached under this scheme is also a fixed point of standard message passing in the model. One approach for this is to reduce the certainty of the consensus messages over the course of inference, or to only pass them in the first few iterations. In our experiments we found that even passing consensus messages only in the first iteration led to accurate inference, and therefore we follow this strategy for the remainder of the chapter. It is worth emphasizing that message-passing equations remain unchanged and we used the same scheduling scheme in all our experiments (\textit{i.e.}\@\xspace no need for manual tuning). It is important to highlight a crucial difference between consensus message passing\@\xspace and heuristic \textit{initialization}. In the latter, predictions are made from the \textit{observations} no matter how high up in the hierarchy the target variable is, whereas in CMP\@\xspace predictions are made using \textit{messages} that are sent from variables immediately below the target variables of interest. The CMP prediction task is much simpler, since the relationship between the target variables and the variables in the layer immediately below is much less complex than the relationship between the target variables and the observations. Furthermore, we know from the layered structure of the model that all relevant information from the observations is contained in the variables in the layer below. This is because target variables at layer $l+1$ are conditionally independent of all layers $l-1$ and below, given the values of layer $l$. One final note on the capacity of the regressors. Of course it is true that an infinite capacity regressor can make perfect predictions given enough data (whether using CMP\@\xspace or heuristic initialization). 
However, we are interested in practical ways of obtaining accurate results for models of increasing complexity, where regressor capacity and training data are inevitably limited. One important feature of CMP\@\xspace is that it makes use of predictors in a scalable way, since regressions are only made between adjacent latent layers. \subsection{Predicting Messages for CMP} \label{sec:training-chap4} To recap, the goal is to perform inference in a layered model of observed variables $\mathbf{x}$ with latent variables $\mathbf{h}$. Each predictor $\Delta^t$ (with target $t$) is a function of a collection of its contextual messages $\mathbf{c} = \{ c_k \}$ (incoming from the latent layer below $\mathbf{h}^b$) that produces the consensus message $m$, \textit{i.e.}\@\xspace $m = \Delta^t(\mathbf{c}).$ We adopt an approach in which we \textit{learn} a function for this task that is parameterized by $\bm{\theta}$, \textit{i.e.}\@\xspace $\overline{m} \equiv \mathcal{F}(\mathbf{c}|\bm{\theta}).$ This can be seen as an instance of the canonical regression task. For a given family of regressors $\mathcal{F}$, the goal of training is to find parameters $\bm{\theta}$ that capture the relationship between context and consensus message pairs $\{ (\mathbf{c}_d, m_d) \}_{d=1...D}$ in some set of training examples. \subsubsection{Choice of Predictor Training Data} First we discuss how this training data is obtained. It can come from at least three different sources: \textbf{1.\,\,Beliefs at Convergence.} Standard message passing inference is run in the model for a large number of iterations until convergence and for a collection of different observations $\{ \mathbf{x}_d \}$. Message passing is scheduled in precisely the same way as it would be if CMP\@\xspace were present; however, no consensus messages are sent.
For each observation $\mathbf{x}_d$, the collection of the marginals of the latent variables in the layer below the predictor ($\mathbf{h}^b_d = \{ h^b_{dk} \}$; see \textit{e.g.}\@\xspace Fig.~\ref{fig:types-a}) at the \textit{first} iteration of message passing is considered to be the context $\mathbf{c}_d$, and the marginal of the target variable $t$ at the \textit{last} iteration of message passing is considered to be the oracle message $m_d$. The intuition is that during inference on new problems, a predictor trained in this way would send messages that \textit{accelerate} convergence to the fixed-point that message passing would have reached by itself anyway. This technique is only useful if standard message passing works but is slow. \textbf{2.\,\,Samples from the Model.} First a collection of samples from the model is generated, giving us both the observation $\mathbf{x}_d$ and its corresponding latent variables $\mathbf{h}_d$ for each sample. Standard message passing inference is then run on the observations $\{ \mathbf{x}_d \}$ only for a single iteration. Message passing is scheduled as before. For each observation $\mathbf{x}_d$, the marginals of the latent variables in the layer below $\mathbf{h}^b_d$ at the \textit{first} iteration of message passing form the context $\mathbf{c}_d$, and the oracle message $m_d$ is considered to be a point-mass centered at the sampled value of the target variable $t$. The intuition is that during inference on new problems, a predictor trained in this way would send messages that guide inference to a fixed-point in which the marginal of the target variable $t$ is close to its sampled value. This technique is useful if standard message passing fails to reach good fixed points no matter how long it is run for. \textbf{3.\,\,Labelled Data.} As above, except the latent variables of interest $\mathbf{h}_d$ are set from real data instead of being sampled from the model. 
The oracle message $m_d$ is therefore a point-mass centered at the label provided for the target variable $t$ for observation $\mathbf{x}_d$. The aim is that during inference on new problems, a predictor trained in this way would send messages that guide inference to a fixed-point in which the marginal of the target variable $t$ is close to its labelled value, even in the presence of a degree of model mismatch. We demonstrate each of the strategies in the experiments in Section~\ref{sec:experiments-chap4}. \subsubsection{Random Regression Forests for CMP} Our goal is to learn a mapping $\mathcal{F}$ from contextual messages $\mathbf{c}$ to the consensus message $m$ from training data $\{ (\mathbf{c}_d, m_d) \}_{d=1...D}$. This is challenging since the inputs and outputs of the regression problem are both messages (\textit{i.e.}\@\xspace distributions), and special care needs to be taken to account for this fact. We follow closely the methodology of~\cite{Eslami2014}, which uses random forests to predict outgoing messages from a factor given the incoming messages to it. Please refer to Section~\ref{sec:forests} for a brief review of random forests. In approximate message passing (\textit{e.g.}\@\xspace EP~\cite{Minka2001} and VMP~\cite{Winn2005}), messages can be represented using only a few numbers, \textit{e.g.}\@\xspace a Gaussian message can be represented by its natural parameters. We represent the contextual messages $\mathbf{c}$ collectively, in two different ways: first as a concatenation of the parameters of its constituent messages, which we refer to as `regression parameterization' and denote by $\mathbf{r}_\textrm{c}$; and second as a vector of features computed on the set, which we refer to as `tree parameterization' and denote by $\mathbf{t}_\textrm{c}$. This parameterization typically contains features of the set as a whole (\textit{e.g.}\@\xspace moments of their means). 
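For Gaussian contextual messages, the two parameterizations can be sketched as follows (an illustrative choice of set features; the feature sets actually used are model-specific and described in Section~\ref{sec:experiments-chap4}):

```python
import numpy as np

def gaussian_natural_params(mean, variance):
    """A Gaussian message N(mean, variance) represented by its
    natural parameters (precision times mean, precision)."""
    precision = 1.0 / variance
    return np.array([mean * precision, precision])

def regression_parameterization(messages):
    """r_c: concatenation of the parameters of the constituent
    contextual messages (each given as a (mean, variance) pair)."""
    return np.concatenate([gaussian_natural_params(m, v)
                           for m, v in messages])

def tree_parameterization(messages):
    """t_c: features of the set as a whole, e.g. moments of the
    message means (a minimal illustrative subset)."""
    means = np.array([m for m, _ in messages])
    return np.array([means.mean(), means.var(), means.min(), means.max()])
```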
We represent the outgoing message $m$ by a vector of real-valued numbers $\mathbf{r}_\textrm{m}$. \textbf{Prediction Model.} Each leaf node is associated with a subset of the labelled training data. During testing, a previously unseen set of contextual messages represented by $\mathbf{t}_\textrm{c}$ traverses the tree until it reaches a leaf which by construction is likely to contain similar training examples. Therefore, we use the statistics of the data gathered in that leaf to predict the consensus message with a multivariate regression model of the form: $\mathbf{r}_\textrm{m} = \mathbf{W} \cdot \mathbf{r}_\textrm{c} + \epsilon$ where $\epsilon$ is a vector of normal error terms. We use the learned matrix of coefficients $\mathbf{W}$ at test time to make predictions $\overline{\mathbf{r}}_\textrm{m}$ for a given $\mathbf{r}_\textrm{c}$. To recap, $\mathbf{t}_\textrm{c}$ is used to traverse the contextual messages down to leaves, and $\mathbf{r}_\textrm{c}$ is used by a linear regressor to predict the parameters $\mathbf{r}_\textrm{m}$ of the consensus message. \textbf{Training Objective Function.} Recall the training procedure of random forests from Section~\ref{sec:forests}. Each node in a tree represents a partition of feature space, and the split function at each node is chosen in a greedy manner by minimizing a splitting criterion $E$. A common split criterion, which we also use here, is the sum of the data likelihood in the node's left and right child clusters (see Eq.~\ref{eqn:forest_energy}). We use the `fit residual' as defined in~\citep{Eslami2014} as the likelihood (model fit) function for optimizing splits at each node. In other words, this objective function splits the training data at each node in such a way that the relationship between the incoming and outgoing messages is well captured by the regression model in each child. 
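The leaf-level linear model $\mathbf{r}_\textrm{m} = \mathbf{W} \cdot \mathbf{r}_\textrm{c} + \epsilon$ can be sketched with an ordinary least-squares fit over the training pairs that reach a leaf (bias term and any sparsity-inducing regularization are omitted for brevity):

```python
import numpy as np

def fit_leaf_regressor(R_c, R_m):
    """Fit W in r_m = W . r_c by least squares over the (r_c, r_m)
    pairs stored at a leaf. R_c: (D, d_in) context parameters,
    R_m: (D, d_out) oracle message parameters.
    Returns W with shape (d_out, d_in)."""
    W_T, *_ = np.linalg.lstsq(R_c, R_m, rcond=None)
    return W_T.T

def predict_consensus_params(W, r_c):
    """Predicted parameters of the consensus message for a context."""
    return W @ r_c
```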
\textbf{Ensemble Model.} During testing, a set of contextual messages simultaneously traverses every tree in the forest from their roots until it reaches their leaves. Combining the predictions into a single forest prediction might be done by averaging the parameters $\overline{\mathbf{r}}_\textrm{m}^t$ of the predicted messages $\overline{m}^t$ by each tree $t$; however, this would be sensitive to the chosen parameterization for the messages. Instead we compute the moment average $\overline{m}$ of the distributions $\{ \overline{m}^t \}$ by averaging the first few moments of the predictions across trees, and solving for the distribution parameters which match the averaged moments (see \textit{e.g.}\@\xspace~\cite{Grosse2013}). \section{Experiments} \label{sec:experiments-chap4} We first illustrate the application of CMP\@\xspace to two diagnostic models: one of circles and a second of squares. We then use the approach to improve inference in a more challenging vision model: intrinsic images of faces. In the first experiment, the predictors are trained on beliefs at convergence, in the second on samples from the model, and in the third on annotated labels, showcasing various use-cases of CMP\@\xspace. We show that in all cases, the proposed technique leads to significantly more accurate inference results whilst preserving the computational efficiency of message passing. The experiments were performed using Infer.NET~\cite{InferNET2012} with default settings, unless stated otherwise. For random forest predictors, we set the number of trees in each forest to 8. 
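The moment-averaging step used to combine the per-tree predictions can be made concrete for Gaussian messages: average the first two moments across the trees, then solve for the Gaussian whose moments match the averages (a minimal sketch; see~\cite{Grosse2013} for the general construction):

```python
import numpy as np

def moment_average_gaussians(means, variances):
    """Moment average of the per-tree Gaussian predictions: average
    E[x] and E[x^2] across trees and convert back to (mean, variance).
    Unlike averaging natural parameters directly, this combination
    rule does not depend on the chosen message parameterization."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m1 = means.mean()                     # averaged first moment
    m2 = (variances + means ** 2).mean()  # averaged second moment
    return m1, m2 - m1 ** 2
```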
\subsection{A Generative Model of Circles} \label{sec:circle} \begin{figure}[t] \centering \subfigure[]{ \setlength\fboxsep{-0.3mm} \setlength\fboxrule{0.5pt} \parbox[b]{3.1cm}{ \fbox{\includegraphics[width=\linewidth]{figures/Rotate4_Random_XData_1}} \\ \vspace{1mm} \fbox{\includegraphics[width=\linewidth]{figures/Rotate4_Random_XData_3}} } \hspace{0.1cm} \label{fig:circle-data} } \subfigure[]{ \begin{tikzpicture} \draw[red!20] (-2.3, 2.5) -- (2, 2.5); \draw[red, dotted] (-2.3, 2.5) -- (2, 2.5); \node[obs] (x) {$\mathbf{x}_i$}; % \node[latent, above=of x, yshift=-2mm] (z) {$\mathbf{z}_i$}; % \factor[above=of x] {noise} {left:Gaussian} {} {}; % \factor[above=of z, yshift=10mm] {sum} {left:Sum} {} {}; % \node[latent, right=of sum] (c) {$\mathbf{c}$}; % \node[latent, above=of sum, yshift=-7mm] (p) {$\mathbf{p}_i$}; % \factor[above=of p] {circle} {above:Circle\,\,\,\,\,} {} {}; % \factor[above=of c] {pc} {} {} {}; % \node[latent, left=of circle] (a) {$a_i$}; % \node[latent, right=of circle] (r) {$r$}; % \factor[above=of a] {pa} {} {} {}; % \factor[above=of r] {pr} {} {} {}; % \factoredge {z} {noise} {x}; % \factoredge {p} {sum} {}; % \factoredge {a} {circle} {p}; % \factoredge {r} {circle} {}; % \factoredge {c} {sum} {z}; % \factoredge {} {pc} {c}; % \factoredge {} {pa} {a}; % \factoredge {} {pr} {r}; % \plate {} {(pa) (a) (p) (z) (x)} {}; % \fill (c.south) ++ (0, -0.52) circle (3pt) [fill=red] { }; \draw [-stealth, red] (1.45, 2.65) -- (1.45, 2.95); \node[yshift=-0.9cm] at (c.south) { $\textcolor{red}{\Delta^c}$ }; \end{tikzpicture} \label{fig:circle-model} } \mycaption{The circle problem}{(a)~Given a sample of points on a circle (black), we wish to infer the circle's center (red) and its radius. Two sets of samples are shown. 
(b)~The graphical model for this problem.} \label{fig:circle} \end{figure} \begin{figure}[t] \centering \subfigure[Center]{ \includegraphics[width=0.46\linewidth]{figures/Rotate4_Random_Center_Distance_10} } \subfigure[Radius]{ \includegraphics[width=0.46\linewidth]{figures/Rotate4_Random_Radius_Distance_10} } \mycaption{Accelerated inference using CMP\@\xspace for the circle problem}{(a)~Distance of the mean of the marginal posterior of center $c$ from its true value as a function of the number of inference iterations (Forest: direct prediction, MP: standard VMP, CMP\@\xspace: VMP with consensus). Consensus message passing\@\xspace significantly accelerates convergence. (b)~Similar plot for radius $r$. } \label{fig:circle-results} \end{figure} We begin by studying the behavior of standard message passing on a simplified Gauss and Ceres problem~\citep{Teets1999}. Given a noisy sample of points $\mathbf{x} = \{ \mathbf{x}_i \}_{i=1...N}$ on a circle in the 2D plane (Fig.~\ref{fig:circle-data}, black, $\mathcal{N}(0,0.01)$ noise on each axis), the aim is to infer the coordinates of the circle's center $\mathbf{c}$ (Fig.~\ref{fig:circle-data}, red) and its radius $r$. We can express the data generation process using a graphical model (Fig.~\ref{fig:circle-model}). The Cartesian point $(0, r)$ is rotated $a_i$ radians to generate $\mathbf{p}_i$, then translated by $\mathbf{c}$ to generate the latent $\mathbf{z}_i$, which finally produces the noisy observation $\mathbf{x}_i$. This model can be expressed in a few lines of code in Infer.NET. The circle model is interesting for our purposes, since it is both layered (the $\mathbf{z}_i$s, $\mathbf{p}_i$s and $a_i$s each form a layer) and loopy (due to the presence of two variables outside the plate). 
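For concreteness, the sampling process just described can be sketched in a few lines (a NumPy sketch of the generative process, not the Infer.NET model itself; $\mathcal{N}(0, 0.01)$ noise corresponds to a standard deviation of $0.1$ per axis):

```python
import numpy as np

def sample_circle_data(center, radius, n_points, noise_std=0.1, seed=0):
    """Sample observations from the circle model: the point (0, r)
    is rotated by a_i radians to give p_i, translated by the center
    c to give z_i, and observed with Gaussian noise on each axis."""
    rng = np.random.default_rng(seed)
    angles = rng.uniform(0.0, 2.0 * np.pi, size=n_points)
    # Rotating (0, r) by angle a gives (-r sin a, r cos a).
    p = radius * np.stack([-np.sin(angles), np.cos(angles)], axis=1)
    z = p + np.asarray(center, dtype=float)
    return z + rng.normal(scale=noise_std, size=z.shape)
```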
We use this example to highlight the fact that although inference may require many iterations of message passing, message initialization can have a significant effect on the speed of convergence, and to demonstrate how this can be done automatically using CMP\@\xspace. Vanilla message passing inference in this model can take a surprisingly large number of iterations to converge. We draw 10 points $\{ \mathbf{x}_i \}$ from circles with random centers and radii, run VMP and record the accuracy of the marginals of the latent variables at each iteration. We repeat the experiment 50 times and plot results in Fig.~\ref{fig:circle-results} (dashed black). As can be seen from the figure, the marginals contain significant errors even after 50 iterations of message passing. We then experiment with consensus message passing\@\xspace. A predictor $\Delta^c$ is trained to send a consensus message to $\mathbf{c}$ in the initial stages of inference, given the messages coming up from all of the $\mathbf{z}_i$ (indicated graphically in Fig.~\ref{fig:circle-model}, red). The predictor is trained on final beliefs at 100 iterations of standard message passing on $D=500$ sample problems. As can be seen in Fig.~\ref{fig:circle-results} (red), this single consensus message has the effect of significantly increasing the rate of convergence (as indicated by slope) and also inference robustness (as indicated by error bars). For comparison, we also plot how well a regressor of the same capacity as the one used by CMP\@\xspace can directly estimate the latent variables without using the graphical model in Fig.~\ref{fig:circle-results} (blue). Consensus message passing\@\xspace gives us the best of both worlds in this example: speed that is more comparable to one-shot bottom-up prediction and the accuracy of message passing inference in a good model for the problem. 
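The one-shot bottom-up alternative can be illustrated with a simple direct estimator (an illustrative stand-in for the forest regressor used in the comparison, exploiting the fact that the sampled angles are uniform):

```python
import numpy as np

def direct_estimate(points):
    """A one-shot bottom-up estimate of the circle's latent
    variables, with no message passing: the center is estimated as
    the centroid of the observed points and the radius as the mean
    distance to that centroid. Fast, but it cannot be refined by
    the model the way a consensus message can."""
    points = np.asarray(points, dtype=float)
    center = points.mean(axis=0)
    radius = np.linalg.norm(points - center, axis=1).mean()
    return center, radius
```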
\begin{figure}[t] \centering \subfigure[]{ \setlength\fboxsep{-0.2mm} \setlength\fboxrule{0.7pt} \parbox[b]{2.3cm}{ \fbox{\includegraphics[width=0.45\linewidth]{figures/Translate1_XData_1}} \hspace*{0mm} \fbox{\includegraphics[width=0.45\linewidth]{figures/Translate1_XData_2}} \vspace*{-2.4mm} \\ \fbox{\includegraphics[width=0.45\linewidth]{figures/Translate1_XData_3}} \hspace*{0mm} \fbox{\includegraphics[width=0.45\linewidth]{figures/Translate1_XData_4}} \vspace*{-2.4mm} \\ \fbox{\includegraphics[width=0.45\linewidth]{figures/Translate1_XData_5}} \hspace*{0mm} \fbox{\includegraphics[width=0.45\linewidth]{figures/Translate1_XData_6}} \vspace*{-2.4mm} \\ \fbox{\includegraphics[width=0.45\linewidth]{figures/Translate1_XData_7}} \hspace*{0mm} \fbox{\includegraphics[width=0.45\linewidth]{figures/Translate1_XData_8}} \vspace*{-2.4mm} \\ \fbox{\includegraphics[width=0.45\linewidth]{figures/Translate1_XData_9}} \hspace*{0mm} \fbox{\includegraphics[width=0.45\linewidth]{figures/Translate1_XData_10}} \vspace*{-2.4mm} \\ \fbox{\includegraphics[width=0.45\linewidth]{figures/Translate1_XData_11}} \hspace*{0mm} \fbox{\includegraphics[width=0.45\linewidth]{figures/Translate1_XData_12}} \vspace*{-2.4mm} \\ \fbox{\includegraphics[width=0.45\linewidth]{figures/Translate1_XData_13}} \hspace*{0mm} \fbox{\includegraphics[width=0.45\linewidth]{figures/Translate1_XData_14}} } \hspace{0.2cm} \label{fig:square-data} } \subfigure[]{ \begin{tikzpicture} \draw[red!20] (-2, 2.7) -- (2, 2.7); \draw[red, dotted] (-2, 2.7) -- (2, 2.7); \draw[red!20] (-2, 5.5) -- (2, 5.5); \draw[red, dotted] (-2, 5.5) -- (2, 5.5); \node[obs] (x) {$\mathbf{x}_i$}; % \node[latent, above=of x] (z) {$\mathbf{z}_i$}; % \factor[above=of x, yshift=1mm] {noise} {left:Gaussian} {} {}; % \factor[above=of z, yshift=10mm] {gate} {below left:Gate} {} {}; % \node[latent, right=of gate] (fg) {$\mathrm{\mathbf{fg}}$}; % \node[latent, left=of gate] (bg) {$\mathrm{\mathbf{bg}}$}; % \node[latent, above=of gate, yshift=-5mm] (s) 
{$s_i$}; % \factor[above=of s, yshift=10mm] {insq} {below left, yshift=-2mm:Square} {} {}; % \factor[above=of fg] {pfg} {} {} {}; % \factor[above=of bg] {pbg} {} {} {}; % \node[latent, left=of insq] (c) {$\mathbf{c}$}; % \node[latent, right=of insq] (l) {$l$}; % \node[latent, above=of insq, yshift=-5mm, draw=none](p) {$p_i$}; % \factor[above=of c] {pc} {} {} {}; % \factor[above=of l] {pr} {} {} {}; % \factoredge {z} {noise} {x}; % \factoredge {s} {gate} {z}; % \factoredge {c} {insq} {}; % \factoredge {l} {insq} {}; % \factoredge {p} {insq} {s}; % \factoredge {fg} {gate} {}; % \factoredge {bg} {gate} {}; % \factoredge {} {pfg} {fg}; % \factoredge {} {pbg} {bg}; % \factoredge {} {pc} {c}; % \factoredge {} {pr} {l}; % \plate {} {(p) (x)} {}; % \fill (bg.south) ++ (0, -0.52) circle (3pt) [fill=red] { }; \fill (fg.south) ++ (0, -0.52) circle (3pt) [fill=red] { }; \fill (l.south) ++ (0, -0.52) circle (3pt) [fill=red] { }; \draw [-stealth, red] (-1.45, 2.85) -- (-1.45, 3.15); \draw [-stealth, red] (1.45, 2.85) -- (1.45, 3.15); \draw [-stealth, red] (1.45, 5.65) -- (1.45, 5.95); \node[yshift=-0.9cm] at (fg.south) { $\textcolor{red}{\Delta^{\mathrm{\mathbf{fg}}}}$ }; \node[yshift=-0.9cm] at (bg.south) { $\textcolor{red}{\Delta^{\mathrm{\mathbf{bg}}}}$ }; \node[yshift=-0.9cm] at (l.south) { $\textcolor{red}{\Delta^l}$ }; \end{tikzpicture} \hspace{0.1cm} \label{fig:square-model} } \mycaption{The square problem}{(a)~We wish to infer the square's center and its side length. (b)~A graphical model for this problem. $s_i$ is a boolean variable indicating the square's presence at position $p_i$. 
Depending on the value of $s_i$, the gate copies the appropriate color ($\mathrm{\mathbf{fg}}$ or $\mathrm{\mathbf{bg}}$) to $\mathbf{z}_i$.} \label{fig:square} \end{figure} \begin{figure}[t] \centering \subfigure[Center]{ \includegraphics[width=0.4\linewidth]{figures/Translate4_Bullseye} \label{fig:square-results-bullseye} } \subfigure[Center]{ \includegraphics[width=0.4\linewidth]{figures/Translate4_Center_Distance} \label{fig:square-results-center} } \subfigure[Side length]{ \includegraphics[width=0.4\linewidth]{figures/Translate4_Radius_Distance} \label{fig:square-results-radius} } \subfigure[BG color]{ \includegraphics[width=0.4\linewidth]{figures/Translate4_BGColor_Distance} \label{fig:square-results-bgcolor} } \mycaption{Robustified inference using CMP\@\xspace for the square problem}{(a)~Position of inferred centers relative to ground-truth. Image boundaries shown in blue for scale. (b,c,d)~Distance of the mean of the posterior of $\mathbf{c}$, $l$ and $\mathrm{\mathbf{bg}}$ from their true values. CMP\@\xspace consistently increases inference accuracy. Results have been averaged over 50 different problems. 1 stage CMP\@\xspace only makes use of the lower predictors $\Delta^\mathrm{\mathbf{fg}}$ and $\Delta^\mathrm{\mathbf{bg}}$.} \label{fig:square-results} \end{figure} \subsection{A Generative Model of Squares} \label{sec:square} Next, we turn our attention to a more challenging problem for which even the best message passing scheme that we could devise frequently finds completely inaccurate solutions. The task is to infer the center $\mathbf{c}$ and side length $l$ of a square in an image (Fig.~\ref{fig:square-data}). Unlike the previous problem where we knew that all points belonged to the circle, here we must first determine which pixels belong to the square and which do not. 
To do so we might also wish to reason about the color of the foreground $\mathrm{\mathbf{fg}}$ and background $\mathrm{\mathbf{bg}}$, making the task of inference significantly harder. The graphical model for this problem is shown in Fig.~\ref{fig:square-model}. Let $\mathbf{c}$ and $l$ denote square center and side length respectively. At each pixel position $p_i$, $s_i$ is a boolean variable indicating the square's presence. Depending on the value of $s_i$, the gate copies the appropriate color ($\mathrm{\mathbf{fg}}$ or $\mathrm{\mathbf{bg}}$) to $\mathbf{z}_i$. We experiment with 50 test images (themselves samples from the model), perform inference using EP with a sequential schedule, and record the accuracy of the marginals of the latent variables at each iteration. We additionally apply damping with step size 0.95 to messages from the square factor to the center $\mathbf{c}$. We found these choices led to the best-performing standard message passing algorithm. Despite this, we observed inference accuracy to be disappointingly poor (see Fig.~\ref{fig:square-results}). In Fig.~\ref{fig:square-results-bullseye} we see that, for many images, message passing converges to highly inaccurate marginals for the center. The low quality of inference can also be seen in the quantitative results of Figs.~\ref{fig:square-results}(b-d). We implement CMP\@\xspace predictors at two different layers of the model (see Fig.~\ref{fig:square-model}, red). In the first layer, $\Delta^\mathrm{\mathbf{fg}}$ and $\Delta^\mathrm{\mathbf{bg}}$ send consensus messages to $\mathrm{\mathbf{fg}}$ and $\mathrm{\mathbf{bg}}$ respectively, given the messages coming up from all of the $\mathbf{z}_i$ which take the form of independent Gaussians centered at the appearances of the observed pixels (we use a Gaussian noise model). 
Therefore $\Delta^\mathrm{\mathbf{fg}}$ and $\Delta^\mathrm{\mathbf{bg}}$ effectively make initial guesses of the values of the foreground and background colors in the image given the observed image. Split features in the internal nodes of the regression forest are designed to test for equality of two randomly chosen pixel positions, and sparse regressors are used at the leaves to prevent overfitting. In the second layer, $\Delta^l$ sends a consensus message to $l$ given the messages coming up from all of the $s_i$. The messages from $s_i$ take the form of independent Bernoullis indicating the algorithm's current beliefs about the presence of the square at each pixel. Therefore, the predictor's job is to predict the square's side length from this probabilistic segmentation map. Note that it is much easier to implement a regressor to perform this task (effectively one only needs to count) than it is to do so using the original observed image pixels $x_i$. We find these predictors to be sufficient for stable inference and so we do not implement a fourth predictor for $\mathbf{c}$. We experiment with single stage CMP\@\xspace, where only the lower predictors $\Delta^\mathrm{\mathbf{fg}}$ and $\Delta^\mathrm{\mathbf{bg}}$ are active, and with two stage CMP\@\xspace, where all three predictors are active. The predictors are trained on $D=500$ samples from the model. The results of these experiments are shown in Fig.~\ref{fig:square-results}. We observe that CMP\@\xspace significantly improves the accuracy of inference for the center $\mathbf{c}$ (Figs.~\ref{fig:square-results-bullseye}, \ref{fig:square-results-center}) but also for the other latent variables (Figs.~\ref{fig:square-results-radius}, \ref{fig:square-results-bgcolor}). Note that single stage CMP\@\xspace appears to be insufficient for guiding message passing to good solutions. 
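The side-length predictor's task really is close to counting: under the model, the expected number of `inside' pixels in the probabilistic segmentation map is $l^2$. A minimal sketch of such a predictor (illustrative only; the predictor actually used is a regression forest over these beliefs):

```python
import numpy as np

def predict_side_length(presence_probs):
    """Guess the square's side length from the Bernoulli beliefs
    over the s_i: sum the presence probabilities to get the
    expected area, then take its square root."""
    expected_area = np.asarray(presence_probs, dtype=float).sum()
    return float(np.sqrt(expected_area))
```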
Whereas in the circle example CMP\@\xspace accelerated convergence, this example demonstrates how it can make inference possible in models that are otherwise outside the capabilities of standard message passing. \subsection{A Generative Model of Faces} \label{sec:shading} \begin{figure}[t] \centering \subfigure[]{ \setlength\fboxsep{-0.3mm} \setlength\fboxrule{0pt} \parbox[b]{3cm}{ \centering \small \fbox{\includegraphics[width=1.3cm]{figures/sample_N.png}}\\ Normal $\{\mathbf{n}_i \}$ \vspace{4.5mm} \\ \fbox{\includegraphics[width=1.3cm]{figures/sample_S.png}} \\ Shading $\{ s_i \}$ \vspace{4.5mm} \\ \fbox{\includegraphics[width=1.3cm]{figures/sample_R.png}} \\ Reflectance $\{ r_i \}$ \vspace{4.5mm} \\ \fbox{\includegraphics[width=1.3cm]{figures/sample_X.png}} \\ Observed image $\{ x_i \}$\\ } \hspace{0.1cm} \label{fig:shading-data} } \subfigure[]{ \begin{tikzpicture} \draw[red!20] (-2.4, 2.7) -- (2, 2.7); \draw[red, dotted] (-2.4, 2.7) -- (2, 2.7); \draw[red!20] (-2.4, 5.5) -- (2, 5.5); \draw[red, dotted] (-2.4, 5.5) -- (2, 5.5); \node[obs] (x) {$x_i$}; % \node[latent, above=of x] (z) {$z_i$}; % \factor[above=of z, yshift=10mm] {times} {below left: $\times$} {} {}; % \node[latent, above=of times, yshift=-5mm] (s) {$s_i$}; % \factor[above=of x, yshift=1mm] {noise1} {left:Gaussian} {} {}; % \node[latent, left=of times] (r) {$r_i$}; % \factor[above=of r] {pr} {} {} {}; % \factor[above=of s, yshift=10mm] {inner} {left:Product} {} {}; % \node[latent, above=of inner, yshift=-5mm] (n) {$\mathbf{n_i}$}; % \node[latent, right=of inner] (l) {$\mathbf{l}$}; % \factor[above=of l] {pl} {} {} {}; % \factor[above=of n] {pn} {} {} {}; % \factoredge {z} {noise1} {x}; % \factoredge {s} {times} {z}; % \factoredge {r} {times} {}; % \factoredge {n} {inner} {s}; % \factoredge {l} {inner} {}; % \factoredge {} {pn} {n}; % \factoredge {} {pl} {l}; % \factoredge {} {pr} {r}; % \plate {} {(pn) (x) (r)} {}; % \fill (r.south) ++ (0, -0.52) circle (3pt) [fill=red] { }; \fill (l.south) ++ (0, -0.52) circle 
(3pt) [fill=red] { }; \draw [-stealth, red] (-1.45, 2.85) -- (-1.45, 3.15); \draw [-stealth, red] (1.45, 5.65) -- (1.45, 5.95); \node[yshift=-0.9cm] at (r.south) { $\textcolor{red}{\Delta_i^\mathbf{r}}$ }; \node[yshift=-0.9cm] at (l.south) { $\textcolor{red}{\Delta^\mathbf{l}}$ }; \node[yshift=0.35cm, xshift=-0.83cm] at (inner) { Inner }; \end{tikzpicture} \label{fig:shading-model} } \mycaption{The face problem}{(a)~We observe an image and wish to infer the corresponding reflectance map and normal map (visualized here as 3D shape). (b)~A graphical model for this problem. Symmetry priors not shown.} \label{fig:shading} \vspace*{-4mm} \end{figure} In this section, we investigate a more realistic application: face modeling. The estimation of reflectance and shape from a single image of a human face is a well-studied problem in computer vision (see \textit{e.g.}\@\xspace \citep{Georghiades2001, Lee2005, Wang2009, Kemelmacher2011, Tang2012}). A primary motivation for this task is that reflectance and shape are invariant to confounding light effects, and are therefore useful for downstream tasks such as recognition. The problem is ill-posed and modern approaches make use of prior knowledge in order to obtain good solutions, \textit{e.g.}\@\xspace in the form of average reflectance and normal statistics \citep{Biswas2009, Biswas2010} or morphable 3D models \citep{Zhang2006, Wang2009}. \textbf{Model.} Given an observation of image pixels $\mathbf{x} = \{x_i\}$, the aim is to infer the reflectance value $r_i$ and normal vector $\mathbf{n_i}$ for each pixel $i$ (see Fig.~\ref{fig:shading-data}). In Fig.~\ref{fig:shading-model}, a model is shown for these variables that represents the following image formation process: $x_i = (\mathbf{n_i} \cdot \mathbf{l}) \times r_i + \epsilon$, thereby assuming Lambertian reflection and an infinitely distant directional light source with variable intensity. 
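A minimal sketch of this image formation process (our own illustration of the rendering equation above; shadows and specularities are ignored, exactly as in the model):

```python
import numpy as np

def render_lambertian(normals, reflectance, light, noise_std=0.0, seed=0):
    """Render x_i = (n_i . l) * r_i + eps for every pixel, with a
    single infinitely distant directional light l whose magnitude
    encodes intensity. normals: (H, W, 3), reflectance: (H, W),
    light: length-3 vector."""
    shading = np.tensordot(normals, np.asarray(light, dtype=float),
                           axes=([2], [0]))   # (H, W) map of n_i . l
    image = shading * reflectance
    if noise_std > 0:
        rng = np.random.default_rng(seed)
        image = image + rng.normal(scale=noise_std, size=image.shape)
    return image
```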
We place Gaussian priors over reflectances $\{ r_i \}$, normals $\{ \mathbf{n_i} \}$, and the light $\mathbf{l}$; and set the parameters of the priors using training data. We additionally place a soft symmetry prior on the $\{ r_i \}$ (the reflectance value on one side of the face should be close to its value on the other side) and on the $\{ \mathbf{n_i} \}$ (normal vectors on each side should be approximately symmetric), reflecting our prior knowledge about faces. These symmetry priors can be added to the model in just a few lines of code, illustrating the way in which model-based methods lend themselves to rapid prototyping and experimentation. Although this model is only a crude approximation to the true image formation process (\textit{e.g.}\@\xspace it does not account for shadows or specularities), similar approximations have been found to be useful in prior work \citep{Biswas2009, Biswas2010, Kemelmacher2011}. Additionally, if we can successfully develop algorithms that perform accurate and reliable inference in this class of models, we would then be able to increase its usefulness by updating it to reflect the true image formation process more accurately. Note that even for a relatively small image of size $96 \times 84$, the model contains over 48,000 latent variables and 56,000 factors, and as we will show below, standard message passing in the model routinely fails to converge to accurate solutions. \begin{figure*} \begin{center} \centerline{\includegraphics[width=1.0\columnwidth]{figures/face_cmp_visual_results.pdf}} \vspace{-1.2cm} \end{center} \mycaption{A visual comparison of inference results for the face problem}{For 4 randomly chosen test images, we show inference results obtained by competing methods. (a)~Observed images. (b)~Inferred reflectance maps. 
\textit{GT} is the stereo estimate which we use as a proxy for ground-truth, \textit{BU} is the bottom-up reflectance estimate of Biswas \textit{et~al.}\@\xspace (2009), \textit{MP} refers to standard variational message passing, \textit{Forest} is the consensus prediction and \textit{CMP} is the proposed consensus message passing technique. (c)~The variance of the inferred reflectance estimate produced by CMP\@\xspace (normalized across rows). High variance regions correlate strongly with cast shadows. (d)~Visualization of inferred light. (e)~Inferred normal maps.} \label{fig:shading-qualitative-multiple-subjects} \end{figure*} \textbf{Consensus Message Passing.} We use predictors at two levels in the model (see Fig.~\ref{fig:shading-model}) to tackle this problem. The first sends consensus messages to \textit{each} reflectance pixel $r_i$, making it an instance of type B of CMP\@\xspace as described in Fig.~\ref{fig:types-b}. Here, each consensus message is predicted using information from all the contextual messages from the $z_i$. We denote each of these predictors by $\Delta_i^\mathbf{r}$. The second predictor sends a consensus message to $\mathbf{l}$ using information from all the messages from the $s_i$ and is denoted by $\Delta^\mathbf{l}$. The first layer of predictors effectively makes a guess of the reflectance image from the denoised observation, and the second layer predictor produces an estimate of the light from the shading image (which is likely to be easier to do than directly from the observation). The reflectance predictors $\{ \Delta_i^\mathbf{r} \}$ are all powered by a single random forest; however, the pixel position $i$ is used as a feature that the forest can exploit to create location-specific behaviour. The tree parameterization of the contextual messages $\mathbf{c}$ for use in the reflectance predictor $\Delta_i^\mathbf{r}$ also includes 16 features such as mean, median, max, min and gradients of a $21 \times 21$ patch around the pixel. 
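A few of these patch statistics can be sketched as follows (an illustrative subset, not the exact 16 features used; the pixel position is appended so that a single forest can learn location-specific behaviour):

```python
import numpy as np

def patch_features(image, row, col, half=10):
    """Simple summaries of the (2*half+1)-sized patch around a
    pixel: mean, median, max, min, mean absolute gradients, plus
    the pixel position itself."""
    patch = image[max(row - half, 0):row + half + 1,
                  max(col - half, 0):col + half + 1]
    grad_row, grad_col = np.gradient(patch)
    return np.array([patch.mean(), np.median(patch),
                     patch.max(), patch.min(),
                     np.abs(grad_col).mean(), np.abs(grad_row).mean(),
                     float(row), float(col)])
```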
The tree parameterization of the contextual messages for use in the lighting predictor $\Delta^\mathbf{l}$ consists of the means of the shading messages, averaged over $12 \times 12$ blocks. We deliberately use simple features to maintain generality, but one could imagine the use of more specialized regressors for maximal performance. \begin{figure} \centering \setlength\fboxsep{0.2mm} \setlength\fboxrule{0pt} \begin{tikzpicture} \matrix at (0, 0) [matrix of nodes, nodes={anchor=east}, column sep=-0.05cm, row sep=-0.2cm] { \fbox{\includegraphics[width=1cm]{figures/sample_3_1_X.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_1_GT.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_1_BISWAS.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_1_VMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_1_FOREST.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_1_CMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_1_CMPVAR.png}} \\ \fbox{\includegraphics[width=1cm]{figures/sample_3_2_X.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_2_GT.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_2_BISWAS.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_2_VMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_2_FOREST.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_2_CMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_2_CMPVAR.png}} \\ \fbox{\includegraphics[width=1cm]{figures/sample_3_3_X.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_3_GT.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_3_BISWAS.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_3_VMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_3_FOREST.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_3_CMP.png}} & \fbox{\includegraphics[width=1cm]{figures/sample_3_3_CMPVAR.png}} \\ }; \node at (-3.85, -2.0) {\small Observed}; \node at (-2.55, -2.0) {\small `GT'}; \node 
at (-1.27, -2.0) {\small BU}; \node at (0.0, -2.0) {\small MP}; \node at (1.27, -2.0) {\small Forest}; \node at (2.55, -2.0) {\small \textbf{CMP}}; \node at (3.85, -2.0) {\small Variance}; \end{tikzpicture} \mycaption{Robustness to varying illumination}{Left to right: observed image, photometric stereo estimate (proxy for ground-truth), \cite{Biswas2009} estimate, VMP result, consensus forest estimate, CMP mean, and CMP variance.} \label{fig:shading-qualitative-same-subject} \vspace{-0.3cm} \end{figure} \begin{figure}[t] \centering \subfigure[Without shadows]{ \includegraphics[width=0.4\linewidth]{figures/Shading_Recognition_Rate_Synthetic} } \subfigure[With shadows]{ \includegraphics[width=0.4\linewidth]{figures/Shading_Recognition_Rate_Real} } \mycaption{Reflectance inference accuracy demonstrated through recognition accuracy}{CMP\@\xspace allows us to make use of the full potential of the generative model, thereby outperforming the competitive bottom-up method of \cite{Biswas2009}.} \label{fig:shading-quantitative-reflectance} \subfigure[Without shadows]{ \includegraphics[width=0.4\linewidth]{figures/Shading_Light_Angle_Error_Synthetic} } \subfigure[With shadows]{ \includegraphics[width=0.4\linewidth]{figures/Shading_Light_Angle_Error_Real} } \mycaption{Light inference accuracy}{The presence of cast shadows makes the direct prediction task easier, however CMP\@\xspace is accurate even in their absence.} \label{fig:shading-quantitative-light} \end{figure} \textbf{Datasets.} We experiment with the `Yale B' and `Extended Yale B' datasets~\citep{Georghiades2001, Lee2005}. Together, they contain images of 38 subjects each with 64 illumination directions. We remove images taken with extreme light angles (azimuth or elevation $\ge 85$ degrees) that are almost entirely in shadow, leaving around 45 images for each subject. Images are down-sampled to $96 \times 84$. 
There are no ground-truth normals or reflectances for this dataset; however, it is common practice to create proxy ground-truths using photometric stereo, which we obtain using the code of~\cite{Queau2013}. We use images from 22 subjects for training and test on the remaining 16 subjects. \textbf{Results.} We begin by qualitatively assessing the different inference schemes. In Fig.~\ref{fig:shading-qualitative-multiple-subjects} we show inference results for reflectance maps, normal maps and lights that are obtained after 100 iterations of message passing (VMP). For reflectance (Fig.~\ref{fig:shading-qualitative-multiple-subjects}b), we would like inference to produce estimates that closely match the ground-truth produced by photometric stereo (GT). We also display the reflectance estimates produced by the strong baseline (BU) of \cite{Biswas2009} for reference. We note that the baseline achieves excellent accuracy in regions with strong lighting; however, it produces blurry estimates in regions under shadow. As can be seen in Fig.~\ref{fig:shading-qualitative-multiple-subjects}b (MP), standard variational message passing finds solutions that are highly inaccurate, with residual illumination and artifacts in areas of cast shadow. In contrast, inference using CMP\@\xspace produces artefact-free results that much more closely resemble the stereo ground-truths. Arguably CMP\@\xspace also improves over the baseline \citep{Biswas2009}, since its estimates are not blurry in regions with cast shadows. This can be attributed to the presence of symmetry priors in the model. Additionally, we note that the variance of the CMP\@\xspace inference for reflectance (Fig.~\ref{fig:shading-qualitative-multiple-subjects}c) correlates strongly with cast shadows in the observed images (\textit{i.e.}\@\xspace the model is uncertain where it should be), suggesting that in future work it would be fruitful to have the notion of cast shadows explicitly built into the model.
Figs.~\ref{fig:shading-qualitative-multiple-subjects}d and \ref{fig:shading-qualitative-multiple-subjects}e show analogous results for lighting and normal maps, and Fig.~\ref{fig:shading-qualitative-same-subject} demonstrates CMP\@\xspace's ability to robustly infer reflectance maps for images of a single subject taken under varying lighting conditions. More visual results are shown in the supplementary at the end of this thesis (Figs.~\ref{fig:shading-qualitative-multiple-subjects-supp},~\ref{fig:shading-qualitative-same-subject}). We use the task of subject recognition (using estimated reflectance) as a quantitative measure of inference accuracy, as it can be difficult to measure in more direct ways (\textit{e.g.}\@\xspace RMSE strongly favors blurry predictions). The reflectance estimate produced by each algorithm is compared to all training subjects' ground-truth reflectances and is assigned the label of its closest match. We have found this evaluation to reflect the quality of inference estimates. Fig.~\ref{fig:shading-quantitative-reflectance} shows the result of this experiment, both for real images and also synthetic images that were produced by taking the stereo ground-truths and adding artificial lighting (but with no cast shadows). We show analogous results for light in Fig.~\ref{fig:shading-quantitative-light}, where error is defined to be the cosine angle distance between the estimated light and the photometric stereo reference. First, we note that standard variational message passing (MP) performs poorly, producing reflectance estimates that are much less useful for recognition than those from \cite{Biswas2009}. Second, we note that CMP\@\xspace in the same model (both 1 stage and 2 stage versions) produces inferences that are significantly more useful downstream. The horizontal line labelled `Forest' represents the accuracy of the consensus messages without any message passing, showing that the model-based fine-tuning provides a significant benefit. 
Finally, we highlight the fact that initializing light directly from the image and running message passing (Fig.~\ref{fig:shading-quantitative-reflectance}, Init+MP) leads to worse estimates than CMP\@\xspace, demonstrating the benefit of layered predictions as opposed to direct predictions from the observations. These results demonstrate that CMP\@\xspace helps message passing find better fixed points even in the presence of model mis-match (shadows) and make use of the full potential of the generative model.
\section{Discussion and Conclusions}
\label{sec:discussion-chap4}
We have presented Consensus Message Passing\@\xspace and shown that it is a computationally efficient technique that can be used to improve the accuracy of message passing inference in a variety of vision models. The crux of the approach is to recognize the importance of global variables, and to take advantage of layered model structures commonly seen in vision to make rough estimates of their values. The success of CMP\@\xspace depends on the accuracy of the random forest predictors. The design of forest features is not yet completely automated, but we took care in this work to use generic features that can be applied to a broad class of problems. Our forests are implemented in an extensible manner, and we envisage building a library of them that one can choose from, simply by inspecting the data types of the contextual and target variables. In future work, we would like to exploit the benefits of the CMP\@\xspace framework by applying it to more challenging problems from computer vision. Each of the examples in Section~\ref{sec:experiments-chap4} can be extended in various ways, \textit{e.g.}\@\xspace by handling multiple objects, incorporating occlusion in the squares example and cast shadows in the faces example, or by developing more realistic priors. We are also seeking to understand in what other domains the application of our ideas may be fruitful.
More broadly, a major challenge in machine learning is that of enriching models in a scalable way. We continually seek to ask our models to provide interpretations of increasingly complicated, heterogeneous data sources. Graphical models provide an appealing framework to manage this complexity, but the difficulty of inference has long been a barrier to achieving these goals. The CMP\@\xspace framework takes us one step in the direction of overcoming this barrier.
\chapter{Learning Sparse High Dimensional Filters}
\label{chap:bnn}
In Part II of this thesis, we focus on inference in discriminative CNN models. Since \emph{inference} amounts to simple evaluation of the \emph{model} in discriminative models, we propose modifications to the original model itself for better inference. This is unlike the inference strategies proposed in Part I of this thesis, where we proposed to learn a separate inference model that helps in the Bayesian inference of a given generative model. In this chapter, we propose a learning technique for general sparse high-dimensional filters and show how this can be used for generalizing standard spatial convolutions in CNNs to learnable bilateral convolutions with long-range image-adaptive connections. Bilateral filters have widespread use due to their edge-preserving properties. The common use case is to manually choose a parametric filter type, usually a Gaussian filter. In this chapter, we will generalize the parametrization and in particular derive a gradient descent algorithm so the filter parameters can be learned from data. This derivation makes it possible to learn high-dimensional linear filters that operate in sparsely populated feature spaces. We build on the permutohedral lattice construction for efficient filtering. The ability to learn more general forms of high-dimensional filters can be used in several diverse applications.
First, we demonstrate the use in applications where single filter applications are desired for runtime reasons. Further, we show how this algorithm can be used to learn the pairwise potentials in densely connected conditional random fields and apply these to different image segmentation tasks. Finally, we introduce layers of bilateral filters in CNNs and propose \emph{bilateral neural networks} for use on high-dimensional sparse data. This view provides new ways to encode model structure into network architectures. A diverse set of experiments empirically validates the usage of general forms of filters.
\section{Introduction}
\label{sec:intro-bnn}
Image convolutions are basic operations for many image processing and computer vision applications. In this chapter, we will study the class of bilateral filter convolutions and propose a general image-adaptive convolution that can be learned from data. The bilateral filter~\cite{aurich1995non, smith97ijcv, tomasi1998bilateral} was originally introduced for the task of image denoising as an edge-preserving filter. Since the bilateral filter contains the spatial convolution as a special case, in the following we will directly state the general case. Consider an image $\mathbf{x}=(\mathbf{x}_1,\ldots,\mathbf{x}_n), \mathbf{x}_i\in\mathbb{R}^c$ with $n$ pixels and $c$ channels, and, for every pixel $i$, a $d$-dimensional feature vector $\mathbf{f}_i\in\mathbb{R}^d$ (\textit{e.g.}\@\xspace\ the $(x,y)$ position in the image, $\mathbf{f}_i=(x_i,y_i)^{\top}$). The bilateral filter then computes
\begin{equation}
\mathbf{x}'_i = \sum_{j=1}^n \mathbf{w}_{\mathbf{f}_i,\mathbf{f}_j} \mathbf{x}_j\label{eq:bilateral}
\end{equation}
for all $i$. Almost the entire literature refers to the bilateral filter as a synonym of the Gaussian parametric form $\mathbf{w}_{\mathbf{f}_i,\mathbf{f}_j} = \exp{(-\frac{1}{2}(\mathbf{f}_i-\mathbf{f}_j)^{\top}\Sigma^{-1}(\mathbf{f}_i-\mathbf{f}_j))}$.
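For concreteness, Eq.~\ref{eq:bilateral} with the Gaussian weights above can be evaluated naively in $\mathcal{O}(n^2)$. The following NumPy sketch is our illustration only (the function name, default scales, and the row normalization common in practice are our choices, not part of the thesis), using position and intensity features for a single-channel image:

```python
import numpy as np

def gaussian_bilateral_filter(x, sigma_spatial=3.0, sigma_range=0.1):
    """Naive O(n^2) bilateral filter for a single-channel image x in [0, 1],
    with features f_i = (x_i, y_i, intensity_i) and Gaussian weights
    w_{f_i,f_j} = exp(-0.5 (f_i - f_j)^T Sigma^{-1} (f_i - f_j))."""
    H, W = x.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # d = 3 dimensional features, pre-scaled by the diagonal Sigma^{-1/2}.
    f = np.stack([xs.ravel() / sigma_spatial,
                  ys.ravel() / sigma_spatial,
                  x.ravel() / sigma_range], axis=1)
    # All pairwise squared feature distances between the n = H*W pixels.
    d2 = ((f[:, None, :] - f[None, :, :]) ** 2).sum(-1)
    w = np.exp(-0.5 * d2)
    w /= w.sum(axis=1, keepdims=True)  # row normalization, as used in practice
    return (w @ x.ravel()).reshape(H, W)
```

On a step-edge image this filter smooths within each side of the edge but not across it: pixels on opposite sides differ strongly in the intensity feature, so their mutual weights are negligible.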
The features $\mathbf{f}_i$ are most commonly chosen to be position $(x_i,y_i)$ and color $(r_i,g_i,b_i)$ or pixel intensity. To appreciate the edge-preserving effect of the bilateral filter, consider the five-dimensional feature $\mathbf{f}={(x,y,r,g,b)}^{\top}$. Two pixels $i,j$ have a strong influence $\mathbf{w}_{\mathbf{f}_i,\mathbf{f}_j}$ on each other only if they are close in position \emph{and} color. At edges the color changes; therefore, pixels lying on opposite sides have low influence, and thus this filter does not blur across edges. This behavior is sometimes referred to as \emph{image adaptive}, since the filter has a different shape when evaluated at different locations in the image. More precisely, it is the projection of the filter to the two-dimensional image plane that changes; the filter values $\mathbf{w}_{\mathbf{f},\mathbf{f}'}$ do not change. The filter itself can be of $c$ dimensions $\mathbf{w}_{\mathbf{f}_i,\mathbf{f}_j}\in\mathbb{R}^c$, in which case the multiplication in Eq.~\ref{eq:bilateral} becomes an inner product. For the Gaussian case, the filter can be applied independently per channel. For an excellent review of image filtering, we refer to~\cite{milanfar2011tour}. The filter operation of Eq.~\ref{eq:bilateral} is a sparse high-dimensional convolution, a view advocated in~\cite{barash2002fundamental,paris2006fast}. An image $\mathbf{x}$ is not sparse in the spatial domain: we observe pixel values at all locations $(x,y)$. However, when pixels are understood in a higher-dimensional feature space, \textit{e.g.}\@\xspace $(x,y,r,g,b)$, the image becomes a sparse signal, since the $r,g,b$ values lie scattered in this five-dimensional space. This view on filtering is the key difference of the bilateral filter compared to the common spatial convolution. An image edge is not \emph{visible} for a filter in the spatial domain alone, whereas in the 5D space it is.
The edge-preserving behavior is possible due to the higher-dimensional operation. Other data can naturally be understood as sparse signals, \textit{e.g.}\@\xspace 3D surface points. The main contribution of this work is to propose a general and learnable sparse high-dimensional convolution. Our technique builds on efficient algorithms that have been developed to approximate the Gaussian bilateral filter and re-uses them for more general high-dimensional filter operations. Due to its practical importance (see related work in Section~\ref{sec:related-bnn}), several efficient algorithms for computing Eq.~\ref{eq:bilateral} have been developed, including the bilateral grid~\cite{paris2006fast}, Gaussian KD-trees~\cite{adams2009gaussian}, and the permutohedral lattice~\cite{adams2010fast}. The design goal for these algorithms was to provide a) fast runtimes and b) small approximation errors for the Gaussian filter case. The key insight of this work is to take the permutohedral lattice and use it not as an approximation of a predefined kernel, but with freely parametrized values. We relax the separable Gaussian filter case from~\cite{adams2010fast} and show how to compute gradients of the convolution (Section~\ref{sec:learning}) in lattice space. This enables learning the filter from data. This insight has several useful consequences. We discuss applications where the bilateral filter has been used before: image filtering (Section~\ref{sec:filtering}) and CRF inference (Section~\ref{sec:densecrf}). Further, we will demonstrate how the free parametrization of the filters enables us to use them in deep convolutional neural networks (CNNs) and allows convolutions that go beyond the regular spatially connected receptive fields (Section~\ref{sec:bnn}). For all domains, we present various empirical evaluations with a wide range of applications.
\section{Related Work}
\label{sec:related-bnn}
We categorize the related work according to the three different generalizations of this work.
\paragraph{Image Adaptive Filtering:} The literature in this area is rich and we can only provide a brief overview. Important classes of image-adaptive filters include bilateral filters~\cite{aurich1995non, tomasi1998bilateral, smith97ijcv}, non-local means~\cite{buades2005non, awate2005higher}, locally adaptive regression kernels~\cite{takeda2007kernel}, guided image filters~\cite{he2013guided} and propagation filters~\cite{chang2015propagated}. The kernel least-squares regression problem can serve as a unified view of many of them~\cite{milanfar2011tour}. In contrast to the present work, which learns the filter kernel using supervised learning, all these filtering schemes use a predefined kernel. Because of the importance of bilateral filtering to many applications in image processing, much effort has been devoted to deriving fast algorithms; most notably~\cite{paris2006fast, adams2010fast, adams2009gaussian, gastal2012adaptive}. Surprisingly, the only attempt to learn the bilateral filter we found is~\cite{hu2007trained}, which casts the learning problem in the spatial domain by rearranging pixels. However, the learned filter does not necessarily obey the full region of influence of a pixel as in the case of a bilateral filter. The bilateral filter has also been proposed to regularize a large set of applications in~\cite{barron2015bilateral, barron2015defocus}, and the respective optimization problems are parametrized in a bilateral space. In these works, the filters are part of a learning system but, unlike in this work, are restricted to be Gaussian.
\paragraph{Dense CRF:} The key observation of~\cite{krahenbuhl2012efficient} is that mean-field inference update steps in densely connected CRFs with Gaussian edge potentials require Gaussian bilateral filtering operations.
This enables tractable inference through the application of a fast filter implementation from~\cite{adams2010fast}. This quickly found widespread use, \textit{e.g.}\@\xspace the combination of CNNs with a dense CRF is among the best performing segmentation models~\cite{chen2014semantic, zheng2015conditional, bell2015minc}. These works combine structured prediction frameworks on top of CNNs to model the relationships between the desired output variables, thereby significantly improving upon the CNN result. Bilateral neural networks, presented in this work, provide a principled framework for encoding the output relationship using the feature transformation inside the network itself, thereby alleviating some of the need for later processing. Several works~\cite{krahenbuhl2013parameter, domke2013learning,kiefel2014human,zheng2015conditional,schwing2015fully} demonstrate how to learn free parameters of the dense CRF model. However, the parametric form of the pairwise term always remains a Gaussian. Campbell \textit{et~al.}\@\xspace~\cite{campbell2013fully} embed complex pixel dependencies into a Euclidean space and use a Gaussian filter for pairwise connections. This embedding is a pre-processing step and cannot directly be learned. In Section~\ref{sec:densecrf} we will discuss how to learn the pairwise potentials while retaining the efficient inference strategy of~\cite{krahenbuhl2012efficient}.
\paragraph{Neural Networks:} In recent years, the use of CNNs has enabled tremendous progress in a wide range of computer vision applications. Most CNN architectures use spatial convolution layers, which have fixed local receptive fields. This work suggests replacing these layers with bilateral filters, which have a varying spatial receptive field depending on the image content. The equivalent representation of the filter in a higher-dimensional space leads to sparse samples that are handled by a permutohedral lattice data structure.
Similarly, Bruna \textit{et~al.}\@\xspace~\cite{bruna2013spectral} propose convolutions on irregularly sampled data. Their graph construction is closely related to the high-dimensional convolution that we propose and defines weights on local neighborhoods of nodes. However, the structure of the graph is bound to be fixed and it is not straightforward to add new samples. Furthermore, re-using the same filter among neighborhoods is only possible with their costly spectral construction. Both cases are handled naturally by our sparse convolution. Jaderberg \textit{et~al.}\@\xspace~\cite{jaderberg2015spatial} propose a spatial transformation of signals within the neural network to learn invariances for a given task. The work of~\cite{ionescu2015matrix} proposes matrix back-propagation techniques which can be used to build specialized structural layers such as normalized cuts. Graham \textit{et~al.}\@\xspace~\cite{graham2015sparse} propose extensions from 2D CNNs to 3D sparse signals. Our work enables sparse 3D filtering as a special case, since we use an algorithm that allows for even higher-dimensional data.
\section{Learning Sparse High Dimensional Filters}\label{sec:learning}
In this section, we describe the main technical contribution of this work: we generalize the permutohedral convolution~\cite{adams2010fast} and show how the filter can be learned from data. Recall the form of the bilateral convolution from Eq.~\ref{eq:bilateral}. A naive implementation would compute for every pixel $i$ all associated filter values $\mathbf{w}_{\mathbf{f}_i,\mathbf{f}_j}$ and perform the summation independently. The view of $\mathbf{w}$ as a linear filter in a higher-dimensional space, as proposed by~\cite{paris2006fast}, opened the way for new algorithms. Here, we will build on the permutohedral lattice convolution developed in Adams \textit{et~al.}\@\xspace~\cite{adams2010fast} for approximate Gaussian filtering.
The most common applications of bilateral filters use photometric features (XYRGB). We chose the permutohedral lattice as it is particularly designed for this dimensionality; see Fig.~7 in~\cite{adams2010fast} for a speed comparison.
\subsection{Permutohedral Lattice Convolutions}
We first review the permutohedral lattice convolution for Gaussian bilateral filters from Adams \textit{et~al.}\@\xspace~\cite{adams2010fast} and describe its most general case.
\begin{figure}[t]
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{figures/permutohedral_illustration.pdf}}
\mycaption{Schematic of the permutohedral convolution}
{Left: splatting the input points (orange) onto the lattice corners (black); Middle: The extent of a filter on the lattice with an $s=2$ neighborhood (white circles); for reference we show a Gaussian filter, with its values color coded. The general case has a free scalar/vector parameter per circle. Right: The result of the convolution at the lattice corners (black) is projected back to the output points (blue). Note that in general the output and input points may be different.
\label{fig:ops}}
\end{center}
\end{figure}
As before, we assume that every image pixel $i$ is associated with a $d$-dimensional feature vector $\mathbf{f}_i$. Gaussian bilateral filtering using a permutohedral lattice approximation involves three steps. We begin with an overview of the algorithm, then discuss each step in more detail in the next paragraphs. Figure~\ref{fig:ops} schematically shows the three operations for 2D features. First, interpolate the image signal on the $d$-dimensional grid plane of the permutohedral lattice, which is called \emph{splatting}. A permutohedral lattice is the tessellation of space into permutohedral simplices. We refer to~\cite{adams2010fast} for details of the lattice construction and its properties.
In Fig.~\ref{fig:lattice}, we visualize the permutohedral lattice in the image plane, where every simplex cell receives a different color. All pixels of the same lattice cell have the same color. Second, \emph{convolve} the signal on the lattice. And third, retrieve the result by interpolating the signal at the original $d$-dimensional feature locations, called \emph{slicing}. For example, if the features used are a combination of position and color $\mathbf{f}_i = (x_i, y_i, r_i, g_i, b_i)^{\top}$, the input signal is mapped into the 5D cross product space of position and color and then convolved with a 5D tensor. Afterwards, the filtered result is mapped back to the original space. In practice we use a feature scaling $\Lambda \mathbf{f}$ with a diagonal matrix~$\Lambda$ and use separate scales for position and color features. The scale determines the distance of points and thus the size of the lattice cells. More formally, the computation is written as $\mathbf{x}' = S_{\text{slice}}BS_{\text{splat}} \mathbf{x}$, where all involved matrices are defined below. For notational convenience, we will assume scalar input signals $x_i$; the vector-valued case is analogous, with the lattice convolution changing from scalar multiplications to inner products.
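To make the factorization $\mathbf{x}' = S_{\text{slice}} B S_{\text{splat}} \mathbf{x}$ concrete, here is a toy NumPy sketch with small dense matrices (our illustration only; the real implementation stores just the $d+1$ barycentric weights per pixel and keeps the populated lattice points in a hash table):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 4   # n input pixels, m populated lattice points (toy sizes)

# Splat matrix S_splat: entry (j, i) is the barycentric weight t_{i,j} of
# pixel i for lattice corner j. In the real lattice each pixel touches only
# the d + 1 corners of its enclosing simplex; here we use a dense toy version.
T = rng.random((m, n))
T /= T.sum(axis=0, keepdims=True)      # each pixel's weights sum to 1
S_splat = T

# B: one free weight per pair of lattice points within the neighborhood.
# Choosing Gaussian values here recovers the approximation of Adams et al.
B = rng.standard_normal((m, m))

# Slice matrix: reads the filtered lattice signal back out at the pixel
# features; for identical input and output points it mirrors the splat.
S_slice = T.T

x = rng.standard_normal(n)             # a scalar signal per pixel
x_filtered = S_slice @ B @ S_splat @ x
```

Because the whole pipeline is linear in $\mathbf{x}$, scaling the input scales the output accordingly; the three matrices simply change basis between the pixel and lattice representations.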
\begin{figure}[t]
\centering
\subfigure[\scriptsize Sample Image]{%
\includegraphics[width=.23\columnwidth]{figures/bird.jpg}\label{fig:bird} }
\subfigure[\scriptsize Position]{%
\includegraphics[width=.23\columnwidth]{figures/bird_lattice_1.png}\label{fig:bird_lattice_1} }
\subfigure[\scriptsize Color]{%
\includegraphics[width=.23\columnwidth]{figures/bird_lattice_2.png}\label{fig:bird_lattice_2} }
\subfigure[\scriptsize Position, Color]{%
\includegraphics[width=.23\columnwidth]{figures/bird_lattice_3.png}\label{fig:bird_lattice_3} }
\mycaption{Visualization of the permutohedral lattice}
{(a) Input image; Lattice visualizations for different feature spaces: (b) 2D position features: $0.01(x, y)$, (c) color features: $0.01(r, g, b)$ and (d) position and color features: $0.01(x, y, r, g, b)$. All pixels falling in the same simplex cell are shown with the same color.}
\label{fig:lattice}
\end{figure}
\paragraph{Splat:} The splat operation (cf.\ left-most image in Fig.~\ref{fig:ops}) finds, in $\mathcal{O}(d^2)$, the simplex on the lattice that encloses a given pixel feature $\mathbf{f}_i$ and distributes its value $x_i$ onto the corners of the simplex. How strongly a pixel contributes to a corner $j$ is defined by its barycentric coordinate $t_{i,j}\in\mathbb{R}$ inside the simplex. Thus, the value $\mathbf{\ell}_j \in \mathbb{R}$ at a lattice point $j$ is computed by summing over all enclosed input points; more precisely, we define an index set $J_i$ for a pixel $i$, which contains all the lattice points $j$ of the enclosing simplex
\vspace{-0.7em}
\begin{equation}
\label{eq:splat}
\mathbf{\ell} = S_{\text{splat}} \mathbf{x};%
{(S_{\text{splat}})}_{j, i} = t_{i, j}, \text{ if } j\in J_i, \text{ otherwise } 0.
\vspace{-0.5em}
\end{equation}
\vspace{-0.3cm}
\paragraph{Convolve:} The permutohedral convolution is defined on the lattice neighborhood $N_s(j)$ of lattice point $j$, \textit{e.g.}\@\xspace\ only $s$ grid hops away.
More formally
\vspace{-0.7em}
\begin{equation}
\label{eq:blur}
\mathbf{\ell'} = B \mathbf{\ell}; {(B)}_{j', j} = w_{j, j'},%
\text{ if } j' \in N_s(j), \text{ otherwise } 0.
\vspace{-0.5em}
\end{equation}
An illustration of a two-dimensional permutohedral filter is shown in Fig.~\ref{fig:ops} (middle). Note that we already presented the convolution in the general form that we will make use of. The work of~\cite{adams2010fast} chooses the filter weights such that the resulting operation approximates a Gaussian blur, which is illustrated in Fig.~\ref{fig:ops}. Further, the algorithm of~\cite{adams2010fast} takes advantage of the separability of the Gaussian kernel. Since we are interested in the most general case, we extend the convolution to include non-separable filters $B$.
\vspace{-0.3cm}
\paragraph{Slice:} The slice operation (cf.\ right-most image in Fig.~\ref{fig:ops}) computes an output value $x'_{i'}$ for an output pixel $i'$ again based on its barycentric coordinates $t_{i, j}$ and sums over the corner points $j$ of its lattice simplex
\vspace{-0.7em}
\begin{equation}
\label{eq:slice}
\mathbf{x}' = S_{\text{slice}} \mathbf{\ell'}; %
{(S_{\text{slice}})}_{i, j} = t_{i, j}, \text{ if } j\in J_i, \text{ otherwise } 0.
\vspace{-0.5em}
\end{equation}
The splat and slice operations play the role of an interpolation between the two signal representations: the irregular and sparse distribution of pixels with their associated feature vectors, and the regular structure of the permutohedral lattice points. Since high-dimensional spaces are usually sparse, performing the convolution densely on all lattice points is inefficient. For speed reasons, we therefore keep track of the populated lattice points using a hash table and only convolve at those locations.
\subsection{Learning Permutohedral Filters}\label{sec:backprop}
The \emph{fixed} set of filter weights $\mathbf{w}$ from~\cite{adams2010fast} in Eq.~\ref{eq:blur} is designed to approximate a Gaussian filter.
However, the convolution kernel $\mathbf{w}$ can naturally be understood as a general filtering operation in the permutohedral lattice space with free parameters. In the exposition above we already presented this general case. As we will show in more detail later, this modification has non-trivial consequences for bilateral filters, CNNs and probabilistic graphical models. The size of the neighborhood $N_s(j)$ for the blur in Eq.~\ref{eq:blur} compares to the filter size of a spatial convolution. The filtering kernel of a common spatial convolution that considers $s$ points to either side in all dimensions has ${(2s + 1)}^d\in\mathcal{O}(s^d)$ parameters. A comparable filter on the permutohedral lattice with an $s$ neighborhood is specified by ${(s+1)}^{d+1} - s^{d+1} \in\mathcal{O}(s^d)$ elements (cf.\ Appendix~\ref{sec:permconv}). Thus, both share the same asymptotic size. By computing the gradients of the filter elements, we enable the use of gradient-based optimizers, \textit{e.g.}\@\xspace back-propagation for CNNs, in the same way that spatial filters in a CNN are learned. The gradients with respect to $\mathbf{x}$ and the filter weights in $B$ of a scalar loss $L$ are:
\begin{eqnarray}
\frac {\partial L} {\partial \mathbf{x}} &=& S'_{\text{splat}} %
B' %
S'_{\text{slice}} %
\frac {\partial L} {\partial \mathbf{x}'}, \label{eq:dv}\\
\frac {\partial L} {\partial {(B)}_{i, j}} &=& {\left(S'_{\text{slice}} \frac {\partial L} {\partial \mathbf{x}'}\right)}_i {(S_{\text{splat}} \mathbf{x})}_j.\label{eq:db}
\end{eqnarray}
Both gradients are needed during back-propagation, and in experiments we use stochastic back-propagation for learning the filter kernel. The permutohedral lattice convolution is parallelizable, and scales linearly with the filter size. Specialized implementations run at interactive speeds in image processing applications~\cite{adams2010fast}.
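As a sanity check, the gradients of the linear map $\mathbf{x}' = S_{\text{slice}} B S_{\text{splat}} \mathbf{x}$ can be verified against finite differences. The following NumPy sketch is our illustration with toy dense matrices and a made-up loss (the actual splat and slice matrices are sparse):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 4                        # toy sizes: n pixels, m lattice points
S_splat = rng.random((m, n))       # fixed interpolation matrices
S_slice = rng.random((n, m))
B = rng.standard_normal((m, m))    # learnable filter weights
x = rng.standard_normal(n)

def forward(B_, x_):
    return S_slice @ B_ @ S_splat @ x_

def loss(B_, x_):                  # toy loss L = 0.5 * ||x'||^2
    return 0.5 * np.sum(forward(B_, x_) ** 2)

dL_dxp = forward(B, x)             # for this loss, dL/dx' = x'

# Analytic gradients of the linear map (primes in the text are transposes):
dL_dx = S_splat.T @ B.T @ S_slice.T @ dL_dxp
dL_dB = np.outer(S_slice.T @ dL_dxp, S_splat @ x)
```

Perturbing a single entry of $B$ (or of $\mathbf{x}$) and differencing the loss reproduces the corresponding analytic gradient entry to within finite-difference accuracy.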
Our implementation in the Caffe deep learning framework~\cite{jia2014caffe} allows arbitrary filter parameters and the computation of the gradients on both CPU and GPU. The code is available at http://bilateralnn.is.tuebingen.mpg.de.
\section{Single Bilateral Filter Applications}\label{sec:filtering}
In this section, we will consider the problems of joint bilateral up-sampling~\cite{kopf2007joint} and 3D body mesh denoising as prominent instances of single bilateral filter applications. See~\cite{paris2009bilateral} for a recent overview of other bilateral filter applications. Further experiments on image denoising are included in Appendix~\ref{sec:appendix-bnn}, together with details about exact experimental protocols and more visualizations.
\subsection{Joint Bilateral Upsampling}
A typical technique to speed up computer vision algorithms is to compute results at a lower scale and up-sample the result to the full resolution. This up-sampling step may use the original resolution image as a guidance image. A joint bilateral up-sampling approach for this problem setting was developed in~\cite{kopf2007joint}. We describe the procedure for the example of up-sampling a color image. Given a high-resolution gray scale image (the guidance image) and the same image at a lower resolution but with color, the task is to up-sample the color image to the same resolution as the guidance image. Using the permutohedral lattice, joint bilateral up-sampling proceeds by splatting the color image into the lattice, using 2D position and 1D intensity as features and the 3D RGB values as the signal. A convolution is applied in the lattice and the result is read out at the features of the high-resolution image, that is, using the 2D position and intensity of the guidance image. The possibility of reading out (slicing) at points that are not necessarily the input points is an appealing feature of the permutohedral lattice convolution.
\begin{figure*}[t!]
\centering
\subfigure{%
\raisebox{2.0em}{ \includegraphics[width=.08\columnwidth]{figures/color_small.png}\label{fig:color_given} } }
\subfigure{%
\includegraphics[width=.16\columnwidth]{figures/color_guidance.pdf}\label{fig:color_guidance} }
\subfigure{%
\includegraphics[width=.16\columnwidth]{figures/color_original.pdf}\label{fig:color_original} }
\subfigure{%
\includegraphics[width=.16\columnwidth]{figures/color_bicubic.pdf}\label{fig:color_bicubic} }
\subfigure{%
\includegraphics[width=.16\columnwidth]{figures/color_gauss.pdf}\label{fig:color_gauss} }
\subfigure{%
\includegraphics[width=.16\columnwidth]{figures/color_learnt.pdf}\label{fig:color_learnt} }\\
\setcounter{subfigure}{0}
\subfigure[Input]{%
\raisebox{2.0em}{ \includegraphics[width=.08\columnwidth]{figures/depth_bicubic.png}\label{fig:depth_given} } }
\subfigure[Guidance]{%
\includegraphics[width=.16\columnwidth]{figures/depth_image.png}\label{fig:depth_guidance} }
\subfigure[Ground Truth]{%
\includegraphics[width=.16\columnwidth]{figures/depth_gt.png}\label{fig:depth_gt} }
\subfigure[Bicubic]{%
\includegraphics[width=.16\columnwidth]{figures/depth_bicubic.png}\label{fig:depth_bicubic} }
\subfigure[Gauss-BF]{%
\includegraphics[width=.16\columnwidth]{figures/depth_gauss.png}\label{fig:depth_gauss} }
\subfigure[Learned-BF]{%
\includegraphics[width=.16\columnwidth]{figures/depth_learnt.png}\label{fig:depth_learnt} }
\mycaption{Guided up-sampling}{Color (top) and depth (bottom) $8\times$ up-sampling results using different methods: Bicubic - bicubic interpolation; Gauss-BF - Gaussian bilateral up-sampling; Learned-BF - learned bilateral up-sampling (best viewed on screen).}
\label{fig:upsample_visuals}
\end{figure*}
\begin{table}[t]
\scriptsize
\centering
\begin{tabular}{l c c c c c}
\toprule
& \textbf{Upsampling factor} & \textbf{Bicubic} & \textbf{Gaussian} & \textbf{Learned} \\ [0.1cm]
\midrule
\multicolumn{2}{l}{\textbf{Color Upsampling (PSNR)}} & & &\\
& \textbf{2x} & 24.19 / 30.59 & 33.46 / 37.93 &
\textbf{34.05 / 38.74} \\ & \textbf{4x} & 20.34 / 25.28 & 31.87 / 35.66 & \textbf{32.28 / 36.38} \\ & \textbf{8x} & 17.99 / 22.12 & 30.51 / 33.92 & \textbf{30.81 / 34.41} \\ & \textbf{16x} & 16.10 / 19.80 & 29.19 / 32.24 & \textbf{29.52 / 32.75} \\ \midrule \multicolumn{2}{l}\textbf{Depth Upsampling (RMSE)} & & &\\ & \textbf{8x} & 0.753 & 0.753 & \textbf{0.748} \\ \bottomrule \\ \end{tabular} \vspace{-0.2cm} \mycaption{Joint bilateral up-sampling} {(top) PSNR values corresponding to various up-sampling factors and up-sampling strategies on the test images of the Pascal VOC12 segmentation / high-resolution 2MP dataset; (bottom) RMSE error values corresponding to up-sampling depth images estimated using~\cite{eigen2014depth} computed on the test images from the NYU depth dataset~\cite{silberman2012indoor}.} \label{tbl:upsample} \end{table} \vspace{-0.2cm} \subsubsection{Color Up-sampling} For the task of color up-sampling, we compare the Gaussian bilateral filter~\cite{kopf2007joint} against a learned generalized filter. We experimented with two different datasets: Pascal VOC2012 segmentation~\cite{voc2012segmentation} using train, validation and test splits, and 200 higher resolution (2MP) images from Google image search~\cite{google_images} with 100 train, 50 validation and 50 test images. For training, we use the mean squared error (MSE) criterion and perform stochastic gradient descent with a momentum term of $0.9$ and weight decay of $0.0005$, found using the validation set. In Table~\ref{tbl:upsample} we report results in terms of Peak-Signal-to-Noise ratio (PSNR) for the up-sampling factors $2\times, 4\times, 8\times$ and $16\times$. We compare standard bicubic interpolation, which does not use a guidance image, the Gaussian bilateral filter (with feature scales optimized on the validation set), and the learned filter. All filters have the same support.
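For intuition, the Gaussian joint bilateral up-sampling baseline can be sketched in brute-force NumPy. The permutohedral implementation replaces the nested loops below with the splat/blur/slice pipeline; the window radius and kernel scales here are illustrative assumptions, not the validated values:

```python
import numpy as np

def joint_bilateral_upsample(low, guide, factor, sigma_s=1.0, sigma_r=0.1, radius=2):
    """Brute-force Gaussian joint bilateral up-sampling (illustrative only).

    low:   (h, w, c) low-resolution color image
    guide: (H, W)    high-resolution grayscale guidance image
    """
    H, W = guide.shape
    out = np.zeros((H, W, low.shape[2]))
    for y in range(H):
        for x in range(W):
            acc = np.zeros(low.shape[2])
            wsum = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    # neighboring sample on the low-resolution grid
                    ly = min(max(y // factor + dy, 0), low.shape[0] - 1)
                    lx = min(max(x // factor + dx, 0), low.shape[1] - 1)
                    # guide intensity at the corresponding high-res location
                    g = guide[min(ly * factor, H - 1), min(lx * factor, W - 1)]
                    # spatial weight times range (intensity) weight
                    w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                               - (guide[y, x] - g) ** 2 / (2 * sigma_r ** 2))
                    acc += w * low[ly, lx]
                    wsum += w
            out[y, x] = acc / wsum
    return out
```

Since the weights are normalized, a constant low-resolution input yields a constant output regardless of the guidance image.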
For all up-sampling factors, joint bilateral Gaussian up-sampling outperforms bicubic interpolation and is in turn improved using a learned filter. A result of the up-sampling is shown in Fig.~\ref{fig:upsample_visuals} and more results are included in Section~\ref{sec:col_upsample_extra}. The learned filter recovers finer details in the images. We also performed the cross-factor analysis of training and testing at different up-sampling factors. Table~\ref{tbl:crossupsample} shows the PSNR results for this analysis. Although, in terms of PSNR, it is optimal to train and test at the same up-sampling factor, the differences are small when training and testing up-sampling factors are different. \begin{table}[t] \scriptsize \centering \begin{tabular}{c c c c c c} \toprule & & \multicolumn{4}{c}{Test Factor} \\ [0.1cm] & & \textbf{2$\times$} & \textbf{4$\times$} & \textbf{8$\times$} & \textbf{16$\times$} \\ [0.15cm] \parbox[t]{3mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{Train Factor}}} & \textbf{2$\times$} & \textbf{38.45} & 36.12 & 34.06 & 32.43 \\ [0.1cm] & \textbf{4$\times$} & 38.40 & \textbf{36.16} & \textbf{34.08} & 32.47 \\ [0.1cm] & \textbf{8$\times$} & 38.40 & 36.15 & \textbf{34.08} & 32.47 \\ [0.1cm] & \textbf{16$\times$} & 38.26 & 36.13 & 34.06 & \textbf{32.49} \\ \bottomrule \\ \end{tabular} \vspace{-0.2cm} \mycaption{Color upsampling with different train and test up-sampling factors} {PSNR values corresponding to different up-sampling factors used at train and test times on the 2 megapixel image dataset, using our learned bilateral filters.} \label{tbl:crossupsample} \vspace{-0.4cm} \end{table} \subsubsection{Depth Up-sampling} We experimented with depth up-sampling as another joint up-sampling task. We use the dataset of~\cite{silberman2012indoor} that comes with pre-defined train, validation and test splits. 
The approach of~\cite{eigen2014depth} is a CNN model that produces a result at 1/4th of the input resolution due to down-sampling operations in max-pooling layers. Furthermore, the authors down-sample the $640\times 480$ images to $320\times 240$ as a pre-processing step before CNN convolutions. The final depth result is bicubically interpolated to the original resolution. It is this interpolation that we replace with a Gaussian and learned joint bilateral up-sampling. The features are five-dimensional position and color information from the high resolution input image. The filter is learned using the same protocol as for color up-sampling, minimizing the MSE prediction error. The quantitative results are shown in Table~\ref{tbl:upsample}: the Gaussian filter performs on par with the bicubic interpolation, while the learned filter is better. Qualitative results are shown in Fig.~\ref{fig:upsample_visuals}; both joint bilateral up-sampling variants respect image edges in the result. For this~\cite{ferstl2015b} and other tasks, specialized interpolation algorithms exist, \textit{e.g.}\@\xspace deconvolution networks~\cite{zeiler2010deconvolutional}. Part of future work is to equip these approaches with bilateral filters. More qualitative results are presented in Appendix~\ref{sec:depth_upsample_extra}. \begin{figure}[h!] \centering \includegraphics[width=0.7\columnwidth]{figures/supplementary/sample_body_data.jpg} \mycaption{Sample data for 3D mesh denoising} {(top) Some 3D body meshes sampled from~\cite{SMPL:2015} and (bottom) the corresponding noisy meshes used in denoising experiments.} \label{fig:samplebody} \end{figure} \subsection{3D Mesh Denoising}\label{sec:mesh_denoising} Permutohedral convolutions can naturally be extended to higher ($>2$) dimensional data. To highlight this, we use the proposed convolution for the task of denoising 3D meshes. \begin{figure}[t!]
\centering \includegraphics[width=0.8\columnwidth]{figures/supplementary/isomap_features.jpg} \mycaption{4D isomap features for 3D human bodies} {Visualization of 4D isomap features for a sample 3D mesh. Isomap feature values are overlaid onto mesh vertices.} \label{fig:isomap} \end{figure} We sample 3D human body meshes using a generative 3D body model from~\cite{SMPL:2015}. To the clean meshes, we add Gaussian random noise displacements along the surface normal at each vertex location. Figure~\ref{fig:samplebody} shows some sample 3D meshes sampled from~\cite{SMPL:2015} and corresponding noisy meshes. The task is to take the noisy meshes as inputs and recover the original 3D body meshes. We create 1000 training, 200 validation and another 500 testing examples for the experiments. Although we use synthetically generated meshes and noise in order to obtain training data, our technique could potentially be used for denoising meshes arising from 3D scanning devices. \paragraph{Mesh Representation:} The 3D human body meshes from~\cite{SMPL:2015} are represented with 3D vertex locations and the edge connections between the vertices. We found that this signal representation using global 3D coordinates is not suitable for denoising with bilateral filtering. Therefore, we first smooth the noisy mesh using mean smoothing applied to the face normals~\cite{yagou2002mesh} and represent the noisy mesh vertices as 3D vector displacements with respect to the corresponding smoothed mesh. Thus, the task becomes denoising the 3D vector displacements with respect to the smoothed mesh. \paragraph{Isomap Features:} To apply the permutohedral convolution, we need to define features at each input vertex point. We use a 4-dimensional isomap embedding~\cite{tenenbaum2000global} of the given 3D mesh graph as features.
The given 3D mesh is converted into a weighted edge graph with edge weights set to the Euclidean distance between the connected vertices and to infinity between the non-connected vertices. Then the 4-dimensional isomap embedding is computed for this weighted edge graph using a publicly available implementation~\cite{isomap_code}. Figure~\ref{fig:isomap} shows the visualization of isomap features on a sample 3D mesh. \setlength{\tabcolsep}{2pt} \newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}b{#1}} \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}b{#1}} \newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}b{#1}} \begin{table}[t] \scriptsize \centering \begin{tabular}{C{3.0cm} C{1.5cm} C{1.5cm} C{1.5cm} C{1.5cm}} \toprule & \textbf{Noisy Mesh} & \textbf{Normal Smoothing} & \textbf{Gauss Bilateral} & \textbf{Learned Bilateral} \\ [0.1cm] \midrule \textbf{Vertex Distance (RMSE)} & 5.774 & 3.183 & 2.872 & \textbf{2.825} \\ \textbf{Normal Angle Error} & 19.680 & 19.707 & 19.357 & \textbf{19.207} \\ \bottomrule \\ \end{tabular} \vspace{-0.2cm} \mycaption{Body denoising} {Vertex distance RMSE values and normal angle error (in degrees) corresponding to different denoising strategies averaged over 500 test meshes.} \label{tbl:bodydenoise} \end{table} \paragraph{Experimental Results:} Mesh denoising with a bilateral filter proceeds by splatting the input 3D mesh vectors (displacements with respect to the smoothed mesh) into the 4D isomap feature space, filtering the signal in this 4D space and then slicing back into the original 3D input space. Table~\ref{tbl:bodydenoise} shows quantitative results for the different denoising strategies. The normal smoothing~\cite{yagou2002mesh} already reduces the RMSE. The Gauss bilateral filter results in a significant improvement over normal smoothing, and learning the filter weights again improves the result.
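The isomap features used above amount to classical multi-dimensional scaling on graph-geodesic distances. The following is a minimal dense sketch (the public implementation~\cite{isomap_code} differs in details, and this version is only practical for small graphs):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

def isomap_features(vertices, edges, dim=4):
    """Embed mesh vertices so that Euclidean distances in the embedding
    approximate geodesic (shortest-path) distances along mesh edges."""
    n = len(vertices)
    i, j = edges[:, 0], edges[:, 1]
    # weighted edge graph: Euclidean length on edges, no entry elsewhere
    w = np.linalg.norm(vertices[i] - vertices[j], axis=1)
    graph = csr_matrix((w, (i, j)), shape=(n, n))
    d = shortest_path(graph, directed=False)
    # classical MDS: double-center the squared geodesic distance matrix
    h = np.eye(n) - np.ones((n, n)) / n
    b = -0.5 * h @ (d ** 2) @ h
    vals, vecs = np.linalg.eigh(b)
    top = np.argsort(vals)[::-1][:dim]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))
```

For a simple path graph the embedding reproduces the geodesic distances exactly; on a real mesh it is only an approximation.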
A visual result is shown in Fig.~\ref{fig:bodyresult}. \begin{figure}[t!] \centering \includegraphics[width=0.7\columnwidth]{figures/supplementary/body_sample_result.jpg} \mycaption{Sample denoising result} {Ground truth mesh (left), corresponding given noisy mesh (middle) and the denoised result (right) using the learned bilateral filter.} \label{fig:bodyresult} \end{figure} \section{Learning Pairwise Potentials in Dense CRFs}\label{sec:densecrf} The bilateral convolution from Section~\ref{sec:learning} generalizes the class of DenseCRF models for which the mean-field inference from~\cite{krahenbuhl2012efficient} applies. DenseCRF models have found widespread use in various computer vision applications~\cite{sun2013fully, bell2014intrinsic, zheng2015conditional, vineet12eccv, vineet12emmcvpr, bell2015minc}. Recall from Section~\ref{sec:example_modesl_crf} that, for a DenseCRF model with unary potentials $\psi_u$ and pairwise potentials $\psi^{ij}_p$, the mean-field inference results in a fixed-point equation which can be solved iteratively to update the marginal distributions $Q_i$. In iteration $t$, we have: \begin{equation} Q^{t+1}_i(x_i) = \frac{1}{Z_i} \exp\{-\psi_u(x_i) - \sum_{l \in \mathcal{L}}\underbrace{\sum_{j \ne i} \psi^{ij}_p(x_i,l) Q^{t}_j(l)}_\text{bilateral filtering}\}. \label{eq:mfupdate-2} \end{equation} Thus, bilateral filtering is used for fast mean-field inference in DenseCRF models. One of the fundamental limitations of the existing use of DenseCRFs is the confinement of the pairwise potentials $\psi^{ij}_p(y_i,y_j)$ to be Gaussian, as bilateral filtering is traditionally applied with a Gaussian kernel. \subsection{Learning Pairwise Potentials} The proposed bilateral convolution generalizes the class of potential functions $\psi^{ij}_p$, since it allows a richer class of kernels $k(\mathbf{f}_i,\mathbf{f}_j)$ that furthermore can be learned from data.
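For intuition, one update of Eq.~\ref{eq:mfupdate-2} can be sketched with an explicit kernel matrix; the actual implementation evaluates the filtering step with the permutohedral lattice rather than a dense $n \times n$ product:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mean_field_step(unary, q, kernel, compat):
    """One mean-field update: Q^{t+1} = softmax(-psi_u - pairwise).

    unary:  (n, L) unary potentials psi_u
    q:      (n, L) current marginals Q^t
    kernel: (n, n) pairwise kernel k(f_i, f_j), zero diagonal for j != i
    compat: (L, L) label compatibility mu(x, l)
    """
    msg = kernel @ q              # bilateral filtering of the marginals
    pairwise = msg @ compat.T     # apply label compatibility transform
    return softmax(-unary - pairwise)
```

With a zero kernel the update reduces to the unary-only distribution; the normalization in the softmax plays the role of $1/Z_i$.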
So far, all dense CRF models have used Gaussian potential functions $k$; we replace them with the general bilateral convolution and learn the parameters of the kernel $k$, thus in effect learning the pairwise potentials of the dense CRF. This retains the desirable properties of this model class -- efficient inference through mean-field and the feature dependency of the pairwise potential. In order to learn the form of the pairwise potentials $k$, we make use of the gradients for the filter parameters in $k$ and use back-propagation through the mean-field iterations~\cite{domke2013learning, li2014mean} to learn them. The work of~\cite{krahenbuhl2013parameter} derived gradients to learn the feature scaling $\Lambda$ but not the form of the kernel $k$, which was still Gaussian. In~\cite{campbell2013fully}, the features $\mathbf{f}_i$ were derived using a non-parametric embedding into a Euclidean space and again a Gaussian kernel was used. The computation of the embedding was a pre-processing step, not integrated in an end-to-end learning framework. Both aforementioned works are generalizations that are orthogonal to our development and can be used in conjunction. \subsection{Experimental Evaluation} We evaluate the effect of learning more general forms of potential functions on two pixel labeling tasks, semantic segmentation of VOC data~\cite{voc2012segmentation} and material classification~\cite{bell2015minc}. We use pre-trained models from the literature and compare the relative change when learning the pairwise potentials, as in the last section. For both experiments, we use the multinomial logistic classification loss and learn the filters via back-propagation~\cite{domke2013learning}. This has also been understood as a recurrent neural network variant~\cite{zheng2015conditional}; the following experiments demonstrate the learnability of bilateral filters.
\begin{figure}[t] \centering \subfigure{% \includegraphics[width=.23\columnwidth]{figures/2010_003531_given.jpg} \label{fig:seg_given} } \subfigure{% \includegraphics[width=.23\columnwidth]{figures/2010_003531_gt.png} \label{fig:seg_gt} } \subfigure{% \includegraphics[width=.23\columnwidth]{figures/2010_003531_cnn.png} \label{fig:seg_cnn} } \subfigure{% \includegraphics[width=.23\columnwidth]{figures/2010_003531_learnt.png} \label{fig:seg_mf2steps} }\\ \setcounter{subfigure}{0} \subfigure[Input]{% \includegraphics[width=.23\columnwidth]{figures/000006011_given.jpg} \label{fig:minc_given} } \subfigure[Ground Truth]{% \includegraphics[width=.23\columnwidth]{figures/000006011_gt.png} \label{fig:minc_gt} } \subfigure[CNN]{% \includegraphics[width=.23\columnwidth]{figures/000006011_cnn.png} \label{fig:minc_cnn} } \subfigure[+\textit{loose}MF]{% \includegraphics[width=.23\columnwidth]{figures/000006011_learnt.png} \label{fig:minc_mf2steps} } \mycaption{Segmentation results}{An example result for semantic (top) and material (bottom) segmentation. (c) depicts the unary results before application of MF, (d) after two steps of \textit{loose}-MF with a learned CRF. More examples with comparisons to Gaussian pairwise potentials can be found in the supplementary material.} \label{fig:seg_visuals} \end{figure} \subsubsection{Semantic Segmentation} Semantic segmentation is the task of assigning a semantic label to every pixel. We choose the DeepLab network~\cite{chen2014semantic}, a variant of the VGGnet~\cite{simonyan2014very} for obtaining unaries. The DeepLab architecture runs a CNN model on the input image to obtain a result that is down-sampled by a factor of 8. The result is then bilinear interpolated to the desired resolution and serves as unaries $\psi_u(x_i)$ in a dense CRF. 
We use the same Potts label compatibility function $\mu$, and also use two kernels $k^1(\mathbf{f}_i,\mathbf{f}_j) + k^2(\mathbf{p}_i,\mathbf{p}_j)$ with the same features $\mathbf{f}_i=(x_i,y_i,r_i,g_i,b_i)^{\top}$ and $\mathbf{p}_i=(x_i,y_i)^{\top}$ as in~\cite{chen2014semantic}. Thus, the two filters operate in parallel, one on color \& position and one on the spatial domain. We also initialize the mean-field update equations with the CNN unaries. The only change in the model is the type of the pairwise potential function from Gaussian to a generalized form. We evaluate the result after 1 step and 2 steps of mean-field inference and compare the Gaussian filter versus the learned version (cf. Tab.~\ref{tbl:seg_results}). First, as in~\cite{chen2014semantic} we observe that one step of mean field improves the performance by 2.48\% in Intersection over Union (IoU) score. However, a learned potential increases the score by 2.93\%. The same behavior is observed for 2 steps: the learned result again improves over the Gaussian mean-field performance. Further, we tested a variant of the mean-field model that learns a separate kernel for the first and second step~\cite{li2014mean}. This `loose' mean-field model leads to a further improvement in performance. It is not obvious how to take advantage of a loose model in the case of Gaussian potentials.
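The two feature spaces of the parallel kernels can be assembled as follows (a sketch; the per-dimension feature scales $\Lambda$ are validated hyper-parameters and omitted here):

```python
import numpy as np

def crf_features(image):
    """Features for the two parallel CRF kernels: bilateral
    f_i = (x, y, r, g, b) and spatial p_i = (x, y), one row per pixel."""
    H, W, _ = image.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    f_bilateral = np.concatenate(
        [xs[..., None], ys[..., None], image], axis=-1).reshape(-1, 5)
    f_spatial = np.stack([xs, ys], axis=-1).reshape(-1, 2)
    return f_bilateral, f_spatial
```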
\begin{table}[t] \setlength{\tabcolsep}{2pt} \scriptsize \centering \begin{tabular}{l c c c c} \toprule & & + \textbf{MF-1step} & + \textbf{MF-2 step} & + \textbf{\textit{loose} MF-2 step} \\ [0.1cm] \midrule \multicolumn{4}{l}{Semantic segmentation (IoU) - CNN~\cite{chen2014semantic}: 72.08 / 66.95} & \\ & Gauss CRF & +2.48 & +3.38 & +3.38 / +3.00 \\ & Learned CRF & +2.93 & +3.71 & \textbf{+3.85 / +3.37} \\ \midrule \multicolumn{4}{l}{Material segmentation (Pixel Accuracy) - CNN~\cite{bell2015minc}: 67.21 / 69.23 } & \\ & Gauss CRF & +7.91 / +6.28 & +9.68 / +7.35 & +9.68 / +7.35 \\ & Learned CRF & +9.48 / +6.23 & +11.89 / +6.93 & \textbf{+11.91 / +6.93} \\ \\ \end{tabular} \mycaption{Improved mean-field inference with learned potentials} {(top) Average IoU score on Pascal VOC12 validation/test data~\cite{voc2012segmentation} for semantic segmentation; (bottom) Accuracy for all pixels / averaged over classes on the MINC test data~\cite{bell2015minc} for material segmentation.} \label{tbl:seg_results} \end{table} \subsubsection{Material Segmentation} We adopt the method and dataset from~\cite{bell2015minc} for the material segmentation task. Their approach proposes the same architecture as in the previous section: a CNN to predict the material labels (\textit{e.g.}\@\xspace\ wool, glass, sky, etc.) followed by a densely connected CRF using Gaussian potentials and mean-field inference. We re-use the pre-trained CNN and choose the CRF parameters and Lab color/position features as in~\cite{bell2015minc}. Results for pixel accuracy and class-averaged pixel accuracy are shown in Table~\ref{tbl:seg_results}. Following the CRF validation in~\cite{bell2015minc}, we ignored the label `other' for both training and evaluation. For this dataset, the amount of training data is small: 928 images with only sparse segment annotations.
While this is enough to cross-validate a few hyper-parameters, we would expect the general bilateral convolution to benefit from more training data. Visual results are shown in Fig.~\ref{fig:seg_visuals} and more are included in Appendix~\ref{sec:semantic_bnn_extra}. \section{Bilateral Neural Networks}\label{sec:bnn} Probably the most promising opportunity for the generalized bilateral filter is its use in Convolutional Neural Networks. Since we are not restricted to the Gaussian case, we can stack several filters both in parallel and sequentially, in the same way as filters are ordered in layers in typical spatial CNN architectures. Having the gradients available allows for end-to-end training with back-propagation, without the need for any change in CNN training protocols. We refer to the layers of bilateral filters as `bilateral convolution layers' (BCL). As discussed in the introduction, these can be understood as either linear filters in a high dimensional space or a filter with an image adaptive receptive field. In the remainder, we will refer to CNNs that include at least one bilateral convolution layer as a bilateral neural network (BNN). \setlength{\tabcolsep}{4pt} \begin{table}[h] \scriptsize \centering \begin{tabular}{l c c} \toprule \textbf{Dim.-Features} & \textbf{d-dim caffe} & \textbf{BCL} \\ [0.1cm] \midrule 2D-$(x, y)$ & \textbf{3.3 $\pm$ 0.3 / 0.5 $\pm$ 0.1} & 4.8 $\pm$ 0.5 / 2.8 $\pm$ 0.4 \\ 3D-$(r,g,b)$ & 364.5 $\pm$ 43.2 / 12.1 $\pm$ 0.4 & \textbf{5.1 $\pm$ 0.7 / 3.2 $\pm$ 0.4} \\ 4D-$(x,r,g,b)$ & 30741.8 $\pm$ 9170.9 / 1446.2 $\pm$ 304.7 & \textbf{6.2 $\pm$ 0.7 / 3.8 $\pm$ 0.5} \\ 5D-$(x,y,r,g,b)$ & out of memory & \textbf{7.6 $\pm$ 0.4 / 4.5 $\pm$ 0.4} \\ \\ \end{tabular} \mycaption{Runtime comparison: BCL vs. spatial convolution} {Average CPU/GPU runtime (in ms) of 50 1-neighborhood filters averaged over 1000 images from Pascal VOC. All scaled features $(x,y,r,g,b)\in[0,50)$.
BCL includes splatting and slicing operations, which in layered networks can be re-used.} \label{tbl:time} \end{table} What are the possibilities of a BCL compared to a standard spatial layer? First, we can choose a feature space $\mathbf{f}_i\in\mathbb{R}^d$ that defines the proximity between elements for performing the convolution. This can include color or intensity as in the previous example. We performed a runtime comparison (Tab.~\ref{tbl:time}) between our current implementation of a BCL and the caffe~\cite{jia2014caffe} implementation of a $d$-dimensional convolution. For 2D positional features (first row), the standard layer is faster since the permutohedral algorithm comes with an overhead. For higher dimensions $d>2$, the runtime depends on the sparsity, but ignoring the sparsity quickly leads to intractable runtimes for the regular $d$-dimensional convolution. The permutohedral lattice convolution is in effect a sparse matrix-vector product and thus performs favorably in this case. In the original work~\cite{adams2010fast}, it was presented as an approximation to the Gaussian case; here we take the viewpoint of it being the definition of the convolution itself. Next we illustrate two use cases of BNNs and compare against spatial CNNs. Appendix~\ref{sec:appendix-bnn} contains further explanatory experiments with examples on MNIST digit recognition. \begin{figure}[t!] \centering \subfigure[Sample tile images]{% \includegraphics[width=0.6\columnwidth]{figures/sample_slate_images.jpg}\label{fig:slate_sample} }\\ \subfigure[NN architecture]{% \includegraphics[width=.3\columnwidth]{figures/slate_network.pdf}\label{fig:slate_network} } \subfigure[IoU versus Epochs]{% \includegraphics[width=.3\columnwidth]{figures/slate_plots.pdf}\label{fig:slate_plots} } \mycaption{Segmenting Tiles}{(a) Example tile input images; (b) the 3-layer NN architecture used in experiments.
`Conv' stands for spatial convolutions, resp.~bilateral convolutions; (c) Training progress in terms of validation IoU versus training epochs.} \label{fig:slate_experiment} \end{figure} \subsection{An Illustrative Example: Segmenting Tiles} In order to highlight the model possibilities of using higher dimensional sparse feature spaces for convolutions through BCLs, we constructed the following illustrative problem. A randomly colored foreground tile with size $20\times 20$ is placed on a randomly colored background of size $64 \times 64$. Gaussian noise with a standard deviation of $0.02$ is added and color values are normalized to $[0,1]$; example images are shown in Fig.~\ref{fig:slate_sample}. The task is to segment out the smaller tile. A pixel classifier cannot distinguish foreground from background since the color is random. We train CNNs with three convolution/ReLU layers and varying filters of size $n \times n, n \in \{9, 13, 17, 21\}$. The schematic of the architecture is shown in Fig.~\ref{fig:slate_network} ($32,16,2$ filters). We create 10k training, 1k validation and 1k test images and use the validation set to choose learning rates. In Fig.~\ref{fig:slate_plots}, we plot the validation IoU against training epochs. Now, we replace all spatial convolutions with bilateral convolutions for a full BNN. The features are $\mathbf{f}_i=(x_i,y_i,r_i,g_i,b_i)^{\top}$ and the filter has a neighborhood of $1$. The total number of parameters in this network is around $40k$ compared to $52k$ for $9\times 9$ up to $282k$ for a $21\times 21$ CNN. With the same training protocol and optimizer, the convergence rate of the BNN is much faster. In this example, as in the semantic segmentation discussed in the last section, color is discriminative information for the label. The bilateral convolutions \emph{see} the color difference: the points are already pre-grouped in the permutohedral lattice, and the task that remains is to assign a label to the two groups.
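The synthetic data for this experiment can be generated along these lines (a sketch matching the description above; uniform color sampling is an assumption):

```python
import numpy as np

def make_tile_example(rng, size=64, tile=20, noise=0.02):
    """Random-colored foreground tile on a random-colored background
    with additive Gaussian noise, plus the ground-truth label map."""
    img = np.ones((size, size, 3)) * rng.rand(3)    # background color
    y, x = rng.randint(0, size - tile + 1, size=2)  # tile placement
    img[y:y + tile, x:x + tile] = rng.rand(3)       # foreground color
    label = np.zeros((size, size), dtype=np.int64)
    label[y:y + tile, x:x + tile] = 1
    img = np.clip(img + rng.randn(size, size, 3) * noise, 0.0, 1.0)
    return img, label
```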
\vspace{-0.4cm} \subsection{Character Recognition} The results for tile, semantic, and material segmentation when using general bilateral filters mainly improved because the feature space was used to encode useful prior information about the problem (close-by pixels with similar RGB values tend to have the same label). Such prior knowledge is often available when structured predictions are to be made, but the input signal may also be in a sparse format to begin with. Let us consider handwritten character recognition, one of the prime cases for CNN use. The Assamese character dataset~\cite{bache2013uci} contains 183 different Indo-Aryan symbols with 45 writing samples per class. Some sample character images are shown in Fig.~\ref{fig:assamese_sample}. This dataset has been collected on a tablet PC using a pen input device and has been pre-processed to binary images of size $96 \times 96$. Only about $3\%$ of the pixels contain a pen stroke, which we will denote by $I_i=1$. \begin{figure}[t] \centering \subfigure[Sample Assamese character images (9 classes, 2 samples each)]{% \includegraphics[width=0.6\columnwidth]{figures/sample_assamese.png}\label{fig:assamese_sample} }\\ \subfigure[LeNet training]{% \includegraphics[width=.3\columnwidth]{figures/lenet_plots.pdf}\label{fig:assamese_lenet} } \subfigure[DeepCNet training]{% \includegraphics[width=.3\columnwidth]{figures/deepcnet_plots.pdf}\label{fig:assamese_deepcnet} } \mycaption{Character recognition}{ (a) Sample Assamese character images~\cite{bache2013uci}; and training progression of various models with (b) LeNet and (c) DeepCNet base networks.} \label{fig:assamese_experiment} \vspace{-0.3cm} \end{figure} A CNN is a natural choice to approach this classification task. We experiment with two CNN architectures that have been used for this task, LeNet-7 from~\cite{lecun1998gradient} and DeepCNet~\cite{ciresan2012multi, graham2014spatially}.
The LeNet is a shallower network with bigger filter sizes whereas DeepCNet is deeper with smaller convolutions. Both networks are fully specified in Appendix~\ref{sec:addresults}. In order to simplify the task for the networks, we cropped the characters by placing a tight bounding box around them and providing the bounding boxes as input to the networks. We will call these networks Crop-LeNet and Crop-DeepCNet. We randomly divided the data into 30 writers for training, 6 for validation and the remaining 9 for testing. Fig.~\ref{fig:assamese_lenet} and Fig.~\ref{fig:assamese_deepcnet} show the training progress for various LeNet and DeepCNet models, respectively. DeepCNet is a better choice for this problem, and for both cases, pre-processing the data by cropping improves convergence. The input is spatially sparse and the BCL provides a natural way to take advantage of this. For both networks, we create a BNN variant (BNN-LeNet and BNN-DeepCNet) by replacing the first layer with bilateral convolutions using the features $\mathbf{f}_i = (x_i, y_i)^{\top}$, and we \emph{only} consider the foreground points $I_i=1$. The values $(x_i,y_i)$ denote the position of the pixel with respect to the top-left corner of the bounding box around the character. In effect, the lattice is very sparse, which reduces runtime because the convolutions are only performed on the $3\%$ of the points that are actually observed. A bilateral filter has $7$ parameters compared to a receptive field of $3\times 3$ for the first DeepCNet layer and $5\times 5$ for the first LeNet layer. Thus, a BCL with the same number of filters has fewer parameters. The result of the BCL convolution is then splatted at all points $(x_i,y_i)$ and passed on to the remaining spatial layers. The convergence behavior is shown in Fig.~\ref{fig:assamese_experiment} and again we find faster convergence and also better validation accuracy.
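Extracting the sparse lattice input for the bilateral first layer is straightforward; the following sketch keeps only the observed pen-stroke pixels $I_i=1$, with bounding-box-relative positions as features:

```python
import numpy as np

def sparse_stroke_features(binary_img):
    """Return (x, y) features relative to the character bounding box
    for the small fraction of pixels that contain a pen stroke."""
    ys, xs = np.nonzero(binary_img)  # observed points only, I_i = 1
    feats = np.stack([xs - xs.min(), ys - ys.min()], axis=1)
    return feats.astype(np.float64)
```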
The empirical results of this experiment for all tested architectures are summarized in Table~\ref{tbl:assamese}, with the BNN variants clearly outperforming their spatial counterparts. The absolute results can be vastly improved by making use of virtual examples, \textit{e.g.}\@\xspace by affine transformations~\cite{graham2014spatially}. The purpose of these experiments is to compare the networks on equal grounds; we believe that additional data will be beneficial for both networks and have no reason to believe that a particular network benefits more. \setlength{\tabcolsep}{2pt} \newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}b{#1}} \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}b{#1}} \newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}b{#1}} \begin{table}[t] \scriptsize \centering \begin{tabular}{C{1.65cm} C{1.55cm} C{1.55cm} C{1.55cm} | C{1.55cm} C{1.55cm} C{1.55cm}} \toprule & \textbf{LeNet} & \textbf{Crop-LeNet} & \textbf{BNN-LeNet} & \textbf{DeepCNet} & \textbf{Crop-DeepCNet} & \textbf{BNN-DeepCNet} \\ [0.2cm] \midrule Validation & 59.29 & 68.67 & \textbf{75.05} & 82.24 & 81.88 & \textbf{84.15} \\ Test & 55.74 & 69.10 & \textbf{74.98} & 79.78 & 80.02 & \textbf{84.21} \\ \\ \end{tabular} \mycaption{Results on Assamese character images} {Total recognition accuracy for the different models.} \label{tbl:assamese} \end{table} \section{Discussion and Conclusions} We proposed to learn bilateral filters from data. In hindsight, it may appear obvious that this leads to performance improvements compared to a fixed parametric form, \textit{e.g.}\@\xspace the Gaussian. The key insight is to understand algorithms that facilitate fast approximate computation of Eq.~\ref{eq:bilateral} as parameterized implementations of a bilateral filter with free parameters; this enables gradient descent based learning.
We relaxed the non-separability in the algorithm from~\cite{adams2010fast} to allow for more general filter functions. There is a wide range of possible applications for learned bilateral filters~\cite{paris2009bilateral} and we discussed some generalizations of previous work. These include joint bilateral up-sampling and inference in dense CRFs. We further demonstrated a use case of bilateral convolutions in neural networks. The bilateral convolutional layer allows for filters whose receptive field changes with the input image. The feature space view provides a canonical way to encode similarity between any kind of objects, not only pixels, but \textit{e.g.}\@\xspace bounding boxes, segmentations, or surfaces. The proposed filtering operation is then a natural candidate for defining filter convolutions on these objects; it takes advantage of sparsity and scales to higher dimensions. Therefore, we believe that this view will be useful for several problems where CNNs can be applied. An open research problem is whether the sparse higher dimensional structure also allows for efficient or compact representations for intermediate layers inside CNN architectures. In summary, the proposed technique for learning sparse high-dimensional filters results in a generalization of bilateral filters that helps in learning task-specific bilateral filters. In the context of inference, learnable bilateral filters can be used for better modeling and thus better inference in neural networks as well as DenseCRF models. \chapter{Video Propagation Networks} \label{chap:vpn} In this chapter, we leverage the learnable bilateral filters developed in the previous chapter and develop a novel neural network architecture for inference on video data. We focus on the task of propagating information across video frames. Standard CNNs are poor candidates for filtering video data.
Standard spatial CNNs have fixed receptive fields, whereas the video content changes differently in different videos, depending on the type of scene and camera motion. Filters with video-adaptive receptive fields are therefore better candidates for video filtering. Based on this observation, we adapt the bilateral convolution layers (BCL) proposed in the previous chapter for filtering video data. By stacking several BCL and standard spatial convolutional layers, we develop a neural network architecture for video information propagation which we call `Video Propagation Network' (VPN). We evaluate VPN on different tasks of video object segmentation and semantic video segmentation and show increased performance compared to the best previous task-specific methods, while having favorable runtime. Additionally, we demonstrate our approach on an example regression task of propagating color in a grayscale video. \section{Introduction} We focus on the problem of propagating structured information across video frames in this chapter. This problem appears in many forms (e.g., semantic segmentation or depth estimation) and is a prerequisite for many applications. An example instance is shown in Fig.~\ref{fig:illustration-vpn}. Given an accurate object mask for the first frame, the problem is to propagate this mask forward through the entire video sequence. Propagation of semantic information through time and video colorization are other problem instances. Videos pose both technical and representational challenges. The presence of scene and camera motion leads to the difficult association problem of optical flow. Video data is computationally more demanding than static images. A naive per-frame approach would scale at least linearly with the number of frames. These challenges complicate the use of standard convolutional neural networks (CNNs) for video processing. As a result, many previous works for video propagation use slow optimization-based techniques. \begin{figure}[t!]
\begin{center} \centerline{\includegraphics[width=\columnwidth]{figures/teaser_network.pdf}} \mycaption{Video Propagation with VPNs} {The end-to-end trained VPN network is composed of a bilateral network followed by a standard spatial network and can be used for propagating information across frames. Shown here is an example result of propagating the foreground mask from the 1$^{st}$ frame to other video frames.} \label{fig:illustration-vpn} \vspace{-0.3cm} \end{center} \end{figure} We propose a generic neural network architecture that propagates information across video frames. The main innovation is the use of image-adaptive convolutional operations that automatically adapt to the changing content of the video stream. The architecture can be applied to several types of information, e.g.\ labels or colors, and runs online, that is, it requires only current and previous frames. Our architecture is composed of two components (see Fig.~\ref{fig:illustration-vpn}). A temporal \textit{bilateral network} performs image-adaptive spatio-temporal dense filtering. It densely connects all pixels from the current and previous frames and propagates the associated pixel information to the current frame. The bilateral network allows the specification of a metric between video pixels and a straightforward integration of temporal information. It is followed by a standard \textit{spatial CNN} on the filter output that refines and predicts for the present video frame. We call this combination a \textit{Video Propagation Network (VPN)}. In effect, we combine a filtering technique with rather small spatial CNNs, which leads to a favorable runtime compared to many previous approaches. VPNs have the following suitable properties for video processing: \vspace{-0.5cm} \paragraph{General applicability:} VPNs can be used for propagating any type of information content, i.e., both discrete (e.g., semantic labels) and continuous (e.g.
color) information across video frames. \vspace{-0.5cm} \paragraph{Online propagation:} The method needs no future frames and so can be used for online video analysis. \vspace{-0.5cm} \paragraph{Long-range and image adaptive:} VPNs can efficiently handle a large number of input frames and are adaptive to the video. \vspace{-0.5cm} \paragraph{End-to-end trainable:} VPNs can be trained end-to-end, so they can be used within other deep network architectures. \vspace{-0.5cm} \paragraph{Favorable runtime:} VPNs have favorable runtime in comparison to several current best methods, which also makes them amenable for learning with large datasets. Empirically we show that VPNs, despite being generic, perform better than or on par with current best approaches on video object segmentation and semantic label propagation, while being faster. VPNs can easily be integrated into sequential per-frame approaches and require only a small fine-tuning step that can be performed separately. \section{Related Work} \label{sec:related} The literature on propagating information across video frames contains a vast and varied set of approaches. Here, we only discuss those works that are related to our technique and applications. \vspace{-0.3cm} \paragraph{General propagation techniques} Techniques for propagating content across image or video pixels are predominantly optimization-based or filtering techniques. Optimization-based techniques typically formulate the propagation as an energy minimization problem on a graph constructed across video pixels or frames. A classic example is the color propagation technique from~\cite{levin2004colorization} which uses a graph structure that encodes prior knowledge about pixel colors in a local neighborhood.
Although efficient closed-form solutions~\cite{levin2008closed} exist for certain scenarios, optimization tends to be slow, either due to large graph structures for videos or due to complex connectivity that necessitates iterative optimization schemes. Fully-connected conditional random fields (CRFs)~\cite{krahenbuhl2012efficient} opened a way for incorporating dense and long-range pixel connections while retaining fast inference. Filtering techniques~\cite{kopf2007joint,chang2015propagated,he2013guided} aim to propagate information with the use of image or video filters, resulting in fast runtimes compared to optimization techniques. Bilateral filtering~\cite{aurich1995non,tomasi1998bilateral} is one of the popular filters for long-range information propagation. We have already discussed bilateral filtering and its generalization in the previous chapter. A popular application, also discussed in the previous chapter, is joint bilateral up-sampling~\cite{kopf2007joint}, which up-samples a low-resolution signal with the use of a high-resolution guidance image. Chapter~\ref{chap:bnn} and the works of~\cite{li2014mean,domke2013learning,zheng2015conditional,schwing2015fully,barron2015bilateral} showed that one can back-propagate through the bilateral filtering operation for learning filter parameters (Chapter~\ref{chap:bnn}) or for doing optimization in the bilateral space~\cite{barron2015bilateral,barron2015defocus}. Recently, several works proposed to do up-sampling in images by learning CNNs that mimic edge-aware filtering~\cite{xu2015deep} or that directly learn to up-sample~\cite{li2016deep,hui2016depth}. Most of these works are confined to images and are either not extendible to videos or computationally too expensive for them. We leverage some of these previous works and propose a scalable yet robust neural network based approach for video content propagation.
\vspace{-0.3cm} \paragraph{Video object segmentation} Prior work on video object segmentation can be broadly categorized into two types: semi-supervised methods that require manual annotation to define what the foreground object is, and unsupervised methods that perform segmentation completely automatically. Unsupervised techniques such as~\cite{faktor2014video,li2013video,lee2011key,papazoglou2013fast,wang2015saliency, zhang2013video,taylor2015causal,dondera2014interactive} use prior information about the foreground objects, such as distinctive motion or saliency, and typically fail if these assumptions do not hold in a video. In this work, we focus on the semi-supervised task of propagating the foreground mask from the first frame to the entire video. Existing works predominantly use graph-based optimization frameworks that perform graph-cuts~\cite{boykov2001fast, boykov2001interactive,shi2000normalized} on video data. Several of these works~\cite{reso2014interactive, li2005video,price2009livecut,wang2005interactive,kohli2007dynamic,jain2014supervoxel} aim to reduce the complexity of the graph structure with clustering techniques such as spatio-temporal superpixels and optical flow~\cite{tsaivideo}. Another direction is to estimate correspondences between pixels of different frames~\cite{agarwala2004keyframe,bai2009video,lang2012practical} using nearest neighbor fields~\cite{fan2015jumpcut} or optical flow~\cite{chuang2002video}, and then to refine the propagated masks with local classifiers. Closest to our technique are the works of~\cite{perazzi2015fully} and~\cite{marki2016bilateral}. \cite{perazzi2015fully} proposed to use a fully-connected CRF over refined object proposals across the video frames, and \cite{marki2016bilateral} proposed a graph-cut in the bilateral space. Our approach is similar in that it also uses a bilateral space embedding.
Instead of graph-cuts, we learn propagation filters in the high-dimensional bilateral space with CNNs. This results in a more generic architecture and allows integration into other deep learning frameworks. Two contemporary works~\cite{caelles2016one,khoreva2016learning} proposed CNN based approaches for video object segmentation. Both works rely on fine-tuning a deep network using the first frame annotation of a given test sequence. This could potentially result in overfitting to the background. In contrast, the proposed approach relies only on offline training and thus can be easily adapted to different problem scenarios. \vspace{-0.3cm} \paragraph{Semantic video segmentation} Earlier methods such as~\cite{brostow2008segmentation,sturgess2009combining} use structure from motion on video frames to compute geometrical and/or motion features. More recent works~\cite{ess2009segmentation,chen2011temporally,de2012line,miksik2013efficient,tripathi2015semantic, kundu2016feature} construct large graphical models on videos and enforce temporal consistency across frames. \cite{chen2011temporally} used dynamic temporal links in their CRF energy formulation. \cite{de2012line} proposes to use a Perturb-and-MAP random field model with spatio-temporal energy terms based on the Potts model, and \cite{miksik2013efficient} propagates predictions across time by learning a similarity function between pixels of consecutive frames. In recent years, there has been a big leap in the performance of semantic image segmentation~\cite{long2014fully,chen2014semantic} with the use of CNNs, but mostly applied to single images. Recently,~\cite{shelhamer2016clockwork} proposed to retain the intermediate CNN representations while sliding the image-based CNN across the frames. Another approach, which inspired our work, is to take unary predictions from a CNN and then propagate semantic information across the frames.
A recent prominent approach in this direction is that of~\cite{kundu2016feature}, which proposes a technique for optimizing feature spaces for fully-connected CRFs. \section{Video Propagation Networks} \label{sec:vpn} We aim to adapt the bilateral filtering operation to predict information forward in time, across video frames. Formally, we work on a sequence of $n$ (color or grayscale) images $\mathbf{x} = \{\mathbf{x}_1, \mathbf{x}_2, \cdots, \mathbf{x}_n\}$ and denote with $\mathbf{y} = \{\mathbf{y}_1, \mathbf{y}_2, \cdots, \mathbf{y}_n\}$ a sequence of outputs, one per frame. Consider, as an example, a sequence $\mathbf{y}_1,\ldots,\mathbf{y}_n$ of foreground masks for a moving object in the scene. Our goal is to develop an online propagation method, that is, a function that has no access to future frames. Formally, we predict $\mathbf{y}_t$, having observed the video up to frame $t$ and possibly the previous outputs $\mathbf{y}_{1,\cdots,t-1}$: \begin{equation} \mathcal{F}(\mathbf{y}_{t-1}, \mathbf{y}_{t-2}, \cdots; \mathbf{x}_t, \mathbf{x}_{t-1}, \mathbf{x}_{t-2},\cdots) = \mathbf{y}_t. \end{equation} \begin{figure*}[t!] \begin{center} \centerline{\includegraphics[width=\textwidth]{figures/net_illustration.pdf}} \mycaption{Computation Flow of Video Propagation Network} {Bilateral networks (BNN) consist of a series of bilateral filterings interleaved with ReLU non-linearities. The filtered information from the BNN is then passed into a spatial network (CNN) which refines the features with convolution layers interleaved with ReLU non-linearities, resulting in the prediction for the current frame.} \label{fig:net_illustration} \end{center} \end{figure*} If training examples $(\mathbf{x},\mathbf{y})$ with full or partial knowledge of $\mathbf{y}$ are available, it is possible to learn $\mathcal{F}$, and for a complex and unknown relationship between input and output a deep CNN is a natural design choice.
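To make the online setting concrete, the prediction scheme above can be sketched as a simple loop over frames, where at every time step only current and past frames and past outputs are visible. This is an illustrative sketch; \texttt{predict} is a hypothetical placeholder for the learned function $\mathcal{F}$.

```python
def propagate_video(frames, y_first, predict):
    """Online propagation: y_t depends only on x_1..x_t and y_1..y_{t-1}.

    `predict` stands in for the learned function F; it receives past
    outputs and frames in most-recent-first order.
    """
    outputs = [y_first]
    for t in range(1, len(frames)):
        past_y = outputs[::-1]   # y_{t-1}, y_{t-2}, ...
        past_x = frames[t::-1]   # x_t, x_{t-1}, ...
        outputs.append(predict(past_y, past_x))
    return outputs
```

A trivial choice of \texttt{predict} that copies the previous output already yields a (motion-unaware) baseline; the VPN replaces it with a learned, video-adaptive network.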
However, any learning-based method has to face a main challenge: scene and camera motion and their effect on $\mathbf{y}$. Since no motion in two different videos is the same, fixed-sized static receptive fields of CNN units are insufficient. We propose to resolve this with a video-adaptive convolutional component, an adaptation of bilateral filtering to videos. Our Bilateral Network (Section~\ref{sec:bilateralnetwork}) has a connectivity that adapts to video sequences; its output is then fed into a common Spatial Network (Section~\ref{sec:spatialcnn}) that further refines the desired output. The combined network layout of this Video Propagation Network is depicted in Fig.~\ref{fig:net_illustration}. It is a sequence of learnable bilateral and spatial filters that is efficient, trainable end-to-end and adaptive to the video input. \begin{figure*}[t!] \begin{center} \centerline{\includegraphics[width=\textwidth]{figures/permutohedral_illustration_2.pdf}} \mycaption{Schematic of Fast Bilateral Filtering for Video Processing} {Mask probabilities from previous frames $V_{1,\cdots,t-1}$ are splatted onto the lattice positions defined by the image features $f_{I_{1}},f_{I_2},\cdots,f_{I_{t-1}}$. The splatted result is convolved with a $1 \times 1$ filter $B$, and the filtered result is sliced back to the original image space to get $V_t$ for the present frame. Input and output need not be $V_t$, but can also be an intermediate neural network representation. $B$ is learned via back-propagation through these operations.} \label{fig:filter_illustration} \end{center} \end{figure*} \subsection{Bilateral Network (BNN)}\label{sec:bilateralnetwork} In this section, we describe the extension of the learnable bilateral filtering proposed in Chapter~\ref{chap:bnn} to video data. Several properties of bilateral filtering make it a perfect candidate for information propagation in videos.
In particular, our method is inspired by two main ideas that we extend in this work: joint bilateral up-sampling~\cite{kopf2007joint} and learnable bilateral filters (Chapter~\ref{chap:bnn}). Although bilateral filtering has been used for filtering video data before~\cite{paris2008edge}, its use has been limited to fixed filter weights (say, Gaussian). {\bf Fast Bilateral Up-sampling across Frames} The idea of joint bilateral up-sampling~\cite{kopf2007joint} is to view up-sampling as a filtering operation. A high-resolution guidance image is used to up-sample a low-resolution result. In short, input points with values $\mathbf{y}_j$ and corresponding features $\mathbf{f}_j$, $j=1,\ldots,N_{in}$, are given, for example a segmentation result at a lower resolution. This is then scaled up to a larger number of output positions with features $\mathbf{f}_i$, $i=1,\ldots,N_{out}$, using the bilateral filtering operation, that is, by computing \begin{equation} \mathbf{y}'_i = \sum_{j=1}^{N_{in}} \mathbf{w}_{\mathbf{f}_i,\mathbf{f}_j} \mathbf{y}_j \label{eq:bilateral2} \end{equation} where the sum runs over all $N_{in}$ input points and the output is computed for all $N_{out}$ positions. We will use this idea to propagate content from previous frames to the current frame (all of which have the same dimensions), using the current frame as a guidance image. This is illustrated in Fig.~\ref{fig:filter_illustration}. We take all previous frame results $\mathbf{y}_{1,\cdots,t-1}$ and splat them into a lattice using the features computed on video frames $\mathbf{x}_{1,\cdots,t-1}$. A filtering (described below) is applied to every lattice point and the result is then sliced back using the current frame $\mathbf{x}_t$. This result need not be the final $\mathbf{y}_t$; in fact, we compute a filter bank of responses and continue with further processing, as will be discussed.
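As a sanity check, Eq.~\ref{eq:bilateral2} with Gaussian weights can be implemented naively in $O(N_{in} \cdot N_{out})$; the permutohedral lattice used in this work approximates exactly this computation in linear time. The sketch below is purely illustrative and is not the lattice implementation.

```python
import numpy as np

def bilateral_filter(y_in, f_in, f_out, sigma=1.0):
    """Brute-force bilateral filtering: y'_i = sum_j w(f_i, f_j) y_j
    with normalized Gaussian weights w = exp(-||f_i - f_j||^2 / (2 sigma^2))."""
    # pairwise squared feature distances, shape (N_out, N_in)
    d2 = ((f_out[:, None, :] - f_in[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)   # normalize per output point
    return w @ y_in                     # shape (N_out, D)
```

With features $\mathbf{f}=(x,y,r,g,b,t)$ this becomes the spatio-temporal filtering described next; replacing the fixed Gaussian by learned weights is the subject of the following paragraphs.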
For videos, we need to extend bilateral filtering to temporal data, and there are two natural choices. First, one can simply attach a frame index $t$ as an additional time dimension to the input data, yielding a six-dimensional feature vector $\mathbf{f}=(x,y,r,g,b,t)^{\top}$ for every pixel in every frame. The summation in Eq.~\ref{eq:bilateral2} now runs over \emph{all} previous frames and pixels. Imagine a video where an object moves to reveal some background. Pixels of the object and background will be close spatially $(x,y)$ and temporally $(t)$ but will likely be of different color $(r,g,b)$. Therefore they will have no strong influence on each other (being splatted to distant positions in the six-dimensional bilateral space). In summary, one can understand the filter to be adaptive to color changes across frames: only pixels that are static and have similar color have a strong influence on each other (they end up nearby in the lattice space). The second possibility is to use optical flow. If the perfect flow were available, the video frames could be warped into a common frame of reference. This would resolve the correspondence problem and make information propagation much easier. We can make use of an optical flow estimate by warping pixel positions $(x,y)$ by their displacement vector $(u_x,u_y)$ to $(x+u_x,y+u_y)$. Another property of permutohedral filtering that we exploit is that the \emph{input points need not lie on a regular grid} since the filtering is done in the high-dimensional lattice. Instead of splatting millions of pixels onto the lattice, we randomly sample or use superpixels and perform filtering using these sampled points as input to the filter. In practice, we observe that this results in big computational gains with a minor drop in performance (more in Sec.~\ref{sec:videoseg}). {\bf Learnable Bilateral Filters} The property of propagating information forward using a guidance image through filtering solves the problem of pixel association.
But a Gaussian filter may be insufficient and, further, we would like to increase the capacity by using a filter bank instead of a single fixed filter. We propose to use the technique from the previous chapter to learn the filter values in the permutohedral lattice using back-propagation. The process works as follows. An input video is used to determine the positions in the bilateral space to splat the input points $\mathbf{y}_i\in\mathbb{R}^D$, i.e., the features $\mathbf{f}$ (e.g. $(x,y,r,g,b,t)$) define the splatting matrix $S_{splat}$. This leads to a number of vectors $\mathbf{y}_{splatted} = S_{splat}\mathbf{y}$ that lie on the permutohedral lattice, with dimensionality $\mathbf{y}_{splatted}\in\mathbb{R}^D$. In effect, the splatting operation groups points that are close together, that is, points with similar $\mathbf{f}_i,\mathbf{f}_j$. All lattice points are now filtered using a filter bank $B\in\mathbb{R}^{F\times D}$, which results in $F$-dimensional vectors on the lattice points. These are sliced back to the $N_{out}$ points of interest (the present video frame). The values of $B$ are learned by back-propagation. The general parameterization of $B$ from the previous chapter allows any neighborhood size for the filters. Since constructing the neighborhood structure in high dimensions is time-consuming, we choose to use $1 \times 1$ filters for speed reasons. This makes up one \emph{Bilateral Convolution Layer (BCL)}, which we will stack and concatenate to form a Bilateral Network. See Fig.~\ref{fig:filter_illustration} for an illustration of a BCL. {\bf BNN Architecture} The Bilateral Network (BNN) is illustrated in the green box of Fig.~\ref{fig:net_illustration}. The input is a video sequence $\mathbf{x}$ and the corresponding predictions $\mathbf{y}$ up to frame $t$. These are filtered using two BCLs with $32$ filters each.
For both BCLs, we use the same features $\mathbf{f}$ but scale them with different diagonal matrices $\mathbf{f}_a=\Lambda_a\mathbf{f},\mathbf{f}_b=\Lambda_b\mathbf{f}$. The feature scales are found by cross-validation. The two $32$-dimensional outputs are concatenated, passed through a ReLU non-linearity and passed to a second layer of two separate BCL filters that use the same feature spaces $\mathbf{f}_a,\mathbf{f}_b$. The output of the second filter bank is then reduced using a $1\times 1$ spatial filter (C-1) to map to the original dimension of $\mathbf{y}$. We investigated scaling frame inputs with an exponential time decay and found that, when processing frame $t$, a re-weighting with $(\alpha \mathbf{y}_{t-1}, \alpha^2 \mathbf{y}_{t-2}, \alpha^3 \mathbf{y}_{t-3} \cdots)$ with $0\le\alpha\le 1$ improved the performance slightly. In the experiments, we also included a simple BNN variant, where no filters are applied inside the permutohedral space, just splatting and slicing with the two layers $BCL_a$ and $BCL_b$ and adding the results. We refer to this model as \emph{BNN-Identity}; it corresponds to an image-adaptive smoothing of the inputs $\mathbf{y}$. We found this filtering to have a positive effect and include it as a baseline in our experiments. \vspace{-0.1cm} \subsection{Spatial Network}\label{sec:spatialcnn} The BNN was designed to propagate the information from the previous frames, respecting the scene and object motion. We then add a small spatial CNN with 3 layers, each with $32$ filters of size $3\times 3$, interleaved with ReLU non-linearities. The final result is then mapped to the desired output $\mathbf{y}_t$ using a $1\times 1$ convolution. The main role of this spatial CNN is to refine the information in frame $t$. Depending on the problem and the size of the available training data, other network designs are conceivable.
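Putting the pieces together, the splat--filter--slice computation of a single BCL (Fig.~\ref{fig:filter_illustration}) can be sketched in a few lines. The sketch is heavily simplified: it assigns each point to a single nearest lattice cell instead of the barycentric splatting of the permutohedral lattice, and uses a $1\times1$ filter bank $B$ as in our architecture; \emph{BNN-Identity} corresponds to $B$ being the identity.

```python
import numpy as np

def bcl_forward(y_in, cell_in, cell_out, B, n_cells):
    """Simplified 1x1 bilateral convolution layer: splat -> filter -> slice.

    cell_in / cell_out: lattice cell index of every input/output point
    (a stand-in for barycentric permutohedral splatting).
    B: filter bank of shape (F, D), applied at every lattice cell.
    """
    D = y_in.shape[1]
    lattice = np.zeros((n_cells, D))
    counts = np.zeros(n_cells)
    for i, c in enumerate(cell_in):          # splat: average into cells
        lattice[c] += y_in[i]
        counts[c] += 1
    lattice /= np.maximum(counts, 1)[:, None]
    filtered = lattice @ B.T                 # 1x1 filtering, (n_cells, F)
    return filtered[cell_out]                # slice at output points
```

In the real layer the cell assignments are derived from the scaled features $\Lambda\mathbf{f}$, and $B$ is learned by back-propagation.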
We use the same network architecture shown in Fig.~\ref{fig:net_illustration} for all the experiments to demonstrate the generality of VPNs. \vspace{-0.1cm} \section{Experiments} \label{sec:exps} We evaluated VPN on three different propagation tasks: foreground masks, semantic labels and color information in videos. Our implementation runs in Caffe~\cite{jia2014caffe} using standard settings. We used Adam~\cite{kingma2014adam} stochastic optimization for training VPNs, a multinomial-logistic loss for label propagation networks and a Euclidean loss for training color propagation networks. Runtime computations were performed on a machine with an Nvidia TitanX GPU and a 6-core Intel i7-5820K CPU clocked at 3.30GHz. We will make available all the code and experimental results. \subsection{Video Object Segmentation} \label{sec:videoseg} The task of class-agnostic video object segmentation aims to segment foreground objects in videos. Since the semantics of the foreground object is not pre-defined, this problem is usually addressed in a semi-supervised manner. The goal is to propagate a given foreground mask of the first frame to all video frames. Object segmentation in videos is useful for several high-level tasks such as video editing, summarization and rotoscoping. \vspace{-0.5cm} \paragraph{Dataset} We use the recently published DAVIS dataset~\cite{Perazzi2016} for experiments on this task. The DAVIS dataset consists of 50 high-quality (1080p resolution) unconstrained videos with the number of frames per video ranging from 25 to 104. All the frames come with high-quality per-pixel annotation of the foreground object. The videos for this dataset are carefully chosen to contain motion blur, occlusions, view-point changes and other occurrences of object segmentation challenges. For robust evaluation and to get results on all the dataset videos, we evaluate our technique using 5-fold cross-validation.
We randomly divided the data into 5 folds, where in each fold, we used 35 videos for training, 5 for validation and the remaining 10 for testing. For the evaluation, we used the 3 metrics that are proposed in~\cite{Perazzi2016}: Intersection over Union (IoU) score, contour accuracy ($\mathcal{F}$) score and temporal instability ($\mathcal{T}$) score. The widely used IoU score is defined as $TP/(TP+FN+FP)$, where TP, FN and FP denote true positives, false negatives and false positives, respectively. Please refer to~\cite{Perazzi2016} for the definition of the contour accuracy and temporal instability scores. We are aware of some other datasets for this task such as JumpCut~\cite{fan2015jumpcut} and SegTrack~\cite{tsai2012motion}, but we note that the number of videos in these datasets is too small for a learning-based approach. \begin{table}[t] \centering \begin{tabular}{p{3.0cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash} p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm} >{\centering\arraybackslash}p{0.6cm}} \toprule \scriptsize & Fold-1 & Fold-2 & Fold-3 & Fold-4 & Fold-5 & All\\ [0.1cm] \midrule BNN-Identity & 56.4 & 74.0 & 66.1 & 72.2 & 66.5 & 67.0 \\ VPN-Stage1 & 58.2 & 77.7 & 70.4 & 76.0 & 68.1 & 70.1 \\ VPN-Stage2 & \textbf{60.9} & \textbf{78.7} & \textbf{71.4} & \textbf{76.8} & \textbf{69.0} & \textbf{71.3} \\ \bottomrule \\ \end{tabular} \mycaption{5-Fold Validation on DAVIS Video Segmentation Dataset} {Average IoU scores for different models on the 5 folds.} \label{tbl:davis-folds} \end{table} \begin{table}[t] \centering \begin{tabular}{p{3.0cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash} p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{2.3cm}} \toprule \scriptsize & \textit{IoU$\uparrow$} & $\mathcal{F}\uparrow$ & $\mathcal{T}\downarrow$ & \textit{Runtime}(s) \\ [0.1cm] \midrule BNN-Identity & 67.0 & 67.1 & 36.3 & 0.21\\ VPN-Stage1 & 70.1 &
68.4 & 30.1 & 0.48\\ VPN-Stage2 & 71.3 & 68.9 & 30.2 & 0.75\\ \midrule \multicolumn{4}{l}{\emph{With pre-trained models}} & \\ DeepLab & 57.0 & 49.9 & 47.8 & 0.15 \\ VPN-DeepLab & \textbf{75.0} & \textbf{72.4} & 29.5 & 0.63 \\ \midrule OFL~\cite{tsaivideo} & 71.1 & 67.9 & 22.1 & $>$60\\ BVS~\cite{marki2016bilateral} & 66.5 & 65.6 & 31.6 & 0.37\\ NLC~\cite{faktor2014video} & 64.1 & 59.3 & 35.6 & 20\\ FCP~\cite{perazzi2015fully} & 63.1 & 54.6 & 28.5 & 12\\ JMP~\cite{fan2015jumpcut} & 60.7 & 58.6 & \textbf{13.2} & 12\\ HVS~\cite{grundmann2010efficient} & 59.6 & 57.6 & 29.7 & 5\\ SEA~\cite{ramakanth2014seamseg} & 55.6 & 53.3 & 13.7 & 6\\ \bottomrule \\ \end{tabular} \mycaption{Results of Video Object Segmentation on DAVIS dataset} {Average IoU score, contour accuracy ($\mathcal{F}$), temporal instability ($\mathcal{T}$) scores, and average runtimes (in seconds) per frame for different VPN models along with recent published techniques for this task. VPN runtimes also include superpixel computation (10ms). Runtimes of other methods are taken from~\cite{marki2016bilateral,perazzi2015fully,tsaivideo}, are only indicative, and are not directly comparable to our runtimes. The runtime of VPN-Stage1 includes the runtime of BNN-Identity, which is in turn included in the runtime of VPN-Stage2. The runtime of the VPN-DeepLab model includes the runtime of DeepLab.} \label{tbl:davis-main} \vspace{-0.7cm} \end{table} \begin{figure}[t!] \begin{center} \centerline{\includegraphics[width=0.5\columnwidth]{figures/acc_points_plots.pdf}} \mycaption{Random Sampling of Input Points vs. IoU} {The effect of randomly sampling points from input video frames on the object segmentation IoU of BNN-Identity on the DAVIS dataset. The points sampled are out of $\approx$2 million points from the previous 5 frames.} \label{fig:acc_vs_points} \end{center} \vspace{-0.8cm} \end{figure} \begin{figure}[th!]
\begin{center} \centerline{\includegraphics[width=0.65\columnwidth]{figures/video_seg_visuals.pdf}} \mycaption{Video Object Segmentation} {Shown are the different frames in example videos with the corresponding ground truth (GT) masks, predictions from BVS~\cite{marki2016bilateral}, OFL~\cite{tsaivideo}, VPN (VPN-Stage2) and VPN-DLab (VPN-DeepLab) models.} \label{fig:video_seg_visuals} \end{center} \vspace{-1.0cm} \end{figure} \vspace{-0.5cm} \paragraph{VPN and Results} In this task, we only have access to the foreground mask for the first frame $V_1$. For ease of training the VPN, we obtain an initial set of predictions with \emph{BNN-Identity}. We sequentially apply \emph{BNN-Identity} at each frame and obtain an initial set of foreground masks for the entire video. These BNN-Identity propagated masks are then used as inputs to train a VPN to predict the refined masks at each frame. We refer to this VPN model as \emph{VPN-Stage1}. Once VPN-Stage1 is trained, its refined training mask predictions are in turn used as inputs to train another VPN model, which we refer to as \emph{VPN-Stage2}. This resulted in further refinement of the foreground masks. Training further stages did not result in any improvements. Following the recent work of~\cite{marki2016bilateral} on video object segmentation, we used scaled features $\mathbf{f}=(x,y,Y,Cb,Cr,t)$ with YCbCr color features for bilateral filtering. To be comparable with one of the fastest state-of-the-art techniques~\cite{marki2016bilateral}, we do not use any optical flow information. First, we analyze the performance of BNN-Identity by changing the number of randomly sampled input points. Figure~\ref{fig:acc_vs_points} shows how the segmentation IoU changes with the increase in the number of sampled points (out of 2 million points) from the previous frames. The IoU levels out after sampling 25\% of the points. For further computational efficiency, we used superpixel sampling instead of random sampling.
Using superpixels reduced the IoU slightly (0.5\%), while reducing the number of input points by a factor of 10 in comparison to a large number of randomly sampled points. We used 12000 SLIC~\cite{achanta2012slic} superpixels from each frame, computed using the fast GPU implementation from~\cite{gSLICr_2015}. For predictions at each frame, we input the mask probabilities of the previous 9 frames into the VPN, as we observe no significant improvements with more frames. We set $\alpha$ to $0.5$; the feature scales for bilateral filtering are presented in Tab.~\ref{tbl:parameters_supp}. Table~\ref{tbl:davis-folds} shows the IoU scores for each of the 5 folds and Tab.~\ref{tbl:davis-main} shows the overall scores and runtimes of different VPN models along with the best performing segmentation techniques. The performance improved consistently across all 5 folds with the addition of new VPN stages. BNN-Identity already performed reasonably well, and with 1-stage and 2-stage VPNs we outperformed the fastest present method, BVS~\cite{marki2016bilateral}, by a significant margin on all the performance measures of IoU, contour accuracy and temporal instability, while being comparable in runtime. We perform marginally better than the OFL method~\cite{tsaivideo} while being at least 80$\times$ faster; moreover, OFL relies on optical flow, whereas we obtain similar performance without using any optical flow. Further, VPN has the advantage of online processing, as it looks only at previous frames, whereas BVS processes the entire video at once. One can obtain better VPN performance by using better superpixels and by incorporating optical flow, but this increases runtime as well. Figure~\ref{fig:video_seg_visuals} shows some qualitative results and more are presented in Fig.~\ref{fig:video_seg_pos_supp}. A couple of failure cases are shown in Fig.~\ref{fig:video_seg_neg_supp}.
Visual results indicate that the learned VPN is able to retain foreground masks even with large variations in viewpoint and object size. \paragraph{Augmentation of Pre-trained Models:} One of the main advantages of the proposed VPN architecture is that it is end-to-end trainable and can be easily integrated into other deep neural network architectures. To demonstrate this, we augmented the VPN architecture with the standard DeepLab segmentation architecture from~\cite{chen2014semantic}. We replaced the last classification layer of the DeepLab-LargeFOV model from~\cite{chen2014semantic} to output 2 classes (foreground and background) in our case and bi-linearly up-sampled the resulting low-resolution probability map to the original image dimension. 5-fold fine-tuning of the DeepLab model on the DAVIS dataset resulted in an IoU of 57.0; the other scores are shown in Tab.~\ref{tbl:davis-main}. Then, we combine the VPN and DeepLab models in the following way: the outputs from the DeepLab network and the bilateral network are concatenated and then passed on to the spatial network. In other words, the bilateral network propagates label information from previous frames to the present frame, whereas the DeepLab network makes the prediction for the present frame. The results of both are then combined and refined by the spatial network in the VPN architecture. We call this the `VPN-DeepLab' model. We trained this model end-to-end and observed big improvements in performance. As shown in Tab.~\ref{tbl:davis-main}, the VPN-DeepLab model has an IoU score of 75.0 and a contour accuracy score of 72.4, resulting in significant improvements over the published results. Since DeepLab also has a fast runtime, the total runtime of VPN-DeepLab is only 0.63s, which makes it one of the fastest video segmentation systems. A couple of visual results of the VPN-DeepLab model are shown in Fig.~\ref{fig:video_seg_visuals} and more are presented in Figs.~\ref{fig:video_seg_pos_supp} and~\ref{fig:video_seg_neg_supp}.
\vspace{-0.2cm} \subsection{Semantic Video Segmentation} Semantic video segmentation assigns a semantic label to every video pixel. Since the semantics of adjacent frames do not change radically, propagating semantic information across frames should, intuitively, improve the segmentation quality of each frame. Unlike mask propagation in the previous section, where the ground-truth mask for the first frame is given, we approach semantic video segmentation in a fully automatic fashion. Specifically, we start with the unary predictions of standard CNNs and use a VPN for propagating semantics across the frames. \begin{table}[t] \centering \begin{tabular}{p{5.0cm}>{\centering\arraybackslash}p{2.4cm}>{\centering\arraybackslash}p{3.5cm}} \toprule \scriptsize & \textit{IoU} & \textit{Runtime}(s) \\ [0.1cm] \midrule CNN from ~\cite{yu2015multi} & 65.3 & 0.38\\ + FSO-CRF~\cite{kundu2016feature} & 66.1 & \textbf{$>$}10\\ + BNN-Identity & 65.3 & 0.31\\ + BNN-Identity-Flow & 65.5 & 0.33\\ + VPN (Ours) & 66.5 & 0.35\\ + VPN-Flow (Ours) & \textbf{66.7} & 0.37\\ \midrule CNN from ~\cite{richter2016playing} & 68.9 & 0.30\\ + VPN-Flow (Ours) & \textbf{69.5} & 0.38\\ \bottomrule \\ \end{tabular} \mycaption{Results of Semantic Segmentation on the CamVid Dataset}{ Average IoU and runtimes (in seconds) per frame of different models on the \textit{test} split. Runtimes exclude CNN computations, which are shown separately. VPN and BNN-Identity runtimes include superpixel computation, which takes up a large portion of the computation time (0.23s).} \label{tbl:camvid} \vspace{-0.5cm} \end{table} \vspace{-0.4cm} \paragraph{Dataset} We use the CamVid dataset~\cite{brostow2009semantic}, which contains 4 high-quality videos captured at 30Hz, while the semantically labelled 11-class ground truth is provided at 1Hz. While the original dataset comes at a resolution of 960$\times$720, similar to previous works~\cite{yu2015multi,kundu2016feature}, we operate on a resolution of 640$\times$480. 
We use the same splits proposed in~\cite{sturgess2009combining}, resulting in 367, 100 and 233 frames with ground truth for training, validation and testing. Following common practice, we report IoU scores for evaluation. \begin{figure}[th!] \begin{center} \centerline{\includegraphics[width=0.9\columnwidth]{figures/semantic_visuals.pdf}} \mycaption{Semantic Video Segmentation} {Input video frames and the corresponding ground truth (GT) segmentation together with the predictions of CNN~\cite{yu2015multi} and with VPN-Flow.} \label{fig:semantic_visuals} \end{center} \vspace{-0.7cm} \end{figure} \vspace{-0.5cm} \paragraph{VPN and Results} Since we already have CNN predictions for every frame, we train a VPN that takes the CNN predictions of previous \emph{and} present frames as input and predicts refined predictions for the present frame. We compare with the state-of-the-art CRF approach for this problem~\cite{kundu2016feature}, which we refer to as `FSO-CRF'. Following~\cite{kundu2016feature}, we also experimented with optical flow in our framework and refer to that model as \emph{VPN-Flow}. We used the fast optical flow method based on dense inverse search~\cite{kroeger2016fast} to compute flows, which we use to modify the positional features of previous frames. We used the superpixel method of Dollar et al.~\cite{DollarICCV13edges} for this dataset, as gSLICr~\cite{gSLICr_2015} introduced artifacts. We experimented with predictions from two different CNNs: one with dilated convolutions~\cite{yu2015multi} (CNN-1) and another~\cite{richter2016playing} (CNN-2) trained with additional data obtained from a video game, which is the present state-of-the-art on this dataset. For CNN-1 and CNN-2, using 2 and 3 previous frames respectively as input to the VPN was found to be optimal. Other parameters of the bilateral network are presented in Tab.~\ref{tbl:parameters_supp}. Table~\ref{tbl:camvid} shows quantitative results on this dataset. 
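The flow-based modification of positional features used in VPN-Flow amounts to shifting each previous-frame pixel's $(x,y)$ feature by its flow vector, so that a pixel becomes bilaterally close to the location it moved to in the present frame. A minimal sketch, with an illustrative constant flow field:

```python
import numpy as np

H, W = 3, 4
ys, xs = np.mgrid[0:H, 0:W].astype(float)

# Positional features (x, y) of every pixel in a previous frame.
pos_prev = np.stack([xs, ys], axis=-1)            # (H, W, 2)

# Hypothetical optical flow from the previous frame to the present
# frame: (dx, dy) per pixel (a constant shift here, for clarity).
flow = np.zeros((H, W, 2))
flow[..., 0] = 1.0   # every pixel moves one pixel to the right

# VPN-Flow replaces each previous-frame positional feature by its
# flow-compensated position before bilateral filtering.
pos_adjusted = pos_prev + flow
```

With this adjustment, the bilateral filter matches a moving object's previous-frame pixels against their new positions rather than their old ones.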
Using BNN-Identity only slightly improved the CNN performance, whereas training the entire VPN significantly improved the CNN performance, by over 1.2\% IoU, with both the VPN and VPN-Flow networks. Moreover, VPN is at least 25$\times$ faster and simpler to use compared to the optimization-based FSO-CRF, which relies on LDOF optical flow~\cite{brox2009large}, long-term tracks~\cite{sundaram2010dense} and edges~\cite{dollar2015fast}. We further improved the performance of the state-of-the-art CNN~\cite{richter2016playing} with the use of the VPN-Flow model. Using better optical flow estimation might give even better results. Figure~\ref{fig:semantic_visuals} shows some qualitative results and more are presented in Fig.~\ref{fig:semantic_visuals_supp}. \subsection{Video Color Propagation} We also evaluated VPNs on a different kind of information and experimented with propagating color information in a grayscale video. Given the color image for the first video frame, the task is to propagate the color to the entire video. Note that this task is fundamentally different from the automatic colorization of images, for which recent CNN-based methods have become popular. For experiments on this task, we again used the DAVIS dataset~\cite{Perazzi2016} with the first 25 frames from each video. We randomly divided the dataset into 30 train, 5 validation and 15 test videos. \begin{table}[t] \centering \begin{tabular}{p{4.0cm}>{\centering\arraybackslash}p{2.6cm}>{\centering\arraybackslash}p{3.5cm}} \toprule \scriptsize & \textit{PSNR} & \textit{Runtime}(s) \\ [0.1cm] \midrule BNN-Identity & 27.89 & 0.29\\ VPN-Stage1 & \textbf{28.15} & 0.90\\ \midrule Levin et al.~\cite{levin2004colorization} & 27.11 & 19\\ \bottomrule \\ \end{tabular} \mycaption{Results of Video Color Propagation}{Average PSNR results and runtimes of different methods for video color propagation on images from the DAVIS dataset.} \label{tbl:color} \vspace{-0.5cm} \end{table} \begin{figure}[th!] 
\begin{center} \centerline{\includegraphics[width=0.9\columnwidth]{figures/colorization_visuals.pdf}} \mycaption{Video Color Propagation} {Input grayscale video frames and corresponding ground-truth (GT) color images together with color predictions of the Levin et al.~\cite{levin2004colorization} and VPN-Stage1 models.} \label{fig:color_visuals} \end{center} \vspace{-1.0cm} \end{figure} We work with the YCbCr representation of images and propagate CbCr values from previous frames, with pixel intensity, position and time features as guidance for the VPN. The same strategy as in object segmentation is used: an initial set of color-propagated results was obtained with BNN-Identity and then used to train a VPN-Stage1 model. Training further VPN stages did not improve the performance. Table~\ref{tbl:color} shows the PSNR results. We use 300K randomly sampled points from the previous 3 frames as input to the VPN network. We also show a baseline result of~\cite{levin2004colorization}, which does graph-based optimization and uses optical flow. We used fast DIS optical flow~\cite{kroeger2016fast} in the baseline method~\cite{levin2004colorization} and did not observe significant differences when using LDOF optical flow~\cite{brox2009large}. Figure~\ref{fig:color_visuals} shows a visual result, with more in Fig.~\ref{fig:color_visuals_supp}. From the results, VPN works reliably better than~\cite{levin2004colorization} while being 20$\times$ faster. The method of~\cite{levin2004colorization} relies heavily on optical flow, and so the color drifts away with incorrect flow. We observe that our method also bleeds color in some regions, especially when there are large viewpoint changes. We could not compare against recent video color propagation techniques such as~\cite{heu2009image,sheng2014video} as their code is not available online. This application shows the general applicability of VPNs in propagating different kinds of information. 
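The setup above splits each frame into luma and chroma: the Y channel is observed for every frame, while Cb and Cr are known only for the first frame and must be propagated. A minimal sketch of this split, assuming the full-range BT.601 conversion (the exact conversion variant is not specified in the text), with a naive chroma copy standing in for the learned VPN propagation:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range BT.601 RGB -> YCbCr (inputs and outputs in 0..255)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

# First frame: full color known; later frames: only luma is observed.
rng = np.random.default_rng(0)
frame0 = rng.uniform(0, 255, size=(2, 2, 3))
frame1 = rng.uniform(0, 255, size=(2, 2, 3))

ycc0 = rgb_to_ycbcr(frame0)           # Y, Cb, Cr all known
luma1 = rgb_to_ycbcr(frame1)[..., 0]  # only Y is observed for frame 1

# The VPN predicts frame 1's (Cb, Cr) from frame 0's chroma, guided by
# intensity, position and time features; copying the previous frame's
# chroma stands in for the learned propagation here.
chroma1 = ycc0[..., 1:]
```

A gray input pixel ($r=g=b$) maps to neutral chroma ($Cb=Cr=128$), which is why propagating only CbCr while keeping the observed luma preserves image structure.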
\vspace{-0.3cm} \section{Discussion and Conclusions} \label{sec:conclusion} We proposed a fast, scalable and generic neural network based learning approach for propagating information across video frames. The video propagation network uses a bilateral network for long-range, video-adaptive propagation of information from previous frames to the present frame, which is then refined by a standard spatial network. Experiments on diverse tasks show that VPNs, despite being generic, outperformed the current state-of-the-art task-specific methods. At the core of our technique is the exploitation and modification of learnable bilateral filtering for use in video processing. We used a simple and fixed network architecture for all the tasks to showcase the generality of the approach. Depending on the type of problem and the availability of data, using more filters and deeper layers would result in better performance. In this work, we manually tuned the feature scales, which could be amenable to learning. Finding optimal yet fast-to-compute bilateral features for videos, together with the learning of their scales, is an important future research direction. 
\chapter{Bilateral Inception Networks} \label{chap:binception} \newcommand{\bi}[2]{BI$_{#1}(#2)$} \newcommand{\Bi}[1]{BI$_{#1}$} \newcommand{\gi}[1]{G$(#1)$} \newcommand{\fc}[1]{FC$_{#1}$} \hypersetup{ linkcolor = black, citecolor = black, urlcolor = green!20!black, colorlinks = true, } Following up on previous chapters, where we introduced learnable bilateral filters and their application to a wide range of problems, in this chapter we construct a CNN module, which we call `Bilateral Inception', that can be inserted into 
\emph{existing} CNN architectures for better inference in pixel prediction tasks. The bilateral inception module performs bilateral filtering, at multiple feature scales, between superpixels in an image. The feature spaces for bilateral filtering and other parameters of the module are learned end-to-end using standard back-propagation techniques. Instead of using the learnable bilateral filtering proposed in Chapter~\ref{chap:bnn}, here we explicitly construct the Gaussian filter kernel between input and output superpixels. We show how this explicit Gaussian filtering results in fast runtimes and also enables the learning of bilateral features. We focus on the problem of semantic segmentation. The bilateral inception module addresses two issues that arise with general CNN segmentation architectures. First, this module propagates information between (super)pixels while respecting image edges, thus using the structured information of the problem for improved results. Second, the layer recovers a full-resolution segmentation result from the lower-resolution solution of a CNN. In the experiments, we modify several existing CNN architectures by inserting our inception module between the last CNN ($1\times1$ convolution) layers. Empirical results on three different datasets show reliable improvements not only in comparison to the baseline networks, but also in comparison to several dense-pixel prediction techniques such as CRFs, while being competitive in runtime. \section{Introduction} In this work, we propose a CNN architecture for semantic image segmentation. Given an image $\mathbf{x}=(x_1,\ldots,x_N)$ with $N$ pixels $x_i$, the task of semantic segmentation is to infer a labeling $\mathbf{y}=(y_1,\ldots,y_N)$ with a label $y_i\in\mathcal{Y}$ for every pixel. This problem can be naturally formulated as a structured prediction problem $g:\mathbf{x}\rightarrow \mathbf{y}$. 
Empirical performance is measured by comparing $\mathbf{y}$ to a human-labeled $\mathbf{y}^*$ via a loss function $\Delta(\mathbf{y},\mathbf{y}^*)$, \textit{e.g.}\@\xspace with the Intersection over Union (IoU) or the pixel-wise Hamming loss. A direct way to approach this problem would be to ignore the structure of the output variable $\mathbf{y}$ and train a classifier that predicts the class membership of the center pixel of a given image patch. This reduces the problem to a standard multi-class classification problem and allows the use of standard learning algorithms. The resulting classifier is then evaluated at every possible patch in a sliding-window fashion (or using coarse-to-fine strategies) to yield a full segmentation of the image. With high-capacity models and large amounts of training data, this approach would be sufficient, given that the loss decomposes over the pixels. However, such a per-pixel approach ignores the relationship between the variables $(y_1,\ldots,y_N)$, which are not i.i.d.~since there is an underlying common image. Therefore, besides learning discriminative per-pixel classifiers, most segmentation approaches further encode the output relationship of $\mathbf{y}$. A dominant approach is to use Conditional Random Fields (CRFs)~\cite{lafferty2001crf}, which allow an elegant and principled way to combine single-pixel predictions and shared structure through unary, pairwise and higher-order factors. \begin{figure}[t] \begin{center} \centerline{\includegraphics[width=\textwidth]{figures/net_illustration_2.pdf}} \mycaption{Illustration of CNN layout} {We insert the \emph{Bilateral Inception (BI)} modules between the \emph{FC} ($1\times1$ convolution) layers found in most networks, thus removing the necessity of further up-scaling algorithms. 
Bilateral Inception modules also propagate information between distant pixels based on their spatial and color similarity and work better than other label propagation approaches.}\label{fig:illustration} \vspace{-0.3cm} \end{center} \end{figure} What relates the outputs $(y_1,\ldots,y_N)$? The common hypothesis that we use in this chapter can be summarized as: \emph{pixels that are spatially and photometrically similar are more likely to have the same label.} In particular, if two pixels $x_i,x_j$ are close in the image and have similar $RGB$ values, then their corresponding labels $y_i,y_j$ will most likely be the same. The most prominent example of spatial similarity encoded in a CRF is the Potts model (the Ising model in the binary case). The work of~\cite{krahenbuhl2012efficient} described a densely connected pairwise CRF (DenseCRF) that includes pairwise factors encoding both spatial \emph{and} photometric similarity. The DenseCRF has been used in many recent works on image segmentation, which also find empirically improved results over pure pixel-wise CNN classifiers~\cite{chen2014semantic,bell2015minc,zheng2015conditional,chen2015semantic}. In this chapter, we implement the above-mentioned hypothesis, namely that nearby, photometrically similar pixels likely share a common label, by designing a new `Bilateral Inception' (BI) module that can be inserted before/after the last $1\times1$ convolution layers (which we refer to as `FC' layers, for `Fully-Connected' in the original image classification network) of standard segmentation CNN architectures. The bilateral inception module performs edge-aware information propagation across different spatial CNN units of the previous FC layer. Instead of using the spatial grid layout that is common in CNNs, we incorporate a superpixel layout for information propagation. The information propagation is performed using standard bilateral filters with Gaussian kernels, at different feature scales. 
This construction is inspired by~\cite{szegedy2014googlenet,lin2014network}. The feature spaces and other parameters of the modules can be learned end-to-end using standard back-propagation techniques. The use of superpixels reduces the number of necessary computations and implements long-range, edge-aware inference between different superpixels. Moreover, since superpixels provide an output at the full image resolution, they remove the need for any additional post-processing step. We introduce BI modules into the CNN segmentation models of~\cite{chen2014semantic,zheng2015conditional,bell2015minc}; see Fig.~\ref{fig:illustration} for an illustration. This achieves better segmentation results than the interpolation/inference techniques proposed with DenseCRF~\cite{bell2015minc,chen2014semantic} on all three datasets that we experimented with, while being faster. Moreover, the results compare favorably against some recently proposed dense-pixel prediction techniques. As illustrated in Fig.~\ref{fig:illustration}, the BI modules provide an alternative approach to commonly used up-sampling and CRF techniques. \vspace{-0.3cm} \section{Related Work}\label{sec:related} The literature on semantic segmentation is large, and we therefore limit our discussion to those works that perform segmentation with CNNs, discussing the different ways to encode the output structure. A natural combination of CNNs and CRFs is to use the CNN as a unary potential and combine it with a CRF that also includes pairwise or higher-order factors. For instance,~\cite{chen2014semantic,bell2015minc} observed large improvements in pixel accuracy when combining a DenseCRF~\cite{krahenbuhl2012efficient} with a CNN. The mean-field steps of the DenseCRF can be learned and back-propagated, as noted by~\cite{domke2013learning} and implemented by~\cite{zheng2015conditional,arxivpaper,li2014mean,schwing2015fully} for semantic segmentation and by~\cite{kiefel2014human} for human pose estimation. 
The works of~\cite{chen2014learning,lin2015efficient,liu2015semantic} use CNNs also in pairwise and higher-order factors for more expressiveness. The recent work of~\cite{chen2015semantic} replaced the costly DenseCRF with a faster domain transform that performs smoothing filtering while predicting the image edge maps at the same time. Our work was inspired by DenseCRF approaches, but with the aim of replacing the expensive mean-field inference. Instead of propagating information across unaries obtained by a CNN, we aim to do edge-aware information propagation across \textit{intermediate} representations of the CNN. Experiments on different datasets indicate that the proposed approach generally gives better results than DenseCRF while being faster. A second group of works aims to inject structural knowledge into intermediate CNN representations by using structural layers among the CNN internal layers. The deconvolution layers of~\cite{zeiler2010deconvolutional} are widely used for local propagation of information. They are computationally efficient and are used in segmentation networks, \textit{e.g.}\@\xspace~\cite{long2014fully}; they are, however, limited to small receptive fields. Another architecture proposed in~\cite{he2014spatial} uses spatial pyramid pooling layers to max-pool over different spatial scales. The work of~\cite{ionescu2015matrix} proposed specialized structural layers, such as normalized-cut layers, with matrix back-propagation techniques. All these works either have fixed local receptive fields and/or have complexity that increases exponentially with longer-range pixel connections. Our technique allows for modeling long-range (super)pixel dependencies without compromising computational efficiency. A very recent work~\cite{yu2015multi} proposed the use of dilated convolutions for propagating multi-scale contextual information among CNN units. 
A contribution of this work is to define convolutions over superpixels by defining connectivity among them. In~\cite{he2015supercnn}, a method to use superpixels inside CNNs was proposed that re-arranges superpixels based on their features. The technique proposed here is more generic and alleviates the need for rearranging superpixels. A method to filter irregularly sampled data has been developed in~\cite{bruna2013spectral}, which may be applicable to superpixel convolutions. The difference is that their method requires a pre-defined graph structure for every example/image separately, while our approach works directly on superpixels. We experimented with Isomap embeddings~\cite{tenenbaum2000global} of superpixels, but for speed reasons opted for the more efficient kernels presented in this chapter. The work of~\cite{mostajabi2014feedforward} extracted multi-scale features at each superpixel and performed semantic segmentation by classifying each superpixel independently. In contrast, we propagate information across superpixels by using bilateral filters with learned feature spaces. Another core contribution of this work is the end-to-end trained bilateral filtering module. Several recent works on bilateral filtering~\cite{barron2015bilateral,barron2015defocus} (including ours in Chapter~\ref{chap:bnn}) back-propagate through the permutohedral lattice approximation~\cite{adams2010fast}, either to learn the filter parameters (Chapter~\ref{chap:bnn}) or to optimize in the bilateral space~\cite{barron2015bilateral,barron2015defocus}. Most existing works on bilateral filtering use pre-defined feature spaces. In~\cite{campbell2013fully}, the feature spaces for bilateral filtering are obtained via a non-parametric embedding into a Euclidean space. 
In contrast, by explicitly computing the bilateral filter kernel, we are able to back-propagate through the features, thereby learning task-specific feature spaces for bilateral filters through integration into end-to-end trainable CNNs. \section{Bilateral Inception Networks} We first formally introduce superpixels in Sec.~\ref{sec:superpixels} before we describe the bilateral inception modules in Sec.~\ref{sec:inception}. \subsection{Superpixels}\label{sec:superpixels} The term \emph{superpixel} refers to a set of $n_i$ pixels $s_i=\{t_1,\ldots,t_{n_i}\}$ with pixel indices $t_k\in\{1,\ldots,N\}$. We use a set of $M$ superpixels $S=\{s_1,\ldots,s_M\}$ that are disjoint, $s_i\cap s_j=\emptyset, \forall i\neq j$, and decompose the image, $\cup_i s_i = \mathcal{I}$. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{figures/superpixel_plot_both.pdf} \mycaption{Superpixel Quantization Error} {Best achievable segmentation performance with a varying number of SLIC superpixels~\cite{achanta2012slic} on the Pascal VOC12 segmentation~\cite{voc2012segmentation} and MINC material segmentation~\cite{bell2015minc} datasets.} \label{fig:quantization} \end{figure} Superpixels have long been used for image segmentation in many previous works, \textit{e.g.}\@\xspace~\cite{Gould:ECCV2014,gonfaus2010harmony,nowozin2010parameter,mostajabi2014feedforward}, as they provide a reduction of the problem size. Instead of predicting a label $y_i$ for every pixel $x_i$, the classifier predicts a label per superpixel $s_i$ and extends this label to all pixels within it. A superpixel algorithm pre-groups pixels based on spatial and photometric similarity, reducing the number of elements and thereby also regularizing the problem in a meaningful way. The downside is that superpixels introduce a quantization error whenever pixels within one segment have different ground-truth label assignments. 
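This quantization ceiling can be measured directly: assign every superpixel the majority ground-truth label of its pixels and score the result (this is the "best achievable" performance plotted in Fig.~\ref{fig:quantization}). A small sketch with toy arrays:

```python
import numpy as np

# Toy ground-truth labeling (class ids) and a superpixel assignment
# map of the same shape; superpixel 1 straddles two classes.
gt = np.array([[0, 0, 1, 1],
               [0, 0, 1, 1],
               [2, 2, 1, 1]])
sp = np.array([[0, 0, 1, 1],
               [0, 0, 1, 1],
               [2, 1, 1, 1]])

def best_achievable(gt, sp):
    """Upper bound: each superpixel takes its majority ground-truth label."""
    pred = np.empty_like(gt)
    for s in np.unique(sp):
        mask = sp == s
        labels, counts = np.unique(gt[mask], return_counts=True)
        pred[mask] = labels[np.argmax(counts)]
    return pred

pred = best_achievable(gt, sp)
pixel_acc = (pred == gt).mean()   # quantization ceiling: 11/12 here
```

Here the one pixel at position (2,1), whose superpixel's majority label disagrees with its ground truth, is irrecoverably mislabeled; this is exactly the quantization error.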
Figure~\ref{fig:quantization} shows the superpixel quantization effect: the best achievable performance as a function of the number of superpixels, on two different segmentation datasets: PascalVOC~\cite{voc2012segmentation} and Materials in Context~\cite{bell2015minc}. We find that the quantization effect is small compared to the current best segmentation performance. In practice, we use SLIC superpixels~\cite{achanta2012slic} for their runtime and those of~\cite{DollarICCV13edges} for their lower quantization error to decompose the image into superpixels. For details of the algorithms, please refer to the respective papers. We use the publicly available real-time GPU implementation of SLIC, called gSLICr~\cite{gSLICr_2015}, which runs at over 250 frames per second, while the publicly available Dollar superpixel code~\cite{DollarICCV13edges} computes a superpixelization for a $400\times 500$ image in about 300ms using an Intel Xeon 3.33GHz CPU. \subsection{Bilateral Inceptions}\label{sec:inception} Next, we describe the \emph{Bilateral Inception} (BI) module that performs Gaussian bilateral filtering on multiple scales of the representations within a CNN. The BI module can be inserted in between layers of existing CNN architectures. {\bfseries Bilateral Filtering:} We first describe Gaussian bilateral filtering, the building block of the BI module. A visualization of the necessary computations is shown in Fig.~\ref{fig:bi_module}. Let the previous-layer CNN activations be $\mathbf{z}\in\mathbb{R}^{P\times C}$, that is, $P$ points with $C$ filter responses each. With $\mathbf{z}_c\in\mathbb{R}^P$ we denote the vector of activations of filter $c$. Additionally, we have for every point $j$ a feature vector $\mathbf{f}_j\in\mathbb{R}^D$. This can denote its spatial position ($D=2$, not necessarily a grid), position and RGB color ($D=5$), or others. 
Separate from the input points with features $F_{in}=\{\mathbf{f}_1,\ldots,\mathbf{f}_P\}$, we have $Q$ output points with features $F_{out}$. These can be the same set of points, but also fewer ($Q<P$), equal ($Q=P$), or more ($Q>P$) points. For example we can filter a $10\times 10$ grid ($P=100$) and produce the result on a $50\times 50$ grid ($Q=2500$) or vice versa. \begin{figure}[t] \begin{center} \centerline{\includegraphics[width=\textwidth]{figures/bi_module/new_bi_module.pdf}} \mycaption{Computation flow of the Gaussian bilateral filtering} { We implemented the bilateral convolution with five separate computation blocks. $\Lambda$ and $\theta$ are the free parameters.}\label{fig:bi_module} \end{center} \end{figure} \begin{figure}[t] \begin{center} \centerline{\includegraphics[width=\textwidth]{figures/inception_module.pdf}} \mycaption{Visualization of a Bilateral Inception (BI) Module} {The unit activations $\mathbf{z}$ are passed through several bilateral filters defined over different feature spaces. The result is linearly combined to $\bar{\mathbf{z}}$ and passed on to the next network layer. Also shown are sample filtered superpixel images using bilateral filters defined over different example feature spaces. $(u,v)$ correspond to position and $(r,g,b)$ correspond to color features.}\label{fig:inception} \end{center} \end{figure} The bilateral filtered result will be denoted as $\hat{\mathbf{z}}\in\mathbb{R}^{Q\times C}$. We apply the same Gaussian bilateral filter to every channel $c$ separately. A filter has two free parameters: the filter specific scale $\theta\in\mathbb{R}_+$ and the global feature transformation parameters $\Lambda\in\mathbb{R}^{D\times D}$. For~$\Lambda$, a more general scaling could be applied using more features or a separate CNN. 
Technically, the bilateral filtering amounts to a matrix-vector multiplication for all $c$: \begin{equation} \hat{\mathbf{z}}_c = K(\theta, \Lambda, F_{in}, F_{out}) \mathbf{z}_c, \label{eq:filter} \end{equation} where $K\in\mathbb{R}^{Q\times P}$ and its entries, for $\mathbf{f}_i\in F_{out}, \mathbf{f}_j\in F_{in}$, are given by: \begin{equation} K_{i,j} = \frac{\exp(-\theta\|\Lambda \mathbf{f}_i- \Lambda \mathbf{f}_j\|^2)}{\sum_{j'}\exp(-\theta\|\Lambda \mathbf{f}_i- \Lambda \mathbf{f}_{j'}\|^2)}. \label{eq:kernel} \end{equation} In kernel learning terminology, $K$ is, up to the row normalization, nothing but a Gaussian Gram matrix, which is symmetric if $F_{in}=F_{out}$. We implemented this filtering in Caffe~\cite{jia2014caffe} using different layers, as depicted in Fig.~\ref{fig:bi_module}. While approximate computations of $K\mathbf{z}_c$ exist and have improved runtime~\cite{adams2010fast,paris2006fast,gastal2011domain,adams2009gaussian}, we chose an explicit and exact computation of $K$ due to its small size. Our implementation makes use of the GPU, and the intermediate pairwise similarity computations are re-used across different modules. The entire runtime is only a fraction of the CNN runtime, but of course applications to larger values of $P$ and $Q$ would require the aforementioned algorithmic speed-ups. {\bfseries Bilateral Inception Module:} The \textit{bilateral inception module} (BI) is a weighted combination of different bilateral filters. We combine the output of $H$ different filter kernels~$K$, with different scales $\theta^1,\ldots,\theta^H$. All kernels use the same feature transformation~$\Lambda$, which allows for easier pre-computation of pairwise differences and avoids an over-parametrization of the filters. 
The outputs of the different filters $\hat{\mathbf{z}}^h$ are combined linearly to produce $\bar{\mathbf{z}}$: \begin{equation} \bar{\mathbf{z}}_c = \sum_{h=1}^H \mathbf{w}_c^h \hat{\mathbf{z}}_c^h, \label{eq:module} \end{equation} using individual weights $\mathbf{w}_c^h$ per scale $\theta^h$ and channel $c$. The weights $\mathbf{w} \in \mathbb{R}^{H\times C}$ are learned using error back-propagation. The result of the inception module has $C$ channels for each of its $Q$ points, thus $\bar{\mathbf{z}} \in \mathbb{R}^{Q \times C}$. The inception module is schematically illustrated in Fig.~\ref{fig:inception}. In short, information from the CNN layers below is filtered using bilateral filters defined in a transformed feature space ($\Lambda \mathbf{f}$). Most operations in the inception module are parallelizable, resulting in fast runtimes on a GPU. In this work, inspired by the DenseCRF architecture of~\cite{krahenbuhl2012efficient}, we used pairs of BI modules: one with position features $(u,v)$ and another with both position and color features $(u,v,r,g,b)$, each with multiple scales $\{\theta^h\}$. {\bfseries Motivation and Comparison to DenseCRF:} A BI module filters the activations of a CNN layer. Contrast this with the use of a DenseCRF on the CNN output: at that point, the fine-grained information that intermediate CNN layers represent has already been condensed to a low-dimensional vector representing beliefs over labels. Using a mean-field update propagates information between these beliefs. Similar behavior is obtained using the BI modules, but at different scales (using multiple different filters $K(\theta^h)$) and on the intermediate CNN activations $\mathbf{z}$. Since, in the end, the to-be-predicted pixels are not i.i.d., this blurring leads to better performance, both when using a bilateral filter as an approximate message-passing step of a DenseCRF and in the system outlined here. 
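Putting Eq.~(\ref{eq:filter}) at several scales together with the linear combination of Eq.~(\ref{eq:module}), one BI module fits in a few lines of NumPy; the sizes and parameter values below are illustrative toy choices:

```python
import numpy as np

rng = np.random.default_rng(0)
P, Q, C, D, H = 6, 4, 3, 5, 2    # in/out points, channels, feat dim, scales

z = rng.normal(size=(P, C))          # activations of the previous layer
f_in = rng.normal(size=(P, D))       # input point features (e.g. u,v,r,g,b)
f_out = rng.normal(size=(Q, D))      # output point features
Lam = np.eye(D)                      # learnable feature transform Lambda
theta = np.array([1.0, 0.3])         # well-separated initial scales
w = rng.normal(size=(H, C))          # per-scale, per-channel weights

def bilateral_kernel(theta, Lam, f_in, f_out):
    """Row-normalized Gaussian kernel K in R^{Q x P}."""
    d = (f_out @ Lam.T)[:, None, :] - (f_in @ Lam.T)[None, :, :]
    dist2 = (d ** 2).sum(-1)                     # squared feature distances
    logits = -theta * dist2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# One filtering pass per scale, then the weighted linear combination.
z_bar = sum(w[h][None, :] * (bilateral_kernel(theta[h], Lam, f_in, f_out) @ z)
            for h in range(H))
```

Note that each row of $K$ sums to one, so each filter output is a convex combination of input activations; only the per-scale weights $\mathbf{w}$ can change the overall magnitude.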
Both attempts encode prior knowledge about the problem, namely that pixels close in position and color are likely to have the same label; therefore, such pixels can also have the same intermediate representation. Consider averaging the CNN representations of all pixels that have the same ground-truth label: this would result in an intermediate CNN representation that would be very easy to classify for the later layers. \subsection{Superpixel Convolutions} The bilateral inception module allows changing how information is stored in the higher levels of a CNN. This is where superpixels are used. Instead of storing information on a fixed grid, we compute superpixels $S$ for every image and use the mean color and position of their included pixels as features. We can insert bilateral inception modules to change from grid representations to superpixel representations and vice versa. Inception modules in between superpixel layers convolve the unit activations between all superpixels depending on their distance in the feature space. This retains all properties of the bilateral filter: superpixels that are spatially close and have a similar mean color will have a stronger influence on each other. Superpixels are not the only choice; in principle, one could also sample random points from the image and use them as intermediate representations. We use superpixels for computational reasons, since they can be used to propagate label information to the full image resolution. Other interpolation techniques are possible, including the well-known bilinear interpolation, up-convolution networks~\cite{zeiler2010deconvolutional}, and DenseCRFs~\cite{krahenbuhl2012efficient}. The quantization error mentioned in Sec.~\ref{sec:superpixels} only enters because the superpixels are used for interpolation. Also note that a fixed grid that is independent of the image is a hard choice of where information should be stored. 
One could in principle evaluate the CNN densely, at all possible spatial locations, but we found that this resulted in poor performance compared to interpolation methods. \subsubsection{Back-propagation and Training.} All free parameters of the inception module, $\mathbf{w}$, $\{\theta^h\}$ and $\Lambda$, are learned via back-propagation. We also back-propagate the error with respect to the module inputs, thereby enabling the integration of our inception modules inside CNN frameworks without breaking the end-to-end learning paradigm. As shown in Fig.~\ref{fig:bi_module}, the bilateral filtering can be decomposed into 5 different sub-layers. Derivatives with respect to the free parameters are obtained by the corresponding layer and standard back-propagation through the directed acyclic graph. For example, $\Lambda$ is optimized by back-propagating gradients through the $1\times1$ convolution. Derivatives for non-standard layers (pairwise similarity, matrix multiplication) are straightforward to obtain using matrix calculus. To let different filters learn the information propagation at different scales, we initialized $\{\theta^h\}$ with well separated scalar values (\textit{e.g.}\@\xspace $\{1, 0.7, 0.3,...\}$). The learning is performed using the stochastic optimization method Adam~\cite{kingma2014adam}. The implementation is done in the Caffe neural network framework~\cite{jia2014caffe}, and the code is available at \url{http://segmentation.is.tuebingen.mpg.de}. 
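To make the module concrete, the following NumPy sketch implements the forward pass of Eq.~\ref{eq:module} together with the Gaussian bilateral filtering it combines. The names follow the notation above ($\Lambda$, $\{\theta^h\}$, $\mathbf{w}$), but the dense $Q\times Q$ kernel is built explicitly only for clarity; the actual implementation uses optimized GPU layers, so this is an illustrative sketch rather than the released code.

```python
import numpy as np

def bilateral_inception(z, f, Lam, thetas, w):
    """Forward pass of a bilateral inception (BI) module (illustrative sketch).

    z:      (Q, C) unit activations of the previous layer
    f:      (Q, D) per-point features, e.g. mean position/color of superpixels
    Lam:    (D, D) learnable feature transform (the 1x1 convolution Lambda)
    thetas: list of H scalar kernel scales theta^h
    w:      (H, C) per-scale, per-channel combination weights
    returns (Q, C) combined filtered activations z_bar
    """
    g = f @ Lam.T                                    # transformed features Lambda * f
    # pairwise squared distances in the transformed feature space
    d2 = ((g[:, None, :] - g[None, :, :]) ** 2).sum(-1)
    out = np.zeros_like(z)
    for h, theta in enumerate(thetas):
        K = np.exp(-d2 / (2.0 * theta ** 2))         # Gaussian filter kernel K(theta^h)
        K /= K.sum(axis=1, keepdims=True)            # normalize rows
        z_hat = K @ z                                # filtered activations for scale h
        out += w[h][None, :] * z_hat                 # per-channel linear combination
    return out
```

With a very small $\theta$ the kernel degenerates to the identity and the module passes activations through unchanged; larger scales propagate information between points that are close in the transformed feature space.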
\definecolor{voc_1}{RGB}{0, 0, 0} \definecolor{voc_2}{RGB}{128, 0, 0} \definecolor{voc_3}{RGB}{0, 128, 0} \definecolor{voc_4}{RGB}{128, 128, 0} \definecolor{voc_5}{RGB}{0, 0, 128} \definecolor{voc_6}{RGB}{128, 0, 128} \definecolor{voc_7}{RGB}{0, 128, 128} \definecolor{voc_8}{RGB}{128, 128, 128} \definecolor{voc_9}{RGB}{64, 0, 0} \definecolor{voc_10}{RGB}{192, 0, 0} \definecolor{voc_11}{RGB}{64, 128, 0} \definecolor{voc_12}{RGB}{192, 128, 0} \definecolor{voc_13}{RGB}{64, 0, 128} \definecolor{voc_14}{RGB}{192, 0, 128} \definecolor{voc_15}{RGB}{64, 128, 128} \definecolor{voc_16}{RGB}{192, 128, 128} \definecolor{voc_17}{RGB}{0, 64, 0} \definecolor{voc_18}{RGB}{128, 64, 0} \definecolor{voc_19}{RGB}{0, 192, 0} \definecolor{voc_20}{RGB}{128, 192, 0} \definecolor{voc_21}{RGB}{0, 64, 128} \definecolor{voc_22}{RGB}{128, 64, 128} \begin{figure*}[t] \small \centering \subfigure{% \includegraphics[width=.15\columnwidth]{figures/2007_000033_given.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/2007_000033_sp.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/2007_000033_gt.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/2007_000033_cnn.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/2007_000033_crf.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/2007_000033_ours.png} }\\[-2ex] \setcounter{subfigure}{0} \subfigure[\scriptsize Input]{% \includegraphics[width=.15\columnwidth]{figures/2009_003564_given.png} } \subfigure[\scriptsize Superpixels]{% \includegraphics[width=.15\columnwidth]{figures/2009_003564_sp.png} } \subfigure[\scriptsize GT]{% \includegraphics[width=.15\columnwidth]{figures/2009_003564_gt.png} } \subfigure[\scriptsize Deeplab]{% \includegraphics[width=.15\columnwidth]{figures/2009_003564_cnn.png} } \subfigure[\scriptsize +DenseCRF]{% \includegraphics[width=.15\columnwidth]{figures/2009_003564_crf.png} } \subfigure[\scriptsize Using BI]{% 
\includegraphics[width=.15\columnwidth]{figures/2009_003564_ours.png} } \mycaption{Semantic segmentation}{Example results of semantic segmentation on the Pascal VOC12 dataset. (d) depicts the DeepLab CNN result, (e) CNN + 10 steps of mean-field inference, (f) the result obtained with bilateral inception (BI) modules (\bi{6}{2}+\bi{7}{6}) between \fc~layers.}\label{fig:semantic_visuals} \end{figure*} \section{Experiments} We study the effect of inserting and learning bilateral inception modules in various existing CNN architectures. As a testbed, we perform experiments on semantic segmentation using the Pascal VOC12 segmentation benchmark dataset~\cite{voc2012segmentation} and the Cityscapes street scene dataset~\cite{Cordts2015Cvprw}, and on material segmentation using the Materials in Context (MINC) dataset from~\cite{bell2015minc}. We take different CNN architectures from the works of~\cite{chen2014semantic,zheng2015conditional,bell2015minc} and insert the inception modules before and/or after the spatial FC layers. In Appendix~\ref{sec:binception-app}, we present some quantitative results with approximate bilateral filtering using the permutohedral lattice~\cite{adams2010fast}. \subsection{Semantic Segmentation} We first use the Pascal VOC12 segmentation dataset~\cite{voc2012segmentation} with 21 object classes. For all experiments on VOC12, we train using the extended training set of 10581 images collected by~\cite{hariharan2011moredata}. Following~\cite{zheng2015conditional}, we use a reduced validation set of 346 images. We experiment on two different network architectures: (a) the DeepLab model from~\cite{chen2014semantic}, which uses a CNN followed by a DenseCRF, and (b) the CRFasRNN model from~\cite{zheng2015conditional}, which uses a CNN with deconvolution layers followed by a DenseCRF, trained end-to-end. \subsubsection{DeepLab}\label{sec:deeplabmodel} We use the publicly available state-of-the-art pre-trained CNN models from~\cite{chen2014semantic}. 
We use the DeepLab-LargeFOV variant as a base architecture and refer to it as `DeepLab'. The DeepLab~CNN model produces a lower resolution prediction ($\frac{1}{8}\times$) which is then bilinearly interpolated to the input image resolution. The original models have been fine-tuned using both the MSCOCO~\cite{lin2014microsoft} and the extended VOC~\cite{hariharan2011moredata} datasets. Next, we describe modifications to these models and show performance improvements in terms of both IoU and runtimes. \begin{table}[t] \centering \small \begin{tabular}{>{\raggedright\arraybackslash}p{4.8cm}>{\raggedright\arraybackslash}p{1.8cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.7cm}} \toprule \textbf{Model} & Training & \emph{IoU} & \emph{Runtime}(ms)\\ \midrule DeepLab~\cite{chen2014semantic} & & 68.9 & 145\\ \midrule With BI modules & & & \\ \bi{6}{2} & only BI & \href{http://host.robots.ox.ac.uk:8080/anonymous/31URIG.html}{70.8} & +20 \\ \bi{6}{2} & BI+FC & \href{http://host.robots.ox.ac.uk:8080/anonymous/JOB8CE.html}{71.5} & +20\\ \bi{6}{6} & BI+FC & \href{http://host.robots.ox.ac.uk:8080/anonymous/IB1UAZ.html}{72.9} & +45\\ \bi{7}{6} & BI+FC & \href{http://host.robots.ox.ac.uk:8080/anonymous/EQB3CR.html}{73.1} & +50\\ \bi{8}{10} & BI+FC & \href{http://host.robots.ox.ac.uk:8080/anonymous/JR27XL.html}{72.0} & +30\\ \bi{6}{2}-\bi{7}{6} & BI+FC & \href{http://host.robots.ox.ac.uk:8080/anonymous/VOTV5E.html}{73.6} & +35\\ \bi{7}{6}-\bi{8}{10} & BI+FC & \href{http://host.robots.ox.ac.uk:8080/anonymous/X7A3GP.html}{73.4} & +55\\ \bi{6}{2}-\bi{7}{6} & FULL & \href{http://host.robots.ox.ac.uk:8080/anonymous/CLLB3J.html}{\textbf{74.1}} & +35\\ \bi{6}{2}-\bi{7}{6}-CRF & FULL & \href{http://host.robots.ox.ac.uk:8080/anonymous/7NGWWU.html}{\textbf{75.1}} & +865\\ \midrule DeepLab-CRF~\cite{chen2014semantic} & & 72.7 & +830\\ DeepLab-MSc-CRF~\cite{chen2014semantic} & & \textbf{73.6} & +880\\ DeepLab-EdgeNet~\cite{chen2015semantic} & & 71.7 & +30\\ 
DeepLab-EdgeNet-CRF~\cite{chen2015semantic} & & \textbf{73.6} & +860\\ \bottomrule \end{tabular} \mycaption{Semantic segmentation using DeepLab~model} {IoU scores on the Pascal VOC12 segmentation test dataset and average runtimes (ms) for different models. Also shown are results for competitive dense pixel prediction techniques that use the same base DeepLab CNN. Runtimes also include superpixel computation (6ms). In the second column, `BI', `FC' and `FULL' denote training of the BI layers only, the BI and FC layers, and the full model, respectively.} \label{tab:deeplabresults} \end{table} We add inception modules after different FC layers in the original model and remove the DenseCRF post-processing. For this dataset, we use 1000 SLIC superpixels~\cite{achanta2012slic,gSLICr_2015}. The inception modules after the \fc{6}, \fc{7} and \fc{8} layers are referred to as \bi{6}{H}, \bi{7}{H} and \bi{8}{H} respectively, where $H$ is the number of kernels. All results using the~DeepLab~model on the Pascal VOC12 dataset are summarized in Tab.~\ref{tab:deeplabresults}. We report only `test' numbers, because the released DeepLab model that we adapted was trained using both the train and validation sets. The~DeepLab~network achieves an IoU of 68.9 after bilinear interpolation. Experiments with the \bi{6}{2} module indicate that learning only the inception module, while keeping the remaining network fixed, already yields a reliable IoU improvement ($+1.9$). Additional joint training with the \fc{} layers improved the performance significantly. The results also show that more kernels improve performance. Next, we add multiple modules to the base DeepLab network at various stages and train them jointly. This further improves the performance. The \bi{6}{2}-\bi{7}{6} model with two inception modules improves IoU by $4.7$ and $0.9$ points over the baseline model and the DenseCRF application, respectively. 
Finally, fine-tuning the entire network (FULL in Tab.~\ref{tab:deeplabresults}) boosts the performance by $5.2$ and $1.4$ compared to the baseline and the DenseCRF application. Some visual results are shown in Fig.~\ref{fig:semantic_visuals} and more are included in Appendix~\ref{sec:qualitative-app}. Several other variants of using BI are conceivable. During our experiments, we have observed that more kernels and more modules improve the performance, so we expect that even better results can be achieved. In Tab.~\ref{tab:deeplabresults}, the runtime in milliseconds is included for several models. These numbers have been obtained using an Nvidia Tesla K80 GPU and standard Caffe time benchmarking~\cite{jia2014caffe}. DenseCRF timings are taken from~\cite{chen2015semantic}. The runtimes indicate that the overhead of the BI modules is quite minimal in comparison to using a DenseCRF. In addition, we include the results of some other dense pixel prediction methods that are built on top of the same DeepLab base model. DeepLab-MSc-CRF~is a multi-scale version~\cite{chen2014semantic} of DeepLab~with a DenseCRF on top. DeepLab-EdgeNet~\cite{chen2015semantic} is a recently proposed fast and discriminatively trained domain transform technique for propagating information across pixels. Comparison with these techniques in terms of performance and runtime indicates that our approach performs on par with the latest dense pixel prediction techniques with significantly less time overhead. Several state-of-the-art CNN based systems~\cite{lin2015efficient,liu2015semantic} have achieved higher results than DeepLab~on Pascal VOC12. These models are not yet publicly available, so we could not test the use of BI models in them. A close variant~\cite{barron2015bilateral} of our work, which proposes to perform the optimization in bilateral space, also has fast runtimes, but reported lower performance in comparison to the application of DenseCRF. 
\begin{table} \centering \small \begin{tabular}{>{\raggedright\arraybackslash}p{5.0cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.9cm}} \toprule \textbf{Model} & \emph{IoU} & \emph{Runtime}(ms)\\ \midrule DeconvNet(CNN+Deconv.) & 72.0 & 190 \\ \midrule With BI modules & & \\ \bi{3}{2}-\bi{4}{2}-\bi{6}{2}-\bi{7}{2} & \textbf{74.9} & 245 \\ \midrule CRFasRNN (DeconvNet-CRF)& 74.7 & 2700\\ \bottomrule \end{tabular} \mycaption{Semantic segmentation using CRFasRNN model}{IoU scores and runtimes for different models on the Pascal VOC12 test dataset. Note that the runtime also includes superpixel computation.} \label{tab:deconvresults} \end{table} \subsubsection{CRFasRNN} As a second architecture, we modified the CNN architecture trained by~\cite{zheng2015conditional}, which produces a result at an even lower resolution ($\frac{1}{16} \times$). Multiple deconvolution steps are employed to obtain the segmentation at input image resolution. This result is then passed on to the DenseCRF recurrent neural network to obtain the final segmentation result. We insert BI modules after the score-pool3, score-pool4, \fc{6} and \fc{7} layers; please see~\cite{long2014fully,zheng2015conditional} for the network architecture details. Instead of combining the outputs from the above layers with deconvolution steps, we introduce BI modules after them and linearly combine the outputs to obtain the final segmentation result. Note that we entirely removed both the deconvolution and the DenseCRF parts of the original model~\cite{zheng2015conditional}. See Tab.~\ref{tab:deconvresults} for results on the DeconvNet model. Without the DenseCRF part, evaluating only the deconvolutional part of this model, one obtains an IoU score of $72.0$. Ten steps of mean-field inference increase the IoU to $74.7$~\cite{zheng2015conditional}. 
Our model, with only a few additional parameters compared to the base CNN, achieves an IoU of $74.9$, an improvement of 0.2 over the CRFasRNN model. The BI layers lead to better performance than deconvolution and DenseCRF combined, while being much faster. \subsubsection{Hierarchical Clustering Analysis} We learned the network parameters using 1000 gSLIC superpixels per image; however, the inception module allows changing the resolution (a non-square $K$). To illustrate this, we perform agglomerative clustering of the superpixels, sequentially merging the nearest two superpixels (in spatial and photometric features) into a single one. We then evaluated the DeepLab-\bi{6}{2}-\bi{7}{6} network using different levels of the resulting hierarchy, reusing all the trained network parameters. Results in Fig.~\ref{fig:clustering} show that the IoU score on the validation set decreases slowly with a decreasing number of points and then drops for fewer than 200 superpixels. This validates that the network generalizes to different superpixel layouts and that it is sufficient to represent larger regions of similar color by fewer points. In the future, we plan to explore different strategies to allocate the representation to those regions that require more resolution and to remove the superpixelization altogether. Fig.~\ref{fig:clustering} shows an example image with 200, 600, and 1000 superpixels and the segmentations obtained with BI modules. 
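The agglomerative step described above can be sketched as a greedy merging loop; the function name and the size-weighted averaging of features are our illustrative choices, not the exact implementation used in the experiments.

```python
import numpy as np

def merge_superpixels(feats, sizes, target):
    """Greedy agglomerative merging of superpixels (illustrative sketch).

    feats:  (S, D) per-superpixel features (e.g. mean position and color)
    sizes:  (S,) pixel counts, used for size-weighted feature averaging
    target: desired number of remaining clusters
    returns a list of index lists, one per remaining cluster
    """
    clusters = [[i] for i in range(len(feats))]
    feats = feats.astype(float).copy()
    sizes = sizes.astype(float).copy()
    alive = list(range(len(feats)))
    while len(alive) > target:
        # find the closest pair among the remaining clusters
        best, bi, bj = np.inf, -1, -1
        for a in range(len(alive)):
            for b in range(a + 1, len(alive)):
                i, j = alive[a], alive[b]
                d = np.sum((feats[i] - feats[j]) ** 2)
                if d < best:
                    best, bi, bj = d, i, j
        # merge cluster bj into bi with size-weighted mean features
        total = sizes[bi] + sizes[bj]
        feats[bi] = (sizes[bi] * feats[bi] + sizes[bj] * feats[bj]) / total
        sizes[bi] = total
        clusters[bi] += clusters[bj]
        alive.remove(bj)
    return [clusters[i] for i in alive]
```

Evaluating the trained network at a coarser level of this hierarchy only changes the number of points $Q$; the BI modules themselves are agnostic to it.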
\begin{figure}[t] \begin{tabular}{c} \subfigure{% \includegraphics[width=0.25\textwidth]{figures/superpixel_plot_agg_clustering.pdf} } \end{tabular} \hfill \begin{tabular}{c} \subfigure{% \includegraphics[width=0.14\textwidth]{figures/2007_001311_given.png} }\\ \subfigure{% \includegraphics[width=0.14\textwidth]{figures/2007_001311_gt.png} } \end{tabular} \hfill \begin{tabular}{c} \subfigure{% \includegraphics[width=0.14\textwidth]{figures/2007_001311_agg_1000_200_sp.png} }\\ \subfigure{% \includegraphics[width=0.14\textwidth]{figures/2007_001311_agg_200_ours.png} } \end{tabular}\hfill \begin{tabular}{c} \subfigure{% \includegraphics[width=0.14\textwidth]{figures/2007_001311_agg_1000_600_sp.png} }\\ \subfigure{% \includegraphics[width=0.14\textwidth]{figures/2007_001311_agg_600_ours.png} } \end{tabular}\hfill \begin{tabular}{c} \subfigure{% \includegraphics[width=0.14\textwidth]{figures/2007_001311_agg_1000_1000_sp.png} }\\ \subfigure{% \includegraphics[width=0.14\textwidth]{figures/2007_001311_agg_1000_ours.png} } \end{tabular} \mycaption{Hierarchical clustering analysis}{From left to right: validation performance when using different superpixel layouts, visualization of an image with its ground truth segmentation, and the \bi{6}{2}-\bi{7}{6} result with 200, 600, and 1000 superpixels.} \label{fig:clustering} \end{figure} \subsection{Material Segmentation} We also experiment on a different pixel prediction task, material segmentation, by adapting a CNN architecture fine-tuned on the Materials in Context (MINC) dataset~\cite{bell2015minc}. MINC consists of 23 material classes and is available in three different resolutions with the same aspect ratio: low ($550^2$), mid ($1100^2$) and an original higher resolution. The authors of~\cite{bell2015minc} train CNNs on the mid-resolution images and then combine them with a DenseCRF to predict and evaluate on low-resolution images. 
We build our work on the AlexNet model~\cite{krizhevsky2012imagenet} released by the authors of~\cite{bell2015minc}. To obtain a per-pixel labeling of a given image, there are several processing steps that~\cite{bell2015minc} use for good performance: first, a CNN is applied at several scales with different strides; the predictions are then interpolated to the input image resolution and passed to a DenseCRF. For simplicity, we choose to run the CNN at a single scale. The authors used just one kernel with $(u, v, L, a, b)$ features in the DenseCRF part. We used the same features in our inception modules. We modified the base AlexNet model by inserting BI modules after the \fc{7} and \fc{8} layers. Again, 1000 SLIC superpixels are used for all experiments. Results on the test set are shown in Table~\ref{tab:mincresults}. When inserting BI modules, the performance improves both in total pixel accuracy and in class-averaged accuracy. We observe an improvement of $12\%$ compared to the CNN predictions and of $2-4\%$ compared to the CNN+DenseCRF results. Qualitative examples are shown in Fig.~\ref{fig:material_visuals} and more are included in Appendix~\ref{sec:qualitative-app}. The weights to combine the outputs in the BI layers are chosen on the validation set. For this model we do not provide a fully learned setup, due to the very limited amount of segment training data. 
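To illustrate how per-superpixel features are formed from an image and a superpixel label map, the sketch below computes the mean position and mean color per superpixel. Note that the MINC model uses $(u,v,L,a,b)$ features in CIELab space; for simplicity this hypothetical helper averages the raw color channels instead of converting to Lab.

```python
import numpy as np

def superpixel_features(image, labels):
    """Mean (u, v, c1, c2, c3) feature per superpixel (illustrative sketch).

    image:  (H, W, 3) array of color values
    labels: (H, W) integer superpixel assignment with ids 0..S-1
    returns (S, 5) feature matrix: mean position and mean color
    """
    H, W, _ = image.shape
    # u indexes columns, v indexes rows (default 'xy' meshgrid indexing)
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    S = int(labels.max()) + 1
    feats = np.zeros((S, 5))
    for s in range(S):
        mask = labels == s
        feats[s, 0] = u[mask].mean()
        feats[s, 1] = v[mask].mean()
        feats[s, 2:] = image[mask].mean(axis=0)
    return feats
```

These per-superpixel feature vectors are exactly what the BI modules compare (after the learned transform $\Lambda$) to decide how strongly two superpixels influence each other.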
\begin{table} \centering \small \begin{tabular}{p{4.2cm}>{\centering\arraybackslash}p{4cm}>{\centering\arraybackslash}p{1.9cm}} \toprule \textbf{Model} & \emph{Class / Total accuracy} & \emph{Runtime}(ms)\\ \midrule AlexNet CNN & 55.3 / 58.9 & 300 \\ \midrule \bi{7}{2}-\bi{8}{6} & 67.7 / 71.3 & 410 \\ \bi{7}{6}-\bi{8}{6} & \textbf{69.4 / 72.8} & 470 \\ \midrule AlexNet-CRF & 65.5 / 71.0 & 3400 \\ \bottomrule \end{tabular} \mycaption{Material segmentation using AlexNet}{Pixel accuracies and runtimes (in ms) of different models on the MINC material segmentation dataset~\cite{bell2015minc}. Runtimes also include the time for superpixel extraction (15ms).} \label{tab:mincresults} \end{table} \definecolor{minc_1}{HTML}{771111} \definecolor{minc_2}{HTML}{CAC690} \definecolor{minc_3}{HTML}{EEEEEE} \definecolor{minc_4}{HTML}{7C8FA6} \definecolor{minc_5}{HTML}{597D31} \definecolor{minc_6}{HTML}{104410} \definecolor{minc_7}{HTML}{BB819C} \definecolor{minc_8}{HTML}{D0CE48} \definecolor{minc_9}{HTML}{622745} \definecolor{minc_10}{HTML}{666666} \definecolor{minc_11}{HTML}{D54A31} \definecolor{minc_12}{HTML}{101044} \definecolor{minc_13}{HTML}{444126} \definecolor{minc_14}{HTML}{75D646} \definecolor{minc_15}{HTML}{DD4348} \definecolor{minc_16}{HTML}{5C8577} \definecolor{minc_17}{HTML}{C78472} \definecolor{minc_18}{HTML}{75D6D0} \definecolor{minc_19}{HTML}{5B4586} \definecolor{minc_20}{HTML}{C04393} \definecolor{minc_21}{HTML}{D69948} \definecolor{minc_22}{HTML}{7370D8} \definecolor{minc_23}{HTML}{7A3622} \definecolor{minc_24}{HTML}{000000} \begin{figure*}[t] \tiny \centering \subfigure{% \includegraphics[width=.15\columnwidth]{figures/000000531_given.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/000000531_sp.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/000000531_gt.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/000000531_cnn.png} } \subfigure{% \includegraphics[width=.15\columnwidth]{figures/000000531_cnncrf.png} 
} \subfigure{% \includegraphics[width=.15\columnwidth]{figures/000000531_ours.png} }\\[-2ex] \setcounter{subfigure}{0} \subfigure[\tiny Input]{% \includegraphics[width=.15\columnwidth]{figures/000034078_given.png} } \subfigure[\tiny Superpixels]{% \includegraphics[width=.15\columnwidth]{figures/000034078_sp.png} } \subfigure[\tiny GT]{% \includegraphics[width=.15\columnwidth]{figures/000034078_gt.png} } \subfigure[\tiny AlexNet]{% \includegraphics[width=.15\columnwidth]{figures/000034078_cnn.png} } \subfigure[\tiny +DenseCRF]{% \includegraphics[width=.15\columnwidth]{figures/000034078_cnncrf.png} } \subfigure[\tiny Using BI]{% \includegraphics[width=.15\columnwidth]{figures/000034078_ours.png} } \mycaption{Material segmentation}{Example results of material segmentation. (d) depicts the AlexNet CNN result, (e) CNN + 10 steps of mean-field inference, (f) results obtained with bilateral inception (BI) modules (\bi{7}{2}+\bi{8}{6}) between \fc~layers.} \label{fig:material_visuals} \end{figure*} \begin{table} \centering \small \begin{tabular}{p{3.2cm}>{\centering\arraybackslash}p{2.8cm}>{\centering\arraybackslash}p{2.8cm}>{\centering\arraybackslash}p{1.9cm}} \toprule \textbf{Model} & \emph{IoU (Half-res.)} & \emph{IoU (Full-res.)} & \emph{Runtime}(s)\\ \midrule DeepLab~CNN & 62.2 & 65.7 & 0.3 \\ \midrule \bi{6}{2} & 62.7 & 66.5 & 5.7 \\ \bi{6}{2}-\bi{7}{6} & \textbf{63.1} & \textbf{66.9} & 6.1 \\ \midrule DeepLab-CRF & 63.0 & 66.6 & 6.9 \\ \bottomrule \end{tabular} \mycaption{Street scene Segmentation using DeepLab~model} {IoU scores and runtimes (in sec) of different models on Cityscapes segmentation dataset~\cite{Cordts2015Cvprw}, for both half-resolution and full-resolution images. Runtime computations also include superpixel computation time (5.2s).} \label{tab:cityscaperesults} \end{table} \subsection{Street Scene Segmentation} We further evaluate the use of BI modules on the Cityscapes dataset~\cite{Cordts2015Cvprw}. 
Cityscapes contains 20K high-resolution ($1024\times2048$) images of street scenes with coarse pixel annotations and another 5K images with fine annotations; all annotations are from 19 semantic classes. The 5K images are divided into 2975 train, 500 validation and the remaining test images. Since no pre-trained models were publicly available for this dataset yet, we trained a DeepLab~model ourselves. We trained the base DeepLab~model with half-resolution images ($512\times1024$) so that the model fits into GPU memory. The result is then interpolated to full resolution using bilinear interpolation. \definecolor{city_1}{RGB}{128, 64, 128} \definecolor{city_2}{RGB}{244, 35, 232} \definecolor{city_3}{RGB}{70, 70, 70} \definecolor{city_4}{RGB}{102, 102, 156} \definecolor{city_5}{RGB}{190, 153, 153} \definecolor{city_6}{RGB}{153, 153, 153} \definecolor{city_7}{RGB}{250, 170, 30} \definecolor{city_8}{RGB}{220, 220, 0} \definecolor{city_9}{RGB}{107, 142, 35} \definecolor{city_10}{RGB}{152, 251, 152} \definecolor{city_11}{RGB}{70, 130, 180} \definecolor{city_12}{RGB}{220, 20, 60} \definecolor{city_13}{RGB}{255, 0, 0} \definecolor{city_14}{RGB}{0, 0, 142} \definecolor{city_15}{RGB}{0, 0, 70} \definecolor{city_16}{RGB}{0, 60, 100} \definecolor{city_17}{RGB}{0, 80, 100} \definecolor{city_18}{RGB}{0, 0, 230} \definecolor{city_19}{RGB}{119, 11, 32} \begin{figure*}[t] \tiny \centering \subfigure{% \includegraphics[width=.18\columnwidth]{figures/frankfurt00000_008206_given.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/frankfurt00000_008206_sp.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/frankfurt00000_008206_gt.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/frankfurt00000_008206_cnn.png} } \subfigure{% \includegraphics[width=.18\columnwidth]{figures/frankfurt00000_008206_ours.png} }\\[-2ex] \setcounter{subfigure}{0} \subfigure[\scriptsize Input]{% 
\includegraphics[width=.18\columnwidth]{figures/frankfurt00000_016005_given.png} } \subfigure[\scriptsize Superpixels]{% \includegraphics[width=.18\columnwidth]{figures/frankfurt00000_016005_sp.png} } \subfigure[\scriptsize GT]{% \includegraphics[width=.18\columnwidth]{figures/frankfurt00000_016005_gt.png} } \subfigure[\scriptsize Deeplab]{% \includegraphics[width=.18\columnwidth]{figures/frankfurt00000_016005_cnn.png} } \subfigure[\scriptsize Using BI]{% \includegraphics[width=.18\columnwidth]{figures/frankfurt00000_016005_ours.png} } \mycaption{Street scene segmentation}{Example results of street scene segmentation. (d) depicts the DeepLab result, (e) the result obtained by adding bilateral inception (BI) modules (\bi{6}{2}+\bi{7}{6}) between \fc~layers.} \label{fig:street_visuals} \end{figure*} We experimented with two layouts: a single \bi{6}{2} module, and a combination of two inception modules, \bi{6}{2}-\bi{7}{6}. We noticed that SLIC superpixels~\cite{achanta2012slic} give a higher quantization error here than on VOC and thus used 6000 superpixels computed with~\cite{DollarICCV13edges} for our experiments. Quantitative results on the validation set are shown in Tab.~\ref{tab:cityscaperesults}. In contrast to the findings on the previous datasets, we observe only modest improvements with both DenseCRF and our inception modules in comparison to the base model. Similar to the previous experiments, the inception modules achieve better performance than DenseCRF while being faster. The majority of the computation time in our approach is spent on the extraction of superpixels ($5.2s$) using a CPU implementation. Some visual results with the \bi{6}{2}-\bi{7}{6} model are shown in Fig.~\ref{fig:street_visuals}, with more in Appendix~\ref{sec:qualitative-app}. \section{Discussion and Conclusions} The DenseCRF~\cite{krahenbuhl2012efficient} with mean-field inference has been used in many CNN segmentation approaches. 
Its main ingredient, and the reason for the improved performance, is the use of a bilateral filter applied to the beliefs over labels. We have introduced a CNN approach that uses this key component in a novel way: filtering intermediate representations of higher levels in CNNs while jointly learning the task-specific feature spaces. This propagates information between earlier and more detailed intermediate representations of the classes instead of between beliefs over labels. Further, we show that image-adaptive layouts in the higher levels of CNNs can be used to advantage, in the same spirit in which CRF graphs have been constructed using superpixels in previous works on semantic segmentation. The computation in the $1\times1$ convolution layers scales with the number of superpixels, which may be an advantage. Further, we have shown that the same representation can be used to interpolate the coarser representations to the full image. The use of image-adaptive convolutions between the FC layers retains the appealing effect of producing segmentation masks with sharp edges. This is not a property of the superpixels themselves: using them to represent information in FC layers and using them to interpolate to the full resolution are orthogonal. Different interpolation steps can be used to propagate the label information to the entire image, including bilinear interpolation, bilateral upsampling, up-convolutions and DenseCRFs. We plan to investigate the effect of different sampling strategies to represent information in the higher layers of CNNs and to apply similar image-adaptive ideas to videos. We believe that the Bilateral Inception models are an interesting step towards directly including the model structure of CRF factors into the forward architecture of CNNs. The BI modules are easy to implement and are applicable to CNNs that perform structured output prediction. 
\chapter{Conclusions and Outlook} \label{chap:conclusion} Generative models provide a strong set of tools for modeling the visual world around us. But their use in computer vision is hampered by the complexity of inference, which has caused the vision community to favor data-hungry discriminative models. Both generative and discriminative models have complementary advantages and disadvantages, as discussed in Chapter~\ref{chap:models}. Generative models provide an easy handle for incorporating prior knowledge about the task, but inference in them is often too complex. Discriminative models, on the other hand, offer a straightforward inference scheme, the forward evaluation of the model, but lack principled ways of incorporating prior knowledge. This thesis proposed techniques for alleviating some of the key issues with prominent computer vision models by improving inference in them. A common strategy across several of the proposed techniques is to leverage the complementary model class for better inference in a given model. That is, we leverage discriminative models for better inference in generative computer vision models; conversely, we use generative knowledge (in the form of bilateral filters) to enrich existing discriminative CNN models. In this way, this thesis takes important steps towards bridging the gap between generative and discriminative vision models. The proposed inference techniques are flexible enough to deal with different task scenarios (e.g.,\ availability of large or small amounts of data). \paragraph{Inference in Generative Vision Models} In the case of generative models, we leverage discriminative clustering or random forest techniques to accelerate and/or improve Bayesian inference. In Chapter~\ref{chap:infsampler}, we proposed a new sampling technique called the `Informed Sampler', in which discriminative models help in better exploration of the target domain, via \emph{informed} proposals, during MCMC sampling. 
In Chapter~\ref{chap:cmp}, we proposed a new message passing technique called `Consensus Message Passing', in which random forest predictors predict \emph{consensus} messages during standard message passing inference, resulting in convergence to better solutions. In both the `Informed Sampler' and `Consensus Message Passing' (CMP), we made sure that the theoretical guarantees that come with the well-established inference techniques are not violated by our modified inference schemes. In the informed sampler, we achieve this by injecting discriminative knowledge into MCMC sampling via proposal distributions and adhering to the detailed balance condition while sampling. In consensus message passing, we use consensus messages from discriminative predictors only during the first few iterations, ensuring that the fixed point reached by our modified inference is also a fixed point of standard message passing in the model. We evaluated both the informed sampler and CMP on three different generative models each, reflecting a wide range of problem scenarios, and consistently observed improved inference with the proposed techniques in comparison to standard sampling and message passing inference. \paragraph{Inference in Discriminative Vision Models} In this thesis, we focused on inference in prominent CNN models. Spatial convolutions form the basic building block of most CNN architectures. A key observation of this thesis is that bilateral filters~\cite{aurich1995non,tomasi1998bilateral} are a generalization of spatial convolutions and do not share many of their limitations. The key issue with the existing use of bilateral filters is that they are confined to a fixed, hand-tuned parameterization. In Chapter~\ref{chap:bnn}, we proposed a generalized bilateral filter and devised a gradient-based technique for learning the filter parameters. 
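As a rough illustration of the idea, consider a one-dimensional analogue in which the kernel value applied to a pair of points depends only on their quantized feature difference, and those kernel values are free parameters. The actual method operates on a high-dimensional permutohedral lattice, so the bin-based parameterization and function names below are only a simplified stand-in; the point is that the kernel gradients follow directly from the chain rule, which is what makes back-propagation learning possible.

```python
import numpy as np

def learnable_bilateral_1d(x, f, k, width):
    """1-D learnable bilateral filter (illustrative sketch).

    x:     (N,) input signal
    f:     (N,) scalar feature per point (e.g. intensity)
    k:     (B,) learnable kernel values over quantized feature differences
    width: bin width for quantizing |f_i - f_j|; differences beyond the
           last bin receive zero weight (a sparse neighborhood)
    """
    N, B = len(x), len(k)
    y = np.zeros(N)
    for i in range(N):
        for j in range(N):
            b = int(abs(f[i] - f[j]) / width)
            if b < B:
                y[i] += k[b] * x[j]
    return y

def grad_kernel(x, f, g, B, width):
    """Gradient of a loss w.r.t. the kernel values, given upstream grad g = dL/dy."""
    dk = np.zeros(B)
    for i in range(len(x)):
        for j in range(len(x)):
            b = int(abs(f[i] - f[j]) / width)
            if b < B:
                dk[b] += g[i] * x[j]
    return dk
```

Setting the kernel to a sampled Gaussian recovers a standard bilateral filter, while leaving the entries free lets gradient descent shape the filter for the task at hand.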
Experiments on a wide range of problems showed the superior performance of learnable bilateral filters compared to a fixed Gaussian filter kernel. Learnable bilateral filters enabled us to stack several filters together and learn all of them via back-propagation. Using this, we proposed novel neural network architectures which we call `Bilateral Neural Networks' (BNN). In Chapter~\ref{chap:vpn}, we showed how BNNs can be easily adapted to filter video data for propagating temporal information across video frames. In Chapter~\ref{chap:binception}, we proposed new and fast neural network modules based on explicit Gaussian bilateral filtering, called `Bilateral Inceptions', and showcased how existing segmentation CNN architectures can be modified for large improvements in accuracy while adding little time overhead. Bilateral filters form the core of mean-field inference in DenseCRF models~\cite{krahenbuhl2012efficient} and provide a way to incorporate prior knowledge about the scene in the form of dense feature-based connectivity across pixels. By integrating learnable bilateral filters into standard CNN architectures, we brought the worlds of CRFs and CNNs closer, providing a way to incorporate prior knowledge into CNNs. \section{Summary of Contributions} The following list summarizes the specific contributions of this thesis: \begin{itemize} \item We devised a novel MCMC sampling approach called the `Informed Sampler' (Chapter~\ref{chap:infsampler}) for Bayesian inference in complex generative vision models. The informed sampler leverages discriminative approaches to improve the efficiency of MCMC sampling. Experiments on a wide range of generative vision models showed significantly faster convergence of our sampler while maintaining higher acceptance rates. This opens up possibilities for using complex generative models like graphics engines to address vision problems. 
\item We devised a novel message passing technique called `Consensus Message Passing' (CMP), in Chapter~\ref{chap:cmp}, for doing Bayesian inference in layered graphical models used in vision. Experiments on diverse graphical models in vision showed that CMP resulted in significantly better performance compared to standard message passing techniques such as expectation propagation or variational message passing. Moreover, CMP is the first instance where the Infer.NET~\cite{InferNET2012} probabilistic programming language is shown to be useful for addressing vision problems. \item We parameterized bilateral filters as general sparse high-dimensional filters (Chapter~\ref{chap:bnn}) and devised an approach for learning the filter kernels via standard back-propagation. This resulted in a general technique for learning sparse high-dimensional filters, which in turn generalizes standard bilateral filters. Experiments on a wide range of applications showed improved performance with respect to standard Gaussian bilateral filters. \item We also showed how learning bilateral filters can generalize fully-connected conditional random field models (DenseCRF) to arbitrary learned pairwise potentials (Section~\ref{sec:densecrf}) instead of standard Gaussian pairwise edge potentials. DenseCRF is one of the most widely used CRF techniques in vision, and this generalization carries forward to most of its existing applications and helps in better integration into end-to-end trained models such as convolutional neural networks (CNNs). \item Our technique for learning general sparse high-dimensional filters also generalizes standard spatial convolutions in CNN frameworks. This opens up possibilities for applying CNNs to sparse high-dimensional data, which is not feasible with many standard CNN techniques. Moreover, our technique can be used for learning image-adaptive filters inside CNNs instead of standard image-agnostic 2D filters.
\item We adapted the learnable sparse high-dimensional filters for video filtering, in Chapter~\ref{chap:vpn}, and proposed a novel neural network approach for propagating content across video frames. We call our networks `Video Propagation Networks'~(VPN). Experiments on video object segmentation and semantic video segmentation showed that VPN outperformed existing task-specific methods while being faster. \item In Chapter~\ref{chap:binception}, we devised a new CNN module called `Bilateral Inception' that can be readily inserted into standard segmentation CNN models, resulting in better performance while producing the result at the original image resolution and alleviating some of the need for post-processing. Experiments on state-of-the-art CNN models showed significantly better performance with our inception module while remaining competitive in runtime. \end{itemize} \section{Outlook} With the diverse range of experiments on each proposed technique, I hope to have convinced the reader that this thesis work resulted in several important advances for inference in computer vision models. Inference in computer vision models is still a very active area of research, and I hope the work presented in this thesis aids in further advances in this area. The following are some of the research topics that could benefit from the concepts and techniques presented in this thesis. \vspace{-0.2cm} \paragraph{Leveraging Photo-Realistic Graphics for Vision:} As discussed in Section~\ref{sec:inv-graphics}, modern graphics engines leverage dedicated hardware setups and provide real-time renderings with a stunning level of realism. This is made possible by accurate modeling of image formation. Such realistic graphics models are seldom utilized in building vision systems due to the difficulty of posterior inference.
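The inference difficulty just mentioned is what the informed-sampler strategy attacks: mix a discriminatively learned global proposal with a local random walk inside Metropolis-Hastings, evaluating the full mixture density on both sides of the acceptance ratio so that detailed balance is preserved. A toy one-dimensional sketch (the target, the `learned' proposal, and all parameter values are stand-ins, not the thesis models):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(theta):
    # Stand-in for an unnormalized log posterior; in the thesis this
    # would involve evaluating a full generative (graphics) model.
    return -0.5 * (theta - 3.0) ** 2

# Stand-in for a discriminatively trained global proposal g(theta).
G_MU, G_SIGMA = 2.5, 1.5

def informed_mh(n_steps, alpha=0.5, step=0.5, theta0=0.0):
    """Metropolis-Hastings with a mixture proposal: with probability
    alpha draw from the learned global proposal, otherwise take a
    local random-walk step.  Using the full mixture density in the
    acceptance ratio keeps detailed balance intact."""
    def log_q(new, old):
        # Mixture proposal density q(new | old); the common
        # 1/sqrt(2*pi) factor is dropped consistently from both terms.
        local = np.exp(-0.5 * ((new - old) / step) ** 2) / step
        glob = np.exp(-0.5 * ((new - G_MU) / G_SIGMA) ** 2) / G_SIGMA
        return np.log(alpha * glob + (1 - alpha) * local)

    theta, samples = theta0, []
    for _ in range(n_steps):
        if rng.random() < alpha:
            prop = rng.normal(G_MU, G_SIGMA)   # informed (global) move
        else:
            prop = rng.normal(theta, step)     # local random walk
        log_acc = (log_target(prop) + log_q(theta, prop)
                   - log_target(theta) - log_q(prop, theta))
        if np.log(rng.random()) < log_acc:
            theta = prop
        samples.append(theta)
    return np.array(samples)
```

The chain targets the toy posterior regardless of how good the learned proposal is; a better proposal only raises the acceptance rate and speeds up mixing, which is the practical benefit reported in Chapter~\ref{chap:infsampler}.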
The informed sampler (Chapter~\ref{chap:infsampler}) and consensus message passing (Chapter~\ref{chap:cmp}) techniques presented in this thesis made important steps towards making this inference faster. But the proposed techniques are still not fast enough for practical purposes. An important research direction is to further improve the efficiency of inference by making use of the internals of state-of-the-art rendering mechanisms such as `Metropolis Light Transport'~\cite{vorba2014line}. One way to achieve this is by improving MCMC sampling efficiency via pre-rejection using the technique of~\cite{korattikara2013austerity}, avoiding the need for full rendering of the rejected samples in MCMC. This would result in a better coupling of graphics and Bayesian vision systems. \vspace{-0.2cm} \paragraph{CNNs for Sparse 3D Data:} In recent years, there has been an increasing need for techniques that efficiently process 3D data, with the advent of cheaper 3D scanners and consumer devices that require processing of 3D data (e.g.~virtual reality devices such as Oculus Rift~\cite{oculus}). One of the distinguishing characteristics of high-dimensional data such as 3D point clouds or videos is that they are sparse. 3D point clouds are inherently sparse, and the sparsity of video data comes from the redundant representation across frames. The learnable bilateral filters proposed in this thesis (Chapter~\ref{chap:bnn}) provide a principled way to process sparse high-dimensional data by enabling long-range, data-dependent connections. This thesis work also demonstrated that one can easily integrate these sparse high-dimensional filters into other CNN architectures, and showed them to be fruitful for video processing (Chapter~\ref{chap:vpn}). I hope this thesis work inspires future research on efficiently processing 3D data such as point clouds or meshes.
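The reason such filters cope well with sparse data can be seen in a stripped-down splat/slice pipeline: points are scattered into a hashed feature grid, averaged per occupied cell, and read back. (The thesis uses the permutohedral lattice with a blur stage and learnable weights; the axis-aligned grid and helper name below are a simplified illustration.)

```python
import numpy as np
from collections import defaultdict

def sparse_grid_filter(features, values, cell=1.0):
    """Toy splat/slice filter for sparse high-dimensional points.

    'Splat': scatter-add every value into the grid cell that its
    feature vector falls into.  'Slice': read the per-cell average
    back out at every input point.  The cost scales with the number
    of occupied cells, never with the volume of the feature space,
    which is what makes sparse high-dimensional filtering tractable.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    keys = [tuple(np.floor(np.asarray(f) / cell).astype(int))
            for f in features]
    for k, v in zip(keys, values):                    # splat
        sums[k] += v
        counts[k] += 1
    return np.array([sums[k] / counts[k] for k in keys])  # slice
```

A real pipeline would insert a blur over neighbouring occupied cells between the splat and slice steps, and make the splat, blur, and slice weights learnable.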
Since the convolutions are performed in bilateral space with sparsely populated cells, there is no need to voxelize a given point cloud or mesh for doing 3D convolutions. One of the limitations of bilateral neural networks is that the bilateral feature scales are hand-tuned. Other recent works such as~\cite{krahenbuhl2013parameter,kundu2016feature} try to optimize feature spaces for bilateral filtering, but they are not integrated into CNN frameworks. An interesting future research direction is to bring these works together by learning feature spaces for bilateral neural networks in an end-to-end fashion. \vspace{-0.2cm} \paragraph{Structured CNNs:} Although images are represented with independent pixel values, the real world is highly structured, and prior knowledge about it can be captured by rich modeling tools like graphical models. For example, in the case of urban scene understanding, one could leverage the prior knowledge that objects like cars and persons stand on the ground plane, that building facades are vertical, etc. Most of the existing CNN frameworks are agnostic to such explicit prior information. One may argue that CNNs are capable of implicitly learning such prior knowledge directly from the training data. But, in practice, the amount of labelled training data is always limited, and thus CNNs need external prior constraints to be able to perform well on many real-world problems. In this thesis, we developed techniques for incorporating prior knowledge into CNNs via learnable bilateral filters. Such prior knowledge is low-level; ideally, I would like to have techniques for integrating high-level prior knowledge as well (e.g., the prior knowledge that is encoded in graphical models).
Other existing works in this direction (e.g.~\cite{jain2015structural,zheng2015conditional,tompson2014joint,chandra2016fast}) are either limited in the type of prior knowledge they can model, or enforce graphical-model constraints only after the main CNN structure. I believe that a fruitful direction for tackling this problem is developing structured filters that can be easily integrated into CNN frameworks. The work presented in this thesis makes important steps in this direction. \vspace{-0.2cm} \paragraph{On Combining Generative and Discriminative Models:} Hybrid generative and discriminative models are an active area of research, and I believe that the future of computer vision will be dominated by such hybrid models, which are scalable and generic enough to be applicable to a wide range of problem scenarios. In Section~\ref{sec:gen-disc-comb}, we discussed several recent works that aim to develop such hybrid models. Several inference techniques presented in this thesis are based on synergistic combinations of generative and discriminative models. I hope that this thesis work inspires or aids in the further development of hybrid generative and discriminative vision models. \chapter{Zusammenfassung} Maschinelles Sehen kann als die F\"ahigkeit verstanden werden, Bilddaten zu interpretieren. Durchbr\"uche in diesem Feld gehen oft einher mit Fortschritten in Inferenztechniken, da die Komplexit\"at der Inferenz die Komplexit\"at der verwendeten Modelle bestimmt. Diese Arbeit beschreibt lernbasierte Inferenzmechanismen und zeigt Anwendungen im maschinellen Sehen auf, wobei auf Techniken f\"ur Inferenz in sowohl generativen als auch diskriminativen Modellen eingegangen wird. Obwohl naheliegend und intuitiv verst\"andlich, sind generative Modelle im maschinellen Sehen h\"aufig nur eingeschr\"ankt nutzbar, da die Berechnung der A-Posteriori-Wahrscheinlichkeiten oft zu komplex oder zu langsam ist, um praktikabel zu sein.
Wir beschreiben Techniken zur Verbesserung der Inferenz in zwei weit verbreiteten Inferenzverfahren: `Markov Chain Monte Carlo Sampling' (MCMC) und `Message-Passing'. Die vorgeschlagene Verbesserung besteht darin, mehrere diskriminative Modelle zu lernen, die die Grundlage f\"ur Bayes'sche Inferenz \"uber einem generativen Modell bilden. Wir demonstrieren anhand einer Reihe von generativen Modellen, dass die beschriebenen Techniken den Inferenzprozess beschleunigen und/oder zu besseren L\"osungen konvergieren. Eine der gr\"o{\ss}ten Schwierigkeiten bei der Verwendung von diskriminativen Modellen ist die systematische Ber\"ucksichtigung von Vorkenntnissen. Zur Verbesserung der Inferenz in diskriminativen Modellen schlagen wir Techniken vor, die das urspr\"ungliche Modell selbst ver\"andern, da Inferenz in diesen die schlichte Auswertung des Modells ist. Wir konzentrieren uns auf `Convolutional Neural Networks' (CNN) und schlagen eine Generalisierung der Faltungsoperation vor, die den Kern jeder CNN-Architektur bildet. Dazu verallgemeinern wir bilaterale Filter und pr\"asentieren eine neue Netzarchitektur mit trainierbaren bilateralen Filtern, die wir `Bilaterale Neuronale Netze' nennen. Wir zeigen, wie die bilateralen Filtermodule verwendet werden k\"onnen, um existierende Netzwerkarchitekturen f\"ur Bildsegmentierung zu verbessern, und entwickeln ein auf bilateralen Netzen basierendes Modell zur zeitlichen Integration von Information f\"ur Videoanalyse. Experimente mit einer breiten Palette von Anwendungen und Datens\"atzen zeigen das Potenzial der vorgeschlagenen bilateralen Netzwerke. Zusammenfassend schlagen wir Lernmethoden f\"ur bessere Inferenz in einer Reihe von Modellen des maschinellen Sehens vor, von inversen Renderern bis zu trainierbaren neuronalen Netzwerken.
Unsere Inferenz-Techniken helfen bei der Berechnung der A-Posteriori-Wahrscheinlichkeiten in generativen Modellen und erm\"oglichen so neue Ans\"atze des modellbasierten maschinellen Lernens im Bereich des maschinellen Sehens. In diskriminativen Modellen wie CNNs helfen die vorgeschlagenen verallgemeinerten Filter beim Entwurf neuer Netzarchitekturen, die sowohl hochdimensionale Daten verarbeiten k\"onnen als auch Vorkenntnisse in die Inferenz einbeziehen. \chapter{Symbols and Notation} \label{chap:symbols} \newcommand{\mathbf{x}}{\mathbf{x}} \newcommand{\mathbf{y}}{\mathbf{y}} \newcommand{\bar{\mathbf{y}}}{\bar{\mathbf{y}}} \newcommand{\mathbf{\theta}}{\mathbf{\theta}} \newcommand{\mathbf{f}}{\mathbf{f}} \newcommand{\mathcal{F}}{\mathcal{F}} \newcommand{\textit{e.g.}\@\xspace}{\textit{e.g.}\@\xspace} \newcommand{\textit{i.e.}\@\xspace}{\textit{i.e.}\@\xspace} \newcommand{\textit{\'{a}~la}\@\xspace}{\textit{\'{a}~la}\@\xspace} \newcommand{\textit{et~al.}\@\xspace}{\textit{et~al.}\@\xspace} Unless otherwise mentioned, we use the following notation and symbols in this thesis. Here, we only list those symbols which are used across multiple chapters. Those symbols that are specific to particular sections or chapters are not listed here.
\begin{longtable}[l]{p{50pt} p{300pt}} \toprule \textbf{Symbol} & \textbf{Description} \\ \midrule $\mathbf{x}$ & Observation variables (in vectorized form)\\ $\mathbf{y}$ & Target variables (in vectorized form)\\ $\bar{\mathbf{y}}$ & Intermediate/Proposed target variables\\ $K_t,m_t,\cdots$ & Random variables at time step $t$\\ $\mathbf{\theta}$ & Set of all model parameters \\ $\alpha, \beta, \mu, \gamma$ & Model or training parameters \\ $\mathbf{f}$ & Pixel or superpixel features such as $(x,y,r,g,b)$ \\ $P(\cdot|\cdot)$ & Probability distribution or density \\ $\mathcal{N}(\cdot|\cdot)$ & Gaussian distribution \\ $\psi_u$ & Unary potential at each pixel/superpixel \\ $\psi_p$ & Pairwise potential between two pixels/superpixels \\ $L(\cdot), E(\cdot)$ & Loss/Objective/Energy function \\ $\mathcal{F}(\cdot)$ & Generic function relating input to output variables \\ $\Lambda$ & Diagonal matrix for scaling image features say $(x,y,r,g,b)$ \\ \bottomrule \end{longtable} \part{Inference in Generative Vision Models} \input{chapter3.tex} \input{chapter4.tex} \part{Inference in Discriminative Vision Models} \input{chapter5.tex} \input{chapter6.tex} \input{chapter7.tex} \input{conclusion.tex}
\section{Introduction} In recent years, following the seminal works by several groups \cite{Almheiri2014Bulk,Mintun2015Bulk-Boundary,Pastawski2015Holographic,Freivogel2016Precursors} it has become clear that quantum error correction plays an important role in understanding how the bulk geometry of an AdS spacetime emerges from the conformal field theory living on the boundary of that spacetime. \begin{figure}[h] \centering \begin{subfigure}[t]{0.4\textwidth} \centering \includegraphics[height=100pt]{ads-cylinder-labelled} \caption{Conformal diagram of Anti-deSitter space. Time runs vertically upwards. In global co-ordinates $r=0$ represents the center of the spacetime and the boundary is located at $r=\infty$. $ \Sigma $ is a constant time slice.} \label{fig:ads-cylinder} \end{subfigure} \hfill \begin{subfigure}[t]{0.4\textwidth} \centering \includegraphics[height=100pt]{rt-surface-labelled} \caption{A single time-slice of the AdS cylinder. The boundary at $r=\infty$ is divided into two sections $A$ and $\bar{A}.$ $\gamma_{A}$ is the surface of minimum area anchored to the boundary and separating the two regions $A$ and $\bar{A}$.} \label{fig:ads-spatial-slice} \end{subfigure} \caption{Conformal diagram of AdS spacetime (left) and the Ryu-Takayanagi surface on a constant time-slice (right).} \label{fig:ads} \end{figure} \autoref{fig:ads} shows the conformal diagram of AdS space on the left and a constant time slice of this spacetime on the right. The surface $\gamma_A$ divides the bulk geometry into two regions, thus also dividing the boundary into two regions labeled $ A $ and $ \bar A $. 
According to the Ryu-Takayanagi conjecture \cite{Ryu2006Aspects,Ryu2006Holographic}, the entropy due to entanglement between the degrees of freedom living in $ A $ and those living in $ \bar A $ is given by the area of the surface $ \gamma_A $ dividing the bulk interior: \begin{equation}\label{key} S_{A,\bar A} = \frac{\text{Area of }\gamma_{A}}{4 G_N^{d+2}} \end{equation} where $ d $ is the number of spatial dimensions of the bulk AdS geometry. The same time slice is shown in \autoref{fig:ads-rt}. The bulk is divided into two regions, now labeled $A$ and $B$, with the dividing surface labeled $S$. Now, imagine shrinking the surface $S$ so that its area gradually decreases. As the area of this surface reduces, the Ryu-Takayanagi formula tells us that the entropy of entanglement between the degrees of freedom living on the boundary of region $ A $ and those living on the boundary of region $ B $ also starts to reduce. Geometrically, the effect of shrinking $ S $ is to reduce the connectivity between the bulk regions $A$ and $B$, as shown on the right side of \autoref{fig:ads-rt}. When the area of $S$ goes to zero, the two regions will become completely disconnected. At the same time, the entropy of entanglement between the degrees of freedom living on the boundary of region $A$ and those living on the boundary of region $B$ will also vanish. \begin{figure}[h] \centering \begin{subfigure}[t]{0.4\textwidth} \centering \includegraphics[height=100pt]{divided-bulk} \caption{The surface $S$ divides the bulk geometry into two regions $A$ and $B$. The area of $S$ is a measure of the entropy of entanglement between the degrees of freedom living on the boundary of the regions $A$ and $B$ respectively.} \label{fig:divided-bulk} \end{subfigure} \hfill \begin{subfigure}[t]{0.4\textwidth} \centering \includegraphics[height=120pt]{shrinking-area} \caption{As we shrink the surface $S$, we reduce the entropy of entanglement between the two regions.
Geometrically this corresponds to separating or cutting the bulk into disconnected regions.} \label{fig:shrinking-area} \end{subfigure} \caption{Illustrating the relation between geometric connectivity and entanglement \cite{Van-Raamsdonk2010Building}} \label{fig:ads-rt} \end{figure} This simple thought experiment was first proposed in 2010 by Mark Van Raamsdonk \cite{Van-Raamsdonk2010Building}. It illustrates in a simple yet dramatic manner how the RT formula, in conjunction with the AdS/CFT conjecture, provides a straightforward description of how classical spacetime can emerge from entanglement between degrees of freedom of some underlying quantum field theory. It provides a simple, yet physically plausible, explanation of how different pieces of a spacetime can be ``sewn'' together to build a macroscopic geometry. \subsection{AdS/CFT Bulk Reconstruction} Further background for the problem requires slightly more technical language. The metric for asymptotically Anti-deSitter (AdS) spacetimes is generally expressed in terms of the time-like co-ordinate $ t $, a ``radial'' co-ordinate $ r $ and a set of co-ordinates $ \vec{x} $ which describe the spatial dimensions orthogonal or transverse to $ r $: $ g(r,\vec{x},t) $. Using global co-ordinates, the metric can be expressed as: \begin{equation}\label{eqn:ads-global-metric} ds^2 = -\left( 1 + \frac{r^2}{L^2} \right) dt^2 + \left( 1 + \frac{r^2}{L^2} \right)^{-1} dr^2 + r^2 d\vec{x}^2 \end{equation} where $ L $ is the AdS radius. This metric is a solution of the vacuum Einstein equations with a negative cosmological constant $ \Lambda < 0 $, given by $ \Lambda = -d(d-1)/2L^2 $, where $ d $ is the total number of spacetime dimensions.
In the AdS/CFT correspondence one usually talks about spacetimes which are only \emph{asymptotically} AdS, \textit{i.e.}~ whose metric can be written in the form: \begin{equation}\label{eqn:asymptotic-ads-global-metric} ds^2 = -f(r) dt^2 + f(r)^{-1} dr^2 + r^2 d\vec{x}^2 \end{equation} where $ f(r) = 1 + r^2/L^2 $ for $ r $ large enough. The boundary of the spacetime is located at $ r = \infty $. Consider some asymptotically AdS spacetime $ \mc{M} $, with boundary $ \partial \cal M $. The bulk geometry of $ \cal M $ satisfies the Einstein field equations. Consider some fields $ \phi(r,\vec{x}) $ living on a constant time slice $ \Sigma_t $ of $ \cal M $. In terms of these quantities the AdS/CFT conjecture can be stated mathematically via the following expression, known as the Gubser-Klebanov-Polyakov-Witten (GKP-Witten) relation \cite{Gubser1998Gauge,Witten1998Anti}: \begin{equation}\label{eqn:ads-cft-identity} \lim\limits_{r \rightarrow \infty} e^{\imath S[\phi(r,\vec{x})]} \equiv \expect{\exp \left( \imath \int \phi_0(\vec{x}) \mc{O} \right)} \end{equation} where $ S[\phi(r,\vec{x})] $ is the gravitational action evaluated in the bulk for a given field configuration $ \phi(r,\vec{x}) $; $ \phi_0(\vec{x}) $ is the value of the fields evaluated only on the boundary and $ \mc{O} $ is some operator acting on the Hilbert space of the boundary field theory. This relation can be understood as the statement that: \begin{quote} \textbf{Proposition I}: \emph{classical expectation values of bulk fields in asymptotically anti-deSitter spacetimes can be calculated in terms of quantum expectation values of operators acting on the Hilbert space of the boundary field theory} \end{quote} There is, however, a caveat which prevents us from constructing a one-to-one correspondence between the physics on the boundary and the physics in the bulk. This is because the equality \eqref{eqn:ads-cft-identity} holds only \emph{asymptotically} as $ r \rightarrow \infty $.
The question that therefore presents itself is: \emph{can we reconstruct bulk fields (for $ r < \infty $) from knowledge of boundary fields?} \begin{figure*} \centering \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[height=100pt]{ads-spacelike-labelled} \caption{Local field defined at the bulk point $ x $, can be expressed \cite{Hamilton2006Holographic} in terms of non-local, boundary operators with support on $ \Sigma $ - the set of points on the boundary which are at spacelike separations from $ x $} \label{fig:bulk-boundary-p1} \end{subfigure} \hfill \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[height=100pt]{ads-rindler-p5} \caption{AdS-Rindler wedge reconstruction \cite{Bousso2012Light-sheets,Czech2012The-gravity,Hubeny2012Causal}. Requires knowledge of operators only in subset $ A \subset \Sigma $ of boundary Cauchy surface.} \label{fig:bulk-boundary-p2} \end{subfigure} \hfill \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[height=100pt]{rt-surface-v2-labelled} \caption{Fields at any point $ x $ in the green region can be reconstructed using operators living on $ A $. Ambiguity arises because $ x $ can lie in more than \emph{one} causal wedge.} \label{fig:bulk-boundary-p3} \end{subfigure} \caption{Reconstruction of bulk fields from boundary CFT operators} \end{figure*} It turns out that bulk fields, \textit{i.e.}~ $ \phi(t,r,\vec{x}) $ with $ r < \infty $, can indeed be reconstructed from boundary operators \cite{Hamilton2006Holographic}. For this purpose it is helpful to work in the Poincare co-ordinates for AdS: \begin{equation}\label{eqn:ads-poincare} ds^2 = \frac{1}{z^2}(-dt^2 + dz^2 + d\vec{x}^2) \end{equation} where now the radial co-ordinate is $ z $ and the boundary lies at $ z = 0 $. 
In these coordinates, on a constant time-slice, a bulk field with normalizable fall-off near the boundary can be written as: \begin{equation}\label{eqn:field-normalizable-falloff} \phi(z, \vec{x}) \sim z^{\Delta} \phi_0(\vec{x}) \end{equation} In terms of these co-ordinates, the value of the field at a point $ (z,\vec{x}) $ in the bulk can be expressed in terms of the boundary field via: \begin{equation}\label{eqn:bulk-fields} \phi(z, \vec{x}) = \int_{\partial M} dx' K(\vec{x}'|z, \vec{x}) \phi_0(\vec{x}') \end{equation} where $ K $ is the kernel or smearing function. $\phi_0(\vec{x})$ corresponds to a local operator $ \mc{O}(\vec{x}) $ in the CFT: \begin{equation}\label{eqn:field-op-p1} \phi_0(\vec{x}) \leftrightarrow \mc{O}(\vec{x}) \end{equation} This relationship implies that \emph{local} bulk fields are dual to \emph{non-local} boundary operators: \begin{equation}\label{eqn:field-op-p2} \phi(z,\vec{x}) \leftrightarrow \int dx' K (\vec{x}'|z, \vec{x}) \mc{O}(\vec{x}') \end{equation} where the integral has support over a subset of points $ \vec{x}' $ on the boundary. Consider a spatial surface $ \Sigma $ which is a constant time slice of the AdS cylinder, as shown in \autoref{fig:bulk-boundary-p1}. Then the integral in \eqref{eqn:field-op-p2} has support on the subset of the boundary shaded in green, \textit{i.e.}~ all the points on the boundary which are at a spacelike separation from the bulk point $ (z,\vec{x}) $. This construction can be further restricted \cite{Bousso2012Light-sheets,Czech2012The-gravity,Hubeny2012Causal} to limit the domain of integration to the portion of the boundary which lies in the ``causal wedge'' of $ (z,\vec{x}) $. This is the boundary region shaded in green in \autoref{fig:bulk-boundary-p2}. Here $ A \subset \Sigma $ is a subset of the spatial slice $ \Sigma $, such that the bulk point lies in the region enclosed by $ A $ and the Ryu-Takayanagi surface corresponding to $ A $, as shown in \autoref{fig:bulk-boundary-p3}.
\subsection{Redundancy in Bulk Reconstruction} It now becomes apparent that there is a redundancy in this description of bulk reconstruction. Consider for instance a second region $ B $ lying on the boundary of $ \Sigma $, which has non-zero overlap with $ A $ as shown in \autoref{fig:overlapping-wedges}, such that the bulk point $ (z,\vec{x}) $ lies within the causal wedges of both $ A $ \emph{and} $ B $. According to the HRT prescription for bulk reconstruction, fields at the bulk point $ (z,\vec{x}) $ can be mapped \emph{either} to an operator $ \mc{O}_A[\phi(x)] $ with support \emph{only} in $ A $, \emph{or} to an operator $ \mc{O}_B[\phi(x)] $ with support \emph{only} in the region $ B $. \begin{figure*} \centering \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[height=100pt]{ads-rindler-p2} \caption{Fields at a point $ x $ can be represented in terms of operators which lie \emph{either} on the segment of the boundary labeled $ A $ \emph{or} on the segment labeled $ B $.} \label{fig:overlapping-wedges} \end{subfigure} \hfill \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[height=100pt]{ads-rindler-p3} \caption{Three boundary regions $ A,B,C $ such that the bulk point does not lie in the causal wedge of any of the three regions (shown in grey).} \label{fig:non-overlapping-wedges} \end{subfigure} \hfill \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[height=100pt]{ads-rindler-p7} \caption{The bulk point \emph{does} lie within the causal wedges of regions $ A \cup B $, $ B \cup C $ and $ C \cup A $ separately. Here the causal wedge of $ A \cup B $ is shown shaded in green, along with the associated RT surface $ \gamma_{A \cup B} $.} \label{fig:wedge-union} \end{subfigure} \caption{Illustrating the ambiguity in the construction of bulk fields in terms of boundary operators in AdS/CFT} \end{figure*} Now consider the boundary segment which consists of the intersection $ A \cap B $.
The associated causal wedge $ W_C[A \cap B] $ \emph{does not} contain the point $ (z,\vec{x}) $, even though, as is clear, the bulk point does fall within the intersection $ W_C[A] \cap W_C[B] $ of the causal wedges of $ A $ and $ B $. In order for there to be a \emph{unique} description of the bulk fields in terms of boundary operators we require that: \begin{equation}\label{eqn:equiv-ops} \mc{O}_A[\phi(x)] = \mc{O}_B[\phi(x)] \end{equation} For this to be true, both operators must have support \emph{only} in $ A \cap B $. However, as just noted, the causal wedge of this region, $ W_C[A \cap B] $, does not contain the bulk point, and therefore any CFT operators with support \emph{only} on $ A \cap B $ cannot provide a faithful representation of bulk fields at $ (z,\vec{x}) $. The only conclusion one can arrive at is that: \begin{equation}\label{eqn:inquiv-ops} \mc{O}_A[\phi(x)] \ne \mc{O}_B[\phi(x)] \end{equation} The fact that both operators \emph{separately} encode the physics near the same bulk point, and yet \emph{cannot} be the same, leads us to conclude that \cite{Almheiri2014Bulk}: \begin{quote} \textbf{Proposition II}: \emph{There does not exist any \textbf{unique} representation of bulk fields in terms of CFT operators living on (some subset of) the boundary.} \end{quote} We can shed more light on the nature of this redundancy by considering three segments of the boundary as shown in \autoref{fig:non-overlapping-wedges}. As can be seen, the bulk point (in the center) \emph{does not} lie within the causal wedges (shaded in grey) of any of the three regions $ A,B $ or $ C $. Therefore we do not expect the bulk fields at the given point to have a representation in terms of operators defined solely on any one of the three regions. Now consider the third figure, \autoref{fig:wedge-union}.
From this we can see that the bulk point now lies in the causal wedge of the regions $ A \cup B $, $ B \cup C $ and $ C \cup A $, taken \emph{separately}. Consequently there exists a representation of the bulk fields in terms of operators: \[ \mc{O}_{A \cup B} \ne \mc{O}_{B \cup C} \ne \mc{O}_{C \cup A} \] As Almheiri et al.\ point out in \cite{Almheiri2014Bulk}, this sort of dependence of the bulk fields on three different operators defined on overlapping boundary regions is reminiscent of the three-qutrit quantum error correcting code, wherein one ``logical'' qutrit is encoded in terms of three ``physical'' qutrits. At this stage, let us briefly review the basic ideas behind quantum error correcting codes and the three-qutrit code in particular. \section{Quantum Error Correction} In any computation, quantum or classical, errors are inevitable. ``Error correction'' protocols \cite{Kempe2006Approaches,Gottesman2009Introduction} allow us to correct (non-catastrophic) errors. The basic idea behind all error correction protocols, whether classical or quantum, is to encode the ``logical'' degrees of freedom, which carry the information we would like to protect from errors, in terms of some ``physical'' degrees of freedom. The number of physical d.o.f is necessarily larger than the number of logical d.o.f. The simplest example of such a scheme is the repetition code, for either classical or quantum information, in which a single logical d.o.f is encoded in two or more physical d.o.f in the following manner: \begin{align} \text{Classical}: ~ & \tilde 0 \rightarrow 000; \quad \tilde 1 \rightarrow 111 \nonumber \\ \text{Quantum}: ~ & \ket{\tilde 0} \rightarrow \ket{000}; \quad \ket{\tilde 1} \rightarrow \ket{111} \end{align} where symbols with a $ \tilde{~} $ on top represent logical qubits and those without it are physical qubits. Here we have encoded one logical cbit/qubit in three physical cbits/qubits.
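The classical half of this scheme, together with the majority-rule decoding discussed next, is small enough to state in full (a toy sketch; the function names are ours):

```python
def encode(bit):
    """Repetition encoding: one logical bit -> three physical bits."""
    return [bit, bit, bit]

def decode(word):
    """Majority-rule decoding: whichever value at least two of the
    three physical bits agree on is taken to be the logical bit."""
    return int(sum(word) >= 2)
```

A single bit-flip is corrected (`decode([0, 1, 0])` recovers `0`), while two flips defeat the code, which is the limitation noted below. The quantum version cannot read the physical bits directly without destroying superpositions; it measures parity syndromes instead, but the majority logic is the same.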
Let us assume that the errors in our system will only flip one of the physical cbits/qubits in a single code-word, e.g.\ $ 000 \rightarrow 010 $. Such errors can therefore be corrected by using the \emph{majority rule}: those cbits/qubits which are greater in number are the ``correct'' ones and those which are in the minority are the ``wrong'' ones. Thus $ 010 $ would become $ 000 $ after correction and $ 101 $ would become $ 111 $. Of course, if the errors cause more than one cbit/qubit to flip, then this simple scheme is no longer effective. The basic principle behind all forms of error correction is \emph{safety through redundancy}. \textbf{Definition:} An $ (N,K) $ quantum error correcting code consists of a triple \cite{Kribs2005Unified}: \begin{equation}\label{eqn:qec-triple} (\mc{C}_K, \mc{E}, \mc{R}) \end{equation} where: $ \mc{C}_K $ is the ``code space'', a $ K $-dimensional subspace of a higher-dimensional ``physical'' Hilbert space, $ \mc{C}_K \subset \mc{H}_N $; $ \mc{E} $ is a set of quantum operations on $ \mc{H}_N $ which generate errors; and $ \mc{R} $ is a set of quantum operations on $ \mc{H}_N $ which can ``undo'' the errors. $ N, K $ denote, respectively, the dimensions of the physical Hilbert space and the code subspace. An example of such a code is Shor's nine-qubit code, where a single logical qubit is encoded using nine physical qubits.
\begin{align}\label{eqn:nine-qubit-code} \ket{\tilde 0} & = \frac{1}{2\sqrt{2}} \left( \ket{000} + \ket{111} \right)\left( \ket{000} + \ket{111} \right)\left( \ket{000} + \ket{111} \right) \nonumber \\ \ket{\tilde 1} & = \frac{1}{2\sqrt{2}} \left( \ket{000} - \ket{111} \right)\left( \ket{000} - \ket{111} \right)\left( \ket{000} - \ket{111} \right) \end{align} The fundamental ingredients in this code are the GHZ (Greenberger-Horne-Zeilinger) states\footnote{These states are also sometimes referred to as ``cat'' states in honor of Schr\"odinger's famous feline}: \begin{equation}\label{eqn:ghz-states} \ket{\pm} = \frac{1}{\sqrt{2}}(\ket{000} \pm \ket{111}) \end{equation} For future reference we reproduce below the quantum circuit which is used for generating GHZ states: \begin{figure}[h] \centering \includegraphics[scale=1.0]{qutrit_state} \caption{Quantum circuit for generating a GHZ state using CNOT gates} \label{fig:GHZ-states} \end{figure} This circuit involves application of the CNOT gate to qubits $ (1,2) $ and qubits $ (1,3) $ in succession. The CNOT gate is shown below: \begin{figure}[h] \centering \includegraphics[origin=c]{cnot} \caption{CNOT gate. Here $ x $ and $ y $ take values in $ \{0,1\} $ and $ \oplus $ is the logical XOR operator. If the control qubit $ \ket{x} = \ket{0} $ then the target qubit $ \ket{y} $ is left unchanged, otherwise the target qubit is flipped.} \label{fig:cnot-gate} \end{figure} In the remainder of this paper we will show that the consideration of discrete symmetries of spin-networks - the kinematical states of quantum geometry in the framework of Loop Quantum Gravity - naturally leads us to discover the existence of topological excitations which correspond precisely to CNOT gates. These excitations can then be used to build GHZ states, or, in combination with certain other single qubit gates - also represented in terms of operators acting on topological degrees of freedom - can be used to generate an arbitrary quantum circuit.
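The code words of eq.~\eqref{eqn:nine-qubit-code} can be built numerically as tensor products of the GHZ states of eq.~\eqref{eqn:ghz-states}. The short sketch below (our helper names) checks that they are normalized and mutually orthogonal.

```python
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def kron_all(*factors):
    """Tensor product of an arbitrary list of state vectors."""
    out = np.array([1.0])
    for f in factors:
        out = np.kron(out, f)
    return out

# GHZ ("cat") states |+-> = (|000> +- |111>)/sqrt(2)
ghz_plus = (kron_all(ket0, ket0, ket0) + kron_all(ket1, ket1, ket1)) / np.sqrt(2)
ghz_minus = (kron_all(ket0, ket0, ket0) - kron_all(ket1, ket1, ket1)) / np.sqrt(2)

# Shor nine-qubit code words: three GHZ blocks per logical state
logical0 = kron_all(ghz_plus, ghz_plus, ghz_plus)     # 2**9 = 512 components
logical1 = kron_all(ghz_minus, ghz_minus, ghz_minus)
```

Orthogonality of the code words follows from $\braket{+|-} = 0$ within each three-qubit block.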
\section{Spin Networks and Quantum Geometry} Loop Quantum Gravity - or ``LQG'' for short - is an approach\footnote{there are many introductory reviews on LQG, at various levels of technical sophistication ranging from advanced \cite{Thiemann2001Introduction,Thiemann2002Lectures, Ashtekar2004Background} to intermediate \cite{Mercuri2010Introduction,Dona2010Introductory,Esposito2011An-introduction,Ashtekar2012Introduction} to (relatively speaking) elementary \cite{Vaid2014LQG-for-the-Bewildered,Rovelli2011Zakopane,Rovelli2014Covariant}} towards building a non-perturbative theory of quantum gravity. LQG provides a description of quantum states of geometry in terms of objects known as ``spin networks''. A spin-network is an arbitrary graph $ \Gamma $, whose edges are labeled by representations of $ SU(2) $ (\autoref{fig:spin-network}) and whose vertices are labeled by invariant $ SU(2) $ tensors known as ``intertwiners''. More simply, spin-networks are graphs whose edges carry angular momenta and whose vertices provide a means for adding together all the angular momenta coming into that vertex from its adjoining edges. \begin{figure}[h] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[height=100pt]{network-labelled-j} \caption{} \label{fig:spin-network} \end{subfigure} \hfill \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[height=100pt]{spin-net-vertex} \caption{} \label{fig:spin-network-node} \end{subfigure} \caption{Shown on the left is a spin-network and on the right is a single (tetravalent) vertex.} \end{figure} A spin-network defines a state of quantum geometry. With each edge we can associate a quantum of area and with each vertex a quantum of volume. \autoref{fig:area-puncture} shows a single edge of a spin-network carrying an angular momentum $ j $, puncturing a surface (shaded in blue). 
This edge endows the surface with a quantum of area given by $ A_j = 8\pi \gamma l_{P}^2 \sqrt{j(j+1)} $, where $ l_{P} $ is the Planck length and $ \gamma $ is a free parameter known as the Barbero-Immirzi parameter. The particular value of $ \gamma $ will not be relevant for our purpose. \begin{figure}[h] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[height=100pt]{area_puncture} \caption{} \label{fig:area-puncture} \end{subfigure} \hfill \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[height=100pt]{area_punctures} \caption{} \label{fig:area-punctures} \end{subfigure} \caption{Shown on the left is the action of the area operator and on the right an illustration of how a macroscopic surface can be built up from ``gluing'' together many quanta of area.} \end{figure} Spin network vertices are associated with quanta of volume. In \autoref{fig:spin-network-node} a single tetravalent vertex is ``blown up''. Each edge adjoining the vertex carries an angular momentum $ j_i $. With each edge we can associate a triangle of area $ A_{j_i} $. The four triangles associated with the given vertex will then close up\footnote{The four angular momenta must satisfy the closure condition $ \sum_{i=1}^4 \vect{j_i} = 0 $. This is analogous to the requirement that a classical tetrahedron satisfies $ \sum_{i=1}^4 \vect{N_i} = 0 $, where $ \vect{N_i} $ are the vectors normal to each face.} to form a tetrahedron, with which we can associate an operator whose eigenvalues can be interpreted as the volume of the resulting ``quantum tetrahedron''. For more details we direct the interested reader to one of the several reviews listed in the references. Spin networks provide a \emph{genuinely non-perturbative} way to quantize classical geometry. \emph{A priori}, there is no requirement for a background manifold with any sort of topological or geometric structure, in order to be able to define a quantum state of geometry.
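The area spectrum quoted above is easy to tabulate. In the sketch below the Barbero-Immirzi parameter and the Planck length are set to placeholder values ($\gamma = 1$, $l_P = 1$), since, as noted, the particular value of $\gamma$ is not relevant here; the function name is ours.

```python
import numpy as np

def area_quantum(j, gamma=1.0, l_planck=1.0):
    """Eigenvalue A_j = 8*pi*gamma*l_P**2 * sqrt(j*(j+1)) of the area operator.
    gamma (Barbero-Immirzi) and l_planck are placeholders: the text leaves
    gamma's value open, so we work in units where both equal 1."""
    return 8.0 * np.pi * gamma * l_planck**2 * np.sqrt(j * (j + 1))

# A surface punctured by several edges acquires the sum of the area quanta,
# as in the 'gluing' picture of the right-hand figure.
punctures = [0.5, 0.5, 1.0, 1.5]
total_area = sum(area_quantum(j) for j in punctures)
```

Note that the spectrum is discrete but not evenly spaced: the gap between successive eigenvalues shrinks as $j$ grows.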
The only information required to specify a spin-network state $ \ket{\Psi_\Gamma} $ is knowledge of the graph $ \Gamma $ (\textit{i.e.}~ the number of edges and vertices and their connectivity structure) and the labeling of edges and vertices by spins and intertwiners respectively: \begin{equation}\label{eqn:spin-net-info} \ket{\Psi_\Gamma} \equiv (\Gamma, N_v, N_e, A_{pq}, \{j_i\}, \{I_p\}) \end{equation} where $ N_v, N_e $ are the numbers of vertices and edges respectively in the graph, $ A_{pq} $ is the connectivity matrix, $ \{j_i\} $ are the edge labels and $ \{ I_p \} $ are the vertex labels. Now what makes these states \emph{bona-fide} states of quantum geometry, rather than simply being an \emph{ad-hoc} construction, is the fact that these states are \emph{exact} solutions of the ADM Hamiltonian. As is very well understood, in the $ 3+1 $ formulation of general relativity, first presented by Arnowitt, Deser and Misner \cite{Arnowitt2004The-Dynamics}, Einstein's equations can be expressed as a sum of constraints. When working in the connection formulation there are three constraints. These are known as the Gauss constraint $ \mc{C} $, the vector or ``diffeomorphism'' constraint $ \mc{H}_{diff} $ and the scalar or ``Hamiltonian'' constraint $ \mc{H}_{scalar} $. In this language the problem of quantum gravity reduces to that of finding states which are annihilated by the quantum operator versions of all three constraints. It turns out that spin network states are annihilated by two of the constraints - the Gauss and diffeomorphism constraints. The Gauss constraint is the statement of invariance of the state under $ SU(2) $ gauge transformations. This is satisfied by spin network states due to the closure condition $ \sum_e \vect{j}_e = 0 $, \textit{i.e.}~ the sum of the angular momenta carried by all the edges adjoining a given vertex must be zero.
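The closure condition can be checked algorithmically: an $SU(2)$-invariant intertwiner exists at a vertex precisely when the adjoining spins can be coupled to total spin zero. The sketch below (our function names, not from the LQG literature) builds the set of achievable total spins by repeated Clebsch-Gordan coupling.

```python
def couplings(j1, j2):
    """Allowed totals of two SU(2) spins: |j1 - j2|, ..., j1 + j2 in integer steps."""
    lo, hi = abs(j1 - j2), j1 + j2
    return {lo + k for k in range(int(round(hi - lo)) + 1)}

def satisfies_gauss_constraint(spins):
    """True iff the edge spins meeting at a vertex can couple to total spin zero,
    i.e. an SU(2)-invariant intertwiner exists (a sketch of the closure check)."""
    totals = {spins[0]}
    for j in spins[1:]:
        totals = {t for a in totals for t in couplings(a, j)}
    return 0 in totals

# The tetravalent vertex of the earlier figure: four j = 1/2 edges admit an intertwiner
ok = satisfies_gauss_constraint([0.5, 0.5, 0.5, 0.5])
```

For instance, four spin-$1/2$ edges can couple to a singlet, whereas three spin-$1/2$ edges plus one spin-$1$ edge cannot.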
The diffeomorphism constraint is trivially satisfied because the embedding of a graph in a smooth manifold does not affect its geometric content. One can move the edges and vertices of the graph around in the ambient manifold, but as long as the connectivity structure of the graph is unchanged, the spin network state is also unaffected. The action of the third constraint - the Hamiltonian constraint - is expected to generate ``time evolution'' of the quantum geometry. The resulting states are known as ``spin foams'' and represent causal histories connecting two different spin network configurations. For our purposes we are interested only in spin-networks which represent the spatial geometry at a given instant of time. \section{Diffeomorphism Invariance of Spin Networks} The ADM constraints discussed in the last section fail to capture the full dynamics of general relativity. The reason for this is the existence of ``large'' diffeomorphisms (those which cannot be continuously deformed to the identity). This is completely analogous to the existence of topologically non-trivial solutions in field theory arising from ``large'' gauge transformations. Diffeomorphisms constitute the gauge symmetry of general relativity. Therefore it is natural to consider not only small diffeomorphisms - those continuously deformable to the identity - but also large diffeomorphisms - those which lie in the disconnected component of the gauge group. The ADM constraints only encode the small diffeomorphisms. As was discovered by Friedman and Sorkin in the 1980s \cite{Friedman1982Half-integral}, the consideration of large diffeomorphisms leads to gravitational configurations which carry half-integer spin. In order to complete the description of spin network states as candidate states of quantum geometry, one has to consider the action of large diffeomorphisms in addition to the small ones.
Spin networks are trivially invariant under small (spatial) diffeomorphisms because, as explained in the previous section, moving edges or vertices around without changing the graph connectivity or spin labels does not change the geometric content of the state. Now, large diffeomorphisms would also \emph{not} change the graph connectivity or the spin labels. Thus the question arises as to what effect, if any, such transformations would have on spin networks. It is not hard to convince oneself that the effect of large diffeomorphisms would be to break and reconnect spin-network edges in such a way that the winding and linking numbers of those edges would change. Thus, while the graph structure and spin labels would remain intact, a spin-network with two of its edges ``braided'' around one another is a topologically distinct state from one which does not have such a braiding. \begin{figure}[h] \centering \includegraphics[scale=0.20]{large-diffeos} \caption{Action of a large diffeomorphism on a spin network state}\label{fig:large-diffeos} \end{figure} This is illustrated in \autoref{fig:large-diffeos}. The upper portion of the figure shows a generic spin network state with two edges connecting two different regions (whose boundary is drawn as the dotted curve) of the graph. The lower portion shows what happens to this graph under the action of a large diffeomorphism. The two edges break and reconnect after having wound around each other once. The resulting graph carries the same geometric information as the initial state but has a different topological structure. If we wish to construct states of quantum geometry which incorporate the \emph{full} dynamics of quantum general relativity, we must construct states which are invariant under \emph{both} small \emph{and} large diffeomorphisms.
\section{Diffeomorphism Invariance and Yang-Baxter Equation} Now, in order to understand the effect of large diffeomorphisms on a quantum state of geometry, let us consider the two configurations depicted in \autoref{fig:braiding}. There are two surfaces $ S_1 $ and $ S_2 $ with two edges (shown in green) connecting them. Now, as shown, for example, in \cite{Ghosh2014CFT/Gravity}, we know that with the points where the edges puncture the surfaces we can associate the Hilbert space of a point particle. Let us denote the state of the two punctures on the first surface as $ \ket{v_1} $, $ \ket{v_2} $ and on the second surface as $ \ket{v'_1} $, $ \ket{v'_2} $. On the left side of the figure the two edges are unbraided and on the right side they are wound once around each other. \begin{figure}[h] \centering \includegraphics[height=100pt]{braiding} \caption{Two configurations of a pair of edges of some graph state $ \ket{\Psi_\Gamma} $. The left side shows the edges as being unbraided. On the right the same two edges are braided once around each other.} \label{fig:braiding} \end{figure} Let us denote the Hilbert space associated with the $ i^{\text{th}} $ puncture as $ \mc{H}_i $. Then the state of the two punctures, either on $ S_1 $ or on $ S_2 $, resides in the tensor product Hilbert space $ \mc{H}_1 \otimes \mc{H}_2 $. On the right side of the figure we see that braiding the edges around each other has the same effect as exchanging the two punctures on the second surface. Now, in general, when we exchange two particles in any quantum system the total state of the system undergoes a unitary transformation. In three spatial dimensions particle exchange can only lead to a change in the phase of the wavefunction by $ +1 $ (for bosons) or $ -1 $ (for fermions). However, in two spatial dimensions one can have more complex behavior (see for \textit{e.g.}~ \cite{Jain2007Composite}).
The unitary operator associated with such an exchange need no longer be a trivial multiple of the identity. In general we would have the following relationship between the states of the two particles before and after exchange: \begin{equation}\label{eqn:braiding} \ket{v'_1} \otimes \ket{v'_2} = R \left( \ket{v_1} \otimes \ket{v_2} \right) \end{equation} where $ R $ is some unitary operator. Diagrammatically this is shown in \autoref{fig:braiding-op}. \begin{figure}[h] \centering \includegraphics[height=80pt]{braiding-p1} \caption{The effect of braiding two particles can be represented by the action of a unitary operator $ R $ on the two-particle Hilbert space} \label{fig:braiding-op} \end{figure} The form of the unknown unitary $ R $ is, as yet, undetermined. We can now use the fact that an arbitrary spin-network state should be invariant under small diffeomorphisms to determine an exact equation which must be satisfied by any $ R $. Consider the two configurations shown in \autoref{fig:reidemeister-type-iv}. Now, instead of two punctures, we are considering the case where each surface contains three punctures. The two figures might seem different, but they are not. It is easy to convince oneself that one can slide the middle thread in such a way as to go from the configuration on the right to the one on the left, \emph{without} breaking or joining any threads. Such a transformation is nothing but a small diffeomorphism acting only on the middle thread! \begin{figure}[h] \centering \includegraphics[height=100pt]{reidemeister-type-iv-v2} \caption{The two configurations shown here are connected to each other by a small diffeomorphism, known as a type IV Reidemeister move.} \label{fig:reidemeister-type-iv} \end{figure} However, while topologically the two configurations appear to be the same, the \emph{order} in which the pairwise unitary operator $ R $ acts on the three-particle state is different in the two cases.
On the left we see that, in the first step, the first two threads are braided while the third is left untouched. This operation on the three-particle Hilbert space can be denoted by the operator $ R_{12} \otimes \mb{1}_3 $, where the subscripts denote the particles on which the operation acts. The second step (on the left) can be represented as $ \mb{1}_1 \otimes R_{23} $ and the third one (on the left) is again $ R_{12} \otimes \mb{1}_3 $. The full unitary operation corresponding to the left hand side of \autoref{fig:reidemeister-type-iv} acting on the spin-network state can thus be expressed as: \begin{equation}\label{eqn:yang-baxter-lhs} (R_{12} \otimes \mb{1}_3) (\mb{1}_1 \otimes R_{23}) (R_{12} \otimes \mb{1}_3) \ket{\Psi_\Gamma} \end{equation} In a similar way one can write the total unitary corresponding to the right hand side of \autoref{fig:reidemeister-type-iv}: \begin{equation}\label{eqn:yang-baxter-rhs} (\mb{1}_1 \otimes R_{23}) (R_{12} \otimes \mb{1}_3) (\mb{1}_1 \otimes R_{23}) \ket{\Psi_\Gamma} \end{equation} While the two unitary operations shown in \eqref{eqn:yang-baxter-lhs} and \eqref{eqn:yang-baxter-rhs} \emph{appear} to be similar, they are \emph{not} the same. However, if we insist that the state of quantum geometry represented by the given spin network should be invariant under small diffeomorphisms, then the two unitary operations must have the same effect on the state $ \ket{\Psi_\Gamma} $. This equivalence can formally be written as an operator equation: \begin{equation}\label{eqn:yang-baxter} (R_{12} \otimes \mb{1}_3) (\mb{1}_1 \otimes R_{23}) (R_{12} \otimes \mb{1}_3) = (\mb{1}_1 \otimes R_{23}) (R_{12} \otimes \mb{1}_3) (\mb{1}_1 \otimes R_{23}) \end{equation} We see here that the particular spin-network state in question factors out, because this equation should hold for \emph{all} spin-networks for the theory to be physically consistent, and we are finally left with just an operator equation.
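Equation \eqref{eqn:yang-baxter} can be checked numerically for any candidate $R$. The sketch below (our helper name) verifies two well-known unitary solutions: the trivial SWAP gate, and the entangling Bell-basis matrix discussed by Kauffman and Lomonaco \cite{Kauffman2004Braiding}.

```python
import numpy as np

I2 = np.eye(2)

def satisfies_yang_baxter(R, tol=1e-12):
    """Check (R x 1)(1 x R)(R x 1) == (1 x R)(R x 1)(1 x R) on three qubits."""
    R12 = np.kron(R, I2)   # R acting on particles 1, 2
    R23 = np.kron(I2, R)   # R acting on particles 2, 3
    return np.allclose(R12 @ R23 @ R12, R23 @ R12 @ R23, atol=tol)

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

# Bell-basis solution of Kauffman & Lomonaco: unitary AND entangling
BELL = np.array([[ 1, 0,  0, 1],
                 [ 0, 1, -1, 0],
                 [ 0, 1,  1, 0],
                 [-1, 0,  0, 1]], dtype=float) / np.sqrt(2)
```

SWAP solves the equation trivially (it just realizes the permutation group relation), while the Bell matrix is a genuinely entangling solution.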
Equation \eqref{eqn:yang-baxter} is famous in the field of condensed matter physics and goes by the name of the Yang-Baxter equation \cite{Baxter2008Exactly}, in honor of the two individuals most deeply associated with studying its solutions and recognizing its significance. It can be solved to give the exact form of the unitary operator $ R $. As we shall see in the next section, the form of $ R $ is, up to local single-qubit transformations, that of a CNOT gate - a two qubit gate which, together with single-qubit gates, is universal for quantum computation. \section{Yang Baxter Equation and Quantum Computation} It turns out to be the case that the manipulations of spin-network edges described in the previous section are precisely the same operations as those used to perform quantum computation using topological phases of two-dimensional condensed matter systems as the computing ``hardware''. Let us assume for the time being that we can identify the Hilbert space of a puncture $ \mc{H} $ (c.f. \autoref{fig:braiding}) with that of a spin-$1/2$ particle, \textit{i.e.}~ a qubit. In this case the operator $ R $ is a two qubit gate and can be expressed as a $ 4 \times 4 $ matrix. Up to local single-qubit transformations, the solution for $ R $ in this case is given by the following expression \cite{Kauffman2004Braiding}: \begin{equation}\label{eqn:cnot-gate} \scalemath{1.0}{ R = \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{array} \right) } \end{equation} This operator acts on the state space of two qubits. The basis states of a single qubit can be written as $ \{\ket{0}, \ket{1}\} $. The basis states of two qubits can then be written as $ \{ \ket{00}, \ket{01}, \ket{10}, \ket{11}\} $. The operator $ R $ given in \eqref{eqn:cnot-gate} acts on this state space. The effect of this operator can be described by the circuit diagram shown in \autoref{fig:cnot-gate}, which is repeated below for the reader's convenience.
\begin{figure*}[h] \centering \includegraphics[origin=c]{cnot} \caption*{Circuit diagram for a CNOT gate} \end{figure*} Now, it is a well-known fact in classical computation that all of the gates used in Boolean logic - AND, OR, XOR, NAND and NOT - can be constructed given only the NAND gate. The NAND gate is thus said to be ``universal'' for classical computation. Similarly, using only a finite set of discrete two and one qubit gates one can construct \emph{any} unitary operator to arbitrary precision \cite[Ch. 4]{Nielsen2000Quantum}, by repeated application of the gates in the given set. One such set of universal gates consists of the CNOT gate, Hadamard gate, phase gate and the $ \pi/8 $ gate. Except for CNOT, the remaining gates all act on a single qubit. We list these gates below in the standard $ \{\ket{0}, \ket{1}\} $ basis: \begin{subequations}\label{eqn:one-qubit-gates} \begin{align} \text{Hadamard}: & \quad H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} \label{eqn:hadamard-gate} \\ \text{Phase}: & \quad S = \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix} \label{eqn:phase-gate} \\ \pi/8: & \quad T = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{pmatrix} \label{eqn:piby8-gate} \end{align} \end{subequations} What we have discovered so far is that quantum diffeomorphism invariance of spin network states implies that these states naturally carry an implementation of the entangling two qubit CNOT gate. If, in addition, it were possible to incorporate the gates mentioned in \eqref{eqn:one-qubit-gates} as some additional structure in spin-network states, then it would imply that spin-network states are universal for quantum computation.
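The gate set just listed can be checked numerically. The sketch below (variable names are ours) uses the standard $1/\sqrt{2}$ normalization that makes the Hadamard matrix unitary, and verifies the familiar relations $T^2 = S$ and $H^2 = \mathbb{1}$, together with the action of CNOT on the computational basis.

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard
S = np.array([[1, 0], [0, 1j]])                              # phase gate
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])          # pi/8 gate

def is_unitary(U):
    return np.allclose(U.conj().T @ U, np.eye(U.shape[0]))

# CNOT flips the target qubit iff the control qubit is |1>.
# Basis order: |00>, |01>, |10>, |11>
e10 = np.array([0, 0, 1, 0], dtype=complex)
e11 = np.array([0, 0, 0, 1], dtype=complex)
```

All four matrices are unitary, and `CNOT` exchanges the $\ket{10}$ and $\ket{11}$ basis vectors while leaving $\ket{00}$ and $\ket{01}$ fixed.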
Already the existence of the CNOT gate hidden within the state space of loop quantum gravity tells us that there is a deep connection between quantum computation and quantum gravity, confirming what we already know from various other independent approaches \cite{Almheiri2014Bulk,Mintun2015Bulk-Boundary,Pastawski2015Holographic,Freivogel2016Precursors,Harlow2017The-RyuTakayanagi} to the question of quantum gravity. \section{Framed Edges and Single Qubit Operators} It turns out that spin-networks can be endowed with additional topological structure in a natural manner, and the degrees of freedom of this structure can then naturally be interpreted as the action of quantum gates acting on individual qubits. This structure turns out to be required due to two independent considerations. Let us explain these in turn. In his famous paper \cite{Witten1989Quantum} Witten was the first to point out the deep connections between (topological) quantum field theory and knot theory. In particular he showed that the expectation value of the Wilson loop functional evaluated on any knotted curve $ \gamma $ in $ SU(2) $ Chern-Simons theory in $ 2+1 $ dimensions is related to the knot invariant of $ \gamma $ known as the Jones polynomial. In this work he showed that a crucial step in the calculation involved the computation of the self-linking number of a knot. This quantity is ill-defined if the knots are taken to be one-dimensional curves. In order to ``regularize'' the associated integral one has to introduce a \emph{framing} of the knot, \textit{i.e.}~ replacing the one-dimensional curve constituting the knot $ \gamma $ with a two-dimensional ribbon. The second justification for using framed ribbons rather than one-dimensional strings comes from Smolin's seminal work \cite{Smolin1995Linking} linking topological quantum field theory with the field of loop quantum gravity, which was in its infancy at the time.
By considering the action of certain Wilson loop operators acting on punctures between spin-network edges and an arbitrary two-dimensional surface, he was able to deduce that the punctures, and thereby the spin network edges, would have to be endowed with a framing. Taken together, these two lines of argument are enough to convince oneself that the full state space of loop quantum gravity should incorporate braided ribbon spin networks rather than the one-dimensional versions commonly encountered in the literature. The careful reader will have noticed that we have used such framed ribbons, rather than one-dimensional lines, to represent spin network edges in several of the figures in this paper. This is not only for aesthetic convenience but also because ultimately we wish to work with framed or ribbon spin networks. The moment we replace spin network edges with framed ribbons, we realize that we have gained another topological degree of freedom in the form of \emph{twists} which the ribbon can have. The simplest possible twist is obtained when one end of the ribbon is rotated through an angle $ \theta $ relative to the other end, in either a clockwise or counter-clockwise manner. $ \theta $ can, in principle, take any value between $ 0 $ and $ 2\pi $. Here, for simplicity, we will assume that $ \theta $ can only take the values $ \pm \pi $. As we shall see later, this value also arises naturally when considering the application of discrete symmetries of spin-networks. \autoref{fig:twists-110} shows an example of a configuration with three ribbons, with the first two having a $ +\pi $ twist and the third one a $ -\pi $ twist. \autoref{fig:braid-twists-110} shows how the twisting and braiding operations can be combined.
\begin{figure}[h] \centering \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[height=60pt]{twists-example} \caption{An example of a twist by $ \theta = + \pi $ and $ \theta = -\pi $} \label{fig:twists} \end{subfigure} \hfill \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[height=60pt]{twists-110} \caption{A configuration of three ribbons, with the first two having a twist $ \theta = +\pi $ and the last one with a twist $ \theta = -\pi $} \label{fig:twists-110} \end{subfigure} \hfill \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[height=100pt]{braid-with-twists-110} \caption{A configuration of three ribbons with braiding and twisting} \label{fig:braid-twists-110} \end{subfigure} \caption{Illustration of twisted braids} \end{figure} It is natural that, as was the case with braiding of two edges, putting a twist in a single framed edge should also correspond to the action of some unitary operator on the given edge. It is not clear at this stage precisely which form this unitary operator should take out of the one-qubit gates listed in \eqref{eqn:one-qubit-gates}: the Hadamard, phase or $ \pi/8 $ gates, or perhaps some other one qubit unitary. What is clear is that in addition to CNOT, the twisting operation will give us at least one additional single qubit unitary. 
If somehow we were able to identify additional topological degrees of freedom which could be associated with the three single qubit gates listed in \eqref{eqn:one-qubit-gates}, then we would be well placed to make the following proposition: \begin{quote} \textbf{Proposition III}: \emph{The set of framed, braided spin network states provides a complete set of states required for universal quantum computation.} \end{quote} which could alternatively be stated: \begin{quote} \textbf{Proposition III'}: \emph{The state space of loop quantum gravity is dense in the set of operators required for universal quantum computation.} \end{quote} We are now well placed to come to the central result of this work, which is the claim that the state space of loop quantum gravity, extended by the inclusion of braiding and twisting of framed edges, provides us with a means of generating GHZ states, which are a central resource for quantum error correcting codes. To do so we need only remind ourselves of the structure of the circuit which generates a GHZ state. This circuit is shown in \autoref{fig:GHZ-states}, which is reproduced below for the reader's convenience. \begin{figure}[h] \centering \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[scale=1.0]{qutrit_state} \caption{Quantum circuit for generating a GHZ state using CNOT gates} \label{fig:GHZ-states-2} \end{subfigure} \hfill \begin{subfigure}[t]{0.45\textwidth} \centering \includegraphics[height=50pt]{ghz-braid} \caption{The braid operation corresponding to the quantum circuit for generating a GHZ state. Threads are numbered in increasing order from top to bottom.
Time runs from left to right.} \label{fig:ghz-braid} \end{subfigure} \caption{Quantum circuit representation for the GHZ state (left) and the braid representation (right).} \end{figure} The interested reader can easily verify, using the explicit form of the $ R $ operator given in \eqref{eqn:cnot-gate}, that the braid operation shown in \autoref{fig:ghz-braid} generates the GHZ state, given that the first qubit starts in the state $ (\ket{0} + \ket{1})/\sqrt{2} $ and the second and third qubits start in the $ \ket{0} $ state. The shown braid configuration then corresponds to the unitary operator $ (\mbb{1}_1 \otimes R_{23})(R_{12} \otimes \mbb{1}_3) $, with the rightmost factor acting first. \section{Conclusion: Elementary Particles as Quantum Error Correcting Codes} In this work we have made the following observations. First, in order to consider the full state space of loop quantum gravity, knotting and braiding of spin-network edges needs to be taken into account. Second, the action of the operator version of small diffeomorphisms is given by the Yang-Baxter equation, whose solutions yield a two qubit unitary gate which, together with single-qubit gates, is universal for quantum computation. And, finally, the inclusion of topological degrees of freedom allows us to generate states which are essential ingredients in quantum error correcting codes. It is also interesting to note that the braid configurations shown in \autoref{fig:ghz-braid} are identical to those in the groundbreaking work by Sundance Bilson-Thompson \cite{Bilson-Thompson2005A-topological,Bilson-Thompson2006Quantum}, wherein he identified these states with leptons belonging to the first generation of the Standard Model of elementary particles.
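The verification suggested above can be carried out numerically. In the sketch below (our conventions: basis order $\ket{000},\ldots,\ket{111}$, with the braid of threads $(1,2)$ applied before that of threads $(2,3)$), the two nearest-neighbour CNOTs map the product state $\tfrac{1}{\sqrt 2}(\ket{0}+\ket{1})\otimes\ket{0}\otimes\ket{0}$ to the GHZ state.

```python
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
I2 = np.eye(2)

R12 = np.kron(CNOT, I2)   # braid of threads 1 and 2
R23 = np.kron(I2, CNOT)   # braid of threads 2 and 3

plus = np.array([1, 1]) / np.sqrt(2)   # (|0> + |1>)/sqrt(2)
zero = np.array([1.0, 0.0])
psi_in = np.kron(plus, np.kron(zero, zero))

psi_out = R23 @ (R12 @ psi_in)         # first braid (1,2), then (2,3)

ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)       # (|000> + |111>)/sqrt(2)
```

Applying the two braids in the opposite order would leave the third qubit untouched, which is why the ordering convention matters.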
\begin{figure}[h] \centering \includegraphics[height=100pt]{bilson-thompson-model} \caption{Assignments of configurations of three-ribbon twisted braids with various members of the first generation of leptons in the Standard Model (figure credit \cite{Bilson-Thompson2005A-topological,Bilson-Thompson2006Quantum})} \label{fig:bt-model} \end{figure} Considering \emph{all} possible configurations of the elements of the braid group on three strands with twists, one ends up with the set of configurations shown in \autoref{fig:bt-model}. By making the assumption that a $ \theta = \pm \pi $ twist on a ribbon corresponds to an electric charge of $ \pm 1/3 $ carried by that ribbon, we find that there are precisely four configurations with total charge $ \pm 1 $, $ \pm 2/3 $ and $ \pm 1/3 $ each. There are two elements which are uncharged (without any twists on any of the ribbons). The charged elements can be identified with an electron (charge $ - 1 $), an up quark (charge $ +2/3 $), a down quark (charge $ -1/3 $) and their antiparticles. After this assignment we are still left with two configurations for each value of the total charge. These two configurations are mirror reflections of each other. This motivates us to identify one set of these configurations with the left handed fermions $ (e^\pm_L, u_L, \bar u_L, d_L, \bar d_L) $ and the other with the right handed fermions $ (e^\pm_R, u_R, \bar u_R, d_R, \bar d_R) $. The two uncharged configurations form a particle-antiparticle pair (applying the operation of braid concatenation to the two configurations gives us the identity braid) and are identified with the neutrino ($ \nu $) and the anti-neutrino ($ \bar \nu $). There is a certain elegance in associating elementary particles with the code words of a quantum error correcting code. The vacuum is filled with fluctuating quantum fields. However, we don't identify all of the possible fluctuations of the quantum vacuum with elementary particles.
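Under the charge assignment just described (a twist of $\pm\pi$ contributing $\pm 1/3$ to the electric charge), the twist patterns of three ribbons can be enumerated. The sketch below follows the no-mixed-sign rule of the Bilson-Thompson picture and ignores the further doubling by braid chirality, so its counts (1, 3, 3, 1 per sign of the charge) are per handedness; the code and names are illustrative only.

```python
from collections import Counter
from fractions import Fraction
from itertools import product

THIRD = Fraction(1, 3)
TWISTS = (-THIRD, Fraction(0), THIRD)   # twist of -pi, no twist, +pi per ribbon

def mixes_signs(pattern):
    """Patterns combining + and - twists are excluded in this picture."""
    return any(t > 0 for t in pattern) and any(t < 0 for t in pattern)

patterns = [p for p in product(TWISTS, repeat=3) if not mixes_signs(p)]
charges = Counter(sum(p) for p in patterns)
# Per handedness: one neutral pattern, three each at charge +-1/3 and +-2/3,
# and one each at charge +-1 (electron/positron-like).
```

Using exact `Fraction` arithmetic avoids any floating-point ambiguity in grouping patterns by total charge.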
There is a certain element of stability that elementary particles possess. If we shoot an electron from one end of a long vacuum tube - from which the atmosphere has been evacuated - then, in the absence of any collisions with stray particles passing by, we expect to find an electron arriving at the other end of the tube. This is true even though the electron has to interact with the fluctuations of the quantum vacuum in its journey through the tube. In the language of error correction, such a state, which is immune to fluctuations in its environment, is known as a noiseless subsystem \cite{Zanardi1997Noiseless,Kribs2005Unified}. It is precisely these noiseless subsystems which serve as the elements of the \emph{code space}. It would only be natural if elementary particles were to be identified \cite{Kribs2005Geometry,Konopka2006Constrained} with the noiseless subsystems of an underlying quantum gravity theory. It is important to note that several prior authors \cite{Bilson-Thompson2006Quantum,Wan2007Braid,Wan2009Effective,Vaid2010Embedding,Hackett2011aInvariants,Hackett2011bInvariants} have focused on the role of ribbon spin networks in loop quantum gravity. However, to the best of our knowledge, this work and a prior paper by the author \cite{Vaid2013Elementary} are the only ones to have made the association between ribbon networks, elementary particles and quantum error correcting codes. In recent work Freidel and collaborators \cite{Freidel2017Loop,Freidel2019Gravitational} have suggested assigning a $ U(1)^3 $ Kac-Moody algebra to the punctures where a spin network edge intersects a two-dimensional surface. They further argue that this implies that the edges of spin networks should be considered as tubes rather than as one dimensional curves. This sort of structure was first suggested by the present author in a much older work \cite{Vaid2010Embedding}, relying on arguments centered around discrete symmetries of spin networks.
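The notion of a noiseless subsystem can be illustrated with a textbook decoherence-free subspace (an example of ours, not taken from the cited works): under collective dephasing acting identically on two qubits, any state encoded in $\mathrm{span}\{\ket{01},\ket{10}\}$ is left untouched, while states outside that subspace pick up phases.

```python
import numpy as np

def collective_dephasing(theta):
    """exp(i*theta*Z) applied to BOTH qubits: a collective 'environment' error."""
    u = np.diag([np.exp(1j * theta), np.exp(-1j * theta)])
    return np.kron(u, u)

# Protected code space: span{|01>, |10>}  (basis order |00>, |01>, |10>, |11>)
ket01 = np.array([0, 1, 0, 0], dtype=complex)
ket10 = np.array([0, 0, 1, 0], dtype=complex)
logical = (ket01 + 1j * ket10) / np.sqrt(2)   # an arbitrary encoded state

noisy = collective_dephasing(0.7) @ logical   # phases cancel on |01> and |10>
```

The opposite phases acquired by the two qubits cancel exactly on $\ket{01}$ and $\ket{10}$, so the encoded state is immune to this noise channel, in the spirit of \cite{Zanardi1997Noiseless}.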
There might exist a direct link between the work of Freidel and co-workers and the present work. This is suggested by a close relation between solutions of the Yang-Baxter equation and Kac-Moody algebras \cite{de-Vega1986Integrable}. This, however, must be left for future work. \bibliographystyle{jhep}
\section{introduction} The Hubbard model was proposed in 1963 \cite{hubbard}, and is now believed to be related to the theoretical mechanism of high temperature superconductivity. In the Hubbard model, besides single-electron hopping terms, there are also on-site interactions for electrons occupying the same lattice site. Although the Hubbard model looks simple, it is actually difficult to solve except in one dimension. In 1990 Yang and Zhang found that the model has an $SO_4$ symmetry and constructed many eigenstates that are independent of the on-site interaction strength $U$ \cite{yangzhang}. They believed that they had found all the $U$ independent eigenstates of the model, but they did not know how to prove the statement. Here we show that their statement is incorrect by giving many new $U$ independent eigenstates of the Hubbard model. Our results are demonstrated for the two-dimensional Hubbard model, but they can be easily extended to higher dimensions. The Hubbard model on a periodic two-dimensional $L\times L$ lattice, where $L$ is even, is defined by the Hamiltonian \begin{equation} H=-t\sum_{\langle\mathbf{r},\mathbf{r}'\rangle}(a_\mathbf{r}^\dag a_{\mathbf{r}'}+b_\mathbf{r}^\dag b_{\mathbf{r}'})+U\sum_\mathbf{r}a_\mathbf{r}^\dag a_\mathbf{r}b_\mathbf{r}^\dag b_\mathbf{r}, \end{equation} where $\langle\mathbf{r},\mathbf{r}'\rangle$ means the sum is over nearest neighbors, $a_\mathbf{r}$ and $b_\mathbf{r}$ are annihilation operators for spin-up and spin-down electrons in coordinate space, respectively, and $\mathbf{r}=(x,y)$ designates the $L\times L$ lattice sites with $x$ and $y$ being integers $0,1,\ldots,L-1$. The operators obey the usual anticommutation relations for Fermi operators. The parameter $U$ is the on-site interaction strength, which is usually positive to represent the Coulomb repulsion between electrons on the same site. 
Define the annihilation operators $a_\mathbf{k}$ and $b_\mathbf{k}$ in momentum space as \begin{equation} a_\mathbf{k}=\frac{1}{L}\sum_\mathbf{r}a_\mathbf{r}e^{-i\mathbf{k}\cdot \mathbf{r}}, b_\mathbf{k}=\frac{1}{L}\sum_\mathbf{r}b_\mathbf{r}e^{-i\mathbf{k}\cdot \mathbf{r}}, \end{equation} where $\mathbf{k}=(k_x,k_y)$, and $k_x$ and $k_y$ are integer multiples of $2\pi/L$ restricted to the range $-\pi$ to $\pi$. The Hamiltonian $H$ can be rewritten as \begin{equation} H=\sum_\mathbf{k}E(\mathbf{k})(a_\mathbf{k}^\dag a_\mathbf{k}+b_\mathbf{k}^\dag b_\mathbf{k})+U\sum_\mathbf{r}a_\mathbf{r}^\dag a_\mathbf{r}b_\mathbf{r}^\dag b_\mathbf{r}, \end{equation} where $E(\mathbf{k})=-2t(\cos k_x+\cos k_y)$. It is difficult to construct eigenstates of $H$ when neither $t$ nor $U$ is zero. Important progress was made in 1989, when Yang constructed many eigenstates of $H$ through a mechanism called $\eta$ pairing \cite{yang}. In the following year, based on the $\eta$ pairing, Yang and Zhang showed that $H$ has an $SO_4$ symmetry and constructed more eigenstates of $H$ \cite{yangzhang}. To our knowledge, no new eigenstates of $H$ have been constructed since then. Before presenting our new eigenstates, it is necessary to review the eigenstates constructed by Yang and Zhang \cite{yangzhang}. Their result is described through spin operators and pseudospin operators. The spin operators are \begin{equation} S_+=\sum_\mathbf{r}a_\mathbf{r}^\dag b_\mathbf{r},S_-=S_+^\dag,S_z=\frac{1}{2}\sum_\mathbf{r}(a_\mathbf{r}^\dag a_\mathbf{r}-b_\mathbf{r}^\dag b_\mathbf{r}). 
\end{equation} The pseudospin operators are \begin{equation} J_+=\sum_\mathbf{r}e^{i \bm{\pi} \cdot \mathbf{r}}a_\mathbf{r}^\dag b_\mathbf{r}^\dag,J_-=J_+^\dag,J_z=\frac{1}{2}(N-M), \end{equation} where $\bm{\pi}=(\pi,\pi)$, $N=\sum_\mathbf{r}(a_\mathbf{r}^\dag a_\mathbf{r}+b_\mathbf{r}^\dag b_\mathbf{r})$, $M=L^2$ is the total number of lattice sites, and $J_+$ is the $\eta$ pairing operator constructed by Yang \cite{yang}. The operators $H$, $S^2$, $S_z$, $J^2$ and $J_z$ commute with each other and have common eigenstates, where \begin{equation} S^2=S_+S_-+S_z^2-S_z, J^2=J_+J_- +J_z^2-J_z. \end{equation} Suppose $\ket{\Theta}$ is a common eigenstate of the operators $H$, $S^2$, $S_z$, $J^2$ and $J_z$ with eigenvalues $E$, $s(s+1)$, $s_z$, $j(j+1)$ and $j_z$, respectively; then $J_+^mS_+^n\ket{\Theta}$ is also a common eigenstate with eigenvalues $E+mU$, $s(s+1)$, $s_z+n$, $j(j+1)$ and $j_z+m$, respectively, for $m\leq j-j_z$ and $n\leq s-s_z$. Yang and Zhang consider the state \begin{equation} \label{xxx} \ket{\Upsilon}=b_{\mathbf{k}_1}^\dag b_{\mathbf{k}_2}^\dag\ldots b_{\mathbf{k}_{N_b}}^\dag \ket{0}, N_b=0, 1, \ldots, M, \end{equation} where $\ket{0}$ is the vacuum state; $\ket{\Upsilon}$ is obviously a common eigenstate of the operators $H$, $S^2$, $S_z$, $J^2$ and $J_z$. Since $S_-\ket{\Upsilon}=0$ and $J_-\ket{\Upsilon}=0$, for $\ket{\Upsilon}$ we have \begin{equation} s=-s_z=\frac{1}{2}N_b, j=-j_z=\frac{1}{2}(M-N_b). \end{equation} Therefore, for a given $N_b$, the states $J_+^mS_+^n\ket{\Upsilon}$ are common eigenstates of $H$, $S^2$, $S_z$, $J^2$ and $J_z$ for $m=0,1,\ldots,M-N_b$ and $n=0,1,\ldots,N_b$. These are the eigenstates constructed by Yang and Zhang, which are obviously $U$ independent and are believed to be the only $U$ independent eigenstates of $H$ \cite{yangzhang}. 
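For later reference, it is worth recalling the two commuting $su(2)$ algebras behind this $SO_4$ symmetry and their commutators with $H$; these are standard relations, reproduced here for convenience rather than derived: \begin{equation} [S_z,S_\pm]=\pm S_\pm, [S_+,S_-]=2S_z, [J_z,J_\pm]=\pm J_\pm, [J_+,J_-]=2J_z, [S_a,J_b]=0, \end{equation} together with $[H,S_\pm]=[H,S_z]=0$ and $[H,J_+]=UJ_+$. The kinetic term commutes with $J_+$ because $E(\mathbf{k})+E(\bm{\pi}-\mathbf{k})=0$, while the interaction term contributes $UJ_+$; this is why applying $J_+$ shifts the energy by $U$ while applying $S_+$ leaves it unchanged.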
From the way they construct eigenstates, we know that if an eigenstate of $H$ with no double occupation in coordinate space is not an eigenstate of $S^2$, it will be a new eigenstate that was not found by them, because their eigenstates with no double occupation, i.e., the states $S_+^n\ket{\Upsilon}$ for different $n$ and $\ket{\Upsilon}$, and their superpositions, are always eigenstates of $S^2$ when the number of electrons is fixed. \begin{figure}[ht] \centering \includegraphics[width=5cm]{Figure1} \caption{Lattice sites in momentum space. The points $\mathbf{k}$ on the red lines have $E(\mathbf{k})=0$, where $L=8$ is shown as an example.} \end{figure} Our construction of new eigenstates of $H$ begins with the observation that there are many single-electron states in momentum space with zero energy. Figure 1 shows the lattice sites in momentum space, where the points $\mathbf{k}=(k_x,k_y)$ with $E(\mathbf{k})=0$ are located on the red lines. It can be seen that the number of points $\mathbf{k}$ with $E(\mathbf{k})=0$ grows linearly with $L$. Note that if $E(\mathbf{k})=0$ then also $E(\mathbf{k}+\bm{\pi})=0$. Define \begin{equation} \label{def} A_{\mathbf{k}\pm}=a_{\mathbf{k}}\pm a_{\mathbf{k}+\bm{\pi}}, B_{\mathbf{k}\pm}=b_{\mathbf{k}}\pm b_{\mathbf{k}+\bm{\pi}}, \end{equation} and write them in terms of operators in coordinate space \begin{align} A_{\mathbf{k}\pm}&=\frac{1}{L}\sum_\mathbf{r}a_\mathbf{r}e^{-i\mathbf{k}\cdot \mathbf{r}}(1 \pm e^{-i\bm{\pi} \cdot \mathbf{r}}), \\ B_{\mathbf{k}\pm}&=\frac{1}{L}\sum_\mathbf{r}b_\mathbf{r}e^{-i\mathbf{k}\cdot \mathbf{r}}(1 \pm e^{-i\bm{\pi} \cdot \mathbf{r}}), \end{align} where $e^{-i\bm{\pi} \cdot \mathbf{r}}=e^{-i\pi(x+y)}$, which is $1$ when $x+y$ is even, and $-1$ when $x+y$ is odd. Therefore $A_{\mathbf{k}+}$ ($B_{\mathbf{k}+}$) is only a linear combination of $a_\mathbf{r}$ ($b_\mathbf{r}$) with $\mathbf{r}=(x,y)$ indicated by black points in Fig. 
2, while $A_{\mathbf{k}-}$ ($B_{\mathbf{k}-}$) is only a linear combination of $a_\mathbf{r}$ ($b_\mathbf{r}$) with $\mathbf{r}=(x,y)$ indicated by red points in Fig. 2. Now we make the statement that if the state \begin{equation} \label{main} \ket{\Psi}=A_{\mathbf{k}_1+}^\dag \ldots A_{\mathbf{k}_{N_a}+}^\dag B_{\mathbf{q}_{1}-}^\dag \ldots B_{\mathbf{q}_{N_b}-}^\dag \ket{0} \end{equation} is not zero, it will be an eigenstate of $H$ with eigenvalue $E=0$, under the conditions $E(\mathbf{k_i})=0$ for $i=1,\ldots,N_a$ and $E(\mathbf{q_j})=0$ for $j=1,\ldots,N_b$. In the state $\ket{\Psi}$ the $N_a$ spin-up electrons are on the black points and the $N_b$ spin-down electrons are on the red points in Fig. 2, so there is no double occupation in coordinate space. Therefore the statement is obvious. The state $\ket{\Psi}$ is $U$ independent and it is different from the eigenstates given by Yang and Zhang \cite{yangzhang}, because it is not an eigenstate of $S^2$. \begin{figure}[ht] \centering \includegraphics[width=5cm]{Figure2} \caption{Lattice sites in coordinate space. The black points are associated with operators $A_{\mathbf{k}+}$ and $B_{\mathbf{k}+}$, and the red points are associated with operators $A_{\mathbf{k}-}$ and $B_{\mathbf{k}-}$, where $L=8$ is shown as an example.} \end{figure} The above method that we construct eigenstates of $H$ can be slightly modified as follows. In the definition of $A_{\mathbf{k}\pm}$ and $B_{\mathbf{k}\pm}$ in equation (\ref{def}), if we change $\bm{\pi}=(\pi,\pi)$ to $\bm{\pi}_1=(\pi,0)$, then $A_{\mathbf{k}+}$ ($B_{\mathbf{k}+}$) will be only a linear combination of $a_\mathbf{r}$ ($b_\mathbf{r}$) where $x$ is even, while $A_{\mathbf{k}-}$ ($B_{\mathbf{k}-}$) will be only a linear combination of $a_\mathbf{r}$ ($b_\mathbf{r}$) where $x$ is odd. 
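Both the complementary sublattice support of $A_{\mathbf{k}\pm}$ and $B_{\mathbf{k}\pm}$ (for $\bm{\pi}$ as well as $\bm{\pi}_1$ and $\bm{\pi}_2$) and the linear growth with $L$ of the number of zero-energy momenta can be verified with a short script; this is only an illustrative check, not part of the argument:

```python
import cmath, math

def momenta(L):
    # Grid momenta 2*pi*n/L, mapped into the range (-pi, pi].
    return [2 * math.pi * n / L - (2 * math.pi if n > L // 2 else 0) for n in range(L)]

def energy(kx, ky, t=1.0):
    # E(k) = -2t (cos kx + cos ky)
    return -2 * t * (math.cos(kx) + math.cos(ky))

# The number of zero-energy momenta grows linearly with L (2L - 2 for even L).
for L in (4, 8, 12):
    ks = momenta(L)
    zero_modes = sum(1 for kx in ks for ky in ks if abs(energy(kx, ky)) < 1e-9)
    assert zero_modes == 2 * L - 2

# A_{k+-} carries the coordinate-space factor (1 +- e^{-i pi_vec . r}); the
# '+' and '-' supports are complementary sublattices for each choice of pi_vec.
L = 8
for pi_vec in [(math.pi, math.pi), (math.pi, 0.0), (0.0, math.pi)]:
    plus, minus = set(), set()
    for x in range(L):
        for y in range(L):
            phase = cmath.exp(-1j * (pi_vec[0] * x + pi_vec[1] * y))
            if abs(1 + phase) > 1e-9:
                plus.add((x, y))
            if abs(1 - phase) > 1e-9:
                minus.add((x, y))
    assert plus.isdisjoint(minus) and len(plus) + len(minus) == L * L
print("sublattice and zero-mode checks passed")
```

For even $L$ the zero-energy momenta lie on the lines $k_y=\pm(\pi-k_x)$, which gives the $2L-2$ counted above.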
Therefore $\ket{\Psi}$ in equation (\ref{main}) is still a state with no double occupation in coordinate space, and it will be an eigenstate of $H$ if $E(\mathbf{k_i})=E(\mathbf{k_i}+\bm{\pi}_1)$ for $i=1,\ldots,N_a$ and $E(\mathbf{q_j})=E(\mathbf{q_j}+\bm{\pi}_1)$ for $j=1,\ldots,N_b$, which is equivalent to the condition that the $x$ components of the vectors $\mathbf{k_i}$ and $\mathbf{q_j}$ are $\pm \pi/2$. In a similar way, in the definition of $A_{\mathbf{k}\pm}$ and $B_{\mathbf{k}\pm}$ in equation (\ref{def}), if we change $\bm{\pi}=(\pi,\pi)$ to $\bm{\pi}_2=(0,\pi)$, the state $\ket{\Psi}$ in equation (\ref{main}) is still an eigenstate of $H$ under the condition that the $y$ components of the vectors $\mathbf{k_i}$ and $\mathbf{q_j}$ are $\pm \pi/2$. The eigenstates we construct are product states, which are eigenstates of $H$, $S_z$, $J^2$ and $J_z$, but not eigenstates of $S^2$. Now we outline a method to obtain common eigenstates of $H$, $S^2$, $S_z$, $J^2$ and $J_z$ from the states we have constructed. We take the state $\ket{\Psi}$ in equation (\ref{main}) as an example and assume $N_a\geq N_b$ without loss of generality. The state $\ket{\Psi}$ is an eigenstate of $H$, $S_z$, $J^2$ and $J_z$ with $s_z=(N_a-N_b)/2$, $j=(M-N_a-N_b)/2$ and $j_z=-j$, and it can be written as a superposition of eigenstates of $S^2$ with $s=s_z,s_z+1,\ldots,(N_a+N_b)/2$, i.e., \begin{equation} \label{y1} \ket{\Psi}=\sum_{s=s_z}^{(N_a+N_b)/2}\ket{\Psi (s,s_z,j,j_z)}, \end{equation} where $\ket{\Psi (s,s_z,j,j_z)}$ is a common eigenstate of $H$, $S^2$, $S_z$, $J^2$ and $J_z$. 
Noting that \begin{align} & S_+\ket{\Psi (s,s_z,j,j_z)}=\sqrt{f(s,s_z)}\ket{\Psi (s,s_z+1,j,j_z)}, \\ & S_-\ket{\Psi (s,s_z+1,j,j_z)}=\sqrt{f(s,s_z)}\ket{\Psi (s,s_z,j,j_z)}, \end{align} with $f(s,s_z)=s(s+1)-s_z(s_z+1)$, we have \begin{equation} S_-^n S_+^n\ket{\Psi (s,s_z,j,j_z)}=h(s,s_z,n) \ket{\Psi (s,s_z,j,j_z)} \end{equation} with $h(s,s_z,n)=\prod_{i=s_z}^{s_z+n-1}f(s,i)$ when $s_z+n\leq s$, and $h(s,s_z,n)=0$ when $s_z+n > s$. From equation (\ref{y1}) we can then obtain \begin{equation} \label{y2} S_-^n S_+^n \ket{\Psi}=\sum_{s=s_z+n}^{(N_a+N_b)/2}h(s,s_z,n) \ket{\Psi (s,s_z,j,j_z)} \end{equation} for $n=1,\ldots,N_b$, which together with equation (\ref{y1}) gives us a way to calculate $\ket{\Psi (s,s_z,j,j_z)}$ from $\ket{\Psi}$. For any $s\neq (N_a+N_b)/2$, if the calculated $\ket{\Psi (s,s_z,j,j_z)}$ is not zero, it will be an eigenstate of $H$ that was not found by Yang and Zhang, because the eigenstates found by them with $N_a+N_b$ electrons and no double occupation in coordinate space have the total spin $s=(N_a+N_b)/2$. Once the eigenstate $\ket{\Psi (s,s_z,j,j_z)}$ is obtained, more eigenstates of $H$ can be generated by applying $S_+$, $S_-$, $J_+$ and $J_-$ to it \cite{yangzhang}. Now we give an example to demonstrate how we calculate the common eigenstates $\ket{\Psi (s,s_z,j,j_z)}$ of $H$, $S^2$, $S_z$, $J^2$ and $J_z$ from the state $\ket{\Psi}$ in equation (\ref{main}). The simplest nontrivial case is $N_a=1$ and $N_b=1$, which leads to $s_z=0$, $j=(M-2)/2$ and $j_z=(2-M)/2$. From equation (\ref{y1}) and equation (\ref{y2}) we have \begin{align} & \ket{\Psi}=\ket{\Psi (0,0,j,j_z)}+\ket{\Psi (1,0,j,j_z)}, \\ & S_- S_+ \ket{\Psi}=h(1,0,1) \ket{\Psi (1,0,j,j_z)}, \end{align} where $h(1,0,1)=f(1,0)=2$. Hence \begin{equation} \label{y3} 2\ket{\Psi (0,0,j,j_z)}=2\ket{\Psi}-S_- S_+ \ket{\Psi}. 
\end{equation} Substituting $\ket{\Psi}=A_{\mathbf{k}_1+}^\dag B_{\mathbf{q}_{1}-}^\dag \ket{0}$ into equation (\ref{y3}), we get \begin{equation} 2\ket{\Psi (0,0,j,j_z)}=(A_{\mathbf{k}_1+}^\dag B_{\mathbf{q}_{1}-}^\dag - B_{\mathbf{k}_1+}^\dag A_{\mathbf{q}_{1}-}^\dag) \ket{0}, \end{equation} which is a singlet state with $s=0$. Recalling the definition of $A_{\mathbf{k}\pm}$ and $B_{\mathbf{k}\pm}$ in equation (\ref{def}), we have \begin{equation} 2\ket{\Psi (0,0,j,j_z)}=C^\dag\ket{0}+D^\dag\ket{0}, \end{equation} where \begin{align} & C^\dag=a_{\mathbf{k}_1}^\dag b_{\mathbf{q}_1}^\dag+a_{\mathbf{q}_1}^\dag b_{\mathbf{k}_1}^\dag-a_{\mathbf{k}_1+\bm{\pi}}^\dag b_{\mathbf{q}_1+\bm{\pi}}^\dag-a_{\mathbf{q}_1+\bm{\pi}}^\dag b_{\mathbf{k}_1+\bm{\pi}}^\dag, \\ & D^\dag=a_{\mathbf{k}_1+\bm{\pi}}^\dag b_{\mathbf{q}_1}^\dag+a_{\mathbf{q}_1}^\dag b_{\mathbf{k}_1+\bm{\pi}}^\dag-a_{\mathbf{k}_1}^\dag b_{\mathbf{q}_1+\bm{\pi}}^\dag-a_{\mathbf{q}_1+\bm{\pi}}^\dag b_{\mathbf{k}_1}^\dag. \end{align} Both $C^\dag\ket{0}$ and $D^\dag\ket{0}$ are common eigenstates of $H$, $S^2$, $S_z$, $J^2$ and $J_z$ with $s=0$ if they are not zero, due to the conservation of total momentum (mod $2\bm{\pi}$). The state $C^\dag\ket{0}$ will be zero only when $\mathbf{k}_1+\bm{\pi}=\mathbf{q}_1$ mod ($2\bm{\pi}$), which is equivalent to $\mathbf{q}_1+\bm{\pi}=\mathbf{k}_1$ mod ($2\bm{\pi}$). The state $D^\dag\ket{0}$ will be zero only when $\mathbf{k}_1=\mathbf{q}_1$. Therefore $C^\dag\ket{0}$ and $D^\dag\ket{0}$ cannot both be zero. So far, we have constructed eigenstates of $H$ using only operators in momentum space with $E(\mathbf{k})=0$, $k_x=\pm \pi/2$ or $k_y=\pm \pi/2$. Now we construct some eigenstates of $H$ using more general operators. 
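Before doing so, the coefficients $h(s,s_z,n)$ used in the example above can be checked with a few lines of code (an illustrative sketch for integer $s$ and $s_z$, as in the example). Note that the product form alone suffices: for $s_z+n>s$ the product contains the factor $f(s,s)=0$ and vanishes automatically.

```python
def f(s, sz):
    # f(s, s_z) = s(s+1) - s_z(s_z+1), the SU(2) ladder coefficient.
    return s * (s + 1) - sz * (sz + 1)

def h(s, sz, n):
    # h(s, s_z, n) = prod_{i=s_z}^{s_z+n-1} f(s, i); the product vanishes
    # automatically when s_z + n > s because it contains f(s, s) = 0.
    result = 1
    for i in range(sz, sz + n):
        result *= f(s, i)
    return result

assert h(1, 0, 1) == 2   # the value h(1,0,1) = f(1,0) = 2 used in the text
assert h(1, 0, 2) == 0   # S_+^2 annihilates an s = 1, s_z = 0 state
print("h coefficients consistent")
```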
We make the statement that the state \begin{equation} \label{main2} \ket{\Phi}_{\bm{\pi}}=(a_{\mathbf{k}_1+\bm{\pi}}^\dag b_{\mathbf{q}_1}^\dag -a_{\mathbf{k}_1}^\dag b_{\mathbf{q}_1+\bm{\pi}}^\dag ) \ket{0} \end{equation} is an eigenstate of $H$ with eigenvalue $E=0$, under the condition $E(\mathbf{k}_1)=E(\mathbf{q}_1)$. To prove our statement we need to show that $\ket{\Phi}_{\bm{\pi}}$ has no double occupation in coordinate space. This becomes obvious when it is written in terms of operators in coordinate space \begin{equation} \ket{\Phi}_{\bm{\pi}}=\frac{1}{L^2}\sum_{\mathbf{r}_1}\sum_{\mathbf{r}_2} a_{\mathbf{r}_1}^\dag b_{\mathbf{r}_2}^\dag e^{i\mathbf{k}_1 \cdot \mathbf{r}_1} e^{i\mathbf{q}_1 \cdot \mathbf{r}_2} g(\mathbf{r}_1,\mathbf{r}_2) \ket{0}, \end{equation} where $g(\mathbf{r}_1,\mathbf{r}_2)=e^{i\bm{\pi} \cdot \mathbf{r}_1}-e^{i\bm{\pi} \cdot \mathbf{r}_2}$, which is zero when $\mathbf{r}_1=\mathbf{r}_2$. When $\mathbf{q}_1\neq \mathbf{k}_1$, the state $\ket{\Phi}_{\bm{\pi}}$ is not an eigenstate of $S^2$, and thus not one of the eigenstates of $H$ found by Yang and Zhang. When $\mathbf{q}_1=-\mathbf{k}_1$ mod ($2\bm{\pi}$), the condition $E(\mathbf{k}_1)=E(\mathbf{q}_1)$ is satisfied, but the state $\ket{\Phi}_{\bm{\pi}}$ has already been found by Yang because it has the total momentum $\bm{\pi}$ \cite{yang}. However, for any $\mathbf{k}_1=(k_{1x},k_{1y})\neq \mathbf{0},\bm{\pi}$ there is a $\mathbf{q}_1 \neq \pm\mathbf{k}_1$ mod ($2\bm{\pi}$) satisfying $E(\mathbf{k}_1)=E(\mathbf{q}_1)$, because $\mathbf{q}_1=(q_{1x},q_{1y})$ can be $(k_{1y},k_{1x})$, $(k_{1y},-k_{1x})$, $(-k_{1y},k_{1x})$, $(-k_{1y},-k_{1x})$, $(-k_{1x},k_{1y})$ or $(k_{1x},-k_{1y})$. Therefore $\ket{\Phi}_{\bm{\pi}}$ represents some new eigenstates of $H$. 
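The existence of these partners rests only on the symmetries $E(k_x,k_y)=E(\pm k_x,\pm k_y)=E(\pm k_y,\pm k_x)$ of the dispersion, which can be spot-checked numerically (an illustrative check only):

```python
import math, random

def energy(kx, ky, t=1.0):
    # E(k) = -2t (cos kx + cos ky)
    return -2 * t * (math.cos(kx) + math.cos(ky))

random.seed(0)
for _ in range(100):
    kx = random.uniform(-math.pi, math.pi)
    ky = random.uniform(-math.pi, math.pi)
    # The partners listed in the text all share the energy of (kx, ky).
    for qx, qy in [(ky, kx), (ky, -kx), (-ky, kx), (-ky, -kx),
                   (-kx, ky), (kx, -ky)]:
        assert abs(energy(qx, qy) - energy(kx, ky)) < 1e-12
print("all listed partners are degenerate with (kx, ky)")
```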
We note that in the definition of $\ket{\Phi}_{\bm{\pi}}$ in equation (\ref{main2}), if we change $\bm{\pi}=(\pi,\pi)$ to $\bm{\pi}_1=(\pi,0)$, then the new state $\ket{\Phi}_{\bm{\pi}_1}$ will be an eigenstate of $H$ under the condition $k_{1x}=\pm q_{1x}$. Similarly, in the definition of $\ket{\Phi}_{\bm{\pi}}$ in equation (\ref{main2}), if we change $\bm{\pi}=(\pi,\pi)$ to $\bm{\pi}_2=(0,\pi)$, then the new state $\ket{\Phi}_{\bm{\pi}_2}$ will be an eigenstate of $H$ under the condition $k_{1y}=\pm q_{1y}$. The above method to construct eigenstates of $H$ can be generalized as follows. Consider the state \begin{equation} \label{main3} \ket{\Phi}_{\bm{\alpha}}=(a_{\mathbf{k}_1+\bm{\alpha}}^\dag b_{\mathbf{q}_1}^\dag -a_{\mathbf{k}_1}^\dag b_{\mathbf{q}_1+\bm{\alpha}}^\dag )\ket{0}, \end{equation} which, like $\ket{\Phi}_{\bm{\pi}}$, has no double occupation in coordinate space. It will be an eigenstate of $H$ if the condition \begin{equation} E(\mathbf{k}_1+\bm{\alpha})+E(\mathbf{q}_1)=E(\mathbf{k}_1)+E(\mathbf{q}_1+\bm{\alpha}) \end{equation} is satisfied. When $\mathbf{k}_1=\mathbf{q}_1$ the condition is satisfied, but the state has been found by Yang and Zhang because it is an eigenstate of $S^2$ with $s=1$ \cite{yangzhang}. However, when $\mathbf{k}_1\neq \mathbf{q}_1$ the condition can be satisfied in the following cases: (1) $\bm{\alpha}=(\alpha_x,0)$, $k_{1x}=q_{1x}$, $k_{1y}=-q_{1y}$; (2) $\bm{\alpha}=(0,\alpha_y)$, $k_{1x}=-q_{1x}$, $k_{1y}=q_{1y}$; (3) $\bm{\alpha}=(\alpha_x,\alpha_y)$ with $\alpha_x=\alpha_y$, $k_{1x}=q_{1y}$, $k_{1y}=q_{1x}$; (4) $\bm{\alpha}=(\alpha_x,\alpha_y)$ with $\alpha_x=-\alpha_y$, $k_{1x}=-q_{1y}$, $k_{1y}=-q_{1x}$. Therefore $\ket{\Phi}_{\bm{\alpha}}$ can represent some new eigenstates of $H$ that have not been found before. Note that in the above four cases, there are $E(\mathbf{k}_1)=E(\mathbf{q}_1)$, $E(\mathbf{k}_1+\bm{\alpha})=E(\mathbf{q}_1+\bm{\alpha})$, and $\mathbf{k}_1+\mathbf{q}_1$ is proportional to the vector $\bm{\alpha}$. 
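These four cases can also be spot-checked numerically against the condition $E(\mathbf{k}_1+\bm{\alpha})+E(\mathbf{q}_1)=E(\mathbf{k}_1)+E(\mathbf{q}_1+\bm{\alpha})$ (an illustrative sketch, not part of the argument):

```python
import math, random

def energy(kx, ky, t=1.0):
    # E(k) = -2t (cos kx + cos ky)
    return -2 * t * (math.cos(kx) + math.cos(ky))

random.seed(1)
for _ in range(200):
    k1x = random.uniform(-math.pi, math.pi)
    k1y = random.uniform(-math.pi, math.pi)
    a = random.uniform(-math.pi, math.pi)
    # (alpha, q1) for the four cases listed in the text.
    cases = [((a, 0.0), (k1x, -k1y)),    # case (1)
             ((0.0, a), (-k1x, k1y)),    # case (2)
             ((a, a), (k1y, k1x)),       # case (3)
             ((a, -a), (-k1y, -k1x))]    # case (4)
    for (ax, ay), (qx, qy) in cases:
        lhs = energy(k1x + ax, k1y + ay) + energy(qx, qy)
        rhs = energy(k1x, k1y) + energy(qx + ax, qy + ay)
        assert abs(lhs - rhs) < 1e-12
print("all four cases satisfy the eigenstate condition")
```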
When $\mathbf{k}_1+\mathbf{q}_1+\bm{\alpha}=\bm{\pi}$ mod ($2 \bm{\pi}$), the state has already been found by Yang because it has a total momentum $\bm{\pi}$ \cite{yang}; however, this can only occasionally happen in cases (3) and (4) above. Some new eigenstates of $H$ can be constructed in the spirit of $\eta$ pairing. Suppose $d$ is an integer and define the operators \begin{equation} \label{main7} G_{d}=\sum_{k_x} e^{ik_x d}b_{(k_x,\pi-k_x)}=\sum_x e^{-i\pi (x-d)}b_{(x,x-d)}, \end{equation} where $b_{(k_x,\pi-k_x)}$ is the operator $b_{\mathbf{k}}$ in momentum space with $\mathbf{k}=(k_x,\pi-k_x)$, and $b_{(x,x-d)}$ is the operator $b_{\mathbf{r}}$ in coordinate space with $\mathbf{r}=(x,x-d)$. Define \begin{equation} F_d (k_x,k_y)=e^{ik_x d} a_{(k_x,k_y)}-e^{ik_y d} a_{(k_y,k_x)}, \end{equation} where $a_{(k_x,k_y)}$ and $a_{(k_y,k_x)}$ are operators $a_{\mathbf{k}}$ in momentum space with $\mathbf{k}=(k_x,k_y)$ and $\mathbf{k}=(k_y,k_x)$, respectively. $F_d(k_x,k_y)$ can be rewritten as \begin{equation} F_d(k_x,k_y)=\frac{1}{L}\sum_{\mathbf{r}}(e^{-i (k_x (x-d)+k_y y)}-e^{-i (k_y (x-d)+k_x y)})a_{\mathbf{r}}, \end{equation} where $a_{\mathbf{r}}$ is an operator in coordinate space with $\mathbf{r}=(x,y)$. Note that $G_{d}$ is a linear combination of $b_{\mathbf{r}}$ with $y=x-d$, and $F_d(k_x,k_y)$ is a linear combination of $a_{\mathbf{r}}$ with $y\neq x-d$; therefore if \begin{equation} \ket{\Omega}_d=F_d (k_{x_1},k_{y_1})^\dag F_d (k_{x_2},k_{y_2})^\dag \ldots F_d (k_{x_n},k_{y_n})^\dag G_{d}^\dag \ket{0} \end{equation} is not zero, it will be an eigenstate of $H$ because there is no double occupation in coordinate space. Note that $\ket{\Omega}_d$ is not an eigenstate of $S^2$, therefore it is an eigenstate of $H$ not found by Yang and Zhang \cite{yangzhang}. We emphasize that the number of electrons in $\ket{\Omega}_d$ can be of order $L^2$, in contrast to the eigenstates constructed above. 
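The two expressions for $G_d$ in equation (\ref{main7}), one in momentum space and one supported on the single diagonal $y=x-d$ (mod $L$) in coordinate space, can be checked by computing the coordinate-space coefficients of $G_d$ directly (an illustrative sketch for a small lattice):

```python
import cmath, math

L, d = 6, 2  # a small even lattice and an arbitrary integer shift

# Coefficient of b_r in G_d = sum_{kx} e^{i kx d} b_{(kx, pi - kx)}, using
# b_k = (1/L) sum_r b_r e^{-i k . r} with k = (kx, pi - kx).
coeff = {}
for x in range(L):
    for y in range(L):
        c = sum(cmath.exp(1j * kx * d - 1j * (kx * x + (math.pi - kx) * y)) / L
                for kx in (2 * math.pi * n / L for n in range(L)))
        coeff[(x, y)] = c

for (x, y), c in coeff.items():
    if (x - d) % L == y:
        # On the diagonal y = x - d (mod L) the coefficient is e^{-i pi (x-d)}.
        assert abs(c - cmath.exp(-1j * math.pi * (x - d))) < 1e-9
    else:
        # Off the diagonal the coefficient vanishes: G_d lives on one diagonal.
        assert abs(c) < 1e-9
print("G_d is supported on the diagonal y = x - d (mod L)")
```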
Before the summary we revisit the single-electron operator $G_d$ in equation (\ref{main7}), where the integer $d$ controls its pattern in coordinate space, with different $d$ giving different patterns. This observation can be used to construct eigenstates of $H$. Define the operator $Y_d$ in the same way as $G_d$ but using the spin-up operator $a_{\mathbf{k}}$ instead of the spin-down operator $b_{\mathbf{k}}$, i.e., \begin{equation} Y_{d}=\sum_{k_x} e^{ik_x d}a_{(k_x,\pi-k_x)}=\sum_x e^{-i\pi (x-d)}a_{(x,x-d)}, \end{equation} where $a_{(k_x,\pi-k_x)}$ is the operator $a_{\mathbf{k}}$ in momentum space with $\mathbf{k}=(k_x,\pi-k_x)$, and $a_{(x,x-d)}$ is the operator $a_{\mathbf{r}}$ in coordinate space with $\mathbf{r}=(x,x-d)$. It is obvious that the state \begin{equation} \ket{\Gamma}=Y_{d_1}^\dag Y_{d_2}^\dag \ldots Y_{d_{N_a}}^\dag G_{d_{N_a+1}}^\dag G_{d_{N_a+2}}^\dag \ldots G_{d_{N_a+N_b}}^\dag \ket{0} \end{equation} is an eigenstate of $H$ when all $d_i$ (mod $L$) are different, because it has no double occupation in coordinate space. However, the state $\ket{\Gamma}$ is not a new eigenstate of $H$, but a superposition of the states $\ket{\Psi}$ in equation (\ref{main}), due to the fact that $Y_{d}$ ($G_{d}$) is a superposition of $A_{\mathbf{k}\pm}$ ($B_{\mathbf{k}\pm}$) for different $\mathbf{k}=(k_x,\pi-k_x)$ in equation (\ref{def}). In summary, we have constructed many new $U$ independent eigenstates of the two-dimensional Hubbard model, and presented a method to obtain common eigenstates of $H$, $S^2$, $S_z$, $J^2$ and $J_z$ from the states we constructed. More new eigenstates can be constructed from the states we obtained using the method given by Yang and Zhang based on the $SO_4$ symmetry of the Hubbard model \cite{yangzhang}. Our results can easily be generalized to higher dimensions. M. Y. Ye would like to thank Z. J. Yao for helpful discussions about the Hubbard model and M. X. Shen for drawing the figures. 
This work was supported by the National Natural Science Foundation of China (Grant No. 61275215), the Fujian Provincial College Funds for Distinguished Young Scientists (Grant No. JA14070) and the Natural Science Foundation of Fujian Province (Grant Nos. 2016J01008 and 2016J01009).
\section{Introduction} \label{sec:intro} Visual speech recognition is a way of understanding speech by observing only the lip movements, without having access to the acoustic signal. Several works have recently been presented \cite{petridis2016deep,end2end_multiview,petridis2017deepVisualSpeech,assael2016lipnet,Chung_2017_CVPR,stafylakis2017combining,Potamianos2003} aiming to recognise visual speech. One application of such a system is in noisy acoustic environments, since the visual signal is not affected by noise and can enhance the performance of speech recognition systems. Another important application which has been recently proposed is silent speech interfaces (SSI) \cite{denby2010silent}. An SSI is a system enabling speech communication to take place when an audible acoustic signal is unavailable. This means that a speaker would be able to mouth words instead of actually uttering them and the SSI would recognise the speech content. This is particularly useful for persons with speaking difficulties or in situations where speaking is not allowed, e.g., during a meeting. \looseness - 1 However, in all previous attempts at visual speech recognition, models were trained on videos of normal / vocalised\footnote{The terms normal and vocalised speech are used interchangeably in this study.} speech. Although this might be useful in cases where the speaker really vocalises, e.g., in a noisy environment or when he/she is far away, it is not as useful for SSI. It is known that lip movements are affected by both the context in which speech is produced and the mode of speech. There is evidence that lip movements tend to increase when speech is produced in noise (Lombard speech) \cite{vsimko2016hyperarticulation,garnier2012effect} and in the case of silent speech \cite{bicevskis2016effects}. The latter has also been confirmed in \cite{janke2010impact}, where differences in facial electromyography (EMG) signals were observed between vocalised and silent speech. 
In other words, the lip movements in vocalised and silent speech are different and this may degrade the performance of models trained on vocalised speech and tested on silent speech. To the best of our knowledge, the only work which has addressed this issue, but on a rather small database, is \cite{florescu2010silent}. They used ultrasound tongue images and video lip images from 4 participants and reported a significant drop in performance when training and testing were performed on normal and silent speech, respectively. Similar conclusions have also been observed for normal and whispered speech, which can be thought of as an alternative to silent speech for SSI. The performance of models trained on normal speech decreases when tested on whispered speech \cite{tao2014lipreading,fan2011audio}. In this work, we introduce a new audiovisual database which contains normal, whispered and silent speech. We recorded 53 participants from 3 different views (frontal, 45\degree and profile) pronouncing digits and phrases in three speech modes. To the best of our knowledge, this is the first audiovisual database which is publicly available and contains all three speech modes. Tran et al. \cite{tran2013audiovisual} have recorded an audiovisual database of normal and whispered speech but without including silent speech. In addition, their database contains fewer participants, 40, and it is not publicly available at the moment. We also investigate the differences between the 3 speech modes. We conduct subject-independent experiments using an end-to-end lipreading model where we train on one speech mode and test on all other modes. To the best of our knowledge, this is the first study that systematically investigates the differences between the 3 speech modes from the visual modality. Results on digits and phrases demonstrate that there are indeed differences between the speech modes. 
An absolute decrease between 3.3\% and 3.7\% in classification rate is observed when we train on normal speech and test on whispered speech, and vice versa. A higher absolute decrease in classification rate, between 5.7\% and 8.5\%, is reported when training on normal or whispered speech and testing on silent speech. Silent speech is consistently the worst performing mode, and even when a model is trained and tested on it, the performance is lower than in the corresponding matched conditions for the other speech modes. This is an indication that realising SSIs using the visual modality only is not as straightforward as previously thought, since it seems that normal speech data may not be enough for training silent speech recognisers. \begin{figure}[t] \begin{minipage}[t]{0.95\linewidth} \centering \subfigure[S002]{\includegraphics[width=0.4\linewidth]{Figures/S002_T01_L04_C01_R01_00D_frame150.eps}} \subfigure[S012]{\includegraphics[width=0.4\linewidth, trim = 0 0 0 12,clip] {Figures/S012_T01_L04_C01_R01_00D_frame275.eps}} \caption{Example of mouth ROI extraction for participants S002 and S012.} \label{fig:mouthROI} \end{minipage} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{Figures/systemBLSTM_new.eps} \caption{Overview of the end-to-end visual speech recognition system. Features are extracted directly from the raw mouth ROI. The $\Delta$ and $\Delta\Delta$ features are also appended to the bottleneck layer. The temporal dynamics are modelled by a BLSTM. } \label{fig:system} \end{figure} \section{Database Description} \label{sec:database} For the purposes of this study we have recorded a new audiovisual database which contains normal, whispered and silent speech. The database consists of two parts: digits and short phrases. In the first part, participants were asked to read 10 digits, from 0 to 9, in English in random order five times. They read the digits in three different modes: normal, whispered and silent speech. 
The only instructions they were given were to speak normally in the case of normal speech, to whisper as they would normally whisper, and not to produce any audible sound in silent speech. In the case of non-native English speakers this part was also repeated in the participant's native language. In total, 53 participants (41 males and 12 females) from 16 nationalities were recorded, with a mean age and standard deviation of 26.7 and 4.3 years, respectively. In the second part, participants were asked to read 10 short phrases. The phrases are the same as the ones used in the OuluVS2 database \cite{Anina2015}: ``Excuse me'', ``Goodbye'', ``Hello'', ``How are you'', ``Nice to meet you'', ``See you'', ``I am sorry'', ``Thank you'', ``Have a good time'', ``You are welcome''. Again, each phrase was repeated five times in 3 different modes: normal, whispered and silent speech. Thirty-nine participants (32 males and 7 females) were recorded for this part, with a mean age and standard deviation of 26.3 and 3.8 years, respectively. The database was recorded in a lab environment using 3 cameras with a resolution of 1280 by 780 at 30 frames per second. The 3 cameras record three different views of the participant's face: frontal, 45\degree and profile view, respectively. Audio is also recorded by the 3 cameras using the built-in microphones at 44.1 kHz. The digits and phrases were displayed on a laptop screen in front of the participant using slides. Additional information regarding the speech mode to be used was also present in the slide. The participants pressed the space bar after each utterance in order to proceed to the next slide. The space bar hit was used to segment the different digits and phrases. Finally, transcriptions for the digits and phrases were extracted from the slides. The database together with annotations and transcriptions is already publicly available \footnote{https://ibug-avs.eu/}. An example of the frontal view recordings from 2 participants is shown in Fig. 
\ref{fig:mouthROI}. \section{End-to-end Lipreading} The deep learning lipreading system used in this study is shown in Fig. \ref{fig:system} and is similar to the one presented in \cite{petridis2017deepVisualSpeech}. The main difference is that we use a single stream instead of two streams, since we found no additional performance benefits from the use of the second stream proposed in \cite{petridis2017deepVisualSpeech}. The single stream consists of two parts: an encoder and a BLSTM. The encoder follows a bottleneck architecture in order to compress the high dimensional input image to a low dimensional representation at the bottleneck layer. The same architecture as in \cite{hinton2006reducing} is used, with 3 hidden layers of sizes 2000, 1000 and 500, respectively, followed by a linear bottleneck layer. The rectified linear unit is used as the activation function for the hidden layers. The $\Delta$ (first derivatives) and $\Delta\Delta$ (second derivatives) \cite{young2002htk} features are also computed, based on the bottleneck features, and they are appended to the bottleneck layer. In this way, during training we force the encoding layers to learn compact representations which are discriminative for the task at hand but also produce discriminative $\Delta$ and $\Delta\Delta$ features. The second part is a BLSTM layer added on top of the encoding layers in order to model the temporal dynamics of the features. The output layer is a softmax layer which provides a label for each input frame. The majority label over each utterance is used in order to label the entire utterance. In other words, we follow a classification approach where the model classifies the entire utterance as one of the ten digits/phrases. \section{EXPERIMENTAL SETUP} \subsection{Mouth ROI Extraction} Sixty-eight points are tracked on the face using the tracker proposed in \cite{Kazemi_2014_CVPR}. 
The faces are first aligned to a neutral reference frame in order to normalise them for rotation and size differences. This is done via an affine transform based on 6 stable points: the mouth centre, two corners in each eye and two points on the nose. Then the centre of the mouth is located based on the tracked points, and a bounding box of size 68 by 108 is used to extract the mouth region of interest (ROI) as shown in Fig. \ref{fig:mouthROI}. Finally, the mouth ROIs are downscaled to 32 by 50. \subsection{Evaluation Protocol} \looseness - 1 We first partition the data into training, validation and test sets. We follow a subject-independent scenario where 30, 10 and 13 participants are used in the training, validation and test sets, respectively, for the digits experiments. This means that there are 1500 training utterances, 500 validation utterances and 650 test utterances. For the second set of experiments, i.e., phrases, we use 20, 8 and 11 participants in the training, validation and test sets, respectively. Hence, there are 1000 training utterances, 400 validation utterances and 550 test utterances. The participants used in each set can be found on the database website (see Section \ref{sec:database}). For all the experiments in this study, we used the digits (in English) and phrases in all 3 modes from the frontal camera only. \subsection{Preprocessing} Since all the experiments are subject independent, we first need to reduce the impact of subject-dependent characteristics. This is done by subtracting the mean image, computed over the entire utterance, from each frame. \looseness - 1 The next step is the normalisation of the data. Each image is z-normalised, i.e., scaled to zero mean and unit standard deviation, before training an RBM with linear input units \cite{hinton2012practical}. Finally, due to randomness in initialisation, every time a deep network is trained the results are slightly different. 
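The simple numerical steps described in the last two sections (mean-image subtraction, z-normalisation, the $\Delta$/$\Delta\Delta$ computation and the majority-vote labelling) can be sketched as follows. This is an illustration rather than the authors' code; in particular, the deltas here are plain padded central differences, whereas the HTK-style regression deltas cited above differ in detail:

```python
import numpy as np

def preprocess_utterance(frames, eps=1e-8):
    """frames: (T, H, W) grayscale mouth ROIs of one utterance.
    Subtracts the per-utterance mean image, then z-normalises each frame."""
    frames = frames.astype(np.float64)
    # Remove subject-dependent appearance: subtract the mean image
    # computed over the entire utterance.
    frames = frames - frames.mean(axis=0, keepdims=True)
    # z-normalise every frame to zero mean and unit standard deviation.
    mu = frames.mean(axis=(1, 2), keepdims=True)
    sigma = frames.std(axis=(1, 2), keepdims=True)
    return (frames - mu) / (sigma + eps)

def deltas(feats):
    """Central-difference temporal derivative of a (T, D) feature matrix."""
    padded = np.pad(feats, ((1, 1), (0, 0)), mode="edge")
    return 0.5 * (padded[2:] - padded[:-2])

def append_deltas(bottleneck):
    """Append Delta and Delta-Delta features to the bottleneck output,
    e.g. a (T, 500) bottleneck becomes a (T, 1500) feature matrix."""
    d = deltas(bottleneck)
    return np.concatenate([bottleneck, d, deltas(d)], axis=1)

def utterance_label(frame_labels):
    """Majority vote over the per-frame softmax labels."""
    values, counts = np.unique(frame_labels, return_counts=True)
    return values[np.argmax(counts)]
```
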
In order to present a more objective evaluation, we run each experiment 10 times and report the mean and standard deviation of the classification rate. \looseness - 1 \subsection{Training} \looseness - 1 \textbf{Initialisation:} The encoding layers are pre-trained using Restricted Boltzmann Machines (RBMs) \cite{hinton2012practical}. Since the input (pixels) is real-valued and the hidden layers are either rectified linear or linear (bottleneck layer), four Gaussian RBMs \cite{hinton2012practical} are used. Each RBM is trained for 20 epochs with a mini-batch size of 100, a learning rate of 0.001 and an L2 regularisation coefficient of 0.0002, using contrastive divergence. \noindent \textbf{End-to-End Training:} Once the encoder has been pretrained, the BLSTM is added on top and its weights are initialised using Glorot initialisation \cite{glorot2010understanding}. The Adam training algorithm \cite{kingma2014adam} is used for end-to-end training with a mini-batch size of 10 utterances and a learning rate of 0.0003. Early stopping with a delay of 5 epochs was also used in order to avoid overfitting. \section{Results} \begin{table}[tb] \renewcommand{\arraystretch}{1.1} \caption{Mean classification rate (and standard deviation) for the digits experiment. Each row (column) corresponds to the data the system was trained (tested) on.} \label{tab:resultsDigits} \centering \begin{tabular}{cccc} \toprule Tested on $\rightarrow$ & Normal & Whispered & Silent \\ Trained on $\downarrow$ \\ \midrule Normal & 68.0 (2.1) & 64.7 (1.3) & 59.7 (1.0) \\ Whisper & 66.9 (1.7) & 70.5 (1.3) & 62.8 (2.0) \\ Silent & 57.4 (1.6) & 62.1 (1.4) & 62.2 (0.9) \\ \bottomrule \end{tabular} \end{table} \begin{table}[tb] \renewcommand{\arraystretch}{1.1} \caption{Mean classification rate (and standard deviation) for the phrases experiment. 
Each row (column) corresponds to the data the system was trained (tested) on.} \label{tab:resultsPhrases} \centering \begin{tabular}{cccc} \toprule Tested on $\rightarrow$ & Normal & Whispered & Silent \\ Trained on $\downarrow$ \\ \midrule Normal & 69.7 (2.1) & 66.3 (2.6) & 61.2 (1.6) \\ Whisper & 67.1 (2.8) & 70.8 (2.2) & 65.1 (1.5) \\ Silent & 61.0 (1.8) & 65.2 (0.9) & 64.4 (2.3) \\ \bottomrule \end{tabular} \end{table} Table \ref{tab:resultsDigits} shows the results for the digits experiments. As expected, training and testing on the same speech mode leads to the best performance in all 3 cases. On the other hand, training and testing on mismatched modes results in degraded performance. For example, the performance of a model trained on normal speech drops by 3.3\% and 8.3\% when tested on whispered and silent speech examples, respectively. A similar drop is observed when a model trained on whispered speech is tested on normal and silent speech examples. This is in line with previous results where the performance of a visual speech recogniser degrades when tested on whispered speech \cite{tao2014lipreading,fan2011audio} and on silent speech \cite{florescu2010silent}. It is also interesting to point out that performance on silent speech is always low, even in the matched condition. \looseness -1 Results for the phrases experiments are shown in Table \ref{tab:resultsPhrases}. All the results are slightly higher than the ones in Table \ref{tab:resultsDigits}. This is possibly due to the longer duration of the phrases, i.e., more information is available. We should also note that the results obtained are significantly lower than the ones obtained on OuluVS2, 91.8\% in \cite{end2end_multiview}, using the same utterances. 
This is due to the fully automatic extraction of the mouth ROIs and the presence of some segmentation errors \footnote{There are some cases where the participants hit the space bar while uttering a sentence or waited for too long before proceeding to the next slide. We are in the process of manually correcting all the timestamps.}. On the other hand, perfect segmentation and perfectly cropped mouth ROIs (by manually correcting the landmarks if they were off the desired location) are provided with the OuluVS2 database. Similar conclusions to the digits experiments can be drawn. The performance of a model trained on normal (whispered) speech drops by 3.4\% (3.7\%) when tested on whispered (normal) speech. Training on normal speech and testing on silent speech results in an 8.5\% absolute decrease in the classification rate. Similarly, training on whispered speech and testing on silent speech results in a 5.7\% absolute decrease. Again, the performance on silent speech is consistently lower no matter which data the model was trained on. This is probably due to the lack of auditory feedback during articulation, which is crucial. As shown in \cite{janke2010impact} using EMG signals, the lack of acoustic feedback in silent speech is compensated by a stronger focus on somatosensoric feedback. This is achieved by articulating more strongly those sounds which provide more tactile feedback. Overall, our results agree with the observations in the phonetics literature \cite{bicevskis2016effects} that lip movements are different in the three speech modes. This needs to be taken into account when training visual speech recognisers, and it might have a significant impact on silent speech interfaces. Our results suggest that using vocalized training data for training a silent speech recognition system, which is very common, results in a significant performance drop. \section{Conclusion} In this work, we introduce a new audiovisual database for normal, whispered and silent speech. 
Results on subject-independent experiments reveal that there are differences in lip movements, as has already been reported in the phonetics literature, which affect the performance of models when the training and testing speech modes are different. In particular, the performance on silent speech suffers the most, which indicates that the common approach of using normal visual speech in order to train silent speech recognisers is not the best. Finally, it would be interesting to explore the performance of an audiovisual system on normal and whispered speech and use the side views available in the database in an effort to improve the performance on silent speech. \section{Acknowledgements} This work has been funded by the European Community Horizon 2020 under grant agreement no. 645094 (SEWA). \bibliographystyle{IEEEbib}
\section{Introduction} It is widely believed that magnetic fluctuations are involved in the superconducting mechanism of the iron-based superconductors, but there is currently no complete understanding of the microscopic origin of magnetism or of its detailed relationship with superconductivity in these materials~\cite{mazin_pairing_2009,hirschfeld_gap_2011,stewart_superconductivity_2011,dai_magnetism_2012}. Strong magnetic fluctuations related to those found in the non-superconducting spin density wave (SDW) parent phases have been observed in nearly all families of iron arsenide and chalcogenide superconductors by inelastic neutron scattering (INS)~\cite{stewart_superconductivity_2011,dai_magnetism_2012,johnston_puzzle_2010,lumsden_magnetism_2010}. These fluctuations are found to persist across the phase diagram, including for systems which show no SDW ordering. In superconducting samples the INS spectrum contains a particularly prominent feature known as the spin resonance. This is a peak localized both in momentum and in energy which grows strongly in intensity at temperatures below $T_{\mathrm{c}}$, and is a key piece of evidence linking magnetic fluctuations with superconductivity~\cite{christianson_unconventional_2008}. The iron phosphide systems differ in several respects from their arsenide counterparts. They generally have lower $T_{\mathrm{c}}$ values, and the stoichiometric compounds do not undergo magnetic or structural phase transitions~\cite{kamihara_iron-based_2006,mcqueen_intrinsic_2008,ogino_superconductivity_2009,deng_new_2009,mydeen_temperature-pressure_2010}. Evidence has been found for a nodal superconducting order parameter in LaFePO~\cite{hicks_evidence_2009,fletcher_evidence_2009,sutherland_low-energy_2012}, Sr$_{2}$ScO$_{3}$FeP~\cite{yates_evidence_2010} and LiFeP~\cite{hashimoto_nodal_2012}, in contrast to many of the iron arsenide and selenide superconductors. 
The difference in gap structure has been explained by the absence of a Fermi surface hole pocket centred on the $M$ point in the Brillouin zone in the phosphides~\cite{thomale_mechanism_2011}. As far as spin fluctuations are concerned, there is conflicting evidence. Optical and charge transport studies have concluded that electron correlations are significantly weaker in the phosphides than in the arsenides~\cite{qazilbash_electronic_2009,kasahara_contrasts_2012}, whereas measurements of the Fermi surface indicate that correlations could be of similar strength in the two systems~\cite{coldea_fermi_2008}. Furthermore, evidence of spin correlations in phosphides has been reported in NMR and $\mu$SR studies~\cite{nakai_spin_2008,carlo_static_2009}. These differences raise the possibility that the pairing mechanism in the iron phosphides might not be the same as that in the higher $T_{\mathrm{c}}$ iron-based superconductors. In particular, the role of magnetic fluctuations in the pairing mechanism of the iron phosphides remains unresolved, and this provides a strong incentive to obtain experimental information on the momentum-resolved magnetic fluctuation spectrum. Such information is so far lacking for the phosphides. Here we present results of INS measurements to search for magnetic fluctuations in two phosphides, LaFePO and Sr$_{2}$ScO$_{3}$FeP. LaFePO has a relatively low $T_{\mathrm{c}}$ of 4.5\,K, but is of interest because of evidence that it might be close to a SDW instability driven by Fermi surface nesting~\cite{coldea_fermi_2008,thomale_mechanism_2011}. Sr$_{2}$ScO$_{3}$FeP was chosen because it was reported to show the highest $T_{\mathrm{c}}$ ($\approx 17\,$K) among the known phosphide superconductors~\cite{ogino_superconductivity_2009}. At present, large single crystals of iron phosphides are not available, so our measurements were performed on polycrystalline samples. 
We searched for magnetic fluctuations with energies $E$ in the range from 0.5 to 20\,meV, and explored a large range of wave vector $Q = |{\bf Q}|$ space. We paid particular attention to $Q \sim 1.2\,\mathrm{\AA}^{-1}$ corresponding to the wave vector ${\bf Q}_{\rm SDW}$ of the SDW order found in the parent phases of many iron-based superconductors (the in-plane component of ${\bf Q}_{\rm SDW}$ is $\left(0.5,0\right)$ when expressed in reciprocal lattice units of the one-Fe unit cell). We looked closely at energies in the region $E \sim 5k_{\rm B}T_{\rm c}$ where the spin resonance has been observed in iron-based superconductors. Despite performing careful measurements above and below $T_{\rm c}$, we found no signal attributable to magnetic fluctuations of any kind in either system. The results suggest that magnetic fluctuations might not be as important for superconductivity in the iron phosphide systems as they are in other iron-based superconductors. \section{Experimental Methods\label{sec:Experimental-Methods}} \subsection*{Synthesis} Polycrystalline samples were prepared by the following solid-state reaction routes. All manipulations were carried out in an argon-filled glovebox with a combined O$_{2}$ and H$_{2}$O content of less than 5$\,$parts per million. The silica ampoules were heated to 1000$\,^{\circ}$C under vacuum to remove any trapped moisture before use. \paragraph*{LaFePO:} LaFePO was prepared based on a combination of the reaction routes reported by Kamihara \emph{et al.}~\cite{kamihara_iron-based_2006} and McQueen \emph{et al.}~\cite{mcqueen_intrinsic_2008}. La$_{2}$O$_{3}$ powder was prepared by dehydrating commercial powder (Alfa 99.99\,\%) at 600$\,^{\circ}$C for 10$\,$hours. Fresh La metal (Alfa 99.9\,\%) filings, red P (Alfa 99.9999\,\%) ground to a powder, and Fe powder (Alfa 99.998\,\%) were mixed in the ratio 1:3:3 and placed in an alumina crucible in an evacuated silica ampoule. 
The elements were heated at 0.5$\,^{\circ}$C/min to 700$\,^{\circ}$C and held for $\sim$12$\,$hours, to produce a mixture of the compounds LaP, FeP and Fe$_{2}$P. This mixture and La$_{2}$O$_{3}$ were then ground together to a very fine powder which was placed in an alumina crucible; another crucible containing a LaFePO getter (synthesised in a similar way) was placed on top~\cite{mcqueen_intrinsic_2008}. This was all sealed in a silica ampoule under 200$\,$mbar of high purity argon gas in order to prevent collapse of the silica upon heating at close to the softening temperature of silica. This was heated at 1$\,^{\circ}$C/min to 1250$\,^{\circ}$C and held for 24$\,$hours. The final product was removed and ground to a fine powder. \paragraph*{LaZnPO:} LaZnPO was prepared starting from fresh La metal filings, red P powder and ZnO (Alfa 99.99\,\%) powder, following the route previously described by Kayanuma \emph{et al.}~\cite{kayanuma_apparent_2007}. \paragraph*{Sr$_{2}$ScO$_{3}$FeP:} Sr$_{2}$ScO$_{3}$FeP was prepared according to the following route. SrO powder was prepared by the thermal decomposition of SrCO$_{3}$ (Alfa 99.99\,\%) by heating to 850\,$^{\circ}$C and holding for 16 hours, then 1100\,$^{\circ}$C for 4 hours, under dynamic vacuum. Sr metal (Aldrich 99\,\%) was sublimed under high vacuum at 850$\,^{\circ}$C prior to use. Red P, ground to a powder, and small pieces of Sr were placed in a Nb tube in the ratio 1:1. The Nb tube was then sealed by welding in an arc furnace under one atmosphere of argon, and then sealed in a protective evacuated silica tube. This was heated at 2$\,^{\circ}$C/min to 800$\,^{\circ}$C and held for 3$\,$days, and the resulting mixture was ground to a fine powder. This powder was mixed with SrO, Sc$_{2}$O$_{3}$ powder (99.99\,\%), Fe powder, and Fe$_{2}$O$_{3}$ powder (Alfa 99.998\,\%) according to the stoichiometry Sr$_{2}$ScO$_{3}$FeP. 
After homogenization, this powder was pelletized and placed in an alumina crucible, then sealed in an evacuated silica ampoule. This was heated at 2$\,^{\circ}$C/min to 1200$\,^{\circ}$C and held for 10$\,$hours. The final product was removed and ground to a fine powder. \subsection*{Characterization} Room temperature x-ray powder diffraction (XRPD) was used to assess the phase purity of all the products prepared as described above. Measurements were made with a PANalytical X\textquoteright{}Pert PRO diffractometer operating in Bragg\textendash{}Brentano geometry with monochromatic Cu K$_{\alpha1}$ radiation and a multiangle X\textquoteright{}Celerator detector. Structural refinements against XRPD data were carried out using the Rietveld refinement package GSAS+EXPGUI~\cite{larson_general_1994,toby_expgui_2001}. DC susceptibility measurements were performed from 2$\,$K to 300$\,$K in a measuring field of 50$\,$Oe using a Quantum Design MPMS-XL SQUID magnetometer. \paragraph*{LaFePO:} A typical result of the characterization of LaFePO is shown in figures~\ref{fig:Both_xray_SQUID}(a) and~(b). Figure~\ref{fig:Both_xray_SQUID}(a) shows XRPD data together with the calculated pattern based on the results of the Rietveld refinement. We find lattice parameters $a=b=3.9646(2)\,\mathrm{\AA}$ and $c=8.5187(5)\,\mathrm{\AA}$ for space group $P4/nmm$, where the error is the estimated standard deviation calculated by GSAS. The standard deviation of lattice parameter values measured on several samples was $\sigma(a) = 0.0003\,\mathrm{\AA}$ and $\sigma(c) = 0.001\,\mathrm{\AA}$. Figure~\ref{fig:Both_xray_SQUID}(b) shows a zero-field-cooled magnetic susceptibility measurement on the same sample. 
We confirm $T_{\mathrm{{c}}}\approx4.5\,$K and we estimate the overall superconducting volume fraction of the sample used for the neutron scattering experiments to be $\sim$20$\,$\% at 2\,K, consistent with previous reports~\cite{kamihara_iron-based_2006,kamihara_electromagnetic_2008}. \begin{figure*} \begin{centering} \includegraphics[width=0.95\textwidth]{Figure1a_1d} \caption{\label{fig:Both_xray_SQUID} Characterization of LaFePO, (a) and (b), and Sr$_2$ScO$_3$FeP, (c) and (d). (a) and (c) show Rietveld refinements against x-ray powder diffraction data (black crosses) taken at room temperature; the calculated background and the difference between calculation and data are also shown. (b) and (d) show the results of magnetic susceptibility measurements made on portions of the samples in an applied field of 50\,Oe under zero-field-cooled conditions. Susceptibility is expressed in SI units, with $-1$ corresponding to perfect diamagnetism.} \end{centering} \end{figure*} \paragraph*{LaZnPO:} LaZnPO is isostructural with LaFePO and was synthesized for use as a non-magnetic background for comparison to LaFePO in the neutron scattering experiments. XRPD revealed a phase-pure sample and the refined structure gave lattice parameters $a=b=4.04203(3)\,\mathrm{\AA}$ and $c=8.90626(9)\,\mathrm{\AA}$ with space group $P4/nmm$. \paragraph*{Sr$_{2}$ScO$_{3}$FeP:} A typical result of the characterization of Sr$_{2}$ScO$_{3}$FeP is shown in figures~\ref{fig:Both_xray_SQUID}(c) and~(d). Figure~\ref{fig:Both_xray_SQUID}(c) shows XRPD data along with the calculated pattern based on the results of the Rietveld refinement for Sr$_{2}$ScO$_{3}$FeP with space group $P4/nmm$. Unlike the previous report on this material by Ogino \emph{et al.}~\cite{ogino_superconductivity_2009}, in which they found Sr$_{2}$ScO$_{3}$FeP and SrFe$_{2}$P$_{2}$ to coexist in the ratio 9:1, figure~\ref{fig:Both_xray_SQUID}(c) shows that the Sr$_{2}$ScO$_{3}$FeP phase alone describes our data very well. 
We find lattice parameters $a=b=4.0148(3)\,\mathrm{\AA}$ with $\sigma(a) = 0.002\,\mathrm{\AA}$ and $c=15.551(2)\,\mathrm{\AA}$ with $\sigma(c) = 0.02\,\mathrm{\AA}$, where errors are determined as described above for LaFePO. These lattice parameters are similar, within error, to those previously reported~\cite{ogino_superconductivity_2009}. Magnetic susceptibility measurements shown in figure~\ref{fig:Both_xray_SQUID}(d) established that the onset of superconductivity occurs at $T_{\mathrm{{c}}}\approx15\,$K, but it appears to be related to a superconducting volume fraction of only a few percent. By contrast, the sample reported by Ogino {\it et al.}~\cite{ogino_superconductivity_2009} that contained small amounts of impurity phases was a bulk superconductor. This suggests, therefore, that the superconducting phase is most likely a slightly off-stoichiometric form of Sr$_2$ScO$_3$FeP, and that Sr$_{2}$ScO$_{3}$FeP is close to a superconducting instability but may not be an intrinsic bulk superconductor. Further investigation on this point is beyond the scope of this article. \subsection*{Inelastic Neutron Scattering} The inelastic neutron scattering experiments were performed on the time-of-flight chopper spectrometers MERLIN at the ISIS Facility~\cite{bewley_merlin_2006}, and IN5 at the Institut Laue--Langevin~\cite{ollivier_in5_2011}, both of which have large, position-sensitive detector arrays. These instruments allow measurement of vast regions of $(Q,E)$ space, which is advantageous when searching throughout the Brillouin zone for evidence of magnetic excitations. For each measurement the powder was sealed in an annulus around the edge of a cylindrical aluminium can. 10.6$\,$g of LaFePO, 8.8$\,$g of Sr$_{2}$ScO$_{3}$FeP and 7.9$\,$g of LaZnPO powder were measured. On MERLIN, LaFePO and Sr$_{2}$ScO$_{3}$FeP were measured with incident neutron energies $E_{\mathrm{{i}}}=25$ and $50\,$meV at temperatures of $T=6$ and $20\,$K. 
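The wave-vector and energy scales targeted by these instrument settings follow directly from the refined lattice parameters and the empirical resonance scaling $E_{\mathrm{res}}\approx5k_{\mathrm{B}}T_{\mathrm{c}}$. As a rough consistency check (a sketch using only values quoted in this paper; the $5k_{\mathrm{B}}T_{\mathrm{c}}$ rule is an empirical trend, not an exact result):

```python
import math

K_B = 0.08617  # Boltzmann constant in meV per K

def q_sdw(a_two_fe):
    """|Q| of the in-plane (0.5, 0) SDW wave vector (one-Fe-cell notation)
    for a tetragonal lattice with two-Fe-cell parameter a, in Angstrom."""
    a_one_fe = a_two_fe / math.sqrt(2.0)   # one-Fe in-plane lattice parameter
    return 2.0 * math.pi * 0.5 / a_one_fe  # in inverse Angstrom

# LaFePO, a = 3.9646 A  ->  |Q| = 1.12 A^-1, close to the quoted ~1.2 A^-1
print(round(q_sdw(3.9646), 2))  # 1.12

# Expected spin-resonance energies from E_res ~ 5 k_B T_c:
print(round(5 * K_B * 4.5, 1))   # LaFePO,     T_c = 4.5 K  -> 1.9 meV
print(round(5 * K_B * 17.0, 1))  # LiFeAs,     T_c = 17 K   -> 7.3 meV
print(round(5 * K_B * 15.0, 1))  # Sr2ScO3FeP, T_c ~ 15 K   -> 6.5 meV
```

These estimates match the energy windows examined below: 1.8--2.2\,meV for LaFePO and 6--8\,meV for Sr$_{2}$ScO$_{3}$FeP.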
On IN5, LaFePO and LaZnPO were measured with incident neutron energies $E_{\mathrm{{i}}}=3.27$, $4.23$ and $7.51\,$meV at temperatures of $T=1.6$ and $10\,$K. Data are also shown from a previously reported experiment on polycrystalline LiFeAs, $T_{\mathrm{{c}}}=17\,$K, performed under similar conditions on MERLIN~\cite{taylor_antiferromagnetic_2011}. In all cases the scattering from a standard vanadium sample was used to normalize the spectra and place them on an absolute intensity scale, with units mb$\,$sr$^{-1}\,$meV$^{-1}\,$f.u.$^{-1}$, where $1\,$mb$=10^{-31}\,$m$^{2}$ and f.u. stands for formula unit of LaFePO, LaZnPO, Sr$_{2}$ScO$_{3}$FeP or LiFeAs as appropriate. \section{Results\label{sec:Results}} Figures~\ref{fig:LaFePO_vs_LiFeAs} and~\ref{fig:Sr2ScO3FeP_vs_LiFeAs} compare the neutron inelastic scattering response of the phosphide materials with that of LiFeAs. All data in these figures are at temperatures greater than $T_{\mathrm{{c}}}$, to ensure that we are comparing samples in the normal state. In figure~\ref{fig:LaFePO_vs_LiFeAs}, we show the comparison between constant-energy cuts from LaFePO ($T_{\mathrm{c}}\approx4.5\,$K) and LiFeAs ($T_{\mathrm{c}}=17\,$K) data sets recorded at a temperature of 20$\,$K. The peak centered at $Q\approx1.2\,\mathrm{\AA}^{-1}$ in the LiFeAs data originates from quasi-2D spin fluctuations with characteristic in-plane wave vector close to ${\bf Q}_{\rm SDW}=\left(0.5,0\right)$~\cite{taylor_antiferromagnetic_2011, wang_antiferromagnetic_2011, qureshi_inelastic_2012}. No such peak is present in the LaFePO data, whose intensity increases smoothly with $Q$ due to phonon scattering. By attempting to fit a Gaussian peak at $Q\approx1.2\,\mathrm{\AA}^{-1}$ to the LaFePO data we put an upper limit of about 15\% on the size of any such peak relative to the LiFeAs peak. 
To do this, we fitted the LiFeAs data to a Gaussian peak on a linear background; we then constrained the peak width and centre and fitted the LaFePO data to the same function. We found a peak amplitude consistent with zero, with a fitting error 15\% of the size of the LiFeAs peak amplitude. As the spectra are normalized to one f.u., and one f.u. contains one Fe atom in both materials, a magnetic signal of given intensity per Fe would appear the same size in both spectra. Therefore, assuming the spin fluctuations have the same character in both materials, the spin fluctuations are at least 7 times weaker in LaFePO than in LiFeAs. \begin{figure} \begin{centering} \includegraphics[width=0.45\textwidth]{Figure2} \caption{\label{fig:LaFePO_vs_LiFeAs} Constant-energy cuts showing the magnetic signal at $Q\approx1.2\,\mathrm{\AA}^{-1}$ in LiFeAs and the same region in $(Q,E)$ space for LaFePO. Data were measured on MERLIN at $T=20\,$K, with an incident energy $E_{\mathrm{{i}}}=25\,$meV, and have been averaged over an energy range of 10--$13\,$meV.} \end{centering} \end{figure} \begin{figure} \begin{centering} \includegraphics[width=0.45\textwidth]{Figure3} \caption{\label{fig:Sr2ScO3FeP_vs_LiFeAs} Constant-energy cuts showing the magnetic signal at $Q\approx1.2\,\mathrm{\AA}^{-1}$ in LiFeAs and the same region of $(Q,E)$ space for Sr$_{2}$ScO$_{3}$FeP. Data were measured on MERLIN at $T=20\,$K, with an incident energy $E_{\mathrm{{i}}}=25\,$meV, and have been averaged over an energy range of 10--$13\,$meV. The Sr$_{2}$ScO$_{3}$FeP data have been shifted down by 1 unit for clarity.} \end{centering} \end{figure} Figure~\ref{fig:Sr2ScO3FeP_vs_LiFeAs} shows the same LiFeAs data, this time compared to a constant-energy cut from the Sr$_{2}$ScO$_{3}$FeP ($T_{\mathrm{c}}=15\,$K) data measured at 20$\,$K. The Sr$_{2}$ScO$_{3}$FeP data have been shifted down by one unit to aid comparison. 
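The constrained-fit step just described can be sketched numerically. Once the peak centre and width are frozen at the values from the free fit to the reference cut, the Gaussian-plus-linear model is linear in its remaining parameters, so the amplitude and its uncertainty follow from ordinary least squares. The sketch below uses synthetic data with hypothetical values, not the measured spectra:

```python
import numpy as np

# Peak centre and width frozen at values from the free fit to the
# reference (LiFeAs-like) cut; the numbers here are hypothetical.
CENTER, WIDTH = 1.2, 0.10  # inverse Angstrom

rng = np.random.default_rng(0)
q = np.linspace(0.6, 1.8, 60)
# A featureless cut: linear background plus counting noise.
signal = 0.3 * q + 0.5 + 0.02 * rng.standard_normal(q.size)

# With centre and width fixed, amp * gauss(q) + slope * q + intercept
# is linear in (amp, slope, intercept) and can be solved directly.
gauss = np.exp(-0.5 * ((q - CENTER) / WIDTH) ** 2)
A = np.column_stack([gauss, q, np.ones_like(q)])
coeffs, residuals, *_ = np.linalg.lstsq(A, signal, rcond=None)
amp = coeffs[0]

# 1-sigma uncertainty on the amplitude from the least-squares covariance.
sigma2 = residuals[0] / (q.size - A.shape[1])
amp_err = np.sqrt(sigma2 * np.linalg.inv(A.T @ A)[0, 0])
# amp comes out consistent with zero; amp_err, relative to the reference
# peak amplitude, plays the role of the upper limit quoted in the text.
```
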
The non-magnetic signal appears larger for Sr$_{2}$ScO$_{3}$FeP than LiFeAs because the spectra are normalized to one f.u., and the f.u. of Sr$_{2}$ScO$_{3}$FeP contains more atoms than that of LiFeAs. By performing a similar analysis to that for LaFePO, we put an upper limit of 25\% on the size of any LiFeAs-type magnetic peak in the Sr$_{2}$ScO$_{3}$FeP data. Figure~\ref{fig:LaFePO_Qcuts}(a) shows constant-energy cuts through data on LaFePO averaged over 1.8 to $2.2\,$meV, covering the energy where we would expect a magnetic resonance assuming the scaling relation $E_{\mathrm{res}}\approx5k_{\mathrm{B}}T_{\mathrm{c}}$. Data are shown from measurements at temperatures of 10$\,$K and 1.6$\,$K, i.e. above and below $T_{\mathrm{c}}$. The peak centered at $Q\approx1.05\,\mathrm{\AA}^{-1}$ is a feature of the non-magnetic background. This is shown in figure~\ref{fig:LaFePO_Qcuts}(b), which contains the same LaFePO 1.6$\,$K data together with an equivalent cut taken from the data on the non-magnetic reference sample LaZnPO. All the features in the LaFePO data are reproduced in the LaZnPO data. If a spin resonance were present in figure~\ref{fig:LaFePO_Qcuts}(a), we would expect it to appear as an enhancement in intensity at $T<T_{\mathrm{c}}$ centred on the nesting wave vector of the Fermi surface of LaFePO~\cite{coldea_fermi_2008}, which corresponds to $Q\approx1.2\,\mathrm{\AA}^{-1}$. No such enhancement is found. To be more quantitative, we made cuts like those in figure~\ref{fig:LaFePO_Qcuts}(a) through the data at each of the neutron incident energies measured on IN5, and subtracted the 10\,K data from the 1.6\,K data. We attempted to fit Gaussian peaks centred on $Q=1.2\,\mathrm{\AA}^{-1}$ to the subtracted data. From the fit, we estimate the maximum area of any such peak to be 15\% of the peak at the resonance energy of LiFeAs. Figure~\ref{fig:Sr2ScO3FeP_Qcuts} shows wave vector cuts averaged over the energy range from 6 to 8$\,$meV, recorded at 20\,K and 6\,K. 
This is the approximate energy at which a spin resonance is expected in Fe-based superconductors with the same $T_{\mathrm{c}}$ as Sr$_{2}$ScO$_{3}$FeP. To within the statistical error, there is no difference between these data sets from above and below $T_{\mathrm{c}}$. \begin{figure} \begin{centering} \includegraphics[width=0.5\textwidth]{Figure4a_4b} \caption{\label{fig:LaFePO_Qcuts} Constant-energy cuts through data measured on IN5 with an incident energy $E_{\mathrm{{i}}}=3.27\,$meV, averaged over an energy range of 1.8--$2.2\,$meV. (a) Cuts through LaFePO data taken at $10\,$K and $1.6\,$K, above and below $T_{\mathrm{c}}$ respectively. (b) The same LaFePO data as in (a) measured at $1.6\,$K, with data from LaZnPO also at $1.6\,$K.} \end{centering} \end{figure} \begin{figure} \begin{centering} \includegraphics[width=0.45\columnwidth]{Figure5} \caption{\label{fig:Sr2ScO3FeP_Qcuts} Constant-energy cuts through Sr$_{2}$ScO$_{3}$FeP data measured at 6\,K and 20\,K on MERLIN with an incident energy $E_{\mathrm{{i}}}=25\,$meV, averaged over an energy range of 6--$8\,$meV.} \end{centering} \end{figure} In addition to the selected cuts shown in figures~\ref{fig:LaFePO_Qcuts} and~\ref{fig:Sr2ScO3FeP_Qcuts}, the LaFePO and Sr$_2$ScO$_3$FeP spectra were examined at all energies accessible in our measurements. We compared runs recorded at different temperatures, as well as (in the case of LaFePO) the samples with and without Fe. Energy cuts at constant wave vector were also inspected in a similar way. No signal attributable to magnetic fluctuations could be found for either material, in either the superconducting or the normal state. \section{Discussion} The goal of this work was to determine whether prominent magnetic fluctuations similar to those observed in other iron-based superconductors are present in the iron phosphide systems LaFePO and Sr$_{2}$ScO$_{3}$FeP. 
To within the limits of our experimental sensitivity, we find no evidence for any magnetic fluctuations in these systems. Either the magnetic fluctuations are weaker than in other iron-based superconductors, or their characteristic length and time scales are too short to allow detection by inelastic neutron scattering. This null result is interesting because there are reasonable expectations that magnetic fluctuations might exist in the iron phosphides. Firstly, like many of the iron arsenide and chalcogenide superconductors, the two iron phosphides studied here have Fermi surfaces with quasi-nested hole and electron pockets centred on $(0,0)$ and $(0.5,0)$, respectively~\cite{lu_electronic_2008,coldea_fermi_2008, shein_structural_2009, nakamura_highly_2010}. In the arsenides and chalcogenides, this quasi-nesting is widely thought to drive the SDW transition and explain the presence of strong spin fluctuations~\cite{mazin_unconventional_2008, johnston_puzzle_2010, stewart_superconductivity_2011, dai_magnetism_2012}. Secondly, despite the evidence for nodal superconductivity in these phosphides~\cite{hicks_evidence_2009,fletcher_evidence_2009,sutherland_low-energy_2012,yates_evidence_2010}, magnetic fluctuations and a magnetic resonance are still expected to be of a typical strength for iron-based materials, as shown for example by the calculations of Maier {\it et al.}~\cite{maier_neutron_2009} for nodal and nodeless $s$-wave gaps. Moreover, magnetic fluctuations and a superconductivity-induced magnetic resonance were observed at $Q \approx 1.2\,$\AA$^{-1}$ by neutron scattering from the nodal superconductor BaFe$_2$(As$_{0.65}$P$_{0.35}$)$_2$~\cite{ishikado_s_like_2011}. Thirdly, in the case of LaFePO, there is experimental evidence of magnetism. 
Anomalous static magnetic correlations were found in $\mu$SR measurements~\cite{carlo_static_2009}, and an NMR study on Ca-doped LaFePO (Ca doping increases the $T_{\mathrm{c}}$ of LaFePO to 7\,K) reported evidence for moderate ferromagnetic fluctuations~\cite{nakai_spin_2008, julien_enhanced_2008}. Ferromagnetic fluctuations would produce a peak in the neutron spectrum at $Q=0$. If the fluctuations were two-dimensional in character, this peak would extend to $Q>0$ with decreasing intensity; if the fluctuations were three-dimensional, they would result in additional peaks at $Q_{(001)} = 0.74\,\mathrm{\AA}^{-1}$, $Q_{(002)} = 1.48\,\mathrm{\AA}^{-1}$, etc. No magnetic signal is observed in our data above the background at these or at any other wave vector probed; see figure~\ref{fig:LaFePO_Qcuts}. Because our data are normalized in absolute units, we have been able to constrain the size of any magnetic signal in the phosphides. We have used LiFeAs as a reference. Despite poor Fermi surface nesting and no SDW order, the magnetic fluctuations observed at $Q\approx Q_{\rm SDW}$ in LiFeAs are of a typical strength for iron-arsenide-based superconductors~\cite{taylor_antiferromagnetic_2011,wang_antiferromagnetic_2011,ewings_high-energy_2008,christianson_unconventional_2008}. Our analysis has shown that a magnetic peak of the type found in the normal state of LiFeAs, if present, would have to be at least a factor of 4 (Sr$_2$ScO$_3$FeP) or 7 (LaFePO) smaller than in other iron-based systems. The absence of observable magnetic fluctuations in LaFePO is consistent with evidence that it is more metallic than iron arsenide superconductors~\cite{kamihara_electromagnetic_2008}, and suggests that electronic correlations in LaFePO are weaker than in iron arsenides as reported in some previous studies~\cite{si_strong_2008,qazilbash_electronic_2009}. 
Others, however, suggest that the correlations are of similar strength in the two families~\cite{coldea_fermi_2008, skornyakov_lda+dmft_2010}. In the iron arsenide systems, a significant suppression of magnetic fluctuations has been observed in electron over-doped samples of Ba(Fe$_{1-x}$Co$_{x}$)$_{2}$As$_{2}$~\cite{matan_doping_2010} and LaFeAsO$_{1-x}$F$_{x}$~\cite{wakimoto_degradation_2010}. However, $T_{\mathrm{c}}$ increases on electron doping in LaFePO$_{1-x}$F$_{x}$~\cite{kamihara_electromagnetic_2008}, which implies that LaFePO is not an intrinsically electron overdoped material. It is possible, therefore, that the suppression of magnetic fluctuations is intrinsic to LaFePO and would be reproduced across its entire phase diagram, with no SDW phase proximate to superconductivity. In such a scenario, superconductivity in LaFePO could be controlled not by spin fluctuations but by a different pairing instability, as has been suggested by Thomale {\it et al.}~\cite{thomale_mechanism_2011}. If spin fluctuations are involved in the pairing interaction, then materials with weaker magnetic correlations would be expected to have lower $T_{\rm c}$ values. Our results for LaFePO are compatible with this expectation, but those for Sr$_2$ScO$_3$FeP are not. The latter system has a relatively high maximum reported $T_{\rm c}$ of 17\,K, yet our results lack the strong spin fluctuations characteristic of iron arsenide and chalcogenide superconductors with comparable $T_{\rm c}$ values. As discussed above, it appears that stoichiometric Sr$_2$ScO$_3$FeP is not a bulk superconductor; however, our results, along with the report by Ogino {\it et al.}~\cite{ogino_superconductivity_2009}, suggest that Sr$_2$ScO$_3$FeP is close to the superconducting instability. We would therefore expect to see strong magnetic fluctuations as a precursor to the superconducting state.
The absence of strong magnetic fluctuations in stoichiometric Sr$_2$ScO$_3$FeP could imply that superconductivity is not associated with a magnetic instability. \section{Conclusions} We have searched for, but did not find, a signal from magnetic fluctuations in the inelastic neutron scattering spectrum of two different iron phosphide materials. Magnetic fluctuations, if present, are significantly weaker than in iron arsenide and chalcogenide systems. This suggests that magnetic fluctuations might not play as significant a role in iron phosphide superconductors as they do in other iron-based superconductors. Because of its relatively high superconducting transition temperature ($T_{\mathrm{c}} = 17$\,K), identification of the precise composition of the superconducting phase in Sr$_2$ScO$_3$FeP would be a significant step towards an understanding of the mechanism of superconductivity in the iron phosphide superconductors. \ack This work was supported by the UK Engineering \& Physical Sciences Research Council and the Science \& Technology Facilities Council. We thank A. J. Corkett and M. J. Pitcher for help with synthesis, J. R. Stewart for help with the neutron scattering experiments, and R. Thomale for comments on the manuscript. \section*{References} \bibliographystyle{AET_bst}
\section{Introduction} The issue of how to create open-ended evolution in an artificial system is one of the open problems in artificial life. This paper examines two of the factors that have some bearing on this issue, using the Tierra artificial life system\cite{Ray91}\footnote{Available from http://www.his.atr.jp/\~{}ray/tierra/}. Tierra is a well-known artificial life system, well described in the literature, so only brief details will be given here. The digital organisms in Tierra consist of self-replicating codes written in a specially designed machine language. The Tierra environment is a virtual operating system executing the organism's code in a time-shared manner. Evolution proceeds through deliberately introduced mutations, copying errors and instruction flaws. Organisms compete for CPU time and memory space (called {\em soup}). {\em Parsimony pressure} is a tendency to penalise more complex organisms by the extra cost needed to reproduce longer genotypes, encouraging simplification. In Ray's earliest experiments with Tierra, CPU time was allocated evenly between organisms, favouring organisms with the shortest genomes. The time sharing system was later changed so that CPU time was allocated in proportion to $\ell^\mathtt{SlicePow}$, where $\ell$ is the genome length. When \verb+SlicePow+=0, we have the original maximal parsimony pressure. When \verb+SlicePow+=1, parsimony pressure is removed entirely. In this case, organism length rapidly increases, until the soup consists of one organism whose length is greater than half of Tierra's memory. At this point, it can no longer reproduce, and the soup dies (the simulation stops). But do organisms get more complex? For this purpose, we define complexity to be the {\em algorithmic information} \cite{Li-Vitanyi97} of the organism. Adami~\citeyear{Adami98a} introduced this measure in an artificial life setting, and I \cite{Standish99a} developed a technique for measuring this in the Tierra setting.
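The time-slicing rule just described can be sketched as follows (a minimal illustration only, not Tierra's actual scheduler; the base slice size is a hypothetical parameter):

```python
def cpu_slice(genome_length, slice_pow, base_slice=1.0):
    """CPU time granted to an organism per scheduling round.

    slice_pow = 0 gives every organism the same slice, so shorter
    genomes reproduce in fewer executed instructions (maximal
    parsimony pressure); slice_pow = 1 grants time proportional to
    genome length, removing parsimony pressure entirely.
    """
    return base_slice * genome_length ** slice_pow

# An 80-instruction ancestor versus a 40-instruction organism:
equal = (cpu_slice(80, 0.0), cpu_slice(40, 0.0))         # same slice each
proportional = (cpu_slice(80, 1.0), cpu_slice(40, 1.0))  # slice scales with length
```

Intermediate exponents, such as the \verb+SlicePow+ values between 0.9 and 1 scanned below, interpolate between these two regimes.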
In \cite{Standish03a}, I report the first detailed study of a Tierra run. Whilst organisms get longer, their complexity shows no sign of increase at all. Their length comes from adding ``junk'' into their genomes. Obviously, neither extreme of \verb+SlicePow+ leads to complexity growth, but what if we tuned the parsimony pressure to modest values? From previous experiments, I knew that \verb+SlicePow+$<0.9$ led to shorter genomes, not longer ones, so in this paper I scan a range of \verb+SlicePow+ from 0.9 to 1 to see if there is an optimal value for generating complexity. Tierra (along with most ALife systems) uses pseudo random number generators. Pseudo random number generators are short algorithms satisfying certain statistical tests for uniformity and independence. However, being the product of an algorithm, the output of a pseudo random number generator is not random by definition \cite{Li-Vitanyi97}. Algorithms can never create information, only destroy it. The complexity of any sequence of numbers is closely related to the length of the shortest algorithm that produces it, so the total complexity of the Tierra system with pseudo random number generators is bounded by its initial complexity, implying that the complexity of individual organisms is bounded. Biological systems, however, have plenty of sources of randomness, ultimately dependent on quantum randomness, so do not have this complexity limit. The only way an algorithm can generate unbounded complexity is if it is coupled to a source of real randomness --- a {\em random oracle}. Random oracles feature in Douglas Adams's description of the {\em infinite improbability drive}: \begin{quote} {\em a Bambleweeny 57 Sub-meson Brain coupled to an atomic vector plotter suspended in a strong Brownian Motion producer (say a nice hot cup of tea)} \cite[Chapt. 10]{Adams79}.
\end{quote} It turns out to be simple enough to create random oracles: Geiger counters attached to radioactive sources \cite{Gude85}\footnote{http://www.fourmilab.ch/hotbits/} and lava lamps\footnote{http://www.lavarnd.org/} are available through the internet to provide sources of genuine randomness; however, these sources are limited to about 30 bytes per second. Computers themselves have many different sources of random data available. They often interact with the external environment (e.g.\ users typing at keyboards and moving mice), and there is a small amount of randomness in hard disk timings \cite{Jakobsson-etal98}. Programs that harvest these sources of physical randomness are called {\em entropy gatherers}. Since unpredictability is important for cryptographic applications, practical true random number generation has seen a lot of development in recent years. For example, the Linux operating system includes an entropy gatherer in its kernel, available as a character device at \verb+/dev/random+. Unfortunately, entropy gatherers, like the internet-accessible random oracles, tend to be slow producers of randomness. HAVEGE \cite{Seznec-Sendrier03}\footnote{http://www.irisa.fr/caps/projects/hipsor/HAVEGE.html} exploits many different sources of entropy within a modern computing system, using hand-crafted assembly language routines to increase the rate of entropy production by 3--4 orders of magnitude over the techniques available in \verb+/dev/random+. This random stream is then used to seed a lookup table accessed by a pseudo random number generator, producing random numbers at rates similar to traditional pseudo random number generators. The entropy of the resulting sequence is less than that of a truly random sequence, but considerably higher than that of a pseudo random generator. In this paper, we replace the pseudo random number generator in Tierra by calls to the HAVEGE generator, and compare what difference this makes to the growth of complexity.
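As a concrete illustration of drawing on an operating-system entropy gatherer (a sketch only --- the experiments reported here use HAVEGE, not this), Python exposes the kernel's entropy-gathering machinery through os.urandom:

```python
import os

def random_oracle_bytes(n):
    """Return n bytes drawn from the operating system's entropy
    gatherer (the kernel pool behind /dev/random and /dev/urandom).
    Unlike a pseudo random number generator, the output is not
    reproducible from any seed."""
    return os.urandom(n)

sample = random_oracle_bytes(16)
```

Reads from the blocking \verb+/dev/random+ device could historically stall when the kernel's entropy estimate ran low --- the slowness that motivates HAVEGE.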
\section{Measurement of Complexity in Tierra} The most general definition of the complexity of an object involves two levels of description: a {\em micro-}description, which is its implementation, and a {\em macro-}description, which is the abstract {\em meaning} of the object. More than one microdescription can correspond to the same macrodescription. If $\omega(\ell,x)$ is the number of microdescriptions of length $\ell$ corresponding to macrodescription $x$, then the complexity of $x$ is given by \cite{Standish01a}: \begin{equation}\label{Complexity} C(x) = \lim_{\ell\rightarrow\infty} \left( \ell \log N - \log \omega(\ell,x) \right), \end{equation} where $N$ is the size of the alphabet used for the microdescription. Eq (\ref{Complexity}) converges extremely rapidly for $\ell > C(x)/\log N$. The base of the logarithm determines what units you are measuring complexity in --- if it is base 2, then the units are bits. For convenience, in this paper we will use base 32, corresponding to the alphabet size of the Tierra instruction set. Complexity is then measured in {\em instructions}. In order to measure the complexity of an organism in Tierra, we simply need to count up the number $\omega(\ell,x)$ of computer programs of length $\ell$ that are equivalent to a given digital organism $x$. Not so simple! The first problem is how to determine if two computer programs are equivalent. The technique we use \cite{Standish03a} is to record the results of a tournament where an organism is pitted pairwise against all genotypes recorded from a given Tierra run. Since this includes the context that these organisms experienced, any difference between two organisms is expected to show up as a difference in the results of the two tournaments. The second problem is that the number of programs of length $\ell$ is $32^\ell$, a computationally infeasible number.
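The counting in Eq.~(\ref{Complexity}) can be made concrete with a toy model (an illustration only, far smaller than Tierra's $32^\ell$ search space; the alphabet size, program length and choice of functional sites are all made up): programs are strings over an alphabet of $N=4$ symbols, and two programs are declared equivalent when they agree on a designated set of functional sites, the remaining sites being junk.

```python
import itertools
import math

N = 4                      # toy alphabet size (Tierra's instruction set has 32)
ell = 5                    # toy program length
functional = {0: 2, 3: 1}  # site -> required symbol; all other sites are junk

def equivalent(prog):
    """Programs are deemed equivalent iff they agree on the functional sites."""
    return all(prog[i] == symbol for i, symbol in functional.items())

# omega(ell, x): count the length-ell programs equivalent to x by brute force
omega = sum(1 for p in itertools.product(range(N), repeat=ell) if equivalent(p))

# Eq. (1) in base-N units: C(x) = ell - log_N omega(ell, x)
C = ell - math.log(omega, N)
```

Here $\omega = 4^3 = 64$ and $C = 2$ instructions --- exactly the number of functional sites, which is the intuition behind the site-wise estimate $C_\mathrm{ss}$ used in the next section.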
In \cite{Standish03a}, I show that an alternative measure $C_\mathrm{ss}$ is a good first-order estimate of the organismal complexity: \begin{equation}\label{sse} C_\mathrm{ss}=\ell-\sum_{i=1}^{\ell}\log_{32}n_i, \end{equation} where $n_i$ is the number of possible instructions at site $i$ of the genome that do not lead to a differing phenotype. This quantity is now very tractable, with a complete analysis of a $10^{10}$ timestep Tierra run taking only a few hours on ac3's Linux cluster --- comparable to the time taken to perform the original Tierra run.\footnote{The analysis code is available from http://parallel.hpc.unsw.edu.au/getaegisdist.cgi/getdeltas/eco-tierra.3} \section{Results}
\begin{figure*}
\begin{center}
% [Scatter plot of organism length and $C_\mathrm{ss}$ (in instructions) against creation time (millions of timesteps), with a horizontal line marking the ancestral complexity.]
\end{center}
\caption{Plot of length and $C_\mathrm{ss}$ for the 36 unique phenotypes created during the run with {\tt SlicePow}=0.96, and the HAVEGE entropy source. Three organisms appear with complexity greater than the ancestral organism 0080aaa within the first fifty million timesteps, including one with a complexity nearly twice that of the ancestor, and one appears at $4.1\times10^9$ after which only simple organisms originate.}
\label{Havege-time-run}
\end{figure*}
Tierra was run with \verb+SlicePow+=0.9,0.91\ldots1.0 for $10^{10}$ timesteps (instructions executed), with a soupsize of 131072, both with the original random number generator and with HAVEGE. When \verb+SlicePow+=1, the runs terminated early due to the soup dying. Figure \ref{Havege-time-run} shows the results for {\tt SlicePow}=0.96 using the HAVEGE generator, which produced the maximum complexity of any run.
\begin{figure*}
\begin{center}
% [Line plot of the maximum $C_\mathrm{ss}$ against {\tt SlicePow} (0.9--1.0), one curve for the PRNG and one for HAVEGE.]
\end{center}
\caption{Plot of the maximum $C_\mathrm{ss}$ recorded as a function of parsimony pressure. PRNG = pseudo random number generator, and HAVEGE is the entropy harvester mentioned in the text.}
\label{Maxcomplexity}
\end{figure*}
Figure \ref{Maxcomplexity} shows the maximum value of $C_\mathrm{ss}$ recorded for each run, as a function of parsimony pressure. It shows fairly clearly that a {\tt SlicePow} value between 0.95 and 0.96 is needed to generate additional complexity. It is also suggestive that the entropy gatherer generates additional complexity over the pseudo random number generator; however, this effect is unlikely to be statistically significant in this data set.
\begin{figure*}
\begin{center}
% [Line plot of the origination time of the maximally complex organism against {\tt SlicePow} (0.9--1.0), one curve for the PRNG and one for HAVEGE.]
\end{center}
\caption{Plot of the origination time of the organism with maximal complexity, as a function of parsimony pressure. PRNG = pseudo random number generator, and HAVEGE is the entropy harvester mentioned in the text.}
\label{origtime}
\end{figure*}
A different view of the data can be seen in figure \ref{origtime}, where the origination time of the organism of maximal complexity is plotted. The interesting thing here is that the pseudo random number generator took a lot longer to find its maximally complex organism than the entropy gatherer did. \section{Discussion} The results reported here are from a small-scale pilot study of the effect of parsimony pressure and of random oracles. Obviously much work needs to be done to tighten up the methodology of the experiment, and to perform analysis of statistical significance on the results. Clearly, this experiment does not show evidence of open-ended complexity growth either. Nevertheless, these interim results look encouraging. Since the Tierra simulations are run in parallel on a Linux cluster, and it doesn't matter if one uses the same sequence of random numbers in each simulation, it should be possible to combine the entropy harvested from all CPUs, as well as network latency on the interconnect, to substantially increase the entropy production of HAVEGE. This will require substantial recoding of HAVEGE. In the history of Earth's biosphere, the complexity of individuals remained largely static for the first 2 billion years of life. It was only with the origination of eukaryotes circa 2\,Gya and of multicellular life circa 600\,Mya that we see any appreciable jump in complexity over the previous 2 billion years of bacterial life.
During the Phanerozoic (540\,Mya--present), there is little unambiguous evidence of complexity growth in organisms~\cite{McShea96}. However, a very clear trend is growth in the complexity of ecosystems. The diversity of the Earth's biosphere appears to have grown exponentially since Cambrian times \cite{Benton01}. Looking at things another way, a multicellular animal can be considered as an ecosystem of eukaryotic cells (OK, so the genetic code for most of the cells is identical --- gut flora being the obvious exception), and each eukaryotic cell can be considered an ecosystem of bacterial cells (nucleus + organelles). If anything, the ``parts'' of the World's biosphere have gotten simpler --- it's the network connecting the parts that has shown the complexity growth. In the Tierra case, what we should be looking for is the overall complexity of the ecosystem, not the complexity of the individual digital organisms. At present, we still don't have a good theory for how to measure the complexity of an ecosystem, given its foodweb. Diversity is a lower bound on complexity, but is a rather crude indicator of overall complexity. Bedau-Packard \cite{Bedau-etal98} statistics use diversity as one of the key indicators of open-ended evolution. This is probably all that is ever likely to be available for the Earth's biosphere, but in artificial life systems we can look for better measures of ecosystem complexity. \section*{Acknowledgment} I would like to thank the {\em Australian Centre for Advanced Computing and Communications} for a grant of time on their Linux cluster for performing this work. \bibliographystyle{alife9}
\section{Introduction} Deep learning, based on deep neural network structures and elaborate optimization techniques, has achieved great success and empirically outperformed classical machine learning methods such as kernel methods in many applications in science and technology \cite{kingma2014adam,le2011optimization,lecun2015deep,schmidhuber2015deep}. In particular, convolutional neural networks are considered the state-of-the-art for various tasks such as image classification \cite{goodfellow2016deep}, speech recognition \cite{hinton2006fast} and sequence analysis in bio-informatics \cite{alipanahi2015predicting,zhou2015predicting}. However, compared with these significant achievements in practical applications, theoretical guarantees lag behind. Recently, many researchers across fields have worked to explain the success of convolutional neural networks, and most of the existing works focus on the approximation properties of convolutional structures. For vector inputs, a universality result for convolutional neural networks was first established in \cite{zhou2020universality}. Afterwards, \cite{zhou2020theory} further illustrates that deep convolutional neural networks are at least as good as fully connected neural networks, in the sense that any output of a fully connected neural network can be reconstructed by a deep convolutional neural network with the same order of free parameters. It is also shown in \cite{fang2020theory} that functions in Sobolev spaces on the unit sphere, or taking an additive ridge form, can be approximated by deep convolutional neural networks with optimal rates. For square matrix inputs, the authors of \cite{petersen2020equivalence} prove that if the target function is translation equivariant, then convolutional structures are equivalent to fully connected ones in terms of approximation rates with periodic convolution. 
Recently, \cite{he2021approximation} derives a decomposition theorem for large convolutional kernels, enabling deep convolutional networks, including ResNet and MgNet structures, to reproduce one-hidden-layer neural networks. Compared with the numerous works on mean squared error analysis of fully connected neural networks \cite{suzuki2018adaptivity,imaizumi2019deep,schmidt2020nonparametric}, only very few papers attempt to understand convolutional structures from a statistical learning perspective. In the same spirit as \cite{zhou2020theory}, the authors of \cite{oono2019approximation} show that any block-sparse fully connected neural network with $M$ blocks can be realized by a ResNet-type convolutional neural network with fixed-sized channels and filters by adding $O(M)$ parameters. Consequently, if a function class can be approximated by block-sparse fully connected nets with optimal rates, then the same rates can also be achieved by ResNet-type convolutional nets. The authors prove that ResNet-type convolutional neural networks can reach optimal learning rates for functions in Barron and H\"{o}lder classes by first transforming a fully connected net to a block-sparse one, and then to a ResNet-type convolutional net. The authors of \cite{mao2021theory} first establish approximation results for deep convolutional neural networks with one fully connected layer when the target function takes a composite form $f \circ Q$ with a feature polynomial $Q$ and a univariate function $f$. Two groups of convolutional structures are applied to approximate the functions $Q$ and $f$ consecutively. Mean squared error rates are then derived for the function class $f\circ Q$. Their analysis shows that the estimation error decreases to a minimum as the network depth increases to an optimal value, and then increases as the depth becomes larger. According to the mini-max lower bound derived in our paper, this rate is sub-optimal. 
Universal consistency of pure convolutional structures was recently established by \cite{lin2021universal} in the framework of empirical risk minimization. Compared with previous works on mean squared error analysis of deep convolutional neural networks, we show that functions possessing inherent structures $\xi \cdot x$ in an additive form can be learned directly by deep convolutional networks. Formally, we consider mean squared error analysis for deep ReLU convolutional neural networks in the estimation of additive ridge functions. The function class takes the form $$\sum_{i=1}^m f_i(\xi_i \cdot x),$$ where for each $i$, $f_i$ satisfies some regularity conditions, $\xi_i$ can be considered as a projection vector and $\cdot$ is the inner product in $\mathbb{R}^d$. This function class is also known as the additive index models \cite{yuan2011identifiability} in statistics and is related to the projection pursuit regression introduced by \cite{friedman1981projection}. It has been shown in \cite{diaconis1984nonlinear} that this function class can be used to approximate any square-integrable function to arbitrary precision. A precise definition will be presented in Section 3. For additive ridge functions, various investigations have been carried out concerning convergence rates, identifiability and iterative estimation \cite{chen1991estimation,chiou2004quasi,ruan2010dimension,yuan2011identifiability}. However, none of these works presents a mini-max lower rate for this type of functions. Though optimal convergence rates are achieved in \cite{ruan2010dimension} by traditional reproducing kernels with a regularized least squares scheme, shortcomings of this method are quite obvious when compared with neural networks. For example, to achieve optimal convergence rates, the best regularization parameter is usually related to the regularity of the target function, which is unknown in practice and is often chosen by cross validation. 
However, convolutional neural networks are more automatic in the sense that the function smoothness is not required when implementing the algorithm. Also, the smoothing spline algorithm is applied in a backfitting manner, whereas the convolutional neural network is more generic. We show that deep convolutional neural networks followed by one fully connected layer are able to achieve the mini-max optimal learning rates for additive ridge functions by a careful analysis of the covering number of deep convolutional structures. The additive index $m$ affects the depth of our networks linearly, which indicates the importance of network depth when the target function is complicated. The inherent structures $\xi_i \cdot x$ can represent different localized features of the input $x$ when the $\xi_i$'s are sparse. The derived mini-max rate explains why, for additive ridge functions, deep convolutional neural networks can avoid the curse of dimensionality. In summary, the contributions of our work are as follows: \begin{itemize} \item We conduct a delicate covering number analysis for the deep convolutional neural networks presented in \cite{fang2020theory}. Thanks to the simple structure and few free parameters of convolutional neural networks, we derive a small complexity bound for this hypothesis space. \item We present a mini-max lower bound for additive ridge functions. As a direct consequence, the lower bound also applies when $\xi \cdot x$ is generalized to any polynomial function $Q(x)$. We show that for these two types of functions, lower rates are dimension independent. \item By combining the approximation result in \cite{fang2020theory} and our covering number analysis, we show that deep ReLU convolutional neural networks followed by one fully connected layer can reach optimal learning rates for the regression problem for additive ridge functions and can avoid the curse of dimensionality. 
\end{itemize} \section{Problem Setting} \subsection{Network Structures} We first give a brief introduction to the structures of a fully connected layer and of the convolutional neural networks considered in this work. A fully connected layer with input $x=(x_1,x_2,\ldots, x_d)\in \RR^d$, a full matrix $F\in \mathbb{R}^{d' \times d}$ and a bias vector $b \in \mathbb{R}^{d'}$ can be defined as \begin{align*} H(x)=\s \left(F \cdot x- b\right), \end{align*} where $H(x)$ is a vector in $\mathbb{R}^{d'}$ and $\sigma$ is the activation function acting component-wise. Without any sparsity condition on this matrix, the number of free parameters in one matrix would be $d'd$, which is huge when the dimension is large. This leads to explosive growth in capacity and computational cost when the network is deep. For convolutional neural networks, the number of free parameters can be decreased dramatically by the nature of convolution. For a sequence $w = \left(w_k\right)_{k\in \ZZ}$ supported in $\left\{0,1,\cdots,S\right\}$ and another one $x= \left(x_k\right)_{k\in\ZZ}$ supported in $\left\{1,2,\cdots,D\right\}$, the convolution of these two sequences is given by $$(w*x)_i=\sum_{k\in\ZZ} w_{i-k} x_k =\sum_{k=1}^D w_{i-k}x_k, \qquad i\in\ZZ,$$ which is supported in $\left\{1,\cdots, D+S\right\}$. 
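As a minimal numerical sketch (illustrative only; the function name, filter and input values below are not from the paper), the convolution of a filter supported on $\{0,\dots,S\}$ with an input supported on $\{1,\dots,D\}$ can be computed and its support checked directly:

```python
import numpy as np

def conv_seq(w, x):
    """Convolution (w * x)_i = sum_k w_{i-k} x_k for a filter w
    supported on {0, ..., S} and an input x supported on {1, ..., D};
    the output is supported on {1, ..., D + S}."""
    S, D = len(w) - 1, len(x)
    out = np.zeros(D + S)
    for i in range(1, D + S + 1):      # output index i
        for k in range(1, D + 1):      # input index k
            if 0 <= i - k <= S:
                out[i - 1] += w[i - k] * x[k - 1]
    return out

w = np.array([1.0, 2.0, 3.0])        # S = 2
x = np.array([1.0, 1.0, 1.0, 1.0])   # D = 4
print(conv_seq(w, x))                # [1. 3. 6. 6. 5. 3.], length D + S = 6
```

The result agrees with NumPy's full convolution `np.convolve(w, x)`, and the output length $D+S$ matches the claimed support.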
By applying this convolutional operation, we know that the matrix in each layer of convolutional neural networks should be of the form \begin{equation}\label{Toeplitz} T^{w}=\left[\begin{array}{lllllll} {w_{0}} & {0} & {0} & {0} & {\cdots} & {\cdots} & {0} \\ {w_{1}} & {w_{0}} & {0} & {0} & {\cdots} & {\cdots} & {0} \\ {\vdots} & {\ddots} & {\ddots} & {\ddots} & {\ddots} & {\ddots} & {\vdots} \\ {w_{S}} & {w_{S-1}} & {\cdots} & {w_{0}} & {0} & {\cdots} & {0} \\ {0} & {w_{S}} & {\cdots} & {w_{1}} & {w_{0}} & {0 \cdots} & {0} \\ {\vdots} & {\ddots} & {\ddots} & {\ddots} & {\ddots} & {\ddots} & {\vdots} \\ {0} & {\cdots} & {0} & {w_{S}} & {\cdots} & {w_{1}} & {w_{0}} \\ {0} & {\cdots} & {0} & {0} & {w_{S}} & {\cdots} & {w_{1}} \\ {\vdots} & {\ddots} & {\ddots} & {\ddots} & {\ddots} & {\ddots} & {\vdots} \\ {0} & {0} & {0} & {0} & {\cdots} & {0} & {w_{S}} \end{array}\right], \end{equation} with $T^{w} \in \mathbb{R}^{(D+S) \times D}$. We can see that the number of free parameters of a convolutional matrix, compared to that of a fully connected layer, is greatly reduced from $d'd$ to $S+1$. Denote $T^{(j)} := T^{w^{(j)}}$ with $D=d_{j-1}$ and $S=S^{(j)}$ for $j=1,\cdots,J$. Then, with $h^{(0)}(x)=x$, convolutional neural networks can be defined iteratively by \begin{align*} h^{(j)}(x)=\s \left(T^{(j)}\ h^{(j-1)}(x)-b^{(j)}\right), \qquad j=1,\ldots, J. \end{align*} Throughout this paper, we take identical filter length $S^{(j)} = S \in \NN$, which implies $d_{j} = d+jS$, and take the activation function to be the ReLU, $$ \sigma(x) = \max \left\{0,x\right\},~~~~~~~x \in \RR.$$ Since the sums of the rows in the middle of the Toeplitz-type matrix (\ref{Toeplitz}) are equal, we impose a restriction on the bias vectors $\{b^{(j)}\}_{j=1}^J$ of the convolutional layers \begin{equation}\label{restrrow} b^{(j)}_{S+1} = \ldots = b^{(j)}_{d_j -S}, \qquad j=1, \ldots, J. 
\end{equation} After the last convolutional layer, we add one fully connected layer $h^{(J+1)}$ with a restricted matrix $F^{(J+1)}$ and a bias vector $b^{(J+1)}$. Precisely, we have \begin{align*} h^{(J+1)}(x)=\s\left(F^{(J+1)} h^{(J)}(x)-b^{(J+1)}\right). \end{align*} For the fully connected layer, we take $F^{(J+1)}_{(j-1)(2N+3) +i}= 1$ for $j=1, \ldots, m, \ i=1, \ldots, 2N+3$ and 0 otherwise. We let $\textbf{1}_{2N+3}=(1,1,\ldots,1)^T\in \RR^{2N+3}$. Specifically, the matrix in the last layer takes the form \begin{equation}\label{fullmatrices} F^{(J+1)} = \left[\begin{array}{ccccc} O&\textbf{1}_{2N+3} & O & \cdots\cdots & O \\ O&O & \textbf{1}_{2N+3} & O \cdots & O \\ \vdots&\vdots & \ddots & \ddots & \vdots \\ O&O & \cdots & \textbf{1}_{2N+3} & O \end{array}\right] , \end{equation} where $F^{(J+1)}\in\RR^{(2N+3) m \times (d+JS)}$ for some positive integer $N\in\NN$, which can be considered as the order of the number of free parameters in the network. We restrict the full matrix to the simple form (\ref{fullmatrices}) to further demonstrate the abilities of convolutional structures. The hypothesis space induced by our network is given by \begin{equation} \begin{aligned}\label{hypothesisspace} \mathcal{H} := &\mathcal{H}(J,B,S,N) \\ :=&\left\{ {c \cdot h^{(J+1)}(x):\left\|w^{(j)}\right\|_\infty \leq B,}\right.\\ &\left. {\left\|b^{(j)}\right\|_\infty \leq 2((S+1)B)^j,\left\|c\right\|_\infty \leq NB}\right\}, \end{aligned} \end{equation} where $F^{(J+1)}$ takes the form (\ref{fullmatrices}) with $\left\|F^{(J+1)}\right\|_\infty \leq 1$. Here $B$ is a constant depending on the target function space and the filter size $S$ that will be given explicitly in \textbf{Lemma \ref{lemma:upperboundforbias}}. \subsection{Statistical Learning Framework} Now we formulate regression problems in the setting of statistical learning theory. 
Let $\mathcal{X}$ be the unit ball in $\RR^d$, that is, $\XX := \left\{x: \left\|x\right\|_2 \leq 1, x \in \RR^d\right\}$, and let $\mathcal{Y} \subset \RR$. In the non-parametric regression model, we observe $n$ i.i.d. vectors $x_i \in \XX$ and $n$ responses $y_i \in \RR$ from the model \begin{align*} y_i= f^*\left(x_i\right) + \epsilon_i, ~~~~i = 1, \cdots,n, \end{align*} where the noise variables $\epsilon_i$ are assumed to satisfy $\EE\left(\epsilon_i | x_i\right) = 0$, and it is common to assume a standard normal distribution for the noise variables. Our goal is to recover the function $f^*$ from the sample $\left\{(x_i,y_i)\right\}_{i=1}^n$. In statistical learning theory, the regression framework is described by some unknown probability measure $\rho$ on the set $\mathcal{Z} =\XX \times \YY$. The target is to learn the regression function $f_\rho(x)$ given by $$f_\rho(x) = \int_{\mathcal{Y}} y d\rho(y|x),~~~x\in \mathcal{X},$$ where $\rho(y|x)$ denotes the conditional distribution of $y$ at point $x$ induced by $\rho$. It is easy to see that these two settings are equivalent by letting $$f_\rho (x)= f^*(x).$$ For any measurable function, we define the $L^2$ prediction error as $$\mathcal{E}(f):= \int_{\mathcal{Z}} (f(x)-y)^2 d\rho,$$ and clearly $f_\rho(x)$ is the minimizer of it among all measurable functions. We assume that the sample $D= \left\{\left(x_i,y_i\right)\right\}_{i=1}^n$ is drawn from the unknown probability measure $\rho$. Let $\rho_{\mathcal{X}}$ be the marginal distribution of $\rho$ on $\mathcal{X}$ and $\left(L^2_{\rho_{\mathcal{X}}}, \left\| \cdot \right\|_{\rho_\XX}\right)$ be the space of $\rho_{\mathcal{X}}$ square-integrable functions on $\mathcal{X}$. For any $f \in L^2_{\rho_{\mathcal{X}}}$, a simple calculation shows that \begin{align*} \mathcal{E}(f) - \mathcal{E}(f_\rho) = \left\|f-f_\rho\right\|^2_{\rho_\mathcal{X}}. \end{align*} Since the distribution $\rho$ is unknown, we cannot find $f_\rho$ directly. 
We thus use \begin{align}\label{empiricalminimizer} f_{D,\mathcal{H}} = \arg \min_{f\in\mathcal{H}} \mathcal{E}_D(f), \end{align} to approximate $f_\rho$, where $\mathcal{H}$ is our hypothesis space and $\mathcal{E}_D(f)$ is the empirical risk induced by the sample, given by \begin{align*} \mathcal{E}_D(f) := \frac{1}{n} \sum_{i=1}^n \left(f(x_i)-y_i\right)^2. \end{align*} The target of mean squared error analysis is to derive the convergence rate of $\left\|f_{D,\mathcal{H}}-f_\rho\right\|^2_{\rho_\mathcal{X}}$. We assume that $|y| \leq M$ almost everywhere, which implies $\left|f_\rho(x)\right| \leq M$. We project the output function $f: \mathcal{X} \rightarrow \mathbb{R}$ onto the interval $[-M,M]$ by the projection operator \begin{equation*} \pi_{M}f(x) =\left\{ \begin{array}{lll} f(x),~~&\text{if}~ -M \leq f(x) \leq M, \\ M,~~~~&\text{if}~ f(x) > M,\\ -M,~~~~&\text{if}~ f(x) < -M, \end{array} \right. \end{equation*} and we consider $\pi_{M}f_{D,\mathcal{H}}$ as our estimator of $f_\rho(x)$. This type of clipping operator has been widely used in various statistical learning papers such as \cite{suzuki2018adaptivity,oono2019approximation,mao2021theory}. \subsection{Additive Ridge Functions} Additive ridge functions take the form \begin{equation}\label{additiveridge} f(x) =\sum_{j=1}^m g_j (\xi_j \cdot x), \end{equation} where $m$ is considered as a fixed constant. For each $j$, $g_j(\cdot): \RR \rightarrow \RR$ is a univariate function, $\xi_j \cdot x$ represents the inner product in $\RR^d$ with $\xi_j \in \RR^{d}$, and $\left\|\xi_j\right\|$ is bounded by some constant $\Xi$. Since this constant $\Xi$ would only affect our results in a linear way, for simplicity of the proof we take $\Xi=1$. 
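To make these definitions concrete, here is a small Python sketch (the component functions $g_1, g_2$ and directions $\xi_1, \xi_2$ are illustrative choices, not taken from the paper) that evaluates an additive ridge function with $m=2$ and applies the projection operator $\pi_M$:

```python
import numpy as np

# Illustrative additive ridge function f(x) = g1(xi1 . x) + g2(xi2 . x)
# with Lipschitz-1 components and ||xi_j||_2 <= 1 (hypothetical choices).
g1 = np.abs                          # Lipschitz-1 on [-1, 1]
g2 = lambda t: np.maximum(t, 0.0)    # ReLU, also Lipschitz-1
xi1 = np.array([1.0, 0.0, 0.0])
xi2 = np.array([0.6, 0.8, 0.0])      # ||xi2||_2 = 1

def f(x):
    return g1(xi1 @ x) + g2(xi2 @ x)

def pi_M(v, M):
    """Projection (clipping) operator onto [-M, M]."""
    return np.clip(v, -M, M)

x = np.array([0.5, -0.5, 0.0])       # a point in the unit ball
print(f(x))                          # |0.5| + max(-0.1, 0) = 0.5
print(pi_M(3.2, 1.0))                # clipped to 1.0
```

Note that $\pi_M$ never moves a function value that is already in $[-M, M]$, so clipping can only decrease the prediction error when $|f_\rho| \leq M$.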
Specifically, we require that $0 < \left\|\xi_j\right\| \leq 1$ and we take $g_j \in W^{\alpha}_\infty \left([-1,1]\right)$, the space of Lipschitz-$\alpha$ functions on $[-1,1]$ with the semi-norm $\left| \cdot \right|_{W^{\alpha}_\infty}$ being the Lipschitz constant. We let $G = \max_{j=1,\cdots,m} \left\|g_j\right\|_\infty$. As mentioned before, (\ref{additiveridge}) is also known as the additive index model \cite{yuan2011identifiability} and is related to the projection pursuit regression \cite{friedman1981projection}. In particular, when $m=d$ and $(\xi_1,\cdots,\xi_m)$ is a permutation matrix, this function class reduces to the additive model \cite{hastie2017generalized}, and it reduces to the single index model \cite{duan1991slicing,hardle1993optimal,ichimura1993semiparametric} when $m=1$. More precisely, we consider the function space \begin{equation} \begin{aligned}\label{functionspace} \Theta :=&\Theta(m,\alpha,G,L)\\ :=&\{{f(x) =\sum_{j=1}^m g_j (\xi_j \cdot x) : g_j \in W^{\alpha}_\infty \left([-1,1]\right),}\\ &{ 0 < \left\|\xi_j\right\| \leq 1, \left\|g_j\right\|_\infty \leq G, \left|g_j\right|_{W^\alpha_\infty}\leq L}\}, \end{aligned} \end{equation} and we assume that the target function $f_\rho$ is in the set $\Theta$. \section{Main Results} \subsection{Covering Number Analysis of Deep Convolutional Neural Networks} The mean squared error analysis relies on the approximation abilities and the covering number of the hypothesis space. Before presenting the covering number analysis for the hypothesis space $\mathcal{H}$ (\ref{hypothesisspace}), we first state \textbf{Theorem 2} in \cite{fang2020theory}, which presents the approximation error for functions in the space (\ref{functionspace}) by deep convolutional neural networks. \begin{theorem}\label{approximationerror} Let $m\in \mathbb{N}$, $d\geq 3$, $2 \leq S \leq d$, $J= \left\lceil \frac{md-1}{S-1}\right\rceil$, and $N \in \NN$. 
If $f \in \Theta$, then there exists a deep neural network consisting of $J$ layers of CNNs with filters of length $S$ and bias vectors satisfying (\ref{restrrow}), followed by one fully connected layer $h^{(J+1)}$ with width $m(2N+3)$ and connection matrix $F^{(J+1)}$ defined as (\ref{fullmatrices}), such that for some coefficient vector $c^{(J+1)}\in \RR^{m(2N+3)}$ there holds $$\left\|f - c^{(J+1)}\cdot h^{(J+1)}\right\|_\infty \leq \sum_{j=1}^m |g_j|_{W^\alpha_\infty} N^{-\alpha}. $$ The total number of free parameters $\mathcal{N}$ in the network can be bounded as $$ \mathcal{N} \leq (3S+2)\left\lceil \frac{md-1}{S-1}\right\rceil +m(2N+2). $$ \end{theorem} To calculate the covering number of our target space, we first need the Cauchy bound for polynomials and Vieta's formulas to bound the infinity norm of the filters in each layer. Proofs will be given in the supplementary materials. \begin{lemma}\label{lemma:cauchy} If $W = \left\{W_j\right\}_{j \in \mathbb{Z}}$ is a real sequence supported in $\left\{0,\cdots,K\right\}$ with $W_K =1$, then all the complex roots of its symbol $\tilde{W}(z) = \sum_{j=0}^K W_j z^j$ are bounded by $1 + \max_{j=0,\cdots,K-1} \left|W_j\right|$, the Cauchy bound of $\tilde{W}$. If we factorize $\tilde{W}$ into polynomials of degree at most $S$, then all the coefficients of these factor polynomials are bounded by $$2^S\left(1+ \max_{j=0,\cdots,K-1}\left|W_j\right|\right)^S.$$ \end{lemma} By applying the previous lemma, we are able to bound the magnitude of the filters in each layer. \begin{lemma}\label{lemma:upperboundforbias} Let $2\leq S \leq d$. 
For the deep convolutional neural networks constructed in this paper with $J$ convolutional layers and $1$ fully connected layer satisfying \textbf{Theorem \ref{approximationerror}}, there exists a constant $B = B_{\xi, S,G}$ depending on $\xi$, $S$ and $G$ such that $$ \left\|w^{(j)}\right\|_\infty \leq B,~~~j=1,\cdots,J,$$ and $$\left\|F^{(J+1)}\right\|_\infty \leq 1,~~~\left\|c^{(J+1)}\right\|_\infty \leq NB,$$ where $B$ is given by $$B = \max \left\{ 2^S \left(1 + \left| \frac{1}{ \left(\xi_m\right)_l} \right|\right)^S,4G \right\}.$$ \end{lemma} After bounding the filters, we can bound the bias vectors and output functions in each layer. \begin{lemma}\label{lemma:biasandoutput} Let $2\leq S \leq d$. For the deep convolutional neural networks constructed in this paper with $J$ convolutional layers and $1$ fully connected layer satisfying \textbf{Theorem \ref{approximationerror}}, we have for $j = 1,\cdots , J+1$ $$ \left\|b^{(j)}\right\|_\infty \leq 2\left(\left(S+1\right)B\right)^j,$$ and \begin{align}\label{outputbound} \left\|h^{(j)}(x)\right\|_\infty\leq (2j+1)((S+1)B)^j. \end{align} \end{lemma} After bounding all the filters, bias vectors and output functions in each layer, we can derive a bound for the covering number of our hypothesis space $\mathcal{H}$ (\ref{hypothesisspace}) as stated in the lemma below. The covering number $\mathcal{N} \left(\eta, \HH\right)$ of a subset $\mathcal{H}$ of $C(\mathcal{X})$ is defined for $\eta>0$ to be the smallest integer $l$ such that $\HH$ is contained in the union of $l$ balls in $C(\mathcal{X})$ of radius $\eta$. \begin{lemma}\label{lemma:coveringnumberbound} For $N \in \mathbb{N}$ and $\mathcal{H}$ given in (\ref{hypothesisspace}), with two constants $C'_{S,d,m,B}$ and $C''_{S,d,m,B}$ depending on $S$, $d$, $m$ and $B$, there holds \begin{align*} \log \mathcal{N} \left(\delta, \mathcal{H}\right) \leq C'_{S,d,m,B}N \log\frac{1}{\delta} + C''_{S,d,m,B}N \log N, \end{align*} for any $0<\delta \leq 1$. 
\end{lemma} The constants $C'_{S,d,m,B}$ and $C''_{S,d,m,B}$ depend on $m$, $S$ and $d$ at most in a cubic way. Since we treat $m$, $S$ and $d$ as fixed constants in our setting, the covering number is affected by $N$ in a linear way, where $N$ is the order of the number of free parameters in the last layer. The above lemma shows that the hypothesis space $\mathcal{H}$ has a relatively small covering number. Consequently, we are able to derive optimal learning rates for convolutional neural networks. \subsection{Oracle Inequality for Empirical Risk Minimization} The following theorem presents the oracle inequality for empirical risk minimizers based on covering number estimates. \begin{theorem}\label{theorem:coveringnumber} Suppose that there exist constants $C_1$, $C_2>0$ and some real numbers $n_1$, $n_2>0$, such that \begin{align}\label{coveringnumbercondition} \log \mathcal{N}\left( \delta, \mathcal{H}\right) \leq C_1n_1 \log \frac{1}{\delta} + C_2n_2 \log {n_2}, ~~\forall \delta>0. \end{align} Then for any $h\in \mathcal{H}$ and $\delta>0$, the bound \begin{align*} \left\| \pi_{M}f_{D,\mathcal{H}} - f_\rho\right\|^2_{\rho_{\mathcal{X}}} \leq \delta + 2 \left\|h - f_\rho\right\|_{\rho_{\mathcal{X}}}^2, \end{align*} holds with probability at least $1-\exp \left\{C_1n_1 \log \frac{16M}{ \delta} + C_2n_2\log{n_2} - \frac{3n \delta}{512M^2}\right\} - \exp \left\{\frac{-3n \delta^2}{16\left(3M + \left\|h\right\|_\infty\right)^2 \left(6\left\|h -f_{\rho}\right\|_{\rho_\mathcal{X}}^2 + \delta\right)}\right\}.$ \end{theorem} \subsection{Mean Squared Error} By applying \textbf{Theorem \ref{approximationerror}} and \textbf{Theorem \ref{theorem:coveringnumber}} we can obtain our main result on the upper bound of the mean squared error. \begin{theorem}\label{main result: regression} Let $2 \leq S \leq d$, $0<\alpha\leq1$ and $\mathcal{H}$, $\Theta$ be defined as (\ref{hypothesisspace}) and (\ref{functionspace}). 
If $f_\rho \in \Theta$, then for $N \in \mathbb{N}$, we have \begin{align*} \mathbb{E} \left\{ \left\|\pi_{M}f_{D,\mathcal{H}} - f_\rho\right\|_{\rho_{\mathcal{X}}}^2\right\} \leq C \max\left\{N^{-2\alpha}, \frac{N \log{N}}{n}\right\}, \end{align*} where the constant $C=C_{S,d,m,M,\alpha,B}$ is independent of the sample size $n$ and of $N$. In particular, if we choose $N = \left\lceil n^{\frac{1}{1+2\alpha}} \right\rceil$, then \begin{align*} \mathbb{E} \left\{ \left\|\pi_M f_{D,\mathcal{H}} - f_\rho\right\|_{\rho_{\mathcal{X}}}^2\right\} \leq C n^{\frac{-2\alpha}{1+2\alpha}} \log n. \end{align*} \end{theorem} \begin{proof}[Proof of Theorem \ref{main result: regression}] We know from \textbf{Theorem \ref{approximationerror}} that there exists some $h\in \mathcal{H}$ such that $\left\|h - f_\rho\right\|_{\rho_\XX} \leq \left\|h - f_\rho\right\|_\infty \leq C_{\alpha,m} N^{-\alpha}$, where $C_{\alpha,m} = \sum_{j=1}^m |g_j|_{W^\alpha_\infty}$. We further know that $\left\|h\right\|_\infty \leq M + C_{\alpha,m}.$ By applying \textbf{Theorem \ref{theorem:coveringnumber}}, the bound \begin{equation*} \begin{aligned} \mathcal{E}\left(\pi_M f_{D,\mathcal{H}}\right) - \mathcal{E} \left(f_\rho\right) \leq 2\left\|h - f_\rho\right\|^2_{\rho_{\mathcal{X}}} + \epsilon, \end{aligned} \end{equation*} holds with probability at least $1 - \exp\left\{C_1 N \log \frac{16M}{\epsilon} + C_2 N \log {N} -\frac{3n\epsilon}{512M^2}\right\}- \exp \left\{\frac{-3n \epsilon^2}{16\left(4M + C_{\alpha,m}\right)^2 \left(6C_{\alpha,m}^2N^{-2\alpha} + \epsilon\right)}\right\}$, where $C_1=C'_{S,d,m,B}$ and $C_2 = C''_{S,d,m,B}$. 
If we let $$\epsilon \geq 6C^2_{\alpha,m} N^{-2\alpha},$$ then the bound \begin{equation*} \begin{aligned} \mathcal{E}\left(\pi_M f_{D,\mathcal{H}}\right) - \mathcal{E} \left(f_\rho\right) \leq 2\epsilon, \end{aligned} \end{equation*} holds with probability at least $ 1 - \exp\left\{C_3N \log {N} -\frac{3n\epsilon}{512 M^2}\right\}- \exp \left\{- \frac{3n \epsilon}{32 \left( 4M +C_{\alpha,m} \right)^2 }\right\}$, where $C_3 =(C_1\log\frac{8M}{3C^2_{\alpha,m}}+2\alpha C_1 + C_2 ).$ If we further let $$\epsilon \geq \frac{1024C_3M^2N \log {N}}{3n},$$ then the bound \begin{equation*} \begin{aligned} \mathcal{E}\left(\pi_M f_{D,\mathcal{H}}\right) - \mathcal{E} \left(f_\rho\right) \leq 2\epsilon, \end{aligned} \end{equation*} holds with probability at least $1 - \exp\left\{ -\frac{3n\epsilon}{1024M^2}\right\}- \exp \left\{- \frac{3n\epsilon}{32 \left( 4M + C_{\alpha,m} \right)^2 }\right\}.$ By taking $$C_4 = \max \left\{12C^2_{\alpha,m},\frac{2048}{3}M^2C_3,\frac{64}{3}\left(4M+C_{\alpha,m}\right)^2\right\},$$ and $ \tilde{\epsilon} = 2 \epsilon$, we have \begin{equation*} \begin{aligned} \mathbb{P} \left\{\mathcal{E}\left(\pi_M f_{D,\mathcal{H}}\right) - \mathcal{E} \left(f_\rho\right) \leq \tilde\epsilon \right\} \geq 1 - 2\exp\left\{ -\frac{n\tilde\epsilon}{C_4}\right\}, \end{aligned} \end{equation*} for any $\tilde \epsilon \geq C_4 \max \left\{N^{-2\alpha}, \frac{N \log N}{n}\right\}$. Then by letting $\delta = 2\exp\left\{ -\frac{n\tilde\epsilon}{C_4}\right\}$, we know that \begin{equation*} \begin{aligned} \mathcal{E}\left(\pi_M f_{D,\mathcal{H}}\right) - \mathcal{E} \left(f_\rho\right) \leq C_4 \max\left\{\frac{N \log N}{n},N^{-2\alpha}, \frac{\log{\frac{2}{\delta}}}{n} \right\}, \end{aligned} \end{equation*} holds with probability at least $1 - \delta$. 
Now we apply $\mathbb{E} \left\{\xi\right\} = \int_0^\infty \mathbb{P} \left\{\xi \geq t\right\} dt$ with $\xi = \mathcal{E}\left(\pi_Mf_{D,\mathcal{H}}\right) - \mathcal{E} \left(f_\rho\right) = \left\|\pi_Mf_{D,\mathcal{H}} - f_\rho\right\|^2_{\rho_\XX}$. We have \begin{align*} \mathbb{E} \left[\left\|\pi_M f_{D,\mathcal{H}} - f_\rho\right\|^2_{\rho_\XX}\right] &\leq \int_0^{T} 1 dt + \int_T^\infty 2\exp\left\{\frac{-nt}{C_4}\right\}dt \\ &\leq T +\frac{2C_4}{n} \leq 3T, \end{align*} with $T = C_4 \max \left\{N^{-2\alpha}, \frac{N \log N}{n}\right\}.$ This finishes the proof with the constant $C_{S,d,m,M,\alpha,B}$ independent of $n$ or $N$ given by \begin{align*} &C_{S,d,m,M,\alpha,B} \\ = &3\max \left\{12C^2_{\alpha,m},\frac{2048}{3}M^2C_3,\frac{64}{3}\left(4M+C_{\alpha,m}\right)^2\right\}, \end{align*} where $C_3 =(C_1\log\frac{8M}{3C^2_{\alpha,m}}+2\alpha C_1 + C_2 )$, $C_1 = C'_{S,d,m,B}$ and $C_2= C''_{S,d,m,B}$. \end{proof} \subsection{Lower Bound for Additive Ridge Functions} In this subsection, we present the mini-max lower rate for estimating additive ridge functions. We let $\mathcal{M}(\rho, \Theta)$ be the class of all Borel measures $\rho$ on $\mathcal{X} \times \mathcal{Y}$ such that $f_\rho \in \Theta$; that is, $\mathcal{M}(\rho, \Theta)$ is the set of distributions $\rho$ from which the data $\left\{\textbf{x}_i,y_i\right\}_{i=1}^n$ may be drawn, with $\Theta$ the set of target functions. Now we state our mini-max lower bound for the class $\mathcal{M}(\rho, \Theta)$. \begin{theorem}\label{lowerbound1} Assume $m \geq 1$, $G>0$ and $M\geq 4mG$. Let $\hat{f}_n(x)$ be the output of any learning algorithm based on the sample $\left\{\textbf{x}_i,y_i\right\}_{i=1}^n$; then we have \begin{align*} \inf_{\hat{f}_n} \sup_{\rho \in\mathcal{M}(\rho,\Theta)} \mathbb{E} \left\|\hat{f}_n(x) - f_\rho(x)\right\|^2_{L^2_{\rho_{\mathcal{X}}}} \geq c_{m,G} n^{-\frac{2\alpha}{2\alpha +1}}. 
\end{align*} \end{theorem} Since $\xi \cdot x$ is essentially a polynomial of $x$, we can obtain a mini-max lower bound for functions of the type $f \circ Q$ as a direct consequence, where $Q(x)$ can be any polynomial of $x$. Formally, we define \begin{equation*} \begin{aligned} \Theta' :=&\Theta'(m,\alpha,G,L)\\ :=&\{{f(x) =\sum_{j=1}^m g_j (Q(x)) : g_j \in W^{\alpha}_\infty \left([-1,1]\right),}\\ &{ \left\|g_j\right\|_\infty \leq G, \left|g_j\right|_{W^\alpha_\infty} \leq L}\}, \end{aligned} \end{equation*} where $Q(x)$ denotes any polynomial of $x$. \begin{corollary}\label{lowerbound2} Assume $m \geq 1$, $G>0$ and $M\geq 4mG$. Let $\hat{f}_n(x)$ be the output of any learning algorithm based on the sample $\left\{\textbf{x}_i,y_i\right\}_{i=1}^n$; then we have \begin{align*} \inf_{\hat{f}_n} \sup_{\rho \in\mathcal{M}(\rho,\Theta')} \mathbb{E} \left\|\hat{f}_n(x) - f_\rho(x)\right\|^2_{L^2_{\rho_{\mathcal{X}}}} \geq c_{m,G} n^{-\frac{2\alpha}{2\alpha +1}}. \end{align*} \end{corollary} \begin{proof}[Proof of Theorem \ref{lowerbound1}] First, we associate a probability measure $\rho_f \in \mathcal{M}(\rho,\Theta)$ to a pair $(\mu,f)$, where $\mu$ is a measure on $\mathcal{X}$ and $f \in \Theta$. We assume that $\mu$ is upper and lower bounded by constants $\tau_1$ and $\tau_2$. Now we define a probability measure $\rho_f$ by \begin{equation}\label{measure} \begin{aligned} &d\rho_f(x,y)\\ =&\left[\frac{T+f(x)}{2T}d\delta_T(y) + \frac{T-f(x)}{2T}d\delta_{-T}(y)\right]d\mu(x), \end{aligned} \end{equation} where $T=4mG$ and $d\delta_T$ denotes the Dirac delta with unit mass at $T$. It can be verified that $\rho_f$ is a probability measure on $\mathcal{X} \times \mathcal{Y}$ with $\mu$ being the marginal distribution $\rho_X$ and $f$ the regression function. Moreover, $M \geq 4mG$ ensures $\left|y\right| \leq M$ almost surely. 
Hence for any $f\in \Theta$, $\rho_f \in \mathcal{M}(\rho,\Theta).$ We now apply \textbf{Theorem 2.7} in \cite{tsybakov2008introduction} to prove our conclusion. It states that if for some $\widetilde{N} \geq 1$ and $\kappa>0$, $f_0, \cdots, f_{\widetilde{N}} \in \Theta$ are such that \begin{itemize} \item [1.] $\left\|f_i - f_j\right\|^2_{L^2_{\rho_{\mathcal{X}}}} \geq \kappa n^{-\frac{2\alpha}{2\alpha +1}}$ for all $0 \leq i< j\leq \widetilde{N},$ \item [2.] $\frac{1}{\widetilde{N}} \sum_{j=1}^{\widetilde{N}} \text{KL}\left(\rho_j^n \| \rho_0^n\right) \leq \frac{\log{\widetilde{N}}}{9},$ \end{itemize} then there exists a positive constant $c_{\kappa,\tau_1,\tau_2}$ such that \begin{align}\label{tsybakovlowerbound} \inf_{\hat{f}_n} \sup_{\rho \in\mathcal{M}(\rho,\Theta)} \mathbb{E} \left\|\hat{f}_n(x) - f_\rho(x)\right\|^2_{L^2_{\rho_{\mathcal{X}}}} \geq c_{\kappa,\tau_1,\tau_2} n^{-\frac{2\alpha}{2\alpha +1}}. \end{align} Now we construct a finite sequence $f_0,\cdots,f_{\hat{N}_n}$ in the space $\Theta$. First, we let a function $K \in L^2(\mathbb{R}) \cap \text{Lip}^\alpha(\mathbb{R})$ be supported on $[-\frac{1}{2},\frac{1}{2}]$ with Lipschitz constant $\frac{1}{2}L$ and $\left\|K\right\|_\infty \leq G$. Clearly such a function exists. We partition the set $[-1,1]$ into $\hat{N}_n =\lfloor c_\tau n^{\frac{1}{2\alpha+1}} \rfloor$ intervals $\left\{A_{n,k}\right\}_{k=1}^{\hat{N}_n}$ with equal length $\frac{2}{\hat{N}_n}$, centers $\left\{u_k\right\}_{k=1}^{\hat{N}_n}$ and $c_\tau = \frac{2304\tau_1}{15T^2}\left\|K\right\|_2^2 +1$. Now we define the functions $$\psi_{u_k}(x) = \frac{1}{{\hat{N}_n}^\alpha} K\left(\frac{1}{2}\hat{N}_n(x-u_k)\right),~~~\text{for}~ k = 1,\cdots,\hat{N}_n. $$ It is clear that the $\psi_{u_k}(x)$ are Lipschitz-$\alpha$ functions with Lipschitz constant $\frac{1}{2^{1+\alpha}}L$ for $0 < \alpha \leq 1$, for $k = 1,\cdots,\hat{N}_n$, and $\left\|\psi_{u_k}(x)\right\|_\infty \leq G$. 
From the definition above, we can also see that for $u_i \neq u_j,$ $\psi_{u_i}(x)$ and $\psi_{u_j}(x)$ have disjoint supports. Now we consider the set of all binary sequences of length ${\hat{N}_n}$, $$\Omega =\left\{\omega = \left(\omega_1,\cdots,\omega_{\hat{N}_n}\right),\omega_i \in \left\{0,1\right\}\right\} = \left\{0,1\right\}^{\hat{N}_n},$$ and define functions $\phi_\omega(x)$ as $$\phi_\omega(x) = \sum_{k=1}^{\hat{N}_n}\omega_k \psi_{u_k}(x).$$ Now we are going to show that $\phi_\omega(x)$ is a Lipschitz-$\alpha$ function with Lipschitz constant $L$ and $\left\|\phi_\omega(x) \right\|_\infty \leq G$ for any $\omega$. The sup-norm bound can be checked easily by noticing that this is a summation of functions with disjoint supports and $|\omega_k| \leq 1$. Now we are going to check the Lipschitz constant. If $x,y \in A_{n,i}$, then we have \begin{equation*} \left|\phi_\omega(x)-\phi_\omega(y)\right| = \left|\psi_{u_i}(x)-\psi_{u_i}(y)\right| \leq L \left|x-y\right|^\alpha. \end{equation*} If $x\in A_{n,i}$ and $y\in A_{n,j}$ for $i \neq j$, let $\bar{x}$ and $\bar{y}$ be the boundary points of $A_{n,i}$ and $A_{n,j}$ between $x$ and $y$. Then we have \begin{equation*} \begin{aligned} &\left|\phi_\omega(x)-\phi_\omega(y)\right|\\ = &\left|\omega_i\psi_{u_i}(x)-\omega_j\psi_{u_j}(y)\right|\\ \leq& \left|\psi_{u_i}(x)\right|+\left|\psi_{u_j}(y)\right|\\ =& \left|\psi_{u_i}(x) - \psi_{u_i}(\bar{x})\right|+\left|\psi_{u_j}(y)-\psi_{u_j}(\bar{y})\right|\\ \leq& 2^{-1-\alpha}L \left(\left|x-\bar{x}\right|^\alpha + \left|y- \bar{y}\right|^\alpha \right)\\ = &2^{-\alpha}L \left(\frac{1}{2}\left|x-\bar{x}\right|^\alpha + \frac{1}{2} \left|y- \bar{y}\right|^\alpha \right)\\ \leq& L \left(\frac{\left|x-\bar{x}\right| + \left|y- \bar{y}\right|}{2}\right)^\alpha\\ \leq& L\left|x-y\right|^\alpha, \end{aligned} \end{equation*} where we have applied Jensen's inequality. Hence $\phi_\omega(x)$ is a Lipschitz-$\alpha$ function with Lipschitz constant $L$. 
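The Lipschitz property of $\phi_\omega$ can be sanity-checked numerically. Below is a minimal sketch for $\alpha = 1$, assuming a concrete triangular bump for $K$ (any kernel with the stated support and Lipschitz constant would do); all names are ours:

```python
import random

L = 2.0            # target Lipschitz constant of phi_omega
N = 10             # number of intervals \hat{N}_n
centers = [-1.0 + (2 * k + 1) / N for k in range(N)]   # centers u_k of A_{n,k}
omega = [random.randint(0, 1) for _ in range(N)]       # a binary sequence omega

def K(t):
    # triangular bump supported on [-1/2, 1/2] with Lipschitz constant L/2
    return (L / 2) * max(0.0, 0.5 - abs(t))

def psi(k, x):
    # psi_{u_k}(x) = N^{-alpha} K(N (x - u_k) / 2), here with alpha = 1
    return K(N * (x - centers[k]) / 2) / N

def phi(x):
    return sum(w * psi(k, x) for k, w in enumerate(omega))

# check |phi(x) - phi(y)| <= L |x - y| on many random pairs
for _ in range(10_000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    assert abs(phi(x) - phi(y)) <= L * abs(x - y) + 1e-12
```

The support of `psi(k, .)` is exactly $A_{n,k}$, so the disjoint-support argument above applies verbatim.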
Any $f \in \Theta$ can be written in the form $\sum_{i=1}^m g_i(\xi_i \cdot x)$ with $g_i \in W^\alpha_\infty[-1,1]$. We can simply take $m=1$, $\xi_1= [1,0,\cdots,0]$ and $g_1(x)=\phi_{\omega}(x_1)$. Then we denote $f_\omega(x)=\phi_\omega(x_1)$ where $x_1$ is the first component of $x\in \mathbb{R}^d$, and clearly $f_\omega \in \Theta$. For any $u_k$, we have $\left\|\psi_{u_k}\right\|_2^2 = \frac{2}{\hat{N}_n^{1+2\alpha}} \left\|K\right\|_2^2$. We use $$\text{Ham} (\omega,\omega') = \sum_{k=1}^{\hat{N}_n} I \left(\omega_k \neq \omega_k'\right)$$ to denote the Hamming distance between the binary sequences $\omega$ and $\omega'$. We know that $$\left\|f_\omega - f_{\omega'}\right\|_2^2 = \text{Ham} (\omega,\omega') \frac{2}{\hat{N}_n^{1+2\alpha}}\left\|K\right\|_2^2.$$ By the Varshamov-Gilbert bound (Lemma 2.9 in \cite{tsybakov2008introduction}), we conclude that there exists a subset $\mathcal{W} \subset \Omega$ of cardinality $\left|\mathcal{W}\right| \geq 2^{\frac{\hat{N}_n}{8}}$ such that $\text{Ham} (\omega,\omega') \geq \frac{\hat{N}_n}{8}$ for all $\omega, \omega' \in \mathcal{W}$, $\omega \neq \omega'$. Then we have $$\left\|f_\omega - f_{\omega'}\right\|_2^2 \geq \frac{1}{4} \hat{N}_n^{-2\alpha} \left\|K\right\|_2^2,~~\text{for}~\omega, \omega' \in \mathcal{W}, \omega \neq \omega'.$$ Further, we know that \begin{equation*} \begin{aligned} \left\|f_\omega - f_{\omega'}\right\|_{L^2_{\rho_\mathcal{X}}}^2 \geq \kappa n^{-\frac{2\alpha}{2\alpha+1}},~~\text{for}~\omega, \omega' \in \mathcal{W}, \omega \neq \omega', \end{aligned} \end{equation*} by taking $\kappa = \frac{1}{4} \tau_2 c_\tau^{-2\alpha} \left\|K\right\|_2^2$. The above inequality verifies condition 1 of \textbf{Theorem 2.7} in \cite{tsybakov2008introduction}. For simplicity, we use $f_i$ to denote $f_{\omega^i}$. Now we consider the KL-divergence $\text{KL}\left(\rho_{f_i} \| \rho_{f_j}\right)$. 
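The packing guaranteed by the Varshamov-Gilbert bound can be illustrated by a brute-force greedy construction for a small $\hat{N}_n$. The lemma itself is non-constructive; the sketch below (our own toy code) only checks that a subset of cardinality at least $2^{\hat{N}_n/8}$ with pairwise Hamming distance at least $\hat{N}_n/8$ exists:

```python
def ham(a, b):
    # Hamming distance between two binary sequences encoded as integers
    return bin(a ^ b).count("1")

N = 16                       # a small \hat{N}_n
d = N // 8                   # required pairwise Hamming distance
target = 2 ** (N // 8)       # required cardinality 2^{N/8}

# greedily collect codewords pairwise at distance >= d, stopping at target
chosen = []
for w in range(2 ** N):
    if all(ham(w, c) >= d for c in chosen):
        chosen.append(w)
    if len(chosen) >= target:
        break

assert len(chosen) >= target
assert all(ham(a, b) >= d for i, a in enumerate(chosen) for b in chosen[:i])
```

For $\hat{N}_n = 16$ the greedy scan finds the required four codewords almost immediately.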
By equation (\ref{measure}), we have $d\rho_{f_i}(x,y) = g(x,y)d\rho_{f_j}(x,y)$ with $$g(x,y)= \frac{T+\text{sign}(y)f_i(x)}{T+\text{sign}(y)f_j(x)} = 1+ \frac{\text{sign}(y)(f_i-f_j)}{T+ \text{sign}(y)f_j}.$$ Then we know that \begin{equation*} \begin{aligned} &\text{KL}\left(\rho_{f_i} \| \rho_{f_j}\right)\\ = &\int_{\mathcal{X}} \frac{T+f_i(x)}{2T} \ln \left(1+ \frac{f_i(x)-f_j(x)}{T+f_j(x)}\right) +\\ &\frac{T-f_i(x)}{2T} \ln \left(1+ \frac{f_i(x)-f_j(x)}{T-f_j(x)}\right) d\mu(x) \\ \leq& \int_{\mathcal{X}}\frac{f_i(x)-f_j(x)}{2T} \left(\frac{T+f_i(x)}{T+f_j(x)} - \frac{T-f_i(x)}{T-f_j(x)}\right)d\mu(x)\\ \leq& \frac{16}{15T^2} \left\|f_i-f_j\right\|^2_{L^2_{\rho_{\mathcal{X}}}}. \end{aligned} \end{equation*} Since $\text{KL}\left(\rho^n_{f_i} \| \rho^n_{f_j}\right) \leq \frac{16}{15T^2} n\left\|f_i - f_j\right\|^2_{L^2_{\rho_{\mathcal{X}}}}$, we have \begin{equation*} \begin{aligned}\text{KL}\left(\rho^n_{f_i} \| \rho^n_{f_j}\right) &\leq\frac{16\tau_1}{15T^2} n\left\|f_i - f_j\right\|^2_2 \\ &\leq \frac{16\tau_1}{15T^2}n\text{Ham} (\omega,\omega') \frac{2}{\hat{N}_n^{1+2\alpha}}\left\|K\right\|_2^2\\ &\leq \frac{2304\tau_1}{15T^2}\left\|K\right\|_2^2 \frac{n}{\hat{N}_n^{1+2\alpha}}\frac{\hat{N}_n}{72}. \end{aligned} \end{equation*} By $c_\tau = \frac{2304\tau_1}{15T^2}\left\|K\right\|_2^2 +1$, we know that \begin{equation*} \begin{aligned}\text{KL}\left(\rho^n_{f_i} \| \rho^n_{f_j}\right) \leq \frac{\hat{N}_n}{72} \leq \frac{\log |\mathcal{W}|}{9}. \end{aligned} \end{equation*} The two conditions of inequality (\ref{tsybakovlowerbound}) are thus satisfied by taking $\widetilde{N} = \left|\mathcal{W}\right|$. Then we have \begin{align*} \inf_{\hat{f}_n} \sup_{\rho \in\mathcal{M}(\rho,\Theta)} \mathbb{E} \left\|\hat{f}_n(x) - f_\rho(x)\right\|^2_{L^2_{\rho_{\mathcal{X}}}} \geq c_{\kappa,\tau_1,\tau_2} n^{-\frac{2\alpha}{2\alpha +1}}. \end{align*} The proof can be applied directly to Corollary \ref{lowerbound2} by noticing that $x_1$ is a polynomial in the input $x$. 
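The pointwise inequality behind the last two bounds, namely that the KL integrand is at most $\frac{16}{15T^2}\left(f_i(x)-f_j(x)\right)^2$ whenever $|f_i|, |f_j| \leq mG = T/4$, can be checked numerically. A minimal sketch:

```python
import math
import random

m, G = 1, 1.0
T = 4 * m * G      # T = 4mG, so |f| <= mG = T/4 for f in Theta

def kl_pointwise(a, b):
    # KL integrand of KL(rho_{f_i} || rho_{f_j}) at a point where f_i = a, f_j = b
    return ((T + a) / (2 * T)) * math.log((T + a) / (T + b)) \
         + ((T - a) / (2 * T)) * math.log((T - a) / (T - b))

# verify the pointwise bound on random values in [-T/4, T/4]
for _ in range(10_000):
    a = random.uniform(-T / 4, T / 4)
    b = random.uniform(-T / 4, T / 4)
    assert kl_pointwise(a, b) <= 16 / (15 * T ** 2) * (a - b) ** 2 + 1e-12
```

The check uses $\ln(1+u) \leq u$ and $T^2 - f_j^2 \geq \frac{15}{16}T^2$, exactly the two steps of the displayed estimate.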
For simplicity, $\left\|K\right\|_2$ can be chosen to be 1. We know that $\tau_1$ and $\tau_2$ are actually upper and lower bounds of the marginal density of $x_1$, the first component of $x$; thus we can simply take $\tau_1=1$ and $\tau_2 = \frac{1}{100}$. Since $\kappa$ is a constant depending on $\tau_1$, $\tau_2$, $m$ and $G$, the constant $c_{\kappa,\tau_1,\tau_2}$ essentially depends on $m$ and $G$. This finishes the proof. \end{proof} Combining this lower bound with the upper bound in the last section, we know that deep convolutional neural networks followed by one fully connected layer can reach optimal learning rates for additive ridge functions up to a log factor. In other words, for estimating $f_\rho \in \Theta$, no other method can achieve a better rate than the deep convolutional neural network estimator. Furthermore, due to the special form of ridge functions, we can observe that this rate is dimension-independent. Thanks to the simple structure of convolutional neural networks, we are able to derive a small complexity bound and reach this dimension-independent rate. \section{Conclusion and Discussion} In this paper, we consider the regression problem in statistical learning theory by using deep convolutional neural networks. By a careful analysis of the covering number of deep convolutional neural networks, we show that for additive ridge functions, deep convolutional neural networks followed by one fully connected layer can reach optimal learning rates (up to a log factor). With the simple structure and few free parameters of convolutional neural networks, we obtain a suitable complexity bound for the hypothesis space and are able to achieve this dimension-independent learning rate, which shows the superiority of the convolutional structure. In future work, we would like to address in which situations convolutional neural networks outperform fully connected neural networks in terms of approximation ability. 
Another open question is whether optimal approximation rates can be derived for larger function classes using convolutional neural networks. Convolutional structures have been widely applied to various types of neural networks and have performed outstandingly in classification problems. However, only a few papers consider classification problems from a statistical perspective by using fully connected neural networks \cite{hu2020sharp,kim2021fast,bos2021convergence}. It would be significant to theoretically establish the superiority of convolutional structures for classification problems in the future.
\section{Introduction} Topological aspects of Fermi systems in solids and ultracold atoms~\cite{WXG,Nayak,ZHSC,Kane,A.Kitaev,Xiangtao}, such as quantum Hall effects~\cite{hall effect,hall effect2,hall effect3,hall effect4}, topological insulation, and topological superconductivity~\cite{TI, TI2}, have attracted much attention over the past three decades. Moreover, a similar topologically nontrivial ground state has been found in the superfluid A-phase of $^3$He films \cite{He3} with chiral $p_{x}+ip_{y}$ symmetry and in the layered triplet $p$-wave superconductor Sr$_2$RuO$_4$ \cite{srruo}, which is expected to host Majorana modes~\cite{majorana, p-wave}. Previous theories have proposed that the zero-energy Majorana bound state can be realized in the vortex core of spinless $p_{x}+ip_{y}$-wave superconductors or superfluids~\cite{p-wave,superfluid}. The zero-energy fermionic modes can be described in terms of self-conjugated Majorana modes, which are also expected to occur in other systems, such as the $\nu=\frac{5}{2}$ fractional quantum Hall state~\cite{qhe,MGreiter} and the surface state of three-dimensional topological insulators with proximity coupling to conventional $s$-wave superconductors~\cite{surface1, surface2, surface3}. One of the simplest effective models realizing a topological superconducting phase supporting Majorana modes is the two-dimensional (2D) chiral $p_{x}+ip_{y}$-wave superconductor. We focus on the impurity effects and the vortex core state structure of a 2D chiral $p$-wave superconductor, identifying its topological nature and exploring its local physics. This paper theoretically investigates the interplay of the ground-state topologies and properties of fermionic bound states near impurities and topological defects. 
The universal bound state of the quasiparticle and the supercurrent induced by a single impurity in a chiral $p$-wave superconductor at a sufficiently high scattering strength can be observed for both a single nonmagnetic impurity and a single magnetic impurity. For a row of linear nonmagnetic impurities and a row of linear magnetic impurities in a chiral $p$-wave superconductor, we find that the distributions of the supercurrent and chiral domain structures of the $p_{x}\pm ip_{y}$-wave order parameters are different, because the former system preserves time-reversal symmetry, whereas the latter breaks it. Additionally, the directions and magnitudes of the supercurrent have different distributions for two degenerate $p_x\pm ip_y$-wave order parameters. This finding may provide a route to distinguishing the two degenerate components of a chiral $p$-wave superconductor. Further, we study a topological defect in the vortex core structure of a chiral $p$-wave superconductor and show that the local density of states (LDOS) at the vortex core center has a zero-energy peak, which may correspond to the Majorana mode~\cite{17}. All these theoretical findings have potential applications in experimental explorations of Majorana modes. The remainder of the paper is organized as follows. Section~\ref{sec:bdg} discusses primarily the theoretical model Hamiltonian and the methods of numerical calculation. Section~\ref{sec:singleimpurity} details numerical results for both a single nonmagnetic impurity and a single magnetic impurity as well as the effects of linear impurities. Section~\ref{sec:splitting} considers the splitting of the zero-energy peak of the vortex core states in a chiral $p$-wave superconductor in the presence of an external magnetic field. Finally, conclusions drawn from the main results of the study are presented in Section~\ref{sec:conclusion}. 
\section{Model and Method} \label{sec:bdg} In the following study, we adopt an effective Bardeen--Cooper--Schrieffer (BCS)-type mean-field Hamiltonian defined on a 2D triangular lattice: \begin{eqnarray} \hat{H}_{\text{eff}}&=&-\sum_{\langle i,j\rangle \sigma}(t_{ij}\hat{c}_{i\sigma }^{\dagger }\hat{c}_{j\sigma} +\text{h.c.})+\sum_{i,\sigma }(\emph{V}_{i\sigma}^{imp}-\mu) \hat{c}_{i\sigma }^{\dagger }\hat{c}_{i\sigma } \nonumber\\ &+&\sum_{\langle i,j\rangle} \left [\Delta _{ij}^{\pm}(\hat{c}_{i\uparrow }^{\dagger }\hat{c}_{j\downarrow }^{\dagger}\pm \hat{c}_{i\downarrow }^{\dagger }\hat{c}_{j\uparrow }^{\dagger})+\text{h.c.}\right ],\label{Eq1} \end{eqnarray}
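A Hamiltonian of this Bogoliubov--de Gennes (BdG) form can be diagonalized numerically. As a minimal sketch, assuming a simplified spinless $p_x+ip_y$ model on a small periodic square lattice (a hypothetical stand-in for the spinful triangular-lattice model of Eq.~(\ref{Eq1}); all parameter values are illustrative), the spectrum exhibits the characteristic particle--hole symmetry $E \leftrightarrow -E$:

```python
import numpy as np

Lx = Ly = 6                      # small periodic square lattice
t, mu, Delta = 1.0, 1.0, 0.5     # hopping, chemical potential, p-wave gap
N = Lx * Ly
idx = lambda x, y: (x % Lx) * Ly + (y % Ly)   # site index with periodic BCs

h = np.zeros((N, N), dtype=complex)           # normal (hopping) part
D = np.zeros((N, N), dtype=complex)           # pairing part, p_x + i p_y
for x in range(Lx):
    for y in range(Ly):
        i = idx(x, y)
        h[i, i] = -mu
        for dx, dy, phase in [(1, 0, 1.0), (0, 1, 1j)]:
            j = idx(x + dx, y + dy)
            h[i, j] += -t
            h[j, i] += -t
            D[i, j] += Delta * phase          # bond-direction-dependent phase
            D[j, i] += -Delta * phase         # odd parity: Delta_{ji} = -Delta_{ij}

# BdG matrix H = [[h, D], [D^dagger, -h^T]]
H = np.block([[h, D], [D.conj().T, -h.T]])
E = np.linalg.eigvalsh(H)
# particle-hole symmetry: eigenvalues come in +/- pairs
assert np.allclose(E, -E[::-1])
```

The same diagonalization strategy, with spin and a self-consistency loop for $\Delta_{ij}^{\pm}$, underlies the numerical treatment of mean-field Hamiltonians of this type.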
\section*{Introduction} Multivariate polynomials and their bases appear in many combinatorial problems, and one often needs to define a polynomial as a formal sum of elements that live in a specified basis. The usual implementation of multivariate polynomials is done as a tensor product of polynomials in one variable. But then one cannot consider the variables all together: every basis has to be defined as a product of bases of polynomials in one variable. It appeared to us that a clear and handy implementation of multivariate polynomials from a combinatorial point of view would be useful not only for our own work but also for the community. Our approach is based on \textit{divided difference} operators and their interpretation in terms of linear bases of the multivariate polynomial algebra, as explained by Lascoux \cite{DUMMIES}. We define simple operators of types $A$, $B$, $C$, and $D$ in Section \ref{OPER} and use them to create the \textit{divided difference} operators in Section \ref{DIFFDIV}. We then explain in Section \ref{BASES} how these operators allow us to define linear bases of the multivariate polynomials. Our software has been implemented in Sage; we explain this choice in Section \ref{SAGE}. The full description of the implemented features with code examples can be found in Section \ref{SOFTWARE}. The development process was not finished at the time of submission of this paper, and what the program now needs most is to be tested by many users, so that bugs can be reported and suggestions made on how to improve the features. \section{Choosing Sage} \label{SAGE} Sage is a mathematics software system created in 2005. It is free and open source, which is the main reason why we chose it. It is developed by many researchers around the world, and one can join the very lively community that works specifically on combinatorics within the Sage-Combinat project. The Sage-Combinat community was created in 2001 and was previously known as MuPAD-Combinat. 
It moved to Sage in 2008, keeping its main purpose: developing specific tools for algebraic combinatorics and sharing programs among researchers. From the beginning, we wanted not only to develop in Sage but to be part of the main project by adding our implementation to the software. The program is still in test mode within the Sage-Combinat distribution (see the installation process in Section \ref{INSTALL}). Working in Sage also allowed us to use previous work and structures developed by the community. We worked a lot with Weyl groups, which had already been implemented in Sage. We also used the standard implementation of multi-base algebras as a basis for our own work. Finally, we hope that being part of Sage will help our software spread and thus become useful to the largest possible audience. \pagenumbering{arabic} \addtocounter{page}{1} \markboth{\SMALL VIVIANE PONS}{\SMALL MULTIVARIABLE POLYNOMIALS IN SAGE} \section{Multi-base polynomials} \subsection{Type $A$, $B$, $C$, $D$ operators} \label{OPER} First, we need to define simple operations on vectors of integers. Let $v \in \mathbb{Z}^n$; we have the following operators corresponding to the root systems of respective types $A$, $B$, $C$, $D$: \begin{align} vs_i &= (\ldots, v_{i+1},v_i, \ldots) &\text{ for }1 \leq i < n \\ vs_i^B = vs_i^C &= (\ldots, -v_i, \ldots) &\text{ for } 1 \leq i \leq n \\ vs_i^D &= (\ldots, -v_i, -v_{i-1}, \ldots) &\text{ for } 2 \leq i \leq n \end{align} The group generated by $s_1, \ldots, s_{n-1}$ (respectively $s_1, \ldots, s_{n-1},s_n^B$, and $s_1, \ldots,\\ s_{n-1},s_n^D$) is the Weyl group of type $A$ (respectively $B$ or $C$, $D$). 
The operators satisfy the \textit{braid relations}: \begin{align} s_is_{i+1}s_i &= s_{i+1}s_is_{i+1} &\text{ and } s_is_j = s_js_i \text{, } |i-j| \neq 1 \\ s_{n-1}s_n^Bs_{n-1}s_n^B &= s_n^Bs_{n-1}s_n^Bs_{n-1} &\text{ and } s_is_n^B = s_n^Bs_i \text{, } i \leq n-2 \\ s_{n-2}s_n^Ds_{n-2} &= s_n^Ds_{n-2}s_n^D &\text{ and } s_is_n^D = s_n^Ds_i \text{, } i \neq n-2 \end{align} The orbit of the vector $[1, 2, \dots , n]$ consists of all permutations of $1, 2,\dots , n$, i.e., $S_n$, for type $A$, all signed permutations for type $B$, $C$, and all signed permutations with an even number of minus signs for type $D$. The elements of the different groups can be denoted by these objects. In the same way, elements of these groups can be seen as a product of operators $s_i$, called a \textit{decomposition}. When the product is of minimal length, it is called a \textit{reduced decomposition}. \subsection{Action on polynomials} \label{DIFFDIV} We now have a natural action of the Weyl groups on polynomials. Indeed, let $x = (x_1, x_2, \ldots, x_n)$ be a set of variables and for $v \in \mathbb{Z}^n$, let $x^v$ stand for the monomial \begin{equation} x_1^{v_1}x_2^{v_2}\ldots x_n^{v_n}. \end{equation} A polynomial in the variables $x_1, \ldots, x_n$ can therefore be seen as a formal sum of vectors and the action of the operator $s_i$ becomes an action on polynomials: \begin{equation} x^vs_i = x^{vs_i}. \end{equation} From these simple operators $s_i$, we can now define the \textit{divided difference} operators. For type $A$, we have: \begin{align} f\partial_i &:= \frac{f - f^{s_i}}{x_i - x_{i+1}} \\ f\pi_i &:= \frac{x_if - x_{i+1}f^{s_i}}{x_i - x_{i+1}} = f.(x_i\partial_i) \\ f\hat{\pi}_i &:=\frac{(f-f^{s_i})x_{i+1}}{x_i - x_{i+1}} = f.(\pi_i - 1) \\ fT_i &:= f.(\pi_i(t_1 + t_2) - s_it_2) \end{align} for $ 1 \leq i \leq n-1$. 
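These operators and their relations are easy to verify on explicit vectors. A minimal sketch in plain Python (independent of the Sage implementation; the helper names are ours, and operators act on the right as in the text):

```python
def s(v, i):
    # type A: swap v_i and v_{i+1} (1-indexed)
    v = list(v); v[i - 1], v[i] = v[i], v[i - 1]; return tuple(v)

def sB(v, i):
    # type B/C: negate v_i
    v = list(v); v[i - 1] = -v[i - 1]; return tuple(v)

def sD(v, i):
    # type D: swap and negate v_{i-1} and v_i
    v = list(v); v[i - 2], v[i - 1] = -v[i - 1], -v[i - 2]; return tuple(v)

v = (3, 1, 4)
# type A braid relation: s_1 s_2 s_1 = s_2 s_1 s_2 (applied left to right)
assert s(s(s(v, 1), 2), 1) == s(s(s(v, 2), 1), 2)
# type B relation with n = 3: s_2 s_3^B s_2 s_3^B = s_3^B s_2 s_3^B s_2
assert sB(s(sB(s(v, 2), 3), 2), 3) == s(sB(s(sB(v, 3), 2), 3), 2)

# orbit of [1, 2] under the type D group (n = 2): signed permutations
# with an even number of minus signs, hence 4 elements
orbit, frontier = {(1, 2)}, [(1, 2)]
while frontier:
    w = frontier.pop()
    for img in (s(w, 1), sD(w, 2)):
        if img not in orbit:
            orbit.add(img); frontier.append(img)
assert orbit == {(1, 2), (2, 1), (-2, -1), (-1, -2)}
```

The orbit computation matches the description below: for type $D$ and $n=2$ there are $2! \cdot 2^{1} = 4$ signed permutations with an even number of sign changes.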
$\partial_i$ is the \textit{Newton divided difference}, $\pi_i$ and $\hat{\pi}_i$ are the \textit{isobaric divided differences} and $T_i$ is the generator of the \textit{Hecke algebra} $\mathcal{H}_2$. As the $s_i$ satisfy the braid relations, all the above operators do. If $v_i > v_{i+1}$, the \textit{Newton divided difference} $\partial_i$ can be seen as an operator decrementing the vector degree and summing over all the intermediate monomials between $x^{(\ldots, v_i-1, v_{i+1}, \ldots)}$ and $x^{(\ldots, v_{i+1}, v_{i}-1, \ldots)}$. For example, \begin{equation} x^{(4,1)}\partial_1 = x^{(3,1)} + x^{(2,2)} + x^{(1,3)}. \end{equation} When $v_i < v_{i+1}$, one just needs to switch $v_i$ and $v_{i+1}$, multiply by $-1$ and do the previous operation. When $v_i =v_{i+1}$, the result is $0$. This description can be used to give a more general, type-free definition. We can see the vectors indexing the monomials as elements of the ambient space of the root system of type $A_{n-1}$. The above $\partial_i$ operation can then be seen as a formal sum of vectors, adding factors of the simple root $(\ldots, -1, 1 \ldots)$ to the original vector. The sign of $v_i - v_{i+1}$ is given by the scalar product between the vector and the $i^{th}$ simple coroot of the ambient space. This definition is equivalent to the previous one. We can use it to define our divided differences in types $B$, $C$, and $D$, using the root systems of respective types $B_n$, $C_n$, and $D_n$. Compared to type $A$, we add the $n^{th}$ simple root and coroot whose definition depends on the type and create the $n^{th}$ divided difference operator: \begin{align} \partial_n^B &= \frac{1 - s_n^B}{x_n^{\frac{1}{2}} - x_{n}^{-\frac{1}{2}}} \\ \partial_n^C &= \frac{1 - s_n^C}{x_n - x_n^{-1}} \\ \partial_n^D &= \frac{1 - s_n^D}{x_{n-1}^{-1} - x_n} \end{align} The same construction can be done with the isobaric divided differences $\pi$ and $\hat{\pi}$. 
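The summation description of $\partial_i$ translates directly into code. A minimal sketch in plain Python, representing a polynomial as a dictionary from exponent vectors to coefficients (this toy implementation is ours, not the Sage one):

```python
from collections import defaultdict

def divided_difference(pol, i):
    """Newton divided difference d_i on {exponent tuple: coefficient} dicts."""
    out = defaultdict(int)
    for v, c in pol.items():
        a, b = v[i - 1], v[i]
        if a == b:
            continue          # (x^v - x^{v s_i}) / (x_i - x_{i+1}) = 0
        sign = 1 if a > b else -1
        # sum over the intermediate monomials between
        # (..., max-1, min, ...) and (..., min, max-1, ...)
        for k in range(min(a, b), max(a, b)):
            w = list(v)
            w[i - 1], w[i] = a + b - 1 - k, k
            out[tuple(w)] += sign * c
    return {v: c for v, c in out.items() if c}

# the example from the text: x^(4,1) d_1 = x^(3,1) + x^(2,2) + x^(1,3)
assert divided_difference({(4, 1): 1}, 1) == {(3, 1): 1, (2, 2): 1, (1, 3): 1}
```

When $v_i < v_{i+1}$ the same intermediate monomials appear with a minus sign, and $v_i = v_{i+1}$ gives $0$, exactly as described above.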
\subsection{Linear bases} \label{BASES} We can now use these operators to define linear bases of the ring of multivariate polynomials. Let $(x_1, x_2, \ldots, x_n)$ and $(y_1, y_2, \ldots, y_n)$ be two sets of variables and $\lambda$ a partition of length $n$, i.e., $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_n$. We then define \textit{dominant Schubert polynomials} (respectively \textit{Grothendieck polynomials} and \textit{Key polynomials}) by \begin{align} Y_\lambda &:= \prod_{i = 1}^n \prod_{j = 1}^{\lambda_i} (x_i - y_j), \\ G_\lambda &:= \prod_{i = 1}^n \prod_{j = 1}^{\lambda_i} (1-y_jx_i^{-1}), \\ K_\lambda = \hat{K}_\lambda &:= x^\lambda. \end{align} We define Schubert polynomials to be all the non-zero images of the dominant Schubert polynomials under products of $\partial_i$ and Grothendieck polynomials to be all the images of the dominant Grothendieck polynomials under products of $\pi_i$. Similarly, the two types of Key polynomials are defined by taking all the images under products of $\pi_i$ or of $\hat{\pi}_i$ respectively. Since the operators satisfy relations, we cannot index the polynomials by the choice of the starting point and the sequence of operators used. Rather, we use weight vectors $v \in \mathbb{N}^n$, the recursive definition being \begin{align} Y_{\ldots, v_{i+1}, v_{i}-1, \ldots} = Y_v \partial_i\\ G_{\ldots, v_{i+1}, v_{i}-1, \ldots} = G_v \pi_i\\ K_v\pi_i = K_{vs_i}\\ \hat{K}_v \hat{\pi}_i = \hat{K}_{vs_i}, \end{align} the initial vectors $v$ satisfying $v_i > v_{i+1}$. As the operators satisfy braid relations, the order one chooses to apply the recursive rule on a vector does not change the result. There are dominant polynomials in the images of a dominant polynomial in the Schubert and Grothendieck cases; therefore, one has to check consistency, but this is easy. These families constitute triangular bases of the polynomials in $(x_1, \ldots, x_n)$. 
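For a single set of variables (all $y_j$ specialized to 0), the dominant Schubert polynomials are plain monomials and the recursion can be run mechanically. A small self-contained plain-Python sketch (a toy stand-in for the Sage implementation, with a minimal Newton divided difference inlined):

```python
def dd(pol, i):
    # Newton divided difference d_i on {exponent tuple: coefficient} dicts
    out = {}
    for v, c in pol.items():
        a, b = v[i - 1], v[i]
        if a == b:
            continue
        sign = 1 if a > b else -1
        for k in range(min(a, b), max(a, b)):
            w = list(v)
            w[i - 1], w[i] = a + b - 1 - k, k
            out[tuple(w)] = out.get(tuple(w), 0) + sign * c
    return {v: c for v, c in out.items() if c}

# simple (y = 0) dominant Schubert polynomials: Y_lambda = x^lambda
# recursion Y_{(v_2, v_1 - 1)} = Y_{(v_1, v_2)} d_1 gives Y_{(0,1)} = x_1 + x_2
assert dd({(2, 0): 1}, 1) == {(1, 0): 1, (0, 1): 1}
# and Y_{(0,2)} = Y_{(3,0)} d_1 = x_1^2 + x_1 x_2 + x_2^2
assert dd({(3, 0): 1}, 1) == {(2, 0): 1, (1, 1): 1, (0, 2): 1}
```

Iterating this rule from the dominant monomials produces the full triangular family, which is what the Sage code described below does internally.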
One can then easily express an arbitrary polynomial in these bases by inverting a triangular matrix. When working with a single set of variables, one can specialize the $y_i$'s to 0 (respectively to 1) and obtain simple Schubert polynomials (respectively Grothendieck polynomials), so that dominant polynomials become \begin{align} Y_\lambda &= x^\lambda, \\ G_\lambda &= \prod_{i = 1}^n (1 - x_i^{-1})^{\lambda_i}. \end{align} To work with positive exponents on the Grothendieck basis, one can also set $x_i = 1 - x_i^{-1}$. Both versions of the bases are available in our implementation. Using the same method, we can also define non-symmetric Macdonald polynomials. In this case, there will be only one generator polynomial, i.e., $M_{0,0,\ldots,0} = 1$, and we will use both the $T_i$ operator and a raising operator to increase the polynomial degree. The recursive rule is due to Sahi and Knop \cite{SahiKnop} and one can find its description in Lascoux \cite{DUMMIES}. One can also define type $B$, $C$, and $D$ Key polynomials using the operators defined from the Weyl group of the given types as explained in Section \ref{DIFFDIV}. The corresponding families of polynomials will then be indexed by vectors in $\mathbb{Z}^n$ instead of $\mathbb{N}^n$ and become bases of the Laurent polynomials of $(x_1, \ldots, x_n)$, i.e., with both positive and negative exponents. \section{Software description} \label{SOFTWARE} \subsection{Installation process} \label{INSTALL} Our software has been developed as a part of the Sage project. Nevertheless, as it is still in test mode at the time of publication of this paper, it is not yet available in the main Sage distribution, but one can use it within the Sage-Combinat distribution. \medskip \noindent \textsc{Step 1} \medskip If Sage is not already installed on a computer, please follow the instructions on the Sage website \cite{SAGE_WEBSITE} to get the installation process corresponding to your system. 
To get the latest version of our program, make sure you have the latest version of Sage installed. You can upgrade your Sage version by running the following command inside your sage directory: {\tt \begin{lstlisting} ./sage -upgrade \end{lstlisting} } The examples below work with Sage 4.7 and later versions. \medskip \noindent \textsc{Step 2} \medskip Install Sage-Combinat \cite{SAGE_COMBINAT} by running the following command inside the sage directory: {\tt \begin{lstlisting} ./sage -combinat install \end{lstlisting}} If you encounter problems, you can find more information by visiting the Sage-Combinat website \cite{SAGE_COMBINAT}. \subsection{Define a polynomial} Sage programming is object-oriented. The object containing the main software methods is called \textit{AbstractPolynomialRing}. One first needs to create this object: {\tt \begin{lstlisting} sage: A = AbstractPolynomialRing(QQ) sage: A The abstract ring of multivariate polynomials on x over Rational Field \end{lstlisting}} \noindent Here, \textit{A} represents the abstract algebra. To create an actual polynomial, we need a concrete basis. {\tt \begin{lstlisting} sage: m = A.monomial_basis(); m The ring of multivariate polynomials on x over Rational Field on the monomial basis \end{lstlisting}} \noindent \textit{m} is a concrete basis and we shall use it to create polynomials. Both of the syntaxes presented below can be used. {\tt \begin{lstlisting} sage: pol = m[1,1,2] + m[2,3]; pol x[1, 1, 2] + x[2, 3, 0] sage: pol = m([1,1,2]) + m([2,3]); pol x[1, 1, 2] + x[2, 3, 0] \end{lstlisting}} \noindent $x[1,1,2]$ means $x^{(1,1,2)}=x_1^1x_2^1x_3^2$. One does not have to declare beforehand how many variables are to be used; it will be computed from the size of the vectors. To know how many variables a polynomial is defined on, one can look at its parent. It can also be changed if needed. 
{\tt \begin{lstlisting} sage: pol.parent() The ring of multivariate polynomials on x over Rational Field with 3 variables on the monomial basis sage: pol = pol.change_nb_variables(4) sage: pol x[1, 1, 2, 0] + x[2, 3, 0, 0] sage: pol.parent() The ring of multivariate polynomials on x over Rational Field with 4 variables on the monomial basis \end{lstlisting}} Now we have a polynomial object to work with. A polynomial will always be seen as a formal sum of vectors: it cannot be factorized. If two polynomials are multiplied, the result will always be expanded as a sum. {\tt \begin{lstlisting} sage: pol * pol x[2, 2, 4, 0] + 2*x[3, 4, 2, 0] + x[4, 6, 0, 0] \end{lstlisting}} \subsection{Apply operators} We can now apply the \textit{divided differences} operators defined in Section \ref{DIFFDIV}. {\tt \begin{lstlisting} sage: pol = m[1,1,2] + m[2,3]; pol x[1, 1, 2] + x[2, 3, 0] sage: pol.divided_difference(2) -x[1, 1, 1] + x[2, 1, 1] + x[2, 2, 0] + x[2, 0, 2] sage: pol.divided_difference_isobar(2) x[2, 1, 2] + x[2, 2, 1] + x[2, 3, 0] + x[2, 0, 3] \end{lstlisting}} \noindent By default, the operator type is $A$, but we can also apply $B$, $C$, and $D$ operators. {\tt \begin{lstlisting} sage: pol.divided_difference(2,"B") x[1, -1, 2] + x[1, 0, 2] + x[2, 0, 0] + x[2, -3, 0] + x[2, -2, 0] + x[2, -1, 0] + x[2, 1, 0] + x[2, 2, 0] sage: pol.divided_difference(2,"C") x[1, 0, 2] + x[2, 0, 0] + x[2, -2, 0] + x[2, 2, 0] sage: pol.divided_difference(2,"D") x[0, 0, 0] + x[-2, -2, 0] + x[-1, -1, 0] + x[1, 1, 0] + x[1, 0, 2] + x[2, 2, 0] + x[0, -1, 2] \end{lstlisting}} We have seen in Section \ref{DIFFDIV} that type $B$, $C$, and $D$ operators were defined from the root systems of types $B_n$, $C_n$, and $D_n$, by the addition of the $n^{th}$ simple root and coroot. For each type, only one new operator was created which was the $n^{th}$ divided difference. 
But in the above example, applying the second divided difference on a polynomial in three variables gives a different result for types $B$, $C$, and $D$ than for type $A$. Even though groups have been used to give a definition of our operators, we have extended this to obtain a definition depending only on the polynomial. From the root systems of types $B_n$, $C_n$, and $D_n$, we had \begin{align} \partial_n^B &= \frac{1 - s_n^B}{x_n^{\frac{1}{2}} - x_{n}^{-\frac{1}{2}}} \\ \partial_n^C &= \frac{1 - s_n^C}{x_n - x_n^{-1}} \\ \partial_n^D &= \frac{1 - s_n^D}{x_{n-1}^{-1} - x_n} \end{align} So we just set \begin{align} \partial_i^B &= \frac{1 - s_i^B}{x_i^{\frac{1}{2}} - x_{i}^{-\frac{1}{2}}} \\ \partial_i^C &= \frac{1 - s_i^C}{x_i - x_i^{-1}} \\ \partial_i^D &= \frac{1 - s_i^D}{x_{i-1}^{-1} - x_i} \end{align} with $1 \leq i \leq n$ for types $B$ and $C$, and $ 2 \leq i \leq n$ for type $D$. These definitions allow us to study these operators in their own right and not relative to their groups. As an example, this allows us to study the relations between $\partial_i^B$ and $\partial_{i+1}^B$. This does not make sense from a group point of view, but it remains an interesting question. Of course, one may want to use the group-based operators only and not the generalized ones. This is possible by using a different basis. The basis we have been using so far is called the \textit{Monomial basis} and is not related to any group. One can use a basis directly related to a root system, called the \textit{Ambient space basis}. 
{\tt \begin{lstlisting} sage: ma = A.ambient_space_basis("A"); ma The ring of multivariate polynomials on x over Rational Field on the Ambient space basis of type A sage: pol = ma[1,1,2] + ma[2,3] sage: pol x(1, 1, 2) + x(2, 3, 0) sage: pol.parent() The ring of multivariate polynomials on x over Rational Field with 3 variables on the Ambient space basis of type A sage: pol.divided_difference(2) -x(1, 1, 1) + x(2, 1, 1) + x(2, 2, 0) + x(2, 0, 2) \end{lstlisting}} \noindent Note that \textit{Ambient Space basis} is very close to the \textit{Monomial basis} but the polynomial contains its type within its parent. It is related to a root system and only operators defined by this root system can be applied. {\tt \begin{lstlisting} sage: mb = A.ambient_space_basis("B"); mb The ring of multivariate polynomials on x over Rational Field on the Ambient space basis of type B sage: pol = mb[1,1,2] + mb[2,3] sage: pol.divided_difference(2) -x(1, 1, 1) + x(2, 1, 1) + x(2, 2, 0) + x(2, 0, 2) sage: pol.divided_difference(3) x(1, 1, 0) + x(1, 1, -2) + x(1, 1, -1) + x(1, 1, 1) \end{lstlisting}} Conversions between the \textit{Monomial basis} and the \textit{Ambient space bases} can be carried out easily: {\tt \begin{lstlisting} sage: pol = m[1,1,2] + m[2,3]; pol x[1, 1, 2] + x[2, 3, 0] sage: pol.parent() The ring of multivariate polynomials on x over Rational Field with 3 variables on the monomial basis sage: pol = ma(pol); pol x(1, 1, 2) + x(2, 3, 0) sage: pol.parent() The ring of multivariate polynomials on x over Rational Field with 3 variables on the Ambient space basis of type A sage: pol = mb(pol); pol x(1, 1, 2) + x(2, 3, 0) sage: pol.parent() The ring of multivariate polynomials on x over Rational Field with 3 variables on the Ambient space basis of type B \end{lstlisting}} \noindent Even though the objects seem similar, one must always be careful with which basis one is working as it will impact the result of the operations as soon as operators are used. 
\subsection{Working with multi-bases} \label{MULTIBASES} We have already seen that our polynomials can be expressed in different bases depending on which operations we want to perform. But the \textit{Monomial basis} as well as the \textit{Ambient space bases} are just different versions of polynomials seen as sums of monomials. It is also possible to work with the linear bases we have defined in Section \ref{BASES}. Here is an example of the Schubert basis: {\tt \begin{lstlisting} sage: A = AbstractPolynomialRing(QQ) sage: Schub = A.schubert_basis_on_vectors() sage: Schub The ring of multivariate polynomials on x over Rational Field on the Schubert basis of type A (indexed by vectors) \end{lstlisting}} \noindent It can be used to create a Schubert polynomial and convert it to the monomial basis. {\tt \begin{lstlisting} sage: pol = Schub[1,2,2] + Schub[3,4]; pol Y(1, 2, 2) + Y(3, 4, 0) sage: pol.expand() x(1, 2, 2) + x(2, 1, 2) + x(2, 2, 1) + x(3, 4, 0) + x(4, 3, 0) sage: m(pol) x[1, 2, 2] + x[2, 1, 2] + x[2, 2, 1] + x[3, 4, 0] + x[4, 3, 0] sage: Schub(m[1,2,4] + m[2,3]) Y(1, 2, 4) - Y(1, 3, 3) - Y(1, 4, 2) - Y(2, 1, 4) + Y(2, 3, 0) + Y(2, 3, 2) + Y(2, 4, 1) + Y(3, 1, 3) - Y(3, 2, 0) - Y(3, 2, 2) - Y(4, 2, 1) + Y(5, 1, 1) \end{lstlisting}} \noindent One can multiply Schubert polynomials together and the result will be given in the same basis. However, the program converts the two polynomials into the monomial basis, multiplies them there, and then converts the result back into the Schubert basis. {\tt \begin{lstlisting} sage: pol1 = Schub[1,2,2] + Schub[3,4] sage: pol2 = Schub[3,1,2] sage: pol1 * pol2 Y(4, 3, 4) + Y(5, 2, 4) + Y(6, 5, 2) + Y(6, 6, 1) + Y(7, 4, 2) + Y(7, 5, 1) \end{lstlisting}} We have other bases implemented. Below is an example of the Key polynomials. 
One can convert directly from Schubert to Key polynomials without using the monomial basis manually (it is automatically done by the program): {\tt \begin{lstlisting} sage: K = A.demazure_basis_on_vectors();K The ring of multivariate polynomials on x over Rational Field on the Demazure basis of type A (indexed by vectors) sage: pol = K[2,1,4] + K[3,5,1];pol K(2, 1, 4) + K(3, 5, 1) sage: pol.expand() x(2, 1, 4) + x(2, 2, 3) + x(2, 3, 2) + x(2, 4, 1) + x(3, 1, 3) + x(3, 2, 2) + x(3, 3, 1) + x(3, 5, 1) + x(4, 1, 2) + x(4, 2, 1) + x(4, 4, 1) + x(5, 3, 1) sage: Schub(pol) Y(2, 1, 4) + Y(3, 5, 1) - Y(5, 1, 1) sage: K(m[1,2,4] + m[2,3]) K(1, 2, 4) - K(1, 3, 3) - K(1, 4, 2) - K(2, 1, 4) + K(2, 3, 0) + K(2, 3, 2) + K(2, 4, 1) + K(3, 1, 3) - K(3, 2, 0) - K(3, 2, 2) + K(4, 1, 2) - K(4, 2, 1) sage: Khat = A.demazure_hat_basis_on_vectors() sage: pol = Khat[2,1,4] + Khat[3,5,1];pol ^K(2, 1, 4) + ^K(3, 5, 1) sage: pol.expand() x(2, 1, 4) + x(2, 2, 3) + x(2, 3, 2) + x(3, 1, 3) + x(3, 2, 2) + x(3, 5, 1) + x(4, 4, 1) sage: Schub(pol) Y(2, 1, 4) - Y(2, 4, 1) + Y(3, 5, 1) - Y(4, 1, 2) + Y(4, 2, 1) - Y(5, 1, 1) - Y(5, 3, 1) sage: Khat(m[1,2,4] + m[2,3]) ^K(1, 2, 4) - ^K(1, 3, 3) + ^K(2, 3, 0) + ^K(2, 3, 2) \end{lstlisting}} \noindent The key polynomials are also defined in type $B$, $C$, and $D$. 
{\tt \begin{lstlisting} sage: K = A.demazure_basis_on_vectors("B");K The ring of multivariate polynomials on x over Rational Field on the Demazure basis of type B (indexed by vectors) sage: pol = K[1,2,-2] sage: pol K(1, 2, -2) sage: pol.expand() x(1, 2, 0) + x(1, 2, -2) + x(1, 2, -1) + x(1, 2, 1) + x(1, 2, 2) + x(2, 1, 0) + x(2, 1, -2) + x(2, 1, -1) + x(2, 1, 1) + x(2, 1, 2) + x(2, 2, 0) + x(2, 2, -1) + x(2, 2, 1) sage: pol = m[-2,1,1] + m[1,-1,1]; pol x[-2, 1, 1] + x[1, -1, 1] sage: K(pol) K(0, 0, 0) + K(-2, 1, 1) - K(-1, 1, 1) - K(-1, 1, 2) - K(-1, 0, 1) - 2*K(1, 0, 0) - K(1, -2, 1) + K(1, -1, 0) + 2*K(1, -1, 1) + K(1, -1, 2) + K(1, 1, 0) - K(1, 1, -1) - 2*K(1, 0, 1) + K(0, 1, 1) + K(0, 0, 1) \end{lstlisting}} Back in type $A$, we also have the simple Grothendieck basis. We have two versions of it, related by the change of variables explained in Section \ref{BASES}, which avoids negative exponents. {\tt \begin{lstlisting} sage: Grothn = A.grothendieck_negative_basis_on_vectors(); Grothn The ring of multivariate polynomials on x over Rational Field on the Grothendieck basis of type A with negative exponents (indexed by vectors) sage: pol = Grothn[1,2] + Grothn[2,2]; pol G(1, 2) + G(2, 2) sage: Grothp = A.grothendieck_positive_basis_on_vectors(); Grothp The ring of multivariate polynomials on x over Rational Field on the Grothendieck basis of type A, with positive exponents (indexed by vectors) sage: pol.expand() 2*x(0, 0) + x(-2, 0) - x(-2, -1) - 3*x(-1, 0) - x(-1, -2) + 4*x(-1, -1) + x(0, -2) - 3*x(0, -1) sage: pol = Grothp[1,2] + Grothp[2,2]; pol G(1, 2) + G(2, 2) sage: pol.expand() x(1, 2) + x(2, 1) sage: pol.expand().subs_var([(i,1-A.var(i)^(-1)) for i in xrange(1,3)]) 2*x(0, 0) + x(-2, 0) - x(-2, -1) - 3*x(-1, 0) - x(-1, -2) + 4*x(-1, -1) + x(0, -2) - 3*x(0, -1) \end{lstlisting}} The last basis we have implemented is that of the non-symmetric Macdonald polynomials. 
In order to use it, one has to define a polynomial ring over a larger field than $\mathbb{Q}$ so that variables can appear in the coefficients. {\tt \begin{lstlisting} sage: var('t1 t2 q') (t1, t2, q) sage: K.<t1,t2,q> = QQ[] sage: K = K.fraction_field() sage: A = AbstractPolynomialRing(K);A The abstract ring of multivariate polynomials on x over Fraction Field of Multivariate Polynomial Ring in t1, t2, q over Rational Field sage: Mac = A.macdonald_basis_on_vectors() sage: pol = Mac[1,2]; pol M(1, 2) sage: pol.expand() t2^3*x(0, 0) + t2^2/q*x(1, 0) + ((t2*q+t2)/q^2)*x(1, 1) + 1/q^2*x(1, 2) + ((t2^2*q+t2^2)/q)*x(0, 1) + t2/q*x(0, 2) sage: m = A.monomial_basis() sage: Mac( m[1,1] ) (-t1*t2)*M(0, 0) + M(1, 0) + q*M(1, 1) + ((t1*t2*q^2+t2^2*q-t1*t2-t2^2)/(-t1*q-t2))*M(0, 1) \end{lstlisting}} \subsection{Define a new basis} All our bases are defined with divided differences acting recursively on sums of monomials. If one needs to work with a new basis whose objects are indexed by vectors, the only thing to implement is the rule converting one vector to a sum of monomials. The inverse conversion is automatically done if the basis is triangular. One can define one's own conversion function and so create a new basis. Below is an example recreating the Schubert polynomials. {\tt \begin{lstlisting} sage: def schubert_on_basis(v, basis, call_back): ... for i in xrange(len(v)-1): ... if(v[i]<v[i+1]): ... v[i], v[i+1] = v[i+1] + 1, v[i] ... return call_back(v).divided_difference(i+1) ... return basis(v) \end{lstlisting}} \noindent Above is a definition of a recursive function that transforms a Schubert element into a sum of monomials. The principle is simple: we take the Schubert vector as an argument ($v$) and test whether there is an index $i$ such that $v_i < v_{i+1}$. If there is, we compute the Schubert polynomial where $v_i$ and $v_{i+1}$ are switched and apply a divided difference operator. If $v$ is antidominant, then the result is $x^v$, which we obtain with $basis(v)$. 
The three parameters of this function are the vector $v$ corresponding to our Schubert element, a $basis$ parameter which is the monomial basis we convert to, and $call\_back$ that we use to call back our function (this ensures getting a cached method, i.e., things are not calculated twice). To create a new basis, one needs to write a function that takes these three arguments and returns the polynomial associated with the vector $v$. It can be written directly in the Sage command line as above, in the notebook, or in a file attached to your session. The function will not be called directly by the user; it will be passed to a method, and the program will send the right values to $basis$ and $call\_back$. If more arguments are needed, one just adds them this way: {\tt \begin{lstlisting} sage: def qt_schubert_on_basis(v, basis, call_back, q=1, t=1): ... for i in xrange(len(v)-1): ... if(v[i]<v[i+1]): ... v[i], v[i+1] = v[i+1] + 1, v[i] ... return q*1/t*call_back(v).divided_difference(i+1) ... return basis(v) \end{lstlisting}} \noindent Now that we have the function, we will pass it to our algebra to create a new basis: {\tt \begin{lstlisting} sage: A = AbstractPolynomialRing(QQ) sage: myBasis = A.linear_basis_on_vectors("A","MySchub","Y",schubert_on_basis) sage: pol = myBasis[2,1,3];pol Y(2, 1, 3) sage: pol.expand() x(2, 1, 3) + x(2, 2, 2) + x(2, 3, 1) + x(3, 1, 2) + x(3, 2, 1) + x(4, 1, 1) sage: myBasis(A.an_element()) Y(1, 2, 3) - Y(1, 3, 2) - Y(2, 1, 3) + Y(2, 3, 1) + Y(3, 1, 2) - Y(3, 2, 1) + Y(4, 1, 1) \end{lstlisting}} \noindent This is a copy of the Schubert basis, and it works the same way as the previous bases we have seen in Section \ref{MULTIBASES}. 
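The same recursion can be mimicked outside of Sage in plain Python, with polynomials as dictionaries from exponent tuples to coefficients. This is an illustrative, self-contained sketch (the names and the representation are ours, and the call-back caching is omitted for brevity), not the Sage-Combinat implementation:

```python
# Monomial-level type-A divided difference on exponent-dict polynomials.
def divided_difference(pol, i):
    """i-th (1-based) type-A divided difference of pol."""
    result = {}
    for expo, coeff in pol.items():
        a, b = expo[i - 1], expo[i]
        if a == b:
            continue
        sign = 1 if a > b else -1
        lo, hi = min(a, b), max(a, b)
        for j in range(hi - lo):
            new = list(expo)
            new[i - 1], new[i] = hi - 1 - j, lo + j
            result[tuple(new)] = result.get(tuple(new), 0) + sign * coeff
    return {k: c for k, c in result.items() if c != 0}

def schubert_expand(v):
    """Expand the Schubert polynomial Y_v into monomials, following the
    same recursion as schubert_on_basis above (no caching)."""
    v = list(v)
    for i in range(len(v) - 1):
        if v[i] < v[i + 1]:
            v[i], v[i + 1] = v[i + 1] + 1, v[i]
            return divided_difference(schubert_expand(v), i + 1)
    return {tuple(v): 1}  # v weakly decreasing: Y_v is the monomial x^v
```

For instance, \texttt{schubert\_expand([2,1,3])} returns the six monomials of $Y_{(2,1,3)}$ listed in the session above.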
Below is an example with a parametrized function: {\tt \begin{lstlisting} sage: var('q t') (q, t) sage: K.<q,t> = QQ[] sage: K = K.fraction_field() sage: A = AbstractPolynomialRing(K) sage: qtSchubertBasis = A.linear_basis_on_vectors("A","qtSchub","YQ",qt_schubert_on_basis,(("q",q),("t",t)) ) sage: pol = qtSchubertBasis[1,2,3]; pol YQ(1, 2, 3) sage: pol.expand() q^3/t^3*x(1, 2, 3) + q^3/t^3*x(1, 3, 2) + q^3/t^3*x(2, 1, 3) + 2*q^3/t^3*x(2, 2, 2) + q^3/t^3*x(2, 3, 1) + q^3/t^3*x(3, 1, 2) + q^3/t^3*x(3, 2, 1) \end{lstlisting}} \noindent The extra parameters are passed as a tuple of pairs \textit{(parameter name, parameter value)} to the main algebra, which creates the new basis. \subsection{Double set of variables} Our program also contains another algebra to work with a double set of variables. {\tt \begin{lstlisting} sage: D = DoubleAbstractPolynomialRing(QQ); D The abstract ring of multivariate polynomials on x over The abstract ring of multivariate polynomials on y over Rational Field \end{lstlisting}} \noindent One can see that the double algebra is the algebra of multivariate polynomials in the $x$ variables with the multivariate polynomials in the $y$ variables as coefficients. {\tt \begin{lstlisting} sage: D.an_element() y[0]*x[0, 0, 0] + 2*y[0]*x[1, 0, 0] + y[0]*x[1, 2, 3] + 3*y[0]*x[2, 0, 0] \end{lstlisting}} \noindent One can specify which bases to use in the $x$ variables and which bases to use in the $y$ variables. {\tt \begin{lstlisting} sage: Dx = D sage: Dy = D.base_ring() sage: Schubx = Dx.schubert_basis_on_vectors() sage: Schuby = Dy.schubert_basis_on_vectors() sage: pol = Schuby[2,1,3] * Schubx[1,1,2] sage: pol (Yy(2,1,3))*Yx(1, 1, 2) \end{lstlisting}} \noindent The \textit{expand} function, as well as all direct conversions, acts on the $x$ variables. 
{\tt \begin{lstlisting} sage: pol.expand() (Yy(2,1,3))*x(1, 1, 2) + (Yy(2,1,3))*x(1, 2, 1) + (Yy(2,1,3))*x(2, 1, 1) sage: mx = Dx.monomial_basis() sage: mx(pol) (Yy(2,1,3))*x[1, 1, 2] + (Yy(2,1,3))*x[1, 2, 1] + (Yy(2,1,3))*x[2, 1, 1] \end{lstlisting}} \noindent But, of course, one can also easily change the basis for the $y$ variables. {\tt \begin{lstlisting} sage: my = Dy.monomial_basis() sage: pol.change_coeffs_bases(my) (y[2,1,3]+y[2,2,2]+y[2,3,1]+y[3,1,2]+y[3,2,1]+y[4,1,1])*Yx(1, 1, 2) sage: pol = mx(pol); pol (Yy(2,1,3))*x[1, 1, 2] + (Yy(2,1,3))*x[1, 2, 1] + (Yy(2,1,3))*x[2, 1, 1] sage: pol.change_coeffs_bases(my) (y[2,1,3]+y[2,2,2]+y[2,3,1]+y[3,1,2]+y[3,2,1]+y[4,1,1])*x[1, 1, 2] + (y[2,1,3]+y[2,2,2]+y[2,3,1]+y[3,1,2]+y[3,2,1]+y[4,1,1])*x[1, 2, 1] + (y[2,1,3]+y[2,2,2]+y[2,3,1]+y[3,1,2]+y[3,2,1]+y[4,1,1])*x[2, 1, 1] \end{lstlisting}} \noindent One can also change the role of variables between the main ones and the coefficients. {\tt \begin{lstlisting} sage: pol (Yy(2,1,3))*x[1, 1, 2] + (Yy(2,1,3))*x[1, 2, 1] + (Yy(2,1,3))*x[2, 1, 1] sage: pol.swap_coeffs_elements() (x[1,1,2]+x[1,2,1]+x[2,1,1])*Yy(2, 1, 3) \end{lstlisting}} So we have seen that we can use our previous bases on a double set of variables. But we also have specific bases that only work with a double set of variables. Let us see the double Schubert polynomials and double Grothendieck polynomials, the way they were defined in Section \ref{BASES}. 
{\tt \begin{lstlisting} sage: DoubleSchub = D.double_schubert_basis_on_vectors(); DoubleSchub The ring of multivariate polynomials on x over The abstract ring of multivariate polynomials on y over Rational Field on the Double Schubert basis of type A (indexed by vectors) sage: pol = DoubleSchub[1,2]; pol y[0]*YY(1, 2) sage: pol.expand() (-y(2,1,0)-y(2,0,1))*x(0, 0) + (y(1,1,0)+y(1,0,1)+y(2,0,0))*x(1, 0) + (-2*y(1,0,0)-y(0,1,0)-y(0,0,1))*x(1, 1) + y[0]*x(1, 2) + (-y[1])*x(2, 0) + y[0]*x(2, 1) + (y(1,1,0)+y(1,0,1)+y(2,0,0))*x(0, 1) + (-y[1])*x(0, 2) sage: DGroth = D.double_grothendieck_basis_on_vectors(); DGroth The ring of multivariate polynomials on x over The abstract ring of multivariate polynomials on y over Rational Field on the Double Grothendieck basis of type A (indexed by vectors) sage: pol =DGroth[1,2]; pol y[0]*GG(1, 2) sage: pol.expand() y[0]*x(0, 0) + (-y(2,1,1))*x(-2, -2) + (y(1,1,1))*x(-2, -1) + (-y[1])*x(-1, 0) + (y(1,1,1))*x(-1, -2) + (y(2,0,0)-y(0,1,1))*x(-1, -1) + (-y[1])*x(0, -1) \end{lstlisting}} \section{Some advanced applications} \subsection{Projective degrees of Schubert varieties} In his Ph.D. thesis, Veigneau \cite{VEIGN} presents an application of the software ACE by computing the projective degree of Schubert varieties. We can implement this same function with our patch on Sage-Combinat. The projective degree $d(X)$ of a sub-variety $X \subset \mathbb{P}^M$ of codimension $k$ is the number of intersection points of $X$ with a generic linear subspace of dimension $k$. For $\sigma$ a permutation of size $n$ and $X_\sigma$ a Schubert sub-variety of the flag variety $\mathcal{F}(\mathbb{C}^n)$ embedded in $\mathbb{P}^M$ by the Pl\"ucker embedding (with $M = 2^N-1$ where $N=\frac{n(n-1)}{2}$ is the dimension of $\mathcal{F}(\mathbb{C}^n)$), $d(X_\sigma)$ is a coefficient in a product in the Schubert basis. 
More precisely, the first Chern class of the tautological line bundle of $\mathbb{P}^M$ is $h = (n-1)x_1 + (n-2)x_2 + \ldots + x_{n-1}$ and $d(X_\sigma)$ is the coefficient of $Y_{n-1,n-2,\ldots,0}$ in $h^{N-\ell(\sigma)} Y_v$, where $Y_v$ is the Schubert polynomial indexed by $v$, the Lehmer code of $\sigma$ \cite{CHERN}. The following function computes these degrees: {\tt \begin{lstlisting} def proj_deg(perm): n = len(perm) d = n*(n-1)/2 - perm.length() # we create the polynomial ring and the bases A = AbstractPolynomialRing(QQ) Schub = A.schubert_basis_on_vectors() # we compute the product h = sum( [(n-i) * A.var(i) for i in xrange(1,n)]) res = Schub( h**d * Schub(perm.to_lehmer_code())) # we look for the right coefficient for (key, coeff) in res: if ([key[i] for i in xrange(n)] == [n-i for i in xrange(1,n+1)]): return coeff return 0 \end{lstlisting}} \noindent One can also compute the product and directly read the result: {\tt \begin{lstlisting} sage: A = AbstractPolynomialRing(QQ) sage: m = A.monomial_basis() sage: Schub = A.schubert_basis_on_vectors() sage: Schub( (3*m[1] + 2*m[0,1] + m[0,0,1])^4 * Schub[1,0,1,0]) 8*Y(1, 1, 4, 0) + 23*Y(1, 2, 3, 0) + 24*Y(1, 3, 2, 0) + 39*Y(1, 4, 1, 0) + 15*Y(1, 5, 0, 0) + Y(1, 0, 5, 0) + 48*Y(2, 1, 3, 0) + 101*Y(2, 2, 2, 0) + 117*Y(2, 3, 1, 0) + 84*Y(2, 4, 0, 0) + 12*Y(2, 0, 4, 0) + 173*Y(3, 1, 2, 0) + 78*Y(3, 2, 1, 0) + 147*Y(3, 3, 0, 0) + 53*Y(3, 0, 3, 0) + 283*Y(4, 1, 1, 0) + 171*Y(4, 2, 0, 0) + 96*Y(4, 0, 2, 0) + 93*Y(5, 1, 0, 0) + 176*Y(5, 0, 1, 0) + 80*Y(6, 0, 0, 0) sage: proj_deg(Permutation([2,1,4,3])) 78 \end{lstlisting}} \noindent We can use our function to compute the degree for all permutations of size 4: {\tt \begin{lstlisting} sage: degrees = {} sage: for perm in Permutations(4): ....: degrees[perm] = proj_deg(perm) ....: sage: degrees {[2, 1, 4, 3]: 78, [1, 3, 4, 2]: 48, [3, 2, 4, 1]: 3, [3, 1, 2, 4]: 48, [4, 2, 1, 3]: 3, [1, 4, 2, 3]: 46, [3, 2, 1, 4]: 16, [4, 1, 3, 2]: 3, [2, 3, 4, 1]: 6, [3, 4, 
2, 1]: 1, [1, 2, 3, 4]: 720, [1, 3, 2, 4]: 280, [2, 4, 3, 1]: 3, [2, 3, 1, 4]: 46, [3, 4, 1, 2]: 2, [4, 2, 3, 1]: 1, [1, 4, 3, 2]: 16, [4, 1, 2, 3]: 6, [2, 4, 1, 3]: 12, [4, 3, 1, 2]: 1, [4, 3, 2, 1]: 1, [3, 1, 4, 2]: 14, [2, 1, 3, 4]: 220, [1, 2, 4, 3]: 220} \end{lstlisting}} \subsection{Determinants of Schur functions} Grassmannian Schubert polynomials are the Schubert polynomials indexed by vectors $v$ such that $v_1 \leq v_2 \leq \ldots \leq v_n$. They are symmetric functions in $x_1, \ldots, x_n$. In a single set of variables (i.e., specializing $y$ to 0), Grassmannian Schubert polynomials are equal to Schur functions. More precisely, the transition matrix between double Grassmannian Schubert polynomials and Schur functions is unitriangular. {\tt \begin{lstlisting} sage: A = AbstractPolynomialRing(QQ) sage: Schub = A.schubert_basis_on_vectors() sage: pol = Schub[1,2] sage: pol.expand() x(1, 2) + x(2, 1) sage: D = DoubleAbstractPolynomialRing(QQ) sage: DSchub = D.double_schubert_basis_on_vectors() sage: pol = DSchub[1,2] sage: pol y[0]*YY(1, 2) sage: Schub = D.schubert_basis_on_vectors() sage: Schub(pol) y[0]*Yx(1, 2) + (-y(2,1,0)-y(2,0,1))*Yx(0, 0) + (-y(1,0,0)-y(0,1,0)-y(0,0,1))*Yx(1, 1) + (y(1,1,0)+y(1,0,1)+y(2,0,0))*Yx(0, 1) + (-y[1])*Yx(0, 2) \end{lstlisting}} \noindent This allows us to compute determinants of Schur functions by replacing them with double Schubert polynomials and specializing the $y$ variables arbitrarily. For example, we can compute \begin{equation} \vert s_\mu(A) \vert_{\mu \subseteq 11}, \end{equation} where $A \in [ \lbrace x_1, x_2 \rbrace, \lbrace x_1, x_3 \rbrace, \lbrace x_2, x_3 \rbrace ]$, and prove that it is equal to $\prod_{j>i}(x_j-x_i)$. 
First, we replace $s_\mu$ by \begin{equation} \vert Y_u(A,y) \vert_{u = 00, 01, 11} \end{equation} and specialize $y_1=x_1$, $y_2=x_2$, in which case the determinant becomes \begin{equation} \left| \begin{array}{lll} 1 & 1 & 1 \\ 0 & x_3 - x_2 & x_3 - x_1 \\ 0 & 0 & (x_3-x_1)(x_2-x_1) \end{array} \right| \end{equation} and gives the result. The following function computes the above matrix: {\tt \begin{lstlisting} def compute_matrix(variables, alphabet, indices): n = len(indices) #Initial definitions K = PolynomialRing(QQ,[var(v) for v in variables]) K = K.fraction_field() D = DoubleAbstractPolynomialRing(K) DSchub = D.double_schubert_basis_on_vectors() result_matrix = [] for u in indices: line = [] #the expansion on the double schubert will allow us to compute the result pu = DSchub(u).expand() for a in alphabet: #we apply our polynomial on alphabets and specialize the y (this should be improved in future versions) pol = pu.subs_var( [(i,K(a[i])) for i in xrange(len(a))]) pol = pol.swap_coeffs_elements() pol = pol.subs_var( [(i,K(variables[i])) for i in xrange(pol.nb_variables()) ] ) if(pol ==0): coeff = 0 else: coeff = list(list(pol)[0][1])[0][1] line.append(coeff) result_matrix.append(line) return Matrix(K,result_matrix) \end{lstlisting}} \noindent In Sage: {\tt \begin{lstlisting} sage: variables = ("x1", "x2", "x3") sage: alphabet = [["x1","x2"],["x1","x3"],["x2","x3"]] sage: indices = [[0,0],[0,1],[1,1]] sage: res = compute_matrix(variables, alphabet, indices) sage: res [ 1 1 1 ] [ 0 -x2 + x3 -x1 + x3 ] [ 0 0 x1^2 - x1*x2 - x1*x3 + x2*x3] sage: det = res.determinant() sage: det -x1^2*x2 + x1*x2^2 + x1^2*x3 - x2^2*x3 - x1*x3^2 + x2*x3^2 sage: factor(det) (x2 - x3) * (-x1 + x2) * (x1 - x3) \end{lstlisting}}
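The final determinant can be double-checked outside of Sage with a tiny hand-rolled multivariate polynomial representation (dictionaries from exponent tuples to integer coefficients); this is an illustrative sketch, not part of the package:

```python
# Sketch: polynomials in x1, x2, x3 as dicts {(e1, e2, e3): coefficient}.
# We expand the factored determinant (x2 - x3)(-x1 + x2)(x1 - x3) printed
# above and compare it with the Vandermonde product prod_{j>i} (x_j - x_i).

def pmul(f, g):
    """Product of two exponent-dict polynomials."""
    out = {}
    for ef, cf in f.items():
        for eg, cg in g.items():
            key = tuple(a + b for a, b in zip(ef, eg))
            out[key] = out.get(key, 0) + cf * cg
    return {k: c for k, c in out.items() if c != 0}

def psub(f, g):
    """Difference f - g of two exponent-dict polynomials."""
    out = dict(f)
    for e, c in g.items():
        out[e] = out.get(e, 0) - c
    return {k: c for k, c in out.items() if c != 0}

x1, x2, x3 = {(1, 0, 0): 1}, {(0, 1, 0): 1}, {(0, 0, 1): 1}

# (x2 - x3) * (-x1 + x2) * (x1 - x3), as printed by factor(det)
det = pmul(pmul(psub(x2, x3), psub(x2, x1)), psub(x1, x3))

# Vandermonde product: (x2 - x1)(x3 - x1)(x3 - x2)
vandermonde = pmul(pmul(psub(x2, x1), psub(x3, x1)), psub(x3, x2))
```

Both products expand to the same six terms as the Sage output for \texttt{det}, confirming the identity $\vert s_\mu(A)\vert = \prod_{j>i}(x_j - x_i)$ in this case.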
\section{Introduction} The graphs in this paper are simple. Let $V(G)$ denote the vertex set of a graph $G$ and let $E(G)$ denote its edge set. If two distinct vertices $v_i, v_j \in V(G)$ are adjacent in $G$, we write $v_i\sim v_j$; otherwise, $v_i \not \sim v_j$. The \emph{adjacency matrix}, $A(G)=(a_{ij})$, of a graph $G$ with $V(G)=\{v_1,\ldots,v_n\}$ is an $n\times n$ $(0,1)$ symmetric matrix in which $a_{ij}=1$ if and only if $v_i \sim v_j$. The \emph{spectrum} of $G$ is the multiset of the eigenvalues of $A(G)$. Several parameters of a graph can be deduced from the spectrum of $A(G)$. For example, $\text{trace}(A(G)^k)=\sum_{i=1}^n\lambda_i^k$ is the number of closed walks of length $k$ in $G$, so in particular, $|E(G)|=\frac{1}{2}\text{trace}(A(G)^2)$ and the number of triangles in $G$ is $\frac{1}{6}\text{trace}(A(G)^3)$. Thus it is interesting to know which graphs are determined (up to isomorphism) by their spectrum, that is, graphs for which there exists no non-isomorphic graph with the same spectrum. Two non-isomorphic graphs $G$ and $H$ are \emph{cospectral} if they have the same spectrum. A classical example of non-isomorphic cospectral graphs is given in Figure \ref{nnn} \cite{cvetkovic1971graphs}; their common spectrum is $\{[2]^1, [0]^3, [-2]^1\}$ (exponents indicate multiplicities). The complete graph $K_n$ and the path graph $P_n$ are determined by their spectrum. Let $V(G)=\{v_1,\ldots,v_n\}$. Then $D(G)=(d_{ij})$ is the diagonal matrix with $d_{ii}$ the degree of $v_i$. Let $I$ denote the identity matrix and $J$ the all-ones matrix. A linear combination of $A(G), D(G), J$ and $I$ is called a \emph{generalised adjacency matrix}. There are many results on the spectra of generalised adjacency matrices, see the excellent surveys \cite{van2003graphs,haemers2004enumeration,vanDamHaemers2009}. 
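As a quick sanity check of the trace identities above, here is a minimal plain-Python computation on $K_4$ (an illustrative sketch with no external dependencies; the helper names are ours):

```python
# Count edges and triangles from powers of the adjacency matrix,
# using |E(G)| = trace(A^2)/2 and #triangles = trace(A^3)/6.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

# adjacency matrix of K_4: all ones off the diagonal
A = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
A2 = matmul(A, A)
A3 = matmul(A2, A)
num_edges = trace(A2) // 2      # K_4 has binom(4,2) = 6 edges
num_triangles = trace(A3) // 6  # K_4 has binom(4,3) = 4 triangles
```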
Generalised adjacency matrices include the {\it Laplacian}, $L(G)=D(G)-A(G)$, the {\it signless Laplacian}, $Q(G)=D(G)+A(G)$, and the {\it Seidel matrix} $S(G)=J-I-2A(G)$. Note that the Seidel matrix $S(G)= (s_{ij})$ of $G$ is the square matrix of order $n$ defined by \[s_{ij}=\begin{cases} 0 & \mbox{if $v_i=v_j$},\\ 1 & \mbox{if $v_i\not \sim v_j$, $v_i\neq v_j$},\\ -1 & \mbox{if $v_i\sim v_j$, $v_i\neq v_j$.} \end{cases}\] Other matrices for which the spectrum is of interest are the \emph{distance matrix}, where the $(i,j)$ entry is the distance between $v_i$ and $v_j$, and the {\it normalized Laplacian}, $\mathcal{L}(G)=D(G)^{-\frac{1}{2}}L(G)D(G)^{-\frac{1}{2}}$. Let $X\in \{$generalised adjacency, Laplacian, signless Laplacian, normalised Laplacian, distance, Seidel$\}$. The \emph{$X$ spectrum} of $G$ is the spectrum of the $X$ matrix of $G,$ and the references mentioned above contain many results on finding non-isomorphic $X$ cospectral graphs (i.e., non-isomorphic graphs that have the same $X$ spectrum) or showing that a graph is determined by its $X$ spectrum. In addition, some graphs that are determined by the normalized Laplacian spectrum are given in \cite{butler2016cospectral,berman2018family}, and the references there. Our paper is a small contribution to the rich literature on graphs that are determined by their $X$ spectrum. This is done by considering the Seidel spectrum of complete multipartite graphs. We mention in passing that complete multipartite graphs are determined by the spectrum of the distance matrix but not by the spectrum of the adjacency matrix \cite{Delorme2012,jin2014complete}. Let $U,W\subseteq V(G)$ form a partition of $V(G)$. A \emph{Seidel switching} with respect to $U$ transforms $G$ to a graph $H$ by deleting the edges between $U$ and $W$ and adding an edge between vertices $u\in U$ and $w\in W$ if $(u,w)\notin E(G)$. 
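Seidel switching, and the signature-matrix identity $S(H)=\Lambda S(G)\Lambda$ that makes switched graphs Seidel cospectral, can be sketched in a few lines of plain Python (illustrative only; the data structures and names are ours):

```python
# Sketch: a graph on vertices 0..n-1 as a set of frozenset edges.

def seidel_matrix(n, edges):
    """S(G) = J - I - 2A(G), entrywise."""
    return [[0 if i == j else (-1 if frozenset((i, j)) in edges else 1)
             for j in range(n)] for i in range(n)]

def switch(n, edges, U):
    """Seidel switching: complement the edges between U and its complement."""
    new = set(edges)
    for i in U:
        for j in range(n):
            if j in U or i == j:
                continue
            new.symmetric_difference_update({frozenset((i, j))})
    return new
```

For example, switching the path $0-1-2-3$ with respect to $U=\{0,2\}$ leaves only the edge $\{0,3\}$, and one can check entrywise that the two Seidel matrices are related by conjugation with the signature matrix $\Lambda$ (so they are cospectral).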
For more details on Seidel matrices and related topics, see \cite{van1991equilateral,sciriha2018two,szollHosi2018enumeration,balla2018equiangular} and the references there. Seidel switching is an equivalence relation and we say that $G$ and $H$ are \emph{switching equivalent}. In general, $G$ and $H$ are not isomorphic, but since $S(H)=\Lambda S(G) \Lambda$, where $\Lambda$ is a signature matrix (a diagonal matrix with $1$ entries corresponding to vertices of $U$ and $-1$ entries corresponding to vertices of $W$), $S(H)$ and $S(G)$ are similar and have the same spectrum. Hence such $G$ and $H$ are cospectral, so no graph with more than one vertex is determined by its Seidel spectrum. Hence we say that a graph $G$ is \emph{Seidel determined, up to switching,} (or, in short, {\it $S$-determined}) if the only graphs with same Seidel spectrum are switching equivalent to a graph isomorphic to $G$. \begin{figure}[!ht] \begin{center} \begin{tikzpicture}[scale=0.8] \draw[thick] (0,2)--(2.8,0); \draw[thick] (0,0)--(2.8,2); \draw[fill](0,0) circle[radius=0.1]; \draw[fill](0,2) circle[radius=0.1]; \draw[fill](2.8,0) circle[radius=0.1]; \draw[fill](2.8,2) circle[radius=0.1]; \draw[fill](1.4,1) circle[radius=0.1]; \draw[thick] (6,0)--(6,2)--(8.8,2)--(8.8,0)--cycle; \draw[fill](6,0) circle[radius=0.1]; \draw[fill](6,2) circle[radius=0.1]; \draw[fill](8.8,0) circle[radius=0.1]; \draw[fill](8.8,2) circle[radius=0.1]; \draw[fill](7.4,1) circle[radius=0.1]; \end{tikzpicture} \caption{A pair of non-isomorphic cospectral graphs \label{nnn}} \end{center}\vspace{-8pt} \end{figure} The paper is organized as follows. In Section \ref{no&pri} we discuss some notations and preliminaries used in the main results. In Section \ref{comultipar} we consider complete multipartite graphs. We show that each graph Seidel cospectral with a complete $k$-partite graph is switching equivalent to a complete $k$-partite graph, and describe cases when complete multipartite graphs are $S$-determined. 
Complete tripartite graphs are considered in Section \ref{comtripar}, where we present triples $p,q,r$ for which $K_{p,q,r}$ is $S$-determined and examples of non-isomorphic, non switching equivalent, Seidel cospectral complete tripartite graphs. We conclude with a conjecture on complete tripartite graphs on more than 18 vertices. \section{Notations and preliminaries}\label{no&pri} We begin with presenting the Seidel spectrum of some graphs. \begin{lemma}\label{spectra1} \begin{enumerate} \item[{\rm(a)}] The Seidel spectrum of the empty graph on $n$ vertices is $\{[n-1]^1,[-1]^{n-1} \}$. \item[{\rm(b)}] The Seidel spectrum of the complete graph $K_n$ is $\{[1]^{n-1}, [1-n]^1 \}$. \item[{\rm(c)}] The Seidel spectrum of the complete bipartite graph $K_{p,q}$ is $\{[p+q-1]^1,[-1]^{p+q-1}\} $. \end{enumerate} \end{lemma} \begin{proof} The Seidel matrix of the empty graph on $n$ vertices is the $n\times n$ matrix $J-I$. The Seidel matrix of $K_n$ is $I-J$, and the graph $K_{p,q}$ is obtained by switching from the empty graph on ${p+q}$ vertices, where $U$ is any subset of $p$ vertices. \end{proof} To prove our main theorem we need the Seidel spectra of some graphs on $5$ vertices with an isolated vertex. 
\begin{figure}[!ht] \begin{center} \begin{tikzpicture}[scale=0.8] \draw[thick] (0,1.2)--(0,0)--(1.2,0)--(1.2,1.2); \draw[fill](0,0) circle[radius=0.1]; \draw[fill](1.2,0) circle[radius=0.1]; \draw[fill](0,1.2) circle[radius=0.1]; \draw[fill](1.2,1.2) circle[radius=0.1]; \draw[fill](.6,2) circle[radius=0.1]; \draw[thick] (4,1.2)--(4,0); \draw[thick] (5.2,0)--(5.2,1.2); \draw[fill](4,0) circle[radius=0.1]; \draw[fill](5.2,0) circle[radius=0.1]; \draw[fill](4,1.2) circle[radius=0.1]; \draw[fill](5.2,1.2) circle[radius=0.1]; \draw[fill](4.6,2) circle[radius=0.1]; \draw[thick] (8.6,.8)--(8,0)--(9.2,0)--(8.6,.8)--(8.6,1.5); \draw[fill](8,0) circle[radius=0.1]; \draw[fill](9.2,0) circle[radius=0.1]; \draw[fill](8.6,.8) circle[radius=0.1]; \draw[fill](8.6,1.5) circle[radius=0.1]; \draw[fill](8.6,2.2) circle[radius=0.1]; \end{tikzpicture} \caption{$K_1\cup P_4$, $K_1\cup K_2\cup K_2$ and $K_1\cup H$ \label{fig:paw}} \end{center}\vspace{-8pt} \end{figure} \begin{lemma}\label{spectra2} \begin{enumerate} \item[{\rm(a)}] The Seidel spectrum of $K_1\cup P_4$ is $\{[\sqrt{5}]^2, [0]^1, [-\sqrt{5}]^2\}$; $-\sqrt{5}\approx -2.2361$. \item[{\rm(b)}] The Seidel spectrum of $K_1\cup K_2\cup K_2$ is $\left\{\left[\frac{1+\sqrt{17}}{2}\right]^1,[1]^2,\left[\frac{1-\sqrt{17}}{2}\right]^1, [-3]^1\right\}$; $\frac{1-\sqrt{17}}{2}\approx -1.5616$. \item[{\rm(c)}] The Seidel spectrum of $K_1\cup H$, where $H$ is the paw graph {\begin{tikzpicture}[scale=0.4] \draw[thick](.6,.8)--(0,0)--(1.2,0)--(.6,.8)--(.6,1.5); \draw[fill](0,0) circle[radius=0.1]; \draw[fill](1.2,0) circle[radius=0.1]; \draw[fill](.6,.8) circle[radius=0.1]; \draw[fill](.6,1.5) circle[radius=0.1]; \end{tikzpicture}}, is \\$\left\{\left[\frac{1+\sqrt{17}}{2}\right]^1,[1]^2,\left[\frac{1-\sqrt{17}}{2}\right]^1, [-3]^1\right\}$. \end{enumerate} \end{lemma} \begin{proof} Parts (a) and (b) are found by direct computation. 
For (c) note that $K_1\cup H$ is obtained from $K_1\cup K_2\cup K_2$ by switching with respect to $U=\{u\}$, where $u$ is an end vertex of one of the two edges. \end{proof} \section{Complete multipartite graphs}\label{comultipar} For considering general complete multipartite graphs, we need some facts about their Seidel spectrum. The Seidel spectrum of the complete multipartite graph $K_{p,p,\ldots, p}$ was found in \cite{LvWeiZhao2012} by computing the characteristic polynomial of the graph. We include here a different proof. \begin{lemma}\label{spectra0} The Seidel spectrum of the graph $K_{p,p,\ldots, p}$ on $n=kp$ vertices is \[\{[2p-1]^{k-1},[-1]^{n-k}, [-n+2p-1]^1\}.\] \end{lemma} \begin{proof} The Seidel matrix of $K_{p,p, \ldots, p}$ is $(2I_k-J_k)\otimes J_p -I_n$, where $\otimes$ denotes the Kronecker product, and $n=kp$. The spectrum of $2I_k-J_k$ is $\{[2]^{k-1},[2-k]^1\}$, and that of $J_p$ is $\{[p]^1,[0]^{p-1}\}$. The spectrum of their Kronecker product consists of all the products of these eigenvalues: $\{[2p]^{k-1},[0]^{(p-1)k}, [(2-k)p]^1 \}=\{[2p]^{k-1},[0]^{n-k}, [2p-n]^1 \}$. Thus the Seidel spectrum of $K_{p,p, \ldots, p}$ is $\{[2p-1]^{k-1},[-1]^{n-k}, [-n+2p-1]^1\}$. \end{proof} The next lemma is mostly implicit in \cite[Corollaries 2.2 and 2.3, and eq. (3)]{WangZhaoLi2014}. We include a partial explanation for better clarity. \begin{lemma}\label{spectra3} Let $p_1\ge p_2\ge \ldots\ge p_k\ge 1$. Denote the Seidel eigenvalues of the complete $k$-partite graph $K_{p_1, p_2, \ldots, p_k}$ by $\lambda_1\ge \lambda_2\ge \ldots \ge \lambda_n$, where $n=p_1+p_2+\ldots+p_k$. Then \begin{enumerate} \item[(1)] $\lambda_1\ge \lambda_2\ge \ldots \ge \lambda_{k-1}>0>\lambda_{k}=-1=\ldots=-1=\lambda_{n-1}\ge \lambda_n$. \item[(2)] $ 2p_1-1\ge \lambda_1\ge 2p_2-1\ge \lambda_2\ge \ldots\ge 2p_{k-1}-1 \ge\lambda_{k-1}\ge 2p_k-1$, \item[(3)] $(2-k)p_k-1\ge \lambda_n\ge 2p_k-n-1$. 
\item[(4)] For $i=1, \ldots, k-1$, if $p_i>p_{i+1}$, then $2p_i-1>\lambda_i>2p_{i+1}-1$. \end{enumerate} \end{lemma} \begin{proof} The graph $K_{p_k,p_k,\ldots, p_k}$ is an induced subgraph of $K_{p_1, p_2, \ldots, p_k}$, with Seidel eigenvalues $\mu_1\ge \mu_2\ge \ldots \ge \mu_{kp_k}$. By Lemma \ref{spectra0}, \[\mu_1=\ldots=\mu_{k-1}=2p_k-1>0.\] By interlacing we get that \[\lambda_i\ge \mu_i>0\] for every $i=1, \ldots, k-1$, and \[(2-k)p_k-1=\mu_{kp_k}\ge \lambda_{n-kp_k+kp_k}=\lambda_n.\] Let $S$ be the Seidel matrix of $K_{p_1,p_2,\ldots, p_k}$. It is easy to see that in $I+S$ there are only $k$ different rows (the $i$-th of which is repeated $p_i$ times, $i=1,\ldots, k$). Thus $\rank(I+S)\le k$, and $-1$ is an eigenvalue of $S$ of multiplicity at least $n-k$. This completes the proof of (1) (and part of (3)). The more detailed inequalities on the Seidel eigenvalues in (2) and (4) follow from analysis of the characteristic polynomial of $S$, done in \cite[Theorem 2.2]{WangZhaoLi2014}. The polynomial is \[(x+1)^{n-k}\left[\prod_{i=1}^k\left(x-2p_i+1\right)+ \sum_{j=1}^k p_j\prod_{\underset{i\ne j}{i=1}}^k (x-2p_i+1)\right],\] and thus the $k$ eigenvalues of $S$ other than the $n-k$ known $-1$'s are the roots of \[f(x)=\prod_{i=1}^k\left(x-2p_i+1\right)+ \sum_{j=1}^k p_j\prod_{\underset{i\ne j}{i=1}}^k (x-2p_i+1). \] The inequality $\lambda_n\ge 2p_k-n-1$ follows from \[0=\sum_{i=1}^n\lambda_i\le \sum_{i=1}^{k-1}(2p_i-1)+(n-k)(-1)+\lambda_n,\] combined with $\sum_{i=1}^{k-1}p_i=n-p_k$. This completes the proof of (3). \end{proof} \begin{theorem}\label{main} Let the Seidel spectrum of $G$ be equal to that of the complete $k$-partite graph $K_{p_1, p_2, \ldots, p_k}$. Then $G$ is a complete $k$-partite graph, up to switching. \end{theorem} \begin{proof} We first show that $G$ is complete multipartite up to switching. By using a switching with respect to the neighborhood of a single vertex, we may assume that $G=K_1\cup D$. 
As the Seidel spectrum of $G$ is equal to that of $K_{p_1, p_2, \ldots, p_k}$, $\lambda_{n-1}=-1$. If $\mu_1\ge \ldots\ge \mu_5$ are the Seidel eigenvalues of an induced subgraph of $G$ on $5$ vertices, $\mu_{4}\ge \lambda_{n-5+4}=-1$ by interlacing. Hence, by Lemma \ref{spectra2}, $G$ cannot have an induced $K_1\cup K_2\cup K_2$, or an induced $K_1\cup P_4$, or an induced $K_1\cup H$, where $H$ is the paw graph. Therefore $D$ cannot contain an induced $K_2\cup K_2$ or an induced $P_4$, or an induced paw. At most one of the components of $D$ may contain an edge, since otherwise $D$ contains an induced $K_2\cup K_2$, contrary to the assumption. Let $\chi(D)=t$ be the chromatic number of $D$. If $t>1$ then $D$ contains (exactly one) component $C$ with at least one edge, and $\chi(D)=\chi(C)$. Let $C=V_1\cup V_2\cup \ldots\cup V_t$, such that $v{\not\sim}u$ whenever $v, u\in V_i$, and for each $i\ne j$ there is an edge with one end in $V_i$ and one end in $V_j$. Let $v\in V_i$ and $u\in V_j$ be neighbors. If $w\in V_i$, $w\ne v$, then $w$ has to be a neighbor of $u$ too. To see this, suppose on the contrary that $w{\not\sim}u$. Then by the connectivity of $C$, $w$ has a neighbor $z\ne u$. As $v{\not\sim}w$, the subgraph of $D$ induced on vertices $\{v,u,z,w\}$ is one of the following: either an induced $K_2\cup K_2$ (if it has only the edges $vu$ and $wz$), or an induced $P_4$ (if exactly one of $u\sim z$, $v\sim z$ holds), or an induced paw (if $v$ and $u$ are both neighbors of $z$). This contradicts the observation above, that none of these is a possible induced subgraph of $D$. Thus every neighbor of $v\in V_i$ is also a neighbor of all the other vertices in $V_i$. Suppose $u\in V_j$, $j\ne i$. By the same argument, each vertex in $V_j$ is a neighbor of each of the vertices in $V_i$. Hence the graph induced on $V_i\cup V_j$ is complete bipartite. Since this holds for any $i\ne j$, the graph $C$ is a complete $t$-partite graph.
Therefore $D$, and thus $G$, consists of a complete multipartite graph and isolated vertices. Let $U$ be the set of all isolated vertices of $K_1\cup D$. Switching with respect to $U$ yields a complete $(t+1)$-partite graph. Hence $G$ is a complete $r$-partite graph for some $r$ (up to switching). The number of positive Seidel eigenvalues of $G$ is therefore $r-1$. On the other hand, the Seidel eigenvalues of $G$ are those of $K_{p_1, p_2, \ldots, p_k}$, hence $r-1=k-1$, implying that $G$ is a complete $k$-partite graph, up to switching. \end{proof} According to the next theorem, two complete $k$-partite graphs, $k\ge 3$, on the same number of vertices can be switching equivalent only if they are isomorphic. \begin{theorem}\label{difkpart} Let $p_1\ge p_2\ge \ldots \ge p_k\ge 1$ and $q_1\ge q_2\ge \ldots \ge q_k\ge 1$ be two different $k$-tuples, $k\ge 3$, such that $\sum_{i=1}^kp_i=\sum_{i=1}^kq_i$. Then $K_{p_1, p_2, \ldots, p_k}$ and $K_{q_1, q_2,\ldots, q_k}$ are not switching-equivalent. \end{theorem} \begin{proof} Let $V_i$ denote the independent set in $G=K_{p_1, p_2, \ldots, p_k}$ of size $p_i$, $i=1,\ldots, k$. Let $W_i$, $i=1,\ldots, k$, be the independent sets in $K_{q_1, q_2,\ldots, q_k}$, with $|W_i|=q_i$. Suppose there exists $U\subseteq \cup_{i=1}^k V_i$ such that switching $G$ with respect to $U$ yields $G'=K_{q_1, q_2,\ldots, q_k}$. The set $U$ is not empty, since by the assumption on the $k$-tuples $G'\ne G$. We denote $V_{i1}=V_i\cap U$, and $V_{i2}=V_i\setminus V_{i1}$. In $G'$, each vertex in $V_{i1}$ is connected by an edge to each vertex in $V_{i2}$, to each vertex in $V_{j1}$, $j\ne i$, and to no other vertices. Similarly, each vertex in $V_{i2}$ is also connected to each vertex in $V_{j2}$, $j\ne i$. Suppose without loss of generality that $V_{11}\ne \emptyset$. Then at most one of $V_{i2}$, $i=2, \ldots, k$, is not empty.
For if $V_{i2}, V_{j2}\ne \emptyset$, for $i, j\ge 2$, $i\ne j$, then as $V_{11}\subseteq W_\ell$ for some $\ell$, and $V_{11}\cup V_{i2}$ is independent in $G'$, also $V_{i2}\subseteq W_\ell$. Similarly, $V_{j2}\subseteq W_\ell$. But since there are edges between $V_{i2}$ and $V_{j2}$, this contradicts the independence of $W_\ell$. So suppose without loss of generality that $V_{i2}=\emptyset$ for $3\le i\le k$. Then $V_{i1}=V_i$ for $3\le i\le k$. Now both $V_{11}$ and $V_{k1}$ are not empty. If $V_{22}\ne \emptyset$, then $V_{11}\cup V_{22}$ is independent in $G'$ and thus contained in $W_\ell$. But then since $V_{k1}\cup V_{22}$ is independent, also $V_{k1}\subseteq W_\ell$. A contradiction, since there is an edge between $V_{11}$ and $V_{k1}$. Hence $V_{22}$ is empty, and $V_{21}=V_2$ is non-empty. By the independence of both $V_{k1}\cup V_{12}$ and $V_{21}\cup V_{12}$ in $G'$, and the existence of edges between $V_{k1}$ and $V_{21}$, we get that $V_{12}$ also has to be empty. That is, $V_{11}=V_1$, and thus $U$ consists of all the vertices of $G$, which means that $G'=G$, contrary to the assumption that the $k$-tuples are different. \end{proof} The last theorem leaves out the case $k=2$. \begin{remark}\label{Kpq=Kst} {\rm By Lemma \ref{spectra1}, all complete bipartite graphs on the same number of vertices are Seidel cospectral. However, any two complete bipartite graphs on the same number of vertices are also switching equivalent: Let $K_{p,q}$ and $K_{s,t}$ be two non-isomorphic complete bipartite graphs, with $p+q=s+t=n$, $\{s,t\}\ne \{p,q\}$. Suppose $p\ge q$, $s\ge t$, and $p>s$. Let $V_1$ and $V_2$ be the independent sets in $K_{p,q}$, $|V_1|=p$, $|V_2|=q$. Let $U\subseteq V_1$ be any set of $s-q$ vertices (by our assumptions, $p-t=s-q$ and $p-t>s-t\ge 0$, so $p>s-q>0$).
Then after switching $K_{p,q}$ with respect to $U$, we get $K_{s,t}$ with independent sets $W_1=U\cup V_2$ and $W_2=V_1\setminus U$.} \end{remark} Combining Remark \ref{Kpq=Kst} with Theorem \ref{main}, we get that if a graph $G$ is cospectral with $K_{p,q}$, then $G$ is switching equivalent to a complete bipartite graph, and therefore to $K_{p,q}$. \begin{theorem} Any complete bipartite graph is $S$-determined. \end{theorem} In some cases a complete $k$-partite graph is determined by its Seidel spectrum up to switching. The following is one such case. \begin{theorem}\label{Kpiqi} Let $p_1> \ldots > p_l\ge 1$. The graph $K_{\underset{s_1}{\underbrace{p_1,...,p_1}},\ldots,\underset{s_l}{\underbrace{p_l,...,p_l}}}$, where $s_i\geq 3$ for every $i=1,\ldots,l$, is $S$-determined. \end{theorem} \begin{proof} Let $r_i=\sum_{j=1}^{i}s_j$, $i=1,2,\ldots,l$, and $r_0=0$. Denote $k=r_l$. Let $\lambda_1\ge \ldots\ge \lambda_n$, where $n=\sum_{i=1}^l s_ip_i$, be the Seidel eigenvalues of the $k$-partite graph $K_{\underset{s_1}{\underbrace{p_1,...,p_1}},\ldots,\underset{s_l}{\underbrace{p_l,...,p_l}}}$. By Lemma \ref{spectra3}, for $i=1, \ldots, l$ \[\lambda_j=2p_i-1,\quad j=r_{i-1}+1, \ldots, r_i-1,\] and, for $i=1,\ldots,l-1$, \[2p_i-1>\lambda_{r_i}>2p_{i+1}-1.\] If for $q_1\ge q_2\ge \ldots \ge q_k$ the graph $K_{q_1, q_2, \ldots, q_k}$ has the same Seidel spectrum, then by Lemma \ref{spectra3}(2), for $i=1,\ldots,l$, \[2q_j-1\geq \lambda_j=2p_i-1 \geq 2q_{j+1}-1, \quad j=r_{i-1}+1,\ldots, r_{i}-1.\] Hence $q_{j}=p_i$ for $j=r_{i-1}+2,\ldots, r_{i}-1$. By Lemma \ref{spectra3}(4), $q_{r_{i-1}+1}>q_{r_{i-1}+2}$ is impossible, since otherwise \[2q_{r_{i-1}+1}-1>\lambda_{r_{i-1}+1}>2q_{r_{i-1}+2}-1,\] contrary to $\lambda_{r_{i-1}+1}=2p_i-1=2q_{r_{i-1}+2}-1$ obtained above. Hence, $q_{j}=p_i$, for $j=r_{i-1}+1,\ldots, r_{i}-1$, $i=1, \ldots, l$. Since $\sum_{j=1}^k q_j=\sum_{i=1}^{l} s_ip_i$, we have $\sum_{i=1}^{l}q_{r_i}=\sum_{i=1}^{l}p_i$.
As $q_{r_i}\le q_{r_i-1}=p_i$ for every $i=1, \ldots, l$, the latter equality of sums implies that $q_{r_i}=p_i$ for every $i$. \end{proof} In general, however, there may be complete $k$-partite graphs that are not switching-equivalent and have the same Seidel spectrum. Some examples of such tripartite graphs are included in the next section. \section{Complete tripartite graphs}\label{comtripar} It was shown in \cite{LvWeiZhao2012} that the characteristic polynomial of the Seidel matrix of the complete tripartite graph $K_{p,q,r}$ is \[(\lambda+1)^{n-3}(\lambda^3+\lambda^2(3-n)+\lambda(3-2n)+(4pqr-n+1)),\] where $n=p+q+r$. Thus if $\lambda_1\ge \lambda_2$ are the two positive Seidel eigenvalues of $K_{p,q,r}$ and $\lambda_n$ is its smallest Seidel eigenvalue, then $\lambda_1, \lambda_2, \lambda_n$ are the roots of the polynomial \[\lambda^3+\lambda^2(3-(p+q+r))+\lambda(3-2(p+q+r))+(4pqr-(p+q+r)+1).\] As the other $n-3$ eigenvalues in both graphs are all equal to $-1$, $K_{x,y,z}$ has exactly the same Seidel spectrum as $K_{p,q,r}$ if and only if the following two equalities hold: \begin{align} x+y+z&=p+q+r \label{p+q+r}\\ xyz&=pqr ~~.\label{pqr} \end{align} Combined with Theorem \ref{difkpart}, this implies the following. \begin{observation}\label{dif3part} The complete tripartite graph $K_{p,q,r}$ is $S$-determined if and only if the unique solution (up to permutation) to the system of equations \eqref{p+q+r} and \eqref{pqr} is $\{x,y,z\}=\{p,q,r\}$. \end{observation} \begin{example}\label{661} The graphs $K_{6,6,1}$ and $K_{9,2,2}$ have the same Seidel spectrum, but are not switching-equivalent. \end{example} We mention a few more observations: \begin{itemize} \item Suppose one of $x,y,z$ is equal to one of $p,q,r$, say $x=p$. Then $y$ and $z$ have to have the same sum and the same product as $q$ and $r$. But, up to order, $q,r$ is the only pair of positive integers with sum $q+r$ and product $qr$.
Thus if $K_{x,y,z}$ has the same Seidel spectrum as $K_{p,q,r}$ and is not switching-equivalent to $K_{p,q,r}$, then $\{x,y,z\}\cap\{p,q,r\}=\emptyset$. \item If $K_{x,y,z}$ has the same Seidel spectrum as $K_{p,q,r}$, then $K_{kx,ky,kz}$ has the same spectrum as $K_{kp,kq,kr}$. Thus if $K_{p,q,r}$ is not $S$-determined, then for every positive integer $k$ the graph $K_{kp,kq,kr}$ is not $S$-determined. The converse does not hold: $K_{10,2,2}$ has the same Seidel spectrum as $K_{8,5,1}$, but $K_{5,1,1}$ is $S$-determined; see Theorem \ref{pq1} below. \end{itemize} It is not hard to find infinitely many pairs of different triples $p, q, r$ and $x, y, z$ such that $K_{p,q,r}$ and $K_{x,y,z}$ have the same Seidel spectrum. Some examples will be given below. In fact, it was shown in \cite{Schinzel1996} that for every positive integer $m$ there are infinitely many sets of $m$ different primitive triples sharing the same sum and the same product (where a triple is primitive if the greatest common divisor of its elements is $1$). In the remainder of this section, we mention some cases in which complete tripartite graphs are $S$-determined, and some cases in which they are not. \begin{theorem}\label{casesdetermined} In the following cases the graph $K_{p,q,r}$ is $S$-determined: \begin{enumerate} \item[{\rm(a)}] $p=q=r$. \item[{\rm(b)}] $p,q,r$ are all powers of the same prime $a$. \item[{\rm(c)}] $\max\{p,q,r\}$ is prime. \end{enumerate} \end{theorem} \begin{proof} Part (a) is a special case of Theorem \ref{Kpiqi}. Part (b) holds since in this case if $K_{x,y,z}$ has the same Seidel spectrum as $K_{p,q,r}$, then $xyz=pqr$ implies that each of $x$, $y$, $z$ is a power of $a$. Since there is a unique way to write $p+q+r$ as a sum of three powers of the prime $a$ with product $pqr$, the triple $x,y,z$ is equal to $p,q,r$. We now prove part (c). Suppose $p\ge q\ge r$ and $p$ is prime. If $xyz=pqr$, then $p$ divides one of $x,y,z$, say $x$.
If $x=kp$, $k\ge 2$, then \[3p\ge p+q+r=x+y+z\ge kp+2\] implies that $k=2$. But then \[qr=2yz ~\text{ and }~ q+r=p+y+z,\] and thus $q\ge q+r-p= y+z\ge 2z$. This in turn implies \[2yz=qr\ge 2zr,\] hence $y\ge r$. We get that \[p+y\ge q+r =p+y+z>p+y,\] a contradiction. Thus $x=p$, and therefore the triple $p,q,r$ and the triple $x,y,z$ are identical. \end{proof} We now consider whether slightly weaker conditions may suffice for $K_{p,q,r}$ to be $S$-determined. Does equality of exactly two elements in the triple $p,q,r$ suffice? Does a prime in the triple, but not the largest, suffice? Do two primes suffice? In general, the answer to each of these questions is negative: \begin{example}\label{prrppr} {\rm For every positive integer $k$ ($k$ and $2k-1$ not necessarily prime) the graphs $K_{k(2k-1), k(2k-1), 1}$ and $K_{(2k-1)^2, k, k}$ are Seidel cospectral, and for $k>1$ they are non-isomorphic. Thus for every $r>1$ there exists $p>r$ such that $K_{p,r,r}$ is not $S$-determined, and for every $r\ge 1$ there exist infinitely many non-prime $p$'s such that $K_{p,p,r}$ is not $S$-determined.} \end{example} To point out some cases where $K_{p,p,r}$, $p>r$, is $S$-determined, we first prove an auxiliary result. \begin{lemma}\label{p=q>r} Let $p>r$ and let $K_{x,y,z}$ be Seidel cospectral with $K_{p,p,r}$, where $x\ge y\ge z$ and $x,y,z\notin\{p,r\}$. Then \[x>p>y\ge z>r \text{ and } x+y<2p.\] Let $K_{x,y,z}$ be Seidel cospectral with $K_{p,r,r}$, where $x\ge y\ge z$ and $x,y,z\notin\{p,r\}$. Then \[p>x\ge y>r>z \text{ and } x+y<2p.\] \end{lemma} \begin{proof} Let $\lambda_1\ge \lambda_2$ be the two positive Seidel eigenvalues of $K_{p,p,r}$. By Lemma \ref{spectra3}, since $p>r$, \[2p-1\ge \lambda_1\ge 2p-1> \lambda_2> 2r-1.\] Thus $\lambda_1=2p-1$, and $\lambda_2>2r-1$. By Lemma \ref{spectra3}, \[2x-1\ge 2p-1\ge 2y-1\ge \lambda_2\ge 2z-1.\] Since $2x-1, 2y-1\ne 2p-1$, the first two inequalities are strict, and hence $x>p>y$.
So $x=p+a$, $y=p-b$, with $a$ and $b$ positive integers. As \[p+a+p-b+z=2p+r,\] we get that $r=z+a-b$. Thus from $xyz=p^2r$ we get \[(p+a)(p-b)z=p^2(z+a-b),\] implying that \[(a-b)pz-abz=(a-b)p^2,\] or \[(a-b)p(z-p)=abz.\] Since the right hand side of the last equality is positive, and $z\le y<p$, we must have $a<b$, and \[x+y=p+a+p-b<2p.\] This in turn implies that \[z=2p+r-(x+y)>r.\] The proof of the second claim in the lemma is similar. \end{proof} \begin{theorem}\label{ababa} Let $a$ and $b$ be different primes. Then $K_{ab,ab,a}$ is $S$-determined if and only if the ordered pair $(a,b)\ne (2,3)$. \end{theorem} \begin{proof} For the ``only if'' direction, note that the tripartite graph $K_{6,6,2}$ is Seidel cospectral with $K_{8,3,3}$. Now suppose $(a,b)\ne (2,3)$. By Lemma \ref{p=q>r} combined with \eqref{p+q+r} and \eqref{pqr}, the graph $K_{x,y,z}$, $x\ge y\ge z$ and $x,y,z\notin\{ab,a\}$, is Seidel cospectral with $K_{ab,ab,a}$ if and only if the following three conditions are satisfied: \begin{align} x+y+z&=2ab+a \label{2ab+a}\\ xyz&=a^3b^2 \label{a3b2}\\ x>ab&>y\ge z>a \label{x>ab>..z>a} \end{align} By \eqref{a3b2}, $z$ cannot be a product of at least two primes, since otherwise $xy$ is a product of at most three primes, which cannot occur in combination with $x>y\ge z$. Hence $z$ is a prime. In the case that $b<a$, $K_{ab,ab,a}$ is $S$-determined since in this case there is no prime $z>a$ which divides $a^3b^2$. If $b>a$, then $z=b$. Then $xy=a^3b$, $x>y\ge b$, $x,y\notin\{ab,a\}$, holds only if $x=a^3$, $y=z=b$. By \eqref{2ab+a}, $a^3+2b=2ab+a$, so $a^3-a=2b(a-1)$ and therefore $a(a+1)=2b$. But this last equality is satisfied only if $a=2$ and $b=3$, contrary to our assumption. Therefore $K_{ab,ab,a}$ is $S$-determined in this case also. \end{proof} In the next result, we consider $K_{p,p,1}$, where $p$ is a product of two different primes.
\begin{theorem}\label{abab1} If $p=ab$, where $a>b$ are both prime, and $K_{p,p,1}$ is not $S$-determined, then $a=2b-1$ and the only complete tripartite graph Seidel cospectral with $K_{p,p,1}$ is $K_{a^2,b,b}$. \end{theorem} \begin{proof} By the assumption and Lemma \ref{p=q>r}, there exist $x>p>y\ge z>1$ such that $K_{x,y,z}$ is Seidel cospectral with $K_{p,p,1}$. Then $xyz=a^2b^2$. By Lemma \ref{p=q>r}, $x>ab$, and as $y,z\notin\{ab,1\}$, $x\notin \{a^2b^2, a^2b, ab^2\}$. Thus either $x=b^2$ or $x=a^2$. Suppose $x=b^2$. Then $yz=a^2$ and, since $y,z\ne 1$, necessarily $y=z=a$. By \eqref{p+q+r} we get that \[b^2+2a=2ab+1,\] and therefore \[b^2-1=2a(b-1),\] implying that $b+1=2a$. But this is impossible by the assumption that $a>b(>1)$. By the same computation, in the remaining case, that $x=a^2$, necessarily $y=z=b$, and we get that $a+1=2b$. The graphs $K_{ab,ab,1}$ and $K_{a^2,b,b}$ do have the same Seidel spectrum when $a=2b-1$ (see also Example \ref{prrppr}). \end{proof} In the next result, we consider the general case that one element in the triple is $1$. \begin{theorem}\label{pq1} $K_{p,q,1}$ is $S$-determined if $q\le 4$. For every $q>4$ there exists a positive integer $p$ such that $K_{p,q,1}$ is not $S$-determined. \end{theorem} \begin{proof} If \begin{align} x+y+z&=p+q+1 \label{p+q+1} \\ xyz&=pq ~~,\label{pq} \end{align} where $x\ge y\ge z$ is a triple different from $p, q, 1$, then in particular $y\ge z\ge 2$ (if $z=1$, then $x,y$ would have the same sum and the same product as $p,q$, and hence coincide with them), and by Theorem \ref{casesdetermined}(c) $x$ is not prime. Multiplying \eqref{p+q+1} by $q$ and substituting $pq$ by $xyz$ we get \[q(x+y+z)=xyz+q^2+q,\] and thus $q(y+z-q-1)=x(yz-q)$. As $y\ge z\ge 2$, $yz\ge y+z$. Thus \begin{equation} x(yz-q)\le q(yz-q-1). \label{xle}\end{equation} For the first part of the theorem, suppose on the contrary that such $x\ge y\ge z$ exist for $q\le 4$. As $y,z\ge 2$, we have $yz\ge 4\ge q$. And $yz\ne q$, since otherwise in \eqref{xle} we get that $0\le -q$.
Thus $yz>q$ and \[x\le q\frac{yz-q-1}{yz-q}<q.\] But since $x\ge 2$ and $x<q\le 4$, $x$ must be $2$ or $3$, hence prime, and we get a contradiction. For the second part of the theorem, suppose first that $q>4$ is odd; then $K_{\frac{(q-1)^2}{2},q,1}$ and $K_{q\frac{q-1}{2}, \frac{q-1}{2},2}$ have the same Seidel spectrum. This covers in particular the case that $q$ is prime. If $q>4$ and $q=ab$, where $a>1$ and $b>2$, let $p=(b-1)(ab-a-b+2)$. Then $x=b(ab-a-b+2)$, $y=b-1$, $z=a$ form a triple such that $K_{x,y,z}$ has the same Seidel spectrum as $K_{p,q,1}$. \end{proof} \begin{corollary}\label{cornge 3 det} For any integer $n\ge 3$ there exists an $S$-determined complete tripartite graph of order $n$. \end{corollary} By a computer check, all complete tripartite graphs on $n$ vertices with $n<13$, $n=15$, or $n=18$ are $S$-determined. For $n=13,14,16,17$ there exist complete tripartite graphs which are not $S$-determined. For $n=13$, $K_{9,2,2}$ and $K_{6,6,1}$ are Seidel cospectral. For $n=14$, $K_{8,3,3}$ and $K_{6,6,2}$ are Seidel cospectral. For $n=16$, $K_{9,5,2}$ and $K_{10,3,3}$ are Seidel cospectral. For $n=17$, $K_{9,4,4}$ and $K_{8,6,3}$ are Seidel cospectral. \begin{remark}{\rm A natural question is for which integers $n$ there exist complete tripartite graphs on $n$ vertices that are not $S$-determined. This question was settled in an Undergraduate Research Project at the Technion by Ramy Masalha and Eli Bogdanov \cite{MasalhaBogdanov2018}. They showed that for $n=7k-\alpha$, $\alpha\in \{1,2,3,4,5,6,7\}$, the following two triples have the same sum and the same product: \[ \alpha, 3k,4k-2\alpha ~\text{ and }~ 2\alpha, k, 6k-3\alpha.\] For $n\notin\{22,24,30,36,42\}$ these are two different triples.
For the remaining values of $n$ they found the following pairs of triples: \begin{align*} n=22:&\quad 9,8,5 \quad\text{and}\quad 10,6,6\\ n=24:&\quad 12,10,2 \quad\text{and}\quad 16,5,3\\ n=30:&\quad 20,7,3 \quad\text{and}\quad 21,5,4\\ n=36:&\quad 21,13,2 \quad\text{and} \quad 26,7,3\\ n=42:&\quad 24,16,2 \quad\text{and}\quad 32,6,4 \end{align*} Thus for every $n\ge 13$, other than $n=15$ and $n=18$, there exists a complete tripartite graph on $n$ vertices that is not $S$-determined. }\end{remark} {\bf Acknowledgements} The authors would like to thank the anonymous referee for suggestions and comments on an earlier version of this paper.
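The computer check reported above is easy to reproduce. The following short script is an illustration on our part (not taken from the paper); by Observation \ref{dif3part}, it suffices to group the triples summing to $n$ by their product, since any group containing more than one triple yields Seidel cospectral, pairwise non-switching-equivalent complete tripartite graphs.

```python
def cospectral_tripartite_groups(n):
    """Group the triples p >= q >= r >= 1 with p + q + r == n by their
    product.  By Observation (dif3part), a group with more than one triple
    consists of Seidel cospectral complete tripartite graphs on n vertices
    that are pairwise non-switching-equivalent."""
    by_product = {}
    for r in range(1, n // 3 + 1):
        # q <= (n - r) // 2 guarantees p = n - q - r >= q automatically
        for q in range(r, (n - r) // 2 + 1):
            p = n - q - r
            by_product.setdefault(p * q * r, []).append((p, q, r))
    return [group for group in by_product.values() if len(group) > 1]
```

For instance, `cospectral_tripartite_groups(13)` returns the single group $\{(9,2,2),(6,6,1)\}$, while for every $n<13$ the list is empty, matching the computer check reported above.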
\section{Introduction} This paper is intended as the first in a series \cite{tian-xiao2, tian-xiao3}, in which we study the Goren-Oort stratification for quaternionic Shimura varieties. The purpose of this paper is to give a global description of the strata, showing that they are in fact $(\mathbb{P}^1)^r$-bundles over (the special fiber of) other quaternionic Shimura varieties for a certain integer $r$. We fix a prime number $p > 2$. \subsection{A baby case: modular curves} \label{S:modular curve} Let $N \geq 5$ be an integer prime to $p$. Let $\mathcal{X}$ denote the modular curve with level $\Gamma_1(N)$; it admits a smooth integral model $\mathbf{X}$ over $\mathbb{Z}[1/N]$. We are interested in the special fiber $X := \mathbf{X} \otimes_{\mathbb{Z}[1/N]} \mathbb{F}_p$. The curve $X$ has a natural stratification by the supersingular locus $X^\mathrm{ss}$ and the ordinary locus $X^\mathrm{ord}$. In concrete terms, $X^\mathrm{ss}$ is defined as the zero locus of the Hasse invariant $h \in H^0(X, \omega^{\otimes(p-1)})$, where $\omega^{\otimes(p-1)}$ is the sheaf of weight $p-1$ modular forms. The following deep result of Deuring and Serre (see e.g. \cite{serre}) gives an intrinsic description of $X^\mathrm{ss}$. \begin{theorem}[Deuring, Serre] \label{T:Deuring-Serre} Let $\mathbb{A}^\infty$ denote the ring of finite ad\`eles over $\mathbb{Q}$, and $\mathbb{A}^{\infty, p}$ its prime-to-$p$ part.
We have a bijection of sets: \[ \big\{\overline \mathbb{F}_p\textrm{-points of } X^\mathrm{ss} \big\} \longleftrightarrow B^\times_{p,\infty} \backslash B_{p,\infty}^\times(\mathbb{A}^{\infty}) / K_1(N) B_{p,\infty}^\times(\mathbb{Z}_p) \] equivariant under the prime-to-$p$ Hecke correspondences, where $B_{p, \infty}$ is the quaternion algebra over $\mathbb{Q}$ which ramifies at exactly two places: $p$ and $\infty$, $B_{p, \infty}^\times(\mathbb{Z}_p)$ is the maximal open compact subgroup of $B_{p, \infty}^\times(\mathbb{Q}_p)$, and $K_1(N)$ is an open compact subgroup of $\mathrm{GL}_2(\mathbb{A}^{\infty,p}) = B^\times_{p, \infty}(\mathbb{A}^{\infty,p})$ given by \[ K_1(N) = \big\{\big(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\big) \in \mathrm{GL}_2(\widehat \mathbb{Z}^{(p)}) \; \big|\; c \equiv 0, d\equiv 1 \pmod N \big\}, \textrm{ where } \widehat \mathbb{Z}^{(p)} = \prod_{l \neq p} \mathbb{Z}_l. \] \end{theorem} The original proof of this theorem uses the fact that all supersingular elliptic curves over $\overline\mathbb{F}_p$ are isogenous and that the quasi-endomorphism ring is exactly $B_{p, \infty}$. We however prefer to understand the result as follows: certain special cycles in the special fiber of the Shimura variety for $\mathrm{GL}_2$ are just the special fiber of the Shimura variety for $B_{p, \infty}^\times$. The aim of this paper is to generalize this theorem to the case of quaternionic Shimura varieties. For simplicity of presentation, we focus on the case of Hilbert modular varieties. We will indicate how to modify the result to adapt to the general case. \subsection{Goren-Oort Stratification} \label{S:stratification of HMV} Let $F$ be a totally real field, and let $\mathcal{O}_F$ denote its ring of integers. We assume that $p$ is \emph{unramified} in $F$. Goren and Oort \cite{goren-oort} defined a stratification of the special fiber of the Hilbert modular variety $X_{\mathrm{GL}_2}$.
More precisely, let $\mathbb{A}_F^\infty$ denote the ring of finite ad\`eles of $F$ and $\mathbb{A}_F^{\infty, p}$ its prime-to-$p$ part. We fix an open compact subgroup $K^p \subset \mathrm{GL}_2(\mathbb{A}_F^{\infty, p})$. Let $\mathcal{X}_{\mathrm{GL}_2}$ denote the {\it Hilbert modular variety} (over $\mathbb{Q}$) with tame level $K^p$. Its complex points are given by \[ \mathcal{X}_{\mathrm{GL}_2}(\mathbb{C}) = \mathrm{GL}_2(F)\; \backslash\; \big({\mathfrak{h}^\pm}^{[F:\mathbb{Q}]} \times \mathrm{GL}_2(\mathbb{A}_F^{\infty})\big) \;/\; \big(K^p \times \mathrm{GL}_2(\mathcal{O}_{F, p})\big), \] where $\mathfrak{h}^\pm :=\mathbb{C} \backslash \mathbb{R}$ and $\mathcal{O}_{F, p} := \mathcal{O}_F \otimes_\mathbb{Z} \mathbb{Z}_p$. The Hilbert modular variety $\mathcal{X}_{\mathrm{GL}_2}$ admits an integral model $\mathbf{X}_{\mathrm{GL}_2}$ over $\mathbb{Z}_{(p)}$; let $X_{\mathrm{GL}_2}$ denote its special fiber over $\overline \mathbb{F}_p$. Since $p$ is unramified in $F$, we may and will identify the $p$-adic embeddings of $F$ with the homomorphisms of $\mathcal{O}_F$ to $\overline \mathbb{F}_p$, i.e. $\Hom(F, \overline \mathbb{Q}_p) \cong \Hom(\mathcal{O}_F, \overline \mathbb{F}_p)$. Let $\Sigma_\infty$ denote this set. (We shall later identify the $p$-adic embeddings with the real embeddings, hence the subscript $\infty$.) Under the latter description, the absolute Frobenius $\sigma$ acts on $\Sigma_\infty$ by taking an element $\tau$ to the composite $\sigma \tau:\mathcal{O}_F \xrightarrow{\tau} \overline \mathbb{F}_p \xrightarrow{x\mapsto x^p} \overline \mathbb{F}_p$. This action decomposes $\Sigma_\infty$ into a disjoint union of cycles, parametrized by the $p$-adic places of $F$. Let $\mathcal{A}$ denote the universal abelian variety over $X_{\mathrm{GL}_2}$.
The sheaf of invariant differential 1-forms $\omega_{\mathcal{A}/X_{\mathrm{GL}_2}}$ is then locally free of rank one as a module over \[ \mathcal{O}_F \otimes_\mathbb{Z} \mathcal{O}_{X_{\mathrm{GL}_2}} \cong \bigoplus_{\tau \in \Sigma_\infty} \mathcal{O}_{X_{\mathrm{GL}_2, \tau}}, \] where $\mathcal{O}_{X_{\mathrm{GL}_2, \tau}}$ is the direct summand on which $\mathcal{O}_F$ acts through $\tau : \mathcal{O}_F \to \overline \mathbb{F}_p$. We then write accordingly $\omega_{\mathcal{A}/X_{\mathrm{GL}_2}} = \bigoplus_{\tau \in \Sigma_\infty} \omega_\tau$; each $\omega_\tau$ is locally free of rank one over $\mathcal{O}_{X_{\mathrm{GL}_2}}$. \begin{defn} The Verschiebung map induces an $\mathcal{O}_F$-morphism $\omega_{\mathcal{A}/X_{\mathrm{GL}_2}} \to \omega_{\mathcal{A}^{(p)}/ X_{\mathrm{GL}_2}}$, which further induces a homomorphism $h_\tau: \omega_\tau \to \omega^{\otimes p}_{\sigma^{-1}\tau}$ for each $\tau \in \Sigma_\infty$. This map then defines a global section $h_\tau \in H^0(X_{\mathrm{GL}_2}, \omega_\tau^{\otimes -1} \otimes \omega^{\otimes p}_{\sigma^{-1}\tau})$; it is called the \emph{partial Hasse invariant at} $\tau$. We use $X_\tau$ to denote the zero locus of $h_\tau$. For a subset $\mathtt{T} \subseteq \Sigma_\infty$, we put $X_\mathtt{T} = \bigcap_{\tau \in \mathtt{T}} X_\tau$. These $X_\mathtt{T}$'s give the \emph{Goren-Oort stratification} of $X_{\mathrm{GL}_2}$. An alternative definition of $X_\mathtt{T}$ is given as follows: $z \in X_\mathtt{T}(\overline \mathbb{F}_p)$ if and only if $\Hom(\alpha_p, \mathcal{A}_z[p])$ under the action of $\mathcal{O}_F$ has eigenvalues given by those embeddings $\tau \in \mathtt{T}$. We refer to \cite{goren-oort} for the proof of the equivalence and a more detailed discussion. \end{defn} It is proved in \cite{goren-oort} that each $X_\tau$ is a smooth and proper divisor, and that these divisors intersect transversally.
Hence $X_\mathtt{T}$ is smooth of codimension $\#\mathtt{T}$ for any subset $\mathtt{T} \subseteq \Sigma_\infty$; it is proper if $\mathtt{T} \neq \emptyset$. \subsection{Description of the Goren-Oort strata} \label{S:intro GO-strata} The goal of this paper is to give a \emph{global} description of the Goren-Oort strata. Prior to our paper, most works focused on the $p$-divisible groups of the abelian varieties, which often provides good access to the local structure of the Goren-Oort strata, e.g. dimensions and smoothness. Unfortunately, there has been little understanding of the global geometry of $X_\mathtt{T}$, except in low dimensions. We refer to the survey article \cite{andreatta-goren} for a historical account. Recently, Helm made break-through progress in \cite{helm, helm-PEL} by taking advantage of the moduli problem; he was able to describe the global geometry of certain analogous strata of the special fibers of Shimura varieties of type $U(2)$. Our proof of the main theorem of this paper is, roughly speaking, to complete Helm's argument to cover all cases for the $U(2)$-Shimura varieties and then transfer the results from the unitary side to the quaternionic side. Rather than stating our main theorem in an abstract way, we prefer to give more examples to indicate the general pattern. When $F=\mathbb{Q}$, this is discussed in Section~\ref{S:modular curve}. For $F \neq \mathbb{Q}$, we fix an isomorphism $\mathbb{C} \cong \overline \mathbb{Q}_p$ and hence identify $\Sigma_\infty = \Hom(F, \overline \mathbb{Q}_p)$ with the set of real embeddings of $F$. \subsubsection{$F$ real quadratic and $p$ inert in $F$} Let $\infty_1$ and $\infty_2$ denote the two real embeddings of $F$, and $\tau_1$ and $\tau_2$ the corresponding $p$-adic embeddings (via the fixed isomorphism $ \mathbb{C} \cong \overline \mathbb{Q}_p$).
Then our main theorem says that each $X_{\tau_i}$ is a $\mathbb{P}^1$-bundle over the special fiber of the discrete Shimura variety $\overline\mathrm{Sh}_{B_{\infty_1, \infty_2}^\times}$, where $B_{\infty_1, \infty_2}$ stands for the quaternion algebra over $F$ which ramifies at both archimedean places. The intersection $X_{\tau_1} \cap X_{\tau_2}$ is isomorphic to the special fiber of the discrete Shimura variety $\overline\mathrm{Sh}_{ B_{\infty_1, \infty_2}^\times}(\mathrm{Iw}_p)$, where $(\mathrm{Iw}_p)$ means to take Iwahori level structure at $p$ instead. The two natural embeddings of $X_{\tau_1} \cap X_{\tau_2}$ into $X_{\tau_1}$ and $X_{\tau_2}$ induce two morphisms $\overline\mathrm{Sh}_{ B_{\infty_1, \infty_2}^\times}(\mathrm{Iw}_p) \to \overline\mathrm{Sh}_{ B_{\infty_1, \infty_2}^\times}$; this gives (a certain variant of) the Hecke correspondence at $p$ (see Theorem~\ref{T:link and Hecke operator}). We remark here that it was first proved in \cite{bachmat-goren} that the one-dimensional strata are disjoint unions of $\mathbb{P}^1$'s, and the number of such $\mathbb{P}^1$'s is also computed in \cite{bachmat-goren}. This computation relies on intersection theory and does not provide a natural parametrization as we gave above. Our proof will be completely different from theirs. One can easily recover their counting from our natural parametrization. \subsubsection{Quaternionic Shimura varieties} Before proceeding, we clarify our convention on quaternionic Shimura varieties. For $\mathtt{S}$ an even subset of archimedean and $p$-adic places of $F$, we use $B_\mathtt{S}$ to denote the quaternion algebra over $F$ which ramifies exactly at the places in $\mathtt{S}$. We fix an identification $B_\mathtt{S}^\times(\mathbb{A}^{\infty, p}) \cong \mathrm{GL}_2(\mathbb{A}^{\infty, p})$. We fix a maximal open compact subgroup $B_\mathtt{S}^\times(\mathcal{O}_{F,p})$ of $B_\mathtt{S}^\times(F \otimes_\mathbb{Q} \mathbb{Q}_p)$.
We use $\mathtt{S}_\infty$ to denote the subset of archimedean places of $\mathtt{S}$. The Shimura variety $\mathcal{S} h_{B_\mathtt{S}^\times}$ for the algebraic group $\Res_{F/\mathbb{Q}} B_\mathtt{S}^\times$ has complex points \[ \mathcal{S} h_{B_\mathtt{S}^\times}(\mathbb{C}) = B_\mathtt{S}^\times(F) \; \backslash \; \big({\mathfrak{h}^\pm}^{[F:\mathbb{Q}]-\#\mathtt{S}_\infty} \times B_\mathtt{S}^\times(\mathbb{A}_F^{\infty}) \big)\; / \; \big( K^p \times B_\mathtt{S}^\times(\mathcal{O}_{F,p})\big). \] Here and later, the tame level $K^p$ is uniformly matched up for all quaternionic Shimura varieties. Unfortunately, $\mathcal{S} h_{B_\mathtt{S}^\times}$ itself does not possess a moduli interpretation. We follow the construction of Carayol \cite{carayol} to relate it to a unitary Shimura variety $Y$ and ``carry over'' the integral model of $Y$. The assumption $p >2$ comes from the verification of the extension property for the integral canonical model following Moonen \cite[Corollary~3.8]{moonen96}. In any case, we use $\overline\mathrm{Sh}_{B_\mathtt{S}^\times}$ to denote the special fiber of the Shimura variety over $\overline \mathbb{F}_p$. When we take the Iwahori level structure at $p$ instead, we write $\overline\mathrm{Sh}_{B_\mathtt{S}^\times}(\mathrm{Iw}_p)$. \subsubsection{$F$ real quartic and $p$ inert in $F$} Let $\infty_1, \dots, \infty_4$ denote the four real embeddings of $F$, labeled so that the corresponding $p$-adic embeddings $\tau_1, \dots, \tau_4$ satisfy $\sigma \tau_i = \tau_{i+1}$ with the convention that $\tau_i = \tau_{i\,(\mathrm{mod}\; 4)}$. We list the description of the strata as follows.
\begin{tabular}{|c|c|}
\hline
Strata & Description\\
\hline
$X_{\tau_i}$ for each $i$ & $\mathbb{P}^1$-bundle over $\overline\mathrm{Sh}_{B^\times_{\{\infty_{i-1}, \infty_i\}}}$\\
\hline
$X_{\{\tau_{i-1},\tau_i\}}$ for each $i$ & $\overline\mathrm{Sh}_{B^\times_{\{\infty_{i-1}, \infty_i\}}}$\\
\hline
$X_{\{\tau_1, \tau_3\}}$ and $X_{\{\tau_2, \tau_4\}}$ & $(\mathbb{P}^1)^2$-bundle over $\overline\mathrm{Sh}_{B^\times_{\{\infty_1, \infty_2, \infty_3, \infty_4\}}}$\\
\hline
$X_{\mathtt{T}}$ with $\#\mathtt{T} = 3$ & $\mathbb{P}^1$-bundle over $\overline\mathrm{Sh}_{B^\times_{\{\infty_1, \infty_2, \infty_3, \infty_4\}}}$\\
\hline
$X_{\{\tau_1, \tau_2, \tau_3, \tau_4\}}$ & $\overline\mathrm{Sh}_{B^\times_{\{\infty_1, \infty_2, \infty_3, \infty_4\}}}(\mathrm{Iw}_p)$\\
\hline
\end{tabular}

In particular, we point out that for a codimension $2$ stratum, its shape depends on whether the two chosen $\tau_i$'s are adjacent in the cycle $\tau_1 \to \dots \to \tau_4 \to \tau_1$.
\medskip
\subsubsection{$F$ general totally real of degree $g$ over $\mathbb{Q}$ and $p$ inert in $F$}
\label{SS:p inert case}
As before, we label the real embeddings of $F$ by $\infty_1, \dots, \infty_g$ such that the corresponding $p$-adic embeddings $\tau_1, \dots, \tau_g$ satisfy $\sigma \tau_i = \tau_{i+1}$, with the convention that $\tau_i = \tau_{i \,(\mathrm{mod}\; g)}$. The general statement for Goren-Oort strata takes the following form: for a subset $\mathtt{T} \subseteq \Sigma_\infty$, the stratum $X_\mathtt{T}$ is isomorphic to a $(\mathbb{P}^1)^N$-bundle over the special fiber of some quaternionic Shimura variety $\overline \mathrm{Sh}_{B_{\mathtt{S}(\mathtt{T})}^\times}$. We now explain, given $\mathtt{T}$, what $\mathtt{S}(\mathtt{T})$ and $N$ are.
\begin{itemize}
\item When $\mathtt{T} \subsetneq \Sigma_\infty$, we construct $\mathtt{S}(\mathtt{T})$ as follows: if $\tau \notin \mathtt{T}$ and $\sigma^{-1}\tau, \dots, \sigma^{-r}\tau \in \mathtt{T}$ with $r$ maximal (i.e. $\sigma^{-r-1}\tau \notin \mathtt{T}$), we put $\sigma^{-1}\tau, \dots, \sigma^{-2 \lceil r/2 \rceil}\tau$ into $\mathtt{S}(\mathtt{T})$. In other words, we always have $\mathtt{T} \subseteq \mathtt{S}(\mathtt{T})$, and $\mathtt{S}(\mathtt{T})$ contains the additional element $\sigma^{-r-1}\tau$ if and only if the corresponding $r$ is odd. The number $N$ is the cardinality of $\mathtt{S}(\mathtt{T}) - \mathtt{T}$.
\item When $\mathtt{T} = \Sigma_\infty$, $N$ is always $0$; for $\mathtt{S}(\mathtt{T})$, we need to distinguish the parity:
\begin{itemize}
\item if $\#\Sigma_\infty$ is odd, we put $\mathtt{S}(\mathtt{T}) = \Sigma_\infty \cup \{p\}$;
\item if $\#\Sigma_\infty$ is even, we put $\mathtt{S}(\mathtt{T}) = \Sigma_\infty$ and we put an Iwahori level structure at $p$.
\end{itemize}
\end{itemize}
\subsubsection{$F$ general totally real and $p$ unramified in $F$}
\label{S:F general p unramified}
The general principle is: \emph{different places above $p$ work ``independently'' in the recipe describing the strata (e.g. at which places the quaternion algebra is ramified); so we just take the ``product'' of the recipes for the different $p$-adic places.} More concretely, let $p\mathcal{O}_F = \mathfrak{p}_1 \cdots \mathfrak{p}_d$ be the prime ideal factorization. We use $\Sigma_\infty$ to denote the set of all archimedean embeddings of $F$, which is identified with the set of $p$-adic embeddings. We use $\Sigma_{\infty/\mathfrak{p}_i}$ to denote those archimedean embeddings, or equivalently $p$-adic embeddings, that give rise to the $p$-adic place $\mathfrak{p}_i$. Given any subset $\mathtt{T} \subseteq \Sigma_\infty$, we put $\mathtt{T}_{\mathfrak{p}_i} = \mathtt{T} \cap \Sigma_{\infty/\mathfrak{p}_i}$.
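To illustrate the recipe of \ref{SS:p inert case} in a concrete (purely illustrative) case, suppose that $g = 5$, that $p$ is inert in $F$, and that $\mathtt{T} = \{\tau_1, \tau_2, \tau_4\}$. The element $\tau_3 \notin \mathtt{T}$ is preceded by the chain $\sigma^{-1}\tau_3 = \tau_2$, $\sigma^{-2}\tau_3 = \tau_1$ in $\mathtt{T}$ of even length $r = 2$, which contributes $\tau_1, \tau_2$ to $\mathtt{S}(\mathtt{T})$; the element $\tau_5 \notin \mathtt{T}$ is preceded by the chain $\sigma^{-1}\tau_5 = \tau_4$ of odd length $r = 1$, which contributes $\tau_4$ together with the additional element $\sigma^{-2}\tau_5 = \tau_3$. Hence
\[
\mathtt{S}(\mathtt{T}) = \{\tau_1, \tau_2, \tau_3, \tau_4\}, \qquad N = \#\big(\mathtt{S}(\mathtt{T}) - \mathtt{T}\big) = 1,
\]
so $X_\mathtt{T}$ is a $\mathbb{P}^1$-bundle over $\overline\mathrm{Sh}_{B^\times_{\{\infty_1, \infty_2, \infty_3, \infty_4\}}}$. Note that each chain contributes an even number $2\lceil r/2\rceil$ of elements, so $\mathtt{S}(\mathtt{T})$ is automatically an even set, as required for the quaternion algebra $B_{\mathtt{S}(\mathtt{T})}$ to exist.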
Applying the recipe in \ref{SS:p inert case} to each $\mathtt{T}_{\mathfrak{p}_i}$, we get a set of places $\mathtt{S}(\mathtt{T}_{\mathfrak{p}_i})$ and a nonnegative integer $N_{\mathfrak{p}_i}$. We put $\mathtt{S}(\mathtt{T}) = \cup_{i=1}^d \mathtt{S}(\mathtt{T}_{\mathfrak{p}_i})$ and $N = \sum_{i=1}^d N_{\mathfrak{p}_i} = \sum_{i=1}^d \#(\mathtt{S}(\mathtt{T}_{\mathfrak{p}_i}) - \mathtt{T}_{\mathfrak{p}_i})$. Then $X_\mathtt{T}$ is a $(\mathbb{P}^1)^N$-bundle over $\overline\mathrm{Sh}_{B_{\mathtt{S}(\mathtt{T})}^\times}$ (with possibly some Iwahori level structure at appropriate places above $p$). We also prove an analogous result on the global description of the Goren-Oort strata on general quaternionic Shimura varieties (Theorem~\ref{T:main-thm}); we refer to the body of the paper for the statement. The modification needed in the general quaternionic case is that one simply ``ignores'' all ramified archimedean places and applies the above recipe formally to the set obtained from $\Sigma_\infty$ by removing all ramified archimedean places.
\subsection{Method of the proof}
We briefly explain the idea behind the proof. The first step is to translate the question into an analogous question about (the special fiber of) unitary Shimura varieties. We use $X'$ to denote the special fiber of the unitary Shimura variety we start with, over which we have the universal abelian variety $A'$. Similar to the Hilbert case, we have a naturally defined analogous Goren-Oort stratification given by the divisors $X'_\tau$. We consider $X'_\mathtt{T} = \cap_{\tau \in \mathtt{T}}X'_\tau$.
The idea is to prove the following sequence of isomorphisms $X'_\mathtt{T} \xleftarrow{\cong} Y'_\mathtt{T} \xrightarrow{\cong} Z'_\mathtt{T}$, where $Z'_\mathtt{T}$ is the $(\mathbb{P}^1)$-power bundle over the special fiber of another unitary Shimura variety; it comes with a universal abelian variety $B'$; $Y'_\mathtt{T}$ is the moduli space classifying both $A'$ and $B'$ together with a quasi-isogeny $A' \to B'$ of a certain fixed type (with very small degree); the two morphisms are simply the forgetful morphisms. We defer the characterization of the quasi-isogeny to the body of the paper. To prove the two isomorphisms above, we simply check that the natural forgetful morphisms are bijective on closed points and induce isomorphisms on the tangent spaces.
\begin{remark}
We point out that we have been deliberately working with the special fiber over the algebraic closure $\overline \mathbb{F}_p$. This is because the description of the stratification is \emph{not} compatible with the action of the Frobenius. In fact, a more rigorous way to formulate the theorem is to compare the special fiber of the Shimura variety associated to $\mathrm{GL}_2(F) \times_{F^\times} E^\times$ and that associated to $B_{\mathtt{S}(\mathtt{T})}^\times \times_{F^\times} E^\times$. The homomorphisms from the Deligne torus into the two copies of $E^\times$ are in fact different, causing the incompatibility of the Frobenius action. (See Corollary~\ref{C:main-thm-product} for the corresponding statement.) The result about quaternionic Shimura varieties is obtained by comparing geometric connected components of the corresponding Shimura varieties, in which we lose the Frobenius action. See Remark~\ref{R:quaternionic Shimura reciprocity not compatible} for more discussion.
\end{remark}
\subsection{Ampleness of automorphic line bundles}
An immediate application of the study of the global geometry of the Goren-Oort stratification is to give a necessary condition (hopefully also sufficient) for an automorphic line bundle to be ample. As before, we take $F$ to be a totally real field of degree $g$ in which $p$ is inert, for simplicity. Let $X^*_{\mathrm{GL}_2}$ denote the special fiber of the minimal compactification of the Hilbert modular variety. We label all $p$-adic embeddings as $\tau_1, \dots, \tau_g$ with subindices considered modulo $g$, such that $\sigma \tau_i = \tau_{i+1}$. We put $\omega_i = \omega_{\tau_i}$; they form a basis of the group of automorphic line bundles. The class $[\omega_i]$ in $\Pic(X)_\mathbb{Q} := \Pic(X) \otimes_\mathbb{Z} \mathbb{Q}$ of each $\omega_i$ extends to a class in $\Pic(X^*_{\mathrm{GL}_2})_\mathbb{Q}$, still denoted by $[\omega_i]$. For a $g$-tuple $\underline k = (k_1, \dots, k_g) \in \mathbb{Z}^g$, we put $[\omega^{\underline k}] = \sum_{i=1}^g k_i[\omega_i]$. Probably slightly contrary to the common intuition from the case of modular forms, we prove the following.
\begin{theorem}
\label{T:introduction ample}
If the rational class of line bundle $[\omega^{\underline k}]$ is ample, then
\begin{equation}
\label{E:ampleness condition GL2}
p k_i > k_{i-1} \quad \textrm{for all }i; \quad \textrm{(and all }k_i >0).
\end{equation}
\end{theorem}
Here we put the second condition in parentheses because it automatically follows from the first one: iterating the inequalities $k_{i-1} < pk_i$ around the cycle gives $k_i < p^g k_i$, whence $k_i > 0$. This theorem is proved in Theorem~\ref{T:ampleness}. When $F$ is a real quadratic field, Theorem~\ref{T:introduction ample} is proved in \cite[Theorem~8.1.1]{andreatta-goren}. To see that the condition \eqref{E:ampleness condition GL2} is necessary, we simply restrict to each of the GO-strata $X_{\tau_i}$, which is a $\mathbb{P}^1$-bundle as we discussed before.
Along each $\mathbb{P}^1$-fiber, the line bundle $\omega^{\underline k}$ restricts to $\mathcal{O}(pk_i - k_{i-1})$. The necessity of condition~\eqref{E:ampleness condition GL2} is then clear. We do expect the condition in Theorem~\ref{T:introduction ample} to be sufficient as well, but we are not able to prove this due to a combinatorial complication.
\subsection{Forthcoming works in this series}
We briefly advertise the other papers of this series to indicate the potential applications of the technical results in this paper. In the subsequent paper \cite{tian-xiao2}, we discuss an application to the classicality of overconvergent Hilbert modular forms, following the original proof of R. Coleman. In the third paper \cite{tian-xiao3}, we show that certain generalizations of the Goren-Oort strata realize Tate classes of the special fiber of certain Hilbert modular varieties, and hence verify the Tate Conjecture under some mild hypotheses.
\subsection{Structure of the paper}
In Section~\ref{Section:Sh Var}, we review some basic facts about integral models of Shimura varieties, which will be used to relate the quaternionic Shimura varieties with the unitary Shimura varieties. One novelty is that we include a discussion of the ``canonical model'' of certain discrete Shimura varieties; these can be treated uniformly together with the usual Shimura varieties. In Section~\ref{Section:Integral-model}, we construct the integral canonical model for quaternionic Shimura varieties, following Carayol \cite{carayol}. However, we tailor many of the choices (e.g. the auxiliary CM field, signatures) for our later application. In Section~\ref{Section:defn of GOstrata}, we define the Goren-Oort stratification for the unitary Shimura varieties and transfer it to the quaternionic Shimura varieties; this is a straightforward generalization of the work of Goren and Oort \cite{goren-oort}. In Section~\ref{Section:GO-geometry}, we give the global description of the Goren-Oort stratification.
The method is very close to that used in \cite{helm}. In Section~\ref{Section:GO divisors}, we give a more detailed description of the Goren-Oort divisors, including a necessary condition for an automorphic line bundle to be ample, and a structure theorem relating the Goren-Oort stratification along a $\mathbb{P}^1$-bundle morphism provided by Theorem~\ref{T:main-thm-unitary}. In Section~\ref{Section:links}, we further study some structure of the Goren-Oort strata which will play an important role in the forthcoming paper \cite{tian-xiao3}.
\subsection*{Acknowledgements}
We thank Ahmed Abbes, Matthew Emerton, and David Helm for useful discussions. We started working on this project when we were attending a workshop held at the Institute for Advanced Study at the Hong Kong University of Science and Technology in December 2011. The hospitality of the institution and the excellent organization of the workshop provided us with a great environment for brainstorming ideas. We especially thank the organizers Jianshu Li and Shou-wu Zhang, as well as the staff at the IAS of HKUST. We also thank the Fields Institute and the Morningside Center; the authors discussed the project while both attending conferences at these two institutes. The second author is partially supported by a grant from the Simons Foundation \#278433.
\subsection{Notation}\label{S:Notation-for-the-paper}
\subsubsection{}
For a scheme $X$ over a ring $R$ and a ring homomorphism $R \to R'$, we use $X_{R'}$ to denote the base change $X \times_{\Spec R} \Spec R'$. For a field $F$, we use $\Gal_F$ to denote its absolute Galois group. For a number field $F$, we use $\mathbb{A}_F$ to denote its ring of ad\`eles, and $\mathbb{A}_F^\infty$ (resp. $\mathbb{A}_F^{\infty, p}$) to denote its finite ad\`eles (resp. prime-to-$p$ finite ad\`eles). When $F = \mathbb{Q}$, we suppress the subscript $F$ from the notation.
We use the superscript $\mathrm{cl}$ to mean closure in certain topological groups; for example, $F^{\times, \mathrm{cl}}$ means the closure of $F^\times$ inside $\mathbb{A}_F^{\infty, \times}$ or $\mathbb{A}_F^{\infty, p}$ (depending on the situation). We put $\widehat \mathbb{Z} = \prod_l \mathbb{Z}_l$ and $\widehat \mathbb{Z}^{(p)} = \prod_{l \neq p} \mathbb{Z}_l$. For each finite place $\mathfrak{p}$ of $F$, let $F_\mathfrak{p}$ denote the completion of $F$ at $\mathfrak{p}$ and $\mathcal{O}_\mathfrak{p}$ its valuation ring, which has uniformizer $\varpi_\mathfrak{p}$ and residue field $k_\mathfrak{p}$. (When $F_\mathfrak{p}$ is unramified over $\mathbb{Q}_p$, we take $\varpi_\mathfrak{p}$ to be $p$.) We normalize the Artin map $\Art_F: F^\times \backslash \mathbb{A}^\times_F \to \Gal_F^\mathrm{ab}$ so that for each finite prime $\mathfrak{p}$, the local uniformizer at $\mathfrak{p}$ is mapped to a geometric Frobenius at $\mathfrak{p}$.
\subsubsection{}
For $A$ an abelian scheme over a scheme $S$, we denote by $A^{\vee}$ the dual abelian scheme, by $\Lie(A)$ the Lie algebra of $A$, and by $\omega_{A/S}$ the module of \emph{invariant $1$-differential forms} of $A$ relative to $S$. We sometimes omit $S$ from the notation when the base is clear. For a finite $p$-group scheme or a $p$-divisible group $G$ over a perfect field $k$ of characteristic $p$, we use $\mathcal{D}(G)$ to denote its \emph{covariant} Dieudonn\'e module. For an abelian variety $A$ over $k$, we write $\mathcal{D}_A$ for $\mathcal{D}(A[p])$ and write $\tilde \mathcal{D}_A$ for $\mathcal{D}(A[p^\infty])$.
\subsubsection{}
\label{SS:notation-F}
Throughout this paper, we fix a totally real field $F$ of degree $g>1$ over $\mathbb{Q}$. Let $\Sigma$ denote the set of places of $F$, and $\Sigma_\infty$ the subset of archimedean places, or equivalently, all real embeddings of $F$. We fix a prime number $p$ which is unramified in $F$. Let $\Sigma_p$ denote the set of places of $F$ above $p$.
We fix an isomorphism $\iota_p: \mathbb{C} \xrightarrow{\cong} \overline \mathbb{Q}_p$; for $\tau \in \Sigma_\infty$, we write $\tau' := \iota_p \circ \tau$ for the corresponding $p$-adic embedding of $F$. For each $\mathfrak{p} \in \Sigma_p$, we use $\Sigma_{\infty/\mathfrak{p}}$ to denote the subset of $\tau\in \Sigma_{\infty}$ for which $\tau'$ induces the $p$-adic place $\mathfrak{p}$. Since $p$ is unramified, each $\tau'$ induces an embedding $\mathcal{O}_F \hookrightarrow W(\overline \mathbb{F}_p)$. Post-composition with the Frobenius $\sigma$ on the latter induces an action of $\sigma$ on the set of $p$-adic embeddings and hence makes each $\Sigma_{\infty/\mathfrak{p}}$ into one cycle; we use $\sigma \tau$ to denote this action, i.e. $\sigma \circ \tau' = (\sigma \tau)'$.
\subsubsection{}
\label{SS:notation for E}
We will consider a CM extension $E$ over $F$, in which all places above $p$ are unramified. Let $\Sigma_{E, \infty}$ denote the set of complex embeddings of $E$. For $\tau \in \Sigma_\infty$, we often use $\tilde \tau$ to denote a chosen complex embedding of $E$ extending $\tau$; we write $\tilde \tau^c$ for its complex conjugate. Using the isomorphism $\iota_p$ above, we write $\tilde \tau': = \iota_p\circ \tilde \tau$ for the corresponding $p$-adic embedding of $E$; similarly, $\tilde \tau'^c$ denotes the $p$-adic embedding of $E$ corresponding to $\tilde \tau ^c$. Under the natural two-to-one map $\Sigma_{E, \infty} \to \Sigma_\infty$, we use $\Sigma_{E, \infty/\mathfrak{p}}$ to denote the preimage of $\Sigma_{\infty/\mathfrak{p}}$. In the case when $\mathfrak{p}$ splits as $\mathfrak{q} \mathfrak{q}^c$ in $E/F$, we use $\Sigma_{E, \infty/\mathfrak{q}}$ to denote the set of complex embeddings $\tilde \tau$ such that $\iota_p \circ \tilde \tau$ induces the $p$-adic place $\mathfrak{q}$.
Let $\mathrm{Nm}_{B_\mathtt{S}/F}: B_\mathtt{S} \to F$ denote the reduced norm and $\mathrm{Tr}_{B_\mathtt{S}/F}: B_\mathtt{S} \to F$ the reduced trace. We will use the following list of algebraic groups. Let $G_\mathtt{S}$ denote the algebraic group $\Res_{F/\mathbb{Q}}B_\mathtt{S}^\times$. Let $E$ be the CM extension of $F$ as above, and put $T_{E,\tilde \mathtt{S}} =\Res_{E/\mathbb{Q}}\mathbb{G}_m$; see Subsection~\ref{S:CM extension} for the meaning of the subscript $\tilde \mathtt{S}$. We put $\widetilde G_{\tilde \mathtt{S}} = G_\mathtt{S} \times T_{E, \tilde \mathtt{S}}$ and $G''_{\tilde \mathtt{S}} = G_\mathtt{S} \times_Z T_{E,\tilde \mathtt{S}}$, which is the quotient of $\widetilde G_{\tilde \mathtt{S}}$ by the subgroup $Z = \Res_{F/\mathbb{Q}}\mathbb{G}_m$ embedded as $z \mapsto (z, z^{-1})$. Let $G'_{\tilde \mathtt{S}}$ denote the subgroup of $G''_{\tilde \mathtt{S}}$ consisting of elements $(g, e)$ such that $\mathrm{Nm}_{B_\mathtt{S}/F}(g)\cdot \mathrm{Nm}_{E/F}(e) \in \mathbb{G}_m$. We put $\mathtt{S}_{\infty}=\Sigma_{\infty}\cap \mathtt{S}$. For each place $\mathfrak{p}\in \Sigma_p$, we set $\mathtt{S}_{\infty/\mathfrak{p}}=\Sigma_{\infty/\mathfrak{p}}\cap \mathtt{S}$.
\section{Basics of Shimura Varieties}
\label{Section:Sh Var}
We first collect some basic facts on integral canonical models of Shimura varieties. Our main references are \cite{deligne1, deligne2, milne-book, kisin}. (Our convention follows \cite{milne-book, kisin}.) We focus on how to transfer integral canonical models of Shimura varieties from one group to another. This is mostly well known to experts; we include the discussion here for completeness. One novelty of this section, however, is that we give an appropriate definition of the ``canonical model'' for certain discrete Shimura varieties, so that the construction holds uniformly for both regular Shimura varieties and these zero-dimensional ones.
This will be very important for the later application of transferring the description of the Goren-Oort strata between Shimura varieties for different groups.
\begin{notation}
\label{N:condition on G}
Fix a prime number $p$. Fix an isomorphism $\iota: \mathbb{C} \xrightarrow{\cong }\overline \mathbb{Q}_p$. We use $\overline \mathbb{Q}$ to denote the algebraic closure of $\mathbb{Q}$ inside $\mathbb{C}$ (which is then identified with the algebraic closure of $\mathbb{Q}$ inside $\overline \mathbb{Q}_p$ via $\iota$). In this section, let $G$ be a connected reductive group over $\mathbb{Q}$. We use $G(\mathbb{R})^+$ to denote the neutral connected component of $G(\mathbb{R})$. We put $G(\mathbb{Q})^+ = G(\mathbb{R})^+ \cap G(\mathbb{Q})$. We use $G^\mathrm{ad}$ to denote the adjoint group and $G^\mathrm{der}$ its derived subgroup. We use $G(\mathbb{R})_+$ to denote the preimage of $G^\mathrm{ad}(\mathbb{R})^+$ under the natural homomorphism $G(\mathbb{R}) \to G^\mathrm{ad}(\mathbb{R})$; we put $G(\mathbb{Q})_+ = G(\mathbb{R})_+ \cap G(\mathbb{Q})$. For $S$ a torus over $\mathbb{Q}_p$, let $S(\mathbb{Z}_p)$ denote the maximal open compact subgroup of $S(\mathbb{Q}_p)$.
\end{notation}
\subsection{Shimura varieties over $\mathbb{C}$}
\label{A:Shimura varieties}
Put $\mathbb{S} =\Res_{\mathbb{C}/\mathbb{R}}\mathbb{G}_m$. For a real vector space $V$, a \emph{Deligne homomorphism} $h: \mathbb{S}_\mathbb{R} \to \mathrm{GL}(V)$ induces a direct sum decomposition $V_\mathbb{C} = \oplus_{a, b \in \mathbb{Z}} V^{a,b}$ such that $z \in \mathbb{S}(\mathbb{R}) \cong \mathbb{C}^\times$ acts on $V^{a,b}$ via the character $z^{-a} \bar z^{-b}$. Let $r$ denote the $\mathbb{C}$-homomorphism $\mathbb{G}_{m, \mathbb{C}} \to \mathbb{S}_\mathbb{C}$ such that $z^{-a} \bar z^{-b} \circ r = (x \mapsto x^{-a})$.
A \emph{Shimura datum} is a pair $(G, X)$ consisting of a connected reductive group $G$ over $\mathbb{Q}$ and a $G(\mathbb{R})$-conjugacy class $X$ of homomorphisms $h: \Res_{\mathbb{C}/\mathbb{R}}\mathbb{G}_m \to G_\mathbb{R}$ satisfying the following conditions:
\begin{itemize}
\item[(SV1)] for $h \in X$, only the characters $z/\bar z, 1, \bar z / z$ occur in the representation of $\mathbb{S}(\mathbb{R}) \cong \mathbb{C}^\times$ on $\Lie(G^\mathrm{ad})_\mathbb{C}$ via $\mathrm{Ad} \circ h$;
\item[(SV2)] for $h \in X$, $\mathrm{Ad}(h(i))$ is a Cartan involution on $G^\mathrm{ad}_\mathbb{R}$; and
\item[(SV3)] $G^\mathrm{ad}$ has no $\mathbb{Q}$-factor $H$ such that $H(\mathbb{R})$ is compact.
\end{itemize}
The $G(\mathbb{R})$-conjugacy class $X$ of $h$ admits the structure of a complex manifold. Let $X^+$ denote a fixed connected component of $X$. A pair $(G,X)$ satisfying only (SV1), (SV2), and the following condition (SV3)' is called a \emph{weak Shimura datum}.
\begin{itemize}
\item[(SV3)'] $G^\mathrm{ad}(\mathbb{R})$ is compact (and hence connected by \cite[p.277]{borel}; this forces the image of $h$ to land in the center $Z_\mathbb{R}$ of $G_\mathbb{R}$).
\end{itemize}
For an open compact subgroup $K \subseteq G(\mathbb{A}^\infty)$, we define the \emph{Shimura variety} for $(G, X)$ with level $K$ to be the quasi-projective variety $\mathrm{Sh}_K(G,X)_\mathbb{C}$, whose $\mathbb{C}$-points are
\[
\mathrm{Sh}_K(G, X)(\mathbb{C}) := G(\mathbb{Q}) \backslash X \times G(\mathbb{A}^\infty) / K \cong G(\mathbb{Q})_+ \backslash X^+ \times G(\mathbb{A}^\infty) / K.
\]
When $(G,X)$ is a weak Shimura datum, $\mathrm{Sh}_K(G,X)_\mathbb{C}$ is just a finite set of points.
\subsection{Reflex field}
Let $(G,X)$ be a (weak) Shimura datum. The \emph{reflex field}, denoted by $E= E(G, X)$, is the field of definition of the conjugacy class of the composition $h \circ r: \mathbb{G}_{m, \mathbb{C}} \to \mathbb{S}_\mathbb{C} \to G_\mathbb{C}$.
It is a subfield of $\mathbb{C}$, finite over $\mathbb{Q}$. We refer to \cite{deligne1} for the definition of the canonical model $\mathrm{Sh}_K(G,X)$ of $\mathrm{Sh}_K(G,X)_\mathbb{C}$ over this reflex field $E$. We assume from now on that all (weak) Shimura varieties we consider in this section admit canonical models. (In fact, {\it loc. cit.} excludes the case when $(G,X)$ is a weak Shimura datum; we will give the meaning of the canonical model in this case in Subsection~\ref{S:integral model weak Shimura datum} later.) We will always assume that $K$ is the product $K^p K_p$ of an open compact subgroup $K^p$ of $G(\mathbb{A}^{\infty, p})$ and an open compact subgroup $K_p$ of $G(\mathbb{Q}_p)$. Taking the inverse limit over the open compact subgroups $K^p$, we have $\mathrm{Sh}_{K_p}(G, X) := \varprojlim_{K^p} \mathrm{Sh}_{K^pK_p}(G, X)$. This is actually a scheme locally of finite type over $E$ carrying a natural (right) action of $G(\mathbb{A}^{\infty,p})$.
\subsection{Extension property}
\label{S:extension property}
Let $\mathcal{O}$ be the ring of integers in a finite extension of $\mathbb{Q}_p$. A scheme $X$ over $\mathcal{O}$ is said to have the \emph{extension property} over $\mathcal{O}$ if for any smooth $\mathcal{O}$-scheme $S$, any map $S \otimes \mathrm{Frac}(\mathcal{O}) \to X$ extends to $S$. (Such an extension is automatically unique, if it exists, by the normality of $S$.) Note that this condition is weaker than the one given in \cite[2.3.7]{kisin} but is enough to ensure the uniqueness. The chosen isomorphism $\mathbb{C} \cong \overline \mathbb{Q}_p$ identifies $E$ as a subfield of $\overline \mathbb{Q}_p$; let $E_\wp$ denote the $p$-adic completion of $E$, and $\mathcal{O}_\wp$ its valuation ring with residue field $\mathbb{F}_\wp$. Let $E_\wp^\mathrm{ur}$ be the maximal unramified extension of $E_\wp$ and $\mathcal{O}_\wp^\mathrm{ur}$ its valuation ring.
An \emph{integral canonical model} $\mathrm{Sh}_{K_p}(G, X)_{\mathcal{O}_\wp}$ of $\mathrm{Sh}_{K_p}(G,X)$ over $\mathcal{O}_{\wp}$ is an $\mathcal{O}_{\wp}$-scheme $\mathrm{Sh}_{K_p}(G,X)_{\mathcal{O}_\wp}$, which is an inverse limit of smooth $\mathcal{O}_{\wp}$-schemes $\mathrm{Sh}_{K_{p}K^p}(G,X)_{\mathcal{O}_{\wp}}$ with finite \'etale transition maps as $K^p$ varies, such that
\begin{itemize}
\item there is an isomorphism $\mathrm{Sh}_K(G,X)_{\mathcal{O}_\wp} \otimes_{\mathcal{O}_\wp} E_\wp \cong \mathrm{Sh}_K(G,X) \otimes_{E} E_\wp$ for each $K$, and
\item $\mathrm{Sh}_{K_p}(G,X)_{\mathcal{O}_\wp}=\varprojlim_{K^p}\mathrm{Sh}_{K^pK_p}(G,X)_{\mathcal{O}_{\wp}}$ satisfies the extension property.
\end{itemize}
\vspace{10pt}
The existence of integral canonical models of Shimura varieties of abelian type with hyperspecial level structure was proved by Kisin \cite{kisin}. Unfortunately, our application requires, in some special cases, certain non-hyperspecial level structures, as well as certain ramified groups. We have to establish the integral canonical model in two steps: we first prove the existence for some group $G'$ with the same derived and adjoint groups as $G$ (as is done in Section~\ref{Section:Integral-model}); we then reproduce a variant of an argument of Deligne to show that the integral canonical model of the Shimura variety for $G'$ gives that for $G$. The second step is well known, at least for regular Shimura varieties when $K_p$ is hyperspecial (\cite{kisin}); our limited contribution here is to include some non-hyperspecial cases and to cover the case of discrete Shimura varieties, in a uniform way.
\begin{hypo}
\label{H:hypo on G}
Let $(G,X)$ be a (weak) Shimura datum. From now on, we assume that the derived subgroup $G^\mathrm{der}$ is simply-connected, which will be the case when we apply the theory later. Let $Z$ denote the center of $G$. Let $\nu: G \twoheadrightarrow T$ denote the maximal abelian quotient of $G$.
We fix an open compact subgroup $K_p$ of $G(\mathbb{Q}_p)$ such that $\nu(K_p) = T(\mathbb{Z}_p)$ and $K_p \cap Z(\mathbb{Q}_p) = Z(\mathbb{Z}_p)$.
\end{hypo}
\subsection{Geometric connected components}
\label{A:geometric connected components}
We put $T(\mathbb{R})^\dagger = \mathrm{Im}(Z(\mathbb{R}) \to T(\mathbb{R}))$ and $T(\mathbb{Q})^\dagger = T(\mathbb{R})^\dagger \cap T(\mathbb{Q})$. Put $T(\mathbb{Q})^{?, (p)} = T(\mathbb{Q})^? \cap T(\mathbb{Z}_p)$ for $? = \emptyset$ or $\dagger$. Let $Y$ denote the finite quotient $T(\mathbb{R}) / T(\mathbb{R})^\dagger$, which is isomorphic to $T(\mathbb{Q}) / T(\mathbb{Q})^\dagger$ because $T(\mathbb{Q})$ is dense in $T(\mathbb{R})$. The morphism $\nu: G \to T$ then induces a natural map
\begin{align}
\label{E:nu map}
& \xymatrix@C=15pt{\nu\colon G(\mathbb{Q})_+ \backslash X^+ \times G(\mathbb{A}^\infty) / K \ar[r]^-\nu & T(\mathbb{Q})^\dagger \backslash T(\mathbb{A}^\infty) / \nu(K) \cong T(\mathbb{Q})^{\dagger, (p)} \backslash T(\mathbb{A}^{\infty, p}) / \nu(K^p) }\\
\nonumber
&\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\qquad\cong T(\mathbb{Q})^{(p)}\backslash Y \times T(\mathbb{A}^{\infty, p}) / \nu (K^p).
\end{align}
If $(G, X)$ is a Shimura datum, this map induces a bijection \cite[Theorem~5.17]{milne-book} on the sets of geometric connected components $\pi_0\big( \mathrm{Sh}_K(G, X)_{\overline \mathbb{Q}} \big)\cong T(\mathbb{Q})^{(p)}\backslash Y \times T(\mathbb{A}^{\infty, p}) / \nu (K^p)$. Taking the inverse limit gives a bijection
\[
\pi_0(\mathrm{Sh}_{K_p}(G, X)_{\overline \mathbb{Q}}) \cong T(\mathbb{Q})^{\dagger, (p), \mathrm{cl}} \backslash T(\mathbb{A}^{\infty,p}) \cong T(\mathbb{Q})^{(p), \mathrm{cl}} \backslash Y \times T(\mathbb{A}^{\infty,p}).
\]
Here and later, the superscript $\mathrm{cl}$ means taking the closure in $T(\mathbb{A}^{\infty,p})$ (or in appropriate topological groups).
\subsection{Reciprocity law}
\label{A:reciprocity law}
Let $(G,X)$ be a (weak) Shimura datum.
The composite
\[
\xymatrix{ \nu hr: \mathbb{G}_{m, \mathbb{C}} \ar[r]^-{r} & \mathbb{S}_\mathbb{C} \ar[r]^-{h} & G_\mathbb{C} \ar[r]^-{\nu} & T_\mathbb{C} }
\]
does not depend on the choice of $h$ (in the conjugacy class) and is defined over the reflex field $E$. The \emph{Shimura reciprocity map} is given by
\[
\Rec(G, X):\ \Res_{E/\mathbb{Q}}(\mathbb{G}_m) \xrightarrow{\Res_{E/\mathbb{Q}}(\nu hr)} T_E \xrightarrow{N_{E/\mathbb{Q}}} T.
\]
We normalize the Artin reciprocity map $\Art_E: \mathbb{A}^\times_E / (E^{\times}E_\mathbb{R}^{\times, +})^\mathrm{cl} \xrightarrow{\cong} \Gal_E^\mathrm{ab}$ so that a local uniformizer at a finite place $\mathfrak{l}$ is mapped to a geometric Frobenius at $\mathfrak{l}$, where $E_\mathbb{R}^{\times, +}$ is the identity connected component of $E_\mathbb{R}^\times$. We denote the unramified Artin map at $\wp$ by $\Art_\wp: E_\wp^\times / \mathcal{O}_\wp^\times \to \Gal_{E_\wp}^{\mathrm{ab}, \mathrm{ur}}$ (again normalized so that a uniformizer is mapped to the geometric Frobenius). The morphism $\Rec(G, X)$ induces a natural homomorphism
\[
\xymatrix@C=10pt{\mathfrak{R}\mathrm{ec}= \mathfrak{R}\mathrm{ec}(G, X): \ \Gal_E^{\mathrm{ab}} && \ar[ll]_-{\Art_E}^-\cong (E^\times E_\mathbb{R}^{\times, +})^\mathrm{cl} \backslash \mathbb{A}^\times_E\ar[rrr]^-{\Rec(G,X)} &&& T(\mathbb{Q})^\mathrm{cl} \backslash Y \times T(\mathbb{A}^{\infty}). }
\]
When $(G,X)$ is a Shimura datum, the Shimura reciprocity law \cite[Section~13]{milne-book} says that the action of $\sigma \in \Gal_E$ on $\pi_0(\mathrm{Sh}_{K_p}(G,X)_{\overline \mathbb{Q}}) \cong T(\mathbb{Q})^\mathrm{cl} \backslash Y \times T(\mathbb{A}^\infty) / T(\mathbb{Z}_p)$ is given by multiplication by $\mathfrak{R}\mathrm{ec}(G,X)(\sigma)$. As a corollary, $\pi_0(\mathrm{Sh}_{K_p}(G,X)_{\overline \mathbb{Q}}) = \pi_0(\mathrm{Sh}_{K_p}(G,X)_{E_\wp^\mathrm{ur}})$, i.e. the geometric connected components are already defined over an unramified extension of $E_\wp$.
The action of $\Gal_{E_\wp}^{\mathrm{ab}, \mathrm{ur}} = \Gal_{\mathbb{F}_\wp}$ on the geometric connected component is then given by multiplication by the image of the Galois group element under the following map: \begin{equation} \label{E:reciprocity-at-p} \xymatrix@C=10pt{ \mathfrak{R}\mathrm{ec}_\wp = \mathfrak{R}\mathrm{ec}_\wp(G, X): \ \Gal_{\mathbb{F}_\wp}&& \ar[ll]_-{\Art_\wp}^-\cong \widehat{E_\wp^\times / \mathcal{O}_\wp^\times} \to (E^\times E_\mathbb{R}^{\times, +})^\mathrm{cl} \backslash \mathbb{A}^\times_E / \mathcal{O}_\wp^\times \ar[rrr]^-{\Rec(G,X)} &&& T(\mathbb{Q})^{(p), \mathrm{cl}} \backslash Y \times T(\mathbb{A}^{\infty,p}), } \end{equation} where $\widehat{E_\wp^\times / \mathcal{O}_\wp^\times}$ denotes the profinite completion of $E_\wp^\times / \mathcal{O}_\wp^\times$. \subsection{Integral canonical model for weak Shimura datum} \label{S:integral model weak Shimura datum} When $(G,X)$ is a weak Shimura datum, the associated Shimura variety is, geometrically, a finite set of points; we define its canonical model by specifying the action of $\Gal_E$. The key observation is that condition (SV3)' ensures that the morphism $\Rec(G,X)$ factors as \[ \xymatrix@C=50pt{ \Res_{E/\mathbb{Q}}(\mathbb{G}_m) \ar[r]^-{\Res_{E/\mathbb{Q}}(hr)} \ar@{-->}[dr]_{\Rec_Z(G,X)} & Z_E \ar[r]^{\Res_{E/\mathbb{Q}}(\nu)} \ar[d]^{N_{E/\mathbb{Q}}} & T_E \ar[d]^{N_{E/\mathbb{Q}}} \\ & Z \ar[r]^{\nu} & T. } \] We consider the natural homomorphism \[ \xymatrix@C=10pt{\mathfrak{R}\mathrm{ec}_Z: \ \Gal_E^{\mathrm{ab}} && \ar[ll]_-{\Art_E}^-\cong (E^\times E_\mathbb{R}^{\times, +})^\mathrm{cl} \backslash \mathbb{A}^\times_E\ar[rrr]^-{\Rec_Z(G,X)} &&& Z(\mathbb{Q})^\mathrm{cl} Z(\mathbb{R}) \backslash Z(\mathbb{A}) \cong Z(\mathbb{Q})^\mathrm{cl} \backslash Z(\mathbb{A}^{\infty}). 
} \] We define the \emph{canonical model} $\mathrm{Sh}_K(G,X)$ to be the (pro-)$E$-scheme whose base change to $\mathbb{C}$ is isomorphic to $\mathrm{Sh}_K(G,X)_\mathbb{C}$, such that every $\sigma \in \Gal_E$ acts on its $\overline \mathbb{Q}$-points by multiplication by $\mathfrak{R}\mathrm{ec}_Z(\sigma)$. In comparison to Subsection~\ref{A:reciprocity law}, we have $\nu(\sigma(x)) = \mathfrak{R}\mathrm{ec}(G,X)(\sigma)\cdot \nu( x)$ for any $x \in \mathrm{Sh}_K(G,X)(\overline \mathbb{Q})$. Since $\mathrm{Sh}_K(G,X)$ is just a finite union of spectra of some finite extensions of $E$, it naturally admits an integral canonical model over $\mathcal{O}_\wp$ by taking the corresponding valuation rings. With the map $\mathfrak{R}\mathrm{ec}_\wp$ as defined in \eqref{E:reciprocity-at-p}, we have $\nu(\sigma(x)) = \mathfrak{R}\mathrm{ec}_\wp(G,X)(\sigma)\cdot \nu(x)$ for any closed point $x \in \mathrm{Sh}_K(G,X)_{\mathcal{O}_\wp}$ and $\sigma \in \Gal_{\mathbb{F}_\wp}$. \begin{notation} \label{N:hypo on G} We put $K^\mathrm{der}_p = K_p \cap G^\mathrm{der}(\mathbb{Q}_p)$; let $K^\mathrm{ad}_p$ denote the image of $K_p$ in $G^\mathrm{ad}(\mathbb{Q}_p)$. Set $G^?(\mathbb{Q})^{(p)} = G^?(\mathbb{Q}) \cap K^?_p$ for $? = \emptyset, \mathrm{ad}$ and $\mathrm{der}$; they are the subgroups of $p$-integral elements. Put $G^\mathrm{ad}(\mathbb{Q})^{+, (p)} = G^\mathrm{ad}(\mathbb{R})^+ \cap G^\mathrm{ad}(\mathbb{Q})^{(p)}$ and $G^?(\mathbb{Q})_+^{(p)} = G^?(\mathbb{R})_+ \cap G^?(\mathbb{Q})^{(p)}$ for $? = \emptyset$ or $\mathrm{der}$. \end{notation} \subsection{A group theoretic construction} Before proceeding, we recall a purely group theoretic construction. See \cite[\S~2.0.1]{deligne2} for more details. Let $H$ be a group equipped with an action $r$ of a group $\Delta$, and $\Gamma \subset H$ a $\Delta$-stable subgroup. 
Suppose we are given a $\Delta$-equivariant map $\varphi: \Gamma \to \Delta$, where $\Delta$ acts on itself by inner automorphisms, and suppose that for $\gamma \in \Gamma$, $\varphi(\gamma)$ acts on $H$ as inner conjugation by $\gamma$. Given the data above, we can first define the semidirect product $H \rtimes \Delta$ using the action $r$. The conditions above imply that the natural map $\gamma \mapsto (\gamma, \varphi(\gamma)^{-1})$ embeds $\Gamma$ as a normal subgroup of $H \rtimes \Delta$. We define the \emph{star extension} $H \ast_\Gamma \Delta$ to be the quotient of $H \rtimes \Delta$ by this subgroup. Two typical examples we will encounter later are \[ G^\mathrm{der}(\mathbb{A}^{\infty, p}) \ast _{G^\mathrm{der}(\mathbb{Q})^{(p)}} G(\mathbb{Q})^{(p)} \cong G^\mathrm{der}(\mathbb{A}^{\infty, p}) \cdot G(\mathbb{Q})^{(p)} \quad \textrm{and} \quad G^\mathrm{der}(\mathbb{A}^{\infty,p}) \ast _{G^\mathrm{der}(\mathbb{Q})^{(p)}} G^\mathrm{ad}(\mathbb{Q})^{(p)}. \] \subsection{The connected components of the integral model} \label{A:connected integral model} Let $(G,X)$ be a (weak) Shimura datum. Suppose that there exists an integral canonical model $\mathrm{Sh}_{K_p}(G,X)_{\mathcal{O}_\wp}$. For $K^p$ an open compact subgroup of $G(\mathbb{A}^{\infty, p})$, let $\mathrm{Sh}_{K^pK_p}(G, X)^\circ_{\mathcal{O}_\wp^\mathrm{ur}}$ denote the open and closed subscheme whose $\mathbb{C}$-points consist of the preimage of $\{1\}$ under the $\nu$-map in \eqref{E:nu map}. When $(G,X)$ is a Shimura datum, this gives a connected component of $\mathrm{Sh}_{K^pK_p}(G,X)_{\mathcal{O}_\wp^\mathrm{ur}}$. 
We put \begin{equation} \label{E:connected components of Shimura varieties} \mathrm{Sh}_{K_p}(G, X)^\circ_{\mathcal{O}_\wp^\mathrm{ur}} = \varprojlim_{K^p} \mathrm{Sh}_{K^pK_p}(G,X)^\circ_{\mathcal{O}_\wp^\mathrm{ur}} \quad \textrm{and} \quad \mathrm{Sh}_{K_p}(G, X)^\circ_{\overline \mathbb{F}_\wp} = \mathrm{Sh}_{K_p}(G, X)^\circ_{\mathcal{O}_\wp^\mathrm{ur}} \otimes_{\mathcal{O}_\wp^\mathrm{ur}} \overline \mathbb{F}_\wp. \end{equation} Note that the set of $\mathbb{C}$-points of $\mathrm{Sh}_{K_p}(G, X)^\circ_{\mathcal{O}_\wp^\mathrm{ur}}$ is nothing but $G^\mathrm{der}(\mathbb{Q})^{(p), \mathrm{cl}}_+ \backslash (X^+ \times G^\mathrm{der}(\mathbb{A}^{\infty, p}))$; when $(G,X)$ is a Shimura datum, strong approximation shows that this is a projective limit of connected complex manifolds. In any case, this implies that $\mathrm{Sh}_{K_p}(G,X)^\circ_{\mathcal{O}_\wp^\mathrm{ur}}$ depends only on $X$, the groups $G^\mathrm{der}$ and $G^\mathrm{ad}$ (as opposed to the group $G$), and the subgroups $K_p^\mathrm{der}$ and $K_p^\mathrm{ad}$ (as opposed to $K_p$). We also point out that \eqref{E:nu map} gives rise to a natural map \begin{equation} \label{E:nu-map on Sh var} \nu\colon \pi_0(\mathrm{Sh}_{K_p}(G, X)_{\mathcal{O}_\wp^\mathrm{ur}}) = \pi_0(\mathrm{Sh}_{K_p}(G, X)_{\overline \mathbb{F}_\wp}) = \pi_0(\mathrm{Sh}_{K_p}(G,X)_{\overline \mathbb{Q}}) \longrightarrow T(\mathbb{Q})^{(p), \mathrm{cl}} \backslash Y \times T(\mathbb{A}^{\infty, p}). \end{equation} By abuse of language, we call \eqref{E:connected components of Shimura varieties} the (geometric) connected components of the Shimura varieties, and the target of \eqref{E:nu-map on Sh var} the set of connected components (although this is not the case if $(G,X)$ is a weak Shimura datum). The Shimura varieties $\mathrm{Sh}_{K_p}(G,X)_{{\boldsymbol ?}}$ for ${\boldsymbol ?} = \mathcal{O}_\wp^\mathrm{ur}$ and $\overline \mathbb{F}_\wp$ admit the following actions. 
\begin{enumerate} \item The natural right action of $G(\mathbb{A}^{\infty, p})$ on $\mathrm{Sh}_{K_p}(G,X)_{\overline \mathbb{Q}}$ extends to a right action on $\mathrm{Sh}_{K_p}(G,X)_{\boldsymbol ?}$. The subgroup $Z(\mathbb{Q})^{(p)} := Z(\mathbb{Q}) \cap G(\mathbb{Z}_p)$ acts trivially. So the right multiplication action above factors through $ G(\mathbb{A}^{\infty, p})\big/ Z(\mathbb{Q})^{(p), \mathrm{cl}}$. The induced action on the set of connected components is given by $\nu: G(\mathbb{A}^{\infty,p})\big/ Z(\mathbb{Q})^{(p),\mathrm{cl}}\to T(\mathbb{Q})^{\dagger,(p),\mathrm{cl}} \backslash T(\mathbb{A}^{\infty,p})$. \item There is a right action $\rho$ of $G^\mathrm{ad}(\mathbb{Q})^{+,(p)}$ on $\mathrm{Sh}_{K_p}(G,X)_{\mathcal{O}_\wp^\mathrm{ur}}$ such that the induced map on $\mathbb{C}$-points is given by, for $g \in G^\mathrm{ad}(\mathbb{Q})^{+,(p)}$, \[ \xymatrix@C=40pt@R=0pt{ \rho(g)\colon G(\mathbb{Q})_+^\mathrm{cl} \backslash X^+ \times G(\mathbb{A}^\infty) / K_p \ar[r] & G(\mathbb{Q})_+^\mathrm{cl} \backslash X^+ \times G(\mathbb{A}^\infty) /K_p \\ [x, a] \ar@{|->}[r] & [g^{-1}x, \mathrm{int}_{g^{-1}}(a)]. } \] Here note that $K_p$ is stable under the conjugation action of $K^\mathrm{ad}_p$ and hence of $G^\mathrm{ad}(\mathbb{Q})^{+,(p)}$. One extends the action $\rho(g)$ to the integral model and hence to the special fiber using the extension property. Moreover, this action preserves the connected component $\mathrm{Sh}_{K_p}(G, X)^\circ_{\boldsymbol ?}$. \item For an element $g \in G(\mathbb{Q})_+^{(p)}$, the two actions above coincide. Putting them together, we have a right action of the group \begin{equation} \label{E:calG} \mathcal{G} := \big(G(\mathbb{A}^{\infty,p}) \big/ Z(\mathbb{Q})^{(p), \mathrm{cl}}\big) \ast_{G(\mathbb{Q})^{(p)}_+/Z(\mathbb{Q})^{(p)}} G^\mathrm{ad}(\mathbb{Q})^{+,(p)} \end{equation} on $\mathrm{Sh}_{K_p}(G, X)_{\boldsymbol ?}$. 
The induced action on the set of connected components is given by \[ \nu \ast \mathrm{triv}: \mathcal{G} \twoheadrightarrow T(\mathbb{Q})^{\dagger,(p),\mathrm{cl}} \backslash T(\mathbb{A}^{\infty,p}), \] i.e., $\nu$ on the first factor and trivial on the second factor. \item The Galois group $\Gal(E_\wp^\mathrm{ur}/E_\wp)$ acts on $\mathrm{Sh}_{K_p}(G,X)_{\boldsymbol ?}$, according to \eqref{E:reciprocity-at-p} (and Subsection~\ref{S:integral model weak Shimura datum}). \end{enumerate} Let $\mathcal{E}_{G, \wp}$ denote the subgroup of $\mathcal{G} \times \Gal(E_\wp^\mathrm{ur} / E_\wp)$ consisting of pairs $(g, \sigma)$ such that $(\nu \ast \mathrm{triv})(g)$ is equal to $\mathfrak{R}\mathrm{ec}_\wp(\sigma)^{-1}$ in $T(\mathbb{Q})^{\dagger,(p),\mathrm{cl}} \backslash T(\mathbb{A}^{\infty,p})$. Then by the discussion above, the group $\mathcal{E}_{G,\wp}$ acts on the connected component $\mathrm{Sh}_{K_p}(G,X)^\circ_{\boldsymbol ?}$. Conversely, knowing $\mathrm{Sh}_{K_p}(G,X)^\circ_{\boldsymbol ?}$ together with the action of $\mathcal{E}_{G,\wp}$, we can recover the integral model $\mathrm{Sh}_{K_p}(G,X)_{\mathcal{O}_\wp}$ of the Shimura variety, or its special fiber $\mathrm{Sh}_{K_p}(G,X)_{\mathbb{F}_\wp}$, as follows. We consider the (pro-)scheme $\mathrm{Sh}_{K_p}(G,X)^\circ_{\mathcal{O}_\wp^\mathrm{ur}} \times_{\mathcal{E}_{G,\wp}} \big(\mathcal{G} \times \Gal(E_\wp^\mathrm{ur} / E_\wp) \big)$. Since this is a projective limit of \emph{quasi-projective} varieties, by Galois descent, it is the base change of a projective system of varieties $\mathrm{Sh}_{K_p}(G,X)_{\mathcal{O}_\wp}$ from $\mathcal{O}_\wp$ to $\mathcal{O}_\wp^\mathrm{ur}$. The same argument applies to the special fiber. In general, for a finite unramified extension $\tilde E_{\tilde \wp}$ of $E_\wp$, we define $\mathcal{E}_{G, \tilde E_{\tilde \wp}}$ to be the subgroup of $\mathcal{E}_{G, \wp}$ consisting of elements whose second coordinate lives in $\Gal(E_\wp^\mathrm{ur}/\tilde E_{\tilde \wp})$. 
Knowing the action of $\mathcal{E}_{G, \tilde E_{\tilde \wp}}$ on $\mathrm{Sh}_{K_p}(G,X)^\circ_{\mathcal{O}_\wp^\mathrm{ur}}$ or $\mathrm{Sh}_{K_p}(G,X)^\circ_{\overline \mathbb{F}_\wp}$ allows one to descend the integral model to $\mathrm{Sh}_{K_p}(G,X)_{\mathcal{O}_{\tilde E_{\tilde \wp}}}$. \subsection{Transferring mathematical objects} \label{S:transfer math obj} One can slightly generalize the discussion above to $\mathcal{E}_{G, \tilde E_{\tilde \wp}}$-equivariant mathematical objects over the Shimura variety. More precisely, for $? = \mathbb{F}_\wp, \mathcal{O}_\wp$, a \emph{mathematical object} $\mathcal{P}$ over $\mathrm{Sh}_{K^p}(G,X)_?$ is the data, for each sufficiently small open compact subgroup $K^p$ of $G(\mathbb{A}^{\infty,p})$, of a (pro-)scheme or a vector bundle (with a section) $\mathcal{P}_{K^p}$ over $\mathrm{Sh}_{K_pK^p}(G,X)_{\boldsymbol ?}$, such that, for any subgroups $K_1^p \subseteq K_2^p$, $\mathcal{P}_{K_1^p}$ is the base change of $\mathcal{P}_{K_2^p}$ along the natural morphism $\mathrm{Sh}_{K_pK_1^p}(G,X)_{\boldsymbol ?} \to \mathrm{Sh}_{K_pK_2^p}(G,X)_{\boldsymbol ?}$. We say $\mathcal{P}$ is \emph{$\mathcal{G} \times \Gal(E_\wp^\mathrm{ur}/\tilde E_{\tilde \wp})$-equivariant} if $\mathcal{P}$ carries an action of $\mathcal{G} \times \Gal(E_\wp^\mathrm{ur}/\tilde E_{\tilde \wp})$ that is compatible with the actions on the Shimura variety. Similarly, a \emph{mathematical object} $\mathcal{P}^\circ$ over $\mathrm{Sh}_{K^p}(G, X)^\circ_{?^\mathrm{ur}}$ is a (pro-)scheme or a vector bundle (with a section) as above, over the connected Shimura variety $\mathrm{Sh}_{K^p}(G, X)^\circ_{?^\mathrm{ur}}$, viewed as a pro-scheme. It is called $\mathcal{E}_{G, \tilde E_{\tilde \wp}}$-equivariant if it carries an action of the group compatible with the natural group action on the base Shimura variety. As in the discussion above, we have the following. 
\begin{cor} \label{C:mathematical objects equivalence} There is a natural equivalence of categories between $\mathcal{G} \times \Gal(E^\mathrm{ur}_\wp / \tilde E_{\tilde \wp})$-equivariant mathematical objects $\mathcal{P}$ over the tower of Shimura varieties $\mathrm{Sh}_{K_pK^p}(G,X)_?$, and the category of mathematical objects $\mathcal{P}^\circ$ over $\mathrm{Sh}_{K^p}(G,X)^\circ_{?^\mathrm{ur}}$, equivariant for the action of $\mathcal{E}_{G, \tilde E_{\tilde \wp}}$. \end{cor} \begin{proof} As above, given $\mathcal{P}$, we can recover $\mathcal{P}^\circ$ by taking inverse limit with respect to the open compact subgroup $K^p$ and then restricting to the connected component $\mathrm{Sh}_{K_p}(G,X)^\circ_{?^\mathrm{ur}}$. Conversely, we can recover $\mathcal{P}$ from $\mathcal{P}^\circ$ through the isomorphism $ \mathcal{P}_{?^\mathrm{ur}} \cong \mathcal{P}^\circ \times_{\mathcal{E}_{G, \tilde E_{\tilde \wp}}} (\mathcal{G} \times \Gal(E_\wp^\mathrm{ur}/\tilde E_{\tilde \wp}))$ and then using Galois descent if needed. \end{proof} \begin{remark} \label{R:no Galois action} If one does not consider the Galois action, then Theorem~\ref{T:structure of calE_G,p} below implies that \[ \mathrm{Sh}_{K_p}(G,X)_{\mathcal{O}_\wp^\mathrm{ur}} \cong \mathrm{Sh}_{K_p}(G,X)_{\mathcal{O}_\wp^\mathrm{ur}}^\circ \times_{\big(G^\mathrm{der}(\mathbb{A}^{\infty,p}) \ast_{G^\mathrm{der}(\mathbb{Q})^{(p)}_+} G^\mathrm{ad}(\mathbb{Q})^{+,(p)} \big)}\mathcal{G}, \] and the same applies to the mathematical objects. \end{remark} \begin{lemma} \label{L:nu(G(Q)->>T(Q)} We have $\nu(G(\mathbb{Q})_+^{(p)}) = T(\mathbb{Q})^{\dagger, (p)}$. \end{lemma} \begin{proof} By Subsection~\ref{A:geometric connected components}, we have $\nu(G(\mathbb{Q})_+) = T(\mathbb{Q})^\dagger$. 
The lemma follows from taking the kernels of the following morphism of exact sequences: \[ \xymatrix{ 1 \ar[r] & G^\mathrm{der}(\mathbb{Q})_+ \ar[r] \ar@{->>}[d] & G(\mathbb{Q})_+ \ar[r] \ar[d] & T(\mathbb{Q})^\dagger \ar[r] \ar[d] & 1\\ 1 \ar[r] & G^\mathrm{der}(\mathbb{Q}_p)/K^\mathrm{der}_p \ar[r] & G(\mathbb{Q}_p)/K_p \ar[r] & T(\mathbb{Q}_p) / T(\mathbb{Z}_p) \ar[r] & 1 } \] Here, the left vertical arrow is surjective by the strong approximation theorem for the simply connected group $G^\mathrm{der}$. The bottom sequence is exact because the corresponding sequences are exact both for $\mathbb{Q}_p$ (because $H^1(\mathbb{Q}_p, G^\mathrm{der})=0$) and for $\mathbb{Z}_p$ (by Hypothesis~\ref{H:hypo on G}). \end{proof} The following structure theorem for $\mathcal{E}_{G, \wp}$ is the key to transferring integral canonical models of Shimura varieties from one group to another. \begin{theorem} \label{T:structure of calE_G,p} For a finite unramified extension $\tilde E_{\tilde \wp}$ of $E_\wp$, we have a natural short exact sequence: \begin{equation} \label{E:structure of E_p} 1 \longrightarrow G^\mathrm{der}(\mathbb{A}^{\infty,p}) \ast_{G^\mathrm{der}(\mathbb{Q})^{(p)}_+} G^\mathrm{ad}(\mathbb{Q})^{+,(p)} \longrightarrow \mathcal{E}_{G, \tilde E_{\tilde \wp}} \longrightarrow \Gal(E_\wp^\mathrm{ur} / \tilde E_{\tilde \wp}) \longrightarrow 1. \end{equation} \end{theorem} \begin{proof} By the definition of $\mathcal{E}_{G,\tilde E_{\tilde \wp}}$, it fits into the following short exact sequence \[ 1 \longrightarrow \Ker\Big(\widetilde G_p \to T(\mathbb{Q})^{\dagger,(p), \mathrm{cl}} \big\backslash T(\mathbb{A}^{\infty,p}) \Big) \longrightarrow \mathcal{E}_{G, \tilde E_{\tilde \wp}} \longrightarrow \Gal(E_\wp^\mathrm{ur} / \tilde E_{\tilde \wp}) \longrightarrow 1. 
\] By Lemma~\ref{L:nu(G(Q)->>T(Q)}, the kernel above is isomorphic to \begin{equation} \label{E:expression of kernel} \big(\, \big(G(\mathbb{Q})_+^{(p)} G^\mathrm{der}(\mathbb{A}^{\infty,p})\big)^\mathrm{cl} \big/ Z(\mathbb{Q})^{(p), \mathrm{cl}} \big) \ast_{G(\mathbb{Q})_+^{(p)} / Z(\mathbb{Q})^{(p)}} G^\mathrm{ad}(\mathbb{Q})^{+,(p)}, \end{equation} where both closures are taken inside $G(\mathbb{A}^{\infty, p})$. We claim that we can remove the two completions. Indeed, put $Z' = Z \cap G^\mathrm{der}$ and $Z'(\mathbb{Q})^{(p)} = Z'(\mathbb{Q}) \cap Z(\mathbb{Q})^{(p)}$; the latter is a finite group. Consider the commutative diagram of exact sequences \begin{small} \[ \xymatrix@R=15pt@C=10pt{ 1 \ar[r] & Z'(\mathbb{Q})^{(p)} \ar[r] \ar@{=}[d] & Z(\mathbb{Q})^{(p)} \times G^\mathrm{der}(\mathbb{A}^{\infty,p}) \ar[d] \ar[r] & G(\mathbb{Q})_+^{(p)} G^\mathrm{der}(\mathbb{A}^{\infty,p}) \ar[d] \ar[r] & T(\mathbb{Q})^{\dagger,(p)} \big/ \mathrm{Im}(Z(\mathbb{Q})^{(p)} \to T(\mathbb{Q})^{(p)}) \ar[r] \ar[d] & 1 \\ 1 \ar[r] & Z'(\mathbb{Q})^{(p), \mathrm{cl}} \ar[r] &{Z(\mathbb{Q})^{(p), \mathrm{cl}}} \times G^\mathrm{der}(\mathbb{A}^{\infty,p}) \ar[r] & \big(G(\mathbb{Q})^{(p)}_+ G^\mathrm{der}(\mathbb{A}^{\infty,p})\big)^\mathrm{cl} \ar[r] & T(\mathbb{Q})^{\dagger, (p), \mathrm{cl}} \big/\mathrm{Im}({Z(\mathbb{Q})^{(p), \mathrm{cl}}} \to {T(\mathbb{Q})^{(p), \mathrm{cl}}}) \ar[r] & 1 } \]\end{small}By diagram chasing, it suffices to prove that the right vertical arrow is an isomorphism. Since the kernel of $Z \to T$ is finite, \cite[\S~2.0.10]{deligne2} implies that $\mathrm{Im}({Z(\mathbb{Q})^\mathrm{cl}} \to {T(\mathbb{Q})^\mathrm{cl}}) \cong (\mathrm{Im}(Z(\mathbb{Q}) \to T(\mathbb{Q})))^\mathrm{cl}$ and the right vertical arrow is an isomorphism. 
Now, the exact sequence \eqref{E:structure of E_p} follows from a series of tautological isomorphisms \begin{align*} &\big( G(\mathbb{Q})_+^{(p)} \cdot G^\mathrm{der}(\mathbb{A}^{\infty,p}) / Z(\mathbb{Q})^{(p)} \big) \ast_{G(\mathbb{Q})^{(p)}_+ / Z(\mathbb{Q})^{(p)}} G^\mathrm{ad}(\mathbb{Q})^{+,(p)} \\ \cong\ & \Big[ \big(G^\mathrm{der}(\mathbb{A}^{\infty,p}) \ast_{G^\mathrm{der}(\mathbb{Q})_+^{(p)}} G(\mathbb{Q})_+^{(p)}\big) / Z(\mathbb{Q})^{(p)} \Big] \ast_{G(\mathbb{Q})_+^{(p)} / Z(\mathbb{Q})^{(p)}} G^\mathrm{ad}(\mathbb{Q})^{+,(p)} \\ \cong\ & \Big[ G^\mathrm{der}(\mathbb{A}^{\infty,p}) \ast_{G^\mathrm{der}(\mathbb{Q})_+^{(p)}} \big( G(\mathbb{Q})_+^{(p)} / Z(\mathbb{Q})^{(p)} \big) \Big] \ast_{G(\mathbb{Q})_+^{(p)} / Z(\mathbb{Q})^{(p)}} G^\mathrm{ad}(\mathbb{Q})^{+,(p)} \\ \cong \ & G^\mathrm{der}(\mathbb{A}^{\infty,p}) \ast_{G^\mathrm{der}(\mathbb{Q})_+^{(p)}} G^\mathrm{ad}(\mathbb{Q})^{+, (p)}. \end{align*} \end{proof} \begin{cor} \label{C:Sh(G)^circ_Zp independent of G} Let $G \to G'$ be a homomorphism of two reductive groups satisfying Hypothesis~\ref{H:hypo on G}, which induces isomorphisms between the derived and adjoint groups as well as their $p$-integral elements. A $G^\mathrm{ad}(\mathbb{R})^+$-conjugacy class $X^+$ of homomorphisms $h: \mathbb{S} \to G_\mathbb{R}$ induces a $G'^\mathrm{ad}(\mathbb{R})^+$-conjugacy class $X'^+$ of homomorphisms $h': \mathbb{S} \to G'_\mathbb{R}$. Put $X = G(\mathbb{R}) \cdot X^+$ and $X' = G'(\mathbb{R}) \cdot X'^+$. 
Then, for any field $\tilde E_{\tilde \wp}$ containing both $E_\wp$ and $E'_{\wp'}$ and unramified over them, there exist a natural isomorphism of groups $\mathcal{E}_{G, \tilde E_{\tilde \wp}} \xrightarrow{ \cong} \mathcal{E}_{G', \tilde E_{\tilde \wp}}$ and a natural isomorphism of geometric connected components of Shimura varieties $\mathrm{Sh}_{K_p}(G, X)_{\tilde E_{\tilde \wp}^\mathrm{ur}}^\circ \cong \mathrm{Sh}_{K'_p}(G', X')_{\tilde E_{\tilde \wp}^\mathrm{ur}}^\circ$ equivariant for the natural actions of the groups. As a corollary, if the Shimura variety for one of $G$ or $G'$ admits an integral canonical model and both $E_\wp$ and $E'_{\wp'}$ are unramified extensions of $\mathbb{Q}_p$, then the other Shimura variety admits an integral canonical model. Moreover, when there are canonical integral models, we have an equivalence of categories between the category of $\mathcal{G} \times \Gal(E_\wp^\mathrm{ur}/\tilde E_{\tilde \wp})$-equivariant mathematical objects $\mathcal{P}$ over the tower of Shimura varieties $\mathrm{Sh}_{K_pK^p}(G,X)_?$ (for $? = \mathcal{O}_{\tilde \wp}$ or $\mathbb{F}_{\tilde \wp}$) and the categories of $\mathcal{G}' \times \Gal(E_\wp^\mathrm{ur}/\tilde E_{\tilde \wp})$-equivariant mathematical objects $\mathcal{P}'$ over the tower of Shimura varieties $\mathrm{Sh}_{K'_pK'^p}(G',X')_{?'}$ (for $?' = \mathcal{O}_{\tilde \wp'}$ or $\mathbb{F}_{\tilde \wp'}$). \end{cor} \begin{proof} The first part follows from Theorem~\ref{T:structure of calE_G,p} and the discussion in Subsection~\ref{A:connected integral model}. For the second part, the existence of integral canonical model over $\tilde E_{\tilde \wp}$ follows from the first part and the discussion at the end of Subsection~\ref{A:connected integral model}. The extension property allows one to further descend the integral canonical model to $\mathcal{O}_\wp$ (or $\mathcal{O}_{\wp'}$). The last part follows from Corollary~\ref{C:mathematical objects equivalence}. 
\end{proof} \section{Integral canonical model of quaternionic Shimura varieties}\label{Section:Integral-model} Classically, the integral model for a quaternionic Shimura variety is defined by passing to a unitary Shimura variety, as is done in the curve case by Carayol \cite{carayol}. As pointed out earlier, we will encounter some groups which are not quasi-split at $p$, so Kisin's general work \cite{kisin} unfortunately does not apply. For completeness, we work out Carayol's construction in this general setting; this will also be useful later when discussing the construction of the Goren-Oort stratification. We tailor the choice of the unitary group to our later application of Helm's isogeny trick; in particular, we will require certain places above $p$ to be inert in the CM extension. \subsection{Quaternionic Shimura varieties} \label{S:quaternionic-shimura-varieties} Recall the notation from Subsection~\ref{SS:notation-F}. Let $\mathtt{S}$ be a subset of places of $F$ of even cardinality. Put $\mathtt{S}_\infty = \mathtt{S} \cap \Sigma_\infty$ and $\mathtt{S}_p = \mathtt{S}\cap \Sigma_p$. Let $B_\mathtt{S}$ be the quaternion algebra over $F$ ramified precisely at $\mathtt{S}$. Let $G_{\mathtt{S}}$ denote the reductive group $\Res_{F/\mathbb{Q}}(B^\times_\mathtt{S})$. Then $G_{\mathtt{S},\mathbb{R}}$ is isomorphic to \[ \prod_{\tau \in \mathtt{S}_\infty} \mathbb{H}^\times \times \prod_{\tau \in\Sigma_\infty -\mathtt{S}_\infty} \mathrm{GL}_{2, \mathbb{R}}. \] We define the \emph{Deligne homomorphism} to be $h_\mathtt{S}: \mathbb{S} \to G_{\mathtt{S},\mathbb{R}}$, sending $z = x +\mathbf{i} y$ to $(z_{G_\mathtt{S}}^\tau)_{\tau \in \Sigma_\infty}$, where $z_{G_\mathtt{S}}^\tau = 1$ if $\tau \in \mathtt{S}_\infty$ and $z_{G_\mathtt{S}}^\tau = \big(\begin{smallmatrix} x&y\\ -y&x \end{smallmatrix}\big)$ if $\tau \in \Sigma_\infty -\mathtt{S}_\infty$. 
Let $\mathfrak{H}_\mathtt{S}$ denote the $G_\mathtt{S}(\mathbb{R})$-conjugacy class of the homomorphism $h_\mathtt{S}$; it is isomorphic to the product of $\#(\Sigma_\infty - \mathtt{S}_\infty)$ copies of $\mathfrak{h}^\pm = \mathbb{P}^1(\mathbb{C}) - \mathbb{P}^1(\mathbb{R})$. We put $\mathfrak{H}_\mathtt{S}^+ = (\mathfrak{h}^+)^{\Sigma_\infty - \mathtt{S}_\infty}$, where $\mathfrak{h}^+$ denotes the upper half plane. We will consider the following type of open compact subgroups of $G_\mathtt{S}(\mathbb{A}^\infty)$: $K = K^pK_p$, where $K^p$ is an open compact subgroup of $B^\times_\mathtt{S}(\mathbb{A}_F^{\infty, p})$ and $K_p = \prod_{\mathfrak{p} \in \Sigma_p} K_\mathfrak{p}$ with $K_\mathfrak{p}$ an open compact subgroup of $B_\mathtt{S}^\times(F_\mathfrak{p})$. From this point onward, we write $\mathrm{Sh}_K(G)$ instead of $\mathrm{Sh}_K(G,X)$ for Shimura varieties when the choice of $X$ is clear. Associated to the data above, there is a Shimura variety $\mathrm{Sh}_K(G_\mathtt{S})$ whose $\mathbb{C}$-points are \[ \mathrm{Sh}_K(G_\mathtt{S})(\mathbb{C}) = G_\mathtt{S}(\mathbb{Q}) \backslash ( \mathfrak{H}_\mathtt{S} \times G_\mathtt{S}(\mathbb{A}^\infty))/ K. \] The \emph{reflex field} $F_\mathtt{S}$ is a subfield of $\mathbb{C}$ characterized as follows: an element $\sigma \in \Aut(\mathbb{C}/\mathbb{Q})$ fixes $F_\mathtt{S}$ if and only if the subset $\mathtt{S}_\infty$ of $\Sigma_\infty $ is preserved under the action of $\sigma$ by post-composition. Following Subsection~\ref{A:Shimura varieties}, we put $\mathrm{Sh}_{K_p}(G_\mathtt{S}) = \varprojlim_{K^p} \mathrm{Sh}_{K^pK_p}(G_\mathtt{S})$. (Note that the level structure at $p$ is fixed in the inverse limit.) Put $T_F = \Res_{F/\mathbb{Q}}\mathbb{G}_m$. The reduced norm on $B_\mathtt{S}$ induces a homomorphism $\mathrm{Nm} = \mathrm{Nm}_{B_\mathtt{S} / F}: G_\mathtt{S} \to T_F$. 
This homomorphism induces a map \[ \pi_0^\mathrm{geom}(\mathrm{Sh}_K(G_\mathtt{S})) \longrightarrow T_F(\mathbb{Q}) \backslash ( T_F(\mathbb{A}^\infty) \times \{\pm 1\}^g) / \mathrm{Nm} (K), \] which is an isomorphism if $\mathtt{S}_\infty \subsetneq \Sigma_\infty$. We will make the Shimura reciprocity law (Subsection~\ref{A:reciprocity law}) explicit for $\mathrm{Sh}_K(G_\mathtt{S})$ later when it is in use. \subsection{Level structure at $p$}\label{S:level-structure-at-p} We fix an isomorphism $\iota_p: \mathbb{C} \simeq \overline \mathbb{Q}_p$. For each $\mathfrak{p} \in \Sigma_p$, let $\Sigma_{\infty/\mathfrak{p}}$ denote the subset of $\Sigma_\infty$ consisting of real embeddings which, when composed with $\iota_p$, induce the $p$-adic place $\mathfrak{p}$. We put $\mathtt{S}_{\infty/\mathfrak{p}} = \mathtt{S} \cap \Sigma_{\infty/\mathfrak{p}}$. Similarly, we can view the reflex field $F_\mathtt{S}$ above as a subfield of $\overline \mathbb{Q}_p$ via $\iota_p$, which induces a $p$-adic place $\wp$ of $F_\mathtt{S}$. We use $\mathcal{O}_\wp$ to denote the valuation ring and $k_\wp$ the residue field. In this paper, we always make the following assumption on $\mathtt{S}$: \begin{hypo} \label{H:B_S-splits-at-p} If $B_\mathtt{S}$ does not split at a $p$-adic place $\mathfrak{p}$ of $F$, then $\mathtt{S}_{\infty/\mathfrak{p}}= \Sigma_{\infty/\mathfrak{p}}$. \end{hypo} For each $\mathfrak{p}\in \Sigma_{p}$, we now specify the level structure $K_{\mathfrak{p}}\subset B_{\mathtt{S}}^\times(F_\mathfrak{p})$ of $\mathrm{Sh}_{K}(G_{\mathtt{S}})$ to be considered in this paper. We distinguish four types of primes $\mathfrak{p} \in \Sigma_p$: \begin{itemize} \item {\bf Types $\alpha$ and $\alpha^\sharp$:} $B_{\mathtt{S}}$ splits at $\mathfrak{p}$ and the cardinality $\# (\Sigma_{\infty/\mathfrak{p}}-\mathtt{S}_{\infty/\mathfrak{p}})$ is even. We fix an identification $B^{\times}_{\mathtt{S}}(F_{\mathfrak{p}})\simeq \mathrm{GL}_2(F_{\mathfrak{p}})$. 
We take $K_{\mathfrak{p}}$ to be \begin{itemize} \item either $\mathrm{GL}_2(\mathcal{O}_{\mathfrak{p}})$, or \item $\mathrm{Iw}_\mathfrak{p} = \big( \begin{smallmatrix} \mathcal{O}_\mathfrak{p} ^\times & \mathcal{O}_\mathfrak{p}\\ \mathfrak{p}\mathcal{O}_\mathfrak{p} & \mathcal{O}_\mathfrak{p}^\times \end{smallmatrix} \big)$, which we allow only when $\Sigma_{\infty/\mathfrak{p}} = \mathtt{S}_{\infty/\mathfrak{p}}$. \end{itemize} We call the former case type $\alpha$ and the latter type $\alpha^\sharp$. (Under our definition, when $\Sigma_{\infty/\mathfrak{p}} = \mathtt{S}_{\infty/\mathfrak{p}}$, the type of $\mathfrak{p}$ depends on the choice of the level structure.) \item {\bf Type $\beta$:} $B_{\mathtt{S}}$ splits at $\mathfrak{p}$ and the cardinality $\#(\Sigma_{\infty/\mathfrak{p}}-\mathtt{S}_{\infty/\mathfrak{p}})$ is odd. We fix an identification $B_{\mathtt{S}}^{\times}(F_{\mathfrak{p}})\simeq \mathrm{GL}_2(F_{\mathfrak{p}})$. We take $K_{\mathfrak{p}}$ to be $\mathrm{GL}_2(\mathcal{O}_{\mathfrak{p}})$. \item {\bf Type $\beta^\sharp$:} $B_\mathtt{S}$ ramifies at $\mathfrak{p}$ and $\mathtt{S}_{\infty/\mathfrak{p}} = \Sigma_{\infty/\mathfrak{p}}$. In this case, $B_{\mathtt{S}}\otimes_{F}F_{\mathfrak{p}}$ is the division quaternion algebra $B_{F_{\mathfrak{p}}}$ over $F_{\mathfrak{p}}$. Let $\mathcal{O}_{B_{F_{\mathfrak{p}}}}$ be the maximal order of $B_{F_{\mathfrak{p}}}$. We take $K_{\mathfrak{p}}$ to be $\mathcal{O}_{B_{F_{\mathfrak{p}}}}^{\times}$. \end{itemize} The aim of this section is to construct an integral canonical model of $\mathrm{Sh}_K(G_{\mathtt{S}})$ over $\mathcal{O}_\wp$ with $K_{p}=\prod_{\mathfrak{p}|p}K_{\mathfrak{p}}$ as specified above. For this, we need to introduce an auxiliary CM extension and a unitary group. 
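To illustrate the four types, consider the following hypothetical configuration (which is not needed in the sequel): $F$ is real quadratic and $p$ is inert in $F$, so that $\Sigma_p = \{\mathfrak{p}\}$ and $\Sigma_{\infty/\mathfrak{p}} = \Sigma_\infty = \{\tau_1, \tau_2\}$. Then:
\begin{itemize}
\item if $\mathtt{S} = \emptyset$, then $\#(\Sigma_{\infty/\mathfrak{p}} - \mathtt{S}_{\infty/\mathfrak{p}}) = 2$ is even, and $\mathfrak{p}$ is of type $\alpha$ with $K_\mathfrak{p} = \mathrm{GL}_2(\mathcal{O}_\mathfrak{p})$ (the Iwahori level is not allowed since $\mathtt{S}_{\infty/\mathfrak{p}} \neq \Sigma_{\infty/\mathfrak{p}}$);
\item if $\mathtt{S} = \{\tau_1, \tau_2\}$, then $\#(\Sigma_{\infty/\mathfrak{p}} - \mathtt{S}_{\infty/\mathfrak{p}}) = 0$ is even and $\mathtt{S}_{\infty/\mathfrak{p}} = \Sigma_{\infty/\mathfrak{p}}$, so $\mathfrak{p}$ may be taken of type $\alpha$ or of type $\alpha^\sharp$, depending on the choice of $K_\mathfrak{p}$;
\item if $\mathtt{S} = \{\tau_1, \mathfrak{q}\}$ for a finite place $\mathfrak{q} \neq \mathfrak{p}$, then $B_\mathtt{S}$ splits at $\mathfrak{p}$ and $\#(\Sigma_{\infty/\mathfrak{p}} - \mathtt{S}_{\infty/\mathfrak{p}}) = 1$ is odd, so $\mathfrak{p}$ is of type $\beta$;
\item if $\mathtt{S} = \{\tau_1, \tau_2, \mathfrak{p}, \mathfrak{q}\}$ for a finite place $\mathfrak{q} \neq \mathfrak{p}$, then $B_\mathtt{S}$ ramifies at $\mathfrak{p}$, Hypothesis~\ref{H:B_S-splits-at-p} holds since $\mathtt{S}_{\infty/\mathfrak{p}} = \Sigma_{\infty/\mathfrak{p}}$, and $\mathfrak{p}$ is of type $\beta^\sharp$ with $K_\mathfrak{p} = \mathcal{O}_{B_{F_\mathfrak{p}}}^\times$.
\end{itemize}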
\subsection{Auxiliary CM extension} \label{S:CM extension} We choose a CM extension $E$ over $F$ such that \begin{itemize} \item every place in $\mathtt{S}$ is inert in $E/F$; and \item a place $\mathfrak{p} \in \Sigma_p$ is split in $E/F$ if it is of type $\alpha$ or $\alpha^\sharp$, and is inert in $E/F$ if it is of type $\beta$ or $\beta^\sharp$. \end{itemize} We remark that our construction differs slightly from \cite{carayol} in that Carayol requires all places above $p$ to split in $E/F$. For later convenience, we fix some totally negative element $\mathfrak{d} \in \mathcal{O}_F$ coprime to $p$ so that $E = F(\sqrt{\mathfrak{d}})$. (The construction will be independent of this choice.) Let $\Sigma_{E, \infty}$ denote the set of complex embeddings of $E$. We have a natural two-to-one map $\Sigma_{E, \infty} \to \Sigma_\infty$. For each $\tau \in \Sigma_\infty$, we often use $\tilde \tau $ to denote a complex embedding of $E$ extending $\tau$; its complex conjugate is denoted by $\tilde \tau^c$. We fix a choice of a subset $\tilde \mathtt{S}_\infty \subseteq \Sigma_{E,\infty}$ which consists, for each $\tau \in \mathtt{S}_\infty$, of exactly one lift $\tilde \tau\in \Sigma_{E,\infty}$ of $\tau$. This choice is equivalent to a collection of numbers $s_{\tilde \tau} \in \{0,1,2\}$ for all $\tilde \tau \in \Sigma_{E, \infty}$ such that \begin{itemize} \item if $\tau \in \Sigma_\infty - \mathtt{S}_\infty$, we have $s_{\tilde \tau} = 1$ for all lifts $\tilde \tau $ of $\tau$; \item if $\tau \in \mathtt{S}_\infty$ and $\tilde \tau$ is the lift in $\tilde \mathtt{S}_\infty$, we have $s_{\tilde \tau} = 0$ and $s_{\tilde \tau^c} = 2$. \end{itemize} We put $\tilde\mathtt{S}=(\mathtt{S},\tilde\mathtt{S}_{\infty})$. 
Consider the torus $T_{E, \tilde \mathtt{S}} = \Res_{E/\mathbb{Q}}\mathbb{G}_m$ together with the following choice of the Deligne homomorphism: \[ \xymatrix@R=0pt@C=50pt{ h_{E, \tilde \mathtt{S}}\colon \mathbb{S}(\mathbb{R}) = \mathbb{C}^\times \ar[r] & T_{E, \tilde \mathtt{S}}(\mathbb{R}) = \bigoplus_{\tau \in \Sigma_\infty} (E \otimes_{F, \tau}\mathbb{R})^\times \simeq \bigoplus_{\tau\in\Sigma_\infty} \mathbb{C}^\times\\ z\ar@{|->}[r] & (z_{E, \tau})_\tau. } \] Here $z_{E, \tau} = 1$ if $\tau \in \Sigma_\infty -\mathtt{S}_\infty$ and $z_{E, \tau} = z$ otherwise, where, in the latter case, the isomorphism $(E \otimes_{F, \tau}\mathbb{R})^\times \simeq \mathbb{C}^\times$ is given by the lift $\tilde \tau \in \tilde \mathtt{S}^c_\infty$. The reflex field $E_{\tilde \mathtt{S}}$ is the subfield of $\mathbb{C}$ corresponding to the subgroup of $\Aut(\mathbb{C}/\mathbb{Q})$ which stabilizes the set $\tilde \mathtt{S}_\infty \subset \Sigma_{E, \infty}$; it contains $F_\mathtt{S}$ as a subfield. The isomorphism $\iota_p: \mathbb{C} \simeq \overline \mathbb{Q}_p$ determines a $p$-adic place $\tilde \wp$ of $E_{\tilde \mathtt{S}}$; we use $\mathcal{O}_{\tilde \wp}$ to denote the valuation ring and $k_{\tilde \wp}$ the residue field. Note that $[k_{\tilde \wp}: \mathbb{F}_p]$ is always even whenever there is a place $\mathfrak{p}\in \Sigma_p$ of type $\beta$. We take the level structure $K_E$ to be $K_E^pK_{E,p}$, where $K_{E, p} = (\mathcal{O}_E \otimes_\mathbb{Z} \mathbb{Z}_p)^\times$, and $K_E^p$ is an open compact subgroup of $\mathbb{A}_E^{\infty,p,\times}$. 
This then gives rise to a Shimura variety $\mathrm{Sh}_{K_E}(T_{E, \tilde \mathtt{S}})$ and its limit $\mathrm{Sh}_{K_{E,p}}(T_{E, \tilde \mathtt{S}}) = \varprojlim_{K_E^p}\mathrm{Sh}_{K_{E,p}K_E^p}(T_{E, \tilde \mathtt{S}})$; they have integral canonical models $\mathbf{Sh}_{K_E}(T_{E, \tilde \mathtt{S}})$ and $\mathbf{Sh}_{K_{E,p}}(T_{E, \tilde \mathtt{S}})$ over $\mathcal{O}_{\tilde \wp}$ as specified in Subsection~\ref{S:integral model weak Shimura datum}. We also consider the product group $ G_\mathtt{S} \times T_{E, \tilde \mathtt{S}}$ with the product Deligne homomorphism \[ \tilde h_{\tilde \mathtt{S}} = h_\mathtt{S} \times h_{E, \tilde \mathtt{S}} \colon \mathbb{S}(\mathbb{R}) = \mathbb{C}^\times \longrightarrow (G_\mathtt{S}\times T_{E, \tilde \mathtt{S}})(\mathbb{R}). \] This gives rise to the product Shimura varieties: \begin{align*} \mathrm{Sh}_{K \times K_E}(G_\mathtt{S} \times T_{E,\tilde \mathtt{S}}) &= \mathrm{Sh}_K(G_\mathtt{S}) \times_{F_{\mathtt{S},\wp}} \mathrm{Sh}_{K_E}(T_{E, \tilde \mathtt{S}});\\ \mathrm{Sh}_{K_p \times K_{E,p}}(G_\mathtt{S} \times T_{E,\tilde \mathtt{S}}) &= \mathrm{Sh}_{K_p}(G_\mathtt{S}) \times_{F_{\mathtt{S},\wp}} \mathrm{Sh}_{K_{E,p}}(T_{E, \tilde \mathtt{S}}) . \end{align*} Let $Z = \Res_{F/\mathbb{Q}}\mathbb{G}_m$ denote the center of $G_\mathtt{S}$. Put $G''_{\tilde \mathtt{S}} = G_\mathtt{S} \times_Z T_{E, \tilde \mathtt{S}}$, which is the quotient of $G_\mathtt{S} \times T_{E, \tilde \mathtt{S}}$ by $Z$ embedded anti-diagonally as $z \mapsto (z, z^{-1})$. The corresponding Deligne homomorphism $h''_{\tilde \mathtt{S}}: \mathbb{S}(\mathbb{R}) \to G''_{\tilde \mathtt{S}}(\mathbb{R})$ is the one induced by $\tilde h_{\tilde \mathtt{S}}$. 
We will consider open compact subgroups $K'' \subseteq G''_{\tilde \mathtt{S}}(\mathbb{A}^\infty)$ of the form $K''^pK''_p$, where $K''^p$ is an open compact subgroup of $G''_{\tilde \mathtt{S}}(\mathbb{A}^{\infty,p})$ and $K''_p $ is an open compact subgroup of $G''_{\tilde \mathtt{S}}(\mathbb{Q}_p)$. Finally, the $G''_{\tilde \mathtt{S}}(\mathbb{R})$-conjugacy class of $h''_{\tilde \mathtt{S}}$ can be canonically identified with $\mathfrak{H}_\mathtt{S}$. We then get the Shimura variety $\mathrm{Sh}_{K''}(G''_{\tilde \mathtt{S}})$ and its limit $\mathrm{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})$ over the reflex field $E_{\tilde \mathtt{S}}$. The set of $\mathbb{C}$-points of $\mathrm{Sh}_{K''}(G''_{\tilde \mathtt{S}})$ is \[ \mathrm{Sh}_{K''}(G''_{\tilde \mathtt{S}})(\mathbb{C}) = G''_{\tilde \mathtt{S}}(\mathbb{Q})\backslash (\mathfrak{H}_\mathtt{S} \times G''_{\tilde \mathtt{S}}(\mathbb{A}^\infty) ) / K''. \] \subsection{Unitary Shimura varieties} \label{S:unitary-shimura} We now introduce the unitary group. Consider the morphism \[ \xymatrix@R=0pt@C=10pt{ \nu = \mathrm{Nm}_{B_\mathtt{S}/F} \times \mathrm{Nm}_{E/F}: & G''_{\tilde \mathtt{S}}=G_\mathtt{S} \times_Z T_{E, \tilde \mathtt{S}} \ar[rr]&& T\\ & (g, z) \ar@{|->}[rr]&& \mathrm{Nm}(g) z\bar z. } \] Viewing $\mathbb{G}_m$ naturally as a subgroup of $T = \Res_{F/\mathbb{Q}}\mathbb{G}_m$, we define $G'_{\tilde \mathtt{S}}$ to be the reductive group $\nu^{-1}(\mathbb{G}_m)$; this will be our auxiliary unitary group, whose associated Shimura variety will provide $\mathrm{Sh}_K(G_\mathtt{S})$ with an integral canonical model. We will occasionally use the algebraic group $G'_{{\tilde \mathtt{S}}, 1} = \Ker \nu$, which we view as a reductive group over $F$. Note that the Deligne homomorphism $h''_{\tilde \mathtt{S}} : \mathbb{S}(\mathbb{R}) \to G''_{\tilde \mathtt{S}}(\mathbb{R})$ factors through a homomorphism $h'_{\tilde \mathtt{S}}: \mathbb{S}(\mathbb{R}) \to G'_{\tilde \mathtt{S}}(\mathbb{R})$. 
The $G'_{\tilde \mathtt{S}}(\mathbb{R})$-conjugacy class of $h'_{\tilde \mathtt{S}}$ is canonically isomorphic to $\mathfrak{H}_\mathtt{S}$. We will consider open compact subgroups of $G'_{\tilde \mathtt{S}}(\mathbb{A}^\infty)$ of the form $K' = K'_pK'^p$, where $K'_p$ is an open compact subgroup of $G'_{\tilde \mathtt{S}}(\mathbb{Q}_p)$ (to be specified later in Subsection~\ref{S:level-structure}) and $K'^p$ is an open compact subgroup of $G'_{\tilde \mathtt{S}}(\mathbb{A}^{\infty, p})$. We will always take $K'^p$ to be sufficiently small so that $K'$ is \emph{neat} and hence the moduli problem we encounter later is representable by a fine moduli space. Given the data above, we have a Shimura variety $\mathrm{Sh}_{K'}(G'_{\tilde \mathtt{S}})$ whose $\mathbb{C}$-points are given by
\[
\mathrm{Sh}_{K'}(G'_{\tilde \mathtt{S}})(\mathbb{C}) = G'_{\tilde \mathtt{S}}(\mathbb{Q}) \backslash (\mathfrak{H}_\mathtt{S} \times G'_{\tilde \mathtt{S}}(\mathbb{A}^\infty) )/ K'.
\]
The Shimura variety $\mathrm{Sh}_{K'}(G'_{\tilde \mathtt{S}})$ is defined over the reflex field $E_{\tilde \mathtt{S}}$. We put $\mathrm{Sh}_{K'_p}(G'_{\tilde \mathtt{S}}) = \varprojlim_{K'^p} \mathrm{Sh}_{K'_pK'^p}(G'_{\tilde \mathtt{S}})$.

The upshot is the following lemma, which verifies the conditions listed in Corollary~\ref{C:Sh(G)^circ_Zp independent of G}. This allows us to transfer the integral canonical models of the unitary Shimura varieties to those of the quaternionic Shimura varieties.
\begin{lemma}
\label{L:compatibility of derived group and adjoint group}
The natural diagram of morphisms of groups
\begin{equation}\label{E:morphism-of-groups}
G_{\mathtt{S}}\leftarrow G_{\mathtt{S}}\times T_{E, \tilde \mathtt{S}}\rightarrow G''_{\tilde \mathtt{S}} = G_{\mathtt{S}}\times_{Z}T_{E, \tilde \mathtt{S}}\leftarrow G'_{\tilde \mathtt{S}}
\end{equation}
\begin{itemize}
\item[(1)] are compatible with the Deligne homomorphisms; and
\item[(2)] induce isomorphisms on their associated derived and adjoint groups.
\end{itemize}
\end{lemma}
\begin{proof}
Straightforward.
\end{proof}

\subsection{PEL Shimura data}\label{S:PEL-Shimura-data}
We put $D_\mathtt{S} = B_\mathtt{S} \otimes_F E$; it is isomorphic to $\mathrm{M}_2(E)$ under Hypothesis~\ref{H:B_S-splits-at-p}. This is a quaternion algebra over $E$ equipped with an involution $l \mapsto \bar l$ given by the tensor product of the natural involution on $B_\mathtt{S}$ and the complex conjugation on $E$. Let $D_\mathtt{S}^\mathrm{sym}$ denote the subset of \emph{symmetric} elements, i.e. those elements $\delta \in D_\mathtt{S}$ such that $\delta = \bar \delta$. For any element $\delta\in (D_{\mathtt{S}}^\mathrm{sym})^{\times}$, we can define a new involution on $D_\mathtt{S}$ given by $l \mapsto l^* = \delta^{-1}\bar l \delta$. In Lemma~\ref{L:property-PEL-data} below, we will specify a convenient choice of such a $\delta$.

Let $V$ be the underlying $\mathbb{Q}$-vector space of $D_\mathtt{S}$, with the natural left $D_\mathtt{S}$-module structure. Define a pairing $\psi_{E}: V\times V\rightarrow E$ on $V$ by
\begin{equation}\label{Equ:pairing-E}
\psi_E(v, w) = \mathrm{Tr}_{D_\mathtt{S}/E}(\sqrt{\mathfrak{d}} \cdot v \delta w^*), \quad \quad v, w \in V.
\end{equation}
It is easy to check that $\psi_E$ is skew-hermitian over $E$ for $*$, i.e. $\overline{\psi_{E}(v,w)}=-\psi_E(w,v)$ and $\psi_E(lv, w) = \psi_E(v, l^*w)$ for $l \in D_\mathtt{S}$ and $v, w \in V$.
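Let us spell out the first identity. The computation assumes, as the notation suggests, that $E = F(\sqrt{\mathfrak{d}})$ so that $\overline{\sqrt{\mathfrak{d}}} = -\sqrt{\mathfrak{d}}$; it further uses $\bar\delta = \delta$, the trace identities $\mathrm{Tr}_{D_\mathtt{S}/E}(xy) = \mathrm{Tr}_{D_\mathtt{S}/E}(yx)$ and $\mathrm{Tr}_{D_\mathtt{S}/E}(\bar x) = \overline{\mathrm{Tr}_{D_\mathtt{S}/E}(x)}$, and the consequences $\overline{w^*} = \delta w \delta^{-1}$ and $\delta v^* = \bar v \delta$ of the definition of $*$:
\[
\overline{\psi_E(v, w)}
= \mathrm{Tr}_{D_\mathtt{S}/E}\big(\overline{\sqrt{\mathfrak{d}} \cdot v \delta w^*}\big)
= -\mathrm{Tr}_{D_\mathtt{S}/E}\big(\sqrt{\mathfrak{d}} \cdot \overline{w^*} \delta \bar v\big)
= -\mathrm{Tr}_{D_\mathtt{S}/E}\big(\sqrt{\mathfrak{d}} \cdot \delta w \bar v\big)
= -\mathrm{Tr}_{D_\mathtt{S}/E}\big(\sqrt{\mathfrak{d}} \cdot w \delta v^*\big)
= -\psi_E(w, v).
\]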
We define the bilinear form
$$\psi=\mathrm{Tr}_{E/\mathbb{Q}}\circ \psi_{E}\colon V\times V\longrightarrow \mathbb{Q},$$
which is skew-symmetric and satisfies $\psi(lv, w)=\psi(v, l^*w)$ for $l\in D_\mathtt{S}$. One checks easily that the subgroup consisting of elements $l \in D^\times_\mathtt{S}$ satisfying $\psi(vl, wl) = c(l)\psi(v, w)$ for some $c(l) \in \mathbb{Q}^\times$ is exactly the subgroup $G'_{\tilde \mathtt{S}} \subset D^\times_\mathtt{S}$. \emph{We make the above right action of $G'_{\tilde \mathtt{S}}$ on $V$ into a left action by taking the inverse action.} Then the group $G'_{\tilde \mathtt{S}}$ is identified with the $D_\mathtt{S}$-linear unitary group of $V$ with similitudes in $\mathbb{Q}^\times$, i.e. for each $\mathbb{Q}$-algebra $R$, we have
\begin{equation}\label{Equ:description-G'}
G'_{\tilde \mathtt{S}}(R)=\{g\in \End_{D_{\mathtt{S}}\otimes_{\mathbb{Q}}R}(V\otimes_{\mathbb{Q}}R)\;|\; \psi(gv, gw)=c(g)\psi(v,w)\; \text{with }c(g)\in R^{\times}\}.
\end{equation}
We describe $D_{\mathtt{S},\mathfrak{p}} = D_\mathtt{S}\otimes_{F}F_{\mathfrak{p}}$ by distinguishing three cases according to the types of $\mathfrak{p}\in \Sigma_{p}$ in Subsection~\ref{S:level-structure-at-p}:
\begin{itemize}
\item {\bf Types $\alpha$ or $\alpha^\sharp$:} In this case, the place $\mathfrak{p}$ splits into two primes $\mathfrak{q}$ and $\bar{\mathfrak{q}}$ in $E$. We have natural isomorphisms $F_\mathfrak{p} \cong E_{\mathfrak{q}} \cong E_{\bar \mathfrak{q}}$. We fix an isomorphism $B_{\mathtt{S}}\otimes_{F}F_{\mathfrak{p}}\simeq \mathrm{M}_2(F_{\mathfrak{p}})$ as above; then $D_{\mathtt{S},\mathfrak{p}} \simeq \mathrm{M}_2(E_{\mathfrak{q}})\oplus \mathrm{M}_2(E_{\bar{\mathfrak{q}}})$. Under these identifications, we put $\mathcal{O}_{B_{\mathtt{S}}, \mathfrak{p}}=\mathrm{M}_2(\mathcal{O}_{\mathfrak{p}})$ and $\mathcal{O}_{D_{\mathtt{S}, \mathfrak{p}}}=\mathrm{M}_2(\mathcal{O}_{\mathfrak{q}})\oplus \mathrm{M}_2(\mathcal{O}_{\bar\mathfrak{q}})$.
\item {\bf Type $\beta$:} In this case, the place $\mathfrak{p}$ is inert in $E/F$; let $\mathfrak{q}$ denote the unique place of $E$ above $\mathfrak{p}$. Using the fixed isomorphism $B_{\mathtt{S}}\otimes_{F}F_{\mathfrak{p}}\simeq \mathrm{M}_2(F_{\mathfrak{p}})$, we have $D_{\mathtt{S},\mathfrak{p}}\simeq \mathrm{M}_2(E_{\mathfrak{q}})$. We put $\mathcal{O}_{B_{\mathtt{S}}, \mathfrak{p}}=\mathrm{M}_2(\mathcal{O}_{\mathfrak{p}})$ and $\mathcal{O}_{D_{\mathtt{S}}, \mathfrak{p}}=\mathrm{M}_2(\mathcal{O}_{\mathfrak{q}})$.

\item {\bf Type $\beta^\sharp$:} Let $\mathfrak{q}$ be the unique place of $E$ above $\mathfrak{p}$. The division quaternion algebra $B_{F_{\mathfrak{p}}}=B_{\mathtt{S}}\otimes_{F}F_{\mathfrak{p}}$ over $F_{\mathfrak{p}}$ is generated over $E_{\mathfrak{q}}$ by an element $\varpi_{B_{F_\mathfrak{p}}}$, subject to the relations $\varpi_{B_{F_\mathfrak{p}}}^2 = p$ and $\varpi_{B_{F_\mathfrak{p}}} a = \bar a \varpi_{B_{F_\mathfrak{p}}}$ for $a \in E_\mathfrak{q}$. We identify $B_{F_\mathfrak{p}} \otimes_{F_\mathfrak{p}} E_\mathfrak{q}$ with $\mathrm{M}_2(E_\mathfrak{q})$ via the map
\begin{equation}
\label{E:involution-on-quaternion-embedding}
(a+b\varpi_{B_{F_\mathfrak{p}}}) \otimes c \longmapsto \big( \begin{smallmatrix}
ac & bc\\ p\bar b c & \bar a c
\end{smallmatrix} \big).
\end{equation}
This also identifies $D_{\mathtt{S} ,\mathfrak{p}}$ with $\mathrm{M}_2(E_\mathfrak{q})$. We put $\mathcal{O}_{B_{\mathtt{S}}, \mathfrak{p}}=\mathcal{O}_{B_{F_\mathfrak{p}}}$, and take $\mathcal{O}_{D_{\mathtt{S}}, \mathfrak{p}}$ to be the preimage of $\mathrm{M}_2(\mathcal{O}_\mathfrak{q})$ in $D_{\mathtt{S}}\otimes_F F_{\mathfrak{p}}$.
\end{itemize}
We put $\mathcal{O}_{D_\mathtt{S}, p} = \prod_{\mathfrak{p} \in \Sigma_p} \mathcal{O}_{D_\mathtt{S}, \mathfrak{p}}$.
\begin{lemma}
\label{L:property-PEL-data}
\begin{itemize}
\item[(1)] We can choose the symmetric element $\delta \in (D_\mathtt{S}^\mathrm{sym})^\times$ above such that
\begin{itemize}
\item $\delta \in \mathcal{O}_{D_\mathtt{S}, \mathfrak{p}}^\times$ for each $\mathfrak{p} \in \Sigma_p$ not of type $\beta^\sharp$, and $\delta \in \big( \begin{smallmatrix}
p^{-1} &0\\0&1
\end{smallmatrix} \big) \mathcal{O}_{D_\mathtt{S}, \mathfrak{p}}^\times$ for each $\mathfrak{p} \in \Sigma_p$ of type $\beta^\sharp$, and
\item the following (symmetric) bilinear form on $V_\mathbb{R}$ is positive definite:
\[
(v, w) \mapsto \psi\big(v, w\cdot h'_{\tilde \mathtt{S}}(\mathbf{i})^{-1} \big).
\]
\end{itemize}
\item[(2)] Through $h'_{\tilde \mathtt{S}}: \mathbb{S}(\mathbb{R}) \to G'_{\tilde \mathtt{S}}(\mathbb{R})$, $h'_{\tilde \mathtt{S}}(\mathbf{i})$ acts on the vector space $V_\mathbb{R}$ and gives it a Hodge structure of type $\{(-1, 0), (0, -1)\}$. For $l \in D_\mathtt{S}$, we have
\[
\mathrm{tr}(l; V_\mathbb{C} / F^0V_\mathbb{C}) = \big( \sum_{\tilde \tau \in \Sigma_{E,\infty}}s_{\tilde \tau} \tilde \tau \big) (\mathrm{Tr}_{D_\mathtt{S}/E}(l)).
\]
The reflex field $E_{\tilde \mathtt{S}}$ is the subfield of $\mathbb{C}$ generated by these traces for all $l \in D_\mathtt{S}$.
\item[(3)] With the choice of $\delta$ in (1), the group $G'_{\tilde \mathtt{S},1}$ is unramified at $\mathfrak{p} \in \Sigma_p$ not of type $\beta^\sharp$ and is ramified at $\mathfrak{p} \in \Sigma_p$ of type $\beta^\sharp$. Moreover, $\mathcal{O}_{D_\mathtt{S}, p}$ is a maximal $*$-invariant lattice of $D_\mathtt{S} \otimes_\mathbb{Q} \mathbb{Q}_p$.
\end{itemize}
\end{lemma}
\begin{proof}
(1) Since $F$ is dense in $F \otimes_\mathbb{Q} \mathbb{Q}_p \oplus F \otimes_\mathbb{Q} \mathbb{R}$, the set of symmetric elements in $V$ is dense in the set of symmetric elements in $V \otimes_\mathbb{Q} \mathbb{Q}_p \oplus V \otimes_\mathbb{Q} \mathbb{R}$.
The conditions at the places above $p$ are clearly open and non-empty; so are the conditions at the archimedean places, as follows from the same arguments as in \cite[2.2.4]{carayol}.

(2) This follows from the same calculation as in \cite[2.3.2]{carayol}.

(3) We first remark that $G'_{\tilde\mathtt{S}, 1, F_\mathfrak{p}}$ does not depend on the choice of $\delta$, and hence we may take a convenient $\delta$ to ease the computation. We discuss each of the types separately.

If $\mathfrak{p}$ is of type $\alpha$ or $\alpha^\sharp$, $G'_{\tilde\mathtt{S},1}(F_\mathfrak{p})$ is isomorphic to the kernel of $\mathrm{GL}_2(F_\mathfrak{p}) \times_{F_\mathfrak{p}^\times}(E_{\mathfrak{q}}^\times \times E_{\bar \mathfrak{q}}^\times) \to F_\mathfrak{p}^\times$ given by $(l, x, y) \mapsto \mathrm{Nm}(l)xy$. Hence $l \mapsto (l, \mathrm{Nm}(l)^{-1}, 1)$ induces an isomorphism $\mathrm{GL}_2(F_\mathfrak{p}) \to G'_{\tilde\mathtt{S},1}(F_\mathfrak{p})$. This group is of course unramified.

If $\mathfrak{p}$ is of type $\beta$, when we identify $D_{\mathtt{S}, \mathfrak{p}}$ with $\mathrm{M}_2(E_\mathfrak{q})$, the involution $l \mapsto \bar l$ is given by $\big(\begin{smallmatrix}
a& b\\ c& d
\end{smallmatrix} \big) \mapsto \big(\begin{smallmatrix}
\bar d& -\bar b\\ -\bar c& \bar a
\end{smallmatrix} \big)$ for $a, b, c, d \in E_\mathfrak{q}$. We take the element $\delta$ to be $\big( \begin{smallmatrix}
0&1\\ 1&0
\end{smallmatrix} \big)$. The Hermitian form on $\mathrm{M}_2(E_\mathfrak{q})$ is then given by
\[
\langle v,w \rangle = \mathrm{Tr}_{\mathrm{M}_2(E_\mathfrak{q}) / E_\mathfrak{q}}(v\bar w\delta) = -a\bar b' + b \bar a' +c \bar d' - d \bar c', \quad v= \big(\begin{smallmatrix}
a& b\\ c& d
\end{smallmatrix} \big)\textrm{ and } w= \big(\begin{smallmatrix}
a'& b'\\ c'& d'
\end{smallmatrix} \big).
\]
One checks easily that $\mathfrak{e} = \big(\begin{smallmatrix}
1 &0 \\0&0
\end{smallmatrix} \big)$ is invariant under the $*$-involution.
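Explicitly, the displayed formula for $l \mapsto \bar l$ gives $\bar{\mathfrak{e}} = \big(\begin{smallmatrix} 0&0\\0&1 \end{smallmatrix}\big)$, and since $\delta^{-1} = \delta$, the invariance is the direct matrix computation
\[
\mathfrak{e}^* = \delta^{-1} \bar{\mathfrak{e}} \delta
= \big(\begin{smallmatrix} 0&1\\ 1&0 \end{smallmatrix}\big)
\big(\begin{smallmatrix} 0&0\\ 0&1 \end{smallmatrix}\big)
\big(\begin{smallmatrix} 0&1\\ 1&0 \end{smallmatrix}\big)
= \big(\begin{smallmatrix} 0&1\\ 0&0 \end{smallmatrix}\big)
\big(\begin{smallmatrix} 0&1\\ 1&0 \end{smallmatrix}\big)
= \big(\begin{smallmatrix} 1&0\\ 0&0 \end{smallmatrix}\big)
= \mathfrak{e}.
\]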
So $D_{\mathtt{S}, \mathfrak{p}}$ is isomorphic to $(\mathfrak{e} D_{\mathtt{S},\mathfrak{p}})^{\oplus 2}$ as a $*$-Hermitian space, and $G'_{\tilde \mathtt{S},1,F_\mathfrak{p}}$ is the unitary group for $\mathfrak{e} D_{\mathtt{S}, \mathfrak{p}}$. It is clear from the expression above that $\mathfrak{e} D_{\mathtt{S}, \mathfrak{p}}$ is a hyperbolic plane (\cite[Example~3.2]{minguez}). Hence $G'_{\tilde \mathtt{S},1, F_\mathfrak{p}}$, being the unitary group of such a Hermitian space, is unramified.

If $\mathfrak{p}$ is of type $\beta^\sharp$, the identification of $D_{\mathtt{S}, \mathfrak{p}}$ with $\mathrm{M}_2(E_\mathfrak{q})$ using \eqref{E:involution-on-quaternion-embedding} implies that the involution $l \mapsto \bar l$ is given by
\[
\big(\begin{smallmatrix}
a& b\\ c& d
\end{smallmatrix} \big) \mapsto \big(\begin{smallmatrix}
\bar a& -\bar c/p\\ -p\bar b& \bar d
\end{smallmatrix} \big) \quad \textrm{ for }a, b, c, d \in E_\mathfrak{q}.
\]
We take the element $\delta$ to be $\big( \begin{smallmatrix}
p^{-1} &0\\0&1
\end{smallmatrix} \big)$. The Hermitian form on $\mathrm{M}_2(E_\mathfrak{q})$ is then given by
\begin{equation}
\label{E:Hermitian-type-gamma}
\langle v,w\rangle = \mathrm{Tr}_{\mathrm{M}_2(E_\mathfrak{q}) / E_\mathfrak{q}}( v\bar w\delta) = a\bar a'/p -b \bar b' - c\bar c'/p + d \bar d', \quad v= \big(\begin{smallmatrix}
a& b\\ c& d
\end{smallmatrix} \big)\textrm{ and } w= \big(\begin{smallmatrix}
a'& b'\\ c'& d'
\end{smallmatrix} \big).
\end{equation}
Similarly to the above, $\mathfrak{e} = \big( \begin{smallmatrix}
1&0\\0&0
\end{smallmatrix} \big)$ is invariant under the $*$-involution, and $D_{\mathtt{S}, \mathfrak{p}}$ is isomorphic to $(\mathfrak{e} D_{\mathtt{S}, \mathfrak{p}})^{\oplus 2}$ as a $*$-Hermitian space. The group $G'_{\tilde \mathtt{S},1,F_\mathfrak{p}}$ is just the unitary group of $\mathfrak{e} D_{\mathtt{S}, \mathfrak{p}}$.
But the Hermitian form there takes the form of $a\bar a'/p - b\bar b'$, which is a typical example of an anisotropic plane (\cite[Example~3.2]{minguez}). So $G'_{\tilde \mathtt{S},1,F_\mathfrak{p}}$ is a ramified unitary group.

To see that $\mathcal{O}_{D_\mathtt{S},p}$ is a maximal $*$-stable lattice, it suffices to prove this for $\mathcal{O}_{D_\mathtt{S}, \mathfrak{p}}$ for each $\mathfrak{p} \in \Sigma_p$. When $\mathfrak{p}$ is of type $\alpha$, $\alpha^\sharp$, or $\beta$, this is immediate. When $\mathfrak{p}$ is of type $\beta^\sharp$, we write $\delta$ as $\big( \begin{smallmatrix}
p^{-1} &0\\0&1
\end{smallmatrix} \big) u$ for $u \in \mathcal{O}_{D_\mathtt{S}, \mathfrak{p}}^\times$. The involution $*$ is given by
\[
\big(\begin{smallmatrix}
a& b\\ c& d
\end{smallmatrix} \big) \mapsto u^{-1}\big( \begin{smallmatrix}
p &0\\0&1
\end{smallmatrix} \big) \big(\begin{smallmatrix}
\bar a& -\bar c/p\\ -p\bar b& \bar d
\end{smallmatrix} \big) \big( \begin{smallmatrix}
p^{-1} &0\\0&1
\end{smallmatrix} \big) u = u^{-1} \big(\begin{smallmatrix}
\bar a& -\bar c\\ -\bar b& \bar d
\end{smallmatrix} \big) u \quad \textrm{ for }a, b, c, d \in E_\mathfrak{q}.
\]
It is then clear that $\mathcal{O}_{D_\mathtt{S},\mathfrak{p}}$ is a maximal $*$-stable lattice.
\end{proof}

\subsection{Level structures at $p$ in the unitary case}
\label{S:level-structure}
We specify our choice of $K'_p$ corresponding to the level structure $K_p=\prod_{\mathfrak{p}|p}K_{\mathfrak{p}}\subset \prod_{\mathfrak{p}|p} (B_{\mathtt{S}}\otimes_{F}F_{\mathfrak{p}})^{\times}$ considered in Subsection \ref{S:level-structure-at-p}.
By \eqref{Equ:description-G'}, giving an element $g_p\in G_{\tilde \mathtt{S}}'(\mathbb{Q}_p)$ is equivalent to giving a tuple $(g_{\mathfrak{p}})_{\mathfrak{p}\in \Sigma_{p}}$ with $g_{\mathfrak{p}}\in \End_{D_{\mathtt{S}}\otimes_{F}F_{\mathfrak{p}}}(V\otimes_{F}F_{\mathfrak{p}})$ such that there exists $\nu(g_p)\in \mathbb{Q}_{p}^{\times}$ independent of $\mathfrak{p}$ satisfying
$$
\psi_{E, \mathfrak{p}}(g_{\mathfrak{p}}v, g_{\mathfrak{p}}w)=\nu(g_p)\psi_{E, \mathfrak{p}}(v,w), \quad \forall v,w \in V\otimes_F F_{\mathfrak{p}},
$$
where $\psi_{E, \mathfrak{p}}$ is the base change of $\psi_E$ to $V\otimes_F F_{\mathfrak{p}}= D_{\mathtt{S},\mathfrak{p}}$. In the following, we will give a chain of lattices $\Lambda_{\mathfrak{p}}^{(1)}\subseteq \Lambda_{\mathfrak{p}}^{(2)}$ in $D_{\mathtt{S},\mathfrak{p}}$ for each $\mathfrak{p}$, and define $K_p'\subseteq G_{\tilde \mathtt{S}}'(\mathbb{Q}_p)$ to be the subgroup consisting of the elements $(g_{\mathfrak{p}})_{\mathfrak{p}\in \Sigma_{p}}$ with $g_{\mathfrak{p}}$ belonging to the stabilizer of $\Lambda^{(1)}_{\mathfrak{p}}\subseteq \Lambda_{\mathfrak{p}}^{(2)}$ and with $\nu(g_p)\in \mathbb{Z}_{p}^{\times}$.
\begin{itemize}
\item When $\mathfrak{p}$ is of type $\alpha$, we take $\Lambda_\mathfrak{p}^{(1)}=\Lambda_{\mathfrak{p}}^{(2)}$ to be $\mathcal{O}_{D_\mathtt{S}, \mathfrak{p}}$.
\item When $\mathfrak{p}$ is of type $\alpha^\sharp$, we take
\[
\Lambda_\mathfrak{p}^{(1)} = \left( \begin{smallmatrix}
\mathfrak{q} & \mathcal{O}_\mathfrak{q} \\ \mathfrak{q} & \mathcal{O}_\mathfrak{q}
\end{smallmatrix} \right)\oplus\left( \begin{smallmatrix}
\mathcal{O}_{\bar \mathfrak{q}} & \mathcal{O}_{\bar \mathfrak{q}} \\ \mathcal{O}_{\bar \mathfrak{q}} & \mathcal{O}_{\bar \mathfrak{q}}
\end{smallmatrix}\right) \quad \textrm{ and } \quad \Lambda_\mathfrak{p}^{(2)} =\left( \begin{smallmatrix}
\mathcal{O}_\mathfrak{q} & \mathcal{O}_\mathfrak{q} \\ \mathcal{O}_\mathfrak{q} & \mathcal{O}_\mathfrak{q}
\end{smallmatrix}\right) \oplus\left( \begin{smallmatrix}
\mathcal{O}_{\bar \mathfrak{q}} & \bar \mathfrak{q}^{-1} \\ \mathcal{O}_{\bar \mathfrak{q}} &\bar \mathfrak{q}^{-1}
\end{smallmatrix}\right).
\]
\item When $\mathfrak{p}$ is of type $\beta$, we take $\Lambda_{\mathfrak{p}}^{(1)} =\Lambda_{\mathfrak{p}}^{(2)} =\mathcal{O}_{D_{\mathtt{S}},\mathfrak{p}}$.
\item When $\mathfrak{p}$ is of type $\beta^\sharp$, we take
\[
\Lambda_\mathfrak{p}^{(1)} = \left( \begin{smallmatrix}
\mathfrak{q} & \mathcal{O}_\mathfrak{q}\\ \mathfrak{q} & \mathcal{O}_\mathfrak{q}
\end{smallmatrix} \right) \subseteq \Lambda_\mathfrak{p}^{(2)} = \left( \begin{smallmatrix}
\mathcal{O}_\mathfrak{q} & \mathcal{O}_\mathfrak{q}\\ \mathcal{O}_\mathfrak{q} & \mathcal{O}_\mathfrak{q}
\end{smallmatrix} \right).
\]
Note that these two lattices are dual to each other under the Hermitian form \eqref{E:Hermitian-type-gamma}.
\end{itemize}
Similarly, we give the level structure at $p$ for the Shimura variety associated to the group $G''_{\tilde \mathtt{S}}$: take $K''_{ p}$ to be the image of $K_p \times K_{E,p} $ under the natural map $(G_\mathtt{S} \times T_{E,\tilde \mathtt{S}})(\mathbb{Q}_p) \to G''_{\tilde \mathtt{S}}(\mathbb{Q}_p)$.
\begin{lemma}
\label{L:compatibility of derived group and adjoint group2}
The Shimura data for $G_\mathtt{S}, G_\mathtt{S} \times T_{E,\tilde \mathtt{S}}, G''_{\tilde \mathtt{S}},$ and $G'_{\tilde \mathtt{S}}$ satisfy Hypothesis~\ref{H:hypo on G}. Moreover, the morphisms of groups in the natural diagram
\begin{equation}\label{E:morphism-of-groups2}
G_{\mathtt{S}}\leftarrow G_{\mathtt{S}}\times T_{E, \tilde \mathtt{S}}\rightarrow G''_{\tilde \mathtt{S}} = G_{\mathtt{S}}\times_{Z}T_{E,\tilde \mathtt{S}}\leftarrow G'_{\tilde \mathtt{S}}
\end{equation}
induce isomorphisms on the $p$-integral points of the derived and adjoint groups.
\end{lemma}
\begin{proof}
This is straightforward from the definitions. In fact, both $K_p^\mathrm{ad} = \prod_{\mathfrak{p} \in \Sigma_p}K_\mathfrak{p}^\mathrm{ad}$ and $K_p^\mathrm{der} = \prod_{\mathfrak{p} \in \Sigma_p} K_\mathfrak{p}^\mathrm{der}$ are products, and we give the description case by case:
\begin{itemize}
\item if $\mathfrak{p}$ is of type $\alpha$ or $\beta$, then $K^\mathrm{der}_\mathfrak{p} = \mathrm{SL}_{2, \mathcal{O}_\mathfrak{p}}$ and $K^\mathrm{ad}_\mathfrak{p} = \mathrm{PGL}_{2, \mathcal{O}_\mathfrak{p}}$;
\item if $\mathfrak{p}$ is of type $\alpha^\sharp$, then $K^\mathrm{der}_\mathfrak{p} = \mathrm{SL}_{2, \mathcal{O}_\mathfrak{p}} \cap \big( \begin{smallmatrix}
\mathcal{O}_\mathfrak{p}^\times & \mathcal{O}_\mathfrak{p}\\ \mathfrak{p} & \mathcal{O}_\mathfrak{p}^\times
\end{smallmatrix} \big)$ and $K^\mathrm{ad}_\mathfrak{p} = \big( \begin{smallmatrix}
\mathcal{O}_\mathfrak{p}^\times & \mathcal{O}_\mathfrak{p}\\ \mathfrak{p} & \mathcal{O}_\mathfrak{p}^\times
\end{smallmatrix} \big) / \mathcal{O}_\mathfrak{p}^\times$; and
\item if $\mathfrak{p}$ is of type $\beta^\sharp$, $K^\mathrm{der}_\mathfrak{p}$ and $K^\mathrm{ad}_\mathfrak{p}$ are the maximal compact open subgroups of $(B_\mathtt{S}^\times)^\mathrm{der}(F_\mathfrak{p})$ and $(B_\mathtt{S}^\times)^\mathrm{ad}(F_\mathfrak{p})$, respectively.
\end{itemize}
\end{proof}

\begin{cor}
\label{C:comparison of shimura varieties}
The natural morphisms between Shimura varieties
\begin{equation}
\label{E:morphisms of Shimura varieties}
\mathrm{Sh}_{K_p}(G_\mathtt{S})\longleftarrow\mathrm{Sh}_{K_p \times K_{E, p}}(G_\mathtt{S} \times T_{E,\tilde \mathtt{S}})
\longrightarrow \mathrm{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})
\longleftarrow\mathrm{Sh}_{K'_p}(G'_{\tilde \mathtt{S}})
\end{equation}
induce isomorphisms on the geometric connected components. Moreover, the groups $\mathcal{E}_{G, \tilde \wp}$ defined in \ref{A:connected integral model} (and made explicit below) are isomorphic for each of the groups, and \eqref{E:morphisms of Shimura varieties} is equivariant for the actions of the groups $\mathcal{E}_{G, \tilde \wp}$ on the geometric connected components. Finally, if one of the Shimura varieties admits an integral canonical model, then so do the others.
\end{cor}
\begin{proof}
This follows from Corollary~\ref{C:Sh(G)^circ_Zp independent of G}, for which the conditions are verified in Lemmas~\ref{L:compatibility of derived group and adjoint group} and \ref{L:compatibility of derived group and adjoint group2}.
\end{proof}

\subsection{Structure groups for connected Shimura varieties}
\label{S:structure group}
In order to apply the machinery developed in Section~\ref{Section:Sh Var}, we now make explicit the structure groups $\mathcal{G}$ in \eqref{E:calG} and $\mathcal{E}_{G, \tilde \wp}$ in Subsection~\ref{A:connected integral model} in the cases of interest to us. We use $\mathcal{G}_\mathtt{S}$ (resp. $\mathcal{G}'_{\tilde \mathtt{S}}$, $\mathcal{G}''_{\tilde \mathtt{S}}$) to denote the group defined in \eqref{E:calG} for $G = G_\mathtt{S}$ (resp. $ G'_{\tilde \mathtt{S}}$, $G''_{\tilde \mathtt{S}}$).

Explicitly, since the center of $G_\mathtt{S}$ is $\Res_{F/\mathbb{Q}}\mathbb{G}_m$, we have $G^\mathrm{ad}_\mathtt{S}(\mathbb{Q}) = B_\mathtt{S}^\times / F^\times$.
Taking the positive and $p$-integral part as in Lemma~\ref{L:nu(G(Q)->>T(Q)}, we have $G_\mathtt{S}^\mathrm{ad}(\mathbb{Q})^{+, (p)} = B_\mathtt{S}^{\times, >0, (p)} / \mathcal{O}_{F, (p)}^\times$, where the superscript $>0$ means taking the elements whose reduced norm is positive for all real embeddings. It follows that $\mathcal{G}_\mathtt{S} = G_\mathtt{S}(\mathbb{A}^{\infty, p}) / \mathcal{O}_{F, (p)}^{\times, \mathrm{cl}}$. The same argument applies to $G''_{\tilde \mathtt{S}}$, whose center is $\Res_{E/\mathbb{Q}}\mathbb{G}_m$, and shows that $\mathcal{G}''_{\tilde \mathtt{S}} = G''_{\tilde \mathtt{S}}(\mathbb{A}^{\infty, p}) \big/ \mathcal{O}_{E,(p)}^{\times, \mathrm{cl}}$.

The determination of $\mathcal{G}'_{\tilde \mathtt{S}}$ is more subtle. By Lemmas~\ref{L:compatibility of derived group and adjoint group} and \ref{L:compatibility of derived group and adjoint group2}, we have $(G'_{\tilde \mathtt{S}})^\mathrm{ad}(\mathbb{Q})^{+,(p)} = (G''_{\tilde \mathtt{S}})^\mathrm{ad}(\mathbb{Q})^{+,(p)}$. So if we use $Z'_{\tilde \mathtt{S}}$ to denote the center of $G'_{\tilde \mathtt{S}}$, then we have
\begin{align}
\label{E:structure group description}
\mathcal{G}'_{\tilde \mathtt{S}} &= \big(G'_{\tilde \mathtt{S}}(\mathbb{A}^{\infty,p}) \big/ Z'_{\tilde \mathtt{S}}(\mathbb{Q})^{(p), \mathrm{cl}}\big) \ast_{G'_{\tilde \mathtt{S}}(\mathbb{Q})^{(p)}_+/Z'_{\tilde \mathtt{S}}(\mathbb{Q})^{(p)}} (G'_{\tilde \mathtt{S}})^\mathrm{ad}(\mathbb{Q})^{+,(p)}
\\
\nonumber
&= \big(G'_{\tilde \mathtt{S}}(\mathbb{A}^{\infty,p}) \big/ Z'_{\tilde \mathtt{S}}(\mathbb{Q})^{(p), \mathrm{cl}}\big) \ast_{G'_{\tilde \mathtt{S}}(\mathbb{Q})^{(p)}_+/Z'_{\tilde \mathtt{S}}(\mathbb{Q})^{(p)}} \big( G''_{\tilde \mathtt{S}}(\mathbb{Q})_+^{(p)} / \mathcal{O}_{E, (p)}^\times \big)
\\
\nonumber
&= G'_{\tilde \mathtt{S}}(\mathbb{A}^{\infty, p}) G''_{\tilde \mathtt{S}}(\mathbb{Q})_+^{(p)} \big/ \mathcal{O}_{E, (p)}^{\times, \mathrm{cl}}.
\end{align}
The subgroup $G'_{\tilde \mathtt{S}}(\mathbb{A}^{\infty, p}) G''_{\tilde \mathtt{S}}(\mathbb{Q})_+^{(p)}$ can be characterized, via the following commutative diagram with exact rows, as the pullback of the right square.
\begin{equation}
\label{E:description of G'' G'}
\xymatrix{
1 \ar[r] & \ar@{=}[d] G'_{\tilde \mathtt{S},1}(\mathbb{A}^{\infty,p}) \ar[r] &G''_{\tilde \mathtt{S}}(\mathbb{Q})_+^{(p)} G'_{\tilde \mathtt{S}}(\mathbb{A}^{\infty,p}) \ar[r] \ar@{^{(}->}[d] & \mathcal{O}_{F, (p)}^\times (\mathbb{A}^{\infty,p})^\times\ar[r] \ar@{^{(}->}[d] & 1
\\1 \ar[r]& G'_{{\tilde \mathtt{S}},1}(\mathbb{A}^{\infty,p}) \ar[r] & G''_{\tilde \mathtt{S}}(\mathbb{A}^{\infty,p}) \ar[r] & (\mathbb{A}_F^{\infty,p})^\times \ar[r] &1.
}
\end{equation}
We use $\mathcal{E}_{G, \mathtt{S}, \wp}$ to denote the group $\mathcal{E}_{G, \tilde \wp}$ defined in Subsection~\ref{A:connected integral model}. As an abstract group, it is the same for all of the groups $G_\mathtt{S}$, $G'_{\tilde \mathtt{S}}$, and $G''_{\tilde \mathtt{S}}$. But we point out that it is important (see Remark~\ref{R:quaternionic Shimura reciprocity not compatible}) to know how these groups sit as subgroups of $\mathcal{G}_\mathtt{S} \times \Gal_{k_\wp}$, $\mathcal{G}'_{\tilde \mathtt{S}} \times \Gal_{k_{\tilde \wp}}$ and $\mathcal{G}''_{\tilde \mathtt{S}} \times \Gal_{k_{\tilde \wp}}$, respectively, according to the Shimura reciprocity map.

\subsection{Integral models of unitary Shimura varieties}\label{S:integral-unitary}
We choose a finite extension $k_0$ of $k_{\tilde \wp}$ that contains all the residue fields $k_\mathfrak{q}$ for $p$-adic places $\mathfrak{q}$ of $E$. Then the ring of Witt vectors $W(k_0)$ may be viewed as a subring of $\overline \mathbb{Q}_p$ containing $\mathcal{O}_{\tilde \wp}$.
We fix an order $\mathcal{O}_{D_\mathtt{S}}$ of $D_\mathtt{S}$ stable under the involution $l \mapsto l^*$ such that $\mathcal{O}_{D_\mathtt{S}} \otimes_{\mathcal{O}_F} \mathcal{O}_{F,p} \simeq \mathcal{O}_{D_\mathtt{S}, p}$. Recall that $V$ is the underlying $\mathbb{Q}$-vector space of $D_\mathtt{S}$; we choose and fix an $\mathcal{O}_{D_\mathtt{S}}$-lattice $\Lambda$ of $V$ such that:
\begin{itemize}
\item for each $\mathfrak{p} \in \Sigma_p$, we have $\Lambda \otimes_{\mathcal{O}_F} \mathcal{O}_\mathfrak{p} \cong \Lambda_\mathfrak{p}^{(1)}$, and
\item if we put $\widehat{\Lambda}^{(p)} : = \Lambda \otimes_\mathbb{Z} \widehat \mathbb{Z}^{(p)}$ as a lattice of $V \otimes_\mathbb{Q} \mathbb{A}^{\infty, p}$, we have
\begin{equation}
\label{E:Lambda-dual}
\widehat \Lambda^{(p)} \subseteq \widehat \Lambda^{(p),\vee}\textrm{ under the bilinear form } \psi, \textrm{ or equivalently, } \psi(\widehat \Lambda^{(p)}, \widehat \Lambda^{(p)}) \subseteq \widehat{\mathbb{Z}}^{(p)}.
\end{equation}
\end{itemize}
We call such a $\Lambda$ \emph{admissible}.

\begin{theorem}
\label{T:unitary-shimura-variety-representability}
Let $K'_p$ be the open compact subgroup of $G_{\tilde \mathtt{S}}'(\mathbb{Q}_p)$ considered in Subsection \ref{S:level-structure}, and let $K'^p\subset G_{\tilde \mathtt{S}}'(\mathbb{A}^{\infty, p})$ be sufficiently small so that $K'=K'^pK'_p$ is neat. Then there exists a unique \emph{smooth} quasi-projective scheme $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})$ over $W(k_0)$ representing the functor that sends a locally noetherian $W(k_0)$-scheme $S$ to the set of isomorphism classes of tuples $(A, \iota, \lambda, \alpha_{K'})$, described as follows.
\begin{itemize}
\item[(a)] $A$ is an abelian scheme over $S$ of dimension $4g$ equipped with an embedding $\iota: \mathcal{O}_{D_\mathtt{S}} \to \End_S(A)$ such that the characteristic polynomial of the endomorphism $\iota(b)$ on $\Lie(A/S)$ for $b \in \mathcal{O}_E$ is given by
\[
\prod_{\tilde \tau \in \Sigma_{E,\infty}} \big(x - \tilde \tau(b)\big) ^{2s_{\tilde \tau}}.
\]
\item[(b)] $\lambda:A \to A^\vee$ is a polarization of $A$, such that
\begin{itemize}
\item[(b1)] the Rosati involution associated to $\lambda$ induces the involution $l \mapsto l^*$ on $\mathcal{O}_{D_\mathtt{S}}$,
\item[(b2)] $(\Ker \lambda)[p^\infty]$ is a finite flat closed subgroup scheme contained in $\prod_{\mathfrak{p} \textrm{ of type } \beta^\sharp}A[\mathfrak{p}]$ of rank $\prod_{\mathfrak{p} \textrm{ of type } \beta^\sharp} (\#k_{\mathfrak{p}})^{4}$, and
\item[(b3)] the cokernel of $\lambda_*: H_1^\mathrm{dR}(A / S) \to H_1^\mathrm{dR}(A^\vee/S)$ is a locally free module of rank two over
\[
\bigoplus_{\mathfrak{p} \textrm{ of type }\beta^\sharp} \mathcal{O}_S \otimes_{\mathbb{Z}_p} (\mathcal{O}_E \otimes_{\mathcal{O}_F} k_\mathfrak{p}).
\]
\end{itemize}
\item[(c)] $\alpha_{K'}$ is a pair $( \alpha^p_{K'^p}, \alpha_p)$ defined as follows:
\begin{itemize}
\item[(c1)] For each connected component $S_i$ of $S$, we choose a geometric point $\bar{s}_i$, and let $T^{(p)}(A_{\bar{s}_i})$ be the product of the $l$-adic Tate modules of $A$ at $\bar{s}_i$ over all $l\neq p$.
Then $\alpha^p_{K'^p}$ is a collection of $\pi_1(S_i, \bar{s}_i)$-invariant $K'^p$-orbits of pairs $(\alpha^p_i, \nu(\alpha^p_i))$, where $\alpha^p_i$ is an $\mathcal{O}_{D_\mathtt{S}}\otimes_\mathbb{Z} \widehat{\mathbb{Z}}^{(p)}$-linear isomorphism $\widehat \Lambda^{(p)} \xrightarrow{\sim} T^{(p)}( A_{\bar s_i})$ and $\nu(\alpha^p_i)$ is an isomorphism $\widehat{\mathbb{Z}}^{(p)}\xrightarrow{\sim} \widehat \mathbb{Z}^{(p)}(1)$ such that the following diagram commutes:
\[
\xymatrix{
\widehat \Lambda^{(p)}\times \widehat \Lambda^{(p)}\ar[rr]^-{\psi}\ar[d]_{\alpha^p_i\times \alpha_i^p} &&\widehat{\mathbb{Z}}^{(p)}\ar[d]^{\nu(\alpha_i^p)}\\
T^{(p)}(A_{\bar s_i})\times T^{(p)}(A_{\bar s_i}) \ar[rr]^-{\lambda-\mathrm{Weil}} && \widehat{\mathbb{Z}}^{(p)}(1).
}
\]
\item[(c2)] For each prime $\mathfrak{p}\in \Sigma_p$ of type $\alpha^\sharp$, let $\mathfrak{q}$ and $\bar\mathfrak{q}$ be the two primes of $E$ above $\mathfrak{p}$. Then $\alpha_p$ is a collection of $\mathcal{O}_{D_\mathtt{S}}$-stable closed finite flat subgroups $\alpha_{\mathfrak{p}}=H_{\mathfrak{q}}\oplus H_{\bar{\mathfrak{q}}} \subset A[\mathfrak{q}]\oplus A[\bar\mathfrak{q}]$ of order $(\#k_\mathfrak{p})^4$ such that $H_\mathfrak{q}$ and $H_{\bar\mathfrak{q}}$ are dual to each other under the perfect pairing
\[A[\mathfrak{q}]\times A[\bar\mathfrak{q}]\rightarrow \mu_p\]
induced by the polarization $\lambda$.
\end{itemize}
\end{itemize}
By Galois descent, the moduli space $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})$ can be defined over $\mathcal{O}_{\tilde \wp}$. Moreover, if the ramification set $\mathtt{S}_{\infty}$ is non-empty, then $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})$ is projective.
\end{theorem}

We will postpone the proof of this theorem until after Notation~\ref{N:notation-reduced}.

\subsection{Deformation theory}\label{S:deformation}
We recall briefly the crystalline deformation theory of abelian varieties due to Serre--Tate and Grothendieck--Messing.
This will be used in the proof of Theorem~\ref{T:unitary-shimura-variety-representability}. We start with a general situation. Let $S$ be a $\mathbb{Z}_p$-scheme over which $p$ is locally nilpotent, and let $S_0\hookrightarrow S$ be a closed immersion whose ideal sheaf $\mathcal{I}$ is equipped with a divided power structure compatible with that on $p\mathbb{Z}_p$, e.g. $S_0= \Spec k\hookrightarrow S= \Spec k[\epsilon]/ (\epsilon^2)$ with $k$ a perfect field of characteristic $p$. Let $(S_0/\mathbb{Z}_p)_{\mathrm{cris}}$ be the crystalline site of $S_0$ over $\Spec \mathbb{Z}_p$, and let $\mathcal{O}^\mathrm{cris}_{S_0/\mathbb{Z}_p}$ be the structure sheaf. Let $A_0$ be an abelian scheme over $S_0$, and let $H^{\mathrm{cris}}_1(A_0/S_0)$ be the \emph{dual} of the relative crystalline cohomology $H^1_{\mathrm{cris}}(A_0/S_0)$ (or isomorphically $H^{\mathrm{cris}}_1(A_0/S_0)=H^1_{\mathrm{cris}}(A_0^\vee/S_0)$). Then $H^{\mathrm{cris}}_1(A_0/S_0)$ is a crystal of locally free $\mathcal{O}^\mathrm{cris}_{S_0/\mathbb{Z}_p}$-modules whose evaluation $H^{\mathrm{cris}}_1(A_0/S_0)_{S}$ at the pd-embedding $S_0\hookrightarrow S$ is a locally free $\mathcal{O}_S$-module. We have a canonical isomorphism $H^{\mathrm{cris}}_1(A_0/S_0)_{S}\otimes_{\mathcal{O}_S}\mathcal{O}_{S_0}\simeq H^{\mathrm{dR}}_1(A_0/S_0)$, which is the \emph{dual} of the relative de Rham cohomology of $A_0/S_0$. For each abelian scheme $A$ over $S$ with $A\times_S S_0\simeq A_0$, we have a canonical Hodge filtration
\[
0\rightarrow \omega_{A^\vee/S}\rightarrow H^{\mathrm{cris}}_1(A_0/S_0)_{S}\rightarrow \Lie(A/S)\rightarrow 0.
\]
Hence, $\omega_{A^\vee/S}$ gives rise to a local direct factor of $H^{\mathrm{cris}}_1(A_0/S_0)_{S}$ that lifts the subbundle $\omega_{A_0^\vee/S_0}\subseteq H_1^{\mathrm{dR}}(A_0/S_0)$. Conversely, the theory of deformations of abelian schemes says that knowing this lift of the subbundle is enough to recover $A$ from $A_0$.
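In the square-zero example $S_0 = \Spec k \hookrightarrow S = \Spec k[\epsilon]/(\epsilon^2)$ above, this recovers a standard dimension count: the local direct factors $\omega \subseteq H_1^{\mathrm{cris}}(A_0/S_0)_S$ lifting $\omega_{A_0^\vee/S_0}$ form a torsor under
\[
\Hom_k\big(\omega_{A_0^\vee/S_0},\, H_1^{\mathrm{dR}}(A_0/S_0)/\omega_{A_0^\vee/S_0}\big) = \Hom_k\big(\omega_{A_0^\vee/S_0},\, \Lie(A_0/S_0)\big),
\]
so the deformations of a $g$-dimensional abelian variety $A_0$ over $k[\epsilon]/(\epsilon^2)$ form a $k$-vector space of dimension $g^2$.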
More precisely, let $\mathtt{AV}_{S}$ be the category of abelian schemes over $S$, and let $\mathtt{AV}^+_{S_0}$ denote the category of pairs $(A_0, \omega)$, where $A_0$ is an abelian scheme over $S_0$ and $\omega$ is a subbundle of $H_1^\mathrm{cris}( A_0/S_0)_{S}$ that lifts $\omega_{A_0^\vee/S_0} \subseteq H_1^\mathrm{dR}(A_0/S_0)$. The main theorem of the crystalline deformation theory (cf. \cite[pp. 116--118]{grothendieck}, \cite[Chap. II \S 1]{mazur-messing}) says that \emph{the natural functor $\mathtt{AV}_{S} \to \mathtt{AV}_{S_0}^+$ given by $A\mapsto (A\times_{S}S_0, \omega_{A^\vee/S})$ is an equivalence of categories.} Let $A$ be a deformation of $A_0$ corresponding to a direct factor $\omega\subseteq H_1^{\mathrm{cris}}(A_0/S_0)_S$ that lifts $\omega_{A_0^\vee/S_0}$. If $A_0$ is equipped with an action $\iota_0$ of an algebra $R$, then $\iota_0$ deforms to an action $\iota$ of $R$ on $A$ if and only if $\omega\subseteq H_1^{\mathrm{cris}}(A_0/S_0)_S$ is $R$-stable. Let $\lambda_0:A_0\rightarrow A_0^\vee$ be a polarization. Then $\lambda_0$ induces a natural alternating pairing \cite[5.1]{bbm} \[ \langle\ ,\ \rangle_{\lambda_0}\colon H^{\mathrm{cris}}_1(A_0/S_0)_S\times H^{\mathrm{cris}}_1(A_0/S_0)_S\rightarrow \mathcal{O}_S, \] which is perfect if $\lambda_0$ is prime-to-$p$. Then there exists a (necessarily unique) polarization $\lambda: A\rightarrow A^\vee$ that lifts $\lambda_0$ if and only if $\omega$ is isotropic for $\langle\ ,\ \rangle_{\lambda_0}$ by \cite[2.1.6.9, 2.2.2.2, 2.2.2.6]{lan}. \begin{notation} \label{N:notation-reduced} Before going to the proof of Theorem~\ref{T:unitary-shimura-variety-representability}, we introduce some notation. Recall that we have an isomorphism $\mathcal{O}_{D_{\mathtt{S}},p}\simeq \mathrm{M}_2(\mathcal{O}_{E}\otimes \mathbb{Z}_p)$. 
We denote by $\mathfrak{e}\in \mathcal{O}_{D_{\mathtt{S}},p}$ the element corresponding to $\bigl( \begin{smallmatrix}1 &0\\0&0\end{smallmatrix} \bigr)$ in $\mathrm{M}_2(\mathcal{O}_{E}\otimes \mathbb{Z}_p)$. For $S$ a $W(k_0)$-scheme and $M$ an $\mathcal{O}_S$-module locally free of finite rank equipped with an action of $\mathcal{O}_{D_{\mathtt{S}},p}$, we call $M^{\circ}:=\mathfrak{e} M$ \emph{the reduced part} of $M$; we have $M=(M^{\circ})^{\oplus 2}$ by Morita equivalence. Moreover, the $\mathcal{O}_E$-action induces a canonical decomposition \[ M^{\circ}=\bigoplus_{\tilde \tau\in \Sigma_{E, \infty}} M^{\circ}_{\tilde\tau}, \] where $\mathcal{O}_E$ acts on each factor $M^\circ_{\tilde \tau}$ by $\tilde \tau: \mathcal{O}_E \to W(k_0)$. Let $A$ be an abelian scheme over $S$ carrying an action of $\mathcal{O}_{D_{\mathtt{S}}}$. The construction above gives rise to locally free $\mathcal{O}_S$-modules $\omega^\circ_{A/S}$, $\Lie(A/S)^\circ$ and $ H_1^\mathrm{dR}(A/S)^\circ $, which are of rank $\frac12\dim A$, $\frac12\dim A$ and $\dim A$, respectively. We call them the \emph{reduced invariant differential $1$-forms}, the \emph{reduced Lie algebra} and the \emph{reduced de Rham homology} of $A$, respectively. For each $\tilde \tau \in \Sigma_{E, \infty}$, we have a \emph{reduced Hodge filtration} in the $\tilde \tau$-component \begin{equation}\label{Equ:reduced-Hodge} 0\rightarrow \omega_{A^\vee/S, \tilde \tau}^{\circ}\rightarrow H_1^\mathrm{dR}(A/S)^\circ_{\tilde \tau}\rightarrow \Lie(A/S)^{\circ}_{\tilde \tau}\rightarrow 0. \end{equation} \end{notation} \begin{proof}[Proof of Theorem~\ref{T:unitary-shimura-variety-representability}] The representability of $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})$ by a quasi-projective scheme over $W(k_0)$ is well-known (cf. for instance \cite[1.4.13, 2.3.3, 7.2.3.10]{lan}). To show the smoothness of $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})$, it suffices to prove that it is formally smooth over $W(k_0)$. 
Let $R$ be a $W(k_0)$-algebra, $I\subset R$ an ideal with $I^2=0$, and $R_0=R/I$. We need to show that every point $x_0=(A_0, \iota_0, \lambda_0, \alpha_{K',0})$ of $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})$ with values in $R_0$ lifts to an $R$-valued point $x$ of $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})$. We apply the deformation theory recalled in \ref{S:deformation}. The relative crystalline homology $H_1^{\mathrm{cris}}(A_0/R_0)$ is naturally equipped with an action of $\mathcal{O}_{D_{\mathtt{S}}}\otimes \mathbb{Z}_p$. Let $H_1^{\mathrm{cris}}(A_0/R_0)^{\circ}:=\mathfrak{e} H_1^{\mathrm{cris}}(A_0/R_0)$ be its reduced part, and $H_1^{\mathrm{cris}}(A_0/R_0)^{\circ}_{R}$ be its evaluation on $R$. This is a free $R\otimes\mathcal{O}_E$-module of rank $2$ (equivalently, a free $R$-module of rank $4[F:\mathbb{Q}]$), and we have a canonical decomposition \[ H_1^{\mathrm{cris}}(A_0/R_0)^{\circ}_{R}= \bigoplus_{\tilde \tau\in \Sigma_{E,\infty}} H_1^{\mathrm{cris}}(A_0/R_0)^{\circ}_{R,\tilde \tau}. \] The polarization $\lambda_0$ on $A_0$ induces a pairing \begin{equation}\label{Equ:pairing-R} H_1^{\mathrm{cris}}(A_0/R_0)_{R, \tilde \tau}^{\circ}\times H_1^{\mathrm{cris}}(A_0/R_0)_{R,\tilde \tau^c}^{\circ}\longrightarrow R, \end{equation} which is perfect for $\tilde \tau\in \Sigma_{E, \infty/\mathfrak{p}}$ with $\mathfrak{p}$ not of type $\beta^\sharp$. By the deformation theory \ref{S:deformation}, giving a deformation of $(A_0, \iota_0)$ to $R$ is equivalent to giving, for each $\tilde \tau \in \Sigma_{E, \infty}$, a direct summand $\omega_{R, \tilde \tau}^{\circ}\subseteq H_1^{\mathrm{cris}}(A_0/R_0)^{\circ}_{R, \tilde \tau}$ which lifts $\omega_{A_0^\vee/R_0, \tilde \tau}^{\circ}$. Let $\mathfrak{p}\in \Sigma_{p}$ with $\tilde \tau\in \Sigma_{E, \infty/\mathfrak{p}}$. We distinguish several cases: \begin{itemize} \item If $\tilde \tau$ restricts to $\tau \in \mathtt{S}_\infty$, then $\Lie(A_0/R_0)^{\circ}_{\tilde \tau}$ has rank $s_{\tilde \tau} \in \{0,2\}$ by the determinant condition (a). 
By duality or the Hodge filtration \eqref{Equ:reduced-Hodge}, $\omega_{A_0^\vee/R_0, \tilde \tau}^\circ$ has rank $2-s_{\tilde{\tau}}$, i.e. $\omega_{A_0^\vee/R_0, \tilde{\tau}}^{\circ}=0$ when $s_{\tilde \tau} =2$ and $\omega_{A_0^\vee/R_0, \tilde \tau}^{\circ}\cong H_1^{\mathrm{dR}}(A_0/R_0)^{\circ}_{\tilde \tau}$ when $s_{\tilde \tau} = 0$. Therefore, $\omega_{R, \tilde \tau}^{\circ}=0$ or $\omega_{R, \tilde \tau}^{\circ}=H_1^{\mathrm{cris}}(A_0/R_0)_{R, \tilde \tau}^{\circ}$ is the unique lift in these cases respectively. \item If $\tilde \tau$ restricts to $\tau\in \Sigma_{\infty}-\mathtt{S}_{\infty}$, then $\omega_{A_0^\vee/R_0,\tilde \tau}^{\circ}$ and $\omega_{A_0^\vee/R_0, \tilde \tau^c}^{\circ}$ are both of rank 1 over $R_0$, and we have $\omega_{A_0^\vee/R_0, \tilde \tau}^\circ=(\omega_{A_0^\vee/R_0, \tilde \tau^c}^\circ)^{\perp}$ under the perfect pairing between $H_1^{\mathrm{dR}}(A_0/R_0)^{\circ}_{\tilde \tau}$ and $H_1^{\mathrm{dR}}(A_0/R_0)^{\circ}_{\tilde \tau^c}$ induced by $\lambda_0$. (Note that $\tau \in \Sigma_{\infty/\mathfrak{p}}- \mathtt{S}_\infty$ means that $\mathfrak{p}$ is not of type $\beta^\sharp$ and hence the Weil pairing is perfect.) Within each pair $\{\tilde \tau, \tilde \tau^c\}$, we can take an arbitrary direct summand $\omega_{R,\tilde \tau }^{\circ} \subseteq H_1^{\mathrm{cris}}(A_0/R_0)_{R, \tilde \tau}^\circ$ which lifts $\omega_{A_0^\vee/R_0, \tilde \tau}^{\circ}$, and let $\omega_{R, \tilde \tau^c}^\circ$ be the orthogonal complement of $\omega_{R, \tilde \tau}^\circ$ under the perfect pairing \eqref{Equ:pairing-R}. 
By the Hodge filtration \eqref{Equ:reduced-Hodge}, such choices of $(\omega_{R, \tilde \tau}^\circ, \omega_{R, \tilde \tau^c}^{\circ})$ form a torsor under the group \[ \Hom_{R_0}(\omega_{A_0^\vee/R_0, \tilde \tau}^{\circ}, \Lie(A_0)_{\tilde \tau}^\circ)\otimes I\cong \Lie(A_0)_{\tilde \tau}^{\circ}\otimes_{R_0} \Lie(A_0)_{\tilde \tau^c}^\circ \otimes I, \] where in the second isomorphism, we have used the fact that $\Lie(A_0^\vee)^\circ_{\tilde \tau}\simeq \Lie(A_0)^\circ_{\tilde \tau^c}$. \end{itemize} We take liftings $\omega_{R, \tilde \tau}^\circ$ for each $\tilde \tau \in \Sigma_{E, \infty}$ as above, and let $(A, \iota)$ be the corresponding deformation to $R$ of $(A_0, \iota_0)$. It is clear that $\bigoplus_{\tau\in \Sigma_{\infty}}(\omega_{R,\tilde{\tau}}^{\circ}\oplus \omega_{R, \tilde{\tau}^c}^{\circ})$ is isotropic for the pairing on $H^{\mathrm{cris}}_1(A_0/R_0)^{\circ}_R$ induced by $\lambda_0$. Hence, the polarization $\lambda_0$ lifts uniquely to a polarization $\lambda: A\rightarrow A^\vee$ satisfying condition (b1) in the statement of the theorem. By the criterion of flatness by fibres \cite[11.3.10]{ega}, $\Ker(\lambda)$ is a finite flat group scheme over $R$, and condition (b2) is thus satisfied. Condition (b3) follows from the fact that the morphism $\lambda_*:H_1^{\mathrm{dR}}(A/R)\rightarrow H_1^{\mathrm{dR}}(A^\vee/R)$ is the same as $\lambda_{0,*}: H_1^{\mathrm{cris}}(A_0/R_0)_R\rightarrow H_1^{\mathrm{cris}}(A_0^\vee/R_0)_R$ under the canonical isomorphism $H_1^{\mathrm{dR}}(B/R)\simeq H_1^{\mathrm{cris}}(B_0/R_0)_R$ for $B=A_0, A_0^\vee$. We have to show moreover that the level structure $\alpha_{K',0}=(\alpha^p_0, \alpha_{p,0})$ extends uniquely to $A$. It is clear for $\alpha^{p}_0$. For $\alpha_{p,0}$, let $H_0=\prod_{\mathfrak{p}\text{ of type } \alpha^\sharp} \alpha_{\mathfrak{p}}$ be the product of the closed subgroups in the datum $\alpha_{p,0}$. Let $f_0:A_0\rightarrow B_0=A_0/H_0$ be the canonical isogeny. 
It suffices to show that $B_0$ and $f_0$ deform to $R$. The abelian variety $B_0$ is equipped with an induced action of $\mathcal{O}_{D_{\mathtt{S}}}$ and a polarization $\lambda_{B_0}$ satisfying the same conditions (a) and (b). The isogeny $f_0$ induces canonical isomorphisms $H^{\mathrm{cris}}_{1}(A_0/R_0)_{R, \tilde \tau}\cong H_1^{\mathrm{cris}}(B_0/R_0)_{R,\tilde \tau}$ for $\tilde \tau \in \Sigma_{E,\infty/\mathfrak{p}}$ with $\mathfrak{p}$ not of type $\alpha^\sharp$. So for such primes $\mathfrak{p}$ and $\tilde \tau \in \Sigma_{E,\infty/\mathfrak{p}}$, the liftings $\omega^{\circ}_{R,\tilde \tau}$ chosen above give the liftings of $\omega_{B_0^\vee/R_0, \tilde \tau}^\circ\subset H_1^\mathrm{dR}(B_0/R_0)^{\circ}_{\tilde \tau}$. For $\tilde \tau\in \Sigma_{E,\infty/\mathfrak{p}}$ with $\mathfrak{p}$ of type $\alpha^\sharp$, we note that at each closed point $x$ of $\Spec R_0$, $\omega_{B_0^\vee/k_x, \tilde \tau}^\circ$ is either trivial or isomorphic to the whole $H_1^{\mathrm{dR}}(B_0/k_x)_{\tilde \tau}^{\circ}$ as in the case for $A_0$; hence the same holds for $R_0$ in place of $k_x$. Therefore, $\omega_{B_0^\vee/R_0, \tilde \tau}^\circ$ admits a unique lift to a direct summand of $H^{\mathrm{cris}}_1(B_0/R_0)_{R, \tilde \tau}^{\circ}$. Such choices of liftings of $\omega_{B_0^\vee/R_0,\tilde \tau}^{\circ}$ give rise to a deformation $B/R$ of $B_0/R_0$. It is clear that $f_0 : A_0 \to B_0$ also lifts to an isogeny $f: A \to B$. Then the kernel of $f$ gives the required lift of $H_0$. This concludes the proof of the smoothness of $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})$. The dimension of $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})$ follows from the calculation of the tangent bundle of $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})$ as the following corollary shows. When the ramification set $\mathtt{S}_{\infty}$ is non-empty, it is standard to use the valuative criterion to check that $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})$ is proper. 
We will postpone the proof to Proposition~\ref{Prop:smoothness}, where a more general statement is proved. \end{proof} \begin{cor}\label{C:deformation} Let $S_0\hookrightarrow S$ be a closed immersion of $\overline{\mathbb{F}}_p$-schemes with ideal sheaf $\mathcal{I}$ such that $\mathcal{I}^2=0$. Let $x_0=(A_0, \iota_0, \lambda_0, \bar{\alpha}_{K',0} )$ be an $S_0$-valued point of $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})$. Then the set-valued sheaf of local deformations of $x_0$ to $S$ forms a torsor under the group \[ \bigoplus_{\tau\in \Sigma_{\infty}-\mathtt{S}_{\infty}} \bigl(\Lie(A_0)_{\tilde \tau}^{\circ}\otimes \Lie(A_0)_{\tilde \tau ^c}^{\circ}\bigr)\otimes \mathcal{I}. \] In particular, the tangent bundle $\mathcal{T}_{\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})}$ of $\mathbf{Sh}_{K'}(G_{\tilde \mathtt{S}}')$ is canonically isomorphic to \[ \bigoplus_{\tau\in \Sigma_{\infty}-\mathtt{S}_{\infty}} \Lie(\mathbf{A}')^\circ_{\tilde \tau}\otimes \Lie(\mathbf{A}')^\circ_{\tilde \tau^c}, \] where $\mathbf{A}' = \mathbf{A}'_{{\tilde \mathtt{S}}, K'}$ denotes the universal abelian scheme over $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})$. \end{cor} \begin{proof} A deformation of $x_0$ is determined by the liftings $\omega^\circ_{S, \tilde \tau}\subseteq H^{\mathrm{cris}}_1(A_0/S_0)_{S, \tilde \tau}^\circ$ of $\omega_{A_0^\vee/S_0, \tilde \tau}^\circ$ for $\tilde \tau\in \Sigma_{E,\infty}$. From the proof of Theorem~\ref{T:unitary-shimura-variety-representability}, we see that the choice of $\omega_{S, \tilde \tau}^\circ$ is unique if $\tilde \tau$ restricts to $\tau\in \mathtt{S}_{\infty}$. 
For $\tau\in \Sigma_{\infty}-\mathtt{S}_{\infty}$, the possible liftings $\omega_{S, \tilde \tau}^\circ$ and $\omega_{S, \tilde \tau^c}^\circ$ determine each other, and form a torsor under the group \[ \Hom_{\mathcal{O}_{S_0}}(\omega_{A_0^\vee/S_0, \tilde \tau}^\circ, \Lie(A_0)_{\tilde \tau}^\circ) \otimes_{\mathcal{O}_{S_0}}\mathcal{I}\simeq \Lie(A_0)_{\tilde \tau}^\circ \otimes \Lie(A_0)^\circ_{\tilde \tau^c}\otimes_{\mathcal{O}_{S_0}} \mathcal{I}. \] The statement for the local lifts of $x_0$ to $S$ follows immediately. Applying this to the universal case, we obtain the second part of the Corollary. \end{proof} \begin{remark} We remark that the moduli space $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})$ does not depend on the choice of the admissible lattice $\Lambda$ in Subsection~\ref{S:integral-unitary}; but the universal abelian scheme $\mathbf{A}'$ does in the following way. If $\Lambda_1$ and $\Lambda_2$ are two admissible lattices, we put $\widehat{\Lambda}_i^{(p)}: = \Lambda_i \otimes_\mathbb{Z} \widehat{\mathbb{Z}}^{(p)}$, and we use $\mathbf{A}'_i$ to denote the corresponding universal abelian variety over $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})$ and $\bar \boldsymbol{\alpha}^p_{K'^p, i}$ to denote the universal level structure (away from $p$), for $i = 1,2$. Then there is a natural prime-to-$p$ quasi-isogeny $\eta: \mathbf{A}'_1 \dashrightarrow \mathbf{A}'_2$ such that \[ \xymatrix@C=50pt{ \widehat \Lambda^{(p)}_1 \ar@{-->}[d] \ar[r]^-{\bar \boldsymbol{\alpha}^p_{K'^p,1}}_-\cong & T^{(p)}( \mathbf{A}'_1) \ar@{-->}[d]^{T^{(p)}(\eta)}\\ \widehat \Lambda^{(p)}_2 \ar[r]^-{\bar \boldsymbol{\alpha}^p_{K'^p, 2}}_-\cong & T^{(p)}( \mathbf{A}'_2) } \] is a commutative diagram up to the action of $K'^p$, where the left vertical arrow is the isogeny of lattices inside $V \otimes_\mathbb{Q} \mathbb{A}^{\infty,p}$. (For more detailed discussion, see \cite[1.4.3]{lan}.) 
\end{remark} \begin{cor} \label{C:integral-model-quaternion} The integral model $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})$ defined in Theorem~\ref{T:unitary-shimura-variety-representability} gives an integral canonical model $\mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}})$ of $\mathrm{Sh}_{K'_p}(G'_\mathtt{S})$. Consequently, the quaternionic Shimura variety $\mathrm{Sh}_{K_p}(G_\mathtt{S})$ admits an integral canonical model over $\mathcal{O}_{\wp}$. Similarly, the Shimura varieties $\mathrm{Sh}_{K_p \times K_{E,p}}(G_\mathtt{S} \times T_{E, \tilde \mathtt{S}})$ and $\mathrm{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})$ both admit integral canonical models over $\mathcal{O}_{\tilde \wp}$. The geometric connected components of these integral canonical models are canonically isomorphic. \end{cor} \begin{proof} We first assume that $\mathtt{S}_\infty \neq \Sigma_\infty$. We need to verify that for any smooth $\mathcal{O}_{\tilde \wp}$-scheme $S$, any morphism $s_0:S\otimes_{\mathcal{O}_{\tilde \wp}} E_{\tilde \mathtt{S}, \tilde \wp}\rightarrow \mathrm{Sh}_{K'_p}(G'_{\tilde \mathtt{S}})$ extends to a morphism $s: S\rightarrow \mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}})$. Explicitly, we have to show that a tuple $(A, \iota, \lambda, \alpha^p\alpha_p)$ over $S\otimes_{\mathcal{O}_{\tilde \wp}}E_{{\tilde \mathtt{S}},\tilde \wp}$ extends to a similar tuple over $S$. Here, $\alpha^p\alpha_p$ is the projective limit over $K'^{p}$ of the level structures $\alpha^p_{K'^p} \alpha_p$ as in Theorem~\ref{T:unitary-shimura-variety-representability}(c). The arguments of \cite[Corollary 3.8]{moonen96} apply to prove the existence of extensions of $A$, $\iota$, $\lambda$, and the prime-to-$p$ level structure $\alpha^p$. It remains to extend the level structure $\alpha_p$. Let $\mathbf{Sh}_{\widetilde {K}'_{p}}(G'_{\tilde \mathtt{S}})$ denote the moduli space defined in the same way as $\mathbf{Sh}_{K'_{p}}(G_{\tilde \mathtt{S}}')$ but forgetting the $p$-level structure $\alpha_p$. 
The discussion above shows that $\mathbf{Sh}_{\widetilde {K}'_{p}}(G'_{\tilde \mathtt{S}})$ satisfies the extension property. We have seen in the proof of Theorem~\ref{T:unitary-shimura-variety-representability} that there is no local deformation of $\alpha_p$, which means the forgetful map $\mathbf{Sh}_{K'_{p}}(G'_{\tilde \mathtt{S}})\rightarrow \mathbf{Sh}_{\widetilde {K}'_{p}}(G'_{\tilde \mathtt{S}})$ is finite and \'etale. By the discussion above, there exists a morphism $\tilde s: S\rightarrow \mathbf{Sh}_{\widetilde{K}'_p}(G'_{\tilde \mathtt{S}})$ such that the square in the following diagram \[ \xymatrix{ S\otimes_{\mathcal{O}_{\tilde \wp}}E_{\tilde \mathtt{S},\tilde \wp}\ar[r]^{s_0}\ar[d]& \mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}})\ar[d]\\ S\ar@{-->}[ur]^{s} \ar[r]^{\tilde s} &\mathbf{Sh}_{\widetilde{K}'_p}(G'_{\tilde \mathtt{S}}) } \] commutes. We have to show that there exists a map $s$, as the dotted arrow, that makes the whole diagram commute. Giving such a map $s$ is equivalent to giving a section of the finite \'etale cover $S\times_{\mathbf{Sh}_{ \widetilde{K}'_{p}}(G'_{\tilde \mathtt{S}})}\mathbf{Sh}_{{K}'_{p}}(G'_{\tilde \mathtt{S}})\rightarrow S$ extending the section corresponding to $s_0$. Since a section of a finite \'etale cover of separated schemes is an open and closed immersion, the existence of $s$ follows immediately. The existence of integral canonical models for $\mathrm{Sh}_{K_p}(G_{\mathtt{S}})$, $\mathrm{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})$ and $\mathrm{Sh}_{K_p\times K_{E,p}}(G_{\mathtt{S}}\times T_{E, \tilde \mathtt{S}})$ follows from Corollary~\ref{C:Sh(G)^circ_Zp independent of G}. When $\mathtt{S}_\infty = \Sigma_\infty$, we need to show that the geometric Frobenius $\mathrm{Frob}_{\tilde \wp}$ acts on the moduli space $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})$ by the appropriate central Hecke action specified by the reciprocity map. Put $n_{\tilde \wp} = [k_{\tilde \wp}: \mathbb{F}_p]$. 
Let $\mathfrak{R}\mathrm{ec}_{Z'}: \Gal_{E_{\tilde \mathtt{S}}} \to Z'(\mathbb{Q})^\mathrm{cl}\backslash Z'(\mathbb{A}^\infty)/Z'(\mathbb{Z}_p)$ denote the reciprocity map defined in Subsection~\ref{S:integral model weak Shimura datum}, where $Z'$ is the center of $G'_{\tilde \mathtt{S}}$, which is the algebraic group associated to the subgroup of $E^\times$ consisting of elements whose norm in $F^{\times}$ lies in $\mathbb{Q}^{\times}$. By definition, $\mathfrak{R}\mathrm{ec}_{Z'}(\mathrm{Frob}_{\tilde \wp})$ is the image of $\varpi_{\tilde \wp}$ under the composite of \[ \mathfrak{R}\mathrm{ec}_{Z', \tilde \wp}\colon E_{\tilde\mathtt{S},\tilde \wp}^\times / \mathcal{O}_{\tilde \wp}^\times \xrightarrow{\Rec_{Z'}(G'_{\tilde \mathtt{S}}, \mathfrak{H}_\mathtt{S})} Z'(\mathbb{Q}_p) /Z'(\mathbb{Z}_p) \] and the natural map $Z'(\mathbb{Q}_p)/Z'(\mathbb{Z}_p) \rightarrow Z'(\mathbb{Q})^\mathrm{cl}\backslash Z'(\mathbb{A}^\infty)/Z'(\mathbb{Z}_p)$. Explicitly, \[ Z'(\mathbb{Q}_p) = \big\{ \big( (x_\mathfrak{p})_{\mathfrak{p} \in \Sigma_p}, y\big)\,\big|\, y \in \mathbb{Q}_p^\times,\ x_\mathfrak{p} \in E_{\mathfrak{p}}^\times, \textrm{ and }\mathrm{Nm}_{E_\mathfrak{p}/F_\mathfrak{p}}(x_\mathfrak{p}) = y\big\}. \] We note that the $p$-adic primes of $F$ of type $\beta^{\sharp}$ contribute no extra factor here, as the valuation of $y$ determines the valuation of $x_\mathfrak{p}$ for $\mathfrak{p}$ of type $\beta^{\sharp}$. 
For each prime $\mathfrak{p}\in \Sigma_p$ of type $\alpha$ or $\alpha^{\sharp}$, choose a place $\mathfrak{q}$ of $E$ above $\mathfrak{p}$; then the map $((x_{\mathfrak{p}})_{\mathfrak{p}}, y)\mapsto (\mathrm{val}_p(y), (\mathrm{val}_{p}(x_{\mathfrak{q}}))_{\mathfrak{p}})$ defines an isomorphism $$\xi:Z'(\mathbb{Q}_p)/Z'(\mathbb{Z}_p)\cong \mathbb{Z} \times \prod_{\mathfrak{p} \textrm{ of type $\alpha$ or $\alpha^{\sharp}$}} \mathbb{Z},$$ where we have written $x_{\mathfrak{p}}=(x_{\mathfrak{q}},x_{\bar\mathfrak{q}})$ for each prime $\mathfrak{p}\in \Sigma_p$ of type $\alpha$ or $\alpha^{\sharp}$. By definition of $\mathfrak{R}\mathrm{ec}_{Z', \tilde \wp}$ in Subsection~\ref{S:integral model weak Shimura datum} using $h'_{\tilde \mathtt{S}}$, we see that $\xi\circ\mathfrak{R}\mathrm{ec}_{Z', \tilde \wp}(\varpi_{\tilde \wp})$ is equal to \begin{equation} \label{E:image of rec} \big(n_{\tilde \wp}, (\#\tilde \mathtt{S}_{\infty / \mathfrak{q}} \cdot n_{\tilde \wp} / f_\mathfrak{p})_{\mathfrak{p} \in \Sigma_p}\big), \end{equation} where $n_{\tilde \wp} = [k_{\tilde \wp}: \mathbb{F}_p]$ and $f_\mathfrak{p}$ is the inertia degree of $\mathfrak{p} $ in $F/\mathbb{Q}$. On the other hand, $\mathrm{Frob}_{\tilde \wp}$ takes a closed point $x = (A, \iota, \lambda, \alpha_{K'})$ of $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{\overline \mathbb{F}_p}$ to $\mathrm{Frob}_{\tilde \wp}(x) = (\mathrm{Frob}_{\tilde \wp}^*(A), \iota^{\mathrm{Frob}_{\tilde \wp}}, \lambda^{\mathrm{Frob}_{\tilde \wp}}, \alpha_{K'} \circ \mathrm{Frob}_{\tilde \wp})$. For a $p$-adic prime $\mathfrak{p}$ of $F$ (or of $E$), denote by $\tilde{\mathcal{D}}(A)_{\mathfrak{p}}$ the covariant Dieudonn\'e module of $A[\mathfrak{p}^{\infty}]$. 
We observe that, if $\mathfrak{p}$ is a prime of $F$ of type $\beta^{\sharp}$, then \[ \tilde{\mathcal{D}}(\mathrm{Frob}^*_{\tilde \wp}(A))_\mathfrak{p} = p^{n_{\tilde \wp}/2} \tilde{\mathcal{D}}(A)_\mathfrak{p}; \] if $\mathfrak{p}$ is of type $\alpha$ or $\alpha^{\sharp}$ with $\mathfrak{q}$ a place of $E$ above $\mathfrak{p}$, then \[ \tilde{\mathcal{D}}(\mathrm{Frob}^*_{\tilde \wp}(A))_\mathfrak{q} = p^{\#\tilde \mathtt{S}_{\infty / \mathfrak{q}} \cdot n_{\tilde \wp} / f_\mathfrak{p}} \tilde{\mathcal{D}}(A)_\mathfrak{q}. \] This agrees with the computation \eqref{E:image of rec} of $\mathfrak{R}\mathrm{ec}_{Z', \tilde \wp}(\varpi_{\tilde \wp})$ above. \end{proof} The rest of this section is devoted to understanding how to transfer the universal abelian varieties on $\mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}})$ to other Shimura varieties, as well as natural partial Frobenius morphisms among these varieties and their compatibility with the abelian varieties. \subsection{Actions on universal abelian varieties in the unitary case} \label{S:abel var in unitary case} We need to extend the usual tame Hecke algebra action on the universal abelian variety $\mathbf{A}'_{K'_p}$ over $\mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}})$ to the action of a slightly bigger group $\widetilde G_{\tilde \mathtt{S}}: = G'_{\tilde \mathtt{S}}(\mathbb{A}^{\infty, p}) G''_{\tilde \mathtt{S}}(\mathbb{Q})_+^{(p)}$. Take an element $\tilde g \in \widetilde G_{\tilde \mathtt{S}}$; let $K'^p_1$ and $K'^p_2$ be two open compact subgroups of $G'_{\tilde \mathtt{S}}(\mathbb{A}^{\infty, p})$ such that $\tilde g^{-1} K'^p_1 \tilde g \subseteq K'^p_2$ (note that $G''_{\tilde \mathtt{S}}$ normalizes $G'_{\tilde \mathtt{S}}$). We put $K'_i = K'^p_iK'_p$ for $i=1,2$. 
Then starting from the universal abelian variety $\mathbf{A}'_{K'_1}$ together with the tame level structure $\bar \boldsymbol{\alpha}_{K'^p_1}^p: \widehat \Lambda^{(p)} \xrightarrow{\cong} T^{(p)} \mathbf{A}'_{K'_1}$, we may obtain an abelian variety $\mathbf{B}'$ over $\mathbf{Sh}_{K'_1}(G'_{\tilde \mathtt{S}})$, together with a prime-to-$p$ quasi-isogeny $\eta: \mathbf{A}'_{K'_1} \to \mathbf{B}' $ and a tame level structure such that the following diagram commutes \[ \xymatrix@C=40pt{ & \widehat \Lambda^{(p)} \ar[r]^-{\bar \boldsymbol{\alpha}^p_{K'^p_1}}_-\cong \ar@{-->}[d] & T^{(p)}( \mathbf{A}'_{K'_1}) \ar@{-->}[d]^{T^{(p)}(\eta)} \\ \widehat \Lambda^{(p)} \ar[r]^-{\cdot \tilde g^{-1}}_-\cong & \tilde g\widehat \Lambda^{(p)} \ar[r]^-\cong & T^{(p)}(\mathbf{B}'), } \] where the left vertical arrow is the natural quasi-isogeny as lattices inside $V \otimes_\mathbb{Q} \mathbb{A}^{\infty,p}$. Since $\tilde g^{-1}K'^p_1 \tilde g \subseteq K'^p_2$, we may take the $K'^p_2$-orbit of the composite of the bottom homomorphisms as the tame level structure. One can easily transfer the other data in the moduli problem of Theorem~\ref{T:unitary-shimura-variety-representability} to $\mathbf{B}'$, \emph{except for the polarization}, for which we make the following modification: since $\tilde g \in G'_{\tilde \mathtt{S}}(\mathbb{A}^{\infty, p}) G''_{\tilde \mathtt{S}}(\mathbb{Q})_+^{(p)}$, we have $\nu(\tilde g) \in \mathcal{O}_{F, (p)}^{\times,>0} \cdot \mathbb{A}_\mathbb{Q}^{\infty, p,\times} = \mathcal{O}_{F, (p)}^{\times,>0} \cdot \widehat \mathbb{Z}^{(p),\times}$. We can then write $\nu(\tilde g)$ as the product $\nu^+_{\tilde g} \cdot u$ for $\nu^+_{\tilde g} \in \mathcal{O}_{F, (p)}^{\times,>0}$ and $u \in \widehat \mathbb{Z}^{(p),\times}$. In fact, $\nu^+_{\tilde g}$ is uniquely determined by this restriction. 
We take the polarization on $\mathbf{B}'$ to be the composite of a sequence of quasi-isogenies: \[ \lambda_{\mathbf{B}'}: \mathbf{B}' \xleftarrow{\quad} \mathbf{A}'_{K'_1} \xrightarrow{\nu^+_{\tilde g} \lambda_{\mathbf{A}'}} \big(\mathbf{A}'_{K'_1}\big)^\vee \xrightarrow{\quad} \mathbf{B}'^\vee. \] This modification ensures that $\mathbf{B}'$ satisfies condition (c1) of Theorem~\ref{T:unitary-shimura-variety-representability}. The moduli problem then implies that $\mathbf{B}' \cong (H_{ \tilde g})^*(\mathbf{A}'_{K'_2})$ for a uniquely determined morphism $H_{\tilde g}:\mathbf{Sh}_{K'_1}(G'_{\tilde \mathtt{S}})\to \mathbf{Sh}_{K'_2}(G'_{\tilde \mathtt{S}})$; this gives the action of $\widetilde G_{\tilde \mathtt{S}}$. Moreover, we have a quasi-isogeny \[ H_{\tilde g}^\#: \mathbf{A}'_{K'_1} \xrightarrow{\eta} \mathbf{B}' \cong (H_{\tilde g})^*(\mathbf{A}'_{K'_2})\] giving rise to an equivariant action of $\widetilde G_{\tilde \mathtt{S}}$ on the universal abelian varieties $\mathbf{A}'_{K'_p}$ over $\mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}})$. One easily checks that the action of the diagonal $ \mathcal{O}_{E,(p)}^\times$ on the Shimura variety $\mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}})$ is trivial, and hence we have an action of $\mathcal{G}'_{\tilde \mathtt{S}}= \widetilde G_{\tilde \mathtt{S}} / \mathcal{O}_{E,(p)}^{\times, \mathrm{cl}}$ on $\mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}})$. However, the action of $\mathcal{O}_{E,(p)}^\times$ on the universal abelian variety $\mathbf{A}'_{K'_p}$ is \emph{not} trivial. So the latter does not carry a natural action of $\mathcal{G}'_{\tilde \mathtt{S}}$. Hence our earlier framework for Shimura varieties does not apply to this case directly. 
However, we observe that, by the construction at the end of Subsection~\ref{A:connected integral model}, \[ \mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}}) = \mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}}) \times_{\mathcal{G}'_{\tilde \mathtt{S}}} \mathcal{G}''_{\tilde \mathtt{S}} = \mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}}) \times_{\widetilde G_{\tilde \mathtt{S}}} G''_{\tilde \mathtt{S}}(\mathbb{A}^{\infty,p}). \] So \[ \mathbf{A}''_{K''_p}: = \mathbf{A}'_{K'_p}\times_{\mathcal{G}'_{\tilde \mathtt{S}}} \mathcal{G}''_{\tilde \mathtt{S}} = \mathbf{A}'_{K'_p} \times_{\widetilde G_{\tilde \mathtt{S}}} G''_{\tilde \mathtt{S}}(\mathbb{A}^{\infty,p}) \] gives a natural family of abelian varieties over $\mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})$. We will not discuss families of abelian varieties over the quaternionic Shimura variety $\mathbf{Sh}_{K_p}(G_\mathtt{S})$ (except when $\mathtt{S} =\emptyset$). \subsection{Automorphic $l$-adic systems on $\mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})$ and their geometric interpretation} By a \emph{multiweight}, we mean a tuple $(\underline{k}, w) =((k_\tau)_{\tau\in\Sigma_\infty}, w) \in \mathbb{N}^{[F:\mathbb{Q}]} \times \mathbb{N}$ such that $ k_\tau \geq 2$ and $w \equiv k_\tau \pmod 2$ for each $\tau$. We also fix a section of the natural map $\Sigma_{E, \infty} \to \Sigma_\infty$, that is, to fix an extension $\tilde \tau$ to $E$ of each real embedding $\tau \in \Sigma_\infty$ of $F$; we use $\widetilde \Sigma$ to denote the image of this section. In this subsection, we use $\tilde \tau$ to denote this chosen lift of $\tau$. We fix a subfield $L$ of $\overline \mathbb{Q} \subset \mathbb{C}$ containing all embeddings of $E$, as the coefficient field. Let $\mathfrak{l}$ be a finite place of $L$ over a prime $l$ with $l \neq p$. Fix an isomorphism $\iota_\mathfrak{l}:\mathbb{C} \simeq \overline L_\mathfrak{l} $. 
Consider the injection \[ G''_{\tilde \mathtt{S}} \times_\mathbb{Q} L = \big( \Res_{F/\mathbb{Q}}(B_\mathtt{S}^\times) \times_{\Res_{F/\mathbb{Q}}\mathbb{G}_m} \Res_{E/\mathbb{Q}}\mathbb{G}_m \big) \times_\mathbb{Q} L \hookrightarrow \Res_{E/\mathbb{Q}} D^\times_\mathtt{S} \times_\mathbb{Q} L \cong \prod_{\tau \in \Sigma_\infty} \mathrm{GL}_{2, L, \tilde \tau} \times \mathrm{GL}_{2, L, \tilde \tau^c}, \] where $E^\times$ acts on $\mathrm{GL}_{2, L, \tilde \tau}$ (resp. $\mathrm{GL}_{2, L, \tilde \tau^c}$) through $\tilde \tau$ (resp. $\tilde \tau^c$). For a multiweight $(\underline k, w)$, we consider the following representation of $G''_{\tilde \mathtt{S}} \times_\mathbb{Q} L$: \[ \rho''^{(\underline{k}, w)}_{\tilde \mathtt{S}, \widetilde \Sigma} = \bigotimes_{\tau \in \widetilde \Sigma} \rho_\tau^{(k_\tau, w)} \circ \check\mathrm{pr}_{\tilde \tau} \quad \textrm{with} \quad \rho_\tau^{(k_\tau, w)} = \Sym^{k_\tau-2} \otimes \det{}^{\frac{w-k_\tau}{2}} , \] where $\tau$ is the restriction of $\tilde \tau$ to $F$, and $\check\mathrm{pr}_{\tilde \tau}$ is the \emph{contragredient} of the natural projection to the $\tilde \tau$-component of $G''_{\tilde \mathtt{S}} \times_\mathbb{Q} L \hookrightarrow \Res_{E/\mathbb{Q}} D_\mathtt{S}^\times \times_\mathbb{Q} L$. Note that $\rho''^{(\underline k, w)}_{\tilde \mathtt{S}, \widetilde \Sigma}$ is trivial on the maximal $\mathbb{Q}$-anisotropic $\mathbb{R}$-split subtorus of the center of $G''_{\tilde \mathtt{S}}$, i.e. $\Ker(\Res_{F/\mathbb{Q}}\mathbb{G}_m \to \mathbb{G}_m)$. By \cite[Ch. III, \S 7]{milne}, $\rho''^{(\underline k, w)}_{\tilde \mathtt{S}, \widetilde \Sigma}$ corresponds to a lisse $L_\mathfrak{l}$-sheaf $\mathscr{L}''^{(\underline{k}, w)}_{\tilde \mathtt{S}, \widetilde \Sigma}$ over the Shimura variety $\mathbf{Sh}_{K''}(G''_{\tilde \mathtt{S}})$ compatibly as the level structure changes. 
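To illustrate the normalization in the simplest special case: for the parallel multiweight $(\underline k, w) = ((2, \dots, 2), 2)$, each
\[
\rho_\tau^{(2,2)} = \Sym^{0}\otimes \det{}^{0}
\]
is the trivial representation, so $\mathscr{L}''^{(\underline k, w)}_{\tilde \mathtt{S}, \widetilde \Sigma}$ is the constant lisse sheaf $L_\mathfrak{l}$.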
We now give a geometric interpretation of this automorphic $l$-adic sheaf on $\mathbf{Sh}_{K''}(G''_{\tilde \mathtt{S}})$. For this, we fix an isomorphism $D_\mathtt{S}\simeq \mathrm{M}_2(E)$ and let $\mathfrak{e} = \big( \begin{smallmatrix} 1&0\\0&0 \end{smallmatrix}\big)\in \mathrm{M}_2(\mathcal{O}_E)$ denote the idempotent element. Let $\mathbf{A}'' = \mathbf{A}''_{\tilde \mathtt{S}, K''}$ denote the natural family of abelian varieties constructed in Subsection~\ref{S:abel var in unitary case}. Let $V(\mathbf{A}'')$ denote the $l$-adic Tate module of $\mathbf{A}''$. We then have a decomposition \[ V(\mathbf{A}'') \otimes_{\mathbb{Q}_l} L_\mathfrak{l} \cong \bigoplus_{\tau \in\Sigma_\infty} \big(V(\mathbf{A}'')_{\tilde \tau} \oplus V(\mathbf{A}'')_{\tilde \tau^c} \big) = \bigoplus_{\tau \in\Sigma_\infty} \big(V(\mathbf{A}'')^{\circ, \oplus 2}_{\tilde \tau} \oplus V(\mathbf{A}'')^{\circ, \oplus 2}_{\tilde \tau^c} \big), \] where $V(\mathbf{A}'')_{\tilde \tau}$ (resp. $V(\mathbf{A}'')_{\tilde \tau^c}$) is the component where $\mathcal{O}_E$ acts through $\iota_\mathfrak{l} \circ\tilde \tau$ (resp. $\iota_\mathfrak{l} \circ \tilde \tau^c$), and $V(\mathbf{A}'')_{\tilde \tau}^\circ = \mathfrak{e} V(\mathbf{A}'')_{\tilde \tau}$ (resp. $V(\mathbf{A}'')_{\tilde \tau^c}^\circ = \mathfrak{e} V(\mathbf{A}'')_{\tilde \tau^c}$) is a lisse $L_\mathfrak{l}$-sheaf of rank $2$. For a multiweight $(\underline k, w)$, we put \[ \mathcal{L}^{(\underline{k},w)}_{\widetilde\Sigma}(\mathbf{A}'')=\bigotimes_{\tau \in \widetilde \Sigma} \bigg( \Sym^{k_\tau -2} V(\mathbf{A}'')_{\tilde \tau}^{\circ, \vee} \otimes (\wedge^2 V(\mathbf{A}'')_{\tilde \tau}^{\circ, \vee})^{\frac{w-k_\tau}2} \bigg). \] Note the duals on the Tate modules mean that we are essentially taking the relative first \'etale \emph{cohomology}. 
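Note that, since each $V(\mathbf{A}'')^{\circ}_{\tilde \tau}$ is lisse of rank $2$ and $\Sym^{k_\tau-2}$ of a rank-$2$ sheaf is lisse of rank $k_\tau-1$, the sheaf $\mathcal{L}^{(\underline{k},w)}_{\widetilde\Sigma}(\mathbf{A}'')$ is a lisse $L_\mathfrak{l}$-sheaf of rank
\[
\prod_{\tau\in \Sigma_\infty} (k_\tau - 1).
\]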
The moduli interpretation implies that we have a canonical isomorphism \[ \mathscr{L}''^{(\underline k, w)}_{\tilde \mathtt{S}, \widetilde \Sigma} \cong \mathcal{L}_{\widetilde \Sigma}^{(\underline k, w)}(\mathbf{A}''_{\tilde \mathtt{S}}). \] \subsection{Twisted Partial Frobenius} \label{S:partial Frobenius} The action of the twisted partial Frobenius and its compatibility with the GO-strata description will be the key to later applications in \cite{tian-xiao2}. We start with the action of the twisted partial Frobenius on the universal abelian scheme $\mathbf{A}'_{\tilde \mathtt{S}} = \mathbf{A}'_{\tilde \mathtt{S}, K'}$ over the unitary Shimura variety $\mathbf{Sh}_{K'}(G_{\tilde \mathtt{S}}')$. Fix $\mathfrak{p} \in \Sigma_p$. We define an action of $\sigma_\mathfrak{p}$ on $\Sigma_{E,\infty}$ as follows: for $\tilde\tau\in \Sigma_{E,\infty}$, we put \begin{equation}\label{E:defn-sigma-gothp} \sigma_{\mathfrak{p}}\tilde\tau=\begin{cases}\sigma \circ\tilde\tau & \text{if }\tilde \tau\in \Sigma_{E,\infty/\mathfrak{p}},\\ \tilde\tau &\text{if }\tilde\tau\notin \Sigma_{E,\infty/\mathfrak{p}}, \end{cases} \end{equation} where $\Sigma_{E,\infty/\mathfrak{p}}$ denotes the set of lifts of places in $\Sigma_{\infty/\mathfrak{p}}$. Note that $\sigma_{\mathfrak{p}}$ induces a natural action on $\Sigma_{\infty}$, and $\prod_{\mathfrak{p}\in \Sigma_p}\sigma_{\mathfrak{p}}=\sigma$ is the Frobenius action. Let $\sigma_{\mathfrak{p}}\tilde \mathtt{S}$ denote the image of $\tilde \mathtt{S}$ under $\sigma_{\mathfrak{p}}$. We fix an isomorphism $B_{\sigma_{\mathfrak{p}}\mathtt{S}}\otimes\mathbb{A}^{\infty}\simeq B_{\mathtt{S}}\otimes\mathbb{A}^{\infty}$, which in turn induces an isomorphism $G'_{\sigma_\mathfrak{p}\tilde \mathtt{S}}(\mathbb{A}^{\infty})\simeq G'_{\tilde \mathtt{S}}(\mathbb{A}^{\infty})$. We may thus regard $K'$ as an open subgroup of $G'_{\sigma_{\mathfrak{p}}\tilde \mathtt{S}}(\mathbb{A}^{\infty})$.
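For illustration (a hypothetical minimal configuration, not part of the running setup), suppose that $\Sigma_p=\{\mathfrak{p}_1,\mathfrak{p}_2\}$ consists of two primes, so that $\Sigma_{E,\infty}=\Sigma_{E,\infty/\mathfrak{p}_1}\sqcup \Sigma_{E,\infty/\mathfrak{p}_2}$. Then for $\tilde\tau\in \Sigma_{E,\infty/\mathfrak{p}_1}$, the definition \eqref{E:defn-sigma-gothp} reads \[ \sigma_{\mathfrak{p}_1}\tilde\tau=\sigma\circ\tilde\tau, \qquad \sigma_{\mathfrak{p}_2}\tilde\tau=\tilde\tau, \qquad (\sigma_{\mathfrak{p}_1}\sigma_{\mathfrak{p}_2})\tilde\tau=\sigma\circ\tilde\tau=\sigma\tilde\tau, \] in agreement with the identity $\prod_{\mathfrak{p}\in \Sigma_p}\sigma_{\mathfrak{p}}=\sigma$ above.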
Note that a prime $\mathfrak{p}'\in \Sigma_p$ has the same type with respect to $\tilde \mathtt{S}$ and to $\sigma_{\mathfrak{p}}\tilde \mathtt{S}$. We therefore get a unitary Shimura variety $\mathbf{Sh}_{K'}(G'_{\sigma_{\mathfrak{p}}\tilde \mathtt{S}})$. We also point out that the $p$-adic completions at $\tilde \wp$ of the reflex fields for $G'_{\tilde \mathtt{S}}$ and for $G'_{\sigma_\mathfrak{p} \tilde \mathtt{S}}$ are the same. Let $S$ be a locally noetherian $k_{\tilde \wp}$-scheme and let $(A, \iota, \lambda, \bar \alpha_{K'})$ be an $S$-point on $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_{\tilde \wp}}$. We will define a new $S$-point $(A',\iota', \lambda', \bar{\alpha}'_{K'})$ on $\mathbf{Sh}_{K'}(G'_{\sigma^2_\mathfrak{p} \tilde \mathtt{S}})_{k_{\tilde \wp}}$ as follows. The kernel of the relative $p^2$-Frobenius $\mathrm{Fr}^2_{A}\colon A \to A^{(p^2/S)}$ carries an action of $\mathcal{O}_F$, and we denote by $\Ker_{\mathfrak{p}^2}$ its $\mathfrak{p}$-component. We put $A'=(A/\Ker_{\mathfrak{p}^2})\otimes_{\mathcal{O}_F} \mathfrak{p}$ with its induced action of $\mathcal{O}_{D_{\mathtt{S}}}$. It also comes equipped with a quasi-isogeny $\eta$ given by the composite \[ \eta: A \longrightarrow A/\Ker_{\mathfrak{p}^2} \longleftarrow (A/\Ker_{\mathfrak{p}^2})\otimes_{\mathcal{O}_F} \mathfrak{p} =A'. \] It induces canonical isomorphisms of $p$-divisible groups $A'[\mathfrak{q}^{\infty}]\simeq A[\mathfrak{q}^{\infty}]$ for $\mathfrak{q}\in \Sigma_{p}$ with $\mathfrak{q}\neq \mathfrak{p}$, and $A'[\mathfrak{p}^{\infty}]\simeq A[\mathfrak{p}^{\infty}]^{(p^2)}$. From this, one easily checks the signature condition for $A'$. We define the polarization $\lambda'$ to be the quasi-isogeny given by the composite \begin{equation} \label{E:polarization for partial frob} A' \xleftarrow{\ \eta\ } A \xrightarrow{\ \lambda\ } A^\vee \xleftarrow{\ \eta^\vee\ } A'^\vee.
\end{equation} We have to check that $\lambda'$ is a genuine isogeny and that it satisfies the condition of Theorem~\ref{T:unitary-shimura-variety-representability}(b) at the prime $\mathfrak{p}$. By the fibral criterion for flatness, it suffices to do this after base change to every geometric point of $S$. We may thus suppose that $S=\Spec(k)$ for an algebraically closed field $k$ of characteristic $p$. Let $\tcD(A)_{\mathfrak{p}}$ be the covariant Dieudonn\'e module of the $p$-divisible group $A[\mathfrak{p}^{\infty}]$, and similarly for $\tcD(A')_{\mathfrak{p}}$. By definition, we have \[ \tcD(A')_{\mathfrak{p}}=p\tcD(A/\Ker_{\mathfrak{p}^2})_{\mathfrak{p}}=pV^{-2}\tcD(A)_{\mathfrak{p}}=p^{-1}F^2\tcD(A)_{\mathfrak{p}}, \] where $V^{-2}\tcD(A)_{\mathfrak{p}}$ denotes the inverse image of $\tcD(A)_{\mathfrak{p}}$ under the bijective endomorphism $V^{2}$ of $\tcD(A)_{\mathfrak{p}}[1/p]$. Applying the Dieudonn\'e functor to \eqref{E:polarization for partial frob}, we get \[ \lambda'_*: \tcD(A')_{\mathfrak{p}}=pV^{-2}\tcD(A)_{\mathfrak{p}} \xleftarrow{\ \eta_*\ } \tcD(A)_{\mathfrak{p}} \xrightarrow{\ \lambda_*\ } \tcD(A^\vee)_{\mathfrak{p}} \xleftarrow{\ \eta^\vee_*\ } \tcD(A'^\vee)_{\mathfrak{p}} = p^{-1}F^2\tcD(A^\vee)_{\mathfrak{p}}. \] Now it is easy to see that $\lambda'$ is an isogeny, and the condition on $\lambda'$ follows from that for $\lambda$. The tame level structure $\bar\alpha'_{K'}$ is given by the composite \[ \widehat\Lambda^{(p)} \xrightarrow{\alpha_{K'}} T^{(p)}A \xrightarrow \cong T^{(p)}(A/\Ker_{\mathfrak{p}^2}) \xleftarrow \cong T^{(p)}((A / \Ker_{\mathfrak{p}^2}) \otimes_{\mathcal{O}_F} \mathfrak{p}) = T^{(p)}(A'). \] It remains to define the subgroups $\alpha'_{\mathfrak{p}'}$ for all $\mathfrak{p}' \in \Sigma_p$ of type $\alpha^\sharp$. The definition is clear for $\mathfrak{p}'\neq \mathfrak{p}$, since $A'[\mathfrak{p}'^{\infty}]$ is canonically identified with $A[\mathfrak{p}'^{\infty}]$.
Suppose now that $\mathfrak{p}' = \mathfrak{p}$ is of type $\alpha^\sharp$. In the data of $\alpha'_\mathfrak{p} = H'_\mathfrak{q} \oplus H'_{\bar \mathfrak{q}}\subseteq A'[\mathfrak{p}]$, the subgroup $H'_{\bar\mathfrak{q}}$ is determined as the orthogonal complement of $H'_{\mathfrak{q}}$ under the Weil pairing on $A'[\mathfrak{p}]$. Therefore, it suffices to construct $H'_{\mathfrak{q}}$, or equivalently an $\mathcal{O}_{D_{\mathtt{S}}}$-isogeny $f':A'\rightarrow B'=A'/H'_{\mathfrak{q}}$ with kernel contained in $A'[\mathfrak{q}]$ of degree $\#k_{\mathfrak{p}}^2$. Let $f:A\rightarrow B=A/H_{\mathfrak{q}}$ be the isogeny given by $\alpha_{\mathfrak{p}}$. We write $\Ker_{\mathfrak{p}^2, B}$ for the $\mathfrak{p}$-component of the kernel of the relative $p^2$-Frobenius $B \to B^{(p^2)}$. It is easy to see that $f$ induces a natural isogeny $f_{\mathfrak{p}^2}: A / \Ker_{\mathfrak{p}^2} \to B / \Ker_{\mathfrak{p}^2, B}$. Then $H'_\mathfrak{q}$ is defined to be the kernel of \[ f_{\mathfrak{p}^2} \otimes 1:\ A' = (A / \Ker_{\mathfrak{p}^2}) \otimes \mathfrak{p} \longrightarrow (B / \Ker_{\mathfrak{p}^2, B}) \otimes \mathfrak{p} =: B', \] and $\alpha'_\mathfrak{p}$ is the direct sum of $H'_\mathfrak{q}$ and its orthogonal complement $H'_{\bar \mathfrak{q}}$. To sum up, we obtain a morphism \begin{equation}\label{E:twist-partial-Frob} \mathfrak{F}'_{\mathfrak{p}^2}: \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_{\tilde \wp}} \to \mathbf{Sh}_{K'}(G'_{\sigma^2_\mathfrak{p} \tilde \mathtt{S}})_{k_{\tilde \wp}}. \end{equation} In all cases, we call the morphism $\mathfrak{F}'_{\mathfrak{p}^2}$ the \emph{twisted partial Frobenius map} on the unitary Shimura varieties.
Moreover, if $\mathbf{A}'_{\tilde \mathtt{S}}$ and $\mathbf{A}'_{\sigma^2_{\mathfrak{p}}\tilde \mathtt{S}}$ are respectively the universal abelian schemes over $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})$ and $\mathbf{Sh}_{K'}(G'_{\sigma^2_{\mathfrak{p}}\tilde \mathtt{S}})$, we have the following universal quasi-isogeny: \begin{equation}\label{E:universal-isog-Frob} \eta'_{\mathfrak{p}^2}\colon \mathbf{A}'_{\tilde \mathtt{S}, k_{\tilde \wp}} \longrightarrow \mathfrak{F}'^{*}_{\mathfrak{p}^2}(\mathbf{A}'_{\sigma^2_{\mathfrak{p}}\tilde \mathtt{S}, k_{\tilde \wp}}). \end{equation} It is clear from the definition that the pairs $(\mathfrak{F}'_{\mathfrak{p}^2}, \eta'_{\mathfrak{p}^2})$ for different $\mathfrak{p} \in \Sigma_p$ commute with each other. Let $S_p: \mathbf{Sh}_{K'}(G_{\tilde\mathtt{S}}')\rightarrow \mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})$ be the automorphism defined by $(A,\iota,\lambda,\bar\alpha_{K'})\mapsto (A,\iota,\lambda, p\bar\alpha_{K'})$. It is clear that $S_p^*\mathbf{A}'_{\tilde\mathtt{S}}\cong \mathbf{A}'_{\tilde\mathtt{S}}$. Hence, $S_p$ induces an automorphism of the cohomology groups $H^{\star}_{\mathrm{rig}}(\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}}), \mathscr{D}^{(\underline{k},w)}_{\widetilde{\Sigma}}(\mathbf{A}'_{\tilde\mathtt{S},k_0}))$, still denoted by $S_p$. If \[ F^2_{\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_{\tilde \wp}}/k_{\tilde \wp}}\colon \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_{\tilde \wp}} \longrightarrow \mathbf{Sh}_{K'}(G'_{\sigma^2\tilde \mathtt{S}})_{k_{\tilde \wp}}\simeq \mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_{\tilde\wp}}^{(p^2)} \] denotes the relative $p^2$-Frobenius, then we have $ S_p^{-1}\circ F^2_{\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_{\tilde \wp}}/k_{\tilde \wp}} = \prod_{\mathfrak{p}\in\Sigma_p} \mathfrak{F}'_{\mathfrak{p}^2}.
$ Similarly, if $[p]: \mathbf{A}'^{(p^2)}_{\tilde \mathtt{S}} \rightarrow \mathbf{A}'^{(p^2)}_{\tilde \mathtt{S}}$ denotes multiplication by $p$ and \[ F^2_{A} \colon \mathbf{A}'_{\tilde \mathtt{S}} \to (F^2_{\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_{\tilde \wp}}/k_{\tilde \wp}})^* (\mathbf{A}'_{\sigma^2\tilde \mathtt{S}}) \cong \mathbf{A}'^{(p^2)}_{\tilde \mathtt{S}} \] denotes the relative $p^2$-Frobenius homomorphism, then we have $ [p]^{-1}\circ F_{A}^2 = \prod_{\mathfrak{p} \in \Sigma_p} \eta'_{\mathfrak{p}^2}. $ Finally, we note that all the discussions above are equivariant with respect to the action of the Galois group and the action of $\widetilde G_{\tilde \mathtt{S}} = G''_{\tilde\mathtt{S}}(\mathbb{Q})^{+, (p)} G'_{\tilde\mathtt{S}}(\mathbb{A}^{\infty, p}) \simeq G''_{\sigma_\mathfrak{p}^2\tilde\mathtt{S}}(\mathbb{Q})^{+, (p)} G'_{\sigma_\mathfrak{p}^2\tilde\mathtt{S}}(\mathbb{A}^{\infty, p})$ when passing to the limit. (The isomorphism follows from the description of the group $\widetilde G_{\tilde \mathtt{S}}$ in \eqref{E:description of G'' G'}.) So applying $-\times_{\widetilde G_{\tilde \mathtt{S}}} G''_{\tilde \mathtt{S}}(\mathbb{A}^{\infty, p})$ to the construction gives the following. \begin{prop} \label{P:product of partial Frobenius} Let $\mathbf{A}''_{\tilde \mathtt{S}}$ denote the natural family of abelian varieties over $\mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})$. We identify the level structure for $G''_{\tilde \mathtt{S}}$ with that of $G''_{\sigma_\mathfrak{p}^2\tilde \mathtt{S}}$ similarly. Then for each $\mathfrak{p} \in \Sigma_p$, we have a $G''_{\tilde \mathtt{S}}(\mathbb{A}^{\infty, p})$-equivariant natural \emph{twisted partial Frobenius morphism} and a quasi-isogeny of families of abelian varieties:
\[ \mathfrak{F}''_{\mathfrak{p}^2}\colon \mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})_{k_{\tilde \wp}} \longrightarrow \mathbf{Sh}_{K''_p}(G''_{\sigma_\mathfrak{p}^2\tilde \mathtt{S}})_{k_{\tilde \wp}} \quad \textrm{and} \quad \eta''_{\mathfrak{p}^2} \colon \mathbf{A}''_{\tilde \mathtt{S}, k_{\tilde \wp}} \longrightarrow \mathfrak{F}''^{*}_{\mathfrak{p}^2}(\mathbf{A}''_{\sigma^2_{\mathfrak{p}}\tilde \mathtt{S}, k_{\tilde \wp}}). \] This induces a natural $G''_{\tilde \mathtt{S}}(\mathbb{A}^{\infty, p})$-equivariant homomorphism of \'etale cohomology groups: \[ \xymatrix{ H^*_\mathrm{et}\big( \mathbf{Sh}_{K''_p}(G''_{\sigma_\mathfrak{p}^2\tilde \mathtt{S}})_{\overline \mathbb{F}_p}, \mathcal{L}_{\tilde \Sigma}^{(\underline k, w)}(\mathbf{A}''_{\sigma_\mathfrak{p}^2 \tilde \mathtt{S}})\big) \ar[rr]^-{\mathfrak{F}''^* _{\mathfrak{p}^2}} \ar[drr]_{\Phi_{\mathfrak{p}^2}} && H^*_\mathrm{et}\big( \mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})_{\overline \mathbb{F}_p}, \mathcal{L}_{\tilde \Sigma}^{(\underline k, w)}(\mathfrak{F}''^* _{\mathfrak{p}^2}\mathbf{A}''_{\sigma_\mathfrak{p}^2 \tilde \mathtt{S}})\big) \ar[d]^{\eta''^*_{\mathfrak{p}^2}} \\ && H^*_\mathrm{et}\big( \mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})_{\overline \mathbb{F}_p}, \mathcal{L}_{\tilde \Sigma}^{(\underline k, w)}(\mathbf{A}''_{\tilde \mathtt{S}})\big) } \] Moreover, we have an equality of morphisms \[ \prod_{\mathfrak{p} \in \Sigma_p} \Phi_{\mathfrak{p}^2} = S_p^{-1} \circ F^2 \colon H^*_\mathrm{et}\big( \mathbf{Sh}_{K''_p}(G''_{\sigma^2\tilde \mathtt{S}})_{\overline \mathbb{F}_p}, \mathcal{L}_{\tilde \Sigma}^{(\underline k, w)}(\mathbf{A}''_{\sigma^2 \tilde \mathtt{S}})\big) \longrightarrow H^*_\mathrm{et}\big( \mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})_{\overline \mathbb{F}_p}, \mathcal{L}_{\tilde \Sigma}^{(\underline k, w)}(\mathbf{A}''_{ \tilde \mathtt{S}})\big), \] where $F^2$ is the relative $p^2$-Frobenius and $S_p$ is the Hecke action given by multiplication by $\underline p^{-1}$. 
Here $\underline p$ is the idele element which is $p$ at all places above $p$ and $1$ elsewhere. \end{prop} \begin{proof} This is clear from the construction. \end{proof} \subsection{Comparison with the Hilbert modular varieties}\label{S:comparison-Hilbert} When $\mathtt{S}=\emptyset$, we have $G_{\emptyset}=\Res_{F/\mathbb{Q}}(\mathrm{GL}_{2,F})$. Let $K_p=\mathrm{GL}_2(\mathcal{O}_{F}\otimes_{\mathbb{Z}} \mathbb{Z}_p)$. It is well known that $\mathrm{Sh}_{K_p}(G_{\emptyset})=\varprojlim_{K^p}\mathrm{Sh}_{K^pK_p}(G_{\emptyset})$ is a projective system of Shimura varieties defined over $\mathbb{Q}$, and it parametrizes polarized Hilbert-Blumenthal abelian varieties (HBAV for short) with prime-to-$p$ level structure. Using this moduli interpretation, one can construct an integral canonical model over $\mathbb{Z}_p$ of $\mathrm{Sh}_{K_p}(G_{\emptyset})$ as in \cite{rap,lan}. By the uniqueness of the integral canonical model, we know that this classical integral model is isomorphic to $\mathbf{Sh}_{K_p}(G_{\emptyset})$ constructed in Corollary~\ref{C:integral-model-quaternion}. For this ``abstract'' isomorphism to be useful in applications, we need to relate the universal HBAV $\mathcal{A}$ on $\mathbf{Sh}_{K_p}(G_{\emptyset})$ and the abelian scheme $\mathbf{A}''_\emptyset$ on $\mathbf{Sh}_{K''_p}(G''_\emptyset)$ constructed at the end of Subsection~\ref{S:abel var in unitary case}. Let $G_{\emptyset}^\star \subseteq G_{\emptyset}$ be the inverse image of $\mathbb{G}_{m,\mathbb{Q}}\subseteq T_{F}=\Res_{F/\mathbb{Q}}(\mathbb{G}_{m,\mathbb{Q}})$ via the determinant map $\nu: G_{\emptyset}\rightarrow T_{F}$. The homomorphism $h_{\emptyset}: \mathbb{C}^{\times}\rightarrow G_{\emptyset}(\mathbb{R})$ factors through $G^\star_\emptyset(\mathbb{R})$. We can talk about the Shimura variety associated to $(G^\star_{\emptyset}, h_{\emptyset})$. We put $K^\star _p=K_p\cap G^\star_{\emptyset}(\mathbb{Q}_p)$. 
Then by Corollary~\ref{C:Sh(G)^circ_Zp independent of G}, $\mathrm{Sh}_{K_p}(G_{\emptyset})$ and $\mathrm{Sh}_{K_{p}^\star}(G^\star_{\emptyset})$ have isomorphic neutral connected components $\mathrm{Sh}_{K_p}(G_{\emptyset})^\circ_{\mathbb{Q}_{p}^{\mathrm{ur}}}\simeq \mathrm{Sh}_{K^\star_p}(G^\star_\emptyset)^{\circ}_{\mathbb{Q}_p^{\mathrm{ur}}}$. The Shimura variety $\mathrm{Sh}_{K^\star_p}(G^\star_\emptyset)$ is of PEL-type, and the universal abelian scheme on $\mathrm{Sh}_{K^\star_p}(G^\star_\emptyset)^\circ_{\mathbb{Q}_p^{\mathrm{ur}}}$ is identified with that on $\mathrm{Sh}_{K_p}(G_{\emptyset})^\circ_{\mathbb{Q}_p^{\mathrm{ur}}}$ via the isomorphism above. Actually, if $K^p\subseteq \mathrm{GL}_2(\mathbb{A}_F^{\infty,p})$ is an open compact subgroup such that $\det(K^p\cap \mathcal{O}_{F}^{\times})=\det(K^p)\cap \mathcal{O}_{F}^{\times, +}$, where $\mathcal{O}^{\times, +}_{F}$ denotes the set of totally positive units of $F$, then $\mathrm{Sh}_{K^pK_p}(G_{\emptyset})$ is isomorphic to a finite union of copies of $\mathrm{Sh}_{K^{\star p}K^\star_{p}}(G^\star_{\emptyset})$ \cite[4.2.1]{hida}. We now describe the Hilbert moduli problem that defines an integral canonical model of $\mathrm{Sh}_{K^\star_p}(G_{\emptyset}^\star)$. Let $\widehat{\mathcal{O}}_F^{(p)}=\prod_{v\nmid p\infty}\mathcal{O}_{F_v}$, and put $\widehat\Lambda_{F}^{(p)}=\widehat{\mathcal{O}}_F^{(p)}e_1\oplus \widehat{\mathcal{O}}_F^{(p)}\mathfrak{d}_F^{-1} e_{2}$. We endow $\widehat \Lambda_{F}^{(p)}$ with the symplectic form \[ \psi_F( a_1 e_1+a_2e_2, b_1e_1+b_2e_2)=\mathrm{Tr}_{F/\mathbb{Q}}(a_2b_1-a_1b_2)\in \widehat{\mathbb{Z}}^{(p)} \] for $a_1, b_1\in \widehat{\mathcal{O}}_F^{(p)}$ and $a_2,b_2\in \mathfrak{d}_F^{-1}\widehat{\mathcal{O}}^{(p)}_F$. It is an elementary fact that every rank two free $\widehat{\mathcal{O}}_F^{(p)}$-module together with a $\widehat{\mathbb{Z}}^{(p)}$-linear $\mathcal{O}_F$-hermitian symplectic form is isomorphic to $(\widehat\Lambda^{(p)}_F, \psi_F)$.
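Let us spell out, purely as a sanity check, the sense in which $\psi_F$ is alternating and $\mathcal{O}_F$-hermitian: directly from the displayed formula one verifies that \[ \psi_F(v,v)=0 \qquad\text{and}\qquad \psi_F(\alpha v, w)=\psi_F(v,\alpha w) \qquad \text{for all } v, w\in \widehat\Lambda_F^{(p)} \text{ and } \alpha\in \widehat{\mathcal{O}}_F^{(p)}, \] since both sides of the second equality equal $\mathrm{Tr}_{F/\mathbb{Q}}\big(\alpha(a_2b_1-a_1b_2)\big)$ in the coordinates above.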
Let $K^\star_p=\mathrm{GL}_2(\mathcal{O}_{F}\otimes \mathbb{Z}_p)\cap G^\star_{\emptyset}(\mathbb{Q}_p)$ be as above. For an open compact subgroup $K^{\star p}$, we put $K^\star=K^{\star p}K_p^\star$. Assume that $K^{\star p}$ stabilizes the lattice $\widehat \Lambda_F^{(p)}$. We consider the functor that associates to each connected locally noetherian $\mathbb{Z}_p$-scheme $S$ the set of isomorphism classes of quadruples $(A,\iota,\lambda, \alpha_{K^{\star p}})$, where \begin{enumerate} \item $(A,\iota)$ is a HBAV, i.e. an abelian scheme $A/S$ of dimension $g$ equipped with a homomorphism $\iota: \mathcal{O}_F\hookrightarrow \End_S(A)$, \item $\lambda : A\rightarrow A^\vee$ is an $\mathcal{O}_F$-linear $\mathbb{Z}_{(p)}^{\times}$-polarization in the sense of \cite[1.3.2.19]{lan}, \item $\alpha_{K^{\star p}}$ is a $\pi_1(S,\bar{s})$-invariant $K^{\star p}$-orbit of $\widehat{\mathcal{O}}_F^{(p)}$-linear isomorphisms $\widehat\Lambda^{(p)}_F\xrightarrow{\sim} T^{(p)}(A_{\bar{s}})$, sending the symplectic pairing $\psi_F$ on the former to the $\lambda$-Weil pairing on the latter. \end{enumerate} This functor is representable by a smooth quasi-projective scheme $\mathbf{Sh}_{K^\star}(G^\star_{\emptyset})$ over $\mathbb{Z}_p$ such that $\mathbf{Sh}_{K^\star}(G_{\emptyset}^\star)_{\mathbb{Q}_p}\simeq \mathrm{Sh}_{K^\star}(G_{\emptyset}^\star)$ \cite{rap} and \cite[1.4.1.11]{lan}. By the same arguments as in \cite[Corollary~3.8]{moonen96}, it is easy to see that $\mathbf{Sh}_{K^\star_p}(G_{\emptyset}^\star)$ satisfies the extension property \ref{S:extension property}. This then gives rise to an integral canonical model $\mathbf{Sh}_{K_p}(G_\emptyset)$ of $\mathrm{Sh}_{K_p}(G_\emptyset)$. We could pull back the universal abelian variety $\mathcal{A}^\star$ over $\mathbf{Sh}_{K^\star_p}(G^\star_\emptyset)$ to a family of abelian varieties over $\mathbf{Sh}_{K_p}(G_\emptyset)$ using \cite[4.2.1]{hida} cited above.
But we prefer to do it more canonically, following the same argument as in Subsection~\ref{S:abel var in unitary case}. More precisely, there is a natural equivariant action of $\widetilde G^\star := G_\emptyset(\mathbb{Q})^{(p)}_+ \cdot G^\star(\mathbb{A}^{\infty, p})$ on the universal abelian variety $\mathcal{A}^\star$ over $\mathbf{Sh}_{K^\star_p}(G^\star _\emptyset)$. Then \begin{equation} \label{E:A from A*} \mathcal{A} : = \mathcal{A}^\star \times_{\widetilde G^\star} \mathrm{GL}_2(\mathbb{A}^{\infty,p}) \end{equation} gives a natural family of abelian varieties over $ \mathbf{Sh}_{K_p}(G_\emptyset)$. The natural homomorphism $\mathrm{GL}_{2,F}\rightarrow \mathrm{GL}_{2,F}\times_{F^{\times}}E^{\times}$ induces a closed immersion of algebraic groups $G^\star_{\emptyset}\rightarrow G'_{\emptyset}$ compatible with Deligne's homomorphisms $h_{\emptyset}$ and $h_{\emptyset}'$. (This does not hold in general if $\mathtt{S}_\infty \neq \emptyset$.) Therefore, one obtains a map of (projective systems of) Shimura varieties $f: \mathrm{Sh}_{K^\star_p}(G^\star_{\emptyset})\rightarrow \mathrm{Sh}_{K_p'}(G'_{\emptyset})$ which induces an isomorphism of neutral connected components $\mathrm{Sh}_{K^\star_p}(G^\star_{\emptyset})^{\circ}_{\mathbb{Q}_p^{\mathrm{ur}}}\simeq \mathrm{Sh}_{K'_p}(G'_{\emptyset})^{\circ}_{\mathbb{Q}_p^{\mathrm{ur}}}$. We will extend $f$ to a map of integral models $\mathbf{Sh}_{K^\star_p}(G^\star_{\emptyset})\rightarrow\mathbf{Sh}_{K_p'}(G'_{\emptyset})$. In the process of constructing the pairing $\psi$ on $D_\emptyset$, we may take $\delta_\emptyset$ to be $\big(\begin{smallmatrix} 0 & -1/\sqrt{\mathfrak{d}} \\ 1/\sqrt{\mathfrak{d}} &0 \end{smallmatrix}\big)$, which is coprime to $p$, where $\mathfrak{d}$ is the totally negative element chosen in \ref{S:PEL-Shimura-data}.
It is easy to check that it satisfies the conditions in Lemma~\ref{L:property-PEL-data}(1), and the $*$-involution given by $\delta_{\emptyset}$ on $D_\emptyset=\mathrm{M}_2(E)$ is given by $\big(\begin{smallmatrix} a&b\\c&d \end{smallmatrix}\big) \mapsto \big(\begin{smallmatrix} \bar a&\bar c\\\bar b&\bar d \end{smallmatrix}\big)$ for $a,b, c, d \in E$. The $\ast$-hermitian pairing on $\mathrm{M}_2(E)$ is given by \begin{align*} \psi(v, w) &= \mathrm{Tr}_{\mathrm{M}_2(E) / \mathbb{Q}}\Big( v \bar w \big(\begin{smallmatrix} 0 & -1 \\ 1 &0 \end{smallmatrix}\big) \Big), \textrm{ for } v = \big(\begin{smallmatrix} a_v & b_v \\ c_v & d_v \end{smallmatrix}\big) \textrm{ and } w = \big(\begin{smallmatrix} a_w & b_w \\ c_w & d_w \end{smallmatrix}\big) \in \mathrm{M}_2(E)\\ &=\mathrm{Tr}_{E/\mathbb{Q}} \big( b_v \bar a_w -a_v \bar b_w + d_v \bar c_w - c_v \bar d_w \big). \end{align*} In defining the PEL data for $G'_\emptyset$, we take the $\mathcal{O}_{D_\emptyset} $-lattice $\Lambda$ to be $ \begin{pmatrix} \mathcal{O}_E & \mathfrak{d}_F^{-1}\mathcal{O}_E \\ \mathcal{O}_E & \mathfrak{d}_F^{-1}\mathcal{O}_E \end{pmatrix} $; clearly $\widehat\Lambda^{(p)}=\Lambda\otimes_{\mathbb{Z}}\widehat{\mathbb{Z}}^{(p)}$ satisfies $\widehat\Lambda^{(p)} \subseteq \widehat\Lambda^{(p),\vee}$ for the bilinear form $\psi$ above. Moreover, if we equip $\Lambda_{F}^{(p)} \otimes_{\mathcal{O}_F} \mathcal{O}_E$ with the symplectic form $\psi_{E}=\psi_F(\mathrm{Tr}_{E/F}(\bullet), \mathrm{Tr}_{E/F}(\bullet))$, then $(\widehat\Lambda^{(p)},\psi) $ is isomorphic to $ ((\widehat{\Lambda}_{F}^{(p)} \otimes_{\mathcal{O}_F} \mathcal{O}_E)^{\oplus 2}, \psi_E^{\oplus 2})$ as a $\ast$-hermitian symplectic $\mathrm{M}_2(\mathcal{O}_E)$-module. \begin{prop} \label{P:integral-HMV-unitary} For any open compact subgroup $K'^p$ of $G'_\emptyset(\mathbb{A}^{\infty, p})$, we put $K^{\star p} = K'^p \cap G^*_\emptyset(\mathbb{A}^{\infty, p})$. 
Then we have a canonical morphism $$ \mathbf{f}: \mathbf{Sh}_{K^{\star p}K^\star_p}(G^\star _\emptyset) \rightarrow \mathbf{Sh}_{K'^pK'_p}(G'_\emptyset) $$ such that, if $\mathcal{A}$ and $\mathbf{A}'$ denote respectively the universal abelian scheme on $\mathbf{Sh}_{K^{\star p}K^\star _p}(G^\star _{\emptyset})$ and that on $\mathbf{Sh}_{K'^pK'_p}(G'_{\emptyset})$, then we have an isomorphism of abelian schemes $\mathbf{f}^*\mathbf{A}'\simeq (\mathcal{A}\otimes_{\mathcal{O}_F}\mathcal{O}_E)^{\oplus 2}$ compatible with the natural action of $\mathrm{M}_2(\mathcal{O}_E)$ and polarizations on both sides. By passing to the limit, the morphism $\mathbf{f}$ induces an isomorphism between the integral models of connected Shimura varieties $\mathbf{Sh}_{K_p^\star }(G^\star _\emptyset)_{\mathbb{Z}_p^\mathrm{ur}}^\circ \simeq \mathbf{Sh}_{K_p'}(G'_\emptyset )_{\mathbb{Z}_p^\mathrm{ur}}^\circ$. \end{prop} \begin{proof} By Galois descent, it is enough to work over $W(k_0)$ for $k_0$ in Theorem~\ref{T:unitary-shimura-variety-representability}. Let $S$ be a connected locally noetherian $W(k_0)$-scheme, and $x=(A,\iota, \lambda, \alpha_{K^{\star p}})$ be an $S$-valued point of $\mathbf{Sh}_{K^{\star p}K^\star _p}(G_{\emptyset}^\star )$. We define its image $f(x)=(A', \iota', \lambda', \alpha_{K'^p})$ as follows. We take $A'=(A\otimes_{\mathcal{O}_F}\mathcal{O}_E)^{\oplus 2}$ equipped with the naturally induced action $\iota'$ of $\mathrm{M}_2(\mathcal{O}_E)$. It is clear that $\Lie(A')_{\tilde\tau}$ is an $\mathcal{O}_{S}$-module locally free of rank $1$ for all $\tilde\tau\in \Sigma_{E,\infty}$. The prime-to-$p$ polarization $\lambda'$ on $A'$ is defined to be \[ \lambda': A'\xrightarrow{\sim} (A\otimes_{\mathcal{O}_F}\mathcal{O}_E)^{\oplus 2}\xrightarrow{(\lambda\otimes 1)^{\oplus 2}} (A^\vee\otimes_{\mathcal{O}_F}\mathcal{O}_E)^{\oplus 2}\simeq A'^\vee. 
\] We define the $K'^p$-level structure to be the $K'^p$-orbit of the isomorphism \[ \alpha_{K'^p}\colon \widehat{\Lambda}^{(p)}\xrightarrow {\cong} (\widehat{\Lambda}^{(p)}_F\otimes_{\mathcal{O}_F}\mathcal{O}_E)^{\oplus 2}\xrightarrow{\alpha_{K^{\star p}}^{\oplus 2}} \big (T^{(p)}(A_{\bar s})\otimes_{\mathcal{O}_F}\mathcal{O}_E\big)^{\oplus 2}\simeq T^{(p)}(A'_{\bar s}). \] By the discussion before the Proposition, it is clear that $\alpha_{K'^p}$ sends the symplectic form $\psi$ on the left hand side to the $\lambda'$-Weil pairing on the right. This defines the morphism $\mathbf{f}$ from $\mathbf{Sh}_{K^{\star p}K^\star _p}(G^\star _{\emptyset})$ to $\mathbf{Sh}_{K'^pK'_p}(G'_{\emptyset})$. By looking at the complex uniformization, we note that $\mathbf{f}$ extends the morphism $f: \mathrm{Sh}_{K^{\star p}K^\star _p}(G^\star _{\emptyset})_{\mathbb{Q}_p}\rightarrow \mathrm{Sh}_{K'^pK'_p}(G'_{\emptyset})_{\mathbb{Q}_p}$ defined previously by group theory. Since both $\mathbf{Sh}_{K^\star _p}(G^\star _{\emptyset})$ and $\mathbf{Sh}_{K'_{p}}(G'_{\emptyset})$ satisfy the extension property \ref{S:extension property}, it follows that $\mathbf{f}$ induces an isomorphism $\mathbf{Sh}_{K_p^\star }(G^\star _\emptyset)_{\mathbb{Z}_p^\mathrm{ur}}^\circ \simeq \mathbf{Sh}_{K_p'}(G'_\emptyset )_{\mathbb{Z}_p^\mathrm{ur}}^\circ$. \end{proof} \begin{cor}\label{C:integral-HMV-unitary} Let $\mathcal{A}$ denote the universal HBAV over $\mathbf{Sh}_{K_p}(G_{\emptyset})$, and $\mathbf{A}''_\emptyset$ be the family of abelian varieties over $\mathbf{Sh}_{K''_p}(G''_{\emptyset})$ defined in Subsection~\ref{S:abel var in unitary case}. 
Then under the natural morphisms of Shimura varieties \begin{equation} \label{E:morphisms of Shimura varieties HMV} \mathbf{Sh}_{K_p}(G_\emptyset) \xleftarrow{\ \mathbf{pr}_1\ } \mathbf{Sh}_{K_p \times K_{E,p}} (G_\emptyset \times T_{E, \emptyset}) \xrightarrow{\ \boldsymbol{\alpha} \ } \mathbf{Sh}_{K''_p}(G''_\emptyset), \end{equation} one has an isomorphism of abelian schemes over $\mathbf{Sh}_{K_p \times K_{E,p}} (G_\emptyset \times T_{E, \emptyset})$ \begin{equation} \label{E:comparison abelian varieties over HMV and unitary} \boldsymbol{\alpha}^*\mathbf{A}''_\emptyset \cong (\mathbf{pr}_1^*\mathcal{A}\otimes_{\mathcal{O}_F}\mathcal{O}_E)^{\oplus 2} \end{equation} compatible with the action of $\mathrm{M}_2(\mathcal{O}_E)$ and prime-to-$p$ polarizations. \end{cor} \begin{proof} This follows immediately from the constructions of $\mathcal{A}$ and $\mathbf{A}''_\emptyset$ and the proposition above. \end{proof} \subsection{Comparison of twisted partial Frobenius} Keep the notation as in Subsection~\ref{S:comparison-Hilbert}. The Shimura variety $\mathbf{Sh}_{K^\star }(G_\emptyset^\star )_{\mathbb{F}_p}$ also admits a twisted partial Frobenius $\mathfrak{F}^\star_{\mathfrak{p}^2}$ for each $\mathfrak{p} \in \Sigma_p$, which we define as follows. Let $S$ be a locally noetherian $ \mathbb{F}_p$-scheme.
Given an $S$-point $(A, \iota, \lambda, \alpha_{K^{\star p}})$ of $\mathbf{Sh}_{K^\star }(G_\emptyset^\star)_{\mathbb{F}_p}$, we associate a new point $(A', \iota', \lambda', \alpha'_{K^{\star p}})$: \begin{itemize} \item $A' = A / \Ker_{\mathfrak{p}^2} \otimes_{\mathcal{O}_F} \mathfrak{p}$, where $\Ker_{\mathfrak{p}^2}$ is the $\mathfrak{p}$-component of the kernel of the relative $p^2$-Frobenius homomorphism $\mathrm{Fr}^2_A: A \to A^{(p^2)}$; it is equipped with the induced $\mathcal{O}_F$-action $\iota'$; \item using the natural quasi-isogeny $\eta: A \to A'$, $\lambda'$ is given by the composite of quasi-isogenies $A' \xleftarrow{\eta} A \xrightarrow{\lambda} A^\vee \xleftarrow{\eta^\vee} A'^\vee$ (which is a $\mathbb{Z}_{(p)}^\times$-isogeny by the same argument as in Subsection~\ref{S:partial Frobenius}); \item $\alpha'_{K^{\star p}}$ is the composite $\widehat\Lambda_F^{(p)} \xrightarrow{\alpha_{K^{\star p}}} T^{(p)}(A) \xrightarrow{\eta} T^{(p)}(A')$. \end{itemize} The construction above gives rise to a \emph{twisted partial Frobenius morphism} and a quasi-isogeny \[ \mathfrak{F}^\star_{\mathfrak{p}^2}: \mathbf{Sh}_{K^\star }(G^\star _\emptyset)_{\mathbb{F}_p} \longrightarrow \mathbf{Sh}_{K^\star }(G^\star _\emptyset)_{\mathbb{F}_p} \quad \textrm{and} \quad \eta^\star_{\mathfrak{p}^2}: \mathcal{A}^\star_{\mathbb{F}_p} \to (\mathfrak{F}^\star_{\mathfrak{p}^2})^*\mathcal{A}^\star_{\mathbb{F}_p}. \] Using the formalism of Shimura varieties (Corollary~\ref{C:Sh(G)^circ_Zp independent of G} and more specifically \eqref{E:A from A*}), this gives rise to a \emph{twisted partial Frobenius morphism} and a quasi-isogeny \[ \mathfrak{F}^\emptyset_{\mathfrak{p}^2}: \mathbf{Sh}_{K}(G_\emptyset)_{\mathbb{F}_p} \longrightarrow \mathbf{Sh}_{K}(G_\emptyset)_{\mathbb{F}_p} \quad \textrm{and} \quad \eta_{\mathfrak{p}^2}^\emptyset: \mathcal{A}_{\mathbb{F}_p} \to (\mathfrak{F}^\emptyset_{\mathfrak{p}^2})^*\mathcal{A}_{\mathbb{F}_p}.
\] \begin{cor} The twisted partial Frobenius morphism $\mathfrak{F}''_{\mathfrak{p}^2}$ on $\mathbf{Sh}_{K''_p}(G''_\emptyset)_{\mathbb{F}_p}$ and the twisted partial Frobenius $\mathfrak{F}^\emptyset_{\mathfrak{p}^2}$ on $\mathbf{Sh}_{K_p}(G_\emptyset)_{\mathbb{F}_p}$ are compatible, in the sense that there exists a morphism $\tilde \mathfrak{F}_{\mathfrak{p}^2}$ so that both squares in the commutative diagram are Cartesian. \[ \xymatrix{ \mathbf{Sh}_{K_p}(G_\emptyset)_{\mathbb{F}_p} \ar[d]^{\mathfrak{F}^\emptyset_{\mathfrak{p}^2}} & \ar[l]_-{\mathbf{pr}_1} \mathbf{Sh}_{K_p \times K_{E,p}} (G_\emptyset \times T_{E, \emptyset})_{\mathbb{F}_p} \ar[r]^-{ \boldsymbol{\alpha} } \ar[d]^{\tilde \mathfrak{F}_{\mathfrak{p}^2}} & \mathbf{Sh}_{K''_p}(G''_\emptyset)_{\mathbb{F}_p} \ar[d]^{\mathfrak{F}''_{\mathfrak{p}^2}} \\ \mathbf{Sh}_{K_p}(G_\emptyset)_{\mathbb{F}_p} & \ar[l]_-{\mathbf{pr}_1} \mathbf{Sh}_{K_p \times K_{E,p}} (G_\emptyset \times T_{E, \emptyset})_{\mathbb{F}_p} \ar[r]^-{ \boldsymbol{\alpha} } & \mathbf{Sh}_{K''_p}(G''_\emptyset)_{\mathbb{F}_p} } \] Moreover, $\eta^\emptyset_{\mathfrak{p}^2}$ is compatible with $\eta''_{\mathfrak{p}^2}$ in the sense that the following diagram commutes. 
\[ \xymatrix{ \boldsymbol{\alpha}^*\mathbf{A}''_{\emptyset,\mathbb{F}_p} \ar[d]^{\boldsymbol{\alpha}^*(\eta'' _{{\mathfrak{p}^2}})} \ar[r]^-{\eqref{E:comparison abelian varieties over HMV and unitary}} & (\mathbf{pr}_1^* \mathcal{A}_{\mathbb{F}_p} \otimes_{\mathcal{O}_F} \mathcal{O}_E)^{\oplus 2} \ar[d]_{\eta^\emptyset_{\mathfrak{p}^2} \otimes 1}\\ \boldsymbol{\alpha}^*\mathfrak{F}''^*_{\mathfrak{p}^2} \mathbf{A}''_{\emptyset, \mathbb{F}_p} \ar[r]^-{\eqref{E:comparison abelian varieties over HMV and unitary}} & \big(\mathbf{pr}_1^*\mathfrak{F}_{\mathfrak{p}^2}^{\emptyset,*}( \mathcal{A}_{\mathbb{F}_p}) \otimes_{\mathcal{O}_F} \mathcal{O}_E \big)^{\oplus 2} } \] \end{cor} \begin{proof} This follows from the definitions of the partial Frobenii in the various situations and the comparison in Proposition~\ref{P:integral-HMV-unitary} above. \end{proof} \section{Goren-Oort Stratification} \label{Section:defn of GOstrata} We define an analog of the Goren-Oort stratification on the special fibers of quaternionic Shimura varieties. This is first done for unitary Shimura varieties and then pulled back to the quaternionic ones. Unfortunately, the definition a priori depends on the auxiliary choice of the CM field (as well as the signatures $s_{\tilde \tau}$). In the case of Hilbert modular varieties, we show that our definition of the GO-strata agrees with Goren-Oort's original definition in \cite{goren-oort} (and hence does not depend on the auxiliary choice of data). \subsection{Notation} \label{S:GO-notation} Keep the notation as in previous sections. Let $k_0$ be a finite extension of $\mathbb{F}_p$ containing all residue fields of $\mathcal{O}_E$ of characteristic $p$.
Let $X':=\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0}$ denote the base change to $k_0$ of the Shimura variety $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})$ considered in Theorem~\ref{T:unitary-shimura-variety-representability}. Recall that $\mathfrak{e} \in \mathcal{O}_{D_\mathtt{S},p}$ corresponds to $\big( \begin{smallmatrix} 1&0\\0&0 \end{smallmatrix} \big)$ when identifying $\mathcal{O}_{D_\mathtt{S},p}$ with $\mathrm{M}_2(\mathcal{O}_{E,p})$. For an abelian scheme $A$ over a $k_0$-scheme $S$ carrying an action of $\mathcal{O}_{D_{\mathtt{S}}}$, we have the reduced module of invariant differential 1-forms $\omega_{A/S}^{\circ}$, the reduced Lie algebra $\Lie(A/S)^\circ$, and the reduced de Rham homology $H^{\mathrm{dR}}_1(A/S)^{\circ}$ defined in Subsection~\ref{N:notation-reduced}. Their $\tilde{\tau}$-components $\omega_{A/S, \tilde{\tau}}^{\circ}$, $\Lie(A/S)_{\tilde{\tau}}^{\circ}$ and $H_1^{\mathrm{dR}}(A/S)^{\circ}_{\tilde{\tau}}$ for $\tilde{\tau}\in \Sigma_{E,\infty}$ fit into an exact sequence, called the \emph{reduced Hodge filtration}, \[ 0\rightarrow \omega_{A^\vee/S,\tilde{\tau}}^{\circ}\rightarrow H^{\mathrm{dR}}_1(A/S)^{\circ}_{\tilde{\tau}}\rightarrow \Lie(A/S)^\circ_{\tilde{\tau}}\rightarrow 0. \] Let $A^{(p)}$ denote the base change of $A$ via the absolute Frobenius on $S$. The Verschiebung $\mathrm{Ver}:A^{(p)}\rightarrow A$ and the Frobenius morphism $\mathrm{Fr} : A\rightarrow A^{(p)}$ induce respectively maps of coherent sheaves on $S$: \[ F_A: H_1^{\mathrm{dR}}(A/S)^{\circ, (p)}\rightarrow H_1^{\mathrm{dR}}(A/S)^{\circ}\quad \text{and } V_A: H_1^{\mathrm{dR}}(A/S)^{\circ}\rightarrow H_1^{\mathrm{dR}}(A/S)^{\circ, (p)}, \] which are compatible with the action of $\mathcal{O}_E$. Here, for a coherent $\mathcal{O}_S$-module $M$, $M^{(p)}$ denotes the base change $M\otimes_{\mathcal{O}_S, F_{\mathrm{abs}}} \mathcal{O}_S$. If there is no confusion, we drop the subscript $A$ from the notation and simply write $F$ and $V$ for the two maps. 
Moreover, we have \[ \Ker(F)=\mathrm{Im}(V)=(\omega_{A^\vee/S}^{\circ})^{(p)}\quad \textrm{ and } \quad \mathrm{Im}(F)=\Ker(V)\simeq \Lie(A^{(p)}/S)^{\circ}.\] Let $(A, \iota, \lambda, \alpha_{K'})$ be an $S$-valued point of $X'=\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0}$. By Kottwitz' determinant condition \ref{T:unitary-shimura-variety-representability}(a), for each $\tilde{\tau}\in \Sigma_{E,\infty}$, $\Lie(A/S)^{\circ}_{\tilde{\tau}}$ is a locally free $\mathcal{O}_{S}$-module of rank $s_{\tilde{\tau}}$. (The numbers $s_{\tilde \tau}$ are defined as in Subsection~\ref{S:CM extension}.) By duality, this implies that $\omega_{A^\vee/S,\tilde{\tau}}^{\circ}$ is locally free of rank $s_{\tilde{\tau}^c}=2-s_{\tilde{\tau}}$. Moreover, when $\tau \in \Sigma_{\infty/\mathfrak{p}}$ with $\mathfrak{p}$ not of type $\beta^\sharp$, the universal polarization $\lambda$ induces an isomorphism of locally free $\mathcal{O}_{S}$-modules \begin{equation}\label{Equ:duality-omega} \omega_{A^\vee/S,\tilde{\tau}}^\circ\simeq \omega_{A/S,\tilde{\tau}^c}^\circ. \end{equation} \subsection{Essential Frobenius and essential Verschiebung} \label{N:essential frobenius and verschiebung} We now define two very important morphisms: the essential Frobenius and the essential Verschiebung. We will often encounter their variants for crystalline homology and Dieudonn\'e modules later, for which we shall simply refer to the analogous construction given here. Let $(A,\iota, \lambda, \alpha_{K'})$ be as above. 
For $\tilde \tau \in \Sigma_{E, \infty}$ lifting a place $\tau \in \mathtt{S}_\infty$, we define the \emph{essential Frobenius} to be \begin{align} \label{E:defintion of Fes} F_\mathrm{es} =F_{A,\mathrm{es}, \tilde \tau}: (H^{\mathrm{dR}}_1(A/S)_{\sigma^{-1}\tilde{\tau}}^{\circ})^{ (p)}&=H_1^{\mathrm{dR}}(A^{(p)}/S)^{\circ}_{\tilde{\tau}}\longrightarrow H_1^{\mathrm{dR}}(A/S)^{\circ}_{\tilde{\tau}} \\ \nonumber x&\longmapsto{ \left\{ \begin{array}{ll} F(x) & \textrm{when }s_{\sigma^{-1}\circ \tilde \tau} = 1\textrm{ or }2;\\ V^{-1}(x) & \textrm{when }s_{\sigma^{-1}\circ \tilde \tau} = 0.\\ \end{array} \right.} \end{align} Note that in the latter case, the morphism $V: H_1^{\mathrm{dR}}(A/S)^{\circ}_{\tilde{\tau}} \xrightarrow{\sim} H_1^{\mathrm{dR}}(A^{(p)}/S)^{\circ}_{\tilde{\tau}}$ is an isomorphism by Kottwitz' determinant condition. Similarly, we define the \emph{essential Verschiebung} to be \begin{align} \label{E:definition of Ves} V_\mathrm{es} =V_{A,\mathrm{es}, \tilde \tau}: H_1^{\mathrm{dR}}(A/S)^{\circ}_{\tilde{\tau}}&\longrightarrow H_1^{\mathrm{dR}}(A^{(p)}/S)^{\circ}_{\tilde{\tau}} =(H_1^{\mathrm{dR}}(A/S)_{\sigma^{-1}\tilde{\tau}}^{\circ})^{(p)} \\ \nonumber x&\longmapsto{ \left\{ \begin{array}{ll} V(x) & \textrm{when }s_{\sigma^{-1}\circ \tilde \tau} = 0\textrm{ or }1;\\ F^{-1}(x) & \textrm{when }s_{\sigma^{-1}\circ \tilde \tau} = 2.\\ \end{array} \right.} \end{align} Here, in the latter case, the morphism $F: H_1^{\mathrm{dR}}(A^{(p)}/S)^{\circ}_{\tilde{\tau}} \to H_1^{\mathrm{dR}}(A/S)^{\circ}_{\tilde{\tau}}$ is an isomorphism. When no confusion arises, we may suppress the subscript $A$ and/or $\tilde \tau$ from $F_{A, \mathrm{es}, \tilde\tau}$ and $V_{A,\mathrm{es},\tilde\tau}$. 
Thus, if $s_{\sigma^{-1}\tilde \tau} =0$ or $2$, both $F_{\mathrm{es}, \tilde \tau}:H_1^{\mathrm{dR}}(A^{(p)}/S)^{\circ}_{\tilde{\tau}}\rightarrow H_1^{\mathrm{dR}}(A/S)^{\circ}_{\tilde{\tau}}$ and $V_{\mathrm{es}, \tilde \tau}: H_1^{\mathrm{dR}}(A/S)^{\circ}_{\tilde{\tau}}\rightarrow H_1^{\mathrm{dR}}(A^{(p)}/S)^{\circ}_{\tilde{\tau}}$ are isomorphisms, and hence so are the composites $F_{\mathrm{es}, \tilde \tau}V_{\mathrm{es}, \tilde \tau}$ and $V_{\mathrm{es}, \tilde \tau}F_{\mathrm{es}, \tilde \tau}$. When $s_{\sigma^{-1}\tilde \tau}=1$, we usually prefer to write the usual Frobenius and Verschiebung. We will also use composites of Frobenii and Verschiebungs: \begin{align} \label{E:Ves n} V_{\mathrm{es}, \tilde \tau}^n: &H_1^\mathrm{dR}(A/S)^\circ_{\tilde \tau}\xrightarrow{V_{\mathrm{es}, \tilde \tau}} H^{\mathrm{dR}}_1(A^{(p)}/S)_{\tilde{\tau}}^{\circ}\xrightarrow{V_{\mathrm{es}, \sigma^{-1}\tilde \tau}^{(p)}} \cdots \xrightarrow{V_{\mathrm{es}, \sigma^{1-n}\tilde \tau}^{(p^{n-1})}} H^{\mathrm{dR}}_1(A^{(p^{n})}/S)_{\tilde{\tau}}^{\circ}, \\ \label{E:Fes n} F_{\mathrm{es}, \tilde \tau}^n:& H^{\mathrm{dR}}_1(A^{(p^{n})}/S)_{\tilde{\tau}}^{\circ} \xrightarrow{F_{\mathrm{es}, \sigma^{1-n} \tilde \tau}^{(p^{n-1})}} H^{\mathrm{dR}}_1(A^{(p^{n-1})}/S)_{\tilde{\tau}}^{\circ} \xrightarrow{F_{\mathrm{es}, \sigma^{2-n}\tilde \tau}^{(p^{n-2})}} \cdots \xrightarrow{F_{\mathrm{es},\tilde\tau}} H_1^\mathrm{dR}(A/S)^\circ_{\tilde \tau}. \end{align} Suppose now that $S=\Spec(k)$ is the spectrum of a perfect field of characteristic $p>0$. Let $\tcD_{A}$ denote the \emph{covariant} Dieudonn\'e module of $A$. We have a canonical decomposition $\tcD_A=\bigoplus_{\tilde\tau\in \Sigma_{E,\infty}}\tcD_{A,\tilde\tau}$. We put $\tcD_{A,\tilde\tau}^{\circ}=\mathfrak{e}\cdot \tcD_{A,\tilde\tau}$. 
Then we define the essential Frobenius and essential Verschiebung \[ F_\mathrm{es} = F_{A,\mathrm{es}, \tilde \tau}: \tcD^{\circ}_{A,\sigma^{-1}\tilde\tau}\rightarrow \tcD^{\circ}_{A,\tilde\tau}\quad \text{and}\quad V_{\mathrm{es}} = V_{A, \mathrm{es}, \tilde \tau}:\tcD_{A,\tilde\tau}^{\circ}\rightarrow \tcD^{\circ}_{A,\sigma^{-1}\tilde\tau} \] in the same way as those on $H_1^{\mathrm{dR}}(A/S)^{\circ}_{\tilde\tau}$, as done in \eqref{E:defintion of Fes} and \eqref{E:definition of Ves}. The morphisms $F_{A,\mathrm{es}, \tilde \tau}$ and $V_{A,\mathrm{es}, \tilde \tau}$ on $H_1^{\mathrm{dR}}(A/S)^{\circ}_{\tilde\tau}$ can be recovered from those on $\tcD^{\circ}_{A,\tilde\tau}$ by reduction modulo $p$. \begin{notation} \label{N:n tau} For $\tau \in \Sigma_\infty-\mathtt{S}_\infty$, we define $n_{\tau} = n_{\tau, \mathtt{S}}\geq 1$ to be the integer such that $\sigma^{-1}\tau, \dots, \sigma^{-n_\tau+1}\tau \in \mathtt{S}_\infty$ and $\sigma^{-n_{\tau}}\tau\notin \mathtt{S}_{\infty}$. \end{notation} \subsection{Partial Hasse invariants}\label{S:partial-Hasse} For each $\tilde \tau$ lifting a place $\tau \in \Sigma_\infty-\mathtt{S}_\infty$, we must have $s_{\tilde \tau} = 1$; so in the definition of $V_{\mathrm{es}, \tilde \tau}^{n_\tau}$ in \eqref{E:Ves n}, all morphisms are isomorphisms except the last one; similarly, in the definition of $F_{\mathrm{es}, \tilde \tau}^{n_\tau}$ in \eqref{E:Fes n}, all morphisms are isomorphisms except the first one. It is clear that $V_{\mathrm{es}, \tilde \tau}^{n_\tau} F_{\mathrm{es}, \tilde \tau}^{n_\tau} =F_{\mathrm{es}, \tilde \tau}^{n_\tau} V_{\mathrm{es}, \tilde \tau}^{n_\tau}=0$, coming from the composition of $V^{(p^{n_\tau-1})}_{\sigma^{1-n_\tau}\tilde \tau}$ and $F^{(p^{n_\tau-1})}_{\sigma^{1-n_\tau}\tilde \tau}$ in both orders. Note also that the cokernels of $V_{\mathrm{es}, \tilde \tau}^{n_{\tau}}$ and $F_{\mathrm{es}, \tilde \tau}^{n_{\tau}}$ are both locally free $\mathcal{O}_{S}$-modules of rank $1$. 
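To illustrate Notation~\ref{N:n tau} and the shape of $V_{\mathrm{es}, \tilde \tau}^{n_\tau}$, here is a toy configuration; the choice of places below is hypothetical and made only for concreteness.

```latex
% Hypothetical configuration (for illustration only):
%   Sigma_{infty/p} = { tau_0, sigma^{-1}tau_0, sigma^{-2}tau_0, sigma^{-3}tau_0 },
%   S_{infty/p}     = { sigma^{-1}tau_0, sigma^{-2}tau_0 }.
% Then sigma^{-1}tau_0, sigma^{-2}tau_0 lie in S_infty while
% sigma^{-3}tau_0 does not, so n_{tau_0} = 3, and
\[
  V_{\mathrm{es},\tilde\tau_0}^{\,3}\colon\
  H_1^{\mathrm{dR}}(A/S)^{\circ}_{\tilde\tau_0}
  \longrightarrow H_1^{\mathrm{dR}}(A^{(p^{3})}/S)^{\circ}_{\tilde\tau_0}.
\]
% The first two factors of this composite are isomorphisms, since the
% relevant signatures s equal 0 or 2; only the last factor, the usual
% Verschiebung at sigma^{-2}tilde tau_0, can fail to be one.
```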
The restriction of $V_{\mathrm{es}, \tilde \tau}^{n_\tau}$ to the line bundle $\omega^{\circ}_{A^\vee/S,\tilde{\tau}}$ induces a homomorphism \begin{equation*} h_{\tilde{\tau}}(A): \omega_{A^\vee/S,\tilde{\tau}}^{\circ}\longrightarrow \omega^\circ_{A^{\vee, (p^{n_{\tau}})}/S,\tilde{\tau}}=(\omega^{\circ}_{A^\vee/S,\sigma^{-n_{\tau}}\tilde{\tau}})^{\otimes p^{n_{\tau}}}. \end{equation*} Applied to the universal case, this gives rise to a global section \begin{equation}\label{Equ:partial-hasse} h_{\tilde{\tau}}\in \Gamma(X', (\omega^{\circ}_{\mathbf{A}'^\vee/X', \sigma^{-n_{\tau}}\tilde{\tau}})^{\otimes p^{n_{\tau}}}\otimes( \omega_{\mathbf{A}'^\vee/X', \tilde{\tau}}^{\circ})^{\otimes (-1)}), \end{equation} where $\mathbf{A}'$ is the universal abelian scheme over $X'=\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0}$. We call $h_{\tilde \tau}$ \emph{the $\tilde{\tau}$-partial Hasse invariant}. With $\tilde{\tau}$ replaced by $\tilde{\tau}^c$ everywhere, we can similarly define a partial Hasse invariant $h_{\tilde{\tau}^c}$. They are analogs in the unitary case of the classical partial Hasse invariants. \begin{lemma}\label{Lemma:partial-Hasse} Let $(A,\iota, \lambda, \bar{\alpha}_{K'})$ be an $S$-valued point of $X'$ as above. Then the following statements are equivalent. \begin{enumerate} \item We have $h_{\tilde{\tau}}(A)=0$. \item The image of $F_{\mathrm{es}, \tilde \tau}^{n_\tau}: H_{1}^{\mathrm{dR}}(A^{(p^{n_{\tau}})}/S)^{\circ}_{\tilde{\tau}}\rightarrow H_1^{\mathrm{dR}}(A/S)_{\tilde{\tau}}^{\circ}$ is $\omega_{A^\vee/S,\tilde{\tau}}^{\circ}$. \item We have $h_{\tilde{\tau}^c}(A)=0$. \item The image of $F_{\mathrm{es}, \tilde \tau^c}^{n_\tau}:H_{1}^{\mathrm{dR}}(A^{(p^{n_{\tau}})}/S)^{\circ}_{\tilde{\tau}^c}\rightarrow H_1^{\mathrm{dR}}(A/S)_{\tilde{\tau}^c}^{\circ}$ is $\omega_{A^\vee/S, \tilde{\tau}^c}^{\circ}$. 
\end{enumerate} \end{lemma} \begin{proof} The equivalences $(1)\Leftrightarrow(2)$ and $(3)\Leftrightarrow (4)$ follow from the fact that the image of $F$ coincides with the kernel of $V$. We now prove $(2)\Leftrightarrow (4)$. Let $\mathfrak{p}\in \Sigma_{p}$ be the prime above $p$ such that $\tau\in \Sigma_{\infty/\mathfrak{p}}$. Since $\Sigma_{\infty/\mathfrak{p}}\neq \mathtt{S}_{\infty/\mathfrak{p}}$, $\mathfrak{p}$ cannot be of type $\beta^\sharp$ by Hypothesis \ref{H:B_S-splits-at-p}. We consider the following diagram: \[ \xymatrix@C=5pt{ H_1^\mathrm{dR}(A^{(p^{n_{\tau}})}/S)^\circ_{\tilde{\tau}} \ar@/^10pt/[d]^{F_{\mathrm{es}, \tilde \tau}^{n_\tau}} & \times & H_1^\mathrm{dR}(A^{(p^{n_{\tau}})}/S)^{\circ}_{\tilde{\tau}^c} \ar@/_10pt/[d]_{F_{\mathrm{es}, \tilde \tau^c}^{n_\tau}} \ar[rrr]^-{\langle\ , \ \rangle} &&& \mathcal{O}_{S} \\ H_1^\mathrm{dR}(A/S)^{\circ}_{\tilde{\tau}} \ar@/^10pt/[u]^{V_{\mathrm{es}, \tilde \tau}^{n_\tau}} &\times& H_1^\mathrm{dR}(A/S)^{\circ}_{\tilde{\tau}^c} \ar@/_10pt/[u]_{V_{\mathrm{es}, \tilde \tau^c}^{n_\tau}} \ar[rrr]^-{\langle\ , \ \rangle} &&& \mathcal{O}_{S}, \ar@{=}[u] } \] where the pairings $\langle\ , \ \rangle$ are induced by the polarization $\lambda$, and they are perfect because $\mathfrak{p}$ is not of type $\beta^\sharp$. We have $\langle F_{\mathrm{es}, \tilde \tau}^{n_\tau} x, y\rangle=\langle x,V_{\mathrm{es}, \tilde \tau^c}^{n_\tau} y \rangle$. It follows that \[ (\omega_{A^\vee/S,\tilde{\tau}}^{\circ})^\perp=\omega_{A^\vee/S,\tilde{\tau}^c}^{\circ},\quad \text{and}\quad \mathrm{Im}(F_{\mathrm{es}, \tilde \tau}^{n_\tau})^\perp=\mathrm{Im}(F_{\mathrm{es}, \tilde \tau^c}^{n_\tau}),\] where $\perp$ means the orthogonal complement under $\langle\ ,\ \rangle$. 
Therefore, we have \begin{align*} & \text{(2) } \omega_{A^\vee/S,\tilde{\tau}}^{\circ}=\mathrm{Im}(F_{\mathrm{es}, \tilde \tau}^{n_\tau}) \Leftrightarrow (\omega_{A^\vee/S,\tilde{\tau}}^{\circ})^\perp=\mathrm{Im}(F_{\mathrm{es}, \tilde \tau}^{n_\tau} )^\perp \Leftrightarrow \text{(4) } \omega_{A^\vee/S, \tilde{\tau}^c }^{\circ}= \mathrm{Im} (F_{\mathrm{es}, \tilde \tau^c}^{n_\tau}). \end{align*} \end{proof} \begin{defn}\label{Defn:GO-strata} We fix a section $\tau \mapsto \tilde{\tau}$ of the natural restriction map $\Sigma_{E,\infty}\rightarrow \Sigma_{\infty}$. Let $\mathtt{T}\subset \Sigma_{\infty}-\mathtt{S}_{\infty}$ be a subset. We put $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0,\emptyset}=X'$, and $X'_\mathtt{T} : =\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \mathtt{T}}$ to be the closed subscheme of $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0}$ defined by the vanishing locus of $\{h_{\tilde{\tau}}: \tau\in \mathtt{T}\}$. Passing to the limit, we put \[ \mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}})_{k_0,\mathtt{T}}:=\varprojlim_{K'^p}\mathbf{Sh}_{K'^pK'_p}(G'_{\tilde \mathtt{S}})_{k_0,\mathtt{T}}. \] We call $\{\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0,\mathtt{T}}: \mathtt{T}\subset \Sigma_{\infty}-\mathtt{S}_{\infty}\}$ (resp. $\{\mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}})_{k_0,\mathtt{T}}: \mathtt{T}\subset \Sigma_{\infty}-\mathtt{S}_{\infty}\}$) the \emph{Goren-Oort stratification} (or GO-stratification for short) of $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0}$ (resp. $\mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}})_{k_0}$). \end{defn} By Lemma~\ref{Lemma:partial-Hasse}, the GO-stratum $X'_\mathtt{T}$ does not depend on the choice of the section $\tau\mapsto \tilde{\tau}$. \begin{prop}\label{Prop:smoothness} Let $\mathtt{T}\subseteq \Sigma_\infty - \mathtt{S}_\infty$ be any subset. 
The closed GO-stratum $X'_\mathtt{T}\subseteq X'$ is smooth of codimension $\#\mathtt{T}$, and the tangent bundle $\mathcal{T}_{X'_\mathtt{T}}$ is the subbundle \[ \bigoplus_{\tau\in \Sigma_\infty - (\mathtt{S}_\infty \cup \mathtt{T})} \bigl( \Lie(\mathbf{A}')^{\circ}_{\tilde{\tau}}\otimes \Lie(\mathbf{A}')_{\tilde{\tau}^c}^{\circ}\bigr)|_{X'_\mathtt{T}}\subseteq \bigoplus_{\tau\in \Sigma_\infty - \mathtt{S}_\infty}\bigl(\Lie(\mathbf{A}')^\circ_{\tilde{\tau}}\otimes \Lie(\mathbf{A}')^\circ_{\tilde{\tau}^c}\bigr)|_{X'_\mathtt{T}}, \] where the latter is identified with the restriction to $X'_\mathtt{T}$ of the tangent bundle of $X'$ computed in Corollary~\ref{C:deformation}. Moreover, $X'_\mathtt{T}$ is proper if $\mathtt{S}_{\infty}\cup \mathtt{T}$ is non-empty. \end{prop} \begin{proof} We follow the same strategy as in \cite[Proposition 3.4]{helm}. First, the same argument as in \cite[Lemma 3.7]{helm} proves the non-emptiness of $X'_{\mathtt{T}}$. We now proceed as in the proof of Corollary~\ref{C:deformation}. Let $S_0\hookrightarrow S$ be a closed immersion of $k_0$-schemes whose ideal of definition $\mathcal{I}$ satisfies $\mathcal{I}^2=0$. Consider an $S_0$-valued point $x_0=(A_0, \iota_0, \lambda_0, \bar{\alpha}_{K'})$ of $X'_{\mathtt{T}}$. To prove the smoothness of $X'_{\mathtt{T}}$, it suffices to show that, locally for the Zariski topology on $S_0$, there exists $x\in X'_{\mathtt{T}}(S)$ lifting $x_0$. By Lemma \ref{Lemma:partial-Hasse}, we have, for every $\tau \in \mathtt{T}$, $$ \omega_{A_0^\vee/S_0,\tilde{\tau}}^{\circ}=F_{\mathrm{es}, \tilde \tau}^{n_\tau} (H_1^{\mathrm{dR}}(A_0^{(p^{n_{\tau}})}/S_0)^{\circ}_{\tilde{\tau}}). $$ The reduced ``crystalline homology'' $H_1^{\mathrm{cris}}(A_0/S_0)_{S}^{\circ}$ is equipped with natural operators $F$ and $V$, lifting the corresponding operators on $H_1^{\mathrm{dR}}(A_0/S_0)^{\circ}$. 
We define the composite of essential Frobenius \[ \tilde{F}_{\mathrm{es}, \tilde \tau}^{n_\tau}: H_1^{\mathrm{cris}}(A_0^{(p^{n_{\tau}})}/S_0)^{\circ}_{S, \tilde{\tau}}\rightarrow H_1^{\mathrm{cris}}(A_0/S_0)^{\circ}_{S, \tilde{\tau}} \] in the same manner as $F_{\mathrm{es}, \tilde \tau}^{n_\tau}$ on $H^{\mathrm{dR}}_1(A_0^{(p^{n_{\tau}})}/S_0)^{\circ}_{\tilde{\tau}}$ in Notation~\ref{N:essential frobenius and verschiebung}. Let $\tilde{\omega}_{A_0^\vee/S_0, \tilde{\tau}}^{\circ}$ denote the image of $\tilde{F}_{\mathrm{es}, \tilde \tau}^{n_\tau}$ for $\tau\in \mathtt{T}$. This is a local direct factor of $H_1^{\mathrm{cris}}(A_0/S_0)_{S, \tilde{\tau}}^{\circ}$ that lifts $\omega_{A_0^\vee/S_0, \tilde{\tau}}^{\circ}$. As in the proof of Theorem~\ref{T:unitary-shimura-variety-representability}, specifying a deformation $x\in X'(S)$ of $x_0$ to $S$ is equivalent to giving a local direct summand $\omega_{S,\tilde{\tau}}^{\circ}\subseteq H_1^{\mathrm{cris}}(A_0/S_0)_{S, \tilde{\tau}}^{\circ}$ that lifts $\omega_{A_0^\vee/S_0, \tilde{\tau}}^{\circ}$ for each $\tau \in \Sigma_{\infty}-\mathtt{S}_{\infty}$. By Lemma \ref{Lemma:partial-Hasse}, such a deformation $x$ lies in $X'_{\mathtt{T}}$ if and only if $\omega_{S,\tilde{\tau}}^{\circ}=\tilde{\omega}_{A_0^\vee/S_0, \tilde{\tau}}^{\circ}$ for all $\tau \in \mathtt{T}$. Therefore, to give a deformation of $x_0$ to $S$ in $X'_{\mathtt{T}}$, we just need to specify the liftings $\omega_{S,\tilde{\tau}}^{\circ}$ of $\omega_{A_0^\vee/S_0, \tilde{\tau}}^{\circ}$ for $\tau\in \Sigma_{\infty}-(\mathtt{S}_{\infty}\cup \mathtt{T})$. 
For each $\tau\in \Sigma_{\infty}-(\mathtt{S}_{\infty}\cup \mathtt{T})$, the set-valued sheaf of liftings $\omega_{S, \tilde{\tau}}^{\circ}$ forms a torsor under the group \[ \mathcal{H}om_{\mathcal{O}_{S_0}}(\omega_{A_0^\vee/S_0, \tilde{\tau}}^\circ, \Lie(A_0)_{\tilde{\tau}}^\circ)\otimes_{\mathcal{O}_{S_0}}\mathcal{I}\simeq \Lie(A_0)_{\tilde{\tau}}^\circ\otimes_{\mathcal{O}_{S_0}} \Lie(A_0)_{\tilde{\tau}^c}^\circ\otimes_{\mathcal{O}_{S_0}}\mathcal{I}. \] Here, in the last isomorphism, we have used \eqref{Equ:duality-omega}. The statement for the tangent bundle of $X'_{\mathtt{T}}$ now follows immediately. It remains to prove the properness of $X'_{\mathtt{T}}$ when $\mathtt{S}_{\infty}\cup \mathtt{T}$ is non-empty. The arguments are similar to those in \cite[Proposition 3.4]{helm}. We use the valuative criterion of properness. Let $R$ be a discrete valuation ring containing $\overline{\mathbb{F}}_p$ and $L$ be its fraction field. Let $x_L=(A_L, \iota, \lambda, \bar{\alpha}_{K'})$ be an $L$-valued point of $X'_{\mathtt{T}}$. We have to show that $x_L$ extends to an $R$-valued point $x_R\in X'_{\mathtt{T}}$ up to a finite extension of $L$. By Grothendieck's semi-stable reduction theorem, we may assume that, up to a finite extension of $L$, the N\'eron model $A_R$ of $A_L$ over $R$ has semi-stable reduction. Let $\overline{A}$ be the special fiber of $A_R$, and $\mathbb{T}\subset \overline{A}$ be its torus part. Since the N\'eron model is canonical, the action of $\mathcal{O}_{D_{\mathtt{S}}}$ extends uniquely to $A_R$, and hence to $\mathbb{T}$. The rational cocharacter group $X_*(\mathbb{T})_{\mathbb{Q}}:=\Hom(\mathbb{G}_m,\mathbb{T})\otimes_{\mathbb{Z}}\mathbb{Q}$ is a $\mathbb{Q}$-vector space of dimension at most $\dim(\overline A)= 4 g=\frac{1}{2}\dim_{\mathbb{Q}}(D_{\mathtt{S}})$, and equipped with an induced action of $D_{\mathtt{S}}\cong \mathrm{M}_{2}(E)$. 
By the classification of $\mathrm{M}_{2}(E)$-modules, $X_*(\mathbb{T})_{\mathbb{Q}}$ is either $0$ or isomorphic to $E^{\oplus 2}$. In the latter case, we have $X_*(\mathbb{T})_{\mathbb{Q}}\otimes L\cong \Lie(A_L)$, and the trace of the action of $b\in E$ on $X_*(\mathbb{T})_{\mathbb{Q}}$ is $2\sum_{\tilde\tau\in \Sigma_{E}}\tilde\tau(b)$, which implies that $\mathtt{S}_{\infty}=\emptyset$. Therefore, if $\mathtt{S}_{\infty}\neq \emptyset$, $\mathbb{T}$ has to be trivial and $A_R$ is an abelian scheme over $R$ with generic fiber $A_L$. The polarization $\lambda$ and the level structure $\bar{\alpha}_{K'}$ extend uniquely to $A_R$ by the canonicity of the N\'eron model. We thus obtain a point $x_R\in X'(R)$ extending $x_L$. Since $X'_{\mathtt{T}}\subseteq X'$ is a closed subscheme, we see easily that $x_R\in X'_{\mathtt{T}}$. Now consider the case $\mathtt{S}_{\infty}=\emptyset$ but $\mathtt{T}$ is non-empty. If $X_*(\mathbb{T})_{\mathbb{Q}}\cong E^{\oplus 2}$, then the abelian part of $\overline{A}$ is trivial. Since the action of Verschiebung on $\omega_{\mathbb{T}}$ is an isomorphism, the point $x_L$ cannot lie in any $X'_{\mathtt{T}}$ with $\mathtt{T}$ non-empty. Therefore, if $\mathtt{T}\neq \emptyset$, $\mathbb{T}$ must be trivial, and we conclude as in the case $\mathtt{S}_{\infty}\neq \emptyset$. \end{proof} \begin{remark} It seems that $X'_{\mathtt{T}}$ is still proper if $\mathtt{S}$ is non-empty, but we cannot find a convincing algebraic argument. \end{remark} \subsection{GO-stratification of connected Shimura varieties} \label{S:GO-stratum connected Shimura variety} From the definition, it is clear that the GO-stratification on $\mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}})_{k_0}$ is compatible with the action (as described in Subsection~\ref{S:abel var in unitary case}) of the group $\mathcal{G}'_{\tilde \mathtt{S}}$ (introduced in Subsection~\ref{S:structure group}). 
By Corollary~\ref{C:mathematical objects equivalence}, for each $\mathtt{T} \subseteq \Sigma_\infty -\mathtt{S}_\infty$, there is a natural closed subscheme \[ \mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}})^\circ_{\overline \mathbb{F}_p, \mathtt{T}} \subseteq \mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}})^\circ_{\overline \mathbb{F}_p} \] equivariant for the action of $\mathcal{E}_{G, k_0}$. We call them the \emph{Goren-Oort stratification} for the connected Shimura variety. Using the identification of connected Shimura varieties in Corollary~\ref{C:comparison of shimura varieties} together with Corollary~\ref{C:mathematical objects equivalence}, we obtain \emph{Goren-Oort strata} $\mathbf{Sh}_{K_p}(G_\mathtt{S})_{k_0, \mathtt{T}} \subseteq \mathbf{Sh}_{K_p}(G_\mathtt{S})_{k_0}$ and $\mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})_{k_0, \mathtt{T}} \subseteq \mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})_{k_0}$, for each subset $\mathtt{T} \subseteq \Sigma_\infty- \mathtt{S}_\infty$. Explicitly, for the latter case, we have \[ \mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})_{k_0, \mathtt{T}}:= \mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}})_{k_0, \mathtt{T}} \times_{\widetilde G_{\tilde \mathtt{S}}} G''_{\tilde \mathtt{S}}(\mathbb{A}^{\infty,p}). \] Alternatively, in terms of the natural family of abelian varieties $\mathbf{A}''_{\tilde \mathtt{S}}$, the stratum $\mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})_{k_0, \mathtt{T}}$ is the common zero locus of the partial Hasse invariants \[ h_{\tilde \tau}: \omega^\circ_{\mathbf{A}''_{\tilde \mathtt{S},k_0}/\mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})_{k_0}, \tilde \tau} \longrightarrow \big( \omega^\circ_{\mathbf{A}''_{\tilde \mathtt{S},k_0}/\mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})_{k_0}, \sigma^{-n_\tau} \tilde\tau}\big)^{\otimes p^{n_\tau}} \] for all $\tilde \tau$ lifting $\tau \in \mathtt{T}$. 
\begin{theorem} When $\mathtt{S}=\emptyset$, the GO-stratification on $\mathbf{Sh}_{K_p}(G_{\emptyset})_{k_0}$ defined above agrees with the original definition given in \cite{goren-oort}. Moreover, for each subset $\mathtt{T} \subseteq \Sigma_\infty$, under the morphisms \eqref{E:morphisms of Shimura varieties HMV}, we have \[ \mathbf{pr}_1^*(\mathbf{Sh}_{K_p}(G_{\emptyset})_{k_0, \mathtt{T}}) = \boldsymbol{\alpha}^*(\mathbf{Sh}_{K''_p}(G''_{\emptyset})_{k_0,\mathtt{T}}), \] where $\mathbf{Sh}_{K_p}(G_{\emptyset})_{k_0, \mathtt{T}}$ denotes the GO-stratum for $\mathtt{T}$ defined in loc. cit. \end{theorem} \begin{proof} Put $X=\mathbf{Sh}_{K_p}(G_{\emptyset})_{k_0}$ for simplicity. By Proposition~\ref{P:integral-HMV-unitary}, we have an isomorphism of abelian varieties $\boldsymbol{\alpha}^*\mathbf{A}''_{\emptyset}=(\mathbf{pr}_1^*(\mathcal{A})\otimes_{\mathcal{O}_F}\mathcal{O}_E)^{\oplus 2}$ on $\mathbf{Sh}_{K_p\times K_{E,p}}(G_\emptyset \times T_{E, \emptyset})$. Let $\omega_{\mathcal{A}_{k_0}^\vee/X}=\bigoplus_{\tau\in \Sigma_{\infty}}\omega_{\mathcal{A}_{k_0}^\vee/X, \tau}$ be the canonical decomposition, where $\omega_{\mathcal{A}_{k_0}^\vee/X,\tau}$ is the local direct factor on which $\mathcal{O}_F$ acts via $\iota_p\circ\tau: \mathcal{O}_{F}\rightarrow\mathbb{Z}_p^\mathrm{ur}\twoheadrightarrow \overline{\mathbb{F}}_p$. Then we have a canonical isomorphism of line bundles over $\mathbf{Sh}_{K_p\times K_{E,p}}(G_\emptyset \times T_{E, \emptyset})_{k_0}$ \[ \boldsymbol{\alpha}^*\omega_{\mathbf{A}''^\vee_{k_0}/X, \tilde{\tau}}^{\circ}\simeq \mathbf{pr}_1^*\omega_{\mathcal{A}_{k_0}^\vee/X,\tau}, \] for either lift $\tilde{\tau}\in \Sigma_{E,\infty}$ of $\tau$. 
Via these identifications, the (pullback of) partial Hasse invariant $\boldsymbol{\alpha}^*(h_{\tilde{\tau}})$ defined in \eqref{Equ:partial-hasse} coincides with the pullback via $\mathbf{pr}_1$ of the partial Hasse invariant $h_{\tau}\in \Gamma(X, \omega_{\mathcal{A}/X, \sigma^{-1}\tau}^{\otimes p}\otimes \omega_{\mathcal{A}/X, \tau}^{\otimes -1})$ defined in \cite{goren-oort}. Therefore, for any $\mathtt{T}\subset \Sigma_{\infty}$, the pullback along $\mathbf{pr}_1$ of the GO-stratum $X_{\mathtt{T}}\subseteq X$ defined by the vanishing of $\{h_{\tau}:\tau\in \mathtt{T}\}$ is the same as the pullback along $\boldsymbol{\alpha}$ of the GO-stratum defined by $\{h_{\tilde{\tau}}:\tau\in \mathtt{T}\}$. \end{proof} \begin{remark} It would be interesting to know, in general, whether the GO-strata on quaternionic Shimura varieties depend on the auxiliary choice of CM field $E$. \end{remark} To understand the ``action'' of the twisted partial Frobenius on GO-strata, we need the following. \begin{lemma} \label{L:partial Frobenius vs partial Hasse inv} Let $x = (A, \iota, \lambda, \bar \alpha_{K'}) $ be a point of $X'$ with values in a noetherian $k_0$-scheme $S$, and $\mathfrak{F}'_{\mathfrak{p}^2}(x) = (A', \iota', \lambda', \bar \alpha'_{K'})$ be the image of $x$ under the twisted partial Frobenius at $\mathfrak{p}$ (Subsection~\ref{S:partial Frobenius}) (which lies on another Shimura variety). Then $h_{\tilde \tau} (x) = 0$ if and only if $h_{\sigma_{\mathfrak{p}}^2\tilde \tau}(\mathfrak{F}'_{\mathfrak{p}^2}(x))=0$. \end{lemma} \begin{proof} The statement is clear if $\tilde\tau\notin \Sigma_{E,\infty/\mathfrak{p}}$, since $\mathfrak{F}'_{\mathfrak{p}^2}$ induces a canonical isomorphism of $p$-divisible groups $A[\mathfrak{q}^{\infty}]\simeq A'[\mathfrak{q}^{\infty}]$ for $\mathfrak{q}\in \Sigma_p$ with $\mathfrak{q}\neq \mathfrak{p}$. Consider the case $\tilde\tau \in \Sigma_{E,\infty/\mathfrak{p}}$. 
We claim that there exists an isomorphism \[ H^{\mathrm{dR}}_1(A'/S)^{\circ}_{\tilde\tau}\cong (H^{\mathrm{dR}}_1(A/S)^{\circ}_{\sigma^{-2}\tilde\tau})^{(p^2)} \] compatible with the action of $F$ and $V$ on both sides as $\tilde\tau\in \Sigma_{E,\infty/\mathfrak{p}}$ varies. By Lemma~\ref{Lemma:partial-Hasse}, the lemma follows immediately from the claim. It thus remains to prove the claim. Indeed, the $\mathfrak{p}$-component of the de Rham homology $$ H^{\mathrm{dR}}_1(A'/S)_{\mathfrak{p}}:=\bigoplus_{\tilde\tau\in \Sigma_{E,\infty/\mathfrak{p}}}H^{\mathrm{dR}}_1(A'/S)_{\tilde\tau} $$ is canonically isomorphic to the evaluation at the trivial pd-thickening $S\hookrightarrow S$, denoted by $\cD(A'[\mathfrak{p}^{\infty}])_S$, of the reduced covariant Dieudonn\'e crystal of $A'[\mathfrak{p}^{\infty}]$. By definition of $\mathfrak{F}'_{\mathfrak{p}^2}$, the $p$-divisible group $A'[\mathfrak{p}^{\infty}]\cong (A/\Ker_{\mathfrak{p}^2})[\mathfrak{p}^{\infty}]$ is isomorphic to the quotient of $A[\mathfrak{p}^{\infty}]$ by the kernel of its $p^2$-Frobenius $A[\mathfrak{p}^{\infty}]\rightarrow (A[\mathfrak{p}^{\infty}])^{(p^2)}$. Therefore, by functoriality of Dieudonn\'e crystals, one has $\cD(A'[\mathfrak{p}^{\infty}])_S=\cD(A[\mathfrak{p}^\infty])_S^{(p^2)}$, whence the claim. \end{proof} One deduces immediately the following. \begin{cor} For $\mathrm{Sh}_{\tilde \mathtt{S}} = \mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}})_{k_0}$ and $\mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})_{k_0}$, the twisted partial Frobenius map $\mathfrak{F}_{\mathfrak{p}^2}: \mathrm{Sh}_{\tilde \mathtt{S}} \to \mathrm{Sh}_{\sigma_\mathfrak{p}^2 \tilde \mathtt{S}}$ takes the subvariety $\mathrm{Sh}_{\tilde \mathtt{S}, \mathtt{T}}$ to $\mathrm{Sh}_{\sigma_\mathfrak{p}^2\tilde \mathtt{S}, \sigma_\mathfrak{p}^2 \mathtt{T}}$ for each $\mathtt{T}\subseteq \Sigma_{\infty}-\mathtt{S}_{\infty}$. 
\end{cor} \section{The global geometry of the GO-strata: Helm's isogeny trick}\label{Section:GO-geometry} In this section, we will prove that each closed GO-stratum of the special fiber of the unitary Shimura variety defined in Definition~\ref{Defn:GO-strata} is a $(\mathbb{P}^1)^N$-bundle over the special fiber of another unitary Shimura variety for some appropriate integer $N$. This then allows us to deduce a similar result for quaternionic Shimura varieties. This section is largely inspired by Helm's pioneering work \cite{helm}, where he considered the case when $p$ splits in $E_0/\mathbb{Q}$ and $\mathtt{S}$ is ``sparse'' (we refer to \textit{loc. cit.} for the definition of a sparse subset; essentially, this means that, for any $\tau \in \Sigma_\infty$, $\tau$ and $\sigma\tau$ cannot belong to $\mathtt{S}$ simultaneously.) \subsection{The associated quaternionic Shimura data for a GO-stratum}\label{S:quaternion-data-T} We first introduce the recipe for describing general GO-strata. We recommend first reading the light version of the same recipe in the special case of Hilbert modular varieties, as explained in the introduction (Subsection~\ref{S:intro GO-strata}), before diving into the general but more complicated definition below. Keep the notation as in the previous sections. Let $\mathtt{T}$ be a subset of $\Sigma_{\infty}-\mathtt{S}_{\infty}$. Our main theorem will say that the Goren-Oort stratum $\mathbf{Sh}_K(G_\mathtt{S})_{\overline \mathbb{F}_p, \mathtt{T}}$ is a $(\mathbb{P}^1)^N$-bundle over $\mathbf{Sh}_{K_\mathtt{T}}(G_{\mathtt{S}(\mathtt{T})})_{\overline{\mathbb{F}}_p}$ for some $N \in \mathbb{Z}_{\geq0}$, some even subset $\mathtt{S}(\mathtt{T})$ of places of $F$, and some open compact subgroup $K_{\mathtt{T}}\subseteq G_{\mathtt{S}(\mathtt{T})}(\mathbb{A}^{\infty})$. We describe the set $\mathtt{S}(\mathtt{T})$ now. For each prime $\mathfrak{p}\in \Sigma_{p}$, we put $\mathtt{T}_{/\mathfrak{p}}=\mathtt{T}\cap \Sigma_{\infty/\mathfrak{p}}$. 
We first define a subset $\mathtt{T}'_{/\mathfrak{p}}\subseteq \Sigma_{\infty/\mathfrak{p}}\cup \{\mathfrak{p}\} $ containing $\mathtt{T}_{/\mathfrak{p}}$, depending on the type of $\mathfrak{p}$ as in Subsection~\ref{S:level-structure-at-p}, and we then put \begin{equation}\label{Equ:defn-S-T} \mathtt{T}'=\coprod_{\mathfrak{p}\in \Sigma_p}\mathtt{T}'_{/\mathfrak{p}},\quad\text{and }\quad\mathtt{S}(\mathtt{T})=\mathtt{S} \sqcup \mathtt{T}'. \end{equation} We separate the discussion into several cases: \begin{itemize} \item If $\mathfrak{p}$ is of type $\alpha^\sharp$ or type $\beta^\sharp$ for $\mathbf{Sh}_{K}(G_{\mathtt{S}})$, we put $\mathtt{T}'_{/\mathfrak{p}} = \emptyset$. \item If $\mathfrak{p}$ is of type $\alpha$ for $\mathtt{S}$, i.e. $(\Sigma_{\infty/\mathfrak{p}}-\mathtt{S}_{\infty/\mathfrak{p}})$ has even cardinality, we distinguish two cases: \begin{itemize} \item (Case $\alpha 1$) $\mathtt{T}_{/\mathfrak{p}}\subsetneq \Sigma_{\infty/\mathfrak{p}}-\mathtt{S}_{\infty/\mathfrak{p}}$. We write $\mathtt{S}_{\infty/\mathfrak{p}}\cup \mathtt{T}_{/\mathfrak{p}}=\coprod C_i$ as a disjoint union of chains. Here, by a chain, we mean that there exist $\tau_i\in \mathtt{S}_{\infty/\mathfrak{p}}\cup \mathtt{T}_{/\mathfrak{p}}$ and an integer $m_i\geq 0$ such that $C_i=\{\sigma^{-a}\tau_i: 0\leq a\leq m_i\}$ is contained in $\mathtt{S}_{\infty/\mathfrak{p}} \cup \mathtt{T}_{/\mathfrak{p}}$ and $\sigma\tau_i, \sigma^{-m_i-1}\tau_i\notin (\mathtt{S}_{\infty/\mathfrak{p}}\cup \mathtt{T}_{/\mathfrak{p}})$. We put $\mathtt{T}'_{/\mathfrak{p}} = \coprod_i C'_i$, where \[ C'_i: = \left\{ \begin{array}{ll} C_i \cap \mathtt{T}_{/\mathfrak{p}} & \textrm{if } \#(C_i \cap \mathtt{T}_{/\mathfrak{p}}) \textrm{ is even}; \\ (C_i\cap \mathtt{T}_{/\mathfrak{p}})\cup \{\sigma^{-m_i-1}\tau_i\}& \textrm{if } \#(C_i \cap \mathtt{T}_{/\mathfrak{p}}) \textrm{ is odd}. \end{array} \right. 
\] For example, if $\Sigma_{\infty/\mathfrak{p}}=\{\tau_0, \sigma^{-1}\tau_0,\dots, \sigma^{-9}\tau_0\}$, $\mathtt{S}_{\infty/\mathfrak{p}}=\{\sigma^{-2}\tau_0, \sigma^{-6}\tau_0\}$, and $\mathtt{T}_{/\mathfrak{p}}=\{\sigma^{-3}\tau_0, \sigma^{-5}\tau_0, \sigma^{-7}\tau_0\}$, then $\mathtt{S}_{\infty/\mathfrak{p}} \cup \mathtt{T}_{/\mathfrak{p}}$ is separated into two chains $C_1 = \{\sigma^{-2}\tau_0, \sigma^{-3} \tau_0\}$ and $C_2 = \{\sigma^{-5}\tau_0, \sigma^{-6}\tau_0, \sigma^{-7}\tau_0\}$; we have $ \mathtt{T}'_{/\mathfrak{p}}=\{\sigma^{-3}\tau_0,\sigma^{-4}\tau_0, \sigma^{-5}\tau_0, \sigma^{-7}\tau_0\}. $ An alternative way to understand the partition of $\mathtt{T}_{/\mathfrak{p}}$ is to view it as a subset of $\Sigma_{\infty/\mathfrak{p}} - \mathtt{S}_{\infty/\mathfrak{p}}$ with the cycle structure inherited from $\Sigma_{\infty/\mathfrak{p}}$; the sets $C_i \cap \mathtt{T}_{/\mathfrak{p}}$ then simply group the elements of $\mathtt{T}_{/\mathfrak{p}}$ into connected subchains. \item (Case $\alpha 2$) $\mathtt{T}_{/\mathfrak{p}}= \Sigma_{\infty/\mathfrak{p}}-\mathtt{S}_{\infty/\mathfrak{p}}$. We put $\mathtt{T}'_{/\mathfrak{p}}=\mathtt{T}_{/\mathfrak{p}}$. \end{itemize} \item If $\mathfrak{p}$ is of type $\beta$ for $\mathtt{S}$, i.e. $(\Sigma_{\infty/\mathfrak{p}}-\mathtt{S}_{\infty/\mathfrak{p}})$ has odd cardinality and $B_{\mathtt{S}}$ splits at $\mathfrak{p}$, we distinguish two cases: \begin{itemize} \item (Case $\beta 1$) $\mathtt{T}_{/\mathfrak{p}}\subsetneq \Sigma_{\infty/\mathfrak{p}}-\mathtt{S}_{\infty/\mathfrak{p}}$. In this case, we define $\mathtt{T}'_{/\mathfrak{p}}$ using the same rule as in Case $\alpha 1$. \item (Case $\beta 2$) $\mathtt{T}_{/\mathfrak{p}}=\Sigma_{\infty/\mathfrak{p}}-\mathtt{S}_{\infty/\mathfrak{p}}$. We put $\mathtt{T}'_{/\mathfrak{p}}=\mathtt{T}_{/\mathfrak{p}}\cup \{\mathfrak{p}\}$.
\end{itemize} \end{itemize} In either case, we put $\mathtt{T}'_{\infty/\mathfrak{p}} = \mathtt{T}'_{/\mathfrak{p}} \cap \Sigma_\infty$; it is equal to $\mathtt{T}'_{/\mathfrak{p}}$ except in case $\beta2$. It is easy to see that each $\mathtt{T}'_{/\mathfrak{p}}$ has even cardinality. Therefore, $\mathtt{S}(\mathtt{T})$ is also even, and it defines a quaternion algebra $B_{\mathtt{S}(\mathtt{T})}$ over $F$. Note that $\mathtt{S}(\mathtt{T})$ still satisfies Hypothesis~\ref{H:B_S-splits-at-p}. Let $G_{\mathtt{S}(\mathtt{T})}=\Res_{F/\mathbb{Q}}(B_{\mathtt{S}(\mathtt{T})}^{\times})$ be the algebraic group over $\mathbb{Q}$ associated to $B_{\mathtt{S}(\mathtt{T})}^{\times}$. We fix an isomorphism $B_{\mathtt{S}}\otimes_{F}F_{\mathfrak{l}}\simeq B_{\mathtt{S}(\mathtt{T})}\otimes_{F}F_{\mathfrak{l}}$ whenever $\{\mathfrak{l}\}\cap \mathtt{S}=\{\mathfrak{l}\}\cap\mathtt{S}(\mathtt{T})$. We define an open compact subgroup $K_{\mathtt{T}}=K_{\mathtt{T}}^pK_{\mathtt{T}, p}\subseteq G_{\mathtt{S}(\mathtt{T})}(\mathbb{A}^{\infty})$ determined by $K$ as follows. \begin{itemize} \item We put $K_{\mathtt{T}}^p=K^p$. This makes sense, because $B_{\mathtt{S}}\otimes_{F}F_{\mathfrak{l}}\simeq B_{\mathtt{S}(\mathtt{T})}\otimes_{F}F_{\mathfrak{l}}$ for any finite place $\mathfrak{l}$ prime to $p$. \item For $K_{\mathtt{T}, p}=\prod_{\mathfrak{p}\in \Sigma_p}K_{\mathtt{T}, \mathfrak{p}}$, we take $K_{\mathtt{T},\mathfrak{p}}=K_{\mathfrak{p}}$, unless we are in case $\alpha 2$ or $\beta2$. \begin{itemize} \item If we are in case $\alpha 2$ for $\mathrm{Sh}_{K}(G_{\mathtt{S}})$, we have $B_{\mathtt{S}(\mathtt{T})}\otimes_{F}F_{\mathfrak{p}}\simeq B_{\mathtt{S}}\otimes_{F}F_{\mathfrak{p}}\simeq \mathrm{M}_2(F_{\mathfrak{p}})$.
We take $K_{\mathtt{T}, \mathfrak{p}}=K_{\mathfrak{p}}$ if $\mathtt{T}_{/\mathfrak{p}}=(\Sigma_{\infty/\mathfrak{p}}-\mathtt{S}_{\infty/\mathfrak{p}})=\emptyset$, and $K_{\mathtt{T}, \mathfrak{p}}=\mathrm{Iw}_{\mathfrak{p}}$ if $\mathtt{T}_{/\mathfrak{p}}\neq \emptyset$. \item If we are in case $\beta 2$ (and $\beta^\sharp$), $B_{\mathtt{S}(\mathtt{T})}$ is ramified at $\mathfrak{p}$. We take $K_{\mathtt{T}, \mathfrak{p}}=\mathcal{O}_{B_{F_{\mathfrak{p}}}}^{\times}$, where $\mathcal{O}_{B_{F_{\mathfrak{p}}}}$ is the unique maximal order of the division algebra over $F_{\mathfrak{p}}$ with invariant $1/2$. \end{itemize} \end{itemize} The level $K_{\mathtt{T}}$ fits into the framework considered in Subsection~\ref{S:level-structure-at-p}. We thus obtain a quaternionic Shimura variety $\mathrm{Sh}_{K_{\mathtt{T}}}(G_{\mathtt{S}(\mathtt{T})})$, and its integral model $\mathbf{Sh}_{K_{\mathtt{T}}}(G_{\mathtt{S}(\mathtt{T})})$ is given by Corollary~\ref{C:integral-model-quaternion}. Note that \begin{itemize} \item if we are in case $\alpha 1$ above, then $\mathfrak{p}$ is of type $\alpha$ for the Shimura variety $\mathrm{Sh}_{K_{\mathtt{T}}}(G_{\mathtt{S}(\mathtt{T})})$; \item if we are in case $\alpha 2$ above, then $\mathfrak{p}$ is of type $\alpha^\sharp$ for $\mathrm{Sh}_{K_{\mathtt{T}}}(G_{\mathtt{S}(\mathtt{T})})$ unless $\mathfrak{p}$ is of type $\alpha$ for $\mathrm{Sh}_{K}(G_{\mathtt{S}})$ and $\mathtt{T}_{/\mathfrak{p}}=\Sigma_{\infty/\mathfrak{p}}-\mathtt{S}_{\infty/\mathfrak{p}}=\emptyset$, in which case $\mathfrak{p}$ remains of type $\alpha$ for $\mathrm{Sh}_{K_{\mathtt{T}}}(G_{\mathtt{S}(\mathtt{T})})$; \item if we are in case $\beta 1$, then $\mathfrak{p}$ is of type $\beta$ for $\mathrm{Sh}_{K_{\mathtt{T}}}(G_{\mathtt{S}(\mathtt{T})})$; \item if we are in case $\beta2$ or $\beta^\sharp$ above, then $\mathfrak{p}$ is of type $\beta^\sharp$ for $\mathrm{Sh}_{K_{\mathtt{T}}}(G_{\mathtt{S}(\mathtt{T})})$.
\end{itemize} \begin{theorem}\label{T:main-thm} For a subset $\mathtt{T}\subseteq \Sigma_{\infty}-\mathtt{S}_{\infty}$, the GO-stratum $\mathbf{Sh}_{K}(G_\mathtt{S})_{\overline \mathbb{F}_p, \mathtt{T}}$ is isomorphic to a $(\mathbb{P}^1)^{I_{\mathtt{T}}}$-bundle over $\mathbf{Sh}_{K_{\mathtt{T}}}(G_{\mathtt{S}(\mathtt{T})})_{\overline \mathbb{F}_p}$, where $\mathtt{S}(\mathtt{T})$ is as described above, and the index set is given by $$ I_{\mathtt{T}}=\mathtt{S}(\mathtt{T})_{\infty}-(\mathtt{S}_{\infty}\cup \mathtt{T})=\bigcup_{\mathfrak{p}\in \Sigma_p}(\mathtt{T}'_{\infty/\mathfrak{p}}-\mathtt{T}_{/\mathfrak{p}}). $$ Moreover, this isomorphism is compatible with the action of $G_{\mathtt{S}}(\mathbb{A}^{\infty,p})$, if we let $K^p\subseteq G_{\mathtt{S}}(\mathbb{A}^{\infty,p})$ vary. \end{theorem} Theorem~\ref{T:main-thm} will follow from the analogous statement (Theorem~\ref{T:main-thm-unitary} and Corollary~\ref{C:main-thm-product}) in the unitary case. But note Remark~\ref{R:quaternionic Shimura reciprocity not compatible}. \subsection{The signature at infinity for the unitary Shimura varieties} \label{S:tilde S(T)} In order to describe the unitary Shimura data associated to $\mathrm{Sh}_{K_{\mathtt{T}}}(G_{\mathtt{S}(\mathtt{T})})$ as in Subsections~\ref{S:CM extension} and \ref{S:unitary-shimura}, we need to pick a lift $\tilde \mathtt{S}(\mathtt{T})$ of the set $\mathtt{S}(\mathtt{T})$ to embeddings of $E$. More precisely, we will define a subset $\tilde \mathtt{S}(\mathtt{T})_\infty = \coprod_{\mathfrak{p} \in \Sigma_p} \tilde \mathtt{S}(\mathtt{T})_{\infty/\mathfrak{p}}$, where $\tilde \mathtt{S}(\mathtt{T})_{\infty/\mathfrak{p}}$ consists of exactly one lift $\tilde \tau \in \Sigma_{E, \infty}$ for each $\tau \in \mathtt{S}(\mathtt{T})_{\infty/\mathfrak{p}}$. Then we put $\tilde \mathtt{S}(\mathtt{T}) = (\mathtt{S}(\mathtt{T}), \tilde \mathtt{S}(\mathtt{T})_\infty)$. So we just need to assign such choices of lifts. 
\begin{itemize} \item When $\tau \in \mathtt{S}(\mathtt{T})_{\infty/\mathfrak{p}}$ belongs to $\mathtt{S}_{\infty/\mathfrak{p}}$, we choose its lift $\tilde \tau \in \Sigma_{E, \infty}$ to be the one that belongs to $\tilde \mathtt{S}$. \end{itemize} We now specify our choices of the lifts in $\tilde\mathtt{S}(\mathtt{T})_{\infty/\mathfrak{p}}$ for the elements of $\mathtt{T}'_{\infty/\mathfrak{p}}$, which are collectively denoted by $\tilde \mathtt{T}'_{/\mathfrak{p}}$. We separate into cases and freely use the notation from Subsection~\ref{S:quaternion-data-T}. There is nothing to do if $\mathfrak{p}$ is of type $\alpha^\sharp$ or type $\beta^\sharp$ (for $\mathtt{S}$). \begin{itemize} \item $\mathfrak{p}$ is of type $\alpha$ (for $\mathtt{S}$); in this case, $\mathfrak{p}$ splits into two primes $\mathfrak{q}$ and $\mathfrak{q}^c$ in $E$. For each $\tau \in \Sigma_{\infty/\mathfrak{p}}$, we use $\tilde \tau$ to denote its lift to $\Sigma_{E, \infty}$ that corresponds to the $p$-adic place $\mathfrak{q}$. \begin{itemize} \item (Case $\alpha1$) For a chain $C_i=\{\sigma^{-a}\tau_i, 0\leq a\leq m_{i}\}\subseteq \mathtt{S}_{\infty/\mathfrak{p}}\cup\mathtt{T}_{/\mathfrak{p}}$ and the subset $C'_i =\{ \sigma^{-a_1}\tau_i, \dots, \sigma^{-a_{r_i}}\tau_i\}$ as defined in Subsection~\ref{S:quaternion-data-T} for some $0\leq a_{1}< \dots<a_{r_i}\leq m_i+1$ (note that $r_i$ is always even by construction), we put \[ \tilde C'_i = \{ \sigma^{-a_1}\tilde \tau_i, \sigma^{-a_2}\tilde \tau^c_i, \sigma^{-a_3}\tilde \tau_i, \dots, \sigma^{-a_{r_i}}\tilde \tau^c_i\}; \] put $\tilde \mathtt{T}'_{/\mathfrak{p}} = \coprod_i \tilde C'_i$. \item (Case $\alpha2$) We need to fix $\tau_0 \in \mathtt{T}_{/\mathfrak{p}} = \Sigma_{\infty/\mathfrak{p}}-\mathtt{S}_{\infty/\mathfrak{p}}$ and write $\mathtt{T}_{/\mathfrak{p}}$ as $\{\sigma^{-a_1}\tau_0, \dots, \sigma^{-a_{2r}}\tau_0\}$ for integers $0 = a_1 < \cdots < a_{2r} \leq f_\mathfrak{p}-1$.
We put \[ \tilde \mathtt{T}'_{/\mathfrak{p}} = \{\sigma^{-a_1}\tilde \tau_0, \sigma^{-a_2}\tilde \tau^c_0, \sigma^{-a_3}\tilde \tau_0, \dots, \sigma^{-a_{2r}}\tilde \tau^c_0\}. \] \end{itemize} \item $\mathfrak{p}$ is of type $\beta$ (for $\mathtt{S}$). In this case, $\mathfrak{p}$ is inert in $E/F$, and we do not have a canonical choice for the lift $\tilde \tau$ of a given $\tau$. \begin{itemize} \item (Case $\beta1$) In this case, we fix a partition of the preimage of $C'_i$ under the map $\Sigma_{E, \infty/\mathfrak{p}} \to \Sigma_{\infty/\mathfrak{p}}$ into two chains $\tilde C''_i \coprod \tilde C''^c_i$, where \[ \tilde C''_i = \{\sigma^{-a_1} \tilde \tau_i, \dots, \sigma^{-a_{r_i}} \tilde \tau_i\}, \quad \textrm{and} \quad \tilde C''^c_i = \{\sigma^{-a_1} \tilde \tau^c_i, \dots, \sigma^{-a_{r_i}} \tilde \tau^c_i\}. \] Here, the choice of $\tilde\tau_i$ is arbitrary, and $r_i$ is always even by construction. We put \[ \tilde C'_i :=\{ \sigma^{-a_1}\tilde \tau_i, \sigma^{-a_2}\tilde \tau^c_i, \sigma^{-a_3}\tilde \tau_i, \dots, \sigma^{-a_{r_i}}\tilde \tau^c_i\}. \] Finally, we set $\tilde \mathtt{T}'_{/\mathfrak{p}} = \coprod_i \tilde C'_i$. \item (Case $\beta2$) We fix an element $\tilde \tau_0 \in \Sigma_{E, \infty/\mathfrak{p}}$ lifting some element from $\Sigma_{\infty/\mathfrak{p}}$. Then the preimage of $\mathtt{T}'_{\infty/\mathfrak{p}}$ under the natural map $\Sigma_{E, \infty/\mathfrak{p}} \to \Sigma_{\infty/\mathfrak{p}}$ can be written as $\{ \sigma^{-a_1}\tilde \tau_0, \dots, \sigma^{-a_{2r}}\tilde \tau_0\}$ (where $r = \#(\Sigma_{\infty/\mathfrak{p}} - \mathtt{S}_{\infty/\mathfrak{p}})$ is odd), with $0=a_1< \cdots < a_{2r}\leq 2f_\mathfrak{p}-1$ and $a_{r+i} = a_i + f_\mathfrak{p}$ for all $i$. We put \[ \tilde \mathtt{T}'_{/\mathfrak{p}} = \{ \sigma^{-a_1} \tilde \tau_0, \sigma^{-a_3} \tilde \tau_0, \dots, \sigma^{-a_{2r-1}} \tilde \tau_0\}. \] Since $r$ is odd, this consists of exactly one lift of each element of $\mathtt{T}'_{\infty/\mathfrak{p}}$.
\end{itemize} \end{itemize} Now, we can assign integers $s_{\mathtt{T}, \tilde \tau}$ according to $\tilde \mathtt{S}(\mathtt{T})$: \begin{itemize} \item if $\tau \in \Sigma_\infty - \mathtt{S}(\mathtt{T})_\infty$, we have $s_{\mathtt{T}, \tilde \tau} = 1$ for all lifts $\tilde \tau$ of $\tau$; \item if $\tau \in \mathtt{S}(\mathtt{T})_\infty$ and $\tilde \tau$ is the lift in $\tilde \mathtt{S}(\mathtt{T})_\infty$, we have $s_{\mathtt{T},\tilde \tau} = 0$ and $s_{\mathtt{T},\tilde \tau^c} = 2$. \end{itemize} We put $\tilde \mathtt{T}' = \coprod_{\mathfrak{p} \in \Sigma_p} \tilde \mathtt{T}'_{/\mathfrak{p}}$ and write $\tilde \mathtt{T}'^c$ for the set of complex conjugates of the elements of $\tilde \mathtt{T}'$. \\ Now we compare the PEL data for the Shimura varieties for $G'_{\tilde \mathtt{S}}$ and $G'_{\tilde \mathtt{S}(\mathtt{T})}$. We fix an isomorphism $\theta_{\mathtt{T}}:D_{\mathtt{S}}\rightarrow D_{\mathtt{S}(\mathtt{T})}$ that sends $\mathcal{O}_{D_{\mathtt{S}},\mathfrak{p}}$ to $\mathcal{O}_{D_{\mathtt{S}(\mathtt{T})},\mathfrak{p}}$ for each $\mathfrak{p}\in \Sigma_p$, where $\mathcal{O}_{D_{\mathtt{S}},\mathfrak{p}}$ and $\mathcal{O}_{D_{\mathtt{S}(\mathtt{T})},\mathfrak{p}}$ are respectively fixed maximal orders of $D_{\mathtt{S}}\otimes_F F_{\mathfrak{p}}$ and $D_{\mathtt{S}(\mathtt{T})}\otimes_{F}F_{\mathfrak{p}}$ as in Subsection~\ref{S:PEL-Shimura-data}. \begin{lemma} \label{L:compare D_S with D_S(T)} Let $\delta_{\mathtt{S}}\in (D_{\mathtt{S}}^\mathrm{sym})^{\times}$ be an element satisfying Lemma~\ref{L:property-PEL-data}(1).
Then there exists an element $\delta_{\mathtt{S}(\mathtt{T})}\in (D_{\mathtt{S}(\mathtt{T})}^{\mathrm{sym}})^{\times}$ satisfying the same condition with $\mathtt{S}$ replaced by $\mathtt{S}(\mathtt{T})$, such that, if $*_{\mathtt{S}}: l\mapsto \delta_{\mathtt{S}}^{-1}\bar{l}\delta_{\mathtt{S}}$ and $*_{\mathtt{S}(\mathtt{T})}: l\mapsto \delta_{\mathtt{S}(\mathtt{T})}^{-1}\bar{l}\delta_{\mathtt{S}(\mathtt{T})}$ denote the involutions on $D_{\mathtt{S}}$ and $D_{\mathtt{S}(\mathtt{T})}$ induced by $\delta_{\mathtt{S}}$ and $\delta_{\mathtt{S}(\mathtt{T})}$ respectively, then $\theta_{\mathtt{T}}$ induces an isomorphism of algebras with positive involutions $(D_{\mathtt{S}},*_{\mathtt{S}})\xrightarrow{\sim} (D_{\mathtt{S}(\mathtt{T})}, *_{\mathtt{S}(\mathtt{T})})$. \end{lemma} \begin{proof} We first choose an arbitrary ${\delta}'_{\mathtt{S}(\mathtt{T})}\in (D_{\mathtt{S}(\mathtt{T})}^{\mathrm{sym}})^{\times}$ satisfying Lemma~\ref{L:property-PEL-data}(1). Let $*'_{\mathtt{S}(\mathtt{T})}$ denote the involution $l\mapsto (\delta'_{\mathtt{S}(\mathtt{T})})^{-1}\bar{l}\delta'_{\mathtt{S}(\mathtt{T})}$ on $D_{\mathtt{S}(\mathtt{T})}$. By the Skolem--Noether theorem, there exists $g\in D^{\times}_{\mathtt{S}(\mathtt{T})}$ such that $\theta_{\mathtt{T}}(x)^{*'_{\mathtt{S}(\mathtt{T})}}=g\theta_{\mathtt{T}}(x^{*_\mathtt{S}})g^{-1}$ for all $x\in D_{\mathtt{S}}$. Since both $*_{\mathtt{S}}^2$ and $*'^2_{\mathtt{S}(\mathtt{T})}$ are the identity, we get $g^{*'_{\mathtt{S}(\mathtt{T})}}=g\mu$ for some $\mu\in E^{\times}$ with $\bar\mu\mu=1$. By Hilbert's Theorem 90, we can write $\mu={\lambda}/{\bar\lambda}$ for some $\lambda\in E^{\times}$. Up to replacing $g$ by $g\lambda$, we may assume that $g^{*'_{\mathtt{S}(\mathtt{T})}}=g$, or equivalently, $\overline{\delta'_{\mathtt{S}(\mathtt{T})}g}=\delta'_{\mathtt{S}(\mathtt{T})}g$ and hence $\delta'_{\mathtt{S}(\mathtt{T})}g \in (D_{\mathtt{S}(\mathtt{T})}^\mathrm{sym})^\times$.
Note that we still have the freedom to modify $g$ by an element of $F^{\times}$ without changing $*_{\mathtt{S}(\mathtt{T})}$. We claim that, up to such a modification of $g$, $\delta_{\mathtt{S}(\mathtt{T})}=\delta'_{\mathtt{S}(\mathtt{T})}g$ will answer the question. Indeed, by construction, $\theta_{\mathtt{T}}$ is a $*$-isomorphism, i.e. $\theta_{\mathtt{T}}(x)^{*_{\mathtt{S}(\mathtt{T})}}=\theta_{\mathtt{T}}(x^{*_\mathtt{S}})$. Note that $\theta_{\mathtt{T}}$ sends $\mathcal{O}_{D_{\mathtt{S}},\mathfrak{p}}$ to $\mathcal{O}_{D_{\mathtt{S}(\mathtt{T})},\mathfrak{p}}$ for every $\mathfrak{p}\in \Sigma_p$. Up to modifying $g$ by an element of $F^{\times}$, we may assume that $g\in \mathcal{O}_{D_{\mathtt{S}(\mathtt{T})},\mathfrak{p}}^{\times}$ for all $\mathfrak{p}\in \Sigma_p$. Then it is clear that $\delta_{\mathtt{S}(\mathtt{T})}$ satisfies the first part of Lemma~\ref{L:property-PEL-data}(1), since so does $\delta'_{\mathtt{S}(\mathtt{T})}$ by assumption. It remains to prove that, up to multiplying $g$ by an element of $\mathcal{O}_{F,(p)}^{\times}$, $$ (v,w)\mapsto \psi_{\delta_{\mathtt{S}(\mathtt{T})}}(v,w h'_{\tilde\mathtt{S}(\mathtt{T})}(\mathbf{i})^{-1})=\mathrm{Tr}_{D_{\mathtt{S}(\mathtt{T}),\mathbb{R}}/\mathbb{R}}(\sqrt{\mathfrak{d}}vh'_{\tilde\mathtt{S}(\mathtt{T})}(\mathbf{i})\bar{w}\delta_{\mathtt{S}(\mathtt{T})}) $$ on $D_{\mathtt{S}(\mathtt{T}),\mathbb{R}}:=D_{\mathtt{S}(\mathtt{T})}\otimes_{\mathbb{Q}}\mathbb{R}$ is positive definite, where $\psi_{\delta_{\mathtt{S}(\mathtt{T})}}$ is the $*_{\mathtt{S}(\mathtt{T})}$-hermitian alternating form on $D_{\mathtt{S}(\mathtt{T})}$ defined as in Subsection~\ref{S:PEL-Shimura-data}. Since the elements $\delta_{\mathtt{S}}$ and $\delta'_{\mathtt{S}(\mathtt{T})}$ satisfy similar positivity conditions by assumption, we get two semi-simple $\mathbb{R}$-algebras with positive involution $(D_{\mathtt{S},\mathbb{R}}, *_{\mathtt{S}})$ and $(D_{\mathtt{S}(\mathtt{T}),\mathbb{R}}, *'_{\mathtt{S}(\mathtt{T})})$.
By \cite[Lemma~2.11]{kottwitz}, there exists an element $b\in D^{\times}_{\mathtt{S}(\mathtt{T}), \mathbb{R}}$ such that $b\theta_{\mathtt{T}}(x^{*_{\mathtt{S}}})b^{-1}=(b\theta_{\mathtt{T}}(x)b^{-1})^{*'_{\mathtt{S}(\mathtt{T})}}$. It follows that $g=b^{*'_{\mathtt{S}(\mathtt{T})}}b\lambda$ with $\lambda\in (F\otimes_{\mathbb{Q}} \mathbb{R})^{\times}$. Up to multiplying $g$ by an element of $\mathcal{O}_{F,(p)}^{\times}$, we may assume that $\lambda$ is totally positive so that $\lambda=\xi^2$ with $\xi\in (F\otimes_{\mathbb{Q}} \mathbb{R})^{\times}$. Then, up to replacing $b$ by $b\xi$, we have $g=b^{*'_{\mathtt{S}(\mathtt{T})}}b$. The positivity of the form $\psi_{\delta_{\mathtt{S}(\mathtt{T})}}$ then follows immediately from the positivity of $\psi_{\delta'_{\mathtt{S}(\mathtt{T})}}$, and the fact that $\psi_{\delta_{\mathtt{S}(\mathtt{T})}}(v,wh'_{\tilde\mathtt{S}(\mathtt{T})}(\mathbf{i})^{-1})=\psi_{\delta'_{\mathtt{S}(\mathtt{T})}}(bv, bwh'_{\tilde\mathtt{S}(\mathtt{T})}(\mathbf{i})^{-1})$. \end{proof} \begin{lemma} \label{L:comparison of Hermitian space} We keep the choice of $\delta_{\mathtt{S}(\mathtt{T})}$ as in Lemma~\ref{L:compare D_S with D_S(T)}. Then there exists an isomorphism $\Theta_\mathtt{T}: D_\mathtt{S}(\mathbb{A}^{\infty,p}) \to D_{\mathtt{S}(\mathtt{T})}(\mathbb{A}^{\infty, p})$ of skew $*$-Hermitian spaces compatible with the actions of $D_\mathtt{S}$ and $D_{\mathtt{S}(\mathtt{T})}$, respectively. \end{lemma} \begin{proof} This could be done explicitly; we prefer, however, a quick indirect argument. Under Morita equivalence, we are essentially working with two-dimensional Hermitian spaces and the associated unitary groups. It is well-known that, over a nonarchimedean local field, there are exactly two isomorphism classes of two-dimensional Hermitian spaces, and the associated unitary groups are not isomorphic (see e.g. \cite[3.2.1]{minguez}). In our situation, we know that $G'_{\tilde \mathtt{S},1, v} \cong G'_{\tilde \mathtt{S}(\mathtt{T}),1,v}$ for any place $v \nmid p\infty$.
It follows that the associated Hermitian spaces at $v$ are isomorphic. The lemma follows. \end{proof} \begin{cor} \label{C:identify structure groups} The isomorphisms $\theta_\mathtt{T}$ and $\Theta_\mathtt{T}$ induce an isomorphism $\theta'_\mathtt{T}:\mathcal{G}'_{\tilde \mathtt{S}} \xrightarrow{\cong}\mathcal{G}'_{\tilde \mathtt{S}(\mathtt{T})}$; moreover, $\theta'_\mathtt{T} \times \mathrm{id}$ takes the subgroup $\mathcal{E}_{G, \mathtt{S}, k_0} \subset\mathcal{G}'_{\tilde \mathtt{S}} \times \Gal_{k_0}$ to the subgroup $\mathcal{E}_{G, \mathtt{S}(\mathtt{T}), k_0}\subset\mathcal{G}'_{\tilde \mathtt{S}(\mathtt{T})} \times \Gal_{k_0}$. \end{cor} \begin{proof} The first statement follows from the description of the two groups in \eqref{E:structure group description} and \eqref{E:description of G'' G'} and the interpretation of these groups as certain automorphism groups of the skew $*$-Hermitian spaces. The second statement follows from the description of both subgroups in Subsection~\ref{S:structure group} and the observation that the choice of signatures in Subsection~\ref{S:tilde S(T)} ensures that the reciprocity maps $\mathfrak{R}\mathrm{ec}_{k_0}$ for the two Shimura data are the \emph{same} at $p$. \end{proof} \subsection{Level structure of $\mathrm{Sh}_{K'_{\mathtt{T}}}(G'_{\tilde \mathtt{S}(\mathtt{T})})$} \label{S:level structure tilde ttS(ttT)} We now specify the level structure $K_{\mathtt{T}}'\subseteq G'_{\tilde \mathtt{S}(\mathtt{T})}(\mathbb{A}^{\infty})$. \begin{itemize} \item For the prime-to-$p$ level, since $\Theta_\mathtt{T}$ induces an isomorphism $G_{\tilde \mathtt{S}(\mathtt{T})}'(\mathbb{A}^{\infty,p}) \simeq G_{\tilde \mathtt{S}}'(\mathbb{A}^{\infty, p})$, the subgroup $K'^{p}\subseteq G_{\tilde \mathtt{S}}'(\mathbb{A}^{\infty,p})$ gives rise to a subgroup $K_{\mathtt{T}}'^{p}\subseteq G_{\tilde \mathtt{S}(\mathtt{T})}'(\mathbb{A}^{\infty,p})$.
\item For $K'_{\mathtt{T}, p}$, we take it to be the open compact subgroup of $G_{\tilde \mathtt{S}(\mathtt{T})}'(\mathbb{Q}_p)$ corresponding to $K_{\mathtt{T},p}\subseteq G_{\tilde \mathtt{S}(\mathtt{T})}(\mathbb{Q}_p)$ by the rule in Subsection~\ref{S:level-structure}. According to the discussion there, it suffices to choose a chain of lattices $\Lambda^{(1)}_{\mathtt{T},\mathfrak{p}}\subset \Lambda^{(2)}_{\mathtt{T},\mathfrak{p}}$ in $D_{\mathtt{S}(\mathtt{T})}\otimes_{F}F_{\mathfrak{p}}$ for each $\mathfrak{p}\in \Sigma_{p}$. Using the isomorphism $\theta_{\mathtt{T}}$, we can identify $D_{\mathtt{S}(\mathtt{T})}\otimes_F F_{\mathfrak{p}}$ with $D_{\mathtt{S}}\otimes_F F_{\mathfrak{p}}$, and hence with $\mathrm{M}_2(E \otimes_F F_\mathfrak{p})$. \begin{itemize} \item For $\mathfrak{p}\in \Sigma_{p}$ with $K_{\mathtt{T},\mathfrak{p}}=K_{\mathfrak{p}}$, we take $\Lambda_{\mathtt{T}, \mathfrak{p}}^{(1)}\subseteq \Lambda_{\mathtt{T},\mathfrak{p}}^{(2)}$ to be the same as the chain $\Lambda_{\mathfrak{p}}^{(1)}\subseteq \Lambda_{\mathfrak{p}}^{(2)}$ for defining $K_{p}'\subset G'_{\tilde \mathtt{S}}(\mathbb{Q}_p)$. \item For $\mathfrak{p}\in \Sigma_{p}$ with $K_{\mathtt{T}, \mathfrak{p}}\neq K_{\mathfrak{p}}$, the subgroup $K_{\mathtt{T}, \mathfrak{p}}$ is either the Iwahori subgroup of $\mathrm{GL}_2(\mathcal{O}_{F_{\mathfrak{p}}})$ or $\mathcal{O}_{B_{F_{\mathfrak{p}}}}^{\times}$. We then take $\Lambda_{\mathtt{T},\mathfrak{p}}^{(1)}\subsetneq \Lambda_{\mathtt{T},\mathfrak{p}}^{(2)}$ to be the corresponding lattices as in Subsection~\ref{S:level-structure} that define the Iwahori level at $\mathfrak{p}$. \end{itemize} Note that we always have $K_{p}'\subseteq K'_{\mathtt{T},p}$ under the isomorphism $G_{\tilde \mathtt{S}}'(\mathbb{Q}_p)\simeq G'_{\tilde \mathtt{S}(\mathtt{T})}(\mathbb{Q}_p)$ induced by $\theta_{\mathtt{T}}$.
\end{itemize} We also specify the lattices we use for both Shimura varieties: if $\Lambda_\mathtt{S}$ denotes the chosen lattice of $D_\mathtt{S}$, we choose the lattice of $D_{\mathtt{S}(\mathtt{T})}$ to be $\Lambda_{\mathtt{S}(\mathtt{T})} =\theta_\mathtt{T}(\Lambda_\mathtt{S})$. With these data, we have a unitary Shimura variety $\mathrm{Sh}_{K_{\mathtt{T}}'}(G_{\tilde \mathtt{S}(\mathtt{T})}')$ over the reflex field $E_{\tilde \mathtt{S}(\mathtt{T})}$, which is the field corresponding to the subgroup of the Galois group stabilizing the subset $\tilde \mathtt{S}(\mathtt{T}) \subseteq \Sigma_{E, \infty}$. To construct an integral model of $\mathrm{Sh}_{K'_{\mathtt{T}}}(G'_{\tilde \mathtt{S}(\mathtt{T})})$, we need to choose an order $\mathcal{O}_{D_{\mathtt{S}(\mathtt{T})}}$. Let $\mathcal{O}_{D_{\mathtt{S}}}$ be the order stable under $*$ and maximal at $p$ used to define the integral model $\mathbf{Sh}_{K'}(G_{\tilde \mathtt{S}}')$. We put $\mathcal{O}_{D_{\mathtt{S}(\mathtt{T})}}=\theta_{\mathtt{T}}(\mathcal{O}_{D_{\mathtt{S}}})$. For any $\mathfrak{p}\in \Sigma_{p}$, both $\mathcal{O}_{D_{\mathtt{S}}, \mathfrak{p}}$ and $\mathcal{O}_{D_{\mathtt{S}(\mathtt{T})}, \mathfrak{p}}$ can be identified with $\mathrm{M}_2(\mathcal{O}_{E}\otimes_{\mathcal{O}_F}\mathcal{O}_{F_{\mathfrak{p}}})$. We now have all the PEL data needed for Theorem~\ref{T:unitary-shimura-variety-representability}, which ensures that $\mathrm{Sh}_{K_{\mathtt{T}}'}(G_{\tilde \mathtt{S}(\mathtt{T})}')$ admits an integral model $\mathbf{Sh}_{K_{\mathtt{T}}'}(G_{\tilde \mathtt{S}(\mathtt{T})}')$ over $W(k_0)$. Using $\mathbf{Sh}_{K_{\mathtt{T}}'}(G_{\tilde \mathtt{S}(\mathtt{T})}')$, we can construct an integral model $\mathbf{Sh}_{K_{\mathtt{T}}}(G_{\mathtt{S}(\mathtt{T})})$ of the quaternionic Shimura variety $\mathrm{Sh}_{K_{\mathtt{T}}}(G_{\mathtt{S}(\mathtt{T})})$.
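To illustrate the numerology, consider again the example from Case $\alpha1$ of Subsection~\ref{S:quaternion-data-T}, where $\mathtt{T}_{/\mathfrak{p}}=\{\sigma^{-3}\tau_0, \sigma^{-5}\tau_0, \sigma^{-7}\tau_0\}$ and $\mathtt{T}'_{/\mathfrak{p}}=\{\sigma^{-3}\tau_0, \sigma^{-4}\tau_0, \sigma^{-5}\tau_0, \sigma^{-7}\tau_0\}$. The index set of Theorem~\ref{T:main-thm} then satisfies
\[
I_{\mathtt{T}}\cap \Sigma_{\infty/\mathfrak{p}}=\mathtt{T}'_{\infty/\mathfrak{p}}-\mathtt{T}_{/\mathfrak{p}}=\{\sigma^{-4}\tau_0\},
\]
so this prime contributes a single $\mathbb{P}^1$-factor to the bundles appearing in the theorems below.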
\begin{theorem}\label{T:main-thm-unitary} For a subset $\mathtt{T}\subseteq \Sigma_{\infty}-\mathtt{S}_{\infty}$, let $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0,\mathtt{T}}\subseteq \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0}$ denote the GO-stratum defined in Definition~\ref{Defn:GO-strata}. Let $I_{\mathtt{T}}$ be as in Theorem~\ref{T:main-thm}. Then we have the following: \begin{itemize} \item[(1)] (Description) $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0,\mathtt{T}}$ is isomorphic to a $(\mathbb{P}^1)^{I_{\mathtt{T}}}$-bundle over $\mathbf{Sh}_{K'_{\mathtt{T}}}(G'_{\tilde \mathtt{S}(\mathtt{T})})_{k_0}$. \item[(2)] (Compatibility of abelian varieties) Let $\pi_{\mathtt{T}}: \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0,\mathtt{T}}\rightarrow \mathbf{Sh}_{K'_{\mathtt{T}}}(G'_{\tilde \mathtt{S}(\mathtt{T})})_{k_0}$ denote the projection of the $(\mathbb{P}^1)^{I_{\mathtt{T}}}$-bundle in \emph{(1)}. The abelian schemes $\mathbf{A}'_{\tilde \mathtt{S},k_0}$ and $\pi_{\mathtt{T}}^*\mathbf{A}'_{\tilde \mathtt{S}(\mathtt{T}),k_0}$ over $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0,\mathtt{T}}$ are isogenous, where $\mathbf{A}'_{\tilde \mathtt{S},k_0}$ and $\mathbf{A}'_{\tilde \mathtt{S}(\mathtt{T}),k_0}$ denote respectively the universal abelian varieties over $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0,\mathtt{T}}$ and over $\mathbf{Sh}_{K'_{\mathtt{T}}}(G'_{\tilde \mathtt{S}(\mathtt{T})})_{k_0}$. \item[(3)] (Compatibility with Hecke action) When the open compact subgroup $K'^p\subseteq G'_{\tilde \mathtt{S}}(\mathbb{A}^{\infty,p})$ varies, the isomorphism as well as the isogeny of abelian varieties are compatible with the action of the Hecke correspondence given by $\widetilde G_{\tilde \mathtt{S}} = G''_{\tilde \mathtt{S}}(\mathbb{Q})^{+,(p)} G'_{\tilde \mathtt{S}}(\mathbb{A}^{\infty, p}) \cong \widetilde G_{\tilde \mathtt{S}(\mathtt{T})}$.
\item[(4)] (Compatibility with partial Frobenius) The description in (1) is compatible with the action of the twisted partial Frobenius (Subsection~\ref{S:partial Frobenius}) in the sense that we have a commutative diagram \[ \xymatrix{ \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \mathtt{T}} \ar[r]_-{\xi^\mathrm{rel}} \ar[rd]_{\pi_{\mathtt{T}}} \ar@/^15pt/[rr]^-{\mathfrak{F}'_{\mathfrak{p}^2, \tilde \mathtt{S}}} & \mathfrak{F}'^*_{\mathfrak{p}^2}( \mathbf{Sh}_{K'}(G'_{\sigma_\mathfrak{p}^2\tilde \mathtt{S}})_{k_0, \sigma_\mathfrak{p}^2\mathtt{T}}) \ar[d] \ar[r]_-{\mathfrak{F}'^*_{\mathfrak{p}^2, \tilde \mathtt{S}(\mathtt{T})}} & \mathbf{Sh}_{K'}(G'_{\sigma_\mathfrak{p}^2\tilde \mathtt{S}})_{k_0, \sigma_\mathfrak{p}^2\mathtt{T}} \ar[d]^{\pi_{\sigma_{\mathfrak{p}}^2\mathtt{T}}} \\ & \mathbf{Sh}_{K'_\mathtt{T}}(G'_{\tilde \mathtt{S}(\mathtt{T})})_{k_0} \ar[r]^-{\mathfrak{F}'_{\mathfrak{p}^2, \tilde \mathtt{S}(\mathtt{T})}} & \mathbf{Sh}_{K'_{\sigma_\mathfrak{p}^2\mathtt{T}}}(G'_{\sigma_\mathfrak{p}^2(\tilde \mathtt{S}(\mathtt{T}))})_{k_0} } \] where the square is cartesian, we added subscripts to the partial Frobenius to indicate the corresponding base scheme, and $\xi^\mathrm{rel}$ is the morphism whose restriction to each fiber $\pi_{\mathtt{T}}^{-1}(x)=(\mathbb{P}^1_x)^{I_{\mathtt{T}}}$ is the product of the relative $p^2$-Frobenius of the $\mathbb{P}^1_x$'s indexed by $I_{\mathtt{T}}\cap \Sigma_{\infty/\mathfrak{p}}=\mathtt{T}'_{\infty/\mathfrak{p}}-\mathtt{T}_{/\mathfrak{p}}$, and the identity on the other $\mathbb{P}^1_x$'s. \end{itemize} \end{theorem} The proof of this theorem will occupy the rest of this section and concludes in Subsection~\ref{S:End-of-proof}. We first state a corollary.
\begin{cor} \label{C:main-thm-product} \begin{enumerate} \item The Goren-Oort stratum $\mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}})^\circ_{\overline \mathbb{F}_p, \mathtt{T}}$ is isomorphic to a $(\mathbb{P}^1)^{I_\mathtt{T}}$-bundle over $\mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}(\mathtt{T})})^\circ_{\overline \mathbb{F}_p}$, equivariant for the action of $\mathcal{E}_{G, \mathtt{S}, \tilde \wp} \cong \mathcal{E}_{G, \mathtt{S}(\mathtt{T}), \tilde \wp}$ (which are identified as in Corollary~\ref{C:identify structure groups}) with trivial action on the fibers. \item The GO-stratum $\mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})_{k_0,\mathtt{T}}$ is isomorphic to a $(\mathbb{P}^1)^{I_\mathtt{T}}$-bundle over $\mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}(\mathtt{T})})_{k_0}$, such that the natural projection $\pi_{\mathtt{T}}: \mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})_{k_0,\mathtt{T}} \to\mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}(\mathtt{T})})_{k_0}$ is equivariant for the tame Hecke action. \item The abelian schemes $\mathbf{A}''_{\tilde \mathtt{S},k_0}$ and $\pi_{\mathtt{T}}^*(\mathbf{A}''_{\tilde \mathtt{S}(\mathtt{T}),k_0})$ over $\mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})_{k_0, \mathtt{T}}$ are isogenous. 
\item The following diagram is commutative: \[ \xymatrix@C=50pt{ \mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})_{k_0, \mathtt{T}} \ar[r]_-{\xi^\mathrm{rel}} \ar@/^15pt/[rr]^-{\mathfrak{F}''_{\mathfrak{p}^2,\tilde \mathtt{S}}} \ar[dr]_{\pi_{\mathtt{T}}} & \mathfrak{F}''^*_{\mathfrak{p}^2, \tilde \mathtt{S}(\mathtt{T})}(\mathbf{Sh}_{K''_p}(G''_{\sigma_\mathfrak{p}^2\tilde \mathtt{S}})_{k_0, \sigma_{\mathfrak{p}}^2 \mathtt{T}}) \ar[r]_-{\mathfrak{F}''^*_{\mathfrak{p}^2, \tilde \mathtt{S}(\mathtt{T})}} \ar[d] & \mathbf{Sh}_{K''_p}(G''_{\sigma_\mathfrak{p}^2\tilde \mathtt{S}})_{k_0, \sigma_{\mathfrak{p}}^2 \mathtt{T}} \ar[d]^{\pi_{\sigma_\mathfrak{p}^2\mathtt{T}}} \\ & \mathbf{Sh}_{K''_{\mathtt{T},p}}(G''_{\tilde \mathtt{S}(\mathtt{T})})_{k_0} \ar[r]^-{\mathfrak{F}''_{\mathfrak{p}^2, \tilde \mathtt{S}(\mathtt{T})}} & \mathbf{Sh}_{K''_{\sigma_\mathfrak{p}^2\mathtt{T},p}}(G''_{\sigma_\mathfrak{p}^2(\tilde \mathtt{S}(\mathtt{T}))})_{k_0} } \] where the square is cartesian, $\mathfrak{F}''_{\mathfrak{p}^2, \tilde \mathtt{S}}$ and $\mathfrak{F}''_{\mathfrak{p}^2, \tilde \mathtt{S}(\mathtt{T})}$ denote the twisted partial Frobenii (Subsection~\ref{S:partial Frobenius}) on $\mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})_{k_0}$ and $\mathbf{Sh}_{K''_{\mathtt{T},p}}(G''_{\tilde \mathtt{S}(\mathtt{T})})_{k_0}$ respectively, and $\xi^\mathrm{rel}$ is the morphism whose restriction to each fiber $\pi_{\mathtt{T}}^{-1}(x)=(\mathbb{P}^1_x)^{I_{\mathtt{T}}}$ is the product of the relative $p^2$-Frobenius of the $\mathbb{P}^1_x$'s indexed by $I_{\mathtt{T}}\cap \Sigma_{\infty/\mathfrak{p}}=\mathtt{T}'_{\infty/\mathfrak{p}}-\mathtt{T}_{/\mathfrak{p}}$, and the identity on the other $\mathbb{P}^1_x$'s. \end{enumerate} \end{cor} \begin{proof} This is an immediate consequence of Theorem~\ref{T:main-thm-unitary} above.
The claims regarding the universal abelian varieties follow from the explicit construction of $\mathbf{A}''_{\tilde \mathtt{S}}$ and $\mathbf{A}''_{\tilde \mathtt{S}(\mathtt{T})}$ in Subsection~\ref{S:abel var in unitary case}. \end{proof} \begin{remark} \label{R:quaternionic Shimura reciprocity not compatible} We emphasize that the analogue of Corollary~\ref{C:main-thm-product}(2) for quaternionic Shimura varieties only holds over $\overline \mathbb{F}_p$. This is because the subgroups $\mathcal{E}_{G, \mathtt{S}, \wp}$ and $\mathcal{E}_{G, \mathtt{S}(\mathtt{T}), \wp}$, although abstractly isomorphic, sit in $\mathcal{G}_\mathtt{S} \times \Gal_{k_0} \cong \mathcal{G}_{\mathtt{S}(\mathtt{T})} \times \Gal_{k_0}$ as \emph{different} subgroups. The two Deligne homomorphisms are different. \end{remark} The rest of this section is devoted to the proof of Theorem~\ref{T:main-thm-unitary}, which concludes in Subsection~\ref{S:End-of-proof}. \subsection{Signature changes}\label{S:Delta-pm} The basic idea of proving Theorem~\ref{T:main-thm-unitary} is to find a quasi-isogeny between the two universal abelian varieties $\mathbf{A}_{\tilde \mathtt{S}}$ and $\mathbf{B}:=\mathbf{A}_{\tilde \mathtt{S}(\mathtt{T})}$ (over an appropriate base). We view this quasi-isogeny as two genuine isogenies $ \mathbf{A}_{\tilde \mathtt{S}} \xrightarrow{\phi} \mathbf{C} \xleftarrow{\phi'} \mathbf{B} $ for some abelian variety $\mathbf{C}$; each isogeny is characterized by the set of $\tilde\tau\in \Sigma_{E,\infty}$ at which the isogeny does not induce an isomorphism of the $\tilde\tau$-components of the de Rham cohomology of the abelian varieties. We now define the two relevant subsets $\tilde \Delta(\mathtt{T})^+$ and $\tilde \Delta(\mathtt{T})^-$ of $\Sigma_{E, \infty}$, as follows.
As usual, $ \tilde \Delta(\mathtt{T})^\pm = \coprod_{\mathfrak{p} \in \Sigma_p} \tilde \Delta(\mathtt{T})_{/ \mathfrak{p}}^\pm$ for subsets $\tilde \Delta(\mathtt{T})_{/\mathfrak{p}}^\pm \subseteq \Sigma_{E, \infty/\mathfrak{p}}$. When $\mathfrak{p}$ is of type $\alpha^\sharp$ or $\beta^\sharp$ for $\mathtt{S}$, we set $\tilde \Delta(\mathtt{T})_{/\mathfrak{p}}^\pm =\emptyset$. For the other two types, we use the notation in Subsection~\ref{S:tilde S(T)} in the corresponding cases (in particular our convention on $\tilde \tau$ and $a_j$'s):
\begin{itemize}
\item (Case $\alpha1$) Put
\[
\tilde C^-_i: = \bigcup_{\substack{j \text{ odd}\\ 1\leq j\leq r_i}} \{\sigma^{-\ell}\tilde \tau_i: a_{j}\leq \ell\leq a_{j+1}-1\}.
\]
We set $\tilde \Delta(\mathtt{T})_{/\mathfrak{p}}^- = \coprod_i \tilde C^-_i$ and $\tilde \Delta(\mathtt{T})_{/\mathfrak{p}}^+ =(\tilde \Delta(\mathtt{T})_{/\mathfrak{p}}^-)^c$.
\item (Case $\alpha2$) Put
\[
\tilde \Delta(\mathtt{T})_{/\mathfrak{p}}^- : = \bigcup_{1\leq i\leq r} \big\{ \sigma^{-\ell}\tilde\tau_0: a_{2i-1} \leq \ell< a_{2i} \big\}; \quad \textrm{and}\quad \tilde \Delta(\mathtt{T})_{/\mathfrak{p}}^+: = (\tilde \Delta(\mathtt{T})_{/\mathfrak{p}}^-)^c.
\]
\item (Case $\beta1$) Put
\[
\tilde C_i^-: = \bigcup_{\substack{j \text{ odd}\\ 1\leq j\leq r_i}} \{\sigma^{-\ell}\tilde \tau_i: a_{j}\leq \ell\leq a_{j+1}-1\}.
\]
We set $\tilde\Delta(\mathtt{T})_{/\mathfrak{p}}^- = \coprod_i \tilde C^-_i$ and $\tilde \Delta(\mathtt{T})_{/\mathfrak{p}}^+=(\tilde \Delta(\mathtt{T})_{/\mathfrak{p}}^-)^c$. (Formally, this is the same recipe as in Case $\alpha1$, but the choice of $\tilde \tau_i$ is less determined; see Subsection~\ref{S:tilde S(T)}.)
\item (Case $\beta2$) Put
\[
\tilde \Delta(\mathtt{T})_{/\mathfrak{p}}^- : = \bigcup_{1\leq i\leq r} \big\{ \sigma^{-\ell}\tilde\tau_0: a_{2i-1} \leq \ell< a_{2i} \big\}.
\]
\emph{Unlike in all other cases, we put $\tilde \Delta(\mathtt{T})_{/\mathfrak{p}}^+=\emptyset$.}
\end{itemize}
\begin{notation}
We use $\mathtt{T}_E$ (resp. $\mathtt{T}'_E$) to denote the preimage of $\mathtt{T}$ (resp. $\mathtt{T}'$) under the map $\Sigma_{E, \infty} \to \Sigma_\infty$.
\end{notation}
The following two lemmas follow from the definition by a case-by-case check.
\begin{lemma}
\label{L:distance to T'}
For each $\tilde \tau \in \tilde \Delta(\mathtt{T})^+$ (resp. $\tilde \Delta(\mathtt{T})^-$), let $n$ be the unique positive integer such that $\tilde \tau, \sigma^{-1}\tilde \tau, \dots, \sigma^{1-n} \tilde \tau$ all belong to $\tilde \Delta(\mathtt{T})^+$ (resp. $\tilde \Delta(\mathtt{T})^-$) but $\sigma^{-n}\tilde \tau$ does not. Then, for this $n$, $\sigma^{-n}\tilde \tau \in \mathtt{T}'_E$. Moreover, if $\tilde \tau$ also belongs to $\mathtt{T}'_E$, then $n$ equals the number $n_\tau$ introduced in Subsection~\ref{S:partial-Hasse}.
\end{lemma}
\begin{lemma}
\label{L:property of Delta}
\emph{(1)} If both $\tilde \tau$ and $\sigma \tilde \tau$ belong to $\tilde \Delta(\mathtt{T})^+$ (resp. $\tilde \Delta(\mathtt{T})^-$), then $\tilde \tau|_F$ belongs to $\mathtt{S}_\infty$.
\emph{(2)} If $\tilde\tau \in \tilde \Delta(\mathtt{T})^-$ but $\sigma \tilde \tau \notin \tilde \Delta(\mathtt{T})^-$, then $\tilde \tau \in \tilde \mathtt{T}'$.
\emph{(3)} If $\tilde\tau \notin \tilde \Delta(\mathtt{T})^-$ but $\sigma \tilde \tau \in \tilde \Delta(\mathtt{T})^-$, then $\tilde \tau \in \tilde \mathtt{T}'^c$.
\end{lemma}
\subsection{Description of the strata $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \mathtt{T}}$ via isogenies}
\label{S:moduli-Y_S}
To simplify the notation, we put $X'=\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0}$ and $X'_{\mathtt{T}}=\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0,\mathtt{T}}$ for a subset $\mathtt{T}\subseteq \Sigma_{\infty}-\mathtt{S}_{\infty}$. We will first prove statement (1) of Theorem~\ref{T:main-thm-unitary}.
Following the idea of Helm \cite{helm}, we introduce auxiliary moduli spaces $Y'_{\mathtt{T}}$ and $Z'_{\mathtt{T}}$ and establish isomorphisms
\begin{equation}\label{E:sequence-moduli}
\xymatrix{ X'_\mathtt{T} &Y'_\mathtt{T}\ar[l]^-{\cong}_-{\eta_1} \ar[r]^-{\eta_2}_-{\cong} &Z'_\mathtt{T}},
\end{equation}
where $Z'_\mathtt{T}$ is a $(\mathbb{P}^1)^{I_{\mathtt{T}}}$-bundle over the special fiber of $\mathbf{Sh}_{K'_{\mathtt{T}}}(G'_{\tilde \mathtt{S}(\mathtt{T})})_{k_0}$. Recall that we have fixed an isomorphism $\theta_{\mathtt{T}}: (D_{\mathtt{S}},*_{\mathtt{S}})\xrightarrow{\sim}(D_{\mathtt{S}(\mathtt{T})}, *_{\mathtt{S}(\mathtt{T})})$ of simple algebras over $E$ with positive involution, and put $\mathcal{O}_{D_{\mathtt{S}(\mathtt{T})}}=\theta_{\mathtt{T}}(\mathcal{O}_{D_{\mathtt{S}}})$. To ease the notation, we identify $\mathcal{O}_{D_{\mathtt{S}(\mathtt{T})}}$ with $\mathcal{O}_{D_{\mathtt{S}}}$ via $\theta_{\mathtt{T}}$, and denote them by $\mathcal{O}_D$ when there is no confusion.
We now describe $Y'_\mathtt{T}$: it is the moduli space over $k_0$ which attaches to a locally noetherian $k_0$-scheme $S$ the set of isomorphism classes of $(A, \iota_A,\lambda_A, \alpha_{K'}, B, \iota_B, \lambda_B, \beta_{K'_\mathtt{T}}, C, \iota_C; \phi_A, \phi_B)$, where
\begin{itemize}
\item[(i)] $(A, \iota_A, \lambda_A, \alpha_{K'})$ is an element in $X'_\mathtt{T}(S)$.
\item[(ii)] $(B, \iota_B, \lambda_B, \beta_{K'_\mathtt{T}})$ is an element in $\mathbf{Sh}_{K'_\mathtt{T}}(G'_{\tilde \mathtt{S}(\mathtt{T})})(S)$.
\item[(iii)] $C$ is an abelian scheme over $S$ of dimension $4g$, equipped with an embedding $\iota_C: \mathcal{O}_D \to \End_S(C)$.
\item[(iv)] $\phi_A: A \to C$ is an $\mathcal{O}_D$-isogeny whose kernel is killed by $p$, such that the induced map
$$
\phi_{A,*,\tilde{\tau}}: H_1^{\mathrm{dR}}(A/S)^\circ_{\tilde{\tau}} \to H_1^\mathrm{dR}(C/S)^\circ_{\tilde{\tau}}
$$
is an isomorphism for $\tilde{\tau}\in \Sigma_{E,\infty}$ \emph{unless} $\tilde{\tau}\in \tilde\Delta(\mathtt{T})^+$, in which case, we require that
\begin{equation}\label{E:condition-phi-A}
\Ker(\phi_{A,*,\tilde{\tau}})=\mathrm{Im}(F_{A,\mathrm{es}, \tilde \tau}^{n}),
\end{equation}
where $n$ is as determined in Lemma~\ref{L:distance to T'}, and $F_{A,\mathrm{es}, \tilde \tau}^n$ is defined in \eqref{E:Fes n}. (When $\tau$ itself belongs to $\mathtt{T}'$, the number $n$ equals $n_\tau$ introduced in Subsection~\ref{S:partial-Hasse}; in this case, condition \eqref{E:condition-phi-A} is equivalent to saying that $\Ker(\phi_{A,*,\tilde\tau})=\omega^\circ_{A^\vee/S,\tilde\tau}$.)
\item[(v)] $\phi_B: B\to C$ is an $\mathcal{O}_D$-isogeny whose kernel is killed by $p$ such that $\phi_{B,*,\tilde{\tau}}: H_1^\mathrm{dR}(B/S)_{\tilde{\tau}}^\circ \to H_1^\mathrm{dR}(C/S)_{\tilde{\tau}}^\circ$ is an isomorphism for $\tilde{\tau}\in \Sigma_{E,\infty}$ \emph{unless} $\tilde{\tau}\in \tilde{\Delta}(\mathtt{T})^-$, in which case, we require that $\mathrm{Im}(\phi_{B,*,\tilde{\tau}})$ is equal to $\phi_{A,*,\tilde{\tau}}(\mathrm{Im}(F_{A,\mathrm{es}, \tilde \tau}^n))$, where $n$ is as determined in Lemma~\ref{L:distance to T'}.
\item[(vi)] The tame level structures are compatible, i.e. $T^{(p)}(\phi_A)\circ \alpha^p_{K'^p} = T^{(p)}(\phi_B) \circ \alpha^p_{K'^p_\mathtt{T}}$ as maps from $\widehat \Lambda^{(p)}_\mathtt{S} \cong \widehat \Lambda^{(p)}_{\mathtt{S}(\mathtt{T})}$ to $T^{(p)}(C)$, modulo $K'^p$, if we identify the two lattices naturally as in Subsection~\ref{S:level structure tilde ttS(ttT)}.
\item[(vii)] If $\mathfrak{p}$ is a prime of type $\alpha^\sharp$ for the original quaternionic Shimura variety $\mathrm{Sh}_{K}(G_{\mathtt{S}})$, then $\alpha_{\mathfrak{p}}$ and $\beta_{\mathfrak{p}}$ are compatible, i.e. $\phi_{A}(\alpha_{\mathfrak{p}})=\phi_{B}(\beta_{\mathfrak{p}})$, where $\alpha_{\mathfrak{p}}\subseteq A[\mathfrak{p}]$ denotes the closed finite flat group scheme given by Theorem~\ref{T:unitary-shimura-variety-representability}(c2).
\item[(viii)] Let $\mathfrak{p}$ be a prime in Case $\alpha2$, splitting into $\mathfrak{q} \bar \mathfrak{q}$ in $E$. Then $\beta_\mathfrak{p} = H_\mathfrak{q} \oplus H_{\bar \mathfrak{q}}$. Let $\phi_{\mathfrak{q}}: B\rightarrow B'_{\mathfrak{q}}=B/H_{\mathfrak{q}}$ be the canonical isogeny. Then the kernel of the induced map $\phi_{\mathfrak{q},*}:H^{\mathrm{dR}}_1(B/S)^{\circ}_{\tilde{\tau}}\rightarrow H^{\mathrm{dR}}_1(B'_{\mathfrak{q}}/S)^\circ_{\tilde\tau}$ coincides with that of $\phi_{B,*}: H^{\mathrm{dR}}_1(B/S)^\circ_{\tilde\tau}\rightarrow H^{\mathrm{dR}}_1(C/S)^\circ_{\tilde\tau}$ for all $\tilde\tau\in \tilde \Delta(\mathtt{T})_{/\mathfrak{p}}^-$.
\item[(ix)] We have the following commutative diagram
\[
\xymatrix{ A \ar[r]^-{\phi_{A}}\ar[d]_{\lambda_A} & C & B \ar[l]_-{\phi_B}\ar[d]^{\lambda_B}\\ A^\vee & C^\vee\ar[l]_-{\phi_A^\vee} \ar[r]^-{\phi_B^\vee} & B^\vee. }
\]
\end{itemize}
\begin{remark}
Compared to \cite{helm}, our moduli problem appears to be more complicated. This is because we allow places of $F$ above $p$ to be inert in the CM extension $E$. It is clear that $B$ is quasi-isogenous to $A$. So when $S$ is the spectrum of a perfect field $k$, the \emph{covariant} Dieudonn\'e module $\tilde{\mathcal{D}}_{B}$ is a $W(k)$-lattice in $\tilde{\mathcal{D}}_A[1/p]$. The complicated conditions (v) and (viii) can be better understood by looking at $\tilde{\mathcal{D}}_{B}$ (see the proof of Proposition~\ref{P:Y_S=X_S} below).
\end{remark}
\begin{prop}
\label{P:Y_S=X_S}
The natural forgetful functor
\[
\eta_1\colon (A, \iota_A,\lambda_A, \alpha_{K'}, B, \iota_B, \lambda_B, \beta_{K'_\mathtt{T}}, C, \iota_C; \phi_A, \phi_B) \mapsto (A, \iota_A,\lambda_A, \alpha_{K'})
\]
induces an isomorphism $\eta_1\colon Y'_\mathtt{T} \to X'_\mathtt{T}$.
\end{prop}
\begin{proof}
By the general theory of moduli spaces of abelian schemes due to Mumford, $Y'_{\mathtt{T}}$ is representable by a $k_0$-scheme of finite type. Hence, to prove the proposition, it suffices to show that the natural map $Y'_{\mathtt{T}}\rightarrow X'_{\mathtt{T}}$ induces a bijection on closed points and an isomorphism on tangent spaces at each closed point. The proposition thus follows from Lemmas~\ref{L:Y_T=X_T-1} and \ref{L:Y_T-X_T-tangent} below. This is a long and tedious book-keeping check, essentially following the ideas of \cite[Proposition~4.4]{helm}.
\end{proof}
\begin{lemma}\label{L:Y_T=X_T-1}
Let $x=(A, \iota_{A}, \lambda_{A}, \alpha_{K'})$ be a closed point of $X'_\mathtt{T}$, with values in a perfect field $k$. Then there exist \emph{unique} $(B,\lambda_{B}, \iota_B, \beta_{K'_\mathtt{T}}, C, \iota_C; \phi_A, \phi_B)$ such that $(A, \iota_{A}, \lambda_{A}, \alpha_{K'}, B,\lambda_{B}, \iota_B, \beta_{K'_\mathtt{T}}, C, \iota_C; \phi_A, \phi_B)\in Y'_{\mathtt{T}}(k)$.
\end{lemma}
\begin{proof}
We first recall some notation regarding Dieudonn\'e modules. Let $\tilde \mathcal{D}_A$ denote the \emph{covariant} Dieudonn\'e module of $A[p^\infty]$. Then $\mathcal{D}_A := \tilde \mathcal{D}_A/p$ is the covariant Dieudonn\'e module of $A[p]$.
Given the action of $\mathcal{O}_D\otimes_{\mathbb{Z}}\mathbb{Z}_p\simeq \mathrm{M}_2(\mathcal{O}_{E}\otimes_{\mathbb{Z}}\mathbb{Z}_p)$ on $A$, we have direct sum decompositions
\[
\tcD_A^\circ := \mathfrak{e} \tcD_A = \bigoplus_{\tilde{\tau}\in \Sigma_{E,\infty}} \tcD_{A, \tilde{\tau}}^\circ, \quad \cD_A^\circ := \mathfrak{e} \cD_A = \bigoplus_{\tilde{\tau}\in \Sigma_{E,\infty}} \cD_{A, \tilde{\tau}}^{\circ},
\]
where $\mathfrak{e}$ denotes the idempotent $\bigl(\begin{smallmatrix} 1&0\\0&0\end{smallmatrix}\bigr)$. By the theory of Dieudonn\'e modules, we have canonical isomorphisms
\[
H^{\mathrm{cris}}_1(A/k)_{W(k)}\cong \tcD_{A},\quad H^{\mathrm{dR}}_1(A/k)\cong \cD_{A},
\]
compatible with all the structures. For $\tilde{\tau}\in \Sigma_{E,\infty}$, we have the Hodge filtration $ 0\rightarrow \omega^{\circ}_{A^\vee,\tilde{\tau}}\rightarrow \cD_{A,\tilde{\tau}}^{\circ}\rightarrow \Lie(A)^{\circ}_{\tilde{\tau}}\rightarrow 0. $ We use $\tilde \omega^\circ_{A^\vee, \tilde \tau}$ to denote the preimage of $\omega^\circ_{A^\vee, \tilde\tau} \subseteq \cD^\circ_{A, \tilde\tau}$ under the reduction map $\tcD^\circ_{A, \tilde\tau} \twoheadrightarrow \cD^\circ_{A, \tilde\tau}$.
We first construct $C$ from $A$. For each $\tilde{\tau}\in \Sigma_{E,\infty}$, we define a $W(k)$-module $M^{\circ}_{\tilde{\tau}}$ with $\tcD_{A,\tilde{\tau}}^\circ\subseteq M^{\circ}_{\tilde{\tau}}\subseteq p^{-1}\tcD_{A,\tilde{\tau}}^\circ$ as follows. We put $M^{\circ}_{\tilde{\tau}}=\tcD^{\circ}_{A,\tilde{\tau}}$ unless $\tilde{\tau}\in \tilde{\Delta}(\mathtt{T})^+$. In the exceptional case, let $n$ be the integer as determined in Lemma~\ref{L:distance to T'} (or equivalently as in property (iv) of $Y'_{\mathtt{T}}$ above), and put
\[
M^{\circ}_{\tilde{\tau}}:=p^{-1}F_{A,\mathrm{es}}^n(\tcD^\circ_{A,\sigma^{-n}\tilde \tau}),
\]
where $ F_{A,\mathrm{es}}^n$ is the $n$-th iterate of the essential Frobenius on $\tcD^{\circ}_{A}$ defined in Notation~\ref{N:essential frobenius and verschiebung}.
If $\tilde{\tau}\in \tilde \Delta(\mathtt{T})^+ \cap \mathtt{T}_E$, then the number $n$ for $\tilde{\tau}$ in Lemma~\ref{L:distance to T'} coincides with $n_{\tau}$ introduced in Subsection~\ref{S:partial-Hasse}. Since the partial Hasse invariant $h_{\tilde{\tau}}(A)$ vanishes for any $\tilde{\tau}\in \mathtt{T}_E$ by the definition of $X'_{\mathtt{T}}$, we see that $M^{\circ}_{\tilde\tau}=p^{-1}\tilde\omega^\circ_{A^\vee,\tilde{\tau}}$ for $\tilde \tau \in \tilde \Delta(\mathtt{T})^+ \cap \mathtt{T}_E$.
We now check that, for any $\tilde{\tau}\in \Sigma_{E,\infty}$,
\begin{equation}\label{E:stability-FV}
F_A(M^\circ_{\sigma^{-1}\tilde{\tau}})\subseteq M^{\circ}_{\tilde{\tau}},\quad \textrm{and} \quad V_A(M^{\circ}_{\tilde{\tau}})\subseteq M^{\circ}_{\sigma^{-1}\tilde{\tau}}.
\end{equation}
Note that we are using the genuine, rather than the essential, Frobenius and Verschiebung here. We distinguish several cases:
\begin{itemize}
\item $\tilde\tau,\sigma^{-1}\tilde\tau\notin \tilde\Delta(\mathtt{T})^+$. Then $M^\circ_{\tilde\tau}=\tcD^\circ_{A,\tilde\tau}$ and $M^\circ_{\sigma^{-1}\tilde\tau}=\tcD^\circ_{A,\sigma^{-1}\tilde\tau}$; hence \eqref{E:stability-FV} is clear.
\item $\tilde{\tau}\in \tilde\Delta(\mathtt{T})^+$ and $\sigma^{-1}\tilde{\tau}\notin \tilde\Delta(\mathtt{T})^+$. Then we have $M^{\circ}_{\sigma^{-1}\tilde{\tau}}=\tcD^\circ_{A,\sigma^{-1}\tilde\tau}$ and $M^\circ_{\tilde{\tau}}=p^{-1}F_A(\tcD^\circ_{A,\sigma^{-1}\tilde\tau})$. Hence $F_A(M^{\circ}_{\sigma^{-1}\tilde{\tau}})\subseteq M^\circ_{\tilde{\tau}}$ is trivial, and $V_A( M^\circ_{\tilde{\tau}})=M^{\circ}_{\sigma^{-1}\tilde{\tau}}$.
\item $\tilde{\tau},\sigma^{-1}\tilde{\tau}\in \tilde\Delta(\mathtt{T})^+$. Let $n$ be the positive integer for $\tilde{\tau}$ as in Lemma~\ref{L:distance to T'}.
Then we have
$$
M^{\circ}_{\tilde{\tau}}=p^{-1} F_{A,\mathrm{es}}^n(\tcD^\circ_{A, \sigma^{-n}\tilde{\tau}})= F_{A,\mathrm{es}}\big( p^{-1} F_{A,\mathrm{es}}^{n-1}(\tcD^\circ_{A, \sigma^{-n}\tilde{\tau}})\big) = F_{A,\mathrm{es}}\big(M^{\circ}_{\sigma^{-1}\tilde\tau} \big).
$$
The inclusions \eqref{E:stability-FV} are clear from this.
\item $\tilde{\tau}\notin \tilde\Delta(\mathtt{T})^+$ and $\sigma^{-1}\tilde{\tau}\in \tilde\Delta(\mathtt{T})^+$. In this case, $\sigma^{-1}\tilde\tau$ must be in $\mathtt{T}_E$ by Lemma~\ref{L:distance to T'}. Hence, we have $M^{\circ}_{\tilde\tau}=\tcD^\circ_{A,\tilde\tau}$ and $M^\circ_{\sigma^{-1}\tilde{\tau}}=p^{-1}\tilde\omega^\circ_{A^\vee,\sigma^{-1}\tilde{\tau}}$ as remarked above. We thus see that $F_A(M^\circ_{\sigma^{-1}\tilde{\tau}})=M^\circ_{\tilde{\tau}}$ and $V_A(M^\circ_{\tilde{\tau}})=p M^\circ_{\sigma^{-1}\tilde{\tau}}$.
\end{itemize}
Consequently, if we put $M^{\circ}=\bigoplus_{\tilde\tau\in \Sigma_{E,\infty}}M^\circ_{\tilde\tau}$ and $M=(M^{\circ})^{\oplus 2}$, then $M$ is a Dieudonn\'e module, and $\tcD_{A}\subseteq M\subseteq p^{-1}\tcD_A$ with induced $F$ and $V$ on $M$. Consider the quotient $M/\tcD_A$. It corresponds to a finite subgroup scheme $K$ of $A[p]$ stable under the action of $\mathcal{O}_D$ by the covariant Dieudonn\'e theory. We put $C=A/K$ and let $\phi_A:A\rightarrow C$ denote the natural quotient. Then the induced map $\phi_{A,*}: \tcD_{A}\rightarrow \tcD_{C}$ is identified with the natural inclusion $\tcD_{A}\hookrightarrow M$. The morphisms $F_C$ and $V_C$ on $\tcD_C$ are induced from those on $\tcD_{A}[1/p]$. It is clear that $C$ is equipped with a natural action $\iota_C$ of $\mathcal{O}_D$, and $\phi_{A}$ satisfies conditions (iii) and (iv) for the moduli space $Y'_{\mathtt{T}}$. Conversely, if $C$ exists, the conditions (iii) and (iv) imply that $\tcD_{C,\tilde\tau}^{\circ}$ has to coincide with $M^\circ_{\tilde\tau}$. Therefore, $C$ is uniquely determined by $A$.
We finally remark that, by construction, $\tilde \mathcal{D}_{C, \tilde \tau}^\circ / \tilde \mathcal{D}_{A, \tilde \tau}^\circ$ is isomorphic to $k$ if $\tilde \tau \in \tilde \Delta(\mathtt{T})^+$ and trivial otherwise.
We now construct the abelian variety $B$ and the isogeny $\phi_B:B\rightarrow C$. Similarly to the construction for $C$, we will first define a $W(k)$-lattice $N^\circ=\bigoplus_{\tilde{\tau}\in \Sigma_{E,\infty}}N^\circ_{\tilde{\tau}}\subseteq \tcD_{C}^{\circ}$, with $N^\circ_{\tilde\tau}=\tcD^{\circ}_{C,\tilde{\tau}}$ unless $\tilde\tau\in \tilde\Delta(\mathtt{T})^-$. In the exceptional case, we put $N^\circ_{\tilde{\tau}}={F}_{A,\mathrm{es}}^n(\tcD^\circ_{C,\sigma^{-n}\tilde{\tau}})$, where $n$ is the positive integer given by Lemma~\ref{L:distance to T'}. Here, we view $\tcD^{\circ}_{C,\sigma^{-n}\tilde{\tau}}$ as a lattice of $\tcD^{\circ}_{A,\sigma^{-n}\tilde\tau}[1/p]$ so that ${F}_{A,\mathrm{es}}^n(\tcD^\circ_{C,\sigma^{-n}\tilde{\tau}})$ makes sense. Note once again that, if $\tilde{\tau}\in \tilde \Delta(\mathtt{T})^- \cap \mathtt{T}_E$, then $n$ equals $n_{\tau}$ defined in Subsection~\ref{S:partial-Hasse}, and we have $ N^\circ_{\tilde{\tau}}=\tilde{\omega}^\circ_{C^\vee, \tilde{\tau}}\simeq \tilde{\omega}^\circ_{A^\vee, \tilde{\tau}}, $ since $h_{\tilde{\tau}}(A)$ vanishes.
We now check that $N^\circ$ is stable under $F_C$ and $V_C$, i.e. $F_{C}(N^\circ_{\sigma^{-1}\tilde\tau})\subseteq N^\circ_{\tilde{\tau}}$ and $V_C(N^\circ_{\tilde\tau})\subseteq N^\circ_{\sigma^{-1}\tilde\tau}$ for all $\tilde\tau\in \Sigma_{E,\infty}$. The same arguments for $M$ above work verbatim in this case (with $\tilde \Delta(\mathtt{T})^-$ in place of $\tilde \Delta(\mathtt{T})^+$). Again, we point out that, by construction, $\tilde \mathcal{D}_{C, \tilde \tau}^\circ / \tilde \mathcal{D}_{B, \tilde \tau}^\circ$ is isomorphic to $k$ if $\tilde \tau \in \tilde \Delta(\mathtt{T})^-$ and trivial otherwise.
Therefore, $N=(N^\circ)^{\oplus 2}$ is a Dieudonn\'e module such that the inclusions $p\tcD_{C}\subseteq N\subseteq \tcD_C$ respect the Frobenius and Verschiebung actions. In particular, the Dieudonn\'e submodule $N/p\tcD_{C}\subset \tcD_{C}/p\tcD_{C}$ is the covariant Dieudonn\'e module of a closed finite subgroup scheme $H\subset C[p]$ stable under the action of $\mathcal{O}_D$. We put $B=C/H$, and define $\phi_{B}:B\rightarrow C$ to be the isogeny such that the composite $C\rightarrow B=C/H\xrightarrow{\phi_B} C$ is the multiplication by $p$. Then the induced morphism $\phi_{B,*}:\tcD_{B}\rightarrow \tcD_{C}$ is identified with the natural inclusion $N\subseteq \tcD_{C}$. It is clear that $B$ is equipped with a natural action by $\mathcal{O}_D$, and the condition (v) for the moduli space $Y'_{\mathtt{T}}$ is satisfied. Conversely, if the abelian variety $B$ exists, then condition (v) implies that $\tcD_{B}^\circ$ has to be $N^\circ$ defined above. This means that $B$ is uniquely determined by $C$, thus by $A$. To see condition (ix) of the moduli space $Y'_{\mathtt{T}}$, we consider the quasi-isogeny: \[ \lambda_B: B\xrightarrow{\phi_B}C\xleftarrow{\phi_A}A\xrightarrow{\lambda_A}A^\vee \xleftarrow{\phi_{A}^\vee}C^\vee\xrightarrow{\phi_B^\vee}B^\vee. \] We have to show that $\lambda_B$ is a genuine isogeny, and verify that it satisfies conditions (b2) and (b3) in Theorem~\ref{T:unitary-shimura-variety-representability} for the Shimura variety $\mathbf{Sh}_{K'_\mathtt{T}}(G'_{\tilde \mathtt{S}(\mathtt{T})})$. 
This is equivalent to proving that, when viewing $\tcD_{B, \tilde \tau}^\circ$ as a $W(k)$-lattice of $\tcD_{A, \tilde \tau}^\circ[1/p]$ via the quasi-isogeny $B\xrightarrow{\phi_B}C\xleftarrow{\phi_A}A$, the perfect alternating pairing
\[
\langle\ ,\ \rangle_{\lambda_A, \tilde \tau}: \tcD^\circ_{A, \tilde \tau}[1/p]\times \tcD^\circ_{A, \tilde \tau^c}[1/p]\rightarrow W(k)[1/p]
\]
for $\tilde \tau \in \Sigma_{E, \infty/\mathfrak{p}}$ induces a perfect pairing $\tcD_{B,\tilde{\tau}}^\circ\times \tcD_{B,\tilde\tau^c}^\circ\rightarrow W(k)$ if $\mathfrak{p}$ is not of type $\beta^\sharp$ for $\mathtt{S}(\mathtt{T})$, and induces an inclusion $\tilde \mathcal{D}^\circ_{B, \tilde \tau^c} \subset \tilde \mathcal{D}^{\circ, \vee}_{B, \tilde \tau}$ with quotient equal to $k$ if $\mathfrak{p}$ is of type $\beta^\sharp$ for $\mathtt{S}(\mathtt{T})$. We discuss case by case.
\begin{itemize}
\item If $\mathfrak{p}$ is of type $\beta^\sharp$ for $\mathtt{S}$, then both $\phi_A$ and $\phi_B$ induce isomorphisms on the $\mathfrak{p}$-divisible groups, and the statement is clear in this case.
\item If $\mathfrak{p}$ is in Case $\beta2$, then $\tilde\Delta(\mathtt{T})^+_{/\mathfrak{p}}=\emptyset$. By the construction of $B$, we have $\tcD_{B,\tilde\tau}^{\circ}=\tcD_{A,\tilde\tau}^{\circ}$ unless $\tilde\tau\in \tilde\Delta(\mathtt{T})^-_{/\mathfrak{p}}$; in the latter case, $\tcD_{B,\tilde\tau}^{\circ}=F^n_{A,\mathrm{es}}(\tcD_{A,\sigma^{-n}\tilde\tau}^\circ)$ is a submodule of $\tcD_{A,\tilde\tau}^\circ$ with quotient isomorphic to $k$. Note that $\Sigma_{E, \infty/\mathfrak{p}} = \tilde \Delta(\mathtt{T})^-_{/\mathfrak{p}} \coprod (\tilde \Delta(\mathtt{T})^{-}_{/\mathfrak{p}})^c$; this then implies that the pairing $\langle \ ,\ \rangle_{\lambda_A, \tilde \tau}$ induces an inclusion $\tilde \mathcal{D}^\circ_{B, \tilde \tau^c} \subset \tilde \mathcal{D}^{\circ, \vee}_{B, \tilde \tau}$ with quotient equal to $k$.
\item In all other cases, we have $\tilde \Delta(\mathtt{T})^+_{/\mathfrak{p}} = (\tilde \Delta(\mathtt{T})^-_{/\mathfrak{p}})^c$. So
\begin{equation}\label{E:Dieud-B-2}
\tcD_{B,\tilde\tau}^\circ=
\begin{cases}
\tcD_{A,\tilde{\tau}}^\circ &\text{if }\tilde{\tau}\notin (\tilde\Delta(\mathtt{T})^+_{/\mathfrak{p}}\cup \tilde\Delta(\mathtt{T})^-_{/\mathfrak{p}});\\
p^{-1}{F}_{A,\mathrm{es}}^n(\tcD_{A,\sigma^{-n}\tilde\tau}^\circ) &\text{if }\tilde\tau\in \tilde\Delta(\mathtt{T})^+_{/\mathfrak{p}};\\
{F}_{A,\mathrm{es}}^n(\tcD_{A,\sigma^{-n}\tilde\tau}^\circ) &\text{if }\tilde{\tau}\in \tilde\Delta(\mathtt{T})^-_{/\mathfrak{p}}.
\end{cases}
\end{equation}
It is clear that $\langle\ ,\ \rangle_{\lambda_A}$ induces a perfect pairing on $\tcD_{B,\tilde{\tau}}^\circ\times\tcD_{B,\tilde\tau^c}^\circ$ if $\tilde\tau\notin (\tilde\Delta(\mathtt{T})^+_{/\mathfrak{p}}\cup \tilde\Delta(\mathtt{T})^-_{/\mathfrak{p}})$. If $\tilde\tau\in (\tilde\Delta(\mathtt{T})^+_{/\mathfrak{p}}\cup \tilde\Delta(\mathtt{T})^-_{/\mathfrak{p}})$, the perfect duality between $\tilde \mathcal{D}_{B, \tilde \tau}^\circ$ and $\tilde \mathcal{D}_{B, \tilde \tau^c}^\circ$ follows from the equality
\[
\langle p^{-1} F_{A,\mathrm{es}}^n u, F_{A,\mathrm{es}}^n v\rangle_{\lambda_A, \tilde \tau} = \langle u, v \rangle_{\lambda_A, \sigma^{-n} \tilde \tau}^{\sigma^n},
\]
for all $u\in \tcD^{\circ}_{A,\sigma^{-n}\tilde\tau}$ and $v\in \tcD^{\circ}_{A,\sigma^{-n}\tilde\tau^c}$.
\end{itemize}
This completes the verification of condition (ix) of the moduli space $Y'_\mathtt{T}$ and conditions (b2) and (b3) in Theorem~\ref{T:unitary-shimura-variety-representability} for $\mathbf{Sh}_{K'_\mathtt{T}}(G'_{\tilde \mathtt{S}(\mathtt{T})})$. It is also clear that $\lambda_B$ induces the involution $*_{\mathtt{S}(\mathtt{T})}$ on $\mathcal{O}_D=\mathcal{O}_{D_{\mathtt{S}(\mathtt{T})}}$.
We now check that the abelian variety $B$ has the correct signature required by the moduli space $\mathbf{Sh}_{K'_{\mathtt{T}}}(G'_{\tilde \mathtt{S}(\mathtt{T})})$. For convenience of future reference, we put this into the following separate lemma.
\begin{lemma}
\label{L:dimension count}
In the setup above, that is, knowing
\[
\dim \Coker(\phi_{A, *, \tilde \tau}) = \delta_{\tilde \Delta(\mathtt{T})^+}(\tilde \tau)\quad \textrm{and} \quad \dim \Coker(\phi_{B, *, \tilde \tau}) = \delta_{\tilde \Delta(\mathtt{T})^-}(\tilde \tau),
\]
where $\delta_\bullet(?)$ is $1$ if $? \in \bullet$ and is $0$ if $? \notin \bullet$, we have $\dim \omega_{B^\vee/k, \tilde \tau}^\circ = 2- s_{\mathtt{T}, \tilde \tau}$ for all $\tilde \tau \in \Sigma_{E, \infty}$ if and only if $\dim \omega_{A^\vee/k, \tilde \tau}^\circ = 2- s_{ \tilde \tau}$ for all $\tilde \tau \in \Sigma_{E, \infty}$, with the numbers $s_{\mathtt{T}, \tilde \tau}$ defined as in Subsection~\ref{S:tilde S(T)}.
\end{lemma}
\begin{proof}
This is a simple dimension count. We prove the sufficiency; the necessity follows by reversing the argument. Using the signature condition for the Shimura variety $X'_\mathtt{T}$, we have
\[
s_{\tilde \tau} = \dim_k \big( \tilde \mathcal{D}^\circ_{A, \tilde \tau} \big/ V(\tilde\mathcal{D}^\circ_{A, \sigma\tilde \tau} ) \big).
\]
Comparing this with the abelian variety $B$, we have
\[
\dim_k\frac{\tilde \mathcal{D}^\circ_{B, \tilde \tau} }{ V(\tilde\mathcal{D}^\circ_{B, \sigma\tilde \tau} )} = \dim_k \frac{\tilde \mathcal{D}^\circ_{A, \tilde \tau}}{ V(\tilde\mathcal{D}^\circ_{A, \sigma\tilde \tau} )} - \big( \dim_k \frac{ \tilde \mathcal{D}^\circ_{C, \tilde \tau} }{ \tilde\mathcal{D}^\circ_{B,\tilde \tau}} -\dim_k \frac{ \tilde \mathcal{D}^\circ_{C, \tilde \tau} }{ \tilde\mathcal{D}^\circ_{A,\tilde \tau}} \big) + \big( \dim_k \frac{ \tilde \mathcal{D}^\circ_{C, \sigma\tilde \tau} }{ \tilde\mathcal{D}^\circ_{B,\sigma \tilde\tau}} -\dim_k \frac{ \tilde \mathcal{D}^\circ_{C, \sigma\tilde \tau} }{ \tilde\mathcal{D}^\circ_{A,\sigma\tilde \tau}} \big);
\]
here we used the fact that the quotient $\tilde \mathcal{D}^\circ_{C, \sigma\tilde \tau} / \tilde \mathcal{D}_{B, \sigma \tilde \tau}^\circ$ has the same dimension as $V(\tilde \mathcal{D}^\circ_{C, \sigma\tilde \tau}) / V(\tilde \mathcal{D}_{B, \sigma \tilde \tau}^\circ)$, and the same for $A$ in place of $B$, because $V$ is injective. Using our construction of the abelian varieties $B$ and $C$, we deduce
\begin{equation} \label{E:dimension count}
\dim_k \big( \tilde \mathcal{D}^\circ_{B, \tilde \tau} \big/ V(\tilde\mathcal{D}^\circ_{B, \sigma\tilde \tau} ) \big)= s_{\tilde \tau} - \big( \delta_{\tilde \Delta(\mathtt{T})^-}(\tilde \tau) - \delta_{\tilde \Delta(\mathtt{T})^+}(\tilde \tau) \big) + \big(\delta_{\tilde \Delta(\mathtt{T})^-}(\sigma\tilde \tau) - \delta_{\tilde \Delta(\mathtt{T})^+}(\sigma\tilde \tau)\big).
\end{equation}
Using the definition of $\tilde \Delta(\mathtt{T})^\pm$, one checks case-by-case that the expression \eqref{E:dimension count} is equal to $s_{ \mathtt{T}, \tilde \tau}$. We will only indicate the proof when $\tilde \tau \in \Sigma_{E, \infty/\mathfrak{p}}$ for $\mathfrak{p}$ in Case $\alpha1$, and leave the other cases as an exercise for interested readers.
Indeed, under the notation from Subsection~\ref{S:Delta-pm}, when $\mathfrak{p}\in \Sigma_p$ is in Case $\alpha1$, we have $\tilde \Delta(\mathtt{T})_{/\mathfrak{p}}^\pm = \coprod_i \tilde C_i^\pm$. Then
\[
\delta_{\tilde \Delta(\mathtt{T})^+}(\tilde \tau)-\delta_{\tilde \Delta(\mathtt{T})^+}(\sigma\tilde \tau) =\left\{ \begin{array}{ll} 1&\textrm{if } \tilde \tau \textrm{ is one of }\sigma^{-a_1}\tilde \tau_i^c, \sigma^{-a_3}\tilde \tau_i^c, \dots; \\ -1&\textrm{if } \tilde \tau \textrm{ is one of }\sigma^{-a_2}\tilde \tau_i^c, \sigma^{-a_4}\tilde \tau_i^c, \dots; \\ 0&\textrm{otherwise}; \end{array} \right.
\]
\[
\textrm{and}\ \delta_{\tilde \Delta(\mathtt{T})^-}(\tilde \tau)-\delta_{\tilde \Delta(\mathtt{T})^-}(\sigma\tilde \tau) =\left\{ \begin{array}{ll} 1&\textrm{if } \tilde \tau \textrm{ is one of }\sigma^{-a_1}\tilde \tau_i, \sigma^{-a_3}\tilde \tau_i, \dots; \\ -1&\textrm{if } \tilde \tau \textrm{ is one of }\sigma^{-a_2}\tilde \tau_i, \sigma^{-a_4}\tilde \tau_i, \dots; \\ 0&\textrm{otherwise}. \end{array} \right.
\]
Putting these two formulas together and using the notation from Subsection~\ref{S:tilde S(T)}, we have
\[
\big(\delta_{\tilde \Delta(\mathtt{T})^+}(\tilde \tau) -\delta_{\tilde \Delta(\mathtt{T})^+}(\sigma\tilde \tau) \big) -\big( \delta_{\tilde \Delta(\mathtt{T})^-}(\tilde \tau) -\delta_{\tilde \Delta(\mathtt{T})^-}(\sigma \tilde \tau)\big) = \delta_{\tilde \mathtt{T}'}(\tilde \tau^c)-\delta_{\tilde \mathtt{T}'}(\tilde \tau).
\]
This implies that \eqref{E:dimension count} is equal to $s_{ \mathtt{T}, \tilde \tau}$, and concludes the proof of Lemma~\ref{L:dimension count}.
\end{proof}
We now continue our proof of Proposition~\ref{P:Y_S=X_S}.
To fulfill condition (vi) of the moduli space $Y'_\mathtt{T}$, the tame level structure on $B$ is chosen and determined as the composite
\[
\beta^p_{K'^p_\mathtt{T}}: \widehat \Lambda_{\mathtt{S}(\mathtt{T})}^{(p)} \xrightarrow{\theta_\mathtt{T}^{-1}} \widehat \Lambda_{\mathtt{S}}^{(p)} \xrightarrow{\alpha} T^{(p)}(A) \xrightarrow{T^{(p)}(\phi_A)} T^{(p)}(C) \xrightarrow{T^{(p)}(\phi_B)^{-1}} T^{(p)}(B),
\]
where both $T^{(p)}(\phi_A)$ and $T^{(p)}(\phi_B)$ are isomorphisms because $\phi_A$ and $\phi_B$ are $p$-isogenies.
It remains to show that there exists a unique collection of subgroups $\beta_{p}$ satisfying Theorem~\ref{T:unitary-shimura-variety-representability}(c2) for $\mathbf{Sh}_{K'_{\mathtt{T}}}(G'_{\tilde \mathtt{S}(\mathtt{T})})$ and properties (vii) and (viii) of $Y'_{\mathtt{T}}$. Note that such a $\beta_\mathfrak{p}$ is required only when the corresponding prime $\mathfrak{p} \in \Sigma_p$ is either of type $\alpha^\sharp$ for $\mathtt{S}$ or in Case $\alpha2$ of Subsection~\ref{S:quaternion-data-T}. In the former case, we have $\mathtt{T}_{/\mathfrak{p}} = \emptyset$, which forces $\tilde \Delta(\mathtt{T})^\pm_{/\mathfrak{p}} = \emptyset$ by definition. So the induced morphisms $\phi_{A,\mathfrak{p}}: A[\mathfrak{p}^\infty] \to C[\mathfrak{p}^\infty]$ and $\phi_{B,\mathfrak{p}}: B[\mathfrak{p}^\infty] \to C[\mathfrak{p}^\infty]$ are isomorphisms. Now, condition (vii) of the moduli space $Y'_\mathtt{T}$ determines that the level structure $\beta_\mathfrak{p}$ is taken to be $\phi_{B,\mathfrak{p}}^{-1}\big( \phi_{A, \mathfrak{p}}(\alpha_\mathfrak{p})\big)$. If $\mathfrak{p}$ is in Case $\alpha2$ of Subsection~\ref{S:quaternion-data-T}, the prime $\mathfrak{p}$ splits into two primes $\mathfrak{q}$ and $\bar\mathfrak{q}$ in $E$. Using the polarization $\lambda_B$, we just need to show that there exists a unique subgroup $H_{\mathfrak{q}}\subseteq B[\mathfrak{q}]$ satisfying condition (viii).
Since $s_{\mathtt{T}, \tilde \tau} = 0$ or $2$ for $\tilde \tau \in \Sigma_{E,\infty/\mathfrak{p}}$, both $ F_{B,\mathrm{es},\tilde \tau}$ and $ V_{B,\mathrm{es}, \tilde \tau}$ are isomorphisms. We define a one-dimensional $k$-vector subspace $\cD_{H_{\mathfrak{q}},\tilde\tau}^\circ\subseteq \tcD_{B,\tilde\tau}^\circ/p\tcD_{B,\tilde\tau}^\circ$ for each $\tilde\tau\in \Sigma_{E,\infty/\mathfrak{q}}$ as follows:
\begin{itemize}
\item If $\tilde\tau\in \tilde\Delta(\mathtt{T})^-_{/\mathfrak{p}}$, then $\tilde \mathcal{D}^\circ_{B, \tilde \tau}$ is contained in $ \tilde \mathcal{D}^\circ_{C, \tilde \tau}\cong\tilde \mathcal{D}^\circ_{A, \tilde \tau}$ with quotient isomorphic to $k$; put $\cD^{\circ}_{H_{\mathfrak{q}},\tilde\tau}=p\tcD^\circ_{A,\tilde\tau}/p\tcD^\circ_{B,\tilde\tau}$.
\item If $\tilde\tau\notin \tilde\Delta(\mathtt{T})^{-}_{/\mathfrak{p}}$, let $n\in \mathbb{N}$ be the least positive integer such that $\sigma^{-n}\tilde\tau\in \tilde\Delta(\mathtt{T})^-_{/\mathfrak{p}}$ (such an $n$ exists because $\tilde \tau_0$ in Subsection~\ref{S:tilde S(T)} belongs to $\tilde \Delta(\mathtt{T})^-_{/\mathfrak{p}}$); put $\cD^{\circ}_{H_{\mathfrak{q}},\tilde\tau}=F^{n}_{B,\mathrm{es}}(\cD^{\circ}_{H_{\mathfrak{q}},\sigma^{-n}\tilde\tau})$.
\end{itemize}
Put $\cD_{H_{\mathfrak{q}}}=\bigoplus_{\tilde\tau\in \Sigma_{E,\infty/\mathfrak{q}}}\cD^{\circ,\oplus 2}_{H_{\mathfrak{q}}, \tilde\tau}$. Using the vanishing of the partial Hasse invariants $\{h_{\tilde\tau}(A): \tilde\tau\in {\mathtt{T}}_{E,\infty/\mathfrak{p}} \}$, one checks easily that $\cD_{H_{\mathfrak{q}}}\subseteq \cD_{B[\mathfrak{q}]}$ is a Dieudonn\'e submodule. We define $H_{\mathfrak{q}}\subseteq B[\mathfrak{q}]$ as the finite subgroup scheme corresponding to $\cD_{H_{\mathfrak{q}}}$ by covariant Dieudonn\'e theory.
Then $\cD_{H_{\mathfrak{q}}}$ is canonically identified with the kernel of the induced map \[ \phi_{\mathfrak{q},*} \colon \cD_{B}=H^{\mathrm{dR}}_1(B/k)\rightarrow \cD_{B/H_{\mathfrak{q}}}=H^{\mathrm{dR}}_1((B/H_{\mathfrak{q}})/k). \] Therefore, $H_{\mathfrak{q}}$ satisfies condition (viii) of the moduli space $Y'_\mathtt{T}$. This shows the existence of $H_{\mathfrak{q}}$. For the uniqueness, condition (viii) forces the choice of $\cD_{H_{\mathfrak{q}}, \tilde{\tau}}^{\circ}$ for $\tilde\tau\in \tilde{\Delta}(\mathtt{T})^-_{/\mathfrak{p}}$, and the stability under $F_B$ and $V_B$ forces the choice at the other $\tilde \tau$'s. This concludes the proof that $Y'_{\mathtt{T}}\rightarrow X'_{\mathtt{T}}$ induces a bijection on closed points. \end{proof} \begin{lemma}\label{L:Y_T-X_T-tangent} The map $\eta_1: Y'_{\mathtt{T}}\rightarrow X'_{\mathtt{T}}$ induces an isomorphism of tangent spaces at every closed point. \end{lemma} \begin{proof} Let $y=(A, \iota_A,\lambda_A, \alpha_{K'}, B, \lambda_B, \iota_B, \beta_{K'_\mathtt{T}}, C, \iota_C; \phi_A, \phi_B)$ be a closed point of $Y'_{\mathtt{T}}$ with values in a perfect field $k$, and let $x=(A, \iota_A,\lambda_A, \alpha_{K'})$ be its image in $X'_{\mathtt{T}}$. We have to show that $Y'_{\mathtt{T}}\rightarrow X'_{\mathtt{T}}$ induces an isomorphism of $k$-vector spaces between tangent spaces: $T_{Y'_{\mathtt{T}}, y}\xrightarrow{\cong}T_{X'_{\mathtt{T}}, x}$. Set $\mathbb{I}=\Spec(k[\epsilon]/\epsilon^2)$. By deformation theory, $T_{X'_{\mathtt{T}},x}$ is identified with the set of $\mathbb{I}$-valued points $x_{\mathbb{I}}=(A_{\mathbb{I}}, \iota_{A,\mathbb{I}}, \lambda_{A,\mathbb{I}}, \alpha_{K',\mathbb{I}})$ of $X'_{\mathtt{T}}$ with reduction $x\in X'_{\mathtt{T}}(k)$ modulo $\epsilon$. 
In the proof of Proposition~\ref{Prop:smoothness}, we have seen that giving such an $x_{\mathbb{I}}$ is equivalent to giving, for each $\tilde\tau\in \Sigma_{E,\infty}$, a direct factor $\omega^{\circ}_{A^\vee,\mathbb{I},\tilde\tau}\subseteq H^{\mathrm{cris}}_1(A/k)_{\mathbb{I},\tilde\tau}^\circ$ that lifts $\omega_{A^\vee,\tilde\tau}\subseteq H^{\mathrm{dR}}_1(A/k)^{\circ}_{\tilde\tau}$ and satisfies the following properties: \begin{itemize} \item[(a)] If $\tilde\tau\in \Sigma_{E,\infty/\mathfrak{p}}$ with $\mathfrak{p}$ not of type $\beta^\sharp$ for $\mathtt{S}$, then $\omega_{A^\vee,\mathbb{I},\tilde\tau}^{\circ}$ and $\omega_{A^\vee,\mathbb{I},\tilde\tau^c}^\circ$ are orthogonal complements of each other under the perfect pairing \[ H^{\mathrm{cris}}_1(A/k)_{\mathbb{I},\tilde\tau}^\circ\times H^{\mathrm{cris}}_1(A/k)_{\mathbb{I},\tilde\tau^c}^\circ \rightarrow k[\epsilon]/\epsilon^2 \] induced by the polarization $\lambda_A$. \item[(b)] If $\tilde\tau \in \tilde \mathtt{S}_\infty$, then $\omega_{A^\vee,\mathbb{I},\tilde\tau}^\circ=0$ and $\omega_{A^\vee,\mathbb{I},\tilde\tau^c}^\circ = H^{\mathrm{cris}}_1(A/k)^{\circ}_{\mathbb{I},\tilde\tau^c}$. \item[(c)] If $\tilde\tau$ restricts to $\tau\in \mathtt{T}$, then $\omega_{A^\vee,\mathbb{I},\tilde\tau}^{\circ}$ has to be $F_{A,\mathrm{es}}^{n_{\tau}}(H^\mathrm{cris}_1(A^{(p^{n_{\tau}})}/k)_{\mathbb{I},\tilde\tau}^\circ)$, where $n_\tau$ is as introduced in Subsection~\ref{S:partial-Hasse} and $F_{A,\mathrm{es}}^{n_\tau}$ on the crystalline homology is defined in the same way as $F_{A, \mathrm{es}}^{n_\tau}$ on the de Rham homology, as in Notation~\ref{N:essential frobenius and verschiebung}. Since we are in characteristic $p$, we have $F_{A,\mathrm{es}}^{n_{\tau}}(H^\mathrm{cris}_1(A^{(p^{n_{\tau}})}/k)_{\mathbb{I},\tilde\tau}^\circ)=\omega_{A^\vee,\tilde\tau}\otimes k[\epsilon]/\epsilon^2$. 
\end{itemize} Note also that the crystal nature of $H^{\mathrm{cris}}_1(A/k)$ implies that there is a canonical isomorphism $$H^{\mathrm{cris}}_1(A/k)_{\mathbb{I}}\simeq H^{\mathrm{dR}}_1(A/k)\otimes_k k[\epsilon]/\epsilon^2.$$ We have to show that, given such an $x_{\mathbb{I}}$, or equivalently given the liftings $\omega_{A^\vee, \mathbb{I},\tilde\tau}^\circ$ as above, there exists a unique tuple $(B_{\mathbb{I}}, \lambda_{B,\mathbb{I}}, \iota_{B,\mathbb{I}},\beta_{K'_\mathtt{T}, \mathbb{I}}, C_{\mathbb{I}}, \iota_{C, \mathbb{I}}; \phi_{A,\mathbb{I}},\phi_{B,\mathbb{I}}) $ over $\mathbb{I}$ deforming $(B,\lambda_B, \iota_B, \beta_{K'_\mathtt{T}}, C, \iota_C; \phi_A, \phi_B)$ such that $(A_{\mathbb{I}}, \iota_{A,\mathbb{I}}, \lambda_{A,\mathbb{I}}, \alpha_{K',\mathbb{I}}, B_{\mathbb{I}}, \lambda_{B,\mathbb{I}}, \iota_{B,\mathbb{I}},\beta_{K'_\mathtt{T}, \mathbb{I}}, C_{\mathbb{I}}, \iota_{C, \mathbb{I}}; \phi_{A,\mathbb{I}},\phi_{B,\mathbb{I}})$ is an $\mathbb{I}$-valued point of $Y'_{\mathtt{T}}$. We start with $C_{\mathbb{I}}$. To show its existence, it suffices to construct, for each $\tilde{\tau}\in \Sigma_{E,\infty}$, a direct factor $\omega_{C^\vee,\mathbb{I},\tilde\tau}^\circ\subseteq H^{\mathrm{cris}}_1(C/k)_{\mathbb{I},\tilde\tau}^\circ$ that lifts $\omega^\circ_{C^\vee, \tilde\tau}\subseteq\cD_{C,\tilde\tau}^\circ\cong H^{\mathrm{dR}}_1(C/k)_{\tilde\tau}^\circ$. \begin{itemize} \item When neither $\tilde \tau$ nor $\sigma \tilde \tau $ belongs to $ \tilde \Delta(\mathtt{T})^+$, $\phi_{A, *, ?}: H^{\mathrm{dR}}_{1}(A/k)^\circ_{?}\xrightarrow{\sim}H^{\mathrm{dR}}_1(C/k)^\circ_{?}$ is an isomorphism for $?=\tilde\tau, \sigma\tilde\tau$. 
We take $\omega_{C^\vee,\mathbb{I},\tilde\tau}^\circ\subseteq H^{\mathrm{cris}}_1(C/k)_{\mathbb{I},\tilde\tau}^\circ$ to be the image of $\omega^\circ_{A^\vee, \mathbb{I},\tilde\tau}\subseteq H^{\mathrm{cris}}_1(A/k)_{\mathbb{I},\tilde\tau}^\circ$ under the induced morphism $\phi^\mathrm{cris}_{A, *, \tilde \tau}$ on the crystalline homology. \item When either one of $\tilde\tau$ and $\sigma \tilde \tau$ belongs to $\tilde \Delta(\mathtt{T})^+$, an easy dimension count argument similar to Lemma~\ref{L:dimension count} (using Lemma~\ref{L:property of Delta}) shows that $\omega_{C^\vee, \tilde \tau}^\circ$ is either $0$ or of rank $2$; there is a unique obvious such lift $\omega_{C^\vee,\mathbb{I},\tilde\tau}^\circ$. \end{itemize} This finishes the construction of $\omega^\circ_{C^\vee,\mathbb{I},\tilde\tau}$ for all $\tilde\tau$, hence one gets a deformation $C_{\mathbb{I}}$ of $C$. It is clear that the map $\phi^\mathrm{cris}_{A,*}: H^{\mathrm{cris}}_1(A/k)^\circ_{\mathbb{I},\tilde\tau}\rightarrow H^{\mathrm{cris}}_1(C/k)^\circ_{\mathbb{I},\tilde\tau}$ sends $\omega^\circ_{A^\vee,\mathbb{I},\tilde\tau}$ to $\omega^\circ_{C^\vee,\mathbb{I},\tilde\tau}$. Hence, $\phi_{A}$ deforms to an isogeny of abelian schemes $\phi_{A,\mathbb{I}}:A_{\mathbb{I}}\rightarrow C_{\mathbb{I}}$ by \cite[2.1.6.9]{lan}. We check now that $\phi_{A_\mathbb{I}}$ satisfies condition (iv) of the moduli space $Y'_{\mathtt{T}}$. We note that the map $\phi_{A_{\mathbb{I}},*}: H^{\mathrm{dR}}_1(A_{\mathbb{I}}/\mathbb{I})\rightarrow H^{\mathrm{dR}}_1(C_{\mathbb{I}}/\mathbb{I})$ is canonically identified with $\phi_{A,*}^{\mathrm{cris}}: H^\mathrm{cris}_1(A/k)_{\mathbb{I}}\rightarrow H^{\mathrm{cris}}_1(C /k)_\mathbb{I}$ by crystalline theory, which is in turn identified with the base change of $\phi_{A,*}:H^\mathrm{dR}_1(A/k)\rightarrow H^{\mathrm{dR}}_1(C/k)$ via $k\hookrightarrow k[\epsilon]/\epsilon^2$. Let $\tilde\tau\in \tilde \Delta(\mathtt{T})^+$. 
Since the Frobenius on $k[\epsilon]/\epsilon^2$ factors as $$ k[\epsilon]/\epsilon^2\twoheadrightarrow k\xrightarrow{x\mapsto x^p} k\hookrightarrow k[\epsilon]/\epsilon^2, $$ we see that $$ F_{A,\mathrm{es}}^n(H^{\mathrm{dR}}_1(A_{\mathbb{I}}^{(p^n)}/\mathbb{I})^\circ_{\tilde\tau}) =F_{A,\mathrm{es}}^n(H^{\mathrm{dR}}_1(A^{(p^n)}/k)^\circ_{\tilde\tau})\otimes_k k[\epsilon]/\epsilon^2. $$ Hence, the kernel of $\phi_{A_{\mathbb{I}},*,\tilde\tau}: H^{\mathrm{dR}}_1(A_{\mathbb{I}}/\mathbb{I})^\circ_{\tilde\tau}\rightarrow H^\mathrm{dR}_1(C_{\mathbb{I}}/\mathbb{I})^\circ_{\tilde\tau}$ coincides with $F_{A,\mathrm{es}}^n(H^{\mathrm{dR}}_1(A_{\mathbb{I}}^{(p^n)}/\mathbb{I})^\circ_{\tilde\tau})$, since this is the case after reduction modulo $\epsilon$. This shows that $\phi_{A_\mathbb{I}}$ satisfies condition (iv). Conversely, it is clear that, if $C_{\mathbb{I}}$ and $\phi_{A_\mathbb{I}}$ satisfy condition (iv), then they have to be of the form above. We show now that there exists a unique deformation $(B_{\mathbb{I}}, \phi_{B_\mathbb{I}})$ over $\mathbb{I}$ of $(B,\phi_B)$ satisfying condition (v) of the moduli space $Y'_\mathtt{T}$. To construct $B_{\mathbb{I}}$, one has to specify, for each $\tilde\tau\in \Sigma_{E,\infty}$, a subbundle $\omega_{B^\vee, \mathbb{I}, \tilde\tau}^\circ\subseteq H^{\mathrm{cris}}_1(B/k)_{\mathbb{I},\tilde\tau}^\circ$ lifting $\omega^\circ_{B^\vee,\tilde\tau}\subseteq H^{\mathrm{dR}}_1(B/k)^\circ_{\tilde\tau}$. Similar to the discussion above, \begin{itemize} \item If neither $\tilde \tau$ nor $\sigma \tilde \tau$ belongs to $\tilde \Delta(\mathtt{T})^-$, then $\phi_{B, *, ?}: H^{\mathrm{dR}}_{1}(B/k)^\circ_{?}\xrightarrow{\sim}H^{\mathrm{dR}}_1(C/k)^\circ_{?}$ is an isomorphism for $?=\tilde\tau, \sigma\tilde\tau$. 
We take $\omega_{B^\vee,\mathbb{I},\tilde\tau}^\circ\subseteq H^{\mathrm{cris}}_1(B/k)_{\mathbb{I},\tilde\tau}^\circ$ to be the image of $\omega^\circ_{C^\vee, \mathbb{I},\tilde\tau}\subseteq H^{\mathrm{cris}}_1(C/k)_{\mathbb{I},\tilde\tau}^\circ$ under the induced morphism $\phi_{B, *, \tilde \tau}^{-1}$ on the crystalline homology. \item If at least one of $\tilde \tau$ and $\sigma \tilde \tau$ belongs to $\tilde \Delta(\mathtt{T})^-$, then an easy dimension count argument similar to Lemma~\ref{L:dimension count} (using Lemma~\ref{L:property of Delta}) shows that $\omega^\circ_{B^\vee, \tilde \tau}$ is either $0$ or of rank $2$; there is a unique obvious such lift $\omega^\circ_{B^\vee, \mathbb{I}, \tilde \tau}$. \end{itemize} This defines $\omega_{B^\vee,\mathbb{I}, \tilde\tau}^\circ$ for all $\tilde\tau\in \Sigma_{E,\infty}$. Hence, one gets a deformation $B_{\mathbb{I}}$ of $B$ over $k[\epsilon]/\epsilon^2$. It is immediate from the construction that the action of $\mathcal{O}_D$ lifts to $B_{\mathbb{I}}$, and that $\phi^\mathrm{cris}_{B,*,\tilde\tau}: H^{\mathrm{cris}}_1(B/k)^{\circ}_{\mathbb{I}, \tilde\tau}\rightarrow H^{\mathrm{cris}}_1(C/k)^\circ_{\mathbb{I},\tilde\tau}$ sends $\omega_{B^\vee,\mathbb{I}, \tilde\tau}^\circ$ to $ \omega_{C^\vee, \mathbb{I},\tilde\tau}^\circ$ for all $\tilde\tau\in \Sigma_{E,\infty}$. Hence, $\phi_{B}:B\rightarrow C$ deforms to an isogeny $\phi_{B_\mathbb{I}}: B_{\mathbb{I}}\rightarrow C_{\mathbb{I}}$. In the same way as for $\phi_{A_{\mathbb{I}}}$, we prove that $\phi_{B_\mathbb{I}}$ satisfies condition (v) of the moduli space $Y'_{\mathtt{T}}$, and conversely condition (v) determines $B_{\mathbb{I}}$ uniquely. Let $\langle\ ,\ \rangle_{\lambda_{B}}:H^{\mathrm{cris}}_1(B/k)_{\mathbb{I}}^\circ\times H^\mathrm{cris}_1(B/k)^\circ_{\mathbb{I}}\rightarrow k[\epsilon]/\epsilon^2$ be the pairing induced by the polarization $\lambda_B$. 
To prove that $\lambda_{B}$ deforms (necessarily uniquely) to a polarization $\lambda_{B_\mathbb{I}}$ on $B_{\mathbb{I}}$, it suffices to check that $\langle\ ,\ \rangle_{\lambda_B}$ vanishes on $\omega_{B^\vee, \mathbb{I}, \tilde\tau}^\circ\times \omega_{B^\vee,\mathbb{I}, \tilde\tau^c}^\circ$ for all $\tilde\tau\in \Sigma_{E,\infty}$ (cf. \cite[2.1.6.9, 2.2.2.2, 2.2.2.6]{lan}): \begin{itemize} \item If $\tau=\tilde\tau|_F$ lies in $\mathtt{S}(\mathtt{T})_\infty$, this is trivial, because one of $\omega^{\circ}_{B^\vee, \mathbb{I}, \tilde\tau}$ and $\omega_{B^\vee, \mathbb{I}, \tilde\tau^c}^\circ$ equals $0$ and the other equals $H^{\mathrm{cris}}_1(B/k)_{\mathbb{I},\tilde\tau}^\circ$ by construction. \item If $\tau=\tilde\tau|_F$ is not in $\mathtt{S}(\mathtt{T})_\infty$, then the natural isomorphism $H^{\mathrm{cris}}_1(B/k)^\circ_{\mathbb{I},\star}\cong H^\mathrm{cris}_1(A/k)^\circ_{\mathbb{I}, \star}$ sends $\omega^\circ_{B^\vee, \mathbb{I}, \star}$ to $\omega^\circ_{A^\vee,\mathbb{I}, \star}$ for $\star=\tilde\tau, \tilde\tau^c$. The vanishing of $\langle \ ,\ \rangle_{\lambda_B}$ on $\omega^\circ_{B^\vee,\mathbb{I},\tilde\tau}\times \omega_{B^\vee, \mathbb{I}, \tilde\tau^c}^\circ$ follows from the analogous statement with $B$ replaced by $A$. \end{itemize} Therefore, we see that $\lambda_B$ deforms to a polarization $\lambda_{B_{\mathbb{I}}}$ on $B_\mathbb{I}$. Since $\lambda_{B_\mathbb{I},*}^\mathrm{dR}:H^{\mathrm{dR}}_1(B_\mathbb{I}/\mathbb{I})\rightarrow H^{\mathrm{dR}}_1(B_\mathbb{I}^\vee/\mathbb{I})$ is canonically identified with $\lambda_{B,*}^\mathrm{cris}: H^{\mathrm{cris}}_1(B/k)_{\mathbb{I}}\rightarrow H^{\mathrm{cris}}_1(B^\vee/k)_{\mathbb{I}}$, which is in turn identified with the base change of $\lambda^\mathrm{dR}_{B,*}$ via $k\hookrightarrow k[\epsilon ]/\epsilon^2$, it is clear that condition (ii) regarding the polarization is preserved by the deformation $\lambda_{B_{\mathbb{I}}}$. 
It remains to prove that $\beta_{K'_\mathtt{T}}$ deforms to $\beta_{K'_\mathtt{T}, \mathbb{I}}$. The deformation of the tame level structure is automatic; the deformation of the subgroup at $p$-adic places of type $\alpha^\sharp$ and $\alpha2$ is also unique, by the same argument as in Theorem~\ref{T:unitary-shimura-variety-representability}. \end{proof} \subsection{A lift of $I_\mathtt{T}$} \label{S:tilde IT} Recall that $I_\mathtt{T}$ is the subset $\mathtt{S}(\mathtt{T})_\infty - (\mathtt{S}_\infty \cup \mathtt{T})$ defined in Theorem~\ref{T:main-thm}. We use $\tilde I_{\mathtt{T}}$ to denote the subset of complex embeddings of $E$ consisting of the unique lift $\tilde \tau$ of every element $\tau \in I_\mathtt{T}$ for which $\tilde \tau^c \in \tilde \mathtt{S}(\mathtt{T})_\infty$. We describe this set explicitly as follows. We write $I_{\mathtt{T}/\mathfrak{p}} = I_\mathtt{T} \cap \Sigma_{\infty/\mathfrak{p}}$ and $\tilde I_{\mathtt{T}/\mathfrak{p}} = \tilde I_\mathtt{T} \cap \Sigma_{E, \infty/\mathfrak{p}}$ for $\mathfrak{p} \in \Sigma_p$; these sets are empty unless $\mathfrak{p}$ is of type $\alpha 1$ or $\beta 1$. When $\mathfrak{p}$ is of type $\alpha1$ or $\beta1$, using the notation of Subsection~\ref{S:quaternion-data-T}, $I_{\mathtt{T}/\mathfrak{p}}$ consists of $\sigma^{-m_i-1}\tau_i$ for all $i$ such that $\#(C_i \cap \mathtt{T}_{/\mathfrak{p}})$ is odd. In the notation of Subsection~\ref{S:tilde S(T)}, the set $\tilde I_{\mathtt{T}/\mathfrak{p}}$ consists of $\sigma^{-a_{r_i}}\tilde \tau_i$ for all such $i$ as above. We remark that, in either case, for any $\tilde \tau$ lifting a place $\tau \in I_{\mathtt{T}}$, we have $\tilde \tau \notin \tilde \Delta(\mathtt{T})^+ \cup \tilde \Delta(\mathtt{T})^-$. 
\subsection{Isomorphism of $Y'_\mathtt{T}$ with $Z'_{\mathtt{T}}$} \label{S:Y_T=Z_T} Let $Z'_\mathtt{T}$ be the moduli space over $k_0$ representing the functor that takes a locally noetherian $k_0$-scheme $S$ to the set of isomorphism classes of tuples $(B, \iota_B, \lambda_B, \beta_{K'_{\mathtt{T}}}, J^\circ)$, where \begin{itemize} \item[(i)] $(B, \iota_B, \lambda_B, \beta_{K'_{\mathtt{T}}})$ is an $S$-valued point of $\mathbf{Sh}_{K'_{\mathtt{T}}}(G'_{\tilde \mathtt{S}(\mathtt{T})})$. \item[(ii)] $J^\circ$ is a collection of subbundles $J^\circ_{\tilde\tau}\subseteq H^{\mathrm{dR}}_1(B/S)_{\tilde\tau}^\circ$, locally free of rank $1$, one for each $\tilde\tau\in \tilde I_\mathtt{T}$.\end{itemize} It is clear that $Z'_{\mathtt{T}}$ is a $(\mathbb{P}^1)^{I_{\mathtt{T}}}$-bundle over $\mathbf{Sh}_{K'_{\mathtt{T}}}(G'_{\tilde \mathtt{S}(\mathtt{T})})$. We define a morphism $\eta_2: Y'_{\mathtt{T}}\rightarrow Z'_{\mathtt{T}}$ as follows: Let $S$ be a locally noetherian $k_0$-scheme, and let $x=(A, \iota_{A},\lambda_A, \alpha_{K'}, B, \iota_B, \lambda_B, \beta_{K'_\mathtt{T}}, C, \iota_C;\phi_A, \phi_B)$ be an $S$-valued point of $Y'_{\mathtt{T}}$. We define $\eta_2(x)\in Z'_{\mathtt{T}}(S)$ to be the isomorphism class of the tuple $(B, \iota_B, \lambda_B, \beta_{K'_{\mathtt{T}}}, J^\circ)$, where $J^\circ_{\tilde \tau}$ is given by $\phi_{B,*,\tilde\tau}^{-1}\circ \phi_{A,*,\tilde\tau}(\omega_{A^\vee,\tilde\tau}^\circ)$ under the isomorphisms \[ \xymatrix{ H^{\mathrm{dR}}_1(A/S)^\circ_{\tilde\tau}\ar[r]^{\phi_{A,*,\tilde\tau}}_{\cong} &H^{\mathrm{dR}}_1(C/S)^\circ_{\tilde\tau} & H^{\mathrm{dR}}_1(B/S)^\circ_{\tilde\tau}\ar[l]^{\cong}_{\phi_{B,*, \tilde\tau}}. }\] Note that $\tilde \tau \notin \tilde \Delta(\mathtt{T})^\pm$ implies that both $\phi_{A, *, \tilde \tau}$ and $\phi_{B, *, \tilde \tau}$ are isomorphisms. \begin{prop}\label{P:Y_T=Z_T} The morphism $\eta_2: Y'_{\mathtt{T}}\rightarrow Z'_{\mathtt{T}}$ is an isomorphism. 
\end{prop} We note that Theorem~\ref{T:main-thm-unitary} follows immediately from this Proposition and Proposition~\ref{P:Y_S=X_S}. \begin{proof} As in the proof of Proposition~\ref{P:Y_S=X_S}, it suffices to prove that $\eta_2$ induces a bijection on closed points and an isomorphism on tangent spaces. \textbf{Step I.} We show first that $\eta_2$ induces a bijection on closed points. Let $z=(B,\iota_B, \lambda_B, \beta_{K'_\mathtt{T}}, J^\circ)$ be a closed point of $Z'_{\mathtt{T}}$ with values in $k=\overline{\mathbb{F}}_p$. We have to show that there exists a \emph{unique} point $y=(A, \iota_{A}, \lambda_A, \alpha_{K'}, B, \iota_B, \lambda_B, \beta_{K'_\mathtt{T}}, C, \iota_C;\phi_A, \phi_B)\in Y'_{\mathtt{T}}(k)$ with $\eta_2(y)=z$. To prove this, we basically reverse the construction in the proof of Lemma~\ref{L:Y_T=X_T-1}. We start by reconstructing $C$ from $B$ and $J^\circ$. We denote by $\tcD_B=(\tcD^{\circ}_B)^{\oplus 2}$ the covariant Dieudonn\'e module of $B$, and by $\tcD^\circ_B=\bigoplus_{\tilde\tau\in \Sigma_{E,\infty}}\tcD_{B,\tilde\tau}^\circ$ the canonical decomposition according to the $\mathcal{O}_E$-action. We construct a Dieudonn\'e submodule $M^\circ=\bigoplus_{\tilde\tau\in \Sigma_{E,\infty}} M^\circ_{\tilde\tau}\subseteq \tcD_{B}^\circ [1/p]$ with $\tcD^\circ_{B}\subseteq M^\circ\subseteq p^{-1}\tcD^\circ_{B}$ as follows. Let $\tilde\tau\in \Sigma_{E,\infty/\mathfrak{p}}$ with $\mathfrak{p}\in \Sigma_p$. If $\tilde\tau\notin \tilde \Delta(\mathtt{T})^-$, we put $M^\circ_{\tilde\tau}=\tcD^\circ_{B,\tilde\tau}$. To define $M^\circ_{\tilde\tau}$ in the remaining cases, we separate the discussion according to the type of $\mathfrak{p}$. \begin{itemize} \item (Case $\alpha 1$ and $\beta 1$) Recall our notation from Subsections~\ref{S:quaternion-data-T}, \ref{S:tilde S(T)}, \ref{S:Delta-pm}, and \ref{S:tilde IT}. 
There are two subcases according to the parity of $\#(C_i \cap \mathtt{T}_{/\mathfrak{p}})$, where $C_i$ is a chain of $\mathtt{S}_{\infty/\mathfrak{p}}\cup \mathtt{T}_{/\mathfrak{p}}$ as in Subsection~\ref{S:quaternion-data-T}. (It should not be confused with the abelian variety $C$.) \begin{itemize} \item When $r_i=\#(C_i \cap \mathtt{T}_{/\mathfrak{p}})$ is odd, $\sigma^{-m_i-1}\tilde\tau_i\in \tilde I_{\mathtt{T}/\mathfrak{p}}$ so that $J^\circ_{\sigma^{-m_i-1}\tilde\tau_i}$ is defined. In this case, all $\tau = \sigma^{-\ell}\tau_i$ belong to $ \mathtt{S}(\mathtt{T})_{\infty/\mathfrak{p}}$ for $0 \leq \ell \leq m_i+1$; so $s_{\mathtt{T}, \sigma^{-\ell} \tilde \tau_i} \in \{0,2\}$ and the essential Frobenii \[ \xymatrix{ F_{B,\mathrm{es}}^{m_i+1-\ell}:\ \tcD^\circ_{B,\sigma^{-m_i-1}\tilde\tau_i}\ar[r]_-{ F_{B,\mathrm{es}}}^-{\cong} &\tcD^\circ_{B,\sigma^{-m_i}\tilde\tau_i}\ar[r]^-{\cong}_-{ F_{B,\mathrm{es}}} &\cdots \ar[r]^-{\cong}_-{ F_{B, \mathrm{es}}}&\tcD^{\circ}_{B,\sigma^{-\ell}\tilde\tau_i} }\] are isomorphisms for such an $\ell$. If $a_j \leq \ell < a_{j+1}$ for some odd number $j$, we put \[ M^\circ_{\sigma^{-\ell}\tilde\tau_i}=p^{-1} F_{B,\mathrm{es}}^{m_i+1-\ell}(\tilde J^\circ_{\sigma^{-m_i-1}\tilde\tau_i}), \] where $\tilde J^\circ_{\sigma^{-m_i-1}\tilde\tau_i}$ denotes the inverse image in $\tcD_{B,\sigma^{-m_i-1}\tilde\tau_i}^\circ$ of $J^\circ_{\sigma^{-m_i-1}\tilde\tau_i}\subseteq \cD^\circ_{B,\sigma^{-m_i-1}\tilde\tau_i}$ under the natural reduction map modulo $p$; otherwise, we have already defined $M^\circ_{\sigma^{-\ell}\tilde\tau_i}$ to be $\tilde \mathcal{D}^\circ_{B, \sigma^{-\ell}\tilde\tau_i}$. \item When $r_i=\#(C_i\cap \mathtt{T}_{/\mathfrak{p}})$ is even, there is no $J^\circ$ involved in this construction. 
Note that all $\tau = \sigma^{-\ell}\tau_i$ belong to $\mathtt{S}(\mathtt{T})_{\infty/\mathfrak{p}}$ for $0 \leq \ell \leq m_i$; so $s_{\mathtt{T}, \sigma^{-\ell} \tilde \tau_i} \in \{0,2\}$ and in the sequence of essential Frobenii \[ \xymatrix{\quad\quad F_{B,\mathrm{es}}^{m_i-\ell+1}:\ \tcD^{\circ}_{B,\sigma^{-m_i-1}\tilde\tau_i}\ar[r]_-{F_{B}}&\tcD^\circ_{B,\sigma^{-m_i}\tilde\tau_i}\ar[r]_-{ F_{B,\mathrm{es}}}^-{\cong} &\tcD^\circ_{B,\sigma^{-m_i+1}\tilde\tau_i}\ar[r]^-{\cong}_-{ F_{B,\mathrm{es}}} &\cdots \ar[r]^-{\cong}_-{ F_{B, \mathrm{es}}}&\tcD^{\circ}_{B,\sigma^{-\ell}\tilde\tau_i}, } \] all the maps except the first one are isomorphisms. If $a_j \leq \ell < a_{j+1}$ for some odd number $j$, we put $$ M^\circ_{\sigma^{-\ell}\tilde\tau_i}=p^{-1} F_{B,\mathrm{es}}^{m_i-\ell+1} (\tcD^{\circ}_{B,\sigma^{-m_i-1}\tilde\tau_i}); $$ then we have $\dim_k(M^\circ_{\sigma^{-\ell}\tilde\tau_i}/\tcD^\circ_{B,\sigma^{-\ell}\tilde\tau_i})=1$, since the cokernel of $F_B: \tcD^\circ_{B,\sigma^{-m_i-1}\tilde\tau_i}\rightarrow \tcD^{\circ}_{B,\sigma^{-m_i}\tilde\tau_i}$ has dimension $1$, as $s_{\mathtt{T},\sigma^{-m_i-1}\tilde\tau_i}=1$. (For other $\ell$, we have already defined $M^\circ_{\sigma^{-\ell}\tilde\tau_i}$ to be $\tilde \mathcal{D}^\circ_{B, \sigma^{-\ell}\tilde\tau_i}$.) \end{itemize} \item (Case $\alpha 2$) In this case, $\mathfrak{p}$ is a prime of type $\alpha^\sharp$ for $\mathrm{Sh}_{K_{\mathtt{T}}}(G_{\mathtt{S}(\mathtt{T})})$, and it splits into two primes $\mathfrak{q}$ and $\bar\mathfrak{q}$ in $E$. Let $H_{\mathfrak{q}}\subseteq B[\mathfrak{q}]$ be the closed subgroup scheme given in the data $\beta_{K'_\mathtt{T}}$. Let $H_{\bar \mathfrak{q}}$ be its annihilator under the Weil pairing on $B[\mathfrak{q}]$ induced by $\lambda_B$. (We collectively write $H_\mathfrak{p}$ for $H_\mathfrak{q} \times H_{\bar \mathfrak{q}}$.) 
Let $\cD^\circ_{H_\mathfrak{p}}=\bigoplus_{\tilde\tau\in \Sigma_{E,\infty/\mathfrak{p}}}\cD^\circ_{H_\mathfrak{p},\tilde\tau} \subseteq \cD_{B}^\circ$ be the reduced covariant Dieudonn\'e module of $H_\mathfrak{p} = H_{\mathfrak{q}} \times H_{\bar \mathfrak{q}}$. Then $\cD^\circ_{H_{\mathfrak{p}},\tilde\tau}$ is necessarily one-dimensional over $k$ for each $\tilde\tau\in \Sigma_{E,\infty/\mathfrak{p}}$. For $\tilde\tau\in \tilde\Delta(\mathtt{T})^-_{/\mathfrak{p}}$, we define $$ M^\circ_{\tilde\tau} =p^{-1}\tcD^\circ_{H_\mathfrak{p},\tilde\tau}, $$ where $\tcD^\circ_{H_\mathfrak{p},\tilde\tau}$ denotes the inverse image in $\tcD^\circ_{B,\tilde\tau}$ of the subspace $\cD^\circ_{H_{\mathfrak{p}},\tilde\tau}\subseteq \cD_{B,\tilde\tau}^\circ$. (We have defined $M^\circ_{\tilde \tau} = \tcD^\circ_{B, \tilde \tau}$ for $\tilde \tau \notin \tilde \Delta(\mathtt{T})^-_{/\mathfrak{p}}$ before.) \item (Case $\beta 2$) In this case, $\mathfrak{p}$ is a prime of type $\beta^\sharp$ for $\mathrm{Sh}_{K_{\mathtt{T}}}(G_{\mathtt{S}(\mathtt{T})})$. For $\tilde\tau\in \tilde \Delta(\mathtt{T})^-_{/\mathfrak{p}}$, let $\lambda_{B,*,\tilde\tau}: \cD^{\circ}_{B,\tilde\tau}\rightarrow \cD^\circ_{B^\vee,\tilde\tau}$ be the morphism induced by the polarization $\lambda_B$. By Theorem~\ref{T:unitary-shimura-variety-representability}(b3), $J^\circ_{\tilde\tau}:=\Ker(\lambda_{B,*,\tilde\tau})$ is a $k$-vector space of dimension $1$. We set $M^\circ_{\tilde\tau}=p^{-1}\tilde J^\circ_{\tilde\tau}$ for such $\tilde \tau$, where $\tilde J^\circ_{\tilde \tau}$ is the preimage of $J^\circ_{\tilde \tau}$ under the reduction map $\tilde \mathcal{D}^\circ_{B, \tilde \tau} \to \mathcal{D}^\circ_{B, \tilde \tau}$. Note that when viewing $\tcD^\circ_{B^\vee,\tilde\tau}$ as a lattice of $\tcD^\circ_{B,\tilde\tau}[1/p]$ using the polarization, we have $M^\circ_{\tilde\tau} =\tcD^\circ_{B^\vee,\tilde\tau}$. 
(We have defined $M^\circ_{\tilde \tau} = \tcD^\circ_{B, \tilde \tau}$ for $\tilde \tau \notin \tilde \Delta(\mathtt{T})^-_{/\mathfrak{p}}$ before.) \end{itemize} This concludes the definition of $M^\circ\subseteq p^{-1}\tcD^\circ_{B}$. One checks easily that $M^\circ$ is stable under $F_B$ and $V_B$. Consider the quotient Dieudonn\'e module $$ M/\tcD_{B}=(M^\circ/\tcD^\circ_{B})^{\oplus 2}\subseteq p^{-1}\tcD_{B}/\tcD_B\cong \cD_B. $$ Then $M/\tcD_B$ corresponds to a closed finite group scheme $G\subseteq B[p]$ stable under the action of $\mathcal{O}_D$. We put $C=B/G$ with the induced $\mathcal{O}_D$-action, and define $\phi_B:B\rightarrow C$ as the canonical $\mathcal{O}_D$-equivariant isogeny. Then the natural induced map $\phi_{B,*}:\tcD^\circ_{B}\rightarrow \tcD^\circ_{C}$ is identified with the inclusion $\tcD^\circ_B\hookrightarrow M^\circ$. We now construct $A$ from $C$. Similar to above, we first define a $W(k)$-lattice $L^\circ = \bigoplus_{\tilde \tau\in \Sigma_{E, \infty}} L^\circ_{\tilde \tau} \subseteq \tilde \mathcal{D}_C^\circ$, with $L^\circ_{\tilde \tau} = \tilde \mathcal{D}_{C, \tilde \tau}^\circ$ unless $\tilde \tau \in \tilde \Delta(\mathtt{T})^+$. If $\tilde \tau \in \tilde \Delta(\mathtt{T})^+$, then the corresponding $p$-adic place $\mathfrak{p} \in \Sigma_p$ cannot be of type $\beta2$ or $\beta^\sharp$. In this case, we identify $\tcD_{B}^\circ[1/p]$ with $\tcD_C^\circ[1/p]$ so that $\tcD_B^\circ$ and $\tcD^\circ_C$ are both viewed as $W(k)$-lattices in $\tcD_B^\circ[1/p]$. The polarization $\lambda_B$ induces a perfect pairing \[ \langle\ ,\ \rangle_{\lambda_B}\colon \tcD^\circ_{B,\tilde\tau}[1/p]\times \tcD^\circ_{B,\tilde\tau^c}[1/p]\rightarrow W(k)[1/p], \] which induces a perfect pairing between $\tcD^\circ_{B,\tilde\tau}$ and $\tcD^\circ_{B,\tilde\tau^c}$. 
We put $$ L^\circ_{\tilde\tau}=\tilde \mathcal{D}^{\circ,\perp}_{C,\tilde\tau^c}:=\{v\in \tcD^\circ_{C,\tilde\tau}[1/p]: \langle v, w\rangle_{\lambda_B}\in W(k)\text{ for all } w\in \tilde \mathcal{D}^\circ_{C,\tilde\tau^c} \}. $$ Note that $\tilde \tau \in \tilde \Delta(\mathtt{T})^+$ always implies that $\tilde \tau^c \in \tilde \Delta(\mathtt{T})^- - \tilde \Delta(\mathtt{T})^+$. So $\tilde \mathcal{D}^\circ_{C, \tilde \tau} = \tilde \mathcal{D}^\circ_{B, \tilde \tau}$ and $\tilde \mathcal{D}^\circ_{C, \tilde \tau^c} \supset \tilde \mathcal{D}^\circ_{B, \tilde \tau^c}$ with quotient isomorphic to $k$. This implies that $L^\circ_{\tilde \tau} \subseteq \tilde \mathcal{D}_{C, \tilde \tau}^\circ$ with quotient isomorphic to $k$. As usual, one verifies that $L^\circ$ is stable under $F_B$ and $V_B$ (because in each case it equals either $\tilde \mathcal{D}_{C, \tilde \tau}^\circ$ or $\tilde \mathcal{D}_{C, \tilde \tau^c}^{\circ, \perp}$), and we put $L=(L^\circ)^{\oplus 2}$. The quotient Dieudonn\'e module $L/p\tcD_{C}$ corresponds to a closed subgroup scheme $K\subseteq C[p]$ stable under the action of $\mathcal{O}_D$. We put $A=C/K$ equipped with the induced $\mathcal{O}_D$-action, and define $\phi_A\colon A\rightarrow C$ as the canonical $\mathcal{O}_D$-equivariant isogeny with kernel $C[p]/K$. Then $\phi_{A,*}:\tcD_{A}\rightarrow \tcD_C$ is identified with the natural inclusion $L\hookrightarrow \tcD_C$. We define $\lambda_A:A\rightarrow A^\vee$ to be the quasi-isogeny: \[ \lambda_A: A\xrightarrow{\phi_A} C \xleftarrow{\phi_B} B\xrightarrow{\lambda_B}B^\vee\xleftarrow{\phi_{B}^\vee} C^\vee \xrightarrow{\phi_A^\vee}A^\vee, \] and we will verify that $\lambda_A$ is a genuine isogeny (hence a polarization since $\lambda_B$ is) satisfying condition (b) of the moduli space $\mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}})$ as in Theorem~\ref{T:unitary-shimura-variety-representability}. 
We may identify $\tcD_{A}^\circ[1/p]$ and $\tcD^\circ_{A^\vee}[1/p]$ with $\tcD_{B}^\circ[1/p]$, and view both $\tcD^\circ_{A,\tilde\tau}$ and $\tcD^\circ_{A^\vee,\tilde\tau}$ as lattices of $\tcD^\circ_{B,\tilde\tau}[1/p]$. It suffices to show that we have a natural inclusion $$ \tcD^\circ_{A^\vee, \tilde\tau} \subseteq (\tcD^\circ_{A,\tilde\tau^c})^\perp= \big\{v\in \tcD^\circ_{B,\tilde\tau}[1/p]: \langle v, w\rangle_{\lambda_B}\in W(k)\text{ for all }w\in \tcD^\circ_{A,\tilde\tau^c}\big\}, $$ which is an isomorphism unless $\tilde \tau$ induces a $p$-adic place of type $\beta^\sharp$ for $\mathrm{Sh}_{K}(G_{\mathtt{S}})$, in which case it is an inclusion with quotient $k$. \begin{itemize} \item By the construction of $A$, this is clear for $\tilde \tau \in \tilde \Delta(\mathtt{T})^+$ and hence for their complex conjugates (as the duality is reciprocal). \item For all places $\tilde \tau \in \Sigma_{E,\infty/ \mathfrak{p}}$ such that $\mathfrak{p}$ is not of type $\beta2$ and $\tilde \tau \notin \tilde \Delta(\mathtt{T})^\pm$, we know that $\tilde \tau^c \notin \tilde \Delta(\mathtt{T})^\pm$. So $\tilde \mathcal{D}^\circ_{A, ? } = \tilde \mathcal{D}^\circ_{B,?}$ for $? = \tilde \tau, \tilde \tau^c$ under the identification. The statement is clear. Note that this includes the case that $\mathfrak{p}$ is a prime of type $\beta^\sharp$ for $\mathrm{Sh}_{K}(G_{\mathtt{S}})$. \item The only case left is when $\tilde \tau \in \Sigma_{E, \infty/\mathfrak{p}}$ for $\mathfrak{p}$ of type $\beta2$. In this case, $\tcD^\circ_{A,\tilde\tau}=\tcD^\circ_{C,\tilde\tau}$, which is the dual of $\tcD^\circ_{C,\tilde\tau^c}$ for all $\tilde\tau\in \Sigma_{E,\infty/\mathfrak{p}}$ by construction. \end{itemize} This concludes the verification that $\lambda_A$ is an isogeny satisfying condition (b) of Theorem~\ref{T:unitary-shimura-variety-representability} for the moduli space $\mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}})$. 
We now define the level structure $\alpha_{K'}=\alpha^p\alpha_p$ on $A$. For the prime-to-$p$ level structure $\alpha^p$, we define it to be the $K_{\mathtt{T}}'$-orbit of the composite isomorphism: \[\xymatrix{ \alpha^p:&\Lambda^{(p)}\ar[r]^-{\sim}_-{\beta^p}&T^{(p)}(B)\ar[r]^-{\sim}_-{\phi_{B,*}}& T^{(p)}(C) &T^{(p)}(A)\ar[l]_-{\sim}^-{\phi_{A,*}}. }\] We take the closed subgroup scheme $\alpha_\mathfrak{p}\subseteq A[\mathfrak{p}]$ for each $\mathfrak{p}\in \Sigma_p$ of type $\alpha^\sharp$ for $\mathtt{S}$ (and hence for $\mathtt{S}(\mathtt{T})$) to be the subgroup scheme corresponding to $\beta_\mathfrak{p}$ under the sequence of isomorphisms of $p$-divisible groups $$\xymatrix{ A[\mathfrak{p}^\infty]\ar[r]^{\phi_{A}}_{\cong} &C[\mathfrak{p}^\infty] &B[\mathfrak{p}^\infty]\ar[l]_{\phi_B}^{\cong} }$$ (note that $\tilde \Delta(\mathtt{T})^\pm_{/\mathfrak{p}} = \emptyset$ for $\mathfrak{p}$ of type $\alpha^\sharp$). It is clear that $\alpha_{K'}$ verifies condition (c) in Theorem~\ref{T:unitary-shimura-variety-representability}. This finishes the construction of all the data $y=(A,\iota_A,\lambda_A,\alpha_{K'}, B, \iota_B,\lambda_B, \beta_{K'_\mathtt{T}}, C, \iota_C;\phi_A,\phi_B)$. To see that $y$ is indeed a $k$-point of $Y'_{\mathtt{T}}$, we have to check that $y$ satisfies the conditions (i)--(ix) for $Y'_{\mathtt{T}}$ in Subsection~\ref{S:moduli-Y_S}. Conditions (ii), (iii), and (vi)--(ix) being clear from our construction, it remains to check (i), (iv), and (v). Moreover, the Kottwitz signature condition Theorem~\ref{T:unitary-shimura-variety-representability}(a) follows from Lemma~\ref{L:dimension count} immediately. So for property (i), it remains to show that the partial Hasse invariant $h_{\tilde\tau}(A)$ vanishes if $\tau = \tilde\tau|_F\in \mathtt{T}$. We now check these properties (i), (iv), and (v) in various cases. 
For this, we identify $\tilde \mathcal{D}_A^\circ[\frac1p]$, $\tilde \mathcal{D}_B^\circ[\frac 1p]$, and $\tilde \mathcal{D}_C^\circ[\frac 1p]$ via $\phi_{A, *}$ and $\phi_{B, *}$. (1) Assume that $\mathfrak{p}$ is a prime in Case $\alpha1$ or $\beta1$. We keep the notation as before. If $\tilde \tau$ does not lift a place belonging to some chain $C_i$ inside $\mathtt{S}_{\infty/\mathfrak{p}}\cup \mathtt{T}_{/\mathfrak{p}}$, then conditions (i), (iv), and (v) hold trivially. So we assume $\tilde \tau|_F \in C_i$ for some $C_i$. If $r_i=\#(\mathtt{T}_{/\mathfrak{p}}\cap C_i)$ is odd, unwinding our earlier construction gives, for $0 \leq \ell \leq m_i+1$, \begin{align*} &\tilde \mathcal{D}_{A, \sigma^{-\ell}\tilde \tau_i}^\circ = \tilde \mathcal{D}_{C, \sigma^{-\ell}\tilde \tau_i}^\circ = \left\{ \begin{array}{ll} p^{-1} F_{B,\mathrm{es}}^{m_i+1-\ell}(\tilde J^\circ_{\sigma^{-m_i-1}\tilde \tau_i}) & \textrm{ if }a_j \leq \ell< a_{j+1} \textrm{ for some odd }j, \\ \tilde \mathcal{D}_{B, \sigma^{-\ell}\tilde \tau_i}^\circ & \textrm{ otherwise;} \end{array} \right. \\ &\tilde \mathcal{D}_{A, \sigma^{-\ell}\tilde \tau_i^c}^\circ = \left\{ \begin{array}{ll} p F_{B,\mathrm{es}}^{m_i+1-\ell}(\tilde J^{\circ,\perp}_{\sigma^{-m_i-1}\tilde \tau_i}) & \textrm{ if }a_j \leq \ell< a_{j+1} \textrm{ for some odd }j, \\ \tilde \mathcal{D}_{B, \sigma^{-\ell}\tilde \tau_i^c}^\circ & \textrm{ otherwise; and} \end{array} \right.\\ & \tilde \mathcal{D}_{C, \sigma^{-\ell}\tilde \tau_i^c}^\circ = \tilde \mathcal{D}_{B, \sigma^{-\ell}\tilde \tau_i^c}^\circ \textrm{ for all } \ell. 
\end{align*} For condition (v) in Subsection~\ref{S:moduli-Y_S}, it is trivial unless $\tilde \tau = \sigma^{-\ell} \tilde \tau_i$ for some $\ell \in [a_j, a_{j+1})$ with $j$ odd; and in the exceptional case, it is equivalent to proving that (for the $n$ as in condition (v)) \[ \tilde \mathcal{D}^\circ_{B, \sigma^{-\ell} \tilde \tau_i} = F_{A, \mathrm{es}}^n (\tilde \mathcal{D}_{A, \sigma^{-\ell - n} \tilde \tau_i}^\circ). \] Since $n = a_{j+1}-\ell$, it follows that $\tcD_{A,\sigma^{-\ell-n}\tilde\tau_i}^{\circ}=\tcD_{B,\sigma^{-a_{j+1}}\tilde\tau_i}^{\circ}$ by definition. As $s_{\mathtt{T}, \sigma^{-a_{j+1}}\tilde\tau_i}=2$ and $s_{\sigma^{-a_{j+1}}\tilde\tau_i}=1$, $F_{A,\mathrm{es}}^n(\tilde \mathcal{D}_{A, \sigma^{-\ell - n} \tilde \tau_i}^\circ)$ coincides with $F_{B,\mathrm{es}}^n(\tilde \mathcal{D}_{B, \sigma^{-a_{j+1}}\tilde\tau_i}^\circ )$ by the definition of essential Frobenius. The desired equality follows from the fact that $F_{B,\mathrm{es}}^n(\tilde \mathcal{D}_{B, \sigma^{-a_{j+1}}\tilde\tau_i}^\circ )=\tcD_{B,\sigma^{-\ell}\tilde\tau_i}^{\circ}$. Similarly, condition (iv) is trivial unless $\tilde \tau = \sigma^{-\ell} \tilde \tau_i^c$ for some $\ell \in [a_j, a_{j+1})$ with $j$ odd, in which case, it is equivalent to the following equality (for the $n$ as in condition (iv)) \[ p\tilde \mathcal{D}^\circ_{B, \sigma^{-\ell} \tilde \tau_i^c} = F_{A, \mathrm{es}}^n (\tilde \mathcal{D}_{A, \sigma^{-\ell - n} \tilde \tau_i^c}^\circ). \] But $n = a_{j+1}-\ell$ by definition; so $\tilde \mathcal{D}_{A, \sigma^{-\ell - n} \tilde \tau_i^c}^\circ = \tilde \mathcal{D}^\circ_{B, \sigma^{-a_{j+1}}\tilde \tau_i^c}$. Since $s_{\mathtt{T}, \sigma^{-a_{j+1}}\tilde \tau^c_i} = 0$ and $s_{\sigma^{-a_{j+1}}\tilde \tau^c_i} = 1$, the essential Frobenius of $A$ at $\sigma^{-a_{j+1}}\tilde \tau^c_i$ is defined to be $F_A$ while that of $B$ at $\sigma^{-a_{j+1}}\tilde \tau^c_i$ is defined to be $V_{B}^{-1}$.
Therefore, $F_{A, \mathrm{es}, \sigma^{-\ell}\tilde \tau_i^c}^n$ is the same as $pF_{B, \mathrm{es}, \sigma^{-\ell}\tilde \tau_i^c}^n$. The equality above is now clear. We now check the vanishing of the partial Hasse invariants $h_{\sigma^{-a_j}\tilde\tau_i}(A)$ with $1\leq j\leq r_i-1$. By Lemma~\ref{Lemma:partial-Hasse}, it suffices to show that, for any $j = 1, \dots, r_i-1$ and setting $a_0 = -1$, the image of \[ F_{A, \mathrm{es}}^{a_{j+1}- a_{j-1}}: \tcD^\circ_{A, \sigma^{-a_{j+1}} \tilde \tau_i} \rightarrow \tcD^\circ_{A, \sigma^{-a_{j-1}}\tilde \tau_i} \] is contained in $p\tcD^\circ_{A, \sigma^{-a_{j-1}}\tilde \tau_i}$. First, regardless of the parity of $j$, we find easily that $ F_{A, \mathrm{es}}^{a_{j+1}- a_{j-1}}=pF_{B, \mathrm{es}}^{a_{j+1}- a_{j-1}}$ as maps from $\tcD^\circ_{A, \sigma^{-a_{j+1}} \tilde \tau_i}$ to $\tcD^\circ_{A, \sigma^{-a_{j-1}} \tilde \tau_i}$ by carefully checking the dependence of the essential Frobenii on the signatures. Now if $j$ is odd, then $$ \tcD^{\circ}_{A,\sigma^{-\ell}\tilde\tau_i}=\tcD^{\circ}_{B,\sigma^{-\ell}\tilde\tau_i}\quad \text{for }\ell=a_{j+1} \textrm{ and }a_{j-1}; $$ hence one gets $F_{A, \mathrm{es}}^{a_{j+1}- a_{j-1}}(\tcD^{\circ}_{A,\sigma^{-a_{j+1}}\tilde\tau_i})=p\tcD^\circ_{A, \sigma^{-a_{j-1}} \tilde \tau_i}$, since $F_{B, \mathrm{es}}^{a_{j+1}- a_{j-1}}(\tcD^{\circ}_{B,\sigma^{-a_{j+1}}\tilde\tau_i})=\tcD^\circ_{B, \sigma^{-a_{j-1}} \tilde \tau_i}$. If $j$ is even, then \[ \tcD^{\circ}_{A,\sigma^{-\ell}\tilde\tau_i}=p^{-1} F_{B,\mathrm{es}}^{m_i+1-\ell}(\tilde J^\circ_{\sigma^{-m_i-1}\tilde \tau_i}),\quad \text{for } \ell=a_{j+1}\textrm{ and } a_{j-1}; \] now it is also obvious that $F_{A, \mathrm{es}}^{a_{j+1}- a_{j-1}}(\tcD^{\circ}_{A,\sigma^{-a_{j+1}}\tilde\tau_i})=p\tcD^\circ_{A, \sigma^{-a_{j-1}} \tilde \tau_i}$.
If $r_i=\#(\mathtt{T}_{/\mathfrak{p}}\cap C_i)$ is even, all conditions can be proved in exactly the same way, except replacing $ F_{B, \mathrm{es}}^{m_i+1-\ell}(\tilde J^\circ_{\sigma^{-m_i-1}\tilde \tau_i})$ by $ F_{B,\mathrm{es}}^{m_i-\ell} (F_B(\tcD^{\circ}_{B,\sigma^{-m_i-1}\tilde\tau_i}))$, and the proof of the vanishing of the Hasse invariant $h_{\sigma^{-a_{r_i}}\tilde \tau_i}(A)$ needs a small modification. In fact, we have \[ \tcD^{\circ}_{A,\sigma^{-\ell}\tilde\tau_i}=\begin{cases} \tcD^{\circ}_{B,\sigma^{-\ell}\tilde\tau_i} & \text{for } a_{r_i}\leq \ell\leq m_i+1,\\ p^{-1} F_{B,\mathrm{es}}^{m_i-\ell} (F_B(\tcD^{\circ}_{B,\sigma^{-m_i-1}\tilde\tau_i})) &\text{for } a_{r_{i}-1}\leq \ell < a_{r_i}. \end{cases} \] Note that the number $n_{\sigma^{-a_{r_{i}}}\tau_i}$ defined in Subsection~\ref{S:partial-Hasse} is equal to $m_i+1-a_{r_i}$, and the essential Frobenius $F_{A,\mathrm{es}}: \tcD^{\circ}_{A,\sigma^{-a_{r_i}}\tilde \tau_i}\rightarrow \tcD^{\circ}_{A,\sigma^{-a_{r_i}+1}\tilde \tau_i}$ is simply $F_A$. We have \[ F_{A} F_{A, \mathrm{es}}^{m_i+1-a_{r_i}}(\tilde \mathcal{D}^\circ_{A, \sigma^{-m_i-1}\tilde \tau_i} ) = F_{A, \mathrm{es}}^{m_i+2-a_{r_i}} (\tilde \mathcal{D}^\circ_{A, \sigma^{-m_i-1}\tilde \tau_i} ) = p\tilde \mathcal{D}^\circ_{A, \sigma^{-a_{r_i}+1}\tilde\tau_i}. \] This verifies the vanishing of $h_{\sigma^{-a_{r_i}}\tilde \tau_i}(A)$. (2) Assume that $\mathfrak{p}$ is a prime of Case $\alpha2$. We write $H_\mathfrak{p} = H_{\mathfrak{q}}\oplus H_{\bar\mathfrak{q}}$ and $\tcD^\circ_{H_{\mathfrak{q}},\tilde\tau}\subseteq \tcD^\circ_{B,\tilde\tau}$ as before.
We have \[ \tcD^\circ_{A,\tilde\tau}= \begin{cases}p^{-1}\tcD^\circ_{H_{\mathfrak{p}},\tilde\tau} &\textrm{if }\tilde\tau \in \tilde \Delta(\mathtt{T})_{/\mathfrak{p}}^-, \\ p(\tcD^\circ_{H_{\mathfrak{p}},\tilde\tau^c})^\perp &\textrm{if }\tilde\tau \in \tilde \Delta(\mathtt{T})_{/\mathfrak{p}}^+, \\ \tcD^\circ_{B,\tilde\tau} &\text{otherwise}, \end{cases} \quad \textrm{and} \quad \tcD^\circ_{C,\tilde\tau}= \begin{cases}p^{-1}\tcD^\circ_{H_{\mathfrak{p}},\tilde\tau} &\textrm{if }\tilde\tau \in \tilde \Delta(\mathtt{T})_{/\mathfrak{p}}^-, \\ \tcD^\circ_{B,\tilde\tau} &\text{otherwise}. \end{cases} \] The same arguments as in (1) allow us to check conditions (i), (iv) and (v). (3) Assume now that $\mathfrak{p}$ is a prime of Case $\beta2$ in Subsection~\ref{S:quaternion-data-T}. For each $\tilde\tau\in \Sigma_{E,\infty/\mathfrak{p}}$, let $\lambda_{B,*,\tilde\tau}: \tcD^\circ_{B,\tilde\tau}\rightarrow \tcD^\circ_{B^\vee,\tilde\tau}$ be the map induced by the polarization $\lambda_B$. By condition (b3) of Theorem~\ref{T:unitary-shimura-variety-representability}, its cokernel has dimension $1$ over $k$. When viewing $\tcD^\circ_{B^\vee,\tilde\tau}$ as a lattice in $\tcD^\circ_{B,\tilde\tau}[1/p]$ via $\lambda^{-1}_{B,*,\tilde\tau}$, we have \[ \tcD^\circ_{A,\tilde\tau} = \tcD^\circ_{C,\tilde\tau}= \begin{cases} \tcD^\circ_{B^\vee,\tilde\tau} &\text{if }\tilde\tau\in \tilde \Delta(\mathtt{T})_{/\mathfrak{p}}^-,\\ \tcD^\circ_{B,\tilde\tau} &\text{otherwise}. \end{cases} \] The same argument as in (1) allows us to check the conditions (i), (iv) and (v). This then concludes the proof of Step I. \vspace{10pt} \textbf{Step II:} Let $y=(A, \iota_A,\lambda_A, \alpha_{K'}, B, \lambda_B, \iota_B, \beta_{K'_\mathtt{T}}, C, \iota_C; \phi_A, \phi_B)\in Y'_{\mathtt{T}}$ be a closed point with values in $k=\overline{\mathbb{F}}_p$, and $z=\eta(y)=(B, \iota_B, \lambda_B, \beta_{K'_\mathtt{T}}, J^\circ)\in Z'_{\mathtt{T}}$.
We prove that $\eta: Y_{\mathtt{T}}'\rightarrow Z'_{\mathtt{T}}$ induces a bijection of tangent spaces $\eta_{y}: T_{Y'_{\mathtt{T}}, y}\xrightarrow{\cong} T_{Z'_{\mathtt{T}},z}$. We follow the same strategy as in Lemma~\ref{L:Y_T-X_T-tangent}. Set $\mathbb{I}=\Spec(k[\epsilon]/\epsilon^2)$. The tangent space $T_{Z'_{\mathtt{T}},z}$ is identified with the set of deformations $z_{\mathbb{I}}=(B_{\mathbb{I}}, \iota_{B,\mathbb{I}}, \lambda_{B,\mathbb{I}}, \beta_{K'_\mathtt{T},\mathbb{I}}, J^\circ_{\mathbb{I}})\in Z'_{\mathtt{T}}(\mathbb{I})$ of $z$, where $J^\circ_{\mathbb{I}}$ is the collection of sub-bundles $J^\circ_{\mathbb{I},\tilde\tau}\subseteq H^\mathrm{dR}_{1}(B_{\mathbb{I}}/\mathbb{I})^\circ_{\tilde\tau}=H^{\mathrm{cris}}_{1}(B/k)_{\mathbb{I},\tilde\tau}^{\circ}$ for each $\tilde\tau\in \tilde I_\mathtt{T}$. We have to show that every point $z_\mathbb{I}$ lifts uniquely to a deformation $y_{\mathbb{I}}=(A_{\mathbb{I}}, \iota_{A_\mathbb{I}}, \lambda_{A_{\mathbb{I}}}, \alpha_{K',\mathbb{I}}, B_{\mathbb{I}}, \lambda_{B_{\mathbb{I}}}, \iota_{B_{\mathbb{I}}}, \beta_{K'_\mathtt{T},\mathbb{I}}, C_{\mathbb{I}}, \iota_{C_{\mathbb{I}}}; \phi_{A_{\mathbb{I}}}, \phi_{B_{\mathbb{I}}})\in Y'_{\mathtt{T}}(\mathbb{I})$ with $\eta(y_{\mathbb{I}})=z_{\mathbb{I}}$. We start with $C_{\mathbb{I}}$ and $\phi_{B_{\mathbb{I}}}$. For $\tilde\tau\in \Sigma_{E,\infty}$, denote by $$ \phi_{B,*,\tilde\tau}^{\mathrm{cris}}: H^\mathrm{cris}_1(B/k)^{\circ}_{\mathbb{I},\tilde\tau}\rightarrow H^\mathrm{cris}_1(C/k)^\circ_{\mathbb{I},\tilde\tau} $$ the natural morphism induced by $\phi_{B}$, and by $\phi^{\mathrm{dR}}_{B,*,\tilde\tau}$ the analogous map between the de Rham homologies $H^{\mathrm{dR}}_1$. The crystalline nature of $H^{\mathrm{cris}}_1$ implies that $\phi^{\mathrm{cris}}_{B,*,\tilde\tau}=\phi^{\mathrm{dR}}_{B,*,\tilde\tau}\otimes_k k[\epsilon]/\epsilon^2$.
To construct $C_{\mathbb{I}}$ and $\phi_{B_{\mathbb{I}}}$, it suffices to specify, for each $\tilde\tau\in \Sigma_{E,\infty}$, a sub-bundle $\omega^\circ_{C^\vee,\mathbb{I},\tilde\tau}\subseteq H^{\mathrm{cris}}_1(C/k)^\circ_{\mathbb{I},\tilde\tau}$ which lifts $\omega^\circ_{C^\vee,\tilde\tau}$ and satisfies \begin{equation}\label{E:omega-inclusion-C} \phi^{\mathrm{cris}}_{B,*,\tilde\tau}(\omega^{\circ}_{B^\vee_{\mathbb{I}},\tilde\tau})\subseteq \omega^\circ_{C^\vee,\mathbb{I},\tilde\tau}. \end{equation} We distinguish a few cases: \begin{enumerate} \item If neither $\tilde\tau$ nor $\sigma\tilde\tau$ belongs to $\tilde\Delta(\mathtt{T})^-$, both $\phi^{\mathrm{cris}}_{B,*,\tilde\tau}$ and $\phi^{\mathrm{cris}}_{B,*,\sigma\tilde\tau}$ are isomorphisms. It follows that $\phi_{B,*,\tilde\tau}^{\mathrm{dR}}(\omega^\circ_{B^\vee,\tilde\tau})=\omega^\circ_{C^\vee, \tilde\tau}$; hence we have to take $\omega^{\circ}_{C^\vee,\mathbb{I}, \tilde\tau}=\phi^{\mathrm{cris}}_{B,*, \tilde\tau}(\omega^\circ_{B^\vee_{\mathbb{I}},\tilde\tau})$. \item If both $\tilde \tau, \sigma \tilde \tau \in \tilde \Delta(\mathtt{T})^-$, then $\tau=\tilde\tau|_{F}\in \mathtt{S}_{\infty/\mathfrak{p}}$ by Lemma~\ref{L:property of Delta}. A simple dimension count similar to Lemma~\ref{L:dimension count} implies that $ \dim_{k}(\omega^\circ_{C^\vee, \tilde\tau})=\dim_{k}(\omega^\circ_{B^\vee,\tilde\tau})\in \{0,2\}. $ We take $\omega^{\circ}_{C^\vee,\mathbb{I}, \tilde\tau}$ to be $0$ or $H^{\mathrm{cris}}_1(C/k)^\circ_{\mathbb{I}, \tilde\tau}$ correspondingly, and \eqref{E:omega-inclusion-C} is trivial. \item If $\tilde \tau \in \tilde I_\mathtt{T}$, the defining property of the morphism $\eta$ forces $\omega^{\circ}_{C^\vee, \mathbb{I},\tilde\tau}=\phi^{\mathrm{cris}}_{B,*,\tilde\tau}(J^\circ_{B_{\mathbb{I}},\tilde\tau})$. \item For all other $\tilde \tau$, $\tilde \tau|_F$ must belong to $\mathtt{T}$. Let $n$ be the number associated to $\tilde\tau$ as in Lemma~\ref{L:distance to T'}.
By the vanishing of the partial Hasse invariant on $A$ at $\tilde \tau$, we have $\omega^\circ_{C^\vee, \tilde \tau} = F_{C,\mathrm{es}}^n(H^{\mathrm{dR}}_1(C/k)^\circ_{\sigma^{-n}\tilde \tau}).$ We take \[ \omega^\circ_{C^\vee, \mathbb{I}, \tilde \tau} = F_{C,\mathrm{es}}^n(H_1^\mathrm{cris}(C^{(p^n)}/k)^\circ_{\mathbb{I}, \tilde \tau}). \] This is not a forced choice now; but it will become one when we have constructed the lift $A_\mathbb{I}$ and require $A_\mathbb{I}$ to have vanishing partial Hasse invariant. Since $F_{B,\mathrm{es}}^n: H_1^\mathrm{cris}(B^{(p^n)}/k)^\circ_{\mathbb{I}, \tilde \tau} \to H_1^\mathrm{cris}(B/k)^\circ_{\mathbb{I}, \tilde \tau}$ is an isomorphism, we conclude that \eqref{E:omega-inclusion-C} holds for $\tilde \tau$. \end{enumerate} We now construct $A_\mathbb{I}$ and the isogeny $\phi_{A_{\mathbb{I}}}:A_{\mathbb{I}}\rightarrow C_{\mathbb{I}}$. As usual, we have to specify, for each $\tilde\tau\in \Sigma_{E,\infty}$, a sub-bundle $\omega^\circ_{A^\vee,\mathbb{I},\tilde\tau}\subseteq H^{\mathrm{cris}}_1(A/k)^\circ_{\mathbb{I},\tilde\tau}$ that lifts $\omega^{\circ}_{A^\vee,\tilde\tau}$ and satisfies $\phi_{A,*,\tilde\tau}^\mathrm{cris}(\omega^\circ_{A^\vee,\mathbb{I},\tilde\tau})\subseteq \omega^\circ_{C^\vee,\mathbb{I},\tilde\tau}$. Let $\mathfrak{p}\in \Sigma_p$ be the prime such that $\tilde\tau\in \Sigma_{E,\infty/\mathfrak{p}}$. \begin{itemize} \item If neither $\tilde \tau$ nor $\sigma \tilde \tau$ belong to $\tilde \Delta(\mathtt{T})^+$, then $\phi_{A, *, \tilde \tau}^\mathrm{dR}$ and hence $\phi^\mathrm{cris}_{A, *, \tilde \tau}$ is an isomorphism. We are forced to take $\omega^{\circ}_{A^\vee,\mathbb{I}, \tilde\tau}=(\phi^{\mathrm{cris}}_{A,*,\tilde\tau})^{-1}(\omega^\circ_{C^\vee, \mathbb{I}, \tilde\tau})$. 
In particular, if $\tilde\tau\in \tilde I_{\mathtt{T}}$, we have $\omega_{A^\vee,\mathbb{I}, \tilde\tau}=(\phi^{\mathrm{cris}}_{A,*,\tilde\tau})^{-1}\phi^{\mathrm{cris}}_{B,*,\tilde\tau}(J^{\circ}_{B_{\mathbb{I}},\tilde\tau})$. \item In all other cases, we must have $\tilde \tau \in \Sigma_{E, \infty/\mathfrak{p}}$ for $\mathfrak{p}$ not of type $\beta2$ or $\beta^\sharp$. Then we have to take $\omega^\circ_{A^\vee,\mathbb{I},\tilde\tau}$ to be the orthogonal complement of $\omega^\circ_{A^\vee,\mathbb{I}, \tilde\tau^c}$ (which is already defined in the previous case) under the perfect pairing \[ \langle\ ,\ \rangle_{\lambda_A}\colon H^{\mathrm{cris}}_1(A/k)^\circ_{\mathbb{I}, \tilde\tau}\times H^{\mathrm{cris}}_{1}(A/k)^\circ_{\mathbb{I},\tilde\tau^c}\rightarrow k[\epsilon]/\epsilon^2 \] induced by the polarization $\lambda_A$. It is clear that $\omega^\circ_{A^\vee, \mathbb{I}, \tilde\tau}$ is a lift of $\omega^\circ_{A^\vee, \tilde\tau}$. It remains to show that $\phi^\mathrm{cris}_{A, *, \tilde \tau}(\omega^\circ_{A^\vee, \mathbb{I}, \tilde \tau}) \subseteq \omega^\circ_{C^\vee, \mathbb{I}, \tilde \tau}$. 
We consider the following commutative diagram \begin{equation} \label{E:dualization ACB} \xymatrix@C=5pt{ H_1^\mathrm{cris}(A/k)^\circ_{\mathbb{I},\tilde{\tau}} \ar[d]^{\phi^{\mathrm{cris}}_{A,*,\tilde\tau}} & \times & H_1^\mathrm{cris}(A/k)^\circ_{\mathbb{I},\tilde{\tau}^c}\ar[d]^{\phi^{\mathrm{cris}}_{A,*,\tilde\tau^c}}_\cong \ar[rrrr]^-{\langle\ , \ \rangle_{\lambda_A}} &&&& \mathbb{I} \\ H_1^\mathrm{cris}(C/k)^\circ_{\mathbb{I},\tilde{\tau}} & \times & H_1^\mathrm{cris}(C/k)^\circ_{\mathbb{I},\tilde{\tau}^c} \\ H_1^\mathrm{cris}(B/k)^\circ_{\mathbb{I},\tilde{\tau}} \ar[u]_{\phi^{\mathrm{cris}}_{B,*,\tilde\tau}}^\cong & \times & \ar[u]_{\phi^{\mathrm{cris}}_{B,*,\tilde\tau^c}} H_1^\mathrm{cris}(B/k)^\circ_{\mathbb{I},\tilde{\tau}^c}\ar[rrrr]^-{\langle\ , \ \rangle_{\lambda_B}} &&&& \mathbb{I},\ar@{=}[uu] } \end{equation} where both duality pairings are perfect. By our choice of $\tilde \tau$, we have $\tilde \tau, \sigma \tilde \tau \notin \tilde \Delta(\mathtt{T})^-$ and $\tilde \tau^c, \sigma \tilde \tau^c \notin \tilde \Delta(\mathtt{T})^+$; so both $\phi^\mathrm{cris}_{A, *, \tilde \tau^c}$ and $\phi^\mathrm{cris}_{B, *, \tilde \tau}$ in \eqref{E:dualization ACB} are isomorphisms and they induce isomorphisms on the reduced differentials. Using the diagram \eqref{E:dualization ACB} of perfect duality, it suffices to prove that $\phi^\mathrm{cris}_{A, *, \tilde \tau}(\omega^\circ_{A^\vee, \mathbb{I}, \tilde \tau}) \subseteq \omega^\circ_{C^\vee, \mathbb{I}, \tilde \tau}$ is equivalent to $\phi^\mathrm{cris}_{B, *, \tilde \tau^c}(\omega^\circ_{B,\mathbb{I}, \tilde \tau^c}) \subseteq \omega^\circ_{C,\mathbb{I}, \tilde \tau^c}$, which was already checked. \end{itemize} By construction, the $\tilde\tau$-partial Hasse invariant of $A_\mathbb{I}$ vanishes if $\tilde\tau\in \tilde\Delta(\mathtt{T})^-$ and $\tilde\tau|_F\in \mathtt{T}$; the duality guarantees the vanishing of Hasse invariants at their conjugate places. 
This condition conversely forces the uniqueness of our choice of $C_\mathbb{I}$ and $A_\mathbb{I}$. From the construction, $\omega^\circ_{A^\vee,\mathbb{I}}=\bigoplus_{\tilde\tau\in \Sigma_{E,\infty}}\omega^\circ_{A^\vee,\mathbb{I},\tilde\tau}$ is isotropic under the pairing on $H^\mathrm{cris}_{1}(A/k)^\circ_{\mathbb{I}}$ induced by $\lambda_A$. This concludes checking condition (1) of Subsection~\ref{S:moduli-Y_S}. The lift of the level structure $\alpha_{K', \mathbb{I}}$ is automatic for the tame part, and can be done in a unique way as in the proof of Theorem~\ref{T:unitary-shimura-variety-representability}. It then remains to check that $\phi_{A_\mathbb{I}}$ and $\phi_{B_\mathbb{I}}$ satisfy conditions (iv) and (v) of Subsection~\ref{S:moduli-Y_S}. For condition (v), it is obvious except when $\tilde \tau \in \tilde \Delta(\mathtt{T})^-$, in which case, \[ \mathrm{Im}(\phi^\mathrm{cris}_{B, *, \tilde \tau}) = \mathrm{Im}(\phi^\mathrm{dR}_{B, *, \tilde \tau}) \otimes_k \mathbb{I}, \quad \textrm{and} \quad \phi_{A, *, \tilde \tau}^\mathrm{cris}(\mathrm{Im}(F_{\mathrm{es},A_\mathbb{I}, \tilde \tau}^n)) = \phi_{A, *, \tilde \tau}^\mathrm{dR}(\mathrm{Im}(F_{\mathrm{es},A, \tilde \tau}^n)) \otimes_k \mathbb{I}, \] where $n\geq 1$ is the number determined in Lemma~\ref{L:distance to T'}. So condition (v) for the lift follows from that for $\phi_A: A \to C$. (Note that $n\geq 1$ implies that the image of the essential Frobenius is determined by the reduction.) Exactly the same argument proves condition (iv). This concludes Step II of the proof of Proposition~\ref{P:Y_T=Z_T}. \end{proof} \subsection{End of proof of Theorem~\ref{T:main-thm-unitary}} \label{S:End-of-proof} Statement (1) of Theorem~\ref{T:main-thm-unitary} follows from Propositions~\ref{P:Y_S=X_S} and~\ref{P:Y_T=Z_T}. Statements (2) and (3) are clear from the proof of (1). It remains to prove statement (4), namely the compatibility of partial Frobenius.
We use $X'_{\tilde \mathtt{S},\mathtt{T}}$, $Y_{\tilde \mathtt{S},\mathtt{T}}'$ and $Z'_{\tilde \mathtt{S},\mathtt{T}}$ to denote the original $X'_{\mathtt{T}}$, $Y'_{\mathtt{T}}$ and $Z'_{\mathtt{T}}$ in Subsection~\ref{S:moduli-Y_S} to indicate their dependence on $\tilde \mathtt{S}$. We will define a twisted partial Frobenius \[ \mathfrak{F}'_{\mathfrak{p}^2,\tilde \mathtt{S}}:Y'_{\tilde \mathtt{S},\mathtt{T}}\rightarrow Y'_{\sigma^2_{\mathfrak{p}}\tilde \mathtt{S},\sigma^2_{\mathfrak{p}}\mathtt{T}} \] compatible via $\eta_1$ with the map $\mathfrak{F}'_{\mathfrak{p}^2,\tilde \mathtt{S}}$ on $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0}$ defined in Subsection~\ref{S:partial Frobenius}. For an $S$-valued point $x=(A, \iota_A,\lambda_A, \alpha_{K'}, B, \iota_B, \lambda_B, \beta_p, C, \iota_C; \phi_A, \phi_B)$ of $Y'_{\tilde \mathtt{S},\mathtt{T}}$, its image \[ \mathfrak{F}'_{\mathfrak{p}^2,\tilde \mathtt{S}}(x)=(A', \iota_{A'},\lambda_{A'}, \alpha'_{K'}, B', \iota_{B'}, \lambda_{B'}, \beta'_p, C', \iota_{C'}; \phi_{A'}, \phi_{B'}) \] is given as follows. Here, for $G=A,B,C$, we put $G'=(G/\Ker_{G,\mathfrak{p}^2})\otimes_{\mathcal{O}_F}\mathfrak{p}$, where $\Ker_{G,\mathfrak{p}^2}$ is the $\mathfrak{p}$-component of the kernel of the $p^2$-Frobenius of $G$. The induced structures $(\iota_{A'},\lambda_{A'}, \alpha'_{K'},\iota_{B'},\lambda_{B'},\beta'_{p})$ are defined in the same way as in Subsection~\ref{S:partial Frobenius}. The isogenies $\phi_{A'}: A'\rightarrow C'$ and $\phi_{B'}:B'\rightarrow C'$ are constructed from $\phi_{A}$ and $\phi_B$ by the functoriality of $p^2$-Frobenius. We have to prove that the induced maps on de Rham homologies $\phi_{A',*,\tilde\tau}$ and $\phi_{B',*,\tilde\tau}$ satisfy the required conditions (iv) and (v) of Subsection~\ref{S:moduli-Y_S}. If $\tilde\tau\in \Sigma_{E,\infty/\mathfrak{p}'}$ with $\mathfrak{p}'\neq \mathfrak{p}$, this is clear, because the $p$-divisible groups $G'[\mathfrak{p}'^{\infty}]$ are canonically identified with $G[\mathfrak{p}'^{\infty}]$ for $G=A,B,C$.
Now consider the case $\mathfrak{p}'=\mathfrak{p}$. As in the proof of Lemma~\ref{L:partial Frobenius vs partial Hasse inv}, for $G=A,B,C$, the $p$-divisible group $G'[\mathfrak{p}^{\infty}]$ is isomorphic to the base change of $G[\mathfrak{p}^{\infty}]$ via the $p^2$-Frobenius on $S$. One thus deduces isomorphisms of de Rham homologies \begin{equation}\label{E:isom-dR-ABC} H^{\mathrm{dR}}_1(G'/S)^{\circ}_{\tilde\tau}=(H^{\mathrm{dR}}_1(G/S)^{\circ}_{\sigma^{-2}\tilde\tau})^{(p^2)}, \end{equation} which are compatible with $F$ and $V$ as $\tilde\tau\in \Sigma_{E,\infty/\mathfrak{p}}$ varies, and compatible with $\phi_{A',*,\tilde\tau}$ and $\phi_{B',*,\tilde\tau}$ by functoriality. Hence, the required properties on $\phi_{A',*,\tilde\tau}$ and $\phi_{B',*,\tilde\tau}$ follow from those on $\phi_{A,*,\sigma^{-2}\tilde\tau}$ and $ \phi_{B,*,\sigma^{-2}\tilde\tau}$. This finishes the construction of $\mathfrak{F}'_{\mathfrak{p}^2,\tilde\mathtt{S}}$ on $Y'_{\tilde \mathtt{S},\mathtt{T}}$. Via the isomorphism $\eta_2:Y'_{\tilde \mathtt{S},\mathtt{T}}\xrightarrow{\sim}Z'_{\tilde \mathtt{S},\mathtt{T}}$ proved in Proposition~\ref{P:Y_T=Z_T}, $\mathfrak{F}'_{\mathfrak{p}^2,\tilde\mathtt{S}}$ induces a map $\mathfrak{F}'_{\mathfrak{p}^2,\tilde\mathtt{S}}:Z'_{\tilde \mathtt{S},\mathtt{T}}\rightarrow Z'_{\sigma^2_{\mathfrak{p}}\tilde \mathtt{S},\sigma^2_{\mathfrak{p}}\mathtt{T}}$. Let $z=(B,\iota_{B},\lambda_{B},\beta_{K'_{\mathtt{T}}}, J^{\circ})$ be an $S$-valued point of $Z'_{\tilde \mathtt{S},\mathtt{T}}$ as described in Subsection~\ref{S:Y_T=Z_T}.
Then its image $\mathfrak{F}'_{\mathfrak{p}^2,\tilde \mathtt{S}}(z)$ is given by $(B',\iota_{B'},\lambda_{B'},\beta'_{K'_{\mathtt{T}}}, J^{\circ,\prime})\in Z'_{\sigma^2_{\mathfrak{p}}\tilde \mathtt{S},\sigma^{2}_{\mathfrak{p}}\mathtt{T}}$, where $(B',\iota_{B'},\lambda_{B'}, \beta'_{K'_{\mathtt{T}}})$ are defined as in Subsection~\ref{S:partial Frobenius}, and $J^{\circ,\prime}$ is the collection of line bundles $J^{\circ,\prime}_{\tilde\tau}\subseteq H^{\mathrm{dR}}_1(B'/S)^{\circ}_{\tilde\tau}$ for each $\tilde\tau\in \bigcup_{\mathfrak{p}'\in\Sigma_{p}}\sigma^2_{\mathfrak{p}}(\tilde\mathtt{T}_{\infty/\mathfrak{p}'}-\tilde\mathtt{T}_{/\mathfrak{p}'})$ given as follows. For $\tilde\tau\in \tilde\mathtt{T}_{\infty/\mathfrak{p}'}-\tilde\mathtt{T}_{/\mathfrak{p}'}$ with $\mathfrak{p}'\neq \mathfrak{p}$, we have $J^{\circ,\prime}_{\tilde\tau}=J^{\circ}_{\tilde\tau}$ since $H^{\mathrm{dR}}_1(B'/S)^{\circ}_{\tilde\tau}$ is canonically identified with $H^{\mathrm{dR}}_1(B/S)^{\circ}_{\tilde\tau}$; for $\tilde\tau\in \sigma^2_{\mathfrak{p}}(\tilde\mathtt{T}_{\infty/\mathfrak{p}}-\tilde\mathtt{T}_{/\mathfrak{p}})$, we have $J^{\circ,\prime}_{\tilde\tau}=(J^{\circ}_{\sigma^{-2}\tilde\tau})^{(p^2)}$, which makes sense thanks to the isomorphism \eqref{E:isom-dR-ABC} for $G=B$. After identifying $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0,\mathtt{T}}=X'_{\tilde \mathtt{S},\mathtt{T}}$ with $Z'_{\tilde \mathtt{S},\mathtt{T}}$, the projection $\pi_{\mathtt{T}}:Z'_{\tilde \mathtt{S},\mathtt{T}}\to \mathbf{Sh}_{K'_{\mathtt{T}}}(G'_{\tilde \mathtt{S}(\mathtt{T})})_{k_0}$ is given by $(B,\iota_{B},\lambda_{B},\beta_{K'_{\mathtt{T}}}, J^{\circ})\mapsto (B,\iota_{B},\lambda_{B},\beta_{K'_{\mathtt{T}}})$.
It is clear that we have a commutative diagram: \[ \xymatrix{ Z'_{\tilde \mathtt{S},\mathtt{T}}\ar[r]_-{\xi^\mathrm{rel}} \ar[rd]_{\pi_{\mathtt{T}}} \ar@/^15pt/[rr]^-{\mathfrak{F}'_{\mathfrak{p}^2, \tilde \mathtt{S}}} & \mathfrak{F}'^*_{\mathfrak{p}^2}(Z'_{\sigma^2_{\mathfrak{p}}\tilde \mathtt{S},\sigma^2_{\mathfrak{p}}\mathtt{T}}) \ar[d] \ar[r]_-{\mathfrak{F}'^*_{\mathfrak{p}^2, \tilde \mathtt{S}(\mathtt{T})}} & Z'_{\sigma^{2}_{\mathfrak{p}}\tilde \mathtt{S},\sigma^2_{\mathfrak{p}}\mathtt{T}} \ar[d]^{\pi_{\sigma_{\mathfrak{p}}^2\mathtt{T}}} \\ & \mathbf{Sh}_{K'_\mathtt{T}}(G'_{\tilde \mathtt{S}(\mathtt{T})})_{k_0} \ar[r]^-{\mathfrak{F}'_{\mathfrak{p}^2, \tilde \mathtt{S}(\mathtt{T})}} & \mathbf{Sh}_{K'_\mathtt{T}}(G'_{\sigma_\mathfrak{p}^2(\tilde \mathtt{S}(\mathtt{T}))})_{k_0}, } \] where $\xi^{\mathrm{rel}}$ is given by $(B,\iota_{B},\lambda_{B},\beta_{K'_{\mathtt{T}}}, J^{\circ})\mapsto (B,\iota_{B},\lambda_{B},\beta_{K'_{\mathtt{T}}}, J^{\circ,\prime})$ with $J^{\circ,\prime}$ defined above. This proves statement (4) immediately. \section{Ampleness of modular line bundles} \label{Section:GO divisors} In this section, we suppose that $F\neq \mathbb{Q}$. We will apply Theorem~\ref{T:main-thm-unitary} to prove some necessary conditions for the ampleness of certain modular line bundles on quaternionic/unitary Shimura varieties. In this section, let $X' = \mathbf{Sh}_{K'}(G_{\tilde \mathtt{S}}')_{k_0}$ be a unitary Shimura variety over $k_0$ considered in Subsection~\ref{S:GO-notation}. This is a smooth and quasi-projective variety over $k_0$, and projective if $\mathtt{S}_{\infty}\neq\emptyset$. Let $(\mathbf{A}',\iota,\lambda, \alpha_{K'})$ be the universal abelian scheme over $X'$. For each $\tilde\tau\in \Sigma_{E,\infty}$, the $\mathcal{O}_{X'}$-module $\omega^{\circ}_{\mathbf{A}'^\vee/X', \tilde \tau}$ is locally free of rank $2-s_{\tilde\tau}$; it is a line bundle if $\tilde \tau|_F$ belongs to $ \Sigma_{\infty}-\mathtt{S}_\infty$. 
\subsection{Rational Picard group} For a variety $Y$ over $k_0$, we write $\Pic(Y)_\mathbb{Q}$ for $\Pic(Y)\otimes_\mathbb{Z} \mathbb{Q}$. For a line bundle $\mathcal{L}$ on $Y$, we denote by $[\mathcal{L}]$ its class in $\Pic (Y)_{\mathbb{Q}}$. \begin{lemma}\label{L:omega-Pic} \emph{(1)} For any $\tilde\tau\in\Sigma_{E,\infty}$ lifting a place $\tau \in \Sigma_\infty-\mathtt{S}_\infty$, we have equalities \[ [\omega^\circ_{\mathbf{A}'^\vee/X',\tilde\tau}]=[\omega^\circ_{\mathbf{A}'^\vee/X',\tilde\tau^c}]=[\omega^\circ_{\mathbf{A}'/X',\tilde\tau}]=[\omega^\circ_{\mathbf{A}'/X',\tilde\tau^c}]. \] \emph{(2)} For any $\tilde\tau\in \Sigma_{E,\infty}$, we have $[\wedge^2_{\mathcal{O}_{X'}} H_{1}^\mathrm{dR}(\mathbf{A}'/X')_{\tilde\tau}]=0.$ \emph{(3)} Let $X'^*$ denote the minimal compactification of $X'$ (which is just $X'$ if $\mathtt{S}_{\infty} \neq \emptyset$). Then the natural morphism $j: \Pic(X'^*) \to \Pic(X')$ is injective. Moreover, for each $\tilde \tau \in \Sigma_{E, \infty}$ lifting a place $\tau \in \Sigma_\infty - \mathtt{S}_\infty$, $[\omega^\circ_{\mathbf{A}'^\vee/X', \tilde \tau}]$ belongs to the image of $j_\mathbb{Q}: \Pic(X'^*)_\mathbb{Q} \to \Pic(X')_\mathbb{Q}$. \end{lemma} \begin{proof} (1) Suppose that $\tau \in \Sigma_{\infty/\mathfrak{p}} - \mathtt{S}_{\infty/\mathfrak{p}}$ for $\mathfrak{p} \in \Sigma_p$. Clearly, $\mathfrak{p}$ is not of type $\alpha^\sharp$ or $\beta^\sharp$. The equality $[\omega^\circ_{\mathbf{A}'^\vee/X',\tilde\tau}]=[\omega^\circ_{\mathbf{A}'/X',\tilde\tau^c}]$ follows from the isomorphism $\omega^\circ_{\mathbf{A}'^\vee/X',\tilde\tau}\cong \omega^\circ_{\mathbf{A}'/X',\tilde\tau^c} $ thanks to the polarization $\lambda$ on $\mathbf{A}'$.
To prove the equality $[\omega^\circ_{\mathbf{A}'^\vee/X',\tilde\tau}]=[\omega^\circ_{\mathbf{A}'^\vee/X',\tilde\tau^c}]$, we consider the partial Hasse invariants $h_{\tilde\tau}\in \Gamma(X', (\omega^\circ_{\mathbf{A}'^\vee/X',\sigma^{-n_\tau}\tilde\tau})^{\otimes p^{n_{\tau}}}\otimes \omega^{\circ,\otimes (-1)}_{\mathbf{A}'^\vee/X',\tilde\tau})$, and $h_{\tilde\tau^c}$ defined similarly with $\tilde\tau$ replaced by $\tilde\tau^c$. By Lemma~\ref{Lemma:partial-Hasse} and Proposition~\ref{Prop:smoothness}, the vanishing loci of $h_{\tilde\tau}$ and $h_{\tilde\tau^c}$ define the same divisor $X'_{\tau}\subseteq X'$. Hence, for each $\tilde\tau\in \Sigma_{E,\infty}$ lifting some $\tau \in \Sigma_{\infty/\mathfrak{p}} -\mathtt{S}_{\infty/\mathfrak{p}}$, we have an equality \begin{equation}\label{E:equality-tau-tau-c} p^{n_{\tau}} [\omega^\circ_{\mathbf{A}'^\vee/X',\sigma^{-n_\tau}\tilde\tau}]- [\omega^{\circ}_{\mathbf{A}'^\vee/X',\tilde\tau}] =p^{n_{\tau}} [\omega^{\circ}_{\mathbf{A}'^\vee/X',\sigma^{-n_\tau}\tilde\tau^{c}}]-[\omega^{\circ}_{\mathbf{A}'^\vee/X',\tilde\tau^c}]. \end{equation} Let $C$ be the square matrix with coefficients in $\mathbb{Q}$, whose rows and columns are labeled by those places $\tilde \tau \in \Sigma_{E, \infty}$ lifting a place $\tau \in \Sigma_{\infty/\mathfrak{p}} - \mathtt{S}_{\infty/\mathfrak{p}}$, and whose $(\tilde \tau_1, \tilde \tau_2)$-entry is \[ c_{\tilde \tau_1, \tilde \tau_2}=\begin{cases} -1&\text{if } \tilde \tau_1=\tilde \tau_2,\\ p^{n_{\tau_2}}&\text{if }\tilde \tau_1=\sigma^{-n_{\tau_2}}\tilde \tau_2,\\ 0&\text{otherwise}. \end{cases} \] One checks easily that $C$ is invertible; hence it follows from \eqref{E:equality-tau-tau-c} that $[\omega^\circ_{\mathbf{A}'^\vee/X',\tilde\tau}]=[\omega^\circ_{\mathbf{A}'^\vee/X',\tilde\tau^c}]$. (2) Assume first that $\tilde\tau\in \Sigma_{E,\infty}$ lifts some $\tau \in \Sigma_{\infty/\mathfrak{p}}-\mathtt{S}_{\infty}$.
From the Hodge filtration $0\rightarrow \omega^\circ_{\mathbf{A}'^\vee,\tilde\tau}\rightarrow H^\mathrm{dR}_1(\mathbf{A}'/X')^\circ_{\tilde\tau}\rightarrow \Lie(\mathbf{A}'/X')^\circ_{\tilde\tau}\rightarrow 0$, one deduces that \[ [\wedge^2_{\mathcal{O}_{X'}}H^{\mathrm{dR}}_1(\mathbf{A}'/X')_{\tilde\tau}^\circ]=[\omega^\circ_{\mathbf{A}'^\vee/X',\tilde\tau}]+[\Lie(\mathbf{A}'/X')_{\tilde\tau}^\circ]. \] Then statement (2) follows from (1) and that $[\Lie(\mathbf{A}'/X')^\circ_{\tilde\tau}]=-[\omega^\circ_{\mathbf{A}'/X',\tilde\tau}]=-[\omega^\circ_{\mathbf{A}'^\vee/X',\tilde\tau^c}]$. Consider now the case when $\tilde\tau\in \Sigma_{E,\infty}$ lifts some $\tau \in \mathtt{S}_{\infty/\mathfrak{p}}$ for a place $\mathfrak{p}$ of type $\alpha$ or $\beta$. Then there is an integer $m\geq 1$ such that $\sigma^m\tau\in \Sigma_{\infty}-\mathtt{S}_{\infty}$ and $\sigma^{i}\tau\in \mathtt{S}_{\infty}$ for all $0\leq i\leq m-1$, and we have a sequence of isomorphisms \[ \xymatrix{ H^\mathrm{dR}_1(\mathbf{A}'/X')^{\circ, (p^m)}_{\tilde\tau}\ar[r]^-{F_{\mathbf{A}',\mathrm{es}}}_-{\cong} &H^\mathrm{dR}_1(\mathbf{A}'/X')^{\circ, (p^{m-1})}_{\sigma\tilde\tau}\ar[r]^-{F_{\mathbf{A}',\mathrm{es}}}_-{\cong} &\cdots\ar[r]^-{F_{\mathbf{A}',\mathrm{es}}}_-{\cong} &H^\mathrm{dR}_1(\mathbf{A}'/X')^\circ_{\sigma^m\tilde\tau}. 
} \] From this, one gets $$ p^m[\wedge^2_{\mathcal{O}_{X'}}H^\mathrm{dR}_1(\mathbf{A}'/X')^\circ_{\tilde\tau}]=[\wedge^2_{\mathcal{O}_{X'}}H^\mathrm{dR}_1(\mathbf{A}'/X')^\circ_{\sigma^m\tilde\tau}]=0.$$ Finally, if $\tilde\tau\in \Sigma_{E,\infty/\mathfrak{p}}$ for a place $\mathfrak{p}$ of type $\alpha^\sharp$ or $\beta^\sharp$ and if $m$ is the inertia degree of $\mathfrak{p}$ over $p$, then the sequence of isomorphisms \[ \xymatrix{ H^\mathrm{dR}_1(\mathbf{A}'/X')^{\circ, (p^{2m})}_{\tilde\tau}\ar[r]^-{F_{\mathbf{A}',\mathrm{es}}}_-{\cong} &H^\mathrm{dR}_1(\mathbf{A}'/X')^{\circ, (p^{2m-1})}_{\sigma\tilde\tau}\ar[r]^-{F_{\mathbf{A}',\mathrm{es}}}_-{\cong} &\cdots\ar[r]^-{F_{\mathbf{A}',\mathrm{es}}}_-{\cong} &H^\mathrm{dR}_1(\mathbf{A}'/X')^\circ_{\sigma^{2m}\tilde\tau} } \] gives rise to an equality \[ p^{2m}[\wedge^2_{\mathcal{O}_{X'}}H^\mathrm{dR}_1(\mathbf{A}'/X')^\circ_{\tilde\tau}]=[\wedge^2_{\mathcal{O}_{X'}}H^\mathrm{dR}_1(\mathbf{A}'/X')^\circ_{\sigma^{2m}\tilde\tau}]=[\wedge^2_{\mathcal{O}_{X'}}H^\mathrm{dR}_1(\mathbf{A}'/X')^\circ_{\tilde\tau}], \] as $\sigma^{2m}\tilde \tau = \tilde \tau$. This forces $[\wedge^2_{\mathcal{O}_{X'}}H^\mathrm{dR}_1(\mathbf{A}'/X')^\circ_{\tilde\tau}] = 0$. (3) If $X'$ is a Shimura curve, then $X'^*=X'$ as $F\neq \mathbb{Q}$. Assume now that $X'$ has dimension at least $2$. The injectivity of $j: \Pic(X'^*) \to \Pic(X')$ follows from the fact that $X'^*$ is normal \cite[Proposition~7.2.4.3]{lan}, and that the complement $X'^*-X'$ has codimension $\geq 2$. Recall from the proof of (1) that the divisor of the partial Hasse invariant $h_{\tilde\tau}$ represents the class described in \eqref{E:equality-tau-tau-c}. Note that the inverse matrix of $C$ has all entries positive. It follows that each $[\omega^\circ_{\mathbf{A}'^\vee/X', \tilde\tau_0}]$ for $\tilde \tau_0$ lifting $\tau_0 \in \Sigma_\infty- \mathtt{S}_\infty$ is a positive linear combination of the classes $[\mathcal{O}_{X'}(X'_\tau)]$.
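The invertibility of $C$ and the positivity of the entries of $C^{-1}$, used in the proof of part (3) above, can be made explicit. The following is our own sketch (not spelled out in the text): by the definition of $n_\tau$, the assignment $\tilde\tau\mapsto\sigma^{-n_\tau}\tilde\tau$ permutes the index set of $C$; assuming it has no fixed points, write $C = N - \mathrm{Id}$, where $N$ is supported on the graph of this permutation with entries $p^{n_\tau}$.

```latex
% On a cycle (\tilde\tau^{(1)},\dots,\tilde\tau^{(k)}) of the permutation, set
% P = \prod_{i=1}^{k} p^{n_{\tau^{(i)}}} > 1; then N^{k} = P\cdot\mathrm{Id} on
% the corresponding block, and
\[
 (N-\mathrm{Id})\bigl(\mathrm{Id}+N+\cdots+N^{k-1}\bigr)
   = N^{k}-\mathrm{Id} = (P-1)\,\mathrm{Id},
 \qquad\text{so}\qquad
 C^{-1} = \frac{1}{P-1}\bigl(\mathrm{Id}+N+\cdots+N^{k-1}\bigr)
\]
% on that block. The supports of \mathrm{Id}, N, \dots, N^{k-1} together cover
% all pairs of indices in the cycle, so every entry of this block is positive.
```

In particular $\det C \neq 0$, and expressing the classes $[\omega^\circ_{\mathbf{A}'^\vee/X',\tilde\tau_0}]$ through $C^{-1}$ yields the positive linear combinations invoked above.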
Let $X'^{\mathrm{n-ord}}=\bigcup_{\tau\in \Sigma_{\infty}-\mathtt{S}_{\infty}}X'_{\tau}\subseteq X'$ be the union of the Goren-Oort strata of codimension $1$. Since $X'^{\mathrm{n-ord}}$ is closed in $X'^*$ and is disjoint from the cusps, each line bundle $\mathcal{O}_{X'}(X'_\tau)$ extends to a line bundle $\mathcal{O}_{X'^*}(X'_\tau)$. By taking linear combinations, each class $[\omega^\circ_{\mathbf{A}'^\vee/X', \tilde\tau_0}]$ extends to a class in $\Pic(X'^*)_\mathbb{Q}$. \end{proof} \begin{notation} For any $\tau \in \Sigma_{\infty}-\mathtt{S}_{\infty}$, we put $[\omega_{\tau}]=[\omega^\circ_{\mathbf{A}'/X',\tilde\tau}]$ for simplicity, where $\tilde \tau$ is a lift of $\tau$. This is a well-defined element in $\Pic (X'^*)_\mathbb{Q}$ by Lemma~\ref{L:omega-Pic}. \end{notation} \begin{prop} \label{P:normal bundle} Let $\mathfrak{p}$ be a $p$-adic place such that $\#(\Sigma_{\infty/\mathfrak{p}} - \mathtt{S}_{\infty/\mathfrak{p}}) >1$. Assume that $\mathtt{T}$ consists of a single element $\tau \in \Sigma_{\infty/\mathfrak{p}}$ with $\mathfrak{p}$ not of type $\beta2$, and let $n_\tau = n_{\tau,\mathtt{S}}$ be as in Subsection~\ref{S:partial-Hasse}. Let $N_{X'_{\mathtt{T}}}(X')$ denote the normal bundle of the embedding $X'_{\mathtt{T}} \hookrightarrow X'$. Then the equality $[N_{X'_{\mathtt{T}}}(X')]=[\mathcal{O}(-2p^{n_\tau})]$ holds in $\Pic(X'_{\mathtt{T}})_{\mathbb{Q}}$, where $\mathcal{O}(1)$ is the canonical quotient bundle of the $\mathbb{P}^1$-bundle $\pi_\tau: X'_{\mathtt{T}}=\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \mathtt{T}} \to \mathbf{Sh}_{K'_{\mathtt{T}}}(G'_{\tilde \mathtt{S}(\mathtt{T})})_{k_0} $, and $\mathcal{O}(-2p^{n_\tau})$ is the dual of $\mathcal{O}(1)^{\otimes2 p^{n_\tau}}$. \end{prop} \begin{proof} By the construction in Subsection~\ref{S:Y_T=Z_T}, the set $\tilde I_\mathtt{T} = \{\sigma^{-n_\tau}\tilde \tau\}$ for a specific lift $\tilde \tau$ of $\tau$.
We have \[ J^\circ_{\sigma^{-n_\tau}\tilde \tau} = \phi_{B, *, \sigma^{-n_\tau}\tilde \tau}^{-1} \circ \phi_{A, *, \sigma^{-n_\tau}\tilde \tau}(\omega^\circ_{\mathbf{A}^\vee, \sigma^{-n_\tau}\tilde \tau}), \] in terms of the moduli description of $Y'_\mathtt{T}\cong X'_\mathtt{T}$. So the restriction of $[\omega_{\sigma^{-n_\tau} \tau}]$ to $X'_{\mathtt{T}}$ is $[\mathcal{O}(-1)]$. The Goren-Oort stratum $X'_{\mathtt{T}}$ is defined as the zero locus of \[ h_{\tilde \tau}: \omega^\circ_{\mathbf{A}'^\vee/X', \tilde \tau} \to (\omega^\circ_{\mathbf{A}'^\vee / X', \sigma^{-n_\tau} \tilde \tau})^{\otimes p^{n_\tau}}. \] So firstly, the class of $N_{X'_{\mathtt{T}}}(X')$ in $\Pic(X'_{\mathtt{T}})_{\mathbb{Q}}$ is given by the restriction of $p^{n_\tau}[\omega_{\sigma^{-n_\tau}\tau}] - [\omega_{\tau}]$ to $X'_{\mathtt{T}}$; and secondly, on $X'_{\mathtt{T}}$ we have an isomorphism \[ \omega^\circ_{\mathbf{A}'^\vee/X', \tilde \tau} \xrightarrow{\cong} H_1^\mathrm{dR}(\mathbf{A}'/X')^{\circ, (p^{n_\tau})}_{\tilde \tau} / (\omega^\circ_{\mathbf{A}'^\vee / X', \sigma^{-n_\tau} \tilde \tau})^{\otimes p^{n_\tau}}. \] This implies that $[\omega_\tau]$ is equal to $-p^{n_\tau}[\omega_{\sigma^{-n_\tau}\tau}]$ in $\Pic(X'_{\mathtt{T}})_{\mathbb{Q}}$. To sum up, we have the following equalities in $\Pic(X'_{\mathtt{T}})_{\mathbb{Q}}$: \begin{align*} [N_{X'_{\mathtt{T}}}(X')]&=p^{n_\tau}[\omega_{\sigma^{-n_\tau}\tau}] - [\omega_{\tau}]\\ &=2p^{n_\tau}[\omega_{\sigma^{-n_\tau}\tau}]=[\mathcal{O}(-2p^{n_\tau})]. \end{align*} \end{proof} \begin{theorem}\label{T:ampleness} Let $\underline t=(t_{\tau})\in \mathbb{Q}^{\Sigma_{\infty}-\mathtt{S}_{\infty}}$. If the element $[\omega^{\underline t}]=\sum_{\tau\in \Sigma_{\infty}-\mathtt{S}_{\infty}}t_{\tau}[\omega_{\tau}]$ of $\Pic(X')_{\mathbb{Q}}$ is ample, then \begin{equation}\label{E:condition-ample} p^{n_{\tau}}t_{\tau}> t_{\sigma^{-n_\tau}\tau}\quad \text{(and $t_{\tau}>0$) for all }\tau. 
\end{equation} Here, we put the second condition in parentheses, because it follows from the first one: iterating the first inequality around the $\sigma$-orbit of $\tau$ yields $t_{\tau}<p^{N}t_{\tau}$ for some integer $N\geq 1$, whence $t_{\tau}>0$. \end{theorem} \begin{proof} Assume that $[\omega^{\underline t}]$ is ample. Let $\tau\in \Sigma_{\infty}-\mathtt{S}_{\infty}$, and let $\mathfrak{p}\in \Sigma_p$ be the prime of $F$ such that $\tau\in \Sigma_{\infty/\mathfrak{p}}$. We distinguish two cases: \begin{itemize} \item $\Sigma_{\infty/\mathfrak{p}}-\mathtt{S}_{\infty/\mathfrak{p}}=\{\tau\}$. Condition \eqref{E:condition-ample} for $\tau$ is simply $t_{\tau}>0$. We consider the GO-stratum $X'_{\mathtt{T}_{\tau}}$ with $\mathtt{T}_{\tau}=\Sigma_{\infty}-(\mathtt{S}_{\infty}\cup \{\tau\})$. Then $X'_{\mathtt{T}_{\tau}}$ is isomorphic to a Shimura curve by Theorem~\ref{T:main-thm-unitary}; let $i_{\tau}: X'_{\mathtt{T}_{\tau}}\rightarrow X'$ denote the canonical embedding. Consider any $\tilde\tau'\in \Sigma_{E,\infty}-\mathtt{S}_{E,\infty}$ with restriction $\tau'=\tilde\tau'|_{F}\neq \tau$, and let \begin{equation}\label{E:definition-F-tau} F_{\mathbf{A}',\mathrm{es},\tilde\tau'}^{n_{\tau'}}: H^{\mathrm{dR}}_1(\mathbf{A}'^{(p^{n_{\tau'}})}/X')^\circ_{\tilde\tau'}\rightarrow H^{\mathrm{dR}}_1(\mathbf{A}'/X')^\circ_{\tilde\tau'} \end{equation} be the $n_{\tau'}$-th iteration of the essential Frobenius in Subsection~\ref{S:partial-Hasse}. We always have $\Ker(F_{\mathbf{A}',\mathrm{es},\tilde\tau'}^{n_{\tau'}})=(\omega^\circ_{\mathbf{A}'^\vee/X',\sigma^{-n_{\tau'}}\tilde\tau'})^{(p^{n_{\tau'}})}$. The vanishing of $h_{\tilde\tau'}$ on $X'_{\mathtt{T}_{\tau}}$ is equivalent to $$ \mathrm{Im}(F_{\mathbf{A}',\mathrm{es},\tilde\tau'}^{n_{\tau'}})|_{X'_{\mathtt{T}_{\tau}}}=(\omega^\circ_{\mathbf{A}'^\vee/X',\tilde\tau'})|_{X'_{\mathtt{T}_{\tau}}}. $$ Therefore, one deduces an equality in $\Pic(X'_{\mathtt{T}_{\tau}})_{\mathbb{Q}}$: \[ p^{n_{\tau'}}i_{\tau}^*[\omega_{\sigma^{-n_{\tau'}}\tau'}]+i^*_{\tau}[\omega_{\tau'}]=0. 
\] Letting $\tau'$ run through the set $\Sigma_{\infty/\mathfrak{q}}-\mathtt{S}_{\infty/\mathfrak{q}}$ with $\mathfrak{q}\neq \mathfrak{p}$, one obtains that $i_{\tau}^*[\omega_{\tau'}]=0$. Therefore, we have $i_{\tau}^*[\omega^{\underline t}]=t_{\tau} i_{\tau}^*[\omega_{\tau}]$, which is ample on $X'_{\mathtt{T}_\tau}$, since $[\omega^{\underline t}]$ is ample on $X'$ by assumption and its restriction to the closed subvariety $X'_{\mathtt{T}_\tau}$ stays ample. By the ampleness of $\det(\omega)=\bigotimes_{\tilde\tau'\in \Sigma_{E,\infty}}\omega_{\mathbf{A}',\tilde\tau'}$ on $X'$ and hence on $X'_{\mathtt{T}_{\tau}}$, we see that $i^*_{\tau}[\omega_{\tau}]$ is ample on $X'_{\mathtt{T}_{\tau}}$. It follows that $t_{\tau}>0$. \item $\Sigma_{\infty/\mathfrak{p}}-\mathtt{S}_{\infty/\mathfrak{p}}\neq\{\tau\}$. Consider the GO-stratum $X'_{\tau}$ given by $\mathtt{T}=\{\tau\}$; then $\mathtt{S}(\mathtt{T})=\mathtt{S}\cup \{\tau,\sigma^{-n_{\tau}}\tau\}$ in the notation of Subsection~\ref{S:quaternion-data-T}. By Propositions~\ref{P:Y_S=X_S} and \ref{P:Y_T=Z_T}, $X'_\tau$ is isomorphic to a $\mathbb{P}^1$-bundle over $\mathbf{Sh}_{K'_{\tau}}(G'_{\tilde \mathtt{S}(\mathtt{T})})_{k_0}$. Let $\pi: X'_{\tau}\rightarrow \mathbf{Sh}_{K'_{\tau}}(G'_{\tilde \mathtt{S}(\mathtt{T})})_{k_0}$ denote the natural projection. The ampleness of $[\omega^{\underline t}]$ on $X'$ implies the ampleness of its restriction to each closed fibre $\mathbb{P}^1_s$ of $\pi$. By the proof of Proposition~\ref{P:normal bundle}, we have \[ \omega^\circ_{\mathbf{A}'^\vee/X',\tilde\tau'}|_{\mathbb{P}^1_{s}}\simeq \begin{cases} \mathcal{O}_{\mathbb{P}^1_{s}}(-1) &\text{if $\tau'=\sigma^{-n_{\tau}}\tau$};\\ \mathcal{O}_{\mathbb{P}^1_{s}}(p^{n_{\tau}}) &\text{if } \tau'=\tau;\\ \mathcal{O}_{\mathbb{P}^1_{s}} &\text{otherwise}. \end{cases} \] Hence the restriction of $[\omega^{\underline t}]$ to $\mathbb{P}^1_{s}$ has degree $p^{n_{\tau}}t_{\tau}-t_{\sigma^{-n_{\tau}}\tau}$, which must be positive. The relation~\eqref{E:condition-ample} follows immediately. 
\end{itemize} \end{proof} Since the Hilbert modular varieties and the unitary Shimura varieties have the same neutral geometric connected components, the following is an immediate corollary of Theorem~\ref{T:ampleness}. \begin{cor}\label{C:ampleness Hilbert} Let $X$ denote the special fiber of the Hilbert modular variety, and $X^*$ its minimal compactification. Then for each $\tau \in \Sigma_\infty$, the class $[\omega_\tau] \in \Pic(X)_\mathbb{Q}$ uniquely extends to a class $[\omega_\tau] \in \Pic(X^*)_\mathbb{Q}$. Moreover, $[\omega^{\underline t}] = \sum_{\tau \in \Sigma_\infty}t_\tau [\omega_\tau]$ is ample only when $t_{\tau}>0$ and $p t_\tau > t_{\sigma^{-1}\tau}$ for all $\tau \in \Sigma_\infty$. \end{cor} For the converse to Theorem~\ref{T:ampleness}, we have the following \begin{conj} \label{C:ampleness conjecture} The conditions in Theorem~\ref{T:ampleness} and Corollary~\ref{C:ampleness Hilbert} are also sufficient for $[\omega^{\underline t}]$ to be ample. \end{conj} \begin{remark} In the case of Hilbert modular surfaces, Corollary~\ref{C:ampleness Hilbert} and the sufficiency of the condition were proved by Andreatta and Goren \cite[Theorem~8.1.1]{andreatta-goren}; their proof relies heavily on intersection theory on surfaces, and it seems difficult to generalize their method. Using our global geometric description, it seems possible to prove Conjecture~\ref{C:ampleness conjecture} for small inertia degrees (at least when all inertia degrees are $\leq 5$) using variants of the Nakai-Moishezon criterion. The combinatorics becomes complicated when the inertia degree is large. \end{remark} \section{Link morphisms}\label{Section:links} We will introduce certain generalizations of partial Frobenius morphisms, called \emph{link morphisms}, on unitary Shimura varieties associated to quaternionic ones. 
These morphisms appear naturally when considering the restriction of the projection maps $\pi_{\mathtt{T}}$ in Theorem~\ref{T:main-thm-unitary} to other Goren-Oort strata. The explicit descriptions of these morphisms are essential for the application considered in the forthcoming paper \cite{tian-xiao3}. For simplicity, \emph{we will assume that $p$ is inert of degree $g$ in the totally real field $F$.} Denote by $\mathfrak{p}$ the unique prime of $F$ above $p$. Let $E$ be a CM extension of $F$. If $\mathfrak{p}$ splits in $E$, fix a prime $\mathfrak{q}$ of $E$ above $\mathfrak{p}$, and denote the other prime by $\bar \mathfrak{q}$; if $\mathfrak{p}$ is inert in $E$, we denote by $\mathfrak{q}$ the unique prime of $E$ above $\mathfrak{p}$. \subsection{Links}\label{S:links} We introduce some combinatorial objects. Let $n\geq 1$ be an integer. Put $n$ points aligned equidistantly on a horizontal section of a vertical cylinder. We label the $n$ points by the elements of $\mathbb{Z}/n\mathbb{Z}$ so that the $(i+1)$-st point is next to the $i$-th point on the right. Let $S$ be a subset of the $n$ points above. To such an $S$, we associate a graph as follows: going from left to right, starting with the point labeled $0\in \mathbb{Z}/n\mathbb{Z}$, we draw a \emph{plus sign} if the element is in $S$, and a \emph{node} if it is in $\mathbb{Z}/n\mathbb{Z}-S$. We call such a picture a \emph{band of length $n$} associated to $S$. For instance, if $n=5$ and $S=\{1,3\}$, then the band is $ \psset{unit=0.3} \begin{pspicture}(-.5,-0.3)(4.5,0.3) \psset{linecolor=black} \psdots(0,0)(2,0)(4,0) \psdots[dotstyle=+](1,0)(3,0) \end{pspicture}$. Let $S'$ be another subset of $\mathbb{Z}/n\mathbb{Z}$ of the same cardinality as $S$. Then a \emph{link} $\eta: S\rightarrow S'$ is a graph of the following kind: put the band attached to $S$ on top of the band for $S'$ on the same cylinder, and draw non-intersecting curves from each node of the top band to a node of the bottom band. 
We say a curve turns to the \emph{left} (resp. to the \emph{right}) if it does so when traversed from the top band to the bottom band. If a curve travels past $m$ points (counting both plus signs and nodes) to the \emph{right} (resp. to the \emph{left}), we say that the \emph{displacement} of this curve is $m$ (resp. $-m$). When both $S$ and $S'$ are equal to $\mathbb{Z}/n\mathbb{Z}$ (so that there are no nodes at all), we say that $\eta:S\rightarrow S'$ is the trivial link. We define the \emph{total displacement} of a link $\eta$ as the sum of the displacements of all curves in $\eta$. For example, if $n=5$, $S=\{1, 3\}$ and $S'=\{1, 4\}$, then \begin{equation} \label{E:left turn link} \psset{unit=0.3} \eta = \begin{pspicture}(-.5,-0.3)(5,2.3) \psset{linecolor=red} \psset{linewidth=1pt} \psbezier(0,2)(1,1)(3,1)(3,0) \psbezier(2,2)(2,1)(3.5,1.5)(4.5,0.5) \psbezier(-0.5,1.3)(0.5,.3)(2,1)(2,0) \psarc{-}(-0.5,0){0.5}{0}{90} \psarc{-}(4.5,2){0.5}{180}{270} \psset{linecolor=black} \psdots(0,2)(2,2)(4,2) \psdots(0,0)(2,0)(3,0) \psdots[dotstyle=+](1,2)(3,2) \psdots[dotstyle=+](1,0)(4,0) \end{pspicture} \end{equation} is a link from $S$ to $S'$, and its total displacement is $v(\eta)=3+3+3=9$. For a link $\eta: S\rightarrow S'$, we denote by $\eta^{-1}: S'\rightarrow S$ the link obtained by flipping the picture about the equator of the cylinder. For two links $\eta: S\rightarrow S'$ and $\eta':S'\rightarrow S''$, we define the composition $\eta'\circ\eta: S\rightarrow S''$ by putting the picture of $\eta$ on top of the picture of $\eta'$ and joining the curves at the nodes of the common middle band. It is obvious that $v(\eta^{-1})=-v(\eta)$ and $v(\eta'\circ\eta)=v(\eta')+v(\eta)$. \subsection{Links for a subset of places of $F$ or $E$} We return to the setup of Notation~\ref{S:Notation-for-the-paper}, and recall that $p$ is inert in $F$. We fix an isomorphism $\Sigma_{\infty}\cong \mathbb{Z}/g\mathbb{Z}$ so that $i\mapsto i+1$ corresponds to the action of Frobenius on $\Sigma_{\infty}$. 
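The displacement bookkeeping of the previous subsection is purely mechanical, so it can be checked by a short computation. The following sketch is not part of the paper; the encoding of a link as a map from nodes to displacements is our own convention, introduced only for illustration.

```python
# A link eta: S -> S' on Z/nZ is encoded (our own convention, not the paper's)
# as a dict sending each node s (an element of Z/nZ not in S) to the
# displacement of its curve; the curve starting at node s then ends at the
# node (s + m) mod n of the bottom band.

def compose(n, eta, eta_p):
    """Composition eta_p o eta: match each curve's endpoint with a curve of eta_p."""
    return {s: m + eta_p[(s + m) % n] for s, m in eta.items()}

def invert(n, eta):
    """Flip the picture about the equator: reverse every curve."""
    return {(s + m) % n: -m for s, m in eta.items()}

def total_displacement(eta):
    """v(eta): the sum of the displacements of all curves."""
    return sum(eta.values())

# The link of the displayed example: n = 5, S = {1,3}, S' = {1,4}; each of the
# three curves (starting at the nodes 0, 2, 4 of the top band) turns right
# past 3 points.
n = 5
eta = {0: 3, 2: 3, 4: 3}
print(total_displacement(eta))                              # v(eta) = 9
print(total_displacement(invert(n, eta)))                   # v(eta^{-1}) = -9
print(total_displacement(compose(n, eta, invert(n, eta))))  # v(eta^{-1} o eta) = 0
```

In this encoding the identities $v(\eta^{-1})=-v(\eta)$ and $v(\eta'\circ\eta)=v(\eta')+v(\eta)$ hold by construction, matching the assertions in the text.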
For an even subset $\mathtt{S}$ of places in $F$, we have the \emph{band} for $\mathtt{S}$ obtained by applying the construction of Subsection~\ref{S:links} to the subset $\mathtt{S}_{\infty}$ of $\Sigma_{\infty}$. Let $\mathtt{S}'$ be another even subset of places of $F$ such that $\#\mathtt{S}_{\infty}=\#\mathtt{S}'_{\infty}$ and $\mathtt{S}'$ contains the same finite places of $F$ as $\mathtt{S}$ does. A link $\eta$ from the band for $\mathtt{S}$ to that for $\mathtt{S}'$ is denoted by $\eta: \mathtt{S}\rightarrow \mathtt{S}'$. When $\mathtt{S}'=\mathtt{S}$ and $\mathtt{S}_{\infty}=\Sigma_{\infty}$, $\eta:\mathtt{S}\rightarrow\mathtt{S}'$ is necessarily the trivial link (so that there are no curves at all). The Frobenius action on $\Sigma_{\infty}$ defines a link $\sigma: \mathtt{S}\rightarrow \sigma(\mathtt{S})$, in which all curves turn to the right with displacement $1$; the total displacement of this link $\sigma$ is $v(\sigma)=g-\#\mathtt{S}_{\infty}$. Here, $\sigma(\mathtt{S})$ denotes the subset of places of $F$ whose finite part is the same as that of $\mathtt{S}$, and whose infinite part is the image of $\mathtt{S}_{\infty}$ under Frobenius. \begin{notation}\label{S:notation-n-tau} Recall the definition of $n_\tau$ for $\tau \in \Sigma_\infty - \mathtt{S}_\infty$ from Notation~\ref{N:n tau}. For simplicity, we write $\tau^-$ for $\sigma^{-n_\tau} \tau$; and we use $\tau^+$ to denote the unique place in $\Sigma_\infty-\mathtt{S}_\infty$ such that $\tau = (\tau^+)^- = \sigma^{-n_{\tau^+}}\tau^+$. When several $\mathtt{S}$ are involved, we will write $n_{\tau}(\mathtt{S})$ for $n_{\tau}$ to emphasize its dependence on $\mathtt{S}$. \end{notation} \subsection{Link morphisms}\label{S:link-morphisms} Let $\eta: \mathtt{S}\rightarrow \mathtt{S}'$ be a link between two even subsets of places of $F$. 
If $\mathtt{S}_{\infty}\neq \Sigma_{\infty}$, we denote by $m(\tau)$ the displacement of the curve at $\tau$ in the link $\eta$ for each $\tau\in \Sigma_{\infty}-\mathtt{S}_{\infty}$; and we put $m(\tau) =0$ for $\tau \in \mathtt{S}_\infty$. Let $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0}$ and $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}'})_{k_0}$ denote the special fibers of unitary Shimura varieties of the type considered in Subsection~\ref{S:PEL-Shimura-data}. (There is no restriction on the signatures, i.e. on the sets $\tilde \mathtt{S}_\infty$ and $\tilde \mathtt{S}'_\infty$ that lift $\mathtt{S}_\infty$ and $\mathtt{S}'_\infty$; but we fix them.) Here, we have fixed (compatible) isomorphisms $\mathcal{O}_D := \mathcal{O}_{D_{\mathtt{S}}}\cong \mathcal{O}_{D_{\mathtt{S}'}}\cong \mathrm{M}_{2\times 2}(\mathcal{O}_E)$ and $G'_{\tilde\mathtt{S}}(\mathbb{A}^{\infty})\cong G'_{\tilde\mathtt{S}'}(\mathbb{A}^{\infty})$, and regard $K'$ as an open compact subgroup of both groups; this is possible because $\mathtt{S}$ and $\mathtt{S}'$ have the same finite part, and the argument in Lemma~\ref{L:compare D_S with D_S(T)} applies verbatim in this situation. Note that $K'_p$ is assumed to be hyperspecial as in Subsection~\ref{S:PEL-Shimura-data}. Let $\mathbf{A}'_{\tilde\mathtt{S}, k_0}$ be the universal abelian scheme over $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0}$. For a point $x$ of $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0}$ with values in a perfect field $k(x)$, we denote by $\mathbf{A}'_{\tilde\mathtt{S},x}$ the base change of $\mathbf{A}'_{\tilde\mathtt{S}}$ to $x$, and by $\tcD(\mathbf{A}'_{\tilde\mathtt{S}, x})^{\circ}$ the reduced part of the \emph{covariant} Dieudonn\'e module of $\mathbf{A}'_{\tilde\mathtt{S},x}$ (cf. Subsection~\ref{S:GO-notation} and the proof of Lemma~\ref{L:Y_T=X_T-1}). 
For each $\tilde\tau\in \Sigma_{E, \infty} - \mathtt{S}_{E,\infty}$, we have the essential Frobenius map defined in \ref{N:essential frobenius and verschiebung}: \[ F_{\mathbf{A}',\mathrm{es}}: \tcD(\mathbf{A}'_{\tilde \mathtt{S},x})^{\circ}_{\sigma^{-1}\tilde\tau}\rightarrow \tcD(\mathbf{A}'_{\tilde \mathtt{S},x})_{\tilde\tau}^{\circ}. \] Finally, recall that a \emph{$p$-quasi-isogeny} of abelian varieties means a quasi-isogeny of the form $f_1\circ f_2^{-1}$, where $f_1$ and $f_2$ are isogenies of $p$-power order. \begin{defn}\label{D:link-morphism} Assume that $m(\tau)\geq 0$ for each $\tau\in \Sigma_\infty$, i.e. all curves (if any) in $\eta$ either are straight lines or turn to the right. Let $n$ be an integer. If $\mathfrak{p}$ is inert in $E$, we assume that $n=0$. A \emph{link morphism of indentation degree $n$} associated to $\eta$ on $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0}$ (if it exists) is a morphism of varieties $$ \eta'_{(n),\sharp}: \mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0}\rightarrow \mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}'})_{k_0} $$ together with a $p$-quasi-isogeny of abelian varieties $$\eta'^{\sharp}_{(n)}: \mathbf{A}'_{\tilde\mathtt{S}, k_0}\rightarrow \eta'^{*}_{(n),\sharp}(\mathbf{A}'_{\tilde\mathtt{S}',k_0}),$$ such that the following conditions are satisfied: \begin{itemize} \item[(1)] $\eta'_{(n),\sharp}$ induces a bijection on geometric points. \item[(2)] The quasi-isogeny $\eta'^{\sharp}_{(n)}$ is compatible with the actions of $\mathcal{O}_D$, the level structures, and the polarizations on both abelian varieties. 
\item[(3)] There exists, for each $\tilde\tau\in \Sigma_{E, \infty} - \mathtt{S}_{E,\infty}$, some $t_{\tilde\tau}\in \mathbb{Z}$, such that, for every $\overline \mathbb{F}_p$-point $x$ of $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0}$ with image $x'=\eta'_{(n),\sharp}(x)$, \[ \eta'^{\sharp}_{(n), *}\big(F_{\mathrm{es},\mathbf{A}'_{\tilde\mathtt{S},x}}^{m(\tau)}(\tcD(\mathbf{A}'_{\tilde \mathtt{S},x})^\circ_{\tilde\tau})\big)=p^{t_{\tilde\tau}}\tcD(\mathbf{A}'_{\tilde\mathtt{S}',x'})^\circ_{\sigma^{m(\tau)}\tilde\tau}, \] where $\tau\in \Sigma_{\infty}$ is the image of $\tilde\tau$. \item[(4)] The quasi-isogeny of $\mathfrak{q}$-divisible groups \[ \eta'^{\sharp}_{(n),\mathfrak{q}}:\mathbf{A}'_{\tilde\mathtt{S}}[\mathfrak{q}^{\infty}]\rightarrow \eta'^{*}_{(n),\sharp}\mathbf{A}'_{\tilde\mathtt{S}'}[\mathfrak{q}^{\infty}] \] has degree $p^{2n}$. Here, our convention for $\mathfrak{q}$ is as at the beginning of this section; in particular, if $\mathfrak{p}$ splits in $E$, then the quasi-isogeny on the $\bar\mathfrak{q}^{\infty}$-divisible groups \[ \eta'^{\sharp}_{(n),\bar\mathfrak{q}}:\mathbf{A}'_{\tilde\mathtt{S}}[\bar\mathfrak{q}^{\infty}]\rightarrow \eta'^*_{(n),\sharp}\mathbf{A}'_{\tilde\mathtt{S}'}[\bar\mathfrak{q}^{\infty}] \] necessarily has degree $p^{-2n}$. Here, the exponent $2n$ comes from the fact that $\mathbf{A}'[\mathfrak{q}^{\infty}]$ consists of two copies of its reduced part $\mathbf{A}'[\mathfrak{q}^{\infty}]^{\circ}$. \end{itemize} \end{defn} Let $\eta_i:\mathtt{S}_i\rightarrow \mathtt{S}_{i+1}$ for $i=1,2$ be two links with all curves turning to the right, and let $(\eta'_{i,\sharp},\eta'^{\sharp}_i)$ be the link morphism of indentation degree $n_i$ on $\mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}_i})_{k_0}$ attached to $\eta_{i}$. 
The composition of $(\eta'_{2,\sharp},\eta'^{\sharp}_{2})$ with $(\eta'_{1,\sharp},\eta'^{\sharp}_{1})$, defined by \[ \eta'_{12,\sharp}: \mathbf{Sh}_{K'_p}(G'_{\tilde\mathtt{S}_1})_{k_0}\xrightarrow{\eta'_{1,\sharp}}\mathbf{Sh}_{K'_p}(G'_{\tilde\mathtt{S}_2})_{k_0}\xrightarrow{\eta'_{2,\sharp}}\mathbf{Sh}_{K'_p}(G'_{\tilde\mathtt{S}_3})_{k_0} \] and \[ \eta'^{\sharp}_{12}: \mathbf{A}'_{\tilde\mathtt{S}_1,k_0}\xrightarrow{\eta'^{\sharp}_{1}}\eta'^{*}_{1,\sharp}(\mathbf{A}'_{\tilde\mathtt{S}_2,k_0})\xrightarrow{\eta'^{*}_{1,\sharp}(\eta'^{\sharp}_{2})}\eta'^{*}_{12,\sharp}(\mathbf{A}'_{\tilde\mathtt{S}_3,k_0}), \] is a link morphism attached to the composed link $\eta_{12} :=\eta_2\circ\eta_1$, with indentation degree $n_1+n_2$. \subsection{Variants} The formulation of link morphisms on $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0}$ is compatible with changing the tame level $K'^p$. By taking the inverse limit over $K'^p$, one can define a link morphism on $\mathbf{Sh}_{K'_p}(G'_{\tilde \mathtt{S}})_{k_0}$ associated to $\eta$ in the obvious way. One can similarly define a link morphism of indentation degree $n$ on $\mathbf{Sh}_{K''_p}(G''_{\tilde \mathtt{S}})$ as a pair $(\eta''_{(n),\sharp}, \eta''^{\sharp}_{(n)})$, where \[ \eta''_{(n),\sharp}: \mathbf{Sh}_{K''_p}(G''_{\tilde\mathtt{S}})_{k_0}\rightarrow \mathbf{Sh}_{K''_p}(G''_{\tilde\mathtt{S}'})_{k_0} \] is a morphism of varieties and \[ \eta''^{\sharp}_{(n)}: \mathbf{A}''_{\tilde\mathtt{S}, k_0}\rightarrow \eta''^{*}_{(n),\sharp}(\mathbf{A}''_{\tilde\mathtt{S}',k_0}) \] is a $p$-quasi-isogeny of abelian schemes such that the same conditions (1)--(4) in Definition~\ref{D:link-morphism} are satisfied (with primes replaced by double primes). Here, $\mathbf{A}''_{\tilde\mathtt{S}, k_0}$ is the family of abelian varieties constructed in Subsection~\ref{S:abel var in unitary case}. 
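The additivity of indentation degrees under composition can be seen directly from condition (4) of Definition~\ref{D:link-morphism}. The following one-line check is our own gloss, not spelled out in the paper: degrees of quasi-isogenies are multiplicative under composition and unchanged under pullback, so

```latex
% Why indentation degrees add under composition: on q-divisible groups,
\[
  \deg\bigl(\eta'^{\sharp}_{12,\mathfrak{q}}\bigr)
  \;=\; \deg\bigl(\eta'^{\sharp}_{1,\mathfrak{q}}\bigr)\cdot
        \deg\bigl(\eta'^{\sharp}_{2,\mathfrak{q}}\bigr)
  \;=\; p^{2n_1}\cdot p^{2n_2}
  \;=\; p^{2(n_1+n_2)},
\]
```

in agreement with the claim that the composed link morphism has indentation degree $n_1+n_2$.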
\begin{example} (1) Consider the second iteration of the Frobenius link $\sigma^2=\sigma_{\mathfrak{p}}^2: \mathtt{S}\rightarrow \sigma^{2}(\mathtt{S})$. The twisted (partial) Frobenius map \eqref{E:twist-partial-Frob} \[ \mathfrak{F}'_{\mathfrak{p}^2}: \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_{0}} \to \mathbf{Sh}_{K'}(G'_{\sigma^2 \tilde \mathtt{S}})_{k_{0}} \] together with the isogeny $\eta'_{\mathfrak{p}^2}$ defined in \eqref{E:universal-isog-Frob} is a link morphism associated to $\sigma^2$; the indentation degree is $0$ if $\mathfrak{p}$ is inert in $E/F$, and is $2\#\tilde \mathtt{S}_{\infty/\bar \mathfrak{q}} - 2\#\tilde \mathtt{S}_{\infty/ \mathfrak{q}}$ if $\mathfrak{p}$ splits in $E/F$. (2) Assume that $\mathtt{S}_\infty = \Sigma_\infty$ and $\mathfrak{p}\notin\mathtt{S}$ (so that $\mathfrak{p}$ splits in $E$ by our choice of $E$). The Shimura variety $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0}$ is just a finite union of closed points. Let $\tau_0\in \Sigma_{\infty}$, and $\tilde \tau_0\in \tilde \mathtt{S}_\infty$ be the lift of $\tau_0$ with signature $s_{\tilde\tau_0}=0$. We assume that $\sigma^{-1}\tilde \tau_0 \notin \tilde \mathtt{S}_\infty$ (so that $\sigma^{-1}\tilde \tau_0^c \in \tilde \mathtt{S}_\infty$). Let $\tilde \mathtt{S}'$ denote the subset of places of $F$ containing the same finite places as $ \tilde \mathtt{S}$ and such that $\tilde \mathtt{S}'_\infty = \tilde \mathtt{S}_\infty \cup \{\tilde \tau_0^c, \sigma^{-1}\tilde\tau_0\} \backslash \{ \tilde \tau_0, \sigma^{-1}\tilde \tau_0^c\}$. Let $\mathtt{S}'$ be the subset of places of $F$ defined by the restriction of $\tilde\mathtt{S}'_{\infty}$. 
Then there exists a link morphism $(\delta'_{\tau_0,\sharp}, \delta'^{\sharp}_{\tau_0})$ from $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0}$ to $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}'})_{k_0}$ associated to the trivial link $\mathtt{S}\rightarrow \mathtt{S}'$, defined as follows. Its indentation degree is $0$ if $\mathfrak{p}$ is inert in $E/F$; it is $2$ if $\mathfrak{p}$ splits in $E/F$ and $\tilde \tau_0$ induces the $p$-adic place $\mathfrak{q}$, and $-2$ if $\mathfrak{p}$ splits in $E/F$ and $\tilde \tau_0$ induces the $p$-adic place $\bar \mathfrak{q}$. It suffices to define $\delta'_{\tau_0,\sharp}$ on the geometric closed points, as both Shimura varieties are zero-dimensional. For each $\overline \mathbb{F}_p$-point $x = (A, \iota ,\lambda_{A}, \alpha_{K'}) \in \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})(\overline \mathbb{F}_p)$, let $\tilde \mathcal{D}^\circ_{A,\tilde \tau}$ denote the $\tilde\tau$-component of the reduced covariant Dieudonn\'e module of $A$ for each $\tilde \tau \in \Sigma_{E, \infty}$. We put $M_{\tilde \tau} = \tilde \mathcal{D}^\circ_{A,\tilde \tau}$ for $\tilde \tau \neq \tilde \tau_0, \tilde \tau_0^c$ and $$M_{\tilde \tau_0^c} = p\tilde \mathcal{D}^\circ_{A,\tilde \tau_0^c},\quad M_{\tilde \tau_0} = \frac 1p\tilde \mathcal{D}^\circ_{A,\tilde \tau_0}\subseteq \tcD^{\circ}_{A,\tilde\tau_0}[\frac{1}{p}]. $$ We check that the signature condition implies that the $M_{\tilde\tau}$'s are stable under the actions of Frobenius and Verschiebung of $\tcD^{\circ}_{A,\tilde\tau_0}[1/p]$. As in the proof of Proposition~\ref{P:Y_S=X_S}, this gives rise to an abelian variety $B$ of dimension $4g$ with an action of $\mathcal{O}_D$ and an $\mathcal{O}_D$-quasi-isogeny $\phi:B \to A$ such that the induced morphism on Dieudonn\'e modules $\phi_*: \tcD^{\circ}_{B,\tilde\tau}\rightarrow \tcD^{\circ}_{A,\tilde\tau}[1/p]$ is identified with the natural inclusion $M_{\tilde\tau}\hookrightarrow \tcD^{\circ}_{A,\tilde\tau}[1/p]$ for all $\tilde\tau\in \Sigma_{E,\infty}$. 
The polarization $\lambda_{A}$ naturally induces a polarization $\lambda_{B}$ on $B$ such that $\lambda_B=\phi^\vee\circ \lambda_A\circ\phi$, since $M_{\tilde\tau}$ is the dual lattice of $M_{\tilde\tau^c}$ for every $\tilde\tau\in \Sigma_{E,\infty}$. When $\mathfrak{p}$ is of type $\alpha_2$, $K'_p$ is the Iwahori subgroup, and the level structure at $\mathfrak{p}$ is equivalent to the data of a collection of submodules $L_{\tilde \tau} \subset \tilde \mathcal{D}^\circ_{A,\tilde \tau}$ for $\tilde \tau \in \Sigma_{E, \infty/\mathfrak{q}}$ which are stable under the action of the Frobenius and Verschiebung morphisms and such that $\tilde \mathcal{D}^\circ_{A,\tilde \tau} / L_{\tilde \tau}$ is a one-dimensional vector space over $\overline \mathbb{F}_p$ for each $\tilde \tau$. This gives rise to a level structure at $\mathfrak{p}$ for $B$ by taking $L'_{\tilde \tau} = L_{\tilde \tau}$ if $\tilde \tau \neq \tilde \tau_0$, and $L'_{\tilde \tau_0} = p^{-1}L_{\tilde \tau_0}$. It is clear that the other level structures of $A$ transfer to those of $B$ automatically. This defines a morphism $$ \delta'_{\tau_0,\sharp}: \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0} \to \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}'})_{k_0}. $$ One checks easily that one can reverse the construction to recover $A$ from $B$. So $\delta'_{\tau_0,\sharp}$ is an isomorphism, and there exists a $p$-quasi-isogeny $$ \delta'^{\sharp}_{\tau_0}: \mathbf{A}'_{\tilde \mathtt{S}, k_0} \to \delta'^{*}_{\tau_0,\sharp}\mathbf{A}'_{\tilde \mathtt{S}', k_0}, $$ whose base change to $x$ is the quasi-isogeny $\phi^{-1}: A\rightarrow B$ constructed above. It is evident by construction that $(\delta'_{\tau_0,\sharp},\delta'^{\sharp}_{\tau_0})$ is a link morphism of the prescribed indentation degree associated to the trivial link $\mathtt{S}\rightarrow \mathtt{S}'$. \end{example} The following proposition will play a crucial role in our application in \cite{tian-xiao3}. 
\begin{prop}\label{P:uniqueness-link-morphism} For a given link $\eta: \mathtt{S}\rightarrow \mathtt{S}'$ with all curves (if any) turning to the right and an integer $n\in \mathbb{Z}$ (with $n=0$ if $\mathfrak{p}$ is inert in $E$), there exists at most one link morphism of indentation degree $n$ from $\mathbf{Sh}_{K'_p}(G'_{\tilde\mathtt{S}})_{k_0}$ to $\mathbf{Sh}_{K'_p}(G'_{\tilde\mathtt{S}'})_{k_0}$ (or from $\mathbf{Sh}_{K''_p}(G''_{\tilde\mathtt{S}})_{k_0}$ to $\mathbf{Sh}_{K''_p}(G''_{\tilde\mathtt{S}'})_{k_0}$) associated to $\eta$. \end{prop} \begin{proof} Since $\mathbf{Sh}_{K'_p}(G'_{\tilde\mathtt{S}})_{k_0}$ and $\mathbf{Sh}_{K''_p}(G''_{\tilde\mathtt{S}})_{k_0}$ have canonically isomorphic neutral connected components (and the restrictions of $\mathbf{A}'_{\tilde\mathtt{S},k_0}$ and $\mathbf{A}''_{\tilde\mathtt{S},k_0}$ to these neutral connected components are also canonically isomorphic), it suffices to treat the case of $\mathbf{Sh}_{K'_p}(G'_{\tilde\mathtt{S}})_{k_0}$. Let $(\eta'_{i,\sharp}, \eta'^{\sharp}_{i})$ for $i=1,2$ be two link morphisms of indentation degree $n$ associated to $\eta$. By the moduli property of $\mathbf{Sh}_{K'_p}(G'_{\tilde\mathtt{S}'})_{k_0}$, it suffices to show that the $p$-quasi-isogeny of abelian varieties \[ \phi: \eta'^{*}_{1,\sharp}(\mathbf{A}'_{\tilde\mathtt{S}',k_0}) \xleftarrow{\eta'^{\sharp}_{1}}\mathbf{A}'_{\tilde\mathtt{S}, k_0}\xrightarrow{\eta'^{\sharp}_{2}} \eta'^{*}_{2,\sharp}(\mathbf{A}'_{\tilde\mathtt{S}',k_0}) \] is an isomorphism. By \cite[Proposition~2.9]{rapoport-zink}, the locus where $\phi$ is an isomorphism is a closed subscheme of $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0}$. As $\mathbf{Sh}_{K'_p}(G'_{\tilde\mathtt{S}})_{k_0}$ is a reduced variety, $\phi$ is an isomorphism if and only if it is so after base change to every $\overline \mathbb{F}_p$-point of $\mathbf{Sh}_{K'_p}(G'_{\tilde\mathtt{S}})_{k_0}$. 
Let $x$ be an $\overline \mathbb{F}_p$-point of $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0}$, and put $x_i=\eta'_{i,\sharp}(x)$ for $i=1,2$. Consider first the case $\mathtt{S}_{\infty}\neq \Sigma_{\infty}$. By condition~\ref{D:link-morphism}(3), there exists an integer $u_{\tilde\tau}$ for each $\tilde\tau\in \Sigma_{E,\infty}-\mathtt{S}'_{E,\infty}$ such that \[ \phi_{x,*}\big(\tcD(\mathbf{A}'_{\tilde\mathtt{S}',x_1})^{\circ}_{\tilde\tau}\big)=p^{u_{\tilde\tau}}\tcD(\mathbf{A}'_{\tilde\mathtt{S}',x_2})^{\circ}_{\tilde\tau}. \] We claim that $u_{\tilde\tau}$ must be $0$ for all $\tilde\tau$. Note that the cokernel of \[ F^{n_{\tau}}_{\mathrm{es},\mathbf{A}'_{\tilde\mathtt{S}',x_i}}: \tcD(\mathbf{A}'_{\tilde\mathtt{S}',x_i})^{\circ}_{\sigma^{-n_{\tau}}\tilde\tau}\rightarrow \tcD(\mathbf{A}'_{\tilde\mathtt{S}',x_i})^{\circ}_{\tilde\tau} \] has dimension $1$ over $k(x_i)$ for $i=1,2$. Since $\phi_{x,*}$ commutes with $F^{n_{\tau}}_{\mathrm{es}}$, we see that $u_{\tilde\tau}=u_{\sigma^{-n_{\tau}}\tilde\tau}$. Consequently, $u_{\tilde\tau}$ takes the same value for all $\tilde\tau\in \Sigma_{E,\infty/\mathfrak{q}}$, which we denote by $u$. However, both $\eta'^{\sharp}_{1, \mathfrak{q}}$ and $\eta'^{\sharp}_{2, \mathfrak{q}}$ have degree $p^{2n}$ by condition~\ref{D:link-morphism}(4). It follows that $$\phi_{x,\mathfrak{q}}:\eta'^{*}_{1,\sharp}(\mathbf{A}'_{\tilde\mathtt{S}',x_1})[\mathfrak{q}^{\infty}]\rightarrow \eta'^{*}_{2,\sharp}(\mathbf{A}'_{\tilde\mathtt{S}',x_2})[\mathfrak{q}^{\infty}] $$ is a quasi-isogeny of degree $p^{0}=1$, which forces $u$ to be $0$. Hence $\phi_{x,*}$ is an isomorphism. When $\mathtt{S}_{\infty}=\Sigma_{\infty}$, we have similarly an integer $u_{\tilde\tau}$ for all $\tilde\tau\in \Sigma_{E,\infty}$ such that $\phi_{x,*}\big(\tcD(\mathbf{A}'_{\tilde\mathtt{S}',x_1})^{\circ}_{\tilde\tau}\big)=p^{u_{\tilde\tau}}\tcD(\mathbf{A}'_{\tilde\mathtt{S}',x_2})^{\circ}_{\tilde\tau}$. 
Since $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0}$ and $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}'})_{k_0}$ are both zero-dimensional and $$F_{\mathrm{es},\mathbf{A}'_{\tilde\mathtt{S}',x_i}}: \tcD(\mathbf{A}'_{\tilde\mathtt{S}',x_i})^{\circ}_{\sigma^{-1}\tilde\tau}\rightarrow \tcD(\mathbf{A}'_{\tilde\mathtt{S}',x_i})_{\tilde\tau}^{\circ} $$ is an isomorphism for all $\tilde\tau\in \Sigma_{E,\infty}$ and $i=1,2$, the commutativity of $\phi_{x,*}$ with the essential Frobenii shows that $u_{\tilde\tau}=u_{\sigma^{-1}\tilde\tau}$. The same arguments as above show that $\phi_{x,*}$ is an isomorphism. \end{proof} \begin{remark} This proposition does not guarantee the existence of a link morphism associated to a given link. \end{remark} \subsection{Link morphisms and Hecke operators} Assume $\mathtt{S}_\infty = \Sigma_\infty$, so that $\mathfrak{p}$ is of type $\alpha$ or $\alpha^{\sharp}$ and the band associated to $\mathtt{S}$ consists only of plus signs. Let $\mathfrak{q}$ and $\bar\mathfrak{q}$ be the two primes of $E$ above $\mathfrak{p}$. We will focus on the compatibility of link morphisms with the Hecke operators at $\mathfrak{q}$, whose definition we now recall. We have the following description: $$ G''_{\tilde \mathtt{S}}(\mathbb{Q}_p) \cong \mathrm{GL}_2(F_{\mathfrak{p}})\times_{F_{\mathfrak{p}}^{\times}}(E_{\mathfrak{q}}^{\times}\times E_{\bar\mathfrak{q}}^{\times})\xrightarrow{\sim} \mathrm{GL}_2(E_{\mathfrak{q}})\times F_{\mathfrak{p}}^{\times}, $$ where the last isomorphism is given by $(g, (\lambda_1,\lambda_2))\mapsto(g\lambda_1, \det(g)\lambda_{1}\lambda_2)$ for $g\in \mathrm{GL}_2(F_{\mathfrak{p}})$, $\lambda_1\in E_{\mathfrak{q}}^{\times}$ and $\lambda_2\in E_{\bar\mathfrak{q}}^{\times}$. Then $G'_{\tilde \mathtt{S}}(\mathbb{Q}_p)$ is the subgroup $\mathrm{GL}_2(E_{\mathfrak{q}})\times \mathbb{Q}_p^{\times}$ of $G''_{\tilde \mathtt{S}}(\mathbb{Q}_p)$. Let $\gamma_{\mathfrak{q}}$ (resp. 
$\xi_\mathfrak{q}$) be the element of $G'_{\tilde \mathtt{S}}(\mathbb{A}^{\infty})$ which is equal to $$(\begin{pmatrix}p^{-1}&0\\0&p^{-1}\end{pmatrix},1)\in \mathrm{GL}_2(E_\mathfrak{q})\times\mathbb{Q}_p^{\times} \quad \textrm{(resp. }(\begin{pmatrix}p^{-1} &0\\ 0&1\end{pmatrix}, 1)\in \mathrm{GL}_2(E_\mathfrak{q})\times\mathbb{Q}_p^{\times} \ )$$ at $p$ and is equal to $1$ at the other places. Assume that $K'\subseteq G'(\mathbb{A}^{\infty})$ is hyperspecial at $p$, i.e. $K'_p=\mathrm{GL}_{2}(\mathcal{O}_{E_{\mathfrak{q}}})\times \mathbb{Z}_p^{\times}$. We use $S_{\mathfrak{q}}$ and $T_{\mathfrak{q}}$ to denote the Hecke correspondences on $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})$ defined by $K'\gamma_\mathfrak{q} K'$ and $K'\xi_\mathfrak{q} K'$, respectively. Explicitly, if $\mathrm{Iw}'_p= \mathrm{Iw}_{\mathfrak{q}}\times \mathbb{Z}_p^{\times}\subseteq G'_{\tilde\mathtt{S}}(\mathbb{Q}_p)$, with $\mathrm{Iw}_{\mathfrak{q}}\subseteq \mathrm{GL}_2(\mathcal{O}_{E_{\mathfrak{q}}})$ the standard Iwahori subgroup of matrices that are upper triangular modulo $p$, then the Hecke correspondence $T_{\mathfrak{q}}$ is given by the following diagram: \begin{equation}\label{E:Hecke-T_q} \xymatrix{ & \mathbf{Sh}_{K'^p \mathrm{Iw}'_p}(G'_{\tilde\mathtt{S}}) \ar[rd]^{\mathrm{pr}_2}\ar[ld]_{\mathrm{pr}_1}\\ \mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})&& \mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}}), } \end{equation} where $\mathrm{pr}_1$ is the natural projection, and $\mathrm{pr}_2$ is induced by the right multiplication by $\xi_{\mathfrak{q}}$. 
Note that $S_{\mathfrak{q}}$ is an automorphism of $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})$, and there is a natural $p$-quasi-isogeny of universal abelian schemes $$ \Phi_{S_{\mathfrak{q}}}: \mathbf{A}'_{\tilde \mathtt{S}}\rightarrow S_{\mathfrak{q}}^*\mathbf{A}'_{\tilde\mathtt{S}} $$ compatible with all structures such that the induced quasi-isogeny of $p$-divisible groups $\Phi_{S_{\mathfrak{q}}}[\mathfrak{q}^{\infty}]: \mathbf{A}'_{\tilde\mathtt{S}}[\mathfrak{q}^{\infty}]\rightarrow (S_{\mathfrak{q}}^*\mathbf{A}'_{\tilde\mathtt{S}})[\mathfrak{q}^{\infty}]$ is the canonical isogeny with kernel $\mathbf{A}'_{\tilde\mathtt{S}}[\mathfrak{q}]$. Similarly, the elements $\gamma_{\mathfrak{q}}$ and $\xi_{\mathfrak{q}}$ induce Hecke correspondences on $\mathbf{Sh}_{K''}(G''_{\tilde\mathtt{S}})$, which we still denote by $S_\mathfrak{q}$ and $T_\mathfrak{q}$ respectively. \begin{prop} Assume that $\mathtt{S}_\infty = \Sigma_\infty$. Let $\tilde \mathtt{S}_\infty$ and $\tilde \mathtt{S}'_\infty$ be two different choices of signatures in Subsection~\ref{S:CM extension}. Suppose that there exists a link morphism $(\eta'_\sharp,\eta'^{\sharp})$ from $ \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0}$ to $ \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}'})_{k_0}$ (of some indentation degree) associated to the trivial link $\mathtt{S} \to \mathtt{S}'$, where $K'_p= \mathrm{GL}_2(\mathcal{O}_{E_{\mathfrak{q}}})\times \mathbb{Z}_p^{\times}\subseteq G'(\mathbb{Q}_p)$ is hyperspecial. 
Then $(\eta'_{\sharp}, \eta'^{\sharp})$ lifts uniquely to a link morphism $(\eta'_{\sharp, \mathrm{Iw}}, \eta'^{\sharp}_{\mathrm{Iw}})$ on $\mathbf{Sh}_{K'^p\mathrm{Iw}'_p}(G'_{\tilde\mathtt{S}})_{k_0}$ such that the following commutative diagrams are Cartesian: \[ \xymatrix{ \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0} \ar[d]^{\eta'_\sharp} & \mathbf{Sh}_{K'^p\mathrm{Iw}'_p}(G'_{\tilde \mathtt{S}})_{k_0} \ar[l]_{\mathrm{pr}_1}\ar[d]^{\eta'_{\sharp,\mathrm{Iw}}} \ar[r]^{\mathrm{pr}_2} & \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0}\ar[d]^{\eta'_{\sharp}} & \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0}\ar[d]^{\eta'_\sharp} \ar[r]^{S_\mathfrak{q}} & \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0}\ar[d]^{\eta'_\sharp} \\ \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}'})_{k_0} & \mathbf{Sh}_{K'^p\mathrm{Iw}'_{p}}(G'_{\tilde \mathtt{S}'})_{k_0}\ar[l]_{\mathrm{pr}_1} \ar[r]^{\mathrm{pr}_2} & \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}'})_{k_0} & \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}'})_{k_0} \ar[r]^{S_\mathfrak{q}} & \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}'})_{k_0} } \] where the top and bottom rows of the left diagram are the Hecke correspondences $T_{\mathfrak{q}}$ defined above. The same holds for the link morphism $(\eta''_\sharp,\eta''^{\sharp})$: $\mathbf{Sh}_{K''}(G''_{\tilde \mathtt{S}})_{k_0}\to \mathbf{Sh}_{K''}(G''_{\tilde \mathtt{S}'})_{k_0}$. \end{prop} \begin{proof} Note that $S_\mathfrak{q}$ is in fact an isomorphism of Shimura varieties; so the compatibility with the $S_\mathfrak{q}$-action follows from the uniqueness of link morphisms (Proposition~\ref{P:uniqueness-link-morphism}). We now prove the existence of the lift $(\eta'_{\sharp,\mathrm{Iw}},\eta'^{\sharp}_{\mathrm{Iw}})$; its uniqueness also follows from Proposition~\ref{P:uniqueness-link-morphism}. Let $x=(A,\iota,\lambda, \alpha_{K'^p})$ be a point of $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0}(\overline \mathbb{F}_p)$. Put $x'=\eta'_{\sharp}(x)=(A',\iota',\lambda',\alpha'_{K'^p})$. 
By Definition~\ref{D:link-morphism}(3), for any $\tilde\tau\in \Sigma_{E,\infty}$, there exists $t_{\tilde\tau}\in \mathbb{Z}$ independent of $x$ such that $\eta'^{\sharp}_{*}(\tcD^{\circ}_{A,\tilde\tau})=p^{t_{\tilde\tau}}\tcD^{\circ}_{A',\tilde\tau}$. Fix a $\tilde\tau_0\in \Sigma_{E,\infty/\mathfrak{q}}$. Giving a point $y$ of $\mathbf{Sh}_{K'^p\mathrm{Iw}'_{p}}(G'_{\tilde\mathtt{S}})_{k_0}$ with $\mathrm{pr}_1(y)=x$ is equivalent to giving a $W(\overline \mathbb{F}_p)$-submodule $\tilde H_{\tilde\tau_0}\subseteq \tcD^{\circ}_{A,\tilde\tau_0}$ such that $F_{\mathrm{es}, A}^{g}(\tilde H_{\tilde\tau_0})=\tilde H_{\tilde\tau_0}$ and $\tcD^{\circ}_{A,\tilde\tau_0}/\tilde H_{\tilde\tau_0}$ is one-dimensional over $\overline \mathbb{F}_p$. We put $\tilde H_{\tilde\tau_0}'=p^{-t_{\tilde\tau_0}}\eta'^{\sharp}_*(\tilde H_{\tilde\tau_0})\subseteq \tcD^{\circ}_{A',\tilde\tau_0}$. Then one sees easily that the quotient $\tcD^{\circ}_{A',\tilde\tau_0}/\tilde H'_{\tilde\tau_0}$ is one-dimensional over $\overline \mathbb{F}_p$ and $\tilde H'_{\tilde\tau_0}$ is fixed by $F_{\mathrm{es}, A'}^g$. This gives rise to a point $y'$ of $\mathbf{Sh}_{K'^p\mathrm{Iw}'_p}(G'_{\tilde\mathtt{S}'})_{k_0}$ with $\mathrm{pr}_1(y')=x'$. One thus defines $\eta'_{\sharp,\mathrm{Iw}}(y)=y'$, and the quasi-isogeny $\eta'^{\sharp}_{\mathrm{Iw}}$ as the pull-back of $\eta'^{\sharp}$ via $\mathrm{pr}_1$. It is clear by construction that $\eta'_{\sharp}\circ\mathrm{pr}_{1}=\mathrm{pr}_1\circ \eta'_{\sharp,\mathrm{Iw}}$. It remains to prove that $\eta'_{\sharp}\circ \mathrm{pr}_2=\mathrm{pr}_2\circ\eta'_{\sharp,\mathrm{Iw}}$. Let $y=(A,\iota, \lambda, \alpha_{K'^p}, \tilde H_{\tilde\tau_0})\in \mathbf{Sh}_{K'^p\mathrm{Iw}'_p}(G'_{\tilde\mathtt{S}})_{k_0}$ be a point above $x$ as above. 
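Note that any submodule $\tilde H_{\tilde\tau_0}$ as above automatically contains $p\tcD^{\circ}_{A,\tilde\tau_0}$: the quotient $\tcD^{\circ}_{A,\tilde\tau_0}/\tilde H_{\tilde\tau_0}$, being one-dimensional over $\overline \mathbb{F}_p$, is killed by $p$. Hence $\tilde H_{\tilde\tau_0}$ is determined by its image $$ \tilde H_{\tilde\tau_0}/p\tcD^{\circ}_{A,\tilde\tau_0}\subseteq \tcD^{\circ}_{A,\tilde\tau_0}/p\tcD^{\circ}_{A,\tilde\tau_0}. $$ 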
We put $\tcD_{A,\mathfrak{q}}^{\circ}\colon =\bigoplus_{\tilde\tau\in \Sigma_{E,\infty/\mathfrak{q}}} \tcD_{A,\tilde\tau}^{\circ}$, and we define $\tcD_{A,\bar\mathfrak{q}}^{\circ}$ similarly with $\mathfrak{q}$ replaced by $\bar\mathfrak{q}$. Then $\tilde H_{\mathfrak{q}}\colon =\frac{1}{p}\bigoplus_{i=0}^{g-1} F_{\mathrm{es}, A}^{i}(\tilde H_{\tilde\tau_0})$ is a $W(\overline \mathbb{F}_p)$-lattice of $\tcD^{\circ}_{A,\mathfrak{q}}[1/p]$ stable under the action of $F$ and $V$. Let $\tilde H_{\bar\mathfrak{q}}=\tilde H^{\vee}_{\mathfrak{q}}\subseteq \tcD^{\circ}_{A,\bar\mathfrak{q}}[1/p]$ denote the dual lattice of $\tilde H_{\mathfrak{q}}$ under the perfect pairing between $\tcD^{\circ}_{A,\mathfrak{q}}[1/p]$ and $\tcD^{\circ}_{A,\bar\mathfrak{q}}[1/p]$ induced by $\lambda$. By the theory of Dieudonn\'e modules, there exists a unique abelian variety $B$ equipped with $\mathcal{O}_D$-action $\iota_B$ together with an $\mathcal{O}_D$-linear $p$-quasi-isogeny $\phi\colon B\rightarrow A$ such that $\phi_*(\tcD_{B}^{\circ})$ is identified with the lattice $\tilde H_{\mathfrak{q}}\oplus \tilde H_{\bar\mathfrak{q}}$ of $\tcD_{A}^{\circ}[1/p]$. Note that $B$ satisfies the signature condition of $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0}$. Since $\tilde H_{\mathfrak{q}}$ and $\tilde H_{\bar\mathfrak{q}}$ are dual to each other, the quasi-isogeny $\lambda_B = \phi^{\vee}\circ\lambda\circ\phi\colon B\rightarrow B^{\vee}$ is a prime-to-$p$ polarization on $B$. We moreover equip $B$ with the $K'^p$-level structure $\beta_{K'^p}$ such that $\alpha_{K'^p}=\phi\circ \beta_{K'^p}$. Thus $z\colon =(B,\iota_B,\lambda_B, \beta_{K'^p})$ gives rise to an $\overline \mathbb{F}_p$-point of $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0}$, and we have $z=\mathrm{pr}_2(y)$ by the moduli interpretation of $\mathrm{pr}_2$. Let $(B',\iota_{B'},\lambda_{B'},\beta'_{K'^p})$ denote the image $\eta'_{\sharp}(z)$. 
Then $\tcD^{\circ}_{B',\tilde\tau_0}$ is identified via $(\eta'^{\sharp}_{*})^{-1}$ with the lattice $p^{-t_{\tilde\tau_0}} \tcD^{\circ}_{B,\tilde\tau_0}$ of $\tcD^{\circ}_{B,\tilde\tau_0}[1/p]$, hence with the lattice $p^{-t_{\tilde\tau_0}-1}\tilde H_{\tilde\tau_0}$ of $\tcD^{\circ}_{A,\tilde\tau_0}[1/p]$. By our construction of $y'=\eta'_{\sharp, \mathrm{Iw}}(y)$, it is easy to see that, if $\mathrm{pr}_2(y')=(B'',\iota_{B''},\lambda_{B''},\beta''_{K'^p})$, then $\tcD^{\circ}_{B'',\tilde\tau_0}$ can be canonically identified with $\tcD^{\circ}_{B',\tilde\tau_0}$ as lattices of $\tcD^{\circ}_{A,\tilde\tau_0}[1/p]$. Since other components $\tcD^{\circ}_{B',\tilde\tau}$ or $\tcD^{\circ}_{B'',\tilde\tau}$ for $\tilde\tau\in \Sigma_{E,\infty}$ are determined from $\tcD^{\circ}_{B',\tilde\tau_0}$ by the same rules (i.e. stability under the essential Frobenius and the duality), we see that $B'$ is canonically isomorphic to $B''$, compatible with all structures. This concludes the proof of $\mathrm{pr}_2\circ\eta'_{\sharp,\mathrm{Iw}}=\eta'_{\sharp}\circ\mathrm{pr}_2$. \end{proof} For the rest of this paper, we discuss two topics; their proofs are intertwined. One topic is to understand the behavior of the description of the Goren-Oort strata under the link morphisms; the other is to understand the restriction of the $\mathbb{P}^1$-bundle description of the Goren-Oort strata to other Goren-Oort strata. \begin{prop} \label{P:restriction of GO strata} Let $\tau \in \Sigma_{\infty} - \mathtt{S}_{\infty}$ be a place such that $\tau^-\neq \tau$, and let $\pi_\tau: \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0,\tau} \to \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau)})_{k_0}$ be the $\mathbb{P}^1$-bundle fibration given by Theorem~\ref{T:main-thm-unitary} for the Goren-Oort stratum defined by the vanishing of the partial Hasse invariant at $\tau$. Let $\mathtt{T}$ be a subset of $\Sigma_\infty - \mathtt{S}_\infty$ containing $\tau$. 
\begin{itemize} \item[(1)] If $\tau^+ \notin \mathtt{T}$, then we put $\mathtt{T}_{\tau} = \mathtt{T}\backslash \{\tau, \tau^-\}$ and we have a commutative diagram \begin{equation} \label{E:good commutative diagram} \xymatrix{ \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \mathtt{T}} \ar@{^{(}->}[r] \ar[d]& \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \tau} \ar[d]^{\pi_\tau} \\ \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau)})_{k_0, \mathtt{T}_{\tau}} \ar@{^{(}->}[r]& \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau)})_{k_0}. } \end{equation} If $\tau^- \in \mathtt{T}$, the left vertical arrow is an isomorphism. If $\tau^- \notin \mathtt{T}$, this diagram is Cartesian. \item[(2)] If $\tau, \tau^- \in \mathtt{T}$ and $\tau^+\neq \tau^-$, then we put $\mathtt{T}_{\tau} = \mathtt{T}\backslash \{\tau, \tau^-\}$ and $\pi_\tau$ induces a natural isomorphism \begin{equation} \label{E:easy projection} \pi_\tau\colon \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \mathtt{T}} \to \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau)})_{k_0, \mathtt{T}_{\tau}}. \end{equation} \end{itemize} Moreover, all descriptions above are compatible with the natural quasi-isogenies on universal abelian varieties, and analogous results hold for $\mathbf{Sh}_{K''}(G''_{\tilde \mathtt{S}})_{k_0}$. \end{prop} \begin{proof} The statements for $\mathbf{Sh}_{K''}(G''_{\tilde \mathtt{S}})_{k_0}$ follow from the analogous statements for $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0}$ by \ref{S:transfer math obj} (or in this case more explicitly by \ref{S:abel var in unitary case}). Thus, we will just prove the proposition for $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0}$. (1) If $\tau^+ \notin \mathtt{T}$, the prime $\mathfrak{p}$ must be of type $\alpha1$ or $\beta1$. 
By the proof of Proposition~\ref{P:Y_S=X_S}, the natural quasi-isogeny $\phi: \pi_\tau^*(\mathbf{A}'_{\tilde \mathtt{S}(\tau), k_0}) \to \mathbf{A}'_{\tilde \mathtt{S},k_0}|_{ \mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0, \tau}}$ induces an isomorphism on the (reduced) differential forms at $\tilde \tau'$ for all $\tilde \tau'\in \Sigma_{E,\infty}$ \emph{not} lifting $\tau, \tau^-$. So $\pi_\tau$ induces a Cartesian square \[ \xymatrix{ \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \mathtt{T}_{\tau}\cup\{\tau\}} \ar@{^{(}->}[r] \ar[d]^{\pi_{\tau}}& \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \tau} \ar[d]^{\pi_\tau} \\ \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau)})_{k_0, \mathtt{T}_{\tau}} \ar@{^{(}->}[r]& \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau)})_{k_0} } \] This already proves (1) in case $\tau^- \notin \mathtt{T}$. Suppose now $\tau^-\in \mathtt{T}$. By Proposition~\ref{P:Y_T=Z_T}, $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \mathtt{T}_{\tau}\cup\{\tau\}}$ is the moduli space of tuples $(B,\iota_B, \lambda_B, \beta_{K'_{\mathtt{T}_{\tau}}}; J^{\circ}_{\tilde\tau^-})$, where \begin{itemize} \item $(B,\iota_B, \lambda_B, \beta_{K'_{\mathtt{T}_{\tau}}})$ is a point of $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau)})_{k_0,\mathtt{T}_{\tau}}$ with values in a scheme $S$ over $k_0$; \item $J^{\circ}_{\tilde\tau^-}$ is a sub-bundle of $H^{\mathrm{dR}}_1(B/S)^{\circ}_{\tilde\tau^-}$ of rank 1 (here, $\tilde\tau^-\in \Sigma_{E,\infty}$ is the specific lift of $\tau^-$ defined in Subsection~\ref{S:tilde IT}). \end{itemize} Then the closed subscheme $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \mathtt{T}}$ of $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \mathtt{T}_{\tau}\cup\{\tau\}}$ is defined by the condition: $$ J^\circ_{\tilde\tau^-} =F_{B,\mathrm{es}}^{n_{\tau^-}}\big(H_1^\mathrm{dR}(B^{(p^{n_{\tau^-}})} / S)^{\circ}_{\sigma^{-n_{\tau^-}}\tilde\tau^{-}}\big). 
$$ This shows that the restriction of $\pi_{\tau}:\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \mathtt{T}_{\tau}\cup\{\tau\}}\rightarrow \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau)})_{k_0, \mathtt{T}_{\tau}}$ to $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \mathtt{T}}$ is an isomorphism. (2) When $\tau^+\notin \mathtt{T}$, this was proved in (1). Assume now $\tau^+ \in \mathtt{T}$. To complete the proof, it suffices to prove that, for a $k_0$-scheme $S$, the $\tau^+$-th partial Hasse invariant vanishes at an $S$-point $x=(A, \iota_A, \lambda_A, \alpha_{K'})\in \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0,\{\tau\}}$ if and only if it vanishes at $\pi_{\tau}(x)=(B, \iota_B, \lambda_B, \beta_{K'})\in \mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau)})_{k_0}$. Let $\tilde\tau$ be the lift of $\tau$ contained in $\tilde\Delta(\tau)^+$ (see Subsection~\ref{S:Delta-pm} for the notation $\tilde\Delta(\tau)^+$). Put $\tilde\tau^+=\sigma^{n_{\tau^+}}\tilde\tau$ and $\tilde\tau^-=\sigma^{-n_{\tau}}\tilde\tau$. By Lemma~\ref{Lemma:partial-Hasse}, it suffices to show that \begin{align} \label{E:equivalent condition for triple Hasse invariant} & F_{A,\mathrm{es}}^{n_{\tau^+}} \big( H_1^\mathrm{dR}(A/S)^{\circ, (p^{n_{\tau^+}})}_{\tilde \tau} \big) = \omega^\circ_{A^\vee/S, \tilde \tau^+},\\ \nonumber \Leftrightarrow \ & F_{B,\mathrm{es} }^{n_{\tau^+}+ n_\tau + n_{\tau^-}}\big( H_1^\mathrm{dR}(B/S)^{\circ, (p^{n_{\tau^+}+ n_\tau + n_{\tau^-}})}_{\sigma^{-n_{\tau^-}}\tilde \tau^-} \big) = \omega^\circ_{B^\vee/S, \tilde \tau^+}. \end{align} But this follows from the following three facts. 
\begin{itemize} \item[(a)] By the definition of essential Frobenius \ref{N:essential frobenius and verschiebung}, one deduces a commutative diagram \[ \xymatrix{H_1^\mathrm{dR}(A/S)_{\tilde \tau}^{\circ,(p^{n_{\tau^+}})}\ar[rr]^{F_{A,\mathrm{es}}^{n_{\tau^+}}}\ar[d]_{\phi_{*,\tilde\tau}} && H^{\mathrm{dR}}_1(A/S)^{\circ}_{\tilde\tau^+}\ar[d]^{\phi_{*,\tilde\tau^+}}_{\cong}\\ H_1^\mathrm{dR}(B/S)_{\tilde \tau}^{\circ,(p^{n_{\tau^+}})}\ar[rr]^{F_{B,\mathrm{es}}^{n_{\tau^+}}} && H^{\mathrm{dR}}_{1}(B/S)^{\circ}_{\tilde\tau^+}, } \] \item[(b)] It follows from condition (v) of the moduli description in Subsection~\ref{S:moduli-Y_S} that \[ \phi_{*, \tilde\tau}(H_1^\mathrm{dR}(A/S)_{\tilde \tau}^\circ) = F_{ B,\mathrm{es}}^{n_{\tau^-}+ n_\tau} \big(H_1^\mathrm{dR}(B/S)^{\circ, (p^{ n_\tau + n_{\tau^-}})}_{\sigma^{-n_{\tau^-}}\tilde \tau^-} \big). \] \item[(c)] The condition $\tau^-\neq \tau^+$ implies that the quasi-isogeny $\phi:A\rightarrow B$ induces an isomorphism $\phi_{*, \tilde\tau^+}:H^{\mathrm{dR}}_{1}(A/S)^{\circ}_{\tilde \tau^+}\cong H^{\mathrm{dR}}_1(B/S)^{\circ}_{\tilde \tau^+}$ preserving the Hodge filtrations, in particular identifying the submodules $\phi_{*, \tilde \tau^+}(\omega^\circ_{A^\vee/S, \tilde \tau^+}) = \omega^\circ_{B^\vee/S, \tilde \tau^+}$. \end{itemize} \end{proof} \subsection{Compatibility of link morphisms and the description of Goren-Oort strata} We first recall that, although the subset $\mathtt{S}(\tau)$ is completely determined by $\mathtt{S}$ and $\tau$ as in Subsection~\ref{S:quaternion-data-T}, the lift $\tilde \mathtt{S}(\tau)_{\infty}$, which consists of all $\tilde\tau'\in \Sigma_{E,\infty}$ with signature $s_{\tilde\tau'}=0$ (see Subsection~\ref{S:CM extension}), depends on an auxiliary choice in Subsection~\ref{S:tilde S(T)}: a lift $\tilde\tau$ of $\tau$ to be contained in $\tilde\mathtt{S}(\tau)_{\infty}$. We assume that $\#(\Sigma_\infty-\mathtt{S}_\infty) \geq 2$. 
If $\mathfrak{p}$ splits as $\mathfrak{q}\bar\mathfrak{q}$ in $E$ for a fixed place $\mathfrak{q}$, then the $\tilde\tau$ contained in $\tilde\mathtt{S}(\tau)_{\infty}$ is always chosen to be the one in $\Sigma_{E,\infty/\mathfrak{q}}$. If $\mathfrak{p}$ is inert in $E$, then there are two possible choices: $\tilde\tau$ and its conjugate $\tilde\tau^c$ for a fixed lift $\tilde\tau$ of $\tau$. In the latter case, we denote by $\tilde\tau$ the lift of $\tau$ contained in $\tilde\mathtt{S}(\tau)_{\infty}$, by $\tilde\mathtt{S}(\tau)'=(\mathtt{S}(\tau),\tilde\mathtt{S}(\tau)'_{\infty})$ the lift of $\mathtt{S}(\tau)$ such that $\tilde\tau^c\in \tilde\mathtt{S}(\tau)'_{\infty}$, and let $\pi_{\tau}'\colon \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0,\tau} \to \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau)'})_{k_0}$ be the corresponding $\mathbb{P}^1$-bundle. The following proposition says that $\pi_{\tau}$ and $\pi_{\tau}'$ differ from each other by a link isomorphism. \begin{prop}\label{P:ambiguity-signature} Assume that $\mathfrak{p}$ is inert in $E$. Then there exists a link isomorphism $$(\eta'_{\tilde\mathtt{S}(\tau), \tilde \mathtt{S}(\tau)', \sharp},\eta'^{\sharp}_{\tilde\mathtt{S}(\tau),\tilde\mathtt{S}(\tau)'})\colon \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau)})_{k_0}\xrightarrow{\sim} \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau)'})_{k_0} $$ (of indentation degree $0$) associated to the identity link $\eta: \mathtt{S}(\tau)\rightarrow \mathtt{S}(\tau)$ such that the diagram \[ \xymatrix{ & \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \tau} \ar[dl]_{\pi_\tau} \ar[dr]^{\pi'_\tau} \\ \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau)})_{k_0} \ar[rr]_{\eta'_{\tilde \mathtt{S}(\tau), \tilde \mathtt{S}(\tau)', \sharp}}^{\cong} && \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau)'})_{k_0} } \] commutes. Moreover, the link morphism $\eta'_{\tilde \mathtt{S}(\tau), \tilde \mathtt{S}(\tau)', \sharp}$ satisfies the natural cocycle condition under composition. 
Similar statements hold for $\mathbf{Sh}_{K''}(G''_{\tilde \mathtt{S}(\tau)})_{k_0}$ and $\mathbf{Sh}_{K''}(G''_{\tilde\mathtt{S}(\tau)'})_{k_0}$. \end{prop} \begin{proof} Consider the closed subvariety $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0, \{\tau, \tau^-\}}$ of $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0, \tau}$. Then by Proposition~\ref{P:restriction of GO strata} (2) (note that $\mathfrak{p}$ being inert in $E$ implies that $\tau^+\neq \tau^-$), $\pi_{\tau}|_{\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0, \{\tau^-, \tau\}}}$ is an isomorphism. We put $$\eta'_{\tilde \mathtt{S}(\tau),\tilde \mathtt{S}(\tau)',\sharp}\colon =\pi'_{\tau}\circ(\pi_{\tau}|_{\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0, \{\tau^-, \tau\}}})^{-1} $$ and define $\eta'^{\sharp}_{\tilde \mathtt{S}(\tau),\tilde \mathtt{S}(\tau)'}$ as the pull-back via $(\pi_{\tau}|_{\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0, \{\tau^-, \tau\}}})^{-1}$ of the quasi-isogeny \[ \pi_{\tau}^*\mathbf{A}'_{\tilde \mathtt{S}(\tau),k_0}\rightarrow \mathbf{A}'_{\tilde\mathtt{S},k_0}|_{\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0,\{\tau,\tau^-\}}}\rightarrow\pi'^*_{\tau}\mathbf{A}'_{\tilde\mathtt{S}(\tau)',k_0}, \] where the two quasi-isogenies are given by Theorem~\ref{T:main-thm-unitary}(2). By Proposition~\ref{P:restriction of GO strata}(2) again, $\pi'_{\tau}|_{\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0, \{\tau,\tau^-\}}}$ is an isomorphism, hence so is $\eta'_{\tilde \mathtt{S}(\tau),\tilde \mathtt{S}(\tau)',\sharp}$. It remains to show that $(\eta'_{\tilde \mathtt{S}(\tau),\tilde \mathtt{S}(\tau)',\sharp}, \eta'^{\sharp}_{\tilde \mathtt{S}(\tau),\tilde \mathtt{S}(\tau)'})$ is a link morphism associated to the identity link on $\mathtt{S}(\tau)$. Let $x$ be a geometric point of $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau)})_{k_0}$, and $x'=\eta'_{\tilde \mathtt{S}(\tau),\tilde \mathtt{S}(\tau)',\sharp}(x)$. 
By construction, it is easy to see that the quasi-isogeny $\eta'^{\sharp}_{\tilde \mathtt{S}(\tau),\tilde \mathtt{S}(\tau)'}$ induces an isomorphism $\tcD(\mathbf{A}'_{\tilde\mathtt{S}(\tau), k_0,x})^{\circ}_{\tilde\tau'}\xrightarrow{\sim} \tcD(\mathbf{A}'_{\tilde\mathtt{S}(\tau)', k_0,x'})^{\circ}_{\tilde\tau'}$ for $\tilde\tau'\neq \sigma^a\tilde\tau, \sigma^a\tilde\tau^c$ with $a = 0, \dots, n_\tau-1$; and in the exceptional cases, we have \[ \eta'^{\sharp}_{\tilde \mathtt{S}(\tau),\tilde \mathtt{S}(\tau)'}(\tcD(\mathbf{A}'_{\tilde\mathtt{S}(\tau), k_0,x})^{\circ}_{\tilde\tau'})=\begin{cases} p\tcD(\mathbf{A}'_{\tilde\mathtt{S}(\tau)', k_0,x'})^{\circ}_{\tilde\tau'} &\text{for }\tilde\tau'=\sigma^a\tilde\tau \textrm{ for }a = 0, \dots, n_\tau-1;\\ \frac{1}{p} \tcD(\mathbf{A}'_{\tilde\mathtt{S}(\tau)', k_0,x'})^{\circ}_{\tilde\tau'} &\text{for }\tilde\tau'=\sigma^a\tilde\tau^c\textrm{ for }a = 0, \dots, n_\tau-1. \end{cases} \] Hence, $(\eta'_{\tilde \mathtt{S}(\tau),\tilde \mathtt{S}(\tau)',\sharp}, \eta'^{\sharp}_{\tilde \mathtt{S}(\tau),\tilde \mathtt{S}(\tau)'})$ verifies Definition~\ref{D:link-morphism}. The other statements of this proposition are evident by the uniqueness of link morphisms (Proposition~\ref{P:uniqueness-link-morphism}). \end{proof} The following Lemma will be needed in the proof of the main result of this section. \begin{lemma}\label{L:non-vanishing-chi} Assume that $\mathtt{S}_\infty \neq \emptyset$. Let $\chi(\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{\overline \mathbb{F}_p}): =\sum_{i=0}^{+\infty}(-1)^i\dim H^{i}_\mathrm{et}(\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{\overline \mathbb{F}_p}, \mathbb{Q}_{\ell})$ denote the Euler-Poincar\'e characteristic of $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{\overline \mathbb{F}_p}$ for some fixed prime $\ell\neq p$. Then we have $\chi(\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{\overline \mathbb{F}_p})\neq 0$. 
\end{lemma} \begin{proof} The assumption $\mathtt{S}_\infty \neq \emptyset$ implies that all the Shimura varieties under consideration are proper. Consider the integral model $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})$ over $\mathcal{O}_{\tilde\wp}$, and choose an embedding $\mathcal{O}_{\tilde\wp}\hookrightarrow \mathbb{C}$. By the proper base change theorem and standard comparison theorems, we have $$ \chi(\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{\overline \mathbb{F}_p})=\chi(\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{\mathbb{C}}): =\sum_{i=0}^{\infty} (-1)^{i}\dim H^{i}(\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{\mathbb{C}}, \mathbb{C}), $$ where $H^{i}(\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{\mathbb{C}}, \mathbb{C})$ denotes the singular cohomology of $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{\mathbb{C}}$ for the usual complex topology. For each $\tilde\tau$ lifting an element of $\Sigma_{\infty}-\mathtt{S}_{\infty}$, put $\omega_{\tilde\tau}^{\circ}=\omega_{\mathbf{A}'_{\tilde\mathtt{S},\mathbb{C}},\tilde\tau}^{\circ}$ and $\mathfrak{t}_{\tilde\tau}^{\circ}=\Lie(\mathbf{A}'_{\tilde\mathtt{S},\mathbb{C}})^{\circ}_{\tilde\tau}=\omega^{\circ,\vee}_{\tilde\tau}$ to simplify the notation. They are line bundles over $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{\mathbb{C}}$. We have a Hodge filtration \[ 0\rightarrow \omega^{\circ}_{\tilde\tau^c}\rightarrow H^{\mathrm{dR}}_1(\mathbf{A}'_{\tilde\mathtt{S},\mathbb{C}}/\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{\mathbb{C}})^{\circ}_{\tilde\tau}\rightarrow \mathfrak{t}_{\tilde\tau}^{\circ}\rightarrow 0. \] Note that $H^{\mathrm{dR}}_1(\mathbf{A}'_{\tilde\mathtt{S},\mathbb{C}}/\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{\mathbb{C}})_{\tilde\tau}^{\circ}$ is equipped with the integrable Gauss-Manin connection, so that its Chern classes are trivial by classical Chern-Weil theory. 
One thus obtains \[ \begin{cases} c_1(\omega_{\tilde\tau^c}^{\circ})c_1(\mathfrak{t}_{\tilde\tau}^{\circ})=c_2(H^{\mathrm{dR}}_1(\mathbf{A}'_{\tilde\mathtt{S},\mathbb{C}}/\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{\mathbb{C}})^{\circ}_{\tilde\tau})=0,\\ c_1(\omega_{\tilde\tau^c}^{\circ})+c_1(\mathfrak{t}^{\circ}_{\tilde\tau})=c_1(H^{\mathrm{dR}}_1(\mathbf{A}'_{\tilde\mathtt{S},\mathbb{C}}/\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{\mathbb{C}})^{\circ}_{\tilde\tau})=0, \end{cases} \Longrightarrow \begin{cases} c_1(\omega^{\circ}_{\tilde\tau})^2=0,\\ c_1(\omega^{\circ}_{\tilde\tau})=c_1(\omega_{\tilde\tau^c}^{\circ}), \end{cases} \] where $c_i(\mathcal{E})\in H^{2i}(\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{\mathbb{C}},\mathbb{C})$ denotes the $i$-th Chern class of a vector bundle $\mathcal{E}$. Let $\mathcal{T}$ denote the tangent bundle of $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{\mathbb{C}}$, and put $\det(\omega): =\bigotimes_{\tilde\tau\in \Sigma_{E,\infty}}\omega^{\circ}_{\tilde\tau}$. By Proposition~\ref{Prop:smoothness}\footnote{Even though Proposition~\ref{Prop:smoothness} was stated for $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0}$, its proof also works over $\mathbb{C}$.}, we get \[ \begin{cases}c_1(\mathcal{T})=-2\sum_{\tau\in \Sigma_{\infty}-\mathtt{S}_{\infty}} c_1\big(\omega_{\tilde\tau}^{\circ}\big)=-c_1(\det(\omega)),\\ c_d(\mathcal{T})=\prod_{\tau\in \Sigma_{\infty}-\mathtt{S}_{\infty}}(-2c_1(\omega_{\tilde\tau}^{\circ})), \end{cases} \] where $\tilde\tau\in \Sigma_{E,\infty}$ is an arbitrary lift of $\tau$, and $d=\#\Sigma_{\infty}-\#\mathtt{S}_{\infty}$ is the dimension of $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{\mathbb{C}}$. Note that $c_1(\omega^{\circ}_{\tilde\tau})^2=0$ implies that $c_d(\mathcal{T})=\frac{(-1)^{d}}{d!}c_1(\det(\omega))^d$. It is well known that $\det(\omega)$ is ample (see \cite{lan}, for instance), hence it follows that $c_d(\mathcal{T})\neq 0$. 
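Let us spell out the computation behind the identity $c_d(\mathcal{T})=\frac{(-1)^{d}}{d!}c_1(\det(\omega))^d$ used above. Writing $a_{\tau}=c_1(\omega^{\circ}_{\tilde\tau})$ for $\tau\in \Sigma_{\infty}-\mathtt{S}_{\infty}$, we have $c_1(\det(\omega))=2\sum_{\tau}a_{\tau}$ by the first displayed formula above, and the relations $a_{\tau}^2=0$ kill all non-square-free monomials in the expansion of its $d$-th power: $$ c_1(\det(\omega))^d=\Big(2\sum_{\tau\in \Sigma_{\infty}-\mathtt{S}_{\infty}}a_{\tau}\Big)^{d}=2^{d}\,d!\prod_{\tau\in \Sigma_{\infty}-\mathtt{S}_{\infty}}a_{\tau}=(-1)^{d}\,d!\,c_d(\mathcal{T}). $$ 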
On the other hand, there exists a canonical isomorphism $$\mathrm{Tr}\colon H^{2d}(\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{\mathbb{C}},\mathbb{C})\xrightarrow{\sim} \mathbb{C},$$ which sends the cycle class of a point to $1$. The Lemma follows immediately from the non-vanishing of $c_d(\mathcal{T})$ and the well-known fact that $\mathrm{Tr}(c_d(\mathcal{T}))=\chi(\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{\mathbb{C}})$. \end{proof} We state now the main result of this section, which will play a crucial role in our application to Tate cycles in \cite{tian-xiao3}. \begin{theorem} \label{T:link and Hecke operator} Keep the same notation as in Proposition~\ref{P:restriction of GO strata}, that is, let $\tau \in \Sigma_{\infty} - \mathtt{S}_{\infty}$ be a place such that $\tau^-$ is different from $\tau$ and let $\mathtt{T}$ be a subset of $\Sigma_\infty - \mathtt{S}_\infty$ containing $\tau$. \begin{itemize} \item[(1)] If $\tau^- \notin \mathtt{T}$ and $\tau,\tau^+ \in \mathtt{T}$, we put $\mathtt{T}_{\tau^+} = \mathtt{T} \backslash \{\tau, \tau^+\}$. Let $\eta = \eta_{\tau^-\rightarrow \tau^+}: \mathtt{S}(\tau^+)\rightarrow \mathtt{S}(\tau)$ be the link given by straight lines except sending $\tau^-$ to $\tau^+$ (to the right) with displacement $v(\eta)=n_{\tau}+n_{\tau^+}$: \[ \psset{unit=0.3} \begin{pspicture}(-1.2,-0.4)(16,2.4) \psset{linecolor=red} \psset{linewidth=1pt} \psline{-}(0,0)(0,2) \psline{-}(14.4,0)(14.4,2) \psbezier(4.8,2)(4.8,0)(9.6,2)(9.6,0) \psset{linecolor=black} \psdots(0,0)(0,2)(4.8,2)(9.6,0)(14.4,0)(14.4,2) \psdots[dotstyle=+](1,0)(-1,0)(-1,2)(1,2)(3.8,0) (3.8,2)(4.8,0)(5.8,0)(5.8,2)(8.6,0)(8.6,2)(10.6,0)(9.6,2)(10.6,2)(13.4,0)(13.4,2)(15.4,0)(15.4,2) \psset{linewidth=.1pt} \psdots(1.7,0)(1.7,2)(2.4,0)(2.4,2)(3.1,0)(3.1,2)(6.5,0)(7.2,0)(7.9,0)(6.5,2)(7.2,2)(7.9,2)(11.3,0)(12,0)(12.7,0)(11.3,2)(12,2)(12.7,2). 
\end{pspicture} \] \begin{enumerate} \item[(a)] We put $\eta'_{\sharp}\colon = \pi_{\tau}\circ i_{\tau^+}\circ (\pi_{\tau^+}|_{\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \{\tau, \tau^+\}}})^{-1}$, and let $\eta'^{\sharp}$ denote the natural quasi-isogeny of abelian varieties on $ \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau^+)})_{k_0}$ given by (the pull-back via $(\pi_{\tau^+}|_{\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \{\tau, \tau^+\}}})^{-1,*}$ of) \[ (\pi_{\tau^+}|_{\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \{\tau, \tau^+\}}})^*\mathbf{A}'_{\tilde\mathtt{S}(\tau^+),k_0} \leftarrow i_{\tau^+}^*(\mathbf{A}'_{\tilde\mathtt{S},k_0}|_{\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0, \tau}})\rightarrow i_{\tau^+}^*\pi_{\tau}^* (\mathbf{A}'_{\tilde\mathtt{S}(\tau),k_0}). \] Then $(\eta'_{\sharp}, \eta'^{\sharp})$ is the link morphism associated to the link $\eta$ of indentation degree $n=n_{\tau^+}-n_{\tau}$ if $\mathfrak{p}$ splits in $E/F$ and $n=0$ if $\mathfrak{p}$ is inert in $E/F$. Moreover, the following diagram \begin{equation} \label{E:diagram involving link} \xymatrix@C=55pt{ \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \mathtt{T}} \ar@{^{(}->}[r] \ar[d]_\cong & \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \{\tau, \tau^+\}} \ar@{^{(}->}[r]^-{i_{\tau^+}} \ar[d]^{\pi_{\tau^+}|_{\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \{\tau, \tau^+\}}}}_\cong & \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \tau} \ar[d]^{\pi_\tau} \\ \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau^+)})_{k_0, \mathtt{T}_{\tau^+}} \ar@{^{(}->}[r]& \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau^+)})_{k_0} \ar[r]^{\eta'_{\sharp}}& \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau)})_{k_0}, } \end{equation} is commutative, where the two vertical isomorphisms are given by Proposition~\ref{P:restriction of GO strata}(2). 
\item[(b)] For $\tilde\tau'\in \Sigma_{E,\infty}$, the quasi-isogeny $\eta'^{\sharp}:\mathbf{A}'_{\tilde\mathtt{S}(\tau^+), k_0}\rightarrow \eta'^{*}_{\sharp} \mathbf{A}'_{\tilde\mathtt{S}(\tau), k_0}$ induces a canonical isomorphism \[ \eta'^*_{\sharp}\big(\Lie(\mathbf{A}'_{\tilde\mathtt{S}(\tau), k_0})_{\tilde\tau'}^{\circ}\big)\cong \begin{cases} \Lie(\mathbf{A}'_{\tilde\mathtt{S}(\tau^+), k_0})^{\circ, (p^{v(\eta)})}_{\sigma^{-v(\eta)}\tilde\tau'} &\text{if $\tilde\tau'$ is a lifting of $\tau^+$,}\\ \Lie(\mathbf{A}'_{\tilde\mathtt{S}(\tau^+), k_0})^{\circ}_{\tilde\tau'} &\text{otherwise.} \end{cases} \] \item[(c)] The morphism $\eta'_{\sharp}$ is finite flat of degree $p^{v(\eta)}$. \end{enumerate} \item[(2)] Assume $\Sigma_{\infty} - \mathtt{S}_{\infty} = \{\tau, \tau^-\}= \mathtt{T}$ (so that $\tau^+ = \tau^-$ and $\mathfrak{p}$ is of type $\alpha2$ for $\mathtt{T}$). Then there exists a link morphism $(\eta_{\sharp},\eta^{\sharp}): \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau^-)})_{k_0}\rightarrow \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau)})_{k_0}$ of indentation degree $2(g-n_{\tau})=2n_{\tau^-}$ associated with the trivial link $\eta:\mathtt{S}(\tau^-)\rightarrow \mathtt{S}(\tau)$ such that the diagram \[ \xymatrix{ & \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \{\tau, \tau^-\}} \ar[dl]_{\pi_\tau} \ar[dr]^{\pi_{\tau^-}} \\ \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau)})_{k_0} && \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau^-)})_{k_0}\ar[rr]^{\eta_\sharp}&& \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau)})_{k_0} } \] coincides with the Hecke correspondence $T_{\mathfrak{q}}$ \eqref{E:Hecke-T_q} if we identify $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \{\tau, \tau^-\}}$ with $\mathbf{Sh}_{K'^p\mathrm{Iw}'_p}(G'_{\tilde\mathtt{S}(\tau)})_{k_0}$ via the isomorphism given by Theorem~\ref{T:main-thm-unitary}. 
\end{itemize} All descriptions above are compatible with the natural quasi-isogenies on universal abelian varieties, and similar results apply to $\mathbf{Sh}_{K''}(G''_{\tilde \mathtt{S}})_{k_0}$. \end{proof} \begin{proof} The statements for $\mathbf{Sh}_{K''}(G''_{\tilde \mathtt{S}})_{k_0}$ follow from the analogous statements for $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0}$ by \ref{S:transfer math obj} (or in this case more explicitly by \ref{S:abel var in unitary case}). Thus, we will just prove the theorem for $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0}$. (1)(a) The commutativity of \eqref{E:diagram involving link} is tautological. It remains to show that $\pi_\tau \circ (\pi_{\tau^+}|_{\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \{\tau, \tau^+\}}})^{-1}$ is the link morphism $\eta'_{\sharp}$ on $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau^+)})_{k_0}$ associated to the link $\eta=\eta_{\tau^-\rightarrow \tau^+}$. Let $\tilde\tau\in \Sigma_{E,\infty}$ (resp. $\tilde\tau^+$) denote the lift of $\tau$ (resp. $\tau^+$) contained in $\tilde\mathtt{S}(\tau)_{\infty}$ (resp. $\tilde\mathtt{S}(\tau^+)_{\infty}$). By Subsection~\ref{S:tilde S(T)}, we have $\tilde\tau=\sigma^{-n_{\tau^+}}\tilde\tau^+$ if $\mathfrak{p}$ splits in $E$. If $\mathfrak{p}$ is inert in $E$, it is also harmless to assume $\tilde\tau=\sigma^{-n_{\tau^+}}\tilde\tau^+$ in view of Propositions~\ref{P:ambiguity-signature} and \ref{P:compatibility of link and GO}. Put $\tilde \tau^- =\sigma^{-n_\tau} \tilde\tau$. Let $y=(B, \iota_{B}, \lambda_{B}, \beta'_{K'})$ be an $S$-point of $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau^+)})_{k_0}$ for a test $k_0$-scheme $S$. 
Then the pre-image of $y$ under $ \pi_{\tau^+}|_{\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \{\tau, \tau^+\}}}$ is given by $x=(A, \iota_A, \lambda_A, \alpha_{K'})\in \mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0, \{\tau,\tau^+\}}$, for which there exists a quasi-isogeny $\phi: B \to A$ such that $\phi_{*, \tilde \tau'}: H^{\mathrm{dR}}_1(B/S)^{\circ}_{\tilde\tau'}\rightarrow H^{\mathrm{dR}}_1(A/S)^{\circ}_{\tilde\tau'}$ is a well-defined isomorphism for all $\tilde \tau' \in \Sigma_{E, \infty}$ except for $\tilde \tau'=\sigma^a\tilde\tau$ or $ \sigma^{a}\tilde\tau^c$ with $a = 1, \dots, n_{\tau^+}$. In the exceptional cases, $\phi_{*,\sigma^a\tilde\tau}$ and $(\phi^{-1})_{*, \sigma^a\tilde \tau^c}$ are well defined, where $\phi^{-1}\colon A\rightarrow B$ denotes the quasi-isogeny inverse to $\phi$, and we have (by the proof of Proposition~\ref{P:Y_T=Z_T}) \begin{align*} &\quad \Ker(\phi_{*, \sigma^a\tilde \tau}) = F_{B,\mathrm{es}}^{n_\tau + a} \big( H^\mathrm{dR}_1(B/S)^{\circ, (p^{n_\tau+a})}_{\tilde \tau^-} \big)\cong \Lie(B/S)^{\circ, (p^{n_{\tau}+a})}_{\tilde\tau^-},\text{ and }\\ & \mathrm{Im} ((\phi^{-1})_{*, \sigma^a\tilde \tau^c}) = F_{B,\mathrm{es}}^{n_\tau + a} \big( H^\mathrm{dR}_1(B/S)^{\circ, (p^{n_\tau+a})}_{\tilde \tau^{-,c}} \big)\cong \Lie(B/S)^{\circ,(p^{n_{\tau}+a})}_{\tilde\tau^{-,c}}. \end{align*} If $\pi_\tau$ sends $x=(A, \iota_A, \lambda_A, \alpha_{K'})$ to $z=(B', \iota_{B'}, \lambda_{B'}, \beta_{K'})$, there is a quasi-isogeny $\psi: A \to B'$ such that $\psi_{*, \tilde \tau'}$ is an isomorphism for all $\tilde \tau' \in\Sigma_{E, \infty}$ except for $\tilde \tau'=\sigma^b\tilde\tau^-$ or $\sigma^b\tilde\tau^{-,c}$ with $b =1, \dots, n_\tau$. 
In the exceptional cases, $\psi_{*, \sigma^b\tilde \tau^{-,c}}$ and $(\psi^{-1})_{*, \sigma^b\tilde \tau^{-}}$ are well defined, and we have (by the proof of Proposition~\ref{P:Y_S=X_S} or rather the moduli description in Subsection~\ref{S:moduli-Y_S}) \begin{align*} &\quad \Ker(\psi_{*, \sigma^b\tilde \tau^{-,c}}) = F_{A,\mathrm{es}}^{ b} \big( H^\mathrm{dR}_1(A/S)^{\circ, (p^{b})}_{\tilde \tau^{-,c}} \big),\text{ and }\\ & \mathrm{Im} ((\psi^{-1})_{*, \sigma^b\tilde \tau^{-}}) = F_{A,\mathrm{es}}^{b} \big( H^\mathrm{dR}_1(A/S)^{\circ, (p^{b})}_{\tilde \tau^{-}} \big). \end{align*} By definition, we have $\eta'_{\sharp}(y)=z$, and the composed quasi-isogeny $\psi\circ\phi: B\rightarrow B'$ is nothing but the base change of $\eta'^{\sharp}$ to $S$. For later reference, we remark that $\psi$ and $\phi$ induce isomorphisms \begin{equation}\label{E:Lie-algebra-equality} \Lie(B/S)^{\circ}_{\tilde\tau'}\cong \Lie(A/S)^{\circ}_{\tilde\tau'}\cong \Lie(B'/S)^{\circ}_{\tilde\tau'} \end{equation} for all $\tilde\tau'$ with restriction $\tau'\in \Sigma_{\infty}-\mathtt{S}_{\infty}$ different from $\tau^-,\tau, \tau^+$, and \begin{align}\label{E:isom-Lie-algebras} \Lie(B/S)^{\circ,(p^{n_{\tau}+n_{\tau^+}})}_{\tilde\tau^-}&\xrightarrow{\sim} \Coker(\phi_{*,\tilde\tau^+})\cong \Lie(A/S)^{\circ}_{\tilde\tau^+}\xrightarrow{\sim} \Lie(B'/S)^{\circ}_{\tilde\tau^+}\\ \Lie(B/S)^{\circ,(p^{n_{\tau}+n_{\tau^{+}}})}_{\tilde\tau^{-,c}}&\xrightarrow{\sim}\Coker(\phi_{*,\tilde\tau^{+,c}})\cong \Lie(A/S)^{\circ}_{\tilde\tau^{+,c}}\xrightarrow{\sim} \Lie(B'/S)^{\circ}_{\tilde\tau^{+,c}}.\nonumber \end{align} Consider the case when $S=\Spec(k)$ with $k$ a perfect field containing $k_0$. Denote by $\tcD^{\circ}_{B,\tilde\tau'}$ the $\tilde\tau'$-component of the reduced covariant Dieudonn\'e module of $B$. 
From the discussion above, one sees easily that $\tcD^{\circ}_{B',\tilde\tau'}=\tcD^{\circ}_{B,\tilde\tau'}$ for all $\tilde\tau'\in \Sigma_{E,\infty}$ except for $\tilde\tau'\in\{\sigma^a\tilde\tau, \sigma^{a}\tilde\tau^c\;|\; 1\leq a\leq n_{\tau^+}\}\cup \{\sigma^b\tilde\tau^-, \sigma^b\tilde\tau^{-,c}\;|\;1\leq b\leq n_{\tau}\}$. In the exceptional cases, we have \[ (\psi\circ\phi)^{-1}_*\tcD^{\circ}_{B',\tilde\tau'}=\begin{cases} p^{-1}F_{B,\mathrm{es}}^{n_{\tau}+a}(\tcD^{\circ}_{B,\tilde\tau^-}) &\text{if }\tilde\tau'=\sigma^a\tilde\tau;\\ F_{B,\mathrm{es}}^{n_{\tau}+a}(\tcD^{\circ}_{B,\tilde\tau^{-,c}}) &\text{if }\tilde\tau'=\sigma^a\tilde\tau^c;\\ p^{-1}F_{B,\mathrm{es}}^{b}(\tcD^{\circ}_{B,\tilde\tau^{-,c}}) &\text{if }\tilde\tau'=\sigma^b\tilde\tau^{-,c};\\ F_{B,\mathrm{es}}^b(\tcD^{\circ}_{B,\tilde\tau^{-}}) &\text{if }\tilde\tau'=\sigma^b\tilde\tau^-. \end{cases} \] Since the essential Frobenius $F_{B,\mathrm{es}}$ is bijective after inverting $p$, one sees easily that $\tcD^{\circ}_{B}$ can be recovered from $\tcD^{\circ}_{B'}$. This implies immediately that $\eta'_{\sharp}: \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau^+)})_{k_0} \rightarrow \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau)})_{k_0} $ induces a bijection on $k$-valued points, i.e. $\eta'_{\sharp}$ verifies condition (1) in Definition~\ref{D:link-morphism}. By the discussion above, it is also obvious that conditions (2) and (3) of Definition~\ref{D:link-morphism} are verified for $(\eta'_{\sharp},\eta'^{\sharp})$. Finally, from the formulas for $\tcD^{\circ}_{B,\tilde\tau}$, one sees easily that the degree of the quasi-isogeny \[(\psi\circ\phi)_{\mathfrak{q}}: B[\mathfrak{q}^{\infty}]\rightarrow B'[\mathfrak{q}^{\infty}]\] is $2(n_{\tau^+}-n_{\tau})$ if $\mathfrak{p}$ splits in $E$, and is $0$ if $\mathfrak{p}$ is inert in $E$. This shows that $(\eta'_{\sharp},\eta'^{\sharp})$ is the link morphism associated to $\eta$ with the said indentation degree. 
Statement (1)(b) follows from the isomorphisms \eqref{E:Lie-algebra-equality} and \eqref{E:isom-Lie-algebras} applied to the case when $B$ is the universal abelian scheme $\mathbf{A}'_{\tilde\mathtt{S}(\tau^+),k_0}$. It remains to prove (1)(c). The morphism $\eta'_{\sharp}$ is clearly quasi-finite, and hence finite because $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau^+)})_{k_0}$ is proper by Proposition~\ref{Prop:smoothness}. Since both $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau)})_{k_0}$ and $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau^+)})_{k_0}$ are regular, we conclude by \cite[Theorem 23.1]{matsumura} that $\eta'_{\sharp}$ is flat at every point of $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau^+)})_{k_0}$. It remains to see that the degree of $\eta'_{\sharp}$ is $p^{v(\eta)}$. Let $\mathcal{T}_{\tau}$ and $\mathcal{T}_{\tau^+}$ denote respectively the tangent bundle of $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau)})_{k_0}$ and $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau^+)})_{k_0}$, and let $d=\#\Sigma_{\infty}-\#\mathtt{S}(\tau)_{\infty}$ be the common dimension of these Shimura varieties. Fix a prime $\ell\neq p$. For a vector bundle $\mathcal{E}$ over a proper and smooth $k_0$-variety $X$, we denote by $c_i(\mathcal{E})\in H^{2i}_{\mathrm{et}}(X_{\overline \mathbb{F}_p},\mathbb{Q}_{\ell})(i)$ the $i$-th Chern class of $\mathcal{E}$. By Proposition~\ref{Prop:smoothness}, we have \[ c_{d}(\mathcal{T}_{\tau})=\prod_{\tau'\in \Sigma_{\infty}-\mathtt{S}(\tau)_{\infty}}c_1 \bigg(\Lie(\mathbf{A}'_{\tilde\mathtt{S}(\tau),k_0})^{\circ}_{\tilde\tau'}\otimes\Lie(\mathbf{A}'_{\tilde\mathtt{S}(\tau),k_0})^{\circ}_{\tilde\tau'^c}\bigg), \] where $\tilde\tau',\tilde\tau'^c\in \Sigma_{E,\infty}$ denote the two liftings of $\tau'$. A similar formula for $c_d(\mathcal{T}_{\tau^+})$ holds with $\tau$ replaced by $\tau^+$. By (1)(b), we have \[ \eta'^{*}_{\sharp}c_{d}(\mathcal{T}_{\tau})=c_d(\eta'^{*}_{\sharp}\mathcal{T}_{\tau})=p^{v(\eta)}c_d(\mathcal{T}_{\tau^+}). 
\] Let $$ \mathrm{Tr}_{?}\colon H^{2d}_{\mathrm{et}}(\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(?)})_{\overline\mathbb{F}_p},\mathbb{Q}_{\ell})(d)\xrightarrow{\sim} \mathbb{Q}_{\ell} \quad \text{for }?=\tau,\tau^+ $$ be the $\ell$-adic trace map. Then we have \[ \deg(\eta'_{\sharp}) \mathrm{Tr}_{\tau}(c_d(\mathcal{T}_{\tau}))=\mathrm{Tr}_{\tau^+}(\eta'^{*}_{\sharp}c_d(\mathcal{T}_{\tau}))=p^{v(\eta)} \mathrm{Tr}_{\tau^+}(c_d(\mathcal{T}_{\tau^+})). \] It is well known that $\mathrm{Tr}_{\tau}(c_d(\mathcal{T}_{\tau}))=\chi(\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau)})_{\overline \mathbb{F}_p})$ (see \cite[Expos\'e VII, Corollaire 4.9]{SGA5}), where $$ \chi(\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau)})_{\overline \mathbb{F}_p}): =\sum_{i=0}^{2d}(-1)^i\dim H^{i}_{\mathrm{et}}(\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau)})_{\overline \mathbb{F}_p},\mathbb{Q}_{\ell}) $$ is the ($\ell$-adic) Euler-Poincar\'e characteristic. Hence, one obtains \[ \deg(\eta'_{\sharp}) \cdot \chi(\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau)})_{\overline \mathbb{F}_p})=p^{v(\eta)} \cdot \chi(\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau^+)})_{\overline \mathbb{F}_p}). \] Since $\eta'_{\sharp}$ is purely inseparable, we have $\chi(\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau)})_{\overline \mathbb{F}_p})=\chi(\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau^+)})_{\overline \mathbb{F}_p})$. By Lemma~\ref{L:non-vanishing-chi} proved below, we have $\chi(\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau)})_{\overline \mathbb{F}_p})\neq 0$, and hence $\deg(\eta'_{\sharp})=p^{v(\eta)}$. (2) Note that $\mathfrak{p}$ splits in $E$, and fix a prime $\mathfrak{q}$ of $E$ dividing $\mathfrak{p}$. We denote by $\tilde\tau$ and $\tilde\tau^-$ the liftings of $\tau$ and $\tau^-$ in $\Sigma_{E,\infty/\mathfrak{q}}$ respectively. We define first a link morphism $(\eta_{\sharp},\eta^{\sharp}): \mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau^-)})_{k_0}\rightarrow \mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau)})_{k_0}$ of indentation degree $2(g-n_{\tau})$ as follows. 
Let $y=(B',\iota_{B'},\lambda_{B'},\beta_{K'})$ be an $\overline \mathbb{F}_p$-point of $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau^-)})_{k_0}$. We define first a lattice $M^{\circ}_{\tilde\tau'}$ of $\tcD^{\circ}_{B',\tilde\tau'}[1/p]$ for each $\tilde\tau'\in \Sigma_{E,\infty}$ as follows: We put \[ M^{\circ}_{\sigma^{i}\tilde\tau}= \begin{cases}\frac{1}{p}\tcD^{\circ}_{B',\sigma^{i}\tilde\tau} &\text{for } i=1,\cdots, n_{\tau^-}=g-n_{\tau},\\ \tcD^{\circ}_{B',\sigma^i\tilde\tau} &\text{for } i=n_{\tau^-}+1,\cdots,g, \end{cases} \] and $M^{\circ}_{\sigma^i\tilde\tau^{c}}=M^{\circ,\vee}_{\sigma^i\tilde\tau}$. One checks easily that $M=\bigoplus_{\tilde\tau'\in \Sigma_{E,\infty}} M^{\circ,\oplus 2}_{\tilde\tau'}$ is stable under the action of Frobenius and Verschiebung homomorphisms, hence a Dieudonn\'e submodule of $\tcD_{B'}[1/p]$. As in the proof of Proposition~\ref{P:Y_S=X_S}, this gives rise to an abelian variety $B''$ equipped with an action by $\mathcal{O}_{D}$ with Dieudonn\'e module $\tcD_{B''}\cong M$. The natural inclusion $M\hookrightarrow \tcD_{B'}[1/p]$ induces an $\mathcal{O}_D$-equivariant $p$-quasi-isogeny $\phi: B'\rightarrow B''$. Since the lattice $M\subseteq \tcD_{B'}[1/p]$ is self-dual by construction, the polarization $\lambda_{B'}$ induces a prime-to-$p$ polarization $\lambda_{B''}$ on $B''$ such that $\lambda_{B'}=\phi^\vee\circ \lambda_{B''}\circ \phi$. Finally, the $K'^p$-level structure $\beta_{K'}$ on $B'$ induces naturally a $K'^p$-level structure $\beta''_{K'}$ on $B''$. Moreover, an easy computation shows that the signatures of $B''$ at $\tilde\tau$ and $\tilde\tau^-$ are respectively $0$ and $2$, and those at other $\tilde\tau'$ not lifting $\tau, \tau^-$ remain the same as those of $B'$. Thus, $(B'',\iota_{B''},\lambda_{B''},\beta''_{K'})$ is a point of $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau)})_{k_0}$. 
Let $$ \eta_{\sharp}\colon \mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau^-)})_{k_0} \to \mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau)})_{k_0} $$ be the map sending $y=(B',\iota_{B'},\lambda_{B'},\beta_{K'})$ to $(B'',\iota_{B''},\lambda_{B''},\beta''_{K'})$, and let \[\eta^{\sharp}\colon \mathbf{A}'_{\tilde\mathtt{S}(\tau^-), k_0}\rightarrow \eta_{\sharp}^*\mathbf{A}'_{\tilde\mathtt{S}(\tau), k_0} \] be the $p$-quasi-isogeny whose base change to each $y$ is $\phi: B'\rightarrow B''$ constructed above. Then it is clear by construction that $(\eta_{\sharp}, \eta^{\sharp})$ is the link morphism of indentation degree ${2(g-n_{\tau})}$ associated to the trivial link from $\mathtt{S}(\tau^-)$ to $\mathtt{S}(\tau)$. Denote by $\pi_{\{\tau,\tau^-\}}: \mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0,\{\tau,\tau^-\}}\xrightarrow{\sim} \mathbf{Sh}_{K'^p\mathrm{Iw}'_{p}}(G'_{\tilde\mathtt{S}(\tau)})_{k_0}$ the isomorphism given by Theorem~\ref{T:main-thm-unitary}. Let $x=(A, \iota_A, \lambda_A, \alpha_{K'})$ be an $\overline \mathbb{F}_p$-point of $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \{\tau, \tau^-\}}$. Then its image $(B,\iota_{B},\lambda_{B},\beta_{K'^\mathfrak{p}},\beta_{\mathfrak{p}})$ under $\pi_{\{\tau,\tau^-\}}$ is characterized as follows: \begin{itemize} \item[(a)] There exists an $\mathcal{O}_D$-equivariant $p$-quasi-isogeny $\phi: B\rightarrow A$ such that $\phi$ induces an isomorphism $\phi_{*}\colon \tcD^{\circ}_{B,\tilde\tau'}\xrightarrow{\sim} \tcD^{\circ}_{A,\tilde\tau'}$ for $\tilde\tau'$ different from $ \sigma^{i}\tilde\tau^-$ with $i=1,\dots, n_{\tau}$ and their conjugates. 
In the exceptional cases, we have \[ \phi_{*}(\tcD^{\circ}_{B,\sigma^i\tilde\tau^-})=\mathrm{Im}(F_{A,\mathrm{es}}^{i}\colon \tcD^{\circ}_{A,\tilde\tau^-}\rightarrow \tcD^{\circ}_{A,\sigma^i\tilde\tau^-}), \quad \text{and} \quad \phi_{*}(\tcD^{\circ}_{B,\sigma^i\tilde\tau^{-,c}})=\frac{1}{p}\mathrm{Im}(F_{A,\mathrm{es}}^i\colon\tcD^{\circ}_{A,\tilde\tau^{-,c}}\rightarrow \tcD^{\circ}_{A,\sigma^i\tilde\tau^{-,c}}). \] \item[(b)] We have $\lambda_{B}=\phi^\vee\circ\lambda_{A}\circ\phi$, and $\beta_{K'^p}=\alpha_{K'^p}\circ\phi$. \item[(c)] Let $H_{\tilde\tau}\subseteq \tcD_{B,\tilde\tau}^{\circ}/p\tcD^{\circ}_{B,\tilde\tau}$ be the one-dimensional subspace given by the image of $p\tcD^{\circ}_{A,\tilde\tau}$ via $\phi_{*}^{-1}$. Then $H_{\tilde\tau}$ is stable under $F_{B,\mathrm{es}}^g$, and $M\colon=\bigoplus_{i=0}^{g-1}F_{B,\mathrm{es}}^i(H_{\tilde\tau})^{\oplus 2}$ is a Dieudonn\'e submodule of $\mathcal{D}_{B[\mathfrak{q}]}=\bigoplus_{i=0}^{g-1}\tcD_{B,\sigma^{i}\tilde\tau}/p\tcD_{B,\sigma^i\tilde\tau}$. Let $H_{\mathfrak{q}}$ be the subgroup scheme of $B[\mathfrak{q}]$ corresponding to $M$. Then the Iwahori level structure of $B$ at $p$ is given by $\beta_{\mathfrak{p}}=H_{\mathfrak{q}}\oplus H_{\bar\mathfrak{q}}$, where $H_{\bar\mathfrak{q}}\subseteq B[\bar\mathfrak{q}]$ is the orthogonal complement of $H_{\mathfrak{q}}$ under the natural duality between $B[\mathfrak{q}]$ and $B[\bar\mathfrak{q}]$. \end{itemize} It is clear that the image of $x$ under $\pi_{\tau}$ is $(B,\iota_{B},\lambda_{B},\beta_{K'^p})$ by forgetting the Iwahori level structure at $p$ of $\pi_{\{\tau,\tau^-\}}(x)$. This shows that, via the isomorphism $\pi_{\{\tau,\tau^-\}}$, the map $\pi_{\tau}|_{\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0,\{\tau,\tau^-\}}}$ coincides with the projection $\mathrm{pr}_1$ in \eqref{E:Hecke-T_q}. 
To finish the proof of (2), it remains to show that $$ \eta_{\sharp}\circ\pi_{\tau^-}\circ\pi_{\{\tau,\tau^-\}}^{-1}: \mathbf{Sh}_{K'^p\mathrm{Iw}'_p}(G'_{\tilde \mathtt{S}(\tau)})_{k_0}\rightarrow \mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau)})_{k_0} $$ is the second projection $\mathrm{pr}_2$ in \eqref{E:Hecke-T_q}. Let $x=(A,\iota_{A},\lambda_{A},\alpha_{K'})$ be a point of $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0,\{\tau,\tau^-\}}$ with image $\pi_{\{\tau,\tau^-\}}(x)=(B,\iota_B,\lambda_B,\beta_{K'^\mathfrak{p}},\beta_{\mathfrak{p}})$, together with the $p$-quasi-isogeny $\phi\colon B\rightarrow A$ as described above. The image of $(A,\iota_{A},\lambda_{A},\alpha_{K'})$ under $\pi_{\tau^-}$ is given by $(B', \iota_{B'}, \lambda_{B'}, \beta_{K'}) \in \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau^-)})_{k_0}$, which admits a quasi-isogeny $\psi: B'\rightarrow A$ compatible with all structures such that $\psi_{*}: \tcD^{\circ}_{B',\tilde\tau'}\cong \tcD^{\circ}_{A,\tilde\tau'}$ for all $\tilde\tau'$ except for those $\tilde\tau'$'s lifting $\sigma^i\tau$ for any $i = 1, \dots, g-n_{\tau}=n_{\tau^-}$. In the exceptional cases, we have $$ \psi_{*}( \tcD^{\circ}_{B',\sigma^i\tilde\tau})=\mathrm{Im}(F_{A,\mathrm{es}}^i\colon \tcD^{\circ}_{A,\tilde\tau}\rightarrow \tcD^{\circ}_{A,\sigma^i\tilde\tau})\quad\text{and}\quad \psi_*(\tcD^{\circ}_{B',\sigma^i\tilde\tau^c})=\frac1 p\mathrm{Im}(F_{A,\mathrm{es}}^i\colon \tcD^{\circ}_{A,\tilde\tau^c}\rightarrow \tcD^{\circ}_{A,\sigma^i\tilde\tau^c}). $$ Consider the composed isogeny $\phi^{-1}\circ\psi\colon B'\rightarrow B$. Let $\tilde H_{\tilde\tau}\subseteq \tcD^{\circ}_{B,\tilde\tau}$ be the inverse image of the one-dimensional subspace $H_{\tilde\tau}\subseteq \tcD^{\circ}_{B,\tilde\tau}/p\tcD^{\circ}_{B,\tilde\tau}$ given by the image of $p\tcD^{\circ}_{A,\tilde\tau}$ as in (c) above. 
Then, we have \[ (\phi^{-1}\circ\psi)_*(\tcD^{\circ}_{B',\sigma^{i}\tilde\tau})= \begin{cases} F_{B,\mathrm{es}}^i(\tilde H_{\tilde\tau})&\text{for }i=1,\cdots, n_{\tau^-}=g-n_{\tau};\\ \frac1pF_{B,\mathrm{es}}^i(\tilde H_{\tilde\tau}) &\text{for }i=n_{\tau^-}+1,\cdots, g; \end{cases} \] and $(\phi^{-1}\circ\psi)_*(\tcD^{\circ}_{B',\sigma^{i}\tilde\tau^c})$ is the orthogonal complement of $(\phi^{-1}\circ\psi)_*(\tcD^{\circ}_{B',\sigma^{i}\tilde\tau})$. Let $(B'',\iota_{B''},\lambda_{B''},\beta''_{K'})$ be the image of $(B', \iota_{B'}, \lambda_{B'}, \beta_{K'}) \in \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau^-)})_{k_0}$ under $\eta_{\sharp}$. Then, the composed $p$-quasi-isogeny $B\xrightarrow{\psi^{-1}\circ\phi} B'\xrightarrow{\eta^{\sharp}}B''$ identifies $\tcD^{\circ}_{B'',\sigma^i\tilde\tau}$ with the lattice $$ \frac{1}{p}F_{B,\mathrm{es}}^{i}(\tilde H_{\tilde\tau})\subseteq \tcD^{\circ}_{B,\sigma^i\tilde\tau}[1/p],\quad \text{for all }i=1,\cdots, g. $$ Thus, one sees immediately that the map $(B,\iota_B, \lambda_B, \beta_{K'^\mathfrak{p}},\beta_{\mathfrak{p}})\mapsto (B'',\iota_{B''},\lambda_{B''},\beta''_{K'})$ is nothing but the second projection in \eqref{E:Hecke-T_q}. This finishes the proof of (2). \end{proof} Our last proposition explains the compatibility of the description of GO-divisors as in Theorem~\ref{T:main-thm-unitary} with respect to the link morphism, especially to the link morphism appearing in Theorem~\ref{T:link and Hecke operator}(1). \begin{prop} \label{P:compatibility of link and GO} Assume that $\#\Sigma_\infty - \#\mathtt{S}_\infty \geq 2$. Let $\tau_0\in \Sigma_{\infty}-\mathtt{S}_{\infty}$, and $\eta: \mathtt{S} \to \mathtt{S}'$ be a link such that all curves are straight lines except for (possibly) one curve turning to the right, linking $\tau_0\in \Sigma_{\infty}-\mathtt{S}_{\infty}$ with $\tau'_0 = \eta(\tau_0) = \sigma^{m(\tau_0)}\tau_0$ for some integer $m(\tau_0)\geq 0$. 
Assume that the link morphism $(\eta'_\sharp, \eta'^\sharp): \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0} \to \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}'})_{k_0}$ of (some) indentation degree $n \in \mathbb{Z}$ associated to $\eta$ exists. The setup automatically implies that $\tau^+_0 = \tau'^+_0$. Let $\tau\in \Sigma_\infty - \mathtt{S}_\infty$. Then the following statements hold: \begin{enumerate} \item[(1)] The link morphism $\eta'_\sharp$ sends the GO-divisor $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \tau}$ into the GO-divisor $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}'})_{k_0, \eta(\tau)}$. \item[(2)] Let $\eta_{\tau}: \mathtt{S}(\tau) \to \mathtt{S}'(\eta(\tau))$ denote the link given by removing from $\eta$ the two curves attached to $\tau$ and $\tau^-$. Then there exists a link morphism $(\eta'_{\tau,\sharp},\eta'^{\sharp}_{\tau})$ (of some indentation degree $m$) from $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau)})_{k_0}$ to $\mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}'(\eta(\tau))})_{k_0}$, associated to the link $\eta_\tau$ such that we have the following commutative diagram of Shimura varieties \begin{equation}\label{E:induced-link-morphism} \xymatrix@C=60pt{ \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}})_{k_0, \tau} \ar[r]^-{\pi_{\tau}} \ar[d]^{\eta'_{\sharp}} & \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}(\tau)})_{k_0} \ar[d]^{\eta'_{\tau,\sharp}} \\ \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}'})_{k_0,\eta(\tau)} \ar[r]^-{\pi_{\eta(\tau)}} & \mathbf{Sh}_{K'}(G'_{\tilde \mathtt{S}'(\eta(\tau))})_{k_0}, } \end{equation} and a similar commutative diagram of quasi-isogenies of universal abelian varieties. 
Moreover, the indentation degree of the link morphism $\eta'_{\tau, \sharp}$ is given by $$m=n+n_{\tau}-n_{\eta(\tau)}(\mathtt{S}')=\begin{cases} 0 &\text{if $\mathfrak{p}$ is inert in $E/F$;}\\ n &\text{if $\mathfrak{p}$ splits in $E/F$ and $\tau\neq \tau_0, \tau_0^+$;}\\ n-m(\tau_0) &\text{if $\mathfrak{p}$ splits in $E/F$ and $\tau=\tau_0$;}\\ n+m(\tau_0) &\text{if $\mathfrak{p}$ splits in $E/F$ and $\tau=\tau_0^+$.} \end{cases} $$ \item[(3)] Suppose moreover that the link $\eta$ and the link morphism $(\eta'_{\sharp}, \eta'^{\sharp})$ are those appearing in Theorem~\ref{T:link and Hecke operator}(1) (so our $\tilde\mathtt{S}$ being $\tilde\mathtt{S}(\tau^+)$, $\mathtt{S}'$ being $\tilde\mathtt{S}(\tau)$, and $\tau_0$ being $\tau^-$ therein, respectively). If $\mathcal{O}_{\tau}(1)$ (resp. $\mathcal{O}_{\eta(\tau)}(1)$) denotes the tautological quotient line bundle on $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0, \tau}$ (resp. on $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}'})_{k_0, \eta(\tau)}$) for the $\mathbb{P}^1$-fibration $\pi_{\tau}$ (resp. $\pi_{\eta(\tau)}$), then we have a canonical isomorphism \begin{equation}\label{E:link-line-bundles} \eta'^{*}_{\sharp}(\mathcal{O}_{\eta(\tau)}(1))\cong \begin{cases} \mathcal{O}_{\tau}(1) &\text{if } \tau\neq \tau_0^+;\\ \mathcal{O}_{\tau}(p^{m(\tau_0)}) &\text{if } \tau=\tau_0^+. \end{cases} \end{equation} Moreover, the induced link morphism $\eta'_{\tau, \sharp}$ is finite flat of degree $p^{v(\eta_{\tau})}$, i.e., it is an isomorphism if $\tau\in \{\tau^{+}_0, \tau_0\}$, and it is finite flat of degree $p^{m(\tau_0)}$ if $\tau\notin \{ \tau_0, \tau_0^{+}\}$. \end{enumerate} The analogous results hold for link morphisms for $\mathbf{Sh}_{K''}(G''_{\tilde \mathtt{S}})_{k_0}$'s. 
\end{prop} \begin{proof} (1) Since $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0, \tau}$ is reduced, it suffices to prove that $\eta'_{\sharp}$ sends every $\overline \mathbb{F}_p$-point of $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0, \tau}$ to $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}'})_{k_0,\eta(\tau)}$. Let $x=(A,\iota, \lambda, \alpha_{K'^p})$ be an $\overline \mathbb{F}_p$-point of $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0, \tau}$, and let $\eta'_{\sharp}(x)=(A',\iota',\lambda',\alpha'_{K'^p})$ be its image. Let $\tilde\tau\in \Sigma_{E,\infty}$ be a place above $\tau$, and put $\tilde\tau^-=\sigma^{-n_{\tau}}\tilde\tau$ and $\tilde\tau^+ =\sigma^{n_{\tau^+}}\tilde\tau$. By Lemma~\ref{Lemma:partial-Hasse}, the condition $h_{\tau}(A)=0$ is equivalent to $F_{\mathrm{es},A}^{n_{\tau}}(\tcD^{\circ}_{A,\tilde\tau^-})=\tilde\omega^{\circ}_{A^\vee,\tilde\tau}$, where $\tilde\omega^{\circ}_{A^\vee,\tilde\tau}\subseteq \tcD^{\circ}_{A,\tilde\tau}$ denotes the inverse image of $\omega^{\circ}_{A^\vee,\tilde\tau}\subseteq \mathcal{D}^{\circ}_{A,\tilde\tau}$. The latter condition is in turn equivalent to that $F_{\mathrm{es}, A}^{n_{\tau^+}}\circ F_{\mathrm{es},A}^{n_{\tau}}(\tcD^{\circ}_{A,\tilde\tau^-})=p\tcD^{\circ}_{A,\tilde\tau^+}$. We set $\eta(\tilde\tau^-)=\sigma^{m(\tau^-)}\tilde\tau^-$, where $m(\tau^-)$ is the displacement of the curve in $\eta$ connecting $\tau^-$ and $\eta(\tau^-)$ (which equals $0$ except when $\tau^-=\tau_0$); similarly, we put $\eta(\tilde\tau^+) = \sigma^{m(\tau^+)}\tilde \tau^+$. Since $\eta'^{\sharp}_{*}: \tcD^{\circ}_{A}[1/p]\rightarrow \tcD^{\circ}_{A'}[1/p]$ commutes with Frobenius and Verschiebung homomorphisms, one sees easily from condition (3) in Definition~\ref{D:link-morphism} that $F_{\mathrm{es},A'}^{n_{\eta(\tau^+)}(\mathtt{S}')} \circ F_{\mathrm{es},A'}^{n_{\eta(\tau)}(\mathtt{S}')}(\tcD^{\circ}_{A',\eta(\tilde\tau^-)})=p^u\tcD^{\circ}_{A',\eta(\tilde\tau^+)}$ for some integer $u\in \mathbb{Z}$. 
Here, $n_{\eta(\tau)}(\mathtt{S}')$ is the integer defined in Notation~\ref{N:n tau} associated to $\eta(\tau)$ for the set $\mathtt{S}'$. But $F_{\mathrm{es},A'}^{n_{\eta(\tau^+)}(\mathtt{S}')} \circ F_{\mathrm{es},A'}^{n_{\eta(\tau)}(\mathtt{S}')}(\tcD^{\circ}_{A',\eta(\tilde\tau^-)})$ is a $W(\overline \mathbb{F}_p)$-sublattice of $\tcD^{\circ}_{A',\eta(\tilde\tau^+)}$ with quotient of length $2$ over $W(\overline \mathbb{F}_p)$. Hence, the integer $u$ has to be $1$. By the same reasoning using Lemma~\ref{Lemma:partial-Hasse}, this is equivalent to saying that $h_{\eta(\tau)}(A')=0$, or equivalently $\eta'_{\sharp}(x)\in \mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}'})_{k_0,\eta(\tau)}$. (2) Assume first that $\#\Sigma_{\infty}-\#\mathtt{S}_{\infty}>2$. By Proposition~\ref{P:restriction of GO strata}(2), $\pi_{\tau}|_{\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0, \{\tau^-, \tau\}}}$ is an isomorphism. We define \[\eta'_{\tau,\sharp}\colon= \pi_{\eta(\tau)}\circ \eta'_{\sharp}\circ (\pi_{\tau}|_{\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0, \{\tau^-, \tau\}}})^{-1} \] and $\eta'^{\sharp}_{\tau}$ as the pull-back via $(\pi_{\tau}|_{\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0, \{\tau^-, \tau\}}})^{-1}$ of the quasi-isogeny \[ \pi_{\tau}^{*}\mathbf{A}'_{\tilde\mathtt{S}(\tau), k_0}\xrightarrow{\phi_{\tau}^{-1}} (\mathbf{A}'_{\tilde\mathtt{S}, k_0}|_{\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0, \tau}})\xrightarrow{\eta'^{\sharp}} \eta'^{*}_{\sharp} (\mathbf{A}'_{\tilde\mathtt{S}',k_0}|_{\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}'})_{k_0,\eta(\tau)}})\xrightarrow{\phi_{\eta(\tau)}} \eta'^*_{\sharp}\pi_{\eta(\tau)}^*(\mathbf{A}'_{\tilde\mathtt{S}(\eta(\tau)), k_0}) \] of abelian schemes on $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0, \tau}$. Here, the first and the third quasi-isogenies are given by Theorem~\ref{T:main-thm-unitary}(3). It is clear that the diagram \eqref{E:induced-link-morphism} is commutative. 
It remains to show that $(\eta'_{\tau,\sharp},\eta'^{\sharp}_{\tau})$ defines a link morphism. Conditions (1) and (2) of Definition~\ref{D:link-morphism} being clear, condition (3) can be verified by a tedious but straightforward computation. To see condition (4) on the indentation degree, we need only discuss the case when $\mathfrak{p}$ splits in $E/F$. In this case, $\phi^{-1}_\tau[\mathfrak{q}^\infty]$ has degree $p^{2n_{\tau}}$, $\eta'^\sharp[\mathfrak{q}^\infty]$ has degree $p^{2n}$, and $\phi_{\eta(\tau)}[\mathfrak{q}^\infty]$ has degree $p^{-2n_{\eta(\tau)}(\mathtt{S}')}$. So the total degree of the quasi-isogeny $\eta'^{\sharp}_{\tau}$ is $m=n+n_{\tau}-n_{\eta(\tau)}(\mathtt{S}')$. A case-by-case discussion proves condition (4) of Definition~\ref{D:link-morphism} on indentation degrees. Assume now $\#\Sigma_{\infty}-\#\mathtt{S}_{\infty}=2$ so that $\Sigma_{\infty}-\mathtt{S}_{\infty}=\{\tau_0, \tau_0^+=\tau_0^{-}\}$. This implies that $\mathfrak{p}$ splits as $\mathfrak{q}\bar\mathfrak{q}$ in $E$. Since the Shimura variety $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau)})_{k_0}$ is zero-dimensional, we just need to define the desired link morphism $(\eta'_{\tau,\sharp}, \eta'^{\sharp}_{\tau})$ on $\overline \mathbb{F}_p$-points. For each $\tilde\tau'\in \Sigma_{E,\infty}$ with restriction $\tau'\in \{\tau_0,\tau_0^-\}$, let $t_{\tilde\tau'}\in \mathbb{Z}$ denote the integer as in Definition~\ref{D:link-morphism}(3) attached to $\tilde\tau'$ for the link morphism $(\eta'_{\sharp},\eta'^{\sharp})$. Let $y=(B,\iota_{B},\lambda_B, \beta_{K'^p})$ be an $\overline \mathbb{F}_p$-point of $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau)})_{k_0}$. We now distinguish two cases: \begin{itemize} \item[(a)] Consider first the case $\tau=\tau_0$. Let $\tilde\tau^-_0$ be the lift of $\tau_0^-$ in $ \Sigma_{E,\infty/\mathfrak{q}}$. 
We define $M_{\tilde\tau_0^-}=p^{-t_{\tilde\tau_0^-}} \tcD^{\circ}_{B,\tilde\tau_0^-}$, and $M_{\sigma^i\tilde\tau_0^-}=p^{-\delta_i}F^{i}(M_{\tilde\tau_0^-})$ for each integer $i$ with $1\leq i\leq g-1$, where $\delta_i$ denotes the number of integers $j$ with $1\leq j\leq i$ such that $\sigma^{j}\tilde\tau_0^-\in \tilde\mathtt{S}'(\eta(\tau))_{\infty}$. Put $M_{\mathfrak{q}}=\bigoplus_{0\leq i\leq g-1}M_{\sigma^i\tilde\tau_0^-}$, and let $M_{\bar\mathfrak{q}}\subseteq \tcD^{\circ}_{B,\bar\mathfrak{q}}[1/p]$ be the dual lattice of $M_{\mathfrak{q}}$ with respect to the pairing induced by $\lambda_B$. Then $M:= M_{\mathfrak{q}}\oplus M_{\bar\mathfrak{q}}$ is a Dieudonn\'e submodule of $\tcD^{\circ}_{B}[1/p]$. By the same argument as in the proof of Proposition~\ref{P:Y_S=X_S}, there exists a unique abelian variety $B'$ equipped with an $\mathcal{O}_D$-action $\iota_{B'}$ together with a $p$-quasi-isogeny $\phi: B\rightarrow B'$ such that the induced map $\phi^{-1}_{*}: \tcD^{\circ}_{B'}\rightarrow \tcD^{\circ}_{B}[1/p]$ is identified with the natural inclusion $M\hookrightarrow \tcD^{\circ}_{B}[1/p]$. As usual, since $M$ is a self-dual lattice, $\lambda_{B}$ induces a prime-to-$p$ polarization $\lambda_{B'}$ such that $\phi^{\vee}\circ\lambda_{B'}\circ\phi=\lambda_{B}$. We equip $B'$ with the $K'^p$-level structure $\beta'_{K'^p}=\phi\circ\beta_{K'^p}$. By construction, one also sees easily that $B'$ satisfies the necessary signature condition so that $y'=(B',\iota_{B'},\lambda_{B'},\beta'_{K'^p})$ is a point of $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}'(\eta(\tau))})_{k_0}$. We define $$\eta'_{\tau, \sharp}\colon \mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau)})_{k_0}\rightarrow \mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}'(\eta(\tau))})_{k_0},\quad \text{and}\quad \eta'^{\sharp}_{\tau}\colon \mathbf{A}'_{\tilde\mathtt{S}(\tau),k_0}\rightarrow \mathbf{A}'_{\tilde\mathtt{S}'(\eta(\tau)), k_0} $$ by $\eta'_{\tau, \sharp}(y)= y'$ and $\eta'^{\sharp}_{\tau, y}=\phi$. 
It is evident that $(\eta'_{\tau,\sharp},\eta'^{\sharp}_{\tau})$ is a link morphism. It remains to check that the diagram \eqref{E:induced-link-morphism} is commutative. Let $x=(A,\iota_{A},\lambda_A,\alpha_{K'^p})\in \mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}})_{k_0,\tau}$ be a point above $y$, and let $x'=(A',\iota_{A'},\lambda_{A'},\alpha'_{K'^p})\in \mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}'})_{k_0,\eta(\tau)}$ be the image of $x$ under $\eta'_{\sharp}$. We need to prove that $\pi_{\eta(\tau)}(x')=y'$. Let $y''=(B'',\iota_{B''},\lambda_{B''},\beta''_{K'^p})\in \mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}'(\eta(\tau))})_{k_0}$ denote temporarily the point $\pi_{\eta(\tau)}(x')$. Denote by $\psi\colon A\rightarrow B$ and $\psi'\colon A'\rightarrow B''$ the quasi-isogenies given by Theorem~\ref{T:main-thm-unitary}. Then we have $\psi_* (\tcD^{\circ}_{A,\tilde\tau_0^-})= \tcD^{\circ}_{B,\tilde\tau_0^-}$ and $\psi'_{*}(\tcD^{\circ}_{A',\tilde\tau_0^-})=\tcD^{\circ}_{B'',\tilde\tau_0^-}$ by Subsection~\ref{S:moduli-Y_S}, and $\eta'^{\sharp}_{*}(\tcD^{\circ}_{A,\tilde\tau_0^-})=p^{t_{\tilde\tau_0^-}}\tcD^{\circ}_{A',\tilde\tau_0^-}$ by Definition~\ref{D:link-morphism}(3) as $\eta$ is a straight line at $\tau_0^-$. Consider the quasi-isogeny $\psi'\circ\eta'^{\sharp}\circ\psi^{-1}\circ\phi^{-1}: B'\rightarrow B''$. It induces an isomorphism $\tcD^{\circ}_{B',\tilde\tau_0^-}\xrightarrow{\sim} \tcD^{\circ}_{B'',\tilde\tau_0^-}$. But the other components of the Dieudonn\'e modules are determined by that at $\tilde\tau_0^-$. It follows that $\psi'\circ\eta'^{\sharp}\circ\psi^{-1}\circ\phi^{-1}: B'\rightarrow B''$ is an isomorphism compatible with all structures, i.e. $y''=y'$. The computation of the indentation degree is the same as that in the case when $\#\Sigma_\infty - \#\mathtt{S}_\infty >2$. \item[(b)] In the case $\tau=\tau_0^+ = \tau_0^-$, the construction is similar. 
Let $\tilde\tau_0$ be the lift of $\tau_0$ in $\Sigma_{E,\infty/\mathfrak{q}}$, and $\eta(\tilde\tau_0)=\sigma^{m(\tau_0)}\tilde\tau_0$. Put $M_{\eta(\tilde\tau_0)}=p^{-t_{\tilde\tau_0}}F^{m(\tau_0)}(\tcD^{\circ}_{B,\tilde\tau_0})$, and $M_{\sigma^{i}\eta(\tilde\tau_0)}=p^{-r_i}F^{i}(M_{\eta(\tilde\tau_0)})$ for each integer $i$ with $1\leq i\leq g-1$, where $r_i$ is the number of integers $j$ with $1\leq j\leq i$ such that $\sigma^j\eta(\tilde\tau_0)\in \tilde\mathtt{S}'(\eta(\tau))_{\infty}$. Let $M_{\mathfrak{q}}=\bigoplus_{0\leq i\leq g-1}M_{\sigma^{i}\eta(\tilde\tau_0)}$, and let $M_{\bar\mathfrak{q}}\subseteq \tcD^{\circ}_{B,\bar\mathfrak{q}}[1/p]$ be the dual lattice. As in case (a) above, such a lattice $M:=M_\mathfrak{q} \oplus M_{\bar \mathfrak{q}}$ gives rise to an $\overline \mathbb{F}_p$-point $y'=(B',\iota_{B'},\lambda_{B'},\beta'_{K'^p})$ together with a $p$-quasi-isogeny $\phi: B\rightarrow B'$ compatible with all structures. We define the desired link morphism $(\eta'_{\tau,\sharp}, \eta'^{\sharp}_{\tau})$ such that $\eta'_{\tau,\sharp}(y)=y'$ and $\eta'^{\sharp}_{\tau,y}=\phi$. The commutativity of \eqref{E:induced-link-morphism} is proved by the same arguments as in (a). We leave the details to the reader. \end{itemize} (3) We note that, if $\tilde\tau^-$ is the lifting of $\tau^-$ not contained in $\tilde\mathtt{S}(\tau)_{\infty}$, then we have a canonical isomorphism $\mathcal{O}_{\tau}(1)\cong \Lie(\mathbf{A}'_{\tilde\mathtt{S}, k_0})^{\circ}_{\tilde\tau^-}$ by the construction of $\pi_{\tau}$; similarly, one has $\mathcal{O}_{\eta(\tau)}(1)\cong \Lie(\mathbf{A}'_{\tilde\mathtt{S}',k_0})^{\circ}_{\eta(\tilde\tau^-)}$. Now the isomorphism \eqref{E:link-line-bundles} follows immediately from Theorem~\ref{T:link and Hecke operator}(1)(b). We prove now the second part of (3). 
By the construction of $\eta'_{\tau}$, it follows from Theorem~\ref{T:link and Hecke operator}(1)(b) that $\eta'^{\sharp}_{\tau}\colon \mathbf{A}'_{\tilde\mathtt{S}(\tau),k_0}\rightarrow \eta'^*_{\tau,\sharp}(\mathbf{A}'_{\tilde\mathtt{S}(\eta(\tau)),k_0})$ induces, for any $\tilde\tau'\in \Sigma_{E,\infty}$ lifting an element of $\Sigma_{\infty}-\mathtt{S}(\eta(\tau))$, an isomorphism \[ \eta'^{*}_{\tau, \sharp}(\Lie(\mathbf{A}'_{\tilde\mathtt{S}(\eta(\tau)),k_0})^{\circ}_{\tilde\tau'})\cong \begin{cases} \Lie(\mathbf{A}'_{\tilde\mathtt{S}(\tau), k_0})^{\circ, (p^{m(\tau_0)})}_{\sigma^{-m(\tau_0)}\tilde\tau'} &\text{if $\tilde\tau'$ lifts $\eta(\tau_0)$,}\\ \Lie(\mathbf{A}'_{\tilde\mathtt{S}(\tau), k_0})^{\circ}_{\tilde\tau'} &\text{otherwise}. \end{cases} \] If $\tau\in \{\tau_0, \tau_0^{+}\}$, or equivalently $\tau_0\in \mathtt{S}(\tau)$, then the first case above never happens. Therefore, by Proposition~\ref{Prop:smoothness}, we see that $\eta'_{\tau, \sharp}$ induces an isomorphism of tangent spaces between $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\tau)})_{k_0}$ and $\mathbf{Sh}_{K'}(G'_{\tilde\mathtt{S}(\eta(\tau))})_{k_0}$. Since $\eta'_{\tau,\sharp}$ is bijective on closed points by the definition of a link morphism, $\eta'_{\tau, \sharp}$ is actually an isomorphism. If $\tau\notin \{\tau_0, \tau_0^+\}$, or equivalently $\tau_0\notin \mathtt{S}(\tau)$, we conclude by the same arguments as in Theorem~\ref{T:link and Hecke operator}(1)(d). \end{proof}
\section{Introduction} Quantum theory predicts that vacuum is never at rest. On average, the electromagnetic field of vacuum has no amplitude, but quantum vacuum fluctuations impose a fundamental uncertainty in its value. This is notably captured in the ground-state energy of a harmonic oscillator (HO) $\hbar\omega_r/2$. When an atom couples off-resonantly to an electromagnetic mode, equivalent to a HO, the quantum vacuum fluctuations of the mode shift the transition frequencies between states of the atom~\cite{lamb_fine_1947}. This effect is called the Lamb shift. If the atom can be modeled as a two-level system (TLS), this interaction is described in the rotating wave approximation (RWA) by the Jaynes-Cummings Hamiltonian~\cite{jaynes_comparison_1963}. The so-called Lamb shift is then given by $-g^2/\Delta$ in the dispersive regime $g\ll |\Delta|$, where $g$ is the coupling strength and $\Delta = \omega_r-\omega_a$ is the frequency detuning between the mode ($\omega_r$) and atom ($\omega_a$). If one replaces the TLS with a HO, a similar effect occurs from normal-mode splitting, where in the dispersive regime, each oscillator acquires a frequency shift due to the presence of the other oscillator. This similarity is not only qualitative: in the RWA parameter regime, a classical calculation of the normal-mode splitting of two HOs also predicts this shift to be $-g^2/\Delta$. A quantum calculation for two HOs gives the same result: this shift for HOs is not influenced by the presence of quantum fluctuations. Extending this further, one can replace the TLS atom with a weakly-anharmonic oscillator, such as a transmon qubit in circuit QED. In circuit-QED experiments, a shift of $-g^2/\Delta$ has also been observed; it has been attributed to vacuum fluctuations and is commonly referred to as the Lamb shift~\cite{fragner_resolving_2008}. 
However, normal-mode splitting of two HOs, which includes no effect of quantum fluctuations, also leads to a shift of the same size. This raises the following question: how much of the dispersive shift in weakly-anharmonic atoms arises from quantum fluctuations? Or equivalently, how much of this shift persists if quantum fluctuations are neglected? Here, we derive analytical expressions for the quantum-fluctuation contribution to the dispersive shift of weakly-anharmonic atoms. We find that for a weakly-anharmonic atom coupled dispersively to a harmonic oscillator, two distinct shifts occur: one is a quantum effect due to vacuum fluctuations, the other arises from normal-mode splitting. To illustrate the described physics, this work focuses on the transmon qubit~\cite{koch_charge-insensitive_2007} coupled to an $LC$-circuit. We follow the approach of transforming the Hamiltonian to its normal-mode basis~\cite{nigg_black-box_2012} and treating anharmonicity as a perturbation. By performing calculations analytically, we gain insight into the origin of the different frequency shifts, and reach accurate approximations of their magnitude, extending expressions previously derived~\cite{koch_charge-insensitive_2007} to regimes of large detuning. Our expression for the AC Stark shift decreases with the square of the frequency of a coupled mode, which notably places strong limitations on the coupling of low-frequency mechanical elements to this type of qubit~\cite{pirkkalainen_hybrid_2013}. \begin{figure*}[t!] \includegraphics[width=0.92\textwidth]{fig1.pdf} \caption{The origin of different energy shifts in a weakly-anharmonic atom. (a) Replacing the linear inductance of an $LC$ oscillator with a Josephson junction (JJ) results in a weakly-anharmonic artificial atom. To first order, the energy level $n$ is shifted proportionally to $\bra{n}\hat{\phi}^4\ket{n}$, where $\ket{n}$ is a Fock state of the harmonic system. 
(b) Two coupled harmonic oscillators undergo normal-mode splitting, resulting in a frequency shift $\delta_\text{NM}$. The flux $\phi$ traversing one of the inductances is then composed of the flux from both normal-mode oscillations $\phi = \phi_a+\phi_r$. Replacing an inductor with a JJ leads to the same shift as in the isolated atom $\chi_a\langle\hat{\phi}_a^4\rangle$, but also to a shift due to quantum fluctuations of the coupled oscillator $\chi_{ar}\langle\hat{\phi}_a^2\rangle \langle\hat{\phi}_r^2\rangle$.} \label{fig:fig1} \end{figure*} \section{Weak anharmonicity: case of the transmon qubit} We define a weakly-anharmonic atom as a harmonic oscillator with a small quartic potential \begin{equation} \hat{H}/\hbar = \underbrace{\omega_a (\hat{a}^\dagger\hat{a}+\frac{1}{2})}_{\hat{H}_\text{HO}}\underbrace{-\frac{\lambda}{12} \left(\hat{a}+\hat{a}^\dagger\right)^4}_{\hat{H}_\text{anh}}\ , \label{eq:hamiltonian_a} \end{equation} where $\hat{a}$ is the annihilation operator for excitations in the atom, $\omega_a$ the atomic frequency and $\lambda$ the anharmonicity. In the limit $\lambda \ll \omega_a$, corrections to the eigen-energies of $\hat{H}_\text{HO}$ due to anharmonicity are to first order equal to $-(\lambda/12) \bra{n}(\hat{a}+\hat{a}^\dagger)^4\ket{n}$, with $\ket{n}$ a number state. We can expand $(\hat{a}+\hat{a}^\dagger)^4$ and only consider terms that preserve the number of excitations $n$, since only they will give a non-zero contribution to the first-order correction \begin{equation} \hat{H}_\text{anh}/\hbar \simeq -\frac{\lambda}{2}\left(\left(\hat{a}^\dagger\hat{a}\right)^2 +\hat{a}^\dagger\hat{a}+ \frac{1}{2}\right)\ , \label{eq:anh_diag} \end{equation} leading to energy levels \begin{equation} E_n/\hbar \simeq (\omega_a-\lambda)\left(n+\frac{1}{2}\right)-\lambda\left(\frac{n^2}{2}-\frac{n}{2}-\frac{1}{4}\right)\ . 
\label{eq:levels} \end{equation} If we write the transition frequencies of the atom $E_n-E_{n-1}=\hbar\omega_a-n\hbar\lambda$, the weakly-anharmonic level structure shown in Fig.~\ref{fig:fig1}(a) becomes apparent. One implementation of this Hamiltonian is the transmon qubit~\cite{koch_charge-insensitive_2007}. In addition to being described by the simple electrical circuit of Fig.~\ref{fig:fig1}(a), this system is highly relevant in many experimental endeavors~\cite{gu2017microwave}, from fundamental experiments in quantum optics~\cite{schuster_resolving_2007,bishop_nonlinear_2009,bosman2017approaching,bosman_multi-mode_2017,kirchmair2013observation}, to quantum simulations~\cite{langford_experimentally_2016} or quantum computing~\cite{takita_demonstration_2016,kelly_state_2015,riste_detecting_2015}. It is constructed from an $LC$ oscillator where the inductor is replaced by the non-linear inductance $L_J\left(I\right)$ of a Josephson junction (JJ). The transmon is weakly-anharmonic if its zero-point fluctuations in current are much smaller than the junction's critical current $I_c$. The current $I$ traversing the JJ when only a few excitations populate the circuit is then much smaller than $I_c$ and $L_J\left(I\right)\simeq L_J\left(1+I^2/2I_c^2\right)$. Intuitively, the expectation value of the current squared $\langle I^2 \rangle$, on which the inductance depends, will increase with the number of excitations in the circuit. So with an increasing number of excitations $n$ in the circuit, the effective inductance of the circuit increases and the energy of each photon number state $E_n$ will tend to decrease with respect to the harmonic case. For a rigorous quantum description of the system, the flux $\phi(t) = \int^t_{-\infty}V(t')dt'$, where $V$ is the voltage across the JJ, is a more practical variable to use than current~\cite{vool2017introduction}. 
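The first-order level structure above can be checked numerically by diagonalizing the Hamiltonian of Eq.~(\ref{eq:hamiltonian_a}) in a truncated Fock space. The sketch below is an illustration, not part of the paper; the truncation dimension and the value $\lambda/\omega_a=0.01$ (the value used in Fig.~\ref{fig:fig2}) are arbitrary choices.

```python
import numpy as np

# Truncated Fock-space check: diagonalize
#   H/hbar = omega_a (a^dag a + 1/2) - (lambda/12)(a + a^dag)^4
# and compare the lowest transitions with E_n - E_{n-1} = omega_a - n*lambda.
N = 60                       # truncation dimension (assumed large enough)
wa, lam = 1.0, 0.01          # units of omega_a; lambda/omega_a = 0.01
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator, a|n> = sqrt(n)|n-1>
x = a + a.T                                # (a + a^dag), real matrix
H = wa * (a.T @ a + 0.5 * np.eye(N)) - (lam / 12) * np.linalg.matrix_power(x, 4)
E = np.linalg.eigvalsh(H)

f01 = E[1] - E[0]            # expected ~ omega_a - lambda
f12 = E[2] - E[1]            # expected ~ omega_a - 2*lambda
```

The residual differences from $\omega_a-n\lambda$ are of second order in $\lambda$, consistent with the first-order perturbative treatment.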
Note that for a linear inductance $L$, the flux $\phi$ is proportional to the current $I$ traversing the inductor: $\phi=L I$. Using the conjugate variables of flux and charge, the Hamiltonian of Eq.~(\ref{eq:hamiltonian_a}) can be shown to describe the transmon~\cite{koch_charge-insensitive_2007}. The anharmonicity is given by the charging energy $\hbar\lambda = e^2/2C$, the atomic frequency by $\omega_a = 1/\sqrt{L_JC}$ and the flux relates to the annihilation operator through $\hat{\phi} = \phi_\text{zpf}(\hat{a}+\hat{a}^\dagger)$, where the zero-point fluctuations in flux are given by $\phi_\text{zpf} = \sqrt{\hbar\sqrt{L_J/C}/2}$. We can recover the intuition gained by describing the system with currents by plotting the eigen-states in the normalized flux basis $\hat{\varphi} = \hat{\phi}/\phi_\text{zpf}$ of the harmonic oscillator in Fig.~\ref{fig:fig1}(a). The fluctuations in flux increase with the excitation number, hence the expectation value of the fourth power of the flux $\langle \hat{\phi}^4\rangle\propto\langle \hat{H}_\text{anh}\rangle$ will increase. The energy of each eigen-state will then decrease, deviating from a harmonic level structure. \begin{figure*}[t!] \includegraphics[width=0.85\textwidth]{fig2_2.pdf} \caption{Fraction of the atomic energy shift due to quantum vacuum fluctuations. (a) Dressed frequency of the ground-to-first-excited state transitions of the harmonic oscillator (black) and atom (blue) as a function of detuning $\Delta = \omega_r-\omega_a$. Bare frequencies ($g=0$) are shown as dashed lines. We fixed $\lambda/\omega_a=0.01$ and $g/\omega_a=0.02$. (b) Total frequency shift $\delta\omega_a$ of the atom, decomposed into its two main components: normal-mode splitting $\delta_\text{NM}$ and a shift resulting from vacuum fluctuations $\chi_{ar}$. Coupling also changes the anharmonicity $\chi_a$; this results in a small shift absorbed here in $\delta_\text{NM}$. 
(c) The vacuum-fluctuations-induced shift $\chi_{ar}$ as a fraction of the total frequency shift of the atom $\delta \omega_a$ for increasing anharmonicity $\lambda$ and fixed detuning $\Delta = \omega_a/4$. For a TLS, all of the energy shift arises from quantum fluctuations, $\chi_{ar}/\delta\omega_a = 1$. In all panels, the dotted lines are computed from Eqs.~(\ref{eq:shifts_beyond}), full lines correspond to a numerical diagonalization of Eq.~(\ref{eq:hamiltonian_a_b_int_anh}). In (c), $\chi_{ar}$ is computed from numerics as half the shift resulting from adding a photon in the oscillator.} \label{fig:fig2} \end{figure*} \section{Coupled harmonic and anharmonic oscillator} \subsection{Normal-mode splitting and quantum-fluctuation-induced shifts} We now study the effect of coupling a harmonic oscillator to the atom. When an $LC$ oscillator is connected capacitively to a transmon (see Fig.~\ref{fig:fig1}(b)), circuit quantization~\cite{vool2017introduction} leads to the Hamiltonian \begin{align} \begin{split} \hat{H}/\hbar &= (\omega_a+\lambda) \hat{a}^\dagger\hat{a}-\frac{\lambda}{12} \left(\hat{a}+\hat{a}^\dagger\right)^4\\ &+\omega_r \hat{b}^\dagger\hat{b}+g \left(\hat{a}-\hat{a}^\dagger\right)\left(\hat{b}-\hat{b}^\dagger\right)\ . \label{eq:hamiltonian_a_b_int_anh} \end{split} \end{align} Here $\hat{b}$ is the annihilation operator for photons in the resonator, $\omega_r$ its frequency and $g$ the coupling strength. Compared to the Hamiltonian of Eq.~(\ref{eq:hamiltonian_a}), we replaced the frequency $\omega_a$ scaling the atomic number operator with $\omega_a+\lambda$. Doing so will ensure that $\omega_a$ corresponds to the frequency of the first atomic transition, independent of the anharmonicity $\lambda$, as proven by Eq.~(\ref{eq:levels}). 
We also omitted the ground-state energies $\hbar\omega_r/2$ and $\hbar(\omega_a+\lambda)/2$ in this Hamiltonian; even though vacuum fluctuations are at the origin of these omitted terms, their presence plays no role in calculating the transition frequencies of the system. To describe the dispersive regime $g\ll|\Delta|$ of this interaction, we first move to the normal-mode basis, as described in App.~1. We introduce normal-mode frequencies $\bar{\omega}_{r}$, $\bar{\omega}_{a} = \omega_{a}-\delta_\text{NM}$ and operators $\hat{\alpha},\hat{\beta}$ which eliminate the coupling term in Eq.~(\ref{eq:hamiltonian_a_b_int_anh}) whilst preserving canonical commutation relations \begin{align} \begin{split} \hat{H}/\hbar&=(\bar{\omega}_a+\lambda) \hat{\alpha}^\dagger\hat{\alpha}+\bar{\omega}_r \hat{\beta}^\dagger\hat{\beta}\\ &\underbrace{- \frac{1}{12}\left(\chi_a^{1/4}\left(\hat{\alpha} + \hat{\alpha}^\dagger\right)+\chi_r^{1/4}\left(\hat{\beta}+\hat{\beta}^\dagger\right)\right)^4}_{\hat{H}_\text{anh}}\ . \label{eq:hamiltonian_a_b_anh} \end{split} \end{align} The operators $\hat{\alpha},\hat{\beta}$ have a linear relation to $\hat{a},\hat{b}$, which determines the value of $\chi_a$ and $\chi_r$ (see App.~1). Expanding the anharmonicity leads to \begin{align} \begin{split} \hat{H}_\text{anh}/\hbar=&-\frac{\chi_{a}}{2}\left(\left(\hat{\alpha}^\dagger\hat{\alpha}\right)^2 +\hat{\alpha}^\dagger\hat{\alpha}+ \frac{1}{2}\right)\\ &-\frac{\chi_{r}}{2}\left(\left(\hat{\beta}^\dagger\hat{\beta}\right)^2 +\hat{\beta}^\dagger\hat{\beta}+ \frac{1}{2}\right)\\ &-2\chi_{ar}\left(\hat{\alpha}^\dagger\hat{\alpha}+\frac{1}{2}\right)\left(\hat{\beta}^\dagger\hat{\beta}+\frac{1}{2}\right)\ ,\\ \label{eq:shifts} \end{split} \end{align} if we neglect terms which do not preserve excitation number, irrelevant to first order in $\lambda$. 
This approximation is valid for $\lambda \ll |\Delta|,|3\omega_a-\omega_r|,|\omega_a-3\omega_r|$, which notably excludes the straddling regime~\cite{koch_charge-insensitive_2007}. The anharmonicities (or self-Kerr coefficients) $\chi_a$ and $\chi_r$ of the normal-mode-split atom and resonator are related to the AC Stark shift (or cross-Kerr) $2\chi_{ar}$ through \begin{equation} \chi_{ar}=\sqrt{\chi_{a}\chi_{r}}\ . \end{equation} The AC Stark shift is the change in frequency one mode acquires as a function of the number of excitations in the other. The appearance of an AC Stark shift and of the resonator's anharmonicity can be understood from the mechanism of normal-mode splitting. When the transmon and $LC$ oscillator dispersively couple, the normal mode corresponding to the $LC$ oscillator will be composed of currents oscillating through its inductor but also partly through the JJ. We can decompose the current $I$ traversing the JJ into the current corresponding to atomic excitations $I_a$ and resonator excitations $I_r$. In Eq.~(\ref{eq:hamiltonian_a_b_anh}), this appears in terms of flux as $\phi=\phi_a+\phi_r\propto \chi_a^{1/4}(\hat{\alpha} + \hat{\alpha}^\dagger)+\chi_r^{1/4}(\hat{\beta}+\hat{\beta}^\dagger)$. Consequently, the value of the JJ inductance is not only dependent on the number of excitations in the atom but also in the resonator. Since the frequencies of the normal-mode-split transmon and resonator depend on the value of this inductance, the atomic frequency is a function of the number of excitations in the resonator (AC Stark effect), and the resonator frequency changes as it is excited (the resonator acquires some anharmonicity). Even when the resonator mode is in its ground state, vacuum current fluctuations shift the atomic frequency. 
This can be verified by the presence of the $1/2$ terms in the cross-Kerr term of Eq.~(\ref{eq:shifts}), which arise from the commutation relations $[\hat{\alpha},\hat{\alpha}^\dagger]=[\hat{\beta},\hat{\beta}^\dagger]=1$, mathematically at the origin of vacuum fluctuations. To summarize, compared to an isolated harmonic oscillator, the energy levels of the coupled atom are shifted by: \textit{(1)} normal-mode splitting $\delta_\text{NM}$, \textit{(2)} its anharmonicity $\chi_a$ which arises from the quantum fluctuations of its eigen-states, and \textit{(3)} the shift proportional to $\chi_{ar}$ arising from the quantum fluctuations of the resonator it is coupled to. These different effects are depicted in Fig.~\ref{fig:fig1}(b). In Fig.~\ref{fig:fig2}(a,b), we show how these shifts manifest in a typical experimental setting where the detuning between the atom and resonator is varied, without explicitly showing contribution \textit{(2)}. Off resonance, both modes are slightly shifted with respect to their un-coupled frequencies, and our theory allows us to distinguish the different effects which contribute to this shift. \subsection{Analytical expression of the shifts in the RWA} In the RWA $g\ll|\Delta|\ll\Sigma$, where $\Sigma = \omega_a +\omega_r$, the following approximations hold \begin{align} \begin{split} \bar{\omega}_a &= \omega_a- \delta_\text{NM} \simeq \omega_a- \frac{g^2}{\Delta} - \lambda\frac{g^2}{\Delta^2}\ ,\\ \bar{\omega}_r &\simeq \omega_r+ \frac{g^2}{\Delta} + \lambda\frac{g^2}{\Delta^2}\ ,\\ \chi_a &\simeq \lambda\left(1-2\frac{g^2}{\Delta^2}\right)\ ,\\ \chi_r &= \mathcal{O}(g^4)\ ,\\ \chi_{ar} &\simeq \lambda\frac{g^2 }{\Delta^2}\ , \label{eq:shifts_rwa} \end{split} \end{align} valid to leading order in $g$ and $\lambda$. The expression for the AC Stark shift was also derived by Koch \textit{et al.}~\cite{koch_charge-insensitive_2007} from perturbation theory, given in the form $\lambda g^2/\Delta(\Delta-\lambda)$. 
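A quick consistency check on the RWA expressions above (an illustrative sketch, not part of the paper): the combination $\lambda-\delta_\text{NM}-\chi_a-\chi_{ar}$ collapses identically to $-g^2/\Delta$, independently of parameter values.

```python
import random

random.seed(0)

def shifts_rwa(g, Delta, lam):
    """Leading-order RWA expressions of Eqs. (7) for delta_NM, chi_a, chi_ar."""
    delta_NM = g**2 / Delta + lam * g**2 / Delta**2
    chi_a = lam * (1 - 2 * g**2 / Delta**2)
    chi_ar = lam * g**2 / Delta**2
    return delta_NM, chi_a, chi_ar

# The total ground-state shift lam - delta_NM - chi_a - chi_ar cancels
# exactly to -g^2/Delta for any parameter values:
for _ in range(100):
    g = random.uniform(0.01, 0.1)
    Delta = random.uniform(0.2, 1.0)
    lam = random.uniform(0.001, 0.05)
    delta_NM, chi_a, chi_ar = shifts_rwa(g, Delta, lam)
    total = lam - delta_NM - chi_a - chi_ar
    assert abs(total + g**2 / Delta) < 1e-12
```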
Applying perturbation theory to the Hamiltonian of Eq.~(\ref{eq:hamiltonian_a_b_int_anh}), however, fails to predict the correct shift beyond the RWA and does not make the distinction between the physical origins of the different shifts. Following Eqs.~(\ref{eq:shifts_rwa}), the total shift acquired when the resonator is in its ground state, $\delta\omega_a=\lambda-\delta_\text{NM}-\chi_a-\chi_{ar}$, is equal to $-g^2/\Delta$. This shift is equal to that of a harmonic oscillator coupled to another harmonic oscillator (here, the case $\lambda=0$) as well as that of a TLS coupled to a harmonic oscillator. The fact that the total shift has the same magnitude in these three different systems can easily lead to confusion as to its origin. In particular, the shift of a TLS is a purely quantum effect, whereas that of two coupled harmonic oscillators can be quantitatively derived from classical physics, and the weakly-anharmonic system lies somewhere in between. This confusion can now be addressed: for a weakly-anharmonic system, there is a contribution from normal-mode splitting and one from vacuum fluctuations, which can both be quantified, and the former is much larger than the latter. This also explains why earlier work~\cite{fragner_resolving_2008} found the Stark shift per photon to be smaller than the Lamb shift: vacuum fluctuations were not the only measured effect; normal-mode splitting also contributed greatly to the measured shift. The proportion to which the total shift is due to vacuum fluctuations, as a function of anharmonicity, is shown in Fig.~\ref{fig:fig2}(c). \subsection{Beyond the RWA} \begin{figure}[t!] \includegraphics[width=0.38\textwidth]{fig3.pdf} \caption{Vacuum-fluctuations-induced shift $\chi_{ar}$ beyond the RWA, fixing $\lambda/\omega_a=0.01$ and $g/\omega_a=0.02$. 
Numerical calculations (full red line) are compared to the analytical expression of Eq.~(\ref{eq:shifts_beyond}) (dashed blue line) and Eq.~(\ref{eq:shifts}) (dashed green line). Resonances invalidating our approximations are denoted by red bars.} \label{fig:fig3} \end{figure} Beyond the RWA, in regimes of large detuning $g\ll|\Delta|\sim\Sigma$, the approximate expressions of the different shifts are given by \begin{align} \begin{split} \bar{\omega}_a &\simeq \omega_a- g^2\frac{2\omega_r}{\Delta\Sigma}- 4\lambda g^2\frac{\omega_r\omega_a }{\Delta^2\Sigma^2}\ ,\\ \bar{\omega}_r &\simeq \omega_r+ g^2\frac{2\omega_a}{\Delta\Sigma}+ 4\lambda g^2\frac{\omega_a^2 }{\Delta^2\Sigma^2}\ ,\\ \chi_a &\simeq \lambda\left(1-4g^2\frac{\omega_r \left(\omega_a^2 + \omega_r^2\right)}{\omega_a \Delta^2 \Sigma^2}\right)\ ,\\ \chi_r &= \mathcal{O}(g^4)\ ,\\ \chi_{ar} &\simeq 4\lambda g^2\frac{ \omega_r^2}{\Delta^2\Sigma^2}\ . \label{eq:shifts_beyond} \end{split} \end{align} An important difference with the RWA is that the AC Stark shift $2\chi_{ar}$ scales with $\omega_r^2$, decreasing with the frequency of a coupled resonator as shown in Fig.~\ref{fig:fig3}. This notably explains why the transmon is insensitive to low-frequency charge fluctuations as compared to the highly anharmonic Cooper pair box. It also explains why the transmon is not adapted to measuring individual quanta of far off-resonant systems such as low-frequency mechanical oscillators~\cite{pirkkalainen_hybrid_2013}. Contrary to the AC Stark shift in the RWA, this expression cannot be derived by applying perturbation theory to Eq.~(\ref{eq:hamiltonian_a_b_int_anh}). The different shifts which arise from this method and from perturbation theory are compared to the case of two coupled harmonic oscillators and to the two-level-system case in Supplementary Table~S1 and Fig.~S2~\cite{SI}. 
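The expression for $\chi_{ar}$ in Eqs.~(\ref{eq:shifts_beyond}) can be cross-checked against a brute-force numerical diagonalization of Eq.~(\ref{eq:hamiltonian_a_b_int_anh}), in the spirit of the numerics of Fig.~\ref{fig:fig3}. The sketch below is an illustration, not the authors' code; the truncation dimensions and the choice $\omega_r/\omega_a = 1.4$ are arbitrary assumptions.

```python
import numpy as np

def destroy(n):
    """Truncated annihilation operator, a|m> = sqrt(m)|m-1>."""
    return np.diag(np.sqrt(np.arange(1, n)), 1)

# Two-mode Hilbert space (atom ⊗ resonator); lambda and g as in Fig. 3
na, nr = 12, 12
wa, wr, lam, g = 1.0, 1.4, 0.01, 0.02
a = np.kron(destroy(na), np.eye(nr))
b = np.kron(np.eye(na), destroy(nr))
x = a + a.T
H = ((wa + lam) * a.T @ a - (lam / 12) * np.linalg.matrix_power(x, 4)
     + wr * b.T @ b + g * (a - a.T) @ (b - b.T))
evals, evecs = np.linalg.eigh(H)

def dressed_energy(i, j):
    """Energy of the eigenstate with the largest overlap with bare |i>_a |j>_r."""
    bare = np.zeros(na * nr)
    bare[i * nr + j] = 1.0
    return evals[np.argmax(np.abs(evecs.T @ bare))]

# AC Stark shift per photon = 2*chi_ar: atomic frequency with 0 vs 1 resonator photon
two_chi_ar = ((dressed_energy(1, 0) - dressed_energy(0, 0))
              - (dressed_energy(1, 1) - dressed_energy(0, 1)))
chi_ar_pred = 4 * lam * g**2 * wr**2 / ((wr - wa)**2 * (wr + wa)**2)
```

Away from the resonances marked in Fig.~\ref{fig:fig3}, the numerically extracted $\chi_{ar}$ agrees with the leading-order prediction up to corrections of higher order in $\lambda$ and $g$.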
\section{Summary and conclusions} In conclusion, we presented a method to separate normal-mode splitting from the consequences of quantum fluctuations in the Hamiltonian of a weakly-anharmonic atom coupled to a harmonic oscillator. Through our theory, we reveal the physical origin of the different energy shifts arising in such a system. The main result is that only a small fraction of the total frequency shift can be attributed to quantum vacuum fluctuations, the dominant part being due to normal-mode splitting. We prove that this small fraction can be experimentally measured as half the Stark shift per photon, for example in Ref.~\cite{fragner_resolving_2008}. Extending this work to natural atoms (which are not perfect two-level systems either) also seems promising. Experiments in cavity QED show that the Lamb shift of natural atoms can be 40\% larger than half the Stark shift per photon~\cite{brune1994lamb}. As derived in this work, this indicates that the shift is not purely driven by quantum fluctuations. Since the original picture of the Lamb shift is of a phenomenon driven by quantum fluctuations, our results raise questions about the terminology and interpretation of experiments in cavity and circuit QED. In particular, should one reserve the terminology ``Lamb shift'' for only the part of the dispersive shift that arises from quantum fluctuations? In addition to addressing this fundamental question, we expect that the expressions derived in Eqs.~(\ref{eq:shifts_rwa}) and~(\ref{eq:shifts_beyond}), as well as our approach to studying this Hamiltonian, will become practical tools for experimental efforts in circuit QED. \section*{ACKNOWLEDGMENTS} This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Grant Agreement No. 
681476 - QOMD), and from the Netherlands Organisation for Scientific Research (NWO) in the Innovational Research Incentives Scheme – VIDI, project 680-47-526.
\section{Background and Related Work} \label{sec:Background and Related Work} In the following, the background of the theory and technologies behind the framework is briefly introduced. Furthermore, the developments and ideas of the existing literature in the same area of research are described. \subsection{Genetic Algorithms} As mentioned above, \textit{Genetic Algorithms}, described in \cite{Goldberg1989} as a metaheuristic technique, can find reasonable solutions within a reasonable time for problems for which exact techniques are unsuitable. \textit{GAs} simulate several aspects of \enquote{Darwin's Evolution Theory} in order to enable them to converge towards optimal or near-optimal solutions. The main concept behind these metaheuristic algorithms is \textit{robustness}, the balance between efficiency and efficacy necessary for survival in many different environments. Unfortunately, in real-world applications the search space is fraught with discontinuities and vast multimodal regions, so that it is noisy, as shown in figure \ref{fig:A noisy state-space landscape}. In order to move carefully through this \textit{search space}, \textit{heuristics} are needed. Some widely accepted search procedures simply lack this quality, but this does not imply that these algorithms are not useful. Indeed, they have been used successfully in many applications, but only where the search domain is limited. \begin{figure}[htbp] \centering \input{figures/background_and_related_work/noisy_state-space_landscape} \caption{A noisy state-space landscape} \label{fig:A noisy state-space landscape} \end{figure} \textit{GAs} differ from more traditional metaheuristic techniques in many ways. 
They: \begin{itemize} \item Work with a coding of the parameter set, not the parameters themselves; \item Search from a population of states, not a single point; \item Use the objective function (\textit{fitness function}) information, not auxiliary knowledge; \item Use probabilistic transition rules, not deterministic rules. \end{itemize} In \textit{GAs} the new individuals are generated by two parents selected among the current population individuals for the so-called \textit{sexual reproduction}. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{figures/background_and_related_work/genetic_algorithm} \caption[The execution of a Genetic Algorithm]{The execution of a \textit{Genetic Algorithm} (Adapted from \cite{Russell2010})} \label{fig:The execution of a Genetic Algorithm} \end{figure} As can be seen in figure \ref{fig:The execution of a Genetic Algorithm} (a), \textit{GAs} begin with a set of $k$ randomly generated states called the \enquote{population}. Each state is called an \enquote{individual} and is represented as a string over a finite alphabet. This string is called a \enquote{chromosome} and each symbol a \enquote{gene}. Every iteration of the algorithm generates a new population and consists of the following steps: \begin{itemize} \item \textbf{Selection:} Each state is evaluated by the \textit{fitness function} (b). Each value influences the random choice among the successors for the next step. Once chosen, the $k$ successors are grouped into couples (c); \item \textbf{Crossover:} The algorithm chooses for each couple a division point, which is called the \textit{crossover point}. 
At this point the sexual reproduction (d), in which two children are created, begins: the first takes the first part of the first parent and the second part of the second parent; the other takes the second part of the first parent and the first part of the second parent; \item \textbf{Mutation:} When the \textit{offspring} are generated, each \textit{gene} is subjected to a random \textit{mutation} with a small independent probability (e). \end{itemize} Depending on the code used to represent the feasible solutions of a given problem, the representation of the genetic code can condition the \textit{crossover} and \textit{mutation} steps. For example, if the \textit{chromosome} is composed of concatenated values, a simple division may not make any sense. The mathematical explanation of why \textit{GAs} work was given by \enquote{Holland's Schema Theorem} in \cite{Holland1995}, which says: \begin{quote} \itshape Short, low order, above average schemata receive exponentially increasing trials in subsequent generations of a Genetic Algorithm. \end{quote} A \textit{schema} is a string (\textit{chromosome}) in which some of the positions (\textit{genes}) can be left unspecified. For example, with a binary alphabet representation, the \textit{schema} $01*0$ describes all \textit{chromosomes} (\textit{instances} of the \textit{schema}) in which the first, the second and the fourth \textit{genes} are fixed, which are $0100$ and $0110$. \textit{Holland} showed that if the average \textit{fitness value} of the instances of a \textit{schema} is above the average, the number of instances of the \textit{schema} within the population will grow over time. This justifies the choice of selecting with higher probability individuals that have a higher \textit{fitness value}, and the need to use \textit{mutation} to shake things up. 
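The selection, crossover and mutation steps described above can be sketched in a few lines of Python. This is an illustrative toy example, not part of this work: the \textit{fitness function} simply counts 1-genes (the classic OneMax problem), and all parameter values are arbitrary choices.

```python
import random

random.seed(7)

def fitness(chromosome):
    """Fitness function: here simply the number of 1-genes (OneMax)."""
    return sum(chromosome)

def select(population):
    """Fitness-proportionate random choice of two parents (step (c))."""
    weights = [fitness(c) + 1 for c in population]   # +1 avoids all-zero weights
    return random.choices(population, weights=weights, k=2)

def crossover(p1, p2):
    """Single random crossover point; two children are created (step (d))."""
    point = random.randrange(1, len(p1))
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(chromosome, rate=0.01):
    """Each gene flips with a small independent probability (step (e))."""
    return [1 - gene if random.random() < rate else gene for gene in chromosome]

def genetic_algorithm(k=20, genes=32, generations=100):
    population = [[random.randint(0, 1) for _ in range(genes)] for _ in range(k)]
    for _ in range(generations):
        offspring = []
        while len(offspring) < k:
            for child in crossover(*select(population)):
                offspring.append(mutate(child))
        population = offspring[:k]
    return max(population, key=fitness)

best = genetic_algorithm()
```

After a few generations the best individual is clearly fitter than a random chromosome, whose expected fitness is half the number of genes.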
Moreover, it is shown that since each individual contains a large number of different \textit{schemas}, during each generation the number of \textit{schemas} implicitly processed is in the order of $k^3$, where $k$ is the number of individuals. Understanding whether \textit{GAs} provide near-optimal solutions is the object of study of the \enquote{Building-Block Hypothesis}, which has yet to be confirmed. \subsection{Hadoop MapReduce} The term \textit{Hadoop} \cite{White2012} comprises a family of many related projects with the same infrastructure for distributed computing and large-scale data processing. It is best known for the \textit{MapReduce} algorithm, shown below, and its distributed file system \textit{HDFS}, which runs on large clusters of commodity machines. \textit{Hadoop} was created by \textit{Doug Cutting} and has its origins in \textit{Apache Nutch}, an open source web search engine. In January 2008 \textit{Hadoop} was made a top-level project at \textit{Apache}, attracting a large, active community, including \textit{Yahoo!}, \textit{Facebook} and \textit{The New York Times}. At present, \textit{Hadoop} is a solid and well-established presence in the world of cloud computing. \textit{MapReduce} is a programming model whose origins lie in functional programming. It was adapted by \textit{Google} \cite{Dean2004} as a system for building its search indexes and for large-scale distributed data processing. It was written in the \textit{C++} language and provided as a framework, in order to simplify the development of its applications. In \textit{Hadoop}, programs are written mainly in the \textit{Java} language, but it is also possible, through a mechanism called \enquote{streaming}, to develop programs in any language that supports the \textit{standard I/O}. \textit{MapReduce} is a batch query processor and the entire dataset is processed for each query. 
It is a linearly scalable programming model where the user programs at least two functions: the \enquote{map} function and the \enquote{reduce} function. These functions process the data in terms of key/value pairs and are unaware of the size of the data or of the cluster they are operating on, so they can be used unchanged either for a small dataset or for a massive one. \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{figures/background_and_related_work/hadoop_mapreduce} \caption{\textit{Hadoop MapReduce}} \label{fig:Hadoop MapReduce} \end{figure} A typical \textit{MapReduce} program (figure \ref{fig:Hadoop MapReduce}) starts on a single machine using a piece of code called the \enquote{driver}. This launches and manages the execution, called a \enquote{job}, of the entire distributed program on the cluster. Then several components operate at different stages: \begin{itemize} \item \textbf{Splitter:} The input (a) is often in the form of simple text, which consists of one or more files stored in the distributed file system \textit{HDFS}. It undergoes a first treatment by the \textit{Splitter} (b). Depending on the criteria used, it first creates the key/value pairs, called \enquote{records}, and sends them directly to an available \textit{Mapper} on the cluster. The function, where $k_1$ and $v_1$ denote data types, is described as: \begin{equation*} \text{split} : \textit{input} \to \operatorname{list}\left(\textit{k}_{1}, \textit{v}_{1}\right)_{S} \end{equation*} \item \textbf{Mapper:} It is the first function of the core of \textit{Hadoop}. A \textit{Mapper} (c) runs a process called \enquote{MapTask} on a machine of the cluster.
Once the input has been given, it produces a list of records according to the algorithm described by the programmer: \begin{equation*} \text{map} : \left(\textit{k}_{1}, \textit{v}_{1}\right)_{S} \to \operatorname{list}\left(\textit{k}_{2}, \textit{v}_{2}\right)_{M} \end{equation*} \item \textbf{Combiner:} It is also called the \enquote{Local Reducer} and it is an optional component. The \textit{Combiner} function does not replace the \textit{reduce} function but it can help cut down the amount of data exchanged between \textit{Mappers} and \textit{Reducers}. It (d) runs on the same machine that ran the \textit{MapTask} and it computes new pairs having the same key: \begin{equation*} \text{combiner} : \left(\textit{k}_{2}, \operatorname{list}\left(\textit{v}_{2}\right)\right)_{M} \to \left(\textit{k}_{2},\textit{v}_{2}\right)_{M^1} \end{equation*} \item \textbf{Partitioner:} It (e) establishes the criteria by which records are assigned to a \textit{Reducer}. This is also called the \enquote{shuffle operation} (f) and ensures that records with the same key will be assigned to the same \textit{Reducer}. If not directly specified, the default \textit{Partitioner} acts like a hash function on keys: \begin{equation*} \text{partitioner} : \textit{k}_{2} \to \operatorname{hash}\left(\textit{k}_{2}\right) \end{equation*} \item \textbf{Reducer:} Finally, the \textit{Reducer} (g) concludes the job. If a \textit{Partitioner} with a hash on keys has been used, it can process all the records with the same key created by the whole cluster: \begin{equation*} \text{reduce} : \left(\textit{k}_{2}, \operatorname{list}\left(\textit{v}_{2}\right)\right)_{M^1} \to \left(\textit{k}_{3}, \textit{v}_{3}\right)_R \end{equation*} \end{itemize} Once the job has been completed, the final records are written to the output files in \textit{HDFS} (h), one file per \textit{Reducer}.
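The pipeline of stages above can be simulated sequentially in miniature. The sketch below is illustrative Python, not Hadoop's actual API: it runs split, map, partition/shuffle and reduce over a toy word-count input, with the combiner omitted for brevity.

```python
from collections import defaultdict

def run_mapreduce(text, map_fn, reduce_fn, n_reducers=2):
    # Splitter: one (key, value) record per line of input.
    records = [(line, None) for line in text.splitlines()]
    # Mappers: each record yields a list of intermediate pairs.
    intermediate = [pair for rec in records for pair in map_fn(rec)]
    # Partitioner + shuffle: hash keys so equal keys meet in one reducer.
    partitions = defaultdict(lambda: defaultdict(list))
    for key, value in intermediate:
        partitions[hash(key) % n_reducers][key].append(value)
    # Reducers: one reduce call per key, over the list of its values.
    output = {}
    for part in partitions.values():
        for key, values in part.items():
            output[key] = reduce_fn(key, values)
    return output

def word_count_map(record):
    line, _ = record
    return [(word, 1) for word in line.split()]

def word_count_reduce(word, counts):
    return sum(counts)

result = run_mapreduce("a b a\nb b c", word_count_map, word_count_reduce)
```

The shuffle guarantees that every occurrence of a key lands in the same reducer, which is exactly what the default hash \textit{Partitioner} above provides.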
\subsection{Related Work} The existing literature proposes some parallel versions of \textit{GAs} using the \textit{MapReduce} paradigm. The first, named \enquote{MRPGA} \cite{Jin2008} and based on \textit{.Net}, is an extension of \textit{MapReduce} obtained by adding a second \textit{Reducer}. In this implementation a coordinator client manages the executions of the parallel \textit{GA} iterations. The chosen model is the island model, in which each participating node computes the \textit{GA} operations for a portion of the entire population. In the first phase, each \textit{Mapper} node receives its own portion of the population and computes the \textit{fitness value} for each of its individuals. The \textit{Reducer} nodes of the first reduce phase receive the individuals of the corresponding island and apply the \textit{selection} function. The final \textit{Reducer} computes the global \textit{selection} and the other following \textit{GA} functions. Another approach, this time developed on \textit{Hadoop}, is presented by \cite{Verma2009}. The number of \textit{Mapper} nodes and that of the \textit{Reducer} nodes are unrelated. The \textit{Mapper} nodes compute the \textit{fitness} function and the \textit{Reducer} nodes a local \textit{selection} followed by the other \textit{GA} functions. The substantial difference with \textit{MRPGA} lies in the fact that the partitioner supplies a sort of \enquote{migration} among the individuals, as it randomly sends the outcome of the \textit{Mapper} nodes to different \textit{Reducer} nodes. In \cite{DiMartino2012} the three main grains of parallelism are described and implemented by exploiting the \textit{MapReduce} paradigm: \begin{enumerate} \item \textit{Fitness Evaluation Level} (\textit{Global Parallelisation Model}); \item \textit{Population Level} (\textit{Coarse-grained Parallelisation Model} or \textit{Island Model}); \item \textit{Individual Level} (\textit{Fine-grain Parallelisation Model} or \textit{Grid Model}).
\end{enumerate} In the \textit{Global Parallelisation Model} (figure \ref{fig:Global Parallelisation Model}) a master node manages the population and computes all the \textit{GA} functions for it, but the fitness evaluation is delegated to the slave nodes. The model is adapted to \textit{MapReduce} by delegating to some \textit{Mappers} the task of evaluating the \textit{fitness value} for each individual in parallel. Then, the single \textit{Reducer} collects the results and performs the other \textit{GA} operations. One generation corresponds to one \textit{MapReduce} execution, so that the whole computation is a sequence of \textit{MapReduce} executions. \begin{figure}[htbp] \centering \includegraphics[width=0.8\linewidth]{figures/background_and_related_work/global_parallelisation_model} \caption[Global Parallelisation Model]{\textit{Global Parallelisation Model} (Adapted from \cite{Luque2011})} \label{fig:Global Parallelisation Model} \end{figure} In the \textit{Coarse-grained Parallelisation Model} (figure \ref{fig:Coarse-grain Parallelisation Model}) the population is subdivided into \enquote{islands} and the \textit{GA} is independently run on each of them. Periodically the islands exchange information by \enquote{migrating} some individuals. Here the number of \textit{Reducers} is higher than in the previous model. After the \textit{fitness values} have been computed in the \textit{Mappers}, a \textit{Partitioner} assigns each island to a different \textit{Reducer} in order to compute the other \textit{GA} functions in parallel.
\begin{figure}[htbp] \centering \includegraphics[width=0.9\linewidth]{figures/background_and_related_work/coarse-grain_parallelisation_model} \caption[Coarse-grain Parallelisation Model]{\textit{Coarse-grain Parallelisation Model} (Adapted from \cite{Luque2011})} \label{fig:Coarse-grain Parallelisation Model} \end{figure} In the \textit{Fine-grain Parallelisation Model} (figure \ref{fig:Fine-grain Parallelisation Model}) each individual is placed on a grid and the \textit{GA} operations are performed in parallel by simultaneously evaluating the \textit{fitness value} and applying the \textit{selection}, limited to the small adjacent neighbourhood. This model is obtained by slightly modifying the previous one: the \textit{Partitioner} uses a pseudo-random function in such a way that the described local neighbourhood is formed. \begin{figure}[htbp] \centering \includegraphics[width=0.7\linewidth]{figures/background_and_related_work/fine-grain_parallelisation_model} \caption[Fine-grain Parallelisation Model]{\textit{Fine-grain Parallelisation Model} (Adapted from \cite{Luque2011})} \label{fig:Fine-grain Parallelisation Model} \end{figure} The \textit{Global Parallelisation Model} has been implemented by \cite{DiMartino2012} in order to solve a problem of \textit{Automatic Test Data Generation}. It has been developed on the \textit{Google App Engine MapReduce} platform. Serving the same purpose, \cite{DiGeronimo2012} has developed the \textit{Coarse-grained Parallelisation Model} on the \textit{Hadoop MapReduce} platform. The framework presented in this paper mainly exploits the \textit{Coarse-grained Parallelisation Model}, without differing much from the implementation proposed by \cite{DiMartino2012}. It introduces some modifications that better suit its intrinsic nature as a framework.
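As a toy illustration of the \textit{Coarse-grained Parallelisation Model}, the following Python sketch evolves independent islands and periodically migrates one individual between them. The fitness function (sum of bits) and the simplified truncation selection are assumptions made for brevity; they stand in for the user-defined operators discussed above.

```python
import random

def island_step(island, rng):
    """One sequential GA generation on one island: fitness (sum of bits),
    truncation selection of the better half, single-point crossover."""
    ranked = sorted(island, key=sum, reverse=True)
    parents = ranked[:max(2, len(ranked) // 2)]
    children = []
    while len(children) < len(island):
        mother, father = rng.sample(parents, 2)
        point = rng.randrange(1, len(mother))
        children.append(mother[:point] + father[point:])
    return children

def coarse_grained(population, n_islands, generations, seed=0):
    """Split the population into islands, evolve each independently and
    migrate one individual from every island to the next per generation."""
    rng = random.Random(seed)
    islands = [population[i::n_islands] for i in range(n_islands)]
    for _ in range(generations):
        islands = [island_step(island, rng) for island in islands]
        migrants = [rng.choice(island) for island in islands]
        for i, migrant in enumerate(migrants):
            islands[(i + 1) % n_islands][0] = migrant
    return [individual for island in islands for individual in island]

rng = random.Random(1)
population = [[rng.randrange(2) for _ in range(8)] for _ in range(12)]
final = coarse_grained(population, n_islands=3, generations=5)
```

In the \textit{MapReduce} version, each `island_step` corresponds to the work of one \textit{Reducer}, while the migration is handled by the \textit{Partitioner}.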
\section{Conclusions and Future Work} \label{sec:Conclusions and Future Work} The test was preliminary for a few reasons: \begin{itemize} \item Defeating the sequential version of \textit{Weka}, which is specialised in \textit{Machine Learning} algorithms, is a challenge in itself; \item The probabilistic nature of \textit{Genetic Algorithms} requires averaging over a greater number of test executions in order to be more reliable. \end{itemize} The results obtained in the example of use of the framework show that writing \textit{GAs} with the framework is effective, given the good ratio between the number of lines of code needed to develop the application and that of the framework itself. Even though the running time is not always on its side, some clues suggest that an \enquote{ad hoc} optimisation of \textit{Hadoop} might considerably improve the performance. The case in which the sequential version is defeated suggests that, for large-size instances, it is worth parallelising. Nowadays it is possible to rent a cluster for little money, without investing in expensive and dedicated hardware. In a few minutes everything is completely operational, ready to run any type of distributed application. This fact, together with the money saved, the ease with which the algorithms can be developed and its flexibility, makes this framework relevant. For all these reasons, it is worth improving it in the future. It can be advantageous to convert the project into an \textit{open-source} project. The current complexity of the framework requires specific care in the discovery of possible bugs, and the potential interest opens the prospect of a dedicated community. Even though many of the most common basic functions are already implemented, it could be useful to add some other implementations to better cover further possible use cases.
Moreover, some tools to treat the data read and produced by the framework could help make it more compatible with external applications. The most important improvement needed is analysing the possible bottlenecks caused by the inner structure of \textit{Hadoop}. Since \textit{Hadoop} offers many features for treating data more efficiently, it could be worth considering them in a future development. \section{Design and Development} \label{sec:Design and Development} Here the design and development of the individual components involved are explained. It starts from the highest level of abstraction, which is the \textit{Driver} component, and goes into depth as far as the \textit{Hadoop} development of the components. \subsection{Driver} The \textit{Driver} is the main part of the framework and it represents the only interface between the two parties involved in each work, managing the two main aspects: \begin{itemize} \item The interaction with the user; \item The launch of the jobs on the cluster. \end{itemize} The \textit{Driver} is executed on a machine that may even be separate from the cluster, and it also computes some functions as soon as the first results are received from the cluster. \begin{figure}[htbp] \centering \includegraphics[width=0.75\linewidth]{figures/design_and_development/driver} \caption{The \textit{Driver}} \label{fig:The Driver} \end{figure} Figure \ref{fig:The Driver} describes the elements of the whole process every time it is executed, each one managed by the \textit{Driver} process: \begin{itemize} \item \textbf{Initialiser:} This component (a) is optional if a population already exists. The user can define how to generate the first individuals of each island. The default component does it randomly. The \textit{Initialiser} is a \textit{MapReduce} job which produces the individuals in the form of a sequence of files, stored in a serialised form directly in \textit{HDFS} (b); \item \textbf{Generator:} It is the heart of the framework (c).
It executes one generation on the cluster and produces the new population, storing each individual again in \textit{HDFS} together with its fitness value and a flag indicating whether the individual satisfies the \textit{termination criterion}. \item \textbf{Terminator:} At the end of each generation, the \textit{Terminator} component (d), which does not work as a job, checks the stopping conditions (e.g., whether the maximum number of generations has been reached). Once terminated, the population is directly submitted to the \textit{SolutionsFilter} job (f). \item \textbf{Migrator:} This optional job (e) allows moving individuals from one island to another, according to the criteria defined by the user, such as the frequency of migration, the number and destinations of migrants and the selection method for choosing migrants. \item \textbf{SolutionsFilter:} When the job terminates, all the individuals of the last generation are filtered, separating those that satisfy the \textit{termination criterion} from those which do not. Then the result of the whole process is stored in \textit{HDFS} (g). \end{itemize} \subsection{Generator} Each \textit{generator} job makes the population evolve. In order to develop the complex structure described below, it is necessary to use multiple \textit{MapTasks} and \textit{ReduceTasks}. This is possible using a particular version of the \textit{ChainMapper} and \textit{ChainReducer} classes of \textit{Hadoop}, slightly modified in order to treat \textit{Avro} objects. \textit{Hadoop} manages the exchange of information among the tasks with a raw method of serialisation. Using \textit{Avro} \cite{Cutting2011}, it is easy to store objects and to save space on disk, allowing external processing of them. It also permits a quick exchange of data among the parties involved in the \textit{MapReduce} communication.
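Before going into the Generator internals, the overall control flow that the \textit{Driver} coordinates can be summarised as follows. This Python sketch uses stand-in functions for each component; the names mirror the figure labels, not the framework's actual Java classes, and the toy fitness (an integer's own value) is an assumption.

```python
def run_driver(initialiser, generator, terminator, migrator, solutions_filter,
               max_generations):
    """Orchestrate the whole process: initialise once, then alternate
    generation jobs, termination checks and optional migrations."""
    population = initialiser()                  # (a) optional if HDFS already holds one
    for generation in range(max_generations):
        population = generator(population)      # (c) one generation job on the cluster
        if terminator(population, generation):  # (d) local stopping-condition check
            break
        if migrator is not None:
            population = migrator(population)   # (e) optional migration job
    return solutions_filter(population)         # (f) keep only the solutions

# Toy stand-ins: individuals are integers and fitness is the value itself.
result = run_driver(
    initialiser=lambda: [1, 2, 3],
    generator=lambda pop: [x + 1 for x in pop],
    terminator=lambda pop, gen: max(pop) >= 6,
    migrator=None,
    solutions_filter=lambda pop: [x for x in pop if x >= 6],
    max_generations=10,
)
```

The loop stops at the third generation, when one individual reaches the target value of 6, mirroring the early termination made possible by the flag in \textit{HDFS}.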
A chain allows the tasks to be managed in the form described by the pattern: \begin{equation*} \left(\textit{MAP}\right)^+\left(\textit{REDUCE}\right)\left(\textit{MAP}\right)^* \end{equation*} which means one or more \textit{MapTasks}, followed by one \textit{ReduceTask} and other possible \textit{MapTasks}. \begin{figure}[htbp] \centering \includegraphics[width=0.7\linewidth]{figures/design_and_development/generator} \caption{The \textit{Generator} component} \label{fig:The Generator component} \end{figure} Figure \ref{fig:The Generator component} describes how the generation work is distributed on the cloud: \begin{itemize} \item \textbf{Splitter:} The \textit{Splitter} (b) takes as input the \textit{population} (a), deserialises it and splits the individuals into $J$ groups (islands) (c), according to the order of the individuals. Each split contains a list of records, one per individual: \begin{equation*} \text{split} : \textit{individuals} \to \operatorname{list}\left(\textit{individual}, \textsc{null}\right) \end{equation*} During the deserialisation, the splitter adds to the objects some fields which will be useful for the next steps of the computation, such as the \textit{fitness} value. \item \textbf{Fitness:} Here (d), the $J$ \textit{Mappers}, one per island, compute the \textit{fitness value} of each individual within the corresponding island for which it has not been calculated yet: \begin{equation*} \text{map} : \left(\textit{individual}, \textsc{null}\right) \to \left(\textit{individual}_{\text{F}}, \textsc{null}\right) \end{equation*} The user defines how the \textit{fitness} is evaluated and the values are stored inside the corresponding field of the objects. \item \textbf{TerminationCheck:} Without leaving the machine of the previous \textit{map}, the second \textit{map} (e) acts in a chain. It checks whether the current individuals satisfy the \textit{termination criterion}.
This is useful, for instance, when a lower limit target of the \textit{fitness value} is known. If at least one individual gives a positive answer to the test, the event is notified to the other phases and islands by a flag stored in \textit{HDFS}. This avoids the execution of the next phases and generations. \item \textbf{Selection:} If the \textit{termination criterion} has not been satisfied yet, this (f) is the moment to choose the individuals that will be the parents during the \textit{crossover} of the next iteration. The users can define this phase in their own algorithms. The information about each selected couple is stored in the key: \begin{equation*} \begin{gathered} \text{map} : \left(\textit{individual}_{\text{F}}, \textsc{null}\right) \to \\ \left(\textit{couple\_information}, \textit{individual}_{\text{F}}\right) \end{gathered} \end{equation*} If an individual has been chosen more than once, it is replicated. Then all the individuals, including those not chosen if \textit{elitism} is active, leave the current worker and go to the corresponding \textit{Reducer} for the next step (g). \item \textbf{Crossover:} In this phase (h), the individuals are grouped by the couples established during the \textit{selection}. Then each \textit{Reducer} applies the criteria defined by the user and performs the \textit{crossover}: \begin{equation*} \begin{gathered} \text{reduce} : \left(\textit{couple\_information}, \operatorname{list}\left(\textit{individual}_{\text{F}}\right)\right) \to \\ \left(\textit{individual}, \textsc{true}\right) \end{gathered} \end{equation*} This produces the \textit{offspring} (marked with the value \textsc{true} in the value field), which is read, together with the previous population, during the next step. \item \textbf{Mutation:} During the \textit{mutation} (i), the chained \textit{Mappers} perform the mutation of the genes as defined by the user.
Only the \textit{offspring} can be mutated: \begin{equation*} \text{map} : \left(\textit{individual}, \textsc{true}\right) \to \left(\textit{individual}_{\text{M}}, \textsc{null}\right) \end{equation*} \item \textbf{Elitism:} In the last phase (j), if the user chooses to use elitism, the definitive population is chosen among the individuals of the offspring and of the previous population: \begin{equation*} \text{map} : \left(\textit{individual}, \textsc{null}\right) \to \left(\textit{individual}, \textsc{null}\right) \end{equation*} At this point (k), the islands are ready to be written to \textit{HDFS} (l). \end{itemize} The architecture of the generator component provides two levels of abstraction (see figure \ref{fig:The two levels of abstraction of the Generator component}): \begin{itemize} \item The first one, called the \enquote{core} level, allows the whole job to work; \item The second one, called the \enquote{user} level, allows users to develop their own \textit{Genetic Algorithm} and to execute it on the cloud. \end{itemize} \begin{figure*}[htbp] \centering \includegraphics[width=0.8\textwidth]{figures/design_and_development/generator_architecture} \caption{The two levels of abstraction of the \textit{Generator} component} \label{fig:The two levels of abstraction of the Generator component} \end{figure*} The \textit{core} is the base of the framework, with which the end user does not need to interact. Indeed, the fact that a \textit{MapReduce} job is executed is totally invisible to the user. It consists of everything that is needed to run an application on \textit{Hadoop}. The end user can develop their own \textit{GA} simply by implementing the relevant classes, without having to deal with \textit{map} or \textit{reduce} details. If the user does not extend these classes, a default behaviour is provided that does nothing other than forward the input as output.
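The chained phases of one generator job can be followed end to end in miniature. In the Python sketch below, individuals are bit lists and each phase is a simplified stand-in for the corresponding user-extensible class; the concrete selection, crossover and mutation policies are assumptions, not the framework's defaults.

```python
import random

def generator_job(population, fitness_fn, target, rng):
    """One generation: fitness -> termination check -> selection ->
    crossover -> mutation -> elitism, mirroring the chained tasks."""
    # Fitness (d): evaluate every individual.
    scored = [(ind, fitness_fn(ind)) for ind in population]
    # TerminationCheck (e): flag the job if any individual hits the target.
    if any(score >= target for _, score in scored):
        return population, True
    # Selection (f): pair adjacent individuals, better ones first.
    scored.sort(key=lambda pair: pair[1], reverse=True)
    couples = [(scored[i][0], scored[i + 1][0])
               for i in range(0, len(scored) - 1, 2)]
    # Crossover (h): single-point recombination per couple.
    offspring = []
    for mother, father in couples:
        point = rng.randrange(1, len(mother))
        offspring.append(mother[:point] + father[point:])
        offspring.append(father[:point] + mother[point:])
    # Mutation (i): only the offspring can be mutated.
    offspring = [[1 - g if rng.random() < 0.05 else g for g in ind]
                 for ind in offspring]
    # Elitism (j): keep the best of old and new, same population size.
    merged = sorted(population + offspring, key=fitness_fn, reverse=True)
    return merged[:len(population)], False

rng = random.Random(7)
pop = [[rng.randrange(2) for _ in range(10)] for _ in range(6)]
pop, done = generator_job(pop, fitness_fn=sum, target=10, rng=rng)
```

Each commented phase corresponds to one link of the `(MAP)+ (REDUCE) (MAP)*` chain described above, here collapsed into a single sequential function.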
The framework also makes some default classes available, which the user can employ in most cases. \subsection{Terminator} The \textit{Terminator} component (figure \ref{fig:The two levels of abstraction of the Terminator component}) plays two roles: \begin{itemize} \item After the execution of every generation job, by calling the methods of the \textit{Terminator} class on the same machine where the \textit{Driver} is running; \item During the generator job, through the use of the \textit{TerminationCheckMapper}. \end{itemize} \begin{figure}[htbp] \centering \includegraphics[width=0.6\linewidth]{figures/design_and_development/terminator_architecture} \caption{The two levels of abstraction of the \textit{Terminator} component} \label{fig:The two levels of abstraction of the Terminator component} \end{figure} It checks whether the stopping conditions have occurred: \begin{itemize} \item the maximum number of generations has been reached; \item at least one individual has been marked as satisfying the termination criterion during the most recent generation phase. \end{itemize} The count is maintained by storing a local counter variable that is updated at the end of each generation. The check for the presence of marked individuals is done by looking for possible flags in \textit{HDFS}. If the process terminates, the execution of the \textit{SolutionsFilter} component follows. \subsection{Initialiser} The \textit{Initialiser} (see figure \ref{fig:The two levels of abstraction of the Initialiser component}) computes an initial population. This is an optional component, because the entire work can also start from population data already present in \textit{HDFS}.
\begin{figure}[htbp] \centering \includegraphics[width=0.8\linewidth]{figures/design_and_development/initialiser_architecture} \caption{The two levels of abstraction of the \textit{Initialiser} component} \label{fig:The two levels of abstraction of the Initialiser component} \end{figure} The class \textit{InitialiserMapper} generates each new individual according to the user's definition. \subsection{Migrator} The optional \textit{Migrator} component (figure \ref{fig:The two levels of abstraction of the Migrator component}) shuffles individuals among the islands according to the user's definition. It is at the same time a local component and a job executed on the cloud. \begin{figure}[htbp] \centering \includegraphics[width=0.8\linewidth]{figures/design_and_development/migrator_architecture} \caption{The two levels of abstraction of the \textit{Migrator} component} \label{fig:The two levels of abstraction of the Migrator component} \end{figure} It is started by the \textit{Driver} and, based on the frequency counter, decides whether it is time to perform a migration. \subsection{SolutionsFilter} The \textit{SolutionsFilter} component (figure \ref{fig:The two levels of abstraction of the SolutionsFilter component}) is invoked only when at least one individual has triggered termination by satisfying the \textit{termination criterion}. It simply filters the individuals of the last population, separating those that satisfy the criterion from those which do not. \begin{figure}[htbp] \centering \includegraphics[width=0.8\linewidth]{figures/design_and_development/solutions_filter_architecture} \caption{The two levels of abstraction of the \textit{SolutionsFilter} component} \label{fig:The two levels of abstraction of the SolutionsFilter component} \end{figure} \section{Introduction} For problems with non-polynomial complexity, the search for an optimal solution is considered to be a mission impossible, as it involves huge resources and execution time.
Metaheuristic techniques, such as \textit{Genetic Algorithms} (\textit{GAs}), constitute the best alternative for finding near-optimal solutions to such problems within a reasonable execution time and with limited resources. The \textit{GA} approach mimics the biological process of reproduction. It starts with an initial population of individuals, each of which is represented by a \textit{chromosome}. The \textit{GA} iterates by executing some common operations on the selected individuals, which are \textit{crossover} and \textit{mutation}. The population tends to improve its individuals by keeping the strongest individuals and rejecting the weakest ones. However, one of the main drawbacks of the technique is how to model the real-world problem as a genetic one. The model, usually called coding, is far from being straightforward. In order to model an original optimisation problem as a genetic one, some main functions of \textit{GAs} need to be defined properly. The \textit{fitness function} should correspond to the objective function of the original problem. This function will be used to evaluate the individuals. Usually the individuals with good fitness will be selected for the next generation. Often, \textit{GAs} are executed on single machines as sequential programs. However, the main principle behind these algorithms is not really sequential, as it is possible to select more than two individuals for reproduction, use more than one population, and execute the operators in parallel. Since parallel systems are becoming commonplace, mainly with the increasing popularity of \textit{Cloud Systems}, \textit{GAs} can be executed in parallel without changing their main principle. Currently, one available distributed platform is \textit{Apache Hadoop}, whose easy installation and maintainability are two key aspects that contributed to its great popularity. Nowadays it is common for a company to rent a cluster online in order to execute its applications as services.
All these elements add to the motivations described in this paper. \textit{elephant56}\footnote{The name \enquote{elephant56} combines two ideas: \enquote{elephant} resembles the \textit{Hadoop} platform; \enquote{56} is the number of chromosomes of elephants, citing the world of \textit{Genetics}.} is a framework for developing \textit{GAs} that can be executed on the \textit{Hadoop} platform, following the \textit{MapReduce} paradigm. The main purpose of the framework is to completely hide the inner workings of \textit{Hadoop} and to allow users to focus on the main \textit{GA} aspects of their applications, being sure that their task will be correctly executed and with good performance. In this way, the only concerns of users are the \textit{GA} model and the settings of the key inputs, parameters and functions used in \textit{GAs} (such as \textit{fitness}, \textit{selection}, initial population, etc.). The intended goals of this paper are: \begin{itemize} \item To develop a complete framework that allows the user to develop and execute full applications; \item To provide some frequently used, ready \enquote{off-the-shelf} functions, such as common \textit{selection} criteria and \textit{individual} representations, that can be useful in most of the possible \textit{GA} implementations; \item To develop testing strategies to evaluate the performance of \textit{elephant56}. For this reason a complete example of use of the framework is included in this paper. \end{itemize} The rest of the paper is organised as follows. Section \ref{sec:Background and Related Work} provides the background needed to understand the contents of this paper, with a quick overview of the existing literature on the subject. In section \ref{sec:Design and Development} the design and development of the framework are explained, starting with the \textit{Driver} component and finishing with the description of all components under two main profiles: the \enquote{core} and the \enquote{user}.
In section \ref{sec:Preliminary Analysis} the framework is tested by developing a complete example of use called \enquote{Feature Subset Selection}, explaining how the problem was adapted and showing the results and performance. Section \ref{sec:Conclusions and Future Work} gives a final view of the achieved results, suggesting possible future work to improve the framework. \section{Preliminary Analysis} \label{sec:Preliminary Analysis} In order to test the performance of the framework, a real-world problem has been used, and results and performance have been recorded. The problem of classification in \textit{Machine Learning} consists of learning from well-known example data, called the \enquote{training dataset}, with the purpose of being able to properly classify any new input data. The existing \textit{Machine Learning} algorithms act as predictors on the new data. Therefore it is important to have a good level of \enquote{accuracy} of the prediction. One way of improving their performance is to find an optimal dataset, derived from the original \textit{training dataset}. This problem is known as \enquote{Feature Subset Selection}. Unfortunately, the search for the optimum is an \textit{NP-hard} problem. \textit{Genetic Algorithms} have been used to model the problem and look for near-optimal solutions. \subsection{Feature Subset Selection} The \textit{training dataset} includes a list of records, also called \enquote{instances}. Each record is a list of \enquote{attributes} (\textit{features}), in which one attribute is the \textit{class attribute}. Given a \textit{training dataset}, the next step is to build a classifier. For instance, the \textit{C4.5} algorithm can be used in order to build a \textit{Decision Tree}, which is able to give a likely class membership for every record in the new dataset.
The effectiveness of the classifier is measured by the \textit{accuracy} of the classification of the new data: \begin{equation*} \textit{accuracy} = \frac{\text{correct classifications}}{\text{total of classifications}} \end{equation*} Sometimes, it is possible to obtain the same \textit{accuracy}, or even a better one, with a classifier that considers fewer attributes rather than more, as explained in \cite{Hall1999}. The search for an optimal \textit{feature subset} is not only important for building the classifier quickly and with very good response time, but also in terms of real costs. For example, let the problem to be solved be the identification of the presence of a certain disease. The \textit{training dataset} is a collection of real medical records, where some features are cost-free information about the patient, whereas others are expensive test results such as blood tests, \textit{DNA} tests, etc. In such a context, it is clear that reducing the number of collected features means saving money. Having outlined the advantages of reducing the number of features, the problem needs to be described from a mathematical point of view. Given a \textit{training dataset} with $m$ different attributes, excluding the \textit{class attribute} which must always be present, the number of possible subsets is $2^{m}$. Let $C$ be the execution time needed to build a classifier on a given dataset and let $A$ be the execution time needed to compute the \textit{accuracy} of the resulting classifier; the running time to find the best subset exhaustively would then be: \begin{equation*} O\left(2^{m}\right)AC \end{equation*} As a consequence, the use of a \textit{GA} may simplify the search within the space of solutions and give a proper way to look for good near-optimal solutions. The resulting model is an adaptation of the application that will be executed by the framework, where each part is modelled on the \textit{Feature Subset Selection} problem.
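The $2^{m}$ growth of the search space can be made concrete with a tiny enumeration (illustrative Python; the attribute names are invented for the example):

```python
from itertools import combinations

def all_attribute_subsets(attributes):
    """Every subset of the non-class attributes (the class attribute
    is always kept), i.e. 2^m candidates for m attributes."""
    subsets = []
    for size in range(len(attributes) + 1):
        subsets.extend(combinations(attributes, size))
    return subsets

subsets = all_attribute_subsets(["age", "blood_test", "dna_test"])
```

For $m = 3$ this already yields $2^{3} = 8$ candidate subsets, and the classifier would have to be rebuilt and re-evaluated for each one; at realistic values of $m$ the enumeration becomes infeasible, which is what motivates the \textit{GA} search.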
The \textit{driver} is the main part of the algorithm and is executed on one machine. Moreover, whoever uses the algorithm must specify the following information for the execution: \begin{itemize} \item The generic arguments for the \textit{GA} execution, such as how many individuals to generate initially, the maximum number of generations, etc.; \item The \textit{training dataset}; \item The \textit{test dataset}. \end{itemize} There is also the possibility of specifying a target \textit{accuracy} in order to terminate the process before reaching the maximum number of generations, if this lower limit is reached during the computation. Since computing the \textit{accuracy} for all the possible subsets needs exponential time, the idea is to submit a randomly generated initial group of attribute subsets, forming the initial population, to the algorithm generations. During each generation, every subset is evaluated by computing the \textit{accuracy} value, and all the \textit{GA} functions are applied until the target \textit{accuracy} is achieved or the maximum number of generations is reached. The satisfaction of the \textit{termination criterion} is checked during the \textit{termination} phase. At the end, the last population is ready to be tested with the \textit{test dataset}. By giving the \textit{training dataset} as input, the first applied operation is the \textit{initialisation}. The algorithm generates the $r$ random attribute subsets which will be the initial population of individuals. Every individual (subset) is encoded as an array of $m$ bits, where each bit indicates whether the corresponding enumerated attribute is present in the subset (value $1$) or not (value $0$). Since the records in the \textit{training dataset} are never altered during the whole algorithm, it is not necessary to encode them. They will be available when needed during the generations.
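The bit-array encoding just described can be sketched as follows (illustrative Python; the attribute names and the decoding helper are assumptions for the example, not the framework's API):

```python
import random

def random_individual(m, rng):
    """An individual is an array of m bits, one per non-class attribute."""
    return [rng.randrange(2) for _ in range(m)]

def decode(individual, attributes, class_attribute):
    """Bit value 1 keeps the attribute; the class attribute is always kept."""
    chosen = [attr for attr, bit in zip(attributes, individual) if bit == 1]
    return chosen + [class_attribute]

rng = random.Random(3)
attributes = ["age", "blood_test", "dna_test", "weight"]
individual = [1, 0, 1, 0]
subset = decode(individual, attributes, "disease")
```

Decoding `[1, 0, 1, 0]` keeps the first and third attributes and appends the class attribute, which is never subject to selection.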
Every generation phase processes all the individuals within the population, according to the following steps: \begin{itemize} \item \textbf{Fitness:} For each subset of attributes, the \textit{training dataset} is filtered to retain only the attributes in the subset. The fitness value is then computed by applying the following steps: \begin{enumerate} \item Select the portion of the dataset that is going to act as \textit{training dataset} and the portion that will act as \textit{test dataset}; \item Build the \textit{Decision Tree} through the \textit{C4.5} algorithm and compute the \textit{accuracy} by submitting the current test portion; \item Repeat the operations according to the \textit{folding parameter}, following the \enquote{Cross Folding} (cross-validation) technique; \item Return the best \textit{accuracy}. \end{enumerate} \item \textbf{Selection:} It chooses which individuals will be the parents during the \textit{crossover}. It is important to give the individuals with the best \textit{accuracy} the highest probability of being chosen. The algorithm uses the \enquote{roulette-wheel selection} method: it builds a wheel according to the \textit{fitness values} (\textit{accuracy}) of the individuals, which is then spun for every new couple in order to choose its members. \item \textbf{Crossover:} At this step, the new \textit{offspring} is produced by splitting the parents of each couple into two parts, according to a random \textit{crossover point}, and then mixing the parts to obtain two new children, each of which has one part from the mother and one from the father. \item \textbf{Mutation:} According to a mutation probability, during this step each subset may flip some of its attribute bits. \item \textbf{Elitism:} Optionally enabled, this step chooses the best individuals among the new \textit{offspring} and the previous generation, so as to guarantee that the best \textit{accuracy} does not decrease from one generation to the next.
\end{itemize} Since the algorithm is executed on different islands, it is important to provide a variability factor for each of them. The \textit{Migration} manages the passage of a certain number of randomly selected individuals from one island to another. \subsection{Subject} Three example datasets were used, referring to real problems. They all come from the \textit{UCI Machine Learning Repository}\footnote{UCI Machine Learning Repository\newline\url{http://archive.ics.uci.edu/ml/}}. Each dataset was submitted with slightly different parameters, but always in two configurations: with \textit{elitism} enabled and disabled. Since the aim of these experiments is to analyse the behaviour of the algorithm during a full computation, the target \textit{accuracy} was not specified, so that the algorithm is executed until the maximum number of generations is reached. Each dataset was divided into two parts: \begin{itemize} \item The first $60 \%$ as \textit{training dataset}; \item The last $40 \%$ as \textit{test dataset}, used at the end to compute the best \textit{accuracy} obtained. \end{itemize} The test bench was composed of three versions: \begin{itemize} \item \textbf{Sequential:} A single machine executes a \textit{GA} similar to the framework one by using the \textit{Weka}\footnote{\textit{Weka} is a collection of \textit{Machine Learning} algorithms for data mining tasks. The algorithms can either be applied directly to a dataset or called from \textit{Java} code. \textit{Weka} contains tools for data pre-processing, classification, regression, clustering, association rules, and visualisation. It is also well-suited for developing new \textit{Machine Learning} schemes.\\\url{http://www.cs.waikato.ac.nz/ml/weka/}} \textit{Java} library. \item \textbf{Pseudo-distributed:} Here the framework version of the algorithm comes into play. It is executed on a single machine again, setting the number of islands to one.
It requires \textit{Hadoop} in order to be executed. \item \textbf{Distributed:} The framework is executed on a cluster of computers on the \textit{Hadoop} platform. This is the version of most interest. \end{itemize} All the versions were executed on a remote \textit{Amazon EC2} cluster. This was chosen in order to guarantee fairness across all the solutions. The single machine of the \textit{sequential} version was: \begin{center} \scriptsize \input{tables/preliminary_analysis/sequential_version} \end{center} Although it runs a \textit{sequential} algorithm, the \textit{pseudo-distributed} version needed a \textit{Hadoop} installation: \begin{center} \scriptsize \input{tables/preliminary_analysis/pseudo-distributed_version} \end{center} On the other hand, the \textit{distributed} version needed a full \textit{Hadoop} cluster: \begin{center} \scriptsize \input{tables/preliminary_analysis/distributed_version} \end{center} \subsection{Results} Several aspects of the results are worth analysing. These consist of a measure of the effort in developing the \textit{Feature Subset Selection} application with the framework, and other aspects regarding the performance. \subsubsection{Developing Effort} A total of $5811$ lines of code were written during the development of the whole framework, of which $1903$ belong to the test classes, as shown in figure \ref{fig:The number of lines of code for the framework developing}. \begin{figure}[htbp] \centering \input{figures/preliminary_analysis/pie_chart_framework_developing} \caption{The number of lines of code for the framework developing} \label{fig:The number of lines of code for the framework developing} \end{figure} For the development of the \textit{Feature Subset Selection} application (figure \ref{fig:The number of lines of code for the application developing}), $535$ lines of code were written, against the $3908$ of the framework infrastructure.
\begin{figure}[htbp] \centering \input{figures/preliminary_analysis/pie_chart_application_developing} \caption{The number of lines of code for the application developing} \label{fig:The number of lines of code for the application developing} \end{figure} \subsubsection{Number of attributes} This is the number of attributes of the best individual after the final generation. This parameter relates directly to the specific problem, because reducing it was one of the targets to achieve. As described before, the lower the value, the better. \begin{figure}[htbp] \centering \input{figures/preliminary_analysis/plot_stats_attributes} \caption{Comparison of the number of attributes} \label{fig:Comparison of the number of attributes} \end{figure} Figure \ref{fig:Comparison of the number of attributes} shows that all the versions behave more or less in the same way. It is more important to consider the next parameter. \subsubsection{Accuracy} The \textit{accuracy} is the value computed just after the execution of each algorithm. It was obtained by submitting a common \textit{test dataset} to a classifier built on the \textit{training dataset} filtered through the subset of attributes found at the end. \begin{figure}[htbp] \centering \input{figures/preliminary_analysis/plot_stats_accuracy} \caption{Comparison of the accuracy} \label{fig:Comparison of the accuracy} \end{figure} Looking at figure \ref{fig:Comparison of the accuracy}, again the three versions do not show substantial differences. The upside is that the framework achieves its objective, by giving a subset that has both a reduced number of attributes and an \textit{accuracy} that still matches the initial one. \subsubsection{Running time} While the number of attributes and the \textit{accuracy} give a measure of the efficacy of the \textit{distributed} version, the running time weighs the different versions of the algorithm against each other when choosing which one to use.
\begin{figure}[htbp] \centering \input{figures/preliminary_analysis/plot_stats_running_time} \caption{Comparison of the running time} \label{fig:Comparison of the running time} \end{figure} The results in figure \ref{fig:Comparison of the running time} are rather varied. With \textit{German Credit} the winner is undoubtedly the \textit{Sequential} version, and the same holds in another case. The framework must face some considerable intrinsic \textit{Hadoop} bottlenecks, such as \textit{map} initialisation, data reading, copying and storage. Every design choice adds overhead to the running time, and this is often considerable. The proposed version of the framework does not exploit every single aspect of \textit{Hadoop}, which suggests that further optimisation might increase the performance of the parallel version compared with the sequential one. Nevertheless, the performance of the \textit{distributed} version is not so distant. In one case, with the \textit{Chicago Crime} dataset, which is rich in instances, the \textit{distributed} framework version beats the \textit{sequential} one by a wide margin. It is predictable that the \textit{distributed} version performs better when the amount of computation is large, because of its inherently distributed nature. Since the \textit{pseudo-distributed} version always comes second to the \textit{distributed} one, this suggests that it is worth subdividing the work among multiple nodes.
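For reference, the \enquote{roulette-wheel selection} and single-point \textit{crossover} operators described in the generation phase can be sketched as follows (a minimal Python illustration, not the framework's \textit{Java}/\textit{Hadoop} implementation):

```python
import random

def roulette_select(population, fitnesses, rng):
    """Pick one parent with probability proportional to its fitness."""
    total = sum(fitnesses)
    spin = rng.uniform(0, total)
    cumulative = 0.0
    for individual, fitness in zip(population, fitnesses):
        cumulative += fitness
        if spin <= cumulative:
            return individual
    return population[-1]  # guard against floating-point rounding

def crossover(mother, father, rng):
    """Split both parents at a random crossover point and swap the tails."""
    point = rng.randint(1, len(mother) - 1)
    child1 = mother[:point] + father[point:]
    child2 = father[:point] + mother[point:]
    return child1, child2
```

Fitter bit arrays occupy a larger slice of the wheel, so they are more likely to become parents, while crossover recombines the selected subsets exactly as described above.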
\section{Introduction} \subsection{Background and main results} The critical cohomological Hall algebra $\HO(\mathcal{A}_{B,W})$ associated with a smooth algebra $B$ and potential $W\in B/[B,B]$ was introduced by Kontsevich and Soibelman in \cite{KS2} as a way of categorifying the theory of Donaldson--Thomas invariants. The meaning of ``critical'' here is that the Hall algebra is built out of the hypercohomology of the sheaf of vanishing cycles of the superpotential $\Tr(W)$ on the stack of $B$-modules. This sheaf is supported on the critical locus of $\Tr(W)$, which is identified with the stack of representations of the Jacobi algebra associated to $B$ and $W$. In this way, the critical cohomological Hall algebra can be thought of as the Hall algebra categorifying Donaldson--Thomas theory for the category of representations of the Jacobi algebra. Donaldson--Thomas invariants themselves have by now an extensive literature; we refer to the sequence of papers by Dominic Joyce \cite{JoyceI}, \cite{JoyceII}, \cite{JoyceIII}, \cite{JoyceIV}, \cite{JoyceMF} for a comprehensive account, or the book \cite{JoyceDT} by Joyce and Song, and also to \cite{KS1}, \cite{KS3} for the more general and abstract account by Kontsevich and Soibelman using motivic vanishing cycles. In both treatments of Donaldson--Thomas theory, a key role is played by the integrality conjecture. In this paper we describe and prove the natural categorification of the integrality conjecture in the context of the cohomological Hall algebra. \smallbreak From now on we assume that $B=\mathbb{C} Q$ is the free path algebra of a quiver, that we are given a Bridgeland stability condition $\zeta$ on the category of $\mathbb{C} Q$-modules, and that $\mu\in(-\infty,\infty]$ is a slope. We denote by $\Lambda_{\mu}^{\zeta}\subset\mathbb{N}^{Q_0}$ the subset of dimension vectors of slope $\mu$ with respect to the stability condition $\zeta$.
Via natural correspondences there is an algebra structure on the $\mathbb{N}^{Q_0}$-graded cohomology of the vanishing cycle complex on $\mathfrak{M}^{\zeta\sst}_{\mu}$, the stack of $\zeta$-semistable $\mathbb{C}Q$-modules of slope $\mu$, that we denote \begin{equation} \label{ua} \HO\left(\Coha_{W,\mu}^{\zeta}\right):=\bigoplus_{\dd\in\Lambda_{\mu}^{\zeta}}\HO\left(\mathfrak{M}^{\zeta\sst}_{\dd},\phi_{\mathfrak{Tr}(W)^{\zeta}_{\mu}}\otimes\mathbb{L}^{-\dim(\mathfrak{M}^{\zeta\sst}_{\dd})/2}\right). \end{equation} These notions are properly defined and explained in Section \ref{Qrepsection}. This algebra categorifies Donaldson--Thomas theory in the sense that Donaldson--Thomas invariants are obtained by considering the class in the Grothendieck group of $\mathbb{N}^{Q_0}$-graded mixed Hodge structures (equipped with a monodromy action) of the underlying mixed Hodge structure of (\ref{ua}). \smallbreak The main conclusion of this paper is that the introduction of the Cohomological Hall algebra structure associated to a quiver with potential should be seen as a first step towards forging a connection between the theory of refined Donaldson--Thomas invariants and the theory of quantum enveloping algebras, and that the proof of the integrality conjecture is the motivic shadow of a Poincar\'e--Birkhoff--Witt theorem for the associated quantum enveloping algebra. In fact we reach this conclusion by providing all of the remaining steps, culminating in this PBW type theorem. This programme was already articulated in \cite{KS2} but several components had yet to be found. \smallbreak Firstly, a coproduct turning the algebra $\HO(\Coha_{W,\mu}^{\zeta})$ into a Hopf algebra had to be defined. A localised coproduct was found in the first author's paper \cite{Da13}, reinforcing the hope that $\HO(\Coha_{W,\mu}^{\zeta})$ could be turned into a quantum enveloping algebra. 
Meanwhile the integrality conjecture in the case of free path algebras of quivers with zero potential was proven by the second author and Markus Reineke in \cite{Meinhardt14}. In fact the proof is constructive; moreover these constructions manifestly lift to cohomology, and are not merely defined at the level of Grothendieck groups, the general setting for refined Donaldson--Thomas theory. In addition, it is proven in \cite{DaMe4} that due to formal properties of the vanishing cycle functor, the methods of \cite{Meinhardt14} are enough to prove a very general version of the integrality conjecture, even in the presence of nonzero potential. Similarly, the proof in \cite{DaMe4} is constructive, and the construction has an obvious lift to cohomology, or mixed Hodge modules, which is the lift that we explore in this paper. \smallbreak For the motivic Donaldson--Thomas theory and cohomological Hall algebras associated to quivers, a vital role is always played by the grading induced by $\mathbb{N}^{Q_0}$, the semigroup of dimension vectors. As an illustration, consider the case in which we forget about potentials and stability conditions, and consider $\HO(\mathfrak{M},\mathbb{Q})_{\vir}$, the cohomology of the stack of all finite-dimensional representations of some fixed quiver $Q$ (the subscript indicates that we normalise the constant sheaf by some system of shifts, which we define in (\ref{baby})). If $Q_0\neq \emptyset$, this cohomology is infinite-dimensional in every cohomological degree. However if we decompose the stack according to dimension vectors \begin{equation} \label{baby} \HO(\mathfrak{M},\mathbb{Q}_{\vir})\cong\bigoplus_{\dd\in\mathbb{N}^{Q_0}}\HO(\mathfrak{M}_{\dd},\mathbb{Q}[\dim(\mathfrak{M}_{\dd})]) \end{equation} and consider the resulting cohomology as a $\mathbb{N}^{Q_0}\times\mathbb{Z}$-graded vector space, where the extra $\mathbb{Z}$ keeps track of cohomological degree, each graded piece is finite-dimensional. 
Motivic DT partition functions are expressed as formal power series in variables $q^{1/2},x_1,\ldots,x_{n}$, where for $\dd=(\dd_1,\ldots,\dd_n)$ the coefficient of \[ x^{\dd}:=\prod_{i\in Q_0}x_i^{\dd_i} \] is a formal Laurent power series in $q^{1/2}$, the characteristic function of the $\dd$th graded piece of the cohomology (\ref{baby}). These power series may have finite order poles at $q^{1/2}=0$, due to the shifts appearing in (\ref{baby}). In the general case this formal Laurent power series is replaced by a formal power series recording the finite dimensions of graded pieces of an infinite-dimensional $\mathbb{N}^{Q_0}$-graded mixed Hodge structure associated to the quiver $Q$, a potential $W$ and a stability condition $\zeta$. Furthermore, when we introduce the cohomological Hall algebra structure on this cohomology, it respects the $\mathbb{N}^{Q_0}$-grading. In order to make the presentation in this paper conceptually uniform, we slightly change the perspective on $\mathbb{N}^{Q_0}$-gradings. We consider graded vector spaces (respectively, mixed Hodge structures) as sheaves of vector spaces (respectively, mixed Hodge modules) on the monoid $\mathbb{N}^{Q_0}$, considered as a scheme with infinitely many components, each isomorphic to $\Spec(\mathbb{C})$. Then a $\mathbb{N}^{Q_0}$-graded unital algebra is the same thing as a monoid in the category of such sheaves (respectively, mixed Hodge modules), for which the monoidal product is defined by \[ \mathcal{F}\boxtimes_+\mathcal{G}:=+_*(\mathcal{F}\boxtimes \mathcal{G}), \] where $\mathcal{F}\boxtimes\mathcal{G}$ is the usual external tensor product, and $+\colon\mathbb{N}^{Q_0}\times\mathbb{N}^{Q_0}\rightarrow \mathbb{N}^{Q_0}$ is the addition map. I.e. 
a unital graded algebra structure is determined by a map \[ \mathcal{F}\boxtimes_+\mathcal{F}\rightarrow \mathcal{F} \] satisfying the obvious associativity condition, along with a unit map $\mathbb{Q}_{\{0\}}\rightarrow\mathcal{F}$ from the constant sheaf supported at the monoidal unit $0\in \mathbb{N}^{Q_0}$. \smallbreak The reason this makes for a more uniform presentation is that throughout the paper we consider also a relative version of the cohomological Hall algebra, defined inside the category of monodromic mixed Hodge modules on a monoid \textit{over} $\mathbb{N}^{Q_0}$. The original, `absolute' cohomological Hall algebra of \cite{KS2}, is a monoid \begin{equation} \label{absMon} \HO(\mathcal{A}_{W,\mu}^{\zeta})\in\mathcal{D}(\MMHM(\mathbb{N}^{Q_0})) \end{equation} in the derived category of monodromic mixed Hodge modules on the space of dimension vectors for $Q$. The definition of monodromic mixed Hodge modules is recalled in Section \ref{MMHMs}; for now, think of them just as an enlargement of the usual category of mixed Hodge modules. Instead of working with the absolute cohomological Hall algebra (\ref{absMon}), and although we prove theorems regarding it, we mainly work with the monoid \begin{equation} \label{intrRel} \Ho(\mathcal{A}_{W,\mu}^{\zeta})\in\mathcal{D}(\MMHM(\mathcal{M}^{\zeta\sst}_{\mu})) \end{equation} in the category of monodromic mixed Hodge modules on the coarse moduli space of $\zeta$-semistable $\mathbb{C}Q$-modules of slope $\mu$. The monodromic mixed Hodge module (\ref{intrRel}) is isomorphic to an associated graded version of the monoid (\ref{absMon}) after applying the monoidal functor obtained by taking the direct image along the map $\dim\colon \mathcal{M}_{\mu}^{\zeta\sst}\rightarrow\mathbb{N}^{Q_0}$, and passing to the associated graded of the perverse filtration $\Pf$ on $\HO(\mathcal{A}^{\zeta}_{W,\mu})$, that we will come to shortly. 
In symbols, there is an isomorphism of algebras \[ \dim_*\Ho(\mathcal{A}_{W,\mu}^{\zeta})\cong \Gr_{\Pf}\left(\HO(\mathcal{A}_{W,\mu}^{\zeta})\right). \] \smallbreak The payoff for working in the relative setting is that we obtain more refined results, which are nevertheless easier to prove, since in this relative setting we are able to make extensive use of Morihiko Saito's version \cite{Sai88,Saito1,Saito89} of the famous Decomposition Theorem of Beilinson, Bernstein, Deligne and Gabber \cite{BBD}, and Saito's theory of mixed Hodge modules in general. The idea of proving local results on $\mathcal{M}_{\mu}^{\zeta\sst}$ to deduce global results about Donaldson--Thomas invariants was already present, and heavily utilized, in the work of Joyce and Song, and later in the proof of the integrality conjecture for path algebras in the absence of a potential by the second author and Reineke \cite{Meinhardt14}. This brings us to the first main result of this paper, a cohomological refinement of the integrality conjecture following the proof of the conjecture found in \cite{DaMe4}. \begin{thmx}[Cohomological integrality theorem] \label{ThmA} Let $\zeta$ be a $\mu$-generic stability condition. Define the following element of $\MMHM(\mathcal{M}^{\zeta\sst}_{\dd})$: \[ \mathcal{DT}_{W,\dd}^{\zeta}:=\begin{cases}\phim{\mathcal{T}r(W)^\zeta_{\dd}}\mathcal{IC}_{\mathcal{M}^{\zeta\sst}_{\dd}}(\mathbb{Q})&\textrm{if }\mathcal{M}^{\zeta\st}_{\dd}\neq\emptyset \\0&\textrm{otherwise}\end{cases} \] and let $\mathcal{DT}^{\zeta}_{W,\mu}$ be the direct sum of all the $\mathcal{DT}_{W,\dd}^{\zeta}$ for $\dd$ of slope $\mu$ with respect to the stability condition $\zeta$. Define $\DT_{W,\dd}^{\zeta}=\Ho(\dim_*\mathcal{DT}_{W,\dd}^{\zeta})$ and define $\DT_{W,\mu}^{\zeta}$ to be the direct sum of all the $\DT_{W,\dd}^{\zeta}$ for $\dd$ of slope $\mu$.
Then \[ \FreeComm_{\boxtimes_+}\left(\DT_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right)\cong \HO(\Coha^{\zeta}_{W,\mu}) \] in $\mathcal{D}(\MMHM(\mathbb{N}^{Q_0}))$. Furthermore, working over the coarse moduli space of $\zeta$-semistable representations $\mathcal{M}_{\mu}^{\zeta\sst}$, there is an isomorphism \[ \Sym_{\boxtimes_{\oplus}}\left(\mathcal{DT}_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right)\cong\Ho(\mathcal{A}_{W,\mu}^{\zeta}) \] in $\mathcal{D}(\MMHM(\mathcal{M}^{\zeta\sst}_{\mu}))$. \end{thmx} The definition of $\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}$ is given in Section \ref{MMHMs}. Requisite notions from the theory of representations of quivers, including the functors $\Sym_{\boxtimes_{\oplus}}$ and $\Sym_{\boxtimes_{+}}$, and the definition of $\mathcal{M}^{\zeta\sst}_{\mu}$, are recalled in Section \ref{Qrepsection}. The functor $\Sym_{\boxtimes_{+}}$, via the identification between monodromic mixed Hodge modules on $\mathbb{N}^{Q_0}$ and $\mathbb{N}^{Q_0}$-graded monodromic mixed Hodge structures, is the functor that sends an $\mathbb{N}^{Q_0}$-graded monodromic mixed Hodge structure $\mathcal{F}$ to the free unital $\mathbb{N}^{Q_0}$-graded symmetric algebra generated by $\mathcal{F}$. \smallbreak We explain the relation to the integrality conjecture in terms of generating functions of weight polynomials, as that is the language in which the integrality conjecture is perhaps most familiar. In the literature, this is most commonly referred to as ``refined Donaldson--Thomas theory''. Given an element $\mathcal{L}\in\mathcal{D}^{b}(\MMHM(\mathbb{N}^{Q_0}))$, we define the polynomial \[ \mathcal{Z}(\mathcal{L},q^{1/2},x_1,\ldots,x_n):=\sum_{\dd\in\mathbb{N}^{Q_0}}\chi_q(\mathcal{L}_{\dd},q^{1/2})x^{\dd}, \] where we have used the weight polynomial \[ \chi_q(\mathcal{L}_{\dd},q^{1/2}):=\sum_{i,j\in\mathbb{Z}} (-1)^i\dim(\Gr_W^j(\HO^i(\mathcal{L}_{\dd})))q^{j/2}. 
\] As in Theorem \ref{ThmA} we fix a quiver $Q$, potential $W$, stability condition $\zeta$, and slope $\mu$. Although the total cohomology $\HO(\mathcal{A}_{W,\mu}^{\zeta})$ (defined as in (\ref{ua})) is not in the bounded derived category $\mathcal{D}^{b}(\MMHM(\Lambda_{\mu}^{\zeta}))\subset \mathcal{D}^{b}(\MMHM(\mathbb{N}^{Q_0}))$, since each $\HO(\mathcal{A}_{W,\dd}^{\zeta})$ will typically be nonzero in arbitrarily high cohomological degree, the partition function \[ \mathcal{Z}\left(\HO(\mathcal{A}_{W,\mu}^{\zeta}),q^{1/2},x_1,\ldots,x_n\right) \] still makes sense as a formal power series. This is because for each $\dd\in\Lambda_{\mu}^{\zeta}$, and for each $j\in\mathbb{Z}$, the element $\Gr_W^j(\HO^i(\mathcal{A}_{W,\dd}^{\zeta}))$ is nonzero for only finitely many values of $i$; we denote by $\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathbb{N}^{Q_0}))\subset \mathcal{D}(\MMHM(\mathbb{N}^{Q_0}))$ the full subcategory of monodromic mixed Hodge modules satisfying this condition and also the condition that for fixed $\dd$ the element $\Gr_W^j(\HO(\mathcal{L}_{\dd}))$ vanishes for $j\ll 0$. Then one of the equivalent ways to define the plethystic exponential \[ \EXP:\mathbb{Z}[q^{-1}][[q^{1/2}]][[x_1,\ldots,x_n]]_+\rightarrow\mathbb{Z}[q^{-1}][[q^{1/2}]][[x_1,\ldots,x_n]] \] where the subscript ``$+$'' on the left hand side signifies that we take the subgroup of elements $\sum_{\dd\in\mathbb{N}^{Q_0}}a_{\dd}(q^{1/2})x^{\dd}$ such that $a_0(q^{1/2})=0$, is via the formula for $\mathcal{L}\in\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathbb{N}^{Q_0}))$ \[ \EXP\left(\mathcal{Z}(\mathcal{L},q^{1/2},x_1,\ldots,x_n)\right)=\mathcal{Z}\left(\Sym_{\boxtimes_+}(\mathcal{L}),q^{1/2},x_1,\ldots,x_n\right). 
\] Finally, the integrality conjecture states that there exist Laurent polynomials $\Omega^{\zeta}_{W,\dd}(q^{1/2})\in\mathbb{Z}[q^{\pm 1/2}]$ such that \begin{equation} \label{ics} \mathcal{Z}(\HO(\mathcal{A}_{W,\mu}^{\zeta}),q^{1/2},x_1,\ldots,x_n)=\EXP\left(\sum_{\dd\in\Lambda_{\mu}^{\zeta}} \Omega^{\zeta}_{W,\dd}(q^{1/2})x^{\dd}/(q^{1/2}-q^{-1/2})\right). \end{equation} These $\Omega_{W,\dd}^{\zeta}(q^{1/2})$ are, by definition, the Donaldson--Thomas invariants for the slope $\mu$ and the stability condition $\zeta$. Note that \[ \chi_q(\HO(\mathbb{C}\mathbb{P}^{\infty},\mathbb{Q})_{\vir},q^{1/2})=-q^{1/2}-q^{3/2}-\ldots=(q^{1/2}-q^{-1/2})^{-1} \] and so we deduce from Theorem \ref{ThmA} the equality $\Omega_{W,\dd}^{\zeta}(q^{1/2})=\chi_q(\DT_{W,\dd}^{\zeta},q^{1/2})$. Note also that for purely formal reasons one can always find formal Laurent power series $\Omega_{W,\dd}^{\zeta}(q^{1/2})$ satisfying (\ref{ics}); the content of the integrality conjecture is that these formal power series are in fact polynomials. We deduce the integrality conjecture from Theorem \ref{ThmA}; since $\DT_{W,\dd}^{\zeta}$ comes from a bounded complex of monodromic mixed Hodge modules, its weight polynomial is a genuine polynomial, instead of a formal function. However, the cohomological lift provided by Theorem \ref{ThmA} is much stronger than the integrality conjecture; it upgrades an equality in the Grothendieck ring of $\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathbb{N}^{Q_0}))$ to an isomorphism in that category. This is an interesting result even before introducing the extra structure of the cohomological Hall algebra, since it shows that there is an entire theory of `cohomologically refined' Donaldson--Thomas invariants waiting to be explored.
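As a minimal sanity check of the defining property of $\EXP$ (our illustration, under the simplifying assumption that the generator is concentrated in even cohomological degree): if $\mathcal{L}$ is a one-dimensional mixed Hodge structure, pure of weight $j$, placed in even cohomological degree and in $\mathbb{N}^{Q_0}$-degree $\dd\neq 0$, then $\Sym_{\boxtimes_+}(\mathcal{L})$ contains exactly the one-dimensional piece $\mathcal{L}^{\otimes k}$, of weight $kj$, in degree $k\dd$, so that
\[
\mathcal{Z}\left(\Sym_{\boxtimes_+}(\mathcal{L}),q^{1/2},x_1,\ldots,x_n\right)=\sum_{k\geq 0}q^{kj/2}x^{k\dd}=\frac{1}{1-q^{j/2}x^{\dd}}=\EXP\left(q^{j/2}x^{\dd}\right),
\]
recovering the familiar rule that the plethystic exponential turns a single monomial into a geometric series.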
For early applications of this theory, we refer the reader to the first author's reproof of the Kac positivity conjecture \cite{Da13b} (originally proved by Hausel, Letellier and Villegas \cite{HLRV13}) and proof of the quantum cluster positivity conjecture \cite{Da16a}. \smallbreak An advantage of considering constructions relative to the base $\mathcal{M}_{\mu}^{\zeta\sst}$ instead of $\mathbb{N}^{Q_0}$ is that it leads naturally to the introduction of the perverse filtration on $\HO(\mathcal{A}_{W,\mu}^{\zeta})$. We approach this through a study of the representation theory of $\Ho(\Coha_{W,\mu}^{\zeta})$, which turns out to be governed by the vanishing cycle cohomology of moduli spaces of framed modules, as considered for example in \cite{Soi14} and \cite{Fran13}. For each framing vector $\mathbf{f}\in\mathbb{N}^{Q_0}$ we consider the $\Ho(\Coha_{W,\mu}^{\zeta})$-module \[ \Ho(\mathcal{F}^{\zeta}_{W,\mathbf{f},\mu}):=\Ho\left(\bigoplus_{\dd\in\Lambda^\zeta_{\mu}}\pi^{\zeta}_{\mathbf{f},\dd,*}\mathfrak{IC}_{W,\mathbf{f},\dd}^{\zeta}\right)\in\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathcal{M}^{\zeta\sst}_{\mu})), \] where $\mathfrak{IC}_{W,\mathbf{f},\dd}^{\zeta}$ is a shifted Tate twist of the monodromic mixed Hodge module of vanishing cycles on the moduli space $\mathcal{M}_{\mathbf{f},\dd}^{\zeta}$ of framed $\zeta$-semistable representations with framing vector $\mathbf{f}$, the map $\pi^{\zeta}_{\mathbf{f},\dd}\colon \mathcal{M}^{\zeta}_{\mathbf{f},\dd}\rightarrow\mathcal{M}^{\zeta\sst}_{\dd}$ is the forgetful map to the coarse moduli space of $\zeta$-semistable representations, and the direct sum is over all dimension vectors of slope $\mu$. 
We also consider the absolute version \[ \HO(\mathcal{F}_{W,\mathbf{f},\mu}^{\zeta}):=\Ho\left((\mathcal{M}^{\zeta}_{\mathbf{f},\mu}\xrightarrow{\pi^{\zeta}_{\mathbf{f},\mu}}\mathcal{M}^{\zeta\sst}_{\mu}\xrightarrow{\dim}\Lambda^\zeta_{\mu})_*\mathfrak{IC}_{W,\mathbf{f},\dd}^{\zeta}\right)\in\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\Lambda^\zeta_{\mu})) \] recovering the $\HO(\mathcal{A}_{W,\mu}^{\zeta})$-modules considered by Soibelman in \cite{Soi14}. \smallbreak The following theorem, which we prove in Subsection \ref{rep_thm_proof}, is the sum of our results on the representation theory of $\Ho(\Coha_{W,\mu}^{\zeta})$ and $\HO(\Coha_{W,\mu}^{\zeta})$. \begin{thmx} \label{repthm} Let $\mu\in (-\infty,\infty)$ be a slope, and let $\zeta$ be a not necessarily $\mu$-generic stability condition. Each module $\Ho(\mathcal{F}^{\zeta}_{W,\mathbf{f},\mu})$ is cyclic, generated by $\Ho(\mathcal{F}_{W,\mathbf{f},0}^{\zeta})\cong\mathbb{Q}_{\mathcal{M}^{\zeta\sst}_0}$. There are natural surjections $\Ho(\mathcal{F}_{W,\mathbf{f},\mu}^{\zeta})\rightarrow \Ho(\mathcal{F}_{W,\mathbf{f}',\mu}^{\zeta})$ for $\mathbf{f}\geq\mathbf{f}'$, and if we fix $\dd\in\Lambda_{\mu}^{\zeta}$ and a cohomological degree $n$, then $\Ho^n(\mathcal{F}_{W,\mathbf{f},\dd}^{\zeta})$ stabilizes, as we let $\mathbf{f}_i\mapsto \infty$ for all $i\in Q_0$. Furthermore, we recover $\Ho(\Coha_{W,\mu}^{\zeta})$ in the limit, considered as a left module over itself. Similarly, each $\HO(\mathcal{A}_{W,\mu}^{\zeta})$-module $\HO(\mathcal{F}_{W,\mathbf{f},\mu}^{\zeta})$ is cyclic, generated by $\HO(\mathcal{F}_{W,\mathbf{f},0}^{\zeta})$, which is just $\mathbb{Q}$, placed in $\mathbb{Z}^{Q_0}$-degree zero. 
For $\mathbf{f}\geq\mathbf{f}'$ there is a natural surjective map of $\HO(\mathcal{A}_{W,\mu}^{\zeta})$-modules $\HO(\mathcal{F}_{W,\mathbf{f},\mu}^{\zeta})\rightarrow \HO(\mathcal{F}_{W,\mathbf{f}',\mu}^{\zeta})$, and as we let $\mathbf{f}$ tend to infinity in all arguments, $\HO(\mathcal{F}^{\zeta}_{W,\mathbf{f},\mu})$ tends to $\HO(\mathcal{A}^{\zeta}_{W,\mu})$ as a left $\HO(\mathcal{A}^{\zeta}_{W,\mu})$-module. \end{thmx} Coming back to the perverse filtration on $\HO(\mathcal{A}_{W,\mu}^{\zeta})$, since the forgetful map $\pi_{\mathbf{f},\dd}^{\zeta}$ is proper, we obtain perverse filtrations on $\HO(\mathcal{F}_{W,\mathbf{f},\mu}^{\zeta})$ that give rise to a perverse filtration $\Pf$ on the limit $\HO(\mathcal{A}^{\zeta}_{W,\mu})$ by Theorem \ref{repthm}. By proving that the product and the localised coproduct on $\HO(\mathcal{A}_{W,\mu}^{\zeta})$ introduced in \cite{Da13} preserve the perverse filtration, we arrive at the following theorem regarding the cohomological Hall algebra of semistable representations of the Jacobi algebra whose dimension vectors are sent to a fixed ray in the upper half plane by a generic Bridgeland stability condition. \begin{thmx}[PBW theorem]\label{qea} The localised bialgebra structure on $\HO(\mathcal{A}^{\zeta}_{W,\mu})$ induces a (non-localised) Hopf algebra structure on $\Gr_{\Pf}(\HO(\mathcal{A}_{W,\mu}^{\zeta}))$. If $\zeta$ is $\mu$-generic, this Hopf algebra is generated by its primitive elements, so that $\Gr_{\Pf}(\HO(\mathcal{A}_{W,\mu}^{\zeta}))$ is the universal enveloping algebra of the Lie algebra of its primitive elements.
There is a canonical isomorphism in $\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathcal{M}_{\mu}^{\zeta\sst}))$: \[ \FreeComm_{\boxtimes_{\oplus}}\left(\mathcal{DT}_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right)\rightarrow \Ho(\Coha_{W,\mu}^{\zeta}) \] that is realized via a canonical embedding $\mathcal{DT}_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\subset \Ho(\Coha_{W,\mu}^{\zeta})$ and the Cohomological Hall algebra multiplication. There is a similar non-canonical isomorphism over the base $\Lambda^{\zeta}_{\mu}$ consisting of dimension vectors of slope $\mu$: \[ \Sym_{\boxtimes_+}\left(\DT_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right)\rightarrow\HO(\mathcal{A}_{W,\mu}^{\zeta}), \] where the map is defined via the cohomological Hall algebra product on $\HO(\Coha_{W,\mu}^{\zeta})$. \end{thmx} We finally prove a PBW theorem on the base $\mathcal{M}$, instead of the base $\mathcal{M}^{\zeta\sst}_{\mu}$. Our result should be thought of as a cohomological lift of the following incarnation of the wall crossing formula, with $\int$ denoting the Kontsevich--Soibelman integration map, \begin{equation} \label{WCI} \int[\mathfrak{M},\phi_{W}]=\prod_{\infty\xrightarrow{\mu}-\infty}\int[\mathfrak{M}^{\zeta\sst}_{\mu},\phi_{W}]\:\:{\large \substack{\\\\ \Huge \mathrm{=}\\ \scriptsize \substack{\mathrm{integrality}\\ \mathrm{conjecture}}}}\prod_{\infty\xrightarrow{\mu}-\infty}\sum_{\dd\in\Lambda_{\mu}^{\zeta}}\Omega_{W,\dd}^{\zeta,\textrm{motivic}}\cdot [B\mathbb{C}^*]. \end{equation} This is considered as an identity in the motivic quantum torus of finite-dimensional modules of the Jacobi algebra for the pair $(Q,W)$.
Instead of defining any of these terms we refer the reader to \cite{JoyceMF} and \cite{KS1} for more details on motivic Hall algebras and integration maps in the general theory of motivic Donaldson--Thomas invariants, or \cite{DaMe4} for the specific case of the motivic Hall algebra of representations of a Jacobi algebra. In addition to categorifying the equality (\ref{WCI}) to an isomorphism, we prove that this isomorphism can be realised in terms of the multiplication in the cohomological Hall algebra. We define \begin{align*} \Ho(\mathcal{A}_W):=&\bigoplus_{\dd\in\mathbb{N}^{Q_0}}\Ho\left((\mathfrak{M}_{\dd}\rightarrow\mathcal{M})_*\phi_{\mathfrak{Tr}(W)}\otimes\mathbb{L}^{-\dim(\mathfrak{M}_{\dd})/2}\right) \end{align*} where $\mathfrak{M}$ is the stack of all finite-dimensional $\mathbb{C}Q$-modules, and $\mathcal{M}$ is the coarse moduli space of all finite-dimensional $\mathbb{C}Q$-modules. \begin{thmx}[Cohomological wall crossing theorem] \label{strongPBW} Suppose that $\zeta$ is $\mu$-generic for every slope $\mu$. Then there is an isomorphism \begin{equation} \label{gthmd} \boxtimes_{\oplus,\infty \xrightarrow{\mu}-\infty}^{\tw} \left(\Sym_{\boxtimes_{\oplus}}\left(q_{\mu,*}^{\zeta}\mathcal{DT}_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right)\right)\xrightarrow{} \Ho(\Coha_W) \end{equation} inside $\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathcal{M}))$, realised by picking an embedding of $q_{\mu,*}^{\zeta}\mathcal{DT}_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\subset \Ho(\Coha_W)$ for all slopes $\mu$, where $q_{\mu}^{\zeta}\colon \mathcal{M}_{\mu}^{\zeta\sst}\rightarrow\mathcal{M}_{\mu}$ is the affinization map, and then using the product on $\Ho(\Coha_{W})$. Here the product $\boxtimes_{\oplus,\infty \xrightarrow{\mu}-\infty}^{\tw}$ is an ordered product, taken over descending slopes.
Similarly, there is an isomorphism \begin{equation} \boxtimes_{+,\infty \xrightarrow{\mu}-\infty}^{\tw} \left(\Sym_{\boxtimes_{+}}\left(\DT_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right)\right)\xrightarrow{} \HO(\Coha_W) \end{equation} realised by picking an embedding $\DT_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\subset \HO(\mathcal{A}_{W,\mu})$ for all $\mu$ and using the multiplication in the absolute version of the cohomological algebra introduced by Kontsevich and Soibelman. \end{thmx} The above theorem means that there are essentially different PBW bases for different generic stability conditions, where the characteristic functions of different PBW bases are related to each other via the famous wall crossing formula of Joyce and Song, or Kontsevich and Soibelman. The fact that different PBW bases can have different characteristic functions is explained by the definition of the monoidal product $\boxtimes^{\tw}_{\oplus}$ -- given in Subsection \ref{prodss}. The most important feature for explaining the relevance of wall crossing formulas is that in contrast with the product $\boxtimes_{\oplus}$, $\boxtimes^{\tw}_{\oplus}$ is not symmetric; this is the categorical analogue of the fact that the quantum torus utilized in \cite{KS1} as the image of the integration map from the motivic Hall algebra is generally not commutative. In short, Theorem \ref{strongPBW} contains a cohomological lift of the wall crossing formula as well as the integrality theorem. \subsection{Acknowledgements} During the writing of this paper, Ben Davison was a postdoc at EPFL, supported by the Advanced Grant ``Arithmetic and physics of Higgs moduli spaces'' No. 320593 of the European Research Council. He would like to sincerely thank Davesh Maulik for the many stimulating conversations that helped this paper along the way. 
Sven Meinhardt wants to thank Tam\'{a}s Hausel for giving him the opportunity to visit Lausanne and Markus Reineke for providing a wonderful atmosphere in Wuppertal to complete this research. \subsection{How to read this paper} The paper in its current form is somewhat long, and may seem a little technical. It is broken roughly into three parts: there is the introduction above, a second part which goes up to Section \ref{cwcfSec}, and then a third part. The purpose of the second part is to prove the cohomological integrality theorem (Theorem \ref{ThmA}) and the cohomological wall crossing formula - i.e. the existence of isomorphisms as in the statement of Theorem \ref{strongPBW}, without reference to a cohomological Hall algebra. For applications in cohomological Donaldson--Thomas theory and beyond, the existence of these isomorphisms often seems to be enough, and indeed this existence is already enough to deduce the integrality theorem. The intricacy of the paper increases as we move into the third part, which is dedicated to proving that these isomorphisms can be realised as PBW isomorphisms in the cohomological Hall algebra. Therefore the reader who cares only about integrality and wall crossing for cohomological Donaldson--Thomas invariants can safely treat this paper as ending with Section \ref{cwcfSec} -- hopefully this makes the paper less daunting. \section{Hodge theory and equivariant vanishing cycles} \subsection{Monodromic mixed Hodge modules} \label{MMHMs} Let $X$ be a complex variety. Then we define as in \cite{Saito89,Saito90} the category $\MHM(X)$ of mixed Hodge modules on $X$. See \cite{Saito1} for an overview of the theory. There is an exact functor $\rat\colon \mathcal{D}(\MHM(X))\rightarrow\mathcal{D}(\perv(X))$ which takes a complex of mixed Hodge modules $\mathcal{F}$ to its underlying complex of perverse sheaves, and commutes with $f_*,f_!,f^*,f^!,\mathbb{D}_X$ and tensor product. 
If no remark is made to the contrary, and a non-derived target or domain category is not specified, all these functors, and indeed all functors for the entirety of this paper, will be considered as derived functors, even if their non-derived versions are well-defined. \smallbreak Fix $X$ and let $\mathcal{B}_X$ be the full subcategory containing those $\mathcal{F}\in\MHM(X\times\mathbb{A}^1)$ such that for each $x\in X$, the total cohomology of $(\{x\}\times\mathbb{G}_m\rightarrow X\times\AA^1)^*\mathcal{F}$ is an admissible variation of mixed Hodge structure on $\mathbb{G}_m$. Via Saito's description of $\MHM(X\times\mathbb{A}^1)$, $\mathcal{B}_X\subset \MHM(X\times\mathbb{A}^1)$ is the full subcategory, closed under extensions, and containing the following types of objects \begin{enumerate} \item Mixed Hodge modules $\tilde{\mathcal{IC}}_{Y\times \mathbb{A}^1}(\mathcal{L})[\dim(Y)+1]$, where $Y\subset X$ is an irreducible closed subvariety, and $\mathcal{L}$ is a pure variation of Hodge structure on $Y'\times\mathbb{G}_m$, for $Y'$ an open dense subvariety of $Y_{\reg}$ \item Mixed Hodge modules $\tilde{\mathcal{IC}}_{Y\times \{0\}}(\mathcal{L})[\dim(Y)]$, for $\mathcal{L}$ a pure variation of Hodge structure on $Y'\subset Y_{\reg}\subset X$ as above. \end{enumerate} Here $\tilde{\mathcal{IC}}_{Y\times \mathbb{A}^1}(\mathcal{L})[\dim(Y)+1]$ and $\tilde{\mathcal{IC}}_{Y\times \{0\}}(\mathcal{L})[\dim(Y)]$ are the lifts of intersection complexes to mixed Hodge modules defined in \cite[Thm.3.21]{Saito90}. Inside $\mathcal{B}_X$ there is a Serre subcategory $\mathcal{C}_X$ given by the full subcategory of $\MHM(X\times\AA^1)$ containing those objects $\mathcal{F}$ such that the total cohomology of each $(\{x\}\times\AA^1\rightarrow X\times \AA^1)^*\mathcal{F}$ is an admissible variation of mixed Hodge structure on $\mathbb{A}^1$. 
By the rigidity theorem \cite[Thm.4.20]{StZu85}, any such mixed Hodge module is constant once restricted to the fibres $\{x\}\times\mathbb{A}^1$, and $\mathcal{C}_X$ is the essential image of the functor $\pi^*[1]\colon \MHM(X)\rightarrow\MHM(X\times\mathbb{A}^1)$, where \[ \pi\colon X\times\mathbb{A}^1\rightarrow X \] is the projection. We define following \cite[Sec.7]{KS2} the category $\MMHM(X)=\mathcal{B}_X/\mathcal{C}_X$. \smallbreak The natural pushforward $(X\times\mathbb{G}_m\rightarrow X\times\AA^1)_!\colon \MHM(X\times\mathbb{G}_m)^{\mon}\rightarrow\MMHM(X)$ is an equivalence of categories, where $\MHM(X\times\mathbb{G}_m)^{\mon}$ is the full subcategory of $\MHM(X\times\mathbb{G}_m)$ containing those $\mathcal{F}$ such that the total cohomology of each pullback $(\{x\}\times\mathbb{G}_m\rightarrow X\times\mathbb{G}_m)^*\mathcal{F}$ is an admissible variation of mixed Hodge structure. An explicit inverse equivalence $\Theta_X$ is provided by \[ \mathcal{F}\mapsto (X\times\mathbb{G}_m\rightarrow X\times\AA^1)^*\Psi_X\mathcal{F}. \] where \[ \Psi_X\mathcal{F}=(X\times\mathbb{A}^2\xrightarrow{\id\times +} X\times \mathbb{A}^1)_*\left(\mathcal{F}\boxtimes (\mathbb{G}_m\rightarrow\mathbb{A}^1)_!\mathbb{Q}_{\mathbb{G}_m}[1]\right). \] \begin{proposition} \label{adjProp} The functor $\Psi_X$ provides a left adjoint to the natural functor $\mathcal{B}_X\rightarrow \mathcal{B}_X/\mathcal{C}_X$ which is also its right inverse. \end{proposition} \begin{proof} Let $h\in \Hom_{\mathcal{B}_X}(\Psi_X\mathcal{F},\mathcal{G})$. 
Then precomposing with the map $\nu_{\mathcal{F}}$ obtained from the following composition \begin{align*} &\mathcal{F}\xrightarrow{\cong} (X\times\mathbb{A}^2\xrightarrow{\id\times +} X\times \mathbb{A}^1)_*\left(\mathcal{F}\boxtimes (\{0\}\hookrightarrow\mathbb{A}^1)_!\mathbb{Q}_{\{0\}}\right)\rightarrow \\&\rightarrow (X\times\mathbb{A}^2\xrightarrow{\id\times +} X\times \mathbb{A}^1)_*\left(\mathcal{F}\boxtimes (\mathbb{G}_m\rightarrow\mathbb{A}^1)_!\mathbb{Q}_{\mathbb{G}_m}[1]\right) \end{align*} we obtain a map $\eta h :=h\circ\nu_{\mathcal{F}}\in\Hom_{\mathcal{B}_X/\mathcal{C}_X}(\mathcal{F},\mathcal{G})$, and the natural transformation of bifunctors $\eta$ can be seen to be an isomorphism on simple elements, proving the first claim. The cone of $\nu_{\mathcal{F}}$ is isomorphic to \[ (X\times\mathbb{A}^2\xrightarrow{\id\times +} X\times \mathbb{A}^1)_*\left(\mathcal{F}\boxtimes \mathbb{Q}_{\mathbb{A}^1}[1]\right) \] which is zero in $\mathcal{D}^{b}(\mathcal{B}_X/\mathcal{C}_X)$, proving the second claim. \end{proof} It follows from Proposition \ref{adjProp} (see \cite[06XM]{Stackproject} for a proof) that the natural functor \[ \mathcal{D}(\mathcal{B}_X)/\mathcal{D}_{\mathcal{C}_X}(\mathcal{B}_X)\rightarrow \mathcal{D}(\mathcal{B}_X/\mathcal{C}_X)=\mathcal{D}(\MMHM(X)) \] is an equivalence of categories, where $\mathcal{D}_{\mathcal{C}_X}(\mathcal{B}_X)$ is the full subcategory of $\mathcal{D}(\mathcal{B}_X)$ consisting of those objects whose cohomology objects lie in $\mathcal{C}_X$. The subcategory $\mathcal{D}_{\mathcal{C}_X}(\mathcal{B}_X)$ is stable under the Verdier duality functor $\mathbb{D}_{X\times\AA^1}$, and so we obtain a Verdier duality functor on $\mathcal{D}(\MMHM(X))$ which we denote $\mathbb{D}^{\mon}_X$. The associated graded object $\Gr_W(\mathcal{F})$ of an object in $\mathcal{C}_X$ with respect to the weight filtration is also in $\mathcal{C}_X$, and so the weight filtration descends to $\MMHM(X)$.
If $f\colon X\rightarrow Y$ is a morphism of varieties, we define the functors $f^!,f^*,f_!,f_*$ to be the functors between the derived categories of monodromic mixed Hodge modules induced by the functors $(f\times\id_{\mathbb{A}^1})^!,(f\times\id_{\mathbb{A}^1})^*,(f\times\id_{\mathbb{A}^1})_!,(f\times\id_{\mathbb{A}^1})_*$ respectively. \begin{definition} If $X$ is considered as a variety $X\xrightarrow{\tau}\mathbb{N}^{Q_0}$ over a monoid $\mathbb{N}^{Q_0}$ of dimension vectors for a quiver $Q$, we define $\HO(X,\mathcal{F})$, for $\mathcal{F}\in\mathcal{D}(\MMHM(X))$, to be the total cohomology of $\tau_*\mathcal{F}$. So $\HO(X,\mathcal{F})$ may be thought of as an $\mathbb{N}^{Q_0}$-graded monodromic mixed Hodge module on a point, i.e. the $\mathbb{N}^{Q_0}$-graded monodromic mixed Hodge structure underlying the total hypercohomology of $\mathcal{F}$. Similarly, we define $\HO_c(X,\mathcal{F})$ to be the total cohomology of $\tau_!\mathcal{F}$. \end{definition} The monoidal structures $\otimes$ on $\mathcal{D}^{\geq}(\MMHM(X))$ and $\mathcal{D}^{\leq}(\MMHM(X))$ are defined by \begin{equation} \label{simmon} \mathcal{F}\otimes\mathcal{G}:=(X\times \AA^1\times \AA^1\xrightarrow{\id \times +}X\times \AA^1)_*(\pr_{1,2}^*\mathcal{F}\otimes\pr_{1,3}^*\mathcal{G}) \end{equation} where $\pr_{i,j}\colon X\times\AA^1\times\AA^1\rightarrow X\times\AA^1$ is the projection onto the $i$th and the $j$th component. \begin{remark} \label{convProdDef} More generally, let $X$ be a monoid in $\Sch(Y)$, the category of schemes over $Y$, with monoid map $\oplus\colon X\times_Y X\rightarrow X$. Then define \[ \mathcal{F}\boxtimes_{\oplus} \mathcal{G}:=(X\times_Y X\times \mathbb{A}^1\times\mathbb{A}^1\xrightarrow{\oplus\times +} X\times\mathbb{A}^1)_*(\pr_{1,3}^*\mathcal{F}\otimes\pr_{2,4}^*\mathcal{G}). \] We recover (\ref{simmon}) in the special case in which $X$ is considered as a monoid in $\Sch(X)$.
In addition, for $\mathcal{F}\in\mathcal{D}^{b}(\MMHM(X))$ and $\mathcal{G}\in\mathcal{D}^{b}(\MMHM(Y))$ we define \[ \mathcal{F}\boxtimes \mathcal{G}:=(X\times Y\times\mathbb{A}^1\times\mathbb{A}^1\xrightarrow{\id_{X\times Y}\times +} X\times Y\times \mathbb{A}^1)_*(\pr_{1,3}^*\mathcal{F}\otimes\pr_{2,4}^*\mathcal{G}). \] Then this external tensor product is biexact and preserves weight filtrations by \cite[Sec.4]{KS2}, \cite[Prop.3.2]{Da16a}, or the proof of Proposition \ref{symmPure} below. \end{remark} There is a fully faithful embedding $\MHM(X)\rightarrow \MMHM(X)$ given by \[ i_*=(X\xrightarrow{x\mapsto (x,0)} X\times\AA^1)_* \] which is furthermore a monoidal functor, commuting with Verdier duality, as $i$ is a closed inclusion. Let $e\colon X\xrightarrow{x\mapsto(x,1)}X\times\mathbb{G}_m$ be the natural inclusion, then $e^*\Theta_X[-1]\colon \MMHM(X)\rightarrow \MHM(X)$ is also faithful (again using rigidity for variations of mixed Hodge structure \cite[Thm.4.20]{StZu85}), and so we can define an exact faithful (non-derived) functor \begin{equation} \label{formDef} \form{X}\colon \MMHM(X)\rightarrow \perv(X) \end{equation} by setting $\form{X}:=\forg_Xe^*\Theta_X[-1]$, where $\forg_X\colon \MHM(X)\rightarrow\perv(X)$ is the usual forgetful functor. \smallbreak Let $f$ be a regular function on a smooth algebraic variety $X$. Define $X_{<0}=f^{-1}(\mathbb{R}_{<0})$ and $X_0=f^{-1}(0)$. We define the functor $\psi_f\colon \mathcal{D}(\perv(X))\rightarrow\mathcal{D}(\perv(X))$ \[ \psi_f:=(X_0\rightarrow X)_*(X_0\rightarrow X)^*(X_{<0}\rightarrow X)_*(X_{<0}\rightarrow X)^* \] and define $\phi_f:=\cone((X_0\rightarrow X)_*(X_0\rightarrow X)^*\rightarrow\psi_f)$; this cone can indeed be made functorial -- see \cite{KSsheaves} for details, or the proof of Proposition \ref{TSeq} below for a functorial definition of $\phi_f$. The functors $\psi_f$ and $\phi_f$ have lifts to endofunctors of $\mathcal{D}(\MHM(X))$, defined by Saito. 
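The following elementary example, recorded here purely for orientation (it is standard, and plays no role in the sequel), illustrates the definitions of $\psi_f$ and $\phi_f$ for a function without critical points. \begin{example} Let $X=\mathbb{A}^1$ and let $f=x$ be the coordinate function, so that $X_0=\{0\}$ and $X_{<0}=\mathbb{R}_{<0}\subset\mathbb{C}$. The stalk at the origin of $(X_{<0}\rightarrow X)_*\mathbb{Q}_{X_{<0}}$ is the limit over small discs $U$ around the origin of $\Ho^*(U\cap\mathbb{R}_{<0},\mathbb{Q})$, i.e. the cohomology of an interval, so the natural map $(X_0\rightarrow X)_*(X_0\rightarrow X)^*\mathbb{Q}_{X}\rightarrow\psi_x\mathbb{Q}_{X}$ is an isomorphism, and $\phi_x\mathbb{Q}_{X}=0$. More generally, for smooth $X$ the complex $\phi_f\mathbb{Q}_X$ is supported on the critical locus of $f$, so the vanishing cycle functor only sees critical points. \end{example}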
In the sequel we consider vanishing cycles always as a functor $\mathcal{D}(\MHM(X))\rightarrow\mathcal{D}(\MMHM(X))$ via the definition of \cite[Def.27]{KS2} \begin{equation} \label{phimdef} \phim{f}:=(X\times\mathbb{G}_m\rightarrow X\times\mathbb{A}^1)_!\phi_{f/u}(X\times\mathbb{G}_m\rightarrow X)^*. \end{equation} In (\ref{phimdef}), $u$ denotes the coordinate on $\mathbb{G}_m$. \begin{example} Let $f=0$. Then $\phi_f(\mathcal{F})\cong\mathcal{F}[1]$, since $\psi_f(\mathcal{F})=0$. On the other hand, by the same argument, $\phim{f}\mathcal{F}=(X\times\mathbb{G}_m\rightarrow X\times\mathbb{A}^1)_!(X\times\mathbb{G}_m\rightarrow X)^*\mathcal{F}[1]$. We deduce that $\phim{f}\mathcal{F}=j_!j^*\mathcal{G}[1]$ where $\mathcal{G}=(X\times\mathbb{A}^1\xrightarrow{\pi} X)^*\mathcal{F}$ and $j\colon X\times\mathbb{G}_m\rightarrow X\times\AA^1$ is the inclusion. Let $i\colon X\times \{0\}\hookrightarrow X\times\mathbb{A}^1$ be the inclusion of the complement. Since $\mathcal{G}$ is by definition trivial in $\MMHM(X)$, we deduce from the distinguished triangle $j_!j^*\rightarrow \id\rightarrow i_*i^*$ that $\phim{f}\mathcal{F}\cong i_*\mathcal{F}$ in $\MMHM(X)$. In other words, $\phim{0}\mathcal{F}\cong\mathcal{F}$, where we consider $\mathcal{F}$ on the right hand side of the isomorphism as a monodromic mixed Hodge module via pushforward along $X\xrightarrow{i} X\times\AA^1$. It follows from exactness of $\phi_{f/u}[-1]$ (a result of Gabber \cite{BBD}), $\pi^*[1]$ and $j_!$ that $\phim{f}$ is exact in general. \end{example} We collect together some useful properties of $\phim{f}$. \begin{proposition} \label{basicfacts} \begin{enumerate} \item \label{monPD}(Poincar\'e duality): There is a natural isomorphism \[ \mathbb{D}^{\mon}_X\phim{f}\cong \phim{f} \mathbb{D}_X\colon \mathcal{D}^{b}(\MHM(X))\rightarrow\mathcal{D}^{b}(\MMHM(X)), \] where $\mathbb{D}_X$ is the usual Verdier duality functor. 
\item (Interaction with adjunction):\label{adjInt} If $p\colon X\rightarrow Y$ is a map of varieties, and $f$ is a regular function on $Y$, then there is a natural transformation $\phim{f}p_*\rightarrow p_*\phim{fp}$, which is moreover a natural isomorphism if $p$ is proper. \item \label{HomInv}(Homotopy invariance): Let $p\colon X'\rightarrow X$ be an affine fibration with $d$-dimensional fibres. Then the natural map $\phim{f} \mathcal{F}\rightarrow p_*\phim{f\circ p}p^*\mathcal{F}$ is an isomorphism, as is the natural map $p_!\phim{f\circ p}p^*\mathcal{F}\rightarrow \phim{f}\mathcal{F}\otimes\HO_c(\mathbb{A}^d)$. \item\label{exFact} (Exactness): The functor $\phim{f}\colon \mathcal{D}^{b}(\MHM(X))\rightarrow\mathcal{D}^{b}(\MMHM(X))$ is exact, i.e. it restricts to an exact functor $\MHM(X)\rightarrow\MMHM(X)$. \item (Vanishing cycle decomposition theorem)\label{vanDec} Let $\mathcal{F}\in\MMHM(X)$ be pure, let $p\colon X\rightarrow Y$ be a proper map, and let $f$ be a regular function on $Y$. Then there is a non-canonical isomorphism \[ p_*\phim{fp}\mathcal{F}\cong \Ho(p_*\phim{fp}\mathcal{F}). \] \item (Thom--Sebastiani isomorphism): Let $f_j\colon X_j\rightarrow\mathbb{C}$ be regular functions, for $j=1,2$. Then there is a natural isomorphism of bifunctors \[ \pi_1^*\phim{f_1}(\bullet)\otimes\pi_2^*\phim{f_2}(\bullet)\rightarrow\phim{f_1\boxplus f_2}(\pi_1^*(\bullet)\otimes\pi_2^*(\bullet))|_{f^{-1}_1(0)\times f^{-1}_2(0)}. \] \item (Integral identity): Let $V_+\oplus V_-$ be a $\mathbb{C}^*$-equivariant vector bundle on a space $X$ equipped with a $\mathbb{C}^*$-action, where the weights of the action on $V_+$ are all $1$, and the weights of the action on $V_-$ are all $-1$. Let $f$ be a $\mathbb{C}^*$-invariant function. Below, for a vector bundle $V$ we write $T_{V}$ for the total space of $V$.
Then the natural map \[ (T_{V_+\oplus V_-}\rightarrow X)_!\phim{f}(\mathbb{Q}_{T_{V_+\oplus V_-}}\rightarrow (T_{V_+}\rightarrow T_{V_+\oplus V_-})_*\mathbb{Q}_{T_{V_+}}) \] is an isomorphism. \end{enumerate} \end{proposition} The first five statements follow from the corresponding statements for $\phi_f$. The first of these is proved at the level of perverse sheaves as the main theorem of \cite{Ma09}, and for mixed Hodge modules as the main theorem of \cite{Sai89duality}, see also Sch\"urmann's appendix to \cite{Br12}. The second statement is given by combining \cite[Thm.2.14, Thm.4.3]{Saito90}. Homotopy invariance then follows from the homotopy invariance statement for perverse sheaves. The hard part of the exactness statement is the statement that the shift of the usual vanishing cycle functor, $\phi_f[-1]$, is exact. This is a result of Gabber, and can be found in \cite{BBD}. The version of the decomposition theorem quoted here follows from Saito's version of the decomposition theorem \cite{Saito90}, (\ref{exFact}) and (\ref{adjInt}). The statement regarding the Thom--Sebastiani isomorphism is in fact false for $\phi_f$. The statement involving $\phim{f}$ is due to Saito \cite{Saito10}; again we refer the reader to Sch\"urmann's appendix to \cite{Br12} for the compatibility between this proof and the proof of Massey \cite{Ma01} at the level of complexes of perverse sheaves. The integral identity is proved in the above form in \cite{KS2}. In fact it is the only property that we will not use in this paper. For ease of exposition we make the following simplification in what follows. \begin{assumption} \label{phicheat} For all functions $f\colon X\rightarrow \mathbb{C}$ for which we wish to take $\phim{f}$, if $X$ is smooth, we assume that $\crit(f)\subset f^{-1}(0)$ as sets. 
\end{assumption} Under the assumption, the Thom--Sebastiani isomorphism for constant mixed Hodge modules simplifies to \[ \pi_1^*\phim{f_1}(\mathbb{Q}_{X_1})\otimes\pi_2^*\phim{f_2}(\mathbb{Q}_{X_2})\rightarrow\phim{f_1\boxplus f_2}(\mathbb{Q}_{X_1\times X_2}). \] The assumption can be dropped, with a little care. If it does not hold, one should instead work with the functor $\phi^{\mon,\textrm{fib}}_f:=\bigoplus_{a\in\mathbb{A}_{\mathbb{C}}^1}\phim{f-a}$ -- see \cite{DaMe4}. \begin{definition} Given an element $\mathcal{G}\in\MMHM(X)$, we say it is pure of weight $i$ if $\Gr_W^j\mathcal{G}$ is zero for all $j\neq i$. Given an element $\mathcal{F}\in\mathcal{D}(\MMHM(X))$, we say that $\mathcal{F}$ is pure of weight $i$ if each $\Ho^j(\mathcal{F})$ is pure of weight $i+j$, or we just say $\mathcal{F}$ is \textit{pure} if it is pure of weight zero. \end{definition} Consider the embedding $\mathcal{D}^{b}(\MHM(\pt))\xrightarrow{i_*}\mathcal{D}^{b}(\MMHM(\pt))$. The former category contains the element \[ \mathbb{L}:=\HO_c(\mathbb{A}^1,\mathbb{Q}), \] which is pure. There is no square root of $\mathbb{L}$ in $\mathcal{D}^{b}(\MHM(\pt))$, i.e. an element $\mathbb{L}^{1/2}$ such that $(\mathbb{L}^{1/2})^{\otimes 2}\cong \mathbb{L}$, but after embedding in $\mathcal{D}^{b}(\MMHM(\pt))$ there is a choice of square roots. We set $\mathbb{L}^{1/2}:=\HO(\mathbb{A}^1,\phim{x^2}\mathbb{Q}_{\mathbb{A}^1})$, where $x^2\colon \mathbb{A}^1\rightarrow\mathbb{C}$ is considered as a regular function, and $\mathbb{Q}_{\mathbb{A}^1}$ is the constant mixed Hodge module on $\mathbb{A}^1$. Since $\mathbb{L}^{1/2}$ is the tensor square root of $\mathbb{L}$, we deduce that it is pure, concentrated in cohomological degree 1, by Remark \ref{convProdDef}. Note that due to the definition of the category of mixed Hodge modules, this cohomological degree is with respect to the perverse t structure on the underlying category of constructible sheaves on $\mathbb{A}^1$.
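To make the choice of square root concrete, we record the following well known computation; it is not needed in the sequel, but explains why $\mathbb{L}^{1/2}$ deserves its name. \begin{example} The function $x^2$ on $\mathbb{A}^1$ has a single nondegenerate critical point at the origin, with Milnor fibre a pair of points, swapped by the monodromy. It follows that $\mathbb{L}^{1/2}=\HO(\mathbb{A}^1,\phim{x^2}\mathbb{Q}_{\mathbb{A}^1})$ is one-dimensional, with monodromy acting by $-1$; in particular it is genuinely monodromic, i.e. it does not lie in the image of $i_*\colon\mathcal{D}^{b}(\MHM(\pt))\rightarrow\mathcal{D}^{b}(\MMHM(\pt))$. By the Thom--Sebastiani isomorphism there is an isomorphism \[ \mathbb{L}^{1/2}\otimes\mathbb{L}^{1/2}\cong\HO(\mathbb{A}^2,\phim{x^2+y^2}\mathbb{Q}_{\mathbb{A}^2}), \] and the right hand side is isomorphic to $\mathbb{L}$, since the Milnor fibre of $x^2+y^2$ at the origin is homotopy equivalent to a circle, with trivial monodromy action on its cohomology. \end{example}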
If $\mathcal{F}\in\MMHM(X)$ and $\mathcal{G}\in\MMHM(\pt)$ we use the abbreviation \[ \mathcal{F}\otimes\mathcal{G}:=\mathcal{F}\otimes \tau^*\mathcal{G}, \] where $\tau\colon X\rightarrow \pt$ is the map to a point. If $X$ is a smooth equidimensional variety, we define $\mathcal{IC}_{X}(\mathbb{Q}):=\mathbb{Q}_X\otimes\mathbb{L}^{-\dim(X)/2}$. This is pure, and an element of $\MMHM(X)$, again by Remark \ref{convProdDef}. If $X$ is not smooth, but is irreducible, we define $\mathcal{IC}_X(\mathbb{Q}):=\tilde{\mathcal{IC}}_X(\mathbb{Q}_{X_{\reg}})\otimes\mathbb{L}^{-\dim(X_{\reg})/2}$, where $\tilde{\mathcal{IC}}_X(\mathbb{Q}_{X_{\reg}})$ is the usual intersection cohomology mixed Hodge module complex of $X$. If $X$ is a disjoint union of irreducible varieties, we define \[ \mathcal{IC}_X(\mathbb{Q}):=\bigoplus_{Z\in\pi_0(X)}\mathcal{IC}_Z(\mathbb{Q}), \] and define \[ \phim{f}\mathcal{IC}_X(\mathbb{Q})=\bigoplus_{Z\in\pi_0(X)}\left(\phim{f}\tilde{\mathcal{IC}}_Z(\mathbb{Q}_{Z_{\reg}})\otimes\mathbb{L}^{-\dim(Z_{\reg})/2}\right). \] \begin{remark} Because of the shift in the definition of $\mathcal{IC}_X(\mathbb{Q})$, there are natural isomorphisms $\mathbb{D}^{\mon}_X\mathcal{IC}_X(\mathbb{Q})\cong\mathcal{IC}_X(\mathbb{Q})$ and $\mathbb{D}^{\mon}_X\phim{f}\mathcal{IC}_X(\mathbb{Q})\cong\phim{f}\mathcal{IC}_X(\mathbb{Q})$ for all $X$. \end{remark} For $X$ a smooth equidimensional variety we define \begin{align*} \HO(X,\mathbb{Q})_{\vir}:=&\HO(X,\mathcal{IC}_X(\mathbb{Q}))\\\cong&\HO(X,\mathbb{Q})\otimes\mathbb{L}^{-\dim(X)/2}\in\mathcal{D}^{b}(\MMHM(\pt)), \end{align*} and observing that $\mathbb{C}\mathbb{P}^{\infty}\cong \textrm{B}\mathbb{C}^*$, which has dimension $-1$ as a stack, extend this notation by defining \[ \HO(\mathbb{C}\mathbb{P}^{\infty},\mathbb{Q})_{\vir}:=\HO(\mathbb{C}\mathbb{P}^{\infty},\mathbb{Q})\otimes\mathbb{L}^{1/2}\in \mathcal{D}(\MMHM(\pt)). 
\] \subsection{Equivariant vanishing cycles} \label{TotCon} We do not propose here to write down a full six-functor and vanishing cycle formalism for equivariant monodromic mixed Hodge modules via a theory of monodromic mixed Hodge modules on stacks. Instead, mimicking the constructions of \cite{BL94} and Totaro's approximation \cite{To99} of the Chow ring of a classifying space by the Chow rings of finite-dimensional algebraic varieties, we produce only the definitions and constructions that we will need for the rest of this paper. We will want to be able to work with equivariant monodromic mixed Hodge modules in the following generality. We assume that we have a $G$-action on a smooth algebraic variety $X$, for $G$ an affine algebraic group, and a regular function $f$ on the stack $X/G$, i.e. a $G$-invariant regular function on $X$. Furthermore we assume that we are given a map of stacks $p\colon X/G\rightarrow Y$, where $Y$ is another complex variety. We will also assume that $G$ is special, i.e. all \'etale locally trivial principal $G$-bundles are Zariski locally trivial -- see \cite[Def.2.1]{JoyceMF} for a concise discussion of this condition in this context, or \cite{Chev58} for the original references. As noted in \cite{JoyceMF}, all such $G$ are connected. The object $\Ho\left(p_*\phim{f}\mathcal{IC}_X(\mathbb{Q})\right)$ will be an element of $\mathcal{D}^{\geq}(\MMHM(Y))$, the derived category of bounded below complexes of monodromic mixed Hodge modules on $Y$. We define $\Ho\left(p_*\phim{f}\mathcal{IC}_X(\mathbb{Q})\right)$ as follows. Let $V_1\subset V_2\subset\ldots$ be an ascending chain of $G$-representations, and let $U_1\subset U_2\subset\ldots$ be an ascending chain of subvarieties of the underlying vector spaces of $V_1,V_2,\ldots$, considered as $G$-equivariant algebraic varieties.
We suppose that $G$ acts scheme-theoretically freely on each $U_i$, that each principal bundle quotient $U_i\rightarrow U_i/G$ exists in the category of schemes, and that $\codim_{V_i}(V_i\setminus U_i)\rightarrow \infty$ as $i\rightarrow\infty$. The map $X\times U_i\rightarrow X\times_G U_i$ exists as a principal bundle quotient in the category of schemes by \cite[Prop.23]{EdGr98}. We define $f_i\colon X\times_G U_i\rightarrow \mathbb{C}$ to be the induced map, and $\iota_i\colon X\times_G U_i\rightarrow X\times_G U_{i+1}$ to be the inclusion. To fix notation, we choose an embedding $G\subset \Gl_t(\mathbb{C})$ for some $t$. We set \[ V_i:=\Hom(\mathbb{C}^i,\mathbb{C}^t), \] and define $U_i\subset V_i$ to be the subscheme of surjective maps -- if $i\geq t$ then $U_i$ does indeed carry a free $G$-action via the $\Gl_t(\mathbb{C})$-action on $\mathbb{C}^t$. Arguing as in \cite[Prop.23]{EdGr98} there is a covering of $X$ by quasiprojective subvarieties $Y_i$ carrying $\Gl_t(\mathbb{C})$-equivariant line bundles, relatively ample for the projection $Y_i\times U\rightarrow U$, so by the induced $G$-invariance and \cite[Prop.7.1]{MFK94} we deduce that there is a principal $G$-bundle quotient $X\times U\rightarrow X\times_G U$. For each $i$, we define $X_i:=X\times_G U_i$, and we denote by $p_i\colon X_i\rightarrow Y$ the map induced by $p$. We obtain an explicit sequence of maps \[ p_{i+1,*}\phim{f_{i+1}}\mathbb{Q}_{X_{i+1}}\rightarrow p_{i+1,*}\iota_{i,*}\phim{f_i}\mathbb{Q}_{X_i}= p_{i,*}\phim{f_i}\mathbb{Q}_{X_i} \] and so a sequence of mixed Hodge modules \[ \mathcal{F}_i:=p_{i,*}\phim{f_i}\mathbb{Q}_{X_i}. \] \begin{proposition} \label{stabProp} Fix $n\in\mathbb{N}$. Then for $i\gg 0$ the map \begin{equation} \label{evIs} \Lambda\colon \mathcal{H}^{n}(Y,\mathcal{F}_{i+1})\rightarrow \mathcal{H}^n(Y,\mathcal{F}_i) \end{equation} in $\MMHM(Y)$ is an isomorphism.
\end{proposition} \begin{proof} Since the functor $\form{Y}\colon \MMHM(Y)\rightarrow\Perv(Y)$ of equation (\ref{formDef}) is faithful, it suffices to show that (\ref{evIs}) induces an isomorphism at the level of perverse sheaves, with $\mathcal{F}_i$ replaced by $p_{i,*}\phi_{f_i}\mathbb{Q}_{X_i}$. Suppose we can prove the same proposition, but with the map $\Lambda$ replaced by the map \[ \Lambda_{\con}\colon \mathcal{H}_{\con}^{n}(Y,\mathcal{F}_{i+1})\rightarrow \mathcal{H}_{\con}^n(Y,\mathcal{F}_i) \] of constructible sheaves on $Y$ (here and below we use $\Ho_{\con}$ to denote constructible cohomology sheaves). Then for sufficiently large $i$, \[ \cone(\mathcal{F}_{i+1}\rightarrow\mathcal{F}_i)\in \mathcal{D}_{\con}^{\geq n+1}(Y)\subset {}^{\mathfrak{p}}\mathcal{D}^{\geq n+1}(Y) \] and the proposition follows. Consider the space $U_{i,i+1}\subset \Hom(\mathbb{C}^{i+1},\mathbb{C}^t)$ of linear maps which are surjective after precomposing with the inclusion \[ \mathbb{C}^{i}\hookrightarrow \mathbb{C}^{i+1},\qquad (z_1,\ldots,z_{i})\mapsto (z_1,\ldots,z_{i},0). \] We denote by $j_i$ the inclusion \[ j_i\colon U_{i,i+1}\hookrightarrow U_{i+1}. \] Note that for large $i$, $G$ acts freely on $U_{i,i+1}$, and there is a $G$-equivariant affine fibration $\tau_{i,i+1}\colon U_{i,i+1}\rightarrow U_i$ given by $(\tau_{i,i+1}f)(z):= f(z,0)$. Define $X_{i,i+1}:=X\times_G U_{i,i+1}$, denote by $f_{i,i+1}\colon X_{i,i+1}\rightarrow \mathbb{C}$ the function induced by $f$, and denote by \begin{equation} \label{taffDef} \iota_{i,i+1}\colon X_{i,i+1}\rightarrow X_{i+1} \end{equation} the open embedding induced by $j_{i}$. The projection $\tau_{i,i+1}$ induces an affine fibration $t_{i,i+1}\colon X_{i,i+1}\rightarrow X_{i}$ with zero section \begin{equation} z_i\colon X_i\rightarrow X_{i,i+1}, \end{equation} and we define the induced map \begin{equation} p_{i,i+1}\colon X_{i,i+1}\xrightarrow{t_{i,i+1}}X_i\xrightarrow{p_i} Y.
\end{equation} We factorise $\Lambda_{\con}$ as the composition \[ \Ho^n_{\con}(p_{i+1,*}\phi_{f_{i+1}}\mathbb{Q}_{X_{i+1}})\xrightarrow{a} \Ho^n_{\con}(p_{i+1,*}\iota_{i,i+1,*}\phi_{f_{i,i+1}}\mathbb{Q}_{X_{i,i+1}})\rightarrow \Ho^n_{\con}(p_{i,*}\phi_{f_i}\mathbb{Q}_{X_i}) \] where the second map is an isomorphism by homotopy invariance. So it is enough to show that the map $a$ is an isomorphism for sufficiently large $i$. Consider the function $\overline{f}_{i+1}\colon X\times U_{i+1}\rightarrow \mathbb{C}$ given by the composition $X\times U_{i+1}\xrightarrow{\pi } X\xrightarrow{f}\mathbb{C}$, and define $\overline{f}_{i,i+1}\colon X\times U_{i,i+1}\rightarrow \mathbb{C}$ similarly. Let $\overline{\iota}_{i,i+1}\colon X\times U_{i,i+1}\rightarrow X\times U_{i+1}$ be the inclusion. Then \[ \phi_{\overline{f}_{i+1}}\mathbb{Q}_{X\times U_{i+1}}\cong\phi_{f}\mathbb{Q}_{X}\boxtimes \mathbb{Q}_{U_{i+1}} \] and \[ \left(\phi_{\overline{f}_{i+1}}\mathbb{Q}_{X\times U_{i+1}}\rightarrow\overline{\iota}_{i,i+1,*}\phi_{\overline{f}_{i,i+1}}\mathbb{Q}_{X\times U_{i,i+1}}\right)=\id_{\phi_{f}\mathbb{Q}_{X}}\boxtimes (\mathbb{Q}_{U_{i+1}}\rightarrow j_{i,*}\mathbb{Q}_{U_{i,i+1}}). \] For fixed $m$, $\Ho_{\con}^m(\mathbb{Q}_{U_{i+1}})\rightarrow \Ho_{\con}^m(\mathbb{Q}_{U_{i,i+1}})$ is an isomorphism for sufficiently large $i$, since the codimension of $U_{i+1}\setminus U_{i,i+1}$ inside $U_{i+1}$ goes to infinity as $i$ goes to infinity. Since the external tensor product is exact, we deduce that for fixed $m$ and sufficiently large $i$, \[ \Ho^m_{\con}(\phi_{\overline{f}_{i+1}}\mathbb{Q}_{X\times U_{i+1}}\rightarrow\overline{\iota}_{i,i+1,*}\phi_{\overline{f}_{i,i+1}}\mathbb{Q}_{X\times U_{i,i+1}}) \] is an isomorphism.
On the other hand, taking a Zariski open subspace $C\subset X_{i+1}$ such that the principal bundle $\overline{C}=C\times_{X_{i+1}}(X\times U_{i+1})\rightarrow C$ is trivial, we have that \begin{align*} &\Ho_{\con}(\phi_{\overline{f}_{i+1}}\mathbb{Q}_{X\times U_{i+1}}\rightarrow\overline{\iota}_{i,i+1,*}\phi_{\overline{f}_{i,i+1}}\mathbb{Q}_{X\times U_{i,i+1}})|_{\overline{C}}\cong\\& \Ho_{\con}(\phi_{f_{i+1}}\mathbb{Q}_{X_{i+1}}\rightarrow\iota_{i,i+1,*}\phi_{f_{i,i+1}}\mathbb{Q}_{X_{i,i+1}})|_{C}\boxtimes\id_{\mathbb{Q}_{G}}, \end{align*} and so using exactness of external tensor product again, we deduce that for fixed $n$ and sufficiently large $i$, the map \[ \Ho^n_{\con}(\phi_{f_{i+1}}\mathbb{Q}_{X_{i+1}}\rightarrow\iota_{i,i+1,*}\phi_{f_{i,i+1}}\mathbb{Q}_{X_{i,i+1}}) \] is an isomorphism. The pushforward $p_{i+1,*}$ maps $\mathcal{D}_{\con}^{\geq n+1}(X_{i+1})\rightarrow \mathcal{D}_{\con}^{\geq n+1}(Y)$, and so \[ \cone\left(p_{i+1,*}\phi_{f_{i+1}}\mathbb{Q}_{X_{i+1}}\rightarrow p_{i+1,*}\iota_{i,i+1,*}\phi_{f_{i,i+1}}\mathbb{Q}_{X_{i,i+1}}\right)\in\mathcal{D}_{\con}^{\geq n+1}(Y) \] and the map $a$ is an isomorphism, as required. \end{proof} We define $\Ho\left(p_*\phim{f}\mathbb{Q}_X\right)$ to be the complex \[ \ldots\xrightarrow{0}\Ho^{n-1}(\lim_{i\to\infty}p_{i,*}\phim{f_i}\mathbb{Q}_{X_i})\xrightarrow{0}\Ho^{n}(\lim_{i\to\infty}p_{i,*}\phim{f_i}\mathbb{Q}_{X_i})\xrightarrow{0}\ldots \] which is well defined by Proposition \ref{stabProp}.
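To orient the reader, we spell out the most basic instance of this construction; the computation is classical, and serves only as a consistency check on the conventions. \begin{example} Let $G=\mathbb{C}^*=\Gl_1(\mathbb{C})$, so that $t=1$, $V_i=\Hom(\mathbb{C}^i,\mathbb{C})$ and $U_i=V_i\setminus\{0\}$, with $\codim_{V_i}(V_i\setminus U_i)=i\rightarrow\infty$. Take $X=\pt$, $f=0$ and $p\colon \pt/\mathbb{C}^*\rightarrow Y=\pt$. Then $X_i=U_i/\mathbb{C}^*\cong\mathbb{P}^{i-1}$, and since $\Ho(\mathbb{P}^{i-1},\mathbb{Q})\cong\bigoplus_{n=0}^{i-1}\mathbb{L}^{n}$ stabilises in each cohomological degree as $i\rightarrow\infty$, we obtain \[ \Ho\left(p_*\phim{0}\mathbb{Q}_{\pt/\mathbb{C}^*}\right)\cong\bigoplus_{n\geq 0}\mathbb{L}^{n}\cong\HO(\mathbb{C}\mathbb{P}^{\infty},\mathbb{Q}), \] as expected, since $\pt/\mathbb{C}^*\cong\mathrm{B}\mathbb{C}^*$. Twisting by $\mathbb{L}^{(\dim(G)-\dim(X))/2}=\mathbb{L}^{1/2}$, as in the normalisation by intersection complexes below, recovers $\HO(\mathbb{C}\mathbb{P}^{\infty},\mathbb{Q})_{\vir}$. \end{example}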
If $\mathfrak{M}=\coprod_{s\in S} X_s/G_s$, with each $X_s$ a smooth algebraic variety, and each $G_s$ a special affine algebraic group, and $p\colon \mathfrak{M}\rightarrow Y$ is a map to an algebraic variety, we define \[ \Ho\left(p_*\phim{f}\mathbb{Q}_{\mathfrak{M}}\right):=\bigoplus_{s\in S} \Ho(p_*\phim{f|_{X_s/G_s}}\mathbb{Q}_{X_s/G_s}) \] and \[ \Ho(p_*\phim{f}\mathcal{IC}_{\mathfrak{M}}(\mathbb{Q})):=\bigoplus_{s\in S}\Ho(p_*\phim{f|_{X_s/G_s}}\mathbb{Q}_{X_s/G_s})\otimes \mathbb{L}^{(\dim(G_s)-\dim(X_s))/2}. \] We define $\Ho\left(p_!\phim{f}\mathbb{Q}_{\mathfrak{M}}\right)\in \mathcal{D}^{\leq}(\MMHM(Y))$ in similar fashion. We give the definition in the case that $\mathfrak{M}=X/G$, for $X$ a smooth irreducible variety and $G$ a special affine algebraic group, and extend to the case of a disjoint union of such stacks as above. Consider the isomorphism $\Ho(\iota_{i,i+1,!}\phim{f_{i,i+1}}\mathbb{Q}_{X_{i,i+1}})\cong \Ho(\phim{f_{i}}\mathbb{Q}_{X_{i}})\otimes\mathbb{L}^{\dim(X_{i+1})-\dim(X_i)}$ where $\iota_{i,i+1}$ is as defined in (\ref{taffDef}). Arguing as in Proposition \ref{stabProp}, for fixed $n$ the natural map \[ \Ho_{\con}^{2\dim(U_{i+1})+n}((X_{i,i+1}\rightarrow X_{i+1})_!\phim{f_{i,i+1}}\mathbb{Q}_{X_{i,i+1}})\rightarrow \Ho_{\con}^{2\dim(U_{i+1})+n}(\phim{f_{i+1}}\mathbb{Q}_{X_{i+1}}) \] is an isomorphism for sufficiently large $i$, and we can define $\Ho^n\left(p_!\phim{f}\mathbb{Q}_{X/G}\right)$ to be the limit of $\Ho^n\left(p_{i,!}\phim{f_i}\mathbb{Q}_{X_i}\otimes \mathbb{L}^{-\dim(U_i)}\right)$ as $i$ tends to infinity, and \[ \Ho\left(p_!\phim{f}\mathcal{IC}_{X/G}(\mathbb{Q})\right):=\Ho\left(p_!\phim{f}\mathbb{Q}_{X/G}\right)\otimes\mathbb{L}^{(\dim(G)-\dim(X))/2}. 
\] \begin{remark} Note that we do not offer here a definition of $p_*\phim{f}\mathbb{Q}_{X/G}$ or $p_!\phim{f}\mathbb{Q}_{X/G}$ as objects of the derived category, but instead limit ourselves to defining the total cohomology of these direct images with respect to the natural t-structure on $\mathcal{D}(\MMHM(Y))$ (i.e. the pullback of the natural t-structure on $\mathcal{D}(\Perv(Y))$ under $\form{Y}$). Of course, in the case that $X/G$ is an actual scheme, $p_*\phim{f}\mathbb{Q}_{X/G}$ and $p_!\phim{f}\mathbb{Q}_{X/G}$ are well defined before passing to cohomology, and our definition recovers their respective total perverse cohomology. \end{remark} \begin{remark} As a special case of the above construction, if a map $\tau\colon X/G\rightarrow \mathbb{N}^{Q_0}$ to a semigroup of dimension vectors is understood, we define \begin{align*} \HO(X/G,\phim{f}\mathbb{Q}_{X/G}):=\Ho(\tau_*\phim{f}\mathbb{Q}_{X/G}). \end{align*} This is the definition used in \cite[Sec.7]{KS2} to define the underlying $\mathbb{N}^{Q_0}$-graded monodromic mixed Hodge module of the critical cohomological Hall algebra. \end{remark} Let $h\colon X'\rightarrow X$ be a morphism of $G$-equivariant varieties. Then for each $i$ we obtain maps $h_i\colon X'_i\rightarrow X_i$. There are natural maps \[ p_{i,*}\phim{f_i}h_{i,*}\mathbb{Q}_{X'_i}\rightarrow p_{i,*}h_{i,*}\phim{f_i\circ h_i}\mathbb{Q}_{X'_i} \] which are isomorphisms if $h$ is proper or an affine fibration (see Remark \ref{basicfacts}). We precompose with the natural map \[ p_{i,*}\phim{f_i}\mathbb{Q}_{X_i}\rightarrow p_{i,*}\phim{f_i}h_{i,*}\mathbb{Q}_{X'_i} \] and let $i\to \infty$, defining maps \[ \Ho\left(p_*\phim{f}\mathbb{Q}_{X/G}\right)\rightarrow \Ho\left((p h)_*\phim{f\circ h}\mathbb{Q}_{X'/G}\right).
\] We define the map \[ \Ho\left((p h)_!\phim{f\circ h}\mathbb{Q}_{X'/G}\right)\rightarrow\Ho\left(p_!\phim{f}\mathbb{Q}_{X/G}\right) \] in the same way, starting from the maps $p_{i,!}\phim{f_i}h_{i,!}\mathbb{D}_{X'_i}\mathbb{Q}_{X'_i}\rightarrow p_{i,!}\phim{f_i}\mathbb{D}_{X_i}\mathbb{Q}_{X_i}$. Finally, let $\upsilon\colon H\rightarrow G$ be an inclusion of groups -- the only examples we will consider are when $\upsilon$ is the inclusion of a parabolic subgroup inside $\Gl_n(\mathbb{C})$, or the inclusion $L\subset P$ of the Levi subgroup of a parabolic subgroup of $\Gl_n(\mathbb{C})$. Let $G$ act on $X$ as above, with $f$ a $G$-invariant regular function on $X$. Let $h\colon X/H\rightarrow X/G$ be the associated morphism of stacks. Then we obtain maps \begin{align*} X\times_H U_i\xrightarrow{h_i} &X\times_G U_i\\ (x,z)\mapsto&(x,z) \end{align*} which we use in the same way as above to obtain the map \[ \Ho\left(p_*\phim{f}\mathbb{Q}_{X/G}\right)\rightarrow\Ho\left((p h)_*\phim{f}\mathbb{Q}_{X/H}\right) \] and the map \[ \Ho\left((p h)_!\phim{f}\mathbb{Q}_{X/H}\right)\rightarrow \Ho\left(p_!\phim{f}\mathbb{Q}_{X/G}\right)\otimes\mathbb{L}^{\dim(G)-\dim(H)}. \] \begin{remark} \label{limKer} If $H\hookrightarrow G$ is the inclusion of a parabolic subgroup, then the induced maps $h_i$ are proper. As such, there is a natural isomorphism $\phim{f_i}h_{i,*}\mathbb{Q}_{X\times_H U_i}\cong h_{i,*}\phim{f_i}\mathbb{Q}_{X\times_H U_i}$. From the maps $h_{i,*}\mathbb{D}_{X\times_H U_i}\mathbb{Q}_{X\times_H U_i}\rightarrow \mathbb{D}_{X\times_G U_{i}}\mathbb{Q}_{X\times_G U_{i}}$ given by the natural isomorphism $h_{i,*}\cong h_{i,!}$ and Verdier duality, we obtain, in the limit, the map \[ \Ho\left((ph)_*\phim{f}\mathbb{Q}_{X/H}\right)\rightarrow \Ho\left(p_*\phim{f}\mathbb{Q}_{X/G}\right)\otimes\mathbb{L}^{\dim(G)-\dim(H)}.
\] \end{remark} \section{Moduli spaces of quiver representations} \label{Qrepsection} \subsection{Basic notions} We use the notations and conventions from \cite{DaMe4}, which we briefly recall. Let $Q=(Q_0,Q_1,s,t)$ denote a quiver, that is, a pair of finite sets $Q_0$ and $Q_1$, and a pair of maps $s\colon Q_1\rightarrow Q_0$ and $t\colon Q_1\rightarrow Q_0$, taking an arrow to its source and target, respectively. Denote by $\mathbb{C} Q$ the free path category of $Q$ over $\mathbb{C}$. Alternatively we may think of $\mathbb{C} Q$ as the free path algebra of $Q$, with a distinguished family of mutually orthogonal idempotents $e_i$ in bijection with the vertices $Q_0$, summing to $1_{\mathbb{C} Q}$. \smallbreak Let $\Sch_{\mathbb{C}}$ be the category of schemes over $\Spec(\mathbb{C})$. For $S\in\Sch_{\mathbb{C}}$ we denote by $\Vect_{S}^{\fd}$ the category of finite rank vector bundles over $S$. Let $\dd\in\mathbb{N}^{Q_0}$ be a dimension vector. We denote by $\mathfrak{M}_{\dd}$ the groupoid valued functor on $\Sch_{\mathbb{C}}$ defined by setting $\mathfrak{M}_{\dd}(S)$ to be the groupoid obtained by forgetting the non-invertible morphisms in the full subcategory of $\Fun(\mathbb{C} Q,\Vect_{S}^{\fd})$ of functors sending each $i\in Q_0$ to a vector bundle of rank $\dd_i$. This prestack is an Artin stack, as it is represented by the following global quotient stack. First define \begin{equation} \label{Xdef} X_{\dd}:=\prod_{a\in Q_1}\Hom(\mathbb{C}^{\dd_{s(a)}},\mathbb{C}^{\dd_{t(a)}}). \end{equation} This affine space carries the change of basis action of \[ G_{\dd}:=\prod_{i\in Q_0}\Aut(\mathbb{C}^{\dd_i}), \] and there is an equivalence of stacks $\mathfrak{M}_{\dd}\cong X_{\dd}/G_{\dd}$. We denote by $\mathfrak{M}$ the union $\coprod_{\dd\in\mathbb{N}^{Q_0}} \mathfrak{M}_{\dd}$, the stack of finite-dimensional representations of $Q$, which by the equivalences just given is a countable disjoint union of finite type global quotient Artin stacks.
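For example, if $Q$ is the Jordan quiver, with a single vertex and a single loop, then for $\dd\in\mathbb{N}$ we have $X_{\dd}=\Hom(\mathbb{C}^{\dd},\mathbb{C}^{\dd})$, carrying the conjugation action of $G_{\dd}=\Aut(\mathbb{C}^{\dd})$, and $\mathfrak{M}_{\dd}\cong \Hom(\mathbb{C}^{\dd},\mathbb{C}^{\dd})/\Aut(\mathbb{C}^{\dd})$ is the stack of $\dd$-dimensional $\mathbb{C}[x]$-modules.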
\smallbreak For the rest of the paper, $X_{\dd}$ will be as in (\ref{Xdef}). Where we wish to be specific regarding the quiver with respect to which $X_{\dd}$ is defined, we will instead use the notation $X(Q)_{\dd}$. \smallbreak For $\dd'+\dd''=\dd\in\mathbb{N}^{Q_0}$ let $\mathfrak{M}_{\dd',\dd''}(S)$ be the groupoid of triples $(F',F,\iota)$, where $F',F\in\Fun(\mathbb{C}Q,\Vect_S^{\fd})$, and $\iota\colon F'\rightarrow F$ is a natural transformation such that $\dim(F'(i))=\dd'_i$, $\dim(F(i))=\dd_i$, and $\iota(i)$ is injective, with locally free cokernel, for every $i$. Again, $\mathfrak{M}_{\dd',\dd''}$ is a finite type Artin stack, which can be described as follows. Let $X_{\dd',\dd''}\subset X_{\dd}$ be the subspace of representations such that the flag $\mathbb{C}^{\dd'_i}\subset \mathbb{C}^{\dd_i}$ is preserved for all $i\in Q_0$, and let $G_{\dd',\dd''}\subset G_{\dd}$ be the subgroup preserving these same flags. Then \[ \mathfrak{M}_{\dd',\dd''}\cong X_{\dd',\dd''}/G_{\dd',\dd''}. \] \smallbreak A tuple $\zeta=(\zeta_i)_{i\in Q_0}\in \mathbb{H}_+^{Q_0}:=\{r\exp(i\pi\phi)\in \mathbb{C}\mid r>0, 0<\phi\le 1\}^{Q_0}\subset \mathbb{C}^{Q_0}$ provides a Bridgeland stability condition, as defined in \cite{Bridgeland02}, with \textit{central charge} defined on finite-dimensional $\mathbb{C}Q$-modules \[ Z\colon\rho\mapsto\zeta\cdot \dim (\rho)=\sum_{i\in Q_0}\zeta_i\dim (\rho_i). \] We define the \textit{slope} of a representation $\rho$ by setting \[ \Xi^{\zeta}(\rho):=\begin{cases}- \Re e (Z(\rho))/ \Im m (Z(\rho))&\textrm{if }\Im m (Z(\rho))\neq 0\\\infty&\textrm{if }\Im m (Z(\rho))=0.\end{cases} \] Likewise we define $\Xi^{\zeta}(\dd)$, for $\dd\in\mathbb{N}^{Q_0}$, to be the slope of any representation $\rho$ of dimension $\dd$. 
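For example, if $\zeta_i=-\theta_i+\sqrt{-1}$ for a tuple $\theta\in\mathbb{Q}^{Q_0}$, then $\Im m (Z(\rho))=\sum_{i\in Q_0}\dim(\rho_i)$, and \[ \Xi^{\zeta}(\rho)=\frac{\theta\cdot\dim(\rho)}{\sum_{i\in Q_0}\dim(\rho_i)}, \] the usual slope function associated to the vector $\theta$.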
A $\mathbb{C} Q$-module $\rho$ is called $\zeta$\textit{-semistable} if for all proper submodules $\rho'\subset\rho$ we have $\Xi^{\zeta}(\rho')\leq\Xi^{\zeta}(\rho)$, and is $\zeta$\textit{-stable} if instead we have $\Xi^{\zeta}(\rho')<\Xi^{\zeta}(\rho)$ for every proper submodule. To simplify notation, we will assume throughout this paper that $\Xi^{\zeta}(\rho)<\infty$. This is equivalent to the condition that $\zeta$ maps nonzero elements of $\mathbb{N}^{Q_0}$ to the open upper half plane. \smallbreak We define two pairings on $\mathbb{Z}^{Q_0}$: \begin{equation*} (\dd,\ee):=\sum_{i\in Q_0} \dd_i \ee_i-\sum_{a\in Q_1}\dd_{s(a)}\ee_{t(a)} \end{equation*} and \begin{equation*} \langle \dd,\ee\rangle:=(\dd,\ee)-(\ee,\dd). \end{equation*} Note that $(\dd,\dd)=-\dim \mathfrak{M}_{\dd}$. \smallbreak As in the introduction, for every $\mu\in (-\infty,\infty)$ we denote by $\Lambda_{\mu}^{\zeta}\subset\mathbb{N}^{Q_0}$ the submonoid of dimension vectors $\dd$ of slope $\mu$. \begin{definition} We say $\zeta$ is $\mu$-generic if $\dd,\ee\in\Lambda_{\mu}^{\zeta}$ implies $\langle \dd,\ee\rangle=0$. We say $\zeta$ is generic if it is $\mu$-generic for all $\mu$. \end{definition} We say that $\zeta$ is a King stability condition if $\Im m(\zeta_i)=1$ and $\Re e(\zeta_i)\in\mathbb{Q}$ for all $i\in Q_0$. Given a King stability condition $\zeta$, we can fix $m\in\mathbb{N}$ such that $m\Re e(\zeta_i)\in\mathbb{Z}$ for every $i$. We linearize the $G_{\dd}$-action on $X_{\dd}$ via the character \begin{align} \label{charDef} \chi_{\dd}\colon &G_{\dd}\rightarrow \mathbb{C}^*\\&(g_i)_{i\in Q_0}\mapsto \prod_{i\in Q_0}\det(g_i)^{m\Re e(\zeta_i)}, \nonumber \end{align} and define $X_{\dd}^{\zeta\sst}$ to be the variety of semistable points with respect to this linearization.
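To illustrate the pairings defined above, let $Q$ be the quiver with vertices $1,2$ and a single arrow $a\colon 1\rightarrow 2$. Then \[ (\dd,\ee)=\dd_1\ee_1+\dd_2\ee_2-\dd_1\ee_2,\qquad \langle\dd,\ee\rangle=\ee_1\dd_2-\dd_1\ee_2, \] and indeed $(\dd,\dd)=\dd_1^2+\dd_2^2-\dd_1\dd_2=\dim(G_{\dd})-\dim(X_{\dd})=-\dim\mathfrak{M}_{\dd}$. Note also that if $Q$ is symmetric then $\langle\bullet,\bullet\rangle=0$ identically, so that every stability condition is generic.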
By \cite{King}, using the constructions and definitions of \cite{MFK94}, the GIT quotient $X^{\zeta\sst}_{\dd}/\!\!/_{\chi_{\dd}} G_{\dd}$ provides a coarse moduli space of $\zeta$-semistable representations of dimension ${\dd}$, which we denote $\mathcal{M}^{\zeta\sst}_{\dd}$. \begin{remark} \label{BtoK} Fix a dimension vector $\dd\in\mathbb{N}^{Q_0}$ of slope $\mu$, for a $\mu$-generic stability condition $\zeta$. Then we can always find a $\mu'$-generic King stability condition $\zeta'$ such that a $\dd$-dimensional $\mathbb{C} Q$-module is $\zeta$-stable if and only if it is $\zeta'$-stable, by \cite[Lem.4.21]{DMSS15}, where $\mu'$ is the slope of $\dd$ with respect to $\zeta'$. By construction, a $\dd$-dimensional $\mathbb{C}Q$-module is $\zeta'$-semistable if and only if it is $\zeta$-semistable. We deduce that for every $\mu$-generic Bridgeland stability condition $\zeta$ and every dimension vector $\dd$ of slope $\mu$, there is a coarse moduli space $\mathcal{M}^{\zeta\sst}_{\dd}$ of $\dd$-dimensional $\zeta$-semistable $\mathbb{C}Q$-modules. \end{remark} For a slope $\mu$, we define \begin{equation} \mathcal{M}_{\mu}^{\zeta\sst}=\coprod_{\Xi^{\zeta}(\dd)=\mu}\mathcal{M}_{\dd}^{\zeta\sst} \end{equation} and \begin{equation} \mathfrak{M}_{\mu}^{\zeta\sst}=\coprod_{\Xi^{\zeta}(\dd)=\mu}\mathfrak{M}_{\dd}^{\zeta\sst}. \end{equation} We denote by \begin{equation} p_{\dd}^{\zeta}\colon \mathfrak{M}^{\zeta\sst}_{\dd}\rightarrow \mathcal{M}^{\zeta\sst}_{\dd} \end{equation} and \begin{equation} p_{\mu}^{\zeta}\colon \mathfrak{M}^{\zeta\sst}_{\mu}\rightarrow \mathcal{M}^{\zeta\sst}_{\mu} \end{equation} the maps from the stacks to their respective coarse moduli spaces, and by \begin{equation} \label{qdef} q_{\dd}^{\zeta}\colon \mathcal{M}^{\zeta\sst}_{\dd}\rightarrow \mathcal{M}_{\dd} \end{equation} and \begin{equation} \label{qmudef} q_{\mu}^{\zeta}\colon \mathcal{M}^{\zeta\sst}_{\mu}\rightarrow \mathcal{M}_{\mu} \end{equation} the maps to the affinizations. 
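For instance, for the Jordan quiver, $\mathcal{M}_{\dd}=\Spec(\mathbb{C}[X_{\dd}]^{G_{\dd}})$ parametrises $\dd$-dimensional semisimple $\mathbb{C}[x]$-modules, and taking eigenvalues identifies it with $\Sym^{\dd}(\mathbb{A}^1)\cong\mathbb{A}^{\dd}$.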
\begin{remark} Pick a slope $\mu\in (-\infty,\infty)$. Then we can define a maximally degenerate stability condition, for which every $\mathbb{C}Q$-module is automatically semistable, by fixing each $\zeta_i$ to be an element satisfying $- \Re e(\zeta_i) / \Im m (\zeta_i)=\mu$. In this case we have $\Lambda_{\mu}^{\zeta}=\mathbb{N}^{Q_0}$, and $\mathfrak{M}=\mathfrak{M}^{\zeta\sst}_{\mu}$. As a result, every result in this paper that does not assume a generic stability condition applies also to the case in which no stability condition is imposed. In addition, those results that do impose a genericity assumption on $\zeta$ apply to the case in which no stability condition is imposed and $Q$ is symmetric, since in this case maximally degenerate stability conditions are still generic in our sense. \end{remark} \smallbreak Let $\zeta',\zeta''\in\mathbb{H}^{Q_0}_+$ be a pair of King stability conditions, and let $\dd',\dd''\in\mathbb{N}^{Q_0}$ be a pair of dimension vectors. We define $\mathfrak{M}^{(\zeta',\zeta'')\sst}_{\dd',\dd''} \cong X^{(\zeta',\zeta'')\sst}_{\dd',\dd''}/G_{\dd',\dd''}$ to be the stack of short exact sequences \begin{align} \label{aseseq} 0\rightarrow \rho'\rightarrow\rho\rightarrow\rho''\rightarrow 0 \end{align} in which $\rho'$ is a $\zeta'$-semistable $\mathbb{C}Q$-module of dimension vector $\dd'$ and $\rho''$ is a $\zeta''$-semistable $\mathbb{C}Q$-module of dimension vector $\dd''$, and $X_{\dd',\dd''}^{(\zeta',\zeta'')\sst}\subset X_{\dd',\dd''}$ is the subspace of flags of $\mathbb{C}Q$-modules for which the induced $\dd'$-dimensional module is $\zeta'$-semistable and the induced $\dd''$-dimensional module is $\zeta''$-semistable. For $\zeta\in\mathbb{H}^{Q_0}_+$ a stability condition, we define $\mathfrak{M}^{\zeta\sst}_{\dd',\dd''}$ to be the stack of short exact sequences as in (\ref{aseseq}) such that $\rho$ is $\zeta$-semistable.
There is an isomorphism $\mathfrak{M}^{\zeta\sst}_{\dd',\dd''}\cong X^{\zeta\sst}_{\dd',\dd''}/G_{\dd',\dd''}$, where $X^{\zeta\sst}_{\dd',\dd''}:=X_{\dd',\dd''}\cap X_{\dd}^{\zeta\sst}$. If $\dd',\dd''\in\Lambda_{\mu}^{\zeta}$ for some $\mu\in(-\infty,\infty)$, there is equality \[ \mathfrak{M}_{\dd',\dd''}^{\zeta\sst}=\mathfrak{M}_{\dd',\dd''}^{(\zeta,\zeta)\sst}. \] \subsection{Monoidal products} \label{prodss} In this section we define the categorification of the quantum torus from refined DT theory and introduce its first properties. \begin{definition} \label{lfDef} Let $\mathcal{F}\in\mathcal{D}^{\geq}(\MMHM(X))$ for $X$ a scheme. We say that $\mathcal{F}$ is \textit{locally finite} if for each $Z\in \pi_0(X)$ and for each $n\in\mathbb{Z}$, the object $\Gr_W^n(\Ho(\mathcal{F})|_Z)$ belongs to $\mathcal{D}^{b}(\MMHM(Z))$, where $\Ho(\mathcal{F})$ denotes the total cohomology of $\mathcal{F}$, considered as a complex with zero differential, and $\Gr_W^n(\Ho(\mathcal{F})|_Z)=0$ for $n\ll 0$. Denote by $\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(X))\subset\mathcal{D}^{\geq}(\MMHM(X))$ the full subcategory of locally finite objects. \end{definition} The category $\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathbb{N}^{Q_0}))$ will play the role of a categorified version of the motivic quantum torus of \cite{KS1}. The definition of this quantum torus is given in \cite[Sec.6.2]{KS1}; for now we remark that it is a power series ring in variables $x^{\dd}$ for $\dd\in\mathbb{N}^{Q_0}$, and that if $\zeta$ is a generic stability condition then the subring of power series in variables $x^{\dd}$ for $\dd\in\Lambda_{\mu}^{\zeta}$ is commutative, while the whole quantum torus in general is not. The categorification of this picture, then, should be a monoidal category whose monoidal product cannot be upgraded to a symmetric monoidal product, except on the subcategories indexed by a fixed slope $\mu$, on which a symmetric monoidal structure should be given.
The moduli scheme $\mathcal{M}_{\mu}^{\zeta\sst}$ carries a symmetric monoidal structure given by the direct sum, and the product $\mathcal{M}_{\mu}^{\zeta\sst}\times \mathcal{M}_{\mu}^{\zeta\sst}\xrightarrow{\oplus}\mathcal{M}_{\mu}^{\zeta\sst}$ is a finite map of schemes (see \cite{DaMe4}, and \cite[Lem.2.1]{Meinhardt14}). By Remark \ref{convProdDef} the category $\mathcal{D}^{\geq}(\MMHM(\mathcal{M}^{\zeta\sst}))$ carries a symmetric monoidal product \[ \mathcal{F}\boxtimes_{\oplus}\mathcal{G}:=\oplus_*(\pi_1^*\mathcal{F}\otimes\pi_2^*\mathcal{G}) \] where $\otimes$ is defined as in (\ref{simmon}). Since $\oplus$ is finite, and $\boxtimes$ is biexact and preserves the weight filtration, it follows that $\boxtimes_{\oplus}$ is biexact and preserves the weight filtration. The monoidal unit is $\mathbb{Q}_{{\mathcal{M}_0^{\zeta\sst}}}$, the constant pure Hodge module supported on $\mathcal{M}_0^{\zeta\sst}\cong \pt$, the unit of the monoid $\mathcal{M}_{\mu}^{\zeta\sst}$. Using symmetry of the product, we define, following\footnote{Set $Y=\mathcal{M}_{\mu}^{\zeta\sst}$. Our definition differs a little from that in \cite{MSS11}, in that they consider the direct image along $\pi\colon Y^n\rightarrow \Sym^n Y$, while we consider the pushforward to $Y$ along $\oplus$. However, the map $\oplus$ factors through $\pi$, and so our definition is the direct image of theirs.} \cite[Thm.1]{MSS11} the functor \begin{align*} \Sym^n_{\boxtimes_{\oplus}}\colon&\mathcal{D}^{\geq}(\MMHM(\mathcal{M}_{\mu}^{\zeta\sst}))\rightarrow \mathcal{D}^{\geq}(\MMHM(\mathcal{M}_{\mu}^{\zeta\sst}))\\ &\mathcal{F}\mapsto (\oplus_*\mathcal{F}^{\boxtimes n})^{\Sigma_n} \end{align*} where $\Sigma_n$ is the permutation group on $n$ letters. 
We then define \begin{align} \label{symdef} \FreeComm_{\boxtimes_{\oplus}}\colon&\mathcal{D}^{\geq}(\MMHM(\mathcal{M}^{\zeta\sst}_{\mu}\setminus \mathcal{M}_0^{\zeta\sst}))\rightarrow \mathcal{D}^{\geq}(\MMHM(\mathcal{M}^{\zeta\sst}_{\mu}))\\ \nonumber &\mathcal{F}\mapsto\bigoplus_{n\geq 0}\Sym^n_{\boxtimes_{\oplus}}\mathcal{F}. \end{align} Then the following lemma follows from the fact that \[ \pi_0(\oplus)\colon \pi_0(\mathcal{M})\times\pi_0(\mathcal{M})\rightarrow\pi_0(\mathcal{M}) \] has finite fibers. \begin{lemma} \label{lfRem} The functor (\ref{symdef}) restricts to a functor \[ \FreeComm_{\boxtimes_{\oplus}}\colon\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathcal{M}^{\zeta\sst}_{\mu}\setminus \mathcal{M}^{\zeta\sst}_0))\rightarrow \mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathcal{M}^{\zeta\sst}_{\mu})). \] \end{lemma} \begin{proposition} \label{symmPure} The functor $\FreeComm_{\boxtimes_{\oplus}}$ takes pure objects to pure objects. \end{proposition} \begin{proof} Let $\mathcal{F}\in\MMHM(X)$ and $\mathcal{G}\in\MMHM(Y)$ for two algebraic varieties $X$ and $Y$. We define their external tensor product \[ \mathcal{F}\boxtimes \mathcal{G}:=(p\colon X\times Y\times\mathbb{A}^1\times\mathbb{A}^1\xrightarrow{\id_{X\times Y}\times +} X\times Y\times \mathbb{A}^1)_*(\pr_{1,3}^*\mathcal{F}\otimes\pr_{2,4}^*\mathcal{G}) \] as in Remark \ref{convProdDef}.
Then by \cite[Lem.1]{KS2}, there is an isomorphism in $\mathcal{D}(\MMHM(X\times Y))$ \[ p_!\left(\pr_{1,3}^*\mathcal{F}\otimes\pr_{2,4}^*\mathcal{G}\right)\rightarrow p_*\left(\pr_{1,3}^*\mathcal{F}\otimes\pr_{2,4}^*\mathcal{G}\right) \] and so since $p_!\colon \mathcal{D}(\MHM(X\times \mathbb{A}^1\times Y\times\mathbb{A}^1))\rightarrow \mathcal{D}(\MHM(X\times Y\times \mathbb{A}^1))$ is left exact, and decreases weights, while $p_*$ is right exact and increases weights, we deduce that the external tensor product \[ \boxtimes\colon \mathcal{D}(\MMHM(X))\times\mathcal{D}(\MMHM(Y))\rightarrow \mathcal{D}(\MMHM(X\times Y)) \] is exact and preserves pure objects. Let $\mathcal{F}\in\MMHM(\mathcal{M}_{\mu}^{\zeta\sst})$ be pure. Let \[ \oplus^n\colon \mathcal{M}_{\mu}^{\zeta\sst}\times\ldots\times \mathcal{M}_{\mu}^{\zeta\sst}\rightarrow \mathcal{M}_{\mu}^{\zeta\sst} \] be the $n$-fold monoid map; then $\oplus^n$ is finite by \cite[Lem.2.1]{Meinhardt14}, and so \[ \mathcal{G}=\oplus^n_*\left(\mathcal{F}\boxtimes\ldots \boxtimes\mathcal{F}\right)\in\MMHM(\mathcal{M}_{\mu}^{\zeta\sst}) \] is pure. On the other hand, $\Sym^n_{\boxtimes_{\oplus}}\mathcal{F}\subset \mathcal{G}$ as the direct summand of $\Sigma_n$-invariants, and is therefore pure. We deduce that $\FreeComm_{\boxtimes_{\oplus}}\mathcal{F}$ is a direct sum of pure objects, and is therefore pure.
\end{proof} We define a new monoidal structure on $\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathcal{M}^{\zeta\sst}))$ by setting \begin{equation} \mathcal{F}\boxtimes_{\oplus}^{\tw} \mathcal{G}:=\bigoplus_{\dd',\dd''\in\mathbb{N}^{Q_0}}\oplus_*(\pi_1^*\mathcal{F}_{\dd'}\otimes\pi_2^*\mathcal{G}_{\dd''})\otimes\mathbb{L}^{\langle \dd'',\dd'\rangle/2}, \end{equation} where $\mathcal{F}=\bigoplus_{\dd'\in\mathbb{N}^{Q_0}}\mathcal{F}_{\dd'}$ for $\mathcal{F}_{\dd'}\in\mathcal{D}(\MMHM(\mathcal{M}_{\dd'}^{\zeta\sst}))$ and $\mathcal{G}=\bigoplus_{\dd''\in\mathbb{N}^{Q_0}}\mathcal{G}_{\dd''}$ for $\mathcal{G}_{\dd''}\in\mathcal{D}(\MMHM(\mathcal{M}_{\dd''}^{\zeta\sst}))$. If $\zeta$ is $\mu$-generic, the restriction of $\boxtimes_{\oplus}^{\tw}$ to $\mathcal{D}^{\geq}(\MMHM(\mathcal{M}^{\zeta\sst}_{\mu}))$ is naturally isomorphic to the untwisted symmetric monoidal product $\boxtimes_{\oplus}$, but in general there is no natural isomorphism of bifunctors making $\boxtimes_{\oplus}^{\tw}$ into a symmetric monoidal product. Similarly, we define the monoidal product on $\mathcal{D}^{\geq}(\MMHM(\mathbb{N}^{Q_0}))$ by setting \begin{equation} \mathcal{F}_{\dd'}\boxtimes_+^{\tw}\mathcal{G}_{\dd''}:=+_*(\mathcal{F}_{\dd'}\boxtimes\mathcal{G}_{\dd''})\otimes\mathbb{L}^{\langle \dd'',\dd'\rangle/2}, \end{equation} where $\mathcal{F}_{\dd'}$ is a monodromic mixed Hodge module with support $\dd'\in\mathbb{N}^{Q_0}$ and $\mathcal{G}_{\dd''}$ is a monodromic mixed Hodge module with support $\dd''$. This prescription extends to a unique monoidal product $\boxtimes_+^{\tw}$ that commutes with arbitrary direct sums, and turns the map \[ \dim_*\colon \mathcal{D}^{\geq}(\MMHM(\mathcal{M}^{\zeta\sst}))\rightarrow \mathcal{D}^{\geq}(\MMHM(\mathbb{N}^{Q_0})) \] into a monoidal functor, where the domain is given the twisted monoidal structure $\boxtimes^{\tw}_{\oplus}$. 
\begin{remark}\label{twDual} By skew symmetry of $\langle \bullet,\bullet\rangle_Q$ and finiteness of the maps of schemes $\oplus$ and $+$, there are natural isomorphisms of bifunctors \begin{align*} &\mathbb{D}^{\mon}(\mathcal{F}\boxtimes_{\oplus}^{\tw}\mathcal{G})\cong \mathbb{D}^{\mon}\mathcal{G}\boxtimes_{\oplus}^{\tw}\mathbb{D}^{\mon}\mathcal{F}\\ &\mathbb{D}^{\mon}(\mathcal{F}\boxtimes_{+}^{\tw}\mathcal{G})\cong \mathbb{D}^{\mon}\mathcal{G}\boxtimes_{+}^{\tw}\mathbb{D}^{\mon}\mathcal{F}. \end{align*} Note the swap of arguments. \end{remark} \subsection{Framed moduli spaces} \label{framedSec} Framed moduli spaces will play a central role in what follows. Their cohomology provides an approximation to the cohomology of $\mathfrak{M}^{\zeta\sst}$, in a way which we will make precise in Section \ref{cohaDT}. Let $\mathbf{f}\in\mathbb{N}^{Q_0}$ be a dimension vector (called the framing vector). We form a new quiver $Q_{\mathbf{f}}$ by setting $Q_{\mathbf{f}}=(Q_0\sqcup\{\infty\}, Q_1\sqcup \{\beta_{i,l_i}\colon \infty \to i \mid i\in Q_0,\ 1\le l_i\le \mathbf{f}_i \})$. Given a stability condition $\zeta$ for $Q$, a slope $\mu\in(-\infty,\infty)$, and $\dd\in\Lambda_{\mu}^{\zeta}$, we extend $\dd$ to a dimension vector for $Q_{\mathbf{f}}$ by setting $\dd_{\infty}=1$, and we extend $\zeta$ to a stability condition $\zeta^{(\mu)}_{\mathbf{f}}$ for $Q_{\mathbf{f}}$ by picking $\zeta^{(\mu)}_{\mathbf{f},\infty}\in\mathbb{H}_+$ so that \[ -\Re e(\zeta^{(\mu)}_{\mathbf{f},\infty})/\Im m(\zeta^{(\mu)}_{\mathbf{f},\infty})=\mu+\epsilon \] for sufficiently small $\epsilon>0$, and picking $|\zeta_{\mathbf{f},\infty}^{(\mu)}|\gg 0$.
A $\mathbb{C} Q_{\mathbf{f}}$-module $\rho$ with $\dim(\rho)_{\infty}=1$ is $\zeta^{(\mu)}_{\mathbf{f}}$-semistable if and only if the underlying $\mathbb{C} Q$-module is $\zeta$-semistable, and for every proper $\mathbb{C}Q_{\mathbf{f}}$-submodule $\rho'\subset \rho$ such that $\dim(\rho')_{\infty}=1$, the underlying $\mathbb{C} Q$-module of $\rho'$ has slope strictly less than $\mu$. A $\zeta^{(\mu)}_{\mathbf{f}}$-semistable $\mathbb{C}Q_{\mathbf{f}}$-module is automatically $\zeta^{(\mu)}_{\mathbf{f}}$-stable, and we write $\mathcal{M}^{\zeta}_{\mathbf{f},\dd}$ for the coarse moduli space of $\zeta^{(\mu)}_{\mathbf{f}}$-semistable $\mathbb{C} Q_{\mathbf{f}}$-modules of dimension $(1,\dd)\in\mathbb{N}\times\mathbb{N}^{Q_0}$. The moduli space $\mathcal{M}_{\mathbf{f},\dd}^{\zeta}$ is smooth. In fact $G_{\dd}$ acts freely on the variety \begin{equation} \label{Ydef} Y_{\mathbf{f},\dd}^{\zeta}:=X(Q_{\mathbf{f}})^{\zeta_{\mathbf{f}}^{(\mu)}\sst}_{(1,\dd)}, \end{equation} and $\mathcal{M}_{\mathbf{f},\dd}^{\zeta}$ is the quotient. From this description we deduce that $\mathcal{M}^{\zeta}_{\mathbf{f},\dd}$ is a smooth $\mathbb{G}_m$-torsor over $Y_{\mathbf{f},\dd}^{\zeta}/G_{(1,\dd)}$. \smallbreak We denote by \[ \pi_{\mathbf{f},\dd}^{\zeta}\colon\mathcal{M}^{\zeta}_{\mathbf{f},\dd}\rightarrow \mathcal{M}^{\zeta\sst}_{\dd} \] the map given by forgetting the framing. It is a proper map, since the other two maps in the diagram \[ \xymatrix{ \mathcal{M}^{\zeta}_{\mathbf{f},\dd}\ar@/^1pc/[rr]\ar[r]_-{\pi^\zeta_{\mathbf{f},\dd}}&\mathcal{M}_{\dd}^{\zeta\sst}\ar[r]_-{q^\zeta_{\dd}}&\mathcal{M}_{\dd} } \] are, as they are GIT quotient maps. \smallbreak For $\dd\in\Lambda_{\mu}^{\zeta}$ we may alternatively extend $\dd$ to a dimension vector for $Q_{\mathbf{f}}$ by setting $\dd_{\infty}=0$. There is a natural isomorphism $X(Q_{\mathbf{f}})_{(0,\dd)}^{\zeta_{\mathbf{f}}^{(\mu)}\sst}\cong X(Q)_{\dd}^{\zeta\sst}$.
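For instance, if $Q$ is the Jordan quiver, $\mathbf{f}=1$, and $\zeta$ is a maximally degenerate stability condition, then a $\zeta^{(\mu)}_{\mathbf{f}}$-semistable $\mathbb{C}Q_{\mathbf{f}}$-module of dimension $(1,\dd)$ is the same thing as a pair $(A,v)$, with $A\in\Hom(\mathbb{C}^{\dd},\mathbb{C}^{\dd})$ and $v\in\mathbb{C}^{\dd}$ a cyclic vector for $A$, and sending $(A,v)$ to the characteristic polynomial of $A$ identifies $\mathcal{M}^{\zeta}_{\mathbf{f},\dd}$ with $\mathbb{A}^{\dd}$.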
\smallbreak Later we will use a slightly different stability condition on framed modules. We define $\zeta_{\mathbf{f}}^{[\mu]}$ by requiring that $-\Re e(\zeta^{[\mu]}_{\mathbf{f},\infty})/\Im m(\zeta^{[\mu]}_{\mathbf{f},\infty})=\mu$. For definiteness, we pick $\lvert \zeta_{\mathbf{f},\infty}^{[\mu]}\rvert=1$. A $\mathbb{C}Q_{\mathbf{f}}$-module is $\zeta_{\mathbf{f}}^{[\mu]}$-semistable if and only if the underlying $\mathbb{C}Q$-module is $\zeta$-semistable. As such, there is a natural inclusion of stacks (in which the domain is a scheme) \begin{equation} \label{jmap} \mathcal{M}^{\zeta}_{\mathbf{f},\dd}:=Y_{\mathbf{f},\dd}^{\zeta}/G_{\dd}\xrightarrow{j_{\mathbf{f},\dd}^{\zeta}} X(Q_{\mathbf{f}})^{\zeta_{\mathbf{f}}^{[\mu]}\sst}_{(1,\dd)}/G_{\dd}. \end{equation} Note that if $Q$ is acyclic then $\mathcal{M}^{\zeta}_{\mathbf{f},\dd}$ is proper, but the map (\ref{jmap}) may still be profitably thought of as a partial compactification of $\mathcal{M}^{\zeta}_{\mathbf{f},\dd}$ in the category of stacks. \subsection{Jacobi algebras and potentials} Let $W\in\mathbb{C} Q/[\mathbb{C}Q,\mathbb{C}Q]$ be a finite linear combination of equivalence classes of cyclic words in $\mathbb{C} Q$ -- such a $W$ is called a \textit{potential}. A potential $W$ induces a function $\mathfrak{Tr}(W)$ on $\mathfrak{M}$, defined as follows. Firstly, assume that $W$ lifts to a single cyclic word $c=a_r\ldots a_0$ in $\mathbb{C} Q$. Then a $\mathbb{C}Q$-module $F$ determines an endomorphism \[ F(a_r)\circ\ldots\circ F(a_0)\colon F(s(a_0))\rightarrow F(s(a_0)), \] and $\Tr(F(a_r)\circ\ldots\circ F(a_0))$ determines a function on $X(Q)_{\dd}$, for each $\dd\in\mathbb{N}^{Q_0}$. By cyclic invariance of the trace, this function is $G_{\dd}$-invariant, and does not depend on the lift $c$ of $W$. Extending by linearity, we define for general $W\in\mathbb{C}Q/[\mathbb{C}Q,\mathbb{C}Q]$ the induced function \[ \mathfrak{Tr}(W)_{\dd}\colon \mathfrak{M}_{\dd}\rightarrow \mathbb{C}.
\] We denote by $\mathfrak{Tr}(W)_{\dd}^{\zeta}$ the restriction of this function to $\mathfrak{M}^{\zeta\sst}_{\dd}$, and by \[ \mathcal{T}r(W)^{\zeta}_{\dd}\colon \mathcal{M}^{\zeta\sst}_{\dd}\rightarrow\mathbb{C} \] the unique function through which $\mathfrak{Tr}(W)^{\zeta}_{\dd}$ factors. Similarly, we define \[ \mathcal{T}r(W)_{\mathbf{f},\dd}^{\zeta}:=\mathcal{T}r(W)_{\dd}^{\zeta}\circ\pi_{\mathbf{f},\dd}^{\zeta}\colon \mathcal{M}_{\mathbf{f},\dd}^{\zeta}\rightarrow \mathbb{C}, \] and we define $\mathfrak{Tr}(W)_{\dd',\dd''}^{\zeta}$ to be the composition \[ \mathfrak{M}_{\dd',\dd''}^{\zeta\sst}\hookrightarrow\mathfrak{M}_{\dd'+\dd''}^{\zeta\sst}\xrightarrow{\mathfrak{Tr}(W)_{\dd'+\dd''}^{\zeta}}\mathbb{C}. \] Associated to the data $(Q,W)$ is the Jacobi algebra \[ \Jac(Q,W):=\mathbb{C} Q/\langle \partial W/\partial a|a\in Q_1\rangle. \] Here the \textit{noncommutative derivative} $\partial W/\partial a$ is defined as follows. First assume that $W$ lifts to a single cyclic word $c\in\mathbb{C}Q$. Then \[ \partial W/\partial a:=\sum_{c=c'ac''}c''c'. \] We then extend the definition to general $W$ by linearity. We define $\mathfrak{M}_{W}$, the stack of finite-dimensional $\Jac(Q,W)$-modules, in the same way as the stack of finite-dimensional $\mathbb{C} Q$-modules. In particular there is a natural closed embedding of stacks $\mathfrak{M}_W\subset \mathfrak{M}$, and it is easy to show that $\mathfrak{M}_W=\crit(\mathfrak{Tr}(W))$ as substacks of $\mathfrak{M}$. In order to keep to Assumption \ref{phicheat} we will assume that $W\in \langle \partial W/\partial a|a\in Q_1\rangle$, for then $\crit(\mathfrak{Tr}(W))\subset\mathfrak{Tr}(W)^{-1}(0)$. One common circumstance in which this requirement is met is when the arrows $Q_1$ can be graded by integers in such a way that $W$ is homogeneous of nonzero weight. As mentioned after Assumption \ref{phicheat}, we can drop this requirement at the expense of slightly more complicated definitions.
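For example, let $Q$ be the quiver with a single vertex and three loops $x,y,z$, and let $W=xyz-xzy$. Then \[ \partial W/\partial x=yz-zy,\qquad \partial W/\partial y=zx-xz,\qquad \partial W/\partial z=xy-yx, \] so that $\Jac(Q,W)\cong\mathbb{C}[x,y,z]$, the polynomial algebra in three variables. Moreover, giving each arrow weight $1$ makes $W$ homogeneous of weight $3$, so this example satisfies the above requirement.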
\begin{proposition}\label{TSeq} Let $W\in\mathbb{C}Q/[\mathbb{C}Q,\mathbb{C}Q]$ be a potential. Then \[ \phim{\mathcal{T}r(W)_{\mu}^{\zeta}}\colon\MHM(\mathcal{M}^{\zeta\sst}_{\mu})\rightarrow\MMHM(\mathcal{M}^{\zeta\sst}_{\mu}) \] is a symmetric monoidal functor. \end{proposition} \begin{proof} Let $\mathcal{F}\in\MHM(\mathcal{M}^{\zeta\sst}_{\mu})$. The statement follows from the claim that the following diagram is commutative, \[ \xymatrix{ \phim{\mathcal{T}r(W)_{\mu}^{\zeta}}\left(\mathcal{F}\boxtimes_{\oplus}\mathcal{F}\right)\ar[r]\ar[d]&\phim{\mathcal{T}r(W)_{\mu}^{\zeta}}\left(\mathcal{F}\boxtimes_{\oplus}\mathcal{F}\right)\ar[d]\\ \phim{\mathcal{T}r(W)_{\mu}^{\zeta}}\mathcal{F}\boxtimes_{\oplus}\phim{\mathcal{T}r(W)_{\mu}^{\zeta}}\mathcal{F}\ar[r]&\phim{\mathcal{T}r(W)_{\mu}^{\zeta}}\mathcal{F}\boxtimes_{\oplus}\phim{\mathcal{T}r(W)_{\mu}^{\zeta}}\mathcal{F}, } \] where the horizontal maps are induced by the symmetric monoidal structure, and the vertical maps are the Thom-Sebastiani isomorphism. By faithfulness of the functor $\form{\mathcal{M}_{\mu}^{\zeta\sst}}$ defined in (\ref{formDef}), it is enough to prove commutativity of the diagram at the level of perverse sheaves. Let $q\colon \mathcal{M}_{\mu}^{\zeta\sst}\times\mathcal{M}_{\mu}^{\zeta\sst}\rightarrow \Sym^2(\mathcal{M}_{\mu}^{\zeta\sst})$ be the natural quotient map, and let $r\colon \Sym^2(\mathcal{M}_{\mu}^{\zeta\sst})\rightarrow\mathcal{M}^{\zeta\sst}_{\mu}$ be the map induced by $\oplus$. 
Then $r$ is finite, and by natural commutativity of vanishing cycle functors with proper maps, it is enough to show that the diagram \[ \xymatrix{ \phi_{\mathcal{T}r(W)_{\mu}^{\zeta}}\left(q_*\left(\mathcal{F}\boxtimes\mathcal{F}\right)\right)\ar[r]\ar[d]&\phi_{\mathcal{T}r(W)_{\mu}^{\zeta}}\left(q_*\left(\mathcal{F}\boxtimes\mathcal{F}\right)\right)\ar[d]\\ q_*\left(\phi_{\mathcal{T}r(W)_{\mu}^{\zeta}}\mathcal{F}\boxtimes\phi_{\mathcal{T}r(W)_{\mu}^{\zeta}}\mathcal{F}\right)\ar[r]&q_*\left(\phi_{\mathcal{T}r(W)_{\mu}^{\zeta}}\mathcal{F}\boxtimes\phi_{\mathcal{T}r(W)_{\mu}^{\zeta}}\mathcal{F}\right), } \] commutes, or equivalently, that the Thom--Sebastiani isomorphism \[ \phi_{\mathcal{T}r(W)_{\mu}^{\zeta}}\mathcal{F}\boxtimes \phi_{\mathcal{T}r(W)_{\mu}^{\zeta}}\mathcal{F}\rightarrow \phi_{\mathcal{T}r(W)_{\mu}^{\zeta}\boxplus \mathcal{T}r(W)_{\mu}^{\zeta}}\left(\mathcal{F}\boxtimes \mathcal{F}\right) \] commutes with the $\Sigma_2$-equivariant structures on both sheaves. Let $\iota\colon\mathcal{M}_{\mu}^{\zeta\sst}\rightarrow X$ be an embedding inside a smooth scheme, let $f$ be a function on $X$ extending $\mathcal{T}r(W)_{\mu}^{\zeta}$, and consider $\mathcal{F}$ as a perverse sheaf on $X$ via the direct image. Then we define the functor on $\MHM(X)$ \[ \Gamma_{f^{-1}(\mathbb{R}_{\leq 0})}\mathcal{F}(U):=\ker\left(\mathcal{F}(U)\rightarrow \mathcal{F}(U\setminus f^{-1}(\mathbb{R}_{\leq 0}))\right) \] and we may alternatively define $\phi_f\mathcal{F}:=(R\Gamma_{f^{-1}(\mathbb{R}_{\leq 0})}\mathcal{F})|_{f^{-1}(0)}$. 
By \cite[Prop.A.2]{Br12} \cite{Ma01} the Thom--Sebastiani isomorphism is given by the natural map \[ \alpha\colon R\Gamma_{f^{-1}(\mathbb{R}_{\leq 0})\times f^{-1}(\mathbb{R}_{\leq 0})}\left(\mathcal{F}\boxtimes\mathcal{F}\right)\rightarrow R\Gamma_{(f\boxplus f)^{-1}(\mathbb{R}_{\leq 0})}\left(\mathcal{F}\boxtimes\mathcal{F}\right) \] induced by the inclusion \[ f^{-1}(\mathbb{R}_{\leq 0})\times f^{-1}(\mathbb{R}_{\leq 0})\subset (f\boxplus f)^{-1}(\mathbb{R}_{\leq 0}), \] and the $\Sigma_2$-equivariant structure is given by the $\Sigma_2$-equivariant structure on $\mathcal{F}\boxtimes\mathcal{F}$ on both sides, and so $\alpha$ lifts to a morphism of $\Sigma_2$-equivariant perverse sheaves. \end{proof} \section{Cohomological Donaldson--Thomas invariants} \subsection{Proof of Theorem \ref{ThmA}} \label{cohaDT} We start this section by showing that there is a module-theoretic partial compactification of Totaro's construction, which is what we used in Section \ref{TotCon} to define the direct image of the monodromic mixed Hodge module of vanishing cycles from the stack of representations of $Q$. Recall the definition of $\zeta_{\mathbf{f}}^{(\mu)}$ from Section \ref{framedSec}. Recall from (\ref{Ydef}) the notation \begin{align*} X_{\mathbf{f},\dd}:=&X(Q_{\mathbf{f}})_{(1,\dd)}\\ Y_{\mathbf{f},\dd}^{\zeta}:=&X_{\mathbf{f},\dd}^{\zeta_{\mathbf{f}}^{(\mu)}\sst}. \end{align*} Assume that $\zeta_{\mathbf{f}}^{(\mu)}$ is a King stability condition for $Q_{\mathbf{f}}$, as we always can, by Remark \ref{BtoK}. Then $\zeta_{\mathbf{f}}^{(\mu)}$ defines a linearization of the natural $G_{(1,\dd)}$-action on $X_{\mathbf{f},\dd}$ and we have \begin{align*} \mathcal{M}_{\mathbf{f},\dd}^{\zeta}:=&X_{\mathbf{f},\dd}/\!\!/_{\chi}G_{(1,\dd)} \\\cong &Y_{\mathbf{f},\dd}^{\zeta}/G_{\dd} \end{align*} where $G_{(1,\dd)}=\Gl_1(\mathbb{C})\times G_{\dd}$. 
As a $G_{\dd}$-equivariant variety, $X_{\mathbf{f},\dd}$ admits a product decomposition $X_{\mathbf{f},\dd}=X_{\dd}\times V_{\mathbf{f},\dd}$, where $V_{\mathbf{f},\dd}:=\bigoplus_{i\in Q_0}\Hom(\mathbb{C}^{\mathbf{f}_i},\mathbb{C}^{\dd_i})$, and the extra $\Gl_1(\mathbb{C})$ factor acts by rescaling $V_{\mathbf{f},\dd}$. We define $U_{\mathbf{f},\dd}\subset V_{\mathbf{f},\dd}$ to be the subspace for which each $\alpha_i\in\Hom(\mathbb{C}^{\mathbf{f}_i},\mathbb{C}^{\dd_i})$ is surjective. The group $G_{\dd}$ acts freely on $U_{\mathbf{f},\dd}$, and the quotient is a product of Grassmannians, and exists as a fibre bundle quotient in the category of schemes. We have the following commutative diagram \[ \xymatrix{ \\ (X_{\dd}^{\zeta\sst}\times U_{\mathbf{f},\dd})/G_{\dd}\ar@/^2pc/[rr]^-{i}\ar[drr]_{\kappa_{\mathbf{f},\dd}^{\zeta}}\ar@{^(->}[r]^-h& Y_{\mathbf{f},\dd}^{\zeta}/G_{\dd}\ar@{^(->}[r]\ar[rd]^{\pi ^{\zeta}_{\mathbf{f},\dd}}& (X_{\dd}^{\zeta\sst}\times V_{\mathbf{f},\dd})/G_{\dd}\ar[d] \\&& \mathcal{M}_{\dd}^{\zeta\sst}. } \] All of the objects in the above diagram are algebraic varieties, with the exception of $(X_{\dd}^{\zeta\sst}\times V_{\mathbf{f},\dd})/G_{\dd}$, which is an Artin stack. The diagram obtained by deleting this stack and all arrows incident to it is a relative compactification of the map $\kappa_{\mathbf{f},\dd}^{\zeta}$. In what follows we use the notation $\mathbf{f}\gg 0$ to mean that $\mathbf{f}_i\gg 0$ for every $i\in Q_0$. \begin{lemma} \label{appLem} For fixed $n$ and $\mathbf{f}\gg 0$ the natural map \[ \Ho^n\left(\pi ^{\zeta}_{\mathbf{f},\dd,*}\phim{\mathcal{T}r(W)^{\zeta}_{\mathbf{f},\dd}}\mathbb{Q}_{Y_{\mathbf{f},\dd}^{\zeta}/G_{\dd}}\right)\rightarrow\Ho^n\left(\kappa^{\zeta}_{\mathbf{f},\dd,*}\phim{\mathcal{T}r(W)^{\zeta}_{\mathbf{f},\dd}}\mathbb{Q}_{(X_{\dd}^{\zeta\sst}\times U_{\mathbf{f},\dd})/G_{\dd}}\right) \] is an isomorphism. 
\end{lemma} \begin{proof} As in the proof of Proposition \ref{stabProp} it is enough to show that for fixed $n\in\mathbb{Z}$, $\Ho_{\con}^n\left(\phi_{\overline{\mathcal{T}r(W)}_{\mathbf{f},\dd}^{\zeta}}\mathbb{Q}_{Y_{\mathbf{f},\dd}^{\zeta}}\rightarrow\overline{h}_*\overline{h}^*\phi_{\overline{\mathcal{T}r(W)}_{\mathbf{f},\dd}^{\zeta}}\mathbb{Q}_{Y_{\mathbf{f},\dd}^{\zeta}}\right)$ is an isomorphism for $\mathbf{f}\gg 0$, where \[ \overline{h}\colon X_{\dd}^{\zeta\sst}\times U_{\mathbf{f},\dd} \rightarrow Y_{\mathbf{f},\dd}^{\zeta} \] is the inclusion and \[ \overline{\mathcal{T}r(W)}_{\mathbf{f},\dd}^{\zeta}\colon Y_{\mathbf{f},\dd}^{\zeta}\rightarrow \mathbb{C} \] is the composition of $\mathcal{T}r(W)_{\mathbf{f},\dd}^{\zeta}$ with the quotient map. On the other hand, the complex of sheaves $\phi_{\overline{\mathcal{T}r(W)}_{\mathbf{f},\dd}^{\zeta}}\mathbb{Q}_{Y_{\mathbf{f},\dd}^{\zeta}}$ is given by restriction of the pullback $\rho^*\phi_{\overline{\mathcal{T}r(W)}_{\dd}}\mathbb{Q}_{X^{\zeta\sst}_{\dd}}$ of $\phi_{\overline{\mathcal{T}r(W)}_{\dd}}\mathbb{Q}_{X^{\zeta\sst}_{\dd}}$ along the projection $\rho\colon X^{\zeta\sst}_{\dd}\times V_{\mathbf{f},\dd}\rightarrow X^{\zeta\sst}_{\dd}$. It follows that the cohomology (with respect to the constructible t-structure) of the complex of sheaves $\phi_{\overline{\mathcal{T}r(W)}_{\mathbf{f},\dd}^{\zeta}}\mathbb{Q}_{Y_{\mathbf{f},\dd}^{\zeta}}$ vanishes outside of a range that is independent of $\mathbf{f}$, and the result follows from the fact that as $\mathbf{f}\rightarrow\infty$, the codimensions of $\left(X^{\zeta\sst}_{\dd}\times V_{\mathbf{f},\dd}\setminus Y^{\zeta}_{\mathbf{f},\dd}\right)$ and $\left(X^{\zeta\sst}_{\dd}\times V_{\mathbf{f},\dd}\setminus X^{\zeta\sst}_{\dd}\times U_{\mathbf{f},\dd}\right)$ inside $X^{\zeta\sst}_{\dd}\times V_{\mathbf{f},\dd}$ go to infinity. 
\end{proof} We deduce that for fixed $\dd\in\mathbb{N}^{Q_0}$ and $n\in\mathbb{Z}$, and for $\mathbf{f}\gg 0$ there are isomorphisms \begin{equation} \label{PhiSurj} \Phi_{\mathbf{f},\dd,W}\colon \Ho^n(\pi _{\mathbf{f},\dd,*}^{\zeta}\phim{\mathcal{T}r(W)_{\mathbf{f},\dd}^{\zeta}}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd}^{\zeta}})\rightarrow \Ho^n(\kappa^{\zeta}_{\mathbf{f},\dd,*}\phim{\mathcal{T}r(W)_{\mathbf{f},\dd}^{\zeta}}\mathbb{Q}_{(X_{\dd}^{\zeta\sst}\times U_{\mathbf{f},\dd })/G_{\dd}}) \end{equation} and \begin{equation} \label{PsiSurj} \Psi_{\mathbf{f},\dd ,W}\colon \Ho^n(\kappa^{\zeta}_{\mathbf{f},\dd ,!}\phim{\mathcal{T}r(W)_{\mathbf{f},\dd }^{\zeta}}\mathbb{Q}_{(X_{\dd}^{\zeta\sst}\times U_{\mathbf{f},\dd })/G_{\dd}}\otimes\mathbb{L}^{-\mathbf{f}\cdot \dd})\rightarrow \Ho^n(\pi_{\mathbf{f},\dd ,!}^{\zeta}\phim{\mathcal{T}r(W)_{\mathbf{f},\dd }^{\zeta}}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}\otimes\mathbb{L}^{-\mathbf{f}\cdot \dd}) \end{equation} where the argument that $\Psi_{\mathbf{f},\dd ,W}$ is an isomorphism is as in Lemma \ref{appLem}. On the other hand, for $\mathbf{f}\gg 0$, the right hand side of (\ref{PhiSurj}) is by definition $\Ho^n(p_{\dd,*}^{\zeta}\phim{\mathfrak{Tr}(W)_{\dd}^{\zeta}}\mathbb{Q}_{\mathfrak{M}^{\zeta\sst}_{\dd}})$, while the left hand side of (\ref{PsiSurj}) is $\Ho^n(p_{\dd,!}^{\zeta}\phim{\mathfrak{Tr}(W)_{\dd}^{\zeta}}\mathbb{Q}_{\mathfrak{M}^{\zeta\sst}_{\dd}})$. \begin{remark} In words, the direct image of the vanishing cycle monodromic mixed Hodge module along the non-representable map $p^{\zeta}_{\dd}\colon\mathfrak{M}_{\dd}^{\zeta\sst}\rightarrow\mathcal{M}_{\dd}^{\zeta\sst}$ is approximated by the direct image along the representable and \textit{proper} maps $\pi _{\mathbf{f},\dd }^{\zeta}\colon \mathcal{M}_{\mathbf{f},\dd }^{\zeta}\rightarrow\mathcal{M}_{\dd}^{\zeta\sst}$. 
\end{remark} For $\mathbf{f}\gg 0$ we obtain isomorphisms \begin{equation} \label{PhiBar} \overline{\Phi}_{W,\mathbf{f},\dd }\colon \HO^n(\mathcal{M}_{\mathbf{f},\dd }^{\zeta},\phim{\mathcal{T}r(W)_{\mathbf{f},\dd }^{\zeta}}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}})\cong\HO^n(\mathfrak{M}^{\zeta\sst}_{\dd},\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd}}\mathbb{Q}_{\mathfrak{M}^{\zeta\sst}_{\dd}}) \end{equation} and \begin{equation} \label{PsiBar} \overline{\Psi}_{W,\mathbf{f},\dd }\colon \HO^n_c(\mathfrak{M}^{\zeta\sst}_{\dd},\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd}}\mathbb{Q}_{\mathfrak{M}^{\zeta\sst}_{\dd}})\cong \HO^n_c(\mathcal{M}_{\mathbf{f},\dd }^{\zeta},\phim{\mathcal{T}r(W)^{\zeta}_{\mathbf{f},\dd }}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}\otimes\mathbb{L}^{-\mathbf{f}\cdot \dd}) \end{equation} in the same way. Since each map $\pi ^{\zeta}_{\mathbf{f},\dd }$ is proper, and $\phim{\mathcal{T}r(W)^{\zeta}_{\dd}}$ is exact, we deduce the following proposition. \begin{proposition} \label{cvs} For every $\dd\in\mathbb{N}^{Q_0}$ there are isomorphisms \begin{equation} \label{nudef} \nu_{\dd}\colon \Ho(p^{\zeta}_{\dd,*}\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd}}\mathbb{Q}_{\mathfrak{M}_{\dd}^{\zeta\sst}})\cong \phim{\mathcal{T}r(W)^{\zeta}_{\dd}}\Ho(p^{\zeta}_{\dd,*}\mathbb{Q}_{\mathfrak{M}_{\dd}^{\zeta\sst}}). \end{equation} \end{proposition} \begin{proof} The isomorphisms $\nu_{\dd}$ are obtained by considering the left hand side of (\ref{PhiSurj}) and using the natural isomorphisms \[ \phim{\mathcal{T}r(W)^{\zeta}_{\dd}}\pi ^{\zeta}_{\mathbf{f},\dd ,*}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}\cong \pi^{\zeta}_{\mathbf{f},\dd ,*}\phim{\mathcal{T}r(W)^{\zeta}_{\mathbf{f},\dd }}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}. 
\] The isomorphisms $\nu_{\dd}$ are well-defined by commutativity of the square \[ \xymatrix{ \pi ^{\zeta}_{\mathbf{f}',\dd,*}\phim{\mathcal{T}r(W)^{\zeta}_{\mathbf{f}',\dd}}\mathbb{Q}_{\mathcal{M}_{\mathbf{f}',\dd}^{\zeta}}\ar[r]&\pi ^{\zeta}_{\mathbf{f}',\dd,*}\iota_{\mathbf{f},\mathbf{f}',*}\phim{\mathcal{T}r(W)^{\zeta}_{\mathbf{f},\dd }}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}\\ \phim{\mathcal{T}r(W)^{\zeta}_{\dd}}\pi ^{\zeta}_{\mathbf{f}',\dd,*}\mathbb{Q}_{\mathcal{M}_{\mathbf{f}',\dd}^{\zeta}}\ar[u]\ar[r]& \phim{\mathcal{T}r(W)^{\zeta}_{\dd}}\pi ^{\zeta}_{\mathbf{f}',\dd,*}\iota_{\mathbf{f},\mathbf{f}',*}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}\ar[u] } \] where $\mathbf{f}'>\mathbf{f}$ and $\iota_{\mathbf{f},\mathbf{f}'}\colon\mathcal{M}^{\zeta}_{\mathbf{f},\dd }\rightarrow \mathcal{M}^{\zeta}_{\mathbf{f}',\dd}$ is the natural inclusion obtained by extending a framing by zero. The square commutes as it is obtained by applying the natural transformation \[ \phim{\mathcal{T}r(W)_{\dd}^{\zeta}}\pi ^{\zeta}_{\mathbf{f}',\dd,*}\rightarrow \pi ^{\zeta}_{\mathbf{f}',\dd,*}\phim{\mathcal{T}r(W)^{\zeta}_{\mathbf{f}',\dd}} \] to the restriction map \[ \mathbb{Q}_{\mathcal{M}_{\mathbf{f}',\dd}^{\zeta}}\rightarrow \iota_{\mathbf{f},\mathbf{f}',*}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}. \] \end{proof} Similarly, the natural isomorphisms $ \phim{\mathcal{T}r(W)^{\zeta}_{\dd}}\pi ^{\zeta}_{\mathbf{f},\dd ,!}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}\cong \pi ^{\zeta}_{\mathbf{f},\dd ,!}\phim{\mathcal{T}r(W)^{\zeta}_{\mathbf{f},\dd }}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}$, along with exactness of $\phim{\mathcal{T}r(W)^{\zeta}_{\dd}}$, induce natural isomorphisms \begin{equation} \nu_{c,\dd}\colon \Ho(p^{\zeta}_{\dd,!}\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd}}\mathbb{Q}_{\mathfrak{M}_{\dd}^{\zeta\sst}})\cong\phim{\mathcal{T}r(W)^{\zeta}_{\dd}}\Ho(p_{\dd,!}^{\zeta}\mathbb{Q}_{\mathfrak{M}_{\dd}^{\zeta\sst}}). 
\end{equation} \smallbreak Now assume that $\zeta$ is a $\mu$-generic Bridgeland stability condition. As in \cite{DaMe4} and the introduction, for $\dd\in\Lambda_{\mu}^{\zeta}\setminus \{0\}$ we define the following element of $\MMHM(\mathcal{M}^{\zeta\sst}_{\dd})$ \[ \mathcal{DT}_{W,\dd}^{\zeta}=\begin{cases} \phim{\mathcal{T}r(W)^\zeta_{\dd}}\mathcal{IC}_{\mathcal{M}^{\zeta \sst}_{\dd}}(\mathbb{Q})&\textrm{if }\mathcal{M}^{\zeta \st}_{\dd}\neq\emptyset ,\\0&\textrm{otherwise.}\end{cases} \] We define \[ \mathcal{DT}^{\zeta}_{W,\mu}:=\bigoplus_{\dd\in\Lambda_{\mu}^{\zeta}\setminus \{0\}}\mathcal{DT}^{\zeta}_{W,\dd}\in\MMHM(\mathcal{M}_{\mu}^{\zeta\sst}). \] \begin{remark} Note that from our shifting convention on $\mathcal{IC}_{\mathcal{M}^{\zeta\sst}_{\dd}}(\mathbb{Q})$, along with exactness of $\phim{\mathcal{T}r(W)^{\zeta}_{\dd}}$ it follows that $\mathcal{DT}_{W,\dd}^{\zeta}$ is indeed a genuine (monodromic) mixed Hodge module, instead of merely being an element of $\mathcal{D}(\MMHM(\mathcal{M}^{\zeta\sst}_{\dd}))$. One can find examples for which it is not pure. For example consider the quiver $Q$ with three loops and potential $W=x^p+y^q+z^r+axyz$, with $a\neq 0$ and $p^{-1}+q^{-1}+r^{-1}<1$. Then $\mathcal{M}_1=\mathbb{A}^3$, and on $\mathcal{M}_1$ there is an identity $\mathrm{Tr}(W)=W$, and the potential has an isolated singularity at the origin. The cohomology $\Ho(\phim{W}\mathbb{Q}_{\mathbb{A}^3})$ is not pure (see \cite[Rem.3.5]{DMSS15} \cite[Ex.7.3.5]{Kul98} \cite[Ex.9.1]{Sch85}), and so \[ \mathcal{DT}_{W,1}:=\Ho(\phim{\mathcal{T}r(W)_{1}}\mathbb{Q}_{\mathcal{M}_1})\otimes\mathbb{L}^{-3/2} \] is not pure either. 
\end{remark} \begin{remark} \label{sdrem} Using Proposition \ref{basicfacts} we have isomorphisms \begin{align*} \mathbb{D}^{\mon}_{\mathcal{M}_{\dd}^{\zeta\sst}}\phim{\mathcal{T}r(W)^{\zeta}_{\dd}}\mathcal{IC}_{\mathcal{M}^{\zeta\sst}_{\dd}}(\mathbb{Q})\cong& \phim{\mathcal{T}r(W)^{\zeta}_{\dd}}\mathbb{D}^{\mon}_{\mathcal{M}_{\dd}^{\zeta\sst}}\mathcal{IC}_{\mathcal{M}^{\zeta\sst}_{\dd}}(\mathbb{Q})\\ \cong&\phim{\mathcal{T}r(W)^{\zeta}_{\dd}}\mathcal{IC}_{\mathcal{M}^{\zeta\sst}_{\dd}}(\mathbb{Q}) \end{align*} and so \begin{equation} \label{sdual} \mathbb{D}^{\mon}_{\mathcal{M}^{\zeta\sst}_{\dd}}\mathcal{DT}_{W,\dd}^{\zeta}\cong\mathcal{DT}_{W,\dd}^{\zeta}. \end{equation} \end{remark} \begin{definition} \label{abbrev} We make the abbreviation of symbols \[ \mathfrak{IC}_{W,\dd}^{\zeta}:=\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd}}\mathcal{IC}_{\mathfrak{M}_{\dd}^{\zeta\sst}}(\mathbb{Q}) \] and \[ \mathfrak{IC}_{W,\mu}^{\zeta}:=\phim{\mathfrak{Tr}(W)^{\zeta}_{\mu}}\mathcal{IC}_{\mathfrak{M}_{\mu}^{\zeta\sst}}(\mathbb{Q}). \] \end{definition} \begin{theorem}[Theorem \ref{ThmA}] \label{weakPBW} Assume that $\zeta$ is a $\mu$-generic stability condition on the quiver $Q$. There is an isomorphism in $\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathcal{M}^{\zeta\sst}_{\mu}))$ \begin{equation} \label{weakeq} \Ho(p^{\zeta}_{\mu,*}\mathfrak{IC}_{W,\mu}^{\zeta})\cong\FreeComm_{\boxtimes_{\oplus}}\left(\mathcal{DT}_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right) \end{equation} and an isomorphism in $\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\Lambda_{\mu}^{\zeta}))$ \begin{equation} \label{vweakeq} \Ho(\dim_*p^{\zeta}_{\mu,*}\mathfrak{IC}_{W,\mu}^{\zeta})\cong\FreeComm_{\boxtimes_{+}}\left(\Ho(\dim_*\mathcal{DT}_{W,\mu}^{\zeta})\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right). \end{equation} \end{theorem} \begin{proof} By definition, the left hand side of (\ref{weakeq}) is isomorphic to its total cohomology. 
On the other hand, $\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}$ is also isomorphic to its total cohomology, and $\mathcal{DT}_{W,\mu}^{\zeta}$ is an object of $\MMHM(\mathcal{M}_{\mu}^{\zeta\sst})\subset \mathcal{D}(\MMHM(\mathcal{M}_{\mu}^{\zeta\sst}))$, and is trivially isomorphic to its total cohomology. It follows from the exactness of $\boxtimes_{\oplus}$ that the right hand side of (\ref{weakeq}) is also isomorphic to its total cohomology, and so it is sufficient to construct the isomorphism (\ref{weakeq}) at each cohomological degree. We first show the result, under the assumption that $W=0$. That is, we show that $\Ho(p^{\zeta}_{\mu,*}\mathcal{IC}_{\mathfrak{M}^{\zeta\sst}_{\mu}}(\mathbb{Q}))\cong\FreeComm_{\boxtimes_{\oplus}}(\mathcal{DT}_{W=0,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir})$. By Lemma \ref{appLem}, for fixed $n\in\mathbb{Z}$ and $\mathbf{f}\gg 0$ the map \[ \Ho^n\left(\pi _{\mathbf{f},\dd ,*}^{\zeta}\mathcal{IC}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}(\mathbb{Q})\otimes \mathbb{L}^{\mathbf{f}\cdot \dd/2}\right)\rightarrow\Ho^n\left(p^{\zeta}_{\mu,*}\mathfrak{IC}_{0,\dd}^{\zeta}\right) \] is an isomorphism. It follows that $\Ho(p^{\zeta}_{\mu,*}\mathfrak{IC}_{0,\mu}^{\zeta})$ is pure, since $\Ho^n(\pi _{\mathbf{f},\dd ,*}^{\zeta}\mathcal{IC}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}(\mathbb{Q}))$ is a pure mixed Hodge module of weight $n$, as purity is preserved by direct image along proper maps. It follows that $\Ho(p^{\zeta}_{\mu,*}\mathfrak{IC}^{\zeta}_{0,\mu})$ is locally finite in the sense of Definition \ref{lfDef}, as is $\FreeComm_{\boxtimes_{\oplus}}(\mathcal{DT}_{0,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir})$, by Lemma \ref{lfRem}, since \[ \mathcal{DT}_{0,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\in\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathcal{M}_{\mu}^{\zeta\sst}\setminus\mathcal{M}^{\zeta\sst}_0)). 
\] The element $\FreeComm_{\boxtimes_{\oplus}}(\mathcal{DT}_{0,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir})$ is also a pure monodromic mixed Hodge module, by Proposition \ref{symmPure}, and so both $\Ho(p^{\zeta}_{\mu,*}\mathcal{IC}_{\mathfrak{M}^{\zeta\sst}_{\mu}}(\mathbb{Q}))$ and $\FreeComm_{\boxtimes_{\oplus}}(\mathcal{DT}_{0,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir})$ are direct sums of simple pure mixed Hodge modules, and are isomorphic if and only if they have the same class in $\Ka_0(\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathcal{M}^{\zeta\sst}_{\mu})))$. From the fact that the map $\Phi^n_{0,\mathbf{f},\dd }$ of equation (\ref{PhiBar}) is an isomorphism for fixed $n$ and $\mathbf{f}\gg 0$ we deduce that \begin{align*} \left[\Ho\left(p^{\zeta}_{\mu,*}\mathcal{IC}_{\mathfrak{M}^{\zeta\sst}_{\mu}}(\mathbb{Q})\right)\right]_{\Ka_0}=&\lim_{\mathbf{f}\mapsto \infty}[\pi _{\mathbf{f},\dd ,*}^{\zeta}\mathcal{IC}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta\sst}}(\mathbb{Q})\otimes\mathbb{L}^{\mathbf{f}\cdot \dd/2}]_{\Ka_0}\\=&\lim_{\mathbf{f}\mapsto\infty}\left[\FreeComm_{\boxtimes_{\oplus}}\left(\bigoplus_{\dd\in\Lambda_{\mu}^{\zeta}}\mathcal{DT}_{0,\dd}^{\zeta}\otimes\HO(\mathbb{P}^{\mathbf{f}\cdot \dd-1})_{\vir}\otimes\mathbb{L}^{\mathbf{f}\cdot \dd/2}\right)\right]_{\Ka_0}\\=&\left[\Sym_{\boxtimes_{\oplus}}\left(\bigoplus_{\dd\in\Lambda_{\mu}^{\zeta}}\mathcal{DT}^{\zeta}_{0,\dd}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right)\right]_{\Ka_0} \end{align*} as required -- for the second equality we have used the main result of \cite{Meinhardt14}. 
\smallbreak For the case $W\neq 0$, by Proposition \ref{TSeq} we deduce the existence of isomorphisms \begin{align*} \Ho\left(p^{\zeta}_{\mu,*}\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd}}\mathcal{IC}_{\mathfrak{M}^{\zeta\sst}_{\mu}}(\mathbb{Q})\right)&\cong^{\nu_{\dd}}\phim{\mathcal{T}r(W)^{\zeta}_{\dd}}\Ho\left(p^{\zeta}_{\mu,*}\mathcal{IC}_{\mathfrak{M}^{\zeta\sst}_{\mu}}(\mathbb{Q})\right)\\ &\cong \phim{\mathcal{T}r(W)^{\zeta}_{\dd}}\FreeComm_{\boxtimes_{\oplus}}\left(\bigoplus_{\dd\in\Lambda_{\mu}^{\zeta}}\mathcal{DT}^{\zeta}_{0,\dd}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right)\\ &\cong \FreeComm_{\boxtimes_{\oplus}}\left(\bigoplus_{\dd\in\Lambda_{\mu}^{\zeta}}\phim{\mathcal{T}r(W)^{\zeta}_{\dd}}\mathcal{DT}^{\zeta}_{0,\dd}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right)\\ &= \FreeComm_{\boxtimes_{\oplus}}\left(\bigoplus_{\dd\in\Lambda_{\mu}^{\zeta}}\mathcal{DT}^{\zeta}_{W,\dd}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right). \end{align*} Finally, for the absolute case of (\ref{vweakeq}), we use the following chain of isomorphisms, for $\mathbf{f}\gg 0$, where the second is the decomposition theorem for the direct image of the vanishing cycles along a proper map from Proposition \ref{basicfacts}(\ref{vanDec}): \begin{align*} \Ho^n\left(\dim_*p^{\zeta}_{\mu,*}\mathfrak{IC}_{W,\mu}^{\zeta}\right) &\cong \Ho^n\left(\dim_*\pi _{\mathbf{f},\dd ,*}^{\zeta}\phim{\mathcal{T}r(W)_{\mathbf{f},\dd}^{\zeta}}\mathcal{IC}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta\sst}}(\mathbb{Q})\otimes\mathbb{L}^{\mathbf{f}\cdot \dd/2}\right)\\ &\cong \Ho^n\left(\dim_*\Ho(\pi _{\mathbf{f},\dd ,*}^{\zeta}\phim{\mathcal{T}r(W)_{\mathbf{f},\dd}^{\zeta}}\mathcal{IC}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta\sst}}(\mathbb{Q})\otimes\mathbb{L}^{\mathbf{f}\cdot \dd/2})\right)\\ &\cong \Ho^n\left(\dim_*\FreeComm_{\boxtimes_{\oplus}}\left(\bigoplus_{\dd\in\Lambda_{\mu}^{\zeta}}\mathcal{DT}^{\zeta}_{W,\dd}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right)\right)\\ &\cong 
\FreeComm_{\boxtimes_{+}}\left(\Ho(\dim_*\mathcal{DT}_{W,\mu}^{\zeta})\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right). \end{align*} \end{proof} \begin{remark} \label{weakPBWrem} We can think of Theorem \ref{weakPBW} as a `weak' Poincar\'e--Birkhoff--Witt theorem for $\Ho\left(p^{\zeta}_{\mu,*}\mathfrak{IC}^{\zeta}_{W,\mu}\right)$. It is a PBW theorem in the sense that it asserts that, as an object in a symmetric monoidal category, $\Ho\left(p^{\zeta}_{\mu,*}\mathfrak{IC}^{\zeta}_{W,\mu}\right)$ is isomorphic to a free symmetric algebra. It is `weak' in the sense that the isomorphism itself is not specified in terms of an algebra structure on $\Ho\left(p^{\zeta}_{\mu,*}\mathfrak{IC}^{\zeta}_{W,\mu}\right)$. In Section \ref{CoHAsec} we will introduce an algebra structure on $\Ho\left(p^{\zeta}_{\mu,*}\mathfrak{IC}^{\zeta}_{W,\mu}\right)$ relativizing the Kontsevich--Soibelman construction \cite[Sec.7]{KS2} over the base $\mathcal{M}_{\mu}^{\zeta\sst}$, and prove the corresponding `strong' PBW theorem (Theorem \ref{strongPBW}) in Section \ref{PBWsec}. \end{remark} Taking the Verdier dual of both sides of (\ref{weakeq}) we deduce the following corollary. \begin{corollary} Assume that $\zeta$ is $\mu$-generic. Then there is an isomorphism \begin{equation} \Ho\left(p^{\zeta}_{\mu,!}\mathfrak{IC}_{W,\mu}^{\zeta}\right)\cong\FreeComm_{\boxtimes_{\oplus}}\left(\mathcal{DT}_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})^{\vee}_{\vir}\right) \end{equation} where $\HO(\mathbb{C}\mathbb{P}^{\infty})^{\vee}_{\vir}:=\ihom_{\MMHM(\pt)}(\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir},\mathbb{Q})$ is the dual monodromic mixed Hodge structure. 
\end{corollary} \begin{corollary} \label{inccor} There is a canonical inclusion \begin{equation} \label{caninc} \mathcal{DT}^{\zeta}_{W,\mu}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\rightarrow \Ho\left(p_{\mu,*}^{\zeta}\mathfrak{IC}_{W,\mu}^{\zeta}\right). \end{equation} \end{corollary} \begin{proof} First we prove the corollary in the case $W=0$. In this case, the left hand side of (\ref{caninc}) is precisely the summand of the right hand side with strict supports equal to $\mathcal{M}^{\zeta\sst}_{\dd}$ for $\dd\in\Lambda^{\zeta}_{\mu}$ by Theorem \ref{weakPBW}; the result then follows from the decomposition theorem. For the general case, we apply the functor $\phim{\mathcal{T}r(W)_{\mu}^{\zeta}}$ to the inclusion from the case $W=0$. \end{proof} \subsection{Cohomological wall crossing formula} \label{cwcfSec} In this section we prove a weak (in the sense of Remark \ref{weakPBWrem}) version of Theorem \ref{strongPBW}. We will return to the cohomological wall crossing formula in Section \ref{CWCF} once we have introduced the cohomological Hall algebras $\Ho(\mathcal{A}_{W,\mu}^{\zeta})$ and $\HO(\mathcal{A}_{W,\mu}^{\zeta})$, but for the time being we can still prove that there is \textit{some} isomorphism (\ref{gthmd}), defined without reference to the cohomological Hall algebra. The mere existence of this isomorphism is enough for many applications. Fix a dimension vector $\dd\in\mathbb{N}^{Q_0}$. Denote by $\HN^{\geq}_{\dd}$ the set of Harder--Narasimhan types for $\dd$, that is, sequences $\dd^1,\ldots,\dd^s\in\mathbb{N}^{Q_0}$ such that the slopes $\Xi^{\zeta}(\dd^1),\ldots,\Xi^{\zeta}(\dd^s)$ are strictly decreasing and $\sum_{i=1}^s \dd^i=\dd$. 
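For small examples the set $\HN^{\geq}_{\dd}$ can be enumerated directly from this definition. The following sketch does so, assuming the common slope convention $\Xi^{\zeta}(\ee)=\zeta\cdot\ee/\sum_{i}\ee_i$ for a rational vector $\zeta$ (the function names and this convention are illustrative, not taken from the text):

```python
from fractions import Fraction

def hn_types(d, zeta):
    """Enumerate all Harder--Narasimhan types of the dimension vector d:
    sequences of nonzero vectors with strictly decreasing slopes, summing to d.
    Slope convention (an assumption): mu(e) = zeta . e / sum(e)."""
    def slope(e):
        return Fraction(sum(z * x for z, x in zip(zeta, e)), sum(e))

    def nonzero_subvectors(r):
        # all e with 0 <= e_i <= r_i and e != 0
        vecs = [()]
        for ri in r:
            vecs = [v + (k,) for v in vecs for k in range(ri + 1)]
        return [v for v in vecs if any(v)]

    def extend(remaining, last_slope):
        # build the tail of a type, with slopes strictly below last_slope
        if not any(remaining):
            return [[]]
        types = []
        for e in nonzero_subvectors(remaining):
            s = slope(e)
            if last_slope is None or s < last_slope:
                rest = tuple(r - x for r, x in zip(remaining, e))
                types += [[e] + tail for tail in extend(rest, s)]
        return types

    return extend(tuple(d), None)
```

For instance, for the $A_2$ quiver with $\dd=(1,1)$ and $\zeta=(1,0)$ this returns the two types $((1,1))$ and $((1,0),(0,1))$, since the sequence $((0,1),(1,0))$ has increasing slopes.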
Recall that by \cite[Prop.3.4]{Reineke_HN}, the moduli stack $\mathfrak{M}_{\dd}$ has a stratification by locally closed substacks \begin{equation} \label{HNstrat} \mathfrak{M}_{\dd}=\coprod_{\overline{\dd}\in\HN^{\geq}_{\dd}}\mathfrak{M}_{\overline{\dd}}^{\zeta} \end{equation} where $\mathfrak{M}_{\overline{\dd}}^{\zeta}$ is the stack of representations which have Harder--Narasimhan type $\overline{\dd}$ with respect to the stability condition $\zeta$. This is a stratification in the following weak sense. There is a partial ordering $\leq'$ on $\HN^{\geq}_{\dd}$ such that \[ \overline{\mathfrak{M}_{\overline{\dd}}^{\zeta}}\subset \bigcup_{\substack{\overline{\ee}\in\HN_{\dd}^{\geq}\\ \overline{\ee}\leq' \overline{\dd}}}\mathfrak{M}_{\overline{\ee}}^{\zeta}. \] Each of the stacks $\mathfrak{M}^{\zeta}_{\overline{\dd}}$ can be written as a global quotient stack \[ \mathfrak{M}^{\zeta}_{\overline{\dd}}\cong X^{\zeta}_{\overline{\dd}}/G_{\overline{\dd}} \] where $X^{\zeta}_{\overline{\dd}}\subset X_{\dd}$ is the subspace of representations preserving the flag defined by $\overline{\dd}$, such that each of the associated $\dd^r$-dimensional subquotients is $\zeta$-semistable, and $G_{\overline{\dd}}\subset G_{\dd}$ is the subgroup preserving the same flag. Each of these stacks comes with a map $p_{\overline{\dd}}\colon \mathfrak{M}^{\zeta}_{\overline{\dd}}\rightarrow\mathcal{M}_{\dd}$ sending a representation to its semisimplification, and a representable proper map $i_{\overline{\dd}}\colon \mathfrak{M}^{\zeta}_{\overline{\dd}}\rightarrow \mathfrak{M}_{\dd}$ given by forgetting the Harder--Narasimhan filtration, and the diagram \[ \xymatrix{ \mathfrak{M}_{\overline{\dd}}^{\zeta}\ar[d]^{p_{\overline{\dd}}}\ar[r]^{i_{\overline{\dd}}}&\mathfrak{M}_{\dd}\ar[dl]^{p_{\dd}}\\ \mathcal{M}_{\dd} } \] commutes. In the proposition below, the map $q^{\zeta}_{\mu}$ is as defined in (\ref{qmudef}). \begin{theorem} \label{gcDT} Let $\zeta$ be a Bridgeland stability condition. 
Then there are isomorphisms \begin{equation} \label{ngwc} \Ho\left(p_!\phim{\mathfrak{Tr}(W)}\mathcal{IC}_{\mathfrak{M}}(\mathbb{Q})\right)\cong\mbox{\larger[3]{$\boxtimes$}}_{\oplus, -\infty\xrightarrow{\mu} \infty}^{\tw} q_{\mu,!}^{\zeta}\Ho\left(p_{\mu,!}^{\zeta}\phim{\mathfrak{Tr}(W)_{\mu}^{\zeta}}\mathcal{IC}_{\mathfrak{M}_{\mu}^{\zeta\sst}}(\mathbb{Q})\right) \end{equation} and \begin{equation} \label{ngwcnc} \Ho\left(p_*\phim{\mathfrak{Tr}(W)}\mathcal{IC}_{\mathfrak{M}}(\mathbb{Q})\right)\cong\mbox{\larger[3]{$\boxtimes$}}_{\oplus, \infty\xrightarrow{\mu} -\infty}^{\tw} q_{\mu,*}^{\zeta}\Ho\left(p_{\mu,*}^{\zeta}\phim{\mathfrak{Tr}(W)_{\mu}^{\zeta}}\mathcal{IC}_{\mathfrak{M}_{\mu}^{\zeta\sst}}(\mathbb{Q})\right). \end{equation} Assume in addition that $\zeta$ is generic. Then there are isomorphisms \begin{equation} \label{gwPBW} \Ho\left(p_!\phim{\mathfrak{Tr}(W)}\mathcal{IC}_{\mathfrak{M}}(\mathbb{Q})\right)\cong \mbox{\larger[3]{$\boxtimes$}}_{\oplus,-\infty \xrightarrow{\mu}\infty}^{\tw} \left(q_{\mu,!}^{\zeta}\Sym_{\boxtimes_{\oplus}}\left(\mathcal{DT}_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})^{\vee}_{\vir}\right)\right) \end{equation} and \begin{equation} \label{gwPBWnc} \Ho\left(p_*\phim{\mathfrak{Tr}(W)}\mathcal{IC}_{\mathfrak{M}}(\mathbb{Q})\right)\cong \mbox{\larger[3]{$\boxtimes$}}_{\oplus, \infty\xrightarrow{\mu} -\infty }^{\tw} \left(q_{\mu,*}^{\zeta}\Sym_{\boxtimes_{\oplus}}\left(\mathcal{DT}_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right)\right). \end{equation} \end{theorem} \begin{proof} As in the proof of Theorem \ref{weakPBW} we only need to prove the case for which $W=0$ and then we can deduce the general case from the fact that monodromic vanishing cycle functors are exact, and commute with proper maps and the relevant symmetric monoidal structures. We first prove that the isomorphism (\ref{ngwc}) exists. Fix a $\dd\in\mathbb{N}^{Q_0}$. 
Then if we complete $\geq'$ to a total ordering $\geq$ of $\HN^{\geq}_{\dd}$ and define, for each $\overline{\dd}\in\HN^{\geq}_{\dd}$, \begin{align*} \mathfrak{M}_{\leq \overline{\dd}}^{\zeta}:=&\bigcup_{\overline{\ee}\leq \overline{\dd}}\mathfrak{M}_{\overline{\ee}}^{\zeta}\\ \mathfrak{M}_{< \overline{\dd}}^{\zeta}:=&\bigcup_{\overline{\ee}< \overline{\dd}}\mathfrak{M}_{\overline{\ee}}^{\zeta} \end{align*} then $\mathfrak{M}_{\leq \overline{\dd}}^{\zeta}\subset \mathfrak{M}_{\dd}$ is a closed embedding, and $\mathfrak{M}_{\overline{\dd}}^{\zeta}\subset\mathfrak{M}_{\leq \overline{\dd}}^{\zeta}$ is an open embedding with complement $\mathfrak{M}_{<\overline{\dd}}^{\zeta}\subset\mathfrak{M}_{\leq \overline{\dd}}^{\zeta}$. We denote by \begin{align} i_{\overline{\dd}}\colon &\mathfrak{M}_{\overline{\dd}}^{\zeta}\hookrightarrow\mathfrak{M}_{\dd} \label{id}\\ i_{<\overline{\dd}}\colon &\mathfrak{M}_{<\overline{\dd}}^{\zeta}\hookrightarrow\mathfrak{M}_{\dd}\label{idd}\\ i_{\leq\overline{\dd}}\colon &\mathfrak{M}_{\leq \overline{\dd}}^{\zeta}\hookrightarrow\mathfrak{M}_{\dd}\label{iddd} \end{align} the obvious inclusions. We will show that all of the terms in the following distinguished triangle are pure, and that the connecting maps are zero, and furthermore that the triangle is split: \begin{equation} \label{HNtri} \Ho\left(p_{\dd,!}i_{\overline{\dd},!}\mathbb{Q}_{\mathfrak{M}^{\zeta}_{\overline{\dd}}}\right)\rightarrow \Ho\left(p_{\dd,!}i_{\leq \overline{\dd},!}\mathbb{Q}_{\mathfrak{M}^{\zeta}_{\leq \overline{\dd}}}\right)\rightarrow \Ho\left(p_{\dd,!}i_{< \overline{\dd},!}\mathbb{Q}_{\mathfrak{M}^{\zeta}_{< \overline{\dd}}}\right)\rightarrow \end{equation} Fix $\overline{\dd}\in\HN^{\geq}_{\dd}$. 
Consider the following commutative diagram \[ \xymatrix{ &\mathfrak{M}_{\overline{\dd}}^{\zeta}\ar[d]^{q_{\overline{\dd}}}\ar[r]^-{i_{\overline{\dd}}}&\mathfrak{M}_{\dd}\ar[ddd]^{p_{\dd}}\\ X^{\zeta}_{\overline{\dd}}/(G_{\dd^1}\times\ldots \times G_{\dd^s})\ar[r]^{q_{2,\overline{\dd}}}\ar[ur]^{q_{1,\overline{\dd}}}&\mathfrak{M}_{\dd^1}^{\zeta\sst}\times\ldots\times\mathfrak{M}_{\dd^s}^{\zeta\sst}\ar[d]^{p^{\zeta}_{\dd^1}\times\ldots\times p^{\zeta}_{\dd^s}}\\ &\mathcal{M}^{\zeta\sst}_{\dd^1}\times\ldots\times\mathcal{M}^{\zeta\sst}_{\dd^s}\ar[d]^{q^{\zeta}_{\dd^1}\times\ldots\times q^{\zeta}_{\dd^s}}\\ &\mathcal{M}_{\dd^1}\times\ldots\times\mathcal{M}_{\dd^s}\ar[r]^-{\oplus}&\mathcal{M}_{\dd} } \] where $q_{1,\overline{\dd}}$ is an affine fibration of relative dimension \[ f_1(\overline{\dd}):=\sum_{1\leq r<r'\leq s}\dd^r\cdot \dd^{r'} \] and $q_{2,\overline{\dd}}$ is an affine fibration of relative dimension \[ f_2(\overline{\dd}):=\sum_{1\leq r<r'\leq s}\sum_{a\in Q_1}\dd^{r'}_{s(a)}\dd^{r}_{t(a)}. \] We define $(\overline{\dd},\overline{\dd}):=f_1(\overline{\dd})-f_2(\overline{\dd})$. 
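As a consistency check (an illustration with hypothetical data, not taken from the text), let $Q$ have a single vertex and $g$ loops, and take a two-step type $\overline{\dd}=(d^1,d^2)$. Then

```latex
% f_1 counts the flag directions, f_2 the arrow contributions:
\[
f_1(\overline{\dd}) = d^1 d^2, \qquad f_2(\overline{\dd}) = g\,d^1 d^2,
\]
% so the twist is
\[
(\overline{\dd},\overline{\dd}) = f_1(\overline{\dd}) - f_2(\overline{\dd}) = (1-g)\,d^1 d^2,
\]
% the Euler form of the g-loop quiver evaluated on the pair (d^1, d^2).
```

so in this case the pairing $(\overline{\dd},\overline{\dd})$ recovers the Euler form of $Q$ evaluated on the pair $(d^1,d^2)$.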
Then by homotopy invariance and the decomposition theorem \begin{align*} \Ho\left(p_{\dd,!}i_{\overline{\dd},!}\mathbb{Q}_{\mathfrak{M}_{\overline{\dd}}}\right) \cong& \Ho\left(\oplus_!(q^{\zeta}_{\dd^1}\times\ldots\times q^{\zeta}_{\dd^s})_!(p^{\zeta}_{\dd^1}\times\ldots\times p^{\zeta}_{\dd^s})_!\mathbb{Q}_{\mathfrak{M}_{\dd^1}^{\zeta\sst}\times\ldots\times\mathfrak{M}_{\dd^s}^{\zeta\sst}}\right)\otimes\mathbb{L}^{-(\overline{\dd},\overline{\dd})}\\ \cong& \lim_{\mathbf{f}\mapsto\infty} \Ho\left(\oplus_!(q^{\zeta}_{\dd^1}\times\ldots\times q^{\zeta}_{\dd^s})_!(\pi^{\zeta}_{\mathbf{f},\dd ^1}\times\ldots\times \pi^{\zeta}_{\mathbf{f},\dd ^s})_!\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd ^1}^{\zeta}\times\ldots\times\mathcal{M}^{\zeta}_{\mathbf{f},\dd ^s}}\right)\otimes\mathbb{L}^{-(\overline{\dd},\overline{\dd})}\\ \cong& \oplus_!(q^{\zeta}_{\dd^1}\times\ldots\times q^{\zeta}_{\dd^s})_!\lim_{\mathbf{f}\mapsto\infty}\Ho\left((\pi^{\zeta}_{\mathbf{f},\dd ^1}\times\ldots\times \pi^{\zeta}_{\mathbf{f},\dd ^s})_!\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd ^1}^{\zeta}\times\ldots\times\mathcal{M}^{\zeta}_{\mathbf{f},\dd ^s}}\right)\otimes\mathbb{L}^{-(\overline{\dd},\overline{\dd})}\\ \cong&\oplus_!(q^{\zeta}_{\dd^1}\times\ldots\times q^{\zeta}_{\dd^s})_!\Ho\left((p^{\zeta}_{\dd^1}\times\ldots\times p^{\zeta}_{\dd^s})_!\mathbb{Q}_{\mathfrak{M}_{\dd^1}^{\zeta\sst}\times\ldots\times\mathfrak{M}_{\dd^s}^{\zeta\sst}}\right)\otimes\mathbb{L}^{-(\overline{\dd},\overline{\dd})}\\ \cong&q^{\zeta}_{\dd^1,!}\Ho\left(p^{\zeta}_{\dd^1,!}\mathbb{Q}_{\mathfrak{M}_{\dd^1}^{\zeta\sst}}\right)\boxtimes_{\oplus}\ldots\boxtimes_{\oplus} q^{\zeta}_{\dd^s,!}\Ho\left(p^{\zeta}_{\dd^s,!}\mathbb{Q}_{\mathfrak{M}_{\dd^s}^{\zeta\sst}}\right)\otimes\mathbb{L}^{-(\overline{\dd},\overline{\dd})}. 
\end{align*} Since each $q_{\dd^r}^{\zeta}$ is a proper map, and since, as we saw in the proof of Theorem \ref{weakPBW}, $\Ho\left(p^{\zeta}_{\dd^r,!}\mathbb{Q}_{\mathfrak{M}_{\dd^r}^{\zeta\sst}}\right)$ is pure, each $q^{\zeta}_{\dd^r,!}\Ho\left(p^{\zeta}_{\dd^r,!}\mathbb{Q}_{\mathfrak{M}_{\dd^r}^{\zeta\sst}}\right)$ is pure. The claim regarding the distinguished triangles (\ref{HNtri}) then follows by induction and the semisimplicity of the category of pure mixed Hodge modules. Taking care of the twists, we calculate \begin{align} \label{int1} \Ho\left(p_!\mathcal{IC}_{\mathfrak{M}}(\mathbb{Q})\right)\cong&\Ho\left(p_!\mathbb{Q}_{\mathfrak{M}}\right)\otimes\mathbb{L}^{(\dd,\dd)/2}\\ \nonumber\cong&\bigoplus_{\overline{\dd}\in\HN^{\geq}}\Ho\left(p_{\dd,!}i_{\overline{\dd},!}\mathbb{Q}_{\mathfrak{M}_{\overline{\dd}}}\right)\otimes\mathbb{L}^{(\dd,\dd)/2}\\ \nonumber\cong&\bigoplus_{(\dd^1,\ldots,\dd^s)\in\HN^{\geq}}q^{\zeta}_{\dd^1,!}\Ho\left(p^{\zeta}_{\dd^1,!}\mathbb{Q}_{\mathfrak{M}_{\dd^1}^{\zeta\sst}}\right)\boxtimes_{\oplus}\ldots\boxtimes_{\oplus} q^{\zeta}_{\dd^s,!}\Ho\left(p^{\zeta}_{\dd^s,!}\mathbb{Q}_{\mathfrak{M}_{\dd^s}^{\zeta\sst}}\right)\otimes\mathbb{L}^{-(\overline{\dd},\overline{\dd})+(\dd,\dd)/2}\\ \nonumber\cong&\bigoplus_{(\dd^1,\ldots,\dd^s)\in\HN^{\geq}}q^{\zeta}_{\dd^s,!}\Ho\left(p^{\zeta}_{\dd^s,!}\mathcal{IC}_{\mathfrak{M}_{\dd^s}^{\zeta\sst}}(\mathbb{Q})\right)\boxtimes^{\tw}_{\oplus}\ldots\boxtimes^{\tw}_{\oplus} q^{\zeta}_{\dd^1,!}\Ho\left(p^{\zeta}_{\dd^1,!}\mathcal{IC}_{\mathfrak{M}_{\dd^1}^{\zeta\sst}}(\mathbb{Q})\right)\\ \nonumber \cong&\mbox{\larger[3]{$\boxtimes$}}_{\oplus, -\infty\xrightarrow{\mu} \infty}^{\tw} q_{\mu,!}^{\zeta}\Ho\left(p_{\mu,!}^{\zeta}\mathcal{IC}_{\mathfrak{M}_{\mu}^{\zeta\sst}}(\mathbb{Q})\right). \end{align} This completes the proof of the first part of the theorem. If $\zeta$ is generic, we may apply Theorem \ref{weakPBW} at each slope $\mu\in(-\infty,\infty)$ to deduce (\ref{gwPBW}). 
The isomorphisms (\ref{ngwcnc}) and (\ref{gwPBWnc}) are given by taking the Verdier duals of (\ref{ngwc}) and (\ref{gwPBW}) respectively, by Remark \ref{twDual}. \end{proof} \begin{remark} In the case $W=0$, taking the direct image of the isomorphism (\ref{ngwcnc}) of Theorem \ref{gcDT} along the map $\mathcal{M}(Q)\rightarrow \mathbb{N}^{Q_0}$ we obtain the isomorphism \begin{equation} \HO\left(\mathfrak{M}(Q),\mathbb{Q}\right)\cong\mbox{\larger[3]{$\boxtimes$}}_{+, \infty\xrightarrow{\mu} -\infty}^{\tw} \HO\left(\mathfrak{M}_{\mu}^{\zeta\sst}(Q),\mathbb{Q}\right). \end{equation} The existence of such an isomorphism is proved by Franzen and Reineke in \cite{FrRe15} via a vanishing result for even cohomology, and before that for the case of a Dynkin quiver which is not an orientation of $E_8$ by Rimanyi in \cite{Ri13}, where this result is a corollary of the existence of a Poincar\'e--Birkhoff--Witt isomorphism for hypercohomology. \end{remark} Theorems \ref{weakPBW} and \ref{gcDT} give categorified upgrades of the integrality theorem and the wall-crossing formula, respectively; that is, we replace these theorems, considered as equalities in a Grothendieck group, with isomorphisms in the appropriate categories. The rest of the paper is devoted to proving that the isomorphisms in these theorems can be taken to be induced by multiplication in appropriately defined cohomological Hall algebras. \section{Cohomological Hall algebras and modules} \label{CoHAsec} \subsection{A relative Cohomological Hall algebra} \label{relc} In this section we define the relative cohomological Hall algebra $\Ho(\Coha_{W,\mu}^{\zeta})$. Here $\zeta\in\mathbb{H}_+^{Q_0}$ is a stability condition, which we do not assume to be generic. The underlying cohomologically graded monodromic mixed Hodge module of $\Ho(\Coha_{W,\mu}^{\zeta})$ is $\Ho\left(p^{\zeta}_{\mu,*}\mathfrak{IC}_{W,\mu}^{\zeta}\right)$ from Definition \ref{abbrev} and Theorem \ref{weakPBW}. 
We will define morphisms \begin{equation} \label{multint} \Ho\left(p^{\zeta}_{\dd',*}\mathfrak{IC}_{W,\dd'}^{\zeta}\right)\boxtimes^{\tw}_{\oplus} \Ho\left(p^{\zeta}_{\dd'',*}\mathfrak{IC}_{W,\dd''}^{\zeta}\right)\xrightarrow{\Ho\left(\ms_{W,\dd',\dd''}^{\zeta}\right)}\\ \Ho\left(p^{\zeta}_{\dd,*} \mathfrak{IC}_{W,\dd}^{\zeta}\right) \end{equation} for all $\dd=\dd'+\dd''$ with $\dd',\dd''\in\Lambda_{\mu}^{\zeta}$. The morphisms (\ref{multint}) will satisfy the natural associativity condition for a monoid in the category $\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathcal{M}^{\zeta\sst}_{\mu}))$ with the twisted monoidal product $\boxtimes^{\tw}_{\oplus}$. The result is a \textit{relative} version of the cohomological Hall algebra of Kontsevich and Soibelman \cite{KS2} -- relative in the sense that we work over the base $\mathcal{M}_{\mu}^{\zeta\sst}$, instead of the monoid $\Lambda_{\mu}^{\zeta}$ of dimension vectors of slope $\mu$ with respect to the stability condition $\zeta$, as explained in the introduction. We define (\ref{multint}) as the composition of two morphisms, defined in terms of the commutative diagram \[ \xymatrix{ &X^{\zeta\sst}_{\dd',\dd''}/\left(G_{\dd'}\times G_{\dd''}\right)\ar[dl]_{r_1}\ar[dr]^{r_2}\\ \left(X_{\dd'}^{\zeta\sst}\times X_{\dd''}^{\zeta\sst}\right)/\left(G_{\dd'}\times G_{\dd''}\right)\ar[ddr]^{p_{\dd'}^{\zeta}\times p_{\dd''}^{\zeta}}\ar[d]^{\cong} &&X_{\dd',\dd''}^{\zeta\sst}/G_{\dd',\dd''}\ar[d]^{\cong}\ar[ddl]_{p_{\dd',\dd''}^{\zeta}}\\ \mathfrak{M}_{\dd'}^{\zeta\sst}\times\mathfrak{M}_{\dd''}^{\zeta\sst}\ar[dr]&&\mathfrak{M}_{\dd',\dd''}^{\zeta\sst}\ar[dl]\\ &\mathcal{M}^{\zeta\sst}_{\dd'}\times\mathcal{M}^{\zeta\sst}_{\dd''}\ar[r]^-{\oplus}&\mathcal{M}^{\zeta\sst}_{\dd}. 
} \] We consider the following composition of isomorphisms \begin{align*} &\Ho\left(p^{\zeta}_{\dd',*}\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd'}}\mathcal{IC}_{\mathfrak{M}_{\dd'}^{\zeta\sst}}(\mathbb{Q})\right)\boxtimes \Ho\left(p^{\zeta}_{\dd'',*}\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd''}}\mathcal{IC}_{\mathfrak{M}^{\zeta\sst}_{\dd''}}(\mathbb{Q})\right)\cong^{\mathtt{TS}}\\ &\Ho\left((p^{\zeta}_{\dd'}\times p^{\zeta}_{\dd''})_*\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd'}\boxplus\mathfrak{Tr}(W)^{\zeta}_{\dd''}}\mathcal{IC}_{\mathfrak{M}^{\zeta\sst}_{\dd'}\times\mathfrak{M}^{\zeta\sst}_{\dd''}}(\mathbb{Q})\right)\cong\\ &\Ho\left(((p^{\zeta}_{\dd'}\times p^{\zeta}_{\dd''})\circ r_{1})_*\phim{(\mathfrak{Tr}(W)^{\zeta}_{\dd'}\boxplus\mathfrak{Tr}(W)^{\zeta}_{\dd''})\circ r_1}\mathcal{IC}_{X^{\zeta\sst}_{\dd',\dd''}/\left(G_{\dd'}\times G_{\dd''}\right)}(\mathbb{Q})\right)\otimes \mathbb{L}^{\sum_{a\in Q_1}\dd''_{s(a)}\dd'_{t(a)}/2}\cong\\ &\Ho\left(p^{\zeta}_{\dd',\dd'',*}\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd',\dd''}}\mathcal{IC}_{\mathfrak{M}_{\dd',\dd''}^{\zeta\sst}}(\mathbb{Q})\right)\otimes\mathbb{L}^{-(\dd'',\dd')/2} \end{align*} to obtain the isomorphism \begin{align*} \alpha^{\zeta}_{\dd',\dd''}\colon&\oplus_*\left(\Ho\left(p^{\zeta}_{\dd',*}\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd'}}\mathcal{IC}_{\mathfrak{M}_{\dd'}^{\zeta\sst}}(\mathbb{Q})\right)\boxtimes \Ho\left(p^{\zeta}_{\dd'',*}\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd''}}\mathcal{IC}_{\mathfrak{M}^{\zeta\sst}_{\dd''}}(\mathbb{Q})\right)\right)\xrightarrow{\cong}\\& \oplus_*\left(\Ho\left(p^{\zeta}_{\dd',\dd'',*}\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd',\dd''}}\mathcal{IC}_{\mathfrak{M}_{\dd',\dd''}^{\zeta\sst}}(\mathbb{Q})\right)\otimes\mathbb{L}^{-(\dd'',\dd')/2}\right). 
\end{align*} Next consider the commutative diagram of stacks \begin{equation} \label{smap} \xymatrix{ \mathfrak{M}^{\zeta\sst}_{\dd',\dd''}\ar[d]^{\cong}\ar[rr]^-{s^{\zeta}_{\dd',\dd''}}&&\mathfrak{M}_{\dd}^{\zeta\sst}\ar[d]^{\cong}\\ X^{\zeta\sst}_{\dd',\dd''}/G_{\dd',\dd''}\ar[d]^{p^{\zeta}_{\dd',\dd''}}\ar[r]^{\iota^{\zeta}_{\dd',\dd''}}&X^{\zeta\sst}_{\dd}/G_{\dd',\dd''}\ar[r]^{r^{\zeta}_{\dd',\dd''}}&X^{\zeta\sst}_{\dd}/G_{\dd}\ar[d]^{p^{\zeta}_{\dd}}\\ \mathcal{M}^{\zeta\sst}_{\dd'}\times\mathcal{M}^{\zeta\sst}_{\dd''}\ar[rr]^{\oplus}&&\mathcal{M}^{\zeta\sst}_{\dd}. } \end{equation} Note that $\iota_{\dd',\dd''}^\zeta$ is a closed inclusion, while $r_{\dd',\dd''}^\zeta$ is proper and representable, and so it follows that $s_{\dd',\dd''}^\zeta$ is a proper and representable morphism of stacks. We define $V_N=\prod_{i\in Q_0}\Hom(\mathbb{C}^N,\mathbb{C}^{\dd_i})$, and $U_N=\prod_{i\in Q_0}\Hom^{\mathrm{surj}}(\mathbb{C}^N,\mathbb{C}^{\dd_i})$. Then for sufficiently large $N$, $G_{\dd}$ acts freely on $U_N$, as does $G_{\dd',\dd''}\subset G_{\dd}$. We define \begin{align} \label{XNdef}&X^{\zeta\sst}_{\dd,N}=X_{\dd}^{\zeta\sst}\times_{G_{\dd}}U_N\\ \label{XXNdef}&X^{\zeta\sst}_{\dd',\dd'',N}=X_{\dd',\dd''}^{\zeta\sst}\times_{G_{\dd',\dd''}}U_{N}, \end{align} and let $\mathrm{Tr}(W)^{\zeta}_{\dd,N}$ and $\mathrm{Tr}(W)^{\zeta}_{\dd',\dd'',N}$ denote the functions induced by $\mathrm{Tr}(W)$ on $X^{\zeta\sst}_{\dd,N}$ and $X^{\zeta\sst}_{\dd',\dd'',N}$ respectively. 
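The freeness of the $G_{\dd}$-action on $U_N$ asserted above can be checked directly; the following standard observation makes the bound on $N$ explicit.
```latex
% G_d acts on U_N by post-composition: (g \cdot u)_i = g_i \circ u_i.
% If g \cdot u = u for some u \in U_N, then for every i \in Q_0
(g_i-\mathrm{id}_{\mathbb{C}^{\dd_i}})\circ u_i=0,
% and since u_i is surjective this forces g_i = \mathrm{id}. Hence the
% action is free whenever U_N is nonempty, i.e. as soon as N \geq \dd_i
% for all i \in Q_0.
```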
Passing to the limit of the composition of morphisms \begin{align}\label{preLim} \beta^{\zeta}_{\dd',\dd'',N}\colon &\oplus_*\Ho\left(p^{\zeta}_{\dd',\dd'',N,*}\phim{\mathrm{Tr}(W)^{\zeta}_{\dd',\dd'',N}}\mathbb{Q}_{X_{\dd',\dd'',N}^{\zeta\sst}}\right)\xrightarrow{\cong } \Ho\left((p^{\zeta}_{\dd,N}\circ s^{\zeta}_{\dd',\dd'',N})_*\phim{\mathrm{Tr}(W)^{\zeta}_{\dd',\dd'',N}}\mathbb{Q}_{X_{\dd',\dd'',N}^{\zeta\sst}}\right)\xrightarrow{\cong}\\ \nonumber &\Ho\left(p^{\zeta}_{\dd,N,*}\phim{\mathrm{Tr}(W)^{\zeta}_{\dd,N}}s^{\zeta}_{\dd',\dd'',N,*}\mathbb{Q}_{X^{\zeta\sst}_{\dd',\dd'',N}}\right)\rightarrow \Ho\left(p^{\zeta}_{\dd,N,*}\phim{\mathrm{Tr}(W)^{\zeta}_{\dd,N}}\mathbb{Q}_{X_{\dd,N}^{\zeta\sst}}\right)\otimes\mathbb{L}^{(\dd',\dd'')} \end{align} we obtain \begin{align}\label{betadef} \beta^{\zeta}_{\dd',\dd''}\colon &\oplus_*\Ho\left(p^{\zeta}_{\dd',\dd'',*}\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd',\dd''}}\mathcal{IC}_{\mathfrak{M}_{\dd',\dd''}^{\zeta\sst}}(\mathbb{Q})\right)\rightarrow \Ho\left(p^{\zeta}_{\dd,*}\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd}}\mathcal{IC}_{\mathfrak{M}_{\dd}^{\zeta\sst}}(\mathbb{Q})\right)\otimes\mathbb{L}^{(\dd',\dd'')/2}. \end{align} Here we have used that $\oplus_*\Ho\left(p^{\zeta}_{\dd',\dd'',N,*}\phim{\mathrm{Tr}(W)^{\zeta}_{\dd',\dd'',N}}\mathbb{Q}_{X_{\dd',\dd'',N}^{\zeta\sst}}\right)$ is naturally isomorphic to $\Ho\left(\oplus_*p^{\zeta}_{\dd',\dd'',N,*}\phim{\mathrm{Tr}(W)^{\zeta}_{\dd',\dd'',N}}\mathbb{Q}_{X_{\dd',\dd'',N}^{\zeta\sst}}\right)$ since $\oplus_*$ is exact, as $\oplus$ is finite. Composing the appropriate twists of $\beta^{\zeta}_{\dd',\dd''}$ and $\alpha^{\zeta}_{\dd',\dd''}$ gives the desired morphism \[ \Ho(\ms_{W,\dd',\dd''}^{\zeta}):= \left(\beta^{\zeta}_{\dd',\dd''}\otimes \mathbb{L}^{-(\dd',\dd'')/2}\right)\circ \left(\alpha^{\zeta}_{\dd',\dd''}\otimes \mathbb{L}^{\langle \dd'',\dd'\rangle/2}\right) \] of monodromic mixed Hodge modules. 
We define \[ \Ho(\ms^{\zeta}_{W,\mu})=\bigoplus_{\dd',\dd''\in\Lambda_{\mu}^{\zeta}}\Ho(\ms_{W,\dd',\dd''}^{\zeta}). \] The proof that the resulting structure is associative is standard, and is as in \cite{KS2}. \begin{definition}[Relative cohomological Hall algebra] We denote by $\Ho(\Coha_{W,\mu}^{\zeta})$ the monoid $\left(\Ho\left(p_{\mu,*}^{\zeta}\mathfrak{IC}_{W,\mu}^{\zeta}\right),\Ho(\ms^{\zeta}_{W,\mu}),\mathcal{IC}_{\mathcal{M}_0^{\zeta\sst}}(\mathbb{Q})\right)$ in $\left(\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathcal{M}_{\mu}^{\zeta\sst})),\boxtimes^{\tw}_{\oplus}\right)$. \end{definition} Recall from Proposition \ref{cvs} the isomorphism $\nu_{\mu}\colon\phim{\mathcal{T}r(W)^{\zeta}_{\mu}}\Ho\left(p^{\zeta}_{\mu,*}\mathfrak{IC}_{0,\mu}^{\zeta}\right)\cong \Ho\left(p^{\zeta}_{\mu,*}\mathfrak{IC}_{W,\mu}^{\zeta}\right)$. The following technical lemma will be used in proving the strong versions of the PBW theorem, as stated in Theorems \ref{qea} and \ref{strongPBW}. \begin{lemma} \label{cohacom} The following diagram commutes: \[ \xymatrix{ \phim{\mathcal{T}r(W)_{\mu}^{\zeta}}\left(\Ho\left(p_{\mu,*}^{\zeta}\mathfrak{IC}_{0,\mu}^{\zeta}\right)\boxtimes_{\oplus}^{\tw} \Ho\left(p_{\mu,*}^{\zeta}\mathfrak{IC}_{0,\mu}^{\zeta}\right)\right)\ar[rr]^-{\phim{\mathcal{T}r(W)^{\zeta}_{\mu}}\Ho(\ms^{\zeta}_{0,\mu})}\ar[d]^{(\nu_{\mu}\boxtimes^{\tw}_{\oplus} \nu_{\mu})\circ \mathtt{TS}^{-1}} &&\phim{\mathcal{T}r(W)^{\zeta}_{\mu}}\Ho\left(p^{\zeta}_{\mu,*}\mathfrak{IC}_{0,\mu}^{\zeta}\right)\ar[d]^{\nu_{\mu}}\\ \Ho\left(p^{\zeta}_{\mu,*}\mathfrak{IC}_{W,\mu}^{\zeta}\right)\boxtimes^{\tw}_{\oplus} \Ho\left(p^{\zeta}_{\mu,*}\mathfrak{IC}_{W,\mu}^{\zeta}\right)\ar[rr]^-{\Ho(\ms^{\zeta}_{W,\mu})}&& \Ho\left(p^{\zeta}_{\mu,*}\mathfrak{IC}_{W,\mu}^{\zeta}\right). } \] \end{lemma} \begin{proof} Here $\mathtt{TS}$ is the Thom--Sebastiani isomorphism. 
We break the two horizontal arrows into the constituent morphisms of the composition $\Ho(\ms^{\zeta}_{W,\mu})$. The problem then reduces to several trivial commutativity statements regarding smaller squares. \end{proof} \begin{remark} Taking the direct image along $\dim \colon\mathcal{M}_{\mu}^{\zeta\sst}\rightarrow\mathbb{N}^{Q_0}$ we obtain an element $\Ho\left(\dim_*\Ho\left(p^{\zeta}_{\mu,*}\mathfrak{IC}_{W,\mu}^{\zeta}\right)\right)$ in $\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathbb{N}^{Q_0}))$ that is noncanonically isomorphic to the underlying $\mathbb{N}^{Q_0}$-graded monodromic mixed Hodge module of the absolute cohomological Hall algebra $\HO(\Coha^{\zeta}_{W,\mu})$, a monoid in the monoidal category $(\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathbb{N}^{Q_0})),\boxtimes^{\tw}_+)$, defined below. That is, we obtain the monoid in $(\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathbb{N}^{Q_0})),\boxtimes^{\tw}_{+})$ \begin{equation} \label{premdef} \Gr_{\Pf}(\HO(\Coha_{W,\mu}^{\zeta})):=\left(\Ho\left(\dim_*\Ho(p_{\mu,*}^{\zeta}\mathfrak{IC}_{W,\mu}^{\zeta})\right),\Ho\left(\dim_*\Ho(\ms_{W,\mu}^{\zeta})\right), \dim_*\mathcal{IC}_{0}(\mathbb{Q})\right). \end{equation} The notation in the left hand side of (\ref{premdef}) will become more transparent after we introduce the perverse filtration in Section \ref{PervSec}. \end{remark} We complete the preceding remark by recalling the definition of the cohomological Hall algebra $\HO(\Coha^{\zeta}_{W,\mu})$ from \cite{KS2}. 
Firstly, mimicking the construction of $\alpha^{\zeta}_{\dd',\dd''}$, we obtain an isomorphism \begin{align*} \overline{\alpha}^{\zeta}_{\dd',\dd''}\colon &\Ho\left((\dim\circ p_{\dd'}^{\zeta})_*\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd'}}\mathcal{IC}_{\mathfrak{M}^{\zeta\sst}_{\dd'}}(\mathbb{Q})\right)\boxtimes_{+}\Ho\left((\dim\circ p^{\zeta}_{\dd''})_*\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd''}}\mathcal{IC}_{\mathfrak{M}^{\zeta\sst}_{\dd''}}(\mathbb{Q})\right)\rightarrow \\&\Ho\left((\dim\circ p^{\zeta}_{\dd',\dd''})_*\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd',\dd''}}\mathcal{IC}_{\mathfrak{M}^{\zeta\sst}_{\dd',\dd''}}(\mathbb{Q})\right)\otimes\mathbb{L}^{-(\dd'',\dd')/2}. \end{align*} Similarly, with notation as in (\ref{preLim}), applying $(\dim\circ p^{\zeta}_{\dd,N})_*\phim{\mathrm{Tr}(W)_{\dd,N}^{\zeta}}$ to the Verdier dual of \[ \mathbb{Q}_{X_{\dd,N}^{\zeta\sst}}\rightarrow s^{\zeta}_{\dd',\dd'',N,*}\mathbb{Q}_{X_{\dd',\dd'',N}^{\zeta\sst}} \] and passing to the limit, we obtain the morphism \[ \overline{\beta}^{\zeta}_{\dd',\dd''}\colon\Ho\left((\dim\circ p_{\dd',\dd''}^{\zeta})_*\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd',\dd''}}\mathcal{IC}_{\mathfrak{M}^{\zeta\sst}_{\dd',\dd''}}(\mathbb{Q})\right)\rightarrow \Ho\left((\dim\circ p^{\zeta}_{\dd})_*\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd}}\mathcal{IC}_{\mathfrak{M}^{\zeta\sst}_{\dd}}(\mathbb{Q})\right)\otimes\mathbb{L}^{(\dd',\dd'')/2}. 
\] Composing, we obtain the multiplication for the cohomological Hall algebra $(\HO(\Coha_{W,\mu}^{\zeta}),\HO(\ast_{W,\mu}^{\zeta}),\mathbb{Q}_{\{0\}})$ defined in \cite{KS2}, where $\HO(\Coha_{W,\mu}^{\zeta}):=\Ho\left((\dim\circ p_{\mu}^{\zeta})_*\mathfrak{IC}^{\zeta}_{W,\mu}\right)$ and $\HO(\ast_{W,\mu}^{\zeta}):=\bigoplus_{\dd',\dd''\in\Lambda_{\mu}^{\zeta}}\HO(\ast_{W,\dd',\dd''}^{\zeta})$, with \[ \HO(\ast_{W,\dd',\dd''}^{\zeta}):=\left(\overline{\beta}^{\zeta}_{\dd',\dd''}\otimes \mathbb{L}^{-(\dd',\dd'')/2}\right)\circ \left(\overline{\alpha}^{\zeta}_{\dd',\dd''}\otimes \mathbb{L}^{\langle \dd'',\dd'\rangle/2}\right). \] \subsection{Perverse filtration} \label{PervSec} Since $\mathcal{M}_{\mathbf{f},\dd }^{\zeta}$ is smooth, and $\pi^{\zeta}_{\mathbf{f},\dd }\colon \mathcal{M}_{\mathbf{f},\dd }^{\zeta}\rightarrow \mathcal{M}_{\dd}^{\zeta\sst}$ is proper, by the decomposition theorem there is an isomorphism \[ \pi ^{\zeta}_{\mathbf{f},\dd ,*}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}\cong\Ho\left(\pi ^{\zeta}_{\mathbf{f},\dd ,*}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}\right). \] Although this decomposition is not canonical, the result implies that the natural maps $\tau_{\leq p}\pi ^{\zeta}_{\mathbf{f},\dd ,*}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}\rightarrow \pi ^{\zeta}_{\mathbf{f},\dd ,*}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}$ admit left inverses, where $\tau_{\leq p}$ is the truncation functor with respect to the usual t-structure on the category of monodromic mixed Hodge modules. 
In particular, the maps \[ \HO\left(\mathcal{M}^{\zeta}_{\mathbf{f},\dd },\tau_{\leq p}\pi ^{\zeta}_{\mathbf{f},\dd ,*}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}\right)\rightarrow \HO\left(\mathcal{M}^{\zeta}_{\mathbf{f},\dd },\pi ^{\zeta}_{\mathbf{f},\dd ,*}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}\right) \] are inclusions, inducing the \textit{perverse filtration} on $\HO\left(\mathcal{M}^{\zeta}_{\mathbf{f},\dd },\pi ^{\zeta}_{\mathbf{f},\dd ,*}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}\right)$, with \[ \Pf^p\left(\HO\left(\mathcal{M}^{\zeta}_{\mathbf{f},\dd },\pi ^{\zeta}_{\mathbf{f},\dd ,*}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}\right)\right):=\HO\left(\mathcal{M}^{\zeta}_{\mathbf{f},\dd },\tau_{\leq p}\pi ^{\zeta}_{\mathbf{f},\dd ,*}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}\right). \] For $W\neq 0$, we consider the split inclusion \begin{align*} &\tau_{\leq p}\pi ^{\zeta}_{\mathbf{f},\dd ,*}\phim{\mathcal{T}r(W)_{\mathbf{f},\dd}^{\zeta}}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}\xrightarrow{\cong}\phim{\mathcal{T}r(W)_{\dd}^{\zeta}}\tau_{\leq p}\pi ^{\zeta}_{\mathbf{f},\dd ,*}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}\hookrightarrow \phim{\mathcal{T}r(W)_{\dd}^{\zeta}}\pi ^{\zeta}_{\mathbf{f},\dd ,*}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}\xrightarrow{\cong}\\ &\pi ^{\zeta}_{\mathbf{f},\dd ,*}\phim{\mathcal{T}r(W)_{\mathbf{f},\dd}^{\zeta}}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}} \end{align*} where the first and last maps are isomorphisms by properness of $\pi_{\mathbf{f},\dd}^{\zeta}$ and exactness of $\phim{\mathcal{T}r(W)_{\dd}^{\zeta}}$, and the middle map is a split inclusion as it is obtained by applying the functor $\phim{\mathcal{T}r(W)_{\dd}^{\zeta}}$ to a split inclusion. 
We set \[ \Pf^p\left(\HO\left(\mathcal{M}^{\zeta}_{\mathbf{f},\dd },\pi ^{\zeta}_{\mathbf{f},\dd ,*}\phim{\mathcal{T}r(W)_{\mathbf{f},\dd}^{\zeta}}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}\right)\right):=\HO\left(\mathcal{M}^{\zeta}_{\mathbf{f},\dd },\tau_{\leq p}\pi ^{\zeta}_{\mathbf{f},\dd ,*}\phim{\mathcal{T}r(W)_{\mathbf{f},\dd}^{\zeta}}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}\right). \] The perverse filtration is well-defined in the limit, as for fixed $\dd\in\mathbb{N}^{Q_0}$, fixed cohomological degree $n$, and for sufficiently large $\mathbf{f}'>\mathbf{f}$, in the commutative diagram \[ \xymatrix{ \HO^n\left(\mathcal{M}_{\mathbf{f}',\dd}^{\zeta},\pi ^{\zeta}_{\mathbf{f}',\dd,*}\mathbb{Q}_{\mathcal{M}_{\mathbf{f}',\dd}^{\zeta}}\otimes\mathbb{L}^{-\dim(\mathfrak{M}_{\dd})/2}\right)\ar[r] &\HO^n\left(\mathcal{M}_{\mathbf{f},\dd }^{\zeta},\pi ^{\zeta}_{\mathbf{f},\dd ,*}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}\otimes\mathbb{L}^{-\dim(\mathfrak{M}_{\dd})/2}\right)\\ \HO^n\left(\mathcal{M}_{\mathbf{f}',\dd}^{\zeta},\tau_{\leq p}\left(\pi ^{\zeta}_{\mathbf{f}',\dd,*}\mathbb{Q}_{\mathcal{M}_{\mathbf{f}',\dd}^{\zeta}}\otimes\mathbb{L}^{-\dim(\mathfrak{M}_{\dd})/2}\right)\right)\ar[r]\ar[u]& \HO^n\left(\mathcal{M}_{\mathbf{f},\dd }^{\zeta},\tau_{\leq p}\left(\pi ^{\zeta}_{\mathbf{f},\dd ,*}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd }^{\zeta}}\otimes\mathbb{L}^{-\dim(\mathfrak{M}_{\dd})/2}\right)\right)\ar[u], } \] the horizontal maps are isomorphisms by Lemma \ref{appLem}. Since the constituent morphisms of $\HO(\ast_{W,\mu}^{\zeta})$ lift to morphisms of monodromic mixed Hodge modules on $\mathcal{M}^{\zeta\sst}_{\mu}$, it follows that $\HO(\ast_{W,\mu}^{\zeta})$ respects the perverse filtration, and from the definitions, along with the decomposition theorem, we obtain the following proposition. 
\begin{proposition} \label{PCoha} There is a natural isomorphism of monoids in $\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathbb{N}^{Q_0}))$ \[ \Gr_{\Pf}\left(\HO(\Coha_{W,\mu}^{\zeta}),\HO(\ast_{W,\mu}^{\zeta}),\HO(\Coha_{W,0}^{\zeta})\right)\cong\Ho\left(\dim_* \left(\Ho(p^{\zeta}_{\mu,*}\mathfrak{IC}_{W,\mu}^{\zeta}),\Ho(\ms^{\zeta}_{W,\mu}),\mathcal{IC}_{\mathcal{M}_0^{\zeta\sst}}(\mathbb{Q})\right)\right). \] Furthermore, forgetting the algebra structure, there is a noncanonical isomorphism of underlying monodromic mixed Hodge modules $\Gr_{\Pf}(\HO(\Coha_{W,\mu}^{\zeta}))\cong\HO(\Coha_{W,\mu}^{\zeta})$. \end{proposition} The isomorphism of algebras is what justifies the notation of (\ref{premdef}). The following technical lemma is what will enable us to use the localised coproduct on $\HO(\Coha^{\zeta}_{W,\mu})$ to induce a Hopf algebra structure on the associated graded object $\Gr_{\Pf}(\HO(\Coha_{W,\mu}^{\zeta}))\cong\Ho\left(\dim_* \Ho(p_{\mu,*}^{\zeta}\mathfrak{IC}_{W,\mu}^{\zeta})\right)$. It is only a very slight variation of \cite[Prop.~1.4.4]{deCat12}, but we include the proof for completeness. \begin{lemma} \label{HCM} Let $V$ be a vector bundle on $\mathfrak{M}_{\dd}$, and let $\mathfrak{eu}(V)\in\HO(\mathfrak{M}_{\dd},\mathbb{Q})$ be the corresponding equivariant Euler class. Then \[ \mathfrak{eu}(V)\cdot \Pf^p(\HO(\mathfrak{M}^{\zeta\sst}_{\dd},\mathbb{Q}))\subset \Pf^{p+2\dim(V)}(\HO(\mathfrak{M}^{\zeta\sst}_{\dd},\mathbb{Q})). \] \end{lemma} \begin{proof} Let $\pr\colon T(V)\rightarrow \mathfrak{M}^{\zeta\sst}_{\dd}$ be the projection from the total space of $V$ restricted to $\mathfrak{M}^{\zeta\sst}_{\dd}$, and let $i\colon \mathfrak{M}^{\zeta\sst}_{\dd}\rightarrow T(V)$ be the inclusion of the zero section. 
Let $T_\mathbf{f}(V)$ be the total space of the bundle $V$ pulled back along the map $\mathcal{M}_{\mathbf{f},\dd }^{\zeta}\rightarrow\mathfrak{M}_{\dd}^{\zeta\sst}$ defined by forgetting the framing, and let $\pr_{\mathbf{f}}$ and $i_{\mathbf{f}}$ denote the corresponding projections and inclusions. Then we have the equality \[ \cdot \mathfrak{eu}(V)|_{\HO(\mathcal{M}_{\mathbf{f},\dd }^{\zeta},\mathbb{Q})}=\HO\left(i_{\mathbf{f},*}\mathbb{Q}_{\mathcal{M}^{\zeta}_{\mathbf{f},\dd }}\rightarrow \mathbb{Q}_{T_{\mathbf{f}}(V)}\otimes\mathbb{L}^{-\dim(V)}\rightarrow i_{\mathbf{f},*}\mathbb{Q}_{\mathcal{M}^{\zeta}_{\mathbf{f},\dd }}\otimes\mathbb{L}^{-\dim(V)}\right) \] and the action of $\cdot\mathfrak{eu}(V)$ on $\HO(\mathfrak{M}^{\zeta\sst}_{\dd},\mathbb{Q})$ is given by the morphism \[ \HO\left(\pi^{\zeta} _{\mathbf{f},\dd ,*}\left(\pr_{\mathbf{f},*}i_{\mathbf{f},*}\mathbb{Q}_{\mathcal{M}^{\zeta}_{\mathbf{f},\dd }}\rightarrow \pr_{\mathbf{f},*}i_{\mathbf{f},*}\mathbb{Q}_{\mathcal{M}^{\zeta}_{\mathbf{f},\dd }}\otimes\mathbb{L}^{-\dim(V)}\right)\right) \] for $\mathbf{f}\gg 0$, which respects the perverse filtration on $\HO(\mathcal{M}^{\zeta}_{\mathbf{f},\dd },\mathbb{Q})$ (with the shift by $2\dim_{\mathbb{C}}(V)$) since \[ \pi^{\zeta} _{\mathbf{f},\dd ,*}\left(\pr_{\mathbf{f},*}i_{\mathbf{f},*}\mathbb{Q}_{\mathcal{M}^{\zeta}_{\mathbf{f},\dd }}\rightarrow \pr_{\mathbf{f},*}i_{\mathbf{f},*}\mathbb{Q}_{\mathcal{M}^{\zeta}_{\mathbf{f},\dd }}\otimes\mathbb{L}^{-\dim(V)}\right) \] is a map of mixed Hodge modules on $\mathcal{M}^{\zeta\sst}_{\dd}$. The result then follows from the definition of the perverse filtration on $\HO(\mathfrak{M}^{\zeta\sst}_{\dd},\mathbb{Q})$. \end{proof} \subsection{Relative CoHA modules} Let $\zeta\in \mathbb{H}_+^{Q_0}$ be a stability condition. 
For each framing vector $\mathbf{f}\in\mathbb{N}^{Q_0}$, and each slope $\mu\in(-\infty,\infty)$, we will form a module \begin{align} \Ho(\mathcal{F}^{\zeta}_{W,\mathbf{f},\mu})\in\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathcal{M}_{\mu}^{\zeta\sst})) \end{align} for the monoid $\Ho(\Coha_{W,\mu}^{\zeta})$. Note that for the definition we do not require that $\zeta$ is $\mu$-generic. Let $\dd\in\Lambda_{\mu}^{\zeta}$. We define \begin{align*} \mathfrak{IC}^{\zeta}_{W,\mathbf{f},\dd }:=&\phim{\mathcal{T}r(W)^{\zeta}_{\mathbf{f},\dd }}\mathcal{IC}_{\mathcal{M}^{\zeta}_{\mathbf{f},\dd }}(\mathbb{Q})\otimes \mathbb{L}^{\mathbf{f}\cdot \dd/2}\\ \mathfrak{IC}^{\zeta}_{W,\mathbf{f},\mu}:=&\bigoplus_{\dd\in\Lambda_{\mu}^{\zeta}}\mathfrak{IC}^{\zeta}_{W,\mathbf{f},\dd } \end{align*} in $\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathcal{M}^{\zeta}_{\mathbf{f},\mu}))$. The Tate twist is chosen so that if $\dd\in\Lambda_{\mu}^{\zeta}$ then \[ \Ho\left(p_{\mu,*}^{\zeta_{\mathbf{f}}^{(\mu)}}\phim{\mathfrak{Tr}(W)^{\zeta_{\mathbf{f}}^{(\mu)}}_{(1,\dd)}}\mathbb{Q}_{X(Q_\mathbf{f})^{\zeta_{\mathbf{f}}^{(\mu)}\sst}_{(1,\dd)}/G_{\dd}}\rightarrow p_{\mu,*}^{\zeta_{\mathbf{f}}^{(\mu)}}j^{\zeta}_{\mathbf{f},\dd ,*}\phim{\mathcal{T}r(W)_{\mathbf{f},\dd }^{\zeta}}\mathbb{Q}_{\mathcal{M}^{\zeta}_{\mathbf{f},\dd }}\right), \] where $j_{\mathbf{f},\dd }^{\zeta}$ is as in (\ref{jmap}), gives rise to an \textit{untwisted} morphism of complexes of monodromic mixed Hodge modules \begin{equation} \label{shiftjust} \Ho(\Coha_{W,\dd}^{\zeta}):=\Ho\left(p_{\dd,*}^{\zeta}\mathfrak{IC}^{\zeta}_{W,\dd}\right)\rightarrow \Ho\left(\pi _{\mathbf{f},\dd,*}^{\zeta}\mathfrak{IC}^{\zeta}_{W,\mathbf{f},\dd }\right). 
\end{equation} via the natural isomorphism \[ \Ho\left(p_{\mu,*}^{\zeta_{\mathbf{f}}^{(\mu)}}\phim{\mathfrak{Tr}(W)^{\zeta_{\mathbf{f}}^{(\mu)}}_{(1,\dd)}}\mathbb{Q}_{X(Q_\mathbf{f})^{\zeta_{\mathbf{f}}^{(\mu)}\sst}_{(1,\dd)}/G_{\dd}}\right)\otimes \mathbb{L}^{{}-\dim(\mathfrak{M}_{\dd})/2}\rightarrow \Ho\left(p_{\dd,*}^{\zeta}\mathfrak{IC}^{\zeta}_{W,\dd}\right) \] provided by homotopy invariance. Later we will see (Theorem \ref{cyclesprop}) that (\ref{shiftjust}) is in fact a morphism in the category of $\Ho(\Coha_{W,\mu}^{\zeta})$-modules after taking the direct sum over all $\dd\in\Lambda_{\mu}^{\zeta}$. We define \[ \Ho(\mathcal{F}^{\zeta}_{W,\mathbf{f},\mu}):=\Ho\left(\pi _{\mathbf{f},\mu,*}^{\zeta}\mathfrak{IC}^{\zeta}_{W,\mathbf{f},\mu}\right). \] Let $\dd'+\dd''=\dd$, with $\dd',\dd''\in\Lambda_{\mu}^{\zeta}$. Our goal is to define maps \begin{align*} & \Ho\left(p^{\zeta}_{\dd',*}\mathfrak{IC}_{W,\dd'}^{\zeta}\right)\boxtimes_{\oplus}^{\tw} \Ho\left(\pi ^{\zeta}_{\mathbf{f},\dd'',*}\mathfrak{IC}_{W,\mathbf{f},\dd''}^{\zeta}\right)\xrightarrow{\Ho(\mos_{W,\dd',\dd''}^{\zeta})}\Ho\left(\pi ^{\zeta}_{\mathbf{f},\dd,*}\mathfrak{IC}_{W,\mathbf{f},\dd}^{\zeta}\right) \end{align*} satisfying the obvious associativity constraint with respect to the relative cohomological Hall algebra multiplication $\Ho(\ms_{W,\mu}^{\zeta})$. 
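Spelled out, the associativity constraint states that for $\dd',\dd'',\dd'''\in\Lambda_{\mu}^{\zeta}$ the two ways of combining the multiplication and the module maps agree (with $\id$ denoting the appropriate identity morphisms):
```latex
\Ho(\mos_{W,\dd'+\dd'',\dd'''}^{\zeta})\circ\left(\Ho(\ms_{W,\dd',\dd''}^{\zeta})\boxtimes^{\tw}_{\oplus}\id\right)
=\Ho(\mos_{W,\dd',\dd''+\dd'''}^{\zeta})\circ\left(\id\boxtimes^{\tw}_{\oplus}\Ho(\mos_{W,\dd'',\dd'''}^{\zeta})\right).
```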
Consider the commutative diagram \begin{equation} \label{modDiag} \xymatrix{ X(Q_{\mathbf{f}})^{(\zeta,\zeta_{\mathbf{f}}^{(\mu)})\sst}_{\dd',(1,\dd'')}/\left(G_{\dd'}\times G_{\dd''}\right)\ar[d]_{r_1}\ar[r]^-{r_2}&X(Q_{\mathbf{f}})_{\dd',(1,\dd'')}^{(\zeta,\zeta_{\mathbf{f}}^{(\mu)})\sst}/G_{\dd',\dd''}\ar[d]_{p^{\zeta,\zeta_{\mathbf{f}}^{(\mu)}}_{\dd',(1,\dd'')}} \\ \left( X(Q)_{\dd'}^{\zeta\sst}\times Y_{\mathbf{f},\dd''}^{\zeta}\right)/\left(G_{\dd'}\times G_{\dd''}\right)\ar[ddr]_-{p^{\zeta}_{\dd'}\times \pi _{\mathbf{f},\dd''}}\ar[d]^{\cong}& \mathcal{M}_{\dd'}^{\zeta\sst}\times\mathcal{M}^{\zeta}_{\mathbf{f},\dd''}\ar[dd]_{\id\times\pi ^{\zeta}_{\mathbf{f},\dd''}} & X(Q_{\mathbf{f}})_{\dd',(1,\dd'')}^{\zeta_{\mathbf{f}}^{(\mu)}\sst}/G_{\dd',\dd''}\ar@{_(->}[ul]^h\ar[ddl]^{\tau_{\dd',(\mathbf{f},\dd'')}}\ar[l]^-{p^{\zeta_{\mathbf{f}}^{(\mu)}}_{\dd',(1,\dd'')}} \\ \mathfrak{M}^{\zeta\sst}_{\dd'}\times\mathcal{M}^{\zeta}_{\mathbf{f},\dd''} & & \\ &\mathcal{M}^{\zeta\sst}_{\dd'}\times\mathcal{M}^{\zeta\sst}_{\dd''}. } \end{equation} The space $Y_{\mathbf{f},\dd''}^{\zeta}$ in the above diagram is the scheme of stable framed representations defined at the start of Section \ref{cohaDT}. The map $h$ is defined to be the natural inclusion, and $\tau_{\dd',(\mathbf{f},\dd'')}$ is defined to be the composition making the diagram commute. To elucidate what the inclusion $h$ looks like, we describe its complement. The stack $X(Q_{\mathbf{f}})_{\dd',(1,\dd'')}^{(\zeta,\zeta_{\mathbf{f}}^{(\mu)})\sst}/G_{\dd',\dd''}$ is naturally isomorphic to the stack of short exact sequences \begin{equation} \label{jexp} 0\rightarrow \rho'\rightarrow \rho\rightarrow\rho''\rightarrow 0 \end{equation} where $\rho'$ is a $\dd'$-dimensional $\zeta$-semistable $\mathbb{C}Q$-representation and $\rho''$ is a $(1,\dd'')$-dimensional $\zeta_{\mathbf{f}}^{(\mu)}$-stable $\mathbb{C}Q_{\mathbf{f}}$-representation equipped with a framing $\rho''_{\infty}\cong\mathbb{C}$. 
The complement to the inclusion $h$ consists of those pairs such that the representation $\rho$ is not itself $\zeta_{\mathbf{f}}^{(\mu)}$-stable. For example if (\ref{jexp}) splits and $\dd'\neq 0$, it represents an element of the complement. We define $\gamma^{\zeta}_{\dd',(\mathbf{f},\dd'')}=(\id\times \pi_{\mathbf{f},\dd''}^{\zeta})\circ p^{\zeta,\zeta_{\mathbf{f}}^{(\mu)}}_{\dd',(1,\dd'')}$. Consider the following composition of morphisms \begin{align*} &\Ho\left(p^{\zeta}_{\dd',*}\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd'}}\mathcal{IC}_{\mathfrak{M}_{\dd'}^{\zeta\sst}}(\mathbb{Q})\right)\boxtimes \Ho\left(\pi^{\zeta}_{\mathbf{f},\dd'',*}\phim{\mathcal{T}r(W)^{\zeta}_{\mathbf{f},\dd''}}\mathcal{IC}_{\mathcal{M}_{\mathbf{f},\dd''}^{\zeta}}(\mathbb{Q})\right)\cong^{\mathtt{TS}}\\ &\Ho\left((p^{\zeta}_{\dd'}\times\pi^{\zeta}_{\mathbf{f},\dd''})_*\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd'}\boxplus\mathcal{T}r(W)^{\zeta}_{\mathbf{f},\dd''}}\mathcal{IC}_{\mathfrak{M}^{\zeta\sst}_{\dd'}\times\mathcal{M}^{\zeta}_{\mathbf{f},\dd''}}(\mathbb{Q})\right)\cong\\ &\Ho\left((p^{\zeta}_{\dd'}\times\pi^{\zeta}_{\mathbf{f},\dd''})_*r_{1,*}\phim{(\mathfrak{Tr}(W)^{\zeta}_{\dd'}\boxplus\mathcal{T}r(W)^{\zeta}_{\mathbf{f},\dd''})\circ r_1}\mathcal{IC}_{X(Q_{\mathbf{f}})_{\dd',(1,\dd'')}^{(\zeta,\zeta_{\mathbf{f}}^{(\mu)})\sst}/(G_{\dd'}\times G_{\dd''})}(\mathbb{Q})\right)\otimes \mathbb{L}^{\sum_{a\in Q_1}\dd''_{s(a)}\dd'_{t(a)}/2}\cong\\ &\Ho\left(\gamma^{\zeta}_{\dd',(\mathbf{f},\dd''),*}\phim{\mathfrak{Tr}(W)^{\zeta,\zeta_{\mathbf{f}}^{(\mu)}}_{\dd',(1,\dd'')}}\mathcal{IC}_{X(Q_{\mathbf{f}})^{(\zeta,\zeta_{\mathbf{f}}^{(\mu)})\sst}_{\dd',(1,\dd'')}/G_{\dd',\dd''}}(\mathbb{Q})\right)\otimes \mathbb{L}^{-(\dd'',\dd')/2}\rightarrow\\ 
&\Ho\left(\gamma^{\zeta}_{\dd',(\mathbf{f},\dd''),*}h_*\phim{\mathfrak{Tr}(W)^{\zeta_{\mathbf{f}}^{(\mu)}}_{\dd',(1,\dd'')}}\mathcal{IC}_{X(Q_{\mathbf{f}})_{\dd',(1,\dd'')}^{\zeta_{\mathbf{f}}^{(\mu)}\sst}/G_{\dd',\dd''}}(\mathbb{Q})\right)\otimes\mathbb{L}^{-(\dd'',\dd')/2}\cong\\ &\Ho\left(\tau_{\dd',(\mathbf{f},\dd''),*}\phim{\mathfrak{Tr}(W)^{\zeta_{\mathbf{f}}^{(\mu)}}_{\dd',(1,\dd'')}}\mathcal{IC}_{X(Q_{\mathbf{f}})^{\zeta_{\mathbf{f}}^{(\mu)}\sst}_{\dd',(1,\dd'')}/G_{\dd',\dd''}}(\mathbb{Q})\right)\otimes\mathbb{L}^{-(\dd'',\dd')/2} \end{align*} defining the map \begin{align*} \alpha^{\zeta}_{\dd',(\mathbf{f},\dd'')}\colon&\oplus_*\left( \Ho\left(p^{\zeta}_{\dd',*}\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd'}}\mathcal{IC}_{\mathfrak{M}_{\dd'}^{\zeta\sst}}(\mathbb{Q})\right)\boxtimes \Ho\left(\pi^{\zeta}_{\mathbf{f},\dd'',*}\phim{\mathcal{T}r(W)^{\zeta}_{\mathbf{f},\dd''}}\mathcal{IC}_{\mathcal{M}_{\mathbf{f},\dd''}^{\zeta}}(\mathbb{Q})\right)\right)\rightarrow\\ &\oplus_*\left(\Ho\left(\tau_{\dd',(\mathbf{f},\dd''),*}\phim{\mathfrak{Tr}(W)^{\zeta_{\mathbf{f}}^{(\mu)}}_{\dd',(1,\dd'')}}\mathcal{IC}_{X(Q_{\mathbf{f}})^{\zeta_{\mathbf{f}}^{(\mu)}\sst}_{\dd',(1,\dd'')}/G_{\dd',\dd''}}(\mathbb{Q})\right)\right)\otimes\mathbb{L}^{-(\dd'',\dd')/2}. \end{align*} Consider the commutative diagram \begin{equation} \label{modDiag2} \xymatrix{ X(Q_{\mathbf{f}})^{\zeta_{\mathbf{f}}^{(\mu)}\sst}_{\dd',(1,\dd'')}/G_{\dd',\dd''}\ar@/^2pc/[rrrr]^-{s_{\dd',(\mathbf{f},\dd'')}}\ar[d]^{\tau_{\dd',(\mathbf{f},\dd'')}}\ar[rr]_{\iota_{\dd',(\mathbf{f},\dd'')}}&&Y_{\mathbf{f},\dd}^{\zeta}/G_{\dd',\dd''}\ar[rr]_-{r_{\dd',(\mathbf{f},\dd'')}}&&Y^{\zeta}_{\mathbf{f},\dd}/G_{\dd}\ar[d]^{\pi^{\zeta}_{\mathbf{f},\dd}}\\ \mathcal{M}^{\zeta\sst}_{\dd'}\times\mathcal{M}^{\zeta\sst}_{\dd''}\ar[rrrr]^{\oplus}&&&&\mathcal{M}^{\zeta\sst}_{\dd}. 
} \end{equation} We set $U_N=\prod_{i\in Q_0}\Hom^{\mathrm{surj}}(\mathbb{C}^N,\mathbb{C}^{\dd_i})$, and define \begin{align*} X(Q_{\mathbf{f}})^{\zeta_{\mathbf{f}}^{(\mu)}\sst}_{\dd',(1,\dd''),N}=&X(Q_{\mathbf{f}})^{\zeta_{\mathbf{f}}^{(\mu)}\sst}_{\dd',(1,\dd'')}\times_{G_{\dd',\dd''}}U_N\\ Y^{\zeta}_{\mathbf{f},\dd,N}=&Y^{\zeta}_{\mathbf{f},\dd}\times_{G_{\dd}}U_N. \end{align*} Taking the limit in cohomology as $N\mapsto \infty$ of the composition of morphisms \begin{align*} &\pi_{\mathbf{f},\dd,N,*}^{\zeta}s_{\dd',(\mathbf{f},\dd''),N,*}\phim{\mathfrak{Tr}(W)^{\zeta_{\mathbf{f}}^{(\mu)}}_{\dd',(1,\dd''),N}}\mathbb{Q}_{X(Q_{\mathbf{f}})^{\zeta_{\mathbf{f}}^{(\mu)}\sst}_{\dd',(1,\dd''),N}} \xrightarrow{\cong}\\ &\pi_{\mathbf{f},\dd,N,*}^{\zeta}\phim{\mathfrak{Tr}(W)^{\zeta_{\mathbf{f}}^{(\mu)}}_{(1,\dd),N}}s_{\dd',(\mathbf{f},\dd''),N,*}\mathbb{Q}_{X(Q_{\mathbf{f}})^{\zeta_{\mathbf{f}}^{(\mu)}\sst}_{\dd',(1,\dd''),N}}\rightarrow \pi_{\mathbf{f},\dd,N,*}^{\zeta}\phim{\mathfrak{Tr}(W)^{\zeta_{\mathbf{f}}^{(\mu)}}_{(1,\dd),N}}\mathbb{Q}_{Y^{\zeta}_{\mathbf{f},\dd,N}}\otimes \mathbb{L}^{(\dd',\dd'')/2} \end{align*} we obtain the composition $\beta^{\zeta}_{\dd',(\mathbf{f},\dd'')}\colon$ \begin{align*} &\oplus_*\Ho\left(\tau_{\dd',(\mathbf{f},\dd''),*}\phim{\mathfrak{Tr}(W)^{\zeta_{\mathbf{f}}^{(\mu)}}_{\dd',(1,\dd'')}}\mathcal{IC}_{X(Q_{\mathbf{f}})_{\dd',(1,\dd'')}^{\zeta_{\mathbf{f}}^{(\mu)}\sst}/G_{\dd',\dd''}}(\mathbb{Q})\right)\cong\\ &\Ho\left(\pi^{\zeta}_{\mathbf{f},\dd,*}s_{\dd',(\mathbf{f},\dd''),*}\phim{\mathfrak{Tr}(W)^{\zeta_{\mathbf{f}}^{(\mu)}}_{\dd',(1,\dd'')}}\mathcal{IC}_{X(Q_{\mathbf{f}})_{\dd',(1,\dd'')}^{\zeta_{\mathbf{f}}^{(\mu)}\sst}/G_{\dd',\dd''}}(\mathbb{Q})\right)\rightarrow\\ &\Ho\left(\pi^{\zeta}_{\mathbf{f},\dd,*}\phim{\mathcal{T}r(W)^{\zeta}_{\mathbf{f},\dd}}\mathcal{IC}_{\mathcal{M}^{\zeta}_{\mathbf{f},\dd}}(\mathbb{Q})\right)\otimes \mathbb{L}^{(\dd',\dd'')/2}. 
\end{align*} We define \[ \Ho(\mos_{W,\mathbf{f},\mu}^{\zeta}):= \left(\beta^{\zeta}_{\dd',(\mathbf{f},\dd'')}\otimes \mathbb{L}^{-(\dd',\dd'')/2+\mathbf{f}\cdot \dd''/2}\right)\circ \left(\alpha^{\zeta}_{\dd',(\mathbf{f},\dd'')}\otimes \mathbb{L}^{\langle \dd'',\dd'\rangle/2+\mathbf{f}\cdot \dd''/2}\right). \] \smallbreak As in the case of the cohomological Hall algebra, there is an analogous construction in $\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathbb{N}^{Q_0}))$ making \[ \HO(\FCoha_{W,\mathbf{f},\mu}^{\zeta}):=\Ho\left((\dim\circ \pi^{\zeta}_{\mathbf{f},\mu})_*\mathfrak{IC}_{W,\mathbf{f},\mu}^{\zeta}\right) \] into a module over $\HO(\Coha_{W,\mu}^{\zeta})$. In brief, we define a map \begin{align*} \overline{\alpha}^{\zeta}_{\dd',(\mathbf{f},\dd'')}\colon &+_*\left( \Ho\left((\dim\circ p^{\zeta}_{\dd'})_*\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd'}}\mathcal{IC}_{\mathfrak{M}_{\dd'}^{\zeta\sst}}(\mathbb{Q})\right)\boxtimes \Ho\left((\dim\circ \pi^{\zeta}_{\mathbf{f},\dd''})_*\phim{\mathcal{T}r(W)^{\zeta}_{\mathbf{f},\dd''}}\mathcal{IC}_{\mathcal{M}_{\mathbf{f},\dd''}^{\zeta\sst}}(\mathbb{Q})\right)\right)\rightarrow\\ &+_*\left(\Ho\left((\dim\circ \tau_{\dd',(\mathbf{f},\dd'')})_*\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd',(1,\dd'')}}\mathcal{IC}_{X(Q_{\mathbf{f}})^{\zeta_{\mathbf{f}}^{(\mu)}\sst}_{\dd',(1,\dd'')}/G_{\dd',\dd''}}(\mathbb{Q})\right)\right)\otimes\mathbb{L}^{-(\dd'',\dd')/2} \end{align*} in the same way as $\alpha^{\zeta}_{\dd',(\mathbf{f},\dd'')}$, and a map \begin{align*} \overline{\beta}^{\zeta}_{\dd',(\mathbf{f},\dd'')}\colon &+_*\Ho\left((\dim\circ \tau_{\dd',(\mathbf{f},\dd'')})_*\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd',(1,\dd'')}}\mathcal{IC}_{X(Q_{\mathbf{f}})_{\dd',(1,\dd'')}^{\zeta_{\mathbf{f}}^{(\mu)}\sst}/G_{\dd',\dd''}}(\mathbb{Q})\right)\rightarrow\\ &\Ho\left((\dim\circ \pi^{\zeta}_{\mathbf{f},\dd})_*\phim{\mathfrak{Tr}(W)^{\zeta}_{\mathbf{f},\dd}}\mathcal{IC}_{Y^{\zeta}_{\mathbf{f},\dd}/G_{\dd}}(\mathbb{Q})\right)\otimes \mathbb{L}^{(\dd',\dd'')/2} \end{align*} in the same 
way as $\beta^{\zeta}_{\dd',(\mathbf{f},\dd'')}$, and composing appropriate twists of these maps, we obtain the desired map \[ \HO(\mos_{W,\mathbf{f},\mu}^{\zeta})\colon \HO(\Coha_{W,\mu}^{\zeta})\boxtimes_+^{\tw}\HO(\FCoha_{W,\mathbf{f},\mu}^{\zeta})\rightarrow \HO(\FCoha_{W,\mathbf{f},\mu}^{\zeta}). \] Since $\dim\circ \pi^{\zeta}_{\mathbf{f},\mu}$ factors through the proper map $\pi^{\zeta}_{\mathbf{f},\mu}$, we obtain a perverse filtration $\Pf$ on $\HO(\FCoha_{W,\mathbf{f},\mu}^{\zeta})$ via the construction at the start of Section \ref{PervSec}, and the following companion to Proposition \ref{PCoha}. \begin{proposition} \label{PCom} The module structure $\HO(\mos_{W,\mathbf{f},\mu}^{\zeta})$ induces a module structure \[ \Gr_{\Pf}(\HO(\mos_{W,\mathbf{f},\mu}^{\zeta}))\colon \Gr_{\Pf}(\HO(\Coha_{W,\mu}^{\zeta}))\boxtimes_+^{\tw}\Gr_{\Pf}(\HO(\FCoha^\zeta_{W,\mathbf{f},\mu}))\rightarrow \Gr_{\Pf}(\HO(\FCoha^\zeta_{W,\mathbf{f},\mu})) \] and there is an isomorphism of $\Gr_{\Pf}(\HO(\Coha_{W,\mu}^\zeta))$-modules \[ \Gr_{\Pf}(\HO(\FCoha^{\zeta}_{W,\mathbf{f},\mu}))\cong\Ho(\dim_*\Ho(\FCoha^{\zeta}_{W,\mathbf{f},\mu})). \] \end{proposition} \subsection{Proof of Theorem \ref{repthm}} \label{rep_thm_proof} Recall that we define $\zeta_{\mathbf{f}}^{[\mu]}$ to be the stability condition in $\mathbb{H}_+^{(Q_{\mathbf{f}})_0}$ obtained by setting the slope of $S_{\infty}$, the simple module concentrated at vertex $\infty$, to be $\mu$, and setting $\lvert Z(S_{\infty})\rvert=1$. A $\mathbb{C}Q_{\mathbf{f}}$-module of dimension vector $(1,\dd)$, where $\dd$ has slope $\mu$, is $\zeta_{\mathbf{f}}^{[\mu]}$-semistable if and only if the underlying $\mathbb{C}Q$-module is $\zeta$-semistable. 
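To see the last claim, note that since the slope of $S_{\infty}$ is set to $\mu$ and $\dd$ has slope $\mu$, the dimension vector $(1,\dd)$ again has slope $\mu$. A $\mathbb{C}Q_{\mathbf{f}}$-submodule of such a module of dimension vector $(0,\ee)$ is the same thing as a $\mathbb{C}Q$-submodule of the underlying $\mathbb{C}Q$-module (here we use that the arrows of $Q_{\mathbf{f}}$ that are not arrows of $Q$ have source $\infty$), and its slope is unchanged, while a submodule of dimension vector $(1,\ee)$ with $\ee\neq 0$ has slope at most $\mu$ if and only if $\ee$ does, since the argument of a sum of two nonzero complex numbers lies between the arguments of the summands. Comparing the two semistability conditions gives the stated equivalence.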
A $\zeta_{\mathbf{f}}^{[\mu]}$-semistable $\mathbb{C}Q_{\mathbf{f}}$-module $N$ of dimension vector $(1,\dd)$ admits a unique Harder-Narasimhan filtration with respect to the stability condition $\zeta_{\mathbf{f}}^{(\mu)}$, which is either the trivial filtration of $N$, or is of the form $0\subset N'\subset N$, where $N'$ is a $\zeta_{\mathbf{f}}^{(\mu)}$-stable $\mathbb{C}Q_{\mathbf{f}}$-module of dimension vector $(1,\dd')$ for $\dd'<\dd$, and $N/N'$ has dimension vector zero, when restricted to the framing vertex $\infty$, and is $\zeta$-semistable when considered as a $\mathbb{C}Q$-module. Geometrically, this is expressed in the following proposition, which is a special case of \cite[Prop.3.4]{Reineke_HN}. \begin{proposition} There is a stratification \[ X^{\zeta^{[\mu]}_{\mathbf{f}}\sst}_{\mathbf{f},\dd}/G_{\dd}=\mathcal{M}_{\mathbf{f},\dd}^{\zeta}\coprod \left(\coprod_{\dd'+\dd''=\dd}X(Q_{\mathbf{f}})^{(\zeta_{\mathbf{f}}^{(\mu)},\zeta)\sst}_{(1,\dd'),\dd''}/G_{\dd',\dd''}\right) \] with $j\colon \mathcal{M}^{\zeta}_{\mathbf{f},\dd}\hookrightarrow X^{\zeta^{[\mu]}_{\mathbf{f}}\sst}_{\mathbf{f},\dd}/G_{\dd}$ the inclusion of the open stratum, if the domain of $j$ is nonempty. \end{proposition} Denote by $\hat{p}^{\zeta}_{\mathbf{f},\dd}\colon X_{\mathbf{f},\dd}^{\zeta^{[\mu]}_{\mathbf{f}}\sst}/G_{\dd}\rightarrow\mathcal{M}^{\zeta\sst}_{\dd}$ the composition of $p^{\zeta}_{\dd}$ with the projection of the affine fibration $X_{\mathbf{f},\dd}^{\zeta^{[\mu]}_{\mathbf{f}}\sst}/G_{\dd}\rightarrow \mathfrak{M}^{\zeta\sst}_{\dd}$. \begin{lemma} \label{splitinj} The map $\Ho\left(\hat{p}_{\mathbf{f},\dd,!}^{\zeta}j_!\mathbb{Q}_{\mathcal{M}^{\zeta}_{\mathbf{f},\dd}}\right)\rightarrow \Ho\left(\hat{p}^{\zeta}_{\mathbf{f},\dd,!}\mathbb{Q}_{X^{\zeta^{[\mu]}_{\mathbf{f}}\sst}_{\mathbf{f},\dd}/G_{\dd}}\right)$ is a split injection in each cohomological degree. 
\end{lemma} \begin{proof} We complete the partial order on $\HN^{\geq}_{(1,\dd)}$ to a total order $<$ as in the proof of Theorem \ref{gcDT}. We define a total order on dimension vectors $\dd'\leq \dd$ by setting $\dd'\leq_{\tot} \ee'$ if $((1,\dd'),\dd-\dd')<((1,\ee'),\dd-\ee')$, or if $\ee'=\dd$. We set $\dd''=\dd-\dd'$. For each $\dd'\leq_{\tot} \dd$, let \[ i_{\dd'}\colon X(Q_{\mathbf{f}})^{(\zeta_{\mathbf{f}}^{(\mu)},\zeta)\sst}_{(1,\dd'),\dd''}/G_{\dd',\dd''}\rightarrow X_{\mathbf{f},\dd}^{\zeta^{[\mu]}_{\mathbf{f}}\sst}/G_{\dd} \] be the embedding. We define the embeddings $i_{<_{\tot}\dd'}$ and $i_{\leq_{\tot} \dd'}$ similarly (see (\ref{id}),(\ref{idd}),(\ref{iddd})). Then as in the proof of Theorem \ref{gcDT}, for each $\dd'\leq_{\tot} \dd$ there is a distinguished triangle \begin{align} \label{trin} &\Ho\left(\hat{p}_{\mathbf{f},\dd,!}^{\zeta}i_{\dd',!}\mathbb{Q}_{X(Q_{\mathbf{f}})^{(\zeta_{\mathbf{f}}^{(\mu)},\zeta)\sst}_{(1,\dd'),\dd-\dd'}/G_{\dd',\dd-\dd'}}\right)\rightarrow \Ho\left(\hat{p}_{\mathbf{f},\dd,!}^{\zeta}i_{\leq_{\tot} \dd',!}\mathbb{Q}_{\bigcup_{\ee\leq_{\tot} \dd'}X(Q_{\mathbf{f}})^{(\zeta_{\mathbf{f}}^{(\mu)},\zeta)\sst}_{(1,\ee),\dd-\ee}/G_{\ee,\dd-\ee}}\right)\\ \nonumber \rightarrow &\Ho\left(\hat{p}_{\mathbf{f},\dd,!}^{\zeta}i_{<_{\tot}\dd',!}\mathbb{Q}_{\bigcup_{\ee<_{\tot}\dd'}X(Q_{\mathbf{f}})^{(\zeta_{\mathbf{f}}^{(\mu)},\zeta)\sst}_{(1,\ee),\dd-\ee}/G_{\ee,\dd-\ee}}\right)\xrightarrow{\varrho_{\dd'}}. \end{align} 
Consider the following commutative diagram \[ \xymatrix{ \mathcal{M}^{\zeta}_{\mathbf{f},\dd'}\times\mathfrak{M}^{\zeta\sst}_{\dd''}\ar[dr]_{\pi^{\zeta}_{\mathbf{f},\dd'}\times p^{\zeta}_{\dd''}}&\ar[l]_-{q}X(Q_{\mathbf{f}})^{(\zeta_{\mathbf{f}}^{(\mu)},\zeta)\sst}_{(1,\dd'),\dd''}/G_{\dd',\dd''}\ar[d]^{\Psi}\ar[r]^-{i_{\dd'}}&X_{\mathbf{f},\dd}^{\zeta^{[\mu]}_{\mathbf{f}}\sst}/G_{\dd}\ar[d]^{\hat{p}^{\zeta}_{\mathbf{f},\dd}}\\ &\mathcal{M}^{\zeta\sst}_{\dd'}\times \mathcal{M}^{\zeta\sst}_{\dd''}\ar[r]^-{\oplus}&\mathcal{M}^{\zeta\sst}_{\dd} } \] where $q$ is the natural map. Then there are isomorphisms \begin{align} \label{purp} \Ho\left(\hat{p}_{\mathbf{f},\dd,!}^{\zeta}i_{\dd',!}\mathbb{Q}_{X(Q_{\mathbf{f}})^{(\zeta_{\mathbf{f}}^{(\mu)},\zeta)\sst}_{(1,\dd'),\dd''}/G_{\dd',\dd''}}\right)\cong &\oplus_*\Ho\left(\Psi_!\mathbb{Q}_{X(Q_{\mathbf{f}})^{(\zeta_{\mathbf{f}}^{(\mu)},\zeta)\sst}_{(1,\dd'),\dd''}/G_{\dd',\dd''}}\right) \\ \nonumber \cong &\oplus_*\Ho\left((\pi^{\zeta}_{\mathbf{f},\dd'}\times p^{\zeta}_{\dd''})_!\mathbb{Q}_{\mathcal{M}^{\zeta}_{\mathbf{f},\dd'}\times\mathfrak{M}^{\zeta\sst}_{\dd''}}\right)\otimes\mathbb{L}^{-(\dd'',\dd')} \\ \nonumber \cong &\oplus_*\left(\pi^{\zeta}_{\mathbf{f},\dd',!}\mathbb{Q}_{\mathcal{M}^{\zeta}_{\mathbf{f},\dd'}}\boxtimes\Ho\left(p_{\dd'',!}^{\zeta}\mathbb{Q}_{\mathfrak{M}_{\dd''}^{\zeta\sst}}\right)\otimes\mathbb{L}^{-(\dd'',\dd')}\right) \end{align} and the final mixed Hodge module is pure, since $\pi_{\mathbf{f},\dd'}^{\zeta}$ and $\oplus$ are proper, and $\Ho\left(p_{\dd'',!}^{\zeta}\mathbb{Q}_{\mathfrak{M}_{\dd''}^{\zeta\sst}}\right)$ is pure by Lemma \ref{appLem}. 
By induction on $\dd'$ it follows that all terms in (\ref{trin}) are pure, so that the connecting map $\varrho_{\dd'}$ is zero for all $\dd'$; in particular $\varrho_{\dd}=0$, and so \[ \Ho^n\left(\hat{p}_{\mathbf{f},\dd,!}^{\zeta}j_!\mathbb{Q}_{\mathcal{M}^{\zeta}_{\mathbf{f},\dd}}\right)\rightarrow \Ho^n\left(\hat{p}^{\zeta}_{\mathbf{f},\dd,!}\mathbb{Q}_{X^{\zeta^{[\mu]}_{\mathbf{f}}\sst}_{\mathbf{f},\dd}/G_{\dd}}\right) \] is an injection for all $n$. This injection splits because it belongs to the semisimple category of pure mixed Hodge modules on $\mathcal{M}_{\dd}^{\zeta\sst}$. \end{proof} Since the restriction of $\mathcal{T}r(W)_{\mu}^{\zeta}$ to $\mathcal{M}_0^{\zeta\sst}$ is zero, there are natural isomorphisms \begin{align*} \Ho\left(\pi_{\mathbf{f},\mu,*}^{\zeta}\mathfrak{IC}_{W,\mathbf{f},\mu}^\zeta\right)\Big|_{\mathcal{M}^{\zeta\sst}_0}&\cong\Ho\left(\pi_{\mathbf{f},\mu,*}^{\zeta}\mathcal{IC}_{\mathcal{M}_{\mathbf{f},0}^{\zeta}}(\mathbb{Q})\right) \\&\cong\mathcal{IC}_{\mathcal{M}_0^{\zeta\sst}}(\mathbb{Q}) \\&\cong\mathbb{Q}_{\mathcal{M}_0}. \end{align*} Denote by \[ \kappa_{\mathbf{f}}\colon \mathbb{Q}_{\mathcal{M}_0}\rightarrow \Ho\left(\pi_{\mathbf{f},\mu,*}^{\zeta}\mathfrak{IC}_{W,\mathbf{f},\mu}^\zeta\right) \] the natural inclusion. \begin{theorem} \label{cyclesprop} Define $\Theta_W$ to be the following composition of maps: \begin{align*} &\Ho\left(p^{\zeta}_{\mu,*}\mathfrak{IC}_{W,\mu}^{\zeta}\right)\xrightarrow{\cong} \Ho\left(p^{\zeta}_{\mu,*}\mathfrak{IC}_{W,\mu}^{\zeta}\right)\boxtimes_{\oplus}^{\tw} \mathbb{Q}_{\mathcal{M}_0}\xrightarrow{\id\boxtimes^{\tw}_{\oplus} \kappa_{\mathbf{f}}}\\ &\bigoplus_{\dd',\dd''\in\Lambda^{\zeta}_{\mu}}\Ho\left(p^{\zeta}_{\dd',*}\mathfrak{IC}_{W,\dd'}^{\zeta}\right)\boxtimes_{\oplus}^{\tw} \Ho\left(\pi^{\zeta}_{\mathbf{f},\dd'',*}\mathfrak{IC}_{W,\mathbf{f},\dd''}^{\zeta}\right)\xrightarrow{\Ho(\mos_{W,\mathbf{f},\mu}^{\zeta})}\Ho\left(\pi^{\zeta}_{\mathbf{f},\mu,*}\mathfrak{IC}_{W,\mathbf{f},\mu}^{\zeta}\right). 
\end{align*} Then $\Theta_W$ is a map of $\Ho(\Coha^{\zeta}_{W,\mu})$-modules $\Ho(\Coha^{\zeta}_{W,\mu})\rightarrow \Ho(\FCoha_{W,\mathbf{f},\mu}^{\zeta})$ that is a split surjection in $\MMHM(\mathcal{M}_{\mu}^{\zeta\sst})$ in each cohomological degree. \end{theorem} \begin{proof} Putting $W=0$ and fixing $\dd\in\Lambda^{\zeta}_{\mu}$, the Verdier dual of the map \[ \Theta_0\colon \Ho\left(p^{\zeta}_{\dd,*}\mathfrak{IC}^{\zeta}_{0,\dd}\right)\rightarrow\Ho\left(\pi^{\zeta}_{\mathbf{f},\dd,*}\mathfrak{IC}^{\zeta}_{0,\mathbf{f},\dd}\right) \] is a Tate twist of the map of Lemma \ref{splitinj}, via the isomorphism \[ \Ho\left(\hat{p}^{\zeta}_{\mathbf{f},\dd,!}\mathbb{Q}_{X_{\mathbf{f},\dd}^{\zeta^{[\mu]}_{\mathbf{f}}\sst}/G_{\dd}}\right)\cong\Ho\left(p^{\zeta}_{\dd,!}\mathfrak{IC}^{\zeta}_{0,\dd}\right)\otimes\mathbb{L}^{\mathbf{f}\cdot \dd} \] provided by homotopy invariance. In particular, $\Ho^n\left(p^{\zeta}_{\dd,*}\mathfrak{IC}^{\zeta}_{0,\dd}\right)\rightarrow\Ho^n\left(\pi^{\zeta}_{\mathbf{f},\dd,*}\mathfrak{IC}^{\zeta}_{0,\mathbf{f},\dd}\right)$ is a split surjective map for every $n$ by Lemma \ref{splitinj}. 
Applying $\phim{\mathcal{T}r(W)_{\dd}^{\zeta}}$ to this map, we deduce that the top map in the commutative diagram \[ \xymatrix{ \phim{\mathcal{T}r(W)_{\dd}^{\zeta}}\Ho\left(\hat{p}_{\mathbf{f},\dd,*}^{\zeta}\mathcal{IC}_{X_{\mathbf{f},\dd}^{\zeta^{[\mu]}_{\mathbf{f}}\sst}/G_{\dd}}(\mathbb{Q})\right)\ar[d]\ar[rr]^-{}&&\phim{\mathcal{T}r(W)_{\dd}^{\zeta}}\Ho\left(\hat{p}_{\mathbf{f},\dd,*}^{\zeta}j_*\mathcal{IC}_{\mathcal{M}^{\zeta}_{\mathbf{f},\dd}}(\mathbb{Q})\right)\ar[d]\\ \Ho\left(\hat{p}^{\zeta}_{\mathbf{f},\dd,*}\phim{\mathfrak{Tr}(W)_{(1,\dd)}^{\zeta^{[\mu]}_{\mathbf{f}}}}\mathcal{IC}_{X_{\mathbf{f},\dd}^{\zeta^{[\mu]}_{\mathbf{f}}\sst}/G_{\dd}}(\mathbb{Q})\right)\ar[rr]^-{} &&\Ho\left(\hat{p}_{\mathbf{f},\dd,*}^{\zeta} j_*\phim{\mathcal{T}r(W)_{\mathbf{f},\dd}^{\zeta}}\mathcal{IC}_{\mathcal{M}^{\zeta}_{\mathbf{f},\dd}}(\mathbb{Q})\right) } \] is split surjective in each cohomological degree. Factoring the map as $\hat{p}^{\zeta}_{\mathbf{f},\dd}=p_{\dd}^{\zeta}\circ \pi$, where $\pi \colon X_{\mathbf{f},\dd}^{\zeta^{[\mu]}_{\mathbf{f}}\sst}/G_{\dd}\rightarrow\mathfrak{M}_{\dd}^{\zeta\sst}$ is the affine fibration induced by the inclusion of quivers $Q\rightarrow Q_{\mathbf{f}}$, we obtain from Propositions \ref{basicfacts} and \ref{cvs} that the leftmost vertical arrow is an isomorphism. On the other hand, $\hat{p}^{\zeta}_{\mathbf{f},\dd}\circ j=\pi_{\mathbf{f},\dd}^{\zeta}$ is projective, so that the rightmost vertical map is an isomorphism too. It follows that $\Theta_W$, which is given by the bottom map after identifying \[ \Ho\left(\hat{p}^{\zeta}_{\mathbf{f},\dd,*}\phim{\mathfrak{Tr}(W)_{(1,\dd)}^{\zeta_{\mathbf{f}}^{[\mu]}}}\mathcal{IC}_{X_{\mathbf{f},\dd}^{\zeta^{[\mu]}_{\mathbf{f}}\sst}/G_{\dd}}(\mathbb{Q})\right)\cong\Ho\left(p^{\zeta}_{\dd,*}\phim{\mathfrak{Tr}(W)_{\dd}^{\zeta}}\mathcal{IC}_{X^{\zeta\sst}_{\dd}/G_{\dd}}(\mathbb{Q})\right) \] via homotopy invariance, is a split surjection in each cohomological degree. 
The proof that $\Theta_W$ is $\Ho(\Coha^{\zeta}_{W,\mu})$-linear is as in \cite[Prop.3.4]{Fran13}. Alternatively, linearity follows from the linearity part of Proposition \ref{canmaps} below, and the realisation of $\Ho\left(\Coha^{\zeta}_{W,\mu}\right)$ as the limit of $\Ho\left(\mathcal{F}^{\zeta}_{W,\mathbf{f},\mu}\right)$ as $\mathbf{f}\mapsto\infty$. \end{proof} \begin{corollary} \label{cyclescor} The map \begin{align*} \HO(\Coha_{W,\mu}^{\zeta})\rightarrow &\HO(\mathcal{F}_{W,\mathbf{f},\mu}^{\zeta})\\ x\mapsto & \HO(\cdot^{\zeta}_{W,\mathbf{f},\mu})(x,z_0), \end{align*} where $z_0=1\in\HO\left(\mathcal{M}_{\mathbf{f},0}^{\zeta},\phim{\mathcal{T}r(W)_{\mathbf{f},0}^{\zeta}}\mathcal{IC}_{\mathcal{M}_{\mathbf{f},0}^{\zeta}}(\mathbb{Q})\right)\cong\mathbb{Q}$, is a surjective map of $\Coha_{W,\mu}^{\zeta}$-modules. \end{corollary} \begin{proof} We deduce from Proposition \ref{PCom} and Theorem \ref{cyclesprop} that the map is a split surjection after passing to the associated graded with respect to the perverse filtration, and the result follows. \end{proof} To complete our account of the representation theory of $\Ho(\Coha_{W,\mu}^{\zeta})$, we introduce natural morphisms $\Ho(\mathcal{F}_{W,\mathbf{f},\mu}^{\zeta})\rightarrow \Ho(\mathcal{F}_{W,\mathbf{f}',\mu}^{\zeta})$ for $\mathbf{f}'<\mathbf{f}$. Consider the inclusion of quivers $Q_{\mathbf{f}'}\subset Q_{\mathbf{f}}$. We define a functor from the category of $\mathbb{C}Q_{\mathbf{f}}$-modules to the category of $\mathbb{C}Q_{\mathbf{f}'}$-modules by precomposing with this inclusion, defining the map \[ \Phi\colon X(Q_{\mathbf{f}})^{\zeta_{\mathbf{f}}^{(\mu)}\sst}_{(1,\dd)}/G_{\dd}\rightarrow X(Q_{\mathbf{f}'})^{\zeta^{[\mu]}_{\mathbf{f}'}\sst}_{(1,\dd)}/G_{\dd}. 
\] We define $N_{\ee}/G_{\dd}:=\Phi^{-1}\left(X(Q_{\mathbf{f}'})^{(\zeta_{\mathbf{f}'}^{(\mu)},\zeta)\sst}_{(1,\ee),\dd-\ee}/G_{\ee,\dd-\ee}\right)$, where we again use the Harder--Narasimhan stratification \[ Y_{\mathbf{f}',\dd}^{\zeta^{[\mu]}_{\mathbf{f}'}\sst}/G_{\dd}=\coprod_{\ee\leq_{\tot} \dd}X(Q_{\mathbf{f}'})^{(\zeta_{\mathbf{f}'}^{(\mu)},\zeta)\sst}_{(1,\ee),\dd-\ee}/G_{\ee,\dd-\ee}. \] This induces a stratification $Y^{\zeta_{\mathbf{f}}^{(\mu)}\sst}_{\mathbf{f},\dd}/G_{\dd}=\coprod_{\ee\leq_{\tot} \dd}N_{\ee}/G_{\dd}$ with open stratum $j\colon N_{\dd}/G_{\dd}\hookrightarrow \mathcal{M}_{\mathbf{f},\dd}^{\zeta}$. The map $N_{\dd}/G_{\dd}\rightarrow Y_{\mathbf{f}',\dd}^{\zeta_{\mathbf{f}'}^{(\mu)}\sst}/G_{\dd}$ is an affine fibration, with section given by extending a $\mathbb{C}Q_{\mathbf{f}'}$-module to a $\mathbb{C}Q_{\mathbf{f}}$-module by setting the action of all of the arrows of $Q_{\mathbf{f}}$ that are not arrows of $Q_{\mathbf{f}'}$ to be zero. We define $\Xi_{\mathbf{f},\mathbf{f}'}$ to be the following composition of morphisms. \begin{align*} &\Ho\left(\pi^{\zeta}_{\mathbf{f},\dd,*}\mathfrak{IC}_{W,\mathbf{f},\dd}^{\zeta}\right)\rightarrow \Ho\left(\pi^{\zeta}_{\mathbf{f},\dd,*}j_*\phim{\mathcal{T}r(W)_{\mathbf{f},\dd}^{\zeta}|_{N_{\dd}/G_{\dd}}}\mathcal{IC}_{N_{\dd}/G_{\dd}}(\mathbb{Q})\right)\otimes\mathbb{L}^{\mathbf{f}\cdot \dd/2}\xrightarrow{\cong}\\&\Ho\left(\pi^{\zeta}_{\mathbf{f}',\dd,*}\phim{\mathcal{T}r(W)_{\mathbf{f}',\dd}^{\zeta}}\mathcal{IC}_{\mathcal{M}^{\zeta}_{\mathbf{f}',\dd}}(\mathbb{Q})\right)\otimes \mathbb{L}^{\mathbf{f}\cdot \dd/2-(\mathbf{f}\cdot \dd-\mathbf{f}'\cdot \dd)/2}\xrightarrow{=}\Ho\left(\pi_{\mathbf{f}',\dd,*}^{\zeta}\mathfrak{IC}_{W,\mathbf{f}',\dd}^{\zeta}\right). 
\end{align*} \begin{proposition} \label{canmaps} The morphism $\Xi_{\mathbf{f},\mathbf{f}'}\colon \Ho\left(\pi^{\zeta}_{\mathbf{f},\dd,*}\mathfrak{IC}_{W,\mathbf{f},\dd}^{\zeta}\right)\rightarrow \Ho\left(\pi_{\mathbf{f}',\dd,*}^{\zeta}\mathfrak{IC}_{W,\mathbf{f}',\dd}^{\zeta}\right)$ is a morphism of $\Ho(\Coha_{W,\mu}^{\zeta})$-modules, which is a split surjection in each cohomological degree. \end{proposition} \begin{proof} We first show $\Ho(\Coha_{W,\mu}^{\zeta})$-linearity. The inclusion $X(Q_{\mathbf{f}'})^{\zeta_{\mathbf{f}'}^{(\mu)}\sst}_{\dd',(1,\dd'')}/G_{\dd',\dd''}\rightarrow X(Q_{\mathbf{f}})^{\zeta_{\mathbf{f}}^{(\mu)}\sst}_{\dd',(1,\dd'')}/G_{\dd',\dd''}$ defined by extending by zero induces a map \begin{align*} \Omega_0\colon &\Ho\left(\tau_{\dd',(\mathbf{f},\dd''),*}\phim{\mathfrak{Tr}(W)^{\zeta_{\mathbf{f}}^{(\mu)}}_{\dd',(1,\dd'')}}\mathcal{IC}_{X(Q_{\mathbf{f}})^{\zeta_{\mathbf{f}}^{(\mu)}\sst}_{\dd',(1,\dd'')}/G_{\dd',\dd''}}(\mathbb{Q})\right)\otimes\mathbb{L}^{-(\dd'',\dd')/2}\rightarrow\\ &\Ho\left(\tau_{\dd',(\mathbf{f}',\dd''),*}\phim{\mathfrak{Tr}(W)^{\zeta_{\mathbf{f}'}^{(\mu)}}_{\dd',(1,\dd'')}}\mathcal{IC}_{X(Q_{\mathbf{f}'})^{\zeta_{\mathbf{f}'}^{(\mu)}\sst}_{\dd',(1,\dd'')}/G_{\dd',\dd''}}(\mathbb{Q})\right)\otimes\mathbb{L}^{-(\dd'',\dd')/2-(\mathbf{f}-\mathbf{f}')\cdot \dd''/2} \end{align*} where $\tau_{\dd',(\mathbf{f},\dd'')}$ and $\tau_{\dd',(\mathbf{f}',\dd'')}$ are as in (\ref{modDiag2}). We first check the relation \begin{equation} \label{Co1} \Omega_0\alpha_{\dd',(\mathbf{f},\dd'')}=\alpha_{\dd',(\mathbf{f}',\dd'')}(\id\boxtimes\Xi_{\mathbf{f},\mathbf{f}'}). 
\end{equation} Consider the diagram \[ \xymatrix{ X(Q_{\mathbf{f}})^{(\zeta,\zeta_{\mathbf{f}}^{(\mu)})\sst}_{\dd',(1,\dd'')}/G_{\dd'}\times G_{\dd''} \ar[r]^-{r_1}& \left( X(Q)_{\dd'}^{\zeta\sst}\times Y_{\mathbf{f},\dd''}^{\zeta}\right)/G_{\dd'}\times G_{\dd''}\\ X(Q_{\mathbf{f}'})^{(\zeta,\zeta_{\mathbf{f}'}^{(\mu)})\sst}_{\dd',(1,\dd'')}/G_{\dd'}\times G_{\dd''}\ar[u] \ar[r]^-{r'_1}& \left( X(Q)_{\dd'}^{\zeta\sst}\times Y_{\mathbf{f}',\dd''}^{\zeta}\right)/G_{\dd'}\times G_{\dd''}\ar[u] } \] where $r'_1$ is defined in the same way as $r_1$ in (\ref{modDiag}), and the vertical maps are the inclusions. This diagram is Cartesian, as are the similarly defined diagrams \[ \xymatrix{ X(Q_{\mathbf{f}})^{(\zeta,\zeta_{\mathbf{f}}^{(\mu)})\sst}_{\dd',(1,\dd'')}/G_{\dd'}\times G_{\dd''}\ar[r]^-{r_2}&X(Q_{\mathbf{f}})^{(\zeta,\zeta_{\mathbf{f}}^{(\mu)})\sst}_{\dd',(1,\dd'')}/G_{\dd',\dd''}\\ X(Q_{\mathbf{f}'})^{(\zeta,\zeta_{\mathbf{f}'}^{(\mu)})\sst}_{\dd',(1,\dd'')}/G_{\dd'}\times G_{\dd''}\ar[u]\ar[r]^-{r'_2}&X(Q_{\mathbf{f}'})^{(\zeta,\zeta_{\mathbf{f}'}^{(\mu)})\sst}_{\dd',(1,\dd'')}/G_{\dd',\dd''}\ar[u] } \] and \[ \xymatrix{ X(Q_{\mathbf{f}})^{(\zeta,\zeta_{\mathbf{f}}^{(\mu)})\sst}_{\dd',(1,\dd'')}/G_{\dd',\dd''}&\ar[l]_-{h}X(Q_{\mathbf{f}})_{\dd',(1,\dd'')}^{\zeta_{\mathbf{f}}^{(\mu)}\sst}/G_{\dd',\dd''}\\ X(Q_{\mathbf{f}'})^{(\zeta,\zeta_{\mathbf{f}'}^{(\mu)})\sst}_{\dd',(1,\dd'')}/G_{\dd',\dd''}\ar[u]&\ar[l]_-{h'}X(Q_{\mathbf{f}'})_{\dd',(1,\dd'')}^{\zeta_{\mathbf{f}'}^{(\mu)}\sst}/G_{\dd',\dd''}\ar[u] } \] with $h,h',r_2,r'_2$ defined as in (\ref{modDiag}). Breaking $\alpha_{\dd',(\mathbf{f},\dd'')}$ and $\alpha_{\dd',(\mathbf{f}',\dd'')}$ into their constituent morphisms, equation (\ref{Co1}) breaks into a number of simpler commutativity relations, which are all obtained via commutativity of pullbacks along the above commutative diagrams. 
Next consider the commutative diagram \begin{equation} \label{paral} \xymatrix{ X(Q_{\mathbf{f}})^{\zeta_{\mathbf{f}}^{(\mu)}\sst}_{\dd',(1,\dd'')}/G_{\dd',\dd''}\ar[rr]^-{\iota_{\dd',(\mathbf{f},\dd'')}}&&Y_{\mathbf{f},\dd}^{\zeta}/G_{\dd',\dd''}\ar[rr]^{r_{\dd',(\mathbf{f},\dd'')}}&&Y_{\mathbf{f},\dd}^{\zeta}/G_{\dd} \\ X(Q_{\mathbf{f}'})^{\zeta_{\mathbf{f}'}^{(\mu)}\sst}_{\dd',(1,\dd'')}/G_{\dd',\dd''}\ar[u]\ar[rr]^-{\iota_{\dd',(\mathbf{f}',\dd'')}}&&Y_{\mathbf{f}',\dd}^{\zeta}/G_{\dd',\dd''}\ar[rr]^{r_{\dd',(\mathbf{f}',\dd'')}}\ar[u]^{\iota_1}&&Y_{\mathbf{f}',\dd}^{\zeta}/G_{\dd}\ar[u]^{\iota_2} } \end{equation} where again the vertical arrows are the inclusions. As in (\ref{modDiag2}) we define \begin{align*} s_{\dd',(\mathbf{f},\dd'')}=&r_{\dd',(\mathbf{f},\dd'')}\iota_{\dd',(\mathbf{f},\dd'')}\\ s_{\dd',(\mathbf{f}',\dd'')}=&r_{\dd',(\mathbf{f}',\dd'')}\iota_{\dd',(\mathbf{f}',\dd'')}. \end{align*} Then the squares in (\ref{paral}) are cartesian squares of projective maps, with the leftmost square a transversal intersection. We define \begin{align*} X(Q_{\mathbf{f}})_{\dd',(1,\dd''),N}^{\zeta_{\mathbf{f}}^{(\mu)}\sst}=&X(Q_{\mathbf{f}})_{\dd',(1,\dd'')}^{\zeta_{\mathbf{f}}^{(\mu)}\sst}\times_{G_{\dd',\dd''}}U_N\\ Z^{\zeta}_{\mathbf{f},\dd',\dd'',N}=&X(Q_{\mathbf{f}})^{\zeta_{\mathbf{f}}^{(\mu)}\sst}_{(1,\dd)}\times_{G_{\dd',\dd''}}U_N\\ X(Q_{\mathbf{f}})^{\zeta_{\mathbf{f}}^{(\mu)}\sst}_{(1,\dd),N}=&X(Q_{\mathbf{f}})^{\zeta_{\mathbf{f}}^{(\mu)}\sst}_{(1,\dd)}\times_{G_{\dd}}U_N \end{align*} with $U_N=\prod_{i\in Q_0}\Hom^{\mathrm{surj}}(\mathbb{C}^N,\mathbb{C}^{\dd_i})$ as before. 
Then we define maps \begin{align*} \gamma_{\dd',(\mathbf{f},\dd''),N}\colon &\Ho\left(\pi^{\zeta}_{\mathbf{f},\dd,N,*}\phim{\mathcal{T}r(W)^{\zeta}_{\mathbf{f},\dd,N}}s_{\dd',(\mathbf{f},\dd''),N,*}\mathbb{Q}_{X(Q_{\mathbf{f}})_{\dd',(1,\dd''),N}^{\zeta_{\mathbf{f}}^{(\mu)}\sst}}\right)\rightarrow\\&\Ho\left(\pi^{\zeta}_{\mathbf{f},\dd,N,*}\phim{\mathcal{T}r(W)^{\zeta}_{\mathbf{f},\dd,N}}r_{\dd',(\mathbf{f},\dd''),N,*}\mathbb{Q}_{Z^{\zeta}_{\mathbf{f},\dd',\dd'',N}}\right)\otimes\mathbb{L}^{\sum_{a\in Q_1}\dd'_{s(a)}\dd''_{t(a)}} \end{align*} and \begin{align*} \delta_{\dd',(\mathbf{f},\dd''),N}\colon &\Ho\left(\pi^{\zeta}_{\mathbf{f},\dd,N,*}\phim{\mathfrak{Tr}(W)^{\zeta}_{\mathbf{f},\dd,N}}r_{\dd',(\mathbf{f},\dd''),N,*}\mathbb{Q}_{Z^{\zeta}_{\mathbf{f},\dd',\dd'',N}}\right)\otimes\mathbb{L}^{\sum_{a\in Q_1}\dd'_{s(a)}\dd''_{t(a)}}\rightarrow\\ &\Ho\left(\pi^{\zeta}_{\mathbf{f},\dd,N,*}\phim{\mathfrak{Tr}(W)^{\zeta}_{\mathbf{f},\dd,N}}\mathbb{Q}_{X(Q_{\mathbf{f}})^{\zeta^{(\mu)}_{\mathbf{f}}\sst}_{(1,\dd),N}}\right)\otimes\mathbb{L}^{-(\dd',\dd'')}. \end{align*} We define maps $\Omega_1$ and $\Omega_2$ in the same way as $\Omega_0$, by restricting along the inclusions $\iota_{1,N}$ and $\iota_{2,N}$ and passing to the limit in cohomology. From \cite[Cor.2.13]{Da13} we deduce the following identities \begin{align*} \Omega_1\gamma_{\dd',(\mathbf{f},\dd'')}=\gamma_{\dd',(\mathbf{f}',\dd'')}\Omega_0\\ \Omega_2\delta_{\dd',(\mathbf{f},\dd'')}=\delta_{\dd',(\mathbf{f}',\dd'')}\Omega_1. 
\end{align*} Now, from the equations \begin{align*} \Omega_2=&\Xi_{\mathbf{f},\mathbf{f}'}\\ \beta_{\dd',(\mathbf{f},\dd'')}=&\delta_{\dd',(\mathbf{f},\dd'')}\gamma_{\dd',(\mathbf{f},\dd'')}\\ \beta_{\dd',(\mathbf{f}',\dd'')}=&\delta_{\dd',(\mathbf{f}',\dd'')}\gamma_{\dd',(\mathbf{f}',\dd'')} \end{align*} we deduce \begin{align*} \Ho(\mos_{W,\mathbf{f}',\dd',\dd''}^{\zeta})\circ(\id\boxtimes \Xi_{\mathbf{f},\mathbf{f}'})=&\beta_{\dd',(\mathbf{f}',\dd'')}\alpha_{\dd',(\mathbf{f}',\dd'')}\circ(\id\boxtimes \Xi_{\mathbf{f},\mathbf{f}'}) \\=&\beta_{\dd',(\mathbf{f}',\dd'')}\Omega_0\alpha_{\dd',(\mathbf{f},\dd'')} \\=&\Xi_{\mathbf{f},\mathbf{f}'}\beta_{\dd',(\mathbf{f},\dd'')}\alpha_{\dd',(\mathbf{f},\dd'')} \\=&\Xi_{\mathbf{f},\mathbf{f}'}\Ho(\mos_{W,\mathbf{f},\dd',\dd''}^{\zeta}), \end{align*} finishing the proof of $\Ho(\Coha_{W,\mu}^{\zeta})$-linearity. The proof of surjectivity is the same as in Theorem \ref{cyclesprop}: one starts by showing that the morphism \[ \Ho\left(\pi_{\mathbf{f},\dd,*}^{\zeta}\mathcal{IC}_{\mathcal{M}_{\mathbf{f},\dd}^{\zeta}}(\mathbb{Q})\right)\rightarrow \Ho\left(\pi^{\zeta}_{\mathbf{f},\dd,*}j_*\mathcal{IC}_{N_{\dd}/G_{\dd}}(\mathbb{Q})\right) \] is a split surjection between pure mixed Hodge modules in each cohomological degree, and then uses that $N_{\dd}/G_{\dd}\rightarrow \mathcal{M}_{\mathbf{f}',\dd}^{\zeta}$ is an affine fibration to deduce that \[ \Ho\left(\pi_{\mathbf{f},\dd,*}^{\zeta}\mathcal{IC}_{\mathcal{M}_{\mathbf{f},\dd}^{\zeta}}(\mathbb{Q})\right)\rightarrow \Ho\left(\pi^{\zeta}_{\mathbf{f},\dd,*}j_*\mathcal{IC}_{\mathcal{M}^{\zeta}_{\mathbf{f}',\dd}}(\mathbb{Q})\right)\otimes\mathbb{L}^{-(\mathbf{f}-\mathbf{f}')\cdot \dd/2} \] is a split surjection in each cohomological degree, and then proceeds as in the proof of Theorem \ref{cyclesprop}. 
\end{proof} We define the morphism of $\HO(\mathcal{A}_{W,\mu}^{\zeta})$-modules $\HO(\Xi_{\mathbf{f},\mathbf{f}'})\colon \HO(\mathcal{F}_{W,\mathbf{f},\mu}^{\zeta})\rightarrow \HO(\mathcal{F}_{W,\mathbf{f}',\mu}^{\zeta})$ in the same way as $\Xi_{\mathbf{f},\mathbf{f}'}$. Then after passing to the associated graded with respect to the perverse filtration, $\HO(\Xi_{\mathbf{f},\mathbf{f}'})$ is a split surjection, as it is equal to $\Ho\left(\dim_*\Xi_{\mathbf{f},\mathbf{f}'}\right)$. It follows that $\HO(\Xi_{\mathbf{f},\mathbf{f}'})$ is surjective. Putting together Lemma \ref{appLem}, Theorem \ref{cyclesprop}, Corollary \ref{cyclescor} and Proposition \ref{canmaps}, we deduce Theorem \ref{repthm}. \section{The perverse associated graded Hopf algebra} \label{PBWsec} \subsection{Proof of Theorem \ref{qea}} Let $\zeta\in \mathbb{H}_+^{Q_0}$ be a stability condition, and let $\mu\in(-\infty,\infty)$ be a slope; we make no genericity assumption on $\zeta$ for now. Consider again the algebra from the end of Section \ref{relc} \[ \HO\left(\mathcal{A}_{W,\mu}^{\zeta}\right)=\Ho\left((\dim\circ p^{\zeta}_{\mu})_*\mathfrak{IC}_{W,\mu}^{\zeta}\right). 
\] By \cite[Thm.5.11]{Da13} this algebra carries a localised bialgebra structure, in the sense that for all decompositions $\dd=\dd'+\dd''$, with $\dd',\dd''\in\Lambda_{\mu}^{\zeta}$, there are maps \begin{align} \label{coprodint} \Ho\left((\dim\circ p^{\zeta}_{\dd})_*\mathfrak{IC}_{W,\dd}^{\zeta}\right)\rightarrow \left(\Ho\left((\dim\circ p^{\zeta}_{\dd'})_*\mathfrak{IC}_{W,\dd'}^{\zeta}\right)\boxtimes_+^{\tw} \Ho\left((\dim\circ p^{\zeta}_{\dd''})_*\mathfrak{IC}_{W,\dd''}^{\zeta}\right)\right)[\mathfrak{L}_{\dd',\dd''}^{-1}] \end{align} where \begin{align*} \mathfrak{L}_{\dd',\dd''}:=&\prod_{i,j\in Q_0}\prod_{1\leq t'\leq \dd'_i}\prod_{1\leq t''\leq \dd''_j}(y_{j,t''}-x_{i,t'})\\ \in &\HO_{G_{\dd'}}(\pt)\otimes \HO_{G_{\dd''}}(\pt)\\ =&\mathbb{C}[x_{1,1},\ldots,x_{1,\dd'_1},\ldots,x_{n,1},\ldots,x_{n,\dd'_n}]^{\Sym_{\dd'}}\otimes\\&\otimes \mathbb{C}[y_{1,1},\ldots,y_{1,\dd''_1},\ldots,y_{n,1},\ldots,y_{n,\dd''_n}]^{\Sym_{\dd''}} \end{align*} with $Q_0=\{1,\ldots,n\}$. For example, if $Q_0=\{1\}$ and $\dd'=\dd''=(1)$ then $\mathfrak{L}_{\dd',\dd''}=y_{1,1}-x_{1,1}$. These maps are required to satisfy the natural compatibility condition with the multiplication; see \cite[Def.5.3]{Da13}. We describe this comultiplication in a little more detail below; for a full account see \cite{Da13}. Consider the proper map $s_{\dd',\dd''}\colon \mathfrak{M}_{\dd',\dd''}^{\zeta\sst}\rightarrow\mathfrak{M}_{\dd}^{\zeta\sst}$. 
We form the map \[ \gamma\colon \HO(\mathfrak{M}^{\zeta\sst}_{\dd},\mathfrak{IC}_{W,\dd}^{\zeta})\rightarrow \HO(\mathfrak{M}^{\zeta\sst}_{\dd',\dd''},\phim{\mathfrak{Tr}(W)_{\dd',\dd''}^{\zeta}}\mathcal{IC}_{\mathfrak{M}_{\dd',\dd''}^{\zeta\sst}}(\mathbb{Q}))\otimes \mathbb{L}^{(\dd',\dd'')/2} \] given as the tensor product with $\mathbb{L}^{(\dd,\dd)/2}$ of the limit as $N\rightarrow\infty$ of the composition of maps \begin{align} \label{sagain} &\Ho\left(\dim_*p^{\zeta}_{\mu,N,*}\phim{\mathrm{Tr}(W)_{\dd,N}^{\zeta}}\mathbb{Q}_{X_{\dd,N}^{\zeta\sst}}\right)\rightarrow \Ho\left(\dim_*p^{\zeta}_{\mu,N,*}\phim{\mathrm{Tr}(W)^{\zeta}_{\dd,N}}s_{\dd',\dd'',N,*}\mathbb{Q}_{X^{\zeta\sst}_{\dd',\dd'',N}}\right)\rightarrow\\&\rightarrow \Ho\left(\dim_*p^{\zeta}_{\mu,N,*}s_{\dd',\dd'',N,*}\phim{\mathrm{Tr}(W)^{\zeta}_{\dd',\dd'',N}}\mathbb{Q}_{X^{\zeta\sst}_{\dd',\dd'',N}}\right),\nonumber \end{align} where $X^{\zeta\sst}_{\dd,N}$ and $X^{\zeta\sst}_{\dd',\dd'',N}$ are as in (\ref{XNdef}) and (\ref{XXNdef}) respectively. Considering instead the composition of maps \begin{align} \label{alterng} &\Ho\left(\dim_*\Ho\left(p^{\zeta}_{\mu,N,*}\phim{\mathrm{Tr}(W)_{\dd,N}^{\zeta}}\mathbb{Q}_{X_{\dd,N}^{\zeta\sst}}\right)\right)\rightarrow \Ho\left(\dim_*\Ho\left(p^{\zeta}_{\mu,N,*}\phim{\mathrm{Tr}(W)^{\zeta}_{\dd,N}}s_{\dd',\dd'',N,*}\mathbb{Q}_{X^{\zeta\sst}_{\dd',\dd'',N}}\right)\right)\rightarrow\\&\rightarrow \Ho\left(\dim_*\Ho\left(p^{\zeta}_{\mu,N,*}s_{\dd',\dd'',N,*}\phim{\mathrm{Tr}(W)^{\zeta}_{\dd',\dd'',N}}\mathbb{Q}_{X^{\zeta\sst}_{\dd',\dd'',N}}\right)\right)\nonumber \end{align} we deduce that $\gamma$ respects the perverse filtration, i.e. \[ \gamma\left(\Pf_s\left(\HO(\mathfrak{M}_{\dd}^{\zeta\sst},\mathfrak{IC}_{W,\dd}^{\zeta})\right)\right)\subset \Pf_{s}\left(\HO\left(\mathfrak{M}^{\zeta\sst}_{\dd',\dd''},\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd',\dd''}}\mathcal{IC}_{\mathfrak{M}_{\dd',\dd''}^{\zeta\sst}}(\mathbb{Q})\right)\otimes\mathbb{L}^{(\dd',\dd'')/2}\right).
\] Composing with the isomorphism \begin{align*} \Gamma\colon &\HO\left(\mathfrak{M}^{\zeta\sst}_{\dd',\dd''},\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd',\dd''}}\mathcal{IC}_{\mathfrak{M}_{\dd',\dd''}^{\zeta\sst}}(\mathbb{Q})\right)\otimes\mathbb{L}^{(\dd',\dd'')/2}\rightarrow \\&\HO(\mathfrak{M}^{\zeta\sst}_{\dd'},\mathfrak{IC}_{W,\dd'}^{\zeta})\otimes\HO(\mathfrak{M}^{\zeta\sst}_{\dd''},\mathfrak{IC}_{W,\dd''}^{\zeta})\otimes \mathbb{L}^{(\dd',\dd'')/2+(\dd'',\dd')/2} \end{align*} we likewise deduce \begin{align} \nonumber &\Gamma\circ\gamma\left(\Pf_s\left(\HO(\mathfrak{M}_{\dd}^{\zeta\sst},\mathfrak{IC}_{W,\dd}^{\zeta})\right)\right)\subset \\\nonumber& \Pf_s\left(\HO(\mathfrak{M}^{\zeta\sst}_{\dd'},\mathfrak{IC}_{W,\dd'}^{\zeta})\otimes\HO(\mathfrak{M}^{\zeta\sst}_{\dd''},\mathfrak{IC}_{W,\dd''}^{\zeta})\otimes \mathbb{L}^{(\dd',\dd'')/2+(\dd'',\dd')/2}\right)\\\nonumber & =\Pf_s\left(\HO(\mathfrak{M}^{\zeta\sst}_{\dd'},\mathfrak{IC}_{W,\dd'}^{\zeta})\boxtimes_+^{\tw}\HO(\mathfrak{M}^{\zeta\sst}_{\dd''},\mathfrak{IC}_{W,\dd''}^{\zeta})\otimes \mathbb{L}^{(\dd',\dd'')}\right)\\\label{downshift} & =\Pf_{s-2(\dd',\dd'')}\left(\HO(\mathfrak{M}^{\zeta\sst}_{\dd'},\mathfrak{IC}_{W,\dd'}^{\zeta})\boxtimes_+^{\tw}\HO(\mathfrak{M}^{\zeta\sst}_{\dd''},\mathfrak{IC}_{W,\dd''}^{\zeta})\right), \end{align} since $\Gamma$ lifts to a morphism of monodromic mixed Hodge modules on $\mathcal{M}^{\zeta\sst}$. The final equality (\ref{downshift}) does not respect the cohomological degree or the mixed Hodge structure. We define \[ \mathfrak{E}_{1,\dd',\dd''}:=\prod_{a\in Q_1}\prod_{1\leq l'\leq \dd'_{s(a)}}\prod_{1\leq l''\leq \dd''_{t(a)}}(x_{s(a),l'}-y_{t(a),l''}) \] and \[ \mathfrak{E}_{0,\dd',\dd''}:=\prod_{i\in Q_0}\prod_{1\leq l'\leq \dd'_i}\prod_{1\leq l''\leq \dd''_i}(x_{i,l'}-y_{i,l''}).
\] Multiplication by $\mathfrak{E}^{-1}_{1,\dd',\dd''}\mathfrak{E}_{0,\dd',\dd''}$ defines a map \begin{align*} \beta^*\colon &\HO(\mathfrak{M}^{\zeta\sst}_{\dd'},\mathfrak{IC}_{W,\dd'}^{\zeta})\boxtimes_+^{\tw}\HO(\mathfrak{M}^{\zeta\sst}_{\dd''},\mathfrak{IC}_{W,\dd''}^{\zeta})\rightarrow \\& \HO(\mathfrak{M}^{\zeta\sst}_{\dd'},\mathfrak{IC}_{W,\dd'}^{\zeta})\boxtimes_+^{\tw}\HO(\mathfrak{M}^{\zeta\sst}_{\dd''},\mathfrak{IC}_{W,\dd''}^{\zeta})[\mathfrak{L}^{-1}_{\dd',\dd''}]. \end{align*} The comultiplication is defined \cite[Cor.5.9]{Da13} by \begin{align*} \Delta_{W,\dd',\dd''}^{\zeta}\colon &\mathcal{A}_{Q,W,\dd}^{\zeta}\rightarrow \left(\mathcal{A}_{Q,W,\dd'}^{\zeta}\boxtimes^{\tw}_+ \mathcal{A}_{Q,W,\dd''}^{\zeta}\right)[\mathfrak{L}^{-1}_{\dd',\dd''}] \\ \Delta_{W,\dd',\dd''}^{\zeta}:=&(\cdot\mathfrak{E}^{-1}_{1,\dd',\dd''}\mathfrak{E}_{0,\dd',\dd''})\circ \Gamma\circ\gamma. \end{align*} The comultiplication respects cohomological degree, and lifts to a morphism in the category of monodromic mixed Hodge structures. For a dimension vector $\ee$, we define $\lvert \ee\rvert=\sum_{i\in Q_0}\ee_i$. So $\mathfrak{L}_{\dd',\dd''}$ is of cohomological degree $2\lvert\dd'\rvert \lvert \dd''\rvert$. We extend the perverse filtration to the right hand side of (\ref{coprodint}) by setting \begin{align*} &\Pf_{s}\left(\HO(\mathfrak{M}_{\dd'}^{\zeta\sst},\mathfrak{IC}_{W,\dd'}^{\zeta})\boxtimes_+^{\tw} \HO(\mathfrak{M}_{\dd''}^{\zeta\sst},\mathfrak{IC}_{W,\dd''}^{\zeta})[\mathfrak{L}_{\dd',\dd''}^{-1}]\right)=\\&\sum_{n\geq 0}\Pf_{s+2\lvert\dd''\rvert\lvert\dd'\rvert n}\left(\HO(\mathfrak{M}_{\dd'}^{\zeta\sst},\mathfrak{IC}_{W,\dd'}^{\zeta})\boxtimes_+^{\tw} \HO(\mathfrak{M}^{\zeta\sst}_{\dd''},\mathfrak{IC}_{W,\dd''}^{\zeta})\right)\cdot \mathfrak{L}_{\dd',\dd''}^{-n}. 
\end{align*} By \cite[Prop.4.1]{Da13}, the natural map \begin{align*} &\left(\HO(\mathfrak{M}_{\dd'}^{\zeta\sst},\mathfrak{IC}_{W,\dd'}^{\zeta})\boxtimes_+^{\tw} \HO(\mathfrak{M}^{\zeta\sst}_{\dd''},\mathfrak{IC}_{W,\dd''}^{\zeta})\right)\xrightarrow{f_{\dd',\dd''}}\\& \left(\HO(\mathfrak{M}^{\zeta\sst}_{\dd'},\mathfrak{IC}_{W,\dd'}^{\zeta})\boxtimes_+^{\tw} \HO(\mathfrak{M}^{\zeta\sst}_{\dd''},\mathfrak{IC}_{W,\dd''}^{\zeta})\right)[\mathfrak{L}_{\dd',\dd''}^{-1}] \end{align*} is an inclusion, since the operation $\cdot\mathfrak{L}_{\dd',\dd''}$ is injective on the domain. By definition, $f_{\dd',\dd''}$ preserves the perverse filtration. \begin{proposition} \label{HCM2} The group $\Pf_s\left(\left(\HO(\mathfrak{M}^{\zeta\sst}_{\dd'},\mathfrak{IC}_{W,\dd'}^{\zeta})\boxtimes_+^{\tw} \HO(\mathfrak{M}^{\zeta\sst}_{\dd''},\mathfrak{IC}_{W,\dd''}^{\zeta})\right)[\mathfrak{L}_{\dd',\dd''}^{-1}]\right)$ consists of those elements $a\cdot \mathfrak{L}_{\dd',\dd''}^{-n}$ such that $a\in \Pf_{s+2n\lvert\dd'\rvert\lvert\dd''\rvert}\left(\left(\HO(\mathfrak{M}^{\zeta\sst}_{\dd'},\mathfrak{IC}_{W,\dd'}^{\zeta})\boxtimes_+^{\tw} \HO(\mathfrak{M}^{\zeta\sst}_{\dd''},\mathfrak{IC}_{W,\dd''}^{\zeta})\right)\right)$. \end{proposition} \begin{proof} One inclusion is clear. On the other hand, suppose that we can write \[ a=\sum_{i=0}^{n}a_i\mathfrak{L}_{\dd',\dd''}^{-i} \] with $a_i\in \Pf_{s+2i\lvert\dd'\rvert\lvert\dd''\rvert}\left(\left(\HO(\mathfrak{M}^{\zeta\sst}_{\dd'},\mathfrak{IC}_{W,\dd'}^{\zeta})\boxtimes_+^{\tw} \HO(\mathfrak{M}^{\zeta\sst}_{\dd''},\mathfrak{IC}_{W,\dd''}^{\zeta})\right)\right)$. Then by Lemma \ref{HCM}, each term $a_i\mathfrak{L}_{\dd',\dd''}^{n-i}\in \Pf_{s+2n\lvert\dd'\rvert\lvert\dd''\rvert}\left(\left(\HO(\mathfrak{M}^{\zeta\sst}_{\dd'},\mathfrak{IC}_{W,\dd'}^{\zeta})\boxtimes_+^{\tw} \HO(\mathfrak{M}^{\zeta\sst}_{\dd''},\mathfrak{IC}_{W,\dd''}^{\zeta})\right)\right)$ and the proposition follows.
\end{proof} By Lemma \ref{HCM} and Proposition \ref{HCM2}, multiplication by $\mathfrak{E}_{1,\dd',\dd''}^{-1}\cdot\mathfrak{E}_{0,\dd',\dd''}$ induces a map \begin{align} \nonumber&\Pf_s\left(\left(\HO(\mathfrak{M}^{\zeta\sst}_{\dd'},\mathfrak{IC}_{W,\dd'}^{\zeta})\boxtimes_+^{\tw} \HO(\mathfrak{M}^{\zeta\sst}_{\dd''},\mathfrak{IC}_{W,\dd''}^{\zeta})\right)[\mathfrak{L}_{\dd',\dd''}^{-1}]\right)\rightarrow \\ \label{upshift} &\Pf_{s+2(\dd',\dd'')}\left(\left(\HO(\mathfrak{M}^{\zeta\sst}_{\dd'},\mathfrak{IC}_{W,\dd'}^{\zeta})\boxtimes_+^{\tw} \HO(\mathfrak{M}_{\dd''}^{\zeta\sst},\mathfrak{IC}_{W,\dd''}^{\zeta})\right)[\mathfrak{L}_{\dd',\dd''}^{-1}]\right). \end{align} Combining the shifts in perverse degree in (\ref{upshift}) and (\ref{downshift}), we deduce that \[ \Delta^{\zeta}_{W,\dd',\dd''}\left(\Pf_s (\HO(\mathfrak{M}_{\dd}^{\zeta\sst},\mathfrak{IC}_{W,\dd}^{\zeta}))\right)\subset \Pf_s\left(\HO(\mathfrak{M}^{\zeta\sst}_{\dd'},\mathfrak{IC}_{W,\dd'}^{\zeta})\boxtimes_+^{\tw}\HO(\mathfrak{M}^{\zeta\sst}_{\dd''},\mathfrak{IC}_{W,\dd''}^{\zeta})[\mathfrak{L}_{\dd',\dd''}^{-1}]\right). \] The proof of the following is basically the original proof of the Atiyah--Bott lemma of \cite{AtBo83}, adapted to deal with vanishing cycles and passage to the associated graded of the perverse filtration. \begin{lemma} \label{PAB} The map $\Gr_{\Pf}(f_{\dd',\dd''})$ is injective. \end{lemma} \begin{proof} We abbreviate $\Pf_i=\Pf_i\left(\HO(\mathfrak{M}^{\zeta\sst}_{\dd'},\mathfrak{IC}_{W,\dd'}^{\zeta})\boxtimes_+^{\tw} \HO(\mathfrak{M}^{\zeta\sst}_{\dd''},\mathfrak{IC}_{W,\dd''}^{\zeta})\right)$. It is sufficient to show that the degree $2\lvert\dd'\rvert \lvert\dd''\rvert$ map $\Gr_{\Pf}(\cdot \mathfrak{L}_{\dd',\dd''})$ is injective. 
For then, if $a$ in the domain of $f_{\dd',\dd''}$ lies in $\Pf_{i+1}\setminus \Pf_i$, we have $(\mathfrak{L}_{\dd',\dd''})^r\cdot a\notin \Pf_{i+2r|\dd'||\dd''|}$ for all $r\in\mathbb{N}$, so that \[ f_{\dd',\dd''}(a)\notin \left(\sum_{r\geq 0}\Pf_{i+2\lvert\dd'\rvert \lvert\dd''\rvert r}\left(\HO(\mathfrak{M}^{\zeta\sst}_{\dd'},\mathfrak{IC}_{W,\dd'}^{\zeta})\boxtimes_+^{\tw} \HO(\mathfrak{M}^{\zeta\sst}_{\dd''},\mathfrak{IC}_{W,\dd''}^{\zeta})\right)\cdot \mathfrak{L}_{\dd',\dd''}^{-r}\right) \] and $\Gr_{\Pf}(f_{\dd',\dd''})(a)\neq 0$. Consider the subgroup $\mathbb{G}_m\cong T\subset G_{\dd'}\times G_{\dd''}$ given by the embedding \[ z\mapsto \left( (z^{\lvert\dd''\rvert}\id_{\mathbb{C}^{\dd'_i}})_{i\in Q_0},(z^{-\lvert\dd'\rvert}\id_{\mathbb{C}^{\dd''_i}})_{i\in Q_0}\right) \] and let $P_{\dd',\dd''}:=(G_{\dd'}\times G_{\dd''})/T$. Note that $T$ acts trivially on $X^{\zeta\sst}_{\dd'}\times X^{\zeta\sst}_{\dd''}$ and furthermore $T$ acts trivially on the linearization (\ref{charDef}), so that the linearization of the $G_{\dd'}\times G_{\dd''}$-action lifts to a linearization of the $P_{\dd',\dd''}$-action. Let \[ p'_{\dd',\dd''}\colon (X^{\zeta\sst}_{\dd'}\times X^{\zeta\sst}_{\dd''})/P_{\dd',\dd''}\rightarrow \mathcal{M}_{\dd'}^{\zeta\sst}\times \mathcal{M}_{\dd''}^{\zeta\sst} \] be the map to the GIT quotient. Via the Cartesian square \[ \xymatrix{ \mathfrak{M}_{\dd'}^{\zeta\sst}\times \mathfrak{M}_{\dd''}^{\zeta\sst}\ar[r]\ar[d]&\pt/(G_{\dd'}\times G_{\dd''})\ar[d]\\ (X^{\zeta\sst}_{\dd'}\times X^{\zeta\sst}_{\dd''})/P_{\dd',\dd''}\ar[r]&\pt/P_{\dd',\dd''} } \] we obtain the isomorphism \begin{align*} &\Ho\left(p^{\zeta}_{\dd',*}\mathfrak{IC}_{W,\dd'}^{\zeta}\boxtimes p^{\zeta}_{\dd'',*}\mathfrak{IC}_{W,\dd''}^{\zeta}\right)\cong\\& \Ho\left(p'_{\dd',\dd'',*}\phim{f}\mathcal{IC}_{X^{\zeta\sst}_{\dd'}\times X^{\zeta\sst}_{\dd''}/P_{\dd',\dd''}}(\mathbb{Q})\right)\otimes_{\HO_{P_{\dd',\dd''}}(\pt)}\HO_{G_{\dd'}\times G_{\dd''}}(\pt)\otimes \mathbb{L}^{1/2}.
\end{align*} where $f$ is the function induced by $\mathrm{Tr}(W)_{\dd'}\boxplus \mathrm{Tr}(W)_{\dd''}$ on $(X^{\zeta\sst}_{\dd'}\times X^{\zeta\sst}_{\dd''})/P_{\dd',\dd''}$. After picking an isomorphism $\lambda\colon \HO_{G_{\dd'}\times G_{\dd''}}(\pt)\cong \HO_{P_{\dd',\dd''}}(\pt)\otimes \HO_{T}(\pt)$ we obtain the isomorphism \begin{align*} &\Ho\left(p^{\zeta}_{\dd',*}\mathfrak{IC}_{W,\dd'}^{\zeta}\boxtimes p^{\zeta}_{\dd'',*}\mathfrak{IC}_{W,\dd''}^{\zeta}\right)\cong \Ho\left(p'_{\dd',\dd'',*}\phim{f}\mathcal{IC}_{X^{\zeta\sst}_{\dd'}\times X^{\zeta\sst}_{\dd''}/P_{\dd',\dd''}}(\mathbb{Q})\right)\otimes \HO(\mathcal{IC}_{\pt/T}(\mathbb{Q})). \end{align*} Let \begin{align*} D^i=&\Ho^{\geq i}\left(p'_{\dd',\dd'',*}\phim{f}\mathcal{IC}_{X^{\zeta\sst}_{\dd'}\times X^{\zeta\sst}_{\dd''}/P_{\dd',\dd''}}(\mathbb{Q})\right)\otimes \HO(\mathcal{IC}_{\pt/T}(\mathbb{Q}))\\&\subset \Ho(p^{\zeta}_{\dd',*}\mathfrak{IC}_{W,\dd'}^{\zeta}\boxtimes p^{\zeta}_{\dd'',*}\mathfrak{IC}_{W,\dd''}^{\zeta}) \end{align*} and let $E^i_j=D^i\cap \Pf_j$. We remark, though we do not use the fact, that the filtration $D$, and hence the filtration $E$, does not depend on $\lambda$. Consider the associated bigraded object $\Gr_E\left(\Ho(p^{\zeta}_{\dd',*}\mathfrak{IC}_{W,\dd'}^{\zeta}\boxtimes p^{\zeta}_{\dd'',*}\mathfrak{IC}_{W,\dd''}^{\zeta})\right)$. Multiplication by $\mathfrak{L}_{\dd',\dd''}$ preserves the $D$-degree, and so induces a map \[ \Gr_{E}(\cdot \mathfrak{L}_{\dd',\dd''})\colon \Gr_{E,j}^i\Ho(p^{\zeta}_{\dd',*}\mathfrak{IC}_{W,\dd'}^{\zeta}\boxtimes p^{\zeta}_{\dd'',*}\mathfrak{IC}_{W,\dd''}^{\zeta})\rightarrow \Gr_{E,j+2|\dd'||\dd''|}^i\Ho(p^{\zeta}_{\dd',*}\mathfrak{IC}_{W,\dd'}^{\zeta}\boxtimes p^{\zeta}_{\dd'',*}\mathfrak{IC}_{W,\dd''}^{\zeta}).
\] Specifically, the map is given by multiplication by the $T$-equivariant Euler class of the vector bundle $X_{\dd',\dd''}\rightarrow X_{\dd'}\times X_{\dd''}$, which is nonzero since this vector bundle has no subbundle with trivial $T$-weight. It follows that $\Gr_{E}(\cdot \mathfrak{L}_{\dd',\dd''})$ is injective, so that $\Gr_{\Pf}(\cdot \mathfrak{L}_{\dd',\dd''})$ is injective too. \end{proof} \begin{proposition} The triple $\left(\Gr_{\Pf}(\HO(\Coha_{W,\mu}^{\zeta})),\Gr_{\Pf}(\HO(\ms^{\zeta}_{W,\mu})),\Gr_{\Pf}(\Delta_{W,\mu}^{\zeta})\right)$ defines a localised bialgebra in the sense of \cite[Def.5.3]{Da13}. \end{proposition} Now assume that $\zeta$ is $\mu$-generic. By Corollary \ref{inccor} and Proposition \ref{PCoha}, there is a natural inclusion in $\mathcal{D}^{\geq}(\MMHM(\Lambda^{\zeta}_{\mu}))$ \[ \bigoplus_{\dd\in \Lambda_{\mu}^{\zeta}}\DT_{W,\dd}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\hookrightarrow\Gr_{\Pf}(\HO(\Coha^{\zeta}_{W,\mu})) \] where \[ \DT_{W,\dd}^{\zeta}:=\HO(\mathcal{M}^{\zeta\sst}_{\dd},\mathcal{DT}_{W,\dd}^{\zeta}). \] \begin{proposition} \label{primitivity} If $\zeta$ is a $\mu$-generic stability condition, then for $\dd\in\Lambda_{\mu}^{\zeta}$ the subspace $\DT_{W,\dd}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}$ is a primitive subspace in $\Gr_{\Pf}(\HO(\Coha_{W,\mu}^{\zeta}))$, i.e.
the composition of maps \begin{align*} &\bigoplus_{\dd\in \Lambda_{\mu}^{\zeta}}\DT_{W,\dd}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\hookrightarrow\Gr_{\Pf}(\HO(\Coha^{\zeta}_{W,\mu}))\xrightarrow{\Gr_{\Pf}(\Delta^{\zeta}_{W,\mu})}\\&\bigoplus_{\dd',\dd''\in\Lambda_{\mu}^{\zeta}}\Gr_{\Pf}\left((\HO(\Coha^{\zeta}_{W,\dd'})\boxtimes^{\tw}_+\HO(\Coha^{\zeta}_{W,\dd''}))[\mathfrak{L}_{\dd',\dd''}^{-1}]\right) \twoheadrightarrow \bigoplus_{\dd',\dd''\in\Lambda_{\mu}^{\zeta}\setminus\{0\}}\Gr_{\Pf}\left((\HO(\Coha^{\zeta}_{W,\dd'})\boxtimes^{\tw}_+\HO(\Coha^{\zeta}_{W,\dd''}))[\mathfrak{L}_{\dd',\dd''}^{-1}]\right) \end{align*} is zero. \end{proposition} \begin{proof} We need to show that for decompositions $\dd=\dd'+\dd''$, with $\dd'\neq 0\neq \dd''$, the image of $\DT_{W,\dd}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}$ under the map \begin{align} \label{int4} &\Gr_{\Pf}(\Delta^{\zeta}_{\dd',\dd''})\colon \HO\left(\mathcal{M}_{\dd}^{\zeta\sst},\Ho\left(p^{\zeta}_{\dd,*}\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd}}\mathcal{IC}_{\mathfrak{M}_{\dd}^{\zeta\sst}}\right)\right)\rightarrow \\ \nonumber&\left(\HO\left(\mathcal{M}_{\dd'}^{\zeta\sst},\Ho\left(p^{\zeta}_{\dd',*}\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd'}}\mathcal{IC}_{\mathfrak{M}_{\dd'}^{\zeta\sst}}\right)\right)\boxtimes_{+}^{\tw} \HO\left(\mathcal{M}^{\zeta\sst}_{\dd''},\Ho\left(p^{\zeta}_{\dd'',*}\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd''}}\mathcal{IC}_{\mathfrak{M}_{\dd''}^{\zeta\sst}}\right)\right)\right)[\mathfrak{L}_{\dd',\dd''}^{-1}] \end{align} is zero. We may assume that $\mathcal{M}^{\zeta\st}_{\dd}\neq \emptyset$, for otherwise $\DT_{W,\dd}^{\zeta}=0$ by definition and the statement is trivial.
In terms of the commutative diagram \begin{equation} \label{int5} \xymatrix{ \Ho\left(\dim_*\Ho\left(p^{\zeta}_{\mu,*}\phim{\mathfrak{Tr}(W)_{\dd}^{\zeta}}\mathbb{Q}_{X_{\dd}^{\zeta\sst}}\right)\right)\ar[r]^-{\gamma''} &\Ho\left(\dim_*\Ho\left(p^{\zeta}_{\mu,*}s_{\dd',\dd'',*}\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd',\dd''}}\mathbb{Q}_{\mathfrak{M}^{\zeta\sst}_{\dd',\dd''}}\right)\right)\\ \Ho\left(\dim_*\phim{\mathcal{T}r(W)_{\dd}^{\zeta}}\Ho\left(p^{\zeta}_{\mu,*}\mathbb{Q}_{X_{\dd}^{\zeta\sst}}\right)\right)\ar[r]^-{\gamma'} \ar[u]^{\mathbb{L}^{-(\dd,\dd)/2}\otimes\Ho(\dim_*\nu_{\dd})}&\Ho\left(\dim_*\phim{\mathcal{T}r(W)_{\dd}^{\zeta}}\Ho\left(p^{\zeta}_{\mu,*}s_{\dd',\dd'',*}\mathbb{Q}_{\mathfrak{M}^{\zeta\sst}_{\dd',\dd''}}\right)\right)\ar[u]^{\mathbb{L}^{-(\dd,\dd)/2}\otimes\Ho(\dim_*\nu_{\dd',\dd''})}\\ \Ho\left(\dim_* \phim{\mathcal{T}r(W)^{\zeta}_{\dd}}\mathcal{IC}_{\mathcal{M}^{\zeta\sst}_{\dd}}(\mathbb{Q})\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right)\otimes\mathbb{L}^{-(\dd,\dd)/2}\ar[u]^{\iota} } \end{equation} where $\gamma''=\Gr_{\Pf}(\gamma)\otimes\mathbb{L}^{-(\dd,\dd)/2}$, it is sufficient to show that \[ \gamma''\circ(\mathbb{L}^{-(\dd,\dd)/2}\otimes\Ho(\dim_*\nu_{\dd}))\circ\iota=0. \] On the other hand, the map $\gamma'\iota$ is obtained by applying $\Ho\left(\dim_*\phim{\mathcal{T}r(W)_{\dd}^{\zeta}}\right)$ to a map \begin{align} \label{nearmiss} \mathcal{IC}_{\mathcal{M}^{\zeta\sst}_{\dd}}(\mathbb{Q})\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\rightarrow \Ho\left(p^{\zeta}_{\mu,*}s_{\dd',\dd'',*}\mathbb{Q}_{\mathfrak{M}^{\zeta\sst}_{\dd',\dd''}}\right) \end{align} which is necessarily zero, since for every cohomological degree, the domain and the target of (\ref{nearmiss}) are pure complexes of mixed Hodge modules with distinct strict support. 
\end{proof} Let $\mathcal{P}^{\zeta}_{W,\mu}\subset \Gr_{\Pf}(\HO(\Coha^{\zeta}_{W,\mu}))$ be the subalgebra generated by $\bigoplus_{\dd\in\Lambda_{\mu}^{\zeta}}\DT_{W,\dd}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}$. Since $\mathcal{P}^{\zeta}_{W,\mu}$ is generated by primitive elements of the localised bialgebra, it follows from Lemma \ref{PAB} that the localised bialgebra structure on $\mathcal{P}^{\zeta}_{W,\mu}$ lifts to an honest bialgebra structure. We denote by $\overline{\Delta}_{W,\mu}^{\zeta}\colon \mathcal{P}^{\zeta}_{W,\mu}\rightarrow \mathcal{P}^{\zeta}_{W,\mu}\boxtimes_+ \mathcal{P}^{\zeta}_{W,\mu}$ the induced coproduct. \begin{corollary} \label{trueHopf} Assume that $\zeta$ is $\mu$-generic. The quadruple \[ \left(\mathcal{P}_{W,\mu}^{\zeta},\Ho\left(\dim_*\Ho(\ms_{W,\mu}^{\zeta})\right), \overline{\Delta}^{\zeta}_{W,\mu},\mathcal{P}_{W,0}\right) \] extends to a Hopf algebra. \end{corollary} \begin{proof} All that is missing is a compatible antipode, but existence and uniqueness of an antipode are a formal consequence of connectedness of the algebra. \end{proof} Consider the inclusion \[ \iota \colon \Sym_{\boxtimes_{\oplus}}(\mathcal{DT}_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir})\hookrightarrow \Free_{\boxtimes_{\oplus}}(\mathcal{DT}_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}).
\] Retaining the assumption that $\zeta$ is $\mu$-generic, the relative cohomological Hall algebra map $\Ho(\ms_{W,\mu}^{\zeta})\colon \Ho(p_{\mu,*}^{\zeta}\mathfrak{IC}_{W,\mu}^{\zeta})\boxtimes_{\oplus}\Ho(p_{\mu,*}^{\zeta}\mathfrak{IC}_{W,\mu}^{\zeta})\rightarrow \Ho(p_{\mu,*}^{\zeta}\mathfrak{IC}_{W,\mu}^{\zeta})$ along with the inclusions \[ \Free_{\boxtimes_{\oplus}}(\mathcal{DT}_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir})\rightarrow \Free_{\boxtimes_{\oplus}}\left(\Ho(p_{\mu,*}^{\zeta}\mathfrak{IC}_{W,\mu}^{\zeta})\right) \] and $\iota$ induce a map \begin{equation} \label{GaDef} \Gamma_{W,\mu}^{\zeta}\colon \Sym_{\boxtimes_{\oplus}}(\mathcal{DT}_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir})\rightarrow \Ho(\Coha_{W,\mu}^{\zeta}). \end{equation} \begin{lemma} \label{inj} The map $\Gamma^{\zeta}_{W,\mu}$ is an injection. \end{lemma} \begin{proof} We start by assuming $W=0$. We first claim that the map \begin{equation} \label{pron} \Psi\colon \Sym_{\boxtimes_{+}}(\HO(\mathcal{M}_{\mu}^{\zeta\sst},\mathcal{DT}_{0,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}))\rightarrow \mathcal{P}^{\zeta}_{0,\mu} \end{equation} is an injection. For this, consider the non-counital coproduct $\overline{\Delta}^{\zeta}_{0,\mu,\red}$ induced by $\overline{\Delta}^{\zeta}_{0,\mu}$ on the nonunital algebra $(K=\ker(\mathcal{P}^{\zeta}_{0,\mu}\rightarrow \mathcal{P}^{\zeta}_{0,0}),\HO(\ms_{0,\mu}^{\zeta})|_K)$. Let $\alpha$ be an element of the left hand side of (\ref{pron}), and let $n$ be the maximum number such that the $n$th component of $\alpha$ in the decomposition \[ \Sym_{\boxtimes_{+}}\left(\HO(\mathcal{M}_{\mu}^{\zeta\sst},\mathcal{DT}_{0,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir})\right):=\bigoplus_{n}\Sym^n_{\boxtimes_{+}}\left(\HO(\mathcal{M}_{\mu}^{\zeta\sst},\mathcal{DT}_{0,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir})\right) \] is nonzero.
We may assume that $n> 0$, for otherwise the image of $\alpha$ under (\ref{pron}) is a scalar multiple of the unit of $\HO(\Coha_{0,\mu})$. Then $\overline{\Delta}_{0,\mu,\red}^{\zeta,n}(\Psi(\alpha))=\iota(\alpha)\neq 0$, proving the claim. On the other hand, $\HO(\Gamma_{0,\mu}^{\zeta})$ factors as $i\circ\Psi$, where $i\colon\mathcal{P}^{\zeta}_{0,\mu}\hookrightarrow\Ho(\mathcal{A}_{0,\mu}^{\zeta})$ is the inclusion, and so $\HO(\Gamma_{0,\mu}^{\zeta})$ is an injection. Still assuming $W=0$, in each degree, the left and right hand sides of (\ref{GaDef}) are direct sums of shifts of simple perverse sheaves $\mathcal{IC}_{Z}(\mathbb{Q})$ for $Z\subset\mathcal{M}_{\mu}^{\zeta\sst}$ locally closed subvarieties, and $\mathbb{Q}$ the trivial local system on $Z$. More precisely, the $Z$ all have the form $\Sym^{\circ,a_1}X_{\dd_1}\times \ldots \times\Sym^{\circ,a_i}X_{\dd_i}$ for decompositions $\dd=\sum_{n=1}^i a_n\dd_n$, and $\Sym^{\circ,a_n} X_{\dd_n}$ defined to be $\Sym^{a_n} X_{\dd_n}$ minus the big diagonal, and we have \[ \HO(\mathcal{IC}_Z(\mathbb{Q}))=\bigotimes_{n=1}^i\Sym^{a_n}\left(\HO(\mathcal{IC}_{\mathcal{M}^{\zeta\sst}_{\dd_n}}(\mathbb{Q}))\right). \] Each $\HO(\mathcal{IC}_{\mathcal{M}^{\zeta\sst}_{\dd_n}}(\mathbb{Q}))$ is nonzero: for instance in degree zero, the underlying vector space has a basis indexed by path components of $\mathcal{M}^{\zeta\st}_{\dd_n}$, i.e. $\HO^0(\mathcal{IC}_{\mathcal{M}^{\zeta\sst}_{\dd_n}})\cong\mathbb{Q}$, from which it follows that each $\HO(\mathcal{IC}_{Z}(\mathbb{Q}))\neq 0$. It follows from the injectivity of (\ref{pron}) that $\Gamma_{0,\mu}^{\zeta}$ is an injection, since it is a morphism between direct sums of simple objects in a semisimple category, and it is an injection after passing to hypercohomology. Since it is a map of pure mixed Hodge modules, it is in addition split injective, and so we deduce that $\Gamma^{\zeta}_{0,\mu}$ remains injective after applying $\phim{\mathcal{T}r(W)_{\mu}}$.
Now the result follows from Lemma \ref{cohacom}, which gives the commutativity of \[ \xymatrix{ \phim{\mathcal{T}r(W)}\Sym_{\boxtimes_{\oplus}}\left(\mathcal{DT}^{\zeta}_{0,\mu}\otimes \HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right) \ar[d]^{\nu}\ar[rr]^-{\phim{\mathcal{T}r(W)_{\mu}^{\zeta}}\Gamma_{0,\mu}}&&\phim{\mathcal{T}r(W)_{\mu}^{\zeta}}\Ho\left(p^{\zeta}_{\mu,*}\mathfrak{IC}^{\zeta}_{0,\mu}\right)\ar[d]^{\nu}\\ \Sym_{\boxtimes_{\oplus}}\left(\mathcal{DT}^{\zeta}_{W,\mu}\otimes \HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right)\ar[rr]^-{\Gamma_{W,\mu}}&&\Ho\left(p^{\zeta}_{\mu,*}\mathfrak{IC}^{\zeta}_{W,\mu}\right). } \] \end{proof} \begin{theorem} \label{zPBW} The natural map obtained from the relative cohomological multiplication and the natural inclusions of Corollary \ref{inccor} \[ \Gamma_{W,\mu}\colon \Sym_{\boxtimes_{\oplus}}(\mathcal{DT}_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir})\rightarrow \Ho\left(p^{\zeta}_{\mu,*}\mathfrak{IC}_{W,\mu}^{\zeta}\right) \] is an isomorphism in $\mathcal{D}^{\geq}(\MMHM(\mathcal{M}_{\mu}^{\zeta\sst}))$. \end{theorem} \begin{proof} From Lemma \ref{inj} we already know that the map is an inclusion; in particular this holds in the case $W=0$. But by Theorem \ref{weakPBW}, if $W=0$, the left and right hand sides are also pure mixed Hodge modules with the same class in the Grothendieck group, so that \[ \Gamma_{0,\mu}\colon \Sym_{\boxtimes_{\oplus}}\left(\mathcal{DT}^{\zeta}_{0,\mu}\otimes \HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right)\rightarrow \Ho\left(p_{\mu,*}^{\zeta}\mathfrak{IC}_{0,\mu}^{\zeta}\right) \] is an isomorphism. Again applying Lemma \ref{cohacom}, we are done.
\end{proof} Combining Theorem \ref{zPBW}, Corollary \ref{trueHopf} and Proposition \ref{primitivity}, we deduce that \[ \left(\Ho\left(\dim_*\Ho\left( p^{\zeta}_{\mu,*}\mathfrak{IC}_{W,\mu}^{\zeta}\right)\right),\Ho\left(\dim_*\Ho(\ms_{W,\mu}^{\zeta})\right), \overline{\Delta}^{\zeta}_{W,\mu},\Ho\left(\dim_*\Ho(\ms_{W,\mu}^{\zeta})\right)_0\right) \] is a connected unital Hopf algebra that is generated by the primitive sub monodromic mixed Hodge module $\mathcal{DT}_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}$, and is therefore the universal enveloping algebra of its Lie algebra of primitive elements. This completes the proof of Theorem \ref{qea}. \subsection{Proof of Theorem \ref{strongPBW}} \label{CWCF} We finish this section by proving Theorem \ref{strongPBW}, the cohomological wall crossing theorem relating different PBW bases for the cohomological Hall algebra $\Ho(p_*\mathfrak{IC}_W)$. It turns out that almost all of the hard work goes into proving that $\Ho(p^{\zeta}_{\mu,*}\mathfrak{IC}_{W,\mu}^{\zeta})$ admits a PBW basis, i.e. Theorem \ref{zPBW}. Fix a generic stability condition $\zeta$. By Corollary \ref{inccor} there are canonical split inclusions $\mathcal{DT}^{\zeta}_{W,\mu}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\rightarrow \Ho\left(p_{\mu,*}^{\zeta}\mathfrak{IC}_{W,\mu}^{\zeta}\right)$ giving rise to split inclusions $q^{\zeta}_{\mu,*}\mathcal{DT}^{\zeta}_{W,\mu}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\rightarrow q_{\mu,*}^{\zeta}\Ho\left(p^{\zeta}_{\mu,*}\mathfrak{IC}_{W,\mu}^{\zeta}\right)$. By the proof of Theorem \ref{gcDT}, the distinguished triangles (\ref{HNtri}) split, and picking a splitting for each of the diagrams gives rise to a specific isomorphism (\ref{gwPBWnc}), and in particular, for each slope $\mu$, an embedding of $q^{\zeta}_{\mu,*}\mathcal{DT}_{W,\mu}^{\zeta}\otimes \HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}$ inside $\Ho\left(p_*\mathfrak{IC}_W\right)$. 
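As a reminder on the factor $\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}$ appearing throughout: $\HO(\mathbb{C}\mathbb{P}^{\infty})\cong\bigoplus_{n\geq 0}\mathbb{L}^{n}$, so that, under the convention (assumed here) that $(-)_{\vir}$ denotes the half Tate twist by $\mathbb{L}^{1/2}$, \[ \HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\cong\bigoplus_{n\geq 0}\mathbb{L}^{n+1/2}, \] with class $\mathbb{L}^{1/2}/(1-\mathbb{L})$ in the Grothendieck group; this is the factor responsible for the denominators appearing in the integrality conjecture.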
We fix this embedding in the statement of the main theorem, where for simplicity we also fix an isomorphism $\Ho(q^{\zeta}_{\mu,*}\mathcal{DT}_{W,\mu}^{\zeta}\otimes \HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir})\cong q^{\zeta}_{\mu,*}\mathcal{DT}_{W,\mu}^{\zeta}\otimes \HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}$, which we are free to do via the decomposition theorem, stated as in Proposition \ref{basicfacts}(\ref{vanDec}). \begin{theorem} \label{sPBW} If $\zeta$ is a generic stability condition, the map \[ \boxtimes_{\oplus,\infty \xrightarrow{\mu}-\infty}^{\tw} \left(\Sym_{\boxtimes_{\oplus}}\left(q_{\mu,*}^{\zeta}\mathcal{DT}_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right)\right)\xrightarrow{\Ho(\ms_W)} \Ho\left(p_*\mathfrak{IC}_W\right) \] is an isomorphism. \end{theorem} \begin{proof} Fix a dimension vector $\dd$, and let $\HN^{\geq}_{\dd}$ be the set of Harder--Narasimhan types summing to $\dd$, endowed with a total order refining Reineke's order as in the proof of Theorem \ref{gcDT}. Then since each of the distinguished triangles (\ref{HNtri}) is split, there is a canonical filtration $G$, indexed by $\HN^{\geq}_{\dd}$, on \[ \Ho\left(p_{\dd,*}\phim{\mathfrak{Tr}(W)_{\dd}}\mathcal{IC}_{\mathfrak{M}_{\dd}}(\mathbb{Q})\right)\cong\phim{\mathcal{T}r(W)_{\dd}}\Ho\left(p_{\dd,*}\mathcal{IC}_{\mathfrak{M}_{\dd}}(\mathbb{Q})\right) \] defined by setting \[ G_{\overline{\dd}}:=\phim{\mathcal{T}r(W)_{\dd}}\Ho\left(p_{\dd,*}i_{\leq \overline{\dd}}\mathcal{IC}_{\mathfrak{M}_{\leq \overline{\dd}}}(\mathbb{Q})\right).
\] After giving the domain of the map \[ \Lambda\colon\bigoplus_{(\dd^1,\ldots,\dd^g)\in\HN^{\geq}_{\dd}}\left(q^{\zeta}_{\dd^1,*}\Ho\left(p_{\dd^1,*}^{\zeta}\mathfrak{IC}_{W,\dd^1}^{\zeta}\right)\boxtimes^{\tw}_{\oplus}\ldots\boxtimes^{\tw}_{\oplus}q^{\zeta}_{\dd^g,*}\Ho\left(p_{\dd^g,*}^{\zeta}\mathfrak{IC}_{W,\dd^g}^{\zeta}\right)\right)\xrightarrow{\Ho(\ms_W)} \Ho\left(p_{\dd,*}\mathfrak{IC}_W\right) \] the filtration induced by the order on $\HN_{\dd}^{\geq}$, the associated graded of $\Lambda$ with respect to this filtration is the identity map, and the result follows by Theorem \ref{zPBW}. \end{proof} \begin{corollary} For a generic stability condition $\zeta$ there exist embeddings $\DT^{\zeta}_{W,\mu}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\subset \HO(\Coha_{W})$ such that the induced map \[ \boxtimes_{\oplus,\infty \xrightarrow{\mu}-\infty}^{\tw}\left(\Sym_{\boxtimes_+}\left(\DT^{\zeta}_{W,\mu}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right)\right)\xrightarrow{\HO(\ms_W)}\HO(\Coha_W) \] is an isomorphism. \end{corollary} \begin{proof} By Theorem \ref{sPBW} and Proposition \ref{PCoha}, the corollary is true after replacing $\HO(\Coha_W)$ with the perverse associated graded $\Gr_{\Pf}(\HO(\Coha_{W}))=\Ho(\dim_*\Ho(p_*\mathfrak{IC}_W))$. Then for any lift of the embedding $\DT^{\zeta}_{W,\mu}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\subset \Gr_{\Pf}(\HO(\Coha_{W}))$ to an embedding $\DT^{\zeta}_{W,\mu}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\subset \HO(\Coha_{W})$, the result follows. \end{proof} \subsection{Special CoHAs} Let $(Q,W)$ be a quiver with potential, let $\mu\in (-\infty,\infty)$ be a slope, and let $\zeta$ be a $\mu$-generic stability condition.
Without modifying the above constructions, we do not expect to get a nontrivial Lie bracket on the space of primitive elements $\DT_{W,\mu}^{\zeta}\otimes \HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}$ in the perverse associated graded Hopf algebra $\Gr_{\Pf}(\HO(\mathcal{A}_{W,\mu}^{\zeta}))$; essentially the same argument as in the proof of Proposition \ref{primitivity} demonstrates the vanishing of the Lie bracket. The goal of this section is firstly to prove the categorified integrality theorem and wall crossing formula for DT invariants of `special' $\mathbb{C}Q$-modules, and secondly to provide one framework for producing nontrivial Lie brackets. We assume that for every $\dd\in\mathbb{N}^{Q_0}$, we have locally closed subvarieties \[ l^{\zeta}_{\dd}\colon \mathcal{M}^{\prop,\zeta\sst}_{\dd}\hookrightarrow \mathcal{M}^{\zeta\sst}_{\dd} \] forming a submonoid. We denote by $\tilde{l}_{\dd}^{\zeta}\colon \mathfrak{M}_{\dd}^{\prop,\zeta\sst}\hookrightarrow\mathfrak{M}_{\dd}^{\zeta\sst}$ the associated inclusion of stacks, where $\mathfrak{M}_{\dd}^{\prop,\zeta\sst}=(p^{\zeta}_{\dd})^{-1}(\mathcal{M}^{\prop,\zeta\sst}_{\dd})$. We assume that the diagram \[ \xymatrix{ \mathcal{M}^{\prop,\zeta\sst}_{\dd'}\times\mathcal{M}^{\prop,\zeta\sst}_{\dd''}\ar@{^(->}[d]_{l^{\zeta}_{\dd'}\times l^{\zeta}_{\dd''}}\ar[r]^-{\oplus}&\mathcal{M}^{\prop,\zeta\sst}_{\dd}\ar@{^(->}[d]^{l^{\zeta}_{\dd}}\\ \mathcal{M}^{\zeta\sst}_{\dd'}\times \mathcal{M}^{\zeta\sst}_{\dd''}\ar[r]^-{\oplus}&\mathcal{M}^{\zeta\sst}_{\dd} } \] is Cartesian. \begin{lemma} \label{SpLemma} Let $\dd\in\Lambda_{\mu}^{\zeta}$.
Let $\mathbf{f}\in\mathbb{N}^{Q_0}$ be a framing vector, and let \[ \xymatrix{ \mathcal{M}^{\prop,\zeta}_{\mathbf{f},\dd}\ar[r]^{l^{\zeta}_{\mathbf{f},\dd}}\ar[d]_{\pi^{\zeta}_{\mathbf{f},\dd}|_{\mathcal{M}^{\prop,\zeta}_{\mathbf{f},\dd}}}&\mathcal{M}^{\zeta}_{\mathbf{f},\dd}\ar[d]^{\pi^{\zeta}_{\mathbf{f},\dd}}\\ \mathcal{M}^{\prop,\zeta\sst}_{\dd}\ar[r]^{l^{\zeta}_{\dd}}&\mathcal{M}^{\zeta\sst}_{\dd} } \] be the obvious Cartesian diagram. Then there are isomorphisms \begin{equation} \label{ft1} \Ho^n\left(l_{\dd,*}^{\zeta}l_{\dd}^{\zeta,*}\Ho\left(p_{\dd,!}^{\zeta}\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd}}\mathbb{Q}_{\mathfrak{M}_{\dd}^{\zeta\sst}}\right)\right)\cong \lim_{\mathbf{f}\rightarrow\infty}\left(\Ho^n\left(\pi_{\mathbf{f},\dd,!}^{\zeta}\tilde{l}^{\zeta}_{\mathbf{f},\dd,*}\tilde{l}^{\zeta,*}_{\mathbf{f},\dd}\phim{\mathcal{T}r(W)^{\zeta}_{\mathbf{f},\dd}}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd}^{\zeta}}\otimes\mathbb{L}^{-\mathbf{f}\cdot \dd}\right)\right) \end{equation} and \begin{equation} \label{ft2} \HO_c^n(\mathcal{M}_{\dd}^{\zeta\sst},l_{\dd}^{\zeta,*}\Ho(p_{\dd,!}^{\zeta}\phim{\mathfrak{Tr}(W)^{\zeta}_{\dd}}\mathbb{Q}_{\mathfrak{M}_{\dd}^{\zeta\sst}}))\cong \lim_{\mathbf{f}\rightarrow\infty}\left(\Ho^n\left(\dim_!\pi^{\zeta}_{\mathbf{f},\dd,!}\tilde{l}^{\zeta}_{\mathbf{f},\dd,*}\tilde{l}^{\zeta,*}_{\mathbf{f},\dd}\phim{\mathcal{T}r(W)^{\zeta}_{\mathbf{f},\dd}}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd}^{\zeta}}\otimes\mathbb{L}^{-\mathbf{f}\cdot \dd}\right)\right). \end{equation} \end{lemma} \begin{proof} For sufficiently large $\mathbf{f}$ we may write the left hand side of (\ref{ft1}) as \[ \Ho^n\left(l_{\dd,*}^{\zeta}l_{\dd}^{\zeta,*}\Ho\left(\pi_{\mathbf{f},\dd,!}^{\zeta}\phim{\mathcal{T}r(W)_{\mathbf{f},\dd}^{\zeta}}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd}^{\zeta}}\otimes \mathbb{L}^{-\mathbf{f}\cdot
\dd}\right)\right)\cong\Ho^n\left(l_{\dd,*}^{\zeta}l_{\dd}^{\zeta,*}\pi_{\mathbf{f},\dd,!}^{\zeta}\phim{\mathcal{T}r(W)_{\mathbf{f},\dd}^{\zeta}}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd}^{\zeta}}\otimes\mathbb{L}^{-\mathbf{f}\cdot \dd}\right) \] where the isomorphism is an application of the decomposition theorem (Proposition \ref{basicfacts}(\ref{vanDec})). It is sufficient to observe that there is a natural isomorphism \begin{equation} \label{swapper} l^{\zeta}_{\dd,*}l^{\zeta,*}_{\dd}\pi^{\zeta}_{\mathbf{f},\dd,!}\phim{\mathcal{T}r(W)^{\zeta}_{\mathbf{f},\dd}}\mathbb{Q}_{\mathcal{M}^{\zeta}_{\mathbf{f},\dd}}\rightarrow \pi^{\zeta}_{\mathbf{f},\dd,!}l^{\zeta}_{\mathbf{f},\dd,*}l^{\zeta,*}_{\mathbf{f},\dd}\phim{\mathcal{T}r(W)^{\zeta}_{\mathbf{f},\dd}}\mathbb{Q}_{\mathcal{M}^{\zeta}_{\mathbf{f},\dd}} \end{equation} by base change. Similarly, we write the left hand side of (\ref{ft2}) as \[ \Ho^n\left(\dim_!l_{\dd,*}^{\zeta}l_{\dd}^{\zeta,*}\pi_{\mathbf{f},\dd,!}^{\zeta}\phim{\mathcal{T}r(W)_{\mathbf{f},\dd}^{\zeta}}\mathbb{Q}_{\mathcal{M}_{\mathbf{f},\dd}^{\zeta}}\right) \] and we obtain (\ref{ft2}) by applying $\Ho^n(\dim_!)$ to the isomorphism (\ref{swapper}) for sufficiently large $\mathbf{f}$. \end{proof} We define \[ \Ho(\Coha^{\zeta}_{W,\mu})^{\prop}=\mathbb{D}^{\mon}_{\mathcal{M}_{\mu}^{\zeta\sst}}l^{\zeta,*}_{\mu}\mathbb{D}^{\mon}_{\mathcal{M}_{\mu}^{\zeta\sst}}\Ho(\Coha^{\zeta}_{W,\mu}) \] and \[ \HO(\Coha^{\zeta}_{W,\dd})^{\prop}=\mathbb{D}^{\mon}_{\Lambda_{\mu}^{\zeta}}\left(\lim_{\mathbf{f}\mapsto\infty}\left(\HO_c(\mathcal{M}_{\mathbf{f},\dd}^{\zeta},l_{\mathbf{f},\dd}^{\zeta,*}\phim{\mathcal{T}r(W)^{\zeta}_{\mathbf{f},\dd}}\mathcal{IC}_{\mathcal{M}_{\mathbf{f},\dd}^{\zeta}}(\mathbb{Q})\otimes\mathbb{L}^{-\mathbf{f}\cdot \dd})\right)\right). \] As a word of explanation for the notation above, note that we do not claim that $\Ho(\Coha^{\zeta}_{W,\mu})^{\prop}$ is isomorphic to its total cohomology. 
That is why we prefer this notation over, say, $\Ho(\Coha^{\prop,\zeta}_{W,\mu})$. As a further word of explanation, note that when $\mathcal{M}^{\prop,\zeta}_{\mu}=\mathcal{M}^{\zeta}_{\mu}$ the above definitions recover the definitions already provided in the absence of the superscript $\prop$ by Proposition \ref{basicfacts}(\ref{monPD}). Applying the restriction functor $l^{\zeta,*}_{\mu}$ to Theorem \ref{ThmA}, we obtain the corollary: \begin{corollary}\label{SpWU} Let $\zeta$ be a $\mu$-generic stability condition. We define the restricted DT invariants \[ \mathcal{DT}^{\prop,\zeta}_{W,\dd}=\mathbb{D}^{\mon}_{\mathcal{M}^{\zeta}_{\dd}}l^{\zeta,*}_{\dd}\mathcal{DT}^{\zeta}_{W,\dd} \] and \[ \DT^{\prop,\zeta}_{W,\dd}=\dim_*\mathcal{DT}^{\prop,\zeta}_{W,\dd}. \] Then there are isomorphisms \begin{equation} \label{fri} \FreeComm_{\boxtimes_+}\left(\DT_{W,\mu}^{\prop,\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right)\cong \HO(\Coha^{\zeta}_{W,\mu})^{\prop} \end{equation} and \begin{equation} \label{sri} \Sym_{\boxtimes_{\oplus}}\left(\mathcal{DT}_{W,\mu}^{\prop,\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right)\cong\Ho(\mathcal{A}_{W,\mu}^{\zeta})^{\prop} \end{equation} in $\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\mathcal{M}_{\mu}^{\zeta\sst}))$ and $\mathcal{D}^{\geq, \mathrm{lf}}(\MMHM(\Lambda^{\zeta}_{\mu}))$ respectively. \end{corollary} \begin{proof} We prove the existence of the isomorphism (\ref{sri}); the isomorphism (\ref{fri}) is then obtained by applying $\dim_*$. 
The isomorphism (\ref{sri}) is given by the chain of isomorphisms \begin{align*} \Sym_{\boxtimes_{\oplus}}\left(\mathcal{DT}_{W,\mu}^{\prop,\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right)\cong & \Sym_{\boxtimes_{\oplus}}\left(\mathbb{D}^{\mon}_{\mathcal{M}^{\zeta\sst}_{\mu}}l^{\zeta,*}_{\mu}\mathcal{DT}_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right) \\ \cong &\Sym_{\boxtimes_{\oplus}}\left(\mathbb{D}^{\mon}_{\mathcal{M}^{\zeta\sst}_{\mu}}\left(l^{\zeta,*}_{\mu}\mathcal{DT}_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}^{\vee}\right)\right) \\ \cong&\mathbb{D}^{\mon}_{\mathcal{M}^{\zeta\sst}_{\mu}}l^{\zeta,*}_{\mu}\Sym_{\boxtimes_{\oplus}}\left(\mathcal{DT}_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})^{\vee}_{\vir}\right)\\ \cong &\mathbb{D}^{\mon}_{\mathcal{M}^{\zeta\sst}_{\mu}}l^{\zeta,*}_{\mu}\Sym_{\boxtimes_{\oplus}}\left(\mathbb{D}^{\mon}_{\mathcal{M}^{\zeta\sst}_{\mu}}\left(\mathcal{DT}_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right)\right)\\ \cong &\mathbb{D}^{\mon}_{\mathcal{M}^{\zeta\sst}_{\mu}}l^{\zeta,*}_{\mu}\mathbb{D}^{\mon}_{\mathcal{M}^{\zeta\sst}_{\mu}}\Sym_{\boxtimes_{\oplus}}\left(\left(\mathcal{DT}_{W,\mu}^{\zeta}\otimes\HO(\mathbb{C}\mathbb{P}^{\infty})_{\vir}\right)\right)\\ \cong &\mathbb{D}^{\mon}_{\mathcal{M}^{\zeta\sst}_{\mu}}l^{\zeta,*}_{\mu}\mathbb{D}^{\mon}_{\mathcal{M}^{\zeta\sst}_{\mu}}\Ho\left(\Coha_{W,\mu}^{\zeta}\right). \end{align*} \end{proof} In the same way that Theorem \ref{strongPBW} is an improvement upon Theorem \ref{ThmA}, we can improve upon Corollary \ref{SpWU}, and there is a cohomological wall crossing isomorphism incorporating also the `strong' PBW theorem, which we finish by briefly describing. Let $\dd=\dd'+\dd''$ with $\dd',\dd''\in\Lambda_{\mu}^{\zeta}$. 
We consider the maps \begin{align*} \alpha^{\prop,\zeta}_{\dd',\dd''}=&\mathbb{D}^{\mon}_{\mathcal{M}_{\dd',\dd''}^{\zeta\sst}}l^{\zeta,*}_{\mu}\mathbb{D}^{\mon}_{\mathcal{M}_{\dd}^{\zeta\sst}}\alpha^{\zeta}_{\mu}\\ \beta^{\prop,\zeta}_{\dd',\dd''}=&\mathbb{D}^{\mon}_{\mathcal{M}_{\dd}^{\zeta\sst}}l^{\zeta,*}_{\mu}\mathbb{D}^{\mon}_{\mathcal{M}_{\dd',\dd''}^{\zeta\sst}}\beta^{\zeta}_{\mu}\\ \Ho(\ms_{W,\dd',\dd''}^{\zeta})^{\prop}:= &\left(\beta^{\prop,\zeta}_{\dd',\dd''}\otimes \mathbb{L}^{-(\dd',\dd'')/2}\right)\circ \left(\alpha^{\prop,\zeta}_{\dd',\dd''}\otimes \mathbb{L}^{\langle \dd'',\dd'\rangle/2}\right)\\ \Ho(\ms_{W,\mu}^{\zeta})^{\prop}:= &\bigoplus_{\dd',\dd''\in\Lambda_{\mu}^{\zeta}}\Ho(\ms_{W,\dd',\dd''}^{\zeta})^{\prop}. \end{align*} Similarly, we define the natural isomorphism \begin{align*} \overline{\alpha}_{\dd',\dd''}^{\prop,\zeta}\colon&\lim_{N\mapsto \infty}\Ho(\dim_*p_{\dd',N,*}l^{\zeta}_{\dd',N,*}l^{\zeta,*}_{\dd',N}\phim{\mathcal{T}r(W)_{\dd',N}^{\zeta\sst}}\mathbb{Q}_{X_{\dd',N}^{\zeta}})\boxtimes \\&\lim_{N\mapsto \infty}\Ho(\dim_*p_{\dd'',N,*}l^{\zeta}_{\dd'',N,*}l^{\zeta,*}_{\dd'',N}\phim{\mathcal{T}r(W)_{\dd'',N}^{\zeta\sst}}\mathbb{Q}_{X_{\dd'',N}^{\zeta}})\rightarrow\\ &\Ho(\dim_*p_{\dd',\dd'',N,*}l^{\zeta}_{\dd',\dd'',N,*}l^{\zeta,*}_{\dd',\dd'',N}\phim{\mathcal{T}r(W)_{\dd',\dd'',N}^{\zeta\sst}}\mathbb{Q}_{X_{\dd',\dd'',N}^{\zeta}}) \end{align*} and define \[ \overline{\beta}^{\prop,\zeta}_{\dd',\dd''}=\lim_{N\mapsto\infty}\left(\Ho\left(\dim_*p^{\zeta}_{\dd,N,*}l^{\zeta}_{\dd,N,*}l^{\zeta,*}_{\dd,N}\phim{\mathrm{Tr}(W)_{\dd,N}^{\zeta}}(s^{\zeta}_{\dd',\dd'',N,*}\mathbb{Q}_{X_{\dd',\dd'',N}^{\zeta\sst}}\otimes \mathbb{L}^{(\dd',\dd'')}\rightarrow \mathbb{Q}_{X_{\dd,N}^{\zeta\sst}})\right)\right) \] and, composing with appropriate twists, we obtain the maps \[ \HO(\ms_{W,\dd',\dd''}^{\zeta})^{\prop}=(\overline{\beta}_{\dd',\dd''}^{\prop,\zeta}\otimes\mathbb{L}^{\langle 
\dd'',\dd'\rangle/2})(\overline{\alpha}_{\dd',\dd''}^{\prop,\zeta}\otimes\mathbb{L}^{(\dd',\dd')/2+(\dd'',\dd'')/2}) \] and \[ \HO(\ms_{W,\mu}^{\zeta})^{\prop}:=\bigoplus_{\dd',\dd''\in\Lambda_{\mu}^{\zeta}}\HO(\ms_{W,\dd',\dd''}^{\zeta})^{\prop}\colon \HO(\Coha^{\zeta}_{W,\mu})^{\prop}\boxtimes_+^{\tw} \HO(\Coha^{\zeta}_{W,\mu})^{\prop}\rightarrow \HO(\Coha^{\zeta}_{W,\mu})^{\prop}. \] We consider the filtration \[ \Pf^p\left(\HO\left(\mathcal{M}^{\zeta}_{\mathbf{f},\dd},l^{\zeta}_{\mathbf{f},\dd,*}l^{\zeta,*}_{\mathbf{f},\dd}\pi ^{\zeta}_{\mathbf{f},\dd,*}\mathfrak{IC}_{W,\mathbf{f},\dd}\right)\right):=\HO\left(\mathcal{M}^{\zeta}_{\mathbf{f},\dd},l^{\zeta}_{\mathbf{f},\dd,*}l^{\zeta,*}_{\mathbf{f},\dd}\tau_{\leq p}\pi ^{\zeta}_{\mathbf{f},\dd,*}\mathfrak{IC}_{W,\mathbf{f},\dd}\otimes\mathbb{L}^{-\mathbf{f}\cdot \dd}\right) \] on $\HO(\Coha_{W,\mu}^{\zeta})^{\prop}$. Then by definition, we have an isomorphism of algebras \begin{equation} \label{restrictedAG} \Ho(\dim_*(\Ho(\Coha_{W,\mu}^{\zeta})^{\prop}))\cong\Gr_{\Pf}\left(\HO(\Coha_{W,\mu}^{\zeta})^{\prop}\right). \end{equation} Since the multiplication on $\Ho(\Coha^{\zeta}_{W,\mu})^{\prop}$ is defined by restricting the multiplication on $\Ho(\Coha^{\zeta}_{W,\mu})$, we deduce that the modification of Theorem \ref{strongPBW} obtained by adding the superscript $\prop$ everywhere remains true, where all relative isomorphisms are obtained by restricting the isomorphisms in the statement of Theorem \ref{strongPBW} to the locus $\mathcal{M}^{\prop}$ or $\mathcal{M}^{\prop,\zeta\sst}$. By (\ref{restrictedAG}), the statements of Theorem \ref{strongPBW} over the base $\mathbb{N}^{Q_0}$ follow as in the proof of Theorem \ref{strongPBW}, by observing that all required maps become isomorphisms after passing to the perverse filtration. 
\begin{remark} For the purposes of trying to produce interesting Lie algebras of primitive elements, the strategy is to find $\mathcal{M}^{\prop,\zeta}_{\mu}$ such that the definition \[ \Pf_{\prop}^p\left(\HO\left(\mathcal{M}^{\zeta}_{\mathbf{f},\dd},l^{\zeta}_{\mathbf{f},\dd,*}l^{\zeta,*}_{\mathbf{f},\dd}\pi ^{\zeta}_{\mathbf{f},\dd,*}\mathfrak{IC}_{W,\mathbf{f},\dd}\right)\right):=\HO\left(\mathcal{M}^{\zeta}_{\mathbf{f},\dd},\tau_{\leq p}l^{\zeta}_{\mathbf{f},\dd,*}l^{\zeta,*}_{\mathbf{f},\dd}\pi ^{\zeta}_{\mathbf{f},\dd,*}\mathfrak{IC}_{W,\mathbf{f},\dd}\otimes\mathbb{L}^{-\mathbf{f}\cdot \dd}\right) \] provides an alternative filtration of $\HO(\mathcal{A}_{W,\mu}^{\zeta})^{\prop}$. In this case, the argument for the vanishing of the Lie bracket does not apply, and we obtain for each such filtration a Lie algebra $\mathfrak{g}_Q[u]$ of primitive elements with possibly nonzero Lie bracket, such that $\HO(\mathcal{A}_{W,\mu}^{\zeta})^{\prop}$ is a deformation of $U(\mathfrak{g}_Q[u])$. \end{remark} \bibliographystyle{plain}
\section{Introduction} Digitalization opens new perspectives for control engineering and automation by making large amounts of data from experiments and numerical models available. Learning-based control exploits this accumulated knowledge and potentially also performs autonomous exploration of unseen system behavior in order to find an optimal control policy. An example is deep reinforcement learning (RL), which has provided prominent results, one of which is the application to Atari arcade video games \cite{mnih2015human}. Compared with traditional control techniques, learning-based methods offer the potential to reduce modeling and controller design effort. However, many industrial applications are \emph{safety-critical} systems, i.e. systems with physical constraints that have to be satisfied. This essentially limits the application of most available learning-based control algorithms, which do not provide safety certificates. In order to address this limitation, we present efficient and scalable methods for the synthesis of a safety strategy consisting of a safe set and corresponding safe control law, which are cheap to implement and can be applied together with existing controllers or modern learning techniques to enhance them with safety guarantees. \emph{Contributions:} We consider dynamical systems with \emph{unknown} Lipschitzian nonlinearity and formulate the safe set and safe controller synthesis as convex optimization problems, which directly employ available data. Our analysis considers an approximate linear model of the system and uses data to incorporate the unknown nonlinear effects. The computations are based on Lyapunov's method and result in two optimization problems: the first optimization problem defines a quadratic approximation of the nonlinearity in the Lyapunov conditions and the second one describes the computation of the safe set and controller. 
Similar to \cite{akametalu2014reachability,fisac2017general}, the framework can be used to augment any desired controller that lacks safety guarantees, in particular one based on learning. We extend the technique to reduce conservatism of the safe set by putting a prior on the unknown dynamics in the form of a Gaussian process model, which is beneficial, especially in the case of high-dimensional systems and sparse data. Due to its less conservative nature, this extension favors safe exploration beyond the system behavior seen so far and is well suited for iterative learning in closed loop. We illustrate the approach using examples, including a convoy of partly non-cooperative autonomous cars. \emph{Related work:} Given the relevance of safety in industrial applications, there has been growing interest in safe learning methods in the past years. Extensions of existing RL methods have been developed to enable safe RL with respect to different notions of safety; see \cite{garcia2015comprehensive} for a survey. A detailed literature review regarding RL, focusing on safety with respect to state and input constraints as also considered in this work, can be found in \cite{fisac2017general}. There are a few results on efficient controller tuning from data with respect to best worst-case performance (including worst-case stability under physical constraints) by Bayesian min-max optimization, see e.g. \cite{wabersich2015automatic}, or by safety-constrained Bayesian optimization as e.g. in \cite{Berkenkamp2015SafeRobustLearning,berkenkamp2016safequad}. In \cite{berkenkamp2016safe} a method was developed that allows one to analyze a given closed-loop system (under an arbitrary RL algorithm) with respect to safety. Recent developments include the concept of a supervisory framework, which consists of a safe set in the state space and a safety controller. As long as the system state is in the interior of the safe set, any control law (e.g. 
unsafe learning-based control) can be applied. The safety controller only interferes if necessary, in case that the system reaches the boundary of the safe set, see e.g. \cite{akametalu2014reachability,fisac2017general}. Such a framework allows for certifying an arbitrary learning-based control algorithm with safety guarantees. Previously proposed techniques are based on a differential game formulation, which results in solving a min-max optimal control problem. An active field of research aims at extending these techniques to larger scale systems, mostly by considering special cases as described, e.g., in \cite{chen2015safe,kaynama2015scalable,fisac2015pursuit}. For some relevant cases, the existence of analytic solutions \cite{darbon2016algorithms} has been shown. The results presented in this paper are based on the concept of a safety framework, but compared to previous work we focus on approximation techniques to improve scalability with respect to the system dimension. \emph{Structure of the paper:} In Section \ref{sec:problem_description} we state the problem and in Section \ref{sec:safe_sets_from_data} we present our main result for safe set and controller computation. We then show an extension using a stronger assumption on the unknown system dynamics by considering Gaussian processes in Section \ref{sec:safe_active_exploration_for_nonlinear_systems} in order to reduce conservatism of the safe set. The results are demonstrated on numerical examples within the respective sections. We conclude the paper in Section \ref{sec:conclusion}. 
\emph{Notation:} The set of symmetric matrices of dimension $n$ is $\mSetSymMat{n}$, the set of positive (semi-) definite matrices is ($\mSetPosSemSymMat{n}$) $\mSetPosSymMat{n}$, the set of integers in the interval $[a,b]\subset\mathbb{R}$ is $\mIntInt{a}{b}$, the set of integers in the interval $[a,\infty)\subset\mathbb{R}$ is $\mIntGeq{a}$, and for $\epsilon > 0$ let $\mBall{\epsilon}{\bar x} = \left\lbrace x \in \mathbb{R}^n | \mNorm{x-\bar x}{2}\leq \epsilon\right\rbrace$. The boundary of an arbitrary compact set $\mathcal C \subset \mathbb{R}^n$ is $\partial \mathcal C$. Given a set $\mathcal D = \lbrace (x_i, y_i) \rbrace_{i=1}^N$, let $\mathcal D_x=\lbrace x_i \rbrace_{i=1}^{N}$ and $\mathcal D_y=\lbrace y_i \rbrace_{i=1}^{N}$. Define $\mDefFunction{\Delta_{\mDataSetX}}{\mathbb{R}^n}{\mathcal D_x}$ as $\Delta_{\mDataSetX}(x)={\argmin_{\bar x\in\mathcal D_x}\mNorm{\bar x - x}{2}}$, which picks the closest element in $\mathcal D_x$ with respect to $x\in\mathbb{R}^n$ under the $2$-norm. Given a set $\mathcal A\subset\mathbb{R}^n$ and a locally Lipschitz continuous function $\mDefFunction{f}{\mathbb{R}^n}{\mathbb{R}^m}$, a local Lipschitz constant $L$ satisfying $\mNorm{f(x) - f(y)}{2}\leq L\mNorm{x-y}{2}$ for all $x,y\in\mathcal A$ is denoted by $L_{f(x)}(\mathcal A)$. The Minkowski sum of two sets $\mathcal A_1, \mathcal A_2 \subset \mathbb{R}^n$ is denoted by $\mathcal A_1 \oplus \mathcal A_2$. \section{Problem description}\label{sec:problem_description} \begin{figure}[t] \centering \input{fig/tkiz/bigData.tkiz} \caption{Illustration of Assumption~\ref{ass:big_data}. Red dots display observations $(x_i,d(x_i))$.}\label{fig:illustration_Ass_big_data} \end{figure} We consider deterministic nonlinear systems of the form \begin{align}\label{eq:general_system} \dot x(t) = Ax(t) + Bu(t) + d(x(t)) \end{align} where $A\in\mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n\times m}$ and $d: \mathbb{R}^{n} \rightarrow \mathbb{R}^n$ is locally Lipschitz continuous. 
The system is subject to polytopic state constraints $x(t) \in \mathbb{X} := \lbrace x\in \mathbb{R}^n|A_x x \leq b_x \rbrace$, $A_x\in\mathbb{R}^{n_x\times n}$, $b_x\in\mathbb{R}^{n_x}$ and polytopic input constraints $u(t)\in\mathbb{U}:=\lbrace u\in\mathbb{R}^m|A_u u \leq b_u \rbrace$, $A_u\in\mathbb{R}^{n_u\times m}$, $b_u\in\mathbb{R}^{n_u}$. The origin is contained in $\mathbb{X}$, $(A,B)$ is controllable, and the system state is fully observable. The explicit form of the nonlinearity $d$ is unknown, hence $d$ is assumed to be a memoryless black-box function, which will be identified from system measurements. However, we assume that $B$, i.e. the influence of the control input, is known. The matrix $A$ incorporates system knowledge in the form of a linear model, which will be used in the design procedure in Section \ref{subsec:choice_of_P}. The linear model can, e.g., be selected as an approximate system model. For identification of $d(x)$, we have access to finitely many observations $\mathcal D=\lbrace(x_i,d(x_i))\rbrace_{i=1}^{N}$ that fulfill the following property (Figure~\ref{fig:illustration_Ass_big_data}). \begin{assumption}\label{ass:big_data} Given a set $\mathcal D=\lbrace (x_i, d(x_i)) \rbrace_{i=1}^N$ with $N$ data tuples, where $x_i\in\mathbb{X}$, there exists a non-trivial subset $\mDataRegion_\delta \subseteq \mathbb{X}$ such that for any $x\in\mDataRegion_\delta$ there exists an $x_i\in\mathcal D_x$ such that $||x-x_i||_2 \leq \delta$. \end{assumption} \begin{remark} Assumption~\ref{ass:big_data} implies that there exists a region in which the collected data samples are dense. Intuitively, one can think of $\delta$, together with the Lipschitz constant $L$, as a `measure of knowledge' that we have about $d(x)$ inside $\mDataRegion_\delta$. The `knowledge' increases as $\delta$ gets smaller. \end{remark} \begin{remark}\label{rem:noisy_data} For simplicity, we assume noise-free data $\mathcal D$. 
It would, however, be possible to incorporate bounded or stochastic noise with only minor changes. \end{remark} We consider the problem of providing a safety certificate for an arbitrary control law by means of a safe set and controller, as proposed in \cite{akametalu2014reachability,fisac2017general}. Consider a potentially unsafe control strategy $\bar u(t)$, obtained for example by application of RL (which often cannot guarantee constraint satisfaction). In order to achieve minimal interference with the desired control $\bar u(t)$, the goal is to compute a set of states $\mathcal{S}$, for which we know a control strategy $u_\mathcal{S}(t)$ such that input and state constraints will be satisfied for all future times, in particular considering that $d(x)$ is unknown. The control $\bar u(t)$ can then safely be applied in the interior of $\mathcal{S}$, until it becomes necessary to take a safety-ensuring action $u_\mathcal{S}$ on the boundary of $\mathcal{S}$, which guarantees that we stay in $\mathcal{S}$, i.e. that we can still provide a safe control strategy in the future. More formally: \begin{definition}\label{def:safe_set} A set $\mathcal{S}\subseteq \mathbb{X}$ is called a \emph{safe set} for system \eqref{eq:general_system} if there exists a \emph{safe control law} $\mDefFunction{u_\mathcal{S}}{\mathcal{S}}{\mathbb{U}}$ such that for an arbitrary (learning-based) policy $\bar u(t)$, the safe controller \begin{align}\label{eq:safe_control_law} u(t) = \begin{cases} u_\mathcal{S}(x(t)),~&x(t)\in\partial\mathcal{S} \lor \bar u(t) \notin \mathbb{U}\\ \bar u(t),~&\text{otherwise} \end{cases} \end{align} guarantees that the state $x(t)$ is contained in $\mathcal{S}$ for all $t>0$ if $x(0)\in \mathcal{S}$. \end{definition} In particular, we aim at finding an algorithm that scales well in computational complexity with respect to the dimensionality of system \eqref{eq:general_system} as well as the number of measurements. 
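To make the switching law \eqref{eq:safe_control_law} concrete, the following minimal Python sketch (not part of the paper) implements the supervisory logic for an ellipsoidal safe set and elementwise input bounds; the names U_box and boundary_tol are illustrative assumptions.

```python
import numpy as np

def safety_filter(x, u_learn, P, gamma, K, U_box, boundary_tol=1e-6):
    """Supervisory safety filter sketch for Definition 1.

    Applies the (possibly unsafe) learning-based input u_learn in the
    interior of the ellipsoidal safe set S = {x : x^T P x <= gamma} and
    switches to the safe linear feedback u_S(x) = K x on the boundary of
    S or whenever u_learn violates the elementwise input bounds U_box.
    U_box and boundary_tol are illustrative assumptions, not from the paper.
    """
    on_boundary = x @ P @ x >= gamma - boundary_tol
    u_infeasible = np.any(np.abs(u_learn) > U_box)
    if on_boundary or u_infeasible:
        return K @ x        # safety-ensuring action u_S(x)
    return u_learn          # minimal interference with the desired control
```

The filter only interferes on the boundary of the safe set or when the proposed input is infeasible, mirroring \eqref{eq:safe_control_law}.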
\section{Safe sets for nonlinear systems from data}\label{sec:safe_sets_from_data} We first introduce the class of safe sets considered. Afterwards, we motivate the proposed method and highlight its basic idea using an example, in order to then introduce the algorithm for safe set and controller computation in the remainder of the section. \subsection{Ellipsoidal safe set} In order to provide a scalable optimization-based approach, we restrict the safe set to an ellipsoidal set of the form \begin{align}\label{eq:quadratic_safe_set} \mathcal{S}^P(\gamma) = \left\lbrace x\in\mathbb{X} | x^\top P x \leq \gamma \right\rbrace \end{align} with $P\in\mSetPosSymMat{n}$, $\gamma\in\mathbb{R}$, $\gamma >0$, and the safe controller to the class of linear state feedback control laws $u_\mathcal{S}=Kx$ with $K\in \mathbb{R}^{m\times n}$. To construct $\mathcal{S}^P(\gamma)$, we leverage Lyapunov's direct method, with a quadratic Lyapunov function $V(x)=x^\top (\gamma^{-1}P) x$. By standard Lyapunov arguments, the following sufficient conditions ensure, analogously to \cite[Lemma 1]{akametalu2014reachability}, that $\mathcal{S}^P(\gamma)$ fulfills Definition~\ref{def:safe_set}: \begin{subequations} \begin{align} \label{eq:nominal_invariant_set_req_X} \mathcal{S}^P(\gamma) &\subseteq \mathbb{X}\\\label{eq:nominal_invariant_set_req_U} Kx&\in \mathbb{U} \\\label{eq:nominal_invariant_set_req_vdot} \qquad \dot V(x) &\leq 0 \end{align} \end{subequations} for all $x\in\partial\mathcal{S}^P(\gamma)$. \subsection{Motivating Example} Consider the system $\dot x = x +d(x) + u$ with $d(x)=- x^3$ subject to input constraints $|u|\leq 2$ and state constraints $|x|\leq 2$. The task is to find a \emph{safe} interval $\mathcal{S}=[a,b]$ and a corresponding \emph{safe control law} $\mDefFunction{u_\mathcal{S}}{[-2,2]}{[-2,2]}$ according to Definition~\ref{def:safe_set}. 
The nonlinearity $d(x)$ is unknown, but we are given a set of noise-free observations $\mathcal D=\lbrace (x_i, d(x_i)) \rbrace_{i=1}^N$ such that the convex hull $\mathrm{conv}(\lbrace x_i \rbrace_{i=1}^N )$ equals the state space $[-2,2]$ (this will not be a necessary assumption later, see also Assumption~\ref{ass:big_data}). We consider a linear state feedback $u_\mathcal{S}=kx$, $k\in \mathbb{R}$. \textit{Robust approach}: Without any knowledge of $d(x)$, a robust approach is to consider the dynamics $\dot x = x + w + u$, with $|w|\leq 8$, where the bound on $w$ is estimated from $\mathcal D$. In this case, there exists no controller gain $k$ that satisfies the input constraints and ensures $\dot x \leq 0$ for any state and all $|w|\leq 8$, i.e. there does not exist a safe set. \textit{Proposed approach}: Let $V(x)=px^2,~p>0$ be our Lyapunov candidate function. We analyze $\dot V(x) = 2p(x^2+k x^2+ x d( x))\leq 0$. At the boundary of the state space we have the measurements $2d(2)=-2d(-2)=-16$, which yields $\dot V (x) \leq 0$ for $k=0$. By standard Lyapunov arguments, this implies that for $k=0$ and all $x(0)\in \mathcal{S}=[-2,2]$, we have $x(t)\in \mathcal{S}$ for all $t>0$. We conclude that $u_\mathcal{S}(t)=0$ is a safe controller for which the state constraint set constitutes a safe set. This example highlights that rather than taking uniform bounds on the unknown dynamics, we can provide less conservative safe sets by quantifying the effect of the unknown dynamics in the form of \emph{state-dependent} disturbances, which can be inferred from the available data $\mathcal D$. In the following, we will exploit this concept for safe set and controller computations and conclude the section with two examples. 
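The claim of the motivating example can also be checked numerically. The sketch below (a forward-Euler simulation with illustrative horizon and step size, not part of the paper) applies the safe controller $u_\mathcal{S}=0$ and confirms that a trajectory starting on the boundary of $\mathcal{S}=[-2,2]$ stays inside and settles near the equilibrium $x=1$ of $\dot x = x - x^3$.

```python
import numpy as np

def simulate(x0, T=5.0, dt=1e-3):
    """Forward-Euler simulation of the motivating example
    xdot = x + d(x) + u with d(x) = -x**3 under the safe controller
    u_S(x) = k*x with k = 0. T and dt are illustrative choices."""
    x = x0
    for _ in range(int(T / dt)):
        u = 0.0                       # safe controller found above (k = 0)
        x = x + dt * (x - x**3 + u)   # Euler step of the closed loop
    return x
```

Starting from $x(0)=\pm 2$, the state remains in $[-2,2]$ and converges to $\pm 1$, consistent with $\dot V(x)\leq 0$ on the boundary.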
\subsection{Computation of the safe set and controller} Given a matrix $P$ (see Section \ref{subsec:choice_of_P}) that determines the shape of the safe set, we write the problem of finding the size of the safe set $\mathcal{S}^P(\gamma)$ and a corresponding safe controller $u_\mathcal{S}$ as two consecutive convex optimization problems. \begin{remark}\label{rem:ellipsoidal_data_region} Given the shape $P$ of the safe set \eqref{eq:quadratic_safe_set}, there exists a $\bar \gamma >0$ small enough such that Assumption~\ref{ass:big_data} is satisfied on the sub-level set $\mathcal{S}^P(\bar \gamma)$, i.e. \begin{align}\label{eq:ellipsoidal_data_region} \mathcal{S}^P(\bar\gamma)\subseteq \mDataRegion_\delta. \end{align} \end{remark} The proposed procedure first uses data to bound the effects of the nonlinearity on the Lyapunov decrease by a quadratic form in the largest possible safe set, i.e. over $\gamma\in (0,\bar \gamma]$. This quadratic bound is then used as input to the second optimization problem, which computes the controller and set size in order to take into account the nonlinearities in addition to the linear system dynamics. The restriction to a quadratic bound of the nonlinearity is motivated by the fact that it can be treated efficiently by means of a convex problem. In order to reduce conservatism, we bound the nonlinearity on sub-regions of the safe set, described by intervals \begin{align}\label{eq:intervals} \gamma\in\Gamma_i=[\gamma_i^1, \gamma_i^2],~\gamma_i^1<\gamma_i^2,~ \gamma_i^2\leq \bar \gamma,~i=1,2,\ldots,n_\Gamma, \end{align} which are defined such that $\mathcal{S}^P(\gamma)\subseteq \mDataRegion_\delta$ for any $\gamma\in\Gamma_i$. Note that the selection of sub-intervals is possible as we will use the quadratic bound for upper bounding \eqref{eq:nominal_invariant_set_req_vdot}, which is only required to hold on $\partial \mathcal{S}^P(\gamma)$. 
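Whether a given level $\gamma$ satisfies the requirement $\mathcal{S}^P(\gamma)\subseteq \mDataRegion_\delta$ can be checked approximately by sampling the ellipsoid boundary, e.g. as in the following heuristic Python sketch (the sampling resolution n_dirs and the boundary-only check are illustrative simplifications, not from the paper).

```python
import numpy as np

def delta_dense_on_level(P, gamma, X_data, delta, n_dirs=500, seed=0):
    """Sampling-based check that the level set {x : x^T P x = gamma}
    lies in the delta-dense data region of Assumption 1.
    Boundary points are parameterized as x = sqrt(gamma) * L u with
    L L^T = P^{-1} and ||u||_2 = 1. Heuristic sketch; n_dirs is an
    illustrative resolution."""
    rng = np.random.default_rng(seed)
    n = P.shape[0]
    L = np.linalg.cholesky(np.linalg.inv(P))
    U = rng.normal(size=(n_dirs, n))
    U /= np.linalg.norm(U, axis=1, keepdims=True)
    pts = np.sqrt(gamma) * U @ L.T
    # distance from every sampled boundary point to its nearest data point
    dists = np.min(np.linalg.norm(pts[:, None, :] - X_data[None, :, :], axis=2), axis=1)
    return bool(np.max(dists) <= delta)
```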
For every interval $\Gamma_i$, we then formulate two convex optimization problems in order to determine the volume of the safe set and the safe controller. In case no solution exists, the interval can be reduced. In general, the smaller the intervals are chosen, the less conservative the bound will be. \emph{Bounding the nonlinear effects:} Given an interval $\Gamma_i$, consider the neighborhood $\mathcal {R}(\Gamma_i)= \lbrace \mathcal{S}^P(\gamma_i^2)\setminus \mathcal{S}^P(\gamma_i^1)\rbrace \oplus \mBall{\delta}{0}$. The indices of data samples inside the set $\mathcal {R}(\Gamma_i)$ are given by $\mathbb I_\mSafeRing(\Gamma_i) = \lbrace k \in \mIntGeq{1}|x_k\in\mathcal D_x,~ x_k\in \mathcal {R}(\Gamma_i)\rbrace$. We seek to find a quadratic bound on the nonlinearity arising in the Lyapunov decrease \eqref{eq:nominal_invariant_set_req_vdot} for all $\bar x \in \mathcal{S}^P(\gamma_i^2)\setminus\mathcal{S}^P(\gamma_i^1)$, i.e. to find a $\mQuadBoundMatrixFunction{\Gamma_i}$ such that \begin{align}\nonumber \dot V(\bar x)=&2\gamma^{-1} {\bar x}^\top P(A+BK){\bar x} + 2\gamma^{-1} {\bar x}^\top Pd({\bar x}) \\\label{eq:general_opt_proof_2} \leq &2\gamma^{-1}{\bar x}^\top P(A+BK){\bar x} + 2\gamma^{-1}{\bar x}^\top \mQuadBoundMatrixFunction{\Gamma_i} {\bar x}. 
\end{align} The first optimization problem for bounding the nonlinearity over each interval is given by \begin{subequations}\label{eq:quad_bound} \begin{align} \mQuadBoundMatrixFunction{\Gamma_i} = &\argmin_{\tilde Q \in \mSetSymMat{n}} \sum_{k \in \mathbb I_\mSafeRing(\Gamma_i)} \left(x_k^\top \tilde Q x_k - p_k\right)^2\\\nonumber \text{s.t.}&\text{ for all } k\in \mathbb I_\mSafeRing(\Gamma_i): \\\label{eq:quad_bound_2} &~\lambda_k \geq 0 \\\label{eq:quad_bound_3} &~ \begin{pmatrix} -\tilde Q - \lambda_k I_n & \lambda_k x_k\\ \lambda_k x_k^\top & -\lambda_k\left(x_k^\top x_k - \delta^2\right) + p_k \end{pmatrix}\preceq 0 \end{align} \end{subequations} with $p_k=x_k^\top P d(x_k) + \delta L_{x^\top P d(x)}(\mathcal {R}(\Gamma_i))$ and $\delta$ as defined in Assumption~\ref{ass:big_data}. \begin{lemma}\label{lem:quad_bound} Let Assumption~\ref{ass:big_data} hold and let $\bar \gamma$ satisfy \eqref{eq:ellipsoidal_data_region}. Consider an interval $\Gamma_i$ according to \eqref{eq:intervals}. If \eqref{eq:quad_bound} attains a solution, then for all $\bar x \in \mathcal{S}^P(\gamma_i^2)\setminus\mathcal{S}^P(\gamma_i^1)$ it holds that \begin{align}\label{eq:quad_bound_result} \bar x^\top P d(\bar x) \leq \bar x^\top Q (\Gamma_i) \bar x. \end{align} \end{lemma} \begin{proof} We prove that for all $\bar x \in \mathcal{S}^P(\gamma_i^2)\setminus\mathcal{S}^P(\gamma_i^1)$ the implication $\eqref{eq:quad_bound_2},\eqref{eq:quad_bound_3}\Rightarrow \eqref{eq:quad_bound_result}$ holds. First note that for any $\bar x \in\mathcal{S}^P(\gamma_i^2)\setminus\mathcal{S}^P(\gamma_i^1)$ there exists a $k\in \mathbb I_\mSafeRing(\Gamma_i)$ such that $||\bar x-x_k||\leq \delta$ by the definition of the intervals, i.e. $\gamma_i^2\leq \bar \gamma$, see also Remark \ref{rem:ellipsoidal_data_region}. For notational ease let $f(\bar x) = d(\bar x)^\top P \bar x$. Equation \eqref{eq:quad_bound_result} reads $f(\bar x)-\bar x^\top Q(\Gamma_i)\bar x\leq 0$. 
For all $k\in\mathbb I_\mSafeRing(\Gamma_i)$ and for all $\bar x_k \in \mBall{\delta}{x_k}$ we have therefore by Lipschitz continuity $f(\bar x_k) - f(\Delta_{\mDataSetX}{(\bar x_k)})+ f(\Delta_{\mDataSetX}{(\bar x_k)}) - \bar x_k^\top Q(\Gamma_i)\bar x_k \leq p_k - \bar x_k^\top Q(\Gamma_i)\bar x_k$. Note that by the definition of the intervals and Remark \ref{rem:ellipsoidal_data_region} the relation $\lbrace\mathcal{S}^P(\gamma_i^2)\setminus\mathcal{S}^P(\gamma_i^1)\rbrace\subset\bigcup_{k\in\mathbb I_\mSafeRing(\Gamma_i)}\mBall{\delta}{x_k}$ holds. As a consequence, if for all $k\in\mathbb I_\mSafeRing(\Gamma_i)$ and for all $\bar x_k \in \mBall{\delta}{x_k}$ we have that $p_k - \bar x_k^\top Q(\Gamma_i)\bar x_k\leq 0$, then the quadratic bound \eqref{eq:quad_bound_result} holds for all $\bar x\in\mathcal{S}^P(\gamma_i^2)\setminus\mathcal{S}^P(\gamma_i^1)$. Finally, using the S-Lemma (see \cite{polik2007survey}) the condition $ \bar x \in \mBall{\delta}{x_k} \Rightarrow p_k-\bar x^\top Q(\Gamma_i)\bar x \leq 0 $ is equivalent to \eqref{eq:quad_bound_2},\eqref{eq:quad_bound_3}, which completes the proof. \end{proof} \begin{remark}\label{rem:large_data_set_quad_bound} The optimization problem in \eqref{eq:quad_bound} is a convex semidefinite programming problem. In case that there are more observations ($N\gg0$) than the optimization algorithm can handle in \eqref{eq:quad_bound}, one can iteratively calculate $\mQuadBoundMatrixFunction{\Gamma_i}$: Solve \eqref{eq:quad_bound} using a subset of $\mathcal D$ in order to obtain $\mQuadBoundMatrixFunctionIteration{1}{\Gamma_i}$, in the next iteration choose another disjoint subset of $\mathcal D$ and add the constraint $\tilde Q\succeq \mQuadBoundMatrixFunctionIteration{1}{\Gamma_i}$ to \eqref{eq:quad_bound} in order to obtain $\mQuadBoundMatrixFunctionIteration{2}{\Gamma_i}$. Repeat until all subsets of $\mathcal D$ are processed, which yields a feasible, possibly suboptimal solution to \eqref{eq:quad_bound}. 
\end{remark} Problem~\eqref{eq:quad_bound} provides a bound on the nonlinear effect in the Lyapunov decrease by means of the Lipschitz constant of $x^\top Pd(x)$. In practice, a local Lipschitz constant can e.g. be obtained from data as $\hat L_{x^\top P d(x)}(\mathcal {R}(\Gamma_i)) = 2\max_{k_1,k_2\in\mathbb I_\mSafeRing(\Gamma_i)} ||x_{k_1}^\top Pd(x_{k_1})-x_{k_2}^\top Pd(x_{k_2})||/||x_{k_1} - x_{k_2}||$. Since $Q(\Gamma_i)$ does not have to be positive definite, stabilizing effects of the nonlinearity can be considered in \eqref{eq:general_opt_proof_2}, i.e. $\bar x^\top Q(\Gamma_i) \bar x$ can be negative and therefore contribute to rendering $\dot V(x)$ negative on the boundary of the safe set. This is demonstrated in Section~\ref{sec:simple_example_lipschitz}, see also Figure~\ref{fig:example} (right). \medskip \emph{Calculation of safe level set and safe controller:} By using the quadratic bound from the first optimization problem, we are able to state the second optimization problem for safe set and controller design satisfying the conditions \eqref{eq:nominal_invariant_set_req_X}-\eqref{eq:nominal_invariant_set_req_vdot} as follows. Let $\gamma \in \mathbb{R}$, $E = P^{-1}$, $Y \in \mathbb{R}^{m\times n}$, and let $\Gamma_i$ be chosen according to \eqref{eq:intervals}. The optimization problem is given by \begin{subequations}\label{eq:general_opt} \begin{align} \min_{\gamma,Y} & -\gamma \\\label{eq:general_opt_0} \mathrm{s.t.} &~ \gamma\in \Gamma_i\\\label{eq:general_opt_1} &~ AE\gamma + EA^\top\gamma + BY + Y^\top B^\top + 2E \mQuadBoundMatrixFunction{\Gamma_i} E\gamma \preceq 0\\\nonumber &~\forall j\in\mIntInt{0}{n_x}:\\\label{eq:general_opt_5} &~~~~\begin{pmatrix}b_{x,j}^2 & A_{x,j}E \\ EA_{x,j}^\top & \gamma E\end{pmatrix}\succeq 0\\\nonumber &~\forall k\in\mIntInt{0}{n_u}: \\\label{eq:general_opt_6} &~~~~\begin{pmatrix}b_{u,k}^2 & A_{u,k}Y \\ Y^\top A_{u,k}^\top & \gamma E\end{pmatrix}\succeq 0.
\end{align} \end{subequations} \begin{theorem}\label{thm:invariant_set_from_data_lipschitz} Let Assumption~\ref{ass:big_data} hold and let $\bar \gamma$ satisfy \eqref{eq:ellipsoidal_data_region}. Consider an interval $\Gamma_i$ according to \eqref{eq:intervals}. If \eqref{eq:general_opt} attains a solution $\lbrace \gamma^*, Y^*\rbrace$, then $\mathcal{S}^P(\gamma^*)$ is a safe set for system \eqref{eq:general_system} according to Definition~\ref{def:safe_set} with $u_S(x) = Kx$, $K={\gamma^*}^{-1}Y^*E^{-1}$. \end{theorem} \begin{proof} We prove the result in two steps: 1) Conditions \eqref{eq:general_opt_5}-\eqref{eq:general_opt_6} imply (\ref{eq:nominal_invariant_set_req_X}) and (\ref{eq:nominal_invariant_set_req_U}). By \cite[Section 5.2.2]{boyd1994linear} we can rewrite (\ref{eq:nominal_invariant_set_req_U}) as $A_{u,i}K(\gamma^{-1}P)^{-1}K^\top A_{u,i}^\top \leq b_{u,i}^2$ which equals \eqref{eq:general_opt_6}. The matrix inequality for the states can be derived similarly. 2) Conditions \eqref{eq:general_opt_0}-\eqref{eq:general_opt_1} ensure that $\mathcal{S}^P(\gamma^*)$ fulfills \eqref{eq:nominal_invariant_set_req_vdot}. For all $x \in \partial \mathcal{S}^P(\gamma)$ we have to fulfill \eqref{eq:nominal_invariant_set_req_vdot} which is implied by \eqref{eq:general_opt_proof_2}. A sufficient condition for \eqref{eq:nominal_invariant_set_req_vdot} is therefore $\gamma^{-1}P(A+BK) + \gamma^{-1}(A+BK)^\top P + 2\gamma^{-1} \mQuadBoundMatrixFunction{\Gamma_i} \preceq 0$, i.e. that $\dot V(x)\leq 0$ for all $x\in\mathbb{R}^n$. Multiplying from left and right by $\gamma P^{-1}$ yields \eqref{eq:general_opt_1}. We have shown that $\mathcal{S}^P(\gamma^*)$ is a safe set according to Definition~\ref{def:safe_set}. The objective in \eqref{eq:general_opt} yields the largest safe set under these sufficient conditions. 
\end{proof} Note that optimizing over $P$ and $K$ in \eqref{eq:general_opt} is not possible, because the bound obtained in the first optimization step depends on $P$. Problems \eqref{eq:quad_bound} and \eqref{eq:general_opt} provide (semidefinite) convex optimization problems for computing a safe set and controller by directly employing data points. Such problems can be solved efficiently even for higher dimensions, see e.g. \cite{toh1999sdpt3,mosek}. While this offers a general approach with favorable scalability properties, the limitation is that the resulting safe set cannot be larger than the convex hull of the data points plus a $\delta$-neighborhood. Exploration is therefore limited. Nevertheless, initially collected data can be iteratively extended inside $\mDataRegion_\delta$ such that $\delta$ from Assumption~\ref{ass:big_data} gets smaller over time. Recomputation of the safe set can then reduce conservatism. We present an extension in Section \ref{sec:safe_active_exploration_for_nonlinear_systems} which further reduces conservatism and improves exploration. \subsection{Shape of the safe set}\label{subsec:choice_of_P} Using the linear model of the system dynamics in \eqref{eq:general_system}, we define an approximate initial shape matrix $P$ of the safe set by neglecting the unknown nonlinearity. Assume that $\mDataRegion_\delta$ is given by $\lbrace x\in \mathbb{R}^n | x^\top A_\delta x\leq 1 \rbrace$ with $A_\delta\in\mSetPosSymMat{n}$, which can e.g. be calculated as the minimum-volume covering ellipsoid of the data points $\mathcal D$, see \cite[p. 222]{boyd2004convex}.
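As an illustration of how the shape matrix $A_\delta$ could be obtained in practice, the following sketch computes the minimum-volume covering ellipsoid of the data points via Khachiyan's algorithm. This is not part of the method above; the function name, tolerance, and the choice of Khachiyan's iteration (instead of the convex program referenced in \cite[p. 222]{boyd2004convex}) are our own assumptions.

```python
import numpy as np

def min_volume_ellipse(points, tol=1e-5):
    """Khachiyan's algorithm for the minimum-volume covering ellipsoid.

    points : (N, d) array. Returns (A, c) such that every data point x
    satisfies (x - c)^T A (x - c) <= 1 (up to the chosen tolerance).
    """
    N, d = points.shape
    Q = np.vstack([points.T, np.ones(N)])   # lift points to dimension d+1
    u = np.ones(N) / N                      # uniform initial weights
    while True:
        X = Q @ np.diag(u) @ Q.T
        # M[k] = q_k^T X^{-1} q_k for each lifted point q_k
        M = np.einsum('ij,ji->i', Q.T, np.linalg.solve(X, Q))
        j = int(np.argmax(M))
        step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
        new_u = (1.0 - step) * u
        new_u[j] += step
        if np.linalg.norm(new_u - u) < tol:
            u = new_u
            break
        u = new_u
    c = points.T @ u                        # center of the ellipsoid
    A = np.linalg.inv(points.T @ np.diag(u) @ points - np.outer(c, c)) / d
    return A, c
```

For symmetric data (e.g. the corners of a square) the result is the circumscribed circle, as expected.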
We can find a safe set for system \eqref{eq:general_system} by setting $d(x)=0$, resulting in the following optimization problem with $E \in \mSetPosSymMat{n}$ and $Y \in \mathbb{R}^{m\times n}$ \begin{subequations}\label{eq:intial_guess_P} \begin{align} \min_{E,Y} & -\log\det (E) \\\label{eq:intial_guess_P_0} \mathrm{s.t.} & ~~A_\delta^{-1}\succeq E \\\label{eq:intial_guess_P_1} & ~~AE + EA^\top + BY + Y^\top B^\top \preceq 0\\ & ~~(\ref{eq:general_opt_5}),(\ref{eq:general_opt_6}). \end{align} \end{subequations} If \eqref{eq:intial_guess_P} attains a solution, then analogously to Theorem~\ref{thm:invariant_set_from_data_lipschitz}, we obtain a safe set for system \eqref{eq:general_system} according to Definition~\ref{def:safe_set} with $P=E^{-1}$, $u_S(x) = Kx$, and $K=YE^{-1}$, albeit for $d(x)=0$. Constraint \eqref{eq:intial_guess_P_0} ensures that the safe set is a subset of $\mDataRegion_\delta$, i.e. the set in which Assumption~\ref{ass:big_data} is satisfied and therefore the data is dense. This implies that $\bar \gamma=1$, i.e. the interval $(0,1]$, is the maximum set size such that Assumption~\ref{ass:big_data} as well as all constraints are fulfilled. The optimization problem \eqref{eq:general_opt} then aims at improving the initial approximation obtained via \eqref{eq:intial_guess_P} with respect to the nonlinearity by designing a different control law and safe set size. \subsection{Illustrative numerical example}\label{sec:simple_example_lipschitz} \begin{figure}[t]% \centering \subfloat{\input{fig/tkiz/simpleExample.tkiz}}% \subfloat{\input{fig/tkiz/simpleExampleNonlinearBound.tkiz}}% \caption{\textbf{Left:} Sample trajectories under the safe control law, when starting on the boundary of the safe set $\mathcal{S}^P$. Red dots: Data points, black lines: Closed-loop system trajectories, elliptic ring $\mathcal {R}({\Gamma_1})$, which contains $\partial\mathcal{S}^P$.
Dashed set: Safe set using $Q_{GP}$ from Section \ref{sec:safe_active_exploration_for_nonlinear_systems}. \textbf{Right:} Quadratic bound $x^\top \mQuadBoundMatrixFunction{\Gamma_i} x$ according to Lemma~\ref{lem:quad_bound} (blue) of the nonlinear term $x^\top P d(x)$ (red).}% \label{fig:example}% \end{figure} Consider a nonlinear system of the form \eqref{eq:general_system}, where $A = \begin{psmallmatrix} -1 &2 \\ -3& 4 \end{psmallmatrix}$, $B=\begin{psmallmatrix} 0.5 \\ -2 \end{psmallmatrix}$, $d(x) = \begin{psmallmatrix} 0.5x_1^4 \\ 0.35-1.5x_2^3 \end{psmallmatrix}$ with input constraints $|u|\leq 3$ and state constraints $|x|\leq 2$. Note that the origin is not even an equilibrium point of the system, let alone a stable one. For simplicity we consider a grid of data points as illustrated in Figure~\ref{fig:example}; however, any data set fulfilling Assumption~\ref{ass:big_data} could be used. The data was taken inside the set $\lbrace x\in\mathbb{R}^n|x^\top P x \leq 1 \rbrace$ with $P = \left(\begin{smallmatrix} 0.7651 & 0.2162 \\ 0.2162 & 0.6481 \end{smallmatrix}\right)$ obtained by solving \eqref{eq:intial_guess_P}, which also corresponds to $\mDataRegion_\delta$ with $\delta=0.15$. We apply Theorem~\ref{thm:invariant_set_from_data_lipschitz} by solving \eqref{eq:quad_bound} and \eqref{eq:general_opt} for $\Gamma_1=[0.9,1]$, $L_{x^\top P d(x)}(\mathcal {R}(\Gamma_1)) \approx 6.02$ and obtain $ K^* = (0.5261,~2.2953),\gamma^*=1$. The results are illustrated in Figure \ref{fig:example}. Note that in general the quadratic bound only has to hold on $\partial\mathcal{S}^P(\gamma)$ and could therefore be violated around the origin. \subsection{Simulation: Safety for autonomous convoys} Consider a convoy of cars or trucks as depicted in Figure~\ref{fig:car_convoy}. Given a target velocity $v_{\text{tar}}$ and a possibly small target distance $x_{\text{tar}}$, the goal is to drive closely behind each other, in order to leverage slipstream effects for efficiency.
We assume that it is possible to overwrite the local controllers, i.e. the acceleration of car~1, car~3 and car~4, in a centralized way if necessary to ensure safety. During a supervised observation phase, initial data about the system is collected. We consider the problem of finding a safe, centralized control law and a safe set such that the cars will not crash, even though we cannot determine the acceleration of car~2 and car~5. Let $z_{i+1\rightarrow i}=x_{\text{tar}}-x_{i+1\rightarrow i}$ be the difference between the target distance $x_{\text{tar}}$ and the actual distance $x_{i+1\rightarrow i}$ of car $i+1$ and car $i$. Let $v_i$ be the difference between the target velocity of the convoy $v_{\text{tar}}$ and the velocity $\bar v_i$ of car $i$. The dynamics are given as $\dot z_{i+1\rightarrow i} = v_{i+1} - v_i$ for $i=1,...,4$ and $\dot v_{i} = u_i$ for $i=1,...,5$, where $u_i$ is the applied acceleration. The control law of car $1$ is given by $u_1(v_1) = -v_1$, of cars $3,4$ by $u_i(z_{i\rightarrow i-1},v_i) = 0.1z_{i\rightarrow i-1}-0.3v_i$ and of cars $2,5$ (which we cannot overwrite) by $u_2(z_{2\rightarrow1},v_2) = \max\lbrace\min\lbrace z_{2\rightarrow1}-v_2,0.9\rbrace,-0.9\rbrace$, $u_5(z_{5\rightarrow 4},v_5) = \max\lbrace\min\lbrace z_{5\rightarrow 4}-v_5,0.9\rbrace,-0.9\rbrace$, i.e. they apply a saturated, stabilizing state feedback law and are therefore nonlinear. The target distance between the cars is 1 meter. In order to avoid a crash the state constraints are given by $x_{i+1 \rightarrow i}\geq 0$. The maximum acceleration of cars 1, 3, and 4 is $3~[m/s^2]$, i.e. $|u_i|\leq 3,~ i=1,3,4$. We are given observations of $\dot z_{2\rightarrow 1},\dot v_2$ and $\dot z_{5\rightarrow 4}, \dot v_5$ in the interval $[-0.8, 0.8]$ with $\delta = 0.013$.
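For concreteness, the convoy error dynamics and local control laws above can be sketched with a simple explicit-Euler simulation. The state ordering, time step, and function name are our own choices, and the safe control law \eqref{eq:safe_control_law} that would override cars 1, 3 and 4 is deliberately not included.

```python
import numpy as np

def convoy_step(z, v, dt=0.01):
    """One Euler step of the convoy error dynamics.

    z : (4,) distance errors z[i] = z_{i+2 -> i+1} for i = 0..3
    v : (5,) velocity errors of cars 1..5
    Local control laws as in the text; cars 2 and 5 are saturated at +-0.9
    and cannot be overwritten.
    """
    u = np.empty(5)
    u[0] = -v[0]                              # car 1
    u[1] = np.clip(z[0] - v[1], -0.9, 0.9)    # car 2 (saturated, fixed)
    u[2] = 0.1 * z[1] - 0.3 * v[2]            # car 3
    u[3] = 0.1 * z[2] - 0.3 * v[3]            # car 4
    u[4] = np.clip(z[3] - v[4], -0.9, 0.9)    # car 5 (saturated, fixed)
    z_next = z + dt * (v[1:] - v[:-1])        # z'_{i+1->i} = v_{i+1} - v_i
    v_next = v + dt * u                       # v'_i = u_i
    return z_next, v_next
```

Note that the zero error state is an equilibrium of these dynamics, and the accelerations of cars 2 and 5 never exceed the saturation level.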
In Figure~\ref{fig:car_convoy_example}, a numerical simulation under the resulting safe control law \eqref{eq:safe_control_law} is shown, starting from the boundary of the safe set with $v_2(0)=0.02~[m/s],x_{2\rightarrow 1}(0)=-0.76~[m]$ and the remaining states equal to zero, which represents the situation that the second car is too close to the first one and its velocity is slightly higher than the reference velocity. As we can see in Figure~\ref{fig:car_convoy_example}, the first car has to accelerate quickly several times during the first two seconds for safety reasons, since the second car (which cannot be controlled) would decelerate more than car~$3$ is able to compensate. The same situation occurs at $4.2$ seconds between car $3$ and car $4$ and again at around $10$ seconds between car $1$ and car $2$. After six safety interventions in total, the local controllers of the cars are able to stabilize the overall system. \begin{figure}[t] \vspace{0.2cm} \centering \input{fig/tkiz/convoy.tkiz} \caption{Illustration of the autonomous car convoy. The acceleration of the red cars cannot be controlled.} \label{fig:car_convoy} \end{figure} \begin{figure}[t] \centering \subfloat{\input{fig/tkiz/FiveCarConvoy1.tkiz}}\\ \subfloat{\input{fig/tkiz/FiveCarConvoy2.tkiz}}% \subfloat{\input{fig/tkiz/FiveCarConvoy3.tkiz}}\\ \subfloat{\input{fig/tkiz/FiveCarConvoy4.tkiz}}% \subfloat{\input{fig/tkiz/FiveCarConvoy5.tkiz}} \caption{Sample trajectory of the car convoy under the safety framework, when starting on the boundary of $\mathcal{S}^P(\gamma)$. Grey lines indicate times when the safe control law $u_\mathcal{S}$ is applied.} \label{fig:car_convoy_example} \end{figure} \section{Safe-sets using Gaussian Processes} \label{sec:safe_active_exploration_for_nonlinear_systems} The previous sections are based on Lipschitz continuity of the unknown nonlinearity $d(x)$, which was incorporated into the quadratic upper bound in \eqref{eq:general_opt}, see Lemma~\ref{lem:quad_bound}.
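The Lipschitz-based bound relies on the data-driven estimate $\hat L_{x^\top P d(x)}$ introduced earlier (twice the maximum pairwise difference quotient over the data). A minimal sketch, with a brute-force pairwise loop and a function name of our own choosing:

```python
import numpy as np

def lipschitz_estimate(X, g):
    """Data-based estimate of a local Lipschitz constant.

    X : (N, n) sample locations, g : (N,) samples of the scalar x^T P d(x).
    Implements 2 * max_{k1 != k2} |g_{k1} - g_{k2}| / ||x_{k1} - x_{k2}||.
    """
    L = 0.0
    N = len(X)
    for k1 in range(N):
        for k2 in range(k1 + 1, N):
            L = max(L, abs(g[k1] - g[k2]) / np.linalg.norm(X[k1] - X[k2]))
    return 2.0 * L
```

For a linear function the estimate returns twice the slope, reflecting the deliberate factor of two in the definition.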
A shortcoming of using only Lipschitz continuity as `prior' knowledge is the requirement of relatively dense and structured data, see Assumption~\ref{ass:big_data}. This implies that almost no `exploration' can be made beyond the data observed so far. By putting a stronger prior on the class of functions for modeling the nonlinearity $d(x)$, we develop a less conservative quadratic bound, which can then be used in \eqref{eq:general_opt} and is generally expected to yield a larger safe set. In addition, it improves exploration beyond the data points, allowing us to iteratively improve the safe set during closed-loop operation, where initially few data points are generally available. \subsection{Gaussian Processes} We use a Gaussian process model (GP) in order to perform Bayesian inference on the unknown nonlinearity (see e.g. \cite{rasmussen2006gaussian}) for each element $d_i(x)$ of $d(x)$. The GP is defined by a mean function $\mu^i(x)$ together with a covariance (kernel) function $k^i( x, x')$; in short, we denote the GP prior on $d_i(x)$ by $\mathcal{GP}(c_\mu^i,k^i)$. We set the prior mean function to be constant, i.e. $\mu^i = c_\mu^i$. Given observations $\bm y_N^i=[y_1^i,..,y_N^i]^T$ at locations $X_N=[ x_1,.., x_N]^T$ where $y_j^i=d_i(x_j)$, the posterior distribution of $d_i(x)$ is given by \begin{align}\label{eq:GP_mu} \mu_N^i( x) &= c_\mu^i + k_N^i(x)^T {K_N^i}^{-1}(\bm y_N^i-\bm c_\mu^i)\\ k_N^i(x,x')&=k^i(x, x')- k_N^i(x)^T {K_N^i}^{-1}k_N^i(x')\\\label{eq:GP_sigma} {\sigma_N^i}^2(x) &= k_N^i(x, x), \end{align} where $k_N^i(x)=[k^i(x_1, x),..,k^i( x_N,x)]^T$, $k_N^i( x, x')$ is the posterior covariance, ${\sigma_N^i}^2(x)$ is the variance, $K_N^i=[k^i(x,x')]_{x,x'\in X_N}$ is the positive definite covariance matrix and $\bm c_\mu^i$ is a vector of $N$ elements, each equal to $c_\mu^i$.
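A minimal sketch of the posterior formulas \eqref{eq:GP_mu}--\eqref{eq:GP_sigma} for a single output dimension, assuming a squared exponential kernel; the hyperparameter values, the small jitter term for numerical stability, and the function names are illustrative choices of our own:

```python
import numpy as np

def se_kernel(x1, x2, sf=1.0, l=0.2):
    """Squared exponential kernel matrix between row-stacked inputs."""
    d = x1[:, None, :] - x2[None, :, :]
    return sf**2 * np.exp(-0.5 * np.sum(d**2, axis=-1) / l**2)

def gp_posterior(X, y, Xs, c_mu=0.0, sf=1.0, l=0.2, noise=1e-10):
    """GP posterior mean and variance at test points Xs.

    X : (N, d) training inputs, y : (N,) observations, Xs : (M, d) test inputs.
    Implements mu_N(x) = c_mu + k_N(x)^T K_N^{-1} (y - c_mu) and
    sigma_N^2(x) = k(x, x) - k_N(x)^T K_N^{-1} k_N(x).
    """
    K = se_kernel(X, X, sf, l) + noise * np.eye(len(X))
    ks = se_kernel(X, Xs, sf, l)              # k_N(x) for each test point
    alpha = np.linalg.solve(K, y - c_mu)
    mu = c_mu + ks.T @ alpha                   # eq. (GP_mu)
    v = np.linalg.solve(K, ks)
    var = np.diag(se_kernel(Xs, Xs, sf, l)) - np.sum(ks * v, axis=0)  # eq. (GP_sigma)
    return mu, var
```

At the training locations the posterior mean interpolates the (noise-free) observations and the posterior variance collapses to (numerically) zero.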
The posterior mean for the vector $d(x)$ is $\mu_N(x)=[\mu_N^1(x),..,\mu_N^n(x)]^\top$ and the variance $\sigma_N^2(x)=[{\sigma_N^1}^2(x),..,{\sigma_N^n}^2(x)]^\top$. \subsection{GP-based bounding of nonlinearity} \input{algorithms/QGP.alg} The GP model provides a measure for the posterior variance of $d(x)$, which is used to improve the bound on the effect of the nonlinearity on the Lyapunov decrease. Instead of using Lipschitz-based arguments as in Section \ref{sec:safe_sets_from_data}, we calculate a strict quadratic bound on highly probable, worst-case realizations of the nonlinear term $x^\top P d(x)$ for all $x\in \mathcal{S}^P(\gamma_i^2)\setminus\mathcal{S}^P(\gamma_i^1)$, given an interval $\Gamma_i$. The intervals $\Gamma_i$ in this section are not limited to a dense data region $\mDataRegion_\delta$. Therefore one can drop the constraint \eqref{eq:intial_guess_P_0} in the computation of $P$ in \eqref{eq:intial_guess_P} and choose $\bar \gamma = 1$ for the construction of the intervals \eqref{eq:intervals}. By relying on the GP, the largest interval $\bar \gamma$ can be chosen independently of Assumption~\ref{ass:big_data} and Remark~\ref{rem:ellipsoidal_data_region}. Algorithm \ref{alg:calculationQGP} summarizes the calculation of the quadratic bound, implementing the following idea. Let $f(x)$ be a function that has to be quadratically upper bounded by $x^\top Q x$ for all $x\in \lbrace \mathcal{S}^P(\gamma_i^2)\setminus\mathcal{S}^P(\gamma_i^1)\rbrace$. In order to enforce the infinite-dimensional constraint $\forall x\in \lbrace \mathcal{S}^P(\gamma_i^2)\setminus\mathcal{S}^P(\gamma_i^1)\rbrace: f(x)\leq x^\top Q x$, we proceed iteratively by starting with a finite approximation, which is improved until $f(x)\leq x^\top Q x$ holds for all $x\in\lbrace \mathcal{S}^P(\gamma_i^2)\setminus\mathcal{S}^P(\gamma_i^1)\rbrace$.
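A simplified sketch of the finite bound-fitting step: a least-squares fit of a symmetric $Q$ to samples $(x_k, f(x_k))$, made into an upper bound on the samples by inflating $Q$ with a multiple of the identity. This replaces the constrained least-squares map $G(X,Y)$ used in the actual algorithm by a cruder heuristic of our own, and it assumes no sample lies at the origin (which the ring $\mathcal{R}(\Gamma_i)$ excludes anyway).

```python
import numpy as np

def quad_features(X):
    """Features phi(x) such that q . phi(x) = x^T Q x, where q stacks the
    upper-triangular entries of the symmetric matrix Q."""
    n = X.shape[1]
    iu = np.triu_indices(n)
    F = []
    for x in X:
        M = np.outer(x, x)
        M = M + M.T - np.diag(np.diag(M))   # diag: x_i^2, off-diag: 2 x_i x_j
        F.append(M[iu])
    return np.array(F)

def fit_quadratic_upper_bound(X, y):
    """Fit symmetric Q with y_k ~ x_k^T Q x_k (least squares), then inflate by
    s*I so that x_k^T Q x_k >= y_k holds on all samples."""
    n = X.shape[1]
    q, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
    Q = np.zeros((n, n))
    Q[np.triu_indices(n)] = q
    Q = Q + Q.T - np.diag(np.diag(Q))
    vals = np.einsum('ki,ij,kj->k', X, Q, X)
    s = max(0.0, np.max((y - vals) / np.sum(X**2, axis=1)))
    return Q + s * np.eye(n)
```

When the samples come from an exact quadratic, the least-squares step recovers it and the inflation term vanishes up to numerical precision.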
In line \ref{alg:calculationQGP_lambdaFcn} of Algorithm \ref{alg:calculationQGP}, the function $f$ is defined, which returns the maximum value of the nonlinear term $x^\top P d(x)$ with a chosen probability, e.g. with $99.73\%$ by letting $c=3$. Starting with a finite number of samples of $f(x)$ for $x\in\mathcal{S}^P(\gamma_i^2)\setminus\mathcal{S}^P(\gamma_i^1)$ (lines \ref{alg:calculationQGP_initialX},\ref{alg:calculationQGP_initialY}) we compute an initial guess for a quadratic bound on $x^\top P d(x)$ given by $x^\top Q^1 x$ in line \ref{alg:calculationQGP_initialQ}, where $G(X,Y) = \argmin_{\tilde Q\in \mSetSymMat{n}} \sum_{(x_i,y_i)\in(X,Y)} \left(x_i^\top \tilde Q x_i-y_i\right)^2$ s.t. for all $(x_i,y_i)\in (X,Y):~y_i \leq x_i^\top \tilde Q x_i$, which yields a quadratic upper bound on $\lbrace y_i \rbrace_{i=1}^N$. We search for potential violations of the current bound in line \ref{alg:calculationQGP_violatingX} and add the violating point to the set of data points in lines \ref{alg:calculationQGP_newX} and \ref{alg:calculationQGP_newY}. After that we update the quadratic bound in line \ref{alg:calculationQGP_updateQ}. The algorithm iterates until there is no violation. In this way, for all $x\in \mathcal{S}^P(\gamma_i^2)\setminus\mathcal{S}^P(\gamma_i^1)$, the quadratic form $x^\top Q_{GP}(\Gamma_i)x$ will be a bound on $x^\top P d(x)$ with high probability. \begin{remark} The optimization problem in line \ref{alg:calculationQGP_violatingX} is continuous (compare for example \cite[Chapter 2G]{dontchev2009implicit}), but non-convex. An alternative approach is to build a discretization of $\mathcal {R}(\Gamma_i)$, which we denote by $\mathcal D_x$, with grid size $\delta$, $|\mathcal D_x|=N$, and evaluate the posterior mean and covariance $\mu_N(x), \sigma_N(x)$ for each $x\in \mathcal D_x$.
By selecting $p_k = x_k^\top P \mu_N(x_k) + \beta_N \sum_{i=1}^n \sigma_N^i(x_k) + \delta L_{x^\top P d(x)}(\mathcal {R}(\Gamma_i))$ with $\beta_N$ as defined in \cite[Lemma 3]{Berkenkamp2017SafeRL}, the bound $Q_{GP}$ can be approximated using \eqref{eq:quad_bound} as a convex optimization problem. \end{remark} For the second step of calculating a safe set size and controller, we can simply use the bound $Q_{GP}(\Gamma_i)$ instead of $Q(\Gamma_i)$ in \eqref{eq:general_opt} in order to obtain a set which is safe with the selected probability. By construction, $Q_{GP}(\Gamma_i)$ will be less conservative than, or equal to, $Q(\Gamma_i)$. This is due to the fact that we put a prior on the unknown nonlinearity $d(x)$, which allows for Bayesian inference and therefore improved extrapolation and interpolation based on the data, as opposed to using estimates based on Lipschitz continuity. In Figure \ref{fig:example} (left), the benefit of using $Q_{GP}$ over the Lipschitz-based bound $Q$ is illustrated. Moreover, the main advantage is that we do \emph{not} require particular assumptions on the data and the safe set is not limited to a subset of $\mDataRegion_\delta$, as is the case in Lemma~\ref{lem:quad_bound}. \subsection{Numerical example: Exploration}\label{subsec:exploration} Consider a nonlinear system of the form \eqref{eq:general_system} with $A = \begin{psmallmatrix} -1 &2 \\ -3& 4 \end{psmallmatrix}$, $B=\begin{psmallmatrix} 0.5 \\ -2 \end{psmallmatrix}$, $d(x) = \begin{psmallmatrix} 0.5x_1^2\sin(6x_1) \\ -0.8x_2^3 \end{psmallmatrix}$, input constraints $|u|\leq 4$, and state constraints $|x|\leq 4$. We use a squared exponential kernel as defined in \cite{rasmussen2006gaussian} with ${\sigma_f^1}={\sigma_f^2}=0.05$ and $l^1=l^2=0.2$.
Given initial data in $[-0.2,0.2]^2$ with $\delta = 0.05$, we solve \eqref{eq:general_opt} using the high-probability ($c=3$, $99.73\%$) bound $Q_{GP}(\Gamma_i)$ with an initial $P$ obtained via \eqref{eq:intial_guess_P} and $\Gamma_i = [1-0.1i-0.1,\,1-0.1i]$ for $i=1,2,..,8$, i.e. we start with $i=1$ and iterate $i=2,3,..$ until we find a feasible solution. We assume that the desired control input $\bar u(t)$ is given by the policy gradient with signed derivative (PGSD) algorithm (see \cite{kolter2009policy}), which is a policy search RL method without any safety guarantees. During closed-loop operation under the safe control law \eqref{eq:safe_control_law} we collect data $\mathcal D$. Every $0.2$ seconds we recompute the safe level set, i.e. $\gamma(t)$, where the safe set size converges after $2.5$ seconds. In Figure~\ref{fig:example_exploration} the evolution of the safe set size (volume of the ellipse) is shown, as well as the distance of the system state to the origin, which has to be minimized by the PGSD learning-based control law. The unsafe RL input is `overwritten' three times (indicated by the grey lines) to ensure safety, until it begins to converge. \begin{figure}[t] \centering \input{fig/tkiz/safeSetEvolutionExploration.tkiz} \caption{Safe RL using the proposed safety framework together with the PGSD \cite{kolter2009policy} algorithm. The distance of the state to the origin, as well as the volume of the safe set, is shown over time. Grey-shaded time points depict application of the safe control law $u_\mathcal{S}$.} \label{fig:example_exploration} \end{figure} \section{Conclusion}\label{sec:conclusion} This paper presents a safety framework that allows enhancing arbitrary, possibly unsafe, learning-based control strategies with safety certificates for nonlinear and potentially large-scale systems. The nonlinearity is assumed to be unknown and we only require a possibly inaccurate linear system model and observations of the system.
A key feature is that the proposed method directly exploits the available data, without the need for an additional learning mechanism. By relying on convex optimization problems, the proposed method is scalable with respect to the system dimension and the number of data points. In order to reduce the conservatism of the safe set calculations, the approach was extended using a Gaussian process model as a prior on the nonlinearity. This modification enables safe exploration and thereby iterative computation of the safe set during closed-loop operation. The results were demonstrated in several numerical examples, showing that the safety framework can be used to certify arbitrary, and in particular learning-based, controllers.
\section{Introduction} In this paper we tackle the inverse problem of reconstructing a cavity $D$ within a planar domain $\Omega$ taking advantage of boundary measurements of the solution of the following boundary value problem: \begin{equation}\label{cavityproblem} \left\{ \begin{aligned} -\Delta u + u^3 &= f \qquad & \text{in } \Omega \setminus D \\ \frac{\partial u}{\partial\mathbf n} &= 0 & \text{on } \partial \Omega \cup \partial D. \end{aligned} \right. \end{equation} The investigation of this problem is mainly motivated by the mathematical modelling of the electrical activity of the heart regarding, in particular, the detection of ischemic regions from boundary measurements of the transmembrane potential \cite{BCP}. These regions are composed of non-excitable tissue, which can be modeled as an electrical insulator (cavity) \cite{Perez}, \cite{Relan}, \cite{Fronteraetal}. Identification of ischemic regions and their shape is fundamental for performing successful radiofrequency ablation for the prevention of tachycardias and of more serious heart disease. In the steady-state case the transmembrane potential in the presence of an ischemic region exactly satisfies Problem (\ref{cavityproblem}). Hence, mathematically, the inverse problem boils down to determining a cavity $D$ from boundary measurements of the solution $u$. In \cite{BCP} some of the authors analyzed the well-posedness of (\ref{cavityproblem}) and uniqueness of the inverse problem under minimal regularity assumptions on the unknown cavity. More precisely, they proved that one measurement of the potential $u$ on an open arc of $\partial\Omega$ is enough to uniquely detect a finite union of disjoint, compact, simply connected sets with Lipschitz boundary.
The inverse problem is highly nonlinear and severely ill-posed since, as for the linear conductivity problem, even within a class of smooth cavities only a very weak logarithmic-type continuous dependence on data is expected to hold, see \cite{AMR}, \cite{ABRV}. In \cite{BRV} the authors analyzed the mathematical model in the case of conductivity inhomogeneities of arbitrary shape and size in the two-dimensional setting. In particular, the issue of reconstructing the inhomogeneity from boundary measurements was addressed. The strategy used in \cite{BRV} for the reconstruction from few data was based on the minimization of a quadratic mismatch functional with a perimeter penalization term. In order to derive a more manageable problem, the perimeter functional was relaxed using a phase-field approach, justified by showing the $\Gamma$-convergence of the relaxed functional to the functional with perimeter penalization. In recent years this kind of approach has been successfully implemented in inverse boundary value problems for partial differential equations and systems, see for example \cite{BRV}, \cite{BZ}, \cite{R}, \cite{LY}, \cite{DES}.\\ Here, we use a similar approach starting from the minimization of the following quadratic boundary misfit functional with a Tikhonov regularization penalizing the perimeter of the set $D$: \begin{equation} \label{perfunct_intro}J(D) = \frac{1}{2} \int_{\Sigma}(u(D) - u_{meas})^2d\sigma+\alpha \textrm{Per}(D) \end{equation} where $\alpha>0$ represents the regularization parameter and $u_{meas}$ the measurements corresponding to some solution of (\ref{cavityproblem}).
Assuming uniform Lipschitz regularity of the cavity $D$, we prove continuity of solutions to (\ref{cavityproblem}) with respect to $D$ in the Hausdorff metric and, as a consequence, the existence of minima in the class of Lipschitz cavities $D$, showing the stability of the functional with respect to noisy data and the convergence of minimizers as $\alpha\rightarrow 0$ to the solution of the inverse problem. \\ In the linear counterpart of the problem, it is natural to interpret cavities as perfectly insulating inclusions, namely regions in which the conductivity of the medium is vanishing. This scenario (together with the case of perfectly conducting inclusions, where the conductivity goes to infinity) is usually referred to as \textit{extreme} conductivity inclusions. For this reason, it is natural to interpret the cavity problem under examination as the limit case of inclusion detection, and hence to approximate it by means of inclusion detection problems associated with very low conductivities $\delta$. This entails the introduction of an approximation of the forward problem \eqref{cavityproblem}, leading to a solution map $u_\delta$ and to the corresponding functional $J_{\delta}$ and minimization problem (see \eqref{mindelta}). Since the functional is not differentiable and its minimization is conducted in a non-convex space, we propose, as in \cite{BRV}, a Modica-Mortola relaxation of the functional $J_{\delta}$ via a family of smooth functionals $J_{\delta,\varepsilon}$ defined on a suitable subset of $H^1(\Omega)$ to guarantee $\Gamma$-convergence as $\varepsilon\rightarrow 0$ and as $\delta\rightarrow 0$ to the functional $J$. \\ This theoretical convergence result motivates the choice to approximate the original regularized problem \eqref{perfunct_intro} by minimizing the functional $J_{\delta,\varepsilon}$ for fixed, small values of $\delta$ and $\varepsilon$.
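For intuition, the Modica-Mortola idea replaces the perimeter by a phase-field energy of the form $MM_\varepsilon(v)=\int_\Omega \varepsilon|\nabla v|^2+\frac{1}{\varepsilon}v^2(1-v)^2\,dx$; the exact relaxed functional used in this paper may differ by normalization constants, and the specific double-well $W(v)=v^2(1-v)^2$ (with sharp-interface constant $2\int_0^1\sqrt{W}=1/3$) is an assumption here. A discrete sketch on a uniform grid:

```python
import numpy as np

def modica_mortola_energy(v, h, eps):
    """Discrete phase-field energy on a uniform 2D grid with spacing h:
    sum over grid of ( eps*|grad v|^2 + v^2 (1-v)^2 / eps ) * h^2."""
    gx, gy = np.gradient(v, h)
    return float(np.sum(eps * (gx**2 + gy**2)
                        + v**2 * (1.0 - v)**2 / eps) * h**2)
```

For an optimal logistic transition profile across a straight interface, the energy approximates (interface length)$\times 1/3$, consistent with the $\Gamma$-limit being a multiple of the perimeter.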
The Fr\'echet differentiability of such functionals ultimately suggests employing a first-order optimization method to iteratively converge to a critical point, satisfying (necessary) optimality conditions. As further motivated in Section \ref{sec:numerics}, we can sequentially perform the minimization of $J_{\delta,\varepsilon}$ for decreasing values of $\varepsilon$ and $\delta$ to obtain a candidate regularized solution of the original cavity detection problem. \\ Nevertheless, there is a gap between the theoretical results and the numerical implementation: in particular, unlike the conductivity case, the phase-field relaxation via $J_{\delta,\varepsilon}$ is not able to mitigate the non-convexity of the original problem. Indeed, as explained above, we must assume that the cavity $D$ is of Lipschitz class. To guarantee such regularity for the minimizers of the functionals $J_\delta$, and to ensure the $\Gamma$-convergence as $\varepsilon \rightarrow 0$, we are forced to set the minimization of $J_{\delta,\varepsilon}$ in a suitable non-convex subset $\mathcal{K}_{\eta}$ of $H^1(\Omega;[0,1])$. This allows, on the one hand, for a complete and thorough theoretical analysis of the relaxation strategy, but on the other hand, it still makes it impossible to minimize $J_{\delta,\varepsilon}$ by means of standard gradient-based schemes. However, numerical evidence shows that it is possible to perform such a minimization on the whole space $H^1(\Omega;[0,1])$ and still have convergence to a function satisfying the desired additional regularity. \par From a numerical standpoint, we can compare our strategy with other existing approaches in the literature related to the linear counterpart of the problem, namely cavity detection in the linear conductivity problem.
In such a context, phase-field techniques have been studied for the reconstruction of cavities (and cracks) in the conductivity case in \cite{ring2010reconstruction} and in the elasticity case in \cite{ABCRV} and \cite{A}. \\ Among the several alternative strategies, we can make a main distinction between algorithms which have been originally developed for inclusion detection and later extended to the cavity case, and algorithms specifically suited for the reconstruction of cavities. \\ Regarding the first family, we can trace back to the first algorithm, introduced by Friedman and Vogelius in \cite{friedman1989} for the detection of arbitrarily small (extreme) conductivity inclusions. It is based on the asymptotic expansion of a suitable mismatch functional, and it has been further developed, by means of polarization tensors, by Ammari and Kang in \cite{ammari2007polarization}. Subsequently, many other techniques originally designed for inclusion detection have been extended to the cavity case. For example, the enclosure method, allowing for the reconstruction of the convex hull of inclusions in electrical impedance tomography, has been formulated in the cavity case in \cite{ikehata2002numerical}, whereas the factorization method, developed by Br\"uhl and Hanke, has been investigated for the cavity problem in \cite{hanke2003recent}, also comparing it to the MUSIC algorithm. Analogously, the level set method, allowing for the reconstruction of an inclusion as a level curve of a suitable function which is iteratively updated, has been successfully and efficiently implemented in \cite{burger2003levenberg}. Recently, also the monotonicity method, which exploits the monotonicity of the Dirichlet-to-Neumann map to define an iterative reconstruction algorithm, has been studied in the presence of extreme inclusions, see \cite{candiani2020monotonicity}.
\\ Among the second family of algorithms, namely the ones that are innately suited for the detection of extreme inclusions in the linear conductivity equation, we recall the method of fundamental solutions (see \cite{borman2009method}), the algorithm by Kress and Rundell involving nonlinear boundary integral equations (see \cite{kress2005nonlinear}) and the conformal mapping technique (see \cite{kress2012inverse} and \cite{munnier2017conformal}). \par The remaining part of the paper is structured as follows: in Section \ref{sec:notation} we set the notation and introduce the main assumptions regarding the forward problem and the class of cavities we aim at reconstructing. Section \ref{sec:direct} is devoted to the analysis of the forward problem \eqref{cavityproblem}, both recalling the well-posedness results from \cite{BCP} and proving a novel result about the continuous dependence of boundary measurements with respect to cavity perturbations. Section \ref{sec:recon} outlines our approach to the reconstruction of cavities, studying the regularization properties of \eqref{perfunct_intro} and thoroughly describing the phase-field relaxation. It also contains the main theorems of the paper, namely the $\Gamma$-convergence results for the relaxed functionals as $\delta$ and $\varepsilon$ go to $0$. Finally, Section \ref{sec:numerics} provides a numerical counterpart of the proposed strategy, formulating and discussing two optimization algorithms, and in Section \ref{sec:results} we report the results of some numerical experiments, assessing the effectiveness of such algorithms. \section{Notation and main assumptions} \label{sec:notation} We consider the following inhomogeneous Neumann problem \begin{equation} \label{probcav} \left\{ \begin{array}{ll} -\Delta u+u^3=f, & \hbox{in $\Omega\setminus{D}$} \\ \displaystyle{\frac{\partial u}{\partial\mathbf{n}}}=0, & \hbox{on $\partial(\Omega\setminus{D})$}. \end{array} \right. 
\end{equation} where $\mathbf{n}$ is the outer unit normal to $\Omega\backslash D$. In what follows we will use the notation $$\Omega_D=\Omega\setminus{D}\,.$$ Let us first recall the definition of Lipschitz (or $C^{0,1}$) regularity. \begin{definition} [Lipschitz or $C^{0,1}$ regularity]\ \\\label{lipclass} Let $\Omega$ be a bounded domain in $\mathbb{R}^2$. We say that a portion $S$ of $\partial \Omega$ is of class $C^{0,1}$ with constants $r_0$ and $L_0$, if for any ${P}\in S$ there exists a rigid transformation of coordinates under which we have ${P}={0}$ and \begin{equation*} \mathring{\Omega}\cap B_{r_0}({0})=\{{(x_1,x_2)}\in B_{r_0}({0})\, :\, x_2>\psi(x_1)\}, \end{equation*} where ${\psi}$ is a $C^{0,1}$ function on $(-r_0,r_0)\subset \mathbb{R}$ such that \begin{equation*} \begin{aligned} {\psi}({0})&=0,\\ \|{\psi}\|_{C^{0,1}(B_{r_0}({0}))}&\leq L_0. \end{aligned} \end{equation*} We say that a domain $\Omega$ is Lipschitz (or of class $C^{0,1}$) with constants $r_0$, $L_0$, if $\partial\Omega$ is of class $C^{0,1}$. \end{definition} We can now state our main set of assumptions. \begin{assumption}\label{as:1} $\Omega\subset \mathbb{R}^2$ is a bounded Lipschitz domain with constants $r_0, L_0$. \end{assumption} \begin{assumption}\label{as:2} $\Sigma \subset \partial \Omega$, an open arc, is the portion of the boundary which is accessible for measurements. 
\end{assumption} \begin{assumption}\label{as:3} The cavity $D$ is the union of at most $M$ disjoint, compact, simply connected sets and has Lipschitz boundary, i.e.\ $D\in\mathcal{D}$, where $$ \mathcal{D}=\{D=\cup_{j=1}^N D_j\subset \Omega \;:\; N\leq M,\; \partial D\in C^{0,1} \textrm{ with constants } r_0,L_0\}, $$ where, $\forall \;1\leq j\leq N$, $D_j$ is compact and simply connected; moreover we assume ${\rm dist}(D_j,D_i)\geq d_0\,\,\,\forall i\neq j$ and ${\rm dist}(D,\partial\Omega)\geq 2 d_0>0.$ \end{assumption} \begin{assumption}\label{as:4} The source term $f$ in \eqref{probcav} satisfies \begin{equation} f \in L^{\infty}(\Omega),\,\, f\geq 0, \,\,\,\mathrm{supp}(f)\subset\Omega_{d_0}=\{x\in\Omega: \mathrm{dist}(x,\partial\Omega)\leq d_0\}. \end{equation} \end{assumption} \vskip 2truemm The perimeter of $D$ in $\Omega$ will be denoted by $$\textrm{Per}(D)=\operatorname{TV}(\chi_D,\Omega)=\sup\left\{\int_{\Omega}\chi_D(x)\textrm{div}\varphi(x)dx:\varphi\in C_c^1(\Omega,\mathbb{R}^2), |\varphi|\leq 1\right\}.$$ In particular, if $D\in\mathcal{D}$ we have that $$ \textrm{Per}(D)=\mathcal{H}^1(\partial D)<+\infty, $$ where $\mathcal{H}^1(\partial D)$ is the one-dimensional Hausdorff measure of $\partial D$. \vskip 5truemm In the sequel $A\triangle B:=(A\backslash B)\cup(B\backslash A)$ will indicate the symmetric difference of the two sets $A$ and $B$. Finally, let us recall the definition of the Hausdorff distance between two sets $A$ and $B$: \[d_H(A,B)=\max\{\sup_{x\in A}\inf_{y\in B}\textrm{dist} (x,y),\sup_{y\in B}\inf_{x\in A}\textrm{dist}(y,x)\}.\] \begin{remark} Throughout the paper, for the sake of brevity, we will denote by $v$ the indicator function of some set $D\subset \Omega$ and we will use $\textrm{Per}(D)$ or $\operatorname{TV}(v)$ depending on the situation. \end{remark} We will use several times throughout the paper the following compactness result. \begin{proposition} \label{compactD} $\mathcal{D}$ is compact with respect to the Hausdorff topology. 
\end{proposition} \begin{proof} Let us first consider the case $M=1$, i.e.\ $D\in \mathcal{D}$ is compact and simply connected. Let $\{D_k\}_{k=1}^{\infty}\subset\mathcal{D}$ be a sequence of sets in $\Omega$. Then by Blaschke's Selection Theorem (see for example Theorem 3.1 in \cite{dalMT}) there exists a subsequence, still denoted by $\{D_k\}_{k=1}^{\infty}$, converging in the Hausdorff metric to a compact set $D$. Furthermore, as a consequence of Theorem 2.4.7, Remark 2.4.8 and Theorem 2.4.10 of \cite{HP}, $\{\partial D_k\}_{k=1}^{\infty}$ converges in the Hausdorff metric to $\partial D$, and $\partial D$ is Lipschitz with constants $r_0,L_0$ and connected, which implies that $D$ is also simply connected. Hence $D\in \mathcal{D}$, which concludes the proof in this case. If $M>1$, suppose that $D_k\rightarrow D=\cup_{j=1}^N D_{j}$, where, $\forall j$, $D_j$ is simply connected and its boundary is Lipschitz with constants $r_0,L_0$. Because of the uniformity of the Lipschitz property, for sufficiently large $k$ we have that $D_k=\cup_{j=1}^N D_{j,k}$, i.e.\ $D_k$ has the same number of disjoint connected components as $D$. Moreover, for any fixed $j$, $D_{j,k}\rightarrow D_j$, possibly up to a subsequence. Finally, by the definition of the Hausdorff distance we conclude that ${\rm dist}(D_j,D_i)\geq d_0$ for any $i\neq j.$ \end{proof} \section{Analysis of the direct problem} \label{sec:direct} \subsection{Well-posedness and main estimates} We first recall a well-posedness result for problem (\ref{probcav}), proved in \cite{BCP} (in a more general setting), together with some estimates on the solution which will be useful in the subsequent discussion. Note that, by Assumptions \ref{as:1} and \ref{as:3}, the domains $\Omega_D$ have Lipschitz boundaries for any $D\in \mathcal{D}$. Then we have: \begin{theorem} \label{exist} Suppose that Assumptions \ref{as:1}--\ref{as:4} hold. Then problem \eqref{probcav} has a unique solution $u\in H^1(\Omega_D)$. 
Furthermore, the following bounds hold: \begin{equation} \label{apriori} \|u\|_{H^1(\Omega_D)} \le C\big (\|f\|_{(H^1)'}+\|f\|^{1/3}_{(H^1)'}\big ), \end{equation} \begin{equation}\label{boundsu} 0\leq u(x) \leq\left(\mathrm{ess}\,\sup_{\Omega_D}f\right)^{1/3}\quad\quad \mathrm{a.e.}\quad x\in\Omega_D\,, \end{equation} where $C=\max\{1,|\Omega_D|^{1/3}\}$ and $(H^1)'=H^1(\Omega_D)'$ is the dual space of the Sobolev space $H^1(\Omega_D)$. \end{theorem} The proof follows by suitable Sobolev estimates and by the maximum principle, see \cite{BCP}, Proposition 3.4 and Theorem 3.5. \subsection{Continuity properties of the solutions with respect to $D$} Let us consider the weak formulation of Problem \eqref{probcav}: \begin{equation} \label{weakfucav1} \int_{\Omega_D} \nabla u\cdot\nabla\phi + \int_{\Omega_D}u^3\phi=\int_{\Omega_D} f\phi, \ \ \ \forall\phi\in H^1(\Omega_D). \end{equation} \smallskip By Theorem \ref{exist}, there is a unique solution $u_D\in H^1(\Omega_D)$ of \eqref{weakfucav1}, which is uniformly bounded in $H^1(\Omega_D)$ and in $L^{\infty}(\Omega_D)$ by constants depending only on $f$ (for a given $\Omega$). \smallskip In this section, we will prove the continuity of the trace map $D\mapsto u_D\big |_{\Sigma}$ for domains in the class $\mathcal{D}$ defined in Assumption \ref{as:3}; more precisely: \smallskip let $D_n\in \mathcal{D}$ be a sequence of sets converging to $D$ in the metric defined by the Hausdorff distance $d_H$ and let $u_n:=u_{D_n}$, $u:=u_D$. Then \begin{equation} \label{convdom} \lim_{n\to \infty}\int_{\Sigma}|u_n-u|^2 =0\,. \end{equation} \smallskip The proof of our claim will require some intermediate steps. To begin with, by known results on the approximation of bounded Lipschitz domains (see e.g. 
\cite{Ver} theorem $1.12$) one can construct, for any $\varrho>0$, a subset $D_{\varrho}$ such that $\Omega_{\varrho}:=\Omega\backslash D_{\varrho}$ satisfies the following properties \begin{enumerate} \item $\Omega_{d_0}\subset\Omega_{\varrho}\subset\subset\Omega_D$ and $\partial\Omega_{\varrho}$ is $C^{0,1}$ (actually smooth) \item $\big | \Omega_D\setminus \Omega_{\varrho}\big |<\varrho$\,. \end{enumerate} Then, by the convergence of $D_n$ to $D$ in the Hausdorff metric (see the proof of Proposition \ref{compactD} above) there exists a positive integer $n_{\varrho}$ such that $\Omega_{\varrho}\subset\subset\Omega_{D_n}$ for every $n>n_{\varrho}$. Note that \begin{equation} \label{omenodn} \big | \Omega_{D_n}\setminus \Omega_{\varrho} \big |\le \big | \Omega_D\setminus \Omega_{\varrho}\big |+ \big |D\setminus D_n \big |<\varrho+o(1)\,, \end{equation} for $n\to\infty$. \smallskip Then we have \begin{theorem} \label{contdom} Let $u$, $u_n$, $\Omega_{\varrho}$, $n_{\varrho}$ be defined as above. Then, for any $\epsilon>0$ there exists $\varrho(\epsilon)>0$ such that, for every $\varrho<\varrho(\epsilon)$ and $n>n_{\varrho}$, \begin{equation} \label{convdomeps} \|u_n-u\|_{H^1(\Omega_{\varrho})}<\epsilon\,. \end{equation} \end{theorem} \begin{proof} Since $u$, $u_n$, solve \eqref{weakfucav1} respectively in $\Omega_D$ and in $\Omega_{D_n}$ and recalling that supp$\,f\subset \Omega_{d_0}\subset\Omega_{\varrho}$, we have \begin{equation} \nonumber \int_{\Omega_D} \nabla u\cdot\nabla\phi + u^3\phi=\int_{\Omega_{D_n}} \nabla u_n\cdot\nabla\phi + u_n^3\phi\,. \end{equation} $\forall\phi\in H^1(\Omega)\,$ (note that by our assumptions on the domains, any $\phi\in H^1(\Omega_D)$ or in $H^1(\Omega_{D_n})$ is the restriction of a function in $H^1(\Omega)$). 
\smallskip By the decompositions $\Omega_D=\Omega_{\varrho}\,\cup\,\big (\Omega_D\setminus\Omega_{\varrho} \big )$, $\,\,\Omega_{D_n}=\Omega_{\varrho}\,\cup\, \big (\Omega_{D_n}\setminus\Omega_{\varrho}\big )$, we have \begin{equation} \nonumber \int_{\Omega_{\varrho}} \nabla u\cdot\nabla\phi + u^3\phi +\int_{\Omega_D\setminus\Omega_{\varrho}} \nabla u\cdot\nabla\phi + u^3\phi= \end{equation} \begin{equation} \nonumber \int_{\Omega_{\varrho}} \nabla u_n\cdot\nabla\phi + u_n^3\phi+\int_{\Omega_{D_n}\setminus\Omega_{\varrho}} \nabla u_n\cdot\nabla\phi + u_n^3\phi\,. \end{equation} By rearranging terms: \begin{equation} \nonumber \int_{\Omega_{\varrho}} \nabla (u-u_n) \cdot\nabla\phi + (u^3-u^3_n)\phi= \end{equation} \begin{equation} \label{weakformdiffer} -\int_{\Omega_D\setminus\Omega_{\varrho}} \nabla u\cdot\nabla\phi + u^3\phi +\int_{\Omega_{D_n}\setminus\Omega_{\varrho}} \nabla u_n\cdot\nabla\phi + u_n^3\phi\,. \end{equation} Let $\phi_{\varrho,n}\in H^1(\Omega)$ be a function satisfying \begin{equation} \nonumber \phi_{\varrho,n}\,\big |_{\Omega_{\varrho}}=(u-u_n)\,\big |_{\Omega_{\varrho}}\,. \end{equation} The existence of $\phi_{\varrho,n}$ for every $\varrho$ (and $n$) follows by the extension property which holds for the Lipschitz domain $\Omega_{\varrho}$. Moreover, by the uniform bounds on $u$, $u_n$ in $\Omega_{\varrho}$ and by the continuity of the extension operator, we readily get \begin{equation} \label{stimphindel} \|\phi_{\varrho,n}\|_{H^1(\Omega)}\le C\,, \end{equation} where the constant $C$ depends only on $\Omega_{\varrho}$ and $f$. Actually, by properties $1$ and $2$ above and since $\Omega_D$ is Lipschitz we can take $C$ independent of $\varrho$. 
\smallskip By choosing $\phi=\phi_{\varrho,n}$ in \eqref{weakformdiffer} we obtain \begin{equation} \nonumber \int_{\Omega_{\varrho}} \nabla (u-u_n) \cdot\nabla (u-u_n) + (u-u_n)^2(u^2+u u_n+u_n^2)= \end{equation} \begin{equation} \label{weakformdiff} -\int_{\Omega_D\setminus\Omega_{\varrho}} \nabla u\cdot\nabla\phi_{\varrho,n} + u^3\phi_{\varrho,n} +\int_{\Omega_{D_n}\setminus\Omega_{\varrho}} \nabla u_n\cdot\nabla\phi_{\varrho,n} + u_n^3\phi_{\varrho,n}\,. \end{equation} We now estimate the integrals on the right-hand side. First, since $u\in H^1(\Omega_D)\cap L^{\infty}(\Omega_D)$, by \eqref{stimphindel} and by H\"older's inequality we get \begin{equation} \nonumber \Big |\int_{\Omega_D\setminus\Omega_{\varrho}} \nabla u\cdot\nabla\phi_{\varrho,n} + u^3\phi_{\varrho,n}\Big |\le C_1\Big (\int_{\Omega_D\setminus\Omega_{\varrho}} |\nabla u|^2\Big )^{1/2}+ C_2\big |\Omega_D\setminus\Omega_{\varrho} \big |^{1/2}\,, \end{equation} with $C_1$, $C_2$ independent of $\varrho$ and $n$. By Property $2$ and by the integrability of $|\nabla u|^2$, we can now write \begin{equation} \label{estrhs1b} \Big |\int_{\Omega_D\setminus\Omega_{\varrho}} \nabla u\cdot\nabla\phi_{\varrho,n} + u^3\phi_{\varrho,n}\Big |\le C_1 \,o(1)+ C_2\,\varrho^{1/2}\,, \end{equation} for $\varrho\to 0$. By similar estimates for the second term on the right-hand side of \eqref{weakformdiff}, and taking into account \eqref{omenodn}, we obtain \begin{equation} \nonumber \Big |\int_{\Omega_{ D_n}\setminus\Omega_{\varrho}} \nabla u_n\cdot\nabla\phi_{\varrho,n} + u_n^3\phi_{\varrho,n}\Big |\le C_1\Big (\int_{\Omega_{ D_n}\setminus\Omega_{\varrho}} |\nabla u_n|^2\Big )^{1/2}+ C_2\,\big (\varrho+o(1)\big )^{1/2}\,. \end{equation} \emph{Claim:} the functions $u_n$ can be extended to $\Omega$ in such a way that the sequence $|\nabla u_n|^2$ is \emph{uniformly integrable} in $\Omega$. 
\smallskip \begin{proof} The function $u_n$ is a weak solution of the Neumann problem \begin{equation} \nonumber \left\{ \begin{array}{ll} -\Delta u_n=f-u_n^3, & \hbox{in $\Omega_{D_n}$} \\ \displaystyle{\frac{\partial u_n}{\partial\mathbf{n}}}=0, & \hbox{on $\partial\Omega_{D_n}$}. \end{array} \right. \end{equation} By the estimates of the previous section, the right-hand side of the above equation satisfies \begin{equation} \nonumber \mathrm{ess}\,\sup_{\Omega_{D_n}} |f-u_n^3|\le C\,, \end{equation} with $C$ independent of $n$. Then, known regularity results for the Neumann problem in Lipschitz domains \cite{JK}, \cite{Cost} imply that $u_n\in H^{3/2}(\Omega_{D_n})$ with uniformly bounded norm. Moreover, $u_n$ has an extension (still denoted by $u_n$) to $\Omega$ satisfying \begin{equation} \nonumber \|u_n\|_{ H^{3/2}(\Omega)}\le C\, \end{equation} (see \cite{Grisvard}, Theorem 1.4.3.1). Hence, by Sobolev embeddings, $\{u_n\}_{n\in \mathbb{N}}$ is a relatively compact subset of $H^1(\Omega)$; in particular, $|\nabla u_n|^2$ is relatively compact in $L^1(\Omega)$ and therefore uniformly integrable. \end{proof} The above implies that estimate \eqref{estrhs1b} holds for \emph{both terms} on the right-hand side of \eqref{weakformdiff}. \smallskip Hence, by taking $\varrho<\varrho(\epsilon)$ small enough and $n>n_{\varrho}$ large, we have \begin{equation} \nonumber \int_{\Omega_{\varrho}} \nabla (u-u_n) \cdot\nabla (u-u_n) + (u-u_n)^2(u^2+u u_n+u_n^2)\le \epsilon^2\,. \end{equation} Finally, since $u^2+u u_n+u_n^2\ge \frac{3}{4} u^2$ and $\|u\|_{L^{\infty}(\Omega_{\varrho})}>0$, by the Poincar\'{e} inequality (Theorem A.1 in \cite{BCMP}) we get \begin{equation} \nonumber \|u-u_n\|^2_{H^{1}(\Omega_{\varrho})}\le C\,\epsilon^2\,, \end{equation} for some constant $C$ independent of $\varrho$, $n$. Then, the result follows by redefining $C^{1/2}\epsilon\rightarrow\epsilon$. 
\end{proof} We can now prove \begin{corollary} \label{limitdom} Let $\Omega$, $\Omega_{d_0}$ be defined as in Section $2$ and let $D_n\,,D\subset \Omega$, $n=1,2,...$ such that ${D_n}\in \mathcal{D}$, and $D_n\rightarrow D$ in the Hausdorff metric. Let $u\in H^1(\Omega_D)$ and $u_n\in H^1(\Omega_{D_n})$ be the solutions of \eqref{weakfucav1} in $\Omega_D$ and in $\Omega_{D_n}$ respectively. Then, \begin{equation} \label{convdomteo} \lim_{n\to \infty}\|u_n-u\|_{L^2(\Sigma)} =0\,. \end{equation} \end{corollary} \begin{proof} Note that $D\in \mathcal{D}$ by the proof of Proposition \ref{compactD}. Fix $\epsilon>0$ and let $\varrho<\varrho(\epsilon)$, $n>n_{\varrho}$ such that \eqref{convdomeps} holds. Then, by standard trace theorems \begin{equation} \nonumber \int_{\Sigma}|u_n-u|^2\le C\,\|u_n-u\|^2_{H^1(\Omega_{d_0})}\le C\|u_n-u\|^2_{H^1(\Omega_{\varrho})}<\epsilon^2\, \end{equation} and the corollary follows. \end{proof} \smallskip \begin{remark} \label{convinmisur} Let $D_n$, $D$, be as in Corollary \ref{limitdom}, but assume that ${D_n}\rightarrow D\in \mathcal{D}$ in measure in $\Omega$, that is $|D_n\triangle D|\rightarrow 0$ (see \cite{AFP}, Remark $3.37$). Nevertheless, by Proposition \ref{compactD} the sequence ${D_n}$ has compact closure in the Hausdorff topology. Hence, there exists a subsequence $D_{n_k}$ which converges in the Hausdorff metric. Then the subsequence necessarily converges to $D$, since $D$ has (Lipschitz) continuous boundary; hence, the above corollary applies to this subsequence. \end{remark} \begin{remark} Since we are considering the case of domains that are uniformly Lipschitz it is also possible to derive the continuity of solutions with respect to perturbations of the cavities in the Hausdorff topology in the framework of Mosco convergence, see Theorem 7.2.7 in \cite{BB} and \cite{CD}. \end{remark} \section{Reconstruction of cavities} \label{sec:recon} In \cite{BCP} the authors prove uniqueness for the inverse problem, i.e. 
that, assuming it is possible to measure the solution $u$ to \eqref{probcav} on $\Sigma$, the cavity $D$ is uniquely determined. In this section we deal with the problem of reconstructing the cavity, presenting a new rigorous algorithm based on $\Gamma$-convergence. We start by formulating a minimization problem for a functional $J$ depending on the cavity $D$ as a variable: let $u=u(D)$ be the unique $H^1(\Omega_D)$ solution of the boundary value problem \eqref{probcav}. In order to reconstruct the cavity $D$, a natural approach is to minimize a quadratic misfit functional, measuring the discrepancy on the boundary between the solution and the data, augmented by a term that penalizes the perimeter of the cavity $D$, i.e. \begin{equation}\label{minper} \min_{D\in \mathcal{D}} J(D):\, J(D) = \frac{1}{2} \int_{\Sigma}(u(D) - u_{meas})^2d\sigma+\alpha \textrm{Per}(D), \end{equation} where $\mathcal{D}$ and $\textrm{Per}(D)$ are specified in Section \ref{sec:notation}, and $\alpha>0$ is referred to as the \textit{regularization parameter}, which balances the contribution of the quadratic mismatch term and of the regularization term in $J$. In Subsection \ref{subsec:regu}, we show that the minimization problem \eqref{minper} satisfies several desirable properties and can be interpreted as a regularization strategy for the inverse problem of determining $D$ from $u_{meas}$. In Subsection \ref{subsec:delta} we propose an approximation of problem \eqref{minper} obtained by perturbing the solution map $u(D)$ appearing in it. The resulting problem is then relaxed by a phase-field approach presented in Subsection \ref{subsec:eps}. Finally, in Subsection \ref{subsec:gamma}, we prove the convergence of the introduced approximate functionals to the original $J$ in the sense of $\Gamma$-convergence, which also entails the convergence of the associated minimizers. 
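To fix ideas, the discrete structure of the functional in \eqref{minper} can be sketched as follows, for a cavity represented by a binary mask on a uniform pixel grid. The forward solver producing the trace of $u(D)$ on $\Sigma$ is left abstract, and the perimeter is approximated by the (anisotropic) total variation of the mask; all names and discretization choices here are illustrative, and are not the scheme analyzed in the paper.

```python
import numpy as np

def perimeter_tv(chi, h=1.0):
    """Discrete total variation of a binary mask `chi` on a uniform grid
    with spacing h: sum of absolute jumps along both axes. For chi equal to
    the indicator of D, this is a standard (anisotropic) surrogate of Per(D)."""
    jumps_x = np.abs(np.diff(chi, axis=0)).sum()
    jumps_y = np.abs(np.diff(chi, axis=1)).sum()
    return h * (jumps_x + jumps_y)

def J(u_trace, u_meas, chi, alpha, dsigma=1.0, h=1.0):
    """Discrete analogue of the regularized objective:
    0.5 * ||u(D) - u_meas||^2 on Sigma + alpha * Per(D).
    `u_trace` is the boundary trace computed by some forward solver
    (not implemented here), `u_meas` the measured datum on Sigma."""
    misfit = 0.5 * dsigma * np.sum((u_trace - u_meas) ** 2)
    return misfit + alpha * perimeter_tv(chi, h)
```

For an axis-aligned cavity the surrogate is exact: a $4\times 4$ block of unit pixels has discrete perimeter $16$, matching its geometric perimeter.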
\subsection{Regularization properties of the minimization problem} \label{subsec:regu} The minimization problem \eqref{minper} can be interpreted as a regularized counterpart of the inverse problem of determining $D$ from $u_{meas}$. In particular, in the following three results, we prove that for every $\alpha >0$ the functional $J$ admits a minimum, that the minimizers are stable with respect to perturbations of the datum $u_{meas}$ and that, if the noise level on the measurements converges to zero and the parameter $\alpha$ is suitably chosen, the solutions of \eqref{minper} converge to the unique solution $D$ of the inverse problem. \begin{proposition}\label{mincav} For every $\alpha >0$ there exists at least one solution to the minimization problem (\ref{minper}). \end{proposition} \begin{proof} Let $\{D_k\}_{k\geq 0}\subset\mathcal{D}$ be a minimizing sequence. Then there exists a positive constant $C$ such that $$J(D_k)\leq C,\,\,\, \forall k$$ and in particular $$\textrm{Per}(D_k) \leq C,\,\,\forall k.$$ Then by compactness (see for example \cite{AFP}, Theorem 3.39) there exists a set of finite perimeter $D_0$ such that, possibly up to a subsequence, $$ |D_k\triangle D_0|\rightarrow 0,\,\, k\rightarrow \infty. $$ Then by the lower semicontinuity property of the perimeter functional (see \cite{EG}, Section 5.2.1, Theorem 1) it follows that $$ \textrm{Per}(D_0)\leq \liminf_{k\rightarrow\infty}\textrm{Per}(D_k). $$ Moreover, by Remark \ref{convinmisur} and Proposition \ref{compactD} we may assume that the sequence also converges in the Hausdorff metric to $D_0\in \mathcal{D}$. Hence, by Corollary \ref{limitdom} it follows that \[ \int_{\Sigma}(u(D_k) - u_{meas})^2d\sigma\rightarrow \int_{\Sigma}(u(D_0) - u_{meas})^2d\sigma\,\,\textrm{as }k\rightarrow\infty. \] Finally, \[ J(D_0)\leq \liminf_{k\rightarrow\infty}J(D_k)=\lim_{k\rightarrow\infty}J(D_k)=\inf_{D\in \mathcal{D}}J(D) \] and the claim follows. 
\end{proof} \begin{proposition} The solutions of (\ref{minper}) are stable with respect to perturbations of the datum $u_{meas}$, i.e.\ if $\{u_k\}\subset L^2(\Sigma)$ converges to $u_{meas}$ in $L^2(\Sigma)$ as $k\rightarrow \infty$, then the solutions $D_k$ of (\ref{minper}) with datum $u_k$ are such that, up to subsequences, $$ d_H(D_k,\bar D)\rightarrow 0,\,\, \textrm{ as }k\rightarrow \infty, $$ where $\bar D\in \mathcal{D}$ is a solution of (\ref{minper}) with datum $u_{meas}$. \end{proposition} \begin{proof} Observe that for any $D_k$ \[ \frac{1}{2} \int_{\Sigma}(u(D_k) - u_k)^2d\sigma+\alpha \textrm{Per}(D_k)\leq \frac{1}{2} \int_{\Sigma}(u(D) - u_k)^2d\sigma+\alpha \textrm{Per}(D),\,\,\, \forall D\in\mathcal{D}. \] Hence $\textrm{Per}(D_k)\leq K$ and, possibly up to subsequences, we have that $$ d_H(D_k,\bar D)\rightarrow 0,\,\, k\rightarrow \infty $$ for some $\bar D\in\mathcal{D}$, with $$ \textrm{Per}(\bar D)\leq \liminf_{k\rightarrow\infty}\textrm{Per}(D_k). $$ Furthermore, by Corollary \ref{limitdom}, $$ u(D_k)\rightarrow u(\bar D),\,\,k\rightarrow \infty \textrm{ in } L^2(\Sigma), $$ implying \[ \begin{aligned} J(\bar D)&\leq \liminf_{k\rightarrow\infty}\frac{1}{2} \int_{\Sigma}(u(D_k) - u_k)^2d\sigma+\alpha \textrm{Per}(D_k)\\ &\leq \lim_{k\rightarrow\infty}\frac{1}{2} \int_{\Sigma}(u(D) - u_k)^2d\sigma+\alpha \textrm{Per}(D)=\frac{1}{2} \int_{\Sigma}(u(D) - u_{meas})^2d\sigma+\alpha \textrm{Per}(D),\,\,\,\forall D\in \mathcal{D}, \end{aligned} \] and the claim follows. \end{proof} We now prove that the solutions to the minimization problem (\ref{minper}) converge, as $\alpha\rightarrow 0$, to the unique solution of the inverse problem defined at the beginning of this section. \begin{proposition} Assume that a solution $\tilde D\in \mathcal{D}$ to the inverse problem corresponding to the datum $u_{meas}$ exists. 
For any $\eta>0$ let $(\alpha(\eta))_{\eta>0}$ be such that $\alpha(\eta)=o(1)$ and $\frac{\eta^2}{\alpha(\eta)} $ is bounded as $ \eta\rightarrow 0$.\\ Furthermore, let $D_{\eta}$ be a solution to the minimization problem (\ref{minper}) with $\alpha=\alpha(\eta)$ and datum $u_{\eta}\in L^2(\Sigma)$ satisfying $\|u_{meas}-u_{\eta}\|_{L^2(\Sigma)}\leq \eta$. Then $$ D_{\eta}\rightarrow \tilde D $$ in the Hausdorff metric as $\eta\rightarrow 0$. \end{proposition} \begin{proof} Consider the solution $\tilde D$ to the inverse problem corresponding to the datum $u_{meas}$. By the definition of $D_{\eta}$, \begin{equation}\label{11} \begin{aligned} \frac{1}{2} \int_{\Sigma}(u(D_{\eta}) - u_{\eta})^2d\sigma+\alpha\textrm{Per}(D_{\eta})&\leq \frac{1}{2} \int_{\Sigma}(u(\tilde D) - u_{\eta})^2d\sigma+\alpha \textrm{Per}(\tilde D)\\ &=\frac{1}{2} \int_{\Sigma}(u_{meas} - u_{\eta})^2d\sigma+\alpha \textrm{Per}(\tilde D)\leq \eta^2+\alpha \textrm{Per}(\tilde D)\,. \end{aligned} \end{equation} In particular, \[ \textrm{Per}(D_{\eta})\leq \frac{\eta^2}{\alpha}+\textrm{Per}(\tilde D)\leq C\,. \] Hence, arguing as in Proposition \ref{mincav}, possibly up to subsequences, $$ d_H(D_{\eta}, D_0)\rightarrow 0\,\, \textrm{ as } \eta\rightarrow 0 $$ for some $ D_0\in \mathcal{D}$. Passing to the limit in (\ref{11}) as $\eta\rightarrow 0$ we derive $$ \int_{\Sigma}(u(D_{\eta}) - u_{\eta})^2d\sigma\rightarrow 0, $$ and hence also $$ \frac{1}{2}\int_{\Sigma}(u(D_{\eta}) - u_{meas})^2d\sigma\leq\int_{\Sigma}(u(D_{\eta}) - u_{\eta})^2d\sigma+\int_{\Sigma}(u_{meas}- u_{\eta})^2d\sigma\rightarrow 0. $$ By Corollary \ref{limitdom}, from the last relation we have $$ u(D_0)=u_{meas}\,\,\text{ on } \Sigma $$ and, by the uniqueness result for the inverse problem proved in \cite{BCP}, this implies $D_0=\tilde D$, which concludes the proof. 
\end{proof} \bigskip Before concluding the section, we observe that the minimization problem \eqref{minper} can be equivalently formulated in terms of the indicator function $v$ of $\Omega_D$ as follows: \begin{equation}\label{minv} \min_{v\in X_{0,1}} J(v):\, J(v) = \frac{1}{2} \int_{\Sigma}(u(v) - u_{meas})^2d\sigma+\alpha \textrm{TV}(v), \end{equation} where \[ X_{0,1}= \{ v \in BV(\Omega): v(x)\equiv\chi_{\Omega_D} \text{ a.e. in $\Omega$ }, D\in\mathcal{D} \}, \] and $u(v):=u(D)$, where $v = \chi_{\Omega_D}$ and $u(D)$ is the solution of the boundary value problem \eqref{probcav}. As a consequence of the proof of Proposition \ref{mincav}, the functional $J$ is lower semicontinuous with respect to the $L^1$ topology. \subsection{Filling the cavity with a fictitious material} \label{subsec:delta} In order to address the minimization problem \eqref{minv} numerically, we follow the approach proposed in \cite{BC} for a topological optimization problem in linear elasticity, which consists in first filling the cavity $D$ with a fictitious material of very small conductivity $\delta>0$ and considering, for $v\in X_{0,1}$, the transmission boundary value problem \begin{equation} \left\{ \begin{aligned} -\textrm{div}(a_{\delta}(v) \nabla u) + vu^3 &= f \qquad \text{in } \Omega \\ \displaystyle{\frac{\partial u}{\partial\mathbf{n}}}&=0,\qquad \text{on } \partial \Omega, \end{aligned} \right. \label{eq:probclassic} \end{equation} where $a_{\delta}(v)=\delta+(1-\delta)v$. Note that, by Proposition 2.1 in \cite{BRV}, problem (\ref{eq:probclassic}) has a unique solution $u_\delta=u_{\delta}(v)\in H^1(\Omega)$ for any fixed $\delta>0$. 
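As an illustration of how the fictitious-material formulation can be discretized, the following sketch solves a one-dimensional finite-difference analogue of \eqref{eq:probclassic} by a Newton linearization of the cubic term. This is only a toy scheme under simplifying assumptions (uniform grid on $[0,1]$, nodal reaction and load, dense linear algebra); it is not the solver used in Section \ref{sec:numerics}.

```python
import numpy as np

def a_delta(v, delta):
    # fictitious-material coefficient: a_delta(v) = delta + (1 - delta) v
    return delta + (1.0 - delta) * v

def solve_transmission_1d(v, f, delta, n_iter=50, tol=1e-12):
    """Newton iteration for a 1D finite-difference analogue of the
    transmission problem -(a_delta(v) u')' + v u^3 = f on [0, 1] with
    homogeneous Neumann conditions; `v` and `f` hold nodal values."""
    n = v.size
    h = 1.0 / (n - 1)
    a_mid = a_delta(0.5 * (v[:-1] + v[1:]), delta)  # coefficient at midpoints
    # stiffness matrix for -(a u')' with natural (Neumann) boundary conditions
    A = np.zeros((n, n))
    for i in range(n - 1):
        k = a_mid[i] / h**2
        A[i, i] += k
        A[i + 1, i + 1] += k
        A[i, i + 1] -= k
        A[i + 1, i] -= k
    u = np.ones(n)  # positive initial guess
    for _ in range(n_iter):
        # Newton linearization of the cubic term: u^3 ~ u0^3 + 3 u0^2 (u - u0)
        u_new = np.linalg.solve(A + np.diag(3.0 * v * u**2), f + 2.0 * v * u**3)
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    return u
```

A simple consistency check: with $v\equiv 1$ (no cavity) and constant source $f\equiv c$, the constant function $u\equiv c^{1/3}$ solves the problem exactly, and the iteration reproduces it, in agreement with the bound \eqref{boundsu}.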
Also, under Assumption \ref{as:4} on $f$, one can extend the truncation argument introduced in \cite{BCP} to derive an estimate similar to (\ref{boundsu}) in Section 3, obtaining \begin{equation}\label{estimateudelta} \|u_{\delta}\|_{L^{\infty}(\Omega)}\leq C, \end{equation} and, using the variational formulation for $u_{\delta}$ and estimate (\ref{estimateudelta}), \begin{equation}\label{estimategradudelta} \int_{\Omega}a_{\delta}(v)|\nabla u_{\delta}|^2\leq C, \end{equation} where $C$ does not depend on $\delta$. It is now natural to replace the minimization problem (\ref{minv}) with the following one: \begin{equation}\label{mindelta} \min_{v\in X_{0,1}} J_{\delta}(v):\, J_{\delta}(v) = \frac{1}{2} \int_{\Sigma}(u_{\delta}(v) - u_{meas})^2d\sigma+\alpha \textrm{TV}(v). \end{equation} In the sequel we will use the following continuity result for solutions to Problem (\ref{eq:probclassic}) with respect to $v\in X_{0,1}$ in the $L^1$ topology. More precisely: \begin{proposition}\label{cont} Let $f$ satisfy Assumption \ref{as:4} in Section 2. Then, if $\{v_n\}$ is a sequence in \\$\tilde{X}=\{v\in L^1(\Omega;[0,1]): v=1 \text{ a.e. in }\Omega_{d_0}\}$ such that $v_n\rightarrow \overline{v}\in\tilde{X}$ in $L^1(\Omega)$, it follows that \[ \int_{\Sigma}(u_{\delta}^n(v_n) - u_{meas})^2d\sigma\rightarrow\int_{\Sigma}(u_{\delta}(\overline{v}) - u_{meas})^2d\sigma,\,\, \text{ as }n\rightarrow \infty, \] where $u_{\delta}^n(v_n)$ denotes the solution to \eqref{eq:probclassic} corresponding to $v=v_n$ and $u_{\delta}(\overline{v})$ denotes the solution to \eqref{eq:probclassic} corresponding to $v=\overline{v}$. \end{proposition} \begin{proof} Let $w_n=u_n- \overline{u}$, where, to simplify the notation, we have set $ \overline{u}:=u_{\delta}(\overline{v})$ and $u_n:=u_{\delta}^n(v_n)$. 
Then an easy computation shows that $w_n$ is a solution of the problem \begin{equation} \left\{ \begin{aligned} -\textrm{div}(a_{\delta}(v_n) \nabla w_n) + v_nq_nw_n &= \textrm{div}((a_{\delta}(v_n)-a_{\delta}(\overline{v})) \nabla \overline{u})-(v_n-\overline{v})\overline{u}^3 \qquad \text{in } \Omega \\ \displaystyle{\frac{\partial w_n}{\partial\mathbf{n}}}&=0,\qquad \text{on } \partial \Omega, \end{aligned} \right. \label{eq:probclassic2} \end{equation} where $q_n=u^2_n+u_n\overline{u}+\overline{u}^2$. Multiplying the equation by $w_n$ and integrating by parts over $\Omega$ we obtain \begin{equation} \int_{\Omega}a_{\delta}(v_n) |\nabla w_n|^2+\int_{\Omega}v_nq_nw_n^2 =\int_{\Omega}\left(a_{\delta}(v_n)-a_{\delta}(\overline{v})\right) \nabla \overline{u}\cdot\nabla w_n-\int_{\Omega}(v_n-\overline{v})\overline{u}^3w_n. \label{eq:diff} \end{equation} Set \[ I:=\int_{\Omega}a_{\delta}(v_n) |\nabla w_n|^2+\int_{\Omega}v_nq_nw_n^2. \] Then \begin{equation}\label{I} I\geq \delta \int_{\Omega} |\nabla w_n|^2+\int_{\Omega_{d_0}}q_nw_n^2\geq \delta\left(\int_{\Omega} |\nabla w_n|^2+\int_{\Omega_{d_0}}q_nw_n^2\right)\geq C \left(\|\nabla w_n\|^2_{L^2(\Omega)}+\|w_n\|^2_{L^2(\Omega_{d_0})}\right). \end{equation} The last inequality follows from an application of the Poincar\'e inequality (see Theorem A1 in \cite{BCMP}). In fact, setting $g_n=\frac{1}{\int_{\Omega_{d_0}}\,q_n}q_n\chi_{\Omega_{d_0}}$ and $\overline{w}_n=\int_{\Omega_{d_0}}g_nw_n$, we have \[ \|w_n\|^2_{L^2(\Omega_{d_0})}\leq 2(S^2\|\nabla w_n\|^2_{L^2(\Omega)}+|\Omega_{d_0}|\overline{w}^2_n). \] Observe now that \[ \overline{w}^2_n\leq\frac{1}{\int_{\Omega_{d_0}}q_n}\int_{\Omega_{d_0}}q_nw^2_n \] and that $$ q_n\geq \frac{3}{4}\overline{u}^2. $$ Hence, $$ \int_{\Omega_{d_0}}q_n \geq \frac{3}{4}\int_{\Omega_{d_0}}\overline{u}^2=m_0>0, $$ since, if $m_0=0$, this would imply $\overline{u}=0$ a.e. in $\Omega_{d_0}$. Then, the equation for $\overline{u}$ and $v=1$ a.e. 
in $\Omega_{d_0}$ would imply $f=0$ a.e. in $\Omega_{d_0}$, which is a contradiction. Therefore \[ \overline{w}^2_n\leq \frac{1}{m_0}\int_{\Omega_{d_0}}q_nw^2_n \] and \[ \|w_n\|^2_{L^2(\Omega_{d_0})}\leq C\left(\|\nabla w_n\|^2_{L^2(\Omega)}+\int_{\Omega_{d_0}}q_nw^2_n\right), \] which trivially implies the last inequality in (\ref{I}). Moreover, again by the Poincar\'e inequality, \begin{align} \|w_n\|^2_{H^1(\Omega)}\leq C\left(\|\nabla w_n\|^2_{L^2(\Omega)}+\|w_n\|^2_{L^2(\Omega_{d_0})}\right)\leq \int_{\Omega}|a_{\delta}(v_n)-a_{\delta}(\overline{v})| |\nabla \overline{u}||\nabla w_n|+\int_{\Omega}|(v_n-\overline{v})\,\overline{u}^3\,w_n|\\ \leq C\left(\left(\int_{\Omega}|v_n-\overline{v}|^2 |\nabla \overline{u}|^2\right)^{1/2}+ \left(\int_{\Omega}|v_n-\overline{v}|^2 | \overline{u}|^6\right)^{1/2}\right)\|w_n\|_{H^1(\Omega)}, \end{align} which gives \[ \|w_n\|_{H^1(\Omega)}\leq C\left(\left(\int_{\Omega}|v_n-\overline{v}|^2 |\nabla \overline{u}|^2\right)^{1/2}+\left(\int_{\Omega}|v_n-\overline{v}|^2 | \overline{u}|^6\right)^{1/2}\right). \] Finally, since $v_n\rightarrow \overline{v}$ in $L^1(\Omega)$ and hence, possibly up to a subsequence, a.e. in $\Omega$, applying the dominated convergence theorem we get that, as $n\rightarrow \infty$, \[ \|w_n\|_{H^1(\Omega)}\rightarrow 0, \] which implies that \[ \|w_n\|_{L^2(\Sigma)}\rightarrow 0, \] and the claim follows. \end{proof} \subsection{Phase-field relaxation} \label{subsec:eps} In practice it is still difficult to address the minimization problem \eqref{mindelta} numerically, because of the non-differentiability of the cost functional and the non-convexity of the space $X_{0,1}$. Therefore, we will consider a further regularization in which the total variation is approximated by a Ginzburg--Landau type energy and the space $X_{0,1}$ is replaced by a more regular convex space of phase-field variables. 
This kind of approach has been used extensively in shape and topology optimization and also in the context of inverse problems (see for example \cite{BRV}, \cite{BZ}, \cite{BC}, \cite{R}, \cite{LY}, \cite{DES}). Hence, we introduce a phase-field relaxation of the total variation, which will allow us to operate with more regular functions with values in $[0,1]$, and formulate a relaxed version of Problem (\ref{mindelta}) similarly to \cite{BRV}.\\ To this purpose, we define the following subset of $H^1(\Omega)$ \[ \mathcal{K} = \{ v \in H^1(\Omega;[0,1]): v=1 \text{ a.e. in }\Omega_{d_0}\}.\]\\ For every $\varepsilon >0$, we will consider the optimization problem: \begin{equation}\label{minrel} \min_{v \in \mathcal{K}} J_{\delta,{\varepsilon}}(v); \quad J_{\delta,{\varepsilon}}(v) =\frac{1}{2} \int_{\Sigma}(u_{\delta}(v) - u_{meas})^2d\sigma+ \alpha \int_{\Omega}\left( \gamma\varepsilon|\nabla v|^2 + \frac{\gamma}{\varepsilon} v^2(1-v)^2 \right), \end{equation} where $\gamma$ is a suitable normalization constant. We have the following \begin{proposition} \label{mindelteps} For every fixed $\delta>0$ and $ \varepsilon > 0$, the minimization problem (\ref{minrel}) has a solution $v_{\delta,\varepsilon} \in \mathcal{K}$. Furthermore, if $u^n_{meas}\rightarrow u_{meas}$ in $L^2(\Sigma)$ and $v^n_{\delta,\varepsilon}$ denotes the solution of Problem (\ref{minrel}) with datum $u_{meas}^n$, then, possibly up to a subsequence, we have that $v^n_{\delta,\varepsilon}\rightarrow v_{\delta,\varepsilon}$ in $H^1(\Omega)$, where $v_{\delta,\varepsilon}\in \mathcal{K}$ is a solution of Problem (\ref{minrel}) with datum $u_{meas}$. \end{proposition} We omit the proof of Proposition \ref{mindelteps} since it can be found in \cite{BRV}, see Proposition 2.6 and Proposition 2.7 therein.
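The scaling encoded in the Ginzburg-Landau term of \eqref{minrel} can be checked numerically in one dimension. The following Python sketch is purely illustrative (the grid, the logistic test profile, and the choice $\gamma=3$, which makes a single optimal interface carry unit energy since $2\int_0^1 v(1-v)\,dv=1/3$, are assumptions, not part of the method):

```python
import numpy as np

def ginzburg_landau_energy(v, dx, eps, gamma):
    # Discrete 1D analogue of the phase-field energy
    #   int gamma*eps*|v'|^2 + (gamma/eps)*v^2*(1-v)^2 dx.
    dv = np.diff(v) / dx
    grad_term = gamma * eps * np.sum(dv ** 2) * dx
    pot_term = (gamma / eps) * np.sum(v ** 2 * (1.0 - v) ** 2) * dx
    return grad_term + pot_term

eps = 0.01
x = np.linspace(-1.0, 1.0, 20001)
# Optimal interface profile: solves eps * v' = v(1-v), a logistic of width eps.
v = 1.0 / (1.0 + np.exp(-x / eps))

# With gamma = 3, a single interface carries energy close to 1 (Modica-Mortola).
E = ginzburg_landau_energy(v, x[1] - x[0], eps, gamma=3.0)
```

For the optimal profile $\varepsilon v'=v(1-v)$ the two terms of the energy contribute equally, and the total tends to $1$ as $\varepsilon\to 0$, mirroring the perimeter (here, the single interface) that the Ginzburg-Landau term approximates.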
At this stage it is natural to address the following two problems: the possible $\Gamma$-convergence of $J_{\delta,{\varepsilon}}$ to $J_{\delta}$ (defined in \eqref{mindelta}) as $\varepsilon \rightarrow 0$ and the $\Gamma$-convergence of $J_{\delta}$ to $J$ (defined in (\ref{minv})) as $\delta \rightarrow 0$. This would imply, thanks to the fundamental theorem of $\Gamma$-convergence, that minima of $J$ could be approximated by minima of $J_{\delta,{\varepsilon}}$ for $\varepsilon$ and $\delta$ sufficiently small. In order to prove the convergence in $\varepsilon$ we need to adapt the proof of Modica and Mortola to our case but, compared to the analysis in \cite{DES} and \cite{BRV}, here the functional $J_{\delta}$ is defined on a subset, $X_{0,1}\subset BV(\Omega)$, of characteristic functions of more regular domains. For this reason, we are forced to restrict the relaxed functional $J_{\delta,\varepsilon}$ to a suitable subset of $\mathcal{K}$. For $\eta\in(0,1)$ let us define \[ \mathcal{K}_{\eta}=\{v\in\mathcal{K}: \{v\geq \eta \}={\Omega}_D \text{ a.e. } \text{for some }D\in\mathcal{D}\}. \] Note that, though $\Omega_D$ is an open set, the definition makes sense since $\partial D$ has zero Lebesgue measure. Though this set is not convex, it is a weakly closed subset of $\mathcal{K}$ with respect to the $H^1(\Omega)$ topology, which guarantees the existence of minima of the functional $J_{\delta,{\varepsilon}}$ in $\mathcal{K}_{\eta}$. In fact, we can prove \begin{lemma}\label{wclosed} Let $\{v_k\}\subset \mathcal{K}_{\eta}$ be a sequence converging weakly in $H^1(\Omega)$ to an element $v$. Then $v\in\mathcal{K}_{\eta}$. \end{lemma} \begin{proof} Let us define ${\Omega}_{D_k}:=\{v_k\geq \eta \}$. By Proposition \ref{compactD}, possibly up to a subsequence, we have that \[D_k\rightarrow D_0\in \mathcal{D}\quad\text{and}\quad\partial D_k\rightarrow \partial D_0 \] in the Hausdorff topology.
Then, $D_k$ also converges to $D_0$ in measure, which implies that \[\chi_{{\Omega}_{D_k} }\rightarrow \chi_{{\Omega}_{D_0}} \] in $L^1(\Omega)$ and almost everywhere in $\Omega$. Let us now show that \[ \{v\geq \eta\}={\Omega}_{D_0}, \textrm{ a.e.} \] In fact, since \[ \eta\chi_{\Omega_{D_k}}\leq v_k\leq \eta+(1-\eta)\chi_{\Omega_{D_k}} \] and noting that $v_k$, possibly up to a subsequence, converges a.e. to $v$, passing to the limit pointwise we have \[ \eta\chi_{\Omega_{D_0}}\leq v\leq \eta+(1-\eta)\chi_{\Omega_{D_0}}\,\,\text{a.e.} \] From the left-hand side inequality it follows that ${\Omega}_{D_0}\subseteq \{v\geq \eta \}$, while from the right-hand side inequality we get that $ \{v\geq \eta\}\subseteq {\Omega}_{D_0}$, concluding the proof. \end{proof} As an immediate consequence we get \begin{corollary} For every fixed $\delta>0$ and $ \varepsilon > 0$, the minimization problem $$\min_{v \in \mathcal{K}_{\eta}} J_{\delta,{\varepsilon}}(v)$$ where $J_{\delta,{\varepsilon}}$ is defined in \eqref{minrel} has a solution $v_{\delta,\varepsilon}$. Furthermore, if $u^n_{meas}\rightarrow u_{meas}$ in $L^2(\Sigma)$ and $v^n_{\delta,\varepsilon}$ denotes a solution with datum $u_{meas}^n$, then, possibly up to a subsequence, we have that $v^n_{\delta,\varepsilon}\rightarrow v_{\delta,\varepsilon}$ in $H^1(\Omega)$, where $v_{\delta,\varepsilon}\in \mathcal{K}_{\eta}$ is a solution with datum $u_{meas}$. \end{corollary} \subsection{Analysis of the $\Gamma$-limits} \label{subsec:gamma} We now investigate the asymptotic properties of the introduced functionals: in particular, we first concentrate on the limit of $J_{\delta,{\varepsilon}}$ as $\varepsilon \rightarrow 0$, in the sense of $\Gamma$-convergence. The proof of the next theorem will clarify our choice of the subset $\mathcal{K}_{\eta}$.
For $v\in L^1(\Omega)$, consider the following extensions of the cost functionals \begin{equation} \tilde{J}_{\delta}(v) = \left\{ \begin{aligned} J_{\delta}(v) & \quad \textit{if $v \in X_{0,1}$}\\ \infty & \quad \textit{otherwise in } L^1(\Omega) \end{aligned} \right. \label{minregbis1} \end{equation} and of (\ref{minrel}) \begin{equation} \tilde{J}_{\delta,\varepsilon}(v) = \left\{ \begin{aligned} J_{\delta,\varepsilon}(v) & \quad \textit{if $v \in \mathcal{K}_{\eta}$}\\ \infty & \quad \textit{otherwise in } L^1(\Omega) \end{aligned} \right. \label{minrel2} \end{equation} Then \begin{theorem} \label{thm:Gconv_eps} Consider a sequence $\{\varepsilon_k\}$ s.t. $\varepsilon_k \rightarrow 0$ as $k\rightarrow +\infty$. Then, the functionals $\tilde{J}_{\delta,\varepsilon_k}$ converge to $\tilde{J}_{\delta}$ in $L^1(\Omega)$ in the sense of $\Gamma$-convergence. \end{theorem} \begin{proof} Write $$ \tilde{J}_{\delta,\varepsilon}(v)=F_{\delta}(v)+G_{\varepsilon}(v) $$ where \[F_{\delta}(v):= \frac{1}{2} \int_{\Sigma}(u_{\delta}(v) - u_{meas})^2d\sigma\] for any $v\in L^1(\Omega)$ and \[G_{\varepsilon}(v):= \int_{\Omega}\left( \gamma\varepsilon|\nabla v|^2 + \frac{\gamma}{\varepsilon}v^2(1-v)^2\right)\] for $v\in\mathcal{K}_{\eta}$ and $\infty$ otherwise in $L^1(\Omega)$.\\ Then, from the continuity of $F_{\delta}(v)$ in $L^1(\Omega)$ derived in Proposition \ref{cont}, it is enough to show the $\Gamma$-convergence of $G_{\varepsilon}$ to $\operatorname{TV}$ as $\varepsilon \rightarrow 0$. Then by Remark 1.7 in \cite{dalM} it follows that $ \tilde{J}_{\delta,\varepsilon}$ $\Gamma$-converges to $\tilde{J}_{\delta}$ as $\varepsilon \rightarrow 0$. \\ Let us prove the $\Gamma$-convergence of $G_{\varepsilon}$ as $\varepsilon \rightarrow 0$.\\ \textit{(i)} We first prove the $\liminf$ property, i.e., for every sequence $\varepsilon_k\rightarrow 0$ and for every sequence $\{v_k\} \subset L^1(\Omega)$ s.t.
$v_k \xrightarrow{L^1} v $, \[\operatorname{TV}(v) \leq \liminf_k {G}_{\varepsilon_k}(v_{k}).\] Consider a sequence $v_{k}$ converging in $L^1(\Omega)$ to a function $v\in L^1(\Omega)$ as $\varepsilon_k\rightarrow 0$ for $k\rightarrow \infty$. We can assume that $\{v_k\}\subset \mathcal{K}_{\eta}$ since otherwise the claim would follow trivially. Moreover, by \cite{MM} we know that $v=\chi_{\Omega_D}$ for some $D\subset\Omega$ with finite perimeter and $\operatorname{TV}(v)\le \liminf_k {G}_{\varepsilon_k}(v_{k})$. \\ Finally, by reasoning as in the proof of Lemma \ref{wclosed} we obtain the following relation a.e. \[ \eta\chi_{\Omega_{D_0}}\leq \chi_{\Omega_D}\leq \eta+(1-\eta)\chi_{\Omega_{D_0}}, \] for some $D_0\in \mathcal D$, from which it follows that a.e. \[ \chi_{\Omega_D}=\chi_{\Omega_{D_0}}\in X_{0,1}. \] \textit{(ii)} Let us now prove the $\limsup$ property: for any $v\in L^1(\Omega)$ there exists a sequence $\{v_k\}$ converging to $v$ in $L^1(\Omega)$ as $k\rightarrow +\infty$ such that \[\limsup_k G_{\varepsilon_k}(v_{k})\leq \operatorname{TV}(v).\] Let us first observe that the above inequality can be checked on a suitable dense subset of $X_{0,1}$, see for example Remark 1.29 of \cite{B}. Therefore, we may restrict our attention to the set \[\mathcal{L}=\{ \chi_{\Omega_D}, D\subset\Omega,\,\,D\in \mathcal{D},\,\, \partial D\in C^{\infty}\}.\] Actually, by Theorem 1.12 in \cite{Ver} (see in particular properties (ii) and (iii)) it follows that for any set $D\in\mathcal{D}$ there exists a sequence of smooth domains $D_k\in\mathcal{D}$ (i.e.
$\forall k$, $\partial D_k$ is $\mathcal{C}^{\infty}$ and satisfies assumption $3$ with constants $r_0$, $L_0$) such that \[ {\partial{D_k}}\rightarrow {\partial D}\quad\textrm{in the Hausdorff metric as }k \rightarrow +\infty. \] In particular, this implies \begin{enumerate} \item[(1)] \[ \chi_{\Omega_{D_k}}\rightarrow \chi_{\Omega_D}\,\,\textrm{ in }L^1(\Omega)\textrm{ as }k \rightarrow +\infty \] \item[(2)] \[\mathcal{H}^1(\partial D_k)\rightarrow \mathcal{H}^1(\partial D),\,\,\textrm{as }k \rightarrow +\infty\] \end{enumerate} In order to check the $\limsup$ property on $ \mathcal{L}$, we follow the standard approach by constructing a suitable recovery sequence. Hence, let us consider the Cauchy problem \[ \begin{cases} g'=g(1-g)\,\textrm{in }\mathbb{R}\\ g(0)=\eta\in (0,1) \end{cases} \] Note that the solution is globally defined, $0<g<1$, and $g$ has limits $0$ and $1$ for $t\to -\infty$ and $t\to +\infty$, respectively. Now, for any $\beta>0$ we take $M_{\beta}>0$ such that $g(M_{\beta})\ge 1-\beta$ and $g(-M_{\beta})\le \beta$, and consider the function \[ g_{\beta}(t):=\begin{cases} 1,\,\,t\in (M_{\beta}+1,\infty)\\ (1-g(M_{\beta}))(t-M_{\beta})+g(M_{\beta}),\,\,t\in [M_{\beta},M_{\beta}+1]\\ g(t),\,\,t\in [-M_{\beta},M_{\beta}]\\ g(-M_{\beta})(t+M_{\beta}+1),\,\,t\in [-M_{\beta}-1,-M_{\beta}]\\ 0,\,\,t\in (-\infty,-M_{\beta}-1) \end{cases} \] Let us now fix $v\in \mathcal{L}$, i.e. $v:=\chi_{\Omega_D}$ with $D$ smooth, and define the signed distance from $\partial D$ \[ \rho(x):= \begin{cases} -\inf_{y\in\partial D}d(x,y),\,\, x\in D\\ \inf_{y\in\partial D}d(x,y),\,\, x\in \Omega_D. \end{cases} \] Then we can define \[ v_{\varepsilon,\beta}(x):= \begin{cases} 1,\,\, \textrm{in }\{x\in\Omega:\rho(x)\geq (M_{\beta}+1)\varepsilon\}\\ g_{\beta}\left(\frac{\rho(x)}{\varepsilon}\right),\,\,\textrm{in }\{x\in\Omega:|\rho(x)|\leq (M_{\beta}+1)\varepsilon\}\\ 0,\,\,\textrm{in }\{x\in\Omega:\rho(x)\leq -(M_{\beta}+1)\varepsilon\}.
\end{cases} \] A crucial observation is that, for every positive and small enough $\varepsilon, \beta$, the function $v_{\varepsilon,\beta}\in \mathcal{K}_{\eta}$. In fact, by definition we have $v_{\varepsilon,\beta}\in H^1(\Omega;[0,1])$ and $v_{\varepsilon,\beta}(x)=g_{\beta}(0)=g(0)=\eta$ for $x\in \partial D$. Moreover, since $g_{\beta}(t)\geq \eta$ if and only if $t\geq 0$ (and by recalling that $|\partial D|=0$), we readily get $\{v_{\varepsilon,\beta}\geq \eta\}=\Omega_{D}$ a.e., with $D\in \mathcal{D}$, so that the claim follows. Now, by standard arguments, see for example \cite{M} and \cite{MM}, we can find a sequence $\{v_{\varepsilon_k,\beta_k}\}_{k=1}^{\infty}$ (with $\beta_k\rightarrow 0,\,\, \varepsilon_k\rightarrow 0$) converging in $L^1(\Omega)$ to $v$ and satisfying the $\limsup$ property. \end{proof} As a consequence, from the equicoerciveness of the functionals $G_{\varepsilon}$ and by the $\Gamma$-convergence, see for example Theorem 7.4 in \cite{dalM}, we derive the following convergence result for the solutions of Problem (\ref{minrel2}). \begin{corollary} \label{convmin} Assume $\delta>0$, $\varepsilon>0$, and let $v_{\delta,\varepsilon}$ be a minimizer of the functional (\ref{minrel2}). Then there exist a sequence $\varepsilon_k\rightarrow 0$ as $k\rightarrow +\infty$ and a function $v_{\delta}\in X_{0,1}$ such that $v_{\delta, \varepsilon_k}\rightarrow v_{\delta}$ in $L^1(\Omega)$ and $v_{\delta}$ is a minimizer of (\ref{minregbis1}). \end{corollary} \vskip 3truemm \begin{remark} \label{rem:potential} It can be proved that all the previous results, starting from Proposition \ref{mindelteps}, also hold by replacing in the relaxed functional $G_{\varepsilon}(v)$ the potential $v^2(1-v)^2$ with any nonnegative function vanishing only at $v=0$ and $v=1$. \end{remark} We are now left with proving the $\Gamma$-convergence of $J_{\delta}$ to $J$ as defined in \eqref{minv}.
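The recovery sequence used in the $\limsup$ construction can be reproduced numerically. The sketch below is illustrative only: the values $\eta=1/2$, $\beta=10^{-3}$, $\varepsilon=0.05$ and the disk-shaped cavity of radius $0.4$ are assumptions chosen for convenience ($\eta=1/2$ makes the logistic solution of $g'=g(1-g)$ symmetric, so that $M_{\beta}$ has a closed form).

```python
import numpy as np

# Illustrative parameters: eta = 1/2 (symmetric profile), beta, eps, disk radius.
eta, beta, eps = 0.5, 1e-3, 0.05

def g(t):
    # Solution of the Cauchy problem g' = g(1-g), g(0) = eta.
    return 1.0 / (1.0 + ((1.0 - eta) / eta) * np.exp(-t))

# For eta = 1/2, M is such that g(M) = 1 - beta and g(-M) = beta.
M = np.log((1.0 - beta) / beta)

def g_beta(t):
    # Piecewise profile from the proof: g clamped to 0 and 1 outside [-M-1, M+1].
    t = np.asarray(t, dtype=float)
    return np.where(t > M + 1.0, 1.0,
           np.where(t >= M, (1.0 - g(M)) * (t - M) + g(M),
           np.where(t >= -M, g(t),
           np.where(t >= -M - 1.0, g(-M) * (t + M + 1.0), 0.0))))

# Signed distance to the boundary of the disk D = {|x| < 0.4}: negative inside D.
xs = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(xs, xs)
rho = np.hypot(X, Y) - 0.4
v = g_beta(rho / eps)
```

Thresholding $v_{\varepsilon,\beta}$ at level $\eta$ recovers $\Omega_D$ away from $\partial D$, in accordance with the definition of $\mathcal{K}_{\eta}$.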
To this aim, we need to prove some preliminary results: the first concerns properties of the set $X_{0,1}$, while the second regards continuity properties of solutions to (\ref{eq:probclassic}) as $\delta\rightarrow 0$, which we derive by adapting the proof of Theorem 4.2 in \cite{R}. \begin{lemma}\label{closeX} The set $X_{0,1}$ is closed in the $L^1(\Omega)$ topology. \end{lemma} \begin{proof} Consider a sequence $\{v_n\}\subset X_{0,1}$ and assume that $v_n\rightarrow v$ in $L^1(\Omega)$. Then, possibly up to subsequences, $v_n=\chi_{\Omega_{D_n}}\rightarrow v$ pointwise a.e. in $\Omega$, where $D_n\in\mathcal{D}$. Hence, it follows that $v=\chi_{\Omega_D}$ for some measurable set $D$. Also, by Proposition \ref{compactD} it follows that, possibly up to subsequences, $D_n$ converges in the Hausdorff topology to $D_0\in \mathcal{D}$. Obviously, this also implies that $\chi_{D_n}\rightarrow \chi_{D_0}$ in $L^1(\Omega)$. Hence, up to a set of measure zero, $D=D_0$, implying that $v\in X_{0,1}$. \end{proof} \begin{proposition}\label{convudeltan} Under Assumptions \ref{as:1} - \ref{as:4} in Section 2, let $\{v_{\delta_n}\}_{n\geq 1}$ be a sequence of elements in $X_{0,1}$ converging in $L^1(\Omega)$ as $\delta_n\rightarrow 0$ to $v$. Then $v={\chi}_{\Omega_{D}}$ a.e. with $D\in\mathcal{D}$ and the traces on $\Sigma$ of the corresponding solutions to (\ref{eq:probclassic}), $u_{\delta_n}(v_{\delta_n})|_\Sigma$, converge strongly in $L^2(\Sigma)$ to $\tilde{u}|_\Sigma$, where $\tilde{u}$ is the solution to problem \eqref{probcav} with cavity $D$. \end{proposition} Since the proof of Proposition \ref{convudeltan} is long and rather technical, though instrumental to obtain the $\Gamma$-convergence result, we postpone it to the Appendix.
The previous proposition can be used to prove the $\Gamma$-convergence, as $\delta_n \rightarrow 0$, of the functionals $\tilde{J}_{\delta}$ defined in \eqref{minregbis1} to the limit functional \begin{equation} \tilde{J}(v) = \left\{ \begin{aligned} J(v) & \quad \textit{if $v \in X_{0,1}$}\\ \infty & \quad \textit{otherwise in } L^1(\Omega), \end{aligned} \right. \qquad J(v) = \frac{1}{2} \int_{\Sigma}(u(v) - u_{meas})^2d\sigma+\alpha \textrm{TV}(v), \label{minregbis} \end{equation} where $u(v)$ is the solution of \eqref{probcav} with cavity $D$ such that $v = \chi_{\Omega_D}$. \begin{theorem} \label{thm:Gconv_del} Consider a sequence $\{\delta_n \}$ s.t. $\delta_n \rightarrow 0$. Then, the functionals $\tilde{J}_{\delta_n}$ converge to $\tilde{J}$ in $L^1(\Omega)$ in the sense of $\Gamma$-convergence. \end{theorem} \begin{proof} \textit{(i)} We first prove the $\liminf$ property, i.e., for every sequence $\delta_n\rightarrow 0$ and for every sequence $\{ v_{\delta_n} \} \subset L^1(\Omega)$ s.t. $v_{\delta_n} \xrightarrow{L^1} v $, $\tilde{J}(v) \leq \liminf_n \tilde{J}_{\delta_n}(v_{\delta_n})$. Consider a sequence $v_{\delta_n}$ converging in $L^1(\Omega)$ to a function $v\in L^1(\Omega)$ as $\delta_n\rightarrow 0$ for $n\rightarrow \infty$. Then we can assume that \begin{equation}\label{uniformbound} J_{\delta_n}(v_{\delta_n})\leq C. \end{equation} In fact, if $\liminf_n \tilde{J}_{\delta_n}(v_{\delta_n})=+\infty$, then the $\liminf$ property trivially follows. Hence, possibly up to a subsequence, $ \liminf_n \tilde{J}_{\delta_n}(v_{\delta_n})=\lim_n \tilde{J}_{\delta_n}(v_{\delta_n})<+\infty$, which implies (\ref{uniformbound}).
Then $v_{\delta_n}\in X_{0,1}$ and $$ \operatorname{TV}(v_{\delta_n})\leq C $$ and by the lower semicontinuity of the total variation with respect to the $L^1$ convergence we have that \begin{equation}\label{per1} \operatorname{TV}(v)\leq \liminf_{n\rightarrow +\infty}\operatorname{TV}(v_{\delta_n})\leq C. \end{equation} Also, possibly up to subsequences, since $v_{\delta_n}=\chi_{\Omega_{D_n}}\rightarrow v$ a.e. in $\Omega$, it follows that $v=\chi_{\Omega_D}$ and $v=1\textrm{ in }\Omega_{d_0}$. Furthermore, by Lemma \ref{closeX} we have that $v\in X_{0,1}$. Finally, since $v_{\delta_n} \in X_{0,1}$, we can use Proposition \ref{convudeltan} to conclude that \begin{equation}\label{contfunct1} \int_{\Sigma}|u_{\delta_n}(v_{\delta_n})-u_{meas}|^2\rightarrow \int_{\Sigma}|u(v)-u_{meas}|^2 \end{equation} as $n\rightarrow +\infty$. Hence, using (\ref{per1}) and (\ref{contfunct1}) we have \begin{equation}\label{liminf} J(v)\leq \liminf_{n\rightarrow +\infty} J_{\delta_n}(v_{\delta_n}). \end{equation} \textit{(ii)} Let us now prove the following property, equivalent to the $\limsup$ property: for any $v\in L^1(\Omega)$ there exists a sequence $\{v_{\delta_n}\}$ converging to $v$ in $L^1(\Omega)$ such that $\limsup_{n\rightarrow \infty}\tilde{J}_{\delta_n}(v_{\delta_n})\leq \tilde{J}(v)$. Let $v\in L^1(\Omega)$. If $\tilde{J}(v)=+\infty$, the property trivially follows. So, we can assume that $v\in X_{0,1}$. Consider now the constant sequence $\{v_{\delta_n}\}_{n\geq 0}=\{v\}_{n\geq 0}$.
Then $$ \limsup_{n\rightarrow +\infty} J_{\delta_n}(v_{\delta_n})=\limsup_{n\rightarrow +\infty}\int_{\Sigma}|u_{\delta_n}(v)-u_{meas}|^2+\alpha \operatorname{TV}(v) $$ and by Proposition \ref{convudeltan}, possibly up to subsequences, it follows that $$ \limsup_{n\rightarrow +\infty}\int_{\Sigma}|u_{\delta_n}(v)-u_{meas}|^2=\lim_{n\rightarrow +\infty}\int_{\Sigma}|u_{\delta_n}(v)-u_{meas}|^2=\int_{\Sigma}|u(v)-u_{meas}|^2 $$ where $u(v)$ is the solution of (\ref{probcav}) corresponding to $v=\chi_{\Omega_D}$, and hence we finally obtain that $$ \limsup_{n\rightarrow +\infty} J_{\delta_n}(v_{\delta_n})=\lim_{n\rightarrow +\infty}\int_{\Sigma}|u_{\delta_n}(v)-u_{meas}|^2+\alpha \operatorname{TV}(v)=\int_{\Sigma}|u(v)-u_{meas}|^2+\alpha \operatorname{TV}(v)=J(v), $$ concluding the proof. \end{proof} From \cite{AFP} it follows that the functionals $J_{\delta}$ are equicoercive in $L^1(\Omega)$, and hence, as a consequence of the above theorem and the fundamental theorem of $\Gamma$-convergence (see for example Theorem 7.4 of \cite{dalM}), we have \begin{corollary} For any $\delta>0$ let $v_{\delta}$ be a minimizer of (\ref{mindelta}). Then there exist a sequence $\delta_n\rightarrow 0$ as $n\rightarrow +\infty$ and a function $v=\chi_{\Omega_D}\in X_{0,1}$ such that $v_{\delta_n}\rightarrow v$ in $L^1(\Omega)$ and $D$ is a solution to Problem (\ref{minper}). \end{corollary} \section{Reconstruction algorithm} \label{sec:numerics} In this section, we describe a numerical algorithm which takes advantage of the relaxation strategy proposed in Section \ref{sec:recon} for the reconstruction of cavities. In the first subsection, we analyze an algorithm tackling problem \eqref{minrel}, namely the minimization of the functional $J_{\delta,\varepsilon}$ for fixed values of $\delta,\varepsilon$, where we replace the potential $v^2(1-v)^2$ in $G_\varepsilon(v)$ by $v(1-v)$.
This choice is in line with the assumptions of Remark \ref{rem:potential} and is preferable for numerical reasons, because of the efficiency of the implementation and of the superior performance in the reconstruction. A similar algorithm, which was proposed in \cite{DES} for a linear equation, has already been studied in \cite{BRV} for the reconstruction of inclusions in the nonlinear counterpart considered here: therefore, we summarize the main convergence results, and outline a more efficient implementation. In the last subsection, instead, we propose an algorithm tackling problem \eqref{minv}, namely, the minimization of $J$ and thus the (stable) reconstruction of cavities. \subsection{An iterative algorithm for the relaxed problem} When fixing $\delta, \varepsilon>0$, the problem of minimizing $J_{\delta,\varepsilon}$ over $\mathcal{K}$ is analogous to the one discussed for conductivity inclusions in \cite{BRV}, in which $\delta$ is replaced by the parameter $k$ denoting the physical conductivity inside the inclusion. To minimize the relaxed functional $J_{\delta,\varepsilon}$, we can take advantage of its differentiability. In particular, it is possible to prove the following result: \begin{proposition}(see \cite[Proposition 2.10]{BRV}) Under Assumptions \ref{as:1} - \ref{as:4} in Section 2, for every fixed $\delta, \varepsilon>0$, the operator $u_\delta \colon \mathcal{K} \rightarrow H^1(\Omega)$ defined by \eqref{eq:probclassic} is Fréchet-differentiable, and so is $J_{\delta,\varepsilon}: \mathcal{K} \rightarrow \mathbb{R}$.
Moreover, for every $v \in \mathcal{K}$ and $\vartheta \in H^1(\Omega) \cap L^\infty(\Omega)$, \begin{equation} \begin{aligned} J_{\delta,\varepsilon}'(v)[\vartheta] = & \int_{\Omega}(1-\delta) \nabla u_\delta(v) \cdot \nabla p_\delta(v) \vartheta + \int_{\Omega} (1-\delta) u_\delta(v)^3 p_\delta(v) \vartheta \\ &+ 2 \alpha \varepsilon \int_{\Omega}\nabla v \cdot \nabla \vartheta + \frac{\alpha}{\varepsilon} \int_{\Omega}(1-2v)\vartheta; \end{aligned} \label{eq:Frechet_Jade} \end{equation} where $p_\delta\colon \mathcal{K} \rightarrow H^1(\Omega)$ is the solution map of the \textit{adjoint problem}: \begin{equation} \int_{\Omega} a_\delta(v) \nabla p_\delta(v) \cdot \nabla \psi + \int_{\Omega} a_\delta(v) 3 u_\delta(v)^2 p_\delta(v) \psi = \int_{\partial \Omega}{(u_\delta(v)-u_{meas})\psi} \qquad \forall \psi \in H^1(\Omega). \label{eq:adjoint} \end{equation} \label{prop:Frechet_Jade} \end{proposition} Taking advantage of the differentiability of $J_{\delta,\varepsilon}$ and of the convexity of $\mathcal{K}$, we can derive the following necessary optimality condition: \begin{equation} \text{if}\quad v^* \in \argmin_{v \in \mathcal{K}} J_{\delta,\varepsilon}(v), \qquad \text{then} \quad J_{\delta,\varepsilon}'(v^*)[v-v^*] \geq 0 \quad \forall v \in \mathcal{K}. \label{eq:OC_Jade} \end{equation} Notice that such a condition is not sufficient, unless additional properties hold, such as the convexity of the functional $J_{\delta,\varepsilon}$. We consider the following iterative algorithm, which takes advantage of the Fréchet differentiability of $J_{\delta,\varepsilon}$: in particular, the rationale of our strategy is to tackle the minimization of $J_{\delta,\varepsilon}$ by means of a sequence of linearized problems at some iterates $v^{(k)}$.
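The adjoint-state structure of \eqref{eq:Frechet_Jade} and \eqref{eq:adjoint} (one forward and one adjoint solve yield the full derivative with respect to $v$) can be illustrated on a toy algebraic analogue. The following Python sketch is an assumption-laden stand-in, not the actual FEM implementation: it replaces the boundary value problem by a small mock linear system $A(v)u=f$ with $A(v)=K+\mathrm{diag}(v)$ and checks the adjoint gradient of a quadratic misfit against a finite-difference quotient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Toy analogue of the state problem: A(v) u = f with A(v) = K + diag(v),
# and quadratic misfit J(v) = 0.5 * ||u(v) - u_meas||^2 (all names are mock).
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
f = rng.standard_normal(n)
u_meas = rng.standard_normal(n)

def solve_state(v):
    return np.linalg.solve(K + np.diag(v), f)

def misfit(v):
    r = solve_state(v) - u_meas
    return 0.5 * r @ r

def adjoint_gradient(v):
    A = K + np.diag(v)
    u = np.linalg.solve(A, f)
    p = np.linalg.solve(A.T, u - u_meas)  # discrete adjoint problem
    return -p * u                         # dJ/dv_i = -p_i * u_i

v0 = 0.5 + 0.1 * rng.standard_normal(n)
g = adjoint_gradient(v0)

# Finite-difference check of the directional derivative.
h = 1e-6
theta = rng.standard_normal(n)
fd = (misfit(v0 + h * theta) - misfit(v0 - h * theta)) / (2.0 * h)
```

The same mechanism underlies \eqref{eq:Frechet_Jade}: the cost of evaluating the derivative does not grow with the number of degrees of freedom of $v$.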
The subsequent iterate is computed by minimizing a functional which consists of the first-order expansion of $J_{\delta,\varepsilon}$ around $v^{(k)}$ plus a term which penalizes the distance from $v^{(k)}$, due to the local effectiveness of the linearization. A tentative update scheme would read as \begin{equation} \label{eq:min_move_Jade_expl} v^{(k+1)} = \argmin_{v \in \mathcal{K}} \left\{ \frac{1}{2\tau_k} \| v - v^{(k)}\|_{L^2(\Omega)}^2 + J_{\delta,\varepsilon}'(v^{(k)})[v-v^{(k)}] \right\}, \end{equation} where $\{\tau_k\}$ is a sequence of prescribed step lengths. Since $J_{\delta,\varepsilon}'$ is evaluated in the previous iterate, \eqref{eq:min_move_Jade_expl} corresponds to an explicit scheme, and in the absence of the constraint on $\mathcal{K}$ it would reduce to the explicit Euler discretization of the gradient flow associated with $J_{\delta,\varepsilon}$. The explicit treatment of $J_{\delta,\varepsilon}'$ is beneficial for numerical reasons (due to the severe nonlinearity of the differential), but can lead to instabilities, which entails that the choice of the steplength $\tau_k$ should be very conservative. As already exploited in \cite{DES} and in \cite{BRV}, we can provide a semi-implicit treatment of the derivative by splitting it into a linear part and a nonlinear one, and evaluating only the nonlinear part in the previous iterate. In particular, we define: \[ \begin{aligned} \tilde{J}_{\delta,\varepsilon}'(v^{(k)})[v-v^{(k)}] = & \int_{\Omega}(1-\delta) \nabla u_\delta(v^{(k)}) \cdot \nabla p_\delta(v^{(k)}) (v-v^{(k)}) + \int_{\Omega} (1-\delta) u_\delta(v^{(k)})^3 p_\delta(v^{(k)}) (v-v^{(k)}) \\ &+\frac{\alpha}{\varepsilon} \int_{\Omega}(1-2v^{(k)})(v-v^{(k)}) + 2 \alpha \varepsilon \int_{\Omega}\nabla v \cdot \nabla (v-v^{(k)}), \end{aligned} \] where only the last term has been treated implicitly.
Through this definition, we finally describe our iterative scheme as: \begin{equation} \label{eq:min_move_Jade} v^{(k+1)} = \argmin_{v \in \mathcal{K}} \left\{ \frac{1}{2\tau_k} \| v - v^{(k)}\|_{L^2(\Omega)}^2 + \tilde{J}_{\delta,\varepsilon}'(v^{(k)})[v-v^{(k)}] \right\} \end{equation} for prescribed timesteps $\tau_k$. At each iteration, the algorithm requires the solution of an inner minimization problem associated with a quadratic functional on the convex set $\mathcal{K}$, which can be treated by standard tools of convex optimization. Unfortunately, since the functional $J_{\delta,\varepsilon}$ is in general non-convex, the convergence of $v^{(k)}$ to a minimizer is not ensured: nevertheless, we aim to prove that, in the limit, the iterates reach a stationary point, namely, an element $v^*$ which satisfies the (necessary) optimality conditions \eqref{eq:OC_Jade}. As outlined in \cite{DES}, expression \eqref{eq:min_move_Jade} resembles the discretization of a parabolic obstacle problem, and it is more generally reminiscent of De Giorgi's theory of minimizing movements for differentiable functionals (see \cite[Chapter 7]{braides2014local}). Finally, analogously to what is done in \cite{DES} and \cite{BRV}, we can prove the convergence to a stationary point only in a fully discretized context. \begin{remark} \label{convrelax} Notice that we are minimizing the functional $J_{\delta,\varepsilon}$ in $\mathcal{K}$ instead of $\mathcal{K}_\eta$. This discrepancy between theory and practice is necessary for our purposes, since the non-convexity of the space $\mathcal{K}_\eta$ would not allow the use of standard first-order optimization schemes. Nevertheless, numerical evidence from Section \ref{sec:results} will show that the algorithm converges to a point belonging to $\mathcal{K}_\eta$: thus, we can consider the minimization within $\mathcal{K}$ as a convex relaxation of the original problem in $\mathcal{K}_\eta$.
\end{remark} \subsubsection{Discretization of the forward and adjoint boundary value problems} In order to numerically solve the boundary value problem \eqref{eq:probclassic}, we consider a finite element formulation, which also entails a numerical approximation of the minimization problem \eqref{minrel} we are tackling. \par In what follows, we introduce a shape-regular triangulation $\mathcal{T}_h$ of $\Omega$, on which we define $V_h \subset H^1(\Omega)$: \[ V_h = \{ w_h \in C(\bar{\Omega}), w_h|_K \in \mathbb{P}_1(K) \text{ } \forall K \in \mathcal{T}_h \}; \qquad \mathcal{K}_h = V_h \cap \mathcal{K}, \] where $\mathbb{P}_1(K)$ denotes the space of polynomials of order $1$ on a domain $K$. A discrete counterpart of \eqref{eq:probclassic} is provided by considering its weak formulation in $V_h$, which can be interpreted as a nonlinear system of algebraic equations, and approximately solved by means of a Newton-Raphson algorithm. Moreover, \cite[Proposition 3.1]{BRV} shows that, if we consider an approximation $v_h \in \mathcal{K}_h$ of the indicator function $v \in \mathcal{K}$, and denote the discrete solution associated with $v_h$ as $u_{\delta,h}(v_h)$, then $u_{\delta,h}(v_h) \rightarrow u_\delta(v)$ as the mesh size $h$ decreases. We can analogously introduce the approximate solution $p_{\delta,h}$ of the adjoint equation \eqref{eq:adjoint}, and the discrete version $J_{\delta,\varepsilon,h}$ of the functional \eqref{minrel}, together with its optimality conditions \eqref{eq:OC_Jade}. Finally, \cite[Proposition 3.4]{BRV} guarantees that, choosing a starting point $v^{(0)}_h \in \mathcal{K}_h$, there exists a collection of timesteps $\{\tau_k\}$ satisfying $0<\tau_{\min} \leq \tau_k \leq \tau_{\max}$ such that the sequence generated by \eqref{eq:min_move_Jade} (where $J_{\delta,\varepsilon}$ is replaced by $J_{\delta,\varepsilon,h}$) converges in $W^{1,\infty}$ up to a subsequence to a point satisfying the discrete optimality conditions.
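A minimal sketch of one step of the scheme \eqref{eq:min_move_Jade} on a uniform one-dimensional grid may help fix ideas. The lumped mass matrix, the mock explicit gradient, and the plain projection onto $[0,1]$ (used here in place of a proper constrained solver) are simplifying assumptions.

```python
import numpy as np

# One step of the semi-implicit scheme on a uniform 1D grid; the lumped mass
# matrix, the mock explicit gradient and the projection onto [0,1] (instead of
# a constrained solver) are simplifying assumptions.
n = 200
h = 1.0 / (n - 1)
alpha, eps_pf, tau = 1e-3, 0.05, 0.1

# Stiffness matrix with homogeneous Neumann ends (rows sum to zero).
K = np.zeros((n, n))
i = np.arange(n)
K[i, i] = 2.0 / h
K[i[:-1], i[:-1] + 1] = -1.0 / h
K[i[1:], i[1:] - 1] = -1.0 / h
K[0, 0] = K[-1, -1] = 1.0 / h

def semi_implicit_step(v_k, grad_expl):
    # Solve (M/tau + 2*alpha*eps_pf*K) v = (M/tau) v_k - M grad_expl, then project.
    M = h * np.eye(n)
    A = M / tau + 2.0 * alpha * eps_pf * K
    rhs = M @ v_k / tau - M @ grad_expl
    return np.clip(np.linalg.solve(A, rhs), 0.0, 1.0)

x = np.linspace(0.0, 1.0, n)
v0 = (x > 0.5).astype(float)  # indicator-like initial guess
# Mock explicit part of the derivative: a misfit term pushing v down on [0, 0.3],
# plus the explicitly treated potential term (alpha/eps_pf) * (1 - 2 v_k).
grad_expl = np.where(x < 0.3, 1.0, 0.0) + (alpha / eps_pf) * (1.0 - 2.0 * v0)
v1 = semi_implicit_step(v0, grad_expl)
```

The linear system couples the proximal term $M/\tau_k$ with the implicitly treated diffusion $2\alpha\varepsilon K$, which is what stabilizes the iteration compared to the fully explicit scheme \eqref{eq:min_move_Jade_expl}.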
\subsubsection{Implementation aspects} By means of \cite[Proposition 3.4]{BRV} and of the ancillary result \cite[Lemma 3.2]{BRV}, we know that the discretized functional $J_{\delta,\varepsilon}$ decreases along the iterates of \eqref{eq:min_move_Jade} for a value of $\tau_k$ which is sufficiently small, within an interval $[\tau_{\min}, \tau_{\max}]$. This suggests the possibility of enhancing the iterative algorithm with an adaptive choice of the steplength $\tau_k$, which allows one to enlarge it (and therefore save iterations) or to reduce it to guarantee the decrease of the functional across the iterations: \begin{algorithm}{Reconstruction of critical points of $J_{\delta,\varepsilon}$} \label{alg:Jade} \begin{enumerate} \item choose an initial guess $v^{(0)}_h \in \mathcal{K}_h$ and a step size $\tau_0$ \item for $k = 0, \ldots, K_{\max}$ \begin{itemize} \item \textbf{if} $k==0$ \item[] \begin{itemize} \item set $\tilde{v}_h^{(k+1)} = v_h^{(k)}$; \end{itemize} \item[] \textbf{else} \item[] \begin{itemize} \item compute $\tilde{v}_h^{(k+1)}$ from $v_h^{(k)}$, $\tilde{J}'_{\delta,\varepsilon,h}(v_h^{(k)})$ and $\tau_k$ via \eqref{eq:min_move_Jade}; \end{itemize} \item compute $u_{\delta,h}(\tilde{v}_h^{(k+1)})$ and $J_{\delta,\varepsilon,h}(\tilde{v}_h^{(k+1)})$ \item \textbf{if} $J_{\delta,\varepsilon,h}(\tilde{v}_h^{(k+1)}) > J_{\delta,\varepsilon,h}(v_h^{(k)})$ \item[] \begin{itemize} \item reduce the steplength $\tau_k$; \end{itemize} \item[] \textbf{else} \item[] \begin{itemize} \item increase the steplength $\tau_k$; \item accept the iteration: $v_h^{(k+1)} = \tilde{v}_h^{(k+1)}$ and $u_{\delta,h}(v_h^{(k+1)}) = u_{\delta,h}(\tilde{v}_h^{(k+1)})$ \item compute $p_{\delta,h}(v_h^{(k+1)})$ and $\tilde{J}'_{\delta,\varepsilon,h}(v_h^{(k+1)})$ \item check the stopping criterion on $v_h^{(k+1)}$ and increase $k$; \end{itemize} \end{itemize} \end{enumerate} \end{algorithm} The algorithm is moreover coupled with an adaptive mesh refinement routine: indeed, the iterates
$v_h^{(k)}$ are expected to show some regions of diffuse interface, approximating the jump set of the indicator function of the true cavity. The thickness of such regions scales according to $\varepsilon$: in order to precisely capture the support of the gradient, without excessively increasing the total number of elements in $\mathcal{T}_h$, we locally refine the mesh according to the gradient of $v_h^{(k)}$ every $N_{\operatorname{adapt}}=30$ steps. We nevertheless fix a minimum size $h_{\min}=10^{-3}$. \begin{remark} In Algorithm \ref{alg:Jade}, the (tentative) update $\tilde{v}_h^{(k+1)}$ is computed by solving the semi-implicit scheme \eqref{eq:min_move_Jade}. As in \cite{DES} and in \cite{BRV}, this is done by means of the Primal-Dual Active Set (PDAS) algorithm, which requires a small number of (sub)iterations. According to the interpretation proposed in \cite{hintermuller2002primal}, we can consider PDAS as a generalized Newton's algorithm for the solution of \eqref{eq:min_move_Jade}, where the constraint is included in the form of a Lagrange multiplier. Alternatively, we observe that the explicit scheme \eqref{eq:min_move_Jade_expl} also admits the following alternative formulation: \begin{equation} v^{(k+1)} =\proj_\mathcal{K}\big(v^{(k)}- \tau_k \nabla J_{\delta,\varepsilon}(v^{(k)})\big) \label{eq:projGrad} \end{equation} where $\proj_\mathcal{K}$ is the orthogonal projection onto the closed convex set $\mathcal{K}$ and $\nabla J_{\delta,\varepsilon}(v^{(k)})$ is the Fréchet gradient of $J_{\delta,\varepsilon}$, representing the Fréchet differential $J'_{\delta,\varepsilon}(v^{(k)})$. This approach, which avoids inner subroutines, has also been investigated in \cite{blank}, where it has been applied to a linear elasticity problem. In our case, preliminary tests have not shown a significant discrepancy between the use of PDAS and of \eqref{eq:projGrad}; therefore, we make use of the former.
\end{remark} \subsection{An iterative algorithm for the regularized problem} In this subsection, we tackle the minimization of the functional $J$, namely, the stable reconstruction of cavities via perimeter-based regularization. The core idea of our approach is to iteratively apply Algorithm \ref{alg:Jade} for decreasing values of $\varepsilon$ and $\delta$, using the stationary point of the previous $J_{\delta_n,\varepsilon_n}$ as a starting point for the minimization of the new functional.
\begin{algorithm}{Reconstruction of critical points of $J$} \label{alg:J} \begin{enumerate} \item select initial values $(\varepsilon_0,\delta_0)$ \item start from an initial guess $v_h^{(0,0)}$ \item for $n = 0, 1, \ldots$ \begin{itemize} \item apply Algorithm \ref{alg:Jade} on $v_h^{(0,n)}$ until convergence to $v_h^{(*,n)}$ \item update $(\varepsilon_{n+1},\delta_{n+1})$ \item set $v_h^{(0,n+1)} = v_h^{(*,n)}$. \end{itemize} \end{enumerate} \end{algorithm}
Although it is impossible to prove the convergence of the iterates to a minimizer of $J$, such an algorithm is motivated by several considerations. \\ Firstly, for large values of $\varepsilon$, we conjecture that the (discrete version of the) functional $J_{\delta,\varepsilon}$ is convex, as has been proved in \cite[Theorem 3.1]{burger2006phase} for a linear elasticity problem. In this case, Algorithm \ref{alg:Jade} is expected to converge to a global minimum of $J_{\delta,\varepsilon}$ (see, e.g., \cite[Corollary 27.10]{bauschke2011convex} regarding the projected gradient scheme). This is also supported by numerical evidence: the first steps of Algorithm \ref{alg:J} are performed efficiently, and provide a good starting point for the subsequent ones with smaller $\varepsilon$. \\ Secondly, the sequence of stationary points $v_h^{(*,n)}$ generated by Algorithm \ref{alg:J} is expected to converge to a stationary point of $J$.
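In pseudocode, the outer loop of Algorithm \ref{alg:J} can be sketched as follows; `minimize_J_de` is a hypothetical stand-in for the inner Algorithm \ref{alg:Jade}, and the default reduction factors are the ones used later in the numerical section:

```python
def continuation_minimize(minimize_J_de, v0, eps0, delta0, n_outer,
                          eps_factor=4.0, delta_factor=10.0):
    """Sketch of the outer continuation loop: solve for fixed (eps, delta),
    then shrink both parameters and warm-start from the previous
    stationary point."""
    v, eps, delta = v0, eps0, delta0
    for _ in range(n_outer):
        v = minimize_J_de(v, eps, delta)   # inner solve until convergence
        eps /= eps_factor                  # sharpen the diffuse interface
        delta /= delta_factor              # approach the true cavity problem
    return v

# toy usage: the "inner solver" just records the parameter sequence
history = []
def fake_inner(v, eps, delta):
    history.append((round(eps, 6), round(delta, 8)))
    return v
continuation_minimize(fake_inner, 0.0, 0.1, 1e-2, n_outer=3)
# history: [(0.1, 0.01), (0.025, 0.001), (0.00625, 0.0001)]
```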
Notice that this is not guaranteed by the $\Gamma$-convergence of the functionals $J_{\delta,\varepsilon}$ to $J$ (separately in $\varepsilon$ and in $\delta$), because we cannot guarantee that the $v_h^{(*,n)}$ are minimizers. Nevertheless, e.g., \cite[Theorem 8.1]{braides2014local} shows how to define a minimizing movement for a limit functional starting from a minimizing movement along a sequence of functionals, and in particular \cite[Chapter 11]{braides2014local} and \cite{sternberg2009critical} provide conditions under which a sequence of critical points converges to a stationary point of $J$. Unfortunately, the functionals $J_{\delta,\varepsilon}$ and $J$ do not satisfy the required assumptions: in particular, $J$ is not convex and not differentiable, due to its definition on the non-convex space $X_{0,1}$. \section{Numerical experiments} \label{sec:results} In this section, after a brief summary of the numerical setting for the simulations, we report and comment on the results of our numerical experiments. We first analyze the performance of Algorithm \ref{alg:Jade}, particularly focusing on the dependence of the solution on the choice of the parameters $\varepsilon$ and $\delta$. Then, we move to the study of Algorithm \ref{alg:J}, assessing its effectiveness even on complicated shapes, as well as its robustness with respect to noisy data. \subsection{Setup} In all the experiments, we consider $\Omega \subset \mathbb{R}^2$ the unit ball centered at the origin and assume to have access to the measurement on $\Gamma = \partial \Omega$. Algorithms \ref{alg:Jade} and \ref{alg:J} are both tested making use of synthetic data: in particular, the boundary datum $u_{\operatorname{meas}}$ is generated by solving the forward problem in the presence of the true inclusion, and perturbing it with some additive Gaussian noise.
To do so, we need to create an alternative mesh $\mathcal{T}_h^{\operatorname{ex}}$ of the domain $\Omega$ in the presence of the exact cavity and the associated finite element space $V_h^{\operatorname{ex}}$. The value of the solution at the external boundary is then interpolated on the boundary of the mesh $\mathcal{T}_h$ which is used for the reconstruction, and which does not contain any hole. Notice that this whole procedure also prevents the presence of an \textit{inverse crime}, which occurs whenever the exact data are simulated via the same model that is employed by the reconstruction algorithm. \par Moreover, we perform reconstructions from multiple measurements: namely, we assume that $N_{\operatorname{meas}}$ measurements $u_{\operatorname{meas}}^i$ are available, respectively associated with different sources $f^i$ in \eqref{probcav}. In the expressions of $J_{\delta,\varepsilon}$ (and analogously for $J$), the data mismatch term is thus replaced by an average of the mismatch of every $u_\delta^i(v)$ with respect to $u_{\operatorname{meas}}^i$, where $u_\delta^i(v)$ is the solution of \eqref{eq:probclassic} in the presence of a cavity $v$ and with forcing term $f^i$. In order to comply with Assumption \ref{as:4} on $f$, we consider $N_{\operatorname{meas}} = 4$ measurements associated with the sources \[ f^i(x,y) = \exp\left\{-\frac{(x-x_i)^2+(y-y_i)^2}{r_f^2}\right\}, \quad (x_i,y_i) = R_f\left(\cos\left(\frac{i\pi}{N_{\operatorname{meas}}}\right),\sin\left(\frac{i\pi}{N_{\operatorname{meas}}}\right)\right), \] which are well localized in space close to the points $(x_i,y_i)$, lying sufficiently close to the boundary (and far from the cavity) depending on $R_f$. Every datum $u_{\operatorname{meas}}^i$ is generated by considering the trace of the exact solution associated with $f^i$ and adding random noise with Gaussian distribution, null mean, and standard deviation equal to $\eta_{\operatorname{noise}}\max\{u^i(x): x \in \Gamma \}$.
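In code, the generation of a noisy synthetic measurement can be sketched as follows (a minimal illustration; `u_exact_trace` is a hypothetical array collecting the boundary values of the exact solution):

```python
import numpy as np

def make_noisy_measurement(u_exact_trace, eta_noise=0.01, rng=None):
    """Additive Gaussian noise with zero mean and standard deviation
    equal to eta_noise times the peak value of the exact signal."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = eta_noise * np.max(u_exact_trace)
    return u_exact_trace + rng.normal(0.0, sigma, size=u_exact_trace.shape)

# toy usage: 1% noise on a constant "trace" of peak value 1
u_meas = make_noisy_measurement(np.ones(1000), eta_noise=0.01,
                                rng=np.random.default_rng(0))
```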
Whenever not specified, we consider a $1\%$ noise level: $\eta_{\operatorname{noise}}=0.01$. \par All computations are implemented with MATLAB R2021a, running on a laptop with 16GB RAM and 2.2GHz CPU. We acknowledge the use of the MATLAB redbKIT library \cite{redbKIT} for the implementation of the finite element assemblers. \subsection{Algorithm \ref{alg:Jade}: numerical results} As described in Section \ref{sec:numerics}, Algorithm \ref{alg:Jade} is a more efficient version of the one proposed in \cite{BRV}, to which we refer for a complete numerical analysis. In the current study, we are mostly interested in reporting the behavior of the reconstructed solution with respect to $\varepsilon$ and $\delta$. The phase-field parameter $\varepsilon$ is closely connected with the so-called diffuse interface region. Indeed, the minimizer of $J_{\delta,\varepsilon}$ is expected to attain values different from $0$ and $1$ only in a small region, typically corresponding to a tubular neighborhood of the boundary of the reconstructed cavity, whose width is proportional to $\varepsilon$. A small value of $\varepsilon$ is thus preferred, but requires a sufficient refinement of the mesh, which is attained without affecting the efficiency of the algorithm by means of a local adaptive refinement. In Figure \ref{fig:eps} we set $\delta = 10^{-5}$ and compare the reconstructions associated with different values of $\varepsilon$, ranging from $0.0125$ to $0.05$. In each panel, we report a contour plot of the reconstructed indicator function, together with a dashed line denoting the boundary of the exact inclusion. One can clearly notice the dependence of the thickness of the diffuse interface region on $\varepsilon$.
\begin{figure} \subfloat[$\varepsilon = 0.05$]{\includegraphics[width=0.33\textwidth]{Figures/Ellipse_e-05.png}} \subfloat[$\varepsilon = 0.025$]{\includegraphics[width=0.33\textwidth]{Figures/Ellipse_e-025.png}} \subfloat[$\varepsilon = 0.0125$]{\includegraphics[width=0.33\textwidth]{Figures/Ellipse_e-0125.png}} \caption{Dependence of the reconstruction on $\varepsilon$} \label{fig:eps} \end{figure} The fictitious conductivity $\delta$ can be chosen independently of $\varepsilon$. If $\delta$ is close to $1$, equation \eqref{eq:probclassic} becomes significantly different from the cavity problem \eqref{probcav}, and thus the reconstruction is expected to be less accurate; for much smaller values of $\delta$, instead, the forward problem becomes numerically unstable. In particular, it is easy to show that the $H^1$ norm of $u_{\delta,h}(v)$ is bounded by a term scaling as $\frac{1}{\delta}$. Nevertheless, also in this case a local refinement in the region where the gradient of $v_h^{(k)}$ is steep helps to reduce the ill-conditioning of the problem. In Figure \ref{fig:delta} we set $\varepsilon = 0.025$ and compare the reconstructions associated with different values of $\delta$, ranging from $10^{-5}$ to $10^{-3}$. \begin{figure} \subfloat[$\delta = 10^{-3}$]{\includegraphics[width=0.33\textwidth]{Figures/Ellipse_d-3.png}} \subfloat[$\delta = 10^{-4}$]{\includegraphics[width=0.33\textwidth]{Figures/Ellipse_d-4.png}} \subfloat[$\delta = 10^{-5}$]{\includegraphics[width=0.33\textwidth]{Figures/Ellipse_d-5.png}} \caption{Dependence of the reconstruction on $\delta$} \label{fig:delta} \end{figure} In all the proposed examples, the regularization parameter $\alpha$ is chosen heuristically, and we use as a stopping criterion the relative distance between the iterates.
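The step-size control and the stopping rule just described can be sketched as follows (hypothetical helper names: `J(v)` evaluates the discrete functional, `step(v, tau)` computes the tentative update; the shrink/grow factors are illustrative, not the ones used in the experiments):

```python
import numpy as np

def minimize_with_adaptive_step(v, J, step, tau=1.0, tol=1e-6, k_max=500,
                                shrink=0.5, grow=1.2):
    """Accept a tentative update only if the functional decreases,
    shrinking the steplength on rejection and growing it on acceptance;
    stop when the relative distance between iterates falls below tol."""
    J_old = J(v)
    for _ in range(k_max):
        v_new = step(v, tau)
        if J(v_new) > J_old:        # reject: functional increased
            tau *= shrink
            continue
        tau *= grow                 # accept and enlarge the steplength
        rel = np.linalg.norm(v_new - v) / max(np.linalg.norm(v), 1e-14)
        v, J_old = v_new, J(v_new)
        if rel < tol:               # relative distance between iterates
            break
    return v

# toy usage on J(v) = ||v - t||^2 with plain gradient-like steps
t = np.array([1.0, 2.0])
v_star = minimize_with_adaptive_step(
    np.zeros(2),
    J=lambda v: float(np.sum((v - t) ** 2)),
    step=lambda v, tau: v - tau * (v - t),
    tau=0.3)
```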
The number of iterations required to reach convergence ranges between $100$ and $200$, which constitutes a significant speedup with respect to the case without step adaptation, which often requires over a thousand iterations (see \cite{BRV}). \subsection{Algorithm \ref{alg:J}: numerical results} In the numerical implementation of Algorithm \ref{alg:J}, we initialize $\varepsilon$ and $\delta$ with $\varepsilon_0 = 0.1$ and $\delta_0 = 10^{-2}$, and reduce them by factors of $4$ and $10$, respectively. The initial guess $v_h^{(0,0)}$ is the constant function of value $0$. In Figure \ref{fig:combined2} we show some results of the application of the combined algorithm to the reconstruction of a polygonal cavity. \begin{figure} \subfloat[Iteration 60]{\includegraphics[width=0.33\textwidth]{Figures/VE_Rectangle90.png}} \subfloat[Iteration 120]{\includegraphics[width=0.33\textwidth]{Figures/VE_Rectangle130.png}} \subfloat[Iteration 180]{\includegraphics[width=0.33\textwidth]{Figures/VE_Rectangle220.png}} \caption{Combined algorithm: results} \label{fig:combined2} \end{figure} In Figure \ref{fig:nonconv} we report some additional results showing that the algorithm can effectively tackle the reconstruction of more complicated domains, such as non-convex ones and ones consisting of more than a single connected component. \begin{figure} \subfloat[]{\includegraphics[width=0.33\textwidth]{Figures/nonconvex1.png}} \subfloat[]{\includegraphics[width=0.33\textwidth]{Figures/nonconvex2.png}} \subfloat[]{\includegraphics[width=0.33\textwidth]{Figures/nonconvex3.png}} \caption{Additional reconstructions: non-convex domains} \label{fig:nonconv} \end{figure} As a final study, we discuss the behavior of the proposed algorithm in the presence of higher noise levels.
As previously explained, all the simulations analyzed so far are based on synthetic data perturbed by Gaussian noise with standard deviation equal to $1\%$ of the peak value of the signal. In Figure \ref{fig:noise}, we report the reconstructions associated with the same inclusion, but with larger noise levels ((a): $2\%$; (b): $5\%$). As depicted in (c), a higher level of noise can be treated by increasing the value of the regularization parameter $\alpha$, at the price of a lower quality of the reconstruction. \begin{figure} \subfloat[$\eta_{\operatorname{noise}} = 0.02, \alpha = 10^{-5}$ ]{\includegraphics[width=0.33\textwidth]{Figures/noise2.png}} \subfloat[$\eta_{\operatorname{noise}} = 0.05, \alpha = 10^{-5}$ ]{\includegraphics[width=0.33\textwidth]{Figures/noise5.png}} \subfloat[$\eta_{\operatorname{noise}} = 0.05, \alpha = 10^{-4}$ ]{\includegraphics[width=0.33\textwidth]{Figures/noise5b.png}} \caption{Reconstruction in the presence of large noise} \label{fig:noise} \end{figure} \section{Final remarks} We have analyzed the problem of reconstructing Lipschitz cavities from boundary measurements in a model arising from cardiac electrophysiology. The reconstruction algorithm relies on a detailed investigation of the dependence of the solutions to the direct problem on the cavities and is based on a phase-field approach that we justify via $\Gamma$-convergence of a relaxed family of functionals $J_{\varepsilon,\delta}$ to the original penalized misfit functional $J$. This implies convergence of minima of $J_{\varepsilon,\delta}$ to minima of $J$.
In order to prove our result we have to restrict the relaxed functionals to a non-convex subset $\mathcal{K}_{\eta}$ of the convex set of admissible functions $\mathcal{K}$, while in the numerical algorithm we need to minimize the approximating functionals $J_{\varepsilon,\delta}$ over the whole convex set $\mathcal{K}$; nevertheless, as discussed in Remark \ref{convrelax}, numerical calculations seem to indicate that the minima of the functional $J_{\delta,\epsilon}$ in $\mathcal{K}$ belong to $\mathcal{K}_{\eta}$. Although we have not found a theoretical justification for this property, it may be useful to remark that the $\Gamma$-convergence of $J_{\delta,\epsilon}$ to $J_{\delta}$ (Theorem \ref{thm:Gconv_eps}) and the resulting convergence of the minima (Corollary \ref{convmin}) may also be achieved on different subsets $\mathcal{H}\subseteq \mathcal{K}$. In fact, by inspection of the proof of the above results, one finds that $\mathcal{H}$ should be a weakly closed subset of $H^1(\Omega)$ such that: \begin{itemize} \item if $v_n\in\mathcal{H}$ is such that $v_n\rightarrow \chi_{\Omega_D}$ in $L^1(\Omega)$, then $D\in {\mathcal D}$, where ${\mathcal D}$ was defined in Assumption 3; \item $\mathcal{H}$ contains the functions $v_{\varepsilon,\beta}$ defined in the proof of Theorem \ref{thm:Gconv_eps} for some $\eta>0$. \end{itemize} Note that the first condition is needed in the proof of the $\liminf$ property and the second one for the $\limsup$ property. It is not clear to us whether it is possible to construct a subset $\mathcal{H}$ which is also convex (this would somehow justify the ``convex relaxation'' argument of Remark \ref{convrelax}). \section{Acknowledgements} We would like to thank Giovanni Bellettini for the stimulating and useful suggestions. The work of LR is supported by the Air Force Office of Scientific Research under award number FA8655-20-1-7027.
The authors are members of the ``Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni'' (GNAMPA), of the ``Istituto Nazionale per l'Alta Matematica'' (INdAM). \section{Appendix} In this appendix we prove Proposition \ref{convudeltan}, making use of a suitable version of a Caccioppoli-type inequality, which we prove below. \begin{theorem} (Caccioppoli-type inequality) Let $A=A(x)$, $x\in B_{2R}$, be a symmetric $n\times n$ matrix, $L^\infty$ and elliptic. Let $w\in L^\infty(B_{2R})$ be a weight such that $0<\delta\leq w\leq1$ a.e. in $B_{2R}$ and let $u$ be a weak solution to $$-\div(wA\nabla u) +wu^3=0 \quad \text{in } B_{2R}.$$ Then \begin{equation} \label{eq:caccioppoli2} \int_{B_R}w\,|\nabla u|^2 \leq \frac{\tilde{C}}{R^2}\int_{B_{2R}}w\;u^2. \end{equation} \end{theorem} \begin{proof} Let $\chi\in \mathcal{C}_0^\infty(B_{2R})$, $0\leq\chi\leq1$ in $B_{2R} $, $\chi\equiv1$ in $B_{R}$ and $|\nabla\chi|\leq\displaystyle{\frac{C}{R}}$ on $B_{2R}$. In the weak formulation, take as test function $\varphi=u\chi^2$: $$-\int_{B_{2R}}\div (wA\nabla u)\,u\chi^2 +\int_{B_{2R}}wu^4\chi^2=0,$$ i.e., integrating by parts, $$-\int_{\partial B_{2R}}wA\nabla u \cdot\nu\, u\chi^2 +\int_{B_{2R}}w\nabla u\cdot A\nabla(u\chi^2)+\int_{B_{2R}}wu^4\chi^2=0,$$ where the first term vanishes because of the definition of $\chi$.
Then we have \[ \int_{B_{2R}} w\,\nabla u\cdot A\nabla u\, \chi^2+\int_{B_{2R}} w\, \nabla u\cdot A\,\nabla(\chi^2)\, u+\int_{B_{2R}}w\,u^4\chi^2=0. \] Using the ellipticity condition and the boundedness of $A$, and throwing away the last term (which is $\geq0$), we get \[ \lambda\int_{B_{2R}} w|\nabla u|^2 \chi^2\leq \int_{B_{2R}}2 w|\nabla u\cdot A\nabla \chi|\;|u|\;\chi\leq2\Lambda\int_{B_{2R}} w|\nabla u|\;|\nabla \chi|\;|u|\chi \] and, using Young's inequality, the latter is \[ \leq \epsilon \int_{B_{2R}}w \;|\nabla u|^2\;\chi^2 + \frac{\Lambda^2}{\epsilon}\int_{B_{2R}}w\;|u|^2\,|\nabla\chi|^2, \] from which, using the properties of $\chi$, \[ (\lambda-\epsilon)\int_{B_R} w\,|\nabla u|^2\leq(\lambda-\epsilon)\int_{B_{2R}}w\,|\nabla u|^2\chi^2\leq \frac{\Lambda^2 C^2}{\epsilon \;R^2}\int_{B_{2R}}w\;|u|^2. \] Taking, for instance, $\epsilon=\frac{\lambda}{2}$, the theorem is proved with $\displaystyle{\tilde{C}=\frac{4\Lambda^2 C^2}{\lambda^2}}.$ \end{proof} \bigskip {\bf Proof of Proposition \ref{convudeltan}}\\ \begin{proof} For the sake of clarity, we divide the proof into several steps.\\ {\it First step.} We start by proving some weak convergence results. Let $\{v_{\delta_n}\}_{n\geq 1}$ be a given sequence of elements in $X_{0,1}$ converging in $L^1(\Omega)$, as $\delta_n\rightarrow 0$, to an element $v$. Then, by Lemma \ref{closeX}, it follows that $v\in X_{0,1}$, i.e. $v=\chi_{\Omega_D}$ with $D\in\mathcal{D}$. Thus, $v_{\delta_n}\rightarrow v:=\chi_{\Omega_D}$ in $L^1(\Omega)$ and a.e. in $\Omega$, and also \begin{equation}\label{adeltanconv} a_{\delta_n}:=a_{\delta_n}(v_{\delta_n}) = \delta_n + (1-\delta_n) v_{\delta_n}\rightarrow \chi_{\Omega_D},\quad \sqrt{a_{\delta_n}}\rightarrow \chi_{\Omega_D},\quad \text{a.e. in }\Omega \text{ and in } L^p(\Omega) \end{equation} for any $p\in [1,\infty)$. Consider now $u_{\delta_n}$, the solution of problem \eqref{eq:probclassic} for $v=v_{\delta_n}$.
Then, from \eqref{estimateudelta} and \eqref{estimategradudelta}, we know that the sequences $\{ \sqrt{a_{\delta_n}} u_{\delta_n} \}$ and $\{ \sqrt{a_{\delta_n}} \nabla u_{\delta_n} \}$ are uniformly bounded, respectively in $L^2(\Omega)$ and in $L^2(\Omega; \mathbb{R}^2)$; so, possibly up to a subsequence, \begin{align} \sqrt{a_{\delta_n}} u_{\delta_n} &\rightharpoonup \tilde{u} \text{ in } L^2(\Omega) \label{eq:weaku} \\ \sqrt{a_{\delta_n}} \nabla u_{\delta_n} &\rightharpoonup V \text{ in } L^2(\Omega; \mathbb{R}^2) \label{eq:weakgradu} \end{align} {\it Second step.} In this part we show that the weak limits $\tilde{u}$ and $V$ vanish a.e. inside the cavity $D$. In fact, observe that for any $B_{2R} (y)\subset D$ we have $a_{\delta_n}\rightarrow0$ a.e. in $B_{2R} (y)$ and, by the dominated convergence theorem, \begin{equation}\label{B2R0} \int_{B_{2R}} a_{\delta_n}\;u_{\delta_n}^2 \rightarrow 0, \end{equation} and so $$\sqrt{a_{\delta_n}}\;u_{\delta_n}\rightarrow 0$$ in $L^2(B_{2R}(y))$. By uniqueness of the (weak) limit, from \eqref{eq:weaku}, we deduce that $ \tilde{u}\equiv 0$ a.e. in $B_{2R}(y)$, and by the arbitrariness of $y$ it follows that \begin{equation}\label{uincavity} \tilde{u}\equiv 0\,\ \text{a.e. in } D. \end{equation} In order to obtain a similar result for $V$, we apply the Caccioppoli-type inequality \eqref{eq:caccioppoli2}, \begin{equation} \int_{B_R} a_{\delta_n} |\nabla u_{\delta_n}|^2 \leq C \int_{B_{2R}} a_{\delta_n} u_{\delta_n}^2, \label{eq:Caccioppoli} \end{equation} which entails \[ \sqrt{a_{\delta_n}}\;\nabla u_{\delta_n}\rightarrow0\; \text{in}\;L^2\left(B_R(y),\mathbb{R}^2\right) \] and hence \begin{equation}\label{Vincavity} V\equiv\vec{0}\,\ \text{a.e. in } D. \end{equation} Hence, by the fact that $a_{\delta_n}\rightarrow \chi_{\Omega_D}$ a.e.
in $\Omega$ and by \eqref{eq:weaku} and \eqref{eq:weakgradu}, we also have that \begin{align} a_{\delta_n} u_{\delta_n} &\rightharpoonup \tilde{u} \text{ in } L^2(\Omega) \label{eq:weak_u2} \\ a_{\delta_n} \nabla u_{\delta_n} &\rightharpoonup V \text{ in } L^2(\Omega; \mathbb{R}^2) \label{eq:weak_gradu2} \end{align} {\it Third step.} In this part of the proof we show that $\tilde{u}\in H^1(\Omega_D)$ and that $\nabla\tilde{u}=V$ a.e. in $\Omega_D$. Fix $a>0$, define the set $D^a\equiv\{x\in \Omega \,|\, \operatorname{dist}(x,D)\leq a\}$ and let $N_0:=N_0(a)$ be such that, for $n\geq N_0$, $D_{n}\subset D^a$ and $a_{\delta_n}=1$ in $\Omega_{ D^a}$. Then, by \eqref{estimateudelta} and \eqref{estimategradudelta}, the following uniform estimate holds: \begin{equation}\label{estimatesinDa} \|u_{\delta_n}\|_{H^1(\Omega_{ D^a})}\leq C, \end{equation} which implies, possibly up to a subsequence, that for some $U^a\in H^1(\Omega_{D^a})$ \begin{equation}\label{weakH1convtoU} u_{\delta_n}\rightharpoonup U^a \textrm{ in } H^1(\Omega_{D^a}) \end{equation} and therefore strongly in $L^2(\Omega_{D^a})$: \begin{equation}\label{strongconv} u_{\delta_n}\rightarrow U^a\textrm{ in } L^2(\Omega_{D^a}). \end{equation} Now, from \eqref{eq:weaku} and \eqref{strongconv}, recalling that $a_{\delta_n}=1$ in $\Omega_{ D^a}$ for $n>N_0$, we can infer that \begin{equation} \int_{\Omega}u_{\delta_n}\varphi \rightarrow\int_{\Omega}U^a\varphi \end{equation} for any $\varphi\in L^2(\Omega)$ such that $\varphi=0$ in $\Omega\backslash\Omega_{ D^a}$. So, $$ \int_{\Omega_{ D^a}}U^a\varphi=\int_{\Omega_{ D^a}}\tilde{u}\varphi,\,\,\forall\varphi\in L^2(\Omega), $$ which implies that $$ U^a=\tilde{u}|_{\Omega_{ D^a}} $$ and hence \begin{equation}\label{gradequality} \nabla(U^a)=\nabla(\tilde{u}|_{\Omega_{ D^a}})\textrm{ in }L^2(\Omega_{D^a},\mathbb{R}^2). \end{equation} Let now $\varphi\in H^1(\Omega)$ (observe that $\varphi\in H^1(\Omega_{ D^a})\, \forall a>0$).
For $n>N_0$, we can split $$ \int_{\Omega}(\sqrt{a_{\delta_n}} \nabla u_{\delta_n}\cdot\nabla\varphi+\sqrt{a_{\delta_n}} u_{\delta_n}\varphi) =\int_{\Omega_{ D^a}\cup (D^a\backslash D)\cup D}(\sqrt{a_{\delta_n}} \nabla u_{\delta_n}\cdot\nabla\varphi+\sqrt{a_{\delta_n}} u_{\delta_n}\varphi). $$ Hence, setting \[ \int_{D^a\backslash D}(\sqrt{a_{\delta_n}} \nabla u_{\delta_n}\cdot\nabla\varphi+\sqrt{a_{\delta_n}} u_{\delta_n}\varphi)=\epsilon_n(a) \] and observing that, by \eqref{eq:weaku}, \eqref{eq:weakgradu} and \eqref{uincavity}, \eqref{Vincavity}, one has that \[ \int_{D}(\sqrt{a_{\delta_n}} \nabla u_{\delta_n}\cdot\nabla\varphi+\sqrt{a_{\delta_n}} u_{\delta_n}\varphi)=o(1), \] we can write $$ \int_{\Omega}(\sqrt{a_{\delta_n}} \nabla u_{\delta_n}\cdot\nabla\varphi+\sqrt{a_{\delta_n}} u_{\delta_n}\varphi) =\int_{\Omega_{ D^a}}(\sqrt{a_{\delta_n}} \nabla u_{\delta_n}\cdot\nabla\varphi+\sqrt{a_{\delta_n}} u_{\delta_n}\varphi)+\epsilon_n(a)+o(1). $$ Then, again by \eqref{eq:weaku} and \eqref{eq:weakgradu}, we can write \begin{equation}\label{relationtildeu1} \int_{\Omega}(V\cdot\nabla\varphi+\tilde{u}\varphi)+o(1) =\int_{\Omega_{ D^a}}( \nabla(\tilde{u}|_{\Omega_{ D^a}})\cdot\nabla\varphi+ \tilde{u}\varphi)+\epsilon_n(a), \end{equation} and by \eqref{uincavity} and \eqref{Vincavity} this last relation also implies that \begin{equation}\label{relationtildeu2} \int_{\Omega_D}(V\cdot\nabla\varphi+\tilde{u}\varphi)+o(1) =\int_{\Omega_{ D^a}}( \nabla(\tilde{u}|_{\Omega_{ D^a}})\cdot\nabla\varphi+ \tilde{u}\varphi)+\epsilon_n(a). \end{equation} Finally, let us pick $a=a_n\rightarrow 0$ as $n\rightarrow \infty$ in \eqref{relationtildeu2} and consider a sequence $\tilde{u}_n\in H^1(\Omega_D)$ such that $\tilde{u}_n|_{\Omega_{D^{a_n}}}=\tilde{u}$ and $\tilde{u}_n\rightarrow \tilde{u}$ in $L^2(\Omega_D)$.
Then the integral on the right-hand side of \eqref{relationtildeu2} can be written in the form \[ \int_{\Omega_{ D^{a_n}}}( \nabla\tilde{u}_n\cdot\nabla\varphi+ \tilde{u}_n\varphi)=\int_{\Omega_D}( \nabla\tilde{u}_n\cdot\nabla\varphi+ \tilde{u}_n\varphi)+\tilde{\epsilon}_n(a_n). \] We observe that, by using the Cauchy--Schwarz inequality (and the uniform estimate \eqref{estimatesinDa}), both $\epsilon_n(a_n)$ and $\tilde{\epsilon}_n(a_n)$ converge to zero as $n\rightarrow \infty$. Hence, \[ \int_{\Omega_D}(V\cdot\nabla\varphi+\tilde{u}\varphi)=\lim_{n\rightarrow \infty} \int_{\Omega_D}( \nabla \tilde{u}_n\cdot\nabla\varphi+ \tilde{u}_n\varphi), \] i.e. $\tilde{u}_n\rightharpoonup\tilde{u}$ in $H^1(\Omega_D)$ and $\nabla\tilde{u}=V$.\\ {\it Fourth step.} We now show that $\tilde{u}$ is the solution of the cavity problem \eqref{probcav}, i.e. \\ \begin{equation}\label{equtil} \int_{\Omega_D} \nabla \tilde{u} \cdot\nabla \varphi + \int_{\Omega_D} \tilde{u}^3\varphi = \int_{\Omega_D} f\;\varphi \ \ \ \ \ \textrm{for} \ \ \varphi\in H^1(\Omega_D). \end{equation} Consider the weak formulation for $u_{\delta_n}$: \begin{equation}\label{equdeltan} \int_\Omega a_{\delta_n}\nabla u_{\delta_n}\cdot \nabla \tilde{\varphi} + \int_\Omega a_{\delta_n} u_{\delta_n}^3\tilde{\varphi} = \int_\Omega f\;\tilde{\varphi } \ \ \ \ \ \textrm{for} \ \ \tilde{\varphi}\in H^1(\Omega). \end{equation} Consider $\varphi\in H^1(\Omega_D)$ and extend it to $\tilde{\varphi}\in H^1(\Omega)$.
Then, subtracting \eqref{equtil} from \eqref{equdeltan}, we obtain \begin{equation}\label{equdiffer} \int_{\Omega_D} (a_{\delta_n}\nabla u_{\delta_n} - \nabla\tilde{u} )\cdot\nabla {\varphi} + \int_{\Omega_D} (a_{\delta_n} u_{\delta_n}^3-\tilde{u} ^3){\varphi}+\int_D a_{\delta_n}\nabla u_{\delta_n}\cdot \nabla \tilde{\varphi} + \int_D a_{\delta_n} u_{\delta_n}^3\tilde{\varphi} = 0. \end{equation} Because of the convergence results collected in the previous steps, all the terms in \eqref{equdiffer} tend to $0$ as $\delta_n\rightarrow0$. {\it Fifth step.} Let us finally prove the convergence of the traces in $L^2$, i.e. $$\|u_{\delta_n}-\tilde{u}\|_{L^2(\Sigma)}\rightarrow 0.$$ In \eqref{equtil} and \eqref{equdeltan}, take the test function $\varphi=(u_{\delta_n}-\tilde{u})\chi^2\in H^1(\Omega_D)$, where $\chi$ is a cutoff function such that $0\leq\chi\leq1$ in $\Omega_D$, $\chi=1$ in $\Omega_{d_0/2}$, $\chi=0$ in $\Omega_D\setminus\Omega_{d_0}$ and $|\nabla\chi|\leq\frac{C}{d_0}$. Plugging $\varphi$ into \eqref{equdeltan} and into \eqref{equtil} and subtracting the two equations, recalling that $\operatorname{supp}(f)\subset\Omega_{d_0}$, we obtain \begin{equation}\label{eqdiff} \int_{\Omega_{d_0}}\nabla(u_{\delta_n}-\tilde{u})\cdot\nabla[(u_{\delta_n}-\tilde{u})\chi^2]+\int_{\Omega_{d_0}}(u_{\delta_n}^3-\tilde{u}^3)(u_{\delta_n}-\tilde{u})\chi^2=0, \end{equation} i.e.
\begin{equation}\label{eqdiff2} \int_{\Omega_{d_0}}|\nabla(u_{\delta_n}-\tilde{u})|^2\chi^2+2\int_{\Omega_{d_0}}\nabla(u_{\delta_n}-\tilde{u})\cdot\nabla\chi\,(u_{\delta_n}-\tilde{u})\chi+\int_{\Omega_{d_0}}(u_{\delta_n}-\tilde{u})^2(u_{\delta_n}^2+u_{\delta_n}\tilde{u}+\tilde{u}^2)\chi^2=0. \end{equation} Applying Young's inequality to the second term in \eqref{eqdiff2}, we obtain that \begin{equation}\label{eqdiff3} \left|\int_{\Omega_{d_0}}\nabla(u_{\delta_n}-\tilde{u})\cdot\nabla\chi\,(u_{\delta_n}-\tilde{u})\chi\right|\leq \epsilon\int_{\Omega_{d_0}}|\nabla(u_{\delta_n}-\tilde{u})|^2\chi^2+\frac{1}{\epsilon}\int_{\Omega_{d_0}}|\nabla\chi|^2|u_{\delta_n}-\tilde{u}|^2 \end{equation} and, combining the above back into \eqref{eqdiff2}, reordering terms, and using the properties of $\chi$ and the $L^\infty$ estimates on $\tilde{u}$ and $u_{\delta_n}$, we get \[ \int_{\Omega_{d_0}}|\nabla(u_{\delta_n}-\tilde{u})|^2\chi^2\leq C\int_{\Omega_{d_0}}|u_{\delta_n}-\tilde{u}|^2, \] which implies \[ \int_{\Omega_{d_0/2}}|\nabla(u_{\delta_n}-\tilde{u})|^2\leq C\|u_{\delta_n}-\tilde{u}\|_{L^2(\Omega_{d_0})}^2 \] and therefore, using the fact that $v_{\delta_n}=1$ a.e. in $\Omega_{d_0}$, the fact that $a_{\delta_n}=1$ a.e. in $\Omega_{d_0}$ and the convergences proved above, that $$\|u_{\delta_n}-\tilde{u}\|_{H^1(\Omega_{d_0/2})}\leq C\|u_{\delta_n}-\tilde{u}\|_{L^2(\Omega_{d_0})}\rightarrow 0.$$ Finally, by the trace inequality, we conclude that $$\|u_{\delta_n}-\tilde{u}\|_{L^2(\Sigma)}\rightarrow 0,$$ which completes the proof. \end{proof}
\section{\label{sec:intro}Introduction} The quark masses and CKM matrix elements are fundamental parameters of the Standard Model. To understand their values in terms of the underlying physics and probe the limits of the Standard Model, they must be extracted from experiment with greater precision. In addition, the low-energy couplings (LECs) of chiral perturbation theory (ChPT) parametrize the strong interactions at energies small compared to the scale of chiral symmetry breaking \cite{ Weinberg:1978kz, Gasser:1983yg, Gasser:1984gg}. Improving knowledge of the Standard Model and chiral effective theory parameters requires improved calculations of strong force contributions to the relevant hadronic matrix elements. Mixed-action lattice QCD calculations can be used to calculate hadronic matrix elements while exploiting the advantages of different discretizations of the fermion action. For example, fermions with more desirable features for a specific physics purpose may be used for the valence quarks, while fermions better suited for large-scale production of gauge-field ensembles may be used for the sea quarks, to include the effects of vacuum polarization. The construction of chiral effective theories for lattice QCD incorporates discretization effects, thereby relating the chiral and continuum extrapolations and improving control of the continuum limit~\cite{Lee:1999zxa,Rupak:2002sm}. Staggered ChPT (SChPT) was developed to analyze results of lattice calculations with staggered fermions~\cite{Lee:1999zxa,Aubin:2003mg}; it has been used extensively to control extrapolations to physical light-quark masses and to remove dominant light-quark and gluon discretization errors~\cite{Bazavov:2009bb}. Mixed-action ChPT was developed for lattice calculations performed with Ginsparg-Wilson valence quarks and Wilson sea quarks~\cite{Bar:2002nr,Bar:2003mh}. The formalism for staggered sea quarks and Ginsparg-Wilson valence quarks was developed in Ref.~\cite{Bar:2005tu}.
Mixed-action ChPT for differently improved staggered fermions was introduced for calculations of the $K^0-\overline{K^0}$ mixing bag parameters entering $\varepsilon_K$ in and beyond the Standard Model \cite{ Bae:2010ki, Bailey:2012wb,Bae:2013tca} and the $K\to\pi\ell\nu$ vector form factor \cite{Bazavov:2012cd}. We have calculated the pion and kaon masses and axial-current decay constants in all taste representations at next-to-leading order (NLO) in mixed-action SChPT. The results generalize those of Refs.~\cite{Aubin:2003mg,Aubin:2003uc,SWME:2011aa,Bailey:2012jy} to the mixed-action case; the results could be used to improve determinations of LECs poorly determined by existing analyses and to improve determinations of light-quark masses, the Gasser-Leutwyler couplings, and the pion and kaon decay constants. In Sec.~\ref{sec:maschpt} we review the formulation of mixed-action SChPT. Results for the masses are presented in Sec.~\ref{sec:mass}, and for the decay constants, in Sec.~\ref{sec:decay}. In Sec.~\ref{sec:sum}, we conclude. \section{\label{sec:maschpt}Mixed-action staggered chiral perturbation theory} As for ordinary, unmixed SChPT, the theory is constructed in two steps. First one builds the Symanzik effective continuum theory (SET) for the lattice theory. Then one maps the operators of the SET into those of ChPT~\cite{Lee:1999zxa,Aubin:2003mg,Bae:2010ki,CB:notes}. \subsection{\label{subsec:set}Symanzik effective theory} Through NLO the SET may be written \begin{align} S_\mr{eff} &= S_\mr{QCD} + a^2 S_6 + \dots\,, \end{align} where $S_\mr{QCD}$ has the form of the QCD action, but possesses taste degrees of freedom and respects the continuum taste SU(4) symmetry. To account for differences in the masses of valence and sea quarks in lattice calculations, the SET can be formulated with bosonic ghost quarks and fermionic valence and sea quarks~\cite{Bernard:1993sv}. 
We use the replica method \cite{Damgaard:2000gh} and so include in the action only (fermionic) valence and sea quarks. The operators in $S_6$ have mass-dimension six, and they break the continuum symmetries to those of the mixed-action lattice theory. In valence and sea sectors, these symmetries are identical to those in the unmixed case~\cite{Lee:1999zxa,Aubin:2003mg}, but now there are no symmetries rotating valence and sea quarks together~\cite{Bae:2010ki,CB:notes}. As in the unmixed case, only a subset of the operators in $S_6$ contribute to the leading-order (LO) chiral Lagrangian, and they are four-fermion operators respecting the remnant taste symmetry $\Gamma_4\rtimes\,$SO(4) $\subset$ SU(4). They can be obtained from those of the unmixed SET by introducing projection operators $P_{v,\sigma}$ onto the valence and sea sectors in the $\Gamma_4\rtimes\,$SO(4)-respecting operators of the unmixed theory and allowing the LECs in the valence and sea sectors to take different values~\cite{CB:notes}. Generically, \begin{align} c\,&\bar{\psi}(\gamma_s\otimes\xi_t)\psi\,\bar{\psi}(\gamma_s\otimes\xi_t)\psi\label{eq:genop} \\ \longrightarrow\,c_{vv}\,&\bar{\psi}(\gamma_s\otimes\xi_t)P_v\psi\,\bar{\psi}(\gamma_s\otimes\xi_t)P_v\psi + (v\rightarrow\sigma) \nonumber\\ +\,2c_{v\sigma}\,&\bar{\psi}(\gamma_s\otimes\xi_t)P_v\psi\,\bar{\psi}(\gamma_s\otimes\xi_t)P_\sigma\psi\,, \nonumber \end{align} where $\gamma_s$ ($\xi_t$) is a spin (taste) matrix, and the quark spinors $\psi$ carry flavor indices taking on values in the valence and sea sectors. In Eq.~\eqref{eq:genop}, the flavor indices are contracted within each bilinear. 
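As a concrete illustration of the projector algebra behind Eq.~\eqref{eq:genop}, the following sketch checks, in a toy flavor space, that $P_v + P_\sigma = 1$ and $P_v P_\sigma = 0$, and that equal LECs $c_{vv}=c_{\sigma\sigma}=c_{v\sigma}=c$ recombine the projected bilinears into the unmixed four-fermion structure. The flavor-space dimensions and the value of $c$ are illustrative choices, not quantities from the text; the common spin-taste factor is dropped, so a bilinear is simply $\bar\psi P \psi$ in flavor space.

```python
import numpy as np

# Toy flavor space with 2 valence and 3 sea flavors (counts are illustrative).
n_val, n_sea = 2, 3
n = n_val + n_sea
P_v = np.diag([1.0] * n_val + [0.0] * n_sea)  # projector onto valence flavors
P_s = np.diag([0.0] * n_val + [1.0] * n_sea)  # projector onto sea flavors

# Completeness and orthogonality used implicitly throughout:
assert np.allclose(P_v + P_s, np.eye(n))
assert np.allclose(P_v @ P_s, np.zeros((n, n)))

# With equal LECs c_vv = c_ss = c_vs = c, the three projected terms of
# Eq. (genop) recombine into the unmixed operator c (psibar psi)^2.
rng = np.random.default_rng(0)
psibar, psi = rng.normal(size=n), rng.normal(size=n)
B_v, B_s = psibar @ P_v @ psi, psibar @ P_s @ psi
c = 0.37  # illustrative LEC
unmixed = c * (psibar @ psi) ** 2
mixed = c * B_v ** 2 + c * B_s ** 2 + 2 * c * B_v * B_s
assert np.isclose(unmixed, mixed)
```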
For the action of the projection operators on the spinors, we may write \begin{align} (P_v\psi)_i &= \psi_i & (P_\sigma\psi)_i &= 0 & \text{for $i\in v$}\,,\\ (P_v\psi)_i &= 0 & (P_\sigma\psi)_i &= \psi_i & \text{for $i\in \sigma$}\,.\nonumber \end{align} In the unmixed case, $c_{vv}=c_{\sigma\sigma}=c_{v\sigma}=c$, and we recover the operators of the unmixed theory. \subsection{\label{subsec:lo-chlag}Leading order chiral Lagrangian} Mapping the SET operators into the chiral theory at LO, we may write~\cite{CB:notes} \begin{align} \mc{L}_\mr{LO} &= \frac{f^2}{8} \mr{Tr}(\partial_{\mu}\Sigma \partial_{\mu}\Sigma^{\dagger}) - \frac{1}{4}\mu f^2 \mr{Tr}(M\Sigma+M\Sigma^{\dagger})\label{eq:LOlag}\\ &+ \frac{2m_0^2}{3}[\mr{Tr}(\phi_I)]^2 + a^2 \mc{V}\,.\nonumber \end{align} The first three terms are identical to the kinetic energy, mass, and anomaly operators of the unmixed theory, respectively; the normalization of the anomaly term is arbitrary, but natural in SU(3) SChPT, for which the mass of the taste-singlet $\eta^\prime$ approaches $m_0$ as $m_0\rightarrow\infty$~\cite{Aubin:2003mg}. To construct the potential $\mc{V}$, the projection operators are conveniently included in spurions.
The result can be written \begin{align} \mc{V} = \mc{U} + \mc{U}^\prime - C_\mr{mix}\mr{Tr}(\tau_3\Sigma\tau_3\Sigma^\dagger),\label{eq:pot} \end{align} where the last term is a taste-singlet potential new in the mixed-action theory, with $\tau_3\equiv P_\sigma - P_v$. It arises from four-quark operators in which $\xi_t=I$, the identity in taste space; such operators map to constants in the unmixed case. In the mixed-action theory, they yield nontrivial chiral operators because the projection operators $P\neq 1$ are included in the taste spurions~\cite{CB:notes}. In the appendix we present a derivation of the last term in Eq.~\eqref{eq:pot}. The potentials $\mc{U}$ and $\mc{U}^\prime$ contain single- and double-trace operators, respectively, that are direct generalizations of those in unmixed SChPT. The operators in $\mc{U}^{(\prime)}$ have independent LECs for the valence-valence, sea-sea, and valence-sea sectors. We write \begin{align} \mc{U} &= \mc{U}_{vv}+\mc{U}_{\sigma\sigma}+\mc{U}_{v\sigma}\,,\\ \mc{U^\prime} &= \mc{U^\prime}_{vv}+\mc{U^\prime}_{\sigma\sigma}+\mc{U^\prime}_{v\sigma}\,, \end{align} where \begin{align} -\mc{U}_{vv} &= C^{vv}_1\textrm{Tr}(\xi_5P_v\Sigma\xi_5P_v\Sigma^{\dagger})\\ &+ C^{vv}_6\ \sum_{\mu<\nu} \textrm{Tr}(\xi_{\mu\nu}P_v\Sigma \xi_{\nu\mu}P_v\Sigma^{\dagger}) \nonumber\\ &+ \frac{C^{vv}_3}{2} [ \textrm{Tr}(\xi_{\nu}P_v\Sigma \xi_{\nu}P_v\Sigma) + p.c.] \nonumber\\ &+ \frac{C^{vv}_4}{2} [ \textrm{Tr}(\xi_{\nu 5}P_v\Sigma \xi_{5\nu}P_v\Sigma) + p.c.]
\,,\nonumber \end{align} \begin{align} -\mc{U}_{\sigma\sigma} &= C^{\sigma\sigma}_1\textrm{Tr}(\xi_5P_\sigma\Sigma\xi_5P_\sigma\Sigma^{\dagger})\\ &+ C^{\sigma\sigma}_6\ \sum_{\mu<\nu} \textrm{Tr}(\xi_{\mu\nu}P_\sigma\Sigma \xi_{\nu\mu}P_\sigma\Sigma^{\dagger}) \nonumber\\ &+ \frac{C^{\sigma\sigma}_3}{2} [ \textrm{Tr}(\xi_{\nu}P_\sigma\Sigma \xi_{\nu}P_\sigma\Sigma) + p.c.] \nonumber\\ &+ \frac{C^{\sigma\sigma}_4}{2} [ \textrm{Tr}(\xi_{\nu 5}P_\sigma\Sigma \xi_{5\nu}P_\sigma\Sigma) + p.c.] \,,\nonumber \end{align} \begin{align} -\mc{U}_{v\sigma} &= C^{v\sigma}_1[\textrm{Tr}(\xi_5P_v\Sigma\xi_5P_\sigma\Sigma^{\dagger}) + p.c.]\\ &+ C^{v\sigma}_6\ \sum_{\mu<\nu}[ \textrm{Tr}(\xi_{\mu\nu}P_v\Sigma \xi_{\nu\mu}P_\sigma\Sigma^{\dagger}) + p.c.] \nonumber\\ &+ C^{v\sigma}_3 [ \textrm{Tr}(\xi_{\nu}P_v\Sigma \xi_{\nu}P_\sigma\Sigma) + p.c.]\nonumber\\ &+ C^{v\sigma}_4 [ \textrm{Tr}(\xi_{\nu 5}P_v\Sigma \xi_{5\nu}P_\sigma\Sigma) + p.c.]\,, \nonumber \end{align} \begin{align} -\mc{U^\prime}_{vv} &= \frac{C^{vv}_{2V}}{4}[\textrm{Tr}(\xi_{\nu}P_v\Sigma)\textrm{Tr}(\xi_{\nu}P_v\Sigma) + p.c.]\\ &+ \frac{C^{vv}_{2A}}{4} [ \textrm{Tr}(\xi_{\nu5}P_v\Sigma)\textrm{Tr}(\xi_{5\nu}P_v\Sigma) + p.c.] 
\nonumber\\ &+ \frac{C^{vv}_{5V}}{2} [ \textrm{Tr}(\xi_{\nu}P_v\Sigma) \textrm{Tr}(\xi_{\nu}P_v\Sigma^{\dagger})] \nonumber\\ &+ \frac{C^{vv}_{5A}}{2} [ \textrm{Tr}(\xi_{\nu5}P_v\Sigma) \textrm{Tr}(\xi_{5\nu}P_v\Sigma^{\dagger}) ]\,,\nonumber \end{align} \begin{align} -\mc{U^\prime}_{\sigma\sigma} &= \frac{C^{\sigma\sigma}_{2V}}{4}[\textrm{Tr}(\xi_{\nu}P_\sigma\Sigma)\textrm{Tr}(\xi_{\nu}P_\sigma\Sigma) + p.c.]\\ &+ \frac{C^{\sigma\sigma}_{2A}}{4} [ \textrm{Tr}(\xi_{\nu5}P_\sigma\Sigma)\textrm{Tr}(\xi_{5\nu}P_\sigma\Sigma) + p.c.]\nonumber \\ &+ \frac{C^{\sigma\sigma}_{5V}}{2} [ \textrm{Tr}(\xi_{\nu}P_\sigma\Sigma) \textrm{Tr}(\xi_{\nu}P_\sigma\Sigma^{\dagger})]\nonumber\\ &+ \frac{C^{\sigma\sigma}_{5A}}{2} [ \textrm{Tr}(\xi_{\nu5}P_\sigma\Sigma) \textrm{Tr}(\xi_{5\nu}P_\sigma\Sigma^{\dagger}) ]\,,\nonumber \end{align} \begin{align} -\mc{U^\prime}_{v\sigma} &= \frac{C^{v\sigma}_{2V}}{2}[\textrm{Tr}(\xi_{\nu}P_v\Sigma)\textrm{Tr}(\xi_{\nu}P_\sigma\Sigma) + p.c.]\\ &+ \frac{C^{v\sigma}_{2A}}{2} [ \textrm{Tr}(\xi_{\nu5}P_v\Sigma)\textrm{Tr}(\xi_{5\nu}P_\sigma\Sigma) + p.c.] \nonumber\\ &+ \frac{C^{v\sigma}_{5V}}{2} [ \textrm{Tr}(\xi_{\nu}P_v\Sigma) \textrm{Tr}(\xi_{\nu}P_\sigma\Sigma^{\dagger}) + p.c.]\nonumber\\ &+ \frac{C^{v\sigma}_{5A}}{2} [ \textrm{Tr}(\xi_{\nu5}P_v\Sigma) \textrm{Tr}(\xi_{5\nu}P_\sigma\Sigma^{\dagger}) + p.c.]\,,\nonumber \end{align} where $p.c.$ indicates the parity conjugate. In the unmixed case, $C_\mr{mix}=0$, $C^{vv}=C^{\sigma\sigma}=C^{v\sigma}=C$, and the potential $\mc{V}$ reduces to that of ordinary SChPT. Restricting attention to two-point correlators of sea-sea particles yields results of the unmixed theory, as expected~\cite{Bernard:1993sv}. \subsection{\label{subsec:tree}Tree-level masses and propagators} As in the unmixed theory, the potential $\mc{V}$ contributes to the tree-level masses of the pions and kaons, which fall into irreducible representations (irreps) of $\Gamma_4\rtimes\,$SO(4). 
For a taste $t$ pseudo-Goldstone boson (PGB) $\phi^t_{xy}$ composed of quarks with flavors $x,y,\ x\neq y$, \begin{align} m_{xy,\,t}^2 &= \mu (m_x + m_y) + a^2 \Delta_F^{xy}\,,\label{eq:tree}\\ t &\in F\in\{P,A,T,V,I\} \,,\nonumber \end{align} where $F$ labels the taste $\Gamma_4\rtimes\,$SO(4) irrep (pseudoscalar, axial, tensor, vector, or scalar). The notation here matches that in our recent papers \cite{ Bailey:2012jy, SWME:2011aa} on taste non-Goldstone pions and kaons in ordinary SChPT. It is also the basis for the notation in the sections below. The mass splitting $\Delta_F^{xy}$ depends on the LEC of the taste-singlet potential ($C_\mr{mix}$), the LECs in the single-trace potential ($\mc{U}$), and the sector (valence or sea) of the quark flavors $x$ and $y$. Expanding the LO Lagrangian through $\mc{O}(\phi^2)$, we have \begin{align} \Delta^{vv}_F &= \frac{8}{f^2}\sum_{b\neq I}\,C^{vv}_b\,(1-\theta^{tb}\theta^{b5})\,,\label{eq:splt-vv}\\ \Delta^{\sigma\sigma}_F &= \frac{8}{f^2}\sum_{b\neq I}\,C^{\sigma\sigma}_b\,(1-\theta^{tb}\theta^{b5})\,, \label{eq:splt-ss}\\ \Delta^{v\sigma}_F &=\frac{16C_\mr{mix}}{f^2}+\frac{8}{f^2}\sum_{b\neq I}\left[\tfrac{1}{2}(C^{vv}_b+C^{\sigma\sigma}_b)-C^{v\sigma}_b\theta^{tb}\theta^{b5}\right]\,, \label{eq:splt-vs} \end{align} where the splitting is $\Delta_F^{vv}$ if both quarks are valence quarks ($xy\in vv$), $\Delta_F^{\sigma\sigma}$ if both quarks are sea quarks ($xy\in \sigma\sigma$), and $\Delta_F^{v\sigma}$ otherwise. The sub(super)script $b$ and taste $t$ are indices labeling the generators of the fundamental irrep of U(4). The numerical constant $\theta^{tb}=+1$ if the generators for $t$ and $b$ commute and $-1$ if they anti-commute.
The LEC $C_b = C_1$, $C_6$, $C_3$, or $C_4$ if $b$ labels a generator corresponding to the $P$, $T$, $V$, or $A$ irrep of $\Gamma_4\rtimes\,$SO(4), respectively. The residual chiral symmetry in the valence-valence sector, as for the unmixed theory, implies $F=P$ particles are Goldstone bosons for $a\neq 0,\ m_q = 0$, and therefore $\Delta_P^{vv}=0$. The same is not true for the taste pseudoscalar, valence-sea PGBs, and generically, $\Delta_P^{v\sigma}\neq 0$. In the flavor-neutral sector, $x = y$, the PGBs mix in the taste singlet, vector, and axial irreps. The Lagrangian mixing terms (hairpins) are \begin{align} &\tfrac{1}{2}\,\delta_I^{ij}\,\phi^I_{ii}\phi^I_{jj} + \tfrac{1}{2}\,\delta_V^{vv}\,\phi^\mu_{\val{i}\val{i}}\phi^\mu_{\val{j}\val{j}} + \tfrac{1}{2}\,\delta_V^{\sigma\sigma}\,\phi^\mu_{\sea{i}\sea{i}}\phi^\mu_{\sea{j}\sea{j}} + \delta_V^{v\sigma}\,\phi^\mu_{\val{i}\val{i}}\phi^\mu_{\sea{j}\sea{j}} \\ &+ (V\rightarrow A,\ \mu \rightarrow \mu5)\,,\nonumber \label{eq:hair} \end{align} where $i,j$ are flavor indices; $\mu$ ($\mu5$) is a taste index in the vector (axial) irrep; and we use an overbar (underbar) to restrict summation to the valence (sea) sector. The $\delta_I^{ij}$-term is the anomaly term; $\delta_I^{ij}\equiv 4m_0^2/3$. In continuum ChPT, taking $m_0\rightarrow\infty$ at the end of the calculation decouples the $\eta^\prime$~\cite{Sharpe:2001fh}. In SChPT, taking $m_0\rightarrow\infty$ decouples the $\eta^\prime_I$. The flavor-singlets in other taste irreps are PGBs and do not decouple~\cite{Aubin:2003mg}. 
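The sign factors $\theta^{tb}$ and the pattern of tree-level splittings in Eqs.~\eqref{eq:splt-vv}--\eqref{eq:splt-vs} lend themselves to a short enumeration check. In the sketch below, taste generators are labeled by subsets of $\{1,2,3,4\}$, and the integer LECs are arbitrary placeholders (not fitted values); the script verifies that $\Delta_P^{vv}=0$ for the Goldstone taste and that the splittings are degenerate within each $\Gamma_4\rtimes\,$SO(4) irrep.

```python
from itertools import combinations

# Taste generators xi_b <-> subsets of {1,2,3,4}; the empty set is the taste
# identity I, the full set is xi_5.  Rank |b| = 1,2,3,4 <-> the V, T, A, P irreps.
TASTES = [frozenset(c) for k in range(5) for c in combinations((1, 2, 3, 4), k)]
XI5 = frozenset((1, 2, 3, 4))

def theta(s, t):
    """+1 if xi_s and xi_t commute, -1 if they anticommute
    (standard sign rule for products of anticommuting matrices)."""
    return (-1) ** (len(s) * len(t) - len(s & t))

# Placeholder LECs C_b keyed by the rank |b| of the generator:
# |b|=1 -> C_3 (V), |b|=2 -> C_6 (T), |b|=3 -> C_4 (A), |b|=4 -> C_1 (P).
C = {1: 7, 2: 5, 3: 9, 4: 11}

def splitting(t):
    """Sum over b != I of C_b (1 - theta^{tb} theta^{b5}), as in Eq. (splt-vv),
    with the overall 8 a^2/f^2 prefactor dropped."""
    return sum(C[len(b)] * (1 - theta(t, b) * theta(b, XI5)) for b in TASTES if b)

assert splitting(XI5) == 0                    # Goldstone taste: Delta_P^{vv} = 0
for k in (1, 2, 3):                           # degeneracy within each irrep
    assert len({splitting(t) for t in TASTES if len(t) == k}) == 1
```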
The $\delta_{V,\,A}^{vv,\sigma\sigma,v\sigma}$-terms are lattice artifacts from the double-trace potential $a^2\mathcal{U}^\prime$, and the couplings $\delta_{V,\,A}^{vv,\sigma\sigma,v\sigma}$ depend linearly on its LECs,
\begin{align}
\delta^{vv}_V &= \frac{16a^2}{f^2}(C^{vv}_{2V}-C^{vv}_{5V}) & \delta^{vv}_A &= \frac{16a^2}{f^2}(C^{vv}_{2A}-C^{vv}_{5A}) \\
\delta^{\sigma\sigma}_V &= \frac{16a^2}{f^2}(C^{\sigma\sigma}_{2V}-C^{\sigma\sigma}_{5V}) & \delta^{\sigma\sigma}_A &= \frac{16a^2}{f^2}(C^{\sigma\sigma}_{2A}-C^{\sigma\sigma}_{5A})\\
\delta^{v\sigma}_V &= \frac{16a^2}{f^2}(C^{v\sigma}_{2V}-C^{v\sigma}_{5V}) & \delta^{v\sigma}_A &= \frac{16a^2}{f^2}(C^{v\sigma}_{2A}-C^{v\sigma}_{5A})\,.
\end{align}
Although the mass splittings and hairpin couplings are different in the three sectors, the tree-level propagator can be written in the same form as in the unmixed case. We have ($k,l$ are flavor indices)
\begin{align}
G^{tb}_{ij,\,kl}(p^2)=\delta^{tb}\left(\frac{\delta_{il}\delta_{jk}}{p^2+m_{ij,\,t}^2}+\delta_{ij}\delta_{kl}\,D^t_{il}\right), \label{treeprop}
\end{align}
where the disconnected propagators vanish (by definition) in the pseudoscalar and tensor irreps, and for the singlet, vector, and axial irreps,
\begin{align}
D^t_{ij} &\equiv \frac{-1}{I_t J_t} \frac{\delta^{ij}_F}{1+\delta^{\sigma\sigma}_F\sea{\sigma_t}}\quad\text{for $ij\notin vv$}\,,\label{eq:discDnotvv}\\
D^t_{ij} &\equiv \frac{-1}{I_t J_t} \left(\frac{(\delta^{v\sigma}_F)^2/\delta^{\sigma\sigma}_F}{1+\delta^{\sigma\sigma}_F\sea{\sigma_t}}+ \delta^{vv}_F-(\delta^{v\sigma}_F)^2/ \delta^{\sigma\sigma}_F\right)\label{eq:discDvv}\\
&\text{for $ij\in vv$}\,,\nonumber
\end{align}
where $I_t \equiv p^2+m_{ii,\,t}^2$, $J_t \equiv p^2+ m_{jj,\,t}^2$, and we use the replica method to quench the valence quarks~\cite{Damgaard:2000gh} and root the sea quarks~\cite{Aubin:2003mg}, so that
\begin{align}
\sea{\sigma_t} &\equiv \sum_{\sea{i}}\frac{1}{p^2 + m_{\sea{ii},\,t}^2} \rightarrow \tfrac{1}{4}\sum_{\sea{i}^\prime}\frac{1}{p^2 + m_{\sea{i}^\prime\sea{i}^\prime,\,t}^2}\,,\\
\val{\sigma_t} &\equiv \sum_{\val{i}}\frac{1}{p^2 + m_{\val{ii},\,t}^2} \rightarrow 0 \,.
\end{align}
The index $\sea{i}^\prime$ is summed over the physical sea quark flavors. As for the continuum, partially quenched case~\cite{Sharpe:2000bc}, the factors arising from iterating sea quark loops can be reduced to a form convenient for doing loop integrations. For three nondegenerate, physical sea quarks $u$, $d$, $s$, we have
\begin{align}
\frac{1}{1+\delta^{\sigma\sigma}_F\sea{\sigma_t}}&=\frac{(p^2+m_{uu,\,t}^2)(p^2+m_{dd,\,t}^2)(p^2+m_{ss,\,t}^2)}{\bigl(p^2+m_{\pi^0_t}^2\bigr)\bigl(p^2+m_{\eta_t}^2\bigr)\bigl(p^2+m_{\eta^\prime_t}^2\bigr)}\,,\label{eq:detrat}
\end{align}
where $m_{\pi^0_t}^2$, $m_{\eta_t}^2$, and $m_{\eta^\prime_t}^2$ are the eigenvalues of the matrices (for tastes $F = I,V,A$)
\begin{align}
\begin{pmatrix}
m_{uu,\,t}^2 + \delta^{\sigma\sigma}_F/4 & \delta^{\sigma\sigma}_F/4 & \delta^{\sigma\sigma}_F/4 \\
\delta^{\sigma\sigma}_F/4 & m_{dd,\,t}^2 + \delta^{\sigma\sigma}_F/4 & \delta^{\sigma\sigma}_F/4 \\
\delta^{\sigma\sigma}_F/4 & \delta^{\sigma\sigma}_F/4 & m_{ss,\,t}^2 + \delta^{\sigma\sigma}_F/4
\end{pmatrix}\,.
\end{align}
Equation~\eqref{eq:detrat} follows by solving for its denominator, which, expressed in terms of the hairpin coupling and the factors in the numerator, is the determinant of $p^2$ plus the matrix above; the derivation is given in Appendix A of Ref.~\cite{Sharpe:2000bc}. In the disconnected propagator $D^t_{ij}$, an additional piece appears in the valence-valence sector (Eq.~\eqref{eq:discDvv}). As noted in Refs.~\cite{Bae:2010ki,CB:notes}, this piece has the form of a quenched disconnected propagator, for which $\sea{\sigma_t}=0$, and the assumption of factorization leads us to expect its suppression. In the unmixed case, the mass splittings and hairpin couplings in the valence and sea sectors are degenerate, and the propagator reduces.
\begin{figure}[htbp!]
\subfigure[]{\includegraphics[width=0.17\textwidth]{pion_mass_qflow_1.pdf}}
\subfigure[]{\includegraphics[width=0.17\textwidth]{pion_mass_qflow_3.pdf}}
\subfigure[]{\includegraphics[width=0.17\textwidth]{pion_mass_qflow_8.pdf}}
\subfigure[]{\includegraphics[width=0.17\textwidth]{pion_mass_qflow_10.pdf}}
\subfigure[]{\includegraphics[width=0.17\textwidth]{pion_mass_qflow_5.pdf}}
\subfigure[]{\includegraphics[width=0.17\textwidth]{pion_mass_qflow_6.pdf}}
\subfigure[]{\includegraphics[width=0.2\textwidth]{pion_decay_qflow_1.pdf}}
\subfigure[]{\includegraphics[width=0.2\textwidth]{pion_decay_qflow_3.pdf}}
\subfigure[]{\includegraphics[width=0.2\textwidth]{pion_decay_qflow_5.pdf}}
\caption{Quark flows for the NLO self-energy tadpoles (a-f) and current-vertex loops (g-i).
The $x$ and $y$ quarks are continuously connected to the external lines, closed loops are sea quarks, and current insertions are represented by crossed boxes.\label{fig:qflowdiag}} \end{figure} \section{\label{sec:mass}Next-to-leading order corrections to masses} For a taste $t$ PGB $\phi^t_{xy}$ composed of quarks with flavors $x,y,\ x\neq y$, the mass is defined in terms of the self-energy, as in continuum ChPT. The NLO mass can be obtained by adding the NLO self-energy to the tree-level mass, \begin{align} M_{xy,\,t}^2 = m_{xy,\,t}^2 + \Sigma_{xy,\, t}(-m_{xy,\,t}^2) \,. \end{align} $\Sigma_{xy,\, t}$ consists of connected and disconnected tadpole loops with vertices from the LO Lagrangian at $\mc{O}(\phi^4)$ and tree-level graphs with vertices from the NLO Lagrangian at $\mc{O}(\phi^2)$. The tadpole graphs contribute the leading chiral logarithms, while the tree-level terms are analytic in the quark masses and the square of the lattice spacing. We have not attempted to enumerate all terms in the NLO Lagrangian. It consists of generalizations of the Gasser-Leutwyler terms~\cite{Gasser:1984gg}, as in ordinary, unmixed SChPT, as well as generalizations of the Sharpe-Van de Water Lagrangian~\cite{Sharpe:2004is} to the mixed action case. There also exist additional operators including traces over taste-singlets; such operators vanish in the unmixed theory. Given the different kinds of operators in the NLO Lagrangian, the analytic terms at NNLO have the same form as those in the unmixed theory, but with distinct LECs for valence-valence, sea-sea, and valence-sea PGBs. We have calculated the tadpole graphs for the sea-sea PGBs and find them identical to the results in the unmixed theory, as expected~\cite{Bernard:1993sv}. Below we consider the tadpole graphs for the valence-valence and valence-sea PGBs. 
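As a numerical cross-check of the determinant-ratio reduction, Eq.~\eqref{eq:detrat}, which enters the evaluation of the disconnected pieces of the loop integrals below, one can compare both sides directly. The masses and hairpin coupling in the sketch are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)
m2 = rng.uniform(0.1, 1.0, size=3)   # m_uu^2, m_dd^2, m_ss^2 (illustrative)
delta = 0.3                          # hairpin coupling delta^{sigma sigma}_F
p2 = 0.7                             # Euclidean p^2

# Left-hand side: geometric sum of rooted sea loops,
# sigma_t = (1/4) sum_i 1/(p^2 + m_ii^2).
sigma = 0.25 * np.sum(1.0 / (p2 + m2))
lhs = 1.0 / (1.0 + delta * sigma)

# Right-hand side: ratio of products over the flavor-neutral masses and the
# eigenvalues of the matrix in the text (diagonal masses plus delta/4 in
# every entry).
M = np.diag(m2) + (delta / 4.0) * np.ones((3, 3))
eig = np.linalg.eigvalsh(M)
rhs = np.prod(p2 + m2) / np.prod(p2 + eig)

assert np.isclose(lhs, rhs)
```

The agreement is an instance of the matrix-determinant lemma for a rank-one update of a diagonal matrix.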
\subsection{\label{subsec:val-val}Valence-valence sector} For any $\Gamma_4\rtimes\,$SO(4) irrep, the calculation of the valence-valence PGB self-energies proceeds as for the unmixed case~\cite{Aubin:2003mg,SWME:2011aa}. Quark flow diagrams corresponding to the tadpole graphs are shown in diagrams (a-f) of Fig.~\ref{fig:qflowdiag}. The kinetic energy, mass, and $\mc{U}$ vertices yield graphs of types (a), (c), and (d), and the taste-singlet potential vertices ($\propto C_\mr{mix}$) yield graphs of type (a), \begin{align} \frac{a^2C_\mr{mix}}{3f^2(4\pi f)^2}\,\sum_{\val{i}^\prime\sea{i}^\prime b}\,\ell(m^2_{\val{i}^\prime \sea{i}^\prime,\,b})\,,\label{eq:Cmix} \end{align} where $\val{i}^\prime$ is summed over $\val{x},\val{y}$; $\sea{i}^\prime$ is summed over the physical sea quarks $u,d,s$; and $\ell(m^2)\equiv m^2 \ln (m^2/\Lambda^2) + \delta_1(mL)$ is the chiral logarithm, with $\Lambda$ the scale of dimensional regularization and $\delta_1$ the correction for finite spatial volume~\cite{Bernard:2001yj}. ($L$ is the spatial extent of the lattice.) Vertices from $\mc{U}^\prime$ yield graphs of types (b), (e), and (f). The hairpin vertex graphs are of types (e) and (f). As in the unmixed case, they can be combined and eliminated in favor of a contribution of type (d). In the mixed-action case, the necessary identity is ($t\in V,A$) \begin{align} \frac{\delta^{vv}_F}{p^2+m_{\val{x}\val{x},\,t}^2} + \frac{\delta^{v\sigma}_F}{4}\sum_{\sea{i}^\prime} D^t_{\val{x}\sea{i}^\prime} = -(p^2+m_{\val{y}\val{y},\,t}^2) D^t_{\val{x}\val{y}}\,. \label{eq:coolID} \end{align} This relation follows from Eqs.~\eqref{eq:discDnotvv} and \eqref{eq:discDvv}. As in the unmixed theory, graphs of type (b) come from vertices $\propto \omega^{vv}_t \equiv 16(C^{vv}_{2F} + C^{vv}_{5F})/f^2$ for $F=V,A$; they have the same form as those in the unmixed case~\cite{SWME:2011aa}. 
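The identity in Eq.~\eqref{eq:coolID} can be verified directly from the propagators of Eqs.~\eqref{eq:discDnotvv} and \eqref{eq:discDvv}. The couplings and masses in the sketch below are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(2)
p2 = 0.4
m2_sea = rng.uniform(0.1, 1.0, size=3)   # flavor-neutral sea masses m^2_{ii,t}
m2_x, m2_y = 0.2, 0.5                    # valence m^2_{xx,t}, m^2_{yy,t}
d_vv, d_ss, d_vs = 0.21, 0.34, 0.27      # hairpin couplings (illustrative)

sigma = 0.25 * np.sum(1.0 / (p2 + m2_sea))   # rooted sea-loop sum
denom = 1.0 + d_ss * sigma

def D_mixed(I, J, delta):
    """Eq. (discDnotvv): disconnected propagator, not both flavors valence."""
    return -(delta / denom) / (I * J)

def D_vv(I, J):
    """Eq. (discDvv): valence-valence disconnected propagator."""
    return -((d_vs**2 / d_ss) / denom + d_vv - d_vs**2 / d_ss) / (I * J)

I_x, I_y = p2 + m2_x, p2 + m2_y
lhs = d_vv / I_x + (d_vs / 4.0) * sum(
    D_mixed(I_x, p2 + m2, d_vs) for m2 in m2_sea)
rhs = -I_y * D_vv(I_x, I_y)
assert np.isclose(lhs, rhs)
```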
Adding the various contributions and evaluating the result at $p^2=-m^2_{\val{x}\val{y},\,t}$, we have the NLO, one-loop contributions to the self-energies of the valence-valence PGBs, \begin{align} -\Sigma&^{\text{NLO loop}}_{\val{x}\val{y},\,t}(-m^2_{\val{x}\val{y},\,t})= \frac{a^2}{48(4\pi f)^2}\ \times \label{eq:Sigvv}\\ \sum_c\Biggl[&\left(\Delta^{vv,v\sigma}_{ct}-\Delta^{vv}_t-\Delta^{v\sigma}_c+\frac{16C_\mr{mix}}{f^2}\right)\,\sum_{\val{i}^\prime\sea{i}^\prime} \ell(m^2_{\val{i}^\prime\sea{i}^\prime,\,c})\nonumber\\ +\ &\frac{3}{2}\Biggl(\sum_{b\in V,A}\omega^{vv}_b\tau_{cbt}\tau_{cbt}(1+\theta^{ct})\Biggr)\ell(m^2_{\val{x}\val{y},\,c})\Biggr]\nonumber \\ +\ &\frac{1}{12(4\pi f)^2}\,\int\frac{d^4q}{\pi^2}\ \times \nonumber \\ \sum_c\Biggl[&a^2\left(\Delta^{vv}_{ct}-\Delta^{vv}_t-\Delta^{vv}_c\right)(D_{\val{xx}}^c+D_{\val{yy}}^c)\nonumber \\ + \biggl[&\Bigl(2(1-\theta^{ct}) + \rho^{ct}\Bigr)q^2 +\Bigl(2(1+2\theta^{ct}) + \rho^{ct}\Bigr)m^2_{\val{x}\val{y},\,5}\nonumber \\ +\ &2a^2\Delta^{\prime vv}_{ct} + a^2\Bigl(2\theta^{ct}\Delta^{vv}_t + (2+\rho^{ct})\Delta^{vv}_c\Bigr)\biggr]D_{\val{x}\val{y}}^c\Biggr]\,,\nonumber \end{align} where $\rho^{ct}\equiv -4(2+\theta^{ct})$ unless $c=I$, when it vanishes, $\tau_{cbt}\equiv\textrm{Tr}(T^cT^bT^t)$ is a trace over (a product of) generators of U(4), and \begin{align} \Delta^{vv}_{ct}&\equiv\frac{8}{f^2}\sum_{b\neq I} C^{vv}_b(5+3\theta^{cb}\theta^{bt}-4\theta^{5b}\theta^{bt}-4\theta^{cb}\theta^{b5})\,,\\ \Delta^{\prime vv}_{ct}&\equiv\frac{8\theta^{ct}}{f^2}\sum_{b\neq I} C^{vv}_b(1+3\theta^{cb}\theta^{bt}-2\theta^{5b}\theta^{bt}-2\theta^{cb}\theta^{b5})\,,\\ \Delta^{vv,v\sigma}_{ct}&\equiv \frac{8}{f^2}\sum_{b\neq I}\bigl[\tfrac{1}{2}(9C^{vv}_b + C^{\sigma\sigma}_b) + C^{v\sigma}_b(3\theta^{cb}\theta^{bt}\nonumber\\ &-\ 4\theta^{cb}\theta^{b5})-4C^{vv}_b\theta^{5b}\theta^{bt}\bigr]\,. 
\end{align} The form of Eq.~\eqref{eq:Sigvv} is the same as that in ordinary SChPT~\cite{SWME:2011aa}; the differences are in the definition of the disconnected propagators and the LECs of the effective field theory. The reduction to the unmixed case is straightforward. The valence-valence, taste-pseudoscalar PGBs are true Goldstone bosons in the chiral limit, $m_x,m_y\rightarrow 0,\ a\neq 0$. Setting $t=5$ in Eq.~\eqref{eq:Sigvv} and noting that \begin{align} \Delta^{vv,v\sigma}_{c5}&=\Delta^{v\sigma}_c - \frac{16C_\mr{mix}}{f^2}\label{eq:Delmix-vsig}\\ \Delta^{vv}_{c5}&=\Delta^{vv}_c\\ \Delta^{\prime vv}_{c5}&=-\theta^{c5}\Delta^{vv}_c\\ \Delta^{vv}_5&=0\,, \end{align} we have \begin{align} -\Sigma^{\text{NLO loop}}_{\val{x}\val{y},\,5}(-m^2_{\val{x}\val{y},\,5})=\frac{\mu(m_{\val{x}}+m_{\val{y}})}{2(4\pi f)^2}\,\sum_b\theta^{b5}\int\frac{d^4q}{\pi^2}D^b_{\val{x}\val{y}}\,, \end{align} which is the generalization of the results of Ref.~\cite{Aubin:2003mg} to the mixed-action case. As in ordinary SChPT, only graphs of type (d) contribute. To generalize to the mixed-action theory, one has only to replace the disconnected propagators $D^t_{\val{x}\val{y}}$ with their counterparts in the mixed-action theory. \subsection{\label{subsec:val-sea}Valence-sea sector} We consider mesons $\phi^t_{\val{x}\sea{y}}$ with one valence quark $\val{x}$ and one sea quark $\sea{y}$. For tadpoles with vertices from the kinetic energy and mass terms of the LO Lagrangian (Eq.~\eqref{eq:LOlag}), we find graphs of types (a), (c), and (d).
\begin{align} \frac{1}{48(4\pi f)^2}\,\sum_{c,\sea{i}^\prime}\Bigg[ \left( p^2 + \mu(m_{\val{x}} + m_{\sea{y}}) - a^2\Delta^{v\sigma}_c \right) \ell(m^2_{\val{x}\sea{i}^\prime,c}) \\ +\ \left( p^2 + \mu(m_{\val{x}} + m_{\sea{y}}) - a^2\Delta^{\sigma\sigma}_c \right) \ell(m^2_{\sea{y}\sea{i}^\prime,c}) \Bigg] \nonumber \end{align} \begin{align} +\ \frac{1}{12(4\pi f)^2}\,\sum_c \int \frac{d^4q}{\pi^2} \Biggl[ \left( p^2 + q^2 + \mu(3m_{\val{x}} + m_{\sea{y}}) \right) D^c_{\val{x}\val{x}} \nonumber\\ +\ \left( p^2 + q^2 + \mu(m_{\val{x}} + 3m_{\sea{y}}) \right) D^c_{\sea{y}\sea{y}} \nonumber\\ -\ 2\theta^{ct} \left( p^2 + q^2 - \mu(m_{\val{x}} + m_{\sea{y}}) \right) D_{\val{x}\sea{y}}^c \Biggr]\,,\nonumber \end{align} where $\sea{i}^\prime$ is summed over the physical sea-quark flavors. As for the sea-sea and valence-valence sectors, the $q^2 D^c_{\val{x}\val{x}}$ and $q^2 D^c_{\sea{y}\sea{y}}$ terms can be eliminated in favor of a $q^2 D^c_{\val{x}\sea{y}}$ term. But for the valence-sea mesons, an additional term arises, with the form of a connected contribution [graph (e) of Fig.~\ref{fig:qflowdiag}]. The necessary identities are \begin{align} (q^2 + 2\mu m_{\val{x}})D^t_{\val{x}\val{x}} &= \frac{\delta^{v\sigma}_F}{\delta^{\sigma\sigma}_F}(q^2 + m_{\sea{yy},\,t}^2) D^t_{\val{x}\sea{y}} \label{eq:vsig_q2Dxx_id}\\ &-\, a^2\Delta^{vv}_F D^t_{\val{x}\val{x}} + \frac{(\delta^{v\sigma}_F)^2/\delta^{\sigma\sigma}_F - \delta^{vv}_F}{q^2 + m_{\val{xx},\,t}^2}\,,\nonumber \\ (q^2 + 2\mu m_{\sea{y}})D^t_{\sea{y}\sea{y}} &= \frac{\delta^{\sigma\sigma}_F}{\delta^{v\sigma}_F}(q^2 + m_{\val{xx},\,t}^2) D^t_{\val{x}\sea{y}} \label{eq:vsig_q2Dyy_id} \\ &-\, a^2\Delta^{\sigma\sigma}_F D^t_{\sea{y}\sea{y}}\,,\nonumber \end{align} which hold for $t\in F = V,A,I$. 
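Equations~\eqref{eq:vsig_q2Dxx_id} and \eqref{eq:vsig_q2Dyy_id} can be verified numerically against the tree-level propagators, Eqs.~\eqref{eq:discDnotvv} and \eqref{eq:discDvv}, with the flavor-neutral masses built from quark masses and splittings via Eq.~\eqref{eq:tree}. All quark masses, splittings, and hairpin couplings below are illustrative values only:

```python
import numpy as np

mu, a2 = 1.0, 0.1
m_x = 0.08                            # valence quark mass (illustrative)
m_sea = np.array([0.05, 0.09, 0.20])  # sea quark masses u, d, s (illustrative)
Dvv, Dss = 0.6, 0.9                   # splittings Delta^{vv}_F, Delta^{ss}_F
d_vv, d_ss, d_vs = 0.21, 0.34, 0.27   # hairpin couplings
q2 = 0.3

m2_xx = 2 * mu * m_x + a2 * Dvv       # Eq. (tree), valence-valence
m2_sea = 2 * mu * m_sea + a2 * Dss    # flavor-neutral sea-sea masses
iy = 1                                # pick y = d (illustrative)
m_y, m2_yy = m_sea[iy], m2_sea[iy]

sigma = 0.25 * np.sum(1.0 / (q2 + m2_sea))
denom = 1.0 + d_ss * sigma
I_x, J_y = q2 + m2_xx, q2 + m2_yy

# Disconnected propagators, Eqs. (discDvv) and (discDnotvv):
D_xx = -((d_vs**2 / d_ss) / denom + d_vv - d_vs**2 / d_ss) / I_x**2
D_yy = -(d_ss / denom) / J_y**2
D_xy = -(d_vs / denom) / (I_x * J_y)

# Eq. (vsig_q2Dxx_id)
lhs1 = (q2 + 2 * mu * m_x) * D_xx
rhs1 = ((d_vs / d_ss) * J_y * D_xy - a2 * Dvv * D_xx
        + (d_vs**2 / d_ss - d_vv) / I_x)
assert np.isclose(lhs1, rhs1)

# Eq. (vsig_q2Dyy_id)
lhs2 = (q2 + 2 * mu * m_y) * D_yy
rhs2 = (d_ss / d_vs) * I_x * D_xy - a2 * Dss * D_yy
assert np.isclose(lhs2, rhs2)
```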
Applying these identities to the above result gives \begin{align} \frac{1}{48(4\pi f)^2}\,\sum_{c,\sea{i}^\prime}\Bigg[& \left( p^2 + \mu(m_{\val{x}} + m_{\sea{y}}) - a^2\Delta^{v\sigma}_c \right) \ell(m^2_{\val{x}\sea{i}^\prime,c}) \\ +\ &\left( p^2 + \mu(m_{\val{x}} + m_{\sea{y}}) - a^2\Delta^{\sigma\sigma}_c \right) \ell(m^2_{\sea{y}\sea{i}^\prime,c}) \Bigg] \nonumber \\ -\ \frac{1}{12(4\pi f)^2}&\sum_{c\in V,A} \left( \delta^{vv}_c - (\delta^{v\sigma}_c)^2/\delta^{\sigma\sigma}_c \right) \ell(m^2_{\val{x}\val{x},c}) \nonumber \\ +\ \frac{1}{12(4\pi f)^2}&\sum_c \int \frac{d^4q}{\pi^2}\ \times \nonumber \\ \Biggl[ &\left( p^2 + \mu(m_{\val{x}} + m_{\sea{y}}) - a^2\Delta^{vv}_c \right) D^c_{\val{x}\val{x}} \nonumber\\ +\ &\left( p^2 + \mu(m_{\val{x}} + m_{\sea{y}}) - a^2\Delta^{\sigma\sigma}_c \right) D^c_{\sea{y}\sea{y}} \nonumber \\ +\ &\Biggl( -2\theta^{ct} p^2 + \left( \frac{\delta^{v\sigma}_c}{\delta^{\sigma\sigma}_c} + \frac{\delta^{\sigma\sigma}_c}{\delta^{v\sigma}_c} - 2\theta^{ct} \right) q^2 \nonumber \\ + \left( \frac{\delta^{\sigma\sigma}_c}{\delta^{v\sigma}_c} + \theta^{ct} \right) & (2\mu m_{\val{x}}) + \left( \frac{\delta^{v\sigma}_c}{\delta^{\sigma\sigma}_c} + \theta^{ct} \right)(2\mu m_{\sea{y}}) \nonumber \\ +\ &a^2\left( \frac{\delta^{v\sigma}_c}{\delta^{\sigma\sigma}_c} \Delta^{\sigma\sigma}_c + \frac{\delta^{\sigma\sigma}_c}{\delta^{v\sigma}_c} \Delta^{vv}_c \right) \Biggr) D_{\val{x}\sea{y}}^c \Biggr]\,.\nonumber \end{align} From the taste-singlet potential, we find contributions not only from graphs of type (a), as in the valence-valence sector, but also from graphs of types (c) and (d), \begin{align} \frac{a^2C_\mr{mix}}{3f^2(4\pi f)^2}\,\sum_{b}\Bigg[\sum_{\sea{i}^\prime} \left( 8\ell(m^2_{\val{x}\sea{i}^\prime,b}) + \ell(m^2_{\sea{y}\sea{i}^\prime,b}) \right) \label{eq:vsig_Ipot}\\ +\ 4\int \frac{d^4q}{\pi^2} \left( D^b_{\val{x}\val{x}} + D^b_{\sea{y}\sea{y}} - 2\theta^{bt} D_{\val{x}\sea{y}}^b \right) \Bigg]\,.\nonumber \end{align} From 
the single-trace potential $\mc{U}$, we have graphs of types (a), (c), and (d), \begin{align} \frac{a^2}{48(4\pi f)^2} \sum_{b}\Bigg[\sum_{\sea{i}^\prime}\Big( \Delta_{bt}^{v\sigma,v\sigma} \ell(m^2_{\val{x}\sea{i}^\prime,b}) \label{eq:vsig_U} \\ +\ \Delta_{bt}^{v\sigma,\sigma\sigma} \ell(m^2_{\sea{y}\sea{i}^\prime,b}) \Big) \nonumber \\ +\ 4 \int \frac{d^4q}{\pi^2} \Big( \Delta_{bt}^{v\sigma,vv} D^b_{\val{x}\val{x}} + \Delta_{bt}^{v\sigma,\sigma\sigma} D^b_{\sea{y}\sea{y}} \nonumber \\ +\ 2 \Delta_{bt}^{\prime\, v\sigma,v\sigma} D_{\val{x}\sea{y}}^b \Big) \Bigg]\,,\nonumber \end{align} where \begin{align} \Delta_{ct}^{v\sigma,v\sigma} \equiv \frac{8}{f^2}\sum_{b\neq I} & \bigl[ 4C^{vv}_b + C^{\sigma\sigma}_b \bigl( 1 + 3 \theta^{cb} \theta^{bt} \bigr) \\ -\ &4 C^{v\sigma}_b \bigl( \theta^{5b}\theta^{bt} + \theta^{cb}\theta^{b5} \bigr) \bigr]\,,\nonumber \\ \Delta_{ct}^{v\sigma,\sigma\sigma} \equiv \frac{8}{f^2}\sum_{b\neq I} & \bigl[ C^{\sigma\sigma}_b \bigl( \tfrac{9}{2} - 4 \theta^{cb}\theta^{b5} \bigr) + \tfrac{1}{2}C^{vv}_b \\ +\ & C^{v\sigma}_b \bigl( 3 \theta^{cb} \theta^{bt} - 4 \theta^{5b}\theta^{bt} \bigr) \bigr]\,, \nonumber \\ \Delta_{ct}^{v\sigma,vv} \equiv \frac{8}{f^2}\sum_{b\neq I} & \bigl[ C^{vv}_b \bigl( \tfrac{9}{2} - 4 \theta^{cb}\theta^{b5} \bigr) + \tfrac{1}{2}C^{\sigma\sigma}_b \\ +\ & C^{v\sigma}_b \bigl( 3 \theta^{cb} \theta^{bt} - 4 \theta^{5b}\theta^{bt} \bigr) \bigr]\,,\nonumber\\ \Delta_{ct}^{\prime\,v\sigma,v\sigma} \equiv \frac{8\theta^{ct}}{f^2}\sum_{b\neq I} & \bigl[ \bigl( C^{vv}_b + C^{\sigma\sigma}_b \bigr) \bigl( \tfrac{1}{2} - \theta^{cb}\theta^{b5} \bigr) \\ +\ & C^{v\sigma}_b \bigl( 3 \theta^{cb} \theta^{bt} - 2 \theta^{5b}\theta^{bt} \bigr) \bigr]\,.\nonumber \end{align} In the unmixed case, $\Delta_{ct}^{v\sigma,v\sigma}=\Delta_{ct}^{v\sigma,\sigma\sigma}=\Delta_{ct}^{v\sigma,vv}=\Delta_{ct}$, $\Delta_{ct}^{\prime\,v\sigma,v\sigma}=\Delta_{ct}^{\prime}$, and the contribution from $\mc{U}$ reduces 
\cite{Aubin:2003mg,SWME:2011aa}. We note that $\Delta_{ct}^{v\sigma,\sigma\sigma}$ appears in both connected and disconnected terms. From the double-trace potential $\mc{U}^\prime$, we have, after combining graphs of types (e) and (f) to eliminate those of type (f), \begin{align} &\frac{1}{12(4\pi f)^2}\ \times \\ \sum_c \biggl[&\frac{3a^2}{8} \sum_{b\in V,A} \tau_{cbt}\tau_{cbt} \left( \omega^{v\sigma}_b + \frac{\theta^{ct}}{2} ( \omega^{vv}_b + \omega^{\sigma\sigma}_b ) \right) \ell ( m_{\val{x}\sea{y},\,c}^2 ) \nonumber \\ + &\int \frac{d^4q}{\pi^2} \rho^{ct} \left( q^2 + m_{\val{x}\sea{y},\,5}^2 + \frac{a^2}{2} ( \Delta^{vv}_c + \Delta^{\sigma\sigma}_c ) \right) D^c_{\val{x}\sea{y}} \biggr] \nonumber \\ +\ &\frac{1}{3(4\pi f)^2}\sum_{c\in V,A} \biggl[ \Bigl( \delta^{vv}_c - ( \delta^{v\sigma}_c )^2 / \delta^{\sigma\sigma}_c \Bigr) \ell(m^2_{\val{xx},c}) \nonumber \\ + &\int \frac{d^4q}{\pi^2} \biggl( \Bigl( 2 - \delta^{\sigma\sigma}_c / \delta^{v\sigma}_c - \delta^{v\sigma}_c / \delta^{\sigma\sigma}_c \Bigr) q^2 \nonumber \\ +\ &\Bigl( 1 - \delta^{\sigma\sigma}_c / \delta^{v\sigma}_c \Bigr) ( 2 \mu m_{\val{x}} + a^2 \Delta^{vv}_c ) \nonumber \\ +\ &\Bigl( 1 - \delta^{v\sigma}_c / \delta^{\sigma\sigma}_c \Bigr) ( 2 \mu m_{\sea{y}} + a^2 \Delta^{\sigma\sigma}_c ) \biggr) D^c_{\val{x}\sea{y}} \biggr]\,.\nonumber \end{align} The reduction of this expression in the unmixed case is immediate. In the valence-valence sector and unmixed cases, the graphs of types (e) and (f) can be combined into a graph of type (d). In the valence-sea sector, we eliminate graphs of type (f) in favor of those of type (d), but a contribution of type (e) remains. 
Adding the various contributions and evaluating the sum at $p^2=-m^2_{\val{x}\sea{y},\,t}$ gives, for graphs with connected propagators, \begin{align} -\Sigma&_{\val{x}\sea{y},\,t}^{\mr{NLO\ loop,\ con}}(-m^2_{\val{x}\sea{y},t}) = \frac{a^2}{48(4\pi f)^2}\ \times \label{eq:Sigvs-con}\\ \sum_c \Bigg[ & \left( \Delta_{ct}^{v\sigma,v\sigma} - \Delta^{v\sigma}_t -\Delta^{v\sigma}_c + \frac{128 C_\text{mix}}{f^2}\right) \sum_{\sea{i}^\prime} \ell(m^2_{\val{x}\sea{i}^\prime,c}) \nonumber \\ +\ &\left( \Delta_{ct}^{v\sigma,\sigma\sigma} - \Delta^{v\sigma}_t -\Delta^{\sigma\sigma}_c + \frac{16 C_\text{mix}}{f^2}\right) \sum_{\sea{i}^\prime} \ell(m^2_{\sea{y}\sea{i}^\prime,c}) \nonumber \\ +\ &\frac{3}{2}\sum_{b\in V,A} \tau_{cbt}\tau_{cbt} \left(\omega_b^{v\sigma}+ \frac{\theta^{ct}}{2}(\omega_b^{vv}+\omega_b^{\sigma\sigma}) \right)\ell(m^2_{\val{x}\sea{y},c}) \Bigg] \nonumber \\ +\ &\frac{1}{4(4\pi f)^2}\sum_{c\in V,A} \left(\delta^{vv}_c - (\delta^{v\sigma}_c)^2/\delta^{\sigma\sigma}_c\right)\ell(m^2_{\val{x}\val{x},c})\,, \nonumber \end{align} while for the graphs with disconnected propagators, we have \begin{align} -\Sigma&_{\val{x}\sea{y},\,t}^{\mr{NLO\ loop,\ disc}}(-m^2_{\val{x}\sea{y},t}) =\ \frac{1}{12(4\pi f)^2}\int\frac{d^4q}{\pi^2}\ \times \label{eq:Sigvs-disc} \\ \sum_{c} \Biggl[ & a^2\left( \Delta_{ct}^{v\sigma,vv}-\Delta^{v\sigma}_t-\Delta^{vv}_c + \frac{16 C_\text{mix}}{f^2}\right) D^c_{\val{x}\val{x}} \nonumber\\ +\ & a^2\left( \Delta_{ct}^{v\sigma,\sigma\sigma}-\Delta^{v\sigma}_t-\Delta^{\sigma\sigma}_c + \frac{16 C_\text{mix}}{f^2}\right) D^c_{\sea{y}\sea{y}} \nonumber\\ +\ & \bigg[ \left(8-3\bigg(\frac{\delta^{v\sigma}_c}{\delta^{\sigma\sigma}_c}+\frac{\delta^{\sigma\sigma}_c}{\delta^{v\sigma}_c}\bigg)-2\theta^{ct}+\rho^{ct}\right)q^2 \nonumber \\ +\ & \left( 4 - 3 \frac{ \delta^{\sigma\sigma}_c }{ \delta^{v\sigma}_c } + 2 \theta^{ct} + \frac{ \rho^{ct} }{2} \right)( 2 \mu m_{\val{x}} ) \nonumber \\ +\ & \left( 4 - 3 \frac{ \delta^{v\sigma}_c }{
\delta^{\sigma\sigma}_c } + 2 \theta^{ct} + \frac{ \rho^{ct} }{2} \right)( 2 \mu m_{\sea{y}} ) + 2 a^2\Delta_{ct}^{\prime\, v\sigma,v\sigma} \nonumber \\ +\ & a^2\bigg(2\theta^{ct}\Delta^{v\sigma}_t + \bigg(4-3\frac{\delta^{\sigma\sigma}_c}{\delta^{v\sigma}_c}+\frac{\rho^{ct}}{2}\bigg)\Delta^{vv}_c \nonumber \\ +\ & \bigg(4-3\frac{\delta^{v\sigma}_c}{\delta^{\sigma\sigma}_c}+\frac{\rho^{ct}}{2}\bigg)\Delta^{\sigma\sigma}_c \bigg) - \frac{32a^2\theta^{ct}C_\text{mix}}{f^2} \bigg]D_{\val{x}\sea{y}}^c \Biggr]\,.\nonumber \end{align} The reduction in the unmixed case is straightforward. There is no symmetry under $\val{x}\leftrightarrow \sea{y}$; when using the replica method, the valence and sea sectors of the effective theory are distinguished by the operations of partial quenching (the valence quarks) and rooting (the sea quarks). The taste pseudoscalars are not Goldstone bosons (in the chiral limit) at non-zero lattice spacing, and the self-energy does not vanish in the chiral limit. In the continuum limit, the symmetry is restored, and the masses vanish, in accord with Goldstone's theorem. \section{\label{sec:decay}Next-to-leading order corrections to decay constants} As in continuum and ordinary SChPT, the decay constants are defined by matrix elements of the axial currents, \begin{align} -i f_{xy,\,t}\, p_\mu = \bra{0}\, j^{\mu5}_{xy,\,t}\, \ket{\phi^t_{xy}(p)} \,. \end{align} The NLO corrections arise from the same types of diagrams that appear in continuum and unmixed SChPT. We have one-loop wave function renormalization contributions [graphs (a), (c), and (d) of Fig.~\ref{fig:qflowdiag}], one-loop graphs from insertions of the $\mc{O}(\phi^3)$-terms of the LO current [graphs (g), (h), and (i) of Fig.~\ref{fig:qflowdiag}], and terms analytic in the quark masses and squared lattice spacing, from the NLO Lagrangian~\cite{Aubin:2003uc}.
As for the NLO analytic corrections to the masses, the NLO analytic corrections to the decay constants have the same form as in the unmixed theory, with distinct LECs for the valence-valence, sea-sea, and valence-sea sectors. Turning to the one-loop corrections, we note that the LO current is determined by the kinetic energy vertices of the LO Lagrangian; these vertices are the same in mixed-action and unmixed SChPT. Therefore, the LO current in the mixed-action case is the same as the LO current in unmixed SChPT. Likewise, the NLO wave function renormalization corrections are determined by self-energy contributions from tadpoles with kinetic energy vertices from the LO Lagrangian. Moreover, nothing in the calculation of the relevant part of the self-energies or the current-vertex loops is sensitive to the sector of the external quarks. Therefore, to generalize the one-loop graphs of the unmixed case, we have only to replace the propagators with those of the mixed-action theory. The results hold for all sectors of the mixed-action theory (valence-valence, sea-sea, and valence-sea). We have \begin{align} &\frac{f^\text{NLO loop}_{xy,\,t}}{f} = 1 - \frac{1}{8(4\pi f)^2}\sum_c\ \times \label{eq:decayfin} \\ &\left[\frac{1}{4}\sum_{\val{i}^\prime\sea{i}^\prime} \ell(m^2_{\val{i}^\prime\sea{i}^\prime,\,c}) + \int\frac{d^4q}{\pi^2}(D^c_{xx}+D^c_{yy}-2\theta^{ct}D^c_{xy})\right]\,.\nonumber \end{align} The form of this result is the same as that in the unmixed theory~\cite{Bailey:2012jy}, and the reduction in the unmixed case is immediate. \section{\label{sec:sum}Conclusion} In mixed-action SChPT, we have calculated the NLO loop corrections to the masses and decay constants of pions and kaons in all taste irreps. We have cross-checked all results by performing two independent calculations and verifying the results reduce correctly when valence and sea quark actions are the same. 
In the valence-valence sector, the taste pseudoscalars are Goldstone bosons in the chiral limit, at non-zero lattice spacing, as in ordinary, unmixed SChPT. The NLO analytic corrections arise from tree-level contributions of the (NLO) Gasser-Leutwyler and generalized Sharpe-Van de Water Lagrangians. They have the same form as in the unmixed case, with independent LECs in the valence-valence, sea-sea, and valence-sea sectors. The NLO loop corrections to the self-energies of the valence-valence pions and kaons are given in Eq.~\eqref{eq:Sigvv}; those for the valence-sea pions and kaons are given in Eqs.~\eqref{eq:Sigvs-con} and \eqref{eq:Sigvs-disc}; and those for the decay constants are given in Eq.~\eqref{eq:decayfin}. As given above, the results for the decay constants and valence-valence masses have the same form as the results in ordinary, unmixed SChPT; the results for the valence-sea masses have additional corrections that vanish in the ordinary, unmixed case. The loop integrals are the same as those in the unmixed theory. \acknowledgements We thank Claude Bernard for sharing his unpublished notes on mixed-action staggered chiral perturbation theory. The research of W.~Lee was supported by the Creative Research Initiatives Program (No.~20160004939) of the National Research Foundation of Korea (NRF) funded by the Korean government (MEST). J.A.B. is supported by the Basic Science Research Program of the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No.~2015024974). \section*{Appendix} Here we present a derivation of the taste-singlet potential in Eq.~\eqref{eq:pot}. The analysis is the same as for the ordinary, unmixed case, except that the spurion fields carry factors of the projection operators $P_{v,\sigma}$. Consider the bilinears in Eq.~\eqref{eq:genop}. 
Noting that the staggered U(1)$_{\epsilon}$ symmetry implies that $\{\gamma_s\otimes\xi_t , \gamma_5\otimes\xi_5\} = 0$, we see that taste-singlet bilinears, for which $\xi_t=\xi_I=I$, must have vector or axial spin structure, $\gamma_s = \gamma_{\mu},i\gamma_{\mu}\gamma_5$. The taste structure of the associated four-fermion operators may be written~\cite{Lee:1999zxa} \begin{align} \pm \sum_{\mu}\left[\bar{\psi}_R(\gamma_{\mu}\otimes F_R)\psi_R \pm \bar{\psi}_L(\gamma_{\mu}\otimes F_L)\psi_L\right]^2\,, \end{align} where the positive (negative) signs apply for vector (axial) spin, and the spurion fields $F_X\rightarrow X F_X X^\dagger$ for $X=L,R\in$ SU(3) ensure that the operators are invariant under SU(3)$_L\times$SU(3)$_R$ transformations. Enumerating all chiral singlets that are quadratic in the spurions and invariant under parity, there exists only a single nontrivial operator~\cite{Lee:1999zxa}, \begin{align} \mr{Tr}(F_L\Sigma F_R\Sigma^\dagger)\,. \end{align} For the unmixed theory, setting $F_L=F_R=I$ for the taste singlet operators yields only a trivial operator. But in the mixed case, we have $F_{L,R}=P_{v,\sigma}I$, and there are four nontrivial operators invariant under the chiral symmetry~\cite{CB:notes}: \begin{align} \mr{Tr}(P_v\Sigma P_v \Sigma^\dagger),\ &\mr{Tr}(P_v\Sigma P_{\sigma}\Sigma^\dagger),\ \\ \mr{Tr}(P_\sigma\Sigma P_v\Sigma^\dagger),\ &\mr{Tr}(P_{\sigma}\Sigma P_{\sigma}\Sigma^\dagger)\,.\nonumber \end{align} Introducing LECs, adding the results, and demanding parity invariance gives~\cite{CB:notes} \begin{align} C^{vv}_0 \, \mr{Tr}(P_v\Sigma P_v\Sigma^\dagger) + C^{\sigma\sigma}_0\,\mr{Tr}(P_{\sigma}\Sigma P_{\sigma}\Sigma^\dagger)\\ +\ C^{v\sigma}_0\left[\mr{Tr}(P_v\Sigma P_{\sigma}\Sigma^\dagger) + \mr{Tr}(P_{\sigma}\Sigma P_v\Sigma^\dagger)\right]\,,\nonumber \end{align} where the equality of the coefficients of the last two operators follows from parity.
Noting $P_v + P_{\sigma} = 1$ (the identity in flavor space), defining $\tau_3 = P_\sigma - P_v$, eliminating $P_{v,\sigma}$ in favor of $\tau_3$ and $1$, and collecting nontrivial operators, we have \begin{align} C_\mr{mix}\,\mr{Tr}(\tau_3\Sigma\tau_3\Sigma^\dagger)\,, \end{align} where $C_\mr{mix}\equiv \tfrac{1}{4}(C^{vv}_0 + C^{\sigma\sigma}_0 - 2 C^{v\sigma}_0)$. In the unmixed case, $C^{vv}_0 = C^{\sigma\sigma}_0 = C^{v\sigma}_0$, $C_\mr{mix}=0$, and we recover the correct (trivial) result.
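To make the last step explicit (the intermediate algebra below is our own bookkeeping, not taken from Ref.~\cite{CB:notes}), write $P_v = \tfrac{1}{2}(1-\tau_3)$ and $P_\sigma = \tfrac{1}{2}(1+\tau_3)$. Using $\Sigma\Sigma^\dagger = 1$ and the cyclicity of the trace, each of the four operators reduces to a field-independent constant plus a multiple of $\mr{Tr}(\tau_3\Sigma\tau_3\Sigma^\dagger)$:
\begin{align}
\mr{Tr}(P_v\Sigma P_v\Sigma^\dagger) &= \tfrac{1}{4}\,\mr{Tr}(\tau_3\Sigma\tau_3\Sigma^\dagger) + \mr{const}\,,\\
\mr{Tr}(P_\sigma\Sigma P_\sigma\Sigma^\dagger) &= \tfrac{1}{4}\,\mr{Tr}(\tau_3\Sigma\tau_3\Sigma^\dagger) + \mr{const}\,,\nonumber\\
\mr{Tr}(P_v\Sigma P_\sigma\Sigma^\dagger) + \mr{Tr}(P_\sigma\Sigma P_v\Sigma^\dagger) &= -\tfrac{1}{2}\,\mr{Tr}(\tau_3\Sigma\tau_3\Sigma^\dagger) + \mr{const}\,,\nonumber
\end{align}
so the nontrivial combination enters with coefficient $\tfrac{1}{4}(C^{vv}_0 + C^{\sigma\sigma}_0 - 2C^{v\sigma}_0) = C_\mr{mix}$, as claimed.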
\section{Proof of Theorem~\ref{th:1}} \label{th: Proof of Th1} In the derivation of the large system analysis, we use well-known lemmas including the trace lemma~\cite[Lemma 2.6]{bai1998no},\cite[Theorem 3.4]{RMT} along with the rank-1 perturbation lemma~\cite[Lemma 2.6]{silverstein1995empirical},\cite[Theorem 3.9]{RMT}. The former shows the asymptotic convergence $\textbf{x}\herm \textbf{A} \textbf{x}-\frac{1}{N}\text{Tr}{\bf{A}}\rightarrow0$ when ${\bf x}\in \mathbb{C}^N$ has i.i.d. entries with zero mean and variance $\frac{1}{N}$, and is independent of $\mathbf{A}$. The latter states that the addition of a rank-1 matrix ${\bf {xx}}\herm$ to the random Gram matrix ${\bf{X}} {\bf X}\herm$ does not affect the trace term $\frac{1}{N}\text{Tr}\big({\bf X} {\bf X}\herm+{\bf I}_N\big)^{-1}$ in the large dimensional limit. The formal presentation of these lemmas is given in~\cite{bai1998no,silverstein1995empirical}. Starting from the amplitude projection in~\eqref{eq:amp project}, we apply the trace lemma along with the rank-1 perturbation lemma to get \begin{equation}\label{eq:proof Th1 eq 1} \begin{aligned} \omega_{k,i}-\frac{1}{N}\mathrm{tr}( {\bf \Theta}_{k}\boldsymbol{\Sigma} \vec{S}_i \vec{S}_i \herm \boldsymbol{\Sigma} )\xrightarrow{N\rightarrow \infty} 0 \\ \end{aligned} \end{equation} almost surely, where $\boldsymbol{\Sigma}=\big(\sum_{j }{{p}_{j}}\mathbf{ h}_{j}\mathbf{ h}_{j}\herm +\mathbf{I}_N\big)^{-1}$. From matrix identities~\cite{Horn-Johnson-90}, we know that $ {\partial \mathbf{Y}^{-1}}/{\partial x}=- \mathbf{Y}^{-1} ( {\partial \mathbf{Y}}/{\partial x} ) \mathbf{Y}^{-1} $ with $\bf Y$ being a matrix depending on the variable $x$.
Thus, the above trace term can be written equivalently as \begin{equation}\label{eq:proof Th1 eq 2} \frac{1}{N}\mathrm{tr}( {\bf \Theta}_{k}\boldsymbol{\Sigma} \vec{S}_i \vec{S}_i \herm \boldsymbol{\Sigma} )=\frac{\partial}{\partial x} m_{k,i}(z,x)|_{x=0,z=-1} \end{equation} where \begin{equation} m_{k,i}(z,x)=\frac{1}{N} \mathrm{tr}\big(\boldsymbol{\Theta}_{k} \big(\sum_{j }{{p}_{j}}\mathbf{ h}_{j}\mathbf{ h}_{j}\herm -z\mathbf{I}_N-x \vec{S}_i \vec{S}_i \herm \big)^{-1} \big). \end{equation} The term $m_{k,i}(z,x)$ is the Stieltjes transform of a measure. It is shown in~\cite[Theorem 1]{wagner2012large} that, under Assumptions~\ref{as:0}-\ref{as:2} and for $z \in \mathbb{C} \backslash \mathbb{R}^+$, $x\in \mathbb{R}^-$, these Stieltjes transforms admit deterministic equivalents such that \begin{equation}\label{eq:proof Th1 eq 3} m_{k,i }(z,x)-\bar{m}_{k,i }(z,x)\xrightarrow{N\rightarrow \infty} 0 \end{equation} almost surely, where the deterministic equivalents $\bar{m}_{k,i }(z,x)$ are given as the solutions of the following fixed-point equations \begin{equation}\label{Proof equivalent ST form} \bar{m}_{k,i}(z,x)=\frac{1}{N} \!{\mathrm{tr} }\big(\boldsymbol{\Theta}_k \bf{T}_i(z,x)\big), \forall k\in\mathcal{U}, i\in\{1,...,S\} \end{equation} where \begin{equation} \bf{T}_i(z,x)=\biggl(\frac{1}{N}\sum\limits_{j=1}^K \frac{p_j \boldsymbol{\Theta}_j}{1+p_j\bar{m}_{j,i}(z,x)}-x\vec{S}_i \vec{S}_i \herm-z\mathbf{I}_N\biggr)^{-1}. \end{equation} As a result, from~\eqref{eq:proof Th1 eq 1},~\eqref{eq:proof Th1 eq 2}, and~\eqref{eq:proof Th1 eq 3}, we get \begin{equation} \omega_{k,i}- \bar m^{\prime}_{k,i}\xrightarrow{N\rightarrow \infty} 0 \end{equation} where $\bar m^{\prime}_{k,i}= \bar m^{\prime}_{k,i}(z,x)|_{x=0,z=-1}$ with $\bar m^{\prime}_{k,i}(z,x)\triangleq\frac{\partial}{\partial x} \bar m_{k,i}(z,x)$.
The values of $\bar m^{\prime}_{k,i}$ can be evaluated by taking the derivative of $\bar{m}_{k,i}(z,x)$ in~\eqref{Proof equivalent ST form}, and evaluating the derivative at the point $({x=0,z=-1})$. In doing so, we get \begin{equation} \bar{m}^{\prime}_{k,i}=\frac{1}{N} \!{\mathrm{tr} }\big(\boldsymbol{\Theta}_k \bf{T}^{\prime}_i\big), \forall k\in\mathcal{U}, i\in\{1,...,S\} \end{equation} where $\bf{T}^{\prime}_i=\bf{T}^{\prime}_i(z,x)|_{x=0,z=-1}$, or equivalently \begin{equation}\label{eq:Tbkl prime} \mathbf{T}_{i}^{\prime} = \mathbf{T}\bigg(\frac{1}{N} \sum_{j \in \mathcal{U}} \frac{{p}_j^2 \boldsymbol{\Theta}_{j}\bar{m}_{j,i}^{\prime}}{(1+{ p}_j\bar{ m}_{j})^2} +\vec{S}_i \vec{S}_i \herm \bigg) \mathbf{T} \end{equation} where $\mathbf{T}=\mathbf{T}_{i}(-1,0)$, and $\bar{ m}_{j}=\bar{ m}_{j,i}(-1,0)$. Since $\bar{m}_{k,i}'=\frac{1}{N} \text{Tr} (\boldsymbol{\Theta}_{k} \mathbf{T}_{i}^{\prime})$ with $\mathbf{T}_{i}^{\prime}$ given by~\eqref{eq:Tbkl prime}, we get a system of equations to evaluate $\bar{m}_{k,i}'$ as $[\bar{m}_{1,i}^{\prime},...,\bar{m}_{K,i}^{\prime}]=(\mathbf{I}_K-\mathbf{L})^{-1} {\vec b}_{i}$ with ${\vec b}_{i}$ and $\mathbf{L}$ defined as in~\eqref{eq:Th1 eq2} and \eqref{eq:Th1 eq1}, respectively, which completes the proof of the theorem. \section{Introduction} High spatial utilization is a promising approach to meet the significant spectral efficiency enhancements required for 5G cellular networks. In general, this is achieved by using a large number of antennas $N$ at the base stations (BSs) to serve a large number of user equipments (UEs) $K$ on the same frequency-time resources. However, such large dimensions of the channel matrices pose challenges in terms of computational complexity and hardware cost. A promising solution to these problems lies in the concept of two-stage beamforming (TSB), which concatenates an outer-beamformer (analog/digital) with an inner beamformer/receiver.
Joint spatial division and multiplexing (JSDM) is introduced in~\cite{UserPartionAdhikary} for a downlink scenario wherein a statistical OBF matrix creates multiple virtual sectors. Exploiting the similarity among covariance matrices of co-located UEs, the authors in~\cite{UserPartionAdhikary} propose to group UEs based on their statistical properties. Then, the OBF matrix is designed based on the eigenvectors of group-specific covariance matrices. The performance of such a system depends on group formation and cross-sector interference management~\cite{XuJSDM-Grouping14,NamJSDM-Grouping14}. The authors in~\cite{AnttiGesbert16,AyshvarAntti18} study JSDM-based TSB in a downlink system to maximize the weighted sum-rate. It is observed that the reduced spatial dimensions result in significant inter-sector interference leakage as the number of UEs $K$ increases. This issue is addressed in~\cite{AyshvarAntti18} by coordination of interference among sectors. Similar performance degradation appears in the equivalent uplink problem, where the work in~\cite{TakahashiAntti19} mitigates the effects of inter-group interference using a layered belief propagation detector. In this paper, we consider the uplink of a single-cell system wherein $K$ single-antenna UEs communicate with a BS equipped with $N$ antennas. In this case, it is well known that the linear minimum mean square error (MMSE) receiver attains the maximum signal-to-interference-plus-noise ratios (SINRs)~\cite{tse_viswanath_2005}. Motivated by this observation, a novel JSDM-based TSB method is proposed that adjusts the dimensions of UE-specific OBF matrices based on the projection of the MMSE vectors into the beam domain. To this end, the angular domain is divided into $S$ fixed narrow sectors such that each sector contains $D\ll N$ DFT beams.
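As a concrete illustration of this fixed sectorization, the following sketch (our own construction, not code from the paper) builds the unitary $N\times N$ DFT matrix and partitions its columns into $S$ sectors of $D$ beams each:

```python
import numpy as np

N, S = 225, 45   # antennas and sectors, matching the paper's simulation setup
D = N // S       # beams per sector

# Unitary DFT matrix; its columns u_1, ..., u_N serve as the fixed beams.
U = np.fft.fft(np.eye(N), norm="ortho")

# Sector i collects D consecutive DFT beams into an N x D matrix S_i.
sectors = [U[:, i * D:(i + 1) * D] for i in range(S)]

# Sanity checks: U is unitary, and each sector has orthonormal columns.
assert np.allclose(U.conj().T @ U, np.eye(N))
assert all(np.allclose(Si.conj().T @ Si, np.eye(D)) for Si in sectors)
```

Because the sectors are disjoint column blocks of a unitary matrix, they tile $\mathbb{C}^N$, which is what allows the later projection-based selection to be interpreted as an energy split of the MMSE vector.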
Then, so-called deterministic equivalents~\cite{RMT} are computed for the amplitude projection of the MMSE vectors (AP-MMSE) into each sector via asymptotic analysis in a regime where $N$, $K$ and $D$ grow large with the non-trivial ratios $N/K=c$ and $N/D=S$. The deterministic equivalents provide tight approximations for the considered metrics in finite-dimensional problems while depending only on the statistical CSI~\cite{RMT}. The OBF matrix of each UE is obtained by concatenating the sectors whose AP-MMSE values are largest. As a result, the OBF matrices adopt the MMSE strategy, and adjust the direction and the number of sectors for each UE based on the level of multiple-access interference while relying solely on statistical CSI. The inner-receiver for a UE $k$ is designed, as in the conventional TSB methods, based on the resulting reduced-dimensional channel matrix of size $D_k\times K$. The numerical analysis shows that the attained per-UE rates closely follow the rate of the optimal MMSE receiver. Also, it is observed that the dimension $D_k$ depends on the angular position of UE $k$, the system load, the UEs' angular spread, the UEs' powers, and the desired bound on performance degradation. \section{Problem Statement} \label{sec:System model} \subsection{System Model} We consider the uplink of a single-cell multi-user large-scale MIMO system, where a single base station (BS) with $N$ antenna elements serves $K<N$ single-antenna user terminals (UEs). Under this convention and assuming narrow-band transmission, we define ${\bf h}_{k} \in \mathbb{C}^{N}$ as the channel between the BS and UE $k$. Then, the received signal of UE $k$ at the BS can be expressed as \begin{equation} \textstyle \ y_{k} = {\bf w}_{k}\herm {\bf h}_{k} x_{k} +\sum_{i\setminus k}{\bf w}_{k}\herm{\bf h}_{i}x_{i} +{\bf w}_{k}\herm {\bf n}\ \end{equation} where the first term is the desired signal and the second term represents intra-cell interference. The vector ${\bf w}_{k}\in \cc{N}$ denotes the receiver vector of UE $k$.
The zero-mean, unit-variance data symbol intended to UE $k$ is denoted by $x_{k}$, and is assumed to be independent across UEs. Zero-mean white Gaussian noise at the receiver is denoted by ${\bf n}\sim \mathcal {CN}(0,\sigma^{2}\eye{N})$. The MMSE receiver for a UE $k$ is given as $\mathbf{w}^{\star}_{k}=(\sum_{j\setminus k}{p_{j}}\mathbf{ h}_{j}\mathbf{ h}_{j}\herm + \sigma^2\mathbf{I}_{N})^{-1}\mathbf{ h}_{k}$ with $p_j$ being the transmit power of UE $j$. The superscript $()^{\star}$ indicates the optimality of the MMSE receiver. Nevertheless, the implementation of the large-scale MMSE receiver is not feasible with large antenna arrays due to computational complexity constraints. Given a limited angular spread\footnote{In a typical cellular configuration with a tower-mounted BS and no significant local scattering, the propagation between the BS antennas and any given UE is expected to be characterized by the local scattering around the UE. This results in the UE's signal arriving at the BS within a limited angular spread.}, it is possible to perform receive processing in a space of dimension smaller than $N$. To this end, we first need to introduce a statistical model for the channel vectors. \subsection{Channel Model} \label{sec:Ch model} {The channel from the BS to UE $k$ is modelled as ${\bf h}_{k} = {\bf \Theta}_{k}^{1/2}{\bf z}_{k}$ where ${\bf z}_{k}\in \mathbb {C}^{N}$ represents small-scale fading and has i.i.d., zero-mean, unit-variance complex entries.} The matrix ${\bf \Theta}_{k}\in \mathbb {C}^{N\times N}$ accounts for the UE-specific channel correlation at the BS. {The pathloss due to large-scale fading is implicitly considered in the correlation matrix unless otherwise stated. In the latter case, pathloss values are explicitly declared by expressing the correlation matrix as $a^2_{k}{\bf \Theta}_{k}$ where $a^2_{k}$ accounts for the pathloss from the BS to UE $k$.
} \subsection{Beamformer design} \label{sec:Beamformer design} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Sectors.pdf} \caption{Illustration of the sectorized angular domain.} \label{fig:sec} \end{figure}% The beamforming vector of a UE $k$ is presented as $\vec{w}_k=\vec{B}_k\vec{v}_k$, where we design the outer-beamformer (OBF) $\vec{B}_k\in\cc{N}{D_k}$ based on statistical CSI in order to decrease the complexity of the inner-receiver $\vec{v}_k \in\cc{D_k}$. Here, $\{D_k\}_{ \forall k}$ are UE-dependent design parameters, which trade off performance against the complexity of obtaining the inner-receivers. To do so, we divide the beam domain into $S$ fixed narrow sectors $\set{\bf{S}_i}{i}{S}$ as shown in Fig.~\ref{fig:sec}. Let $\bf U$ be the $N\times N$ unitary matrix whose columns are the DFT vectors/beams $\set{\bf{u}_j}{j}{N}$. Each sector contains $D$ DFT beams, i.e., $\bf{S}_i=\{\bf{u}_j,\ j\in\{(i-1)D+1,...,iD\}\}$. Since the MMSE vector is the optimal receiver for the considered system model, we propose to form the OBFs based on the projection of the optimal MMSE vectors into each sector. The projection of the MMSE vector of UE $k$ into a sector $\vec{S}_i$ is given as $\vec{S}_i\herm \vec{w}^{\star}_k$, and thus, the normalized squared norm of this projection, denoted by $\omega_{k,i}$, is given as \begin{equation} \label{eq:amp project} \omega_{k,i}=\frac{1}{N}\mathbf{ h}_{k} \herm \boldsymbol{\Sigma}_{k} \vec{S}_i \vec{S}_i \herm \boldsymbol{\Sigma}_{k} \mathbf{ h}_{k} \end{equation} where $\boldsymbol{\Sigma}_{k}=\big(\sum_{j \setminus k}{{p}_{j}}\mathbf{ h}_{j}\mathbf{ h}_{j}\herm +\mathbf{I}_N\big)^{-1}$. Given approximations for the $\omega_{k,i}$ values, the OBF for UE $k$ can be attained by selecting the sectors that have the largest projection norms. This ensures that the inner-receiver has enough information to yield SINR values close to the optimal MMSE ones.
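For intuition, the projection norms in Eq.~\eqref{eq:amp project} can be evaluated directly on synthetic channels. The sketch below is our own toy example (small dimensions and i.i.d. Rayleigh fading rather than the correlated model used later); it forms $\boldsymbol{\Sigma}_k$ and computes $\omega_{k,i}$ for every sector. Since the sectors tile $\mathbb{C}^N$, the per-UE projections sum to $\|\boldsymbol{\Sigma}_k\mathbf{h}_k\|^2/N$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, S = 32, 8, 8        # toy dimensions, not the paper's simulation values
D = N // S
p = np.ones(K)            # unit transmit powers, noise variance set to one

U = np.fft.fft(np.eye(N), norm="ortho")
sectors = [U[:, i * D:(i + 1) * D] for i in range(S)]

# i.i.d. Rayleigh channels; the paper instead draws h_k = Theta_k^{1/2} z_k.
H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)

omega = np.zeros((K, S))
for k in range(K):
    # Sigma_k excludes UE k's own channel, as in the definition of omega_{k,i}.
    G = sum(p[j] * np.outer(H[:, j], H[:, j].conj()) for j in range(K) if j != k)
    Sigma_k = np.linalg.inv(G + np.eye(N))
    w_mmse = Sigma_k @ H[:, k]            # (unnormalized) MMSE receiver of UE k
    for i, Si in enumerate(sectors):
        v = Si.conj().T @ w_mmse          # projection into sector i
        omega[k, i] = np.real(v.conj() @ v) / N
    # The sectors tile C^N, so the omegas sum to ||Sigma_k h_k||^2 / N.
    assert np.isclose(omega[k].sum(), np.real(w_mmse.conj() @ w_mmse) / N)
```

Sorting `omega[k]` then mimics the sector selection discussed above; the large-system analysis replaces this channel-realization computation with statistics-only approximations.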
Under this convention, the SINR of a UE is attained via the inner-receiver by processing signals received within a vector space of size $D_k<N$. In deriving the approximations for the $\omega_{k,i}$ values, we use results from random matrix theory that allow approximating functionals of random matrices by deterministic quantities~\cite{RMT}. These quantities depend only on the underlying statistical properties, and yield precise approximations for practical problems of finite dimensions. The result of this analysis is presented in the following section. \section{Large system analysis} \label{sec: Large sys} In deriving the large system analysis, the following assumptions (widely used in the literature) are made to properly define the growth rate of the system dimensions. \begin{assumption}\label{as:0} As $N\to \infty$, $\!0 < \! \frac{N}{K} < \infty$, and $0 < \! \frac{N}{S} < \infty$. \end{assumption} \begin{assumption} \label{as:2} The spectral norm of ${\bf \Theta}_{k}$ is uniformly bounded as $N\to\infty$, i.e., $ \! \lim \sup_{{N \to \infty}}$ $ \!\!\!\max_{\forall k} \{\left\|{\bf \Theta}_{k}\right\|\}\!\! < \infty.$ \end{assumption} In order to ensure that the total power in the system does not grow unbounded as the number of UEs grows large, we normalize the UEs' powers by the number of antennas $N$. Also, without loss of generality, the Gaussian noise variance is assumed to be one. Following the same approach as in~\cite{wagner2012large}, deterministic equivalents for the $\omega_{k,i}$ terms can be derived in terms of statistical CSI. The results are summarized in the following theorem.
\begin{theorem}\label{th:1} Under Assumptions \ref{as:0}-\ref{as:2}, the following holds almost surely \begin{equation} \omega_{k,i}-\bar{ \omega}_{k,i} \rightarrow 0 \end{equation} where the values of $\bar{ \omega}_{k,i}$ can be evaluated as \begin{equation}\label{eq:Th1 eq0} [\bar{\omega}_{1,i},...,\bar{\omega}_{K,i}]=(\mathbf{I}_K-\mathbf{L})^{-1} {\vec b}_{i},\,\forall i\in\{1,...,S\} \end{equation} where \begin{equation}\label{eq:Th1 eq1} \left[\mathbf{L}\right]_{i,j}= \frac{1}{N^2} \frac{ {\mathrm{tr}}\left(\boldsymbol{\Theta}_{i} \mathbf{T}\boldsymbol{\Theta}_{j}\mathbf{T}\right) }{(1/\bar p_j+ \bar m_{j})^2}, \end{equation} and \begin{equation}\label{eq:Th1 eq2} \begin{aligned} {\vec b}_{i}=\!\!\left[\frac{1}{N} {\mathrm{tr}} \left(\boldsymbol{\Theta}_{1} \mathbf{T}\vec{S}_i \vec{S}_i \herm \mathbf{T}\right),\ldots,\frac{1}{N} {\mathrm{tr}}\left(\boldsymbol{\Theta}_{K} \mathbf{T}\vec{S}_i \vec{S}_i \herm \mathbf{T}\right)\right] \end{aligned} \end{equation} with ${\bf{T}}$ given by \begin{align}\label{eq:Th1 eq3} {\bf{T}} = \left(\frac{1}{N}\sum\limits_{j\in \mathcal U}\frac{ \bar p_j{\bf{\Theta}}_{j}}{1+ \bar p_j\bar m_{j}} + {\bf I}_N\right)^{-1}\!\!\!, \end{align} and $\bar m_{j},\forall j\in \mathcal{U}$ are given as the fixed-point solution of $\bar m_{j}=\frac{1}{N}\mathrm{tr}(\boldsymbol{\Theta}_{j}{\bf{T}}),\forall j\in \mathcal{U}$. \end{theorem} \begin{IEEEproof} The proof is given in Appendix~\ref{th: Proof of Th1}.\end{IEEEproof} The results of the theorem yield approximations for the $\omega_{k,i},k\in\mathcal{U},i\in\{1,...,S\}$ values in the finite regime. The results are utilized in the following to propose algorithms for obtaining the OBF matrices.
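Under our reading of Theorem~\ref{th:1}, the deterministic equivalents require only a plain fixed-point iteration for the $\bar m_j$ followed by one $K\times K$ linear solve per sector. The sketch below is our own illustration with toy dimensions and randomly generated correlation matrices (all variable names are ours); note that we carry the explicit $1/N^2$ normalization in $\mathbf{L}$ that results from substituting~\eqref{eq:Tbkl prime} into $\bar m'_{k,i}=\frac{1}{N}\mathrm{tr}(\boldsymbol{\Theta}_k\mathbf{T}'_i)$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, S = 32, 8, 8
D = N // S
p = np.ones(K)                              # normalized powers p_bar

U = np.fft.fft(np.eye(N), norm="ortho")
sectors = [U[:, i * D:(i + 1) * D] for i in range(S)]

# Toy Hermitian PSD correlation matrices, normalized so tr(Theta_k)/N = 1.
Theta = []
for _ in range(K):
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    T0 = A @ A.conj().T
    Theta.append(N * T0 / np.trace(T0).real)

# Fixed-point iteration for m_bar_j = (1/N) tr(Theta_j T), with T from the theorem.
m = np.ones(K)
for _ in range(500):
    T = np.linalg.inv(sum(p[j] * Theta[j] / (1 + p[j] * m[j])
                          for j in range(K)) / N + np.eye(N))
    m_new = np.array([np.trace(Theta[j] @ T).real / N for j in range(K)])
    if np.max(np.abs(m_new - m)) < 1e-12:
        m = m_new
        break
    m = m_new

# K x K system: [omega_bar_{1,i}, ..., omega_bar_{K,i}] = (I - L)^{-1} b_i.
L = np.array([[np.trace(Theta[a] @ T @ Theta[b] @ T).real
               / (N ** 2 * (1.0 / p[b] + m[b]) ** 2)
               for b in range(K)] for a in range(K)])
Si = sectors[0]                             # deterministic equivalents for sector 1
b = np.array([np.trace(Theta[k] @ T @ Si @ Si.conj().T @ T).real / N
              for k in range(K)])
omega_bar = np.linalg.solve(np.eye(K) - L, b)
```

As a quick consistency check, summing the $b$ vectors over all sectors recovers $\frac{1}{N}\mathrm{tr}(\boldsymbol{\Theta}_k\mathbf{T}^2)$, since $\sum_i \vec{S}_i\vec{S}_i\herm=\mathbf{I}_N$.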
\section{Algorithms for designing two-stage beamformers} \label{sec:alg for TSB} Given approximations for the $\omega_{k,i}$ values, the OBF for UE $k$ can be attained by selecting the sectors that have the largest projection norms, i.e., $\bf{B}_k=\{\bf{S}_i\}_{i\in\mathcal{B}_k}$ where $\mathcal{B}_k$ holds the indices of the selected sectors. The received signal for UE $k$ after applying the OBF is given as \begin{equation} \ \bf{B}_k\herm\bf{y} =\bf{B}_k\herm {\bf H}\vec{x} +\bf{B}_k\herm {\bf n}\ \end{equation} where ${\bf H}=[\vec{h}_1,...,\vec{h}_K]$, and $\vec{x}=[x_1,...,x_K]\tran$. The inner-receiver of UE $k$ applies an MMSE vector based on the $D_k\times K$ equivalent channel given by $\bf{B}_k\herm {\bf H}$. Since we have $D_k=|\mathcal{B}_k|\times D$, the complexity of the inner-receiver is determined by the cardinality of the set $\mathcal{B}_k$. For a given UE $k$, we propose to select the sectors whose $\bar \omega_{k,i}$ values are larger than $\delta \max(\bar \omega_{k,1},...,\bar \omega_{k,S})$. The parameter $0\leq \delta\leq 1$ trades off complexity against performance. Larger $\delta$ values yield smaller $D_k$ values but also degrade the performance. These steps are summarized in Algorithm~\ref{alg:1}. \vspace{0.2cm} \begin{algorithm} [H] \caption{Two-stage beamforming algorithm.} \label{alg:1} \begin{algorithmic}[1] \LOOP \IF{Any change in the UEs' statistics or during the initial stage} \STATE Obtain $\bar \omega_{k,i}$ values from~\eqref{eq:Th1 eq0}. \STATE Obtain OBFs as $\bf{B}_k=\{\bf{S}_i\}_{i\in\mathcal{B}_k},\forall k\in \mathcal{K}$ where $\mathcal{B}_k=\{j|\bar \omega_{k,j}\geq \delta \max(\bar \omega_{k,1},...,\bar \omega_{k,S})\}$. \ENDIF \STATE Obtain inner-receivers as $\mathbf{v}_{k}=(\sum_{j\setminus k}{p_{j}}\bf{B}_k\herm\mathbf{ h}_{j}\mathbf{ h}_{j}\herm \bf{B}_k+ \sigma^2\mathbf{I}_{D_k})^{-1}\bf{B}_k\herm \mathbf{ h}_{k}$.
\ENDLOOP \end{algorithmic} \end{algorithm} Concerning the complexity analysis, we notice that the evaluation of the inner-receiver $\mathbf{v}_{k}$ involves a matrix inversion of size $D_k\times D_k$ with a complexity on the order of $\mathcal{O}(D_k^3)$. Due to the limited angular spread of the UEs' signals, the $D_k$ values are expected to be much smaller than $N$. Concerning the calculation of the approximate $\bar \omega_{k,i}$ values, we notice that the $\bar \omega_{k,i}$ values in Step 3 of the algorithm are updated only when there are sufficient changes in the CSI statistics, which vary at a much slower rate than the fading CSI. The computation of the approximate $\bar \omega_{k,i}$ values requires the matrix inversion in $(\mathbf{I}_K-\mathbf{L})^{-1}$, and the evaluation of the $\{\bar m_k\}$ values. The complexity of evaluating the former is of order $\mathcal{O}(K^3)$. The latter is evaluated via a fixed-point iteration with a complexity of $\mathcal{O}( N^3)$ per iteration. \section{Numerical Analysis} \label{sec:Simulation Results} Monte Carlo simulations are now used to validate the performance of the proposed solution. By assuming a diffuse 2-D field of isotropic scatterers around the receiver~\cite{JakesMicrowavebook}, the correlation matrix for an antenna element spacing of $\Delta$ is given by \begin{equation}\label{eq:corr model} \left[{\bf{\Theta}}_{k}\right]_{j,i}=\frac{a_{k}^2}{\varphi_{k}^{\max}-\varphi_{k}^{\min}}\int_{\varphi_{k}^{\min}}^{\varphi_{k}^{\max}} \! e^{i\frac{2\pi}{w}\Delta (j-i)\cos(\varphi)} \, \mathrm{d}\varphi \end{equation} where the waves arrive with an angular spread $\Delta \varphi$ from $\varphi_{k}^{\min}$ to $\varphi_{k}^{\max}$. The wavelength is denoted by $w$, and the antenna element spacing is fixed to half the wavelength, $\Delta=w/2$. The UEs are distributed over a circle of radius 300\,m between the angular positions $\frac{\pi}{6}$ and $\frac{5\pi}{6}$. The angular separation between adjacent UEs is the same and equal to $\frac{2\pi}{3}\frac{1}{K}$.
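The entries of the correlation model in Eq.~\eqref{eq:corr model} are one-dimensional integrals that can be evaluated numerically. The sketch below is our own helper (a uniform-grid average stands in for the integral, and the spacing is expressed in wavelengths, $\Delta/w = 1/2$); it also checks the two properties the analysis relies on, Hermitian symmetry and positive semidefiniteness:

```python
import numpy as np

def corr_matrix(N, phi_min, phi_max, a2=1.0, spacing=0.5, n_grid=2000):
    """[Theta]_{j,i} = a^2/(phi_max-phi_min) * int exp(i*2*pi*spacing*(j-i)*cos(phi)) dphi."""
    phi = np.linspace(phi_min, phi_max, n_grid)
    idx = np.arange(N)
    diff = idx[:, None] - idx[None, :]                  # index difference (j - i)
    integrand = np.exp(1j * 2 * np.pi * spacing * diff[..., None] * np.cos(phi))
    return a2 * integrand.mean(axis=-1)                 # grid average = integral / (phi_max - phi_min)

# Example: UE at broadside with angular spread pi/10, as in the simulation setup.
Theta = corr_matrix(16, np.pi / 2 - np.pi / 20, np.pi / 2 + np.pi / 20)
assert np.allclose(Theta, Theta.conj().T)               # Hermitian
assert np.all(np.linalg.eigvalsh(Theta) > -1e-8)        # positive semidefinite
assert np.allclose(np.diag(Theta).real, 1.0)            # unit diagonal for a^2 = 1
```

Positive semidefiniteness follows because each grid point contributes a rank-1 term $\mathbf{a}(\varphi)\mathbf{a}(\varphi)\herm$ with a positive weight, so the average is itself PSD.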
The angular spread $\Delta \varphi$ is the same for all UEs and equal to $\pi/10$. Thus, increasing the number of UEs results in an increase in the overlap among the UEs' signal angles-of-arrival (AoAs). The number of antennas at the BS is fixed to $N=225$, and the angular domain is divided into $S=45$ sectors. The effect of pathloss and additive noise is captured in the received signal-to-noise ratio (SNR) at an antenna element of the BS. The SNR is denoted by $\rho$ in the following. In order to validate the large system analysis, Fig.~\ref{fig:Accur} shows the exact values of $\omega_{k,i}$ and the deterministic equivalents $\bar \omega_{k,i}$ for 5 selected UEs, and a given random realization of the small-scale fading. The number of UEs $K$ is equal to $135$. It can be seen that the deterministic equivalents $\bar \omega_{k,i}$ closely follow the exact values $\omega_{k,i}$. The OBF matrices in Algorithm~\ref{alg:1} are designed based on these accurate approximations. Thus, the spatial filtering is expected to reduce the dimensions of the inner-receivers with minimal performance degradation. \begin{figure}[t] \centering {\includegraphics[clip, trim=0cm 7cm 0cm 0cm, width=\columnwidth]{f4.pdf}} \caption{The values of $\bar \omega_{k,i}$ and $\omega_{k,i}$ for 5 selected UEs, $N=225$, $K=135$, $S=45$. } \label{fig:Accur} \end{figure} \begin{figure}[b] \centering \includegraphics[width=\linewidth]{f2.pdf} \caption{Rate and the number of beams per UE vs. $\delta$, $N=225$, $S=45$, $\rho=10$.} \label{fig:rate vs d1} \end{figure}% Fig.~\ref{fig:rate vs d1} illustrates the trade-off between complexity and performance in Algorithm~\ref{alg:1}. The upper and lower plots in the figure show the averaged number of beams allocated per UE and the attainable rates in b/s/Hz/UE, respectively, versus the $\delta$ values. The results are presented for the cases with the number of UEs equal to 135 and 225. The attainable rate using Algorithm~\ref{alg:1} is labeled OBF-MMSE in the figure.
Also, the rates of the optimal MMSE receiver and of matched filtering are presented as benchmarks. As can be seen from the figure, the number of beams per UE decreases as the value of $\delta$ increases. The parameter $\delta$ adjusts the number of beams that are passed to the inner-receiver. A higher value of $\delta$ neglects more beams with small AP-MMSE values. Setting $\delta=0.1$, for example, cuts off the sectors whose AP-MMSE values are less than one-tenth of the maximum value. It can be seen that at the point $\delta=0.1$, the gap to the optimum rate is small. Also, the number of beams per UE is near $N/4$. Thus, at $\delta=0.1$ a proper trade-off between performance and complexity is achieved. In Fig.~\ref{fig:rate vs KN}, the attainable rate in b/s/Hz/UE, along with the corresponding average number of beams per UE in Algorithm~\ref{alg:1}, is plotted versus the load $K/N$. The results are presented for the cases with $\delta$ equal to 0.01 and 0.1. Interestingly, the gap to the optimal rate is almost fixed for a given value of $\delta$ over the whole range of the load $K/N$. Larger values of $\delta$ yield a larger gap. It can be seen that the number of beams per UE increases as the load of the system increases. This is due to the fact that a larger load results in stronger multiple-access interference. Thus, in order to keep the performance degradation within a given limit, a larger number of degrees of freedom is needed in the inner-receivers to mitigate the interference. In an alternative presentation, the number of beams allocated to each UE is plotted versus the UEs' angular positions in Fig.~\ref{fig:num beam vs ang}. The value of the parameter $\delta$ is fixed to $0.1$, and the results are plotted for various numbers of UEs. As mentioned earlier, a higher load generally requires a larger number of beams to keep the performance degradation within a certain limit. 
The other observation is that the number of allocated beams is larger for UEs residing in front of the antenna array, while UEs at the sides of the array need a smaller number of beams. This is due to the fact that the signals of UEs residing at the sides of the array experience less interference. Also, the DFT beams become denser in front of the array, while the resolution of the DFT beams decreases towards the sides of the array. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{f3.pdf} \caption{Rate and the number of beams per UE vs. load $K/N$, $N=225$, $S=45$, $\rho=10$.} \label{fig:rate vs KN} \end{figure}% \begin{figure}[t] \centering \includegraphics[width=\linewidth]{f1.pdf} \caption{The number of allocated beams vs. angular position of UEs, $\delta=0.1$, $N=225$, $S=45$, $\rho=10$.} \label{fig:num beam vs ang} \end{figure}% \section{Conclusions} \label{sec:Conclusion} Based on a large-system analysis, a novel TSB method was proposed that adjusts the dimensions of the UE-specific OBF matrices based on the projection of the optimal MMSE vectors onto the beam domain. This approach takes the multiple-access interference into account when designing the OBF, and thus yields an optimal selection of sectors for a given UE. This allowed us to study the optimal window-sizes $|\mathcal{B}_k|$ for a given performance degradation. It was observed that the window-size on average increases as the load of the system grows large, i.e., as the multiple-access interference increases. Also, the numerical analysis showed that the UEs residing at the sides of the antenna array need smaller window-sizes, which is due to the lower interference and the lower resolution of the DFT beams at the sides of the array. It was shown that the SINR values attained by the proposed approach closely follow those of the optimal MMSE receiver, while the computational burden of obtaining the inner-beamformers is greatly reduced. \appendices \input{Appendix} \bibliographystyle{IEEEtran}
\section{Introduction} For positive integers $m$ and $n$ and for $q$ a nonzero element of a field $\mathbb{K}$ that is not a root of unity, let us denote by $\mathcal{A}=\Oq$ the algebra of $m\times n$ quantum matrices. There is a natural action of the algebraic torus $\mathcal{H}=(\mathbb{K}^*)^{m+n}$ on $\mathcal{A}$ which, by work of Goodearl and Letzter \cite{bg}, allows the prime spectrum of $\mathcal{A}$ to be partitioned into a finite number of disjoint \emph{$\mathcal{H}$-strata}. Moreover, each $\mathcal{H}$-stratum is homeomorphic (with respect to the Zariski topology) to the prime spectrum of a commutative Laurent polynomial ring over $\mathbb{K}$. In this work, we complete the project started in~\cite{bln} and continued in~\cite{bldim,bll}; namely, that of determining a useful criterion for the dimension of a given $\mathcal{H}$-stratum. Furthermore, this criterion enables one to easily enumerate the $\mathcal{H}$-strata in $\Oq$ with respect to dimension. The principal motivation for this originates in Dixmier's idea that for an infinite-dimensional algebra, identifying the primitive ideals forms an important first step towards understanding the representation theory of the algebra. On the other hand, as a consequence of the $\mathcal{H}$-stratification theory, the primitive ideals are those prime ideals that are maximal within their $\mathcal{H}$-stratum. In particular, primitive $\mathcal{H}$-primes correspond to zero-dimensional $\mathcal{H}$-strata. Our criterion is roughly described as follows. Within each $\mathcal{H}$-stratum there is a unique prime ideal that is invariant under the action of $\mathcal{H}$, a so-called \emph{$\mathcal{H}$-prime}. Next, to any given $\mathcal{H}$-prime, we may associate a certain permutation $\tau$, which, for reasons that will become clear, we call a \emph{toric permutation}. Our first main result is the following theorem. 
\begin{thm} \label{maintheorem} Let $J$ be an $\mathcal{H}$-prime in $\Oq$ and let $\tau$ be the associated toric permutation. The dimension of the $\mathcal{H}$-stratum containing $J$ is precisely the number of odd cycles in the disjoint cycle decomposition of $\tau$. \end{thm} To clarify, the parity of a cycle is defined to be the parity of the number of inversions. Thus a cycle is odd if and only if it has even length. The crucial point in the proof of Theorem~\ref{maintheorem} is an isomorphism given in Theorem~\ref{main2} between the kernels of certain linear maps. Before describing this, we note that the authors~\cite{bcl}, and independently Yakimov \cite{yakimov2} with stronger hypotheses on $\mathbb{K}$ and $q$, have generalized the isomorphism to obtain a formula for calculating the dimension of torus-invariant strata in certain important subalgebras of the quantized enveloping algebra $U_q(\mathfrak{g})$ of a simple complex Lie algebra $\mathfrak{g}$. In this work we give a proof in the context of quantum matrices both for completeness and for its interesting combinatorial nature. The proof of Theorem~\ref{maintheorem} depends on two parametrizations of the set of $\mathcal{H}$-primes of $\Oq$. The first, due to Cauchon~\cite{cauchon1}, assigns to every $\mathcal{H}$-prime a combinatorial object called a \emph{Cauchon diagram}. This is simply an $m\times n$ grid of squares coloured black or white according to the rule that if a square is black, then either all squares strictly above or all squares strictly to the left are also black. See Figure~\ref{cauchonexamples} for some examples. Interestingly, Cauchon diagrams appear independently in the work of Postnikov~\cite{postnikov} under the name \reflectbox{L}-diagrams (or ``le''-diagrams), as a parametrization of the totally nonnegative Grassmann cells of the totally nonnegative Grassmannian. 
The connection between quantum matrices and total nonnegativity is detailed in papers of Goodearl, Launois and Lenagan~\cite{gll,gll2,gll3}. The second parametrization of $\mathcal{H}$-primes consists of the set of \emph{restricted ($m+n$)-permutations}, that is, permutations $\sigma\in S_{m+n}$ that satisfy $$-n\leq \sigma(i)-i\leq m$$ for all $i\in\{1,\ldots,m+n\}$. In~\cite{launois1} it has been shown that the set of restricted ($m+n$)-permutations ordered under the reverse Bruhat order is order-isomorphic to the poset of $\mathcal{H}$-primes ordered by inclusion. This result was recently generalized by Yakimov \cite{yakimov}. A bijection between the set of Cauchon diagrams and the set of restricted permutations can be made using \emph{pipe dreams} (see Section~\ref{pipedreams}). Bell and Launois~\cite{bldim} have shown that the dimension of a given $\mathcal{H}$-stratum may be computed using the associated Cauchon diagram $D$. In particular, they show that one may construct a certain skew-symmetric matrix $M(D)$ from $D$ such that the dimension of the $\mathcal{H}$-stratum is exactly $\dim(\ker(M(D)))$. We let $\omega$ denote the maximum element in the poset of restricted $(m+n)$-permutations, and suppose that we have a Cauchon diagram $D$ whose corresponding restricted permutation is $\sigma$. The \emph{toric permutation} corresponding to $D$ is defined to be the permutation $\tau=\sigma\omega^{-1}$. In Theorem~\ref{main2} we construct an isomorphism between $\ker(M(D))$ and $\ker(P_\sigma+P_\omega)$, where $P_{\mu}$ is the matrix representation of a permutation $\mu$. As this latter space has dimension equal to the number of odd cycles of $\tau$, we obtain Theorem~\ref{maintheorem}. 
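The ingredients just described can be checked computationally on small cases. The sketch below is our own illustration (the diagram encoding, with rows listed top-to-bottom and $1$ marking a black square, and all helper names are our choices, not notation from the text): it traces pipes under the toric labelling to build $\tau$ directly, forms $M(D)$ on the white squares, and verifies that $\dim(\ker(M(D)))$ equals the number of odd cycles of $\tau$ on every $2\times 3$ Cauchon diagram.

```python
import itertools
import numpy as np

def is_cauchon(D):
    """A black square must have all squares strictly above, or all strictly left, black."""
    for i, row in enumerate(D):
        for j, black in enumerate(row):
            if black:
                above = all(D[k][j] for k in range(i))
                left = all(row[k] for k in range(j))
                if not (above or left):
                    return False
    return True

def toric_permutation(D):
    """Trace pipes with the toric labelling: row labels ('r', i), column labels ('c', j)."""
    m, n = len(D), len(D[0])

    def trace(i, j, d):  # d is the travel direction on entering square (i, j)
        while 0 <= i < m and 0 <= j < n:
            if not D[i][j]:  # white square: its arcs join bottom<->left and right<->top
                d = 'left' if d == 'up' else 'up'
            i, j = (i - 1, j) if d == 'up' else (i, j - 1)
        return ('c', j) if i < 0 else ('r', i)  # exit through the top or the left side

    tau = {('c', j): trace(m - 1, j, 'up') for j in range(n)}          # enter from below
    tau.update({('r', i): trace(i, n - 1, 'left') for i in range(m)})  # enter from the right
    return tau

def num_odd_cycles(tau):
    """Odd cycles (odd number of inversions) are exactly those of even length."""
    seen, count = set(), 0
    for start in tau:
        if start not in seen:
            length, x = 0, start
            while x not in seen:
                seen.add(x)
                x, length = tau[x], length + 1
            count += length % 2 == 0
    return count

def stratum_dimension(D):
    """dim ker M(D), with the skew-symmetric M(D) built on the white squares."""
    whites = [(i, j) for i, row in enumerate(D) for j, b in enumerate(row) if not b]
    N = len(whites)
    M = np.zeros((N, N))
    for a, (i, j) in enumerate(whites):
        for b, (k, l) in enumerate(whites):
            if (j == l and i > k) or (i == k and j > l):    # strictly below or right
                M[a, b] = 1
            elif (j == l and i < k) or (i == k and j < l):  # strictly above or left
                M[a, b] = -1
    return N - np.linalg.matrix_rank(M) if N else 0

# check the dimension formula on all 2x3 Cauchon diagrams
for bits in itertools.product([0, 1], repeat=6):
    D = [list(bits[:3]), list(bits[3:])]
    if is_cauchon(D):
        assert stratum_dimension(D) == num_odd_cycles(toric_permutation(D))
```

Since both ends of a row or column carry the same label in the toric labelling, the trace starting below a column (or right of a row) lands directly on $\tau$ of that label, so $\sigma$ and $\omega$ never need to be formed separately.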
As an application, we are able to show in Corollary~\ref{enumerationcor} that the number of $d$-dimensional $\mathcal{H}$-strata in $\Oq$ is the coefficient of $\frac{x^m}{m!}\frac{y^n}{n!}t^d$ in the power series expansion of $$(e^{-y}+e^{-x}-1)^\frac{-1-t}{2}(e^x+e^y-1)^\frac{1-t}{2}.$$ By determining the coefficients of this power series, we are able to settle several conjectures from~\cite{bll,bldim} concerning the asymptotic proportion of $d$-dimensional $\mathcal{H}$-strata in $\Oq$. Namely, we prove in Theorem~\ref{egf2cor} that for fixed $m$ and $d$, the proportion of $d$-dimensional $\mathcal{H}$-strata in $\Oq$ tends to $\frac{a(d)}{m!2^m}$ as $n\rightarrow\infty$, where $a(d)$ is the coefficient of $t^d$ in the polynomial $(t+1)(t+3)\cdots (t+2m-1)$. \section{Preliminaries} In this section, we give some basic background on quantum matrices, the Goodearl-Letzter stratification theory, and Cauchon diagrams. \subsection{Quantum Matrices} Throughout this paper, $\mathbb{K}$ is a field, $q$ is a nonzero element of $\mathbb{K}$ that is not a root of unity, and $m$ and $n$ are two fixed positive integers. For a positive integer $\ell$, let $[\ell]:=\{1,2,\ldots, \ell\}$. \begin{defn} We let $\Oq$ denote the \emph{quantized coordinate ring of $m\times n$ matrices}. This is the algebra with generators $x_{i,j}$ for all $(i,j)\in [m]\times [n]$, subject to the following relations: \begin{enumerate} \item For all $i\in[m]$ and $j,k\in[n]$ with $j<k$, $$x_{i,j}x_{i,k}=qx_{i,k}x_{i,j};$$ \item For all $j\in[n]$ and $i,\ell \in[m]$ with $i<\ell$, $$x_{i,j}x_{\ell,j}=qx_{\ell,j}x_{i,j};$$ \item For all $i,\ell\in [m]$ with $i<\ell$ and distinct $j,k\in [n]$: \begin{displaymath} x_{i,j}x_{\ell,k} = \left\{\begin{array}{ll} x_{\ell,k}x_{i,j}, & \textnormal{ if $j>k$;} \\ x_{\ell,k}x_{i,j} +(q-q^{-1})x_{i,k}x_{\ell,j}, & \textnormal{ if $j<k$.} \end{array} \right. 
\end{displaymath} \end{enumerate} \end{defn} The algebra $\Oq$ is colloquially known as \emph{the algebra of $m\times n$ quantum matrices}, or just \emph{quantum matrices}. It is well-known that $\Oq$ may be presented as an iterated Ore extension of the base field. We set $X$ to be the matrix of generators obtained by setting $X[i,j]:=x_{i,j}$. The collection of prime ideals of $\Oq$ is the \emph{prime spectrum} and is denoted by ${\rm spec}(\Oq)$. We endow the prime spectrum with the Zariski topology. As we assume that $q$ is not a root of unity, every prime ideal of $\Oq$ is completely prime \cite{GL2}. \subsection{$\mathcal{H}$-Stratification Theory} We wish to understand the structure of ${\rm spec}(\Oq)$. To this end, a useful tool is the $\mathcal{H}$-stratification of Goodearl and Letzter. First notice that the algebraic torus $\mathcal{H}=(\mathbb{K}^*)^{m+n}$ acts rationally by automorphisms on $\Oq$ as follows. If $$h=(\rho_1,\ldots,\rho_m,\gamma_1,\ldots,\gamma_n)\in\mathcal{H},$$ then for all $(i,j)\in [m]\times [n]$ we set $$h\cdot x_{i,j} := \rho_i\gamma_jx_{i,j}.$$ An ideal $J$ is an \emph{$\mathcal{H}$-ideal} if $h\cdot J = J$ for all $h\in\mathcal{H}$. An $\mathcal{H}$-ideal $P$ is called an \emph{$\mathcal{H}$-prime ideal} if, for any $\mathcal{H}$-ideals $I$ and $J$, $IJ\subseteq P$ implies either $I\subseteq P$ or $J\subseteq P$. It can be shown that an $\mathcal{H}$-prime ideal is, in fact, a prime ideal. The collection of all $\mathcal{H}$-prime ideals is denoted by $\hspec$. \begin{thm}[Goodearl and Letzter \cite{bg}]\label{Hstratification} For the algebra $\mathcal{A}=\Oq$, the following hold. \begin{enumerate} \item There are only finitely many $\mathcal{H}$-prime ideals. 
\item The set {\rm spec($\mathcal{A}$)} can be partitioned into a disjoint union as follows: $$\emph{spec}(\mathcal{A}) = \bigcup_{J\in \mathcal{H}\text{\emph{-spec}}(\mathcal{A})} Y_J,$$ where $$\displaystyle{Y_J:=\{P\in\text{\emph{spec}}(\mathcal{A}) \mid \bigcap_{h\in\mathcal{H}} h\cdot P = J\}}$$ is the \emph{$\mathcal{H}$-stratum} associated to $J$. \item Each $\mathcal{H}$-stratum is homeomorphic to the prime spectrum of a commutative Laurent polynomial ring over $\mathbb{K}$. \item The primitive ideals of ${\rm spec}(\mathcal{A})$ are precisely those ideals that are maximal within their $\mathcal{H}$-stratum. \end{enumerate} \end{thm} The \emph{dimension} of an $\mathcal{H}$-stratum $Y_J$ is the Krull dimension of the commutative Laurent polynomial ring whose prime spectrum is homeomorphic to $Y_J$ by Theorem~\ref{Hstratification}(3). In other words, the dimension of $Y_J$ is the length $d$ of the longest chain $P_0\subset P_1\subset\cdots\subset P_d$ of prime ideals contained in $Y_J$. \subsection{Cauchon Diagrams} The deleting-derivations algorithm of Cauchon, applied to the algebra $\Oq$, allows one to obtain a nice combinatorial parametrization of $\hspec$. \begin{defn} An \emph{$m\times n$ diagram} is simply an $m\times n$ grid of squares, each square coloured either white or black. A diagram is a \emph{Cauchon diagram} if the colouring has the property that if a square is black, then either every square strictly above or every square strictly to the left is also black. \end{defn} \begin{figure}[htbp] \begin{center} \includegraphics[width=3.5in]{cauchonexamples2.pdf} \caption{Three $3\times 3$ diagrams. The left and center diagrams are Cauchon; the right diagram is not.} \label{cauchonexamples} \end{center} \end{figure} \begin{thm}[Cauchon~\cite{cauchon2}] \label{cauchonsthm} For every $m$ and $n$, $\hspec$ is in bijective correspondence with the collection of $m\times n$ Cauchon diagrams. 
\end{thm} Given an $m\times n$ Cauchon diagram $D$ with $N$ white squares, it is convenient to label the white squares by the elements of $[N]$. Given such a labelling, we construct an $N\times N$ skew-symmetric matrix $M(D)$ by the rule \begin{displaymath} M(D)[i,j]= \left\{\begin{array}{ll} 1 & \textnormal{if square $i$ is strictly below or strictly to the right of square $j$,} \\ -1 & \textnormal{if square $i$ is strictly above or strictly to the left of square $j$, and}\\ 0 & \textnormal{otherwise.} \end{array} \right. \end{displaymath} The analysis of $M(D)$ has proven crucial in the study of the dimension of $\mathcal{H}$-strata due to the following theorem. \begin{thm}[\cite{bldim}] \label{BL} Let $J$ be an $\mathcal{H}$-prime ideal with corresponding Cauchon diagram $D$. The dimension of the $\mathcal{H}$-stratum containing $J$ is $\dim(\ker(M(D)))$. \end{thm} \section{Pipe Dreams and Permutations} \label{pipedreams} In this section, we describe the \emph{pipe dreams} construction, which gives a bijection between Cauchon diagrams and restricted permutations. Let us call a permutation $\sigma$ of $[m+n]$ \emph{restricted} if, for all $i\in[m+n]$, we have $-n\leq \sigma(i)-i\leq m$. The set of all restricted permutations of $[m+n]$ is a subposet of the symmetric group of $[m+n]$ endowed with the Bruhat order \cite{launois1}. Moreover, we have the following result, which was proved in \cite{launois1}. \begin{thm}\label{orderisomorphic} For fixed $m$ and $n$, the poset $\hspec$, ordered by inclusion, is order-isomorphic to the poset of restricted permutations of $[m+n]$, ordered by the Bruhat order. \end{thm} The connection between Cauchon diagrams and restricted permutations suggested by Theorems \ref{cauchonsthm} and \ref{orderisomorphic} can be made clear via a notion called \emph{pipe dreams}. To explain this idea, let us fix an $m\times n$ diagram $D$. 
We lay ``pipes'' on the squares of $D$ by placing a ``hyperbola'' on every white square and a ``cross'' on every black square (see Figure~\ref{standard}). Next, we label the sides of the diagram by elements of $[m+n]$ as in, for example, Figure~\ref{standard} (we assume the general form of the labelling is clear from this example). \begin{figure}[htbp] \begin{center} \includegraphics[width=2.5in]{restricted.pdf} \caption{Example of applying pipe dreams to a given diagram} \label{standard} \end{center} \end{figure} The restricted permutation $\sigma$ is obtained from this process by defining $\sigma(i)$ to be the label (on the left or top side of $D$) reached by following the pipe starting at label $i$ (on the bottom or right side of $D$). To be clear, when following a pipe through a black square, we always go straight through the square. For example, in the diagram of Figure~\ref{standard}, we obtain the restricted permutation $(1\,2)(3\,4\,7\,5)$. Notice that the inverse of the permutation is obtained by reversing this procedure, that is, starting at a label from the left or top and following the pipe to the bottom or right of the diagram. \begin{defn} \label{omega} We denote by $\omega$ the restricted $(m+n)$-permutation obtained from the $m\times n$ diagram consisting of only black squares. Therefore, \begin{displaymath} \omega(i) = \left\{\begin{array}{ll} m+i & \textnormal{if $1\leq i\leq n$,} \\ i-n & \textnormal{if $n+1\leq i\leq n+m$.} \end{array} \right. \end{displaymath}The permutation $\omega$ is the maximum element in the set of restricted $(m+n)$-permutations ordered by the reverse Bruhat order. \end{defn} \section{Dimension of the $\mathcal{H}$-strata} In this section we prove Theorem \ref{maintheorem}. \subsection{Toric Permutations} We introduce the notion of \emph{toric permutations}, which are essential in our analysis. 
\begin{defn} \label{torusdef} Given a diagram $D$ with restricted permutation $\sigma$, the \emph{toric permutation} associated to $D$ is simply the permutation $\tau=\sigma\omega^{-1}$. \end{defn} The toric permutation can be obtained via pipe dreams simply by following the same procedure, except that the bottom and right sides of the diagram are now relabelled as in Figure~\ref{pipes}. Notice that under this labelling, any cycle in the toric permutation may be found by considering the diagram as a ``torus'' and following the pipe around the ``torus''. \begin{figure}[htbp] \begin{center} \includegraphics[width=2.5in]{toric.pdf} \caption{New row/column labelling to obtain the toric permutation $(1\,3\,5)(2\,6\,4)(7)$} \label{pipes} \end{center} \end{figure} The proof of Theorem~\ref{maintheorem} requires the following two lemmas. For a permutation $\mu\in S_k$, $P_\mu$ denotes the corresponding $k\times k$ permutation matrix. That is, $P_\mu$ is the $k\times k$ matrix whose entries are defined by $P_\mu[i,j]:= \delta_{j,\mu(i)}$, where $\delta$ denotes the Kronecker symbol. \begin{lem} \label{technicallemma2} Let $D$ be an $m\times n$ diagram whose corresponding restricted permutation is $\sigma$. If $\tau=\sigma\omega^{-1}$ is the toric permutation associated to $D$, then $\vb{v}\in\ker(P_{\omega}+P_{\sigma})$ if and only if $\vb{v}_{b}=-\vb{v}_{\tau(b)}$ for every label $b$, where $\vb{v}_i$ denotes the $i$th coordinate of $\vb{v}$. \end{lem} \begin{proof} Consider the pipe in $D$ corresponding to the cycle in $\tau$ containing $b$ and $\tau(b)$. Now $\vb{v}\in\ker(P_{\omega}+P_{\sigma})$ if and only if for all $a$ we have $\vb{v}_{\omega(a)}+\vb{v}_{\sigma(a)} = 0$. Taking $a=\omega^{-1}(b)$ we obtain $\vb{v}_{b}=-\vb{v}_{\tau(b)}$ as desired. \end{proof} \begin{lem} \label{kernel} Let $D$ be an $m\times n$ diagram whose corresponding restricted permutation is $\sigma$, and let $\tau$ denote the toric permutation $\tau=\sigma\omega^{-1}$. 
Then the dimension of $\ker(P_\omega+P_{\sigma})$ is the number of odd cycles in the disjoint-cycle decomposition of $\tau$. \end{lem} \begin{proof} Let $\{\gamma_1,\ldots,\gamma_\ell\}$ be the set of odd cycles in the disjoint cycle decomposition of $\sigma\omega^{-1}$. Given the odd cycle $\gamma=(a_1\, a_2\,\ldots\, a_{2k})$, define the vector $\vb{v}^\gamma \in \mathbb{Z}^{m+n}$ by \begin{displaymath} \vb{v}^\gamma_b = \left\{\begin{array}{ll} 1 & \textnormal{if $b=a_i$ and $i$ is odd,} \\ -1 & \textnormal{if $b=a_i$ and $i$ is even,}\\ 0 & \textnormal{otherwise.} \end{array} \right. \end{displaymath} We claim that the set $B=\{\vb{v}^{\gamma_i} \mid i\in [\ell]\}$ forms a basis for $\ker(P_{\omega}+P_{\sigma})$. Note first that each $\vb{v}^{\gamma_i}$ does lie in $\ker(P_{\omega}+P_{\sigma})$ by Lemma~\ref{technicallemma2}, since its entries alternate in sign along the even-length cycle $\gamma_i$ and vanish elsewhere. Since the odd cycles are mutually disjoint, it is clear that the members of $B$ form an independent set in the $\mathbb{Z}$-module $\mathbb{Z}^{m+n}$. Suppose that $\vb{v}\in \ker(P_{\omega}+P_{\sigma})$. By Lemma~\ref{technicallemma2}, we know that if $b$ is in an even cycle of $\tau$, then $\vb{v}_b=0$. Moreover, the values of the entries corresponding to an odd cycle agree up to multiplication by $-1$. Thus we see that $\vb{v}$ can be written as a linear combination of elements of $B$. \end{proof} \subsection{Proof of Theorem~\ref{maintheorem}} \begin{notn} Fix an $m\times n$ diagram $D$ with $N$ white squares labelled by distinct elements of the set $[N]$ such that labels are strictly increasing from left to right along rows and if $i < j$ then the label of each white box in row $i$ is strictly less than the label of each white box in row $j$. Let $\tau=\sigma\omega^{-1}$ be the toric permutation of $D$. Let $\vb{w}$ be a vector indexed by the white squares of $D$ and $\vb{v}$ a vector indexed by $[m+n]$. We refer to the entries of $\vb{w}$ by $\vb{w}_j$ for $j\in [N]$ and the entries of $\vb{v}$ by $\vb{v}_a$ where $a\in [m+n]$. 
\begin{enumerate} \item Since, in the toric labelling, both ends of a row or column are given the same label, we may unambiguously refer to a row or column by this label. \item Given a diagram $D$ with the toric labelling and a white square $i$ of $D$, let $\lef{i}$ and $\up{i}$ be, respectively, the labels of the rows or columns reached by following the bottom and top pipes of the hyperbola placed on $i$. See Example \ref{tne}. \item For $S\subseteq [N]$, let $\vb{w}_S = \sum_{j\in S}\vb{w}_j$. \item For a given white square $i$, let $A(i), R(i), B(i)$ and $L(i)$ be the sets of white squares that are, respectively, strictly above, strictly to the right, strictly below and strictly to the left of square $i$. \end{enumerate} \end{notn} \begin{figure}[htbp] \begin{center} \includegraphics[width=2.5in]{isoexample.pdf} \caption{} \label{tnefig} \end{center} \end{figure} \begin{ex}\label{tne} Consider the diagram in Figure~\ref{tnefig}. For this example, we have labelled the white squares in a regular font, and the row and column labels in bold font. We have, for example, $\lef{7}=\vb{4}$ and $\up{7}=\vb{7}$, while $\lef{8}=\vb{7}$ and $\up{8}=\vb{6}$. On the other hand, $A(5)=\{2\}, R(5)=\emptyset, B(5)=\{6,9\}$ and $L(5)=\{4\}$. \end{ex} Before proving the main theorem of this section, we note that by the definition of $M(D)$ we have $\vb{w}\in\ker(M(D))$ if and only if for every white square $i$, the following identity holds $$\vb{w}_{A(i)}+\vb{w}_{L(i)}=\vb{w}_{B(i)}+\vb{w}_{R(i)}.$$ \begin{thm} \label{main2} If $D$ is a diagram and $\sigma$ is the restricted permutation obtained from $D$, then $$\ker(P_\omega+P_\sigma) \simeq \ker(M(D)).$$ \end{thm} \begin{proof} We place the toric labelling on $D$. Suppose that $D$ has $N$ white squares labelled $1,2,\ldots, N$. The theorem is proved once we exhibit injective functions $\phi: \ker(P_\omega+P_\sigma) \rightarrow \ker(M(D))$ and $\psi: \ker(M(D)) \rightarrow \ker(P_\omega+P_\sigma)$. 
The reader may wish to refer to the example immediately following this proof where we give an explicit calculation of $\phi$ and $\psi$ for the diagram of Figure~\ref{tnefig}. Given $\vb{v}\in\ker(P_\omega+P_\sigma)$, we take $\vb{w}:=\phi(\vb{v})$ to be the vector whose entries are defined by $\vb{w}_i = \vb{v}_{\lef{i}}-\vb{v}_{\up{i}}$. To show $\vb{w}\in\ker(M(D))$ we need only to check that for all white squares $i$ the relation $\vb{w}_{A(i)}+\vb{w}_{L(i)}-\vb{w}_{B(i)}-\vb{w}_{R(i)}=0$ holds. Notice that, if $S=\{s_1,s_{2},\ldots,s_k\}$ is any block of consecutive white squares in the same row, where $s_1$ is the left-most white square, and $s_k$ is the right-most, then we have $\vb{w}_S=\vb{v}_{\lef{s_1}}-\vb{v}_{\up{s_k}}$ since one can easily check that $\up{s_i}=\lef{s_{i+1}}$ for $i \in [k-1]$. For a small example, see Figure~\ref{proof1}. \begin{figure}[htbp] \begin{center} \includegraphics[width=2.5in]{proof1.pdf} \caption{$\vb{w}_{\{s_1,s_2,s_3\}} = \vb{v}_{\lef{s_1}} - \vb{v}_{\up{s_3}}$} \label{proof1} \end{center} \end{figure}Similarly, if $S=\{s_1,s_{2},\ldots,s_k\}$ is any block of consecutive white squares in the same column, where $s_1$ is the bottom-most white square, and $s_k$ is the top-most white square, then $\vb{w}_S=\vb{v}_{\lef{s_1}}-\vb{v}_{\up{s_k}}$. Fix a white square $i$. For the sake of brevity, we assume that $A(i),L(i),B(i)$ and $R(i)$ are all non-empty; the remaining cases are similar. We now show that $\vb{w}_{A(i)}+\vb{w}_{L(i)}-\vb{w}_{B(i)}-\vb{w}_{R(i)} =0$ (Figure~\ref{proof2}). Suppose that the label of the row containing $i$ is $r$ and the label of the column containing $i$ is $c$. Let $s_{r,1}$ be the left-most white square in row $r$, $s_{r,k}$ the right-most white square in row $r$, $s_{c,1}$ the bottom-most white square in column $c$ and $s_{c,\ell}$ the top-most white square in column $c$. 
\begin{figure} \begin{center} \includegraphics[width=2.5in]{proof2.pdf} \caption{} \label{proof2} \end{center} \end{figure}We have \begin{eqnarray*} \vb{w}_{A(i)}+\vb{w}_{L(i)}-\vb{w}_{B(i)}-\vb{w}_{R(i)} &= & (\vb{v}_{\up{i}} - \vb{v}_{\up{s_{c,\ell}}}) + (\vb{v}_{\lef{s_{r,1}}} - \vb{v}_{\lef{i}}) \\ && -(\vb{v}_{\lef{s_{c,1}}} - \vb{v}_{\lef{i}}) - (\vb{v}_{\up{i}} - \vb{v}_{\up{s_{r,k}}}) \\ & = & (\vb{v}_{\lef{s_{r,1}}} + \vb{v}_{\up{s_{r,k}}}) - (\vb{v}_{\lef{s_{c,1}}}+\vb{v}_{\up{s_{c,\ell}}})\\ &=& (\vb{v}_r + \vb{v}_{\tau(r)}) - (\vb{v}_c+\vb{v}_{\tau(c)})\\ & = & 0, \end{eqnarray*} where the last equality follows by Lemma~\ref{technicallemma2}. Now define $\psi: \ker(M(D)) \rightarrow \ker(P_\omega+P_\sigma)$ as follows. Let $\vb{w}\in \ker(M(D))$ and write $\vb{v}=\psi(\vb{w})$. If $a$ is a column, we take $\vb{v}_a$ to be the sum of the entries $\vb{w}_i$ over all white squares $i$ in column $a$, while if $a$ is a row, we take $\vb{v}_a$ to be the negative of the corresponding sum over row $a$. By Lemma~\ref{technicallemma2}, it suffices to show that for all labels $a$, we have \begin{eqnarray} \vb{v}_a+\vb{v}_{\tau(a)}& = &0.\label{one} \end{eqnarray} Fix a label $a \in [m+n]$ and let $(s_1,s_2,\ldots,s_k)$ be the sequence of white squares used in the pipe from $a$ to $\tau(a)$. \begin{figure}[htbp] \begin{center} \includegraphics[width=3.5in]{proof3.pdf} \caption{} \label{proof3} \end{center} \end{figure}We assume that $a$ is a column, as the case in which $a$ is a row is argued similarly. We first prove by induction that, for every $i=1,\ldots, k$, we have \begin{eqnarray} \vb{v}_a &= & \vb{w}_{s_1} + \vb{w}_{A(s_1)} \nonumber \\ &= &(-1)^{i+1}\vb{w}_{s_i}+\vb{w}_{A(s_i)}-\vb{w}_{B(s_i)}.\label{two1} \end{eqnarray} See Figure~\ref{proof3}. If $i=1$, then Equation~(\ref{two1}) is trivially true since $B(s_1)=\emptyset$. So suppose that it is true for all values less than some $i>1$. There are two cases to consider. 
If $i$ is even, then $s_i$ is the first white square to the left of $s_{i-1}$. Note that $\vb{w}_{R(s_i)} = \vb{w}_{R(s_{i-1})} + \vb{w}_{s_{i-1}}$ and $\vb{w}_{L(s_{i-1})} = \vb{w}_{L(s_{i})} + \vb{w}_{s_{i}}$. Since $\vb{w}\in\ker(M(D))$ we have the two equations $$\vb{w}_{A(s_{i-1})} + \vb{w}_{L(s_{i})}+\vb{w}_{s_{i}} - \vb{w}_{B(s_{i-1})} - \vb{w}_{R(s_{i-1})} = 0,$$ and $$\vb{w}_{A(s_{i})} + \vb{w}_{L(s_{i})} - \vb{w}_{B(s_{i})} - \vb{w}_{R(s_{i-1})}-\vb{w}_{s_{i-1}} = 0.$$ Subtracting the second equation from the first we obtain \begin{eqnarray} \vb{w}_{s_{i-1}}+\vb{w}_{A(s_{i-1})}-\vb{w}_{B(s_{i-1})} &=& -\vb{w}_{s_{i}} + \vb{w}_{A(s_{i})}-\vb{w}_{B(s_{i})}. \label{three1} \end{eqnarray} But since $i-1$ is odd, the left hand side of~(\ref{three1}) is equal, by induction, to $\vb{w}_{s_1} + \vb{w}_{A(s_1)}$ and thus~(\ref{three1}) implies~(\ref{two1}) for the case that $i$ is even. Now for $i$ odd, $s_{i}$ is the first white square above $s_{i-1}$. It is easy to check that we have the two equations $$\vb{w}_{A(s_{i-1})} = \vb{w}_{A(s_{i})} + \vb{w}_{s_{i}}$$ and $$\vb{w}_{B(s_{i})} = \vb{w}_{B(s_{i-1})} + \vb{w}_{s_{i-1}}.$$ By induction we obtain \begin{eqnarray*} \vb{w}_{s_1} + \vb{w}_{A(s_1)} &= &-\vb{w}_{s_{i-1}}+\vb{w}_{A(s_{i-1})}-\vb{w}_{B(s_{i-1})}\\ & = & -\vb{w}_{s_{i-1}} +\vb{w}_{A(s_{i})} + \vb{w}_{s_{i}} - (\vb{w}_{B(s_{i})} - \vb{w}_{s_{i-1}})\\ & = & \vb{w}_{s_{i}} + \vb{w}_{A(s_{i})} - \vb{w}_{B(s_{i})}. \end{eqnarray*} This finishes the proof of Equation~(\ref{two1}). Now we verify Equation~(\ref{one}). If $k$ is even, then $\tau(a)$ is a column-label and so $A(s_k)=\emptyset$. By~(\ref{two1}), \begin{eqnarray*} \vb{v}_a & = & \vb{w}_{s_1} + \vb{w}_{A(s_1)}\\ & = & -\vb{w}_{s_k}+\vb{w}_{A(s_k)}-\vb{w}_{B(s_k)}\\ &=& -\vb{w}_{s_k}-\vb{w}_{B(s_k)}\\ &=& -\vb{v}_{\tau(a)}. \end{eqnarray*} On the other hand, if $k$ is odd, then $\tau(a)$ is a row-label and so $L(s_k)=\emptyset$. 
Since $\vb{w}\in\ker(M(D))$, we must have $\vb{w}_{A(s_k)}-\vb{w}_{B(s_k)}=\vb{w}_{R(s_k)}$. Hence \begin{eqnarray*} \vb{v}_a & = & \vb{w}_{s_1} + \vb{w}_{A(s_1)}\\ & = & \vb{w}_{s_k}+\vb{w}_{A(s_k)}-\vb{w}_{B(s_k)}\\ &=& \vb{w}_{s_k}+\vb{w}_{R(s_k)}\\ &=& -\vb{v}_{\tau(a)}. \end{eqnarray*} The final step is to prove that the functions $\phi$ and $\psi$ are injective. First, suppose that $\psi(\vb{w})=\vb{0}$ with $\vb{w}\neq\vb{0}$. There must, therefore, exist a white square $i$ (with $i$ minimum) such that $\vb{w}_i\neq 0$ but $\vb{w}_{A(i)}=0$ and $\vb{w}_{L(i)}=0$. Since $\vb{w}\in\ker(M(D))$ we have $\vb{w}_{B(i)}+\vb{w}_{R(i)}=0$. On the other hand, since $\psi(\vb{w})=\vb{0}$ we have, by the construction of $\psi(\vb{w})$, that $\vb{w}_{B(i)}+\vb{w}_{i}=0$ and $0=\vb{w}_{R(i)}+\vb{w}_{i}=-\vb{w}_{B(i)}+\vb{w}_{i}$. Thus we must have $\vb{w}_i=0$, contradicting the choice of $i$. Therefore $\psi$ is injective. To show that $\phi$ is injective we show that $\psi(\phi(\vb{v}))=-2\vb{v}$ for every $\vb{v}\in\ker(P_\omega+P_\sigma)$. First, if $a$ is a row then $$\psi(\phi(\vb{v}))_a = -\sum_{i \mbox{ in row }a} \phi(\vb{v})_i= -\sum_{i \mbox{ in row }a} (\vb{v}_{\lef{i}}-\vb{v}_{\up{i}})=-(\vb{v}_a - \vb{v}_{\tau(a)}).$$ On the other hand, by Lemma~\ref{technicallemma2}, we know that $\vb{v}_{\tau(a)} = -\vb{v}_a$ and thus $\psi(\phi(\vb{v}))_a = -2\vb{v}_a$. A similar argument shows that if $a$ is a column then $\psi(\phi(\vb{v}))_a = -2\vb{v}_a$. Hence $\phi$ is injective. \end{proof} \begin{rem} The linear maps $\phi$ and $\psi$ defined in the proof of Theorem \ref{main2} are not inverses. However, one can check that $$\psi\circ \phi = -2\cdot {\rm id}|_{{\rm ker}(P_\omega+P_\sigma)} \mbox{ and } \phi\circ \psi=-2\cdot {\rm id}|_{{\rm ker}(M(D))}.$$ \end{rem} \begin{ex} In this example we illustrate the maps $\phi$ and $\psi$ from the above proof. If $M$ is a matrix, then $M^t$ denotes its transpose. 
We consider the diagram from Figure~\ref{tnefig}. The corresponding toric permutation is $(1\,4\,8\,7\,2\,6)(3\,5)$. By Lemma~\ref{kernel}, we may construct $\vb{v}=(1,1,0,-1,0,-1,-1,1)^t$ corresponding to the cycle $(1\,4\,8\,7\,2\,6)$, and $\vb{v'}:=(0,0,1,0,-1,0,0,0)^t$ corresponding to the cycle $(3\,5)$. \begin{figure}[htbp] \begin{center} \includegraphics[width=2.5in]{isoexample2.pdf} \caption{Calculating $\phi((1,1,0,-1,0,-1,-1,1)^t)$ } \label{tnefig2} \end{center} \end{figure} In Figure~\ref{tnefig2}, we have replaced the row- or column-label $i$ with $\vb{v}_i$. We wish to construct an element $\phi(\vb{v})=\vb{w}$ in $\ker(M(D))$. By the isomorphism in the proof of Theorem~\ref{main2}, we see that, for example, $\vb{w}_8= \vb{v}_{\lef{8}}-\vb{v}_{\up{8}} = -1-(-1)= 0$. Continuing in this way, we find that $\vb{w}=(-1,1,-2,1,-1,2,0,0,0,2)^t$. In Figure~\ref{tnefig3}, we have replaced the white square labels from Figure~\ref{tnefig} with the corresponding value of $\vb{w}$. Using this, let us calculate $\psi(\vb{w})=\vb{u}$. This is easier: if $i$ is a row, then $\vb{u}_i$ is simply the negative of the row sum, and if $i$ is a column, then $\vb{u}_i$ is the column sum. Thus we obtain $\vb{u}=(-2,-2,0,2,0,2,2,-2)^t$. It is easy to check that $\vb{u}=-2\vb{v} \in \ker(P_\sigma+P_\omega)$. \begin{figure}[htbp] \begin{center} \includegraphics[width=2.5in]{isoexample3.pdf} \caption{Calculating $\psi((-1,1,-2,1,-1,2,0,0,0,2)^t).$ } \label{tnefig3} \end{center} \end{figure} \end{ex} \begin{cor} \label{maincor2} For a Cauchon diagram $D$, the dimension of the $\mathcal{H}$-stratum corresponding to $D$ is equal to the number of odd cycles in the disjoint cycle decomposition of the toric permutation associated to $D$. \end{cor} \begin{proof} This follows immediately from Theorems~\ref{BL} and~\ref{main2}.
\end{proof} \section{Enumeration of $\mathcal{H}$-primes with respect to dimension} In this section we apply Corollary~\ref{maincor2} to obtain enumeration formulas for the total number of $m\times n$ $\mathcal{H}$-strata of a given dimension. Suppose that we are given an $m\times n$ diagram $D$ with the property that the disjoint cycle decomposition of the toric permutation $\tau$ consists of exactly one cycle $\tau=(a_1\,a_2\,\ldots\,a_{m+n})$. Without loss of generality, take $a_1=1$. Using the pipe-dreams visualization of $\tau$, it is easy to check that the sequence $(a_1,a_2,\ldots,a_{m+n})$ consists of contiguous subsequences $R_1,C_1,\ldots, R_k, C_k$ alternating between increasing sets of row-labels (the $R_i$) and decreasing sets of column-labels (the $C_i$). In other words, we may write $\tau=(R_1, C_1, R_2,\ldots, R_k, C_k)$ where the $R_i$ form a partition of $[m]$ and each $R_i$ is written in increasing order, while the $C_i$ form a partition of $m+[n]$ and each $C_i$ is written in decreasing order. We call $\tau$ a \emph{toric cycle}. On the other hand, given a toric cycle $\tau=(R_1, C_1, R_2,\ldots, R_k, C_k)$, it is a simple matter to check that the permutation $\tau\omega$ is restricted in the sense of Section~\ref{pipedreams}. By Theorems~\ref{cauchonsthm} and~\ref{orderisomorphic}, we conclude that there exists a unique Cauchon diagram whose toric permutation is $\tau$. \begin{defn} The number of partitions of $[n]$ into $k$ non-empty parts is the \emph{Stirling number of the second kind}, and will here be denoted by ${n\brace k}$. Note that some authors write $S(n,k)$ for ${n\brace k}$.
\end{defn} The above discussion implies that the number $d_{m,n}$ of $m\times n$ diagrams whose toric permutation consists of exactly one cycle is given by $$d_{m,n} = \sum_{k=1}^{\min(m,n)} k!(k-1)!{m\brace k}{n\brace k}.$$ Let $\mathcal{D}(x,y):=\sum_{m,n\geq 1}d_{m,n}\frac{x^m}{m!}\frac{y^n}{n!}$ be the exponential generating function of $d_{m,n}$ and let $C(x,y)$ be the exponential generating function of all toric permutations (and hence of Cauchon diagrams). The relation between these two generating functions is provided to us via the well-known exponential formula. Note that while this formula is given in Section 5.1 of Stanley~\cite{stanley} (see also Chapter 3 in Wilf~\cite{wilf}) for single variable exponential generating functions, it is straightforward to generalize the result to multivariable exponential generating functions. For those readers unfamiliar with the exponential formula, we now roughly describe the approach. Suppose we wish to find the exponential generating function $A(x,y)$ of a set of structures, where each structure can be written uniquely as a disjoint union of labelled ``irreducible'' components. For example, in the case that is of interest to us, the ``structures'' are the toric permutations, and the labelled irreducible components are the toric cycles. The exponential formula says that if $B(x,y)$ is the exponential generating function for the labelled irreducible components, then $A(x,y)=\exp(B(x,y))$. In our case then, we obtain the following formula. \begin{eqnarray} C(x,y)&=&\exp(x+y)\exp(\mathcal{D}(x,y))\label{eqnC1}, \end{eqnarray} where the extra $\exp(x+y)$ is the exponential generating function for the all-black diagrams (i.e., the identity toric permutation).
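The count $d_{m,n}$ is easy to compute directly. A minimal sketch (ours), using the standard Stirling recurrence ${n\brace k}={n-1\brace k-1}+k{n-1\brace k}$; the small values below were checked by hand (a $1\times 1$ diagram admits one toric cycle, a $2\times 2$ diagram admits three):

```python
# Sketch: d_{m,n} = sum_k k!(k-1)! {m brace k}{n brace k}, the number of
# m x n Cauchon diagrams whose toric permutation is a single toric cycle.
from math import factorial

def stirling2(n, k):
    # Stirling number of the second kind via the standard recurrence.
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

def d(m, n):
    return sum(factorial(k) * factorial(k - 1)
               * stirling2(m, k) * stirling2(n, k)
               for k in range(1, min(m, n) + 1))

print(d(1, 1), d(2, 1), d(2, 2))  # -> 1 1 3
```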
We may refine $\mathcal{D}(x,y)$ by noting that $$\mathcal{D}_e(x,y)=\frac{1}{2}(\mathcal{D}(x,y) - \mathcal{D}(-x,-y))$$ is the generating function for the even toric cycles, while $$\mathcal{D}_o(x,y)=\frac{1}{2}(\mathcal{D}(x,y) + \mathcal{D}(-x,-y))$$ is the generating function for the odd toric cycles. Therefore, if $C(x,y,t)$ is the generating function whose coefficient of $\frac{x^n}{n!}\frac{y^m}{m!}t^d$ is the number of $m\times n$ Cauchon diagrams with $d$ odd cycles in the toric permutation, then \begin{eqnarray} C(x,y,t) & = & \exp(x+y+\mathcal{D}_e(x,y) +t\mathcal{D}_o(x,y)) \nonumber \\ & = & \exp(x+y)\exp(\mathcal{D}(x,y))^{\frac{t+1}{2}}\exp(\mathcal{D}(-x,-y))^{\frac{t-1}{2}}.\label{eqnD} \end{eqnarray} On the other hand, it follows from \cite[Corollary 1.5]{launois1} that the number of $\mathcal{H}$-primes in $\Oq$ is the poly-Bernoulli number $B_n^{(-m)}$ \cite{kaneko2}. As a consequence, we may apply the work of Kaneko~\cite[Remark~p~223]{kaneko2} to conclude that $C(x,y)$ satisfies \begin{eqnarray} C(x,y)&=&\frac{e^{x+y}}{e^x+e^y-e^{x+y}}.\label{eqnC2} \end{eqnarray} We thus obtain \begin{thm}\label{egf4} If $h(m,n,d)$ is the number of $m\times n$ Cauchon diagrams whose toric permutation has $d$ odd cycles, then $C(x,y,t) = \sum_{m,n,d} h(m,n,d)t^d\frac{x^m}{m!}\frac{y^n}{n!}$ satisfies \begin{eqnarray} C(x,y,t) &= & (e^{-y}+e^{-x}-1)^\frac{-1-t}{2}(e^x+e^y-1)^\frac{1-t}{2}.\label{eqnD2} \end{eqnarray} \end{thm} \begin{proof} Comparing Equations~(\ref{eqnC1}) and~(\ref{eqnC2}) we see that \begin{eqnarray*} \exp(D(x,y)) &= &(e^x + e^y-e^{x+y})^{-1}\\ & = & e^{-x-y}(e^{-y}+e^{-x}-1)^{-1}. \end{eqnarray*} Substituting the latter equality into Equation~(\ref{eqnD}) leads to Equation~(\ref{eqnD2}), as desired. 
\end{proof} Corollary~\ref{maincor2} and Theorem~\ref{egf4} immediately give us \begin{cor} \label{enumerationcor} If $h(m,n,d)$ is the number of $d$-dimensional $\mathcal{H}$-strata in the prime spectrum of $\Oq$, then $$C(x,y,t) = \sum_{m,n,d} h(m,n,d)t^d\frac{x^m}{m!}\frac{y^n}{n!}$$ satisfies \begin{eqnarray} C(x,y,t) &= & (e^{-y}+e^{-x}-1)^\frac{-1-t}{2}(e^x+e^y-1)^\frac{1-t}{2}. \end{eqnarray}\qed \end{cor} Bell and Launois, together with Nguyen~\cite{bln} and Lutley~\cite{bll}, have given exact formulas for the number of $2\times n$ and $3\times n$ primitive $\mathcal{H}$-strata respectively. Using these formulas, asymptotic results are obtained and a general conjecture is proposed in~\cite{bldim} for the asymptotic proportion of $d$-dimensional $m\times n$ $\mathcal{H}$-strata for fixed $d$ and $m$. While it turns out that the conjecture is false in general, we are now in a position to give the correct asymptotic proportions (see Theorem~\ref{egf2cor}). The most important result towards this goal is given in the following theorem. \begin{thm} \label{egf2} If $m,n >0$ and $d\geq 0$ are integers, then for all integers $k\in\{1-m,\ldots ,m+1\}$ there exist rational numbers $c_k(m,d)$ such that the number $h(m,n,d)$ of $m\times n$ $d$-dimensional $\mathcal{H}$-strata in the prime spectrum of $\Oq$ is given by $$h(m,n,d) = \sum_{k=1-m}^{m+1} c_k(m,d)k^n.$$ Moreover, if $a(d)=[t^d](t+1)(t+3)\cdots(t+2m-1)$, then $$c_{m+1}(m,d) = 2^{-m}a(d),$$ where $[t^d]p(t)$ denotes the coefficient of $t^d$ in the polynomial $p(t)$. \end{thm} Before we begin the proof, we collect some elementary facts regarding the Stirling numbers of the second kind. These can be found in~\cite{stanley} for example. 
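These standard identities are also easy to sanity-check by machine; a quick sketch of ours, comparing the recurrence definition against the inclusion-exclusion formula and the falling-factorial expansion:

```python
# Sketch: numerically verify two standard Stirling-number identities,
#   {n brace k} = (1/k!) sum_j (-1)^(k-j) C(k,j) j^n
#   sum_k {n brace k} (x)_k = x^n,  where (x)_k is the falling factorial.
from math import comb, factorial

def stirling2(n, k):
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

def stirling2_sum(n, k):
    # inclusion-exclusion formula; the sum is always divisible by k!
    return sum((-1) ** (k - j) * comb(k, j) * j ** n
               for j in range(k + 1)) // factorial(k)

def falling(x, k):
    out = 1
    for i in range(k):
        out *= (x - i)
    return out

for n in range(8):
    for k in range(n + 1):
        assert stirling2(n, k) == stirling2_sum(n, k)
for n in range(1, 8):
    for x in range(1, 6):
        assert sum(stirling2(n, k) * falling(x, k)
                   for k in range(n + 1)) == x ** n
print("identities verified")
```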
\begin{prop} \label{snprop} If $n$ and $k$ are nonnegative integers, then the following hold: \begin{eqnarray} {n\brace k} &=& \frac{1}{k!}\sum_{j=0}^{k}(-1)^{k-j}\binom{k}{j}j^n;\label{stir1}\\ \sum_{k=0}^n{n\brace k}(x)_k &=& x^n, \textnormal{ where $(x)_k:=x(x-1)\cdots (x-k+1)$};\\ \frac{1}{k!}(e^x-1)^k &=& \sum_{m=k}^\infty {m\brace k}\frac{x^m}{m!}. \end{eqnarray} \end{prop} \begin{proof}[Proof of Theorem~\ref{egf2}] We have \begin{eqnarray*} (e^x+e^y-1)^\frac{1-t}{2} & = & (1+ e^x-1+e^y-1)^\frac{1-t}{2}\\ & = & \sum_{k\geq 0} {\frac{1}{2}(1-t)\choose k} (e^x-1+e^y-1)^k \\ & = & \sum_{k\geq 0} {\frac{1}{2}(1-t)\choose k}\sum_{\ell=0}^k {k\choose \ell}(e^x-1)^\ell (e^y-1)^{k-\ell} \\ & = & \sum_{k\geq 0} {\frac{1}{2}(1-t)\choose k}\sum_{\ell=0}^k k!\left(\sum_{m=\ell}^\infty{m\brace \ell}\frac{x^m}{m!}\right)\left(\sum_{n=k-\ell}^\infty{n\brace k-\ell}\frac{y^n}{n!}\right)\\ & = & \sum_{k\geq 0} \sum_{\ell=0}^k\sum_{m=\ell}^\infty\sum_{n=k-\ell}^\infty\left(\frac{1}{2}(1-t)\right)_k {m\brace \ell}{n\brace k-\ell}\frac{x^m}{m!}\frac{y^n}{n!}. \end{eqnarray*} Thus if we set $f(m,n)=\left[\frac{x^m}{m!}\frac{y^n}{n!}\right](e^x+e^y-1)^\frac{1-t}{2}$, then \begin{eqnarray*} f(m,n)&=& \sum_{\ell=0}^m\sum_{k=\ell}^{n+\ell}\left(\frac{1}{2}(1-t)\right)_k{m\brace \ell}{n\brace k-\ell}\\ &=& \sum_{\ell=0}^m\sum_{k=0}^{n}\left(\frac{1}{2}(1-t)\right)_{\ell+k}{m\brace \ell}{n\brace k}\\ &=& \sum_{\ell=0}^m\sum_{k=0}^{n}\left(\frac{1}{2}(1-t)\right)_{\ell}\left(\frac{1}{2}(1-t)-\ell\right)_{k}{m\brace \ell}{n\brace k}\\ & = & \sum_{\ell=0}^m\left(\frac{1}{2}(1-t)\right)_{\ell}{m\brace \ell}\sum_{k=0}^n \left(\frac{1}{2}(1-t)-\ell\right)_{k}{n\brace k}\\ & = & \sum_{\ell=0}^m\left(\frac{1}{2}(1-t)\right)_{\ell}{m\brace \ell}\left(\frac{1}{2}(1-t)-\ell\right)^n. 
\end{eqnarray*} Similarly, if we set $g(m,n)=\left[\frac{x^m}{m!}\frac{y^n}{n!}\right](e^{-x}+e^{-y}-1)^\frac{-1-t}{2},$ then $$g(m,n) = \sum_{\ell=0}^m\left(\frac{-1}{2}(1+t)\right)_{\ell}{m\brace \ell}(-1)^{m+n}\left(\frac{-1}{2}(1+t)-\ell\right)^n.$$ Hence \begin{eqnarray} &~& \left[\frac{x^m}{m!}\frac{y^n}{n!}\right]C(x,y,t) \nonumber \\ & = & \sum_{m^\prime=0}^m\sum_{n^\prime=0}^n {m\choose m^\prime}{n\choose n^\prime}\left[\sum_{\ell_1=0}^{m^\prime}\left(\frac{1}{2}(1-t)\right)_{\ell_1}{m^\prime\brace \ell_1}\left(\frac{1}{2}(1-t)-\ell_1\right)^{n^\prime}\right] \nonumber\\ && \cdot \left[ \sum_{\ell_2=0}^{m-m^\prime}\left(\frac{-1}{2}(1+t)\right)_{\ell_2}{m-m^\prime\brace \ell_2}(-1)^{m-m^\prime+n-n^\prime} \cdot\left(\frac{-1}{2}(1+t)-\ell_2\right)^{n-n^\prime}\right] \nonumber\\ &=& \sum_{m^\prime=0}^m\sum_{\ell_1=0}^{m^\prime}\sum_{\ell_2=0}^{m-m^\prime}{m\choose m^\prime}{m^\prime\brace \ell_1}{m-m^\prime\brace \ell_2}(-1)^{m-m^\prime}\left(\frac{1}{2}(1-t)\right)_{\ell_1}\nonumber\\ && \cdot \left(\frac{-1}{2}(1+t)\right)_{\ell_2}\left[\sum_{n^\prime=0}^n {n\choose n^\prime} \left(\frac{1}{2}(1-t)-\ell_1\right)^{n^\prime} \cdot\left(\frac{1}{2}(1+t)+\ell_2\right)^{n-n^\prime}\right]\nonumber\\ & =&\sum_{m^\prime=0}^m\sum_{\ell_1=0}^{m^\prime}\sum_{\ell_2=0}^{m-m^\prime}{m\choose m^\prime}{m^\prime\brace \ell_1}{m-m^\prime\brace \ell_2}(-1)^{m-m^\prime} \cdot \left(\frac{1}{2}(1-t)\right)_{\ell_1}\nonumber\\ & & \cdot \left(\frac{-1}{2}(1+t)\right)_{\ell_2}\cdot\left(\frac{1}{2}(1-t)-\ell_1+\frac{1}{2}(1+t)+\ell_2\right)^n\nonumber\\ & = &\sum_{m^\prime=0}^m\sum_{\ell_1=0}^{m^\prime}\sum_{\ell_2=0}^{m-m^\prime}{m\choose m^\prime}{m^\prime\brace \ell_1}{m-m^\prime\brace \ell_2}(-1)^{m-m^\prime} \left(\frac{1}{2}(1-t)\right)_{\ell_1}\left(\frac{-1}{2}(1+t)\right)_{\ell_2} \nonumber\\ & & \cdot (1-\ell_1+\ell_2)^n.\label{eqnH} \end{eqnarray} Note that within the indices of summation we have the bounds $0\leq \ell_1\leq m$ and $0\leq \ell_2\leq m$ and so 
$1-m\leq 1-\ell_1+\ell_2\leq 1+m$. Therefore, since $h(m,n,d)=[t^d]\left[\frac{x^m}{m!}\frac{y^n}{n!}\right]C(x,y,t)$, the first conclusion in the theorem statement follows from Equation (\ref{eqnH}). Now notice that $1-\ell_1+\ell_2 = 1+m$ if and only if $m^\prime=0$ (and thus $\ell_1=0$) and $\ell_2=m$. Therefore \begin{eqnarray*} c_{m+1}(m,d) & = & [t^d](-1)^m\left(\frac{-1}{2}(1+t)\right)_{m} \\ & = & (-1)^m \left(\frac{-1}{2}\right)^m [t^d](t+1)(t+3)\cdots (t+2m-1)\\ & = & 2^{-m}[t^d](t+1)(t+3)\cdots (t+2m-1). \end{eqnarray*} \end{proof} The formula given in Equation (\ref{eqnH}) in the proof of Theorem~\ref{egf2} is amenable to computation in Maple. To show its utility, we give some examples. Note that $h(2,n,0)$ appears in~\cite{bln}, while $h(3,n,0)$ appears in~\cite{bll}. \vspace{0.6cm} \begin{tabular}{l|l} $(m,d)$ & $h(m,n,d)$ \\ \hline \\ $(2,0)$ & $\frac{3}{4}3^n-\frac{1}{2}2^n+\frac{1}{2} -\frac{1}{4}(-1)^n$\\ \\ \hline \\ $(2,1) $& $3^n -\frac{1}{2}2^n$\\ \\ \hline \\ $(2,2) $& $\frac{1}{4}3^n -\frac{1}{2}+\frac{1}{4}(-1)^n$\\ \\ \hline \\ $(3,0) $&${\frac {15}{8}}\,{4}^{n}-\frac{9}{4}\,{3}^{n}+{\frac {13}{8}}\,{2}^{n}-\frac{3}{4}\, \left( -1 \right) ^{n}+\frac{3}{8}\, \left( -2 \right) ^{n}$\\ \\ \hline \\ $(3,3) $&$\frac{1}{8}4^n -\frac{3}{8}2^n-\frac{1}{8}(-2)^n$\\ \\ \hline \\ $(4,0) $&${\frac {105}{16}}\,{5}^{n}-{\frac {45}{4}}\,{4}^{n}+9\,{3}^{n}-\frac{11}{4}\,{2}^{n}-\frac{5}{8}-\left( -1 \right)^{n}+\frac{9}{4}\,\left( -2 \right)^{n}-{\frac {15}{16}}\,\left( -3 \right)^{n}$\\ \\ \hline \\ $(4,4) $& $\frac{1}{16}5^n-\frac{1}{4}3^n+\frac{3}{8}-\frac{1}{4}(-1)^n+\frac{1}{16}(-3)^n$\\ \\ \hline \\ $(5,0) $&${\frac {945}{32}}{6}^{n}-{\frac {525}{8}}{5}^{n}+{\frac {2025}{32}}\,{4}^{n}-30\,{3}^{n}+{\frac {23}{16}}{2}^{n}+{\frac {225}{32}}\left( -2 \right) ^{n}-{\frac {75}{8}}\left( -3 \right) ^{n}+{\frac {105}{32}}\left( -4 \right) ^{n}$\\ \end{tabular} \vspace{0.6cm} \begin{thm} \label{egf2cor} Fix a positive integer $m$ and let $a(d)=[t^d](t+1)(t+3)\cdots (t+2m-1)$.
Then the proportion of $d$-dimensional $\mathcal{H}$-strata in $\Oq$ tends to $a(d)/(m!2^m)$ as $n\rightarrow\infty$. \end{thm} \begin{proof} Note that for fixed $m$, it is easily seen from Equation (\ref{stir1}) that the Stirling number of the second kind satisfies ${n\brace m} \sim m^n/m!$ as $n\rightarrow \infty$. From this we deduce that for $k<m$, ${n\brace k}/{n\brace m} \rightarrow 0$ as $n\rightarrow \infty$. Now recall that, for $n\geq m$, the total number of $\mathcal{H}$-primes is the poly-Bernoulli number \cite[Corollary 1.5]{launois1} and so we deduce from \cite[Theorem 2]{ak} that the total number of $\mathcal{H}$-primes in $m\times n$ quantum matrices is equal to $$B_n^{(-m)} = \sum_{k=0}^m (k!)^2{n+1\brace k+1}{m+1\brace k+1}.$$ Thus as $n\rightarrow \infty$, we have \begin{eqnarray*} B_n^{(-m)} &\sim & (m!)^2{n+1\brace m+1}\\ & \sim & (m!)(m+1)^n. \end{eqnarray*} Therefore, by Theorem~\ref{egf2}, we have \begin{eqnarray*} \lim_{n\rightarrow \infty} \frac{\textnormal{number of $d$-dimensional $\mathcal{H}$-strata in $\Oq$}}{\textnormal{total number of $\mathcal{H}$-strata in $\Oq$}} &=& \frac{a(d)}{m!2^m}. \end{eqnarray*} \end{proof} A more sophisticated asymptotic analysis would be of interest. In particular, we pose the following question. For a fixed positive integer $d$, does $$\lim_{n\rightarrow\infty} \frac{\textnormal{ number of $d$-dimensional $\mathcal{H}$-strata in $\mathcal{O}_q(M_{n,n}(\mathbb{K}))$}}{\textnormal{total number of $\mathcal{H}$-strata in $\mathcal{O}_q(M_{n,n}(\mathbb{K}))$}}$$ exist? If so, what is its value?
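The computations in this section are straightforward to reproduce without Maple. The following sketch (ours, with exact rational polynomial arithmetic in $t$) evaluates Equation~(\ref{eqnH}) for $h(m,n,d)$, checks it against the table above, checks that $\sum_d h(m,n,d)$ equals the poly-Bernoulli number $B_n^{(-m)}$, and verifies that the proportions $a(d)/(m!\,2^m)$ of Theorem~\ref{egf2cor} sum to $1$ (since the product evaluated at $t=1$ is $2\cdot 4\cdots 2m = m!\,2^m$):

```python
# Sketch: evaluate Eq. (eqnH) for h(m,n,d) exactly and cross-check it
# against the table entries and the poly-Bernoulli count of H-primes.
from fractions import Fraction
from math import comb, factorial

def stirling2(n, k):
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return stirling2(n - 1, k - 1) + k * stirling2(n - 1, k)

def polymul(p, q):
    # multiply polynomials in t given as coefficient lists
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def falling_poly(c0, c1, ell):
    # falling factorial (c0 + c1*t)_ell as a polynomial in t
    out = [Fraction(1)]
    for j in range(ell):
        out = polymul(out, [c0 - j, c1])
    return out

def h(m, n, d):
    # Eq. (eqnH): h(m,n,d) = [t^d] of a triple sum over m', l1, l2
    total, half = Fraction(0), Fraction(1, 2)
    for mp in range(m + 1):
        for l1 in range(mp + 1):
            for l2 in range(m - mp + 1):
                poly = polymul(falling_poly(half, -half, l1),    # ((1-t)/2)_l1
                               falling_poly(-half, -half, l2))   # (-(1+t)/2)_l2
                if d < len(poly):
                    total += (comb(m, mp) * stirling2(mp, l1)
                              * stirling2(m - mp, l2) * (-1) ** (m - mp)
                              * poly[d] * (1 - l1 + l2) ** n)
    return total

def poly_bernoulli(n, m):
    # B_n^(-m): total number of H-primes in m x n quantum matrices
    return sum(factorial(k) ** 2 * stirling2(n + 1, k + 1)
               * stirling2(m + 1, k + 1) for k in range(min(m, n) + 1))

print(h(2, 5, 0))  # table row (2,0) at n=5: (3/4)3^5-(1/2)2^5+1/2+1/4 -> 167
print(sum(h(3, 4, d) for d in range(4)), poly_bernoulli(4, 3))  # -> 1066 1066

# Theorem egf2cor: sum_d a(d) = (t+1)(t+3)...(t+2m-1) at t=1 = m! 2^m
m_chk, a_poly = 4, [Fraction(1)]
for j in range(m_chk):
    a_poly = polymul(a_poly, [Fraction(2 * j + 1), Fraction(1)])
print(sum(a_poly) == factorial(m_chk) * 2 ** m_chk)  # -> True
```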
\section{Introduction} As discussed in a variety of papers, unresolved radio point sources are an important foreground in temperature anisotropy maps of the cosmic microwave background (CMB) \cite{zotti,tof1,wright,tegmark}. With Wilkinson Microwave Anisotropy Probe (WMAP) data \cite{bennett}, the differences between the CMB power spectra determined at the various frequency channels, together with the cross power spectra between channels, have allowed the unresolved point source contamination to be constrained with an amplitude $A_{\rm ps}=(0.011 \pm 0.001) \mu$K$^2$-sr \cite{nolta} for the foreground power spectrum, when scaled to the Q-band. In the WMAP analysis, this point source amplitude is taken to be a constant in $C_l$, similar to the case of a shot-noise type power spectrum for unresolved point sources. This shot-noise, with an appropriate scaling in frequency, is then removed from each of the power spectra when constructing the final WMAP temperature anisotropy power spectrum \cite{hinshaw}. While the WMAP estimate on the point source correction is consistent with a point source power spectrum dominated by the shot-noise, this estimate is dominated by measurements relative to the Q-band \cite{nolta}. Since the WMAP temperature anisotropy power spectrum is based on V- and W-band data \cite{dunkley}, and the point source correction has a larger uncertainty in the V- and W-bands \cite{nolta}, it is unclear if a simple shot-noise correction fully describes point sources in the WMAP temperature anisotropy power spectrum. To account for uncertainty in the amplitude of point-source shot-noise in parameter estimates, the WMAP likelihood contains an additional marginalization of the uncertainty of $A_{\rm ps}$, but the best-fit amplitude of point-sources remains fixed at the a priori determined value \cite{hinshaw}.
We also note that alternative approaches have been considered to estimate the amplitude of point-sources \cite{Huffenberger1,Huffenberger2}, though these works also concentrated on establishing the shot-noise correction. Beyond the shot-noise, unresolved radio point sources are likely to have a clustered distribution on the sky as they are expected to be a biased tracer of the large-scale structure. Thus, the angular power spectrum of sources contains not just a shot-noise but also a clustering piece determined by the dark matter power spectrum, point source bias, and the redshift distribution. Existing calculations suggest that the shot-noise part of the power spectrum from bright, rare sources dominates clustering at low radio frequencies, especially when the flux threshold for point source removal is at the level of $\sim$ 1 Jy \cite{GonzalezNuevo:2004fj,tof1}. Thus, the assumption of a shot-noise point source contribution to the angular power spectrum of temperature anisotropies is likely to be adequate for low-frequency bands of WMAP such as the Q-band, but may not be appropriate at high frequencies, such as the W band, whose data are used in the temperature power spectrum. The approach using a shot-noise spectrum with an uncertainty that is marginalized over in the WMAP likelihood is a bit different from the approach advocated by the WMAP team to account for another foreground in CMB data involving the Sunyaev-Zel'dovich (SZ) effect from galaxy clusters. There, the angular power spectrum is estimated based on a model for the cluster distribution and gas properties \cite{Komatsu}, with an overall uncertainty in the amplitude of the SZ power spectrum captured by a free parameter which is then freely varied as a nuisance parameter when estimating best-fit cosmological parameter values and their uncertainties using a Markov-Chain Monte-Carlo (MCMC) code \cite{lewis,spergel,dunkley}.
Since the clustering component of the angular power spectrum of unresolved radio sources may be important, it could be that simply including point-sources as a shot-noise correction in the V/W-band WMAP temperature anisotropy power spectrum results in biased estimates of cosmological parameters, especially for parameters like the spectral index of density perturbations, which has been discussed previously in the context of uncertainties related to point-source shot-noise amplitude \cite{Huffenberger1}. Moreover, current CMB analyses make use of the combination of datasets such as WMAP and ACBAR which have different treatments related to how point sources are accounted for in the parameter fits. At the ACBAR frequency of 150 GHz \cite{reichardt}, the clustering of sources may need to be included properly, especially given that the high angular resolution of ACBAR also allows removal of sources down to a lower flux density level, where the shot-noise associated with rare, bright sources may be subdominant. The clustering of point sources could also account for some fraction of the excess arcminute-scale anisotropies detected by ACBAR \cite{tof2,Cooray3}. Given the lack of adequate details related to the exact clustering power spectrum of radio sources in datasets such as WMAP and ACBAR, we make a general estimate of the angular power spectrum of radio point-sources and treat the overall amplitude of clustering as an additional nuisance parameter to be marginalized over when estimating cosmological parameters. For this, we make use of number counts at 95 GHz \cite{sadler} and assume the redshift distribution of high-frequency radio sources follows the same distribution as estimated for NVSS \cite{condon} sources at low frequencies \cite{Ho}.
As some of these assumptions are likely to be invalid to some extent, we do not fix the clustering spectrum to our model but allow the overall amplitude to vary and marginalize over that uncertainty when constraining cosmological parameters. Thus, the uncertainty in our predictions related to clustering of sources is unlikely to dominate, and we confirm this by noting that the differences to best-fit parameter values with point source clustering included are not significant, especially for the case of combined WMAP and ACBAR data. Moreover, our estimate of clustering is consistent with the allowed level of point source correction in the V- and W-band combination of WMAP, as measured in terms of differences in the power spectra \cite{nolta}. While the differences in estimated cosmological parameters are small, a proper estimate of cosmological parameters with clustering included is useful for a reliable statistical analysis of important scientific results, such as the extent to which the spectral index of density perturbations $n_s$ is different from the Harrison-Zel'dovich value of unity. While the impact on current datasets is small, for future data such as Planck that probe down to smaller scales over a wide range of frequencies, we suggest that it will be necessary to account for clustering of point sources when model fitting cosmological parameters.
\section{Clustering of Radio Point Sources} The angular power spectrum of radio sources, in units of $(\mu K)^2$, generally contains two components \cite{Scott,Oh:2003sa} \begin{equation} C_l = \left(\frac{\partial B_\nu}{\partial T}\right)^{-2}\left[\int_0^{S_{\rm cut}} S^2 \frac{dN}{dS} \, dS + \bar{I}^2 w_l \right]\, , \end{equation} where $w_l$ is the Legendre transform of the angular correlation function $w(\theta)$ of unresolved radio point sources, $\bar{I}$ is the average intensity (in flux units) produced by these sources \begin{equation} \bar{I} = \int_0^{S_{\rm cut}} S \frac{dN}{dS} \, dS \, , \end{equation} and the conversion factor from flux to antenna temperature using the CMB black-body spectrum, $B_\nu(T=2.726\,{\rm K})$, is \begin{equation} \frac{\partial B_\nu}{\partial T}=\frac{2k_B}{c^2} \left(\frac{k_BT}{h}\right)^2 \frac{x^4 e^x}{(e^x-1)^2} \, , \end{equation} where $x\equiv h\nu/k_BT=\nu/56.84$ GHz is the dimensionless frequency. This conversion can be simplified as $\partial B_\nu/\partial T= [(99.27\, {\rm Jy\, sr^{-1}})/\mu {\rm K}]\, x^4 e^x/(e^x-1)^2$. Since we know little about the clustering of radio sources below the point source detection limits in the V- and W-bands of WMAP and at 150 GHz for ACBAR, we make use of a simplified set of assumptions to estimate the source clustering. In general, the angular power spectrum of the sources can be written with the halo model \cite{Cooray2} such that $w_l$ is \begin{equation} w_l^{\rm lin} = \int dz\, \frac{dr}{dz} a^2(z) \frac{n^2(z)}{d_A^2(z)} P_{ss}\left(k=\frac{l}{d_A},z\right) \, , \end{equation} where $P_{ss}(k,z)$ is the three-dimensional power spectrum of radio sources as a function of redshift.
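The flux-to-temperature conversion, and the shot-noise integral above, can be illustrated numerically. In the sketch below (ours), the physical constants are standard CODATA values, and the power-law count model used for the shot noise is an assumed toy, not the measured 95 GHz counts:

```python
# Sketch: dB_nu/dT for a T = 2.726 K blackbody in Jy sr^-1 per microK,
# and a closed-form toy shot-noise integral.  The count model parameters
# (n0, alpha, S0) are assumed illustrations, not values from the text.
from math import exp

h_pl = 6.62607015e-34    # Planck constant [J s]
k_B  = 1.380649e-23      # Boltzmann constant [J/K]
c    = 2.99792458e8      # speed of light [m/s]
T    = 2.726             # CMB temperature [K]

# Prefactor 2 k_B^3 T^2 / (c^2 h^2), converted to Jy sr^-1 muK^-1
# (1 Jy = 1e-26 W m^-2 Hz^-1; the 1e6 converts per-K to per-muK).
PREFAC = 2.0 * k_B**3 * T**2 / (c**2 * h_pl**2) / 1e-26 / 1e6

def dB_dT(nu_ghz):
    """dB_nu/dT in Jy sr^-1 muK^-1 at frequency nu [GHz]."""
    x = nu_ghz / 56.80   # h nu / k_B T; ~56.8 GHz for T = 2.726 K
    return PREFAC * x**4 * exp(x) / (exp(x) - 1.0)**2

def shot_noise(S_cut, n0=10.0, alpha=2.15, S0=1.0):
    """Closed form of int_0^{S_cut} S^2 (dN/dS) dS for the toy model
    dN/dS = n0 (S/S0)^(-alpha): n0 S0^alpha S_cut^(3-alpha)/(3-alpha)."""
    return n0 * S0**alpha * S_cut**(3.0 - alpha) / (3.0 - alpha)

print(round(PREFAC, 2))       # ~99.1; close to the 99.27 quoted, which
                              # corresponds to a slightly different T_CMB
print(round(dB_dT(94.0), 1))  # W-band value, for illustration
```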
In the halo model, source clustering at large angular scales can be described with the linear matter power spectrum scaled by a constant and scale-free bias factor: \begin{equation} P_{ss}(k) \approx b^2_s P^{\rm lin}(k) \, , \end{equation} where the source bias factor, when combined with an estimate of the number density of sources, provides some information on the halo mass scale associated with those sources through the luminosity- or flux-averaged halo occupation number $\langle N(M,z)\rangle$, halo bias $b_{\rm halo}(M,z)$, and the halo mass function $dn/dM$ \cite{Cooray2}: \begin{eqnarray} b_s = \frac{1}{\bar{n}_g}\int dM\, \frac{dn}{dM}(z)\, b_{\rm halo}(M,z) \langle N(M,z) \rangle\, . \end{eqnarray} At small angular scales, clustering traces the non-linear power spectrum generated by the so-called 1-halo term. Separating the occupation number into central and satellite radio sources, $\langle N(M)\rangle= \langle N_{\rm s} \rangle + \langle N_{\rm c} \rangle$, the 1-halo power spectrum is \begin{eqnarray} && P^{1h}(k) = \\ && \int dM\; n(M)\; \frac{2 \langle N_{\rm s} \rangle \langle N_{\rm c} \rangle u(k|M) + \langle N_{\rm s} \rangle^2 u^2(k|M)}{\bar{n}_g^2} \, . \nonumber \end{eqnarray} Here, $u(k|M)$ is the normalized density profile in Fourier space. At deeply non-linear scales, however, the shot-noise term is expected to dominate the clustering spectrum, but the scale of this transition depends on where the 1-halo term falls below the shot-noise amplitude. The above form of the 1-halo term allows us to easily understand a simple behavior. If radio sources occupy dark matter halos such that there is only one source per halo, regardless of the halo mass, then with $N_{\rm s}=0$, $P^{1h}=0$. Thus, the 1-halo term only exists to the extent that more than one radio source occupies a halo.
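This behavior of the 1-halo term can be demonstrated with a toy calculation (entirely illustrative: the lognormal "mass function" below is an arbitrary assumption, and we evaluate the large-scale limit $u(k|M)\rightarrow 1$):

```python
# Toy sketch of the 1-halo term in the large-scale limit u(k|M) -> 1:
#   P1h = int dM n(M) [2 <Ns><Nc> + <Ns>^2] / nbar^2.
# The "mass function" toy_n is invented; the point is only that P1h
# vanishes when every halo hosts a single (central) source.
from math import exp

def toy_n(logM):
    # toy lognormal-ish halo abundance per dlogM (arbitrary normalization)
    return exp(-0.5 * ((logM - 13.0) / 1.0) ** 2)

def N_cen(logM):
    return 1.0 if logM >= 12.5 else 0.0

def N_sat(logM, beta=0.85, on=True):
    # power-law satellites above 10^12.5 Msun, mimicking the assumed
    # occupation in the text; 'on=False' switches satellites off
    if not on or logM < 12.5:
        return 0.0
    return 10.0 ** (beta * (logM - 12.5))   # (M / 10^12.5)^beta

def p1h(sat_on=True):
    lo, hi, steps = 11.0, 15.5, 450
    d = (hi - lo) / steps
    nbar = p = 0.0
    for i in range(steps):
        lm = lo + (i + 0.5) * d
        n, Nc, Ns = toy_n(lm), N_cen(lm), N_sat(lm, on=sat_on)
        nbar += n * (Nc + Ns) * d
        p += n * (2.0 * Ns * Nc + Ns * Ns) * d
    return p / nbar**2

print(p1h(sat_on=False))      # -> 0.0: one central per halo, no 1-halo power
print(p1h(sat_on=True) > 0)   # -> True
```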
While there is limited information on the halo occupation properties of radio sources at the frequencies of interest, observations at frequencies around 30 GHz suggest that multiple radio sources are found in large dark matter halos such as groups and clusters, though at 30 GHz, the central galaxy tends to be the dominant bright source in most galaxy clusters \cite{Cooray4}. Given the lack of detailed knowledge on the clustering of radio sources or even ingredients such as luminosity functions or exact redshift distributions that can be used to generate a reliable halo model for the radio source population using approaches such as the conditional-luminosity functions that are used to describe clustering of optical or IR and far-IR galaxies \cite{Amblard}, we make several approximations. First we note that at large angular scales, $C_l \approx \bar{I}^2\langle b_{s}^2\rangle w_l^{\rm lin}$ \cite{Scott}. To calculate the angular power spectrum, we assume that unresolved sources trace the same large-scale structure as low-frequency NVSS sources and estimate $\bar{I}$ by integrating over the number counts at 95 GHz as estimated by \cite{sadler}. We use 95 GHz as a first estimate here since it is close to both WMAP channels on one end and ACBAR at the other end. We make use of the redshift distribution estimates for NVSS to calculate clustering at high frequencies \cite{Ho}. In addition to linear clustering, we also include a non-linear correction to the angular clustering using a 1-halo model that assumes a simple power-law occupation number for satellite galaxies with $N_s(M) \sim M^\beta$ with $\beta= 0.85$ when $M > 10^{12.5}$ M$_{\odot}$. The typical bias factor estimate for sources from this occupation number is about $1$ at $z \sim 1$. Before calculating anisotropies for CMB, we verified that our prediction for source clustering, when applied for low-frequency sources, generally agrees with measurements from the literature \cite{Blake,Overzier}. 
In Fig.~1, we show the angular power spectrum of radio sources as fluctuations in the CMB temperature $C_l$, and a comparison to the difference in power spectra of the V- and W-bands of WMAP \cite{nolta}. For this comparison, we follow the same procedure as the WMAP analysis \cite{nolta} and scale the power spectrum to the Q-band (40.7 GHz), estimating $A_{\rm ps}=r(Q)^{-2}C_l^{\rm PS}(Q)$ with $r(\nu)=(e^x-1)^2/(x^2e^x)$ and a numerical value for the Q-band of $r^2(Q)=1.089$. In Fig.~1, we have converted our estimate of $C_l$'s from 95 GHz counts to the Q-band using the average spectral index of bright resolved WMAP sources, $\alpha \approx -0.09$ \cite{wright}, with the scaling $\nu^{\alpha-2}r(\nu)$ for temperature units instead of intensity. In addition to clustering we also include the V- and W-band combined estimate of shot-noise in WMAP data with a value of $0.007$ $\mu$K$^2$-sr, when scaled to the Q-band. As shown in Fig.~1, the sum of this shot-noise and the clustering we estimate is consistent with the allowed amplitude of point source correction from Nolta et al. \cite{nolta}. Though there are larger uncertainties in the point source estimates for the V- and W-bands, we cannot simply rule out that the unresolved sources only contribute with a shot-noise type power spectrum. If the Q-band is also included, as shown in Ref.~\cite{nolta}, the differences are more consistent with a shot-noise power spectrum, as expected at low frequencies since the Q-band dominates such an estimate. Since the final WMAP power spectrum is composed of V- and W-band data, we find that there is some motivation to include source clustering when estimating cosmological parameters. While the clustering amplitude is unconstrained, this is of little concern to us in the cosmological parameter estimation since we will not fix the clustering of radio sources to a pre-determined model, but parameterize the overall amplitude of clustering with a free parameter.
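The antenna-to-thermodynamic factor $r(\nu)$ and the frequency extrapolation are easy to reproduce; a sketch of ours (the band centers are the nominal values quoted in the text, and applying the $\nu^{\alpha-2}r(\nu)$ law squared to $C_l$, since that law governs the temperature amplitude, is our interpretation):

```python
# Sketch: r(nu) = (e^x - 1)^2 / (x^2 e^x) with x = nu/56.8 (nu in GHz),
# and a helper that moves point-source C_l power between bands with the
# nu^(alpha-2) r(nu) scaling (applied squared, since C_l ~ amplitude^2).
from math import exp

def r(nu_ghz, nu0=56.80):   # nu0 = k_B T / h in GHz for T ~ 2.726 K
    x = nu_ghz / nu0
    return (exp(x) - 1.0) ** 2 / (x * x * exp(x))

def scale_cl(cl, nu_from, nu_to, alpha=-0.09):
    amp = (nu_to / nu_from) ** (alpha - 2.0) * r(nu_to) / r(nu_from)
    return cl * amp ** 2

print(round(r(40.7), 3), round(r(40.7) ** 2, 3))  # -> 1.044 1.089
```

Note that at the Q-band center (40.7 GHz) the formula gives $r \approx 1.044$, so $r^2 \approx 1.089$.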
Thus, when model fitting the data we parameterize the clustering part as $P_{\rm WMAP}C_l$ and consider $P_{\rm WMAP}$ as a nuisance parameter that captures all uncertainties in our calculation, which include the spectral index used to scale the 95 GHz counts to the WMAP bands, the redshift distribution of the sources, and the parameters of the halo model, among others. When quoting cosmological parameter measurements, we marginalize the likelihood over $P_{\rm WMAP}$. This approach is consistent with how the WMAP team included the effect of the SZ angular power spectrum in parameter estimation with a parameter $A_{\rm SZ}$ that is freely varied. Note that we only include a model for the source clustering since the CMB power spectrum released by the WMAP team already has a shot-noise removed from the data for point sources when combining V- and W-band data into a final power spectrum. In addition to clustering of sources as relevant for WMAP, we also include clustering of point sources as related to ACBAR data \cite{reichardt}. In parameter estimation, unlike the WMAP team that fitted and removed a constant shot-noise spectrum for unresolved radio sources when estimating an optimal power spectrum from data, the ACBAR team included the shot-noise of radio sources as an extra component in their model fits. Thus, while we only include the clustering spectrum of radio sources for WMAP, for ACBAR data, we include both a clustering spectrum and a shot-noise for radio sources. The shot-noise was taken to be consistent with estimates made by the ACBAR team, and the clustering component was obtained by simply frequency-scaling the same WMAP spectrum to 150 GHz with the same scaling as the one involved with the shot-noise part. \begin{figure}[!t] \begin{center} \includegraphics[scale=0.8,angle=-90,width=8cm]{fig1.eps} \end{center} \caption{The amplitude of point source correction to the V- and W-band WMAP angular power spectrum. The data points show the measurement from the WMAP team scaled to the Q-band.
We ignore the corrections with Q-band as the final power spectrum from the WMAP team uses only V- and W-band data. The dotted line shows our model for the point source clustering (scaled to Q-band) with $P_{\rm WMAP}=1$, while the solid line shows the total spectrum arising from point sources, with clustering and the shot-noise combined. The shot-noise is taken to be the same as estimated by the WMAP team for V- and W-band data, with a value $A_{\rm PS}=0.007$ $\mu$K$^2$-sr.} \end{figure} \section{Cosmological Parameters with Clustered Point Sources} The method we use to estimate cosmological parameters is based on the publicly available Markov Chain Monte Carlo package CosmoMC \cite{lewis}, with a convergence diagnostic based on the Gelman and Rubin statistic. We used WMAP 5-year data \cite{Komatsu5yr} (both temperature and temperature-polarization cross-correlation) alone and in combination with ACBAR data \cite{reichardt}. We only account for point sources in temperature anisotropies. Since WMAP polarization data do not probe the small angular scales where polarized point sources contribute, ignoring point sources in polarization is a safe assumption. In our estimates we make use of the flat $\Lambda$CDM cosmological model with 6 cosmological parameters: baryon density $\Omega_bh^2$, dark matter density $\Omega_ch^2$, reionization optical depth $\tau$, ratio of the sound horizon to the angular diameter distance at decoupling measured by $\theta$, amplitude of the curvature perturbation $A_s$ (with a flat prior on $\log(A_s)$) and spectral index $n_s$; these last two parameters are both defined at the pivot scale $k_0=0.002$ Mpc$^{-1}$ as in \cite{dunkley}. To this set we add $A_{\rm SZ}$, the amplitude of the SZ contribution, and two parameters $P_{\rm WMAP}$ and $P_{\rm ACBAR}$ for the amplitude of point-source clustering.
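The role of a freely varying clustering amplitude can be illustrated with a toy two-template likelihood. The band-power shapes, the per-multipole error, and the grid marginalization below are illustrative assumptions, not the CosmoMC machinery actually used:

```python
import numpy as np

# Toy band powers: a smooth "CMB" shape plus a point-source clustering
# template rising toward small scales (both shapes are illustrative).
ell = np.arange(2, 1000)
cmb = 1.0 / (1.0 + (ell / 300.0) ** 2)
ps = 0.2 * (ell / 1000.0) ** 2
sigma = 0.1                          # assumed per-multipole error

s_true, p_true = 1.0, 0.5
data = s_true * cmb + p_true * ps    # noise-free mock, for clarity

# chi^2 on an (s, p) grid via sufficient statistics.
cc, pp, cp = (cmb * cmb).sum(), (ps * ps).sum(), (cmb * ps).sum()
dc, dp, dd = (data * cmb).sum(), (data * ps).sum(), (data * data).sum()
s_grid = np.linspace(0.9, 1.1, 201)
p_grid = np.linspace(0.0, 1.0, 101)  # flat prior 0 < P < 1
S, P = np.meshgrid(s_grid, p_grid, indexing="ij")
chi2 = (dd + S**2 * cc + P**2 * pp + 2 * S * P * cp
        - 2 * S * dc - 2 * P * dp) / sigma**2
like = np.exp(-0.5 * (chi2 - chi2.min()))

def mean_std(x, w):
    """Weighted posterior mean and standard deviation on a grid."""
    w = w / w.sum()
    m = (x * w).sum()
    return m, np.sqrt((((x - m) ** 2) * w).sum())

# Marginalizing over the nuisance amplitude p inflates the error on the
# "cosmological" amplitude s relative to fixing p at its true value.
m_marg, e_marg = mean_std(s_grid, like.sum(axis=1))
m_cond, e_cond = mean_std(s_grid, like[:, np.argmin(np.abs(p_grid - p_true))])
```

The marginalized posterior remains centered on the true amplitude but is broader than the fixed-nuisance posterior, which is the qualitative effect of varying $P_{\rm WMAP}$ freely.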
To study the impact of point sources on running of the spectral index and estimates of the tensor-to-scalar ratio, we also consider additional runs where these quantities are varied. \subsection{WMAP and ACBAR data} When estimating parameters with existing WMAP and ACBAR data, with point sources and SZ included, the total CMB anisotropy spectrum is \begin{equation} C_l^{\rm tot} = C_l^{\rm CMB} + C_l^{\rm PS}+C_l^{\rm SZ} \, . \end{equation} The point-source angular power spectrum contains two parts as discussed: $C_l^{\rm PS}=C_l^{\rm sn}+C_l^{\rm c}$, but since the WMAP team removed the shot-noise when combining data to a single estimate of the power spectrum, we take $C_l^{\rm WMAP}=C_l^{\rm CMB}+C_l^{\rm c}+C_l^{\rm SZ}$. There is a slight complication here since $C_l^{\rm c}$ is a combination of the clustering in V- and W-bands, and we make the simple assumption here that the clustering of sources between these two bands can be scaled by a constant while the shape remains the same. The uncertainty in the variation of the point source clustering with frequency, to some extent, is not expected to be a significant issue since we allow the overall amplitude to vary with $P_{\rm WMAP}$. Note that the same complication exists for $C_l^{\rm SZ}$, but in this case the frequency dependence is known exactly. \begin{figure}[!t] \begin{center} \includegraphics[scale=0.8,angle=-90,width=8cm]{./fig2.eps} \end{center} \caption{Best-fit CMB angular power spectrum for WMAP 5-year data and ACBAR with a comparison to measurements. We also show the input power spectra of point source clustering for WMAP (middle line) and ACBAR data (bottom line) with $P_{\rm WMAP}=P_{\rm ACBAR}=1$ (see text for details). 
In the case of WMAP, we show the clustering part $C_l^c$ only, as the shot-noise is removed from the data, while for ACBAR we show the total.} \end{figure} \begin{figure*} \begin{center} \begin{tabular}{ccc} \resizebox{50mm}{!}{\includegraphics{./fig3.eps}} & \resizebox{50mm}{!}{\includegraphics{./fig4.eps}} & \resizebox{50mm}{!}{\includegraphics{./fig5.eps}} \\ \resizebox{50mm}{!}{\includegraphics{./fig6.eps}} & \resizebox{50mm}{!}{\includegraphics{./fig7.eps}} & \resizebox{50mm}{!}{\includegraphics{./fig8.eps}} \\ \resizebox{50mm}{!}{\includegraphics{./fig9.eps}} & \resizebox{50mm}{!}{\includegraphics{./fig10.eps}} & \resizebox{50mm}{!}{\includegraphics{./fig11.eps}} \\ \end{tabular} \caption{Marginalized parameter constraints for WMAP without clustering (red line), WMAP with point source clustering (green line), and WMAP+ACBAR with a point source clustering signal for both (blue line). From left to right, the panels show the constraints on the baryon density, cold dark matter density, the ratio of the sound horizon to the angular diameter distance at decoupling (top panels), optical depth, spectral index, amplitude of curvature perturbations (middle panels), and SZ normalization, WMAP point source normalization, and ACBAR point source normalization (lower panels), respectively. The spectral index and the amplitude of perturbations are measured at $k=0.002$ Mpc$^{-1}$.} \label{test4} \end{center} \end{figure*} \begin{table*} \caption{Mean values and marginalized $68 \%$ c.l.
limits for several cosmological parameters from WMAP and WMAP+ACBAR, with and without clustering of point sources (see text for details).} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Parameter & WMAP 5-yr & WMAP 5-yr & WMAP 5-yr & WMAP+ACBAR & WMAP+ACBAR\\ & & with clustering & with clustering & & with clustering \\ & & $C_l^{c}\times P_{\rm WMAP}$ & $C_l^{c}\times(0<P_{\rm WMAP} < 1)$& &$C_l^{c}\times(0<P_{\rm WMAP} < 1)$ \\ & & & & & $C_l^{c}\times(0<P_{\rm ACBAR} < 1)$ \\ \hline \hline $\Omega_bh^2$&$0.02277\pm0.00062$&$0.02397_{-0.00104}^{+0.00103}$& $0.02366_{-0.00083}^{+0.00084}$ &$0.02269_{-0.00060}^{+0.00059}$&$0.02298_{-0.00065}^{+0.00063}$\\ $\Omega_{\rm c}h^2$&$0.1093_{-0.0063}^{+0.0064}$&$0.1025_{-0.0075}^{+0.0075}$& $0.1041\pm0.0069$ &$0.1103\pm0.0059$&$0.1092_{-0.0059}^{+0.0056}$\\ $\Omega_{\Lambda}$&$0.744_{-0.029}^{+0.030}$&$0.780\pm0.034$& $0.772_{-0.030}^{+0.031}$ &$0.740_{-0.029}^{+0.028}$&$0.747_{-0.027}^{+0.028}$\\ $n_s$&$0.965\pm0.014$&$0.949_{-0.017}^{+0.018}$& $0.953\pm0.016$&$0.963\pm0.014$&$0.959\pm0.014$\\ $\tau$&$0.087\pm0.017$&$0.088\pm0.017$& $0.088_{-0.017}^{+0.018}$ &$0.087_{-0.017}^{+0.016}$&$0.086_{-0.016}^{+0.018}$\\ $\Delta^2_R$&$(2.39\pm0.10)\cdot10^{-9}$&$(2.33\pm0.10)\cdot10^{-9}$& $(2.35\pm0.10)\cdot10^{-9}$ &$(2.41\pm0.10)\cdot10^{-9}$&$(2.40\pm0.10)\cdot10^{-9}$\\ \hline $\sigma_8$&$0.793\pm0.036$& $0.726\pm 0.056$& $0.742\pm0.047$ &$0.798\pm0.033$ &$0.784\pm0.033$\\ $\Omega_m$&$0.256_{-0.030}^{+0.029}$&$0.22\pm0.034$& $0.228_{-0.030}^{+0.031}$&$0.260_{-0.028}^{+0.029}$&$0.253_{-0.028}^{+0.027}$\\ $H_0$&$72.1_{-2.6}^{+2.7}$&$76.3_{-3.9}^{+4.0}$&$75.3\pm3.4$ &$71.7\pm2.5$&$72.5\pm2.6$\\ $z_{reion}$&$11.0\pm1.4$&$10.5_{-1.3}^{+1.4}$&$10.7\pm1.4$ &$11.0\pm1.4$&$10.8\pm1.4$\\ $t_0$&$13.68_{-0.13}^{+0.14}$&$13.49\pm0.19$& $13.54\pm0.16$&$13.69_{-0.12}^{+0.13}$&$13.65\pm0.13$\\ $A_{SZ}$&$1.04_{-0.69}^{+0.68}$&$1.00\pm0.68$& $1.00\pm0.68$&$0.98_{-0.66}^{+0.67}$&$0.91_{-0.65}^{+0.68}$\\ $P_{\rm WMAP}$&$--$&$<1.38
(2\sigma)$& $<0.93 (2\sigma)$ &$--$&$<0.46 (2\sigma)$\\ $P_{\rm ACBAR}$&$--$&$--$&$--$ &$--$&$<0.56 (2\sigma)$\\ \hline \end{tabular} \label{table:1} \end{center} \end{table*} As described, in addition to WMAP 5-year data, we also include ACBAR data at large multipoles. Since overlapping datasets observe the same sky, a joint likelihood would in principle require the covariance matrix between the different experiments; to avoid this complication, we take the same approach as the WMAP team and use WMAP data out to $\ell < 900$ and ACBAR data in the range $900 < \ell < 2000$. For ACBAR data, we make a separate estimate of the angular power spectrum by scaling the flux cut of unresolved point sources to the lower ACBAR flux threshold, in agreement with previous shot-noise estimates \cite{reichardt}. Again, we include an overall uncertainty in the ACBAR angular power spectrum of radio sources, in this case the sum of the clustering and shot-noise terms of the power spectrum, through $P_{\rm ACBAR}$. In Fig.~2, we show the angular power spectrum of CMB anisotropies for the best-fit cosmological model for WMAP and ACBAR data, as well as the two input power spectra for point source clustering, both with $P_{\rm WMAP}=P_{\rm ACBAR}=1$. \begin{figure*}[] \centering \subfigure[]{\includegraphics[width=8cm]{./fig12.eps}} \hspace{0.1cm} \subfigure[]{\includegraphics[width=8cm]{./fig13.eps}} \caption{\label{burn-in1} Two-dimensional marginalized distributions showing the $68\%$ and $95\%$ confidence level contours for $n_s$ vs. $\Omega_bh^2$ (left) and $n_s$ vs.
$\sigma_8$ (right) with WMAP 5-year data alone (empty contours), WMAP 5-year data with clustered point sources (red contours), and WMAP 5-year+ACBAR data with point source clustering (blue contours).} \end{figure*} \begin{table}[!ht] \caption{Constraints on the spectral index $n_s$, the running of the spectral index ${dn_s/d\ln k}$ and the tensor-to-scalar ratio $r$ from WMAP+ACBAR with and without the point source signal. These values are evaluated at $k=0.002$ Mpc$^{-1}$.} \begin{center} \begin{tabular}{cll} \hline parameter & WMAP+ACBAR & WMAP+ACBAR \\ & & with clustering\\ \hline \hline $n_s$ & $1.036\pm0.046$ & $1.041\pm0.050$ \\ $dn_s/d\ln k$ & $-0.038\pm0.024$ & $-0.044\pm0.025$ \\ \hline $n_s$ & $0.981\pm0.019$ & $0.977_{-0.019}^{+0.020}$ \\ $r$ & $<0.36 (2\sigma)$ & $<0.39 (2\sigma)$ \\ \hline \end{tabular} \end{center} \end{table} Since we only include the clustering term of unresolved point sources for WMAP data, we follow the WMAP team's approach on the shot-noise term and marginalize the likelihood over the uncertainty related to the point source shot-noise term $A_{\rm ps}$ using the public WMAP likelihood routine. This uncertainty only makes a small difference in the best-fit cosmological parameters \cite{nolta}. The results related to the $\Lambda$CDM runs are summarized in Table~I. In the case where we do not consider clustering of point sources, we essentially recover the same results as Ref.~\cite{dunkley}, with small differences at the level of 0.1$\sigma$, which we believe are due to differences in the numerical codes and the convergence criteria. With clustering of point sources included, however, the spectral index estimated with WMAP 5-year data alone changes from $0.965 \pm 0.014$ with point source shot-noise only to $0.949 \pm 0.018$ with source clustering in addition to the shot-noise from the WMAP likelihood.
This is a difference of about $1\sigma$, but this large difference primarily comes from the fact that $P_{\rm WMAP}$ is largely unconstrained by the data, with a $2\sigma$ upper limit of 1.38. The change in $n_s$ is accompanied by a similar change in $\sigma_8$, with values changing from $0.793 \pm 0.036$ without clustering to $0.726 \pm 0.056$ with clustering. If we impose a prior that $P_{\rm WMAP}$ is uniform between 0 and 1, we find $n_s=0.953\pm0.016$, and the difference from the case with clustering ignored is about $0.8\sigma$. \begin{figure*}[] \centering \subfigure[]{\includegraphics[width=8cm]{fig14.eps}} \hspace{0.1cm} \subfigure[]{\includegraphics[width=8cm]{fig15.eps}} \caption{\label{burn-in2} Two-dimensional marginalized distributions showing the $68\%$ and $95\%$ confidence level contours for $n_s$ vs. $dn_s/d\ln k$ (left) and $n_s$ vs. $r$ (right) with WMAP 5-year and ACBAR data (empty contours) and WMAP 5-year+ACBAR data with clustering for point sources (blue contours).} \end{figure*} With the addition of ACBAR data, the clustering amplitudes of unresolved point sources in both WMAP and ACBAR are better constrained. Though we use the WMAP point source clustering model only for WMAP data and a separate model for ACBAR point sources at $\ell > 900$, both parameters are better constrained because the combination of WMAP and ACBAR data pins down the overall cosmological model, leaving less room for the point source piece to change in amplitude. In combination, with clustering of point sources included for both WMAP and ACBAR, we find $n_s=0.959 \pm 0.014$, which differs from the WMAP+ACBAR value of $n_s=0.963 \pm 0.014$ by about 0.3$\sigma$. In Fig.~3 we summarize the likelihoods of the parameters involved. As shown there, when clustering is included, the large differences in cosmological parameter estimates with WMAP data alone appear in $n_s$, $\Omega_bh^2$ and $\Omega_ch^2$.
However, as discussed for $n_s$, once we include ACBAR data, with clustering of point sources for both WMAP and ACBAR, the probability distributions are more consistent with those from WMAP data alone with clustering ignored. While it may seem that large-multipole data from an experiment such as ACBAR do not improve the cosmological parameters estimated from WMAP, in our case we do see an improvement, by constraining the point source clustering amplitude better. The associated contour plots for the parameters that are most affected by clustering of point sources are summarized in Fig.~4 for the combinations $n_s$ vs. $\Omega_bh^2$ (left) and $n_s$ vs. $\sigma_8$ (right). \begin{figure}[htbp] \begin{center} \includegraphics[scale=0.8,angle=-90,width=8cm]{./fig16.eps} \end{center} \caption{Planck mock data with and without a contribution from point sources: an offset in the temperature power spectrum is visible at small scales. Also shown are the model for point sources used in the paper and a simulated point source signal from the public point source maps for the Planck HFI channels (in this case at 143 GHz) from the Planck team (see text for details).} \end{figure} In addition to the standard $\Lambda$CDM runs with a power-law power spectrum for density perturbations, we also study the impact of point sources on the running of the spectral index and on the tensor-to-scalar ratio, beyond the main parameters of the $\Lambda$CDM cosmological model outlined in Table~I. In Table~II we summarize our results. Our results for the combination of WMAP and ACBAR without clustering are generally consistent with previous results \cite{Komatsu5yr}, but with minor differences such as a 2$\sigma$ upper limit on $r$ of 0.36 instead of 0.4. The differences with and without clustering are also minor, primarily because we take the combination of WMAP and ACBAR. In Fig.~5 we summarize these results in contour plots with $n_s$ vs.
running ($dn_s/d\ln k$, left) and $n_s$ vs. $r$ (right). \begin{figure*}[!] \centering \subfigure[]{\includegraphics[width=8cm]{./fig17.eps}} \hspace{0.1cm} \subfigure[]{\includegraphics[width=8cm]{./fig18.eps}} \caption{\label{planck2} Two-dimensional marginalized distributions showing the $68\%$ and $95\%$ confidence level contours for $n_s$ vs. $\Omega_bh^2$ (left) and $n_s$ vs. $\sigma_8$ (right) in three different cases: Planck mock data alone (empty contours), Planck mock data with point source clustering marginalized as a shot-noise (red contours, see text for details), and finally Planck mock data with a clustered point source signal which is marginalized over with a model for clustering. The marginalization over point source clustering in Planck completely removes the bias introduced by the point source signal and the contours overlap.} \end{figure*} \begin{table*} \caption{Mean values and marginalized $68 \%$ c.l. limits for several cosmological parameters from Planck mock data with and without a point source contribution.} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Parameter & Planck mock data & Planck mock data & Planck mock data & Planck mock data \\ & & with clustering & with clustering & with clustering \\ & & Point sources ignored & $C_l^c \times (0<P_{\rm Planck}<1)$& Shot-noise only ($C_l^{sn} = P_{\rm Planck}$)\\ \hline \hline $\Omega_bh^2$&$0.02270_{-0.00024}^{+0.00025}$&$0.02753\pm0.00028$&$0.02271\pm0.00025$& $0.02271_{-0.00024}^{+0.00025}$\\ $\Omega_{\rm c}h^2$&$0.1082\pm0.0019$&$0.0924\pm0.0017$&$0.1080_{-0.0020}^{+0.0019}$ & $0.1092_{-0.0019}^{+0.0020}$ \\ $\Omega_{\Lambda}$&$0.750\pm0.010$&$0.831\pm0.007$& $0.751_{-0.010}^{+0.011}$& $0.745\pm0.010$ \\ $n_s$&$0.962_{-0.007}^{+0.008}$&$1.159\pm0.007$& $0.961_{-0.007}^{+0.008}$& $0.973\pm0.007$\\ $\tau$&$0.090\pm0.007$&$0.260_{-0.016}^{+0.015}$&$0.090_{-0.006}^{+0.007}$& $0.094_{-0.008}^{+0.007}$\\
$\Delta^2_R$&$(2.41\pm0.10)\cdot10^{-9}$&$(1.85\pm0.10)\cdot10^{-9}$&$(2.41\pm0.10)\cdot10^{-9}$ & $(2.40\pm0.10)\cdot10^{-9}$\\ \hline $\sigma_8$&$0.789_{-0.009}^{+0.008}$&$0.903_{-0.014}^{+0.015}$&$0.787\pm0.009$& $0.808\pm0.009$ \\ $\Omega_m$&$0.250\pm0.010$&$0.169\pm0.007$& $0.249_{-0.011}^{+0.010}$& $0.255\pm0.010$ \\ $H_0$&$72.4_{-0.9}^{+1.0}$&$84.3_{-1.2}^{+1.1}$&$72.5\pm1.0$& $72.0\pm1.0$ \\ $z_{reion}$&$11.3\pm0.6$&$19.9_{-0.7}^{+0.8}$&$11.3\pm0.6$& $11.6\pm0.6$\\ $t_0$&$13.69\pm0.04$&$13.04_{-0.04}^{+0.05}$& $13.69_{-0.05}^{+0.04}$& $13.70\pm0.04$\\ $A_{SZ}$&$1.40\pm0.07$&$<2.00$& $1.66_{-0.21}^{+0.23}$& $1.84_{-0.14}^{+0.16}$\\ $P_{\rm Planck}$&$--$&$--$& $<1.00 (2\sigma)$& $C_l^{\rm sn}<3.6\times10^{-5}$ $\mu$K$^2$-sr $(2\sigma)$\\ \hline \end{tabular} \label{table:2} \end{center} \end{table*} While we find differences at the level of $1\sigma$ for WMAP data alone with clustering of point sources, our results show that the differences are smaller and insignificant once WMAP data are combined with ACBAR data, using two separate estimates for point source clustering in the WMAP and ACBAR data. In the future, Planck will observe CMB anisotropies down to smaller angular scales and at higher frequencies, where clustering of sources becomes increasingly important \cite{Scott,GonzalezNuevo:2004fj}. In this case, it is clear that a simplified approach with only a shot-noise for unresolved point sources in Planck data may not be appropriate when extracting cosmological parameters. \subsection{Planck mock data} To understand how clustering of point sources impacts cosmological parameter determination, we created several mock datasets with noise properties consistent with the Planck 143 GHz HFI channel and assuming the best-fit WMAP5 parameters \cite{Komatsu5yr} for the cosmology. We model the point source clustering and the shot-noise by making use of existing high-frequency data, as we did for ACBAR.
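Mock band powers of this kind are commonly assigned errors with the Knox formula, $\Delta C_\ell=\sqrt{2/((2\ell+1)f_{\rm sky})}\,(C_\ell+N_\ell)$, with $N_\ell$ a beam-deconvolved white-noise spectrum. The sketch below uses assumed survey parameters (sky fraction, beam, noise level) that stand in for, but are not taken from, the actual 143 GHz specifications:

```python
import numpy as np

def knox_errors(ell, cl, fsky=0.8, fwhm_arcmin=7.1, noise_uk_arcmin=30.0):
    """Per-multipole error Delta C_l = sqrt(2 / ((2l+1) fsky)) * (C_l + N_l),
    where N_l is a Gaussian-beam-deconvolved white-noise spectrum."""
    arcmin = np.pi / (180.0 * 60.0)                 # arcmin -> rad
    sigma_b = fwhm_arcmin * arcmin / np.sqrt(8.0 * np.log(2.0))
    n_l = (noise_uk_arcmin * arcmin) ** 2 * np.exp(ell * (ell + 1.0) * sigma_b ** 2)
    return np.sqrt(2.0 / ((2.0 * ell + 1.0) * fsky)) * (cl + n_l)

ell = np.arange(2, 2001)
cl = 5000.0 / (ell * (ell + 1.0))   # illustrative C_l shape (uK^2)
err = knox_errors(ell, cl)
```

Shrinking the sky fraction or adding instrumental noise inflates the errors at every multipole, as expected from the formula.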
While we only consider a single clustering spectrum, at the high frequencies of Planck HFI two separate populations of point sources are expected: radio sources, dominating at low frequencies, and sub-mm or far-IR sources at high frequencies \cite{Amblard}. Here, as we only have total number counts at 150 GHz, without any information on how to separate the counts into the two populations, we make use of a single clustering spectrum. It will be necessary to return to this topic once Planck data become available, with additional information, from Planck and Herschel, on the far-IR population in the HFI channels. We summarize our results related to Planck data in Fig.~6. In addition to the analytical model of point source clustering used in this paper based on the halo model, we also made use of publicly available Planck source maps\footnote{http://www.planck.fr/article334.html} from the Planck Working Sub-Group for Compact Source Fields to measure the angular power spectrum of point sources at the Planck HFI 143 GHz channel. These maps are derived from a model based on the GalICS model\footnote{http://galics.iap.fr/} using the Mock Map Facility (MoMaF, \cite{blaizotetal05}). The power spectrum is computed after removing point sources with a flux greater than 72 mJy, the 5$\sigma$ detection level of Planck at that frequency according to \cite{fernandez08}. As shown in Fig.~6, while the amplitude of our analytical model matches the residual clustering spectrum of point sources in the Planck simulation at multipoles around $10^3$, our analytical model has residual point sources that are more clustered than the simulated sources. We believe this is due to the finite size of the boxes used to simulate the source distribution by the Planck team.
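For orientation, the shot-noise of sources below a flux cut follows from the counts as $C_\ell^{\rm sn}=\int_0^{S_{\rm cut}}S^2\,(dN/dS)\,dS$. The sketch below evaluates this for an assumed power-law count model at the 72 mJy cut mentioned above; the count amplitude and slope are hypothetical, not the simulated Planck counts:

```python
import numpy as np

def shot_noise(s_cut_jy, amp=10.0, slope=2.15):
    """C_l^sn = int_0^{S_cut} S^2 (dN/dS) dS for assumed power-law counts
    dN/dS = amp * S**(-slope); requires slope < 3 for convergence at S=0."""
    return amp * s_cut_jy ** (3.0 - slope) / (3.0 - slope)

# Midpoint-rule cross-check of the analytic integral at S_cut = 72 mJy.
s_cut = 0.072
n = 200000
ds = s_cut / n
s = (np.arange(n) + 0.5) * ds
numeric = (s ** 2 * 10.0 * s ** (-2.15)).sum() * ds
```

The shot-noise grows monotonically with the flux cut, which is why deeper source subtraction directly lowers the white part of the residual point-source spectrum.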
While we fix our point sources to the analytical model as shown in Fig.~6, we again capture the uncertainty in the amplitude with a parameter $P_{\rm Planck}$ (in this case for clustering and shot-noise combined, as in the case of ACBAR data) and marginalize over this parameter when estimating cosmological parameters. We follow the same procedure as in fitting the existing WMAP and ACBAR data to extract cosmological parameters with Planck. We use data at $\ell < 2000$; though the Planck analysis can be extended to higher multipoles, those are likely to be contaminated by additional secondary anisotropies beyond SZ \cite{Cooray5} and by complications of the non-Gaussian covariance \cite{Cooray6}. The best-fit cosmological parameters and 68\% confidence errors for the standard 6-parameter $\Lambda$CDM case complemented by $A_{\rm SZ}$ and $P_{\rm Planck}$ are tabulated in Table~III. Without point sources, we recover the best-fit cosmology that was used to create the mock. Once the mock includes point sources and we ignore the effect of point sources when model fitting the data, we find that the parameters are significantly biased; for some parameters this bias is more than 20$\sigma$. As in the case of current data, we consider whether it is adequate to model point sources with just a shot-noise power spectrum. We allow $C_l^{sn} = P_{\rm Planck} \times 0.0075$ $\mu$K$^2$-sr and fit the data by varying $P_{\rm Planck}$. The values of the cosmological parameters with the shot-noise marginalized over are tabulated in Table~III, in addition to the 2$\sigma$ upper limit on $C_l^{sn}$ from the data. With a shot-noise-only description in the model fit, we find biases in cosmological parameters at the level of 1.5$\sigma$, for example in the case of the spectral index.
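The origin of such biases can be seen in a minimal linear-fit toy: if the mock band powers contain a clustered component but the fitted model carries only a flat shot-noise template, the recovered "CMB" amplitude shifts. All templates and amplitudes below are illustrative assumptions:

```python
import numpy as np

# Mock band powers containing a clustered point-source term.
ell = np.arange(2, 2000, dtype=float)
cmb = 1.0 / (1.0 + (ell / 300.0) ** 2)       # illustrative "CMB" template
clus = 0.1 * (ell / 2000.0) ** 0.8           # illustrative clustering shape
data = 1.0 * cmb + 1.0 * clus                # truth includes clustering

def best_fit_amplitudes(templates):
    """Least-squares amplitudes for a linear model built from templates."""
    A = np.vstack(templates).T
    coef, *_ = np.linalg.lstsq(A, data, rcond=None)
    return coef

# Shot-noise-only model: the CMB amplitude absorbs part of the clustering
# signal and comes out biased. Adding the clustering template removes this.
s_shot = best_fit_amplitudes([cmb, np.ones_like(ell)])[0]
s_clus = best_fit_amplitudes([cmb, clus])[0]
```

With the clustering template included the fit recovers the input amplitude essentially exactly, while the flat-template fit does not, mirroring the behavior seen in the full parameter runs.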
Once we include a model for point source clustering, in addition to the shot-noise, and marginalize with our parameter $P_{\rm Planck}$ to capture the overall clustering amplitude, we find that the biases in the best-fit parameters relative to the values used for the mock are removed. We show a couple of examples involving the scalar spectral index $n_s$ in Fig.~7 and Fig.~8, for $\Omega_ch^2$ and $\sigma_8$, respectively. As shown, once point source clustering is included in the fit, the cosmological parameter biases are negligible. Ignoring point source clustering, however, impacts the measurements significantly, and even including only a shot-noise for Planck point sources still results in an appreciable bias. To be completely safe, we suggest that a reasonable model for point source clustering and shot-noise be included in the cosmological parameter analysis with Planck (at each channel used for cosmological measurements) and that the uncertainty in modeling or predicting the total point source contribution be marginalized over. \section{Summary} The faint radio point sources that are unresolved in cosmic microwave background (CMB) anisotropy maps are likely to be a biased tracer of the large-scale structure. While only the shot-noise contribution to the angular power spectrum of radio point sources has been considered so far when extracting cosmological parameters with CMB data, we have shown here that one should also allow for the possibility of source clustering. This is especially necessary at high frequencies, where the clustering of sources is expected to dominate the shot-noise level of the angular power spectrum at tens-of-arcminute angular scales. As we find, the differences seen by the WMAP team between the V- and W-band angular power spectra do allow point source clustering, though one could wrongly conclude that clustering is unnecessary if lower frequency data are included.
Here, we have made an estimate of the clustering of unresolved radio sources in both WMAP and ACBAR by making use of existing counts at 95 GHz and by making several assumptions about the sources, such as their redshift distribution. To account for the uncertainty in modeling the clustering, we included an extra nuisance parameter for each dataset and have marginalized over this parameter when model fitting for cosmological parameters. For the combination of WMAP 5-year data and ACBAR, we find that the spectral index changes from a mean value of $0.963 \pm 0.014$ without point-source clustering to a value of $0.959 \pm 0.014$ when the clustering of point sources is included in the model fits, a difference of $0.3\sigma$. We also discussed the full parameter set with clustering of radio point sources and changes to additional parameters such as $dn_s/d\ln k$ and the tensor-to-scalar ratio $r$. While we find that the differences are marginal with and without source clustering in current data, we have suggested that it is necessary to account for source clustering with future datasets such as Planck, especially to properly model fit anisotropies at arcminute angular scales and when using high-frequency data. For Planck, we find that simply including the point sources as a shot-noise only, out to $\ell = 2000$, for cosmological parameter estimation results in biases at the level of 1.5$\sigma$. While we simply model Planck point sources with a single power spectrum, since at high frequencies both radio and far-IR sources are expected to contribute, it may be necessary to return to a proper model of the total unresolved source clustering in Planck in the future. \begin{center} {\bf Acknowledgments} \end{center} This work was supported by NSF CAREER AST-0645427. We thank Mike Nolta for clarifying our questions related to the point source modeling by the WMAP team.
AC thanks Dipartimento di Fisica and INFN, Universita' di Roma-La Sapienza and Aspen Center for Physics for hospitality while this research was completed. AA acknowledges partial support from a McCue fellowship from the UCI Center for Cosmology.
\section{Introduction and main results}\label{sec-1} The {\it sphericalization} of a locally compact metric space was first introduced by Bonk and Kleiner \cite{BK02} in defining a metric on the one point compactification of an unbounded space. It is a natural generalization of the deformation from the Euclidean distance on $\mathbb{R}^n$ to the chordal distance on $\mathbb{S}^n$. In \cite{WY}, Wang and Yang introduced a chordal distance on the $p$-adic numbers which is an ultra-metric. Recently, in \cite{ZLL1}, the authors generalized this notion to ultra-metric spaces and provided a new proof of a recent result of Heer \cite{Heer} concerning the quasim\"{o}bius uniformization of the Cantor set. Inspired by \cite{BK02}, Balogh and Buckley in \cite{BB} defined a {\it flattening} transformation on a bounded metric space. It was shown in \cite{BHX} that these two conformal transformations are dual in the sense that if one starts from a bounded metric space and performs a flattening transformation followed by a sphericalization, then the resulting space is bilipschitz equivalent to the original space. This duality comes from the idea that the stereographic projection between Euclidean space and the Riemann sphere can be realized as a special case of inversion. Sphericalization and flattening have many applications in the areas of analysis on metric spaces and asymptotic geometry, see for instance \cite{BB,BHX,BuSc,HSX, LS}. Recently, Wildrick investigated the quasisymmetric parametrization of unbounded $2$-dimensional metric planes by using sphericalization (called a {\it warping process} in \cite{W}). It was shown in \cite{JJ} that two visual geodesic Gromov hyperbolic spaces are roughly quasi-isometric if and only if their Gromov boundaries are quasim\"obius equivalent, by virtue of the flattening and sphericalization deformations. In \cite{M}, Mineyev studied the metric conformal structures on the ideal boundaries of hyperbolic complexes via sphericalization.
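As a reminder of the shape of this deformation (the precise construction we use is recalled in Section \ref{sec-2}), the sphericalization of an unbounded locally compact space $(X,d)$ with base point $p\in X$ is the one point compactification $\dot{X}=X\cup\{\infty\}$ carrying a metric $\hat{d}_p$ which satisfies, up to a universal multiplicative constant,
\[
\hat{d}_p(x,y)\simeq \frac{d(x,y)}{\big(1+d(x,p)\big)\big(1+d(y,p)\big)} \quad\text{and}\quad \hat{d}_p(x,\infty)\simeq \frac{1}{1+d(x,p)},\qquad x,y\in X.
\]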
In this paper, we investigate certain applications of sphericalization in Gromov hyperbolic metric spaces. In \cite{Gr87}, Gromov observed that the essential large scale geometry of the classical hyperbolic spaces $\mathbb{H}^n$ is determined by a {\it $\delta$-inequality} concerning quadruples of points, and he introduced the concept of $\delta$-hyperbolicity for general metric spaces. Since its appearance, the theory of Gromov hyperbolicity has found numerous applications in the study of geometric group theory and geometric function theory, see for instance \cite{BB03,BHK,BrHa,KLM} and the references therein. We briefly review the theory of Gromov hyperbolic spaces. For a more complete exposition, see \cite{BrHa,BuSc} or Subsection \ref{s-g}. A geodesic metric space $X$ is called Gromov hyperbolic if there is a constant $\delta\geq 0$ such that each point on a side of any geodesic triangle is within distance $\delta$ of some point on one of the other two sides. The Gromov boundary of $X$, denoted by $\partial_\infty X$, is defined as the set of equivalence classes of geodesic rays, two geodesic rays $\gamma_i$, $i=1,2$, being equivalent if the Hausdorff distance ${\operatorname{dist}}_\mathcal{H}(\gamma_1,\gamma_2)<\infty$. There is a cone topology on $X\cup \partial_\infty X$ under which it is metrizable, see \cite{BrHa}. In \cite{BuSc}, Buyalo and Schroeder systematically investigated two different kinds of metrics on $\partial_\infty X$ (namely, the Bourdon metric and the Hamenst\"adt metric; for the definitions see Section \ref{sec-2}), which are based at a point of $X$ and at a point of $\partial_\infty X$, respectively (cf. \cite{Bou,Ha}). As the first aim of this paper, we show that the doubling properties of these two conformal gauges on the Gromov boundary of a Gromov hyperbolic space coincide. \begin{theorem}\label{thm-1} Let $X$ be a Gromov hyperbolic space and $\partial_\infty X$ its Gromov boundary.
Then $\partial_\infty X$ is doubling for any Bourdon metric if and only if it is doubling for any Hamenst\"adt metric. \end{theorem} The terminology used in Theorem \ref{thm-1} and in the rest of this section will be explained in Section \ref{sec-2}. \begin{remark} In \cite{LS}, the third author and Shanmugalingam showed that the process of sphericalization preserves Ahlfors regular and doubling measures on metric spaces. With the aid of this quantitative result, Heer \cite{Heer} proved the quasim\"obius invariance of doubling metric spaces, which is needed in the proof of Theorem \ref{thm-1}. In this note, we provide a quite different but direct proof of Heer's result by means of Assouad's embedding theorem (see \cite[Theorem 8.1.1]{BS}). \end{remark} The doubling property of metric spaces plays an important role in the area of analysis on metric spaces. For instance, Herron \cite{Her06} demonstrated that a Gromov hyperbolic abstract domain with doubling Gromov boundary carries a uniformizing volume growth density. Recently, Wang and Zhou \cite{WZ} studied the equivalence of weakly quasim\"obius maps and quasim\"obius maps on doubling quasi-metric spaces. A metric space $X$ is said to be of {\it bounded growth at some scale} if there exist constants $r$ and $R$ with $R>r>0$, and $N\in \mathbb{N}$, such that every open ball of radius $R$ in $X$ can be covered by $N$ open balls of radius $r$. In \cite{BS}, Bonk and Schramm proved that a Gromov hyperbolic space with bounded growth at some scale embeds roughly isometrically into a classical hyperbolic space of higher dimension. In \cite{BuSc}, Buyalo and Schroeder provided a different proof of this result. According to \cite[Theorem $9.2$]{BS}, every Gromov hyperbolic geodesic space $X$ of bounded growth at some scale has a boundary $\partial_\infty X$ of finite Assouad dimension (which is equivalent to the doubling condition, see \cite{TV}).
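For orientation, we recall the shape of the two gauges appearing in Theorem \ref{thm-1}; the precise definitions are given in Section \ref{sec-2}. Given a base point $o\in X$ and a sufficiently small parameter $\varepsilon>0$, a Bourdon (visual) metric $\rho_{o,\varepsilon}$ on $\partial_\infty X$ satisfies, up to a universal multiplicative constant,
\[
\rho_{o,\varepsilon}(\xi,\eta)\simeq e^{-\varepsilon(\xi|\eta)_o},\qquad \xi,\eta\in\partial_\infty X,
\]
where $(\xi|\eta)_o$ denotes the Gromov product of $\xi$ and $\eta$ with respect to $o$; a Hamenst\"adt metric is the analogous construction with the base point $o\in X$ replaced by a point $\omega\in\partial_\infty X$ and the Gromov product renormalized accordingly.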
Here $\partial_\infty X$ is equipped with any visual (Bourdon) metric. Recently, Herron established that a locally Ahlfors regular abstract domain has bounded growth at some scale with respect to the quasihyperbolic metric, see \cite[Proposition 3.6]{Her06}. Combining these results with Theorem \ref{thm-1}, we obtain the following two corollaries; for the related definitions see \cite{BS, Her06}. \begin{corollary} Let $X$ be a Gromov hyperbolic geodesic metric space with bounded growth at some scale. Then the Gromov boundary equipped with any Bourdon or Hamenst\"adt metric is doubling. \end{corollary} \begin{corollary}Let $\Omega$ be a locally quasiconvex, locally Ahlfors $Q$-regular abstract domain with a Gromov hyperbolic quasihyperbolization. Then the Gromov boundary equipped with any Bourdon or Hamenst\"adt metric is doubling. \end{corollary} The second goal of this paper is to explore the characterization of Gromov hyperbolic domains; here and hereafter, a Gromov hyperbolic domain always means an incomplete metric space which is Gromov hyperbolic in its quasihyperbolic metric $k$ (see Definition \ref{def-k}). In \cite{KLM}, Koskela, Lammi and Manojlovi\'{c} proved that if $(\Omega,d)$ is a bounded abstract domain, then $(\Omega,k)$ is Gromov hyperbolic if and only if the length space $(\Omega,l_d)$ satisfies both the Gehring-Hayman condition and the ball separation condition. It is natural to ask whether this result holds for unbounded domains. In this work, we consider the unbounded case via sphericalization and establish the following result, similar to \cite[Theorem 1.2]{KLM}. \begin{theorem}\label{thm-2} Let $Q>1$ and let $(X,d,\mu)$ be an Ahlfors $Q$-regular metric measure space with $(X,d)$ an annular quasiconvex, proper and geodesic space. Let $\Omega\subsetneq X$ be a domain $($an open connected set$)$, and let $l_d$ be the length metric on $\Omega$ associated to $d$. 
Then $(\Omega,k)$ is Gromov hyperbolic if and only if $(\Omega,l_d)$ satisfies both the Gehring-Hayman condition and the ball separation condition. \end{theorem} In \cite{BHK}, Bonk, Heinonen and Koskela proved that a bounded domain in $\mathbb{R}^n$ is uniform if and only if it is Gromov hyperbolic with respect to the quasihyperbolic metric and there is a natural quasisymmetric identification between the Euclidean boundary and the Gromov boundary. Recently, Lammi \cite{La} showed that the inner boundary of a Gromov hyperbolic domain with a suitable growth condition on the quasihyperbolic metric is homeomorphic to the Gromov boundary. Note that the quasihyperbolic boundary condition stated in \cite{La} implies the boundedness of the domain. So it is natural to ask when the Gromov boundary and the inner metric boundary of an unbounded Gromov hyperbolic domain are homeomorphic. Motivated by these considerations, we focus on the study of the boundary behavior of Gromov hyperbolic domains, particularly unbounded domains. It was shown in \cite{BHK, GO} that there is a characterization of uniform domains (see Definition \ref{def-u}) in terms of two metrics, the quasihyperbolic metric $k$ and the distance ratio metric $j$ (see (\ref{def-j})). That is, a domain $D$ is uniform in $\mathbb{R}^n$ if and only if there exist constants $c_1\ge1$ and $c_2\geq 0$ such that for all $x,y\in D$, \begin{equation}\label{l-1} k_D(x,y)\le c_1 j_D(x,y) +c_2.\end{equation} Subsequently, Vuorinen observed that the additive constant $c_2$ on the right hand side of (\ref{l-1}) can be chosen to be $0$. This observation leads to the definition of $\varphi$-{\it uniform domains} introduced in \cite{Vu85}. \begin{definition}\label{def-v} Let $X$ be a rectifiably connected, locally compact and complete metric space, and let $D\subsetneq X$ be a domain with $d_D(x)={\operatorname{dist}}(x,\partial D)$ for all $x\in D$. Let $\varphi:[0,\infty)\to [0,\infty)$ be a homeomorphism. 
We say that $D$ is $\varphi$-{\it uniform} if for all $x$, $y$ in $D$, $$k_D(x,y)\leq \varphi(r_D(x,y))\;\;\;\;\;\mbox{where}\;\;\;\;\;r_D(x,y)=\frac{d(x,y)}{d_D(x)\wedge d_D(y)}.$$ Here and hereafter, $r\wedge s=\min\{r,s\}$ for all $r,s\in \mathbb{R}$. \end{definition} In \cite{Vai-4}, V\"ais\"al\"a also investigated this class of domains, and pointed out that these two classes of domains are the same provided $\varphi$ is a slow function (i.e., a function $\varphi$ satisfying $\lim_{t\to\infty} \varphi(t)/t=0$). We remark that every convex domain is $\varphi$-uniform with $\varphi(t)=t$. However, in general, convex domains need not be uniform. For example, $D=\{(x_1,x_2)\in \mathbb{R}^2:0<x_2<1\}$ is $\varphi$-uniform with $\varphi(t)=t$, but it is not uniform. The third purpose of this paper is to study whether or not the $\varphi$-uniformity condition is sufficient for the homeomorphism equivalence between the Gromov boundary and the metric boundary of a Gromov hyperbolic domain. Our result in this direction is as follows. \begin{theorem}\label{thm-3} Let $Q>1$ and let $(\Omega,d,\mu)$ be a locally compact, $c$-quasiconvex, Ahlfors $Q$-regular incomplete metric measure space. Assume that $(\Omega,k)$ is roughly starlike and Gromov hyperbolic and that $(\Omega,d)$ is $\varphi$-uniform, where $k$ is the quasihyperbolic metric of $\Omega$. \begin{enumerate} \item If $\Omega$ is bounded and $\int_1^\infty \frac{dt}{{\varphi^{-1}(t)}}<\infty$, then the Gromov boundary and the metric boundary of $\Omega$ are homeomorphic; \item If $\Omega$ is unbounded and $\int_1^\infty \frac{dt}{\sqrt{\varphi^{-1}(t)}}<\infty$, then the Gromov boundary of $\Omega$ and $\partial \Omega\cup\{\infty\}$ are homeomorphic. Here and hereafter, $\overline{\Omega}\cup \{\infty\}$ is the one point compactification of the space $(\Omega,d)$ and $\partial \Omega\cup\{\infty\}=\overline{\Omega}\cup \{\infty\}\setminus \Omega$. 
\end{enumerate} \end{theorem} Note that Theorem \ref{thm-3} is new even for Gromov hyperbolic domains in $\mathbb{R}^n$. It follows from \cite[Theorem 1.11]{BHK} that (inner) uniform domains in $\mathbb{R}^n$ are Gromov hyperbolic. However, $\varphi$-uniform domains need not be Gromov hyperbolic. Let $G=\{(x_1,x_2,x_3)\in \mathbb{R}^3:0<x_3<1\}$. Then $G$ is a convex domain which is $\varphi$-uniform with $\varphi(t)=t$. Moreover, it is not difficult to check that $G$ is $LLC_2$ (see \cite[Chapter 7]{BHK}) but not $c$-uniform for any $c\geq 1$. Hence we see from \cite[Proposition 7.12]{BHK} that $G$ is not $\delta$-hyperbolic for any $\delta\geq 0$. We also note that a Gromov hyperbolic $\varphi$-uniform domain in $\mathbb{R}^n$ is not necessarily uniform. For instance, $\Omega=\{(x_1,x_2)\in \mathbb{R}^2:0<x_2<1\}$ is a simply connected convex plane domain, and therefore it is a Gromov hyperbolic $\varphi$-uniform domain; but it is not uniform. Therefore, Theorem \ref{thm-3} is a generalization of the results in \cite{BHK, La}. Moreover, the unbounded case is also covered in our considerations by means of the sphericalization transformation. The organization of this paper is as follows. In Section \ref{sec-2}, we recall some definitions and preliminary results. In Section \ref{sec-3}, we prove the main results. \section{Preliminaries and Notation}\label{sec-2} \subsection{Metric geometry} In this paper, we always use $(X,d)$, $(X',d')$, $(Y, d)$, etc., to denote metric spaces. For $(X,d)$, its metric completion and metric boundary are denoted by $\overline{X}$ and $\partial X:= \overline{X}\setminus X$, respectively. A domain $D\subset X$ is an open connected set. The open (resp. closed) metric ball with center $x\in X$ and radius $r>0$ is denoted by $${\mathbb{B}}(x,r)=\{z\in X:\; d(z,x)<r\}\;\;(\mbox{resp.}\;\; \overline{{\mathbb{B}}}(x,r)=\{z\in X:\; d(z,x)\leq r\}).$$ We say that $X$ is {\it incomplete} if $\partial X\neq \emptyset$. 
$X$ is called {\it proper} if all its closed balls are compact. By a curve, we mean a continuous function $\gamma:$ $I=[a,b]\to X$. The length of $\gamma$ is defined by $$\ell(\gamma)=\sup\Big\{\sum_{i=1}^{n}d\big(\gamma(t_i),\gamma(t_{i-1})\big)\Big\},$$ where the supremum is taken over all partitions $a=t_0<t_1<t_2<\ldots<t_n=b$. The curve $\gamma$ is called {\it rectifiable} if $\ell(\gamma)<\infty$. We also denote the image $\gamma(I)$ of $\gamma$ by $\gamma$, and the subcurve of $\gamma$ with endpoints $x$ and $y$ by $\gamma[x,y]$. A metric space $(X, d)$ is called {\it rectifiably connected} if every pair of points in $X$ can be joined by a rectifiable curve $\gamma$. Suppose that the curve $\gamma$ is rectifiable. The length function $s_{\gamma}$: $[a,b]\to [0, \ell(\gamma)]$ is defined by $$s_{\gamma}(t)=\ell(\gamma[a,t]).$$ Then there is a unique curve $\gamma_s:$ $[0, \ell(\gamma)]\to X$ such that $$\gamma=\gamma_s\circ s_{\gamma}.$$ Obviously, $\ell(\gamma_s[0,t])=t$ for each $t\in [0, \ell(\gamma)]$. The curve $\gamma_s$ is called the {\it arc-length parametrization} of $\gamma$. For a rectifiable curve $\gamma$ in $X$, the line integral over $\gamma$ of a Borel function $\varrho:$ $X\to [0, \infty)$ is defined as follows: $$\int_{\gamma}\varrho ds=\int_{0}^{\ell(\gamma)}\varrho\circ \gamma_s(t) dt.$$ For $c\geq 1$, a curve $\gamma\subset X$ with endpoints $x,y$ is {\it $c$-quasiconvex} if its length is at most $c$ times the distance between its endpoints; i.e., if $\gamma$ satisfies $$\ell(\gamma)\leq cd(x,y).$$ $X$ is {\it $c$-quasiconvex} if each pair of points can be joined by a $c$-quasiconvex curve. Let $c \geq 1$ and $0 < \lambda \leq 1/2$. An incomplete metric space $(D, d)$ is said to be {\it locally $(\lambda,c)$-quasiconvex} if for all $x\in D$, each pair of points in ${{\mathbb{B}}}(x, \lambda d_D(x))$ can be joined by a $c$-quasiconvex curve. 
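To illustrate the notion of quasiconvexity just defined, we record a simple example; the constant computed here is elementary and is not used elsewhere in the paper. \begin{remark} Consider the unit circle $S^1\subset \mathbb{R}^2$ with the metric inherited from the plane. Two points at angular distance $\theta\in(0,\pi]$ have distance $2\sin(\theta/2)$ in $\mathbb{R}^2$ and are joined in $S^1$ by an arc of length $\theta$. Since $$\frac{\theta}{2\sin(\theta/2)}\leq \frac{\pi}{2}\;\;\;\;\;\mbox{for all}\;\;\theta\in(0,\pi],$$ the space $S^1$ is $(\pi/2)$-quasiconvex, although it is not geodesic. \end{remark}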
A {\it geodesic} $\gamma$ joining $x$ to $y$ in $X$ is a map $\gamma:I=[0,l]\to X$ from an interval $I$ to $X$ such that $\gamma(0)=x$, $\gamma(l)=y$ and $$\;\;\;\;\;\;\;\;d(\gamma(t),\gamma(t'))=|t-t'|\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\mbox{for all}\;\;t,t'\in I.$$ If $I=[0,\infty)$ or $\mathbb{R}$, then $\gamma$ is called a {\it geodesic ray} or a {\it geodesic line}, respectively. A metric space $X$ is said to be {\it geodesic} if every pair of points can be joined by a geodesic arc. Every rectifiably connected metric space $(X, d)$ admits a length (or intrinsic) metric, the so-called length distance, given by $$\ell_d(x, y) = \inf\ell_d(\gamma),$$ where the infimum is taken over all rectifiable curves $\gamma$ joining $x$ to $y$ in $X$. \begin{definition} Let $X$ be a metric space with $p\in X$, and let $c\geq 1$. We say that $X$ is $c$-{\it annular quasiconvex with respect to $p$} if for all $r>0$, every pair of points in the annulus $\overline{{\mathbb{B}}}(p,2r)\setminus {\mathbb{B}}(p,r)$ can be joined by a $c$-quasiconvex curve lying in $\overline{{\mathbb{B}}}(p,2cr)\setminus {\mathbb{B}}(p,r/c)$. $X$ is called $c$-{\it annular quasiconvex} if it is $c$-annular quasiconvex with respect to each of its points. \end{definition} \begin{definition}\label{def-k} Let $(X,d)$ be a rectifiably connected, locally compact and complete metric space, and let $D\subsetneq X$ be a domain. The {\it quasihyperbolic length} of a curve $\gamma \subset D$ is defined as $$\ell_{k}(\gamma)=\ell_{k_D}(\gamma)=\int_{\gamma}\frac{|dz|}{d_D(z)}.$$ For any $x,y\in D$, the {\it quasihyperbolic distance} $k(x,y)$ between $x$ and $y$ is defined by $$k(x,y)=k_D(x,y)=\inf\ell_{k}(\gamma), $$ where the infimum is taken over all rectifiable curves $\gamma$ joining $x$ to $y$ in $D$. \end{definition} We remark that the resulting space $(D,k)$ is complete, proper and geodesic whenever $D$ is locally quasiconvex (cf. \cite[Proposition $2.8$]{BHK}). 
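The following classical computation illustrates Definition \ref{def-k}; it is standard and is recorded here only for the reader's convenience. \begin{remark} Let $D=\mathbb{R}^n\setminus\{0\}$, so that $d_D(z)=|z|$. If $x$ and $y$ lie on a common ray emanating from the origin with $0<|x|\leq |y|$, then the radial segment from $x$ to $y$ yields $$k_D(x,y)\leq \int_{|x|}^{|y|}\frac{dt}{t}=\log\frac{|y|}{|x|},$$ while for every rectifiable curve $\gamma$ joining $x$ and $y$ we have $$\ell_k(\gamma)=\int_\gamma\frac{|dz|}{|z|}\geq \Big|\int_\gamma\frac{d|z|}{|z|}\Big|=\log\frac{|y|}{|x|},$$ since $|dz|\geq \big|\,d|z|\,\big|$ along $\gamma$. Hence $k_D(x,y)=\log(|y|/|x|)$ in this case. \end{remark}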
There is another useful metric in the study of geometric function theory. We recall the definition of the distance ratio metric $j$ as follows: \begin{equation}\label{def-j} j_D(x,y)=\log\Big(1+\frac{d(x,y)}{d_D(x)\wedge d_D(y)}\Big).\end{equation} \begin{definition} Let $(X,d)$ be a rectifiably connected, locally compact and complete metric space, and let $D\subsetneq X$ be a domain. Let $C_{gh}\geq 1$ be a constant. We say that $D$ satisfies the {\it $C_{gh}$-Gehring-Hayman inequality} if for all $x$, $y$ in $D$ and for each quasihyperbolic geodesic $\gamma$ joining $x$ and $y$ in $D$, we have $$\ell(\gamma)\leq C_{gh}\ell(\beta_{x,y}),$$ where $\beta_{x,y}$ is any other curve joining $x$ and $y$ in $D$. In other words, quasihyperbolic geodesics are essentially the shortest curves in $D$. \end{definition} \begin{definition} Let $(X,d)$ be a rectifiably connected, locally compact and complete metric space, and let $D\subsetneq X$ be a domain. Let $C_{bs}\geq 1$ be a constant. We say that $D$ satisfies the {\it $C_{bs}$-ball separation condition} if for all $x$, $y$ in $D$ and for each quasihyperbolic geodesic $\gamma$ joining $x$ and $y$ in $D$, we have for every $z\in \gamma$, $${\mathbb{B}}(z,C_{bs}d_D(z)) \cap \beta_{x,y} \not=\emptyset ,$$ where $\beta_{x,y}$ is any other curve joining $x$ and $y$ in $D$. \end{definition} \begin{definition}\label{def-u} Let $(X,d)$ be a rectifiably connected, locally compact and complete metric space, and let $D\subsetneq X$ be a domain. We say that $D$ is $c$-{\it uniform} provided there exists a constant $c\geq 1$ with the property that each pair of points $x$, $y$ in $D$ can be joined by a rectifiable arc $\gamma$ in $D$ satisfying \begin{enumerate} \item $\min\{\ell(\gamma[x,z]),\ell(\gamma[z,y])\}\leq c\,d_D(z)$ for all $z\in \gamma$, and \item $\ell(\gamma)\leq c\,d(x,y)$, \end{enumerate} \noindent where $\gamma[x,z]$ denotes the part of $\gamma$ between $x$ and $z$. 
\end{definition} \subsection{Mappings on metric spaces} In this part, we recall certain definitions for mappings between metric spaces. Here primes always denote the images of points under $f$; for example, $x'=f(x)$. A quadruple in $X$ is an ordered sequence $Q = (x,y,z,w)$ of four distinct points in $X$. The {\it cross ratio} of $Q$ is defined to be the number $$r(x,y,z,w) = \frac{d(x,z)d(y,w)}{d(x,y)d(z,w)}.$$ Observe that the definition is extended in the well known manner to the case where one of the points is $\infty$. For example, $$r(x,y,z,\infty) = \frac{d(x,z)}{d(x,y)}.$$ Suppose that $\eta$ and $\theta$ are homeomorphisms from $[0, \infty)$ to $[0, \infty)$, and that $f:$ $(X_1,d_1)\to (X_2,d_2)$ is an embedding between two metric spaces. Then \begin{enumerate} \item we say that $f$ is {\it $L$-bilipschitz} for some $L\geq 1$ if for all $x,y\in X_1,$ $$L^{-1}d_1(x,y)\le d_2(x',y')\le Ld_1(x,y).$$ \item $f$ is said to be {\it $\eta$-quasisymmetric} if for all $x$, $y$ and $z$ in $X_1$ and all $t>0$, $$d_1(x,y)\leq t d_1(x,z)\;\;\;\;\;\;\mbox{implies that}\;\;\;\;\;\;d_2(x',y')\leq \eta(t) d_2(x',z').$$ \item $f$ is called {\it $\theta$-quasim\"obius} if for all distinct $x,y,z,w$ in $X_1$ and all $t>0$, $$r(x,y,z,w)\leq t\;\;\;\;\;\;\mbox{implies that}\;\;\;\;\;\;r(x',y',z',w')\leq \theta(t).$$ \end{enumerate} A set $A$ in a metric space $X$ is called {\it $c$-cobounded} in $X$ for $c\geq 0$ if every point $x\in X$ has distance at most $c$ from $A$. If $A$ is $c$-cobounded for some $c\geq 0$, then we say that $A$ is {\it cobounded} in $X$ \cite{BS}. \begin{definition} Let $\lambda\geq 1$ and $c\geq 0$. 
A mapping $f:$ $(X_1,d_1)\to (X_2,d_2)$ is said to be a {\it roughly $(\lambda, c)$-quasi-isometry} if $f(X_1)$ is $c$-cobounded in $X_2$ and for all $x,y\in X_1$, $$\frac{1}{\lambda}d_1(x,y)-c\leq d_{2}(x',y')\leq \lambda d_1(x,y)+c.$$ \end{definition} \subsection{Sphericalization of metric measure spaces} In this subsection, we recall some material concerning metric measure spaces and sphericalization, for which we refer to the standard references \cite{BB03,HSX,LS}. Let $(X, d)$ be a metric space. $X$ is said to be {\it doubling} if there is a constant $C$ such that every (metric) ball ${\mathbb{B}}$ in $X$ can be covered with at most $C$ balls of half the radius of ${\mathbb{B}}$. A positive Borel measure $\mu$ on $X$ is {\it doubling} if there is a constant $C_\mu$ such that $$\mu({\mathbb{B}}(x,2r))\leq C_\mu \mu({\mathbb{B}}(x,r))$$ for all $x\in X$ and $r>0$. Moreover, $X$ is said to be {\it Ahlfors $Q$-regular} if it admits a positive Borel measure $\mu$ such that $$C^{-1}R^Q\leq \mu({{\mathbb{B}}}(x,R)) \leq C R^Q$$ for all $x\in X$ and $0<R< {\operatorname{diam}}(X)$ (it is possible that the diameter of $X$ satisfies ${\operatorname{diam}}(X)=\infty$), where $C\geq 1$ and $Q>0$ are constants. Note that Ahlfors regular spaces are necessarily doubling. For instance, the Euclidean space $\mathbb{R}^n$ with Lebesgue measure is Ahlfors $n$-regular. Given an unbounded locally compact metric space $(X,d)$ and a non-isolated point $a\in X$, we consider the one point compactification $\dot{X}=X\cup \{\infty\}$ and define $d_a:\dot{X}\times \dot{X}\to[0,\infty)$ as follows: $$d_a(x,y)=d_a(y,x)=\begin{cases} \displaystyle\; \frac{d(x,y)}{[1+d(x,a)][1+d(y,a)]},\; \;\;\;\;\mbox{if}\;\;x,y\in X,\\ \displaystyle\;\;\;\;\;\frac{1}{1+d(x,a)},\;\;\;\; \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \mbox{if}\;\;y=\infty\; \mbox{and}\;x\in X,\\ \displaystyle\;\;\;\;\;\;\;\;\;\;0, \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\mbox{if}\;\; x=\infty=y. 
\end{cases} $$ In general, $d_a$ is not a metric on $\dot{X}$, but only a quasimetric. However, there is a standard procedure, known as the {\it chain construction}, to construct a metric from a quasimetric as follows. Define $$\widehat{d}_a(x,y):=\inf \Big\{ \sum_{j=0}^k d_a(x_j,x_{j+1})\Big\},$$ where the infimum is taken over all finite sequences $x=x_0,x_1,...,x_k,x_{k+1}=y$ from $\dot{X}$. Then $(\dot{X},\widehat{d}_a)$ is a metric space, called the {\it sphericalization} of $(X,d)$ associated to the point $a$. Moreover, by \cite[Lemma 2.2.5]{BuSc}, we have for all $x,y\in \dot{X}$ \begin{equation}\label{z-1.1} \frac{1}{4}d_a(x,y)\leq \widehat{d}_a(x,y)\leq d_a(x,y).\end{equation} In the case that $(X,d)$ is a rectifiably connected unbounded metric space, we define a Borel function $\rho_a:X\to [0,\infty)$ by $$\rho_a(x)=\frac{1}{[1+d(a,x)]^2}.$$ By an argument similar to \cite[4.1 and (4.2)]{BHX}, we obtain that for any rectifiable curve $\gamma$ joining $x$ and $y$, \begin{equation}\label{z-1.2} \ell_{\widehat{d}_a}(\gamma)=\int_\gamma \rho_a(z)|dz|.\end{equation} Suppose that $(X,d)$ is a proper space equipped with a Borel-regular measure $\mu$ such that the measures of non-empty open bounded sets are positive and finite. We define the {\it spherical measure} $\mu_a$ associated to the point $a\in X$ by $$\mu_a(A)=\int_{A\setminus \{\infty\}}\frac{1}{\mu({\mathbb{B}}(a,1+d(a,z)))^2}d\mu(z).$$ \subsection{Gromov hyperbolic spaces}\label{s-g} In this subsection, we give some basic information about Gromov hyperbolic spaces, for which we refer to the standard references \cite{BS,BrHa,BuSc,Gr87,Vai-0}. We begin with the definition of $\delta$-hyperbolic spaces. 
We say that a metric space $(X,d)$ is {\it Gromov hyperbolic} if there is a constant $\delta\geq 0$ such that it satisfies the following $\delta$-inequality: $$(x|y)_w\geq \min\{(x|z)_w,(z|y)_w\}-\delta$$ for all $x,y,z,w\in X$, where $(x|y)_w$ is the {\it Gromov product} with respect to $w$ defined by $$(x|y)_w=\frac{1}{2}[d(x,w)+d(y,w)-d(x,y)].$$ \begin{definition} Suppose that $(X, d)$ is a Gromov $\delta$-hyperbolic metric space for some constant $\delta\geq 0$. \begin{enumerate} \item A sequence $\{x_i\}$ in $X$ is called a {\it Gromov sequence} if $(x_i|x_j)_w\rightarrow \infty$ as $i,$ $j\rightarrow \infty.$ \item Two such sequences $\{x_i\}$ and $\{y_j\}$ are said to be {\it equivalent} if $(x_i|y_i)_w\rightarrow \infty$. \item The {\it Gromov boundary} or {\it boundary at infinity} $\partial_\infty X$ of $X$ is defined to be the set of all equivalence classes of Gromov sequences. \item For $a\in X$ and $\eta\in \partial_\infty X$, the Gromov product $(a|\eta)_w$ of $a$ and $\eta$ is defined by $$(a|\eta)_w= \inf \big\{ \liminf_{i\rightarrow \infty}(a|b_i)_w:\; \{b_i\}\in \eta\big\}.$$ \item For $\xi,$ $\eta\in \partial_\infty X$, the Gromov product $(\xi|\eta)_w$ of $\xi$ and $\eta$ is defined by $$(\xi|\eta)_w= \inf \big\{ \liminf_{i\rightarrow \infty}(a_i|b_i)_w:\; \{a_i\}\in \xi\;\;{\rm and}\;\; \{b_i\}\in \eta\big\}.$$ \end{enumerate} \end{definition} Let $X$ be a proper, geodesic $\delta$-hyperbolic space, and let $w\in X$. We say that $X$ is {\it $K$-roughly starlike} with respect to $w$ if for each $x\in X$ there is some point $\eta\in\partial_\infty X$ and a geodesic ray $\gamma=[w,\eta]$ emanating from $w$ to $\eta$ with $${\operatorname{dist}}(x,\gamma)\leq K.$$ This concept was introduced by Bonk, Heinonen and Koskela \cite{BHK}, see also \cite{BS}; they proved that bounded uniform spaces and Gromov hyperbolic domains in ${\mathbb R}^n$ are roughly starlike. 
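Before constructing visual metrics on the Gromov boundary, we record a basic example of the Gromov product; this classical observation is included only for orientation and is not needed in the proofs. \begin{remark} If $X$ is a metric tree (an $\mathbb{R}$-tree), then for all $x,y,w\in X$ we have $$(x|y)_w={\operatorname{dist}}(w,[x,y]),$$ where $[x,y]$ is the geodesic joining $x$ and $y$, and the $\delta$-inequality holds with $\delta=0$. Thus metric trees are Gromov hyperbolic in the strongest possible sense. \end{remark}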
For $0<\varepsilon<\min\{1,\frac{1}{5\delta}\}$, define $$\rho_{w,\varepsilon}(\xi,\zeta) =e^{-\varepsilon(\xi|\zeta)_w}$$ for all $\xi,\zeta$ in the Gromov boundary $\partial_\infty X$ of $X$, with the convention $e^{-\infty}=0$. We now define $$d_{w,\varepsilon}(\xi,\zeta):=\inf \Big\{\sum_{i=1}^{n} \rho_{w,\varepsilon} (\xi_{i-1},\xi_i):n\geq 1,\xi=\xi_0,\xi_1,...,\xi_n=\zeta\in \partial_\infty X\Big\}.$$ Then $(\partial_\infty X,d_{w,\varepsilon})$ is a metric space with $$\rho_{w,\varepsilon}/2\leq d_{w,\varepsilon}\leq \rho_{w,\varepsilon},$$ and we call $d_{w,\varepsilon}$ the {\it Bourdon metric} of $\partial_\infty X$ based at $w$ with the parameter $\varepsilon$. Following \cite{BuSc}, we say that $b:X\to \mathbb{R}$ is a {\it Busemann function} based at $\xi$, denoted by $b\in \mathcal{B}(\xi)$, if for some $w\in X$, we have $$b(x)=b_{\xi,w}(x)=b_\xi(x,w)=(\xi|w)_x-(\xi|x)_w\;\;\;\;\;\;\;\;\;\;\;\;\mbox{for}\;x\in X.$$ We next define the Gromov product of $x,y\in X$ based at the Busemann function $b=b_{\xi,w}\in \mathcal{B}(\xi)$ by $$(x|y)_b=\frac{1}{2}(b(x)+b(y)-d(x,y)).$$ Similarly, for $x\in X$ and $\eta\in \partial_\infty X\setminus\{\xi\}$, the Gromov product $(x|\eta)_b$ of $x$ and $\eta$ is defined by $$(x|\eta)_b= \inf \big\{ \liminf_{i\rightarrow \infty}(x|z_i)_b:\; \{z_i\}\in \eta\big\}.$$ For points $\xi_1,\xi_2\in \partial_\infty X\setminus\{\xi\}$, we define their Gromov product based at $b$ by $$(\xi_1|\xi_2)_b=\inf\big\{\liminf_{i\to\infty} (x_i|y_i)_b: \{x_i\}\in\xi_1 , \{y_i\}\in\xi_2\big\}.$$ According to \cite[$(3.2)$ and Example $3.2.1$]{BuSc}, we know that \begin{equation}\label{z0} (x|y)_b\doteq_{10\delta} (x|y)_{w,\xi}:=(x|y)_w-(\xi|x)_w-(\xi|y)_w,\end{equation} where $x\doteq_{a}y$ means that $|x-y|\le a$ for some real number $a\geq 0$. It follows from \cite[Proposition $3.2.3$]{BuSc} that $(x|y)_b,(y|z)_b,(x|z)_b$ form a $22\delta$-triple for every $x,y,z\in X$. 
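In the degenerate case $\delta=0$, the chain construction above is not needed; we record this elementary observation only as an illustration. \begin{remark} If $X$ is $0$-hyperbolic (for instance, a metric tree), then the $\delta$-inequality extends exactly to the boundary and gives $$(\xi|\zeta)_w\geq \min\{(\xi|\nu)_w,(\nu|\zeta)_w\}$$ for all $\xi,\nu,\zeta\in\partial_\infty X$, so that $\rho_{w,\varepsilon}$ satisfies the ultrametric inequality $\rho_{w,\varepsilon}(\xi,\zeta)\leq \max\{\rho_{w,\varepsilon}(\xi,\nu),\rho_{w,\varepsilon}(\nu,\zeta)\}$. In particular, $\rho_{w,\varepsilon}$ is itself a metric, and the chain construction changes nothing: $d_{w,\varepsilon}=\rho_{w,\varepsilon}$. \end{remark}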
Now we recall the definition of the {\it Hamenst\"adt metric} of $\partial_\infty X$ based at $\xi$ or a Busemann function $b=b_{\xi,w}\in \mathcal{B}(\xi)$. For $\varepsilon>0$ with $e^{22\varepsilon\delta}\leq 2$, define $$\rho_{b,\varepsilon}(\xi_1,\xi_2)= e^{-\varepsilon(\xi_1|\xi_2)_b}\;\;\;\;\;\;\;\mbox{for all}\;\xi_1,\xi_2\in \partial_\infty X.$$ Then for $i=1,2,3$ with $\xi_i\in \partial_\infty X\setminus\{\xi\}$, we have $$\rho_{b,\varepsilon}(\xi_1,\xi_2)\leq e^{22\varepsilon\delta} \max\{\rho_{b,\varepsilon}(\xi_1,\xi_3),\rho_{b,\varepsilon}(\xi_3,\xi_2)\}.$$ That is, $\rho_{b,\varepsilon}$ is a $K'$-quasi-metric on $\partial_\infty X\setminus\{\xi\}$ with $K'=e^{22\varepsilon\delta}\leq 2$. We now define $$\sigma_{b,\varepsilon}(x,y):=\inf \Big\{\sum_{i=1}^{n} \rho_{b,\varepsilon} (x_{i-1},x_i):n\geq 1,x=x_0,x_1,...,x_n=y\in\partial_\infty X\setminus\{\xi\} \Big\}.$$ By \cite[Lemma $3.3.3$]{BuSc}, we see that $(\partial_\infty X\setminus\{\xi\}, \sigma_{b,\varepsilon})$ is a metric space with $$\rho_{b,\varepsilon}/2\leq \sigma_{b,\varepsilon}\leq \rho_{b,\varepsilon}.$$ Then $\sigma_{b,\varepsilon}$ is called the {\it Hamenst\"adt metric} on the punctured space $\partial_\infty X\setminus\{\xi\}$ based at $\xi$ with parameter $\varepsilon$. To conclude this part, we note that $\partial_\infty X$ equipped with any Bourdon metric is bounded. However, the punctured space $\partial_\infty X\setminus\{\xi\}$ equipped with any Hamenst\"adt metric $\sigma_{b,\varepsilon}$ is unbounded. \section{Proofs of the main results}\label{sec-3} \subsection{} In this subsection, we shall give the proof of Theorem \ref{thm-1}. To this end, we introduce some necessary results. The first one is known as Assouad's embedding Theorem. \begin{theorem}\label{Thm-2} $($\cite[Theorem $8.1.1$]{BuSc}$)$ Let $(Z,d)$ be a doubling metric space. 
Then for every $s\in(0,1)$ there is a bilipschitz embedding $\varphi:(Z,d^s) \to \mathbb{R}^N$, where $N\in \mathbb{N}$ depends only on $s$ and the doubling constant of the metric $d$. \end{theorem} Secondly, the following auxiliary result concerns the quasim\"obius invariance of doubling metric spaces. We note that this lemma has been proved by Heer \cite{Heer}; his proof was based on a recent work of the third author and Shanmugalingam \cite{LS}. However, our arguments are quite different and based on Assouad's embedding theorem. We think it is interesting and thus present a proof here. \begin{lemma}\label{lem-2} Let $(X,d)$ and $(X',d')$ be metric spaces, and let $f:X\to X'$ be a quasim\"{o}bius embedding. If $X'$ is a doubling metric space, then $X$ is doubling. \end{lemma} \begin{proof} Without loss of generality, we may assume that $X'=f(X)$ and that $f$ is a homeomorphism, because every subspace of a doubling metric space is also doubling, see \cite[Remark 2.8]{TV}. Then according to Theorem \ref{Thm-2}, we see that there is a quasisymmetric embedding $$\varphi:(X',d') \to Z:=\varphi(X')\subset \mathbb{R}^N,$$ where $N\in \mathbb{N}$ depends only on the doubling constant of the metric $d'$. Moreover, we obtain a quasim\"{o}bius homeomorphism $$g=\varphi\circ f:X\to Z.$$ In order to show that $X$ is doubling, we consider two cases. \begin{case} $X$ is unbounded.\end{case} If $g(x)\to\infty$ as $x\to \infty$, by \cite[Theorem $3.10$]{Vai-5}, we see that $g$ is quasisymmetric and thus the claim follows from \cite[Theorem 2.10]{TV}. If $g(x)\to p\neq\infty$ as $x\to \infty$, we may assume without loss of generality that $g(\infty)=0\in Z$. Let $$u(x)=\frac{x}{|x|^2}$$ be the reflection about the unit sphere centered at the origin in $\mathbb{R}^N\cup \{\infty\}$. Clearly, $u$ is a M\"{o}bius transformation. It follows that $u\circ g:X\to u(Z)$ is quasim\"{o}bius with $u\circ g(x)\to \infty$ as $x\to \infty$. 
Again by \cite[Theorem 3.10]{Vai-5} we see that $u\circ g$ is quasisymmetric. Therefore, it follows from \cite[Theorem 2.10]{TV} that $X$ is a doubling metric space. \begin{case} $X$ is bounded. \end{case} In this case, we consider the spherical metric $\sigma$ on $\mathbb{R}^N\cup \{\infty\}$ which is determined by the length element $$|dz|_\sigma=\frac{2|dz|}{1+|z|^2},$$ where $|dz|$ is the Euclidean length element and $|z|$ is the Euclidean norm of a point $z\in \mathbb{R}^N$. Then $g:X\to (Z,\sigma)$ is a quasim\"{o}bius map between two bounded metric spaces, which is actually quasisymmetric. By \cite[Theorem 2.10]{TV}, we see that $X$ is doubling. \end{proof} Next, we show that the Gromov boundary of a hyperbolic space equipped with any Bourdon metric and with any Hamenst\"adt metric are quasim\"obius equivalent. \begin{lemma}\label{lem-3} Let $X$ be a Gromov $\delta$-hyperbolic space and $\partial_\infty X$ its Gromov boundary. Then the identity map $(\partial_\infty X\setminus\{\xi\}, d_{w,\varepsilon})\to (\partial_\infty X\setminus\{\xi\}, \sigma_{b,\varepsilon'})$ is $\theta$-quasim\"obius with $\theta$ depending only on $\delta,\varepsilon$ and $\varepsilon'$, where $d_{w,\varepsilon}$ is the Bourdon metric based at $w\in X$ with parameter $\varepsilon>0$ and $\sigma_{b,\varepsilon'}$ is the Hamenst\"adt metric based at the Busemann function $b=b_{\xi,o}$ with parameter $\varepsilon'>0$ for $o\in X$ and $\xi\in \partial_\infty X$. \end{lemma} \begin{proof} For any distinct points $x_i\in \partial_\infty X\setminus\{\xi\}$, $i=1,2,3,4$, by \cite[Lemmas 2.2.2 and 3.2.4]{BuSc} we may assume that all of them lie in $X$. Thus by $(\ref{z0})$, we obtain $$(x_1|x_2)_w+(x_3|x_4)_w-(x_1|x_3)_w-(x_2|x_4)_w$$ \begin{eqnarray}\nonumber &=& (x_1|x_2)_o+(x_3|x_4)_o-(x_1|x_3)_o-(x_2|x_4)_o \\ \nonumber &\doteq_{40\delta}& (x_1|x_2)_b+(x_3|x_4)_b-(x_1|x_3)_b-(x_2|x_4)_b. 
\end{eqnarray} Therefore, we have \begin{eqnarray}\nonumber \frac{\sigma_{b,\varepsilon'}(x_1,x_3)\sigma_{b,\varepsilon'}(x_2,x_4)}{\sigma_{b,\varepsilon'}(x_1,x_2)\sigma_{b,\varepsilon'}(x_3,x_4)}&\leq& 4e^{-\varepsilon'[(x_1|x_2)_b+(x_3|x_4)_b-(x_1|x_3)_b-(x_2|x_4)_b]} \\ \nonumber &\leq& 4e^{-40\varepsilon'\delta}e^{-\varepsilon'[(x_1|x_2)_w+(x_3|x_4)_w-(x_1|x_3)_w-(x_2|x_4)_w]} \\ \nonumber &\leq& 4e^{-40\varepsilon'\delta}4^{\varepsilon'/\varepsilon} \Big[\frac{d_{w,\varepsilon}(x_1,x_3)d_{w,\varepsilon}(x_2,x_4)}{d_{w,\varepsilon}(x_1,x_2)d_{w,\varepsilon}(x_3,x_4)}\Big]^{\varepsilon'/\varepsilon}. \end{eqnarray} This proves Lemma \ref{lem-3}. \end{proof} \emph{Proof of Theorem \ref{thm-1}.} This follows from Lemmas \ref{lem-2} and \ref{lem-3}.\qed \subsection{} In this subsection, we will prove Theorem \ref{thm-2} with the aid of the following auxiliary results. It seems that Lemma \ref{lem-4} below is well known, but we failed to find a reference for its proof. As we will use this to prove Theorem \ref{thm-2}, we give a proof here. \begin{lemma}\label{lem-4} Let $f:(X,d)\to (Y,d')$ be a rough $(\lambda,c)$-quasi-isometry between two proper geodesic Gromov hyperbolic spaces. If $X$ is $K$-roughly starlike with respect to some point $w\in X$, then $Y$ is $K'$-roughly starlike with respect to the point $f(w)$. \end{lemma} \begin{proof} Since $f:X\to Y$ is a rough $(\lambda,c)$-quasi-isometry, we see that for any $x'\in Y$ there is some point $x\in X$ such that \begin{equation}\label{l-2} d'(f(x),x')\leq c.\end{equation} By the assumption that $X$ is $K$-roughly starlike with respect to $w\in X$, we see that there is a geodesic ray $\gamma$ emanating from $w$ to some $\xi\in \partial_\infty X$ and satisfying \begin{equation}\label{l-3}{\operatorname{dist}}(x,\gamma)\leq K.\end{equation} Moreover, we see from \cite[Proposition 6.3]{BS} that there is an extension $f:\partial_\infty X\to \partial_\infty Y$ of $f$ with $f(\xi)=\xi'\in \partial_\infty Y$. 
Then take another geodesic ray $\gamma'$ emanating from $w'=f(w)$ to $\xi'$. Because $f(\gamma)$ is a rough quasi-isometric ray, it follows from the extended stability theorem \cite[Theorem 6.32]{Vai-0} that the Hausdorff distance satisfies \begin{equation}\label{l-4} {\operatorname{dist}}_\mathcal{H}(f(\gamma), \gamma')\leq C.\end{equation} Hence we obtain from (\ref{l-2}), (\ref{l-3}) and (\ref{l-4}) that $${\operatorname{dist}}(x',\gamma')\leq c+C+\lambda K+c=K',$$ as desired. \end{proof} We now pause to recall certain auxiliary definitions that we shall need. Suppose that $(X,d)$ is a $C$-annular quasiconvex, geodesic and proper metric space, and $\Omega\subsetneq X$ is a domain. For any $x\in\Omega$, denote $d_{\Omega}(x)={\operatorname{dist}}(x,\partial \Omega)$. Let $0<\lambda\leq 1/2$. Following \cite[Chapter 7]{BHK} or \cite{HSX}, a point $x_0$ in $\Omega$ is said to be a {\it $\lambda$-annulus point} of $\Omega$ if there is a point $a\in\partial\Omega$ such that $$t=d(x_0,a)=d_{\Omega}(x_0)$$ and the annulus ${\mathbb{B}}(a,t/\lambda)\setminus \overline{{\mathbb{B}}}(a,\lambda t)$ is contained in $\Omega$. If $x_0$ is not a $\lambda$-annulus point of $\Omega$, then it is said to be a {\it $\lambda$-arc point} of $\Omega$. Then we have the following lemma. \begin{lemma}\label{z-0} Suppose that $(X,d)$ is a $C$-annular quasiconvex, geodesic and proper metric space, and $\Omega\subsetneq X$ is a bounded $\delta$-hyperbolic domain. Then $(\Omega,k)$ is $K$-roughly starlike with respect to some point $w\in \Omega$, where $K$ is a constant depending only on $C$ and $\delta$. \end{lemma} \begin{proof} Choose a point $w$ such that $$d_{\Omega}(w)=\max\{d_{\Omega}(x):x\in \Omega\}.$$ We shall find a constant $K$ depending only on $C$ and $\delta$ such that for each $x\in \Omega$ there exists a quasihyperbolic geodesic ray $\alpha$ emanating from $w$ satisfying $${\operatorname{dist}}_k(x,\alpha)\leq K.$$ Let $\lambda=1/(2C)$. Fix $x\in \Omega$; we consider two cases. 
\begin{case} $x$ is a $\lambda$-arc point.\end{case} Then by \cite[Lemma 7.3]{HSX}, we see that there exist two points $a,b\in\partial_\infty \Omega$ and a $C_1$-anchor $\gamma$ connecting $a, b$ with $x\in \gamma$ and $C_1=C_1(\lambda,C)$, where $\partial_\infty \Omega$ is the Gromov boundary of the hyperbolic space $(\Omega,k)$. For the definition of an anchor see \cite[Definition 7.2]{HSX}. Moreover, by the definition of an anchor, we see that $\gamma$ is a continuous quasihyperbolic $(C_1,C_1)$-quasigeodesic, that is, $$\ell_k(\gamma[u,v])\leq C_1 k(u,v)+C_1$$ for all $u,v\in \gamma$. On the other hand, we see from \cite[Proposition 2.8]{BHK} that $(\Omega,k)$ is a proper geodesic metric space. Thus there is a quasihyperbolic geodesic line $\gamma_1$ connecting $a$ and $b$. It follows from \cite[Lemma 3.4]{HSX} that the quasihyperbolic Hausdorff distance $${\operatorname{dist}}_\mathcal{H}(\gamma, \gamma_1)\leq C_2,$$ where $C_2$ is a constant depending only on $C_1$ and $\delta$. This implies that there is a point $y\in\gamma_1$ with $$k(x,y)\leq C_2.$$ Moreover, we may join $w$ to $a$ and $b$ by quasihyperbolic geodesic rays $\alpha_1$ and $\alpha_2$, respectively. Now by \cite[Lemma 3.1]{HSX}, we find that $${\operatorname{dist}}_k(y,\alpha_1\cup\alpha_2)\leq 24\delta,$$ and thus, $${\operatorname{dist}}_k(x,\alpha_1\cup\alpha_2)\leq 24\delta+C_2=K,$$ as desired. \begin{case} $x$ is a $\lambda$-annulus point. \end{case} Then there is a point $x_0\in\partial \Omega$ with $$t=d(x_0,x)=d_{\Omega}(x)$$ such that the annulus ${\mathbb{B}}(x_0,t/\lambda)\setminus \overline{{\mathbb{B}}}(x_0,\lambda t)$ is contained in $\Omega$. Then choose a quasihyperbolic geodesic ray $\alpha$ emanating from $w$ to $x_0$. Since $d(w,x_0)\geq d_{\Omega}(w)\geq d_{\Omega}(x)=t$, there exists a point $z\in \alpha$ with $d(z,x_0)=t$.
Because $X$ is $C$-annular quasiconvex, we see that there is a curve $\beta\subset {\mathbb{B}}(x_0,Ct)\setminus \overline{{\mathbb{B}}}(x_0,t/C)$ connecting $x$ and $z$ with $$\ell(\beta)\leq Cd(x,z)\leq 2Ct.$$ Since ${\mathbb{B}}(x_0,t/\lambda)\setminus \overline{{\mathbb{B}}}(x_0,\lambda t)$ is contained in $\Omega$ and $\lambda=1/(2C)$, it follows that $\beta\subset \Omega$ and that for each $u\in \beta$, we have $d_{\Omega}(u)\geq t/(2C)$. Therefore, we have $$k(x,z)\leq \ell_k(\beta)\leq 4C^2=K,$$ as required. \end{proof} \emph{Proof of Theorem \ref{thm-2}.} Firstly, it follows from \cite[Theorem 6.1]{BB03} that the Gehring-Hayman condition and the ball separation condition imply Gromov hyperbolicity. It remains to show the necessity. If $(\Omega,d)$ is bounded, then the assertion follows from \cite[Theorem 1.2]{KLM}. If $\Omega$ is unbounded, by \cite[Corollary 5.2]{KLM}, it suffices to check the rough starlikeness of $(\Omega, k)$. Towards this end, take a point $a\in \partial \Omega$. Denote the sphericalization of the metric space $(X,d)$ associated to the point $a$ by $(\dot{X},\widehat{d_a})$. Since $(X,d)$ is annular quasiconvex, it follows from \cite[Proposition 6.3]{BHX} that $(X,d)$ is quasiconvex. Hence, \cite[Lemma 3.4]{HRWZ} implies that $(\Omega,d)$ is locally quasiconvex. Since $a\in\partial \Omega$ and $(\Omega,d)$ is unbounded, it follows from \cite[Theorem 4.12]{BHX} that the identity map $(\Omega,k)\to (\Omega,\widehat{k}_a)$ is bilipschitz, where $\widehat{k}_a$ is the quasihyperbolic metric of $\Omega$ with respect to the metric $\widehat{d_a}$. This implies that $(\Omega,\widehat{k}_a)$ is Gromov hyperbolic, since $(\Omega,k)$ is Gromov hyperbolic and the quasihyperbolic metric is a length metric. Hence, by Lemma \ref{lem-4}, we only need to show that $(\Omega,\widehat{k}_a)$ is roughly starlike. According to \cite[Theorem 6.5(a)]{BHX}, the space $(\dot{X},\widehat{d_a})$ is quasiconvex and annularly quasiconvex.
Note that rough starlikeness and Gromov hyperbolicity are preserved under bilipschitz mappings. So we may assume that $(\dot{X},\widehat{d_a})$ is geodesic, because the quasiconvexity condition implies that the identity map $(\dot{X},\widehat{d_a})\to (\dot{X},\ell_{\widehat{d}_a})$ is bilipschitz, where $\ell_{\widehat{d}_a}$ is the length metric of $(\dot{X},\widehat{d_a})$. Hence, by Lemma \ref{z-0}, we get that $(\Omega,\widehat{k}_a)$ is roughly starlike, as desired. \qed \subsection{} The purpose of this subsection is to prove Theorem \ref{thm-3}. We also need some useful lemmas. The first one shows that, for bounded spaces, the $\varphi$-uniformity condition implies the quasihyperbolic growth condition, as follows. \begin{lemma}\label{lem-5} Let $(\Omega,d)$ be a locally compact, rectifiably connected and incomplete metric space. If $(\Omega,d)$ is bounded and $\varphi$-uniform, then there is an increasing function $\phi:[0,\infty)\to[0,\infty)$ such that for all $x\in \Omega$ $$k(w,x)\leq \phi\Big(\frac{d(w)}{d(x)}\Big),$$ where $w\in \Omega$ satisfies $d(w)=\max_{x\in \Omega}d(x)$ and $k$ is the quasihyperbolic metric of $\Omega$. In particular, we can take $\phi(t)=\varphi(\frac{{\operatorname{diam}} \Omega}{d(w)}t)$. Here and hereafter, we use $d(x)$ to denote the distance from $x$ to the boundary of $\Omega$ with respect to the metric $d$. \end{lemma} Secondly, we verify that the $\varphi$-uniformity condition is preserved under sphericalization. \begin{lemma}\label{lem-6} Let $(\Omega,d)$ be a locally compact, $c$-quasiconvex and incomplete metric space with $a\in\partial\Omega$. If $(\Omega,d)$ is unbounded and $\varphi$-uniform, then there exists a homeomorphism $\psi:[0,\infty)\to [0,\infty)$ such that the sphericalized space $(\Omega,\widehat{d}_a)$ associated to $a$ is $\psi$-uniform.
\end{lemma} \begin{proof} Since $(\Omega,d)$ is $c$-quasiconvex, we observe from \cite[Proposition 4.3]{BHX} that there are constants $\lambda\in(0,1/2)$ and $c_0\geq 1$ depending only on $c$ such that the sphericalization $(\Omega,\widehat{d}_a)$ is locally $(\lambda,c_0)$-quasiconvex. Moreover, we see from \cite[Theorem 4.12]{BHX} that the identity map $(\Omega,k)\to (\Omega,\widehat{k}_a)$ is $80c$-bilipschitz, where $\widehat{k}_a$ is the quasihyperbolic metric of $(\Omega,\widehat{d}_a)$. To prove this lemma, we only need to find a homeomorphism $\psi:[0,\infty)\to [0,\infty)$ such that \begin{equation}\label{l-new} \widehat{k}_a(x,y) \leq \psi\left(\frac{\widehat{d}_a(x,y)}{\widehat{d}_a(x)\wedge \widehat{d}_a(y)}\right)\end{equation} for all $x,y\in\Omega$, where $\widehat{d}_a(x)$ denotes the distance from $x$ to the boundary of $\Omega$ with respect to the metric $\widehat{d}_a$. To this end, we divide the proof into two cases. \begin{case}$\widehat{d}_a(x,y)\leq \frac{\lambda}{3c_0}\widehat{d}_a(x)$.\end{case} A similar argument as in \cite[Lemma 3.8]{HRWZ} shows that \begin{equation}\label{z-2} \widehat{k}_a(x,y) \leq 3c_0\frac{\widehat{d}_a(x,y)}{\widehat{d}_a(x)}\leq 3c_0\frac{\widehat{d}_a(x,y)}{\widehat{d}_a(x)\wedge \widehat{d}_a(y)},\end{equation} as desired. \begin{case}$\widehat{d}_a(x,y)> \frac{\lambda}{3c_0}\widehat{d}_a(x)$. \end{case} Then we claim that for all $x\in \Omega$, \begin{equation}\label{z-1} \widehat{d}_a(x)\leq \frac{2d(x)}{[1+d(x,a)]^2}.\end{equation} This can be seen as follows. Take a point $x_0\in\partial \Omega$ with $d(x)=d(x,x_0)$. We consider two possibilities. 
If $d(x_0,a)\leq \frac{1}{2}d(x,a)-\frac{1}{2}$, then we have $$d(x)\geq d(x,a)-d(x_0,a)\geq \frac{1}{2}(1+d(x,a)),$$ and therefore, $$\widehat{d}_a(x)\leq \widehat{d}_a(x,\infty)\leq \frac{1}{1+d(x,a)}\leq \frac{2d(x)}{[1+d(x,a)]^2}.$$ If, on the other hand, $d(x_0,a)> \frac{1}{2}d(x,a)-\frac{1}{2}$, then we have by (\ref{z-1.1}) that \begin{eqnarray}\nonumber \widehat{d}_a(x)&\leq& \widehat{d}_a(x,x_0) \\ \nonumber&\leq& \frac{d(x,x_0)}{[1+d(x,a)][1+d(x_0,a)]} \\ \nonumber&\leq& \frac{2d(x)}{[1+d(x,a)]^2}, \end{eqnarray} as needed. This proves (\ref{z-1}). Now by (\ref{z-1.1}) and (\ref{z-1}), we have for all $x,y\in \Omega$, \begin{eqnarray}\nonumber \widehat{j}_a(x,y) &=& \log\Big(1+\frac{\widehat{d}_a(x,y)}{\widehat{d}_a(x)\wedge \widehat{d}_a(y)}\Big) \\ \nonumber&\geq& \frac{1}{2}\log\Big(1+\frac{\widehat{d}_a(x,y)}{\widehat{d}_a(x)}\Big)\Big(1+\frac{\widehat{d}_a(x,y)}{\widehat{d}_a(y)}\Big) \\ \nonumber&\geq& \frac{1}{2}\log \Big(1+\frac{d(x,y)(1+d(x,a))^2}{8d(x)}\Big)\Big(1+\frac{d(x,y)(1+d(y,a))^2}{8d(y)}\Big) \\ \nonumber&>& \log\Big(1+\frac{d(x,y)}{8\sqrt{d(x)d(y)}}\Big) \\ \nonumber&>& \log\frac{\sqrt{d(x)d(y)}+2d(x,y)}{\sqrt{d(x)d(y)}}-4\log2 \\ \nonumber&\geq& \frac{1}{2}\log\Big(1+\frac{d(x,y)}{d(x)}\Big)\Big(1+\frac{d(x,y)}{d(y)}\Big)-4\log 2 \\ \nonumber&\geq& \frac{1}{2}j(x,y)-4\log 2. \end{eqnarray} Because the identity map $(\Omega,k)\to (\Omega,\widehat{k}_a)$ is $80c$-bilipschitz and $(\Omega,d)$ is $\varphi$-uniform, the above inequality implies that \begin{eqnarray}\label{z-3} \widehat{k}_a(x,y)&\leq& 80c k(x,y) \\ \nonumber &\leq& 80c\varphi(e^{j(x,y)}-1) \\ \nonumber &\leq& 80c\varphi\Big[256\Big(1+\frac{\widehat{d}_a(x,y)}{\widehat{d}_a(x)\wedge \widehat{d}_a(y)}\Big)^2-1\Big]. \end{eqnarray} Set $\psi(t)=3c_0t$ whenever $0\leq t\leq \lambda/(3c_0)$, and $\psi(t)=80cc_0\varphi(256(1+t)^2-1)$ whenever $t> \lambda/(3c_0)$. Therefore, by (\ref{z-2}) and (\ref{z-3}), we obtain \eqref{l-new}.
\end{proof} \begin{remark} In \cite{LVZ19}, Li, Vuorinen and Zhou proved that quasim\"obius mappings preserve $\varphi$-uniform domains of $\mathbb{R}^n$. In the proof of Lemma \ref{lem-6}, we calculate the control function for our needs. \end{remark} \emph{Proof of Theorem \ref{thm-3}.} $(1)$ We assume first that $(\Omega,d)$ is bounded. Since $(\Omega,d,\mu)$ is Ahlfors $Q$-regular and $(\Omega,k)$ is roughly starlike and Gromov hyperbolic, we see from \cite[Theorem $5.1$]{KLM} that $(\Omega,d)$ satisfies the Gehring-Hayman condition. Because $(\Omega,d)$ is $\varphi$-uniform, it follows from Lemma \ref{lem-5} that $(\Omega,d)$ satisfies the quasihyperbolic growth condition with $\phi(t)=\varphi(\frac{{\operatorname{diam}} \Omega}{d(w)}t)$. That is, for all $x\in \Omega$ we have $$k(w,x)\leq \phi\Big(\frac{d(w)}{d(x)}\Big),$$ where $w\in \Omega$ satisfies $d(w)=\max_{x\in \Omega}d(x)$ and $k$ is the quasihyperbolic metric of $\Omega$. Note that the conditions $$\int_1^\infty \frac{dt}{{\varphi^{-1}(t)}}<\infty\;\;\;\;\mbox{and}\;\;\;\;\int_1^\infty \frac{dt}{{\phi^{-1}(t)}}<\infty$$ are mutually equivalent. Therefore, according to \cite[Theorem $1.1$]{La}, we immediately see that the Gromov boundary and the metric boundary of $\Omega$ are homeomorphic. $(2)$ Assume that $(\Omega,d)$ is unbounded. Let $a\in \partial\Omega$. Denote by $(\Omega,\widehat{d}_a,\mu_a)$ the sphericalization of $(\Omega,d,\mu)$ associated to the point $a$. Let $\ell_{\widehat{d}_a}$ be the length metric of $\Omega$ with respect to the metric $\widehat{d}_a$. In order to show that there is a homeomorphism identification $\partial \Omega\cup \{\infty\}\to \partial_\infty \Omega,$ we need some preparations. 
Firstly, we show that \begin{claim}\label{z-4} the identity map $(\partial \Omega\cup \{\infty\},d)\to (\partial \Omega\cup \{\infty\}, \ell_{\widehat{d}_a})$ is a homeomorphism.\end{claim} Indeed, by the definition of $\ell_{\widehat{d}_a}$, we see that $d(x_n,a)\to \infty$ if and only if $\ell_{\widehat{d}_a}(x_n,\infty)\to 0$, as $n\to \infty.$ Thus it suffices to verify that a sequence $\{x_n\}\subset \Omega$ is Cauchy in the metric $d$ if and only if it is Cauchy in the metric $\ell_{\widehat{d}_a}$. On the one hand, by \cite[(2.11)]{LS} and the quasiconvexity of $(\Omega,d)$, it follows that $$\ell_{\widehat{d}_a}(x_n,x_m)\leq \ell_d(x_n,x_m)\leq cd(x_n,x_m),$$ where $\ell_d$ is the length metric of $(\Omega,d)$. On the other hand, we see from (\ref{z-1.1}) that $$d(x_n,x_m) \leq 4[1+d(x_n,a)][1+d(x_m,a)] \ell_{\widehat{d}_a}(x_n,x_m),$$ as required. We have proved Claim \ref{z-4}. Secondly, we check that \begin{claim}\label{z-5} $(\Omega,\widehat{k}_a)$ is roughly starlike and Gromov hyperbolic, and $(\Omega,\ell_{\widehat{d}_a})$ satisfies the Gehring-Hayman condition, where $\widehat{k}_a$ is the quasihyperbolic metric of $\Omega$ with respect to the metric $\widehat{d}_a$. \end{claim} By \cite[Theorem $4.12$]{BHX}, we see that the identity map $(\Omega,k)\to (\Omega,\widehat{k}_a)$ is $80c$-bilipschitz. Since $(\Omega, k)$ is Gromov hyperbolic, it follows from \cite[Page 402, Theorem 1.9]{BrHa} that $(\Omega, \widehat{k}_a)$ is Gromov hyperbolic as well. Since $(\Omega,k)$ is roughly starlike, we see from Lemma \ref{lem-4} that $(\Omega,\widehat{k}_a)$ is also roughly starlike. Furthermore, according to \cite[Proposition 3.1]{LS}, we immediately find that the sphericalization $(\Omega,\widehat{d}_a,\mu_a)$ is Ahlfors $Q$-regular. Thus we obtain from \cite[Theorem 5.1]{KLM} that $(\Omega,\ell_{\widehat{d}_a})$ satisfies the Gehring-Hayman condition, which shows Claim \ref{z-5}.
Next, we claim that \begin{claim}\label{z-6} there is a natural homeomorphism identification $$(\partial \Omega\cup \{\infty\}, \ell_{\widehat{d}_a})\to (\partial_\infty \Omega_a,\widehat{\rho}),$$ where $\partial_\infty \Omega_a$ is the boundary at infinity of Gromov hyperbolic space $(\Omega,\widehat{k}_a)$ and $\widehat{\rho}$ is an arbitrary Bourdon metric on $\partial_\infty \Omega_a$. \end{claim} Since $(\Omega,d)$ is $\varphi$-uniform, we see from Lemma \ref{lem-6} and its proof that both the spaces $(\Omega,\widehat{d}_a)$ and $(\Omega,\ell_{\widehat{d}_a})$ are $\psi$-uniform with $\psi(t)=80c\varphi(256(1+t)^2-1)$ for all $t\geq 1$. Thus it follows from Lemma \ref{lem-5} that there is an increasing function $\phi:[0,\infty)\to [0,\infty)$ and $w\in \Omega$ such that $$\widehat{k}_a(w,x)\leq \phi\Big(\frac{\widehat{d}_a(w)}{\widehat{d}_a(x)}\Big)$$ with $\phi(t)=\psi(\frac{t}{\widehat{d}_a(w)})$, since ${\operatorname{diam}}_{\widehat{d}_a}(\Omega)\leq 1$ by \eqref{z-1.1}. Moreover, it follows from the definitions of $\psi$ and $\phi$ that the conditions $$\int_1^\infty \frac{dt}{\sqrt{\varphi^{-1}(t)}}<\infty\;\;\;\;\mbox{and}\;\;\;\;\int_1^\infty \frac{dt}{\phi^{-1}(t)}<\infty$$ are mutually equivalent. Now we observe that the conditions \cite[(1.3) and (1.5)]{La} with respect to the metric $\widehat{k}_a$ are verified. Therefore, it follows from Claim \ref{z-5} and \cite[Theorem 1.1]{La} that there is a natural homeomorphism identification $(\partial \Omega\cup \{\infty\}, \ell_{\widehat{d}_a})\to (\partial_\infty \Omega_a,\widehat{\rho})$, which proves Claim \ref{z-6}. 
Finally, since the identity map $(\Omega,k)\to (\Omega,\widehat{k}_a)$ is $80c$-bilipschitz, we see from \cite[Propositions 6.3 and 6.5]{BS} that there is a natural homeomorphism identification $$(\partial_\infty \Omega,\rho) \to (\partial_\infty \Omega_a,\widehat{\rho}),$$ where $\partial_\infty \Omega$ is the Gromov boundary of hyperbolic space $(\Omega,k)$ and $\rho$ is a Bourdon metric defined on $\partial_\infty \Omega$. Therefore, this together with Claims \ref{z-4} and \ref{z-6} shows that there is a natural homeomorphism identification $\partial \Omega\cup \{\infty\}\to \partial_\infty \Omega.$ Hence Theorem \ref{thm-3} is proved. \qed
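As a concrete sanity check on the quasihyperbolic growth condition of Lemma \ref{lem-5}, one may test it numerically on a model domain. The following Python sketch is purely illustrative and not part of the paper: the domain $\Omega=(0,2)\subset\mathbb R$, the center $w=1$ and the closed form $k(w,x)=\log\big(d(w)/d(x)\big)$ for $x\in(0,1]$ are our own choices, with $k$ computed by a midpoint Riemann sum of $\int ds/d(t)$.

```python
import math

def quasihyperbolic(x, y, steps=100000):
    # Quasihyperbolic distance k(x, y) = \int ds / d(t) on the interval
    # Omega = (0, 2), where d(t) = min(t, 2 - t) is the distance to the
    # boundary {0, 2}; computed by a midpoint Riemann sum.
    a, b = min(x, y), max(x, y)
    h = (b - a) / steps
    return sum(h / min(a + (i + 0.5) * h, 2.0 - (a + (i + 0.5) * h))
               for i in range(steps))

w = 1.0  # center of Omega: d(w) = max over Omega of the boundary distance
for x in (0.5, 0.1, 0.01):
    # here k(w, x) equals log(d(w)/d(x)), so the growth condition
    # k(w, x) <= phi(d(w)/d(x)) holds with phi = log on this domain
    assert abs(quasihyperbolic(w, x) - math.log(1.0 / x)) < 1e-3
```

The check confirms that on this uniform domain the growth function $\phi$ may be taken logarithmic, consistent with taking $\phi(t)=\varphi(\frac{{\operatorname{diam}}\Omega}{d(w)}t)$ in Lemma \ref{lem-5}.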
\section{\bf Introduction} \label{sec:1} The goal of this paper is to deepen the links between the areas in the title. Invariant theory is concerned with the study of group actions on algebras, and in the present article we entirely concentrate on actions of finite groups on polynomial algebras via linear substitution of the variables. To begin with, let us briefly sketch the already existing links between the mentioned areas. For a finite-dimensional vector space $V$ over a field $\mathbb F$ and a finite group $G \le \GL(V)$, let $\mathbb F[V]^G \subset \mathbb F[V]$ denote the ring of invariants. Since E. Noether we know that $\mathbb F[V]^G \subset \mathbb F[V]$ is an integral ring extension and that $\mathbb F[V]^G$ is a finitely generated $\mathbb F$-algebra. In particular, $\mathbb F[V]^G$ is an integrally closed noetherian domain and hence a Krull domain. Benson \cite{Be93a} and Nakajima \cite{Na82z} determined its class group. Krull domains (their ideal theory and their class groups) are a central topic in multiplicative ideal theory (see the monographs \cite{Gi92,HK98} and the recent survey \cite{HK11b}). B. Schmid \cite{Sc91a} observed that the Noether number of a finite abelian group $G$ equals the Davenport constant of $G$ (a constant of central importance in zero-sum theory) and this established a first link between invariant theory and arithmetic combinatorics. Moreover, ideal and factorization theory of Krull domains are most closely linked with zero-sum theory via transfer homomorphisms (see \cite{Ge-HK06a, Ge09a} and Subsection \ref{3.B}). These links serve as our starting point. It is well-known that a domain $R$ is a Krull domain if and only if its monoid $R^{\bullet}$ of nonzero elements is a Krull monoid if and only if $R$ (resp. $R^{\bullet}$) has a divisor theory. 
To start with Krull monoids, a monoid $H$ is Krull if and only if its associated reduced monoid $H/H^{\times}$ is Krull, and every Krull monoid $H$ is a direct product $H^{\times} \times H_0$ where $H_0$ is isomorphic to $H/H^{\times}$. A reduced Krull monoid is uniquely determined (up to isomorphism) by its characteristic (roughly speaking by its class group $\mathcal C (H)$ and the distribution of the prime divisors in its classes; see the end of Subsection \ref{4.A}). By definition of the class group, a Krull monoid $H$ is factorial if and only if $\mathcal C(H)$ is trivial. Information on the subset $\mathcal C(H)^* \subset \mathcal C(H)$ of classes containing prime divisors is the crucial ingredient to understand the arithmetic of $H$, and hence in order to study the arithmetic of Krull monoids the first and most important issue is to determine $\mathcal C (H)^*$. By far the best understood setting in factorization theory are Krull monoids with finite class groups where every class contains a prime divisor. Indeed, there has been an abundance of work on them and we refer the reader to the survey by W.A. Schmid in this proceedings \cite{Sc16a}. A canonical method to obtain information on $\mathcal C (H)^*$ is to identify explicitly a divisor theory for $H$. A divisor theory of a monoid (or a domain) $H$ is a divisibility preserving homomorphism from $H$ to a free abelian monoid which satisfies a certain minimality property (Subsection \ref{2.1}). The concept of a divisor theory stems from algebraic number theory and it has found far-reaching generalizations in multiplicative ideal theory (\cite{HK98}). Indeed, divisor-theoretic tools, together with ideal-theoretic and valuation-theoretic ones, constitute a highly developed machinery for the structural description of monoids and domains. All the above mentioned concepts and problems from multiplicative ideal theory are studied for the ring of invariants. 
Theorem~\ref{main-theorem} (in Subsection \ref{4.A}) provides an explicit divisor theory of the ring of invariants $R=\mathbb F[V]^G$. The divisibility preserving homomorphism from $R^{\bullet}$ goes into a free abelian monoid which can be naturally described in the language of invariant theory, and the associated canonical transfer homomorphism $\theta\colon R^\bullet \to \mathcal{B}(\mathcal{C}(R)^*)$ from the multiplicative monoid of the ring $R$ onto the monoid of zero-sum sequences over the class group of $R$ also has a natural invariant theoretic interpretation. In addition to recovering the result of Benson and Nakajima on the class group $\mathcal C(\mathbb{F}[V]^G)$ (our treatment is essentially self-contained), we gain further information on the multiplicative structure of $R$, and we pose the problem to determine its characteristic (Problem \ref{characteristic-problem}). In particular, whenever we can show -- for a given ring of invariants -- that every class contains at least one prime divisor, then all results of factorization theory (obtained for Krull monoids with finite class group and prime divisors in all classes) apply to the ring of invariants. In Subsection \ref{subsec:abelian} we specialize to abelian groups whose order is not divisible by the characteristic of $\mathbb F$. The Noether number $\beta(G)$ is the supremum over all finite dimensional $G$-modules $V$ of the maximal degree of an element in a minimal homogeneous generating system of $\mathbb F[V]^G$, and the Davenport constant $\mathsf D (G)$ is the maximal length of a minimal zero-sum sequence over $G$. We start with a result on the structural connection between $\mathbb F[V]^G$ and the monoid of zero-sum sequences over $G$, that lies behind the equality $\beta(G)=\mathsf D(G)$. Clearly, the idea here is well known (as far as we know, it was first used by B. Schmid \cite{Sc91a}, see also \cite{F-M-P-T08a}). 
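Before going on, we note that the Davenport constant just defined is amenable to brute-force computation for very small groups. The following Python sketch (our illustration, not part of the development; all names are ours) searches for the longest minimal zero-sum sequence and recovers the classical values $\mathsf D(C_4)=4$ and $\mathsf D(C_2\oplus C_2)=3$.

```python
from itertools import combinations, combinations_with_replacement

def davenport(elems, add, zero, max_len=6):
    """Brute-force Davenport constant: the maximal length of a minimal
    zero-sum sequence, i.e. a sequence summing to `zero` none of whose
    proper nonempty subsequences sums to `zero`."""
    def total(seq):
        s = zero
        for g in seq:
            s = add(s, g)
        return s
    def minimal_zero_sum(seq):
        if total(seq) != zero:
            return False
        return not any(total(sub) == zero
                       for r in range(1, len(seq))
                       for sub in combinations(seq, r))
    return max(L for L in range(1, max_len + 1)
               if any(minimal_zero_sum(s)
                      for s in combinations_with_replacement(elems, L)))

# D(C_4) = 4, witnessed by the sequence (1, 1, 1, 1)
assert davenport(range(4), lambda a, b: (a + b) % 4, 0) == 4
# D(C_2 + C_2) = 3, witnessed by ((1,0), (0,1), (1,1))
klein = [(0, 0), (0, 1), (1, 0), (1, 1)]
add2 = lambda a, b: ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)
assert davenport(klein, add2, (0, 0)) == 3
```

Of course, such enumeration is hopeless beyond tiny groups; the point of the transfer machinery discussed below is precisely to replace it by structural results.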
The benefit of the detailed presentation as given in Proposition \ref{prop:bschmid} is twofold. First, the past 20 years have seen great progress in zero-sum theory (see Subsection \ref{3.D} for a sample of results) and Proposition \ref{prop:bschmid} allows us to carry over all results on the structure of (long) minimal zero-sum sequences to the structure of $G$-invariant monomials. Second, we observe that the submonoid $M^G$ of $R^\bullet$ consisting of the invariant monomials is again a Krull monoid, and restricting the transfer homomorphism $\theta\colon R^\bullet \to \mathcal{B}(\mathcal{C}(R)^*)$ (mentioned in the above paragraph) to $M^G$ we obtain essentially the canonical transfer homomorphism $M^G\to \mathcal{B}(\mathcal{C}(M^G)^*)$. This turns out to be rather close to the transfer homomorphism $\psi\colon M^G\to \mathcal{B}(\widehat G)$ into the monoid of zero-sum sequences over the character group of $G$ (see Proposition \ref{prop:bschmid}), which is responsible for the equality $\beta(G)=\mathsf D(G)$. The precise statement is given in Proposition \ref{prop:diagram}, which explains how the transfer homomorphism $\psi$ (existing only for abelian groups) relates to the more general transfer homomorphism $\theta$ from the above paragraph, which exists for an arbitrary finite group. In Proposition \ref{prop:diagram} we point out that every class of $\mathcal C ( \mathbb F[V]^G )$ contains a prime divisor, which contributes to Problem \ref{characteristic-problem}. Let now $G$ be a finite non-abelian group. Until recently the precise value of the Noether number $\beta (G)$ was known only for the dihedral groups and very few small groups (such as $A_4$). In the last couple of years the first two authors have determined the precise value of the Noether number for groups having a cyclic subgroup of index two and for non-abelian groups of order $3p$ \cite{Cz-Do14a, Cz14d, Cz-Do15a}.
In this work results on zero-sum sequences over finite abelian groups (for example, information on the structure of long minimal zero-sum sequences and on the $k$th Davenport constants) were successfully applied. Moreover, a decisive step was the introduction of the $k$th Noether numbers, a concept inspired by the $k$th Davenport constants of abelian groups. The significance of this concept is that it furnishes some reduction lemmas (listed in Subsection \ref{sec:5.1}) by which the ordinary Noether number of a group can be bounded via structural reduction in the group. The concept of the $k$th Davenport constants $\mathsf D_k (G)$ has been introduced by Halter-Koch \cite{HK92c} for abelian groups in order to study the asymptotic behavior of arithmetical counting functions in rings of integers of algebraic number fields (see \cite[Theorem 9.1.8]{Ge-HK06a}, \cite[Theorem 1]{Ra05a}). They have been further studied in \cite{De-Or-Qu01,Fr-Sc10}. In recent years the third author and Grynkiewicz \cite{Ge-Gr13a, Gr13b} studied the (small and the large) Davenport constant of non-abelian groups, and among other things determined their precise values for groups having a cyclic subgroup of index two. It can be observed that for these groups the Noether number is between the small and the large Davenport constant. This motivated a new and more abstract view of the Davenport constants, namely $k$th Davenport constants of BF-monoids (Subsection \ref{2.E}). The goal is to relate the Noether number with Davenport constants of suitable monoids as a generalization of the equation $\beta (G) = \mathsf D (G)$ in the abelian case. Indeed, the $k$th Davenport constant $\mathsf D_k (G)$ of an abelian group $G$ is recovered as our $k$th Davenport constant of the monoid $\mathcal B (G)$ of zero-sum sequences over $G$. We apply the new concept of the $k$th Davenport constants to two classes of BF-monoids.
First, to the monoid $\mathcal{B}(G,V)$ associated to a $G$-module $V$ in Subsection~\ref{5.D} (when $G$ is abelian we recover the monoid $M^G$ of $G$-invariant monomials from Subsection \ref{subsec:abelian}), whose Davenport constants provide a lower bound for the corresponding Noether numbers (see Proposition \ref{prop:module-davenport}). Second, we study the monoid of product-one sequences over finite groups (Subsections \ref{3.1} and \ref{3.C}). We derive a variety of features of the $k$th Davenport constants of the monoid of product-one sequences over $G$ and observe that they are strikingly similar to the corresponding features of the $k$th Noether numbers (see Subsection \ref{sec:5.1} for a comparison). We pose a problem on the relationship between Noether numbers and Davenport constants of non-abelian groups (Problem \ref{Noether-Davenport}) and we illustrate the efficiency of the above methods by Examples \ref{example:C_pq rtimes C_q}, \ref{S_4}, and \ref{Pauli} (appearing for the first time), where the explicit value of Noether numbers and Davenport constants of some non-abelian groups are determined. \smallskip \centerline{\it Throughout this paper, let $G$ be a finite group, $\mathbb F$ be a field, and} \centerline{\it $V$ be a finite dimensional $\mathbb F$-vector space endowed with a linear action of $G$.} \section{\bf Multiplicative Ideal Theory: Krull monoids, C-monoids, and Class Groups} \label{sec:2} We denote by $\mathbb N$ the set of positive integers, and we put $\mathbb N_0 = \mathbb N \cup \{0\}$. For every $n \in \mathbb N$, we denote by $C_n$ a cyclic group with $n$ elements. For real numbers $a, b \in \mathbb R$, we set $[a, b] = \{ x \in \mathbb Z \colon a \le x \le b \}$. If $A, B$ are sets, we write $A \subset B$ to mean that $A$ is contained in $B$ but may be equal to $B$. In Subsections \ref{2.A} -- \ref{2.D} we gather basic material on Krull monoids and C-monoids. 
In Subsection \ref{2.E} we introduce a new concept, namely Davenport constants of BF-monoids. \subsection{\bf Monoids and Domains: Ideal theoretic and divisor theoretic concepts} \label{2.A}~ Our notation and terminology follow \cite{Ge-HK06a} and \cite{HK98} (note that the monoids in \cite{HK98} do contain a zero-element, whereas the monoids in \cite{Ge-HK06a} and in the present manuscript do not contain a zero-element). By a {\it monoid}, we mean a commutative, cancellative semigroup with unit element. Then the multiplicative semigroup $R^{\bullet}=R\setminus \{0\}$ of non-zero elements of a domain is a monoid. Following the philosophy of multiplicative ideal theory we describe the arithmetic and the theory of divisorial ideals of domains by means of their multiplicative monoids. Thus we start with monoids. Let $H$ be a multiplicatively written monoid. An element $u \in H$ is called \begin{itemize} \item {\it invertible} if there is an element $v \in H$ with $u v = 1$. \item {\it irreducible} (or an {\it atom}) if $u$ is not invertible and, for all $a, b \in H$, $u = ab$ implies $a$ is invertible or $b$ is invertible. \item {\it prime} if $u$ is not invertible and, for all $a, b \in H$, $u \t ab$ implies $u \t a$ or $u \t b$. \end{itemize} We denote by $\mathcal A (H)$ the set of atoms of $H$, by $H^{\times}$ the group of invertible elements, and by $H_{{\text{\rm red}}} = \{aH^{\times} \colon a \in H \}$ the associated reduced monoid of $H$. We say that $H$ is reduced if $|H^{\times}| = 1$. We denote by $\mathsf q (H)$ a quotient group of $H$ with $H \subset \mathsf q (H)$, and for a prime element $p \in H$, let $\mathsf v_p \colon \mathsf q (H) \to \mathbb Z$ be the $p$-adic valuation. Each monoid homomorphism $\varphi \colon H \to D$ induces a group homomorphism $\mathsf q (\varphi) \colon \mathsf q (H) \to \mathsf q (D)$.
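A standard toy example separating the notions of atom and prime (our illustration, not from the text) is the Hilbert monoid $H=\{n\in\mathbb N \colon n\equiv 1 \pmod 4\}$ under multiplication: $9$, $21$ and $49$ are atoms of $H$, and $441=9\cdot 49=21\cdot 21$, yet $9$ divides neither factor $21$, so $9$ is an atom of $H$ that is not prime. This can be verified by brute force:

```python
# Hilbert monoid H = {1, 5, 9, 13, ...}: positive integers congruent
# to 1 mod 4, closed under multiplication.
def in_H(n):
    return n >= 1 and n % 4 == 1

def is_atom(n):
    # an atom of H: an element other than 1 admitting no factorization
    # n = a * b with a, b in H and both > 1
    if n == 1 or not in_H(n):
        return False
    return not any(n % a == 0 and in_H(n // a)
                   for a in range(5, n) if in_H(a))

# 9, 21, 49 are atoms of H, and 441 factors into atoms in two ways:
assert is_atom(9) and is_atom(21) and is_atom(49)
assert 9 * 49 == 21 * 21 == 441
# 9 divides the product 21 * 21 in H, but 9 divides neither factor,
# so 9 is an atom of H that is not a prime of H.
assert 441 % 9 == 0 and in_H(441 // 9) and 21 % 9 != 0
```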
For a subset $H_0 \subset H$, we denote by $[H_0] \subset H$ the submonoid generated by $H_0$, and by $\langle H_0 \rangle \le \mathsf q (H)$ the subgroup generated by $H_0$. We denote by $\widetilde H = \big\{ x \in \mathsf q (H) \, \colon \, x^n \in H \text{~for some~} n \in \mathbb N \big\}$ the \ {\it root closure} \ of $H$, and by $ \widehat H = \big\{ x \in \mathsf q (H) \, \colon \, \text{there exists~} c \in H \text{~such that~} cx^n \in H \text{~for all~} n \in \mathbb N \big\} $ the {\it complete integral closure} \ of $H$. Both $\widetilde{H}$ and $\widehat{H}$ are monoids, and we have $H \subset \widetilde H \subset \widehat H \subset \mathsf q (H)$. We say that $H$ is root closed (completely integrally closed resp.) if $H = \widetilde H$ ($H=\widehat H$ resp.). For a set $P$, we denote by $\mathcal F (P)$ the free abelian monoid with basis $P$. Then every $a \in \mathcal F (P)$ has a unique representation in the form \[ a = \prod_{p \in P} p^{\mathsf v_p (a)} \,, \ \text{where} \ \mathsf v_p (a) \in \mathbb N_0 \ \text{and} \ \mathsf v_p (a) = 0 \ \text{for almost all} \ p \in P \,. \] The monoid $H$ is said to be \begin{itemize} \item {\it atomic} if every $a \in H \setminus H^{\times}$ is a product of finitely many atoms of $H$. \item {\it factorial} if every $a \in H \setminus H^{\times}$ is a product of finitely many primes of $H$ (equivalently, $H=H^{\times} \times \mathcal F (P)$ where $P$ is a set of representatives of primes of $H$). \item {\it finitely generated} if $H = [E]$ for some finite subset $E \subset H$. \end{itemize} If $H = H^{\times} \times \mathcal F (P)$ is factorial and $a \in H$, then $ |a| = \sum_{p \in P} \mathsf v_p (a) \in \mathbb N_0 $ is called the length of $a$. If $H$ is reduced, then it is finitely generated if and only if it is atomic and $\mathcal A (H)$ is finite. Since every prime is an atom, every factorial monoid is atomic.
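The exponent-vector representation above makes divisibility in $\mathcal F(P)$ a coordinatewise comparison of valuations. A minimal computational sketch (an illustration of ours; the function names are made up), with finitely supported exponent vectors $p\mapsto \mathsf v_p(a)$ stored as counters:

```python
from collections import Counter

def divides(a, b):
    # a | b in F(P) iff v_p(a) <= v_p(b) for every p in P
    # (a missing key in a Counter has value 0)
    return all(b[p] >= v for p, v in a.items())

def length(a):
    # |a| = sum over p of the exponents v_p(a)
    return sum(a.values())

a = Counter({'p': 1, 'q': 2})      # a = p * q^2
b = Counter({'p': 3, 'q': 2, 'r': 1})  # b = p^3 * q^2 * r
assert divides(a, b) and not divides(b, a)
assert length(a) == 3 and length(b) == 6
```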
For every non-unit $a\in H$, \[ \mathsf L_H (a) = \mathsf L (a) = \{k \in \mathbb N \colon a \ \text{may be written as a product of $k$ atoms} \} \subset \mathbb N \] denotes the {\it set of lengths} of $a$. For convenience, we set $\mathsf L (a) = \{0\}$ for $a \in H^{\times}$. We say that $H$ is a BF-monoid if it is atomic and all sets of lengths are finite. A monoid homomorphism $\varphi \colon H \to D$ is said to be \begin{itemize} \item a {\it divisor homomorphism} if $\varphi (a) \t \varphi (b)$ implies that $a \t b$ for all $a, b \in H$. \item {\it cofinal} if for every $\alpha \in D$ there is an $a \in H$ such that $\alpha \t \varphi (a)$. \item a {\it divisor theory} (for $H$) if $D = \mathcal F (P)$ for some set $P$, $\varphi$ is a divisor homomorphism, and for every $p \in P$, there exists a finite nonempty subset $X \subset H$ satisfying $p = \gcd \big( \varphi (X) \big)$. \end{itemize} Obviously, every divisor theory is cofinal. Let $H \subset D$ be a submonoid. Then $H \subset D$ is called \begin{itemize} \item {\it saturated} if the embedding $H \hookrightarrow D$ is a divisor homomorphism. \item {\it divisor closed} if $a \in H$, $b \in D$ and $b \t a$ implies $b \in H$. \item {\it cofinal} if the embedding $H \hookrightarrow D$ is cofinal. \end{itemize} It is easy to verify that $H \hookrightarrow D$ is a divisor homomorphism if and only if $H = \mathsf q (H) \cap D$, and if this holds, then $H^{\times} = D^{\times} \cap H$. If $H \subset D$ is divisor closed, then $H \subset D$ is saturated. For subsets $A, B \subset \mathsf q (H)$, we denote by $(A \negthinspace : \negthinspace B) = \{ x \in \mathsf q (H) \colon x B \subset A \}$, by $A^{-1} = (H \negthinspace : \negthinspace A)$, and by $A_v = (A^{-1})^{-1}$. A subset $\mathfrak a \subset H$ is called an $s$-ideal of $H$ if $\mathfrak a H = \mathfrak a$. 
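To make the set of lengths concrete, consider zero-sum sequences over $C_3$ (the monoid $\mathcal B(G)$ mentioned in the Introduction): the sequence $1\cdot 1\cdot 1\cdot 2\cdot 2\cdot 2$ over $\mathbb Z/3\mathbb Z$ factors into minimal zero-sum sequences either as $(1+1+1)(2+2+2)$ or as $(1+2)^3$, so its set of lengths is $\{2,3\}$. A brute-force check (our illustrative sketch, not from the text):

```python
from itertools import combinations

N = 3  # work over the cyclic group C_3 = Z/3Z

def sub_multisets(seq):
    # all nonempty sub-multisets of a sorted tuple, as sorted tuples
    return {tuple(sorted(c)) for r in range(1, len(seq) + 1)
            for c in combinations(seq, r)}

def is_atom(seq):
    # a minimal zero-sum sequence: zero-sum, with no proper nonempty
    # zero-sum sub-multiset
    if sum(seq) % N != 0:
        return False
    return all(s == seq for s in sub_multisets(seq) if sum(s) % N == 0)

def remove(seq, sub):
    rest = list(seq)
    for g in sub:
        rest.remove(g)
    return tuple(sorted(rest))

def lengths(seq):
    # the set of lengths L(seq): possible numbers of atoms in a
    # factorization of seq into minimal zero-sum sequences
    seq = tuple(sorted(seq))
    if not seq:
        return {0}
    return {1 + k for a in sub_multisets(seq) if is_atom(a)
            for k in lengths(remove(seq, a))}

assert lengths((1, 1, 1, 2, 2, 2)) == {2, 3}
```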
A subset $X \subset \mathsf q (H)$ is called a fractional $v$-ideal (or a {\it fractional divisorial ideal}) if there is a $c \in H$ such that $cX \subset H$ and $X_v = X$. We denote by $\mathcal F_v (H)$ the set of all fractional $v$-ideals and by $\mathcal I_v (H)$ the set of all $v$-ideals of $H$. Furthermore, $\mathcal I_v^* (H)$ is the monoid of $v$-invertible $v$-ideals (with $v$-multiplication) and $\mathcal F_v (H)^{\times} = \mathsf q \big( \mathcal I_v^* (H) \big)$ is its quotient group of fractional invertible $v$-ideals. The monoid $H$ is completely integrally closed if and only if every non-empty $v$-ideal of $H$ is $v$-invertible, and $H$ is called $v$-noetherian if it satisfies the ACC (ascending chain condition) on $v$-ideals. If $H$ is $v$-noetherian, then $H$ is a BF-monoid. We denote by $\mathfrak X (H)$ the set of all minimal nonempty prime $s$-ideals of $H$. The map $\partial \colon H \to \mathcal I_v^* (H)$, defined by $\partial (a) = aH$ for each $a \in H$, is a cofinal divisor homomorphism. Thus, if $\mathcal H = \{aH \colon a \in H \}$ is the monoid of principal ideals of $H$, then $\mathcal H \subset \mathcal I_v^* (H)$ is saturated and cofinal. \subsection{\bf Class groups and class semigroups}~ \label{2.B} Let $\varphi \colon H \to D$ be a monoid homomorphism. The group $\mathcal C (\varphi) = \mathsf q (D)/ \mathsf q ( \varphi (H))$ is called the {\it class group} of $\varphi$. For $a \in \mathsf q (D)$, we denote by $[a]_{\varphi} = a \mathsf q ( \varphi (H)) \in \mathcal C ( \varphi)$ the class containing $a$. We use additive notation for $\mathcal C ( \varphi )$ and so $[1]_{\varphi}$ is the zero element of $\mathcal C ( \varphi )$. Suppose that $H \subset D$ and that $\varphi = (H \hookrightarrow D)$. Then $\mathcal C ( \varphi ) = \mathsf q (D)/\mathsf q (H)$, and for $a \in D$ we set $[a]_{\varphi}=[a]_{D/H} = a \mathsf q (H)$. 
Then \[ D/H = \{ [a]_{D/H} \colon a \in D\} \subset \mathcal C ( \varphi ) \] is a submonoid with quotient group $\mathsf q (D/H)=\mathcal C (\varphi)$. It is easy to check that $D/H$ is a group if and only if $H \subset D$ is cofinal. In particular, if $D/H$ is finite or if $\mathsf q (D)/\mathsf q (H)$ is a torsion group, then $D/H= \mathsf q (D)/\mathsf q (H)$. Let $H$ be a monoid. Then $\mathcal H \subset \mathcal I_v^* (H)$ is saturated and cofinal, and \[ \mathcal C_v(H) = \mathcal I_v^* (H)/\mathcal H = \mathcal F_v (H)^{\times}/ \mathsf q (\mathcal H) \] is the {\it $v$-class group} of $H$. We will also need the concept of class semigroups which are a refinement of ordinary class groups in commutative algebra. Let $D$ be a monoid and $H \subset D$ a submonoid. Two elements $y, y' \in D$ are called $H$-equivalent, if $y^{-1}H \cap D = {y'}^{-1} H \cap D$. $H$-equivalence is a congruence relation on $D$. For $y \in D$, let $[y]_H^D$ denote the congruence class of $y$, and let \[ \mathcal C (H,D) = \{ [y]_H^D \colon y \in D \} \quad \text{and} \quad \mathcal C^* (H,D) = \{ [y]_H^D \colon y \in (D \setminus D^{\times}) \cup \{1\} \}. \] Then $\mathcal C (H,D)$ is a semigroup with unit element $[1]_H^D$ (called the {\it class semigroup} of $H$ in $D$) and $\mathcal C^* (H,D) \subset \mathcal C (H,D)$ is a subsemigroup (called the {\it reduced class semigroup} of $H$ in $D$). The map \[ \theta \colon \mathcal C (H,D) \to D/H \,, \quad \text{defined by} \quad \theta ( [a]_H^D) = [a]_{D/H} \quad \text{for all} \ a \in D \,, \] is an epimorphism, and it is an isomorphism if and only if $H \subset D$ is saturated. \subsection{\bf Krull monoids and Krull domains}~ \label{2.C} \begin{theorem} \label{2.1}~ Let $H$ be a monoid. Then the following statements are equivalent{\rm \,:} \begin{itemize} \item[(a)] $H$ is $v$-noetherian and completely integrally closed, \item[(b)] $\partial \colon H \to \mathcal I_v^* (H)$ is a divisor theory. 
\item[(c)] $H$ has a divisor theory. \item[(d)] There is a divisor homomorphism $\varphi \colon H \to D$ into a factorial monoid $D$. \item[(e)] $H_{{\text{\rm red}}}$ is a saturated submonoid of a free abelian monoid. \end{itemize} \noindent \smallskip {\rm If $H$ satisfies these conditions, then $H$ is called a } Krull monoid. \end{theorem} \begin{proof} See \cite[Theorem 2.4.8]{Ge-HK06a} or \cite[Chapter 22]{HK98}. \end{proof} Let $H$ be a Krull monoid. Then $\mathcal I_v^* (H)$ is free abelian with basis $\mathfrak X (H)$. Let $\mathfrak p \in \mathfrak X (H)$. Then $\mathsf v_{\mathfrak p}$ denotes the $\mathfrak p$-adic valuation of $\mathcal F_v(H)^{\times}$. For $x \in \mathsf q (H)$, we set $\mathsf v_{\mathfrak p}(x)= \mathsf v_{\mathfrak p} (xH)$ and we call $\mathsf v_{\mathfrak p}$ the $\mathfrak p$-adic valuation of $H$. Then $\mathsf v \colon H \to \mathbb N_0^{( \mathfrak X (H))}\,, \quad \text{defined by} \quad \mathsf v (a) = \bigl( \mathsf v_\mathfrak p (a) \bigr)_{\mathfrak p \in \mathfrak X (H)}$ is a divisor theory and $H = \{ x \in \mathsf q (H) \colon \mathsf v_{\mathfrak p} (x) \ge 0 \ \text{for all} \ \mathfrak p \in \mathfrak X (H) \}$. If $\varphi \colon H \to D=\mathcal F (P)$ is a divisor theory, then there is an isomorphism $\Phi \colon \mathcal I_v^* (H) \to D$ such that $\Phi \circ \partial = \varphi$, and it induces an isomorphism $\overline{\Phi} \colon \mathcal C_v (H) \to \mathcal C (\varphi)$. Let $D= \mathcal F (P)$ be such that $H_{{\text{\rm red}}} \hookrightarrow D$ is a divisor theory. Then $D$ and $P$ are uniquely determined by $H$, \[ \mathcal C (H) = \mathcal C (H_{{\text{\rm red}}}) = D/H_{{\text{\rm red}}} \] is called the {\it $($divisor$)$ class group} of $H$, and its elements are called the classes of $H$. By definition, every class $g \in \mathcal C (H)$ is a subset of $\mathsf q (D)$ and $P \cap g$ is the set of prime divisors lying in $g$. 
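To illustrate the divisor-theoretic notions just introduced in a classical special case, consider the Hilbert monoid $H = \{a \in \mathbb N \colon a \equiv 1 \mod 4\}$ under multiplication. The embedding of $H$ into the free abelian monoid over the odd rational primes is a divisor theory, the class group is $\mathbb Z/2\mathbb Z$, and the primes $p \equiv 3 \mod 4$ lie in the nontrivial class. Accordingly, $441 = 9 \cdot 49 = 21 \cdot 21$ are two distinct factorizations into atoms of $H$, necessarily of equal length since the class group has order two. The following Python sketch checks these factorizations by brute force; it is an aside, and the helper names are ours.

```python
# Brute-force check in the Hilbert monoid H = {a in N : a % 4 == 1},
# viewed multiplicatively.  The elements 9, 21, 49 are atoms of H, and
# 441 = 9 * 49 = 21 * 21 has two distinct factorizations into atoms.

def in_H(a):
    return a % 4 == 1

def is_atom(a):
    # a != 1 in H is an atom iff it admits no factorization a = d * (a/d)
    # with both factors non-units of H
    if not in_H(a) or a == 1:
        return False
    return not any(a % d == 0 and in_H(d) and in_H(a // d)
                   for d in range(2, a))
```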
We denote by $\mathcal C (H)^* = \{ [p]_{D/H_{{\text{\rm red}}}} \colon p \in P \} \subset \mathcal C (H)$ the subset of classes containing prime divisors (for more details we refer to the discussion after Definition 2.4.9 in \cite{Ge-HK06a}). \begin{proposition} \label{prop:torsion-divtheory} Let $H$ be a Krull monoid, and let $\varphi \colon H \to D=\mathcal F (P)$ be a divisor homomorphism. \begin{enumerate} \item There is a submonoid $C_0 \subset \mathcal C(\varphi)$ and an epimorphism $C_0 \to \mathcal C_v (H)$. \smallskip \item Suppose that $H \subset D$ is saturated and that $\mathsf q (D)/\mathsf q (H)$ is a torsion group. We set $D_0 =\{ \gcd_D (X) \colon X\subset H \mbox{ finite}\}$, and for $p\in P$ define $e(p)=\min\{\mathsf{v}_p(h)\colon h\in H \ \text{with} \ \mathsf v_p (h)>0 \}$. \begin{itemize} \smallskip \item[(a)] $D_0$ is a free abelian monoid with basis $\{ p^{e(p)} \colon p\in P \}$. \smallskip \item[(b)] The embedding $H\hookrightarrow D_0$ is a divisor theory for $H$. \end{itemize} \end{enumerate} \end{proposition} \begin{proof} 1. follows from \cite[Theorem 2.4.8]{Ge-HK06a}, and 2. from \cite[Lemma 3.2]{Sc10a}. \end{proof} Let $R$ be a domain with quotient field $K$. Then $R^{\bullet}=R\setminus \{0\}$ is a monoid, and all notions defined for monoids so far will be applied to domains. To mention a couple of explicit examples, we denote by $\mathsf q (R)$ the quotient field of $R$ and we have $\mathsf q (R)=\mathsf q (R^{\bullet})\cup \{0\}$, and for the complete integral closure we have $\widehat R = \widehat{R^{\bullet}} \cup \{0\}$ (where $\widehat R$ denotes the complete integral closure of $R$ in its quotient field). We denote by $\mathfrak X (R)$ the set of all minimal nonzero prime ideals of $R$, by $\mathcal I_v (R)$ the set of divisorial ideals of $R$, by $\mathcal I_v^* (R)$ the set of $v$-invertible divisorial ideals of $R$, and by $\mathcal F_v (R)$ the set of fractional divisorial ideals of $R$.
Equipped with $v$-multiplication, $\mathcal F_v (R)$ is a semigroup, and the map \[ \iota^{\bullet} \colon \mathcal F_v (R) \to \mathcal F_v (R^{\bullet}) \,, \quad \text{defined by} \quad \mathfrak a \mapsto \mathfrak a \setminus \{0\} \,, \] is a semigroup isomorphism mapping $\mathcal I_v (R)$ onto $\mathcal I_v (R^{\bullet})$ and fractional principal ideals of $R$ onto fractional principal ideals of $R^{\bullet}$. Thus $R$ satisfies the ACC on divisorial ideals of $R$ if and only if $R^{\bullet}$ satisfies the ACC on divisorial ideals of $R^{\bullet}$. Furthermore, $R$ is completely integrally closed if and only if $R^{\bullet}$ is completely integrally closed. A domain $R$ is a Krull domain if it is completely integrally closed and satisfies the ACC on divisorial ideals of $R$, and thus $R$ is a Krull domain if and only if $R^{\bullet}$ is a Krull monoid. If $R$ is a Krull domain, we set $\mathcal C (R) = \mathcal C (R^{\bullet})$. The group $\mathcal F_v (R)^{\times}$ is the group of $v$-invertible fractional ideals and the set $\mathcal I_v^* (R)=\mathcal F_v(R)^{\times}\cap \mathcal I_v (R)$ of all $v$-invertible $v$-ideals of $R$ is a monoid with quotient group $\mathcal F_v (R)^{\times}$. The embedding of the non-zero principal ideals $\mathcal H (R) \hookrightarrow \mathcal I_v^* (R)$ is a cofinal divisor homomorphism, and the factor group \[ \mathcal C_v(R) = \mathcal F_v(R)^\times / \{aR \colon a \in K^\times\} = \mathcal I_v^*(R)/\mathcal H(R) \] is called the {\it $v$-class group} of $R$. The map $\iota^\bullet$ induces isomorphisms $\mathcal F_v(R)^\times \overset\sim\to \mathcal F_v(R^\bullet)^\times$, \ $\mathcal I_v^*(R) \, \overset\sim\to \, \mathcal I_v^*(R^\bullet)$, and $\mathcal C_v(R) \, \overset\sim\to \, \mathcal C_v(R^\bullet)$, and in the sequel we shall identify these monoids and groups. 
The above correspondence between domains and their monoids of non-zero elements can be extended to commutative rings with zero-divisors and their monoids of regular elements (\cite[Theorem 3.5]{Ge-Ra-Re15c}), and there is an analogue for prime Goldie rings (\cite[Proposition 5.1]{Ge13a}). \begin{examples}~ \label{2.3} 1. (Domains) As mentioned above, the multiplicative monoid $R^{\bullet}$ of a domain $R$ is a Krull monoid if and only if $R$ is a Krull domain. Thus Property (a) in Theorem \ref{2.1} implies that a noetherian domain is Krull if and only if it is normal (i.e. integrally closed in its field of fractions). In particular, rings of invariants are Krull, as we shall see in Theorem \ref{4.1}. \smallskip 2. (Submonoids of domains) Regular congruence submonoids of Krull domains are Krull (\cite[Proposition 2.11.6]{Ge-HK06a}). \smallskip 3. (Monoids of modules) Let $R$ be a (possibly noncommutative) ring and let $\mathcal C$ be a class of finitely generated (right) $R$-modules which is closed under finite direct sums, direct summands, and isomorphisms. Then the set $\mathcal V (\mathcal C)$ of isomorphism classes of modules is a commutative semigroup with operation induced by the direct sum. If the endomorphism ring of each module in $\mathcal C$ is semilocal, then $\mathcal V ( \mathcal C)$ is a Krull monoid (\cite[Theorem 3.4]{Fa02}). For more information we refer to \cite{Fa06a,Fa12a, Ba-Ge14b}. \smallskip 4. (Monoids of product-one sequences) In Theorem \ref{3.2} we will characterize the monoids of product-one sequences which are Krull. \end{examples} \subsection{\bf C-monoids and C-domains}~ \label{2.D} A monoid $H$ is called a C-{\it monoid} if it is a submonoid of a factorial monoid $F$ such that $H \cap F^{\times} = H^{\times}$ and the reduced class semigroup $\mathcal C^* (H,F)$ is finite. A domain $R$ is called a C-{\it domain} if $R^{\bullet}$ is a C-monoid.
\begin{proposition} \label{2.4} Let $F$ be a factorial monoid and $H \subset F$ a submonoid such that $H \cap F^{\times} = H^{\times}$. \begin{enumerate} \item If $H$ is a {\rm C}-monoid, then $H$ is $v$-noetherian with $(H \negthinspace : \negthinspace \widehat H) \ne \emptyset$, and the complete integral closure $\widehat H$ is a Krull monoid with finite class group $\mathcal C (\widehat H)$. \smallskip \item Suppose that $F/F^{\times}$ is finitely generated, say $F = F^\times \times [p_1, \ldots, p_s]$ with pairwise non-associated prime elements $p_1, \ldots, p_s$. Then the following statements are equivalent{\rm \,:} \begin{enumerate} \item[(a)] $H$ is a \ $\text{\rm C}$-monoid defined in $F$. \smallskip \item[(b)] There exist some $\alpha \in \mathbb N$ and a subgroup $W \le F^\times$ such that $(F^\times \negthinspace : \negthinspace W) \t \alpha$, \ $W(H \setminus H^\times) \subset H$, and for all $j \in [1,s]$ and $a \in p^\alpha_j F$ we have $a \in H$ if and only if \ $p^\alpha_j a \in H$. \end{enumerate} \end{enumerate} \end{proposition} \begin{proof} For 1., see \cite[Theorems 2.9.11 and 2.9.13]{Ge-HK06a} and for 2. see \cite[Theorem 2.9.7]{Ge-HK06a}. \end{proof} \begin{examples}~ \label{2.5} 1. (Krull monoids) A Krull monoid is a C-monoid if and only if the class group is finite (\cite[Theorem 2.9.12]{Ge-HK06a}). \smallskip 2. (Domains) Let $R$ be a domain. Necessary conditions for $R$ to be a C-domain are given in Proposition \ref{2.4}. Thus suppose that $R$ is a Mori domain (i.e., a $v$-noetherian domain) with nonzero conductor $\mathfrak f = (R \negthinspace : \negthinspace \widehat R)$ and suppose that $\mathcal C (\widehat R)$ is finite. If $R/\mathfrak f$ is finite, then $R$ is a C-domain by \cite[Theorem 2.11.9]{Ge-HK06a}. This result generalizes to rings with zero-divisors (\cite{Ge-Ra-Re15c}), and in special cases we know that $R$ is a C-domain if and only if $R/\mathfrak f$ is finite (\cite{Re13a}). \smallskip 3.
(Congruence monoids) Let $R$ be a Krull domain with finite class group $\mathcal C (R)$ and $H \subset R$ a congruence monoid such that $R/\mathfrak f$ is finite where $\mathfrak f$ is an ideal of definition for $H$. If $R$ is noetherian or $\mathfrak f$ is divisorial, then $H$ is a C-monoid (\cite[Theorem 2.11.8]{Ge-HK06a}). For a survey on arithmetical congruence monoids see \cite{Ba-Ch14a}. \smallskip 4. In Subsection \ref{3.A} we shall prove that monoids of product-one sequences are C-monoids (Theorem \ref{3.2}), and we will meet C-monoids again in Proposition \ref{B(G,V)-is-C} dealing with the monoid $\mathcal{B}(G,V)$. \end{examples} Finitely generated monoids allow simple characterizations when they are Krull or when they are C-monoids. We summarize these characterizations in the next proposition. \begin{proposition} \label{finitelygenerated} Let $H$ be a monoid such that $H_{\mathrm{red}}$ is finitely generated. \begin{enumerate} \item Then $H$ is $v$-noetherian with $(H \negthinspace : \negthinspace \widehat H) \ne \emptyset$, $\widetilde H = \widehat H$, $\widetilde H/H^{\times}$ is finitely generated, and $\widehat H$ is a Krull monoid. In particular, $H$ is a Krull monoid if and only if $H = \widehat H$. \smallskip \item $H$ is a {\rm C}-monoid if and only if \ $\mathcal C ( \widehat H)$ is finite. \smallskip \item Suppose that $H$ is a submonoid of a factorial monoid $F = F^{\times} \times \mathcal F (P)$. Then the following statements are equivalent{\rm \,:} \begin{enumerate} \item $H$ is a \text{\rm C}-monoid defined in $F$, $F^{\times}/H^{\times}$ is a torsion group, and for every $p \in P$ there is an $a \in H$ such that $\mathsf v_p (a) > 0$. \smallskip \item For every $a \in F$, there is an $n_a \in \mathbb N$ with $a^{n_a} \in H$. \end{enumerate} If $(a)$ and $(b)$ hold, then $P$ is finite and $\widetilde H = \widehat H = \mathsf q (H) \cap F \subset F$ is saturated and cofinal. \end{enumerate} \end{proposition} \begin{proof} 1.
follows from \cite[2.7.9 - 2.7.13]{Ge-HK06a}, and 2. follows from \cite[Proposition 4.8]{Ge-Ha08b}. \smallskip 3. (a) $\Rightarrow$\, (b) \ For every $p \in P$, we set $d_p = \gcd \big( \mathsf v_p (H) \big)$, and by assumption we have $d_p > 0$. We set $P_0 = \{p^{d_p} \colon p \in P \}$ and $F_0 = F^{\times} \times \mathcal F (P_0)$. By \cite[Theorem 2.9.11]{Ge-HK06a}, $H$ is a C-monoid defined in $F_0$ and there is a divisor theory $\partial \colon \widehat H \to \mathcal F (P_0)$. By construction of $F_0$, it is sufficient to prove the assertion for all $a \in F_0$. Since $F^{\times}/H^{\times}$ is a torsion group, it is sufficient to prove the assertion for all $a \in \mathcal F (P_0)$. Let $a \in \mathcal F (P_0)$. Since $\mathcal C (\widehat H)$ is finite, there is an $n_a' \in \mathbb N$ such that $a^{n_a'} \in \widehat H$. Since $\widehat H = \widetilde H$, there is an $n_a'' \in \mathbb N$ such that $(a^{n_a'})^{n_a''} \in H$. (b) $\Rightarrow$\, (a) \ For every $p \in P$ there is an $n_p \in \mathbb N$ such that $p^{n_p} \in H$ whence $\mathsf v_p (p^{n_p})=n_p > 0$. Clearly, we have $\widehat H \subset \widehat F = F$ and hence $\widehat H \subset \mathsf q (\widehat H) \cap F = \mathsf q (H) \cap F$. Since for each $a \in F$ there is an $n_a \in \mathbb N$ with $a^{n_a} \in H$, we infer that $\mathsf q (H) \cap F \subset \widetilde H = \widehat H$ and hence $\widehat H = \mathsf q (H) \cap F$. Furthermore, $H \subset F$ and $\widehat H \subset F$ are cofinal, and $\mathsf q (F)/\mathsf q (H) = F/H$ is a torsion group. Clearly, $\mathsf q (H) \cap F \subset F$ is saturated, and thus $\widehat H$ is Krull. Since ${\widehat H}^{\times} = \widehat H \cap F^{\times}$ and $H^{\times} = \widehat H^{\times} \cap H$, it follows that $H^{\times} = H \cap F^{\times}$ and then we obtain that $F^{\times}/H^{\times}$ is a torsion group.
By 1., $\widehat H/H^{\times}$ is finitely generated, say $\widehat H/H^{\times} = \{u_1H^{\times}, \ldots, u_n H^{\times} \}$, and set $P_0 = \{ p \in P \colon p \ \text{divides} \ u_1 \cdot \ldots \cdot u_n \ \text{in} \ F \}$. Then $P_0$ is finite, and we assert that $P_0=P$. If there were some $p \in P \setminus P_0$, then there would be an $n_p \in \mathbb N$ such that $p^{n_p} \in H$, and hence $p^{n_p}H^{\times}$ would be a product of $u_1H^{\times}, \ldots , u_nH^{\times}$, a contradiction. Therefore $P$ is finite, $F/F^{\times}$ is a finitely generated monoid, $\mathsf q (F)/F^{\times}$ is a finitely generated group, and therefore $\mathsf q (F)/\mathsf q (H)F^{\times}$ is a finitely generated torsion group and thus finite. Since $\varphi \colon \widehat H \to F \to F/F^{\times}$ is a divisor homomorphism and $\mathcal C (\varphi) = \mathsf q (F)/\mathsf q (H)F^{\times}$, Proposition \ref{prop:torsion-divtheory}.1 implies that $\mathcal C (\widehat H)$ is an epimorphic image of a submonoid of $\mathsf q (F)/\mathsf q (H)F^{\times}$ and thus $\mathcal C ( \widehat H)$ is finite. Thus 2. implies that $H$ is a {\rm C}-monoid (indeed, Property 2.(b) of Proposition \ref{2.4} holds and hence $H$ is a C-monoid defined in $F$). \end{proof} \subsection{\bf Davenport constants of BF-monoids}~ \label{2.E} Let $H$ be a BF-monoid. For every $k \in \mathbb N$, we study the sets \[ \mathcal M_k (H) = \{ a \in H \colon \max \mathsf L (a) \le k \} \quad \text{and} \quad \overline{\mathcal M}_k (H) = \{ a \in H \colon \max \mathsf L (a)=k \} \,. \] A monoid homomorphism $| \cdot | \colon H \to (\mathbb N_0,+)$ will be called a {\it degree function} on $H$. In this section we study abstract monoids having a degree function. The results will be applied in particular to monoids of product-one sequences and to monoids $\mathcal B (G,V)$ (see Subsections \ref{3.C} and \ref{5.D}).
In all our applications the monoid $H$ will be a submonoid of a factorial monoid $F$ and if not stated otherwise the degree function on $H$ will be the restriction of the length function on $F$. If $\theta \colon H \to B$ is a homomorphism and $H$ and $B$ have degree functions, then we say that $\theta$ is {\emph{degree preserving}} if $|a|_H = |\theta (a)|_B$ for all $a \in H$. Suppose we are given a degree function on $H$. Then, for every $k \in \mathbb N$, \[ \mathsf D_k (H) = \sup \{ |a| \colon a \in \mathcal M_k (H) \} \in \mathbb N_0 \cup \{\infty\} \] is called the {\it large $k$th Davenport constant} of $H$ (with respect to $|\cdot|_H$). Clearly, $\mathcal M_1 (H) = \mathcal A (H) \cup H^{\times}$. We call $\mathsf D (H) = \mathsf D_1 (H) = \sup \{ |a| \colon a \in \mathcal A (H) \} \in \mathbb N_0 \cup \{\infty\}$ the {\it Davenport constant} of $H$. For every $k \in \mathbb N$, we have $\mathcal M_k (H) \subset \mathcal M_{k+1} (H)$, $\mathsf D_k (H) \le \mathsf D_{k+1} (H)$, and $\mathsf D_k (H) \le k \mathsf D (H)$. Furthermore, we have $|u|=0$ for every unit $u\in H^\times$. Therefore the degree function on $H$ induces automatically a degree function $|\cdot|:H_{\mathrm{red}}\to (\mathbb N_0,+)$, and so the $k$th Davenport constant of $H_{\mathrm{red}}$ is defined. Obviously we have $\mathsf D_k(H)=\mathsf D_k(H_{\mathrm{red}})$. Let $\mathsf e (H)$ denote the smallest $\ell \in \mathbb N_0\cup \{\infty\}$ with the following property: \begin{itemize} \item[] There is a $K \in \mathbb N_0$ such that every $a \in H$ with $|a| \ge K$ is divisible by an element $b \in H\setminus H^\times $ with $|b| \le \ell$. \end{itemize} Clearly, $\mathsf e (H) \le \mathsf D (H)$.
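A toy example may help to digest these constants (it is an aside, with ad hoc helper names, and is not needed later). Let $F = (\mathbb N_0, +)$, which is factorial with the single prime $1$, so that $|a| = a$, and let $H = [2,3] \subset F$. Then $\mathcal A(H) = \{2,3\}$, $\mathsf D(H) = 3$, and $\mathsf e(H) = 2$ (every $a \in H$ with $a \ge 4$ has the non-unit divisor $2$). A short computation gives $\mathsf D_k(H) = 2k+1 = k\,\mathsf e(H) + 1$ for all $k \in \mathbb N$, in line with the linear behaviour established in Proposition \ref{2.7}. The following Python sketch confirms this by dynamic programming.

```python
# Toy example (helper names are ours): F = (N_0, +) with |a| = a, and
# H = [2, 3].  maxL[a] records max L(a), the maximal number of atoms in a
# factorization of a, or None when a does not lie in H.

def max_lengths(atoms, bound):
    maxL = [None] * (bound + 1)
    maxL[0] = 0                      # the unit 0 has the empty factorization
    for a in range(1, bound + 1):
        lengths = [maxL[a - u] + 1 for u in atoms
                   if a - u >= 0 and maxL[a - u] is not None]
        maxL[a] = max(lengths) if lengths else None
    return maxL

maxL = max_lengths([2, 3], 60)

def D_k(k):
    # D_k(H) = max{ |a| : a in H, max L(a) <= k }; finite in this example
    return max(a for a, l in enumerate(maxL) if l is not None and l <= k)
```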
\begin{enumerate} \item If $H_{\mathrm{red}}$ is finitely generated, then the sets $\mathcal M_k (H_{\mathrm{red}})$ are finite and $\mathsf D_k (H) < \infty$ for every $k \in \mathbb N$. \smallskip \item If \ $\mathsf D (H) < \infty$, then there exist constants $D_H, K_H \in \mathbb N_0$ such that $\mathsf D_k (H) = k \mathsf e (H) + D_H$ for all $k \ge K_H$. \smallskip \item If \ $\mathsf D (H) < \infty$, then the map $\mathbb N \to \mathbb Q$, $k\mapsto \frac{\mathsf D_k(H)}{k}$ is non-increasing. \smallskip \item Suppose that $H$ has a prime element. Then \[ \mathsf D_k (H) \ = \ \max \bigl\{ |a| \colon a \in \overline{\mathcal M}_k (H) \bigr\}\le \ k \mathsf D(H) \] and \[ k \mathsf D(H) \ = \ \max \bigl\{ |a| \colon a \in H, \ \min \mathsf L (a) \le k \bigr\} = \ \max \bigl\{ |a| \colon a \in H, \ k \in \mathsf L (a) \bigr\}\,. \] \end{enumerate} \end{proposition} \begin{proof} 1. Suppose that $H_{\mathrm{red}}$ is finitely generated. Then $\mathcal A (H_{\mathrm{red}})$ is finite whence $\mathcal M_k (H_{\mathrm{red}})$ is finite for every $k \in \mathbb N$. It follows that $\mathsf D (H) < \infty$ and $\mathsf D_k (H) \le k \mathsf D (H) < \infty$ for all $k \in \mathbb N$. \smallskip 2. Suppose that $\mathsf D (H) < \infty$ and note that $\mathsf e (H) \le \mathsf D (H)$. Let $\mathsf f (H) \in \mathbb N_0$ be the smallest $K \in \mathbb N_0$ such that every $a \in H$ with $|a| \ge K$ is divisible by an element $b \in H \setminus H^{\times}$ with $|b| \le \mathsf e (H)$. We define $A = \{ a \in \mathcal A (H) \colon |a| = \mathsf e (H) \}$. Let $k \in \mathbb N$ and continue with the following assertion. \begin{enumerate} \item[{\bf A.}\,] There exist $a_1, \ldots, a_k \in A$ such that $a_1 \ldots a_k \in \mathcal M_k (H)$. In particular, $\mathsf D_k (H) \ge |a_1 \ldots a_k| = k \mathsf e (H)$. \end{enumerate} {\it Proof of \,{\bf A}}.\, Assume to the contrary that for all $a_1, \ldots, a_k \in A$ we have $\max \mathsf L (a_1 \ldots a_k) > k$.
Thus the product $a_1 \ldots a_k$ is divisible by an atom $u \in \mathcal A (H)$ with $|u| < \mathsf e (H)$. We set $K = \mathsf f (H) + (k-1) \mathsf e (H)$ and choose $a \in H$ with $|a| \ge K$. Then $a$ can be written in the form $a = a_1 \ldots a_k b$ where $a_1, \ldots, a_k, b \in H$ and $|a_i| \le \mathsf e (H) $ for all $i \in [1,k]$. If there is some $i \in [1,k]$ with $|a_i| < \mathsf e (H)$, then $a_i$ is a divisor of $a$ with $|a_i| < \mathsf e (H)$. Otherwise, $a_1, \ldots, a_k \in A$ and by our assumption the product $a_1 \ldots a_k$ and hence $a$ has a divisor of degree strictly smaller than $\mathsf e (H)$. This is a contradiction to the definition of $\mathsf e (H)$. \qed{(Proof of {\bf A})} Now let $k \ge \mathsf f (H)/\mathsf e (H) - 1$. Then {\bf A} implies that $\mathsf D_k (H) + \mathsf e (H) \ge (k+1)\mathsf e (H) \ge \mathsf f (H)$. Let $a \in H$ with $|a| > \mathsf D_k (H) + \mathsf e (H)$. Then, by definition of $\mathsf f (H)$, there are $ b, c \in H$ such that $a = b c$ with $|c| \le \mathsf e (H)$ and hence $|b| > \mathsf D_k (H)$. This implies that $\max \mathsf L (b) > k$, whence $\max \mathsf L (a) > k+1$ and $a \notin \mathcal M_{k+1} (H)$. Therefore we obtain that $\mathsf D_{k+1} (H) \le \mathsf D_k (H) + \mathsf e (H)$ and thus \[ 0 \le \mathsf D_{k+1} (H) - (k+1) \mathsf e (H) \le \mathsf D_k (H) - k \mathsf e (H) \,. \] Since a non-increasing sequence of non-negative integers stabilizes, the assertion follows. \smallskip 3. Suppose that $\mathsf D (H) < \infty$. Let $k \in \mathbb N$, $a \in \mathcal M_{k+1} (H)$ with $|a|=\mathsf D_{k+1} (H)$, and set $l = \max \mathsf L (a)$. Then $l \le k+1$. If $l \le k$, then $a \in \mathcal M_k (H)$ and $\mathsf D_{k+1} (H) \ge \mathsf D_k (H) \ge |a|=\mathsf D_{k+1} (H)$ whence $\mathsf D_k (H) =\mathsf D_{k+1}(H)$. Suppose that $l=k+1$. 
We set $a = a_1 \ldots a_{k+1}$ with $a_1, \ldots, a_{k+1} \in \mathcal A (H)$ and $|a_1| \ge \ldots \ge |a_{k+1}|$ whence $|a_{k+1}| \le (|a_1|+\ldots+|a_k|)/k$. It follows that \[ \frac{\mathsf D_{k+1}(H)}{k+1} = \frac{|a_1|+\dots+|a_{k+1}|}{k+1}\le \frac{|a_1|+\dots+|a_k|}{k}\le \frac{\mathsf D_k(H)}{k} \,, \] where the last inequality holds because $a_1 \ldots a_k \in \mathcal M_k (H)$. \smallskip 4. Let $p \in H$ be a prime element. We assert that \[ \mathsf D_k(H) \le \max \bigl\{ |a| \colon a \in H, \ \max \mathsf L(a)=k \bigr\} \,. \tag{$*$} \] Indeed, if $a \in \mathcal M_k(H)$ and $\max \mathsf L(a) =l \le k$, then $a p^{k-l} \in \mathcal M_k(H)$ and \[ |a| \le |a p^{k-l}| \le \max \bigl\{ |a| \colon a \in H, \ \max \mathsf L(a)=k \bigr\}\,, \] and hence $(*)$ follows. Next we assert that \[ \max \bigl\{ |a| \colon a \in H, \ \min \mathsf L(a) \le k \bigr\} \le k \mathsf D(H) \,. \tag{$**$} \] Let $a \in H$ with $\min \mathsf L(a) =l \le k$, say $a = u_1 \ldots u_l$, where $u_1,\ldots, u_l \in \mathcal A(H)$. Then $|a| = |u_1| + \ldots + |u_l| \le l \mathsf D(H) \le k \mathsf D(H)$, and thus $(**)$ follows. Using $(*)$ and $(**)$ we infer that \[ \begin{aligned} \mathsf D_k (H) & \le \max \bigl\{ |a| \colon a \in H, \ \max \mathsf L(a) = k \bigr\} \ \le \ \max \bigl\{ |a| \colon a \in H, \ \max \mathsf L(a) \le k \bigr\} \\ & = \ \mathsf D_k(H) \ \le \ \max \bigl\{ |a| \colon a \in H, \ \min \mathsf L(a) \le k \bigr\} \end{aligned} \] and that \[ k \mathsf D(H) = \max \bigl\{ |a| \colon a \in H, \ k \in \mathsf L(a) \bigr\} \le \max \bigl\{ |a| \colon a \in H, \ \min \mathsf L(a) \le k \bigr\} \le k \mathsf D (H) \,. \] \end{proof} Let $F$ be a factorial monoid and $H \subset F$ a submonoid such that $H^{\times} = H \cap F^{\times}$. Then $H$ is a BF-monoid by \cite[Corollary 1.3.3]{Ge-HK06a}. For $k \in \mathbb N$, let $\mathcal M_k^* (H)$ denote the set of all $a \in F$ such that $a$ is not divisible by a product of $k$ non-units of $H$. 
The restriction of the usual length function $|\cdot |\colon F\to \mathbb N_0$ on $F$ (introduced in Subsection~\ref{2.A}) gives a degree function on $H$. We define the {\it small $k$th Davenport constant} $\mathsf d_k (H)$ as \begin{equation} \label{defining-d_k} \mathsf d_k (H) = \sup \{ |a| \colon a \in \mathcal M_k^* (H) \} \in \mathbb N_0 \cup \{\infty\}. \end{equation} In other words, $1+\mathsf d_k (H)$ is the smallest integer $\ell \in \mathbb N$ such that every $a \in F $ of length $|a| \ge \ell$ is divisible by a product of $k$ non-units of $H$. We call $\mathsf d(H)=\mathsf d_1(H)$ the \emph{small Davenport constant} of $H$. Clearly we have $\mathcal{M}_k^*(H)\subset \mathcal{M}_{k+1}^*(H)$, whence $\mathsf d_k(H)\le \mathsf d_{k+1}(H)$. Furthermore, let $\eta (H)$ denote the smallest integer $\ell \in \mathbb N\cup \{\infty\}$ such that every $a \in F$ with $|a| \ge \ell$ has a divisor $b \in H \setminus H^{\times}$ with $|b| \in [1, \mathsf e (H)]$. For $p\in \mathcal{A}(F)$ denote by $o_p$ the smallest $\ell \in \mathbb N\cup \{\infty\}$ such that $p^{\ell}\in H$. Clearly, we have $o_p\le \eta(H)$ for all $p\in \mathcal{A}(F)$. \begin{proposition} \label{2.8} Let $F=F^{\times} \times \mathcal F (P)$ be a factorial monoid and $H \subset F$ a submonoid such that $H^{\times} = H \cap F^{\times}$, and let $k \in \mathbb N$. \begin{enumerate} \item If for every $a \in F$ there is a prime $p \in F$ such that $ap \in H$, then $1+\mathsf d_k (H) \le \mathsf D_k (H)$. \smallskip \item Suppose that $H_{\mathrm{red}}$ is finitely generated and that for every $a \in F$ there is an $n_a \in \mathbb N$ such that $a^{n_a} \in H$. Then $H$ is a \text{\rm C}-monoid and we have \begin{itemize} \item[(a)] \ $\mathsf e(H)=\max\{o_p\colon p\in P \}$ and $\eta(H)<\infty$. \item[(b)] \ $\mathsf d_k(H)+1\ge k\mathsf e(H)$ and there exist constants $d_H\in \mathbb Z_{\ge -1}, k_H \in \mathbb N_0$ such that $\mathsf d_k (H) = k \mathsf e (H) + d_H$ for all $k \ge k_H$.
\end{itemize} \end{enumerate} \end{proposition} \begin{proof} 1. Let $a \in \mathcal M_k^* (H)$ such that $|a|=\mathsf d_k(H)$. We choose a prime $p \in F$ such that $a p \in H$. Take any factorization $ap = u_1 \ldots u_{\ell}$ where $u_i\in \mathcal{A}(H)$. We may assume that $p \t u_1$ in $F$. Then $u_2 \ldots u_{\ell} \t a$ in $F$ and hence $\ell - 1 < k$. Thus it follows that $ap \in \mathcal M_k (H)$ and $ \mathsf D_k (H)\ge |ap|=|a|+1\ge \mathsf d_k (H)+1$. 2.(a) By Proposition~\ref{finitelygenerated}.3, $H$ is a C-monoid, $P$ is finite and hence $\mathsf e (H) < \infty$. If $p \in P$, then $p^{o_p} \in \mathcal A (H)$ and by the minimality of $o_p$, $p^{o_p}$ does not have a divisor $b \in H \setminus H^{\times}$ such that $|b| < o_p$. Thus it follows that $\mathsf e(H)\ge \max\{o_p\colon p\in P\}$. For the reverse inequality, note that by Proposition~\ref{2.4}.2 there exists an $\alpha\in \mathbb N$ such that for all $p\in P$ and all $a\in p^{\alpha}F$ we have $a\in H$ if and only if $p^{\alpha}a\in H$. Since any multiple of $\alpha$ has the same property, we may assume that $\alpha$ is divisible by $o_p$ for all $p \in P$. Let $b\in H$ with $|b|>|P|(2\alpha-1)$. Then there exists a $p\in P$ such that $b\in p^{2\alpha}F\cap H$. Hence $b$ is divisible in $H$ by $p^{\alpha}$, implying in turn that $p^{o_p} \in \mathcal{A}(H)$ divides $b$ in $H$. Therefore we obtain that $\mathsf e(H)\le \max\{o_p \colon p\in P\}$. If $a \in F$ with $|a|\ge \sum_{p\in P}(o_p-1)$, then there is a $p\in P$ such that $p^{o_p}$ divides $a$ in $F$, and thus $\eta(H)\le 1+ \sum_{p\in P}(o_p-1)$. 2.(b) Let $p\in P$ with $o_p=\mathsf e(H)$. Then $p^{ko_p-1}\in \mathcal{M}_k^*(H)$ and $|p^{ko_p-1}|=k\mathsf e(H)-1$, showing the inequality $\mathsf d_k(H)+1\ge k\mathsf e(H)$ for all $k\in \mathbb N$. Now let $k \in \mathbb N$ be such that $1+\mathsf d_k (H)+\mathsf e (H) \ge \eta (H)$, and let $a \in F$ with $|a| \ge \mathsf d_k (H) +\mathsf e (H)+1$.
Then, by definition of $\eta (H)$, there are $b \in F$ and $c \in H \setminus H^{\times}$ such that $a=bc$ with $|c| \le \mathsf e (H)$ and $|b| > \mathsf d_k (H)$. This implies that $b$ is divisible by a product of $k$ non-units of $H$ whence $a$ is divisible by a product of $k+1$ non-units of $H$. Therefore it follows that $1+\mathsf d_{k+1}(H) \le \mathsf d_k (H) + \mathsf e (H) + 1$ and hence \[ 0 \le \mathsf d_{k+1}(H) - k \mathsf e (H) \le \mathsf d_k (H) - (k-1)\mathsf e (H) \quad \text{for all sufficiently large} \quad k \,. \] Since a non-increasing sequence of non-negative integers stabilizes, the assertion follows. \end{proof} \section{\bf Arithmetic Combinatorics: Zero-Sum Results with a focus on Davenport constants} \label{sec:3} This section is devoted to Zero-Sum Theory, a vivid subfield of Arithmetic Combinatorics (see \cite{Ga-Ge06b, Ge09a, Gr13a}). In Subsection \ref{3.A} we give an algebraic study of the monoid of product-one sequences over finite but not necessarily abelian groups. In Subsection \ref{3.B} we put together well-known material on transfer homomorphisms used in Subsections \ref{4.A} and \ref{subsec:abelian}. In Subsections \ref{3.C} and \ref{3.D} we consider the $k$th Davenport constants of finite groups. In particular, we gather results which will be needed in Subsection \ref{5.A} and results having relevance in invariant theory by Proposition \ref{prop:bschmid}. \subsection{\bf The monoid of product-one sequences}~ \label{3.A} Let $G_0 \subset G$ be a subset and let $G' = [G,G]=\langle g^{-1}h^{-1}g h \colon g, h \in G\rangle$ denote the commutator subgroup of $G$. A {\it sequence} over $G_0$ means a finite sequence of terms from $G_0$ which is unordered and repetition of terms is allowed, and it will be considered as an element of the free abelian monoid $\mathcal F (G_0)$. 
In order to distinguish between the group operation in $G$ and the operation in $\mathcal F (G_0)$, we use the symbol $\boldsymbol{\cdot}$ for the multiplication in $\mathcal F (G_0)$, hence $\mathcal F (G_0) = \big( \mathcal F (G_0), \boldsymbol{\cdot} \big)$ ---this coincides with the convention in the monographs \cite{Ge-HK06a, Gr13a}---and we denote multiplication in $G$ by juxtaposition of elements. To clarify this, if $S_1, S_2 \in \mathcal F (G_0)$ and $g_1, g_2 \in G_0$, then $S_1 \boldsymbol{\cdot} S_2 \in \mathcal F (G_0)$ has length $|S_1|+|S_2|$, $S_1 \boldsymbol{\cdot} g_1 \in \mathcal F (G_0)$ has length $|S_1|+1$, $g_1 \boldsymbol{\cdot} g_2 \in \mathcal F (G_0)$ is a sequence of length $2$, but $g_1 g_2$ is an element of $G$. Furthermore, in order to avoid confusion between exponentiation in $G$ and exponentiation in $\mathcal F (G_0)$, we use brackets for the exponentiation in $\mathcal F (G_0)$. So for $g \in G_0$, $S \in \mathcal F (G_0)$, and $k \in \mathbb N_0$, we have \[ g^{[k]}=\underset{k}{\underbrace{g\boldsymbol{\cdot}\ldots\boldsymbol{\cdot} g}}\in \mathcal F (G) \quad \text{with} \quad |g^{[k]}| = k \,, \quad \ \text{and} \quad S^{[k]}=\underset{k}{\underbrace{S\boldsymbol{\cdot}\ldots\boldsymbol{\cdot} S}}\in \mathcal F (G) \,. \] Now let \[ S = g_1 \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} g_{\ell} = \prod_{g \in G_0} g^{[\mathsf v_g (S)]} \,, \] be a sequence over $G_0$ (in this notation, we tacitly assume that $\ell \in \mathbb N_0$ and $g_1, \ldots, g_{\ell} \in G_0$). Then $|S|=\ell = 0$ if and only if $S=1_{\mathcal F ( G_0)}$ is the identity element in $\mathcal F (G_0)$, and then $S$ will also be called the {\it trivial sequence}. The elements in $\mathcal F (G_0) \setminus \{1_{\mathcal F ( G_0)} \}$ are called {\it nontrivial sequences}. We use all notions of divisibility theory in general free abelian monoids. Thus, for an element $g \in G_0$, we refer to $\mathsf v_g (S)$ as the {\it multiplicity} of $g$ in $S$.
A divisor $T$ of $S$ will also be called a subsequence of $S$. We call $\supp (S) = \{ g_1, \ldots, g_{\ell}\} \subset G_0$ the {\it support} of $S$. When $G$ is written multiplicatively (with unit element $1_G \in G$), we use \[ \pi (S) = \{ g_{\tau (1)} \ldots g_{\tau (\ell)} \in G \colon \tau\mbox{ a permutation of $[1,\ell]$} \} \subset G \] to denote the {\it set of products} of $S$ (if $|S|= 0$, we use the convention that $\pi (S) = \{1_G \}$). Clearly, $\pi (S)$ is contained in a $G'$-coset. When $G$ is written additively with commutative operation, we likewise let $$\sigma(S)=g_1+\ldots+g_\ell\in G$$ denote the \emph{sum} of $S$. Furthermore, we denote by \[ \Sigma (S) = \{\sigma(T) \colon T \t S\;\text{and} \; 1 \ne T \} \subset G \quad\text{and}\quad \Pi (S) = \underset{1 \ne T}{\bigcup_{T \t S}} {\pi}(T) \subset G \,, \] the {\it subsequence sums } and \emph{subsequence products} of $S$. The sequence $S$ is called \begin{itemize} \item a {\it product-one sequence} if $1_G \in \pi (S)$, \item {\it product-one free} if $1_G \notin \Pi (S)$. \end{itemize} Every map of finite groups $\varphi \colon G_1 \to G_2$ extends to a homomorphism $\varphi \colon \mathcal F (G_1) \to \mathcal F (G_2)$ where $\varphi (S) = \varphi (g_1) \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} \varphi (g_{\ell})$. If $\varphi$ is a group homomorphism, then $\varphi (S)$ is a product-one sequence if and only if $\pi (S) \cap \Ker ( \varphi) \ne \emptyset$. We denote by \[ \mathcal B (G_0) = \{ S \in \mathcal F (G_0) \colon 1_G \in \pi (S) \} \] the set of all product-one sequences over $G_0$, and clearly $\mathcal B (G_0) \subset \mathcal F (G_0)$ is a submonoid. We will use all concepts introduced in Subsection \ref{2.E} for the monoid $\mathcal B (G_0)$ with the degree function stemming from the length function on the free abelian monoid $\mathcal F (G_0)$. For all notations $*(H)$ introduced for a monoid $H$ we write -- as usual -- $*(G_0)$ instead of $*(\mathcal B (G_0))$. 
In particular, for $k \in \mathbb N$, we set $\mathcal M_k (G_0)= \mathcal M_k ( \mathcal B (G_0))$, $\mathsf D_k (G_0) = \mathsf D_k (\mathcal B (G_0))$, $\eta (G_0)=\eta (\mathcal B (G_0))$, $\mathsf e (G_0) = \mathsf e ( \mathcal B (G_0))$, and so on. By Proposition~\ref{2.8}.2(a), $\mathsf e (G_0) = \max \{ \ord (g) \colon g \in G_0 \}$. Note that $\mathcal M_1^* (G_0)$ is the set of all product-one free sequences over $G_0$. In particular, \[ \mathsf D (G_0) = \sup \{ |S| \colon S \in \mathcal A (G_0) \} \in \mathbb N \cup \{\infty\} \] is the {\it large Davenport constant} of $G_0$, and \[ \mathsf d (G_0) = \sup \{ |S| \colon S \in \mathcal F (G_0) \ \text{is product-one free} \} \in \mathbb N_0 \cup \{\infty\} \] is the {\it small Davenport constant} of $G_0$. Their study will be the focus of the Subsections \ref{3.C} and \ref{3.D}. \begin{lemma} \label{3.1} Let $G_0 \subset G$ be a subset. \begin{enumerate} \item $\mathcal B (G_0) \subset \mathcal F (G_0)$ is a reduced finitely generated submonoid, $\mathcal A (G_0)$ is finite, and $\mathsf D (G_0) \le |G|$. Furthermore, $\mathcal M_k (G_0)$ is finite and $\mathsf D_k (G_0) < \infty$ for all $k \in \mathbb N$. \item Let $S \in \mathcal F (G)$ be product-one free. \begin{enumerate} \item If $g_0 \in \pi (S)$, then $g_0^{-1} \boldsymbol{\cdot} S \in \mathcal A (G)$. In particular, $\mathsf d (G)+1 \le \mathsf D (G)$. \item If $|S|=\mathsf d (G)$, then $\Pi (S) = G \setminus \{1_G\}$ and hence \newline $\mathsf d (G) = \max \{ |S| \colon S \in \mathcal F (G) \ \text{with} \ \Pi (S) = G \setminus \{1_G\} \}$. \end{enumerate} \item If $G$ is cyclic, then $\mathsf d (G)+1 = \mathsf D (G) = |G|$. \end{enumerate} \end{lemma} \begin{proof} 1. We assert that for every $U \in \mathcal A (G)$ we have $|U| \le |G|$. Then $\mathcal A (G_0) \subset \mathcal A (G)$ is finite and $\mathsf D (G_0) \le \mathsf D (G) \le |G|$. 
As already mentioned, $\mathcal B (G_0) \subset \mathcal F (G_0)$ is a submonoid, and clearly $\mathcal B (G_0)^{\times} = \{1_{\mathcal F (G_0)}\}$. Since $\mathcal F (G_0)$ is factorial and $\mathcal B(G_0)^{\times} = \mathcal B (G_0) \cap \mathcal F (G_0)^{\times}$, $\mathcal B (G_0)$ is atomic by \cite[Corollary 1.3.3]{Ge-HK06a}. This means that $\mathcal B(G_0) = [\mathcal A (G_0) \cup \mathcal B (G_0)^{\times}]$, and thus $\mathcal B (G_0)$ is finitely generated. Since $\mathcal B (G_0)$ is reduced and finitely generated, the sets $\mathcal M_k (G_0)$ are finite by Proposition \ref{2.7}. Now let $U \in \mathcal B (G)$, say $U = g_1 \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} g_{\ell}$ with $g_1 g_2 \ldots g_{\ell} = 1_G$. We suppose that $\ell > |G|$ and show that $U \notin \mathcal A (G)$. Consider the set \[ M = \{ g_1 g_2 \ldots g_i \colon i \in [1,\ell] \} \,. \] Since $\ell > |G|$, there are $i, j \in [1,\ell]$ with $i < j$ and $g_1 \ldots g_i = g_1 \ldots g_j$. Then $g_{i+1} \ldots g_j=1_G$ and thus $g_1 \ldots g_i g_{j+1} \ldots g_{\ell}=1_G$ which implies that $U$ is the product of two nontrivial product-one subsequences. \smallskip 2.(a) If $g_0 \in \pi (S)$, then $S$ can be written as $S = g_1 \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} g_{\ell}$ such that $g_0 = g_1 \ldots g_{\ell}$, which implies that $g_0^{-1} \boldsymbol{\cdot} g_1 \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} g_{\ell} \in \mathcal A (G)$. 2.(b) If $S$ is product-one free with $|S| = \mathsf d (G)$, and if there would be an $h \in G \setminus \{ \Pi (S) \cup \{1_G\} \}$, then $T = h^{-1} \boldsymbol{\cdot} S$ would be product-one free of length $|T| = |S|+1 > \mathsf d (G)$, a contradiction. Thus every product-one free sequence $S$ of length $|S| = \mathsf d (G)$ satisfies $\Pi (S) = G \setminus \{1_G\}$. If $S$ is a sequence with $\Pi (S) = G \setminus \{1_G\}$, then $S$ is product-one free and hence $|S| \le \mathsf d (G)$. \smallskip 3. 
Clearly, the assertion holds for $|G|=1$. Suppose that $G$ is cyclic of order $n \ge 2$, and let $g \in G$ with $\ord (g) = n$. Then $g^{[n-1]}$ is product-one free, and thus 1. and 2. imply that $n \le 1+ \mathsf d (G) \le \mathsf D (G) \le n$. \end{proof} The next result gathers the algebraic properties of monoids of product-one sequences and highlights the difference between the abelian and the non-abelian case. \begin{theorem} \label{3.2} Let $G_0 \subset G$ be a subset and let $G'$ denote the commutator subgroup of $\langle G_0 \rangle$. \begin{enumerate} \item $\mathcal B (G_0) \subset \mathcal F (G_0)$ is cofinal and $\mathcal B (G_0)$ is a finitely generated {\rm C}-monoid. $\widetilde{\mathcal B (G_0)}=\widehat{\mathcal B (G_0)}$ is a finitely generated Krull monoid, the embedding $\widehat{\mathcal B (G_0)} \hookrightarrow \mathcal F (G_0)$ is a cofinal divisor homomorphism with class group $\mathcal F (G_0)/\mathcal B (G_0)$, and the map \[ \begin{aligned} \Phi \colon \mathcal F (G_0)/ \mathcal B (G_0) & \quad \longrightarrow \quad \langle G_0 \rangle / G' \\ [S]_{\mathcal F (G_0)/\mathcal B (G_0)} & \quad \longmapsto \quad g G' \quad \text{for any} \ g \in \pi (S) \end{aligned} \] is a group epimorphism. Suppose that $G_0=G$. Then $\Phi$ is an isomorphism, every class of $\mathcal C ( \widehat{\mathcal B (G)})$ contains a prime divisor, and if $|G|\ne 2$, then $\widehat{\mathcal B (G)} \hookrightarrow \mathcal F (G)$ is a divisor theory. \smallskip \item The following statements are equivalent{\rm \,:} \begin{enumerate} \smallskip \item[(a)] $\mathcal B (G_0)$ is a Krull monoid. \smallskip \item[(b)] $\mathcal B (G_0)$ is root closed. \smallskip \item[(c)] $\mathcal B (G_0) \subset \mathcal F (G_0)$ is saturated. \end{enumerate} \smallskip \item $\mathcal B (G)$ is a Krull monoid if and only if $G$ is abelian. \smallskip \item $\mathcal B (G)$ is factorial if and only if $|G| \le 2$. \end{enumerate} \end{theorem} \begin{proof} 1. 
$\mathcal B (G_0)$ is finitely generated by Lemma \ref{3.1}. If $n = \lcm \{ \ord (g) \colon g \in G_0 \}$, then $S^{[n]} \in \mathcal B (G_0)$ for each $S \in \mathcal F (G_0)$. Thus $\mathcal B (G_0) \subset \mathcal F (G_0)$ and $\widehat{\mathcal B (G_0)} \hookrightarrow \mathcal F (G_0)$ are cofinal, $\mathcal F (G_0)/ \mathcal B (G_0)$ is a group and \[ \mathcal F (G_0)/ \mathcal B (G_0) = \mathsf q \big( \mathcal F (G_0) \big) / \mathsf q \big( \mathcal B (G_0) \big) = \mathsf q \big( \mathcal F (G_0) \big) / \mathsf q \big( \widehat{\mathcal B (G_0)} \big) \] is the class group of the embedding $\widehat{\mathcal B (G_0)} \hookrightarrow \mathcal F (G_0)$. All statements on the structure of $\mathcal B (G_0)$ and $\widehat{\mathcal B (G_0)}$ follow from Proposition \ref{finitelygenerated}.3, and it remains to show the assertions on $\Phi$. Let $S, S' \in \mathcal F (G_0)$, $g \in \pi (S), g' \in \pi (S')$, and $B \in \mathcal B (G_0)$. Then $\pi (S) \subset gG'$, $\pi (S') \subset g'G'$, $\pi (B) \subset G'$, and $\pi (S \boldsymbol{\cdot} B) \subset gG'$. We use the abbreviation $[S]=[S]_{\mathcal F (G_0)/\mathcal B (G_0)}$, and note that $[S]=[S']$ if and only if there are $C, C' \in \mathcal B (G_0)$ such that $S \boldsymbol{\cdot} C = S' \boldsymbol{\cdot} C'$. In order to show that $\Phi$ is well-defined, suppose that $[S]=[S']$ and that $S \boldsymbol{\cdot} C= S' \boldsymbol{\cdot} C'$ with $C, C' \in \mathcal B (G_0)$. Then $\pi (S \boldsymbol{\cdot} C)=\pi ( S' \boldsymbol{\cdot} C') \subset gG' \cap g'G'$ and hence $gG'=g'G'$. In order to show that $\Phi$ is surjective, let $g \in \langle G_0 \rangle$ be given. Clearly, there is an $S \in \mathcal F (G_0)$ such that $g \in \pi (S)$ whence $\Phi ( [S]) = gG'$. Suppose that $G_0=G$. First we show that $\Phi$ is injective. Let $S, S' \in \mathcal F (G)$ with $g \in \pi (S)$, $g' \in \pi (S')$ such that $gG'=g'G'$.
Then there are $k \in \mathbb N$, $a_1, b_1, \ldots, a_k, b_k \in G$ such that \[ g {g'}^{-1} = \prod_{i=1}^k (a_i^{-1}b_i^{-1}a_ib_i) \,. \] We define $T= \prod_{i=1}^k (a_i^{-1} \boldsymbol{\cdot} b_i^{-1} \boldsymbol{\cdot} a_i \boldsymbol{\cdot} b_i )$ and obtain that \[ S \boldsymbol{\cdot} (S' \boldsymbol{\cdot} g^{-1} \boldsymbol{\cdot} T) = S' \boldsymbol{\cdot} (S \boldsymbol{\cdot} g^{-1} \boldsymbol{\cdot} T) \in \mathcal F (G) \,. \] Since $1 \in \pi (T)$ and $g {g'}^{-1} \in \pi (T)$, it follows that $1 \in \pi ( S' \boldsymbol{\cdot} g^{-1} \boldsymbol{\cdot} T)$ and $1 \in \pi (S \boldsymbol{\cdot} g^{-1} \boldsymbol{\cdot} T)$ which implies that $[S]=[S']$. If $|G| \le 2$, then 4. will show that $\mathcal B (G)$ is factorial and clearly the trivial class contains a prime divisor. Suppose that $|G| \ge 3$. In order to show that $\widehat{\mathcal B (G)} \hookrightarrow \mathcal F (G)$ is a divisor theory, let $g \in G \setminus \{1_G\}$ be given. Then there is an $h \in G \setminus \{g^{-1}, 1_G \}$, $U = g \boldsymbol{\cdot} g^{-1} \in \mathcal A (G) \subset \widehat{\mathcal B (G)}$, $U' = g \boldsymbol{\cdot} h \boldsymbol{\cdot} (h^{-1}g^{-1}) \in \mathcal A (G) \subset \widehat{\mathcal B (G)}$, and $g = \gcd_{\mathcal F (G)} (U, U')$. Thus $\widehat{\mathcal B (G)} \hookrightarrow \mathcal F (G)$ is a divisor theory. Let $S \in \mathcal F (G)$ with $g \in \pi (S)$. Then $g \in \mathcal F (G)$ is a prime divisor and we show that $[g]=[S]$. Indeed, if $g=1_G$, then $S \in \mathcal B (G)$, $1_G \in \mathcal B (G)$, $S \boldsymbol{\cdot} 1_G = g \boldsymbol{\cdot} S$ whence $[g]=[S]$. If $\ord (g)=n \ge 2$, then $g^{[n]} \in \mathcal B (G)$, $S \boldsymbol{\cdot} g^{[n-1]} \in \mathcal B (G)$, $S \boldsymbol{\cdot} g^{[n]} = g \boldsymbol{\cdot} S \boldsymbol{\cdot} g^{[n-1]}$ whence $[S]=[g]$. \smallskip 2. (a) $\Rightarrow$\, (b) \ Every Krull monoid is completely integrally closed and hence root closed. 
(b) \,$\Rightarrow$\, (c) \ Let $S, T \in \mathcal B (G_0)$ with $T \t S$ in $\mathcal F (G_0)$, say $S = T \boldsymbol{\cdot} U$ where $U = g_1 \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} g_{\ell} \in \mathcal F (G_0)$. If $n = \lcm \big(\ord (g_1), \ldots, \ord (g_{\ell}) \big)$, then $(T^{[-1]} \boldsymbol{\cdot} S)^{[n]} = U^{[n]} \in \mathcal B (G_0)$. Since $\mathcal B (G_0)$ is root closed, this implies that $U = T^{[-1]} \boldsymbol{\cdot} S \in \mathcal B (G_0)$ and hence $T \t S$ in $\mathcal B (G_0)$. (c) \,$\Rightarrow$\, (a) \ Since $\mathcal F (G_0)$ is free abelian, $\mathcal B (G_0)$ is Krull by Theorem \ref{2.1}. \smallskip 3. If $G$ is abelian, then it is obvious that $\mathcal B (G) \subset \mathcal F (G)$ is saturated, and thus $\mathcal B (G)$ is a Krull monoid by 2. Suppose that $G$ is not abelian. Then there are $g, h \in G$ with $g h \ne h g$, whence $g h g^{-1} \ne h$, $S = g \boldsymbol{\cdot} h \boldsymbol{\cdot} g^{-1} \boldsymbol{\cdot} (g h g^{-1} )^{-1} \in \mathcal B (G)$, $T = g \boldsymbol{\cdot} g^{-1} \in \mathcal B (G)$ divides $S$ in $\mathcal F (G)$ but $T^{[-1]} \boldsymbol{\cdot} S = h \boldsymbol{\cdot} (g h g^{-1})^{-1}$ is not a product-one sequence. Thus $\mathcal B (G) \subset \mathcal F (G)$ is not saturated and hence $\mathcal B (G)$ is not Krull by 2. \smallskip 4. If $G = \{0\}$, then $\mathcal B (G)=\mathcal F (G)$ is factorial. If $G = \{0,g\}$, then $\mathcal A (G) = \{0, g^{[2]} \}$, each atom is a prime, and $\mathcal B (G)$ is factorial. Conversely, suppose that $\mathcal B (G)$ is factorial. Then $\mathcal B (G)$ is a Krull monoid by \cite[Corollary 2.3.13]{Ge-HK06a}, and hence $G$ is abelian by 3. Suppose that $|G| \ge 3$. We show that $\mathcal B (G)$ is not factorial. If there is an element $g \in G$ with $\ord (g)=n \ge 3$, then $U=g^{[n]}, -U= (-g)^{[n]}, W= (-g)\boldsymbol{\cdot} g \in \mathcal A (G)$, and $U \boldsymbol{\cdot} (-U) = W^{[n]}$. Suppose there is no $g \in G$ with $\ord (g) \ge 3$.
Then there are $e_1, e_2 \in G$ with $\ord (e_1)=\ord (e_2)=2$ and $e_1+e_2\ne 0$. Then $U = e_1 \boldsymbol{\cdot} e_2 \boldsymbol{\cdot} (e_1+e_2) , W_1=e_1^{[2]}, W_2=e_2^{[2]}, W_0 = (e_1+e_2)^{[2]} \in \mathcal A (G)$, and $U^{[2]}= W_0 \boldsymbol{\cdot} W_1 \boldsymbol{\cdot} W_2$. \end{proof} For a subset $G_0 \subset G$, the monoid $\mathcal B (G_0)$ may be Krull, it may be seminormal without being Krull, and it may fail to be seminormal. We provide examples for these situations. \begin{proposition} \label{3.4} Let $G_0 \subset G$ be a subset satisfying the following property {\bf P}{\rm \,:} \begin{itemize} \smallskip \item[{\bf P.}] For each two elements $g, h \in G_0$, $\langle h \rangle \subset \langle g, h \rangle$ is normal or $\langle g \rangle \subset \langle g, h \rangle$ is normal. \end{itemize} \smallskip Then $\mathcal B (G_0)$ is a Krull monoid if and only if $\langle G_0 \rangle$ is abelian. \end{proposition} \begin{proof} If $\langle G_0 \rangle$ is abelian, then it is obvious that $\mathcal B (G_0) \subset \mathcal F (G_0)$ is saturated, and thus $\mathcal B (G_0)$ is Krull by Theorem \ref{3.2}.2. Conversely, suppose that $\mathcal B (G_0)$ is Krull and that $G_0$ satisfies Property {\bf P}. In order to show that $\langle G_0 \rangle$ is abelian, it is sufficient to prove that $g h = hg$ for each two elements $g, h \in G_0$. Let $g, h \in G_0$ be given such that $\langle h \rangle \subset \langle g, h \rangle$ is normal, $\ord (g)=m$, $\ord (h)=n$, and assume to the contrary that $ghg^{-1} \ne h$. Since $g \langle h \rangle g^{-1} = \langle h \rangle$, it follows that $ghg^{-1} = h^{\nu}$ for some $\nu \in [2, n-1]$. Thus $g h g^{m-1}h^{n-\nu}=1$ and $S = g \boldsymbol{\cdot} h \boldsymbol{\cdot} g^{[m-1]} \boldsymbol{\cdot} h^{[n-\nu]} \in \mathcal B (G_0)$. Clearly, $T = g^{[m]} \in \mathcal B (G_0)$ but $S \boldsymbol{\cdot} T^{[-1]} = h^{[n-\nu+1]} \notin \mathcal B (G_0)$. Thus $\mathcal B (G_0) \subset \mathcal F (G_0)$ is not saturated, a contradiction.
\end{proof} \begin{proposition} \label{3.5} Let $G = D_{2n}$ be the dihedral group, say $G = \langle a, b \rangle =$ \newline $\{1, a, \ldots, a^{n-1}, b, ab, \ldots, a^{n-1}b \}$, where $\ord (a) = n \ge 2$, $\ord (b) = 2$, and set $G_0 = \{ab, b\}$. Then $\mathcal B (G_0)$ is a Krull monoid if and only if $n$ is even. \end{proposition} \begin{proof} Clearly, we have $\ord (ab)=\ord(b)=2$ and $\langle G_0 \rangle = G$. Suppose that $n$ is odd and consider the sequence $S = (ab)^{[n]} \boldsymbol{\cdot} b^{[n]}$. Since $\big( (ab) b \big)^n = 1$, it follows that $S$ is a product-one sequence. Obviously, $S_1 = (ab)^{[n-1]} \boldsymbol{\cdot} b^{[n-1]} \in \mathcal B (G_0)$ and $S_2 = (ab) \boldsymbol{\cdot} b \notin \mathcal B (G_0)$. Since $S = S_1 \boldsymbol{\cdot} S_2$, it follows that $\mathcal B (G_0) \subset \mathcal F (G_0)$ is not saturated, and hence $\mathcal B (G_0)$ is not Krull by Theorem \ref{3.2}.2. Suppose that $n$ is even. Then $\mathcal A (G_0) = \{ (ab)^{[2]}, b^{[2]} \}$ and $\mathcal B (G_0) = \{ (ab)^{[\ell]} \boldsymbol{\cdot} b^{[m]} \colon \ell, m \in \mathbb N_0 \ \text{even} \}$. This description of $\mathcal B (G_0)$ implies immediately that $\mathcal B (G_0) \subset \mathcal F (G_0)$ is saturated, and hence $\mathcal B (G_0)$ is Krull by Theorem \ref{3.2}.2. \end{proof} \begin{remark} ({\bf Seminormality of $\mathcal B (G_0)$}) \label{3.3} A monoid $H$ is called seminormal if for all $x \in \mathsf q (H)$ with $x^2, x^3 \in H$ it follows that $x \in H$. Thus, by definition, every root closed monoid is seminormal. \smallskip 1. Let $n \equiv 3 \mod 4$ and $G = D_{2n}$ the dihedral group, say $G = \langle a, b \rangle =$ \newline $ \{1, a, \ldots, a^{n-1}, b, ab, \ldots, a^{n-1}b \}$, where $\ord (a) = n$, $\ord (b) = 2$, and \[ a^k b a^l b = a^{k-l} \quad \text{for all} \ k, l \in \mathbb Z \,. \] We consider the sequence \[ S = a^{\big[ \frac{n-1}{2}\big]} \boldsymbol{\cdot} b^{[2]} \in \mathcal F (G) \,. 
\] Then \[ S^{[2]} = \big( a^{\big[ \frac{n-1}{2}\big]} \boldsymbol{\cdot} b \boldsymbol{\cdot} a^{\big[ \frac{n-1}{2}\big]} \boldsymbol{\cdot} b \big) \boldsymbol{\cdot} ( b \boldsymbol{\cdot} b ) \ \text{and} \ S^{[3]} = a^{[n]} \boldsymbol{\cdot} \big( a^{\big[ \frac{n-3}{4}\big]} \boldsymbol{\cdot} b \boldsymbol{\cdot} a^{\big[ \frac{n-3}{4}\big]} \boldsymbol{\cdot} b \big) \boldsymbol{\cdot} b^{[4]} \] are both in $\mathcal B (G)$ whence $S \in \mathsf q \big( \mathcal B (G) \big)$, but obviously $S \notin \mathcal B (G)$. Thus $\mathcal B (\{a,b\})$ and $\mathcal B (G)$ are not seminormal. \smallskip 2. Let $G = H_8 = \{E, I, J, K, -E, -I, -J, -K \}$ be the quaternion group with the relations \[ IJ = -JI = K, \ JK = -KJ = I , \quad \text{and} \quad KI = - I K = J \,, \] and set $G_0 = \{I, J\}$. By Theorem \ref{3.2}, $\mathcal B (G)$ is not Krull and by Proposition \ref{3.4}, $\mathcal B (G_0)$ is not Krull. However, we assert that $\mathcal B (G_0)$ is seminormal. First, we are going to derive an explicit description of $\mathcal B (G_0)$. Since $E = (-E)(-E) = (K K)(I I) = (IJ)(IJ)(II)$, it follows that $U = I^{[4]}\boldsymbol{\cdot} J^{[2]} \in \mathcal B (G_0)$. Assume that $U = U_1 \boldsymbol{\cdot} U_2$ with $U_1, U_2 \in \mathcal A (G_0)$ and $|U_1| \le |U_2|$. Then $|U_1| \in \{2,3\}$, but $U$ does not have a subsequence with product one and length two or three. Thus $U \in \mathcal A (G_0)$ and similarly we obtain that $I^{[2]} \boldsymbol{\cdot} J^{[4]} \in \mathcal A (G_0)$. Since $\mathsf D (G_0) \le \mathsf D (G)=6$, it is easy to check that \[ \mathcal A (G_0) = \{ I^{[4]}, J^{[4]}, I^{[2]} \boldsymbol{\cdot} J^{[2]}, I^{[4]}\boldsymbol{\cdot} J^{[2]}, I^{[2]} \boldsymbol{\cdot} J^{[4]} \} \,. \] This implies that \[ \mathcal B (G_0) = \{ I^{[k]} \boldsymbol{\cdot} J^{[l]} \colon k \equiv l \equiv 0 \mod 4 \ \text{or} \ k,l \in \mathbb N \ \text{are both even} \} \,.
\] In order to show that $\mathcal B (G_0)$ is seminormal, let $x \in \mathsf q \big( \mathcal B (G_0) \big)$ be given such that $x^{[2]}, x^{[3]} \in \mathcal B (G_0)$. We have to show that $x \in \mathcal B (G_0)$. Since $x^{[2]}, x^{[3]} \in \mathcal B (G_0) \subset \mathcal F (G_0)$ and $\mathcal F (G_0)$ is seminormal, it follows that $x \in \mathcal F (G_0)$. If $x = I^{[k]}$ with $k \in \mathbb N_0$, then $I^{[3k]} \in \mathcal B (G_0)$ implies that $4 \t 3k$, hence $4 \t k$, and thus $x \in \mathcal B (G_0)$. Similarly, if $x = J^{[l]}$ with $l \in \mathbb N_0$, then $x \in \mathcal B (G_0)$. It remains to consider the case $x = I^{[k]} \boldsymbol{\cdot} J^{[l]}$ with $k, l \in \mathbb N$. Since $x^{[3]} = I^{[3k]} \boldsymbol{\cdot} J^{[3l]} \in \mathcal B (G_0)$, it follows that $k,l$ are both even, and thus $x \in \mathcal B (G_0)$. Therefore $\mathcal B (G_0)$ is seminormal. \end{remark} \subsection{\bf Transfer Homomorphisms}~ \label{3.B} A well-established strategy for investigating the arithmetic of a given monoid $H$ is to construct a transfer homomorphism $\theta \colon H \to B$, where $B$ is a simpler monoid than $H$ and the transfer homomorphism $\theta$ allows one to shift arithmetical results from $B$ back to the (original, more complicated) monoid $H$. We will use transfer homomorphisms in Section \ref{sec:4} in order to show that properties of the monoid of $G$-invariant monomials can be studied in a monoid of zero-sum sequences (see Propositions \ref{prop:bschmid} and \ref{prop:diagram}). \begin{definition} \label{3.6} A monoid homomorphism \ $\theta \colon H \to B$ \ is called a \ {\it transfer homomorphism} \ if it has the following properties: \smallskip \begin{enumerate} \item[{\bf (T\,1)\,}] $B = \theta(H) B^\times$ \ and \ $\theta ^{-1} (B^\times) = H^\times$.
\smallskip \item[{\bf (T\,2)\,}] If $u \in H$, \ $b,\,c \in B$ \ and \ $\theta (u) = bc$, then there exist \ $v,\,w \in H$ \ such that \ $u = vw$, \ $\theta (v) B^{\times} = b B^{\times}$ \ and \ $\theta (w) B^{\times} = c B^{\times}$. \end{enumerate} \end{definition} We will use the simple fact that, if $\theta \colon H \to B$ and $\theta' \colon B \to B'$ are transfer homomorphisms, then their composition $\theta' \circ \theta \colon H \to B'$ is a transfer homomorphism too. The next proposition summarizes key properties of transfer homomorphisms. \begin{proposition} \label{3.7} Let $\theta \colon H \to B$ be a transfer homomorphism and $a \in H$. \begin{enumerate} \item $a$ is an atom of $H$ if and only if $\theta (a)$ is an atom of $B$. \smallskip \item $\mathsf L_H (a) = \mathsf L_B \big( \theta (a) \big)$, whence $\theta \big( \mathcal M_k (H) \big) = \mathcal M_k ( B )$ and $\theta^{-1} \big( \mathcal M_k ( B ) \big) = \mathcal M_k (H)$. \smallskip \item If $\theta$ is degree preserving, then $\mathsf D_k (H) = \mathsf D_k (B)$ for all $k \in \mathbb N$. \end{enumerate} \end{proposition} \begin{proof} 1. and 2. follow from \cite[Proposition 3.2.3]{Ge-HK06a}. In order to prove 3., note that for all $k \in \mathbb N$ we have \[ \begin{aligned} \mathsf D_k (H) & = \sup \{ |a|_H \colon a \in \mathcal M_k (H) \} = \sup \{ |\theta (a)|_B \colon \theta (a) \in \mathcal M_k (B) \} \\ & = \sup \{ |b|_B \colon b \in \mathcal M_k (B) \} = \mathsf D_k (B) \,. \end{aligned} \] \end{proof} The first examples of transfer homomorphisms in the literature start from a Krull monoid and map onto its associated monoid of zero-sum sequences, which is a Krull monoid having a combinatorial flavor. These ideas were generalized widely, and there are transfer homomorphisms from weakly Krull monoids to (simpler) weakly Krull monoids (having a combinatorial flavor) and the same is true for C-monoids.
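Before turning to the first transfer result, it may help to see the combinatorial target object computationally. The following Python sketch (a purely illustrative brute-force computation; the function names are ours and are not taken from the cited literature) enumerates the atoms of the monoid $\mathcal B (\mathbb Z_n)$ of zero-sum sequences over a small cyclic group and recovers the value $\mathsf D (\mathbb Z_n) = n$ of Lemma \ref{3.1}.3 by exhaustive search.

```python
# Illustrative brute-force computation (not from the cited sources):
# atoms of the monoid B(Z_n) of zero-sum sequences over the cyclic
# group Z_n, and the large Davenport constant D(Z_n) = n.
from itertools import combinations, combinations_with_replacement

def is_atom(seq, n):
    """Check that seq (a multiset of elements of Z_n, given as a tuple)
    is an atom of B(Z_n): its sum is 0 mod n, but no proper nontrivial
    subsequence has sum 0 mod n."""
    if not seq or sum(seq) % n != 0:
        return False
    idx = range(len(seq))
    for k in range(1, len(seq)):
        if any(sum(seq[i] for i in c) % n == 0 for c in combinations(idx, k)):
            return False
    return True

def davenport(n):
    """D(Z_n): the maximal length of an atom, by exhaustive search.
    Atoms have length at most n (proof of Lemma 3.1.1), so searching
    up to length n + 1 certifies the maximum."""
    return max(len(seq)
               for ell in range(1, n + 2)
               for seq in combinations_with_replacement(range(n), ell)
               if is_atom(seq, n))
```

Since the property of being an atom depends only on the underlying multiset, it suffices to enumerate multisets; for $n \le 6$ the search space is tiny.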
\begin{proposition} \label{3.8} Let $H$ be a Krull monoid, $\varphi \colon H \to \mathcal F (P)$ be a cofinal divisor homomorphism with class group $G = \mathcal C ( \varphi)$, and let $G^* \subset G$ denote the set of classes containing prime divisors. Let $\widetilde{\theta} \colon \mathcal F (P) \to \mathcal F (G^*)$ denote the unique homomorphism defined by $\widetilde \theta (p) = [p]$ for all $p \in P$, and set $\theta = \widetilde{\theta} \circ \varphi \colon H \to \mathcal B (G^*)$. \begin{enumerate} \item $\theta$ is a transfer homomorphism. \item For $a \in H$, we set $|a|=|\varphi (a)|$ and for $S \in \mathcal B (G^*)$ we set $|S|=|S|_{\mathcal F (G^*)}$. Then $|a|=|\theta (a)|$ for all $a \in H$, $\theta (\mathcal M_k^* (H)) = \mathcal M_k^* (G^*)$ and $\theta^{-1} ( \mathcal M_k^* (G^*))=\mathcal M_k^* (H)$ for all $k \in \mathbb N$. Furthermore, $\mathsf e (H) = \mathsf e (G^*)$, $\eta (H) = \eta (G^*)$, and $\mathsf D_k (H) = \mathsf D_k (G^*)$ for all $k \in \mathbb N$. \end{enumerate} \end{proposition} \begin{proof} 1. follows from \cite[Section 3.4]{Ge-HK06a}. By definition, we have $|a|=|\theta (a)|$ for all $a \in H$. Thus the assertions on $\mathsf D_k (H)$ follow from Proposition \ref{2.7}, and the remaining statements can be derived in a similar way. \end{proof} The above transfer homomorphism $\theta \colon H \to \mathcal B (G^*)$ constitutes the link between the arithmetic of Krull monoids on the one side and zero-sum theory on the other side. In this way methods from Arithmetic Combinatorics can be used to obtain precise results for arithmetical invariants describing the arithmetic of $H$. For an overview of this interplay see \cite{Ge09a}. There is a variety of transfer homomorphisms from monoids of zero-sum sequences to monoids of zero-sum sequences in order to simplify specific structural features of the involved subsets of groups. 
Below we present a simple example of such a transfer homomorphism which we will meet again in Proposition \ref{prop:diagram} (for more of this nature we refer to \cite{Sc10a} and to \cite[Theorem 6.7.11]{Ge-HK06a}). Let $G$ be abelian and let $G_0 \subset G$ be a subset. For $g \in G_0$ we define \[ e (G_0, g) = \gcd \big( \{ \mathsf v_g (B) \colon B \in \mathcal B (G_0) \} \big) \,, \] and it is easy to check that (for details see \cite[Lemma 3.4]{Ge-Zh15a}) \[ \begin{aligned} & e (G_0, g) = \gcd \big( \{ \mathsf v_g (A) \colon A \in \mathcal A (G_0) \} \big)\\ =& \min \big( \{ \mathsf v_g (A) \colon \mathsf v_g (A)>0, A \in \mathcal A (G_0) \} \big) = \min \big( \{ \mathsf v_g (B) \colon \mathsf v_g (B) > 0, B \in \mathcal B (G_0) \} \big) \\ =& \min \big( \{ k \in \mathbb N \colon kg \in \langle G_0 \setminus \{g\} \rangle \} \big)=\gcd \big( \{ k \in \mathbb N \colon kg \in \langle G_0 \setminus \{g\} \rangle \} \big) \,. \end{aligned} \] \begin{proposition} \label{3.9} Let $G$ be abelian and $G_0, G_1, G_2 \subset G$ be subsets such that $G_0 = G_1 \uplus G_2$. For $g \in G_0$ we set $e (g) = e (G_0, g)$ and we define $G_0^* = \{ e(g) g \colon g \in G_1 \} \cup G_2$. Then the map \[ \begin{aligned} \theta \colon \mathcal B (G_0) & \quad \longrightarrow \quad \mathcal B (G_0^*) \\ B = \prod_{g \in G_0} g^{[\mathsf v_g (B)]} & \quad \longmapsto \quad \prod_{g \in G_1} (e(g) g)^{[\mathsf v_g (B) / e(g)]} \prod_{g \in G_2} g^{[\mathsf v_g (B)]} \end{aligned} \] is a transfer homomorphism. \end{proposition} \begin{proof} Clearly, $\theta$ is a surjective homomorphism satisfying $\theta^{-1} (1_{\mathcal F (G_0^*)})=\{1_{\mathcal F (G_0)} \}$. In order to verify property {\bf (T2)} of Definition \ref{3.6}, let $B \in \mathcal B (G_0)$ and $C_1, C_2 \in \mathcal B (G_0^*)$ be such that $\theta (B) = C_1 \boldsymbol{\cdot} C_2$.
We have to show that there are $B_1, B_2 \in \mathcal B (G_0)$ such that $B = B_1 \boldsymbol{\cdot} B_2$, $\theta (B_1)=C_1$, and $\theta (B_2)=C_2$. This can be checked easily. \end{proof} \subsection{\bf The $k$th Davenport constants: the general case} \label{3.C}~ Let $G_0 \subset G$ be a subset, and $k \in \mathbb N$. Recall that $\mathsf e (G) = \max \{ \ord (g) \colon g \in G \}$. If $G$ is nilpotent, then $G$ is the direct sum of its $p$-Sylow subgroups and hence $\mathsf e (G) = \lcm \{ \ord (g) \colon g \in G \} = \exp (G)$. Let \begin{itemize} \item $\mathsf E (G_0)$ be the smallest integer $\ell \in \mathbb N$ such that every sequence $S \in \mathcal F (G_0)$ of length $|S| \ge \ell$ has a product-one subsequence of length $|G|$. \smallskip \item $\mathsf s (G_0)$ denote the smallest integer $\ell \in \mathbb N$ such that every sequence $S \in \mathcal F (G_0)$ of length $|S|\ge \ell$ has a product-one subsequence of length $\mathsf e (G)$. \end{itemize} The Davenport constants, together with the Erd{\H{o}}s-Ginzburg-Ziv constant $\mathsf s (G)$, the constants $\eta (G)$ and $\mathsf E (G)$, are the most classical zero-sum invariants whose study (in the abelian setting) goes back to the early 1960s. The $k$th Davenport constants $\mathsf D_k (G)$ were introduced by Halter-Koch \cite{HK92c} and further studied in \cite[Section 6.1]{Ge-HK06a} and \cite{Fr-Sc10} (all this work is done in the abelian setting). First results in the non-abelian setting were achieved in \cite{De-Or-Qu01}. If $G$ is abelian, then W. Gao proved that $\mathsf E (G) = |G| + \mathsf d (G)$. For cyclic groups this is the Theorem of Erd{\H{o}}s-Ginzburg-Ziv which dates back to 1961 (\cite[Proposition 5.7.9]{Ge-HK06a}). W. Gao and J. Zhuang conjectured that the above equality holds true for all finite groups (\cite[Conjecture 2]{Zh-Ga05a}), and their conjecture has been verified in a variety of special cases \cite{Ba07b, Ga-Lu08a, Ga-Li10b, Ha15a}. 
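The Theorem of Erd{\H{o}}s-Ginzburg-Ziv and Gao's equality can be checked by machine for small cyclic groups: for $G = \mathbb Z_n$ one has $\mathsf E (\mathbb Z_n) = n + \mathsf d (\mathbb Z_n) = 2n-1$. The following Python sketch (illustrative only; not taken from the sources cited above) computes $\mathsf E (\mathbb Z_n)$ directly from its definition by exhaustive search.

```python
# Illustrative exhaustive verification (not from the cited sources) of
# the Erdos-Ginzburg-Ziv theorem E(Z_n) = 2n - 1: every sequence of
# 2n - 1 elements of Z_n has a zero-sum subsequence of length exactly
# n, while some sequence of length 2n - 2 has none.
from itertools import combinations, combinations_with_replacement

def has_zero_sum_subseq(seq, n):
    """Does seq contain a subsequence of length n with sum 0 mod n?"""
    return any(sum(c) % n == 0 for c in combinations(seq, n))

def egz_constant(n):
    """Smallest ell such that every sequence over Z_n of length ell has
    a zero-sum subsequence of length n; by the EGZ theorem this is 2n-1."""
    ell = n
    while not all(has_zero_sum_subseq(s, n)
                  for s in combinations_with_replacement(range(n), ell)):
        ell += 1
    return ell
```

Since the zero-sum property of a subsequence depends only on the underlying multiset, enumerating multisets suffices, which keeps the search feasible for $n \le 5$.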
For more in the non-abelian setting see \cite{Wa-Zh12a,Wa-Qu14a}. We verify two simple properties occurring in the assumptions of Propositions \ref{2.7} and \ref{2.8}. \begin{itemize} \item If $S \in \mathcal F (G)$ and $g_0 \in \pi (S)$, then $h = g_0^{-1} \in G$ is a prime in $\mathcal F (G)$ and $h \boldsymbol{\cdot} S \in \mathcal B (G)$. \item Clearly, $1_G \in \mathcal B (G)$ is a prime element of $\mathcal B (G)$. \end{itemize} Therefore all properties proved in Propositions \ref{2.7} and \ref{2.8} for $\mathsf D_k (H)$ and $\mathsf d_k (H)$ hold for the constants $\mathsf D_k (G)$ and $\mathsf d_k (G)$ (the linearity properties as given in Proposition \ref{2.7}.2 and Proposition \ref{2.8}.2.(b) were first proved by Freeze and W.A. Schmid in case of abelian groups $G$ \cite{Fr-Sc10}). We continue with properties which are more specific. \begin{proposition} \label{gen-dav-5} Let $H \le G$ be a subgroup, $N \triangleleft G$ be a normal subgroup, and $k, \ell \in \mathbb N$. \begin{enumerate} \item $\mathsf d_k (N) + \mathsf d_{\ell} (G/N) \le \mathsf d_{k+\ell-1} (G)$. \smallskip \item $\mathsf d_k (G) \le \mathsf d_{\mathsf d_k (N)+1} (G/N)$. \smallskip \item $\mathsf d_k (G)+1 \le [G \negthinspace : \negthinspace H] (\mathsf d_k (H)+1)$. \smallskip \item $\mathsf d_k (G)+1 \le k ( \mathsf d (G)+1)$. \smallskip \item $\mathsf D_k (G) \le [G \negthinspace : \negthinspace H] \mathsf D_k (H)$. \end{enumerate} \end{proposition} \begin{proof} 1. Let $\overline S = (g_1N) \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} (g_{s}N) \in \mathcal M_{\ell}^* (G/N)$ with $|\overline S|=s=\mathsf d_{\ell} (G/N)$ and let $T = h_1 \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} h_t \in \mathcal M_k^* (N)$ with $t=\mathsf d_k (N)$. 
We consider the sequence $W = g_1 \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} g_s \boldsymbol{\cdot} h_1 \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} h_t \in \mathcal F (G)$ and suppose that it is divisible by $S_1 \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} S_a \boldsymbol{\cdot} T_1 \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} T_b$ where $S_i, T_j \in \mathcal B (G) \setminus \{1_{\mathcal F (G)}\}$, $\supp (S_i) \cap \{g_1, \ldots, g_s\} \ne \emptyset$ and $T_1 \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} T_b \t h_1 \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} h_t$ for all $i \in [1, a]$ and all $j \in [1,b]$. For $i \in [1,a]$, let $\overline{S_i} \in \mathcal F (G/N)$ denote the sequence obtained from $S_i$ by replacing each $g_{\nu}$ by $g_{\nu}N$ and by omitting the elements of $S_i$ which lie in $\{h_1, \ldots, h_t\}$. Then $\overline{S_1}, \ldots, \overline{S_a} \in \mathcal B(G/N)\setminus \{1_{\mathcal F (G)}\}$ and $\overline{S_1} \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} \overline{S_a} \t \overline S$ whence $a \le \ell-1$. By construction, we have $b \le k-1$ whence $a+b < k+\ell-1$, $W \in \mathcal M_{k+\ell-1}^* (G)$, and $|W|=s+t=\mathsf d_k (N)+\mathsf d_{\ell}(G/N) \le \mathsf d_{k+\ell-1} (G)$. \smallskip 2. We set $m = \mathsf d_{\mathsf d_k (N)+1} (G/N)+1$. By \eqref{defining-d_k}, we have to show that every sequence $S$ over $G$ of length $|S| \ge m$ is divisible by a product of $k$ nontrivial product-one sequences. Let $f \colon G \to G/N$ denote the canonical epimorphism and let $S \in \mathcal F (G)$ be a sequence of length $|S| \ge m$. By definition of $m$, there exist sequences $S_1, \ldots, S_{\mathsf d_k (N)+1}$ such that $S_1 \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} S_{\mathsf d_k (N)+1} \t S$ and $f (S_1), \ldots, f (S_{\mathsf d_k (N)+1})$ are product-one sequences over $G/N$. Thus, for each $\nu \in [1, \mathsf d_k (N)+1]$, there are elements $h_{\nu} \in N$ such that $h_{\nu} \in \pi (S_{\nu})$. 
Then $T = h_1 \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} h_{\mathsf d_k (N)+1}$ is a sequence over $N$, and it has $k$ nontrivial product-one subsequences $T_1, \ldots, T_k$ whose product $T_1 \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} T_k$ divides $T$. Therefore we obtain $k$ nontrivial product-one sequences whose product divides $S$. \smallskip 3. We set $m = [G \negthinspace : \negthinspace H]$ and start with the following assertion. \begin{enumerate} \item[{\bf A.}\,] If $S \in \mathcal F (G)$ with $|S| \ge m$, then $\Pi (S) \cap H \ne \emptyset$. \end{enumerate} {\it Proof of \,{\bf A}}.\, Let $S = g_1 \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} g_n \in \mathcal F (G)$ with $|S|=n \ge m$. We consider the left cosets $g_1H, g_1g_2H, \ldots, $ $g_1 \ldots g_m H$. If one of these cosets equals $H$, then we are done. If this is not the case, then there are $k, \ell \in [1,m]$ with $k < \ell$ such that $g_1 \ldots g_kH = g_1 \ldots g_kg_{k+1} \ldots g_{\ell}H$ which implies that $g_{k+1} \ldots g_{\ell} \in H$. \qed{(Proof of {\bf A})} Now let $S \in \mathcal F (G)$ be a sequence of length $|S| = [G \negthinspace : \negthinspace H] (\mathsf d_k (H)+1)$. We have to show that $S$ is divisible by a product of $k$ nontrivial product-one sequences. By {\bf A}, there are $\mathsf d_k (H)+1$ sequences $S_1, \ldots, S_{\mathsf d_k (H)+1}$ and elements $h_1, \ldots, h_{\mathsf d_k (H)+1} \in H$ such that $S_1 \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} S_{\mathsf d_k (H)+1} \t S$ and $h_{\nu} \in \pi (S_{\nu})$ for each $\nu \in [1, \mathsf d_k (H)+1]$. By definition, the sequence $h_1 \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} h_{\mathsf d_k (H)+1} \in \mathcal F (H)$ is divisible by a product of $k$ nontrivial product-one sequences. Therefore $S$ is divisible by a product of $k$ nontrivial product-one sequences. \smallskip 4. Let $S \in \mathcal F (G)$ be a sequence of length $|S|=k ( \mathsf d (G)+1)$. 
Then $S$ may be written as a product $S = S_1 \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} S_k$ where $S_1, \ldots , S_k \in \mathcal F (G)$ with $|S_{\nu}|=\mathsf d (G)+1$ for every $\nu \in [1,k]$. Then each $S_{\nu}$ is divisible by a nontrivial product-one sequence $T_{\nu}$ and hence $S$ is divisible by $T_1 \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} T_k$. Thus by \eqref{defining-d_k} we infer that $\mathsf d_k (G)+1 \le k ( \mathsf d (G)+1)$. \smallskip 5. Let $A = g_1\boldsymbol{\cdot}\ldots\boldsymbol{\cdot} g_{\ell}\in\mathcal{B}(G)$ with $g_1\dots g_{\ell}=1$ and $\ell> [G:H]\mathsf D_k(H)$. We show that $\ell>\mathsf D_k(G)$. We set $d=\mathsf D_k(H)$ and consider the left $H$-cosets $C_j=g_1\dots g_jH$ for each $j\in [1,\ell]$. By the pigeonhole principle there exist $1\le i_1<\dots<i_{d+1}\le\ell$ such that $C_{i_1}=\dots =C_{i_{d+1}}$. We set $h_s =g_{i_s+1}\dots g_{i_{s+1}}$ for each $s\in [1,d]$ and $h_{d+1}=g_{i_{d+1}+1}\ldots g_{\ell}g_1\dots g_{i_1}$. Since $C_{i_1}=\dots=C_{i_{d+1}}$ and $g_1\cdots g_{\ell}=1$, we have $h_1,\dots,h_{d+1}\in H$ and $h_1\cdots h_{d+1}=1$, whence $h_1\boldsymbol{\cdot} \ldots \boldsymbol{\cdot} h_{d+1}\in \mathcal{B}(H)$. The inequality $d+1> \mathsf D_k(H)$ implies that $h_1\boldsymbol{\cdot} \ldots \boldsymbol{\cdot} h_{d+1}=S_1\boldsymbol{\cdot} \ldots \boldsymbol{\cdot} S_{k+1}$, where $1_{\mathcal F (H)} \ne S_i\in \mathcal{B}(H)$ for $i\in [1,k+1]$. Let $T_i\in \mathcal{F}(G)$ denote the sequence obtained from $S_i$ by replacing each occurrence of $h_s$ by $g_{i_s+1}\boldsymbol{\cdot} \ldots \boldsymbol{\cdot} g_{i_{s+1}}$ for $s\in [1,d]$ and $h_{d+1}$ by $g_{i_{d+1}+1}\boldsymbol{\cdot} \ldots \boldsymbol{\cdot} g_{\ell}\boldsymbol{\cdot} g_1\boldsymbol{\cdot} \ldots \boldsymbol{\cdot} g_{i_1}$. Then $T_1, \ldots , T_{k+1} \in \mathcal{B}(G)$ and $A = g_1\boldsymbol{\cdot}\ldots\boldsymbol{\cdot} g_{\ell}=T_1\boldsymbol{\cdot}\ldots\boldsymbol{\cdot} T_{k+1}$, which implies that $\ell>\mathsf D_k(G)$. 
\end{proof} Much more is known for the classical Davenport constants $\mathsf D_1 (G)=\mathsf D (G)$ and $\mathsf d_1 (G)=\mathsf d(G)$. We start with metacyclic groups of index two. The following result was proved in \cite[Theorem 1.1]{Ge-Gr13a}. \begin{theorem}\label{3.12} Suppose that $G$ has a cyclic, index $2$ subgroup. Then \[ \mathsf D(G)=\mathsf d(G)+|G'|\quad\text{and}\quad\mathsf d(G)=\left\{ \begin{array}{ll} |G|-1 & \hbox{if $G$ is cyclic} \\ \frac12|G| & \hbox{if $G$ is non-cyclic,} \end{array} \right. \] where $G'=[G,G]$ is the commutator subgroup of $G$. \end{theorem} The next result gathers upper bounds for the large Davenport constant (for $\mathsf d (G)$ see \cite{Ga-Li-Pe14a}). \begin{theorem} \label{3.13} Let $G' = [G,G]$ denote the commutator subgroup of $G$. \begin{enumerate} \item $\mathsf D (G) \le \mathsf d (G) + 2 |G'| -1$, and equality holds if and only if $G$ is abelian. \item If $G$ is a non-abelian $p$-group, then $\mathsf D (G) \le \frac{p^2+2p-2}{p^3} |G|$. \item If $G$ is non-abelian of order $pq$, where $p, q$ are primes with $p < q$, then $\mathsf D (G) = 2q$ and $\mathsf d (G) = q+p-2$. \item If $N \triangleleft G$ is a normal subgroup with $G/N \cong C_p \oplus C_p$ for some prime $p$, then \[ \mathsf d (G) \le (\mathsf d (N) +2)p - 2 \le \frac{1}{p}|G|+p-2 \,. \] \item If $G$ is non-cyclic and $p$ is the smallest prime dividing $|G|$, then $\mathsf D (G) \le \frac{2}{p}|G|$. \item If $G$ is neither cyclic nor isomorphic to a dihedral group of order $2n$ with odd $n$, then $\mathsf D (G) \le \frac{3}{4}|G|$. \end{enumerate} \end{theorem} \begin{proof} All results can be found in \cite{Gr13b}: see Lemma 4.2, Theorems 3.1, 4.1, 5.1, 7.1, 7.2, and Corollary 5.7. \end{proof} \begin{corollary} \label{3.14} The following statements are equivalent{\rm \,:} \begin{enumerate} \item[(a)] $G$ is cyclic or isomorphic to a dihedral group of order $2n$ for some odd $n \ge 3$. \smallskip \item[(b)] $\mathsf D (G) = |G|$. 
\end{enumerate} \end{corollary} \begin{proof} If $G$ is not as in (a), then $\mathsf D (G) \le \frac{3}{4}|G|$ by Theorem \ref{3.13}.6. If $G$ is cyclic, then $\mathsf D (G) = |G|$ by Lemma \ref{3.1}.3. If $G$ is dihedral of order $2n$ for some odd $n \ge 3$, then the commutator subgroup $G'$ of $G$ has order $n$ and hence $\mathsf D (G)=|G|$ by Theorem \ref{3.12}. \end{proof} \subsection{\bf The $k$th Davenport constants: The abelian case}~ \label{3.D} \centerline{\it Throughout this subsection, all groups are abelian and will be written additively.} \smallskip We have $G \cong C_{n_1} \oplus \ldots \oplus C_{n_r}$, with $r \in \mathbb N_0$ and $1 < n_1 \t \ldots \t n_r$, $\mathsf r (G) = r$ is the {\it rank} of $G$ and $n_r = \exp (G)$ is the {\it exponent} of $G$. We define \[ \mathsf d^* (G) = \sum_{i=1}^r (n_i-1) \,. \] If $G = \{0\}$, then $r=0 = \mathsf d^* (G)$. An $s$-tuple $(e_1, \ldots, e_s)$ of elements of $G \setminus \{0\}$ is said to be a \ {\it basis} \ of $G$ if $G = \langle e_1 \rangle \oplus \ldots \oplus \langle e_s \rangle$. First we provide a lower bound for the Davenport constants. \begin{lemma} \label{dandDabeliancase} Let $G$ be abelian. \begin{enumerate} \item $\mathsf D_k (G) = 1+ \mathsf d_k (G)$ for every $k \in \mathbb N$. \smallskip \item $\mathsf d^* (G) + (k-1)\exp (G) \le \mathsf d_k (G)$. \end{enumerate} \end{lemma} \begin{proof} 1. Let $k \in \mathbb N$. By Proposition \ref{2.8}.1, we have $1 + \mathsf d_k (G) \le \mathsf D_k (G)$. Obviously, the map \[ \psi \colon \mathcal M_k^*(G) \to \mathcal M_k (G) \setminus \{1\}\,, \quad \text{given by} \quad \psi (S) = (-\sigma (S)) \boldsymbol{\cdot} S\,, \] is surjective and we have $|\psi (S)|=1+|S|$ for every $S \in \mathcal M_k^* (G)$. Therefore we have $1 + \mathsf d_k (G) = \mathsf D_k (G)$. \smallskip 2. Suppose that $G \cong C_{n_1} \oplus \ldots \oplus C_{n_r}$, with $r \in \mathbb N_0$ and $1 < n_1 \t \ldots \t n_r$. 
If $(e_1, \ldots, e_r)$ is a basis of $G$ with $\ord (e_i) = n_i$ for all $i \in [1, r]$, then \[ S = e_r^{[n_r(k-1)]} \prod_{i=1}^r e_i^{[n_i-1]} \] is not divisible by a product of $k$ nontrivial zero-sum sequences, whence $\mathsf d^* (G) + (k-1)\exp (G) = |S| \le \mathsf d_k (G)$. \end{proof} We continue with a result on the $k$th Davenport constant which refines the more general results in Subsection \ref{2.E}. It provides an explicit formula for $\mathsf d_k (G)$ in terms of $\mathsf d (G)$ (see \cite[Theorem 6.1.5]{Ge-HK06a}). \begin{theorem} \label{gen-dav-abelian} Let $G$ be abelian, $\exp (G) = n$, and $k \in \mathbb N$. \begin{enumerate} \item Let $H \le G$ be a subgroup such that $G = H \oplus C_n$. Then \[ \mathsf d (H) +kn-1 \le \mathsf d_k (G) \le (k-1)n + \max \{ \mathsf d(G), \eta(G) -n-1\} \,. \] In particular, if \ $\mathsf d (G) = \mathsf d (H) + n-1$ \ and \ $\eta (G) \le \mathsf d (G) + n+1$, then \ $\mathsf d_k (G) = \mathsf d (G) + (k-1)n$. \smallskip \item If \ $\mathsf r (G) \le 2$, then \ $\mathsf d_k (G) = \mathsf d (G) + (k-1)n$. \smallskip \item If \ $G$ is a $p$-group and \ $\mathsf D (G) \le 2 n-1$, then \ $\mathsf d_k (G) = \mathsf d (G) + (k-1)n$. \end{enumerate} \end{theorem} For the rest of this section we focus on the classical Davenport constant $\mathsf D (G)$. By Lemma \ref{dandDabeliancase}.2, there is the crucial inequality \[ \mathsf d^* (G) \le \mathsf d (G) \,. \] We continue with a list of groups for which equality holds. The list is incomplete, but the remaining groups for which $\mathsf d^* (G) = \mathsf d (G)$ is known to hold are of a special nature similar to those listed in Theorem \ref{d*equalsd}.3 (see \cite{Sc11b} for a more detailed discussion). In particular, it is still open whether equality holds for all groups of rank three (see \cite[Section 4.1]{Sc11b}) or for all groups of the form $G = C_n^r$ (see \cite{Gi08b}). 
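For groups small enough to search exhaustively, the lower bound $\mathsf d^* (G) \le \mathsf d (G)$, and the fact that equality holds in small rank two cases, can be confirmed directly. The sketch below (function names ours; feasible only for tiny groups) computes $\mathsf d (G)$ as the maximal length of a zero-sum-free sequence over $G$:

```python
from itertools import combinations, combinations_with_replacement, product

def zero_sum_free(seq, mods):
    """True iff no nonempty subsequence of seq sums to zero in
    G = Z_{mods[0]} x ... x Z_{mods[-1]}."""
    for r in range(1, len(seq) + 1):
        for sub in combinations(seq, r):
            if all(sum(x[i] for x in sub) % m == 0
                   for i, m in enumerate(mods)):
                return False
    return True

def small_davenport(mods):
    """d(G) = maximal length of a zero-sum-free sequence over G
    (exhaustive search over multisets of group elements)."""
    elements = list(product(*(range(m) for m in mods)))
    d = 0
    while any(zero_sum_free(s, mods)
              for s in combinations_with_replacement(elements, d + 1)):
        d += 1
    return d

# d*(G) = sum (n_i - 1), and d*(G) = d(G) for these rank <= 2 groups:
for mods in ((2, 4), (3, 3)):
    assert small_davenport(mods) == sum(m - 1 for m in mods)
```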
\begin{theorem} \label{d*equalsd} We have $\mathsf d^* (G) = \mathsf d (G)$ in each of the following cases{\rm \;:} \begin{enumerate} \item $G$ is a $p$-group or has rank $\mathsf r (G) \le 2$. \item $G = K \oplus C_{km}$ \ where $k, m \in \mathbb N$, $p \in \mathbb P$ a prime, $m$ a power of $p$ and $K \le G$ is a $p$-subgroup with $\mathsf d (K) \le m-1$. \item $G = C_m^2 \oplus C_{mn}$ where $m \in \{2,3,4,6\}$ and $n \in \mathbb N$. \end{enumerate} \end{theorem} \begin{proof} For 1. see \cite{Ge-HK06a} (in particular, Theorems 5.5.9 and 5.8.3) for proofs and historical comments. For 2. see \cite[Corollary 4.2.13]{Ge09a}, and 3. can be found in \cite{Bh-SP07a} and \cite[Theorem 4.1]{Sc11b}. \end{proof} There are infinite series of groups $G$ with $\mathsf d^* (G) < \mathsf d (G)$. However, the true reason for the phenomenon $\mathsf d^* (G) < \mathsf d (G)$ is not understood. Here is a simple observation. Suppose that $G = C_{n_1} \oplus \ldots \oplus C_{n_r}$ with $1 < n_1 \t \ldots \t n_r$, $I \subset [1, r]$, and let $G' = \oplus_{i \in I}C_{n_i}$. If $\mathsf d^* (G') < \mathsf d (G')$, then $\mathsf d^* (G) < \mathsf d (G)$. For series of groups $G$ which have rank four and five and satisfy $\mathsf d^* (G) < \mathsf d (G)$ we refer to \cite{Ge-Sc92, Ge-Li-Ph12}. A standing conjecture for an upper bound on $\mathsf D (G)$ states that $\mathsf d (G) \le \mathsf d^* (G) + \mathsf r (G)$. However, the available results are much weaker (\cite[Theorem 5.5.5]{Ge-HK06a}, \cite{Bh-SP13a}). The remainder of this subsection is devoted to inverse problems with respect to the Davenport constant. Thus the objective is to study the structure of zero-sum free sequences $S$ whose lengths $|S|$ are close to the maximal possible value $\mathsf d (G)$. If $G$ is cyclic of order $n \ge 2$, then an easy exercise shows that $S$ is zero-sum free of length $|S| = \mathsf d (G)$ if and only if $S = g^{[n-1]}$ for some $g \in G$ with $\ord (g) = n$. 
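The easy exercise just mentioned can also be checked mechanically for small $n$. The following sketch (names ours) enumerates all zero-sum-free multisets over $\mathbb Z/n\mathbb Z$ of the maximal length $n-1$ and compares them with the predicted family $g^{[n-1]}$ with $\ord (g) = n$:

```python
from itertools import chain, combinations, combinations_with_replacement
from math import gcd

def zero_sum_free(seq, n):
    """True iff no nonempty subsequence of seq sums to 0 modulo n."""
    subs = chain.from_iterable(
        combinations(seq, r) for r in range(1, len(seq) + 1))
    return all(sum(sub) % n != 0 for sub in subs)

def maximal_zero_sum_free(n):
    """All zero-sum-free multisets over Z_n of maximal length n - 1."""
    return [s for s in combinations_with_replacement(range(n), n - 1)
            if zero_sum_free(s, n)]

# Expected: exactly the constant sequences g^[n-1] with ord(g) = n.
for n in (3, 5, 6, 7):
    assert maximal_zero_sum_free(n) == [
        (g,) * (n - 1) for g in range(1, n) if gcd(g, n) == 1]
```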
After many contributions since the 1980s, S. Savchev and F. Chen could finally prove a (sharp) structural result. In order to formulate it we need some more terminology. If $g \in G$ is a nonzero element of order $\ord (g) = n$ and \[ S = (n_1g) \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} (n_{\ell}g), \quad \text{where} \quad \ell \in \mathbb N_0 \quad \text{and} \quad n_1, \ldots, n_{\ell} \in [1,n] \,, \] we define \[ \| S \|_g = \frac{n_1+ \ldots + n_{\ell}}n \,. \] Obviously, $S$ has sum zero if and only if $\|S\|_g \in \mathbb N_0$, and the {\it index of} $S$ is defined as \[ \ind (S) = \min \{ \| S \|_g \colon g \in G \ \text{with} \ G = \langle g \rangle \} \in \mathbb Q_{\ge 0} \,. \] \begin{theorem} \label{inverse1} Let $G$ be cyclic of order $|G|=n \ge 3$. \begin{enumerate} \item If $S$ is a zero-sum free sequence over $G$ of length $|S| \ge (n+1)/2$, then there exist $g \in G$ with $\ord (g) = n$ and integers $1 = m_1, \ldots, m_{|S|} \in [1, n-1]$ such that \begin{itemize} \item $S = (m_1g) \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} (m_{|S|}g)$ \item $m_1 + \ldots + m_{|S|} < n$ and $\Sigma (S) = \{ \nu g \colon \nu \in [1, m_1 + \ldots + m_{|S|}] \}$. \end{itemize} \smallskip \item If $U \in \mathcal A (G)$ has length $|U| \ge \lfloor \frac{n}{2} \rfloor + 2$, then $\ind (U) = 1$. \end{enumerate} \end{theorem} \begin{proof} 1. See \cite{Sa-Ch07a} for the original paper. For the history of the problem and a proof in the present terminology see \cite[Chapter 5.1]{Ge09a} or \cite[Chapter 11]{Gr13a}. \smallskip 2. This is a simple consequence of the first part (see \cite[Theorem 5.1.8]{Ge09a}). \end{proof} The above result was generalized to groups of the form $G = C_2 \oplus C_{2n}$ by S. Savchev and F. Chen (\cite{Sa-Ch12a}). Not much is known about the number of minimal zero-sum sequences over a given group. 
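For small cyclic groups, part 2 of Theorem \ref{inverse1} can be verified exhaustively. In the sketch below (names ours) the index is minimized over all residues $a$ coprime to $n$, corresponding to the generators $g = a^{-1}$; a minimal zero-sum sequence of length at least $2$ cannot contain $0$, so $0$ is excluded from the search:

```python
from fractions import Fraction
from itertools import chain, combinations, combinations_with_replacement
from math import gcd

def is_minimal_zero_sum(U, n):
    """Zero-sum modulo n with no proper nonempty zero-sum subsequence."""
    if sum(U) % n != 0:
        return False
    proper = chain.from_iterable(
        combinations(U, r) for r in range(1, len(U)))
    return all(sum(sub) % n != 0 for sub in proper)

def index_of(U, n):
    """ind(U): minimize ||U||_g over generators g = a^{-1}; the
    coefficient of u at g is the representative of u*a in [1, n]."""
    return min(Fraction(sum((u * a) % n or n for u in U), n)
               for a in range(1, n) if gcd(a, n) == 1)

# Every minimal zero-sum sequence of length >= floor(n/2) + 2 has index 1.
for n in (7, 8):
    for ell in range(n // 2 + 2, n + 1):
        for U in combinations_with_replacement(range(1, n), ell):
            if is_minimal_zero_sum(U, n):
                assert index_of(U, n) == 1
```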
However, the above result makes it possible to give a formula for the number of minimal zero-sum sequences of length $\ell \ge \lfloor \frac{n}{2} \rfloor + 2$ (this formula was first proved by Ponomarenko \cite{Po04a} for $\ell > 2n/3$). \begin{corollary} \label{number-of-minimal} Let $G$ be cyclic of order $|G|=n \ge 3$, and let $\ell \in \Big[ \lfloor \frac{n}{2} \rfloor + 2, n \Big]$. Then the number of minimal zero-sum sequences $U \in \mathcal A (G)$ of length $\ell$ equals $\Phi (n) \mathsf p_{\ell} (n)$, where $\Phi (n) = |(\mathbb Z/n\mathbb Z)^{\times}|$ is Euler's Phi function and $\mathsf p_{\ell} (n)$ is the number of integer partitions of $n$ into $\ell$ summands. \end{corollary} \begin{proof} Clearly, every generating element $g \in G$ and every integer partition $n = m_1 + \ldots + m_{\ell}$ gives rise to a minimal zero-sum sequence $U = (m_1g) \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} (m_{\ell}g)$. Conversely, if $U \in \mathcal A (G)$ has length $|U| = \ell$, then Theorem \ref{inverse1}.2 implies that there is an element $g \in G$ with $\ord (g) = n$ such that \[ U = (m_1g) \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} (m_{\ell}g) \quad \text{where} \quad m_1, \ldots, m_{\ell} \in [1, n-1] \ \text{with} \ n = m_1 + \ldots + m_{\ell} \,. \tag{$*$} \] Since $G$ has precisely $\Phi (n)$ generating elements, it remains to show that for every $U \in \mathcal A (G)$ of length $|U| = \ell$ there is precisely one generating element $g \in G$ with $\|U\|_g = 1$. Let $U$ be as in $(*)$, and assume to the contrary that there are $a \in [2, n-1]$ with $\gcd (a, n)=1$ and $m_1', \ldots, m_{\ell}' \in [1,n]$ such that $m_1'+ \ldots + m_{\ell}'=n$ and \[ U = \big(m_1' (a g)\big) \cdot \ldots \cdot \big(m_{\ell}' (ag) \big) \,. \] Let $a'\in [2,n-1]$ be such that $aa'\equiv 1\pmod n$. 
Since \[ \begin{aligned} n&=m_1+\ldots+m_{\ell}\ge \mathsf v_g(U)+a\mathsf v_{ag}(U)+2(\ell-\mathsf v_g(U)-\mathsf v_{ag}(U))\\ & =2\ell-\mathsf v_g(U)+(a-2)\mathsf v_{ag}(U)\quad \text{and} \\ n&=m_1'+\ldots+m_{\ell}'\ge a'\mathsf v_g(U)+\mathsf v_{ag}(U)+2(\ell-\mathsf v_g(U)-\mathsf v_{ag}(U)) \\ & =2\ell+(a'-2)\mathsf v_g(U)-\mathsf v_{ag}(U)\,,\\ \end{aligned} \] it follows that \[ \begin{aligned} (a-1)n & =n+(a-2)n \\ & \ge 2\ell-\mathsf v_g(U)+(a-2)\mathsf v_{ag}(U)+(a-2)(2\ell+(a'-2)\mathsf v_g(U)-\mathsf v_{ag}(U))\\ & = (a-1)2\ell+((a-2)(a'-2)-1)\mathsf v_g(U)\,, \end{aligned} \] whence $a=2, a'=\frac{n+1}{2}$ or $a'=2, a=\frac{n+1}{2}$ because $\ell\ge \lfloor\frac{n}{2}\rfloor+2$. By symmetry, we may assume that $a=2$. Then $\mathsf v_g(U)\ge 2\ell-n\ge 2\lfloor\frac{n}{2}\rfloor+4-n\ge 3$, and thus $n\ge a'\mathsf v_g(U)\ge 3\frac{n+1}{2}$, a contradiction. \end{proof} The structure of all minimal zero-sum sequences of maximal length $\mathsf D (G)$ has been completely determined for rank two groups (\cite{Ga-Ge03b, Ga-Ge-Gr10a, Sc10b, Re10c}), for groups of the form $G = C_2 \oplus C_2 \oplus C_{2n}$ with $n \ge 2$ (\cite[Theorem 3.13]{Sc11b}), and for groups of the form $G = C_2^4 \oplus C_{2n}$ with $n \ge 70$ (\cite[Theorems 5.8 and 5.9]{Sa-Ch14a}). \section{\bf Multiplicative Ideal Theory of Invariant Rings} \label{sec:4} After gathering basic material from invariant theory in Subsection \ref{4.0} we construct an explicit divisor theory for the algebra of polynomial invariants of a finite group (see Subsection \ref{4.A}). In Subsection \ref{subsec:abelian} we present a detailed study of the abelian case as outlined in the Introduction. In Subsection \ref{5.D} we associate a BF-monoid to a $G$-module whose $k$th Davenport constant is a lower bound for the $k$th Noether number. \subsection{\bf Basics of invariant theory} \label{4.0}~ Let $n = \dim_{\mathbb F} (V)$ and let $\rho \colon G \to \GL(n, \mathbb F)$ be a group homomorphism. 
Consider the action of $G$ on the polynomial ring $\mathbb F[x_1,\ldots,x_n]$ via $\mathbb F$-algebra automorphisms induced by $g\cdot x_j=\sum_{i=1}^n\rho(g^{-1})_{ji}x_i$. Taking a slightly more abstract point of departure, we suppose that $V$ is a $G$-module (i.e. we suppose that $V$ is endowed with an action of $G$ via linear transformations). Choosing a basis of $V$, $V$ is identified with $\mathbb F^n$, the group $\GL(n,\mathbb F)$ is identified with the group $\GL(V)$ of invertible linear transformations of $V$, and $\mathbb F[V]=\mathbb F[x_1, \ldots, x_n]$ can be thought of as the symmetric algebra of $V^*$, the dual $G$-module of $V$, in which $(x_1,\ldots,x_n)$ is a basis dual to the standard basis in $V$. The action on $V^*$ is given by $(g\cdot x)(v)=x(\rho(g^{-1})v)$, where $g\in G$, $x\in V^*$, $v\in V$. Note that, if $\mathbb F$ is infinite, then $\mathbb F[V]$ is the algebra of polynomial functions $V\to \mathbb F$, and the action of $G$ on $\mathbb F[V]$ is the usual action on functions $V\to \mathbb F$ induced by the action of $G$ on $V$ via $\rho$. Denote by $\mathbb F(V)$ the quotient field of $\mathbb F[V]$, and extend the $G$-action on $\mathbb F[V]$ to $\mathbb F (V)$ by \[ g\cdot \frac{f_1}{f_2} = \frac{g\cdot f_1}{g \cdot f_2} \quad \text{for} \ f_1, f_2 \in \mathbb F[V] \quad \text{and} \ g \in G \,. \] We define \[ \mathbb F(V)^G = \{ f\in \mathbb F(V) \colon g\cdot f=f \ \text{for all} \ g\in G \} \subset \mathbb F(V) \quad \text{and} \quad \mathbb F[V]^G = \mathbb F(V)^G \cap \mathbb F[V] \,. \] Then $\mathbb F (V)^G \subset \mathbb F (V)$ is a subfield and $\mathbb F[V]^G \subset \mathbb F[V]$ is an $\mathbb F$-subalgebra of $\mathbb F[V]$, called the {\it ring of polynomial invariants} of $G$ (the group homomorphism $\rho \colon G\to\GL(V)$ giving the $G$-action on $V$ is usually suppressed from the notation). 
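As a toy illustration of $\mathbb F[V]^G$ (our own example and naming, not taken from the references): for $G = \{\pm \mathrm{id}\}$ acting on $V = \mathbb F^2$, a monomial $x^ay^b$ is invariant if and only if $a+b$ is even, and the invariant ring is generated by the three quadratics $x^2$, $xy$, $y^2$. The sketch checks, on the level of exponent vectors, that the invariant monomials of small degree are exactly the products of these candidate generators:

```python
from itertools import product

GENS = [(2, 0), (1, 1), (0, 2)]  # exponent vectors of x^2, xy, y^2

def invariant(a, b):
    # G = {+-id} on F^2: (-1)^(a+b) x^a y^b = x^a y^b iff a+b is even
    return (a + b) % 2 == 0

def decomposable(a, b):
    """Is x^a y^b a product of the candidate generators?"""
    if (a, b) == (0, 0):
        return True
    return any(p <= a and q <= b and decomposable(a - p, b - q)
               for p, q in GENS)

# Invariant monomials of degree <= 10 coincide with products of the
# three quadratics; there is no invariant of degree 1, so the three
# quadratics form a minimal generating set.
for a, b in product(range(11), repeat=2):
    assert invariant(a, b) == decomposable(a, b)
```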
Since every element of $\mathbb F (V)$ can be written in the form $f_1 f_2^{-1}$ with $f_1 \in \mathbb F[V]$ and $f_2 \in \mathbb F[V]^G$, it follows that $\mathbb F (V)^G$ is the quotient field of $\mathbb F[V]^G$. Next we summarize some well-known ring theoretical properties of $\mathbb F[V]^G$ going back to E. Noether \cite{No26}. \begin{theorem} \label{4.1} Let all notations be as above. \begin{enumerate} \item $\mathbb F[V]^G \subset \mathbb F[V]$ is an integral ring extension and $\mathbb F[V]^G$ is normal. \smallskip \item $\mathbb F[V]$ is a finitely generated $\mathbb F[V]^G$-module, and $\mathbb F[V]^G$ is a finitely generated $\mathbb F$-algebra $($hence in particular a noetherian domain$)$. \smallskip \item $\mathbb F[V]^G$ is a Krull domain with Krull dimension $\dim_{\mathbb F} (V)$. \end{enumerate} \end{theorem} \begin{proof} 1. To show that $\mathbb F[V]^G$ is normal, consider an element $f \in \mathbb F(V)^G$ which is integral over $\mathbb F[V]^G$. Then $f$ is integral over $\mathbb F[V]$ as well, and since $\mathbb F[V]$ is normal, it follows that $f \in \mathbb F[V] \cap \mathbb F(V)^G = \mathbb F[V]^G$. To show that $\mathbb F[V]^G \subset \mathbb F[V]$ is an integral ring extension, consider an element $f \in \mathbb F[V]$ and the polynomial \begin{equation}\label{eq:integral} \Phi_f = \prod_{g \in G} (X - gf) \in \mathbb F[V][X] \,. \end{equation} The coefficients of $\Phi_f$ are the elementary symmetric functions (up to sign) evaluated at $(gf)_{g \in G}$, and hence they are in $\mathbb F[V]^G$. Thus $f$ is a root of a monic polynomial with coefficients in $\mathbb F[V]^G$. \smallskip 2. For $i \in [1,n]$, we consider the polynomials $\Phi_{x_i} (X)$ (cf. \eqref{eq:integral}), and denote by $A \subset \mathbb F[V]^G \subset \mathbb F[V]$ the $\mathbb F$-algebra generated by the coefficients of $\Phi_{x_1}, \ldots, \Phi_{x_n}$. By definition, $A$ is a finitely generated $\mathbb F$-algebra and hence a noetherian domain. 
Since $x_1, \ldots, x_n$ are integral over $A$, $\mathbb F[V] = A[x_1, \ldots, x_n]$ is a finitely generated (and hence noetherian) $A$-module. Therefore the $A$-submodule $\mathbb F[V]^G$ is a finitely generated $A$-module, and hence a finitely generated $\mathbb F$-algebra. \smallskip 3. By 1. and 2., $\mathbb F[V]^G$ is a normal noetherian domain, and hence a Krull domain by Theorem \ref{2.1}. Since $\mathbb F[V]^G \subset \mathbb F[V]$ is an integral ring extension, the Theorem of Cohen-Seidenberg implies that their Krull dimensions coincide, and hence $\dim (\mathbb F[V]^G) = \dim (\mathbb F[V]) = \dim_{\mathbb F} (V)$. \end{proof} The algebra $\mathbb F[V]$ is graded in the standard way (namely, $\deg (x_1)= \ldots = \deg (x_n)=1$), and the subalgebra $\mathbb F[V]^G$ is generated by homogeneous elements. For $\mathbb F$-subspaces $S,T\subset \mathbb F[V]$ we write $ST$ for the $\mathbb F$-subspace in $\mathbb F[V]$ spanned by all the products $st$ $(s\in S,t\in T)$, and write $S^k=S\dots S$ (with $k$ factors). The factor algebra of $\mathbb F[V]$ by the ideal generated by $\mathbb F[V]^G_+$ is usually called the {\it algebra of coinvariants}. It inherits the grading of $\mathbb F[V]$ and is finite dimensional. \begin{definition}\label{def:beta_k} Let $k \in \mathbb N$. \begin{enumerate} \item Let $\beta_k(G,V)$ be the top degree of the factor space $\mathbb F[V]_+^G/(\mathbb F[V]_+^G)^{k+1}$, where $\mathbb F[V]_+^G$ is the maximal ideal of $\mathbb F[V]^G$ spanned by the positive degree homogeneous elements. We call \[ \beta_k(G) = \sup\{ \beta_k(G,W): W \text{ is a $G$-module over $\mathbb F$} \} \] the $k$th {\it Noether number} of $G$. \item Let $b_k(G,V)$ denote the top degree of the factor algebra $\mathbb F[V]/(\mathbb F[V]^G_+)^k\mathbb F[V]$ and set \[ b_k(G)= \sup\{ b_k(G,W): W \text{ is a $G$-module over $\mathbb F$} \} \,. 
\] \end{enumerate} \end{definition} In the special case $k=1$ we set \[ \beta(G,V)=\beta_1(G,V) \ , \beta(G)=\beta_1(G) \ , \ b(G,V)=b_1(G,V) \ , \text{and} \ b(G)=b_1(G) \,, \] and $\beta (G)$ is the {\it Noether number} of $G$. If $\{f_1, \ldots, f_m\}$ and $\{h_1, \ldots, h_l\}$ are two minimal homogeneous generating sets of $\mathbb F[V]^G$, then $m=l$ and, after renumbering if necessary, $\deg (f_i) = \deg (h_i)$ for all $i \in [1, m]$ (\cite[Proposition 6.19]{Ne07a}). Therefore by the Graded Nakayama Lemma (\cite[Proposition 8.31]{Ne07a}) we have \[ \beta (G,V) = \max \{ \deg (f_i) \colon i \in [1, m] \} \,, \] where $\{f_1,\ldots,f_m\}$ is a minimal homogeneous generating set of $\mathbb F[V]^G$. Again by the Graded Nakayama Lemma, $b(G,V)$ is the maximal degree of a generator in a minimal system of homogeneous generators of $\mathbb F[V]$ as an $\mathbb F[V]^G$-module. If $\mathrm{char}(\mathbb F) \nmid |G|$, then by \cite[Corollary 3.2]{Cz-Do13c} we have \begin{equation}\label{eq:beta=b+1} \beta_k(G)=b_k(G)+1 \quad \mbox{ and } \quad \beta(G,V)\le b(G,V)+1 \,, \end{equation} where the second inequality can be strict. If $G$ is abelian, then $\beta_k(G,V)$ and $b_k(G,V)$ will be interpreted as $k$th Davenport constants (see Proposition~\ref{prop:bschmid}). The \emph{regular $G$-module $V_{\mathrm{reg}}$} has a basis $\{e_g\colon g\in G\}$ labelled by the group elements, and the group action is given by $g\cdot e_h=e_{gh}$ for $g,h\in G$. More conceptually, one can identify $V_{\mathrm{reg}}$ with the space of $\mathbb F$-valued functions on $G$, on which $G$ acts linearly via the action induced by the left multiplication action of $G$ on itself. In this interpretation the basis element $e_g$ is the characteristic function of the set $\{g\}\subset G$. It was proved in \cite{Sc91a} that, if $\mathrm{char}(\mathbb F)=0$, then $\beta(G)=\beta(G,V_{\mathrm{reg}})$. 
If $\mathbb F$ is algebraically closed, each irreducible $G$-module occurs in $V_{\mathrm{reg}}$ as a direct summand with multiplicity equal to its dimension. \begin{theorem}\label{thm:noether}~ \begin{enumerate} \item If $\mathrm{char}(\mathbb F) \nmid |G|$, then $\beta(G) \le |G|$. \item If $\mathrm{char} (\mathbb F) \mid |G|$, then $\beta (G)=\infty$. \end{enumerate} \end{theorem} \begin{proof} 1. The case $\mathrm{char} (\mathbb F) = 0$ was proved by E. Noether \cite{No16} in 1916, and her argument works as well when the characteristic of $\mathbb F$ is greater than $|G|$. The general case was shown independently by P. Fleischmann \cite{Fl00a} and J. Fogarty \cite{fogarty} (see also \cite[Theorem 2.3.3]{Ne-Sm02a} and \cite{knop}). For 2. see \cite{richman}. \end{proof} Bounding the Noether number has always been an objective of invariant theory (for recent surveys we refer to \cite{wehlau, Ne07b}; degree bounds are discussed in \cite{Do-He00a, Se02a, F-S-S-W06, Cz14d, heg-pyb}; see \cite{derksen-kemper} for algorithmic aspects). Moreover, the main motivation to introduce the $k$th Noether numbers $\beta_k(G)$ (\cite{Cz-Do13c, Cz-Do15a, Cz-Do14a}) was to bound the ordinary Noether number $\beta(G)$ via structural reduction (see Subsection~\ref{sec:5.1}). \subsection{\bf The divisor theory of invariant rings} \label{4.A}~ Let $G\subset \GL(V)$ and $\chi\in\Hom(G,\mathbb F^\bullet)$. Then \[ \mathbb F[V]^{G,\chi} = \{f\in\mathbb F[V]\colon g\cdot f=\chi(g)f \ \text{for all} \ g \in G \} \] denotes the space of {\it relative invariants of weight} $\chi$, and we set \[ \mathbb F[V]^{G,\mathrm{rel}} = \bigcup_{\chi\in\Hom(G,\mathbb F^\bullet)}\mathbb F[V]^{G,\chi}. 
\] Clearly, we have $\mathbb F[V]^G \ \subset \ \mathbb F[V]^{G,\mathrm{rel}} \ \subset \ \mathbb F[V]$, and to simplify notation, we set \[ H =(\mathbb F[V]^G \setminus \{0\})_{{\text{\rm red}}} \,, \quad D=(\mathbb F[V]^{G,\mathrm{rel}}\setminus\{0\})_{{\text{\rm red}}} \,, \quad \text{and} \quad E= (\mathbb F[V] \setminus \{0\})_{{\text{\rm red}}} \,. \] Since $\mathbb F[V]$ is a factorial domain with $\mathbb F^\bullet$ as its set of units, $E=\mathcal{F}(P)$ is the free abelian monoid generated by $P=\{\mathbb F^\bullet f\colon f\in \mathbb F[V] \mbox{ is irreducible}\}$. The action of $G$ on $\mathbb F[V]$ is via $\mathbb F$-algebra automorphisms, so it induces a permutation action of $G$ on $E$ and $P$. Denote by $P/G$ the set of $G$-orbits in $P$. We shall identify $P/G$ with a subset of $E$ as follows: assign to the $G$-orbit $\{f_1,\ldots,f_r\}$ the element $f_1\ldots f_r\in E$ (here $f_1,\dots,f_r\in P$ are distinct). We say that a non-identity element $g \in G\subset \GL(V)$ is a {\it pseudoreflection} if a hyperplane in $V$ is fixed pointwise by $g$, and $g$ is not unipotent (this latter condition holds automatically if $\mathrm{char}(\mathbb F)$ does not divide $|G|$, since then a non-identity unipotent transformation cannot have finite order). We denote by $\Hom^0(G,\mathbb F^\bullet) \le \Hom (G,\mathbb F^\bullet)$ the subgroup of the character group consisting of the characters that contain all pseudoreflections in their kernels. For each $p\in P$, choose a representative $\tilde{p}\in \mathbb F[V]$ in the associate class $p=\mathbb F^\bullet \tilde{p}$. We have $\mathfrak X (\mathbb F[V]) = \{\tilde{p}\mathbb F[V]\colon p\in P\}$ because $\mathbb F[V]$ is factorial. We set $\mathsf v_{\tilde p} = \mathsf v_p \colon \mathsf q (\mathbb F[V]^{\bullet})= \mathbb F(V)^{\bullet} \to \mathbb Z$, and for a subset $X\subset \mathbb F(V)$ we write $\mathsf v_p(X) = \inf \{\mathsf v_p(f) \colon f\in X\setminus \{0\} \}$. 
The {\it ramification index} of the prime ideal $\tilde{p}\mathbb F[V]$ over $\mathbb F[V]^G$ is $e(p)=\mathsf v_p(\tilde{p}\mathbb F[V]\cap \mathbb F[V]^G)$. The ramification index $e(p)$ can be expressed in terms of the {\it inertia subgroup} \[ I_p = \{g\in G \colon g\cdot f-f\in \tilde{p}\mathbb F[V] \quad \text{for all} \quad f\in \mathbb F[V] \} \,. \] Since $V^*$ is a $G$-stable subspace in $\mathbb F[V]$, the inertia subgroup $I_p$ acts trivially on $V^*/(V^*\cap \tilde{p}\mathbb F[V])$. On the other hand, $I_p$ acts faithfully on $V^*$. So if $I_p$ is non-trivial, then $V^*\cap \tilde{p}\mathbb F[V]\neq 0$, implying $\tilde{p}\in V^*$. Clearly $I_p$ must act trivially on the hyperplane $\mathcal{V}(\tilde{p})=\{v\in V\colon \tilde{p}(v)=0\}$, and hence acts via multiplication by a character $\delta_p\in\Hom(I_p,\mathbb F^\bullet)$ on the $1$-dimensional factor space $V/\mathcal{V}(\tilde{p})$. So $\ker(\delta_p)$ is a normal subgroup of $I_p$ (necessarily unipotent, hence trivial if $\mathrm{char}(\mathbb F)\nmid |G|$) and $I_p=\ker(\delta_p)Z$ decomposes as a semi-direct product of $\ker(\delta_p)$ and a cyclic subgroup $Z$ consisting of pseudoreflections fixing $\mathcal{V}(\tilde{p})$ pointwise. So $Z\cong I_p/\ker(\delta_p)$ is isomorphic to a finite subgroup of $\mathbb F^\bullet$. The next Lemma~\ref{lemma:nakajima} is extracted from Nakajima's paper \cite{Na82z}. \begin{lemma}\label{lemma:nakajima}~ \begin{enumerate} \item We have the equality $e(p)=|Z|$. \item $\mathsf v_p(\mathbb F[V]^{G,\chi})<e(p)$ for all $\chi\in \Hom(G,\mathbb F^\bullet)$. \item $\mathsf v_p(\mathbb F[V]^{G,\chi})=0$ for all $\chi\in \Hom^0(G,\mathbb F^\bullet)$. \end{enumerate} \end{lemma} \begin{proof} 1. By \cite[9.6, Proposition (i)]{neukirch}, we have that $e(p)=\mathsf v_p(\tilde{p}\mathbb F[V]\cap \mathbb F[V]^{I_p})$, the ramification index of the prime ideal $\tilde{p}\mathbb F[V]$ over the subring of $I_p$-invariants. 
Thus if $I_p$ is trivial, then $e(p)=1$, and of course $|Z|=1$. If $I_p$ is non-trivial, then, as explained above, $\tilde{p}$ is a linear form, which is a relative $I_p$-invariant with weight $\delta_p^{-1}$, hence $\tilde{p}^{|Z|}$ is an $I_p$-invariant, implying $\mathsf v_p(\tilde{p}\mathbb F[V]\cap \mathbb F[V]^{I_p})\le |Z|$. On the other hand $\mathbb F[V]^{I_p}$ is contained in $\mathbb F[V]^Z$, and the algebra of invariants of the cyclic group $Z$ fixing pointwise the hyperplane $\mathcal{V}(\tilde{p})$ is generated by $\tilde{p}^{|Z|}$ and a subspace of $V^*$ complementary to $\mathbb F \tilde{p}$. Thus $\mathsf v_p(\tilde{p}\mathbb F[V]\cap \mathbb F[V]^{I_p})\ge \mathsf v_p(\tilde{p}\mathbb F[V]\cap \mathbb F[V]^{Z})=|Z|$, implying in turn that $e(p)=|Z|$. 2. Take an $h\in \mathbb F[V]^G$ with $e(p)=\mathsf v_p(h)$. Note that $\mathsf v_q(h)=\mathsf v_p(h)$ and $\mathsf v_q(\mathbb F[V]^{G,\chi})=\mathsf v_p(\mathbb F[V]^{G,\chi})$ hold for all $q\in G\cdot p$, since $\mathbb F[V]^{G,\chi}$ is a $G$-stable subset in $\mathbb F[V]$. Set $S=\{\frac ft\colon f\in \mathbb F[V],\quad t\in \mathbb F[V]^G\setminus \tilde{p}\mathbb F[V]\}$. This is a $G$-stable subring in $\mathsf q (\mathbb F[V])$ containing $\mathbb F[V]$. Consider $S^{\chi}=S\cap \mathsf q(\mathbb F[V])^{\chi}$, where $\mathsf q(\mathbb F[V])^{\chi}= \{ s\in \mathsf q(\mathbb F[V]) \colon g\cdot s=\chi(g)s \quad \text{for all} \ g\in G\}$. Then $\mathsf v_q(S^{\chi})=\mathsf v_q(\mathbb F[V]^{G,\chi})$ for all $q\in G\cdot p$, since for any denominator $t$ of an element $\frac ft$ of $S$ we have $\mathsf v_q(t)=0$. Now suppose that, contrary to our statement, we have $e(p)\le \mathsf v_p(\mathbb F[V]^{G,\chi})$, and hence $\mathsf v_q(h)\le \mathsf v_q(S^{\chi})$ for all $q\in G\cdot p$. In particular this means that $\mathbb F[V]^{G,\chi}\neq \{0\}$. Then $\mathsf v_q(h^{-1}S^{\chi})\ge 0$ holds for all $q\in G\cdot p$.
Now $S$ is a Krull domain with $\mathfrak X (S) = \{\tilde{q} S\colon q\in G\cdot p\}$, thus $h^{-1}S^{\chi}\subset S$ (see the discussion after Theorem \ref{2.1}), implying that $S^{\chi}\subset hS$. Clearly $hS\cap S^{\chi}=hS^{\chi}$, so we conclude in turn that $S^{\chi}\subset hS^{\chi}$. Iterating this we deduce $\{0\}\neq S^{\chi}\subset \bigcap_{n=1}^{\infty} h^nS$, a contradiction. 3. It is well known that $\mathbb F[V]^{G,\chi} \ne \{0\}$ (see the proof of {\bf A2.} below). Write $v=\mathsf v_p(\mathbb F[V]^{G,\chi})$. Take $f\in \mathbb F[V]^{G,\chi}$ with $\mathsf v_p(f)=v$, say $f=\tilde{p}^vh$, where $h \in \mathbb F[V]$. Note that both $f$ and $\tilde{p}$ are relative invariants of $I_p$, hence so is $h$. Therefore $g\cdot h\in \mathbb F^\bullet h$, and $\tilde{p}\mid_{\mathbb F[V]} (g\cdot h-h)$ for all $g\in I_p$, implying that $h$ is an $I_p$-invariant. Any $\chi\in \Hom^0(G,\mathbb F^\bullet)$ contains $I_p$ in its kernel (the unipotent normal subgroup $\ker(\delta_p)$ of $I_p$ has no non-trivial characters at all, and $Z=I_p/\ker(\delta_p)$ consists of pseudoreflections). Thus $f$ is $I_p$-invariant as well. Therefore $\tilde{p}^v$ is $I_p$-invariant, so its weight $\delta_p^v$ is trivial. Consequently the order $|Z|$ of $\delta_p$ in $\Hom(I_p,\mathbb F^\bullet)$ divides $v$. We have $e(p)=|Z|$ by 1., and on the other hand $v<e(p)$ by 2., forcing $v=0$. \end{proof} For a relative invariant $f$, we denote by $w(f)$ the weight of $f$. This induces a homomorphism $w \colon D\to \Hom(G,\mathbb F^\bullet)$ assigning to $\mathbb F^\bullet f\in D$ the weight $w(f)$ of the relative invariant $f$. Clearly, $w$ extends to a group homomorphism $w \colon \mathsf{q}(D)\to \Hom(G,\mathbb F^\bullet)$. The kernel of $w$ consists of elements of the form $(\mathbb F^\bullet h)^{-1}\mathbb F^\bullet f$, where $f,h\in \mathbb F[V]^{G,\chi}$ for some character $\chi$.
Now $f/h$ belongs to $\mathbb F(V)^G$, which is the field of fractions of $\mathbb F[V]^G$, so there exist $f_1,h_1\in \mathbb F[V]^G$ with $f/h=f_1/h_1$, implying $(\mathbb F^\bullet h)^{-1}\mathbb F^\bullet f=(\mathbb F^\bullet h_1)^{-1}\mathbb F^\bullet f_1\in \mathsf{q}(H)$. Thus $\ker(w)=\mathsf{q}(H)$. Therefore $w$ induces a monomorphism $\overline w \colon \mathsf q (D)/\mathsf q (H) \to \Hom(G,\mathbb F^\bullet)$. \begin{theorem} \label{main-theorem}~ Let $G \subset \GL(V)$, $H = (\mathbb F[V]^G \setminus \{0\})_{{\text{\rm red}}}$, and $D=(\mathbb F[V]^{G,\mathrm{rel}}\setminus\{0\})_{{\text{\rm red}}}$. \begin{enumerate} \item The embeddings \ ${\mathbb F[V]^G} \setminus \{0\} \ \overset{\varphi}{\hookrightarrow} \ \mathbb F[V]^{G,\mathrm{rel}} \setminus \{0\} \ \overset{\psi}{\hookrightarrow} \ {\mathbb F[V]}^{\bullet}$ \ are cofinal divisor homomorphisms. \smallskip \item $D$ is factorial, $P/G \subset E$ is the set of prime elements in $D$, and $\mathcal C ( \varphi)$ is a torsion group. \smallskip \item The monoid $D_0 = \{ \gcd_D (X) \colon X \subset H \ \text{finite} \} \subset D$ is free abelian with basis $\{ q^{e(q)} \colon q\in P/G \}$, where $e(q)=\min\{\mathsf{v}_q(h) \colon q\mid_D h\in H\}$, and the embedding $H \hookrightarrow D_0$ is a divisor theory. \smallskip \item We have $D_0 = \{f\in D\colon w(f)\in \Hom^0(G,\mathbb F^\bullet)\}$ and $\overline w \mid_{\mathsf q (D_0)/\mathsf q (H)} \colon \mathcal C ( \mathbb F[V]^G) = \mathsf q (D_0)/\mathsf q (H) \to \Hom^0(G,\mathbb F^\bullet)$ is an isomorphism. \end{enumerate} \end{theorem} Theorem \ref{main-theorem} immediately implies the following corollary which can be found in Benson's book (\cite[Theorem 3.9.2]{Be93a}) and which goes back to Nakajima \cite{Na82z} (see also \cite{Fl-Wo11a} for a discussion of this theorem). 
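Before turning to the proof, we illustrate Theorem \ref{main-theorem} with a simple example. Suppose $\mathrm{char}(\mathbb F)\neq 2$, let $V$ be $2$-dimensional with coordinates $x,y\in V^\star$, and let $G=\{\pm\mathrm{id}_V\}\subset \GL(V)$. Then $G$ contains no pseudoreflections, since $-\mathrm{id}_V$ fixes no hyperplane pointwise, and hence $\Hom^0(G,\mathbb F^\bullet)=\Hom(G,\mathbb F^\bullet)$ has order $2$. Accordingly,
\[
\mathbb F[V]^G=\mathbb F[x^2,xy,y^2]
\]
has class group of order $2$, and each linear form in $V^\star$ is a relative invariant of the non-trivial weight representing (up to scalars) a prime element of $D_0$ in the non-trivial class.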
\begin{corollary}[Benson--Nakajima] \label{benson-nakajima} The class group of $\mathbb F[V]^G$ is isomorphic to $\Hom^0(G,\mathbb F^\bullet)$, the subgroup of the character group consisting of the characters that contain all pseudoreflections in their kernels. \end{corollary} \begin{proof}[Proof of Theorem \ref{main-theorem}] 1. Since $\mathbb F[V]^G = \mathbb F(V)^G \cap \mathbb F[V]$, the embedding $\psi \circ \varphi \colon \mathbb F[V]^G \hookrightarrow \mathbb F[V]$ is a divisor homomorphism, and hence $\varphi$ is a divisor homomorphism. Furthermore, if the quotient of two relative invariants lies in $\mathbb F[V]$, then it is a relative invariant whence $\psi$ is a divisor homomorphism. In order to show that the embeddings are cofinal, let $0 \ne f \in \mathbb F[V]$ be given. Then $f^* = \prod_{g \in G} g f \in \mathbb F[V]^G$ and $f \t f^*$, so the embedding $\psi \circ \varphi$ is cofinal and hence $\varphi$ and $\psi$ are cofinal. \smallskip 2. Suppose that $\{f_1,\ldots,f_r \} \subset \mathbb F[V]$ represents a $G$-orbit in $P$. Then $g\cdot (f_1\ldots f_r)$ is a non-zero scalar multiple of $f_1\ldots f_r$, hence $f_1\ldots f_r\in \mathbb F[V]^{G,\mathrm{rel}}$. This shows that $P/G\subset E$ is in fact contained in $D$. Conversely, take an irreducible element $\mathbb F^\bullet f$ in the monoid $D$ (so $f$ is a relative invariant). Take any irreducible divisor $f_1$ of $f$ in $\mathbb F[V]$. Since $g\cdot f\in \mathbb F^\bullet f$, the polynomial $g\cdot f_1$ is also a divisor of $f$. Denoting by $f_1,\ldots, f_r$ polynomials representing the $G$-orbit of $\mathbb F^\bullet f_1$ in $P$, we conclude that $f_1\ldots f_r$ divides $f$ in $\mathbb F[V]$, hence $\mathbb F^\bullet f_1\ldots f_r$ divides $\mathbb F^\bullet f$ in $D$ as well, so $\mathbb F^\bullet f_1\ldots f_r=\mathbb F^\bullet f$. This implies that $D$ is the submonoid of $E=\mathcal{F}(P)$ generated by $P/G$. In order to show that $\mathcal C (\varphi)$ is a torsion group, let $f \in D$ be given.
We have to find an $m \in \mathbb N$ such that $f^m \in H$. Clearly, this holds with $m$ being the order in $\Hom(G,\mathbb F^\bullet)$ of the weight of the relative invariant corresponding to $f$. \smallskip 3. Since $\mathcal C (\varphi)$ is a torsion group, Proposition~\ref{prop:torsion-divtheory} implies that the embedding $H \hookrightarrow D_0$ is a divisor theory, and that $D_0$ is free abelian with basis $\{ q^{e(q)} \colon q\in P/G \}$, where $e(q)=\min\{\mathsf{v}_q(h) \colon q\mid_D h\in H\}$ (note that if $q\in P/G$ is the $G$-orbit of $p\in P$, then $\mathsf{v}_q(h)=\mathsf{v}_p(h)$, where the latter is the exponent of $p$ in $h\in E=\mathcal{F}(P)$). 4. It remains to prove the following three assertions. \begin{enumerate} \smallskip \item[{\bf A1.}\,] $D_0=\{f\in D\colon w(f)\in \Hom^0(G,\mathbb F^\bullet)\}$. \smallskip \item[{\bf A2.}\,] $w (D_0) = \Hom^0(G,\mathbb F^\bullet)$. \smallskip \item[{\bf A3.}\,] $\overline w \mid_{\mathsf q (D_0)/\mathsf q (H)} \colon \mathsf q (D_0)/\mathsf q (H) \to w (D_0)$ is an isomorphism. \end{enumerate} {\it Proof of \,{\bf A1}}.\, Set $D^0=\{f\in D\colon w(f)\in \Hom^0(G,\mathbb F^\bullet)\}$. We show first $D_0\subset D^0$. Let $\chi$ be a character of $G$, and assume that $\chi(g)\neq 1$ for some pseudoreflection $g\in G$. Let $f$ be a relative invariant with $w(f)=\chi$. Then for any $v$ with $gv=v$ we have $f(v)=f(g^{-1}v)=(gf)(v)=\chi(g)f(v)$, hence $f(v)=0$. So $l\mid_{\mathbb F[V]} f$, where $l$ is a non-zero linear form on $V$ that vanishes on the reflecting hyperplane of $g$. Denoting by $l=l_1,\ldots,l_r$ representatives of the $G$-orbit of $\mathbb F^\bullet l$, we find that the relative invariant $q= l_1\ldots l_r$ divides $f$. Thus $\gcd_D \{ f\in D\mid w(f)=\chi \} \neq 1$. Now suppose that for some $\mathbb F^\bullet k\in D_0$ we have that $w(k)$ does not belong to $\Hom^0(G,\mathbb F^\bullet)$.
By definition of $D_0$ there exist $h_1,\ldots,h_n\in D$ with $\gcd_D (h_1,\ldots,h_n)=1$ and $kh_1,\ldots,kh_n\in H$. Clearly $w(h_i)=w(k)^{-1} \notin \Hom^0(G,\mathbb F^\bullet)$, hence by the above considerations $\gcd_D (h_1,\ldots,h_n) \neq 1$, a contradiction. Next we show $D^0\subset D_0$. Let $d$ be an element in the monoid $D^0$. By Lemma~\ref{lemma:nakajima}.3, for any prime divisor $p\in P$ of $d$ there exists an $h_p\in D$ such that $w(h_p)=w(d)^{-1}$ and $p \nmid_E h_p$. Denote by $m$ the order of $w(d)$ in the group of characters. Clearly $d^m\in H$ and $dh_p\in H$. Moreover, $\gcd_E (d^m, dh_p \colon p\in P,\ p\mid_E d)=d$, whence $d=\gcd_D (d^m, dh_p \colon p\in P,\ p\mid_E d)\in D_0$. {\it Proof of \,{\bf A2}}.\, The statement follows from {\bf A1}, as soon as we show that $\mathbb F[V]^{G,\chi} \neq \{0\}$ for all $\chi \in \Hom(G, \mathbb F^{\bullet})$. For any character $\chi\in\Hom(G,\mathbb F^\bullet)$ the group $\bar G=G/\ker(\chi)$ is isomorphic to a cyclic subgroup of $\mathbb F^\bullet$, hence its order is not divisible by $\mathrm{char}(\mathbb F)$. Moreover, $\bar G$ acts faithfully on the field $T=\mathbb F(V)^{\ker(\chi)}$, with $T^{\bar G}=\mathbb F(V)^G$. By the Normal Basis Theorem, $T$ as a $\bar G$-module over $T^{\bar G}$ is isomorphic to the regular representation of $\bar G$, hence contains the representation $\chi$ as a summand with multiplicity $1$. This shows in particular that $T$ contains a relative invariant of weight $\chi$. Multiplying this by an appropriate element of $T^{\bar G}\cap \mathbb F[V]= \mathbb F[V]^G$ we get an element of $\mathbb F[V]^{G,\chi}$. So all characters of $G$ occur as the weight of a relative invariant in $\mathbb F[V]$. {\it Proof of \,{\bf A3}}.\, Since $\overline w \colon \mathsf q (D)/\mathsf q (H) \to \Hom(G,\mathbb F^\bullet)$ is a monomorphism, the map $\overline w \mid_{\mathsf q (D_0)/\mathsf q (H)} \colon \mathsf q (D_0)/\mathsf q (H) \to w ( \mathsf q (D_0))$ is an isomorphism.
Note finally that $w ( \mathsf q (D_0)) = \mathsf q ( w (D_0)) = w (D_0)$. \end{proof} As already mentioned, not only the class group but also the distribution of prime divisors in the classes is crucial for the arithmetic of the domain. Moreover, the class group together with the distribution of prime divisors in the classes is characteristic (up to units) for the domain. For a precise formulation we need one more definition. Let $H$ be a Krull monoid, $H_{{\text{\rm red}}} \hookrightarrow \mathcal F(\mathcal P)$ a divisor theory, and let $G$ be an abelian group and $(m_g)_{g \in G}$ be a family of cardinal numbers. We say that $H$ has \emph{characteristic} $(G, (m_g)_{g \in G})$ if there is a group isomorphism $\Phi: G \rightarrow \mathcal C(H)$ such that $m_g=|\mathcal P \cap \Phi(g)|$ for all $g \in G$. Two reduced Krull monoids are isomorphic if and only if they have the same characteristic (\cite[Theorem 2.5.4]{Ge-HK06a}). We pose the following problem. \begin{problem} \label{characteristic-problem} Let $G$ be a finite group, $\mathbb F$ be a field, and $V$ be a finite dimensional $\mathbb F$-vector space endowed with a linear action of $G$. Determine the characteristic of $\mathbb F[V]^G$. \end{problem} Let all assumptions be as in Problem \ref{characteristic-problem} and suppose further that $G$ acts trivially on one variable. Then $\mathbb F[V]^G$ is a polynomial ring in this variable and hence every class contains a prime divisor by \cite[Theorem 14.3]{Fo73}. \subsection{\bf The abelian case}~ \label{subsec:abelian} {\it Throughout this subsection, suppose that $G$ is abelian, $\mathbb F$ is algebraically closed, and $\mathrm{char}(\mathbb F) \nmid |G|$.} \smallskip The assumption that $\mathbb F$ is algebraically closed is not too restrictive, since for any field $\mathbb F$ the set $\mathbb F[V]^G$ spans the ring of invariants over the algebraic closure $\overline{\mathbb F}$ as a vector space over $\overline{\mathbb F}$.
The assumption on the characteristic guarantees that every $G$-module is completely reducible (i.e. is the direct sum of irreducible $G$-modules). The dual space $V^*$ has a basis $\{x_1,\ldots,x_n\}$ consisting of $G$-eigenvectors, so that $g\cdot x_i=\chi_i(g)x_i$ for all $i \in [1,n]$, where $\chi_1, \ldots, \chi_n \in \Hom (G, \mathbb F^{\bullet})$. We set $\widehat G = \Hom (G, \mathbb F^{\bullet})$, $\widehat G_V = \{ \chi_1,\ldots,\chi_n \} \subset \widehat G$, and note that $G \cong \widehat G$. Recall that a completely reducible $H$-module $W$ (for a not necessarily abelian group $H$) is called \emph{multiplicity free} if it is the direct sum of pairwise non-isomorphic irreducible $H$-modules. In our case $V$ is multiplicity free if and only if the characters $\chi_1,\ldots,\chi_n$ are pairwise distinct. It was B. Schmid (\cite[Section 2]{Sc91a}) who first formulated a correspondence between a minimal generating system of $\mathbb F[V]^G$ and minimal product-one sequences over the character group (see also \cite{F-M-P-T08a}). The next proposition describes in detail the structural interplay. In particular, Proposition \ref{prop:bschmid}.2 shows that all (direct and inverse) results on minimal zero-sum sequences over $\widehat G_V$ (see Subsections \ref{3.C} and \ref{3.D}) carry over to $\mathcal A (M^G)$. \begin{proposition} \label{prop:bschmid} Let $M \subset \mathbb F[x_1, \ldots, x_n]$ be the multiplicative monoid of monomials, $\psi \colon M \to \mathcal F (\widehat G_V)$ be the unique monoid homomorphism defined by $\psi (x_i) = \chi_i$ for all $i \in [1,n]$, and let $M^G \subset M$ denote the submonoid of $G$-invariant monomials. \begin{enumerate} \item $\mathbb F[V]^G$ has $M^G$ as an $\mathbb F$-vector space basis, and $\mathbb F[V]^G$ is minimally generated as an $\mathbb F$-algebra by $\mathcal A (M^G)$.
\item The homomorphism $\psi \colon M \to \mathcal F (\widehat G_V)$ and its restriction $\psi \t_{M^G}\colon M^G \to \mathcal B ( \widehat G_V)$ are degree-preserving transfer homomorphisms. Moreover, $M^G$ is a reduced finitely generated Krull monoid, and $\mathcal A (M^G) = \psi^{-1} \big( \mathcal A (\widehat G_V) \big)$. \item $\psi \t_{M^G}$ is an isomorphism if and only if $V$ is a multiplicity free $G$-module. \item $\beta_k (G,V) =\mathsf D_k(M^G)= \mathsf D_k (\widehat G_V)$ and $\beta_k(G) = \mathsf D_k (G)$ for all $k \in \mathbb N$. \end{enumerate} \end{proposition} \begin{proof} 1. Each monomial spans a $G$-stable subspace in $\mathbb F[V]$, hence a polynomial is $G$-invariant if and only if all its monomials are $G$-invariant, so $M^G$ spans $\mathbb F[V]^G$. The elements of $M^G$ are linearly independent, therefore $\mathbb F[V]^G$ can be identified with the monoid algebra of $M^G$ over $\mathbb F$, which shows the second statement. 2. $M$ and $\mathcal F (\widehat G_V)$ are free abelian monoids and $\psi$ maps primes onto primes. Thus $\psi \colon M \to \mathcal F (\widehat G_V)$ is a surjective degree-preserving monoid homomorphism and it is a transfer homomorphism. Let $\pi \colon \mathcal{F}(\widehat G)\to \widehat G$ be the monoid homomorphism defined by $\pi ( \chi) = \chi$ for all $\chi \in \widehat G$. Then $\ker(\pi)=\mathcal{B}(\widehat G)$. Taking into account that $G$ acts on the space $\mathbb F m$ spanned by a monomial $m\in M$ via the character $\pi(\psi(m))$, we conclude that $m\in M^G$ if and only if $\psi(m)\in \mathcal B ( \widehat G_V)$. This implies that the restriction $\psi \t_{M^G}$ of the transfer homomorphism $\psi$ is also a transfer homomorphism. Therefore $M^G$ is generated by $\mathcal A (M^G)=\psi^{-1} \big( \mathcal A (\widehat G_V) \big)$. Since $\mathcal A(\widehat G_V)$ is finite and $\psi$ has finite fibers, we conclude that the monoid $M^G$ is finitely generated.
Since $M$ is factorial and $\mathbb F[V]^G \subset \mathbb F[V]$ is saturated by Theorem \ref{main-theorem}, it follows that \[ M \cap \mathsf q (M^G) \subset M \cap \mathbb F[V] \cap \mathsf q ( \mathbb F[V]^G ) \subset M \cap \mathbb F[V]^G = M^G \] whence $M^G \subset M$ is saturated and thus $M^G$ is a Krull monoid. 3. $V$ is a multiplicity free $G$-module if and only if $\chi_1,\ldots,\chi_n$ are pairwise distinct. Since $\psi \colon M \to \mathcal F (\widehat G_V)$ maps the primes $x_1, \ldots, x_n$ of $M$ onto the primes $\chi_1, \ldots, \chi_n$ of $\mathcal F (\widehat G_V)$, $\psi$ is an isomorphism if and only if $\chi_1,\ldots,\chi_n$ are pairwise distinct. The same equivalence holds for the restriction $\psi \t_{M^G}$: if $\chi_i=\chi_j$ for distinct $i, j$ and $m$ is the order of $\chi_i$, then $x_i^m$ and $x_j^m$ are distinct elements of $M^G$ with the same image. 4. Let $k \in \mathbb N$ and $M^G_+=M^G \setminus \{1\}$. Then $M^G \setminus (M_+^G)^{k+1} = \mathcal M_k (M^G)$. Since $\psi \t_{M^G}\colon M^G \to \mathcal B ( \widehat G_V)$ is a degree-preserving transfer homomorphism, Proposition~\ref{3.7}.3 implies that $\mathsf D_k(M^G)=\mathsf D_k(\widehat G_V)$. Since $\mathbb F[V]^G$ is spanned by $M^G$, $(\mathbb F[V]^G_+)^{k+1}$ is spanned by $(M^G_+)^{k+1}$. Therefore the top degree of a homogeneous $G$-invariant not contained in $(\mathbb F[V]^G_+)^{k+1}$ coincides with the maximal degree of a monomial in $M^G_+ \setminus (M^G_+)^{k+1}= \mathcal M_k (M^G)$. Thus $\beta_k (G,V)= \mathsf D_k (M^G)$. For the $k$th Noether number $\beta_k (G)$ we have \[ \begin{aligned} \beta_k(G) & = \sup\{ \beta_k(G,W): W \text{ is a $G$-module over $\mathbb F$} \} \\ & = \sup \{ \mathsf D_k (\widehat G_W) \colon W \text{ is a $G$-module over $\mathbb F$} \} = \mathsf D_k (\widehat G) \end{aligned} \] because for the regular representation $V_{\text{\rm reg}}$ we have $\widehat G_{V_{\text{\rm reg} }} = \widehat G$.
\end{proof} Recalling the notation of Theorem \ref{main-theorem}, we have \[ H = (\mathbb F[V]^G \setminus \{0\})_{{\text{\rm red}}} \quad \text{and} \quad D_0 = \{ {\gcd_D} (X) \colon X \subset H \ \text{finite} \} \subset D=(\mathbb F[V]^{G,\mathrm{rel}}\setminus\{0\})_{{\text{\rm red}}} \,. \] Furthermore, $M \subset \mathbb F[V]=\mathbb F[x_1, \ldots , x_n]$ is the monoid of monomials, $M^G = M \cap \mathbb F[V]^G$, and we can view $M$ as a submonoid of $D$ and then $M^G = M \cap H$. Since $M \subset D$ is saturated, $M = \mathsf q (M) \cap D$, and \[ \mathsf q (M)/\mathsf q (M^G) = \mathsf q (M)/\mathsf q (M\cap H) = \mathsf q (M)/(\mathsf q (M) \cap \mathsf q (H)) \cong \mathsf q (M)\mathsf q (H)/\mathsf q (H) \subset \mathsf q (D)/\mathsf q (H) \,, \] we consider $\mathsf q (M)/\mathsf q (M^G)$ as a subset of $\mathsf q (D)/\mathsf q (H)$. \begin{proposition} \label{prop:xi^e(xi)}~ Let all notation be as above and set $M_0 = M \cap D_0$. \begin{enumerate} \item $M_0 \subset D_0$ is divisor closed whence $M_0$ is free abelian, and $\mathcal{A}(M_0)=M\cap \mathcal{A}(D_0)=\{x_1^{e(x_1)},\ldots,x_n^{e(x_n)}\}$. \item We have $e(x_i)=\min\{k\in\mathbb N \colon \chi_i^k\in \langle \chi_j\mid j\neq i \rangle \}$. \item $\Hom^0(\rho(G),\mathbb F^\bullet)$ is generated by $\{ \chi_1^{e(x_1)}, \ldots,\chi_n^{e(x_n)} \}$ and $\mathbb F[x_1^{e(x_1)},\ldots,x_n^{e(x_n)}]=\mathbb F[V]^{G_1}$, where $G_1$ denotes the subgroup of $\rho(G)$ generated by the pseudoreflections in $\rho(G)$. \item The embedding $M^G \hookrightarrow M_0$ is a divisor theory, \[ \overline w \mid_{\mathsf q (M_0)/\mathsf q (M^G)} \colon \mathcal C ( M^G) = \mathsf q (M_0)/\mathsf q (M^G) \to \Hom^0( \rho (G),\mathbb F^\bullet) \] is an isomorphism, and $\overline w ( \mathcal{C}(M^G)^*) =\{\chi_1^{e(x_1)},\ldots,\chi_n^{e(x_n)}\}$. \end{enumerate} \end{proposition} \begin{proof} 1.
If the product of two polynomials in $\mathbb F[V]$ has a single non-zero term, then both polynomials must have only one non-zero term. Thus, if $ab\in M$ for some $a,b\in D$, then both $a$ and $b$ belong to $M$. Hence $M \subset D$ is divisor closed implying that $M_0 \subset D_0$ is divisor-closed. Therefore $\mathcal{A}(M_0)=M\cap \mathcal{A}(D_0)$. By Theorem \ref{main-theorem}.3, $\mathcal A (D_0) = \{ q^{e(q)} \colon q \in \mathcal A (D) \}$. The divisor closedness of $M$ in $D$ implies that if $q^{e(q)}\in M$, then $q\in M \cap \mathcal A (D) =\mathcal A (M)= \{x_1,\ldots,x_n\}$. Thus $M\cap \mathcal{A}(D_0)=\{x_1^{e(x_1)},\ldots,x_n^{e(x_n)}\}$. 2. For $i \in [1,n]$, we have \[ e(x_i)=\min\{\mathsf v_{x_i}(h) \colon x_i\mid_Dh , h \in H\} = \min\{\mathsf v_{x_i}(m) \colon x_i\mid_Dm, m \in M^G \} \,, \] where the second equality holds because for all $h \in H$ we have \newline $\mathsf v_{x_i} (h) = \min \{ \mathsf v_{x_i} (m) \colon m \ \text{ranges over the monomials of} \ h \}$. Note that a monomial $m = \prod_{i=1}^n x_i^{a_i}$ lies in $M^G$ if and only if $\prod_{i=1}^n \chi_i^{[a_i]}$ is a product-one sequence over $\widehat G$ if and only if $\chi_i^{a_i} = \prod_{j \ne i} \chi_j^{-a_j}$. Thus $\min\{\mathsf v_{x_i}(m) \colon x_i\mid_Dm, m \in M^G \} = \min\{k\in\mathbb N \colon \chi_i^k\in \langle \chi_j\mid j\neq i \rangle \}$. 3. By Theorem \ref{main-theorem}.4, $\Hom^0( \rho (G),\mathbb F^\bullet) = w (D_0)$ and hence $\Hom^0( \rho (G),\mathbb F^\bullet)$ is generated by $w ( \mathcal A (D_0))$. Thus by 1., it remains to show that $\langle w ( \mathcal A (D_0)) \rangle = \langle w ( \mathcal A (M_0)) \rangle$. Since $\mathcal A (M_0) \subset \mathcal A (D_0)$, it follows that $\langle w ( \mathcal A (D_0)) \rangle \supset \langle w ( \mathcal A (M_0)) \rangle$. To show the reverse inclusion, let $a \in \mathcal A (D_0)$. For any monomial $m$ occurring in $a$, we have $w (m) = w (a)$. 
By Theorem \ref{main-theorem}.4, $D_0 = \{f\in D\colon w(f)\in \Hom^0( \rho (G),\mathbb F^\bullet)\}$ whence $m \in M \cap D_0 = M_0$ and clearly $w (m) \in \langle w ( \mathcal A (M_0)) \rangle$. Recall that each monomial in $\mathbb F[V]$ spans a $G$-invariant subspace. Thus $f\in\mathbb F[V]$ is $G_1$-invariant if and only if all monomials of $f$ are $G_1$-invariant. Furthermore, a monomial $m$ is $G_1$-invariant if and only if $w (m)$ contains $G_1$ in its kernel; equivalently (by the characterization of $D_0$) $m \in M \cap D_0 = M_0$. Thus $\mathbb F[V]^{G_1}$ is generated by $\mathcal A (M_0)$ and hence the assertion follows from 1. 4. Since $M\subset D$, $M_0 \subset D_0$ and $M^G \subset H$ are divisor closed and since the embedding $H\subset D_0$ is a divisor theory (Theorem~\ref{main-theorem}.4), $M^G \hookrightarrow M_0$ is a divisor homomorphism into a free abelian monoid. Let $m \in M_0$. Then $m \in D_0$ and there is a finite subset $Y \subset H$ such that $m = \gcd_{D_0} (Y)$. Let $X \subset D_0 \cap M = M_0$ be the set of all monomials occurring in some $y \in Y$. Then $m = \gcd_{D_0} (X) = \gcd_{M_0} (X)$, where the last equality holds because $M_0 \subset D_0$ is divisor closed. Restricting the isomorphism \[ \overline w \mid_{\mathsf q (D_0)/\mathsf q (H)} \colon \mathcal C ( \mathbb F[V]^G) = \mathsf q (D_0)/\mathsf q (H) \to \Hom^0( \rho (G),\mathbb F^\bullet) \] from Theorem \ref{main-theorem}, we obtain a monomorphism \[ \overline w\mid_{\mathsf q (M_0)/\mathsf q (M^G)} \colon \mathcal C (M^G) = \mathsf q (M_0)/\mathsf q (M^G) \to \Hom^0( \rho (G),\mathbb F^\bullet) \,. \] By 1. and 3., the image contains the generating set $\{ \chi_1^{e(x_1)}, \ldots,\chi_n^{e(x_n)} \}$ of the group $\Hom^0( \rho (G),\mathbb F^\bullet)$ and hence the above monomorphism is an isomorphism. The last statement follows from 1. by $\overline{w}(\mathcal{C}(M^G)^*)=\overline{w}(\mathcal{A}(M_0))$. 
\end{proof} \begin{proposition}\label{prop:diagram}~ Let $M \subset \mathbb F[x_1, \ldots, x_n]$ be the multiplicative monoid of monomials, and $M^G \subset M$ the submonoid of $G$-invariant monomials. \begin{enumerate} \item Every class of $\mathcal C ( \mathbb F[V]^G )$ contains a prime divisor. \item We have the following commutative diagram of monoid homomorphisms \[ \xymatrix@C=2cm{ H \ar[r]^{\theta_1} & \mathcal B( \mathcal C (H) ) \ar[r]^{w_1}_{\cong} & \mathcal B( \Hom^0( \rho (G),\mathbb F^\bullet) ) \\ & \mathcal B( \widehat G_V) \ar[ru]^{\nu} & \\ M^G \ar[rr]^{\theta_2} \ar[ru]^{\psi \t_{M^G}} \ar@{^{(}->}[uu]& & \mathcal B ( \mathcal C(M^G)^*) \ar@{^{(}->}[uu]^{w_2} } \] where \begin{itemize} \item $\theta_1$ and $\theta_2$ are transfer homomorphisms of Krull monoids as given in Proposition \ref{3.8}. \item $w_1$ is the extension to the monoid of product-one sequences of the group isomorphism $\overline w \mid_{\mathsf q (D_0)/\mathsf q (H)}$ given in Theorem \ref{main-theorem}.4 \item $w_2$ is the extension to the monoid of product-one sequences of the restriction to $\mathcal C(M^G)^*$ of the group isomorphism $\overline w \mid_{\mathsf q (M_0)/\mathsf q (M^G)}$ given in Proposition \ref{prop:xi^e(xi)} \item $\psi$ is given in Proposition \ref{prop:bschmid}. \item $\nu$ will be defined below (indeed, $\nu$ is a transfer homomorphism as given in Proposition \ref{3.9}). \end{itemize} \item If $\widehat G_V=\widehat G$, then every class of $\mathcal{C}(M^G)$ contains a prime divisor. \end{enumerate} \end{proposition} \begin{proof} 1. By Proposition~\ref{prop:bschmid}.1, $\mathbb F[V]^G$ is the monoid algebra of $M^G$ over $\mathbb F$. Thus, by \cite[Theorem 8]{chang}, every class of $\mathbb F[V]^G$ contains a prime divisor. 2. In order to show that the diagram is commutative, we fix an $m \in M^G$. 
We consider the divisor theory $M^G \hookrightarrow M_0$ from Proposition~\ref{prop:xi^e(xi)} and factorize $m$ in $M_0$, say $m = \prod_{i=1}^n \big(x_i^{e (x_i)} \big)^{a_i}$ where $a_1, \ldots, a_n \in \mathbb N_0$. Since $\overline w (x_i^{e(x_i)}) = \chi_i^{e(x_i)}$ for all $i \in [1,n]$, it follows that \[ (w_2 \circ \theta_2) (m) = (\chi_1^{e(x_1)})^{[a_1]} \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} (\chi_n^{e(x_n)})^{[a_n]} \in \mathcal B( \Hom^0( \rho (G),\mathbb F^\bullet) ) \,. \] Next we view $m$ as an element in $H$ and consider the divisor theory $H \hookrightarrow D_0$. Since $M_0 \subset D_0$ is divisor closed, $m = \prod_{i=1}^n \big(x_i^{e (x_i)} \big)^{a_i}$ is a factorization of $m$ in $D_0$. Therefore $(w_1 \circ \theta_1) (m) = (w_2 \circ \theta_2) (m)$. By definition of $\psi$, we infer that \[ \psi (m) = \chi_1^{[ e(x_1) a_1]} \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} \chi_n^{[e (x_n) a_n]} \,. \] We define a partition $\widehat G_V = G_1 \uplus G_2$, where $G_2 = \{ \chi_i \colon \chi_i = \chi_j \ \text{for some distinct} \ i, j \in [1,n] \}$ and $G_1 = \widehat G_V \setminus G_2$. Let $\nu \colon \mathcal B (\widehat G_V) \to \mathcal B( \Hom^0( \rho (G),\mathbb F^\bullet) )$ be defined as in Proposition \ref{3.9} (with respect to the partition $G_0 = G_1 \uplus G_2$, where $G_0 = \widehat G_V$). By Proposition \ref{prop:xi^e(xi)}.2, $e(x_i)=1$ if $\chi_i \in G_2$, and $e (x_i)$ equals the number $e(\chi_i)$ in Proposition \ref{3.9} if $\chi_i \in G_1$. Therefore it follows that \[ \nu ( \psi (m) )= (\chi_1^{e(x_1)})^{[a_1]} \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} (\chi_n^{e(x_n)})^{[a_n]} \,, \] and hence the diagram commutes. 3. In a finite abelian group every element is contained in the subgroup generated by the remaining elements, the only exception being the generator of a $2$-element group.
Therefore unless $G$ is the $2$-element group and the non-trivial character occurs with multiplicity one in the sequence $\chi_1 \boldsymbol{\cdot} \ldots \boldsymbol{\cdot} \chi_n$, all the $e(x_i)=1$ by Proposition~\ref{prop:xi^e(xi)}.2, and the elements $x_i$ are all prime in $M_0$, so they represent all the divisor classes, as $i$ varies in $[1,n]$. In the missing case we have $\mathbb F[V]^G=\mathbb F[x_1,\ldots,x_{n-1},x_n^2]$ (after a renumbering of the variables if necessary), hence both class groups are trivial, and $x_1$ and $x_n^2$ are prime elements in the unique class. \end{proof} Thus Proposition \ref{prop:diagram}.1 gives a partial answer to Problem \ref{characteristic-problem}. Using that notation it states that $m_g \ge 1$ for all $g \in \mathcal C ( \mathbb F[V]^G )$. \begin{example}\label{example:nunotsurjective} The set $\mathcal{C}(M^G)^*$ may be a proper subset of $\mathcal C (M^G)$, and consequently the monoid homomorphism $\nu: \mathcal B ( \widehat G_V)\to \mathcal B ( \Hom^0( \rho (G),\mathbb F^\bullet) )$ is not surjective in general. 1. Indeed, let $G$ be cyclic of order $3$, $g \in G$ with $\ord (g)=3$, and let the action on $\mathbb F[x_1,x_2,x_3]$ be given by $g\cdot x_i=\omega x_i$, where $\omega$ is a primitive third root of $1$. Then $\chi_1=\chi_2=\chi_3=\chi$, so $e(x_1)=e(x_2)=e(x_3)=1$, implying $\overline w (\mathcal{C}(M^G)^*) =\{\chi\}$ (each of the $x_i$ is a prime element in the class $\chi$), whereas $\overline w (\mathcal{C}(M^G)) =\{ \chi,\chi^2,\chi^3=1\}$, the $3$-element group. Thus $ \mathcal{B}(\widehat G_V)=\{\chi^{[3k]} \colon k\in\mathbb N_0\}$, and $\nu(\mathcal{B}(\widehat G_V))$ is the free abelian monoid generated by the product-one sequence $\chi^{[3]}$. The polynomials $x_1^2+x_2x_3$ and $x_1^3+x_2^2x_3$ are irreducible, they are relative invariants of weight $\chi^2$ and $\chi^3$, so they represent prime elements of $D_0$ in the remaining classes $\chi^2$ and $\chi^3=1$. 2.
To provide an example with a multiplicity free module, let $G$ be cyclic of order $5$, $g \in G$ with $\ord (g)=5$, and let the action on $\mathbb F[x_1,x_2,x_3]$ be given by $g\cdot x_1=\omega x_1$, $g\cdot x_2=\omega^2 x_2$, $g\cdot x_3=\omega^3x_3$, where $\omega$ is a primitive fifth root of $1$. Then setting $\chi=\chi_1$, we have $\chi_2=\chi^2$ and $\chi_3=\chi^3$, so the characters $\chi_i$ are pairwise distinct and $V$ is multiplicity free, while $\overline w (\mathcal{C}(M^G)) =\langle \chi \rangle$ is the $5$-element group. Still we have $e(x_1)=e(x_2)=e(x_3)=1$, so $\overline w (\mathcal{C}(M^G)^* ) =\{\chi,\chi^2,\chi^3\}$ (and $x_1,x_2,x_3$ are the prime elements of $M_0$ in these classes). The remaining classes $\chi^4$ and $\chi^5=1$ contain the prime elements of $D_0$ represented by $x_2^2+x_1x_3$ and $x_1^5+x_2x_3$. \end{example} \subsection{{\bf A monoid associated with $G$-modules}}~ \label{5.D} \centerline{\it Throughout this subsection, suppose that $\mathrm{char}(\mathbb F)\nmid |G|$.} \smallskip In this subsection we discuss a monoid associated with representations of not necessarily abelian groups, which in the case of abelian groups recovers the monoid of $G$-invariant monomials. Decompose $V$ into the direct sum of $G$-modules: \begin{align}\label{decomp} V= V_1 \oplus \cdots \oplus V_r \end{align} and denote by $\rho_i\colon G\to \GL(V_i)$ the corresponding group homomorphisms. Then \eqref{decomp} induces a decomposition of $\mathbb F[V]$ into multihomogeneous components as follows. The coordinate ring $\mathbb F[V]$ is the symmetric algebra $\Sym(V^*)= \bigoplus_{n=0}^{\infty} \Sym^n (V^*)$. Writing $\mathbb F[V]_{a} = \Sym^{a_1}(V^*_1) \otimes \cdots \otimes \Sym^{a_r}(V^*_r)$ we have $\Sym^n(V^*)= \bigoplus_{|a| = n}\mathbb F[V]_{a}$, and hence $\mathbb F[V] = \bigoplus_{a \in \mathbb N_0^r} \mathbb F[V]_{a}$. The summands $\mathbb F[V]_{a}$ are $G$-submodules in $\mathbb F[V]$, and $\mathbb F[V]_{a}\mathbb F[V]_{b}\subset \mathbb F[V]_{a+b}$, so $\mathbb F[V]$ is an $\mathbb N_0^r$-graded algebra.
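To illustrate this grading on the smallest non-trivial example, suppose that $\mathrm{char}(\mathbb F)\neq 2$, that $r=2$ with $\dim V_1=\dim V_2=1$, and that a group $G$ of order $2$ acts on $V_1$ by $-1$ and trivially on $V_2$. Choosing non-zero $x\in V_1^*$ and $y\in V_2^*$, we have
\[
\mathbb F[V]_{(a_1,a_2)}=\mathbb F x^{a_1}y^{a_2} \qquad \text{and} \qquad \mathbb F[V]^G\cap \mathbb F[V]_{(a_1,a_2)}\neq \{0\} \ \text{if and only if $a_1$ is even} \,,
\]
so the set of pairs $(a_1,a_2)$ supporting a non-zero invariant is the submonoid $2\mathbb N_0\times \mathbb N_0$ of $\mathbb N_0^2$.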
Moreover, $\mathbb F[V]^G$ is spanned by its multihomogeneous components $\mathbb F[V]^G_{a}=\mathbb F[V]^G\cap \mathbb F[V]_{a}$. For $f\in \mathbb F[V]_{a}$ we call $a$ the \emph{multidegree} of $f$. We are now in a position to define \begin{align}\label{block_def} \mathcal{B}(G,V) = \{ a \in \mathbb N_0^r \colon \mathbb F[V]^G_{a} \neq \{ 0\} \} \end{align} the set of multidegrees of multihomogeneous $G$-invariants. We give precise information on $\mathcal{B}(G,V)$ in terms of quantities associated to the direct summands $V_i$ of $V$. For $i \in [1,r]$ denote by $c_i$ the greatest common divisor of the elements of $\mathcal{B}(G,V_i)$, and $F_i$ the Frobenius number of the numerical semigroup $\mathcal{B}(G,V_i)\subset \mathbb N_0$, so $F_i$ is the minimal positive integer $N$ such that $\mathcal{B}(G,V_i)$ contains $N+kc_i$ for all $k \in \mathbb N_0$. \begin{proposition} \label{B(G,V)-is-C}~ \begin{enumerate} \item $\mathcal B (G,V) \subset \mathbb N_0^r$ is a reduced finitely generated {\rm C}-monoid. \item For each $i \in [1, r]$ and all $a \in \mathbb N_0^r $ satisfying $a_i\ge b(G,V_i)+F_i$ we have \begin{equation}\label{eq:alpha} a \in \mathcal B (G,V) \qquad \text{if and only if} \qquad c_ie_i + a \in \mathcal B (G,V) \,. \end{equation} \item For each $i \in [1, r]$ we have $c_i=|\rho_i(G)\cap \mathbb F^\bullet \mathrm{id}_{V_i}|$. \end{enumerate} \end{proposition} \begin{proof} 1. Take $a, b \in \mathcal{B}(G, V)$, so there exist non-zero $f\in \mathbb F[V]^G_{a}$ and $h\in \mathbb F[V]^G_{b}$. Now $0\neq fh\in \mathbb F[V]^G_{a+b}$, hence $a + b \in \mathcal{B}(G,V)$. This shows that $\mathcal B (G,V)$ is a submonoid of $\mathbb N_0^r$. Moreover, the multidegrees of a multihomogeneous $\mathbb F$-algebra generating system of $\mathbb F[V]^G$ clearly generate the monoid $\mathcal B (G,V)$. Thus $\mathcal B (G,V)$ is finitely generated by Theorem~\ref{4.1}.
To show that $\mathcal B (G,V)$ is also a C-monoid, recall that by Proposition~\ref{finitelygenerated}.3 a finitely generated submonoid $H$ of $\mathbb N_0^r$ is a C-monoid if and only if each standard basis element $e_i\in\mathbb N_0^r$ has a multiple in $H$. Now this condition holds for $\mathcal{B}(G,V)$, since by Theorem~\ref{4.1}.2 $\mathbb F[V_i]^G\subset \mathbb F[V]^G$ contains a homogeneous element of positive degree for each $i\in [1,r]$. \smallskip 2. By symmetry it is sufficient to verify \eqref{eq:alpha} in the case $i=1$. Suppose $a\in \mathcal B (G,V)$, so there is a non-zero $G$-invariant $f\in \Sym^{a_1}(V_1^*)\otimes \ldots\otimes \Sym^{a_r}(V_r^*)$. Decompose $\Sym^{a_1}(V^*_1)=\bigoplus_jW_j$ into a direct sum of irreducible $G$-modules. This gives a direct sum decomposition $\Sym^{a_1}(V_1^*)\otimes \ldots\otimes \Sym^{a_r}(V_r^*)=\bigoplus_j(W_j\otimes \Sym^{a_2}(V^*_2)\otimes\ldots\otimes\Sym^{a_r}(V^*_r))$. It follows that $\Sym^{a_1}(V^*_1)$ contains an irreducible $G$-module direct summand $W$ such that $W\otimes \Sym^{a_2}(V^*_2)\otimes\ldots\otimes \Sym^{a_r}(V^*_r)$ contains a non-zero $G$-invariant. By definition of $b(G,V_1)$ we know that $\mathbb F[V_1]$ is generated as an $\mathbb F[V_1]^G$-module by its homogeneous components of degree $\le b(G,V_1)$. Therefore there exists a $d\le b(G,V_1)$ such that the degree $d$ homogeneous component of $\mathbb F[V_1]$ contains a $G$-submodule $U\cong W$, and $a_1\in d+\mathcal{B}(G,V_1)$. Now for any homogeneous $h\in \mathbb F[V_1]^G$ the subspace $hU\otimes \Sym^{a_2}(V^*_2)\otimes\ldots\otimes\Sym^{a_r}(V^*_r)\subset \mathbb F[V]_{(d+\deg(h),a_2,\ldots,a_r)}$ contains a non-zero $G$-invariant, since it is isomorphic to $W\otimes \Sym^{a_2}(V^*_2)\otimes\ldots\otimes\Sym^{a_r}(V^*_r)$. It follows that $(k,a_2,\ldots,a_r)\in\mathcal{B}(G,V)$ for all $k\in d+\mathcal{B}(G,V_1)$, in particular, for all $k\in \{d+F_1,d+F_1+c_1,d+F_1+2c_1,\ldots\}$. 3.
Let $i \in [1,r]$, and to simplify notation set $W=V_i$, $c=c_i$, and $\phi=\rho_i$. Recall that $\mathbb F[W]^A=\mathbb F[W]^B$ for some finite subgroups $A,B\subset \GL(W)$ implies that $A=B$. Indeed, the condition implies the equality $\mathbb F(W)^A=\mathbb F(W)^B$ of the corresponding quotient fields, and so both $A$ and $B$ are the Galois groups of the field extension $\mathbb F(W)$ over $\mathbb F(W)^A=\mathbb F(W)^B$, implying $A=B$. Now denote by $Z\subset \GL(W)$ the subgroup of scalar transformations $Z=\{\omega \mathrm{id}_{W}\colon \omega^{c}=1\}$, so $Z$ is a central cyclic subgroup of $\GL(W)$ of order $c$. Clearly every homogeneous element of $\mathbb F[W]$ whose degree is a multiple of $c$ is invariant under $Z$. It follows that $\mathbb F[W]^G\subset \mathbb F[W]^Z$, hence denoting by $\tilde G$ the subgroup $\phi(G)Z$ of $\GL(W)$, we have $\mathbb F[W]^G=\mathbb F[W]^{\tilde G}$. It follows that $\phi(G)=\tilde G$, i.e. $Z\subset \phi(G)$, and so $c=|Z|$ divides the order of $\phi(G)\cap \mathbb F^\bullet \mathrm{id}_W$. Conversely, if $\lambda\mathrm{id}_W$ belongs to $\phi(G)$, then every element of $\mathbb F[W]^G$ must be invariant under the scalar transformation $\lambda\mathrm{id}_W$, whence all homogeneous components of $\mathbb F[W]^G$ have degree divisible by the order of $\lambda$, so the order of the cyclic group $\phi(G)\cap \mathbb F^\bullet \mathrm{id}_W$ must divide $c$. \end{proof} In general $\mathcal{B}(G,V)$ is not a Krull monoid. To provide an example, consider the two-dimensional irreducible representation $V$ of the symmetric group $S_3 = D_6$. Its ring of polynomial invariants is generated by elements of degrees $2$ and $3$, hence $\mathcal{B}(G,V) = \langle 2,3 \rangle \subset (\mathbb N_0, +)$, which is not Krull. \begin{proposition}\label{prop:module-davenport} For every $k \in \mathbb N$ we have $\mathsf{D}_k(\mathcal{B}(G,V)) \le \beta_k(G,V)$. \end{proposition} \begin{proof} Let $k \in \mathbb N$.
Take $a \in \mathcal{B}(G,V)$ such that $|a| > \beta_k(G,V)$. By \eqref{block_def} a multihomogeneous invariant $f \in \mathbb F[V]^G_a$ exists. As $\deg(f) =|a|> \beta_k(G,V)$ it follows that $f = \sum_{i=1}^N f_{i,1}\ldots f_{i,k+1}$ for some non-zero multihomogeneous invariants $f_{i,j}$ of positive degree. Denoting by $a_{i,j}\in\mathbb N_0^r$ the multidegree of $f_{i,j}$, we have that $a=a_{i,1}+\ldots +a_{i,k+1}$, where $0\neq a_{i,j}\in \mathcal{B}(G,V)$. This shows that all $a\in \mathcal{B}(G,V)$ with $|a|>\beta_k(G,V)$ factor into a product of more than $k$ atoms, implying the desired inequality. \end{proof} \begin{remarks} \label{remark:block monoid of G module}~ 1. Let $G$ be abelian and suppose that $\mathbb F$ is algebraically closed. Then we may take in \eqref{decomp} a decomposition of $V$ into the direct sum of $1$-dimensional submodules, and so $V_i^*$ is spanned by a variable $x_i$ as in Subsection~\ref{subsec:abelian}. Then $\mathbb F[V]_{a}$ is spanned by the monomial $x_1^{a_1}\cdots x_r^{a_r}$ and $a \in \mathcal{B}(G,V)$ holds if and only if the corresponding monomial is $G$-invariant. So in this case $\mathcal{B}(G,V)$ can be naturally identified with $M^G$ and the transfer homomorphism $\psi \t_{M^G}$ of Proposition~\ref{prop:bschmid} can be thought of as a transfer homomorphism $\mathcal{B}(G,V) \to \mathcal B ( \widehat G_V)$, which is an isomorphism if $V$ is multiplicity free. However, this transfer homomorphism does not seem to have an analogue for non-abelian $G$ (i.e. the study of $\mathcal{B}(G,V)$ cannot be reduced to the multiplicity free case), as shown by the example below. 2. The binary tetrahedral group $G=\widetilde A_4\cong SL_2(\mathbb F_3)$ of order $24$ has a 2-dimensional complex irreducible representation $V$ such that $\mathbb F[V]^G$ is minimally generated by elements of degree $6,8,12$ (see for example \cite[Appendix A]{Be93a}), hence $\mathcal{B}(G,V)=\{0,6,8,12,14,16,18,\ldots\}$.
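As a quick numerical illustration (this check is ours, not part of the remark), the displayed set can be recovered from the generator degrees $6,8,12$: the numerical semigroup they generate contains every even number from $12$ on, but misses $10$ and all odd numbers.

```python
# Sanity check (illustration only): the numerical semigroup generated by the
# generator degrees 6, 8, 12 of F[V]^G for the binary tetrahedral group is
# {0, 6, 8, 12, 14, 16, 18, ...}, i.e. it misses 10 and every odd number.
N = 40
gens = (6, 8, 12)

semigroup = {0}
for _ in range(N):                       # close under addition, truncated at N
    semigroup |= {a + g for a in semigroup for g in gens if a + g <= N}

evens_from_12 = set(range(12, N + 1, 2))
```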
On the other hand, under this representation $G$ is mapped into the special linear group of $V$, so on $V\oplus V$ the function mapping $((x_1,x_2),(y_1,y_2))\mapsto \det\left(\begin{array}{cc}x_1 & y_1 \\x_2 & y_2\end{array}\right)$ is a $G$-invariant of multidegree $(1,1)$, implying that $(1,1)\in \mathcal{B}(G,V\oplus V)$. This shows that the transfer homomorphism $\tau:\mathbb N_0^2\to \mathbb N_0$, $(a_1,a_2)\mapsto a_1+a_2$ does not map $\mathcal{B}(G,V\oplus V)$ into $\mathcal{B}(G,V)$, as $\tau(1,1)=2\notin \mathcal{B}(G,V)$. \end{remarks} Recall that the multigraded Hilbert series of $\mathbb F[V]^G$ in $r$ indeterminates $T=(T_1,...,T_r)$ is \[H(\mathbb F[V]^G,T)= \sum_{a \in \mathbb N_0^r}\dim_{\mathbb F}( \mathbb F[V]^G_a)T_1^{a_1} \cdots T_r^{a_r}, \quad \text{and hence} \] \[\mathcal{B}(G,V) = \{a \in \mathbb N_0^r \colon \mbox{ {\rm the coefficient of} }T^{a}\mbox{ {\rm in} }H(\mathbb F[V]^G, T)\mbox{ {\rm is nonzero}} \}.\] By this observation Proposition~\ref{prop:module-davenport} can be used for finding lower bounds on the Noether number $\beta(G,V)$, thanks to the following classical result of Molien (see for example \cite[Theorem 2.5.2]{Be93a}): \begin{proposition} Given a $G$-module $V= V_1\oplus...\oplus V_r$ over $\mathbb C$, let $\rho_i(g) \in \GL(V_i)$ be the linear transformation defining the action of $g\in G$ on $V_i$. Then we have \[H(\mathbb C[V]^G,T) = \frac 1{|G|} \sum_{g \in G} \prod_{i=1}^r \frac1{\det(\mathrm{id}_{V_i}-\rho_i(g)\cdot T_i)}. \] \end{proposition} \begin{example}[see p. 54-55 in \cite{Ne-Sm02a}] Consider the alternating group $A_5$ and its $3$-dimensional representation over $\mathbb C^3$ as the group of symmetries of an icosahedron. The Hilbert series then equals \[ \frac{1+T^{15}}{(1-T^2)(1-T^6)(1-T^{10})} \] whence it is easily seen that $\mathcal{B}(A_5, \mathbb C^3) = \langle 2,6,10,15\rangle$ and consequently $\beta(A_5) \ge \mathsf{D}(\mathcal{B}(A_5,\mathbb C^3)) =15$.
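This example can be double-checked numerically (the computation below is ours, not the text's): expanding the Hilbert series shows that its support is the numerical semigroup $\langle 2,6,10,15\rangle$, whose atoms are $2$ and $15$, so the largest atom is indeed $15$.

```python
# Numerical double-check (illustration only): the support of the Hilbert series
# (1 + T^15) / ((1 - T^2)(1 - T^6)(1 - T^10)) equals the numerical semigroup
# <2, 6, 10, 15>, and the largest atom of that semigroup is 15.
N = 60

# coefficients of (1 + T^15) * prod_d 1/(1 - T^d), d in {2, 6, 10}, up to degree N
coeffs = [0] * (N + 1)
coeffs[0], coeffs[15] = 1, 1                 # the numerator 1 + T^15
for d in (2, 6, 10):                         # multiply by the geometric series 1/(1-T^d)
    for n in range(d, N + 1):
        coeffs[n] += coeffs[n - d]
support = {n for n, c in enumerate(coeffs) if c > 0}

semigroup = {0}
for _ in range(N):                           # close {2,6,10,15} under addition, up to N
    semigroup |= {a + g for a in semigroup for g in (2, 6, 10, 15) if a + g <= N}

# atoms: nonzero elements that are not a sum of two nonzero elements
atoms = [n for n in sorted(semigroup - {0})
         if not any(a in semigroup and (n - a) in semigroup for a in range(1, n))]
```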
Note that this lower bound is stronger than what we could get from $\beta(G) \ge \max_{H\subsetneq G} \beta(H)$, since $\beta(H)\le |H|\le 12$ for any proper subgroup $H$ of $A_5$. \end{example} \section{\bf Constants from Invariant Theory and their counterparts in Arithmetic Combinatorics} \label{sec:5} In Subsection \ref{sec:5.1} we compare known reduction lemmas for the Noether number with reduction lemmas for the Davenport constants achieved in previous sections. We demonstrate how to use them to determine the precise value of Noether numbers and Davenport constants in new examples. In Subsection \ref{5.A} we consider an invariant theoretic analogue of the constant $\eta (G)$ (for the definition of $\eta (G)$ see the discussions before Proposition \ref{2.8} and Lemma \ref{3.1}). \smallskip \centerline{\it Throughout this section, suppose that $\mathrm{char}(\mathbb F) \nmid |G|$.} \subsection{The Noether number versus the Davenport constant}\label{sec:5.1}~ In the non-abelian case no structural connection (like Proposition~\ref{prop:bschmid}) is known between the $G$-invariant polynomials and the product-one sequences over $G$. Nevertheless, a variety of features of the $k$th Noether numbers and the $k$th Davenport constants are strikingly similar, and we offer a detailed comparison. Recall that $\beta_k(G)=b_k(G)+1$ (\eqref{eq:beta=b+1}) and that $ \mathsf{d}_k(G)+1\le \mathsf{D}_k(G)$ (Proposition~\ref{2.8}.1). 
\begin{enumerate} \item The inequalities \begin{align}\label{eq:trivial} (a) \quad \beta_k(G)\le k\beta(G) && (b) \quad {\mathsf d}_k(G)+1\le k({\mathsf d}(G)+1) && (c) \quad \mathsf D_k(G)\le k\mathsf D(G) \end{align} \item Reduction lemma for normal subgroups $N \triangleleft G$: \begin{align}\label{red_norm} (a) \quad \beta_k(G) \le \beta_{\beta_k(G/N)}(N) && (b) \quad {\mathsf d}_k(G) \le {\mathsf d}_{\mathsf d_k(N) +1 }(G/N) && \end{align} \item Reduction lemma for arbitrary subgroups $H \le G$ with index $l=[G:H]$: \begin{align}\label{red_sub} (a) \ \beta_k(G) \le \beta_{kl}(H)\le l\beta_k(H) && (b) \ {\mathsf d}_k(G)+1 \le l({\mathsf d}_k(H)+1) && (c) \ \mathsf D_k(G) \le l \mathsf D_k(H) \end{align} \item Supra-additivity: for a normal subgroup $N \triangleleft G$ we have \begin{align}\label{subadd} (a) \quad b_{k+r-1}(G) \ge b_k(N) + b_r(G/N) \mbox{ if }G/N\mbox{ is abelian} \end{align} \[ (b) \quad \mathsf d_{k+r-1}(G) \ge\mathsf d_k(N) + \mathsf d_r(G/N) \] \item Monotonicity: for an arbitrary subgroup $H \le G$ we have \begin{align}\label{mono} (a) \quad \beta_k(G) \ge \beta_k(H) && (b) \quad \mathsf d_k(G)\ge \mathsf d_k(H) && (c) \quad \mathsf D_k(G) \ge \mathsf D_k(H) \end{align} \item Almost linearity in $k$: there are positive constants $C, C', C'',k_0,k_0',k_0''$ depending only on $G$ such that \begin{align}\label{qlinear} (a) \ \beta_k(G) = k\sigma(G) + C \text{ for all }k > k_0 \mbox{ if }\mathrm{char}(\mathbb F)=0 && (b) \ \mathsf d_k(G) =k \mathsf e (G) +C' && \\ \notag \text{ for all } k >k_0' \quad \text{and} \quad (c) \ \mathsf D_k(G) =k \mathsf e (G) +C'' \text{ for all } k >k_0'' \end{align} \item The following functions are non-increasing in $k$: \begin{align} \label{non-increasing} (a) \quad \beta_k (G)/k \quad \text{if} \quad \mathrm{char} (\mathbb F) = 0 && (b) \quad \mathsf D_k (G)/k \end{align} \end{enumerate} The inequality \eqref{eq:trivial} (a) is observed in \cite{Cz-Do15a}, (b) is shown in Proposition \ref{gen-dav-5}.4, whereas 
(c) is observed in the beginning of Subsection \ref{2.E}. For the proof of \eqref{red_norm} (a) see \cite[Lemma 1.5]{Cz-Do15a} and for part (b) see Proposition~\ref{gen-dav-5}.2. Note that the roles of $N$ and $G/N$ are swapped in formulas (a) and (b), respectively, but in the abelian case they amount to the same. The first inequality in part (a) of \eqref{red_sub} is proved in \cite[Corollary 1.11]{Cz-Do15a} for cases when (i) $\mathrm{char}(\mathbb F) =0$ or $\mathrm{char}(\mathbb F) > [G:H]$; (ii) $H$ is normal in $G$ and $\mathrm{char}(\mathbb F) \nmid [G:H]$; (iii) $\mathrm{char}(\mathbb F)$ does not divide $|G|$. It is conjectured, however, that it holds in fact whenever $\mathrm{char}(\mathbb F) \nmid [G:H]$ (see \cite{kemper-separating}). By \cite[Lemma 4.3]{Cz-Do13c}, we have $\beta_{kl}(H)\le l\beta_k(H)$ for all positive integers $k,l$, implying the second inequality in part (a). Parts (b) and (c) of \eqref{red_sub} appear in Proposition \ref{gen-dav-5} (parts 3 and 5). Part (a) of \eqref{subadd} appears in \cite[Theorem 4.3 and Remark 4.4]{Cz-Do14a} while part (b) is proved in Proposition~\ref{gen-dav-5}.1. Parts (b) and (c) of \eqref{mono} are immediate from the definitions, while part (a) follows from an argument of B. Schmid (\cite[Proposition~5.1]{Sc91a}) which also shows that $\beta_k(G, \mathrm{Ind}_H^G V) \ge \beta_k(H,V)$ for all $k \ge 1$ (see \cite[Lemma 4.1]{Cz-Do14a}). Part (a) of \eqref{qlinear} is proved in \cite[Proposition 4.5]{Cz-Do13c} (the constant $\sigma(G)$ will be discussed in Subsection \ref{5.A}), and for \eqref{qlinear} (b) and (c) we refer to Proposition~\ref{2.7}.2 and Proposition~\ref{2.8}.2. Part (a) of \eqref{non-increasing} is proved in \cite[Section 4]{Cz-Do13c} and for \eqref{non-increasing} (b) we refer to Proposition~\ref{2.7}.3.
Furthermore, for a normal subgroup $N \triangleleft G$ we have \begin{align}\label{submult} (a) \quad \beta(G) \le {\beta(G/N)}\beta(N) && (b) \quad \mathsf D(G) \le \mathsf D(N) \mathsf D(G/N) \,, \end{align} where in (b) we assume that $N \cap G' = \{1\}$. Here part (a) is originally due to B. Schmid (\cite[Lemma 3.1]{Sc91a}) and it is an immediate consequence of \eqref{eq:trivial} (a) and \eqref{red_norm} (a) while part (b) is proven in \cite[Theorem 3.3]{Ge-Gr13a}. The above reduction lemmas on the Noether numbers are key tools in the proof of the following theorem. \begin{theorem}\label{thm:betaindex2}~ Let $k \in \mathbb N$. \begin{enumerate} \item $\beta_k(A_4) = 4k+2$ and $\beta(\tilde{A_4}) = 12$, where $A_4$ is the alternating group of degree $4$ and $\tilde{A}_4$ is the binary tetrahedral group. \smallskip \item If $G$ is a non-cyclic group with a cyclic subgroup of index two, then \[ \beta_k(G) = \frac{1}{2} |G| k + \begin{cases} 2 & \text{ if } G=Dic_{4m}, \text{ $m>1$};\\ 1 & \text{ otherwise. } \end{cases} \] where $\text{Dic}_{4m} = \langle a,b: a^{2m}=1, b^2 = a^m, bab^{-1} = a^{-1} \rangle$ is the dicyclic group. \smallskip \item \begin{align*} \beta(G)\ge \frac 12 |G| \quad \text{if and only if} \quad &\text{$G$ has a cyclic subgroup of index at most two or} \\ & \text{$G$ is isomorphic to $C_3\oplus C_3$, $C_2 \oplus C_2 \oplus C_2$, $A_4$ or $ \tilde{A}_4$} \end{align*} \end{enumerate} \end{theorem} \begin{proof} For 1. see \cite[Theorem 3.4 and Corollary 3.6]{Cz-Do15a}, for 2. see \cite[Theorem 10.3]{Cz-Do14a}, and 3. can be found in \cite[Theorem~1.1]{Cz-Do15a}. \end{proof} It is worthwhile to compare Theorem~\ref{thm:betaindex2}.3 with the statement from \cite{olson-white} asserting that $\mathsf d(G)<\frac 12|G|$ unless $G$ has a cyclic subgroup of index at most two. If $G$ is abelian, then Lemma \ref{dandDabeliancase} and Proposition \ref{prop:bschmid} imply $\mathsf d (G)+1 = \beta (G) = \mathsf D (G)$. 
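As a small concrete instance of this comparison (the computation is ours), take the dihedral group $D_6\cong S_3$, which has a cyclic subgroup of index two: Theorem~\ref{thm:betaindex2} gives $\beta(S_3)=4$, and a brute-force search confirms $\mathsf d(S_3)=3$, so $\beta(S_3)=\mathsf d(S_3)+1$. Here we use the convention that a product-one subsequence may be multiplied in any order.

```python
# Brute-force confirmation (ours): the small Davenport constant of S_3 is 3,
# i.e. some sequence of 3 group elements has no nonempty product-one
# subsequence (in any order), but every sequence of length 4 has one.
from itertools import combinations, combinations_with_replacement, permutations

IDENT = (0, 1, 2)
S3 = list(permutations(range(3)))            # the six permutations of {0,1,2}

def compose(p, q):                           # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def product_one_free(seq):
    """No nonempty subsequence of seq multiplies to the identity in any order."""
    n = len(seq)
    for r in range(1, n + 1):
        for idx in combinations(range(n), r):
            for order in permutations(idx):
                prod = IDENT
                for i in order:
                    prod = compose(prod, seq[i])
                if prod == IDENT:
                    return False
    return True

free_of_length = {L: any(product_one_free(s)
                         for s in combinations_with_replacement(S3, L))
                  for L in (3, 4)}
d_S3 = 3 if free_of_length[3] and not free_of_length[4] else None
```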
Combining Theorems~\ref{3.12} and \ref{thm:betaindex2} we obtain that all groups $G$ having a cyclic subgroup of index at most two satisfy the inequality $\mathsf d (G)+1 \le \beta (G) \le \mathsf D (G)$. Moreover, for these groups $\beta (G)=\mathsf{d}(G)+1$, except for the dicyclic groups, where $\beta (G)=\mathsf{d}(G)+2$. On the other hand, it was shown in \cite{C-D-S} that for the Heisenberg group $H_{27}$ of order 27 we have $\mathsf D (H_{27}) < \beta (H_{27})$. \begin{problem} \label{Noether-Davenport} Study the relationship between the invariants $\mathsf d (G)$, $\beta (G)$, and $\mathsf D (G)$. \\ In particular, \begin{itemize} \item Characterize the groups $G$ satisfying $\mathsf d (G)+1 \le \beta (G)$. \item Characterize the groups $G$ satisfying $\beta (G) \le \mathsf D (G)$. \end{itemize} \end{problem} In the following examples we demonstrate how the reduction results presented at the beginning of this section work in practice. This allows us to determine Noether numbers and Davenport constants of non-abelian groups, for which they were not known before. \begin{example}\label{example:C_pq rtimes C_q} Let $p, q $ be primes such that $ q \mid p-1$. 1. Consider the non-abelian semi-direct product $G=C_p \rtimes C_q$. A conjecture attributed to Pawale (\cite{wehlau}) states that $\beta(C_p \rtimes C_q) =p+q-1$ and much subsequent research has been done in this direction (\cite{Do-He00a}, \cite{Cz-Do15a}). Currently it is fully proved only for the cases $q=2$ in \cite{Sc91a} and $q=3$ in \cite{Cz14d}, whereas for arbitrary $q$ we have only upper bounds in \cite{Cz-Do15a}, proved using known results related to the Olson constant of the cyclic group of order $p$. Theorem \ref{3.13}.3 implies that $\mathsf{d}(G)+1=p+q-1$ and hence $\mathsf d (G)+1$ coincides with the conjectured value for $\beta(G)$. 2.
In view of the great difficulties related to Pawale's conjecture it is quite remarkable that we can determine the exact value of the Noether number for the non-abelian semidirect product $C_{pq} \rtimes C_q$. Indeed, this group contains an index $p$ subgroup isomorphic to $C_q\oplus C_q$, hence $\beta(C_{pq} \rtimes C_q) \le \beta_p(C_q \oplus C_q)$ by \eqref{red_sub}. By Proposition~\ref{prop:bschmid}.4 we have $\beta_p(C_q \oplus C_q)= \mathsf D_p(C_q\oplus C_q)$, and finally, $\mathsf D_p(C_q\oplus C_q)=pq+q-1$ by Theorem \ref{gen-dav-abelian}. Thus we have $\beta(C_{pq} \rtimes C_q) \le pq+q-1$. The reverse inequality also holds, since $G=C_{pq} \rtimes C_q$ contains a normal subgroup $N\cong C_{pq}$ with $G/N\cong C_q$, so by \eqref{subadd} and \eqref{eq:beta=b+1} we have $\beta(C_{pq} \rtimes C_q)\ge \beta(C_{pq})+\beta(C_q)-1=pq+q-1$. So we have $\beta(C_{pq} \rtimes C_q) =pq+q-1$. Next we determine the small Davenport constant of this group. Since $C_{pq}$ is a normal subgroup and the corresponding factor group is $C_q$, we have by Proposition \ref{gen-dav-5}.1 that $\mathsf d(C_{pq} \rtimes C_q)\ge \mathsf d(C_{pq})+\mathsf d(C_q)=pq+q-2$. The reverse inequality $\mathsf d(C_{pq} \rtimes C_q)\le pq+q-2$ follows from Theorem \ref{3.13}.4, since $C_{pq} \rtimes C_q$ also contains a normal subgroup $N\cong C_p$ such that $G/N\cong C_q\oplus C_q$. Consequently, by Lemma \ref{3.1}.2.(a) we have \[ \mathsf D(C_{pq} \rtimes C_q)\ge \mathsf d(C_{pq} \rtimes C_q)+1=pq+q-1 \,. \] \end{example} \begin{example} \label{S_4} The symmetric group $S_4$ has a normal subgroup $N\cong C_2 \oplus C_2$ such that $S_4 /N \cong D_6$. We know that $\beta(D_6) = 4$ (say by Theorem~\ref{thm:betaindex2}.2). Thus by \eqref{red_norm} and Theorem \ref{gen-dav-abelian} we have $\beta(S_4) \le \beta_{\beta(D_6)}(C_2\oplus C_2) = \mathsf D _4(C_2\oplus C_2) = 2 \cdot 4 +1 = 9$.
Now let $V$ be the standard $4$-dimensional permutation representation of $S_4$ and $\mathrm{sign}: S_4 \to \{ \pm 1\}$ the sign character. It is not difficult to prove the algebra isomorphism $\mathbb F[V \otimes \mathrm{sign}]^{S_4} \cong \mathbb F[V]^{S_4}_{even} \oplus \Delta_4 \mathbb F[V]^{S_4}_{odd} $, where $\Delta_4$ is the Vandermonde determinant in 4 variables, $\mathbb F[V]^{S_4}_{even}$ is the span of the even degree homogeneous components of $\mathbb F[V]^{S_4}$, and $\mathbb F[V]^{S_4}_{odd}$ is the span of the odd degree homogeneous components of $\mathbb F[V]^{S_4}$. Moreover, the algebra $\mathbb F[V]^{S_4}_{even} \oplus \Delta_4 \mathbb F[V]^{S_4}_{odd}$ is easily seen to be minimally generated by $\sigma_2, \sigma_1^2, \sigma_1\sigma_3, \sigma_4, \sigma_3^2, \sigma_1\Delta_4, \sigma_3\Delta_4$, where $\sigma_i$ is the $i$-th elementary symmetric polynomial. As a result $\beta(S_4, V \otimes \mathrm{sign}) = \deg(\sigma_3 \Delta_4)= 3 +\binom 4 2 = 9$. So we conclude that $\beta(S_4)=9$ (and not $10$, as claimed on page 14 of \cite{kraft-procesi}). \end{example} \begin{example} \label{Pauli} Let $G$ be the group generated by the complex Pauli matrices \[\left(\begin{array}{rr} 0 & 1 \\ 1 & 0 \\ \end{array}\right), \quad \left(\begin{array}{rr} 0 & -i \\ i & 0 \\ \end{array}\right), \quad \left(\begin{array}{rr} 1 & 0 \\ 0 & -1 \\ \end{array}\right).\] This is a pseudoreflection group, hence the ring of invariants on $V=\mathbb C^2$ is generated by two elements, namely $\mathbb{C}[x,y]^G = \mathbb{C}[x^4+y^4, x^2y^2]$. Moreover, $b(G,V)$ is the sum of the degrees of the generators minus $\dim(V)$ (again because $G$ is a pseudoreflection group, see \cite{chevalley}), so $b(G,V)=6$. It follows by \eqref{eq:beta=b+1} that $\beta(G)=b(G)+1 \ge b(G,V) +1 =7$. On the other hand, $G$ is a non-abelian semi-direct product $(C_4 \oplus C_2) \rtimes C_2$.
Therefore $G$ has a normal subgroup $N$ such that $N\cong G/N\cong C_2 \oplus C_2$ and thus \[ \beta(G) \le \beta_{\beta(C_2\oplus C_2)}(C_2\oplus C_2) = \mathsf D_3(C_2 \oplus C_2) = 7.\] So we conclude that $\beta(G) = 7$. \end{example} \subsection{{\bf The constants $\sigma (G, V)$ and $\eta (G,V)$}}~ \label{5.A} \begin{definition}\label{def:sigma}~ \begin{enumerate} \item Let $\sigma(G,V)$ denote the smallest $d \in \mathbb N_0 \cup \{\infty\}$ such that $\mathbb F[V]^G$ is a finitely generated module over a subring $\mathbb F[f_1,\ldots,f_r]$ with $\max \{ \deg(f_i) \colon i \in [1,r]\} = d$. We define $\sigma(G)=\sup \{ \sigma(G,W) \colon W \ \text{is a $G$-module} \}$. \item Let $S \subset \mathbb F[V]^G$ be the $\mathbb F$-subalgebra of $\mathbb F[V]^G$ generated by its elements of degree at most $\sigma(G,V)$. Then $\eta(G,V)$ denotes the maximal degree of generators of $\mathbb F[V]^G_+$ as an $S$-module. \end{enumerate} \end{definition} One motivation to study $\sigma(G,V)$ and $\eta(G,V)$ is that by a straightforward induction argument (\cite[Section 4]{Cz-Do13c}) we have \[ \beta_k(G,V) \le (k-1)\sigma(G,V) + \eta(G,V) \,. \] By \cite[Proposition~6.2]{Cz-Do13c}, $\sigma(C_p \rtimes C_q) = p$ (this is also true in characteristic $q$, see \cite[Proposition 4.5]{Elm-Ko}). If $\mathbb F$ is algebraically closed, then, by Hilbert's Nullstellensatz, $\sigma(G,V)$ is the smallest $d$ such that there exist homogeneous invariants of degree at most $d$ whose common zero locus is the origin. It is shown in Lemmas~5.1, 5.4 and 5.6 of \cite{Cz-Do13c} (some extensions to the modular case and for linear algebraic groups are given in \cite{Elm-Ko}) that \begin{itemize} \item $\sigma(G) \le \sigma(G/N)\sigma(N)$ if $N \triangleleft G$; \item $\sigma(H) \le \sigma(G) \le [G:H]\sigma(H)$ if $H \le G$; \item $\sigma(G) = \max \{ \sigma(G,V) \colon V \text{ is an irreducible $G$-module}\}$. \end{itemize} \begin{proposition}\label{prop:abeliansigma} Let $G$ be abelian.
\begin{enumerate} \item $\sigma(G)=\exp(G)= \mathsf e(G) $. \item $\eta(G) = \sup \{ \eta(G,W) \colon W \ \text{is a $G$-module} \}$. \end{enumerate} \end{proposition} \begin{proof} For 1. see \cite[Corollary 5.3]{Cz-Do13c}. To prove 2., let $T\in\mathcal F(\widehat G)$ with $|T|=\eta(G)-1$ such that $T$ has no product-one subsequence $U$ with $|U|\in [1, \mathsf e(G)]$. Let $V$ be the regular representation of $G$, and denote by $S$ the subalgebra of $\mathbb F[V]^G$ generated by its elements of degree at most $\sigma(G)=\mathsf e(G)$. Now $\psi \colon M \to \mathcal{F}(\widehat G)$ is an isomorphism (see the proof of Proposition~\ref{prop:bschmid}.3). Thus $\psi^{-1}(T)\in M$ is not divisible by a $G$-invariant monomial of degree smaller than $\mathsf e(G)$. Since both $S$ and $\mathbb F[V]$ are spanned by monomials, it follows that $\psi^{-1}(T)\in M$ is not contained in the $S$-submodule of $\mathbb F[V]^G_+$ generated by elements of degree less than $\deg(\psi^{-1}(T))$. This shows that for the regular representation $V$ of $G$ we have $\eta(G,V)\ge \eta(\widehat G)$. On the other hand, let $W$ be an arbitrary $G$-module, and $m\in M$ a monomial with $\deg(m)>\eta(G)$. Then $\psi(m)$ has a product-one subsequence with length at most $\mathsf e(G)=\sigma(G)$, hence $m$ is divisible by a $G$-invariant monomial of degree at most $\sigma(G)$ (see the beginning of the proof of Proposition~\ref{prop:bschmid}.2). This shows the inequality $\eta(G,W)\le \eta(\widehat G)$. Taking into account the isomorphism $\widehat G\cong G$ we are done. \end{proof} For the state of the art on $\eta (G)$ (in the abelian case) we refer to \cite[Theorem 5.8.3]{Ge-HK06a}, \cite{Fa-Ga-Zh11a,Fa-Ga-Wa-Zh13a}. Proposition \ref{prop:abeliansigma} inspires the following problem. \begin{problem} Let $G$ be a finite non-abelian group. Is $\sup \{ \eta(G,W) \colon W \ \text{is a $G$-module} \}$ finite? Is it related to $\eta ( \mathcal B (G))$ (see Subsections \ref{2.E} and \ref{3.A})?
\end{problem} \acknowledgement{This work was supported by the {\it Austrian Science Fund FWF} (Project No. P26036-N26) and by OTKA K101515 and PD113138. } \bibliographystyle{amsplain} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{% \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\section{Introduction} In functional programming, \textbf{optics} are a compositional representation of many common patterns in bidirectional data accessing. They are provided by libraries such as Kmett's \emph{lens}~\cite{kmett15}, O'Connor's \emph{mezzolens}~\cite{oconnor15}, or \emph{purescript profunctor lenses} by Freeman et al.~\cite{freeman15}. Optics, and more concretely \emph{lenses}, were originally used in functional programming as a compositional solution to the problem of accessing fields of nested data structures~\cite{foster05}. As the understanding of these data accessors grew, different families of optics were introduced for a variety of different types (e.g. \emph{prisms} for tagged unions or \emph{traversals} for containers), each one of them capturing a particular data accessing pattern. These families are intercomposable and together form a powerful language for modular data access. Perhaps surprisingly, optics can be composed using ordinary function composition; an example can be seen in Figure~\ref{fig:example1}. This is thanks to an alternative encoding called \emph{profunctor representation}. In this encoding, each optic is written as a single function that is polymorphic over profunctors with a certain algebraic structure. For instance, \emph{lenses} can be written as functions polymorphic over \emph{cartesian} profunctors, whereas \emph{prisms} can be written as functions polymorphic over \emph{cocartesian} profunctors~\cite[\S 3]{pickering17}. Milewski~\cite{milewski17} has identified these structures (cartesian, cocartesian,~\ldots) as \emph{Tambara modules}~\cite{tambara06} and has used a result by Pastro and Street~\cite{pastro08} to propose a unified definition of optic. That definition was later extended by Boisseau and Gibbons~\cite{boisseau18} and Riley~\cite{riley18}, using slightly different techniques and proposing laws for optics.
However, the original result by Pastro and Street cannot be used directly to unify all the optics that appear in practice. Our work generalizes their result, going beyond the previous definitions of optic and covering \emph{mixed}~\cite[\S 6.1]{riley18} and \emph{enriched optics}. \begin{figure}[H] \centering \begin{examplecode} let home = "221b Baker St, London, UK" address :: Prism Address String street :: Lens String Address >>> home?. address.street Just "221b Baker St" >>> home & address.street.~ "4 Marylebone Rd" "4 Marylebone Rd, London, UK" \end{examplecode} \caption{The composition of a prism (\hask{address}) and a lens (\hask{street}) is used to parse a string and then access and modify one of its subfields.}\label{fig:example1} \end{figure} The generalized profunctor representation theorem captures optics already present in the literature and makes it possible to upgrade them to more sophisticated definitions. For instance, many generalizations of \emph{lenses} in functional programming are shown to be particular cases of a more refined definition that uses mixed optics (Definition~\ref{def:lens}). We also show derivations for some new optics that were not present in the literature. Finally, Milewski~\cite{milewski17} posed the problem of fitting the three basic optics (lenses, prisms and traversals) into an elementary pattern; lenses and prisms had been covered in his work, but traversals were missing. We present a new description of traversals in terms of power series functors whose derivation is more direct than the ones based on \emph{traversables} as studied by Jaskelioff and Rypacek~\cite{rypacek12}. \subsection{Contributions} \begin{itemize} \item We extend the \emph{double} construction from Pastro and Street's ``Doubles for monoidal categories'' to cover mixed optics. 
In the same way that they characterize copresheaves of these \emph{doubles} as Tambara modules, we characterize copresheaves of mixed optics as suitably generalized Tambara modules~\cite[Proposition 6.1]{pastro08}. \item A study of enriched and \emph{mixed} optics~\cite[\S 6.1]{riley18}, endowing them with \Vt{category} structure. \item An extension of the result that justifies the profunctor representation of optics used in functional programming to the case of enriched and mixed optics. \item A new family of optics, which we dub \emph{algebraic lenses} (Definition~\ref{def:algebraiclens}), that unifies new examples with some optics already present in the literature, such as Boisseau's \emph{achromatic lens}~\cite[\S 5.2]{boisseau17}. \item A new derivation showing that \emph{monadic lenses}~\cite{AbouSaleh16} are mixed optics (Proposition~\ref{prop:monadiclens}). Similarly, a new derivation showing that Myers' \emph{lenses in a symmetric monoidal category}~\cite[\S 2.2]{spivak19} are mixed optics (Proposition~\ref{prop:monoidallenses}). \item A unified definition of \emph{lenses}, that can be specialized to all of these previous examples. \item A new derivation of the optic known as \emph{traversal} (Definition~\ref{def:traversal}) in terms of a monoidal structure originally described by Kelly~\cite[\S 8]{kelly05} for the study of non-symmetric operads (Proposition~\ref{prop:traversal}). Following this, a new family of optics that arises when we generalize this monoidal structure. \end{itemize} \subsection{Synopsis} We introduce the definition of \emph{mixed optic} in Section~\ref{sec:optics}. Section~\ref{sec:examples} describes examples from the practice of functional programming and how they are captured by the definition. Section~\ref{sec:tambaratheory} describes how the theory of Tambara modules can be applied to obtain a profunctor representation for optics. 
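Before the formal development, the data-accessor reading of Figure~\ref{fig:example1} can be imitated in a few lines of Python (a deliberately simplified sketch of ours: plain functions on the focus stand in for general profunctors, and the parsing helpers are invented for the example). Composites of optics are then literally function compositions.

```python
# Simplified sketch (ours, not the paper's Haskell): an optic maps a
# transformation of the focus to a transformation of the whole structure,
# so optics compose by ordinary function composition.

def lens(get, put):
    """Lens: read the focus with get, transform it, write it back with put."""
    return lambda f: (lambda s: put(s, f(get(s))))

def prism(match, build):
    """Prism: if the pattern matches, transform the focus and rebuild;
    otherwise return the structure unchanged."""
    def run(f):
        def go(s):
            ok, a = match(s)
            return build(f(a)) if ok else s
        return go
    return run

# 'address' matches "street, rest" inside a string (invented parsing helper)
def match_address(s):
    parts = s.split(", ", 1)
    return (True, (parts[0], parts[1])) if len(parts) == 2 else (False, None)

address = prism(match_address, lambda a: a[0] + ", " + a[1])

# 'street' focuses the first component of the matched pair
street = lens(lambda a: a[0], lambda a, b: (b, a[1]))

# composing the two optics is plain function composition
address_street = lambda f: address(street(f))

home = "221b Baker St, London, UK"
updated = address_street(lambda _: "4 Marylebone Rd")(home)   # set the street
```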
\subsection{Setting}
We shall work with categories enriched over a Bénabou cosmos \((\V,\otimes,I)\); that is, a (small-)complete and cocomplete symmetric monoidal closed category. In particular, \(\V\) is enriched over itself and we write the internal hom-object between \(A,B \in \operatorname{Obj}(\V)\) as \(\V(A,B)\) or just \([A,B]\) when it does not cause ambiguity. Our intention is to keep a close eye on the applications in functional programming: the enriching category \(\V\) should be thought of as the category whose objects model the types of an idealized programming language and whose morphisms model the programs. Because of this, \(\V\) will be cartesian in many of the examples. We can, however, remain agnostic as to which specific \(\V\) we are addressing. For calculations, we make significant use of coend calculus as described, for instance, by Loregian~\cite{loregian19}. The proofs in this paper can be carried out without assuming choice or excluded middle, but there is an important set-theoretical issue: in some of the examples, we compute coends over non-small categories. We implicitly fix a suitable Grothendieck universe and our categories are to be considered small with respect to that universe. As Riley~\cite[\S 2]{riley18} notes, this will not be a problem in general: even if some coends are indexed by large categories and we cannot argue their existence using the cocompleteness of \(\mathbf{Sets}\), we will still find them represented by objects in the category.

\section{Optics}\label{sec:optics}
The structure that is common to all optics is that they divide a bigger data structure of type \(S \in \mathbf{C}\) into some \emph{focus} of type \(A \in \mathbf{C}\) and some \emph{context} or \emph{residual} \(M \in \mathbf{M}\) around it. We cannot access the context but we can still use its shape to update the original data structure, replacing the current focus by a new one.
The definition will capture this fact by imposing a quotient relation on the possible contexts; this quotient is expressed by the dinaturality condition of a coend. The category of contexts \(\mathbf{M}\) will be monoidal, allowing us to compose optics with contexts \(M\) and \(N\) into an optic with context \(M \otimes N\). Finally, we leave open the possibility of the new focus being of a different type \(B \in \mathbf{D}\), possibly in a different category, which yields a new data structure of type \(T \in \mathbf{D}\).

Let \((\mathbf{M},\otimes,I,a,\lambda,\rho)\) be a monoidal \Vt{category} as defined by Day~\cite{day70}. Let it act on two arbitrary \Vt{categories} \(\mathbf{C}\) and \(\mathbf{D}\) with strong monoidal \Vt{functors} \((\actL{}{}) \colon \mathbf{M} \to [ \mathbf{C} , \mathbf{C} ]\) and \((\actR{}{}) \colon \mathbf{M}\to [\mathbf{D} ,\mathbf{D}]\). We write
\[\begin{aligned}
& \phi_{A} \colon A \cong \actL{I}{A}, && \phi_{M,N,A} \colon \actL{M}{\actL{N}{A}} \cong \actL{M \otimes N}{A}, \\
& \varphi_{B} \colon B \cong \actR{I}{B}, && \varphi_{M, N,B} \colon \actR{M}{\actR{N}{B}} \cong \actR{M \otimes N}{B},
\end{aligned}\]
for the structure isomorphisms of the strong monoidal actions \(\actL{}{}\) and \(\actR{}{}\).

\begin{definition}[{{after \cite[\S 6.1]{riley18}, see also~\cite{milewski17,boisseau18}}}]\label{def:optic}
Let \(S,A \in \mathbf{C}\) and \(T,B \in \mathbf{D}\). An \textbf{optic} from \((S,T)\) with the focus on \((A,B)\) is an element of the following object described as a coend
\[\mathbf{Optic}_{\actL{}{},\actR{}{}} ((A, B), (S,T)) \coloneqq \int^{M} \mathbf{C}(S, \actL{M}{A}) \otimes \mathbf{D}(\actR{M}{B}, T). \]
\end{definition}
The two strong monoidal actions \(\actL{}{}\) and \(\actR{}{}\) represent the two different ways in which the context interacts with the focus: one when the data structure is decomposed and another one, possibly different, when it is reconstructed.
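When implemented in a programming language, the coend corresponds to an existential type: an optic is a pair of maps through \emph{some} residual that the consumer cannot inspect. The following Haskell sketch is our own illustration, with hypothetical type constructors \hask{l} and \hask{r} standing for the two actions applied to an abstract residual \hask{m}.
\begin{spec}
-- A decomposition s -> l m a together with a reconstruction
-- r m b -> t, for some residual m hidden by the constructor.
data Optic l r a b s t where
  Optic :: (s -> l m a) -> (r m b -> t) -> Optic l r a b s t
\end{spec}
The dinaturality relation of the coend identifies two such pairs that differ only by a morphism acting on the residual \hask{m}.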
Varying these two actions we will cover many examples from the literature and propose some new ones, as the following table summarizes. \begin{figure}[h] \centering \begin{tabular}{ccc} \hline Name & Description & Ref. \\ \hline Adapter \((\mathbf{Optic}_{\id,\id})\) & \(\mathbf{C}(S,A) \otimes \mathbf{D}(B,T)\) & {\ref{def:adapter}} \\ Lens \((\mathbf{Optic}_{\times,\bullet})\) & \(\mathbf{C}(S,A) \times \mathbf{D}(S \bullet B, T)\) & {{\ref{def:lens}}} \\ Monoidal lens \((\mathbf{Optic}_{\otimes,_\U\times})\) & \(\mathbf{CCom}(S,A) \times \mathbf{C}(\U S \otimes B, T)\) & {{\ref{def:myerslens}}} \\ Algebraic lens \((\mathbf{Optic}_{_\U\times,_\U\bullet})\) & \(\mathbf{C}(S,A) \times \mathbf{D}(\Psi S \bullet B, T)\) & {{\ref{def:algebraiclens}}} \\ Monadic lens \((\mathbf{Optic}_{\times,\rtimes})\) & \(\W(S , A) \times \W(S \times B , \Psi T)\) & {{\ref{def:monadiclens}}} \\ Linear lens \((\mathbf{Optic}_{\otimes,\bullet})\) & \(\mathbf{C}(S, \{B, T\} \bullet A)\) & {{\ref{def:linearlens}}} \\ Prism \((\mathbf{Optic}_{\bullet,+})\) & \(\mathbf{C}(S, T \bullet A) \times \mathbf{D}(B,T)\) & {{\ref{def:prism}}} \\ Coalg. 
prism \((\mathbf{Optic}_{_\U\bullet,_\U+})\) & \(\mathbf{C}(S, \Theta T \bullet A) \times \mathbf{D}(B,T)\) & {{\ref{remark:coalgebraicprism}}} \\
Grate \((\mathbf{Optic}_{\{\},\bullet})\) & \(\mathbf{D}(\{S,A\}\bullet B, T)\) & {{\ref{def:grate}}} \\
Glass \((\mathbf{Optic}_{\times[],\times[]})\) & \(\mathbf{C}(S \times [[S, A], B] , T)\) & {{\ref{def:glass}}} \\
Affine traversal \((\mathbf{Optic}_{+\otimes,+\otimes})\) & \(\mathbf{C}(S, T + A \otimes \{B, T\})\) & {{\ref{def:affine}}} \\
Traversal \((\mathbf{Optic}_{\mathrm{Pw},\mathrm{Pw}})\) & \(\V(S, \int\nolimits^n A^n \otimes [B^n, T])\) & {{\ref{def:traversal}}} \\
Kaleidoscope \((\mathbf{Optic}_{\mathrm{App},\mathrm{App}})\) & \(\int\nolimits_n \V([A^n,B],[S^n, T])\) & {{\ref{def:kaleidoscope}}} \\
Setter \((\mathbf{Optic}_{\mathrm{ev},\mathrm{ev}})\) & \(\V([A,B],[S,T])\) & {{\ref{def:setter}}} \\
Fold \((\mathbf{Optic}_{\mathrm{Foldable},\ast})\) & \(\V(S, \mathcal{L} A)\) & {{\ref{def:fold}}}
\end{tabular}
\end{figure}
The purpose of an abstract unified definition is twofold: firstly, it provides a framework to classify existing optics and explore new ones, as we do in Section~\ref{sec:examples}; and secondly, it enables a unified profunctor representation, which we present in Section~\ref{sec:tambaratheory}.

\section{Examples of optics}\label{sec:examples}
\subsection{Lenses and prisms}
In functional programming, \textbf{lenses} can be seen as accessors for a particular subfield of a data structure. Lenses are given by two functions: the \mintinline[breaklines]{haskell}{view} function that accesses the subfield, and the \mintinline[breaklines]{haskell}{update} function that overwrites its contents.
\begin{spec}
view   :: s -> a
update :: s -> b -> t
\end{spec}
The definition of \emph{lens} has its origin in Oles' thesis~\cite{oles82}, but this basic definition has been generalized in many different directions.
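As a first concrete instance, a product type splits into a focus and a residual that is carried along unchanged; a minimal sketch of the two functions for a lens onto the first component of a pair (the function names below are ours) is the following.
\begin{spec}
viewFst :: (a, c) -> a
viewFst (a, _) = a

updateFst :: (a, c) -> b -> (b, c)
updateFst (_, c) b = (b, c)
\end{spec}
Here the second component \hask{c} plays the role of the residual \(M\) of Definition~\ref{def:optic}: the decomposition sets it aside and \hask{updateFst} reuses it untouched.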
\emph{Monadic lenses}~\cite{AbouSaleh16} try to unify many of the already proposed ways of combining lenses and monadic effects. Myers' \emph{lenses in a symmetric monoidal category}~\cite[\S 2.2]{spivak19} are a different notion that provides an analogue of the \mintinline[breaklines]{haskell}{view} and \mintinline[breaklines]{haskell}{update} functions in arbitrary symmetric monoidal categories. Riley also describes a notion of \emph{linear lenses} with their laws in an arbitrary monoidal closed category~\cite[\S 4.8]{riley18}. Boisseau~\cite[\S 5.2]{boisseau17} introduced the \emph{achromatic lens} responding to the fact that sometimes lenses come endowed with a \mintinline[breaklines]{haskell}{create} function alongside the usual \mintinline[breaklines]{haskell}{view} and \mintinline[breaklines]{haskell}{update}. However, these generalizations are not mutually compatible in general. They use the cartesian or monoidal structure to get the \mintinline[breaklines]{haskell}{view} function in different ways. Many of them include a monad at some point in the type signature, and then define a particular variant of lens composition that takes that monad into account. Some of them, such as \emph{monoidal lenses} or \emph{lenses in a symmetric monoidal category}, were not presented as \emph{optics}, and a profunctor representation for them was not considered. We present two derivations of lenses as mixed optics that capture all of the variants mentioned before and endow all of them with a unified profunctor representation (Theorem~\ref{th:profrep}). The first derivation is based on the cartesian structure. It generalizes the original one by Milewski~\cite{milewski17} and allows us to show that Myers' lenses (Definition~\ref{def:myerslens}) and monadic lenses (Definition~\ref{def:monadiclens}) are also particular cases of mixed optic.
The derivation can then be refined to also cover \emph{achromatic lenses} and describe some new variants of optic with potential applications in functional programming that were missing in the literature. The second derivation uses the closed structure instead, and slightly generalizes Riley's \emph{linear lenses}~\cite[\S 4.8]{riley18}.

\subsubsection{Lenses}
The key ingredient for most variants of lenses is a cartesian monoidal structure. Throughout this section, we take a cartesian closed category \((\mathcal{W}, \times,1)\) as our base for enrichment. A monoidal \Wt{category} \((\mathbf{C},\times,1)\) is cartesian if there exist \Wt{natural} isomorphisms \(\mathbf{C}(Z,X \times Y) \cong \mathbf{C}(Z,X) \times \mathbf{C}(Z,Y)\) and \(\mathbf{C}(X,1) \cong 1\).

\begin{definition}\label{def:lens}
Let \(\mathbf{C}\) be a cartesian \Wt{category} with a monoidal \Wt{action} \((\bullet) \colon \mathbf{C} \times \mathbf{D} \to \mathbf{D}\) to an arbitrary \Wt{category} \(\mathbf{D}\). A \textbf{lens} is an element of
\[\mathbf{Lens}((A, B), (S, T)) \coloneqq \mathbf{C}(S,A) \times \mathbf{D}(S \bullet B, T). \]
\end{definition}

\begin{proposition}
Lenses are mixed optics (as in Definition~\ref{def:optic}) for the actions of the cartesian product \((\times) \colon \mathbf{C} \times \mathbf{C} \to \mathbf{C}\) and \((\bullet) \colon \mathbf{C} \times \mathbf{D} \to \mathbf{D}\). That is, \(\mathbf{Lens} \cong \mathbf{Optic}_{(\times,\bullet)}\).
\end{proposition}

\begin{proof}
The product \((\times) \colon \mathbf{C}^2 \to \mathbf{C}\) is right adjoint to the diagonal functor.
\begin{align*} & \int^{C \in \mathbf{C}} \mathbf{C}(S,C \times A) \times \mathbf{D}(C \bullet B, T) \\ \cong & \qquad\mbox{(Product)} \\ & \int^{C \in \mathbf{C}} \mathbf{C}(S , C) \times \mathbf{C}(S , A) \times \mathbf{D}(C \bullet B , T) \\ \cong & \qquad\mbox{(coYoneda)} \\ & \mathbf{C}(S , A) \times \mathbf{D}(S \bullet B , T).
\qedhere \end{align*} \end{proof}

\begin{remark}
The previous definition can be specialized to the pair \(\mathbf{C}(S,A) \times \mathbf{C}(S \times B , T)\) if we take \(\mathbf{C} = \mathbf{D}\) and we let \((\bullet)\) be the cartesian product.
\end{remark}

\subsubsection{Lenses in a symmetric monoidal category}
\begin{definition}[{{\citealp[\S 2.2]{spivak19}}}]\label{def:myerslens}
Let \(\mathbf{C}\) be a symmetric monoidal category, with \(\mathbf{CCom}\) its category of cocommutative comonoids and \(\U \colon \mathbf{CCom} \to \mathbf{C}\) the forgetful functor. Let \(S,A \in \mathbf{CCom}\) and \(B,T \in \mathbf{C}\). A \textbf{monoidal lens} is an element of
\[\mathbf{mLens}_{\otimes} ((A, B), (S,T)) \coloneqq \mathbf{CCom}(S,A) \times \mathbf{C}(\U S \otimes B, T). \]
\end{definition}

\begin{proposition}\label{prop:monoidallenses}
Monoidal lenses are a particular case of Definition~\ref{def:lens}.
\end{proposition}
\begin{proof}
The category of cocommutative comonoids \(\mathbf{CCom}\) over a category \(\mathbf{C}\) can be given a cartesian structure in such a way that the forgetful functor \(\U \colon \mathbf{CCom} \to \mathbf{C}\) is strict monoidal (see~\cite{fox76}, where a stronger result is shown). The symmetric structure is needed to endow the monoidal product of two comonoids with a canonical comonoid structure. We take the action \((\bullet)\) in Definition~\ref{def:lens} to be given by \(S \bullet A \coloneqq \U S \otimes A\). That is, \(\mathbf{mLens}_\otimes \cong \mathbf{Optic}_{(\otimes, _\U\otimes)}\).
\end{proof}

\subsubsection{Monadic lenses}
Monadic lenses~\cite{AbouSaleh16} were proposed for combining lenses with monadic effects. Their \hask{update} function can be partial or non-deterministic, and they can account for side effects.
\begin{figure}[h]
\centering
\begin{examplecode}
box :: (Show b) => MonadicLens Log a b (Box a) (Box b)

return (Box 42)
  >>= mupdate box "hello"
  >>= mupdate box "world"

>>> [box]: contents changed to "hello".
>>> [box]: contents changed to "world".
>>> Box{"world"}
\end{examplecode}
\caption{A polymorphic family of type-changing monadic lenses for a logging monad (\hask{Log}) is used to track each time a data holder (\hask{Box}) is accessed.}\label{fig:exampleBox}
\end{figure}

\begin{definition}[{{\citealp[\S 2.3]{AbouSaleh16}}}]\label{def:monadiclens}
Let \(\Psi \colon \W \to \W\) be a \Wt{monad}. A \textbf{monadic lens} for \(\Psi\) is an element of
\[\mathbf{MndLens}_\Psi ((A, B), (S, T)) \coloneqq \W(S , A) \times \W(S \times B , \Psi T).\]
\end{definition}

\begin{proposition}\label{prop:monadiclens}
Monadic lenses are a particular case of Definition~\ref{def:lens}.
\end{proposition}
\begin{proof}
First, note that every \Wt{endofunctor} is \textit{strong}. Thus, the \Wt{monad} \(\Psi\) comes with a \Wt{natural} family \(\theta_{X,Y} \colon X \times \Psi(Y) \to \Psi(X \times Y)\). This induces a \Wt{action} \((\rtimes) \colon \W \times \operatorname{Kl}(\Psi) \to \operatorname{Kl}(\Psi)\) defined as the composite
\[\begin{tikzcd}[row sep=tiny] \W(A,\Psi B) \times \W(X,Y) \rar{(\times)} & \W(X \times A, Y \times \Psi B) \\ \phantom{\W(X \times A, Y \times \Psi B)} \rar{\theta} & \W(X \times A, \Psi (Y \times B)). \end{tikzcd}\]
We can note that \(\operatorname{Kl}_\Psi(S \rtimes B,T) \coloneqq \W(S \times B, \Psi T)\). Monadic lenses can be rewritten as lenses (as in Definition~\ref{def:lens}) where the action \((\bullet)\) is given by \((\rtimes) \colon \W \times \operatorname{Kl}(\Psi) \to \operatorname{Kl}(\Psi)\). That is, \(\mathbf{MndLens}_\Psi \cong \mathbf{Optic}_{(\times,\rtimes)}\).
\end{proof}

\begin{remark}
This technique is similar to the one used by Riley to describe a non-mixed variant called \emph{effectful lenses}~\cite[\S 4.9]{riley18}.
\end{remark}

\subsubsection{Lenses with algebraic context}
We can further generalize Definition~\ref{def:lens} if we allow the \textit{context} over which we take the coend to be an algebra for a fixed monad.
The motivation is that lenses with a context like this appear to have direct applications in programming; for instance, Boisseau's \textit{achromatic lens}~\cite[\S 5.2]{boisseau17} is a particular case of this definition. These \textit{algebraic lenses} should not be confused with the previous \textit{monadic lenses} in Definition~\ref{def:monadiclens}. \begin{definition}\label{def:algebraiclens} Let \(\Psi \colon \mathbf{C} \to \mathbf{C}\) be a \Wt{monad} in a cartesian \Wt{category} \(\mathbf{C}\). Let \((\bullet) \colon \mathbf{C} \times \mathbf{D} \to \mathbf{D}\) be a monoidal \Wt{action} to an arbitrary \Wt{category} \(\mathbf{D}\). An \textbf{algebraic lens} is an element of \[\mathbf{Lens}_\Psi((A,B), (S,T)) \coloneqq \mathbf{C}(S, A) \times \mathbf{D}(\Psi S \bullet B, T). \] \end{definition} \begin{proposition} Algebraic lenses are mixed optics (as in Definition~\ref{def:optic}) for the actions of the product by the carrier of an algebra \((_\U \times) \colon \EM_\Psi \times \mathbf{C} \to \mathbf{C}\) and \((_\U \bullet) \colon \EM_\Psi \times \mathbf{D} \to \mathbf{D}\). That is, \(\mathbf{Lens}_\Psi \cong \mathbf{Optic}_{(_\U \times, _\U \bullet)}\). \end{proposition} \begin{proof} Products in \(\mathbf{C}\) induce products in the \Wt{category} of algebras \(\EM_\Psi\) which are preserved by the forgetful functor \(\U \colon \EM_\Psi \to \mathbf{C}\). Thus, the \Wt{category} of algebras is cartesian, making the forgetful functor monoidal. The functor \((_\U \times) \colon \EM_\Psi \times \mathbf{C} \to \mathbf{C}\), defined by \(C {_\U\times} A \coloneqq \U C \times A\), is a strong monoidal action. 
\begin{align*} & \int^{C \in \mathbf{\EM_\Psi}} \mathbf{C}(S, \U C \times A) \times \mathbf{D}(\U C \bullet B, T) \\ \cong & \quad\mbox{(Product)} \\ & \int^{C \in \mathbf{\EM_\Psi}} \mathbf{C}(S, \U C) \times \mathbf{C}(S, A) \times \mathbf{D}(\U C \bullet B, T) \\ \cong & \quad\mbox{(Free-forgetful adjunction)} \\ & \int^{C \in \mathbf{\EM_\Psi}} \EM_\Psi(\Psi S, C) \times \mathbf{C}(S, A) \times \mathbf{D}(\U C \bullet B, T) \\ \cong & \quad\mbox{(coYoneda)} \\ & \mathbf{C}(S, A) \times \mathbf{D}(\Psi S \bullet B, T). \qedhere \end{align*}
\end{proof}

\begin{remark}
Algebraic lenses are a new optic. Let \(\mathbf{D} = \mathbf{C}\) and let \((\bullet)\) be the cartesian product. An algebraic lens is given by the usual \mintinline[breaklines]{haskell}{view} function that accesses the subfield, and a variant of the \mintinline[breaklines]{haskell}{update} function that takes an element inside a monad.
\begin{spec}
view      :: s -> a
algUpdate :: m s -> b -> t
\end{spec}
\end{remark}

\begin{remark}
In particular, the case where \(\Psi\) is the \textit{list monad} \(\mathcal{L} \colon \V \to \V\) defines a new optic, which we dub a \textbf{classifying lens}. We propose to use them as lenses that can also be trained to classify a new focus into a complete data structure, as in Figure~\ref{fig:exampleIris}.
\begin{figure}[H]
\centering
\begin{examplecode}
let iris = [ Iris Setosa    4.9 3.0 1.4 0.2
           , Iris Setosa    4.7 3.2 1.3 0.2
           , ...
           , Iris Virginica 5.9 3.0 5.1 1.8 ]

measure :: AlgLens [] Measurements Flower

iris!!4 ^. measure
>>> (5.0, 3.6, 1.4, 0.2)

iris & measure .?
    Measurements (4.8, 3.2, 3.5, 2.1)
>>> Iris Versicolor (4.8, 3.2, 3.5, 2.1)
\end{examplecode}
\caption{A classifying lens (\hask{measure}) is used both for accessing the measurements of a point in the \hask{iris} dataset and to classify new measurements into a species (\hask{Versicolor}).}\label{fig:exampleIris}
\end{figure}
They are given by two functions, the usual \mintinline[breaklines]{haskell}{view} function that accesses the focus, and a \mintinline[breaklines]{haskell}{classify} function that takes a list of examples together with some piece of data and produces a new example.
\begin{spec}
view     :: s -> a
classify :: [s] -> b -> t
\end{spec}
\end{remark}

\begin{remark}\label{def:achromatic}\label{def:apochromatic}
The case in which \(\Psi\) is the \textit{maybe monad} \(\mathcal{M} \colon \V \to \V\) was studied by Boisseau~\cite[\S 5.2]{boisseau17} and is known as the \textbf{achromatic lens}. It is motivated by the fact that, sometimes in practice, lenses come naturally equipped with a \mintinline[breaklines]{haskell}{create} function~\cite[\S 3]{foster05} alongside the usual \mintinline[breaklines]{haskell}{view} and \mintinline[breaklines]{haskell}{update}.
\begin{spec}
view   :: s -> a
update :: s -> b -> t
create :: b -> t
\end{spec}
For this implementation, we note that
\[\mathbf{C}(S,A) \times \mathbf{C}(\mathcal{M} S \times B,T) \cong \mathbf{C}(S,A) \times \mathbf{C}(S\times B,T) \times \mathbf{C}(B,T).\]
\end{remark}

\begin{remark}
What Riley~\cite[\S 4.10]{riley18} calls \textit{achromatic lens}, \[\mathbf{C}(S,\{B,T\}+1) \times \mathbf{C}(S,A) \times \mathbf{C}(B,T),\] is actually the optic for the action \(\odot \colon \mathbf{C} \to [\mathbf{C},\mathbf{C}]\) defined by \(M \odot A \coloneqq (M + 1) \times A\). Perhaps confusingly, this is not equivalent to Boisseau's \emph{achromatic lens}, which is the optic for the action \((\times) \colon \textrm{\(\mathcal{M}\)-Alg} \to [\mathbf{C},\mathbf{C}]\) of the product by pointed objects.
In order to amend this clash of terminology, we call this second optic the \textbf{apochromatic lens}. It can be implemented by \mintinline[breaklines]{haskell}{view} and \mintinline[breaklines]{haskell}{create} functions, together this time with a \mintinline[breaklines]{haskell}{maybeUpdate} that is allowed to fail.
\begin{spec}
view        :: s -> a
maybeUpdate :: s -> Maybe (b -> t)
create      :: b -> t
\end{spec}
\end{remark}

\subsubsection{Lenses in a closed category}
Linear lenses (as described by Riley~\cite[\S 4.8]{riley18}) are a different generalization of lenses that relies on a closed monoidal structure. Their advantage is that we do not need to require our enriching category to be cartesian anymore.

\begin{definition}[{{\citealp[\S 4.8]{riley18}}}]\label{def:linearlens}
Let \((\mathbf{D}, \otimes, \{\} )\) be a right closed \Vt{category} with a monoidal \Vt{action} \((\bullet) \colon \mathbf{D} \otimes \mathbf{C} \to \mathbf{C}\) to an arbitrary \Vt{category} \(\mathbf{C}\). A \textbf{linear lens} is an element of
\[ \mathbf{Lens}_{\{\}} ((A, B), (S, T)) \coloneqq \mathbf{C}(S, \{B , T\} \bullet A). \]
\end{definition}

\begin{proposition}
Linear lenses are mixed optics (as in Definition~\ref{def:optic}) for the actions of the monoidal product \((\otimes) \colon \mathbf{D} \times \mathbf{D} \to \mathbf{D}\) and \((\bullet) \colon \mathbf{D} \times \mathbf{C} \to \mathbf{C}\). That is, \(\Lens_{\{\}} \cong \mathbf{Optic}_{\otimes,\bullet}\).
\end{proposition}
\begin{proof}
The monoidal product has a right adjoint given by the exponential.
\begin{align*} & \int^{D \in \mathbf{D}} \mathbf{C}(S,D \bullet A) \otimes \mathbf{D}(D \otimes B, T) \\ \cong & \qquad\mbox{(Closedness)} \\ & \int^{D \in \mathbf{D}} \mathbf{C}(S , D \bullet A) \otimes \mathbf{D}(D , \{B , T\}) \\ \cong & \qquad\mbox{(coYoneda)} \\ & \mathbf{C}(S , \{B , T\} \bullet A).
\qedhere \end{align*} \end{proof}

\subsubsection{Prisms}
In functional programming, \textbf{prisms} pattern match on data structures, allowing for failure. They are given by a \hask{match} function that tries to access the matched structure and a \hask{build} function that constructs an abstract type from one of its alternatives.
\begin{spec}
match :: s -> Either t a
build :: b -> t
\end{spec}
Prisms happen to be lenses in the opposite category. However, they can also be described as optics in the original category for a different pair of actions. We will provide a derivation of prisms that is dual to our derivation of lenses for a cartesian structure. This derivation specializes to the pair \(\mathbf{C}(S,T+A) \times \mathbf{C}(B,T)\).

\begin{definition}\label{def:prism}
Let \(\mathbf{D}\) be a cocartesian \Wt{category} with a monoidal \Wt{action} \((\bullet) \colon \mathbf{D} \times \mathbf{C} \to \mathbf{C}\) to an arbitrary category \(\mathbf{C}\). A \textbf{prism} is an element of
\[\mathbf{Prism}((A, B), (S,T)) \coloneqq \mathbf{C}(S , T \bullet A) \times \mathbf{D}(B , T). \]
In other words, a prism from \((S,T)\) to \((A,B)\) is a lens from \((T,S)\) to \((B,A)\) in the opposite categories \(\mathbf{D}^{op}\) and \(\mathbf{C}^{op}\). However, they can also be seen as optics from \((A,B)\) to \((S,T)\).
\end{definition}

\begin{proposition}
Prisms are mixed optics (as in Definition~\ref{def:optic}) for the actions of the coproduct \((+) \colon \mathbf{D} \times \mathbf{D} \to \mathbf{D}\) and \((\bullet) \colon \mathbf{D} \times \mathbf{C} \to \mathbf{C}\). That is, \(\mathbf{Prism} \cong \mathbf{Optic}_{(\bullet,+)}\).
\end{proposition}
\begin{proof}
The coproduct \((+) \colon \mathbf{D}^2 \to \mathbf{D}\) is left adjoint to the diagonal functor.
\begin{align*} & \int^{D \in \mathbf{D}} \mathbf{C}(S,D \bullet A) \times \mathbf{D}(D + B, T) \\ \cong & \qquad\mbox{(Coproduct)} \\ & \int^{D \in \mathbf{D}} \mathbf{C}(S , D \bullet A) \times \mathbf{D}(D , T) \times \mathbf{D}(B , T) \\ \cong & \qquad\mbox{(coYoneda)} \\ & \mathbf{C}(S , T \bullet A) \times \mathbf{D}(B , T). \qedhere \end{align*} \end{proof} \begin{remark}[Prisms in a symmetric monoidal category] Let \(\mathbf{C}\) be a symmetric monoidal (plain) category. Its category of commutative monoids, \(\mathbf{CMon}\), can be given a cocartesian structure in such a way that the forgetful functor \(\U \colon \mathbf{CMon} \to \mathbf{C}\) is strict monoidal. A \textbf{monoidal prism} is an element of \[\mathbf{mPrism}((A, B), (S, T)) \coloneqq \mathbf{C}(S,\U T \otimes A) \times \mathbf{CMon}(B,T).\] \end{remark} \begin{remark}[Prisms with coalgebraic context]\label{remark:coalgebraicprism} Let \(\Theta\) be a \Wt{comonad} in a cocartesian \Wt{category} \(\mathbf{D}\). Let \((\bullet) \colon \mathbf{D} \times \mathbf{C} \to \mathbf{C}\) be a monoidal action to an arbitrary \Wt{category} \(\mathbf{C}\). Coproducts in \(D\) induce coproducts in the category of coalgebras \(\EM_\Theta\) which are preserved by the forgetful \Wt{functor} \(\U \colon \EM_\Theta \to \mathbf{D}\). A \textbf{coalgebraic prism} is an element of \[\mathbf{Prism}_\Theta((A, B), (S, T)) \coloneqq \mathbf{C}(S, \Theta T \bullet A) \times \mathbf{D}(B, T).\] The coalgebraic variant for a comonad \mintinline[breaklines]{haskell}{(c)} is given by a \mintinline[breaklines]{haskell}{cMatch} function, that captures the failure into a comonad, and the usual \mintinline[breaklines]{haskell}{build} function. 
\begin{spec}
cMatch :: s -> Either (c t) a
build  :: b -> t
\end{spec}
\end{remark}

\subsection{Traversals}
In functional programming, \textbf{traversals} extract the elements of a container into an ordered list, allowing us to iterate over the elements without altering the container (Figure~\ref{fig:exampleTraversal}).
\begin{figure}[H]
\centering
\begin{examplecode}
let mail = [ "43 Adlington Rd, Wilmslow, United Kingdom"
           , "26 Westcott Rd, Princeton, USA"
           , "St James's Square, London, United Kingdom" ]

each    :: Traversal String [String]
address :: Prism Address String
city    :: Lens String Address

>>> mail & each.address.city
[ "43 Adlington Rd, WILMSLOW, United Kingdom"
, "26 Westcott Rd, PRINCETON, USA"
, "St James's Square, LONDON, United Kingdom" ]
\end{examplecode}
\caption{The composition of a traversal (\hask{each}) with a prism (\hask{address}) and a lens (\hask{city}) is used to parse a collection of strings and modify one of their subfields.}\label{fig:exampleTraversal}
\end{figure}
Traversals are constructed from a single \hask{extract} function that both outputs the elements and takes a new list to update them.
\begin{spec}
extract :: s -> ([a], [b] -> t)
\end{spec}
Usually, we ask the length of the input to the function of type \hask{[b] -> t} to be the same as the length of \hask{[a]}. This restriction can also be encoded into the type when dependent types are available.

\begin{definition}\label{def:traversal}
A \textbf{traversal} is an element of
\[\mathbf{Traversal}((A, B), (S, T)) \coloneqq \V\left(S , \int^{n \in \mathbb{N}} A^n \otimes [B^n, T] \right). \]
\end{definition}

\begin{remark}
Let \((\mathbb{N},+)\) be the free strict monoidal category on one object. Ends and coends indexed by \(\mathbb{N}\) coincide with products and coproducts, respectively. Here \(A^{(-)} \colon \mathbb{N} \to \V\) is the unique monoidal \Vt{functor} sending the generator of \(\mathbb{N}\) to \(A \in \V\).
Each functor \(C \colon \mathbb{N} \to \V\) induces a \textit{power series}
\[\operatorname{Pw}_C(A) = \int\nolimits^{n \in \mathbb{N}} A^n \otimes C_{n}.\]
This defines an action \(\operatorname{Pw} \colon [\mathbb{N}, \V] \to [\V,\V]\) sending the indexed family to its power series (see Remark~\ref{remark:series}). We propose a derivation of the \emph{traversal} as the optic for power series.
\end{remark}

\begin{proposition}\label{prop:traversal}
Traversals are optics (as in Definition~\ref{def:optic}) for power series. That is, \(\mathbf{Traversal} \cong \mathbf{Optic}_{\operatorname{Pw},\operatorname{Pw}}\).
\end{proposition}
\begin{proof}
The derivation generalizes that of linear lenses (Definition~\ref{def:linearlens}).
\noindent \resizebox{\linewidth}{!}{ \begin{minipage}{\linewidth} \begin{align*} & \int^{C \in [ \mathbb{N} , \V]} \V\left(S , \int^{n \in \mathbb{N}} C_n \otimes A^n \right) \otimes \V\left(\int^{n \in \mathbb{N}} C_n \otimes B^n , T\right) \\ \cong &\qquad\mbox{{(Continuity)}}\\ & \int^{C \in [ \mathbb{N} , \V]} \V \left(S , \int^{n \in \mathbb{N}} C_n \otimes A^n \right) \otimes \int_{n \in \mathbb{N}} \V\left( C_n \otimes B^n , T\right) \\ \cong & \qquad\mbox{{(Currying)}}\\ & \int^{C \in [ \mathbb{N} , \V]} \V \left(S , \int^{n \in \mathbb{N}} C_n \otimes A^n \right) \otimes \int_{n \in \mathbb{N}} \V\left( C_n , [B^n , T]\right) \\ \cong & \qquad\mbox{{(Natural transformation as an end)}}\\ & \int^{C \in [ \mathbb{N} , \V]} \V \left(S , \int^{n \in \mathbb{N}} C_n \otimes A^n \right) \otimes [ \mathbb{N} , \V ] \left(C_{(-)} , [B^{(-)} , T]\right) \\ \cong & \qquad\mbox{{(coYoneda)}}\\ & \V \left(S , \int^{n \in \mathbb{N}} A^n \otimes [B^n, T] \right).
\qedhere \end{align*} \end{minipage}}
\end{proof}

The derivation from the general definition of optic to the concrete description of lenses and prisms in functional programming was first described by Milewski~\cite{milewski17}, but finding a derivation of traversals like the one presented here, fitting the same elementary pattern as lenses or prisms, was left as an open problem. It should be noted, however, that derivations of the traversal as the optic for a certain kind of functor called \textbf{Traversables} (which should not be confused with traversals themselves) have been previously described by Boisseau and Gibbons~\cite{boisseau18} and Riley~\cite{riley18}. For a derivation using Yoneda, Riley~\cite{riley18} recalls a parameterised adjunction that has an equational proof in the work of Jaskelioff and O'Connor~\cite{jaskelioff15}. These two derivations do not contradict each other: two different classes of functors can generate the same optic, for instance, if the adjunction describing both of them gives rise to the same monad. This seems to be the case here: traversables are coalgebras for a comonad and power series are the cofree coalgebras for the same comonad~\cite[\S 4.2]{roman19}. In the \(\mathbf{Sets}\mbox{-based}\) case, the relation between traversable functors, applicative functors~\cite{mcbride08} and these power series functors has been studied by Jaskelioff and Rypacek~\cite{rypacek12}.

\begin{remark}\label{remark:series}
The \Vt{functor} \(\operatorname{Pw} \colon [\mathbb{N},\V] \to [\V,\V]\) is actually a strong monoidal action thanks to the fact that two power series functors compose into a power series functor.
\begin{align*} &\int^{m \in \mathbb{N}} \left( \int^{n \in \mathbb{N}} A^n \otimes C_n \right)^m \otimes D_m \\ \cong & \quad\mbox{(Product distributes over colimits)} \\ &\int^{m} \int^{n_1,\dots,n_m} A^{n_1} \otimes \dots \otimes A^{n_m} \otimes C_{n_1} \otimes \dots \otimes C_{n_m} \otimes D_m \\ \cong & \quad\mbox{(Rearranging terms)} \\ &\int^{k \in \mathbb{N}} A^k \otimes \left( \sum_{n_1 + \dots + n_m = k} C_{n_1} \otimes \dots \otimes C_{n_m} \otimes D_m \right). \end{align*}
We are then considering an implicit non-symmetric monoidal structure where the monoidal product \((\bullet) \colon [\mathbb{N},\V] \otimes [\mathbb{N},\V] \to [\mathbb{N},\V]\) of \(C_n\) and \(D_n\) can be written as
\[\int^{m} \int^{n_1,\dots , n_m} \mathbb{N}(n_1 + \dots + n_m, k) \otimes C_{n_1} \otimes \dots \otimes C_{n_m} \otimes D_m. \]
This is precisely the structure described by Kelly~\cite[\S 8]{kelly05} for the study of non-symmetric operads. A similar monoidal structure is described there when we substitute \(\mathbb{N}\) by \((\mathbb{P},+)\), the \Vt{\textit{category of permutations}} defined as the free strict symmetric monoidal category on one object. The same derivation can be repeated with this new structure to obtain an optic similar to the traversal and given by
\[\V \left(S , \int^{n \in \mathbb{P}} A^n \otimes [B^n, T] \right).\]
\end{remark}

\subsubsection{Affine traversals}
\emph{Affine traversals} strictly generalize prisms and linear lenses in the non-mixed case, allowing a \textit{lens-like} accessing pattern to fail; they are used to give a concrete representation of the composition between a lens and a prism. An affine traversal is implemented by a single \hask{access} function.
\begin{spec}
access :: s -> Either t (a , b -> t)
\end{spec}

\begin{definition}\label{def:affine}
Let \(\W\) be cartesian closed and let \(\mathbf{C}\) be a symmetric monoidal closed \Wt{category} that is also cocartesian.
An \textbf{affine traversal} is an element of \[\mathbf{Affine}_{\otimes}\left( (A, B), (S, T) \right) := \mathbf{C}(S , T + A \otimes \{B,T\}).\] \end{definition} \begin{proposition} Affine traversals are optics (as in Definition~\ref{def:lens}) for the action \((+\otimes) \colon \mathbf{C}^2 \to [\mathbf{C},\mathbf{C}]\) that sends \(C, D \in \mathbf{C}\) to the functor \(C + D \otimes (-)\). That is, \(\mathbf{Affine}_{\otimes} \cong \mathbf{Optic}_{(+\otimes),(+\otimes)}\). \end{proposition} \begin{proof} The action uses the fact that the monoidal product distributes over the coproduct. \begin{align*} & \int^{C,D} \mathbf{C}(S , C + D \otimes A) \times \mathbf{C}(C + D \otimes B, T) \\ \cong & \qquad\mbox{{(Coproduct)}} \\ & \int^{C,D} \mathbf{C}(S , C + D \otimes A) \times \mathbf{C}(C , T) \times \mathbf{C}(D \otimes B, T) \\ \cong & \qquad\mbox{{(coYoneda, Exponential)}} \\ & \int^D \mathbf{C}(S , T + D \otimes A) \times \mathbf{C}(D , \{B, T\}) \\ \cong & \qquad\mbox{{(coYoneda)}} \\ & \mathbf{C}(S , T + A \otimes \{B,T\}). \qedhere \end{align*} \end{proof} \subsubsection{Kaleidoscopes} \emph{Applicative} functors are commonly considered in functional programming because they provide a convenient generalization of monads with better properties for composition. It is then natural to ask which optic is associated to applicative functors. Applicative functors in \([\V,\V]\) are monoids with respect to Day convolution \((\ast) \colon [\V,\V] \times [\V,\V] \to [\V,\V]\), defined as the following coend \[ (F \ast G)(A) = \int^{X,Y \in \V} \V(X \otimes Y, A) \otimes FX \otimes GY. \] Applicative functors form a \Vt{category} of monoids \(\App\). Alternatively, they are lax monoidal \Vt{functors} for the cartesian structure.
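As a hedged Haskell sketch, Day convolution can be written as an existential datatype pairing a value of each functor with a combining continuation; the \texttt{kan-extensions} package provides this construction as \texttt{Data.Functor.Day}.
\begin{spec}
data Day f g a where
  Day :: f x -> g y -> (x -> y -> a) -> Day f g a
\end{spec}
A monoid for this tensor is a functor \hask{f} together with maps \hask{a -> f a} and \hask{Day f f a -> f a}, natural in \hask{a}, which correspond to \hask{pure} and a binary \hask{liftA2}.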
It is a basic result in category theory~\cite[\S VII, Theorem 2]{maclane78} that, as the category \([\V,\V]\) has denumerable colimits and Day convolution distributes over them, the free applicative functor can be computed as \(I + F + F^{\ast 2} + F^{\ast 3} + \dots\). \begin{definition}\label{def:kaleidoscope} A \textbf{kaleidoscope} is an element of \[\mathbf{Kaleidoscope}\left((A, B), (S, T)\right) := \prod_{n \in \mathbb{N}} \V \left([A^n, B] , [S^n, T] \right). \] \end{definition} \begin{proposition} Kaleidoscopes are optics for the action of applicative functors. \end{proposition} \begin{proof} Let \(\U \colon \App \to [\V,\V]\) be the forgetful functor from the category of applicatives. \begin{align*} & \int^{F \in \App} \V(S, \U FA) \otimes \V(\U FB,T) \\ \cong & \qquad\mbox{{(Yoneda)}} \\ & \int^{F \in \App} \V\left(S, \int_{C \in \V} [[A,C],\U FC] \right) \otimes \V(\U FB,T) \\ \cong & \qquad\mbox{(Continuity)}\\ & \int^{F \in \App} \int_{C} \V\left(S, [[A,C],\U FC] \right) \otimes \V(\U FB,T) \\ \cong & \qquad\mbox{(Exponential)}\\ & \int^{F \in \App} \left( \int_{C} \V\left([A,C] \otimes S, \U FC \right) \right) \otimes \V(\U FB,T) \\ \cong & \qquad\mbox{(Natural transformations as ends)}\\ & \int^{F \in \App} \mathrm{Nat}\left([A,-] \otimes S, \U F \right) \otimes \V(\U FB,T) \\ \cong & \qquad\mbox{(Free applicative)} \\ & \int^{F \in \App} \App\left(\sum_{n\in \mathbb{N}} [A^n , -] \otimes S , F \right) \otimes \V(\U FB,T) \\ \cong & \qquad\mbox{(coYoneda)} \\ & \V \left(\sum_{n \in \mathbb{N}} S^n \otimes [A^n , B],T \right) \\ \cong & \qquad\mbox{(Continuity and exponential)} \\ & \prod_{n \in \mathbb{N}} \V \left([A^n , B], [S^n, T] \right). \qedhere \end{align*} \end{proof} \begin{remark} The free applicative we construct here is known in Haskell programming as the \hask{FunList} applicative. 
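As a hedged Haskell sketch, it can be written as a recursive datatype in which \(n\) applications of the second constructor store \(n\) foci of type \hask{a} together with a continuation awaiting \(n\) replacements of type \hask{b}:
\begin{spec}
data FunList a b t = Done t | More a (FunList a b (b -> t))
\end{spec}
Unrolling the recursion, \hask{FunList a b t} is the sum \(\sum_{n \in \mathbb{N}} A^n \otimes [B^n, T]\) appearing in the derivation above.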
In the same way that traversables are written in terms of lists, we can approximate kaleidoscopes as a single function \hask{aggregate} that takes a folding for the foci and outputs a folding for the complete data structure. Kaleidoscopes are a new optic, and we propose to use them as accessors for pointwise foldable data structures. \begin{spec} aggregate :: ([a] -> b) -> ([s] -> t) \end{spec} \end{remark} Kaleidoscopes pose a problem on the side of applications: they cannot be composed with lenses to produce new kaleidoscopes. This is caused by the fact that the constraint defining them (\mintinline{haskell}{Applicative}) is not a superclass of the constraint defining lenses: a functor given by a product is not applicative, in general. However, a functor given by a product \emph{with a monoid} is applicative. This means that kaleidoscopes can be composed with algebraic lenses whose residual is a monoid (Definition~\ref{def:algebraiclens}). \begin{figure}[H] \centering \begin{examplecode} measure :: AlgebraicLens [] Measurements Flower aggregate :: Kaleidoscope Float Measurements iris & measure.aggregate >- mean >>> Iris Versicolor; Sepal (5.843, 3.054); Petal (3.758, 1.198) \end{examplecode} \caption{Following the previous Example~\ref{fig:exampleIris}, a kaleidoscope (\hask{aggregate}) is composed with an algebraic lens to create a new point on the dataset by aggregating measurements with some function (\hask{mean}, \hask{maximum}) and then classifying it.}\label{fig:exampleKaleidoscope} \end{figure} \subsection{Grates} \subsubsection{Grates} \emph{Grates} create a new structure provided a way of creating a new focus from a view function. They are given by a single \hask{grate} function with the form of a nested continuation. \begin{spec} grate :: ((s -> a) -> b) -> t \end{spec} \begin{definition}\label{def:grate} Let \(\mathbf{C}\) be a symmetric monoidal closed \Vt{category}. Let \((\bullet) \colon \mathbf{C} \times \mathbf{D} \to \mathbf{D}\) be an arbitrary action.
A \textbf{grate} is an element of \[\mathbf{Grate}\left((A, B), (S, T)\right) := \mathbf{D}(\{S,A\}\bullet B, T).\] \end{definition} \begin{proposition} Grates are mixed optics for the action of the exponential and \((\bullet) \colon \mathbf{C} \times \mathbf{D} \to \mathbf{D}\). That is, \(\mathbf{Grate} \cong \mathbf{Optic}_{\{\},\bullet}\). \end{proposition} \begin{proof} The exponential is right adjoint to the monoidal product. \begin{align*} & \int^{C \in \mathbf{C}} \mathbf{C}(S,\{C,A\}) \otimes \mathbf{D}(C \bullet B, T) \\ \cong & \qquad\mbox{(Exponential)} \\ & \int^{C \in \mathbf{C}} \mathbf{C}(S \otimes C, A) \otimes \mathbf{D}(C \bullet B, T) \\ \cong & \qquad\mbox{(Exponential)} \\ & \int^{C \in \mathbf{C}} \mathbf{C}(C,\{S, A\}) \otimes \mathbf{D}(C \bullet B, T) \\ \cong & \qquad\mbox{(coYoneda)} \\ & \mathbf{D}(\{S, A\} \bullet B, T). \qedhere \end{align*} \end{proof} The description of a grate and its profunctor representation in terms of ``closed'' profunctors was first reported by Deikun and O'Connor~\cite{deikun15}; it can be seen as a consequence of the profunctor representation theorem (Theorem~\ref{th:profrep}). \subsubsection{Glasses} \emph{Glasses} are a new optic that strictly generalizes grates and lenses for the cartesian case. In functional programming, glasses can be implemented by a single function \hask{glass} that takes a way of transforming \emph{views} into new foci and uses it to update the data structure. We propose to use them as the concrete representation of the composition of lenses and grates. \begin{spec} glass :: ((s -> a) -> b) -> (s -> t) \end{spec} \begin{definition}\label{def:glass} Let \(\W\) be a cartesian Bénabou cosmos and let \(\mathbf{C}\) be a cartesian closed \Wt{category}. A \textbf{glass} is an element of \[\mathbf{Glass} \left( (A, B), (S, T) \right) := \mathbf{C}(S \times [[S, A], B] , T).
\] \end{definition} \begin{proposition} Glasses are optics for the action \((\times[]) \colon \mathbf{C}^{op} \times \mathbf{C} \to [\mathbf{C},\mathbf{C}]\) that sends \(C, D \in \mathbf{C}\) to the \Wt{functor} \(D \times [C,-]\). \end{proposition} \begin{proof} \begin{align*} & \int^{C,D} \mathbf{C}(S, C \times [D,A]) \times \mathbf{C}([D,B] \times C , T) \\ \cong & \qquad \mbox{{(Product)}} \\ & \int^{C,D} \mathbf{C}(S, C) \times \mathbf{C}(S, [D,A]) \times \mathbf{C}([D,B] \times C , T) \\ \cong & \qquad \mbox{{(coYoneda)}}\\ & \int^D \mathbf{C}(S, [D, A]) \times \mathbf{C}([D,B] \times S, T) \\ \cong & \qquad \mbox{{(Exponential)}}\\ & \int^D \mathbf{C}(D, [S , A]) \times \mathbf{C}([D, B] \times S, T) \\ \cong & \qquad \mbox{{(coYoneda)}}\\ & \mathbf{C}([[S,A], B] \times S , T). \qedhere \end{align*} \end{proof} \subsection{Getters, reviews and folds} Some constructions, such as plain morphisms, are regarded as corner cases of optics in the \emph{lens} library. We will describe these constructions (\emph{getters}, \emph{reviews} and \emph{folds}~\cite{kmett15}) as mixed optics. All of them set one of the two base categories to be the terminal category and, contrary to most optics, they act only unidirectionally. In some sense, they are degenerate cases of the definition of mixed optic. \begin{definition}\label{def:getter}\label{def:review} Let \(\mathbf{C}\) be an arbitrary \Vt{category}. \textbf{Getters} are degenerate optics for the trivial action on the covariant side. \textbf{Reviews} are degenerate optics for the trivial action on the contravariant side. \begin{align*} \mathbf{Getter}((A,\ast),(S,\ast)) = \mathbf{C}(S, A). \\ \mathbf{Review}((\ast,B),(\ast,T)) = \mathbf{C}(B, T). \end{align*} In other words, we fix the case where \(\mathbf{M} = \mathbf{1}\) and \(\mathbf{D} = \mathbf{1}\) or \(\mathbf{C} = \mathbf{1}\), respectively. 
\end{definition} The category of \textbf{foldable} functors is the slice category over the list functor \(\mathcal{L} \colon \V \to \V\), also known as the free monoid functor. Using the fact that \(\mathcal{L}\) is a monad, the slice category can be made monoidal in such a way that the forgetful functor \([\V,\V]/\mathcal{L} \to [\V,\V]\) becomes strong monoidal. \begin{definition}\label{def:fold} \textbf{Folds} are degenerate optics for the action of foldable functors. \[\mathbf{Fold}((A,\ast), (S,\ast)) = \int^{F \in \mathbf{Foldable} } \V(S , F A).\] \end{definition} Folds admit a concrete description that we can reach by eliminating the coend. Since coends are colimits, a coend over a diagram category with a terminal object is determined by the value at the terminal object. \[ \int^{F \in \mathbf{Foldable} } \V(S , F A) \cong \V(S, \mathcal{L} A). \] \begin{remark} The same technique can be used to prove that the optic for the slice category over a monad \(\mathcal{G} \colon \V \to \V\) has a concrete form given by \(\mathbf{C}(S, \mathcal{G}A)\). \end{remark} \begin{remark} We obtain the usual implementations of getters as \mintinline[breaklines]{haskell}{s -> a}, of reviews as \mintinline[breaklines]{haskell}{b -> t} and of folds as \mintinline[breaklines]{haskell}{s -> [a]}. Their profunctor representation degenerates into a covariant or contravariant \emph{functor representation}, which can be seen as a more direct application of the Yoneda lemma. \end{remark} \subsection{Setters and adapters} \begin{definition}\label{def:setter} A \textbf{setter}~\cite{kmett15} is an element of \[\mathbf{Setter} ((A, B), (S, T)) \coloneqq \V([A,B], [S,T]).\] \end{definition} \begin{proposition} Setters are optics for the evaluation of any endofunctor \(\mathrm{ev} \colon [\V , \V] \to [\V , \V]\). That is, \(\mathbf{Setter} \cong \mathbf{Optic}_{\mathrm{ev},\mathrm{ev}}\).
\end{proposition} \begin{proof} The concrete derivation described by Riley~\cite[\S 4.5.2]{riley18} and his requirement for strength can be transferred directly to the enriched case. \begin{align*} & \int^{F \in [ \V , \V ]} \V(S, FA) \otimes \V(FB,T) \\ \cong & \qquad\mbox{{(Yoneda)}} \\ & \int^{F \in [ \V , \V ]} \left( \int_{C \in \V} [\V(A,C), \V(S, FC)] \right) \otimes \V(FB,T) \\ \cong & \qquad\mbox{{(Exponential)}} \\ & \int^{F \in [ \V , \V ]} \left( \int_{C \in \V} \V(S \otimes [A,C], FC) \right) \otimes \V(FB,T) \\ \cong & \qquad\mbox{{(Natural transformation as an end)}} \\ & \int^{F \in [ \V , \V ]} [\V,\V](S \otimes [A,-], F) \otimes \V(FB,T) \\ \cong & \qquad\mbox{{(coYoneda)}} \\ & \V(S \otimes [A,B], T). \qedhere \end{align*} \end{proof} \begin{remark} In functional programming, we implicitly restrict to the case where \(\V\) is a cartesian category, and we curry this description to obtain the usual representation of setters as a single \mintinline[breaklines]{haskell}{over} function. \begin{spec} over :: (a -> b) -> s -> t \end{spec} \end{remark} \begin{definition}\label{def:adapter} Let \(\mathbf{C}\) and \(\mathbf{D}\) be any two \Vt{categories}; \textbf{adapters}~\cite{kmett15} are morphisms in \(\mathbf{C}^{op} \otimes \mathbf{D}\). They are optics for the action of the identity functor; \(\mathbf{Adapter} \cong \mathbf{Optic}_{\mathrm{id},\mathrm{id}}\). \[\mathbf{Adapter} ((A, B), (S, T)) \coloneqq \mathbf{C}(S,A) \otimes \mathbf{D}(B,T).\] \end{definition} \subsection{Optics for (co)free}\label{sec:opticsforcofree} A common pattern that appears across many optic derivations is that of computing the optic associated to a class of functors using an adjunction to allow for an application of the Yoneda lemma. This observation can be generalized to a class of concrete optics. Consider some \Vt{endofunctor} \(H \colon \V \to \V\). Any objects \(S,T,A,B \in \V\) can be regarded as functors from the unit \Vt{category}. 
The following isomorphisms are the consequence of the fact that left and right global Kan extensions are left and right adjoints to precomposition, respectively. \begin{align*} \V(S,HA) \cong [\V,\V]( \mathsf{Lan}_A S,H),\\ \V(HB,T) \cong [\V,\V](H, \mathsf{Ran}_B T). \end{align*} These extensions exist in \(\V\) and they are given by the formulas \[ \mathsf{Lan}_AS \cong [A,-] \otimes S, \quad \mathsf{Ran}_BT \cong [[-,B],T]. \] \begin{proposition}[{{\citealp[\S 3.4.7]{roman19}}}]\label{prop:opticfree} Let the monoidal \Vt{action} \(\U \colon \mathbf{M} \to [\V,\V]\) have a left adjoint \(\mathcal{L} \colon [\V,\V] \to \mathbf{M}\), or, dually, let it have a right adjoint \(\mathcal{R} \colon [\V,\V] \to \mathbf{M}\). In both of these cases the optic determined by that monoidal action has a concrete form, given by \[\V(\mathcal{UL}([A,-] \otimes S)(B),T) \quad\mbox{ or }\quad \V(S,\mathcal{UR} [[-,B],T](A)),\] respectively. \end{proposition} \begin{proof} We prove the first case. The second one is dual. \begin{align*} & \int^{M \in \mathbf{M}} \V(S, \U MA) \otimes \V(\U MB,T) \\ \cong & \qquad\mbox{(Kan extension)} \\ & \int^{M \in \mathbf{M}} \mathrm{Nat}\left( \mathsf{Lan}_A S , \U M \right) \otimes \V(\U MB,T) \\ \cong & \qquad\mbox{(Adjunction)} \\ & \int^{M \in \mathbf{M}} \mathbf{M}\left(\mathcal{L} \mathsf{Lan}_{A}S , M \right) \otimes \V(\U MB,T) \\ \cong & \qquad\mbox{(coYoneda)} \\ & \V \left( \mathcal{UL} \mathsf{Lan}_{A}S (B) , T \right). & \qedhere \end{align*} \end{proof} \section{Tambara theory}\label{sec:tambaratheory} A fundamental feature of optics is that they can be represented as a single polymorphic function. Optics that admit this representation are called \emph{profunctor optics}, and that polymorphic function is their \emph{profunctor representation}. 
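As a hedged Haskell sketch of the representation used by profunctor optics libraries, a profunctor optic is a polymorphic function of the following shape, where the constraint \hask{c} (a constraint kind, illustrative here) encodes the algebraic structure required of the profunctor \hask{p}:
\begin{spec}
type Optic c s t a b = forall p . c p => p a b -> p s t
\end{spec}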
The benefit of the profunctor representation is that it makes it particularly easy to compose two optics, even if they are from different families: when optics are represented by polymorphic functions, composition of optics becomes ordinary function composition, as in Figure~\ref{fig:example1}. Profunctor optics can be seen as functions polymorphic over profunctors endowed with some extra algebraic structure. This extra structure depends on the family of the optic they represent. For instance, \emph{lenses} are represented by functions polymorphic over \emph{cartesian} profunctors, while \emph{prisms} are represented by functions polymorphic over \emph{cocartesian} profunctors~\cite[\S 3]{pickering17}. Milewski notes that the algebraic structures accompanying these profunctors are precisely \emph{Tambara modules}~\cite{milewski17}, a particular kind of profunctor that had been used to characterize the monoidal centre of convolution monoidal categories~\cite{tambara06}. Furthermore, it was shown that, because of this, categories of lenses or prisms can be obtained as particular cases of the ``Doubles for monoidal categories'' defined by Pastro and Street~\cite[\S 6]{pastro08}. The \emph{double} of an arbitrary monoidal \Vt{category} \((\AA,\otimes,I)\) is a promonoidal \Vt{category} \(\mathcal{DA}\) whose hom-objects are defined as \[ \mathcal{DA}((A,B),(S,T)) \coloneqq\int^{C \in \AA} \AA(S,C \otimes A) \otimes \AA(C \otimes B, T). \] In the particular case where \(\AA\) is cartesian or cocartesian, the \Vt{category} \(\mathcal{DA}\) is precisely the category of lenses or prisms over \(\AA\), respectively. Moreover, one of the main results of Pastro and Street~\cite[Proposition 6.1]{pastro08} states that the \Vt{category} of copresheaves over these \Vt{categories}, \([\mathcal{DA}, \V]\), is equivalent to the \Vt{category} of Tambara modules on \(\AA\).
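In Haskell, the cartesian and cocartesian profunctors mentioned above correspond to the \hask{Strong} and \hask{Choice} classes of the \texttt{profunctors} package; the following is a minimal sketch of their interfaces.
\begin{spec}
class Profunctor p => Strong p where
  first' :: p a b -> p (a, c) (b, c)

class Profunctor p => Choice p where
  left'  :: p a b -> p (Either a c) (Either b c)
\end{spec}
A lens is then a function \hask{forall p . Strong p => p a b -> p s t}, and a prism is the same with \hask{Choice}.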
In the case of lenses and prisms, these Tambara modules are precisely cartesian and cocartesian profunctors, and this justifies their profunctor representation. Surprisingly, the results of Pastro and Street can be applied to the theory of optics after the observations of Milewski~\cite{milewski17}. However, they are not general enough. Milewski~\cite{milewski17} already proposed a unified description of optics, later extended by Boisseau~\cite{boisseau18} and Riley~\cite{riley18}, that required a generalization of the original result by Pastro and Street from monoidal products to arbitrary monoidal actions. If we want to capture \Vt{enriched} mixed optics, we need to go even further and generalize Pastro and Street's definition of Tambara module~\cite[\S 3]{pastro08} in two directions. The monoidal category \(\AA\) in their definition needs to be substituted by a pair of arbitrary categories \(\mathbf{C}\) and \(\mathbf{D}\), and the monoidal product \((\otimes) \colon \AA \to [\AA,\AA]\) is substituted by a pair of arbitrary monoidal actions \((\actL{}{}) \colon \mathbf{M} \otimes \mathbf{C} \to \mathbf{C}\) and \((\actR{}{}) \colon \mathbf{M} \otimes \mathbf{D} \to \mathbf{D}\), from a common monoidal category \(\mathbf{M}\). This section can be seen both as a partial exposition of one of the main results from Pastro and Street's ``Doubles for monoidal categories''~\cite[Proposition 6.1]{pastro08} and as a generalization of their definitions and propositions to the setting that is most useful for applications in functional programming. \subsection{Generalized Tambara modules} Originally, \emph{Tambara modules}~\cite{tambara06} were conceived as a structure on top of certain profunctors that makes them interact nicely with some monoidal action in both its covariant and contravariant components. In functional programming, Tambara modules represent the different ways in which we can use an optic.
If $P$ has Tambara module structure for the monoidal actions defining an optic, we can use that optic to lift the Tambara module applied to the foci, $P(A,B)$, to the Tambara module applied to the full structures, $P(S,T)$. For instance, the profunctor $(-) \times B \to (-)$ can be used to lift the projection $A \times B \to B$ into the \hask{update} function $S \times B \to T$. In other words, that profunctor is a Tambara module compatible with all the families of optics that admit an \hask{update} function, such as \emph{lenses}. In programming libraries, this can be used to define a polymorphic \hask{update} combinator that works across different optic families. Formally, we want to prove that Tambara modules for the actions \(\actL{}{}\) and \(\actR{}{}\) are copresheaves over the category \(\mathbf{Optic}_{\actL{}{},\actR{}{}}\). This will also justify the profunctor representation of optics in terms of Tambara modules (Theorem~\ref{th:profrep}). \begin{definition}\label{def:tambara} Let \((\mathbf{M},\otimes,I)\) be a monoidal \Vt{category} with two monoidal actions \((\actL{}{}) \colon \mathbf{M} \otimes \mathbf{C} \to \mathbf{C}\) and \((\actR{}{}) \colon \mathbf{M} \otimes \mathbf{D} \to \mathbf{D}\). A generalized \textbf{Tambara module} consists of a \Vt{profunctor} \(P \colon \mathbf{C}^{op} \otimes \mathbf{D} \to \V\) endowed with a family of morphisms \[\alpha_{M,A,B} \colon P(A,B) \to P(\actL{M}{A} , \actR{M}{B})\] \Vt{natural} in \(A \in \mathbf{C}\) and \(B \in \mathbf{D}\) and \Vt{dinatural} in \(M \in \mathbf{M}\), which additionally satisfies the two equations \begin{align*} P(\phi_{A},\varphi^{-1}_{B}) \circ \alpha_{I,A,B} &= \id, \\ P(\phi_{M,N,A},\varphi^{-1}_{M,N,B}) \circ \alpha_{M \otimes N,A,B} &= \alpha_{M,\actL{N}{A},\actR{N}{B}} \circ \alpha_{N,A,B}, \end{align*} for every \(M,N \in \mathbf{M}\), every \(A \in \mathbf{C}\) and every \(B \in \mathbf{D}\). That is, a family of morphisms making the following two diagrams commute.
\[\begin{tikzcd}[column sep=4em] P(A,B) \rar{\alpha_{M \otimes N,A,B}}\dar[swap]{\alpha_{N,A,B}} & P(\actL{M \otimes N}{A},\actR{M \otimes N}{B}) \dar{P(\phi_{M,N,A}, \varphi^{-1}_{M,N,B})} \\ P(\actL{N}{A}, \actR{N}{B}) \rar[swap]{\alpha_{M,\actL{N}{A},\actR{N}{B}}} & P(\actL{M}{\actL{N}{A}}, \actR{M}{\actR{N}{B}}) \\ P(A,B) \drar[swap]{\id} \rar{\alpha_{I,A,B}} & P(\actL{I}{A}, \actR{I}{B}) \dar{P(\phi_{I,A},\varphi^{-1}_{I,B})} \\ & P(A,B) \end{tikzcd}\] When \(\V = \mathbf{Sets}\), we can define a morphism of Tambara modules as a natural transformation \(\eta_{A,B} \colon P(A,B) \to Q(A,B)\) satisfying \[\eta_{\actL{M}{A},\actR{M}{B}} \circ \alpha_{M,A,B} = \alpha'_{M,A,B}\circ \eta_{A, B}.\] For an arbitrary \(\V\), Tambara modules are the objects of a \Vt{category} \(\mathbf{Tamb}\) whose hom-object from \((P,\alpha)\) to \((Q,\alpha')\) is computed as the intersection of the equalizers of \begin{align*} \int_{A,B} \V(P(A,B),Q(A,B)&) \\ \xrightarrow{\phantom{AAAAAAA}\pi_{A,B}\phantom{AAAAAAA}}\ & \V(P(A,B), Q(A,B)) \\ \xrightarrow{\phantom{AAAAA}\V(\id,\alpha'_{M,A,B})\phantom{AAAAA}}\ & \V(P(A,B), Q(\actL{M}{A},\actR{M}{B})) \end{align*} and \begin{align*} \int_{A,B} \V(P(A,B&),Q(A,B)) \\ \xrightarrow{\phantom{AA}\pi_{\actL{M}{A},\actR{M}{B}}\phantom{AA}}\ & \V(P(\actL{M}{A},\actR{M}{B}), Q(\actL{M}{A},\actR{M}{B})) \\ \xrightarrow{\phantom{xA}\V(\alpha_{M,A,B},\id)\phantom{Ax}}\ & \V(P(A,B), Q(\actL{M}{A},\actR{M}{B})) \end{align*} for each \(A \in \mathbf{C}\), \(B \in \mathbf{D}\) and \(M \in \mathbf{M}\)~\cite[\S 3.2]{pastro08}. \end{definition} \begin{remark} Pastro and Street~\cite{pastro08} follow the convention of omitting the unitors and the associators of the monoidal category when defining Tambara modules. These appear in Definition~\ref{def:tambara} replaced by the structure isomorphisms of the two monoidal actions.
\end{remark} \subsection{Pastro-Street's ``double'' comonad} In order to prove their main result, Pastro and Street characterized Tambara modules as coalgebras for a particular comonad~\cite[\S 5]{pastro08}. That comonad has a left adjoint that must therefore be a monad, and then Tambara modules can be equivalently described as algebras for that monad. Following the same technique, we will describe the \Vt{category} of \emph{generalized} Tambara modules \(\mathbf{Tamb}\) as an Eilenberg-Moore category, first for a comonad and then for its left adjoint monad. This will be the main lemma (Lemma~\ref{lemma:tambaracoalg}) towards the profunctor representation theorem (Theorem~\ref{th:profrep}). \begin{definition}\label{def:pastrofunctor} We start by constructing the underlying \Vt{functor} of the comonad \(\Theta \colon \mathbf{Prof}(\mathbf{C},\mathbf{D}) \to \mathbf{Prof}(\mathbf{C},\mathbf{D})\). Consider first the \Vt{functor} \[T \colon \mathbf{M}^{op} \otimes \mathbf{M} \otimes \mathbf{C}^{op} \otimes \mathbf{D} \otimes \mathbf{Prof}(\mathbf{C},\mathbf{D}) \to \V, \] given by the composition of the actions \((\actL{}{})\) and \((\actR{}{})\) with the evaluation \Vt{functor} \(\mathbf{C}^{op} \otimes \mathbf{D} \otimes \mathbf{Prof}(\mathbf{C},\mathbf{D}) \to \V\). On objects, this is given by \[ T(M,N,A,B,P) \coloneqq P(\actL{M}{A},\actR{N}{B}). \] By the universal property of the end, taking the end over \(\mathbf{M}\) induces a \Vt{functor} \(\mathbf{C}^{op} \otimes \mathbf{D} \otimes \mathbf{Prof}(\mathbf{C},\mathbf{D}) \to \V\) given by \[ T(A,B,P) = \int_{M \in \mathbf{M}} P(\actL{M}{A},\actR{M}{B}), \] which can be curried into the \Vt{functor} \[\Theta P(A,B) \coloneqq \int_{M \in \mathbf{M}} P(\actL{M}{A}, \actR{M}{B}). \] \end{definition} \begin{proposition}\label{prop:pastrocomonad} The \Vt{functor} \(\Theta\) can be endowed with a comonad structure.
Its counit is the \Vt{natural} family of morphisms \(\varepsilon_{P,A,B} \colon \Theta P (A,B) \to P(A,B)\) obtained by projecting on the monoidal unit and applying the unitors, \[\begin{tikzcd}[column sep=4em] \Theta P(A,B) \rar{\pi_{I}} & P(\actL{I}{A}, \actR{I}{B}) \rar{P(\phi_{A}, \varphi^{-1}_{B})} & P(A,B). \end{tikzcd}\] Its comultiplication is given by \(\delta_{P,A,B} \colon \Theta P(A,B) \to \Theta \Theta P(A,B)\), the \Vt{natural} family of transformations obtained as the unique morphisms factorizing \[\begin{tikzcd}[column sep=1.5em] \displaystyle \Theta P (A,B) \ar{r}{P\left(\phi_{M,N,A},\varphi^{-1}_{M,N,B}\right) \circ \pi_{M \otimes N} } &[18ex] P(\actL{M}{\actL{N}{A}}, \actR{M}{\actR{N}{B}}) \end{tikzcd}\] through the projection \[\begin{tikzcd}[column sep=4em] \Theta \Theta P (A,B) \rar{\pi_M \circ \pi_N} & P(\actL{M}{\actL{N}{A}} , \actR{M}{\actR{N}{B}}) \end{tikzcd}\] for every \(M,N \in \mathbf{M}\). \end{proposition} \begin{remark}\label{remark:axiomsmonoidal} Before the proof, let us recall the axioms for a strong monoidal \Vt{action} \(\oslash \colon \mathbf{M} \otimes \mathbf{C} \to \mathbf{C}\) of a monoidal \Vt{category} \((\mathbf{M},\otimes,I)\) with coherence isomorphisms \((a,\lambda,\rho)\) to an arbitrary category \(\mathbf{C}\). Let \[\begin{aligned} & \phi_{A} \colon A \cong I \oslash A, && \phi_{M,N,A} \colon M \oslash N \oslash A \cong M \otimes N \oslash A, \\ \end{aligned}\] be the structure \Vt{natural} isomorphisms of the strong monoidal action. Note that the following are precisely the axioms for a strong monoidal functor \(\mathbf{M} \to [\mathbf{C},\mathbf{C}]\) written as \(\mathbf{M} \otimes \mathbf{C} \to \mathbf{C}\); they are simplified by the fact that \([\mathbf{C},\mathbf{C}]\) is strict. 
\[\begin{tikzcd} I \oslash M \oslash A \dar[swap]{\phi_{I,M,A}} & M \oslash A \lar[swap]{\phi_{M \oslash A}} \rar{M \oslash \phi_A} \dar{id} & M \oslash I \oslash A \dar{\phi_{M,I,A}} \\ I \otimes M \oslash A \rar[swap]{\lambda_M \oslash A}& M \oslash A & M \otimes I \oslash A \lar{\rho_M \oslash A} \end{tikzcd}\] \[\begin{tikzcd} M \oslash N \oslash K \oslash A \rar{\id} \dar[swap]{\phi_{M,N,K\oslash A}} & M \oslash N \oslash K \oslash A \dar{M \oslash \phi_{N,K,A}} \\ M \otimes N \oslash K \oslash A \dar[swap]{\phi_{M \otimes N,K,A}} & M \oslash N \otimes K \oslash A \dar{\phi_{M,N \otimes K,A}} \\ (M \otimes N) \otimes K \oslash A \rar[swap]{a_{M,N,K} \oslash A} & M \otimes (N \otimes K) \oslash A \end{tikzcd}\] \end{remark} \begin{proof} In order to keep the diagrams in this proof small, we introduce the notation \(P[M](A,B) \coloneqq P(\actL{M}{A},\actR{M}{B})\). We will show that this construction satisfies the comonad axioms. We first prove left counitality, \(\Theta \varepsilon_P \circ \delta_P = \mathrm{id}_{\Theta P}\), which follows from the commutativity of the following diagram. \[\begin{tikzcd} \Theta P(A,B) \rar{\delta_P} \ar{dd}[swap]{\pi_{I \otimes M}} & \Theta\Theta P(A,B) \rar{(\Theta \varepsilon)_P} \dar{\pi_M} & \Theta P(A,B) \dar{\pi_M} \\ & \Theta P[M](A,B) \dar{\pi_I} \rar{\varepsilon_{P[M]}} & P[M](A,B) \dar{\id} \\ P[I \otimes M](A,B) \ar[swap]{r}[yshift=-1ex]{P(\phi_{I,M,A},\varphi^{-1}_{I,M,B})} & P[M][I](A,B) \ar[swap]{r}[yshift=-1ex]{P(\phi_{\actL{M}{A}},\varphi^{-1}_{\actR{M}{B}})} & P[M](A,B) \end{tikzcd}\] The left pentagon of this diagram commutes because of the definition of \(\delta\). The upper right square commutes because of functoriality of ends and naturality of \(\pi_M\). The lower right square commutes because of the definition of \(\varepsilon\). 
By the axioms of the monoidal actions (Remark~\ref{remark:axiomsmonoidal}), the lower side of the square can be rewritten as \begin{align*} P(\phi_{\actL{M}{A}},\varphi^{-1}_{\actR{M}{B}}) \circ P(\phi_{I,M,A},\varphi^{-1}_{I,M,B}) = P(\lambda^{-1}_M \oslash A, \lambda_M \oslash B). \end{align*} Now, by the wedge condition of the end, one of the sides of the previous diagram is just the projection \(\pi_M\). \[\begin{tikzcd}[column sep=small] & \Theta P(A,B) \dlar[swap]{\pi_{I \otimes M}} \drar{\pi_M}& \\ P[I \otimes M](A,B) \drar[swap]{P(\id,\lambda_M \oslash B)} && P[M](A,B) \dlar{P(\lambda_M \oslash A,\id)} \\ & P(\actL{I \otimes M}{A},\actR{M}{B}) & \end{tikzcd}\] Finally, by the universal property of the end, that implies that \(\Theta \varepsilon_P \circ \delta_P\) must coincide with the identity. Let us now prove right counitality, \(\varepsilon_{\Theta P} \circ \delta_P = \id_{\Theta P}\), which follows from the commutativity of the following diagram. \[\begin{tikzcd} \Theta P(A,B) \rar{\delta_P} \ar{dd}[swap]{\pi_{M \otimes I}} & \Theta\Theta P(A,B) \rar{ \varepsilon_{\Theta P}} \dar{\pi_I} & \Theta P(A,B) \dar{\id} \\ & \Theta P[I](A,B) \dar{\pi_M} \rar{\Theta P(\phi_A, \varphi_B^{-1})} & \Theta P(A,B) \dar{\pi_M} \\ P[M \otimes I](A,B) \ar[swap]{r}[yshift=-1ex]{P(\phi_{M,I,A},\varphi^{-1}_{M,I,B})} & P[I][M](A,B) \ar[swap]{r}[yshift=-1ex]{P(\phi_{\actL{M}{A}},\varphi^{-1}_{\actR{M}{B}})} & P[M](A,B) \end{tikzcd}\] Again, the definition of \(\delta\) makes the left pentagon commute. The upper right square commutes now because of the definition of \(\varepsilon\), whereas the lower right square commutes because of functoriality of ends and naturality of \(\pi\). By the axioms of the monoidal actions (Remark~\ref{remark:axiomsmonoidal}), the lower side of the square can be rewritten as \[ P(\phi_{\actL{M}{A}},\varphi^{-1}_{\actR{M}{B}}) \circ P(\phi_{M,I,A},\varphi^{-1}_{M,I,B}) = P(\rho_M^{-1} \oslash A, \rho_M \oslash B).
\] Now, by the wedge condition of the end, one of the sides of the previous diagram is just the projection \(\pi_M\). \[\begin{tikzcd}[column sep=small] & \Theta P(A,B) \dlar[swap]{\pi_{M \otimes I}} \drar{\pi_M}& \\ P[M \otimes I](A,B) \drar[swap]{P(\id,\rho_M \oslash B)} && P[M](A,B) \dlar{P(\rho_M \oslash A,\id)} \\ & P(\actL{M \otimes I}{A},\actR{M}{B}) & \end{tikzcd}\] Finally, by the universal property of the end, \(\varepsilon_{\Theta P} \circ \delta_P\) must coincide with the identity. Coassociativity, \(\Theta\delta_P\circ\delta_{P} = \delta_{\Theta P}\circ\delta_P\), follows from commutativity of the diagram in Figure~\ref{fig:diagramassoc}. We need to show that the upper diamond commutes; by the universal property of the ends, this amounts to showing that it commutes when followed by \(\pi_M \circ \pi_N \circ \pi_K\). The lower pentagon is made of isomorphisms, and it commutes by the axioms of the monoidal actions (Remark~\ref{remark:axiomsmonoidal}). The two upper degenerate pentagons commute by definition of \(\delta\). The two trapezoids commute by functoriality of ends and naturality of the projections. Finally, the two outermost morphisms of the diagram correspond to two projections from \(\Theta P(A,B)\), namely \(\pi_{(M \otimes N) \otimes K}\) and \(\pi_{M \otimes (N \otimes K)}\). The wedge condition for the associator \(a_{M,N,K}\) makes the external diagram commute.
\end{proof} \begin{figure}[!h] \centering \begin{tikzcd} & \Theta P(A,B) \ar{rd}{\delta_{P}} \ar[swap]{dl}{\delta_{P}} & \\ \Theta\Theta P(A,B) \ar[swap]{rd}{\Theta\delta_{P}} \ar[swap]{ddd}{\pi_K} && \Theta\Theta P(A,B) \ar{ddd}{\pi_{N \otimes K}} \ar{ld}{\delta_{\Theta P}} \\ & \Theta \Theta \Theta P(A,B) \ar{d}{\pi_K} \\ & \Theta \Theta P[K](A,B) \ar{dd}{\pi_N} & \\ \Theta P[K](A,B) \urar{\delta_{P[K]}} \ar[swap]{ddd}{\pi_{M \otimes N}} && \Theta P[N \otimes K](A,B) \ar{ddd}{\pi_M} \dlar[near end]{\Theta P(\phi_{N,K,A},\varphi_{N,K,B}^{-1})} \\ & \Theta P[K][N](A,B) \dar{\pi_M} &\\ & P[K][N][M](A,B) \\ P[K][M \otimes N](A,B) \urar[near end,xshift=0.5em]{P(\phi_{M,N,\actL{K}{A}}, \varphi_{M,N,\actR{K}{B}}^{-1})} \dar{P(\phi_{M \otimes N,K,A},\varphi^{-1}_{M \otimes N,K,B})} && P[N \otimes K][M](A,B) \ular[swap, near end]{P(\phi_{N,K,A},\varphi_{N,K,B}^{-1})} \dar[swap]{P(\phi_{M, N\otimes K,A},\varphi^{-1}_{M, N\otimes K,B})} \\ P[(M\otimes N)\otimes K](A,B) \ar{rr}{P(a^{-1}_{M,N,K},a_{M,N,K})} && P[M\otimes (N\otimes K)](A,B) \end{tikzcd} \caption{Diagram for the coassociativity axiom.}\label{fig:diagramassoc} \end{figure} \begin{lemma}\label{lemma:tambaracoalg} Tambara modules are precisely coalgebras for this comonad. There exists an isomorphism of \Vt{categories} \(\mathbf{Tamb} \cong EM(\Theta)\) from the category of Tambara modules to the Eilenberg-Moore category of \(\Theta\). \end{lemma} \begin{proof} Note that the object of \Vt{natural} transformations from \(P\) to \(\Theta P\) is precisely \begin{align*} & \mathbf{Prof}(\mathbf{C},\mathbf{D})(P,\Theta P) \\ \cong & \quad\mbox{(Definition)} \\ & \int_{A,B} \V\left(P(A,B),\int_{M \in \mathbf{M}} P(\actL{M}{A}, \actR{M}{B})\right) \\ \cong & \quad\mbox{(Continuity)} \\ & \int_{A,B,M} \V\left(P(A,B), P(\actL{M}{A}, \actR{M}{B})\right) \end{align*} whose elements can be seen as a family of morphisms that is natural in both \(M \in \mathbf{M}\) and \((A,B) \in \mathbf{C}^{op} \otimes \mathbf{D}\). 
The two conditions in the definition of Tambara module can be rewritten as the axioms of the coalgebra. \end{proof} \begin{proposition} The \(\Theta\) comonad has a left \Vt{adjoint} \(\Phi\), which must therefore be a monad. On objects, it is given by the following formula. \[\Phi Q(X,Y) = \int^{M,U,V} Q(U,V) \otimes \mathbf{C}(X,\actL{M}{U}) \otimes \mathbf{D}(\actR{M}{V},Y).\] That is, there exists a \Vt{natural} isomorphism \(\mathrm{Nat}(\Phi Q,P) \cong \mathrm{Nat}(Q,\Theta P)\). \end{proposition} \begin{proof} We can explicitly construct the \Vt{natural} isomorphism using coend calculus. \noindent \resizebox{\linewidth}{!}{ \begin{minipage}{\linewidth} \begin{align*} & \int_{A,B} \V \left(Q(A,B), \int_{M}P(\actL{M}{A}, \actR{M}{B})\right) \\ \cong & \quad\mbox{(Continuity)} \\ & \int_{M,A,B} \V(Q(A,B), P(\actL{M}{A}, \actR{M}{B})) \\ \cong & \quad\mbox{(Yoneda, in the category $\mathbf{C}^{op}\otimes \mathbf{D}$)} \\ & \int_{M,A,B} \V\left( Q(A,B) , \int_{X,Y} \V \Big(\mathbf{C}(X, \actL{M}{A}) \otimes \mathbf{D}(\actR{M}{B},Y) , P(X,Y)\Big) \right) \\ \cong & \quad\mbox{(Continuity)} \\ & \int_{M,A,B,X,Y} \V \left( Q(A,B) , \V \Big(\mathbf{C}(X, \actL{M}{A}) \otimes \mathbf{D}(\actR{M}{B},Y) , P(X,Y)\Big) \right) \\ \cong & \quad\mbox{(Currying)} \\ & \int_{M,A,B,X,Y} \V(Q(A,B) \otimes \mathbf{C}(X, \actL{M}{A}) \otimes \mathbf{D}(\actR{M}{B}, Y), P(X,Y)) \\ \cong & \quad\mbox{(Continuity)} \\ & \int_{X,Y} \V \left( \int^{M,A,B} Q(A,B) \otimes \mathbf{C}(X, \actL{M}{A}) \otimes \mathbf{D}(\actR{M}{B},Y) , P(X,Y) \right). \\ \end{align*} \end{minipage}} Alternatively, the adjunction can be deduced from the definition of the comonad \(\Theta\) and the characterization of global Kan extensions as adjoints to precomposition. \end{proof} \subsection{Pastro-Street's ``double'' promonad} The second part of this proof occurs in the bicategory of \Vt{profunctors}. 
In this category, there exists a formal analogue of the Kleisli construction that, when applied to the Pastro-Street monad \(\Phi\), yields a category whose morphisms are the optics from Definition~\ref{def:optic}. This is the crucial step of the proof, as the universal property of that Kleisli construction will imply that copresheaves over the category of optics there defined are Tambara modules (Lemma~\ref{lemma:copresheaves}). After that, the last step will be a relatively straightforward application of the Yoneda lemma (Lemma~\ref{lemma:doubleyoneda}). Let \(\mathbf{Prof}\) be the bicategory of \Vt{profunctors} that has as 0-cells the \Vt{categories} \(\mathbf{C},\mathbf{D},\mathbf{E}, \dots\); as 1-cells \(P \colon \mathbf{C} \nrightarrow \mathbf{D}\) the \Vt{profunctors} given as \(P \colon \mathbf{C}^{op} \otimes \mathbf{D} \to \V\); and as 2-cells the natural transformations between them. The composition of two \Vt{profunctors} \(P \colon \mathbf{C}^{op} \otimes \mathbf{D} \to \V\) and \(Q \colon \mathbf{D}^{op} \otimes \mathbf{E} \to \V\) is the \Vt{profunctor} \(Q \diamond P \colon \mathbf{C}^{op} \otimes \mathbf{E} \to \V\) defined on objects by the coend\footnote{Though, in general, the composition of two profunctors can fail to exist for size reasons or when $\V$ lacks certain colimits, we only ever need these composites in a formal sense. This perspective can be formalized with the notion of virtual equipment~\cite{cruttwell09}.} \[ (Q \diamond P)(C,E) = \int^{D \in \mathbf{D}} P(C,D) \otimes Q(D,E). \] There is, however, an equivalent way of defining profunctor composition if we interpret each \Vt{profunctor} \(\mathbf{C}^{op} \otimes \mathbf{D} \to \V\) as a \Vt{functor} \(\mathbf{C}^{op} \to [\mathbf{D},\V]\) to the category of copresheaves. 
In this case, the composition of two profunctors \(P \colon \mathbf{C}^{op} \to [\mathbf{D},\V]\) and \(Q \colon \mathbf{D}^{op} \to [\mathbf{E},\V]\) is the \Vt{functor} \((Q \diamond P) \colon \mathbf{C}^{op} \to [\mathbf{E},\V]\) defined by taking a left Kan extension \((Q \diamond P) \coloneqq \mathsf{Lan}_y Q \circ P\) along the Yoneda embedding \(y \colon \mathbf{D}^{op} \to [\mathbf{D},\V]\). The unit profunctor for composition is precisely the Yoneda embedding. \[\begin{tikzcd} & \mathbf{D}^{op} \rar{Q} \dar[swap]{y} & \left[\mathbf{E}, \V \right] \\ \mathbf{C}^{op} \rar{P} & \left[\mathbf{D}, \V \right] \urar[swap]{\mathsf{Lan}_y Q \circ P} & \end{tikzcd}\] In the same way that we can construct a Kleisli category over a monad, we will perform a Kleisli construction over the monoids of the bicategory \(\mathbf{Prof}\), which we call \textbf{promonads}. Promonads over the base category \(\V\) that are also Tambara modules for the product appear frequently in the literature on functional programming languages under the name of \emph{arrows}~\cite{hughes00}. \begin{definition} A \textbf{promonad} is given by a \Vt{category} \(\mathbf{A}\), an endoprofunctor \(T \colon \mathbf{A}^{op} \otimes \mathbf{A} \to \V\), and two \Vt{natural} families \(\eta_{X,Y} \colon \mathbf{A}(X,Y) \to T(X,Y)\) and \(\mu_{X,Y} \colon (T \diamond T)(X,Y) \to T(X,Y)\) satisfying the following unitality and associativity axioms. \[\begin{tikzcd} T \rar{\eta \diamond \id}\drar[swap]{\id} & T\diamond T \dar{\mu} & \lar[swap]{\id \diamond \eta}\dlar{\id} T &[-2em] T \diamond T \diamond T\dar[swap]{\id \diamond \mu} \rar{\mu \diamond \id} & T \diamond T \dar{\mu} \\ & T && T \diamond T \rar{\mu}& T \end{tikzcd}\] A module for the promonad is a \Vt{profunctor} \(P \colon \mathbf{X}^{op} \otimes \mathbf{A} \to \V\), together with a \Vt{natural} transformation \(\rho \colon T \diamond P \to P\) making the following diagrams commute. 
\[\begin{tikzcd} P \rar{\eta \diamond \id} \drar[swap]{\id} & T \diamond P \dar{\rho} & T \diamond T \diamond P \rar{\mu \diamond \id} \dar[swap]{\id \diamond \rho} & T \diamond P \dar{\rho} \\ & P & T \diamond P \rar{\rho} & P \end{tikzcd}\] An algebra is a module structure on a \Vt{copresheaf} \(P \colon \mathbf{A} \to \V\), interpreted as a profunctor \(\mathbf{I}^{op} \otimes \mathbf{A} \to \V\) from the unit \Vt{category}. \end{definition} \begin{lemma}\label{lemma:kleisli} The bicategory \(\mathbf{Prof}\) admits the Kleisli construction~\cite[\S 6]{pastro08}. The Kleisli \Vt{category} \(\Kl(T)\) for a promonad \((T, \mu, \eta)\) over \(\mathbf{A}\) is constructed as having the same objects as \(\mathbf{A}\) and letting the hom-object between \(X,Y \in \mathbf{A}\) be precisely \(T(X,Y) \in \V\). \end{lemma} \begin{proof} The multiplication of the promonad is a \Vt{natural} transformation whose components can be taken as the definition for the composition of the \Vt{category} \(\Kl(T)\). \begin{align*} & \V\left(\int^{Z \in \mathbf{A}} T(X,Z) \otimes T(Z,Y) , T(X,Y)\right) \\ \cong & \quad\mbox{(Continuity)} \\ & \int_{Z \in \mathbf{A}} \V\left(T(X,Z) \otimes T(Z,Y) , T(X,Y)\right) \end{align*} Let us show now that this \Vt{category} satisfies the universal property of the Kleisli construction. Let \(P \colon \mathbf{X}^{op} \otimes \mathbf{A} \to \V\) be a \Vt{profunctor}. A module structure \(\rho \colon T \diamond P \to P\) corresponds to a way of lifting the profunctor \(P\) to the Kleisli category in its second argument. \begin{align*} & \int_{X \in \mathbf{X}, Z \in \mathbf{A}} \V\left(\int^{Y \in \mathbf{A}} P(X,Y) \otimes T(Y,Z), P(X,Z) \right)\\ \cong & \quad\mbox{(Continuity)} \\ & \int_{X,Y,Z} \V(P(X,Y) \otimes T(Y,Z), P(X,Z)) \\ \cong & \quad\mbox{(Exponential)} \\ & \int_{X,Y,Z} \V(T(Y,Z), [P(X,Y), P(X,Z)]). \end{align*} Functoriality of this family follows from the algebra axioms. 
\end{proof} \begin{lemma}\label{lemma:copresheaves} The category of algebras for a promonad is equivalent to the copresheaf category over its Kleisli object. \end{lemma} \begin{proof} Let \(\mathbf{X}\) be any category and \(\Phi \colon \mathbf{Y} \nrightarrow \mathbf{Y}\) a promonad. By the universal property of the Kleisli construction (see Lemma~\ref{lemma:kleisli}), \(\mathbf{Prof}(\mathbf{X},\operatorname{Kl}(\Phi))\) is equivalent to the category of modules on \(\mathbf{X}\) for the promonad. In particular, \Vt{profunctors} from the unit \Vt{category} to the Kleisli object form precisely the category \(\EM(\Phi)\) of algebras for the promonad; thus \[[\operatorname{Kl}(\Phi),\V] \cong [\mathbf{I}^{op} \otimes \operatorname{Kl}(\Phi),\V] \cong \mathbf{Prof}(\mathbf{I},\operatorname{Kl}(\Phi)) \cong \EM(\Phi).\qedhere\] \end{proof} \begin{proposition}\label{prop:cocontinous} Let \(T \colon [\mathbf{A},\V] \to [\mathbf{A},\V]\) be a cocontinuous monad. The profunctor \(\check{T} \colon \mathbf{A}^{op} \to [\mathbf{A},\V]\) defined by \(\check{T} \coloneqq T \circ y\) can be given a promonad structure. Moreover, algebras for \(T\) are precisely algebras for the promonad \(\check{T}\). \end{proposition} \begin{proof} First, the fact that \(T\) is cocontinuous means that it preserves left Kan extensions and thus, \[\mathsf{Lan}_y\check{T} \cong \mathsf{Lan}_y(T \circ y) \cong T \circ \mathsf{Lan}_y(y) \cong T.\] This means that the composition of the profunctor \(\check{T}\) with itself is \[\check{T} \diamond \check{T} \cong \mathsf{Lan}_y \check{T} \circ T\circ y \cong T \circ T \circ y. \] The unit and multiplication of the promonad are then obtained by whiskering the unit and multiplication of the monad with the Yoneda embedding; that is, \((\eta \ast y) \colon y \to T\circ y\) and \((\mu \ast y) \colon T \circ T \circ y \to T \circ y\). 
The diagrams for associativity and unitality for the promonad are the whiskering by the Yoneda embedding of the same diagrams for the monad. In fact, the same reasoning yields that, for any \(P \colon \mathbf{D}^{op} \to [\mathbf{A}, \V]\), \[\check{T} \diamond P \cong (T \circ y) \diamond P \cong \mathsf{Lan}_y(T \circ y)\circ P \cong T \circ P.\] As a consequence, for the case \(P \colon \mathbf{I}^{op} \to [\mathbf{A},\V]\), any \(T\mbox{-algebra}\) can be seen as a \(\check{T}\mbox{-algebra}\) and vice versa. The axioms for the promonad structure on \(\check{T}\) coincide with the axioms for the corresponding monad on \(T\). \end{proof} In particular, the Pastro-Street monad \(\Phi\) is a left adjoint. That implies that it is cocontinuous and, because of Proposition~\ref{prop:cocontinous}, it induces a promonad \({\check \Phi} = \Phi \circ y\), having Tambara modules as algebras. We can compute by the Yoneda lemma that \[{\check \Phi}(A,B,S,T) = \int^{M} \mathbf{C}(S,\actL{M}{A}) \otimes \mathbf{D}(\actR{M}{B}, T). \] Note that this coincides with Definition~\ref{def:optic}. We now define \(\mathbf{Optic}\) to be the Kleisli \Vt{category} for the promonad \({\check\Phi}\). \subsection{Profunctor representation theorem} Let us zoom out to the big picture again. It has been observed that optics can be composed using their profunctor representation; that is, profunctor optics can be endowed with a natural categorical structure. On the other hand, we have generalized the double construction by Pastro and Street~\cite{pastro08} to abstractly obtain the category \(\mathbf{Optic}\). The last missing piece that makes both coincide is the \textit{profunctor representation theorem}, which justifies the profunctor representation of optics and shows that their composition in profunctor form is the usual function composition. 
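Unfolding the coend in the formula for \({\check\Phi}\), a morphism of \(\mathbf{Optic}\) is presented by a pair of maps \(s \mapsto (m,a)\) and \((m,b) \mapsto t\) with the residual \(m\) kept existential, and composition in the Kleisli category pairs up the residuals. As an illustration only, the following Python sketch implements this concrete presentation together with its action on the ``function'' Tambara module, specializing the profunctor representation; all names (\texttt{Optic}, \texttt{fst}, \texttt{as\_function\_transformer}) and the use of arbitrary Python values as residuals are our own choices, not part of the formal development.

```python
from typing import Any, Callable, Tuple

class Optic:
    """A concrete optic: a pair s -> (m, a), (m, b) -> t with an
    existentially quantified residual m (here: any Python value)."""
    def __init__(self,
                 l: Callable[[Any], Tuple[Any, Any]],
                 r: Callable[[Tuple[Any, Any]], Any]):
        self.l = l  # decompose the source into (residual, focus)
        self.r = r  # recombine (residual, new focus) into the new source

    def then(self, inner: "Optic") -> "Optic":
        """Kleisli-style composition: residuals accumulate as nested
        pairs, mirroring the monoidal product of residuals."""
        def l(s):
            m, a = self.l(s)
            n, x = inner.l(a)
            return (m, n), x
        def r(mb):
            (m, n), b = mb
            return self.r((m, inner.r((n, b))))
        return Optic(l, r)

# A lens focusing on the first component of a pair; the residual is
# the second component.
fst = Optic(lambda s: (s[1], s[0]), lambda mb: (mb[1], mb[0]))

def as_function_transformer(o: Optic) -> Callable:
    """Image of an optic under the 'function' Tambara module:
    the map p(A,B) -> p(S,T) of the profunctor representation,
    specialized to the profunctor of plain functions."""
    def act(f):
        def g(s):
            m, a = o.l(s)
            return o.r((m, f(a)))
        return g
    return act
```

For instance, \texttt{as\_function\_transformer(fst.then(fst))} turns an update of the inner focus into an update of the whole nested pair.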
The profunctor representation theorem for the case \(\V = \mathbf{Sets}\) and non-mixed optics has been discussed by Boisseau and Gibbons~\cite[Theorem 4.2]{boisseau18}. Although our statement is more general and the proof technique is different, the main idea is the same. In both cases, the key insight is the following lemma, already described by Milewski~\cite{milewski17}. \begin{lemma}[``Double Yoneda'' from Milewski~\cite{milewski17}]\label{lemma:doubleyoneda} For any \Vt{category} \(\mathbf{A}\), the hom-object between \(X\) and \(Y\) is \Vt{naturally} isomorphic to the object of \Vt{natural} transformations between the functors that evaluate copresheaves in \(X\) and \(Y\); that is, \[\mathbf{A}(X,Y) \cong [ [ \mathbf{A} , \V ] , \V ](-(X),-(Y)).\] The isomorphism is given by the canonical maps \(\mathbf{A}(X,Y) \to \V(FX,FY)\) for each \(F \in [\mathbf{A}, \V]\). Its inverse is given by computing its value on the identity on the \(\mathbf{A}(X,-)\) component. \end{lemma} \begin{proof} In the functor \Vt{category} \([ \mathbf{A} , \V]\), we can apply the Yoneda embedding to two representable functors \(\mathbf{A}(Y,-)\) and \(\mathbf{A}(X,-)\) to get \[\mathrm{Nat}(\mathbf{A}(Y,-) , \mathbf{A}(X,-)) \cong \int_F \V\Big([\mathbf{A}(X,-),F], [\mathbf{A}(Y,-),F] \Big). \] Reducing by the Yoneda lemma on the left-hand side and on the two arguments of the right-hand side, we get the desired result. \end{proof} \begin{theorem}[Profunctor representation theorem]\label{th:profrep} \[\int_{P \in \mathbf{Tamb}} \V(P(A,B) , P(S,T)) \cong \mathbf{Optic}((A,B),(S,T)). \] \end{theorem} \begin{proof} We apply Double Yoneda (Lemma~\ref{lemma:doubleyoneda}) to the \Vt{category} \(\mathbf{Optic}\) and then use that copresheaves over it are precisely Tambara modules (Lemma~\ref{lemma:copresheaves}). \end{proof} \section{Conclusions} We have extended a result by Pastro and Street to a setting that is useful for optics in functional programming. 
Using it, we have refined some of the optics already present in the literature to mixed optics, providing derivations for each one of them. We have also described new optics. Regarding functional programming, the work suggests an architecture for a library of optics that would benefit from these results. Instead of implementing each optic separately, the general definition can be instantiated in all the particular cases. We can then just consider specific functions for constructing the more common families of optics. Tambara modules can be used to implement each one of the combinators of the library, ensuring that they work for as many optics as possible. The interested reader can find the implementation in Appendix~\ref{sec:implementation}. Many of the other applications of optics may benefit from the flexibility of enriched and mixed optics. They may be used to capture some \emph{lens}-like constructions and provide a general theory of how they should be studied; the specifics remain as future work. \subsection{Van Laarhoven encoding} This text has focused on the profunctor representation of optics. A similar representation that also provides the benefit of straightforward composition is the \textbf{van Laarhoven encoding}~\cite{laarhoven09}. It is arguably less flexible than the profunctor representation, being based on representable profunctors, but it is more common in practice. For instance, traversals admit a different encoding in terms of profunctors represented by an applicative functor. \begin{align*} & \int_{F \in \mathbf{App}} \V\big(\V(A, FB), \V(S,FT)\big) \\ \cong & \quad\mbox{(Yoneda)} \\ & \int_{F \in \mathbf{App}} \V\big([\V,\V](A \otimes [B,-], F), \V(S,FT)\big) \\ \cong & \quad\mbox{(Free applicative)} \\ & \int_{F \in \mathbf{App}} \V\left(\App \left(\sum_{n \in \mathbb{N}} A^n \otimes [B^n,-], F \right), \V(S,FT)\right) \\ \cong & \quad\mbox{(Yoneda)} \\ & \V\left(S, \sum_{n \in \mathbb{N}} A^n \otimes [B^n , T]\right). 
\end{align*} Exactly the same technique yields lenses and \emph{grates}~\cite{deikun15}, using arbitrary representable or corepresentable profunctors, respectively. \subsection{Related work} \begin{itemize} \item The case of mixed optics was first mentioned by Riley~\cite[\S 6.1]{riley18}, but his work targeted a more restricted case. Specifically, the definitions of \emph{optic} given by Riley~\cite[Definition 2.0.1]{riley18} and Boisseau~\cite[\S 4.5]{boisseau18} deal only with the particular case in which \(\V = \mathbf{Sets}\), \(\mathbf{C}\) and \(\mathbf{D}\) are the same category, and the two actions coincide. The definition we use (Definition~\ref{def:optic}) and the proofs we present are then strictly more general than these other ones~\cite[Proposition 2.0.3]{riley18}, as they address arbitrary monoidal actions instead of monoidal products, and enriched \emph{mixed} optics instead of assuming \(\mathbf{C}=\mathbf{D}\). \item Pastro and Street~\cite{pastro08} first described the construction of \Vt{categories} of the form \(\mathcal{DA}\) in their study of Tambara theory. These results can be reused for optics thanks to the observations of Milewski~\cite{milewski17}. We carry his approach to a more general case. \item The profunctor representation theorem and its implications for functional programming are discussed by Boisseau and Gibbons~\cite{boisseau18}. We present a different approach to a more general version of this theorem. \item We describe a general derivation for optics assuming a suitable adjunction (Section~\ref{sec:opticsforcofree}). Riley describes a different but closely related class of optics and their laws~\cite[\S4.4]{riley18}; ours makes stronger assumptions but may be easier to apply in programming contexts. \item A central aspect of Riley's work is the extension of the concept of \emph{lawful lens} to arbitrary \emph{lawful optics}~\cite[\S 3]{riley18}. Lawfulness is a different topic that we have decided not to address here. 
\item Finally, Riley uses the results of Jaskelioff and O'Connor to propose a description of the traversal in terms of traversable functors~\cite[\S 4.6]{riley18}; our derivation simplifies this approach, which was in principle not suitable for the enriched case. \end{itemize} \subsection{Further work} \begin{itemize} \item A categorical account of how optics of different kinds compose into optics is left for further work. Specifically, it should be able to explain the ``lattice of optics'' described by Pickering, Gibbons and Wu~\cite{pickering17} and later by Boisseau and Gibbons~\cite{boisseau18}. Some preliminary results have been discussed by Román~\cite{roman19}, but the proposal to model the lattice is still too \emph{ad-hoc} to be satisfactory. \item The relation between power series functors and traversables is implicit across the literature on polynomial functors and containers. It can be shown that \emph{traversable} structures over an endofunctor \(T\) correspond to certain parameterised coalgebras using the free applicative construction~\cite{rypacek12}. We believe that it is possible to refine this result relating the laws of traversables to the derivation in our Proposition~\ref{prop:traversal}. This could also make our derivation for traversals more practical for functional programming. It can be noted that lenses are the optic for products, functors that distribute over strong functors. Traversals are the optic for traversables, functors that distribute over applicative functors. Both have a van Laarhoven representation in terms of strong and applicative functors respectively. A construction like this needs a certain Kan extension to be given a coalgebra structure~\cite[Lemma 4.1.3]{roman19}, but it does not necessarily work for any optic. \item Optics have numerous applications in the literature, including game theory~\cite{ghani18}, machine learning~\cite{fong19} or model-driven development~\cite{stevens10}. 
Beyond functional programming, enriched optics open new paths for exploration on applications of optics. Both mixed optics and enriched optics allow us to more precisely adjust the existing definitions to match the desired applications. \item We have not addressed the topic of lawfulness. A first reasonable notion of lawfulness for the case of mixed optics for two actions \((\actL{}{}) \colon \mathbf{M} \otimes \mathbf{C} \to \mathbf{C}\) and \((\actR{}{}) \colon \mathbf{N} \otimes \mathbf{D} \to \mathbf{D}\) is to use a cospan \(\mathbf{C} \to \mathbf{E} \gets \mathbf{D}\) \emph{of actions} to push the two parts of the optic into the same category and then consider lawfulness in \(\mathbf{E}\). This can be applied to monadic lenses, for instance, when the monad is copointed. \end{itemize} \section{Acknowledgements} This work was started in the last author's MSc thesis~\cite{roman19}; development continued at the Applied Category School 2019 at Oxford, and we thank the organizers of the School for that opportunity. We also thank Pawel Sobocinski for comments on the structure of the paper. Fosco Loregian and Mario Román were supported by the European Union through the ESF funded Estonian IT Academy research measure (project 2014-2020.4.05.19-0001). Bryce Clarke is supported by the Australian Government Research Training Program Scholarship. \bibliographystyle{alpha}
\section{Introduction} The ongoing trend of applying \acp{NN} to signal processing tasks for communication systems has led to the demonstration of substantial improvements when compared to conventional systems for a wide range of applications \cite{honkala2020deeprx,samuel2017deep,li2018power}. Especially when focusing on recent results of \ac{NN}-based \ac{OFDM} receivers \cite{honkala2020deeprx, aoudia2020end, Fischer_2021}, where implementations showed comparable, or sometimes even better, performance than conventional state-of-the-art baselines, there is reason to believe that \ac{NN}-based components will play a significant role in future beyond 5G systems \cite{Toward6G_Hoydis_2021}. Based on the assumption that trainable components will be present in future receivers, we want to discuss the opportunity of online retraining during operation to further adapt to current channel conditions. Conventionally, receiver algorithms are designed offline, where they are optimized for best performance on comprehensive channel models, focusing on universal optimal performance. At the same time, these channel models are optimized to mimic the expected average behavior of the real-world channel as accurately as possible. This also holds for \ac{NN}-based receivers, which are typically trained offline on a data-set representing an ensemble of channel realizations generated by the same underlying channel model. Training \ac{NN}-based receivers could also be done using measured data, but this entails several difficulties as the measurements must cover a wide range of different channel conditions to enable the \ac{NN} to generalize to the task, and are therefore expensive. Thus, initially training \ac{NN}-based receivers on generated data is advantageous for generalization due to the randomness introduced by stochastic channel models. 
This has been done in \cite{aoudia2020end, Fischer_2021} and results in similar or even superior performance compared to conventional \ac{LMMSE}-based systems, when also evaluated on the same stochastic channel models. \begin{figure}[t] \begin{center} \input{fig/channel_ensemble_v2.tikz} \end{center} \vspace{-2mm} \caption{Visualization of sub-ensembles representing various channel conditions within a universal training data-set.} \label{fig:channel_ensemble} \vspace{-4mm} \end{figure} However, in an actual real-world system and within a short period of time, only a subset of these universal channel conditions occurs. The receiver rather observes sub-ensembles of conditions, sketched schematically in Fig.~\ref{fig:channel_ensemble}, depending on the area of current operation (rural, urban, city) or situation (velocity, interference). As these \emph{macro} conditions only change slowly, compared to signal processing from the receiver's point of view, we want to investigate the impact of retraining the initially universally optimized receiver for the actual channel conditions. From a deep learning perspective, this approach can be seen as deliberate overfitting, since we propose to retrain the receiver with only the latest data available. In the following, we show, using the example of \ac{NN}-based \ac{OFDM} receivers, that re-optimizing to the current channel conditions leads to gains compared to the universally optimized system in corner cases and demonstrate that retrained receivers can also adapt to initially unseen channel conditions and channel alterations like interference. The paper is structured as follows: Sec.~\ref{sec:system_setup} introduces the channel model and \ac{OFDM} system. In Sec.~\ref{sec:RNN} details on the applied \ac{RNN}-based \ac{OFDM} receiver and the adaptive retraining process are given. Finally, Sec.~\ref{sec:results} presents simulation results and Sec.~\ref{sec:conclusion} concludes the main findings. 
\section{System Setup} \label{sec:system_setup} The ideal channel data to showcase the advantages of online retraining would be temporally continuous ``in-the-field'' measurements of \ac{CSI} for \ac{UE} trajectories covering various different channel conditions. An equally potent alternative to measured data could be ray-tracing-based \ac{CSI}, simulated for \ac{UE} trajectories within large spatially consistent areas. Unfortunately, to the best of our knowledge, neither of these two data sources is currently available in a form that satisfies these requirements. This is why we rely for our simulations on a modified time-varying and frequency-selective stochastic channel model in the spirit of Jakes' and Clarke's models. By carefully manipulating the stochastic model's parameters, e.g., maximum channel delay, \ac{PDP} or \ac{UE} velocity, we can generate stochastic sub-ensembles of channel realizations representing the different channel conditions as simplistically visualized in Fig.~\ref{fig:channel_ensemble}. \subsection{Channel Model and OFDM System} We consider a tapped-delay line channel model with time-varying channel impulse response $h\left(t, \tau\right)$. The time-varying channel impulse response is defined as \begin{equation} h\left(t, \tau\right) = \sum_{\ell=0}^{L-1} a_{\ell}\left(t\right)\delta\left(\tau - \tau_{\ell}\right) \end{equation} where $L$ is the number of resolvable multipath-components, i.e., taps, $a_{\ell}$ is the complex time-varying gain of the ${\ell}$th tap, $\tau_{\ell}$ is the delay of the ${\ell}$th tap\footnote{In the following it is assumed that the delay of the first tap is \unit[0]{ns} and that the delay time is equally spaced with $\nicefrac{1}{B}=\unit[100]{ns}$.} and $\delta\left(.\right)$ is the Dirac delta function. 
For each channel realization, these multipath-components $a_{\ell}$ are randomly generated to hold a certain average power $p_{\ell} = \operatorname{E}\left[|a_{\ell}|^2\right]$ while their absolute value $|a_{\ell}|$ is Rayleigh distributed. This average power $p_{\ell}$ of the ${\ell}$th multipath-component is assumed to follow an exponentially decaying \ac{PDP}. Each channel tap is therefore weighted during its generation with the weight $b_{\ell} = \sqrt{p_{\ell}}$ computed by \begin{equation} \label{eq:exp_dec} b_{\ell} = \frac{1}{\gamma}\sqrt{1-\beta}\cdot \beta^{\nicefrac{{\ell}}{2}} \in \mathbb{R}, \qquad {\ell} = 0,1,...,L-1 \end{equation} where the factor $\gamma$ is chosen such that $\sum_{\ell}|b_{\ell}|^2=1$ and ${0<\beta<1}$ is a variable decay parameter. The Fourier transform of the channel impulse response $h\left(t, \tau\right)$ then yields the channel transfer function $H \left( t,f \right)$. We assume that the considered \ac{OFDM} transmission system operates on frames of $n_\mathrm{T}$ consecutive \ac{OFDM} symbols with parameters given in Tab.~\ref{Tab:Scenario}. Each \ac{OFDM} symbol consists of $N_{\mathrm{Sub}}$ symbols -- either data-carrying or pilot-carrying -- that are transmitted in parallel over the $N_\mathrm{Sub}$ subcarriers. The transmitted information bits $\mathbf{u}$ are encoded and interleaved into the sequence $\mathbf{c}$ of length $n_{\mathrm{d}}\cdot m$ using a 5G NR-compliant \ac{LDPC} code \cite{5G_Code_2018} of length $n=1296$ bit. Here, $n_\mathrm{d}$ denotes the number of transmitted data-carrying symbols within a frame and each data symbol carries the information of $m$ bits (e.g., $m=4$ for a 16 \ac{QAM}). For the simulation in the frequency domain it is assumed that a sufficiently long \ac{CP} is applied and \ac{ISI} is not present. Let $\mathbf{X} \in \mathbb{C}^{n_{\mathrm{T}}\times N_{\mathrm{Sub}}}$ be the transmitted symbols. 
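For illustration, the tap weights $b_{\ell}$ and one realization of the resulting frequency-selective channel can be sketched numerically as follows; the concrete values of $\beta$, $L$ and the number of subcarriers are illustrative choices, the time variation over the frame is omitted, and the function names are our own.

```python
import numpy as np

def pdp_weights(num_taps: int, beta: float) -> np.ndarray:
    """Exponentially decaying PDP weights b_l with sum_l |b_l|^2 = 1."""
    b = np.sqrt(1.0 - beta) * beta ** (np.arange(num_taps) / 2.0)
    return b / np.linalg.norm(b)  # the factor 1/gamma enforces unit power

def channel_realization(num_taps: int, beta: float, n_sub: int,
                        rng: np.random.Generator) -> np.ndarray:
    """One tapped-delay-line realization: complex Gaussian tap gains
    (Rayleigh-distributed magnitudes) scaled by the PDP weights, then
    transformed to the frequency response over n_sub subcarriers."""
    b = pdp_weights(num_taps, beta)
    a = b * (rng.standard_normal(num_taps) +
             1j * rng.standard_normal(num_taps)) / np.sqrt(2.0)
    # frequency response: DFT of the zero-padded impulse response
    return np.fft.fft(a, n=n_sub)
```

Sampling such realizations over consecutive \ac{OFDM} symbols yields one frame of the channel matrix used below.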
After the removal of the \ac{CP} the received symbols $\mathbf{Y}\in\mathbb{C}^{n_{\mathrm{T}}\times N_{\mathrm{Sub}}}$ are given by \begin{equation} \label{eq:received_symbols} \mathbf{Y} = \mathbf{H} \circ \mathbf{X} + \mathbf{N} \end{equation} where $\circ$ denotes the element-wise multiplication, $\mathbf{H}\in \mathbb{C}^{n_{\mathrm{T}}\times N_{\mathrm{Sub}}}$ is the channel matrix and $\mathbf{N}\in \mathbb{C}^{n_{\mathrm{T}}\times N_{\mathrm{Sub}}}$ is the \ac{AWGN} matrix. By sampling $H\left(t,f\right)$ according to the \ac{OFDM} system parameters given in Tab.~\ref{Tab:Scenario} we end up with the channel matrix $\mathbf{H}$ of the current frame. The elements $N_{k,n}$ of the noise matrix $\mathbf{N}$ are independent and identically complex Gaussian distributed according to $N_{k,n}\sim \mathcal{CN}\left(0, \sigma^2\right)$ where $\sigma^2$ denotes the noise power per element. The task at the receiver side is to equalize and demap the received symbols $\mathbf{Y}$. Finally, the obtained soft bit estimates are decoded by a \ac{BP} decoder. \subsection{Iterative LMMSE Baseline} As a state-of-the-art baseline system, we employ a receiver based on the \ac{IEDD} principle. It consists of a data-aided \ac{LMMSE} channel estimator, a (soft-decision) \ac{APP} demapper and a \ac{BP} decoder that iterates and exchanges soft bit information with the estimator and the demapper. For further details the interested reader is referred to \cite{aoudia2020end} and the references therein. \section{Adaptive RNN-based OFDM Receiver} \label{sec:RNN} To demonstrate the advantages of adaptive retraining we consider a trainable \ac{RNN}-based \ac{OFDM} receiver. 
Similar to \cite{honkala2020deeprx,aoudia2020end}, it combines the tasks of channel estimation, equalization and soft-demapping within a single \ac{NN}. \subsection{Neural Network Structure and Training} \begin{figure}[t] \begin{center} \input{fig/RNN_Non_iter_slim.tex} \end{center} \vspace{-2mm} \caption{Block diagram of the \ac{RNN}-based \ac{OFDM} receiver.} \label{fig:RNN_structures} \vspace{-5mm} \end{figure} Fig.~\ref{fig:RNN_structures} provides an overview of the applied \ac{NN} model which is based on the structure that has been used in \cite{Fischer_2021} for the task of channel estimation. The RNN maps the received symbols $\mathbf{Y}$ to a soft bit estimation, interpreted as \acp{LLR} $\mathbf{l}_{\mathrm{RNN}}\in \mathbb{R}^{n_{\mathrm{d}}\cdot m}$. Besides $\mathbf{Y}$, it also takes the transmitted pilot symbols $\mathbf{X}_\mathrm{p} \in \mathbb{C}^{ n_\mathrm{T} \times N_{\mathrm{Sub}}}$, the \ac{LS} channel estimates $\hat{\mathbf{H}}_\mathrm{p,LS}\in \mathbb{C}^{n_{\mathrm{T}} \times N_{\mathrm{Sub}}}$ at pilot positions and the noise standard deviation $\sigma$ into account. The complex-valued inputs are split into their real and imaginary parts and the noise standard deviation is broadcast over the whole frame to match the input tensor shape, so that all inputs can be stacked into one large input tensor. Similar to \cite{Fischer_2021}, the core elements of the \ac{RNN} cell are three bidirectional \ac{LSTM} layers that primarily process the input. The first \ac{LSTM} layer operates along the input's frequency dimension. Next, the output's frequency and time dimensions are permuted, causing the second \ac{LSTM} layer to operate in the time dimension. Finally, the time dimension and the frequency dimension of the second layer's output are again permuted so that the third \ac{LSTM} layer again processes along the frequency dimension of the frame. 
Subsequently, % the \ac{RNN} cell's output is reshaped and processed by two \acp{TDDL}. % Here, every element of the two-dimensional resource grid of the frame is processed separately by these \acp{TDDL} using shared weights. % The \ac{LSTM} cells are applied with TensorFlow's default settings using \ac{tanh} activations, the first \ac{TDDL} uses \acp{ReLU} and the second \ac{TDDL} has no activation function. % In this work, we use 64 units within each \ac{LSTM} layer, the first \ac{TDDL} consists of 8 neurons and the second \ac{TDDL} uses $m$ neurons, i.e., the RNN outputs $m$ values for every position in the resource grid. % After removing the output values at pilot positions, the \ac{RNN}'s reshaped output $\mathbf{l}_{\mathrm{RNN}} \in \mathbb{R}^{n_\mathrm{d}\cdot m}$ can be de-interleaved and utilized by the outer \ac{BP} decoder. % Training of the described \ac{RNN} is carried out in a supervised manner utilizing \ac{SGD} and \ac{BPTT}. During training (initial as well as retraining) the Adam optimizer \cite{Kingma2014} with a learning rate of $\eta = 0.001$ is used to minimize the \ac{BCE} loss between the estimates $\mathbf{l}_{\mathrm{RNN}}$ and the labels $\vec{c}$. The \ac{RNN}-based receiver is initially trained with universal, randomly generated channel realizations from the stochastic channel model covering a vast range of different channel parameters. This kind of initial training results in a universal and robust generalization and allows the \ac{RNN}-based receiver to implicitly gather knowledge of the channel purely through data-driven training \cite{Fischer_2021}. The exact parameters used for initial training are summarized in Tab.~\ref{Tab:Training_Parameters}. \begin{table}[t] \centering \vspace{0.03in} \caption{Parameters for Initial (Universal) Training} \vspace{-1mm} \begin{tabular}{l|l} \toprule Parameter & Value \\ \midrule Epochs / It. 
per epoch / BS & 100 / 1000 / 128 \\ Velocity $v$ & $\unitfrac[0]{km}{h} - \unitfrac[200]{km}{h}$ \\ Signal-to-noise-ratio (SNR) & $\unit[8]{dB} - \unit[30]{dB}$\\ % Number of channel taps $L$ & Ep. 1-50: 4-10; Ep. 51-100: 1-14\\ \ac{PDP} & Exp. decaying with $10\operatorname{log_{10}}\left(\frac{p_{L-1}}{p_0}\right)$\\ & $=\unit[-13]{dB}$ and equally spaced\\ \bottomrule \end{tabular} \label{Tab:Training_Parameters} \vspace{-5.5mm} \end{table} \subsection{Adaptive Retraining via On-the-fly Label Recovery} \label{sec:retraining} In order to allow the \ac{RNN}-based \ac{OFDM} receiver to adapt to current channel conditions, it has to be retrained periodically. To enable a single retraining step, a data-set consisting of multiple recorded OFDM frames (holding the inputs $\mathbf{Y}$, $\mathbf{X}_\mathrm{p}$, $\hat{\mathbf{H}}_\mathrm{p,LS}$ and $\sigma$) and the corresponding labels, i.e., the originally transmitted interleaved coded bits $\mathbf{c}$, must be collected. As the labels $\mathbf{c}$ are required for supervised training, they must either be retrieved via the transmission of pilot-based training sequences (and are thereby known at the receiver side) or via on-the-fly label recovery, as presented in \cite{schibisch2018online}. Whereas pilot-based training sequences would cause a rate loss, the approach proposed in \cite{schibisch2018online} recovers the labels on-the-fly via the outer \ac{FEC} after the decoder has corrected the received bits. Thus, there is no additional rate loss and these labels usually come for free, as most systems rely on \acp{FEC}. To demonstrate the feasibility of on-the-fly label recovery for the task of RNN retraining, we only use labels recovered by the LDPC code after 20 iterations of BP decoding. 
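A minimal sketch of the syndrome-gated label recovery described above follows. The tiny parity-check matrix, the LLR sign convention and the helper names are illustrative assumptions; the actual system uses the $n=1296$ 5G NR LDPC code and 20 BP iterations:

```python
import numpy as np

# Toy parity-check matrix standing in for the 5G NR LDPC code (assumption:
# the real code has n = 1296; this 4-bit example only illustrates the flow).
H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]])

def syndrome_fulfillment(c_hat, H):
    """Fraction of parity checks satisfied by the hard-decided codeword."""
    return float(np.mean((H @ c_hat) % 2 == 0))

def recover_label(llrs, H, min_fulfilled=0.82):
    """Hard-decide the decoder output and keep it as a retraining label only
    if enough parity checks hold (the paper's empirical 82% threshold)."""
    c_hat = (llrs < 0).astype(int)  # sign convention: negative LLR -> bit 1
    if syndrome_fulfillment(c_hat, H) >= min_fulfilled:
        return c_hat  # would be interleaved and stored with the inputs
    return None       # discarded: too many unsatisfied checks

llrs_clean = np.array([5.0, 4.0, 6.0, 3.0])   # decodes to the all-zero word
llrs_noisy = np.array([-5.0, 4.0, 6.0, 3.0])  # one flipped bit
print(recover_label(llrs_clean, H))  # [0 0 0 0] -> kept
print(recover_label(llrs_noisy, H))  # None -> discarded (2/3 checks < 82%)
```

In the real receiver the gate operates on whole batches of recovered frames rather than single toy codewords, but the accept/discard logic is the same.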
The block diagram in Fig.~\ref{fig:on_the_fly_label_recovery} depicts the individual processing steps that allow retraining with recovered labels. % Here, the \ac{RNN} processes the received symbols as described above and outputs an \ac{LLR} for each transmitted bit. These \acp{LLR} $\mathbf{l}_{\mathrm{RNN}}$ are then de-interleaved and further processed by the \ac{BP} decoder. % In normal operation, the decoder makes a final decision on the received information bits $\hat{\mathbf{u}}$ after several iterations of \ac{BP} decoding. In order to build up a labeled data-set for retraining, the decoder additionally outputs its information on the coded bits $\hat{\mathbf{c}}$, i.e., a hard decision on the final variable nodes. These coded bits $\hat{\mathbf{c}}$ are then interleaved to $\tilde{\mathbf{c}}$ and stored together with the corresponding inputs. If enough tuples of inputs and labels are recovered to form a sufficiently large retraining data-set, an update step using supervised \ac{SGD} is performed, aiming to reduce the \ac{BCE} loss. However, one drawback of the described label recovery approach is that, even after sufficient decoding, not all labels can be recovered correctly by a \ac{FEC} code. This is why we use a codeword's error syndrome in combination with the current \ac{SNR} to define a threshold: samples below the threshold are stored in the retraining data-set, while samples above it are discarded. Similar to the findings in \cite{schibisch2018online}, we observed improved performance after retraining even with partly erroneous labels. Only if the number of erroneous labels exceeded a certain level did we observe a degradation after retraining. 
However, this can be avoided by defining the threshold conservatively.% \begin{figure}[t] % \begin{center} \input{fig/On-the-fly-label-recovery.tex} \end{center} \vspace{-2mm} \caption{Block diagram of the retraining process for NN-based receiver adaptation via on-the-fly label recovery \cite{schibisch2018online}.} \label{fig:on_the_fly_label_recovery} \vspace{-8mm} \end{figure} \section{Simulation Results} \label{sec:results} \begin{table}[t] \centering \vspace{0.03in} \caption{OFDM and Channel Model Parameters} \vspace{-1mm} \begin{tabular}{l|l} \toprule Parameter & Value \\ \midrule Number of subcarriers $N_{\mathrm{Sub}}$ & 64 \\ Frame length $n_{\mathrm{T}}$ & 36 \\ Carrier frequency $f_{\mathrm{c}}$ & $\unit[5.9]{GHz}$ \\ Symbol duration including \ac{CP} $T_{\mathrm{S}}$ & $\unit[8]{\mu s}$ \\ Length of the \ac{CP} & $\unit[1.6]{\mu s}$ \\ Bandwidth $B$ & $\unit[10]{MHz}$\\ Data symbol constellation & 16 QAM, $m=4$ bit per symbol \\ Pilot structure/arrangement & Rectangular/Grid \\ Pilot symbol distance & $d_{\mathrm{T}}=15$, $d_\mathrm{F}=5$\\ \ac{PDP} & Exp. decaying with\\ & $10\operatorname{log_{10}}\left(\frac{p_{L-1}}{p_0}\right)=\unit[-13]{dB}$\\ LDPC code & $R_{\mathrm{C}} = \nicefrac{1}{2}$, $n = \unit[1296]{bit}$\\ \bottomrule \end{tabular} \label{Tab:Scenario} \vspace{-4mm} \end{table} To evaluate the effects of adaptive retraining we simulate the performance of various receiver setups in three different scenarios. For each scenario we assume certain channel conditions, simulated by channel model parameters, to be static for a short period of time. Within this time period, which shall represent the \emph{current} channel, we gather retraining data via on-the-fly label recovery as described in Sec.~\ref{sec:retraining}, perform a retraining step of the RNN-based receiver and then evaluate the performance on the same channel conditions. 
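As a quick sanity check, the label recovery period used in the evaluation can be reproduced directly from the frame parameters in Tab.~\ref{Tab:Scenario}, assuming 32 batches of 50 frames each and neglecting the retraining computations themselves:

```python
# Label recovery period: 32 batches x 50 frames, each frame holding
# n_T = 36 OFDM symbols of duration T_S = 8 us (CP included).
n_batches, frames_per_batch = 32, 50
n_T, T_S = 36, 8e-6  # OFDM symbols per frame, symbol duration in seconds

recovery_period = n_batches * frames_per_batch * n_T * T_S
print(recovery_period)  # 0.4608 s
```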
For the following simulation results, a retraining step was executed after 32 batches, each holding 50 frames of input-label tuples, had been collected. With the general simulation parameters given in Tab.~\ref{Tab:Scenario}, this translates to a label recovery time period of $\unit[0.4608]{s}$ and thereby sets a lower bound (neglecting the time for retraining computations) for periodic retraining steps to track channel alterations. To limit the amount of erroneous labels within a recovered retraining data-set, we empirically defined the threshold according to the codeword's error syndrome such that at least $82\%$ of the parity-checks of the recovered labels have to be fulfilled by a batch for it to be used for retraining. In addition, a batch is only used for retraining if the \acs{SNR} $\nicefrac{E_{\mathrm{b}}}{N_0}$ is larger than $\unit[7]{dB}$, resulting in essentially no retraining in the low SNR regime.\footnote{Pilot sequence-based labels are required for retraining in the low SNR regime, as recovered labels based on FEC suffer from high error rates.} Also, each recovered batch is only used once for an SGD weight update iteration and one retraining step is performed separately for every evaluation point at different SNR. For each scenario the performance is measured by the \ac{BER} after forward error correction (post-FEC) and the following receiver systems are analyzed: \begin{itemize} \item \textbf{Universal \ac{RNN}}: Non-iterative RNN-based receiver, % initially trained with the universal parameters summarized in Tab.~\ref{Tab:Training_Parameters}, complemented by 20 iterations of BP decoding. \item \textbf{Adapted \ac{RNN}:} Non-iterative \ac{RNN}-based receiver, initially trained with the universal parameters in Tab.~\ref{Tab:Training_Parameters}, that is adapted to the current channel via one retraining step using on-the-fly recovered labels. Also complemented by 20 iterations of BP decoding. 
\item \textbf{\ac{LMMSE} \ac{IEDD}}: Conventional \ac{LMMSE} \ac{IEDD} baseline system % utilizing an autocorrelation matrix that is matched to the channel (genie knowledge of channel model parameters). % The \ac{BP} decoder executes 5 iterations % before feedback is provided to estimator and demapper. % In total $4\times 5=20$ iterations of BP decoding are executed. \item \textbf{Perfect Knowledge IDD}: Lower limit of the achievable \ac{BER} assuming perfect knowledge of the channel and utilizing an iterative receiver, i.e., exploiting \ac{IDD}. Here, feedback is provided to the demapper after every iteration of \ac{BP} decoding and $\vec{H}$ is known. % In total $20\times 1 = 20$ iterations of BP decoding are executed. \vspace{-2mm} \end{itemize} \subsection{Corner Case (Sub-Ensemble) Scenario} \begin{figure}[t] \begin{center} \input{fig/BER_Retraining_8taps_0kmh_exp_dec_16QAM} \end{center} \vspace{-2mm} \caption{\ac{BER} performance of the investigated receivers in the corner case scenario of no movement and thereby no channel time-variance ($v = \unitfrac[0]{km}{h}$ and moderate $L = 8$ channel taps).} \label{fig:BER_retraining_8taps} \vspace{-5mm} \end{figure} The first scenario investigates the impact of adaptation to corner case conditions using the example of no \ac{UE} movement. For this purpose we set the velocity to $v = \unitfrac[0]{km}{h}$ and choose a moderate number of $L = 8$ channel taps so that the stochastic channel model generates channel realizations that form a sub-ensemble of the universal conditions used for initial training (Tab.~\ref{Tab:Training_Parameters}). As can be seen from the results shown in Fig.~\ref{fig:BER_retraining_8taps}, the unadapted \emph{Universal RNN} already shows a better performance than the conventional \emph{LMMSE IEDD} baseline, thus confirming the findings of \cite{aoudia2020end, Fischer_2021}. 
This gain can be justified by the fact that the RNN-based receiver can additionally exploit the expected distribution of the data-carrying symbols in $\vec{Y}$. However, by adapting the RNN receiver to the current channel conditions, the \emph{Adapted RNN} gains a further \unit[0.1]{dB} of BER performance compared to the \emph{Universal RNN}. Interestingly, this gain is possible although the channel conditions of this scenario were part (a sub-ensemble) of the initial universal training. We assume that retraining on the current channel conditions encourages the RNN to drop conservative assumptions, as channel realizations with high velocity are not part of the retraining data and high-velocity implications are thereby not considered for weight updates. These gains have also been observed for various other corner cases with different parameters within the range of the universal channel ensemble, but due to space limitations we only show this corner case as an example. \subsection{Out-of-Specification (Extreme) Scenario} \begin{figure}[t] \begin{center} \input{fig/BER_Retraining_16taps_100kmh_exp_dec_16QAM} \end{center} \vspace{-2mm} \caption{\ac{BER} performance of the investigated receivers in the extremely frequency-variant (out-of-specification) scenario of $L = 16$ channel taps at a moderate velocity of $v = \unitfrac[100]{km}{h}$.} \label{fig:BER_retraining_16taps} \vspace{-4mm} \end{figure} In the second scenario, we want to focus on the benefit of adaptation in case of unforeseen and extreme channel conditions. Therefore, the results shown in Fig.~\ref{fig:BER_retraining_16taps} were obtained for highly frequency-selective channel conditions with $L = 16$ channel taps at a moderate velocity of $v=\unitfrac[100]{km}{h}$. The simulation results show that the performance of the conventional \emph{LMMSE IEDD} baseline system degrades heavily. 
This is expected as it mainly relies on pilot symbols and the used pilot position spacing in the frequency dimension is not sufficient for $L = 16$ channel taps, rendering this scenario out of specification. Likewise, this scenario is also out of specification for the \emph{Universal RNN}, as initial training only covers channel conditions of up to $L = 14$ channel taps. The performance of the \emph{Universal RNN} also degrades compared to the \emph{Perfect Knowledge IDD} lower limit, but not as much as that of the \emph{LMMSE IEDD} baseline system. This observation is consistent with the findings of \cite{aoudia2020end, Fischer_2021}, which showed that NN-based receivers extract further knowledge about the channel from the provided data-carrying symbols and are therefore more robust against sparse pilot spacing. Most interestingly, the \emph{Adapted RNN} shows significantly improved performance compared to the \emph{Universal RNN}. While there is still a large gap between the performance of the \emph{Adapted RNN} and \emph{Perfect Knowledge IDD}, these results show that adaptation can raise an NN-based receiver to significantly higher operability, even in a scenario that was originally out of specification. \subsection{Interference Scenario} \begin{figure}[t] \begin{center} \input{fig/BER_Retraining_8taps_100kmh_GuardBand_Interference_6dB} \end{center} \vspace{-2mm} \caption{ \ac{BER} performance of the investigated receivers in a scenario with side channel interference, modeled by additive noise of $\unit[6]{dB}$ on the outer four subcarriers, at otherwise moderate conditions with $L = 8$ channel taps and $v = \unitfrac[100]{km}{h}$.} \label{fig:BER_retraining_guard_band_interference} \vspace{-4mm} \end{figure} Finally, we want to showcase a scenario that highlights the flexibility of NN-based receivers and how retraining can even enable adaptation to unseen tasks. 
This is shown using the example of side channel interference, which is modeled by adding noise to the outer four subcarriers, reducing their SNR by $\unit[6]{dB}$. As can be seen from the results shown in Fig.~\ref{fig:BER_retraining_guard_band_interference}, the \emph{LMMSE IEDD} baseline as well as the \emph{Universal RNN} suffer from the added interference, but retraining the RNN-based receiver leads to a performance gain of \unit[0.42]{dB} when we compare the \emph{Adapted RNN} with the \emph{Universal RNN}. In this case the NN-based receiver is able to cope with the new task of incorporating the disturbance on the outer four subcarriers via retraining, while a conventional system would require additional signal processing and cannot simply adapt. \section{Conclusion} \label{sec:conclusion} We have demonstrated that \ac{NN}-based receivers benefit from continuous retraining as they can adapt to current, extreme and previously unforeseen channel conditions. In such cases, adaptation leads to superior performance compared to static receivers that have only been designed and optimized for a universal channel model. Finally, we want to emphasize that these gains come without any additional signaling overhead, as on-the-fly label recovery is sufficient for the retraining process. \bibliographystyle{IEEEtran} \section{Introduction} The ongoing trend of applying \acp{NN} to signal processing tasks for communication systems has led to the demonstration of substantial improvements compared to conventional systems for a wide range of applications \cite{honkala2020deeprx,samuel2017deep,li2018power}. 
Especially in light of recent results on \ac{NN}-based \ac{OFDM} receivers \cite{honkala2020deeprx, aoudia2020end, Fischer_2021}, where implementations showed comparable, or sometimes even better, performance than conventional state-of-the-art baselines% , there is reason to believe that \ac{NN}-based components will play a significant role in future beyond-5G systems \cite{Toward6G_Hoydis_2021}. Based on the assumption that trainable components will be present in future receivers, we want to discuss the opportunity of online retraining during operation to further adapt to current channel conditions. Conventionally, receiver algorithms are designed offline, where they are optimized for best performance on comprehensive channel models, focusing on universally optimal performance. At the same time, these channel models are optimized to mimic the expected average behavior of the real-world channel as accurately as possible. This also holds for \ac{NN}-based receivers, which are typically trained offline on a data-set representing an ensemble of channel realizations generated by the same underlying channel model. Training \ac{NN}-based receivers could also be done using measured data, but this entails several difficulties, as the measurements must cover a wide range of different channel conditions to enable the NN to generalize to the task, and are therefore expensive to obtain. Thus, initially training \ac{NN}-based receivers on generated data is advantageous for generalization due to the randomness introduced by stochastic channel models. This has been done in \cite{aoudia2020end, Fischer_2021} and results in similar or even superior performance compared to conventional \ac{LMMSE}-based systems, when also evaluated on the same stochastic channel models. 
\begin{figure}[t] \begin{center} \input{fig/channel_ensemble_v2.tikz} \end{center} \vspace{-2mm} \caption{Visualization of sub-ensembles representing various channel conditions within a universal training data-set.} \label{fig:channel_ensemble} \vspace{-4mm} \end{figure} However, in an actual real-world system and within a short period of time, only a subset of these universal channel conditions occurs. The receiver rather observes sub-ensembles of conditions, sketched schematically in Fig.~\ref{fig:channel_ensemble}, depending on the area of current operation (rural, urban, city) or the situation (velocity, interference). As these \emph{macro} conditions change only slowly compared to signal processing from the receiver's point of view, we want to investigate the impact of retraining the initially universally optimized receiver for the actual channel conditions. From a deep learning perspective, this approach can be seen as deliberate overfitting, since we propose to retrain the receiver with only the latest data available. In the following, we show, using the example of \ac{NN}-based \ac{OFDM} receivers, that re-optimizing for the current channel conditions leads to gains over the universally optimized system in corner cases, and demonstrate that retrained receivers can also adapt to initially unseen channel conditions and channel alterations like interference. The paper is structured as follows: Sec.~\ref{sec:system_setup} introduces the channel model and \ac{OFDM} system. Sec.~\ref{sec:RNN} provides details on the applied \ac{RNN}-based \ac{OFDM} receiver and the adaptive retraining process. Finally, Sec.~\ref{sec:results} presents simulation results and Sec.~\ref{sec:conclusion} concludes the main findings. 
\section{System Setup} \label{sec:system_setup} The ideal channel data to showcase the advantages of online retraining would be temporally continuous ``in-the-field'' measurements of \ac{CSI} for \ac{UE} trajectories covering various different channel conditions. An equally potent alternative to measured data could be ray-tracing-based \ac{CSI}, simulated for \ac{UE} trajectories within large spatially consistent areas. Unfortunately, to the best of our knowledge, no data source satisfying these requirements is currently available. This is why we rely on a modified time-varying and frequency-selective stochastic channel model in the spirit of Jakes' and Clarke's models for our simulations. By carefully manipulating the stochastic model's parameters, e.g., the maximum channel delay, the \ac{PDP} or the \ac{UE} velocity, we can generate stochastic sub-ensembles of channel realizations representing the different channel conditions, as visualized in simplified form in Fig.~\ref{fig:channel_ensemble}. \subsection{Channel Model and OFDM System} We consider a tapped-delay line channel model with time-varying channel impulse response $h\left(t, \tau\right)$. The time-varying channel impulse response is defined as \begin{equation} h\left(t, \tau\right) = \sum_{\ell=0}^{L-1} a_{\ell}\left(t\right)\delta\left(\tau - \tau_{\ell}\right) \end{equation} where $L$ is the number of resolvable multipath components, i.e., taps, $a_{\ell}$ is the complex time-varying gain of the ${\ell}$th tap, $\tau_{\ell}$ is the delay of the ${\ell}$th tap\footnote{In the following it is assumed that the delay of the first tap is \unit[0]{ns} and that the delay time is equally spaced with $\nicefrac{1}{B}=\unit[100]{ns}$.} and $\delta\left(.\right)$ is the Dirac delta function. 
For each channel realization, these multipath components $a_{\ell}$ are randomly generated to hold a certain average power $p_{\ell} = \operatorname{E}\left[|a_{\ell}|^2\right]$ while their absolute value $|a_{\ell}|$ is Rayleigh distributed. % This average power $p_{\ell}$ of the ${\ell}$th multipath component is assumed to follow an exponentially decaying \ac{PDP}. Each channel tap is therefore weighted during its generation with the weight $b_{\ell} = \sqrt{p_{\ell}}$ computed by \begin{equation} \label{eq:exp_dec} b_{\ell} = \frac{1}{\gamma}\sqrt{1-\beta}\cdot \beta^{\nicefrac{{\ell}}{2}} \in \mathbb{R}, \qquad {\ell} = 0,1,...,L-1 \end{equation} where the factor $\gamma$ is chosen such that $\sum_{\ell}|b_{\ell}|^2=1$ and ${0<\beta<1}$ is a variable decay parameter. The Fourier transform of the channel impulse response $h\left(t, \tau\right)$ then yields the channel transfer function $H \left( t,f \right)$. We assume that the considered \ac{OFDM} transmission system operates on frames of $n_\mathrm{T}$ consecutive \ac{OFDM} symbols with the parameters given in Tab.~\ref{Tab:Scenario}. Each \ac{OFDM} symbol consists of $N_{\mathrm{Sub}}$ symbols -- either data-carrying or pilot-carrying -- that are transmitted in parallel over the $N_\mathrm{Sub}$ subcarriers. The transmitted information bits $\mathbf{u}$ % are encoded and interleaved into the sequence $\mathbf{c}$ of length $n_{\mathrm{d}}\cdot m$ using a 5G NR-compliant \ac{LDPC} code \cite{5G_Code_2018} of length $n=\unit[1296]{bit}$. Here, $n_\mathrm{d}$ denotes the number of transmitted data-carrying symbols within a frame and each data symbol carries the information of $m$ bits (e.g., $m=4$ for a 16 \ac{QAM}). For the simulation in the frequency domain it is assumed that a sufficiently long \ac{CP} is applied and \ac{ISI} is not present. % Let $\mathbf{X} \in \mathbb{C}^{n_{\mathrm{T}}\times N_{\mathrm{Sub}}}$ be the transmitted symbols. 
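The exponentially decaying tap weights $b_\ell$ defined above can be sketched numerically. Here $\beta$ is chosen in closed form such that the last tap is \unit[13]{dB} weaker than the first, matching the PDP used throughout the paper; this particular choice of $\beta$ is our assumption, as the text leaves it as a free parameter:

```python
import numpy as np

def pdp_weights(L, decay_db=-13.0):
    """Exponentially decaying PDP weights b_l with sum(|b_l|^2) = 1.

    beta is picked so that 10*log10(p_{L-1}/p_0) = decay_db (assumption:
    this matches the -13 dB decay quoted for training and evaluation).
    """
    beta = 10.0 ** (decay_db / (10.0 * (L - 1)))
    b = np.sqrt(1.0 - beta) * beta ** (np.arange(L) / 2.0)
    return b / np.linalg.norm(b)  # normalization plays the role of 1/gamma

b = pdp_weights(8)   # L = 8 taps as in the corner case scenario
p = b ** 2           # average tap powers
print(10 * np.log10(p[-1] / p[0]))  # approximately -13 dB
```

Since the normalization cancels in the power ratio, the decay target holds independently of $\gamma$, which only enforces the unit total power.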
After the removal of the \ac{CP} the received symbols $\mathbf{Y}\in\mathbb{C}^{n_{\mathrm{T}}\times N_{\mathrm{Sub}}}$ are given by \begin{equation} \pgfkeysvalueof{/tikz/dsp/label}}{eq:received_symbols} \mathbf{Y} = \mathbf{H} \circ \mathbf{X} + \mathbf{N} \end{equation} where $\circ$ denotes the element-wise multiplication, $\mathbf{H}\in \mathbb{C}^{n_{\mathrm{T}}\times N_{\mathrm{Sub}}}$ is the channel matrix and $\mathbf{N}\in \mathbb{C}^{n_{\mathrm{T}}\times N_{\mathrm{Sub}}}$ is the \ac{AWGN} matrix. By sampling $H\left(t,f\right)$ according to the \ac{OFDM} system parameters given in Tab.~\ref{Tab:Scenario} we end up with the channel matrix $\mathbf{H}$ of the current frame. The elements $N_{k,n}$ of the noise matrix $\mathbf{N}$ are independent and identically complex Gaussian distributed according to $N_{k,n}\sim \mathcal{CN}\left(0, \sigma^2\right)$ where $\sigma^2$ denotes the noise power per element. The task at receiver side is to equalize and demap the received symbols $\mathbf{Y}$. % Finally, the obtained soft bit estimates are decoded by a \ac{BP} decoder. % \subsection{Iterative LMMSE Baseline} As a state-of-the-art baseline system, we employ a receiver based on the \ac{IEDD} principle. % It consists of a data-aided \ac{LMMSE} channel estimator, a (soft-decision) \ac{APP} demapper and a \ac{BP} decoder that iterates and exchanges soft bit information with the estimator and the demapper. For further details the interested reader is referred to \cite{aoudia2020end} and the references therein. \section{Adaptive RNN-based OFDM Receiver} \pgfkeysvalueof{/tikz/dsp/label}}{sec:RNN} To demonstrate the advantages of adaptive retraining we consider a trainable \ac{RNN}-based \ac{OFDM} receiver. 
% Similar to \cite{honkala2020deeprx,aoudia2020end}, it combines the tasks of channel estimation, equalization and soft-demapping within a single \ac{NN}.% \subsection{Neural Network Structure and Training} \begin{figure}[t] \begin{center} \input{fig/RNN_Non_iter_slim.tex} \end{center} \vspace{-2mm} \caption{Block diagram of the \ac{RNN}-based \ac{OFDM} receiver.} \pgfkeysvalueof{/tikz/dsp/label}}{fig:RNN_structures} \vspace{-5mm} \end{figure} Fig.~\ref{fig:RNN_structures} provides an overview of the applied \ac{NN} model which is based on the structure that has been used in \cite{Fischer_2021} for the task of channel estimation. % The RNN maps the received symbols $\mathbf{Y}$ to a soft bit estimation, interpreted as \acp{LLR} $\mathbf{l}_{\mathrm{RNN}}\in \mathbb{R}^{n_{\mathrm{d}}\cdot m}$. % Besides $\mathbf{Y}$, it also takes the transmitted pilot symbols $\mathbf{X}_\mathrm{p} \in \mathbb{C}^{ n_\mathrm{T} \times N_{\mathrm{Sub}}}$% , the \ac{LS} channel estimates $\hat{\mathbf{H}}_\mathrm{p,LS}\in \mathbb{C}^{n_{\mathrm{T}} \times N_{\mathrm{Sub}}}$ at pilot positions and the noise standard deviation $\sigma$ into account. % The complex-valued inputs are split into their real and imaginary parts and the noise standard deviation is broadcasted for the whole frame to match the input tensor shape, so that all inputs can be stacked to one large input tensor. Similar to \cite{Fischer_2021}, the core element of the \ac{RNN} cell are three bidirectional \ac{LSTM} layers that primarily process the input. The first \ac{LSTM} layer operates along the input's frequency dimension. Next, the output's frequency and time dimension are permuted causing the second \ac{LSTM} layer to operate in time dimension. Finally, the time dimension and the frequency dimension of the second layer's output are again permuted so that the third \ac{LSTM} layer again processes along the frequency dimension of the frame. 
Subsequently, % the \ac{RNN} cell's output is reshaped and processed by two \acp{TDDL}. % Here, every element of the two-dimensional resource grid of the frame is processed separately by these \acp{TDDL} using shared weights. % The \ac{LSTM} cells are applied with TensorFlow's default settings using \ac{tanh} activations, the first \ac{TDDL} uses \acp{ReLU} and the second \ac{TDDL} has no activation function. % In this work, we use 64 units within each \ac{LSTM} layer, the first \ac{TDDL} consists of 8 neurons and the second \ac{TDDL} uses $m$ neurons, i.e., the RNN outputs $m$ values for every position in the resource grid. % After removing the output values at pilot positions, the \ac{RNN}'s reshaped output $\mathbf{l}_{\mathrm{RNN}} \in \mathbb{R}^{n_\mathrm{d}\cdot m}$ can be de-interleaved and utilized by the outer \ac{BP} decoder. % Training of the described \ac{RNN} is carried out in a supervised manner utilizing \ac{SGD} and \ac{BPTT}. During training (initial as well as re-training) the Adam optimizer \cite{Kingma2014} with a learning rate of $\eta = 0.001$ is used to minimize the \ac{BCE} loss between estimations $\mathbf{l}_{\mathrm{RNN}}$ and labels $\vec{c}$. The \ac{RNN}-based receiver is initially trained with universal randomly generated channel realizations from the stochastic channel model for a vast range of different channel parameters. This kind of initial training results in an universal and robust generalization and allows the \ac{RNN}-based receiver to implicitly gather knowledge of the channel only through data-driven training \cite{Fischer_2021}. The exact parameters used for initial training are summarized in Tab.~\ref{Tab:Training_Parameters}. \begin{table}[t] \centering \vspace{0.03in} \caption{Parameters for Initial (Universal) Training} \vspace{-1mm} \begin{tabular}{l|l} \toprule Parameter & Value \\ \midrule Epochs / It. 
per epoch / BS & 100 / 1000 / 128 \\ Velocity $v$& $\unitfrac[0]{km}{h}- \unitfrac[200]{km}{h}$ \\ Signal-to-noise-ratio (SNR) & $\unit[8]{dB} - \unit[30]{dB}$\\ % Number of channel taps $L$ & Ep. 1-50: 4-10; Ep. 51-100: 1-14\\ \ac{PDP} & Exp. decaying with $10\operatorname{log_{10}}\left(\frac{p_{L-1}}{p_0}\right)$\\& $=\unit[-13]{dB}$ and equally spaced\\%Exp. decaying with the power \\&in the last resolvable path being\\ & $\unit[13]{dB}$ lower than the power of\\& the first path and equally spaced\\ % \bottomrule \end{tabular} \pgfkeysvalueof{/tikz/dsp/label}}{Tab:Training_Parameters} \vspace{-5.5mm} \end{table} % % % % % % % % % % % % % % % % % % % \subsection{Adaptive Retraining via On-the-fly Label Recovery} \pgfkeysvalueof{/tikz/dsp/label}}{sec:retraining} In order to allow the \ac{RNN}-based \ac{OFDM} receiver to adapt to current channel conditions, it has to be retrained periodically. To enable a single retraining step, a data-set consisting of multiple recorded OFDM frames (holding inputs $\mathbf{Y}$, $\mathbf{X}_\mathrm{p}$, $\hat{\mathbf{H}}_\mathrm{p,LS}$ and $\sigma$) and the corresponding labels, being the originally transmitted interleaved coded bits $\mathbf{c}$, must be collected. As the labels $\mathbf{c}$ are required for supervised training, they must either be retrieved by the transmission of pilot-based training sequences (and are thereby known at the receiver side) or via on-the-fly label recovery, as presented in \cite{schibisch2018online}. Whereas pilot-based training sequences would cause a rate loss, the approach proposed in \cite{schibisch2018online} recovers the labels on-the-fly via the outer \ac{FEC} after the decoder has corrected the received bits. Thus, there is no additional rate loss and these labels usually come for free as most systems rely on \acp{FEC}. To demonstrate the feasibility of on-the-fly label recovery for the task of RNN retraining, we only use labels recovered by the LDPC code after 20 iterations of BP decoding. 
The block diagram in Fig.~\ref{fig:on_the_fly_label_recovery} depicts the individual processing steps that allow retraining with recovered labels. % Therefore, the \ac{RNN} processes the received symbols as described above and outputs an \ac{LLR} for each transmitted bit. These \acp{LLR} $\mathbf{l}_{\mathrm{RNN}}$ are then de-interleaved and further processed by the \ac{BP} decoder. % In normal operation, the decoder makes a final decision on the received information bits $\hat{\mathbf{u}}$ after several iterations of \ac{BP} decoding. But, in order to build up a labeled data-set for retraining, at the same time the decoder also outputs its information on the coded bits $\hat{\mathbf{c}}$, i.e., a hard decision on the final variable nodes. These coded bits $\hat{\mathbf{c}}$ are then interleaved to $\tilde{\mathbf{c}}$ and stored together with the corresponding inputs. If enough tuples of inputs and labels are recovered to form a sufficiently large retraining data-set, an update step using supervised \ac{SGD} is performed, aiming to reduce the \ac{BCE} loss. However, one drawback of the described label recovery approach is, that even after sufficient decoding, not all labels can be recovered correctly by a \ac{FEC} code. This is why we consider a codeword's error syndrome in combination with the current \ac{SNR} to define a threshold for labels that are stored in the retraining data-set, while samples above the threshold are discarded. Similar to the findings in \cite{schibisch2018online} we saw improved performance after retraining even with partly erroneous labels. If the number of erroneous labels exceeded a certain level we saw a degradation after retraining. 
However, this can be avoided by defining the threshold conservatively.
\begin{figure}[t]
\begin{center}
\input{fig/On-the-fly-label-recovery.tex}
\end{center}
\vspace{-2mm}
\caption{Block diagram of the retraining process for NN-based receiver adaptation via on-the-fly label recovery \cite{schibisch2018online}.}
\label{fig:on_the_fly_label_recovery}
\vspace{-8mm}
\end{figure}
\section{Simulation Results}
\label{sec:results}
\begin{table}[t]
\centering
\vspace{0.03in}
\caption{OFDM and Channel Model Parameters}
\vspace{-1mm}
\begin{tabular}{l|l}
\toprule
Parameter & Value \\
\midrule
Number of subcarriers $N_{\mathrm{Sub}}$ & 64 \\
Frame length $n_{\mathrm{T}}$ & 36 \\
Carrier frequency $f_{\mathrm{c}}$ & $\unit[5.9]{GHz}$ \\
Symbol duration including \ac{CP} $T_{\mathrm{S}}$ & $\unit[8]{\mu s}$ \\
Length of the \ac{CP} & $\unit[1.6]{\mu s}$ \\
Bandwidth $B$ & $\unit[10]{MHz}$\\
Data symbol constellation & 16 QAM, $m=4$ bit per symbol \\
Pilot structure/arrangement & Rectangular/Grid \\
Pilot symbol distance & $d_{\mathrm{T}}=15$, $d_\mathrm{F}=5$\\
\ac{PDP} & Exp. decaying with\\
& $10\operatorname{log_{10}}\left(\frac{p_{L-1}}{p_0}\right)=\unit[-13]{dB}$\\
LDPC code & $R_{\mathrm{C}} = \nicefrac{1}{2}$, $n = \unit[1296]{bit}$\\
\bottomrule
\end{tabular}
\label{Tab:Scenario}
\vspace{-4mm}
\end{table}
To evaluate the effects of adaptive retraining, we simulate the performance of various receiver setups in three different scenarios. For each scenario we assume certain channel conditions, simulated by the channel model parameters, to be static for a short period of time. Within this time period, which shall represent the \emph{current} channel, we gather retraining data via on-the-fly label recovery as described in Sec.~\ref{sec:retraining}, perform a retraining step of the RNN-based receiver and then evaluate the performance on the same channel conditions.
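The batch admission rule behind the recovered data-set can be sketched compactly. The following minimal Python sketch uses the empirical thresholds quoted in this section (at least $82\%$ satisfied parity checks and an SNR above $\unit[7]{dB}$); the tiny parity-check matrix and bit vectors are toy stand-ins for the actual LDPC code.

```python
# Illustrative sketch (not from the paper): decide whether a recovered
# batch enters the retraining data-set. A batch is kept only if at least
# 82% of the parity checks are satisfied and the SNR exceeds 7 dB.
import numpy as np

def keep_batch_for_retraining(H, c_hat, snr_db,
                              check_threshold=0.82, snr_threshold_db=7.0):
    """H: parity-check matrix over GF(2); c_hat: hard-decided coded bits."""
    syndrome = H.dot(c_hat) % 2           # zero entries = satisfied checks
    frac_satisfied = 1.0 - syndrome.mean()
    return bool(frac_satisfied >= check_threshold and snr_db > snr_threshold_db)

# Toy single-parity-check code of length 4:
H = np.array([[1, 1, 1, 1]])
print(keep_batch_for_retraining(H, np.array([1, 1, 0, 0]), snr_db=10.0))  # True
print(keep_batch_for_retraining(H, np.array([1, 0, 0, 0]), snr_db=10.0))  # False
```

A conservative choice of `check_threshold` implements exactly the trade-off discussed above: fewer admitted batches, but fewer erroneous labels in the retraining data-set.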
For the following simulation results, a retraining step was executed after 32 batches, each containing 50 frames of input-label tuples, had been collected. With the general simulation parameters given in Tab.~\ref{Tab:Scenario}, this translates to a label recovery time period of $\unit[0.4608]{s}$ and thereby sets a lower bound (neglecting time for retraining computations) for periodic retraining steps to track channel alterations. To limit the amount of erroneous labels within a recovered retraining data-set, we empirically defined the threshold on the codeword's error syndrome such that at least $82\%$ of the parity-checks of a batch's recovered labels have to be fulfilled for the batch to be used for retraining. In addition, a batch is only used for retraining if the \acs{SNR} $\nicefrac{E_{\mathrm{b}}}{N_0}$ is larger than $\unit[7]{dB}$, resulting in essentially no retraining in the low SNR regime.\footnote{Pilot sequence-based labels are required for retraining in the low SNR regime, as recovered labels based on FEC suffer from high error rates.} Also, each recovered batch is only used once for an SGD weight update iteration, and one retraining step is performed separately for every evaluation point at different SNR. For each scenario the performance is measured by the \ac{BER} after forward error correction (post-FEC) and the following receiver systems are analyzed:
\begin{itemize}
\item \textbf{Universal \ac{RNN}}: Non-iterative RNN-based receiver, initially trained with the universal parameters summarized in Tab.~\ref{Tab:Training_Parameters}, complemented by 20 iterations of BP decoding.
\item \textbf{Adapted \ac{RNN}}: Non-iterative \ac{RNN}-based receiver, initially trained with the universal parameters in Tab.~\ref{Tab:Training_Parameters}, that is adapted to the current channel via one retraining step using on-the-fly recovered labels. Also complemented by 20 iterations of BP decoding.
\item \textbf{\ac{LMMSE} \ac{IEDD}}: Conventional \ac{LMMSE} \ac{IEDD} baseline system utilizing an autocorrelation matrix that is matched to the channel (genie knowledge of channel model parameters). The \ac{BP} decoder executes 5 iterations before feedback is provided to estimator and demapper. In total $4\times 5=20$ iterations of BP decoding are executed.
\item \textbf{Perfect Knowledge IDD}: Lower limit of the achievable \ac{BER} assuming perfect knowledge of the channel and utilizing an iterative receiver, i.e., exploiting \ac{IDD}. Here, feedback is provided to the demapper after every iteration of \ac{BP} decoding and $\vec{H}$ is known. In total $20\times 1 = 20$ iterations of BP decoding are executed.
\vspace{-2mm}
\end{itemize}
\subsection{Corner Case (Sub-Ensemble) Scenario}
\begin{figure}[t]
\begin{center}
\input{fig/BER_Retraining_8taps_0kmh_exp_dec_16QAM}
\end{center}
\vspace{-2mm}
\caption{\ac{BER} performance of the investigated receivers in the corner case scenario of no movement and thereby no channel time-variance ($v = \unitfrac[0]{km}{h}$ and moderate $L = 8$ channel taps).}
\label{fig:BER_retraining_8taps}
\vspace{-5mm}
\end{figure}
The first scenario investigates the impact of adaptation to corner case conditions using the example of no \ac{UE} movement. For this purpose we set the velocity to $v = \unitfrac[0]{km}{h}$ and choose a moderate number of $L = 8$ channel taps, so that the stochastic channel model generates channel realizations that form a sub-ensemble of the universal conditions used for initial training (Tab.~\ref{Tab:Training_Parameters}). As can be seen from the results shown in Fig.~\ref{fig:BER_retraining_8taps}, the unadapted \emph{Universal RNN} already shows a better performance than the conventional \emph{LMMSE IEDD} baseline, thus confirming the findings of \cite{aoudia2020end, Fischer_2021}.
This gain can be justified by the fact that the RNN-based receiver can additionally exploit the expected distribution of the data-carrying symbols in $\vec{Y}$. However, by adapting the RNN receiver to the current channel conditions, the \emph{Adapted RNN} can further gain about \unit[0.1]{dB} of BER performance compared to the \emph{Universal RNN}. Interestingly, this gain is possible although the channel conditions of this scenario were part (a sub-ensemble) of the initial universal training. We assume that retraining to current channel conditions allows the RNN to lift conservative assumptions, as channel realizations with high velocity are not part of the retraining data and high-velocity implications are thereby not considered for weight updates. These gains have also been observed for various other corner cases with different parameters within the range of the universal channel ensemble, but due to paper length limits we only show this corner case as an example.
\subsection{Out-of-Specification (Extreme) Scenario}
\begin{figure}[t]
\begin{center}
\input{fig/BER_Retraining_16taps_100kmh_exp_dec_16QAM}
\end{center}
\vspace{-2mm}
\caption{\ac{BER} performance of the investigated receivers in the extremely frequency-variant (out-of-specifications) scenario of $L = 16$ channel taps at a moderate velocity of $v = \unitfrac[100]{km}{h}$.}
\label{fig:BER_retraining_16taps}
\vspace{-4mm}
\end{figure}
In the second scenario, we want to focus on the benefit of adaptation in case of unforeseen and extreme channel conditions. Therefore, the results shown in Fig.~\ref{fig:BER_retraining_16taps} were obtained at highly frequency-selective channel conditions with $L = 16$ channel taps at a moderate velocity of $v=\unitfrac[100]{km}{h}$. The simulation results show that the performance of the conventional \emph{LMMSE IEDD} baseline system degrades heavily.
This is expected, as it mainly relies on pilot symbols and the used pilot position spacing in the frequency dimension is not sufficient for $L = 16$ channel taps, setting this scenario out of specification. Likewise, this scenario is also out of specification for the \emph{Universal RNN}, as initial training only covers channel conditions up to $L = 14$ channel taps. The performance of the \emph{Universal RNN} also degrades compared to the \emph{Perfect Knowledge IDD} lower limit, but not as severely as that of the \emph{LMMSE IEDD} baseline system. This observation is consistent with the findings of \cite{aoudia2020end, Fischer_2021}, which showed that NN-based receivers extract further knowledge about the channel from the provided data-carrying symbols and are therefore more robust against sparse pilot spacing. Most interestingly, the \emph{Adapted RNN} shows significantly improved performance compared to the \emph{Universal RNN}. While there is still a large gap between the performance of the \emph{Adapted RNN} and \emph{Perfect Knowledge IDD}, these results show that adaptation can restore significant operability to an NN-based receiver, even in a scenario that was originally out of specification.
\subsection{Interference Scenario}
\begin{figure}[t]
\begin{center}
\input{fig/BER_Retraining_8taps_100kmh_GuardBand_Interference_6dB}
\end{center}
\vspace{-2mm}
\caption{\ac{BER} performance of the investigated receivers in a scenario with side channel interference, modeled by additive noise of $\unit[6]{dB}$ on the outer four subcarriers, at otherwise moderate conditions with $L = 8$ channel taps and $v = \unitfrac[100]{km}{h}$.}
\label{fig:BER_retraining_guard_band_interference}
\vspace{-4mm}
\end{figure}
Finally, we want to showcase a scenario that highlights the flexibility of NN-based receivers and how retraining can even enable adaptation to unseen tasks.
This is shown using the example of side channel interference, which is modeled by adding noise to the outer four subcarriers, reducing their SNR by $\unit[6]{dB}$. As can be seen from the results shown in Fig.~\ref{fig:BER_retraining_guard_band_interference}, the \emph{LMMSE IEDD} baseline as well as the \emph{Universal RNN} suffer from the added interference, but retraining the RNN-based receiver leads to a performance gain of \unit[0.42]{dB} when we compare the \emph{Adapted RNN} with the \emph{Universal RNN}. In this case the NN-based receiver is able to cope with the new task of incorporating the disturbance on the outer four subcarriers via retraining, whereas a conventional system would require additional signal processing and cannot simply adapt.
\section{Conclusion}
\label{sec:conclusion}
We have demonstrated that \ac{NN}-based receivers benefit from continuous retraining, as they can adapt to current, extreme, and previously unforeseen channel conditions. In such cases, adaptation leads to superior performance compared to static receivers that have only been designed and optimized for a universal channel model. Finally, we want to emphasize that these gains come without any additional signaling overhead, as on-the-fly label recovery is sufficient for the retraining process.
\bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro}
The $p$-adic number system for a prime number $p$ extends the ordinary arithmetic of the rational numbers in a way different from the extension of the rational number system to the real and complex number systems. This extension is achieved by an alternative interpretation of the concept of absolute value. First described by Kurt Hensel in 1897, the $p$-adic numbers were motivated primarily by an attempt to bring the ideas and techniques of power series methods into number theory. Their influence now extends far beyond this. For example, the field of $p$-adic analysis essentially provides an alternative form of calculus. More formally, for a given prime $p$, the field $\Q_p$ of $p$-adic numbers is a completion of the rational numbers. The field $\Q_p$ also carries a topology derived from a metric, which is itself derived from an alternative valuation on the rational numbers. This metric space is complete in the sense that every Cauchy sequence converges to a point in $\Q_p$. This is what allows the development of calculus on $\Q_p$, and it is the interaction of this analytic and algebraic structure which gives the $p$-adic number systems their power and utility. For about a century after the discovery of $p$-adic numbers, they were mainly considered as objects of pure mathematics. However, numerous applications of these numbers have been proposed: to theoretical physics \cite{5,2,3,4}, to quantum mechanics \cite{6}, to $p$-adic-valued physical observables \cite{6}, and many others \cite{K,V}. As in the real case, solving a problem in the $p$-adic setting leads to an equation which must be solved in the field of $p$-adic numbers (see for example \cite{M1, M2, M3, V}). For classification problems of varieties of algebras over the field $\Q_p$ of $p$-adic numbers one has to solve an equation of the form $x^2=a$. The criterion for the solvability of this equation is well known.
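As a quick numerical illustration of that well-known criterion (recalled formally in the preliminaries: for an odd prime $p$, $x^2=a$ is solvable in $\Q_p$ exactly when $\gamma(a)$ is even and the leading digit $a_0$ is a quadratic residue modulo $p$), the test can be sketched in a few lines of code; the function names below are illustrative only and not part of the formal development.

```python
# Illustrative check (odd prime p): x^2 = a is solvable in Q_p iff
# gamma(a) is even and the leading digit a_0 of the unit part is a
# quadratic residue mod p (tested via Euler's criterion).

def padic_data(num, den, p):
    """Return gamma(a) and the leading digit a_0 for a = num/den != 0."""
    gamma = 0
    while num % p == 0:
        num //= p
        gamma += 1
    while den % p == 0:
        den //= p
        gamma -= 1
    a0 = num * pow(den, -1, p) % p        # first digit of the unit part
    return gamma, a0

def is_square_in_Qp(num, den, p):
    gamma, a0 = padic_data(num, den, p)
    if gamma % 2 != 0:
        return False
    return pow(a0, (p - 1) // 2, p) == 1  # Euler's criterion

print(is_square_in_Qp(6, 1, 5))   # True:  6 = 1 + 1*5 and 1 is a QR mod 5
print(is_square_in_Qp(2, 1, 5))   # False: 2 is not a QR mod 5
print(is_square_in_Qp(5, 1, 5))   # False: gamma(5) = 1 is odd
```

(The three-argument `pow` with exponent $-1$ computes the modular inverse and requires Python~3.8 or later.)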
In fact, in the classification of Leibniz algebras of dimensions less than 4 over the field $\Q_p$ (see \cite{Ay}, \cite{Abror}) it is enough to solve the equation $x^2=a$. However, in classification tasks in larger dimensions one has to solve an equation of the form $x^q=a$, $q\geq 2$, in $\Q_p$. In this paper we present a criterion for the solvability of the equation $x^q=a$ in $\Q_p$ (where $p$ is a fixed prime number) for arbitrary $q$ in two cases: $(q,p)=1$ and $q=p$. Moreover, whenever solutions of the equation exist, we present an algorithm for finding them. We also show that any equation $x^q=a$ in $\Q_p$ can be reduced to these two cases. Note that in \cite{MS} the same criterion has been proved by a different method.
\section{Preliminaries}
\subsection{Solvability of congruences.}
The sources of the information in this subsection are \cite{A,B,N}. If $n$ is a positive integer, the integers between 1 and $n-1$ which are coprime to $n$ (or equivalently, the congruence classes coprime to $n$) form a group with multiplication modulo $n$ as the operation; it is denoted by $\Z^\times_n$ and is called the {\it group of units modulo} $n$ or the group of primitive classes modulo $n$. The multiplicative group of integers modulo $n$ is cyclic if and only if $n$ is equal to $1, 2, 4, p^k$ or $2 p^k$, where $p^k$ is a power of an odd prime number. A generator of this cyclic group is called {\it a primitive root modulo} $n$ or a primitive element of $\Z^\times_n$. For any integers $a, b$ and $n$, we say that $a$ is congruent to $b$ modulo $n$ (notated $a \equiv b \mod n$) if $n\mid (b-a)$. The order of $\Z^\times_n$ is given by Euler's totient function $\varphi(n)$. Euler's theorem says that $a^{\varphi(n)}\equiv 1 \mod n$ for every $a$ coprime to $n$; the lowest power of $a$ which is $\equiv 1 \mod n$ is called the multiplicative order of $a$ modulo $n$.
In other words, $a$ is a primitive root modulo $n$ if and only if the multiplicative order of $a$ modulo $n$ equals $\varphi(n)$.
\begin{lemma}\label{l1} Suppose that $m\in \N$ has a primitive root $r$. If $a$ is a positive integer with $(a,m) = 1$, then there is a unique integer $x$ with $1\leq x\leq \varphi(m)$ such that $$r^x\equiv a \mod m.$$
\end{lemma}
\begin{defn} If $m\in \N$ has a primitive root $r$ and $\alpha$ is a positive integer with $(\alpha,m) = 1$, then the unique integer $x$ with $1 \leq x \leq \varphi(m)$ and $r^x\equiv \alpha \mod m$ is called the index (or discrete logarithm) of $\alpha$ to the base $r$ modulo $m$ and denoted by ${\rm ind}_r\alpha$.
\end{defn}
In particular, $r^{{\rm ind}_ra}\equiv a \mod m.$
\begin{thm}\label{t1} Let $m$ be a positive integer with primitive root $r$. If $a, b$ are positive integers coprime to $m$ and $k$ is a positive integer, then

(i) ${\rm ind}_r1\equiv 0 \mod \varphi(m)$

(ii) ${\rm ind}_r(ab)\equiv {\rm ind}_ra + {\rm ind}_rb \mod\varphi(m)$

(iii) ${\rm ind}_ra^k\equiv k\cdot {\rm ind}_ra \mod \varphi(m)$.
\end{thm}
\begin{thm}\label{tv} If $p$ is a prime number, $\alpha\in \N$, $m$ is equal to $p^\alpha$ or $2p^{\alpha}$, and $(n,\varphi(m))=d$, then the congruence $$x^n\equiv a\mod m$$ has a solution if and only if $d$ divides ${\rm ind}_r a$, where $r$ is a primitive root modulo $m$. In case of solvability the congruence has exactly $d$ solutions.
\end{thm}
{\it Fermat's Little Theorem} says: Let $a$ be a nonzero integer and let $p$ be a prime with $p \nmid a$. Then $a^{p-1} \equiv 1 \mod p$.
\begin{prop}\label{p1} Let $a, b, n \in \Z$ with $n\ne 0$. The congruence $ax \equiv b \mod n$ has solutions if and only if $(a, n)\mid b$. When the congruence has a solution $x_0\in \Z$, the full solution set is $ \{{x_0 + tn\over (a, n)} : t \in \Z\}$.
\end{prop}
It follows that the equation $ax = b$ in $\Z_n$ has $(a, n)$ solutions. In particular, if $(a, n) = 1$, then the equation $ax = b$ has a unique solution in $\Z_n$.
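The index criterion above can be verified numerically for small moduli. The following brute-force sketch (illustrative names, suitable only for tiny primes) computes the index with respect to a given primitive root and counts the solutions directly.

```python
# Brute-force illustration of the index criterion: x^n ≡ a (mod p) is
# solvable iff d = gcd(n, p-1) divides ind_r(a), and then there are
# exactly d solutions, where r is a primitive root mod p.
from math import gcd

def index_mod(a, r, p):
    """ind_r(a): the exponent x in {1,...,p-1} with r^x ≡ a (mod p)."""
    for x in range(1, p):
        if pow(r, x, p) == a % p:
            return x
    raise ValueError("r is not a primitive root modulo p")

def solvable(n, a, p, r):
    return index_mod(a, r, p) % gcd(n, p - 1) == 0

def num_solutions(n, a, p):
    return sum(pow(x, n, p) == a % p for x in range(1, p))

# p = 7 with primitive root r = 3: x^2 ≡ 2 (mod 7) has d = 2 solutions
# (x = 3, 4), while x^2 ≡ 3 (mod 7) has none.
print(solvable(2, 2, 7, 3), num_solutions(2, 2, 7))  # True 2
print(solvable(2, 3, 7, 3), num_solutions(2, 3, 7))  # False 0
```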
\subsection{Divisibility of binomial coefficients.} (see \cite{H})
In 1852, Kummer proved that if $m$ and $n$ are nonnegative integers and $p$ is a prime number, then the largest power of $p$ dividing ${m+n\choose m}$ equals $p^c$, where $c$ is the number of carries when $m$ and $n$ are added in base $p$. Equivalently, the exponent of a prime $p$ in ${n\choose k}$ equals the number of nonnegative integers $j$ such that the fractional part of $k/p^j$ is greater than the fractional part of $n/p^j$. It can be deduced from this that ${n\choose k}$ is divisible by $n/\gcd(n,k)$. Another fact: an integer $n\geq 2$ is prime if and only if all the intermediate binomial coefficients ${n\choose k}$, $k=1,\dots,n-1$, are divisible by $n$.
\subsection{$p$-adic numbers.}
Let $\Q$ be the field of rational numbers. Every rational number $x\ne 0$ can be represented in the form $x = p^r{n\over m}$, where $r, n\in \Z$, $m$ is a positive integer, $(p, n) = 1$, $(p, m) = 1$ and $p$ is a fixed prime number. The $p$-adic norm of $x$ is given by $$|x|_p=\left\{\begin{array}{ll} p^{-r}\ \ \mbox{for} \ \ x\ne 0\\ 0\ \ \mbox{for} \ \ x = 0. \end{array}\right. $$ This norm satisfies the so-called strong triangle inequality $$|x+y|_p\leq \max\{|x|_p,|y|_p\},$$ and hence it is a non-Archimedean norm. The completion of $\Q$ with respect to the $p$-adic norm defines the $p$-adic field, which is denoted by $\Q_p$. Any $p$-adic number $x\ne 0$ can be uniquely represented in the canonical form \begin{equation}\label{ek} x = p^{\gamma(x)}(x_0+x_1p+x_2p^2+\dots), \end{equation} where $\gamma=\gamma(x)\in \Z$ and the $x_j$ are integers, $0\leq x_j \leq p - 1$, $x_0 > 0$, $j = 0, 1, 2, \dots$ (see \cite{K,V,S} for more detail). In this case $|x|_p = p^{-\gamma(x)}$.
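The canonical form just described can be computed digit by digit for a rational input; the following small sketch (illustrative names, finitely many digits only) extracts $\gamma(x)$ and the first digits $x_0, x_1, \dots$ of the unit part.

```python
# Illustrative computation of the canonical form
# x = p^gamma * (x_0 + x_1 p + x_2 p^2 + ...) for a nonzero rational
# x = num/den; after removing powers of p, den is coprime to p.
def canonical_form(num, den, p, digits=5):
    gamma = 0
    while num % p == 0:
        num //= p
        gamma += 1
    while den % p == 0:
        den //= p
        gamma -= 1
    xs = []
    for _ in range(digits):
        x_j = num * pow(den, -1, p) % p   # next digit of the unit part
        xs.append(x_j)
        num = (num - x_j * den) // p      # exact: num - x_j*den ≡ 0 (mod p)
    return gamma, xs

print(canonical_form(7, 1, 5))   # 7 = 2 + 1*5, so gamma = 0, digits [2, 1, 0, 0, 0]
print(canonical_form(1, 3, 5))   # 1/3 in Q_5 has gamma = 0 and repeating digits
```

For instance, the output for $1/3$ starts $2, 3, 1, 3, 1, \dots$, and indeed $3\cdot(2+3\cdot 5+1\cdot 5^2+3\cdot 5^3+1\cdot 5^4)\equiv 1 \mod 5^5$.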
\begin{thm}\label{tx2} \cite{B}, \cite{K}, \cite{V} The equation $x^2 = a$, $0\ne a =p^{\gamma(a)}(a_0 + a_1p + \dots)$, $0\leq a_j \leq p - 1$, $a_0 > 0$, has a solution $x\in \Q_p$ if and only if the following conditions are fulfilled:

i) $\gamma(a)$ is even;

ii) $a_0$ is a quadratic residue modulo $p$ if $p\ne 2$; $a_1 = a_2 = 0$ if $p = 2$.
\end{thm}
In this paper we shall generalize this theorem.
\section{Equation $x^q=a$.}
In this section we consider the equation $x^q=a$ in $\Q_p$, where $p$ is a fixed prime number, $q\in \N$ and $a\in \Q_p$. Our goal is to find conditions under which the equation has a solution $x\in \Q_p$. The case $q=2$ is well known (see Theorem \ref{tx2}), therefore we consider $q>2$. We need the following
\begin{lemma}\label{l2} The following are true:

(i) \begin{equation}\label{e3} \left(\sum_{i=0}^\infty x_ip^i\right)^q=x_0^q+\sum_{k=1}^\infty\left(qx_0^{q-1}x_k+N_k(x_0,x_1,\dots,x_{k-1})\right)p^k,\end{equation} where $x_0\ne 0$, $0\leq x_j\leq p-1$, $N_1=0$ and for $k\geq 2$ \begin{equation}\label{e4} N_k=N_k(x_0,\dots,x_{k-1})=\\ \sum_{{m_0,m_1,\dots,m_{k-1}:\atop \sum_{i=0}^{k-1}m_i=q,\ \ \sum_{i=1}^{k-1}im_i=k}} {q!\over m_0!m_1!\dots m_{k-1}!}x_0^{m_0}x_1^{m_1}\dots x_{k-1}^{m_{k-1}}. \end{equation}

(ii) Let $q=p$ be a prime number; then $p\mid N_k$ if and only if $p\nmid k$.
\end{lemma}
\proof (i) Formulas (\ref{e3}) and (\ref{e4}) can easily be obtained using the multinomial theorem.

(ii) The following formula is known: \begin{equation}\label{e5} {p!\over m_0!m_1!\dots m_{k-1}!}={m_0 \choose m_0}{m_0+m_1 \choose m_1}\dots {m_0+m_1+\dots+m_{k-1} \choose m_{k-1}}. \end{equation} By the condition $\sum_{i=0}^{k-1}m_i=p$ we get $0\leq m_i\leq p$. Moreover, if $m_{i_0}=p$ for some $i_0$, then $m_{i}=0$ for all $i\ne i_0$. In this case from the condition $\sum_{i=1}^{k-1}im_i=k$ we obtain $k=i_0p$. Assume $p\mid k$, i.e., $k=pk_0$ for some $k_0$, where $1\leq k_0<k-1$.
Putting $m_{k_0}=p$ and $m_i=0$ for $i\ne k_0$, we get $N_k(x_0,\dots,x_{k-1})=x_{k_0}^p+S_k,$ where $$S_k=\sum_{{0\leq m_0,m_1,\dots,m_{k-1}<p:\atop \sum_{i=0}^{k-1}m_i=p,\ \ \sum_{i=1}^{k-1}im_i=k}} {p!\over m_0!m_1!\dots m_{k-1}!}x_0^{m_0}x_1^{m_1}\dots x_{k-1}^{m_{k-1}}.$$ Now $x_{k_0}^p$ is not a multiple of $p$ (otherwise, if we assume that $r^p=pT$ for some $r\in \{2,\dots,p-1\}$ and $T\in \N$, then it follows that $r$ divides $T$ since $p$ is a prime number; hence $r^{p-1}=pT',$ where $T'=T/r$. In the same way we get $r^{p-2}=pT''$ and, iterating the process, we finally get $1=p\tilde{T}$ for some $\tilde {T}\in \N$, which is not possible), whereas $S_k$ is divisible by $p$, since each coefficient of $S_k$ contains (see formula (\ref{e5})) a factor ${p\choose m_{i_0}}$ (with $0<m_{i_0}<p$) which is divisible by $p$ thanks to the divisibility property of binomial coefficients mentioned in the previous section. Therefore $p\nmid N_k$.

Assume $p\nmid k$. Then, by the arguments mentioned above, $m_i<p$ for all $i=0,\dots,k-1$; therefore each term of $N_k$ is divisible by $p$ and consequently $p\mid N_k$. \endproof

3.1. {\sl THE CASE $(q,p)=1$.} In this subsection we analyze under what conditions the equation $x^q=a$ has a solution in $\Q_p$ when $q$ and $p$ are coprime. In this case the following is true.
\begin{thm}\label{t2} Let $q>2$ and $(q,p)=1$. The equation \begin{equation}\label{e1} x^q=a, \end{equation} $0\ne a=p^{\gamma(a)}(a_0+a_1p+\dots)$, $0\leq a_j\leq p-1$, $a_0\ne 0$, has a solution $x\in \Q_p$ if and only if

1) $q$ divides $\gamma(a)$;

2) $a_0$ is a $q$-th power residue $\mod p$.
\end{thm}
\proof {\it Necessity}. Assume that equation (\ref{e1}) has a solution $$ x=p^{\gamma(x)}\left(x_0+x_1p+\dots\right), \ \ 0\leq x_j\leq p-1, x_0\ne 0;$$ then \begin{equation}\label{e2} p^{q\gamma(x)}\left(x_0+x_1p+\dots\right)^q=p^{\gamma(a)}(a_0+a_1p+\dots). \end{equation} Consequently, $\gamma(a)=q\gamma(x)$ and $a_0\equiv x_0^q \mod p$.
{\it Sufficiency.} Let $a$ satisfy conditions 1) and 2). We construct a solution $x$ of equation (\ref{e1}) using the idea of reduction of a $p$-adic number to canonical form by means of a system of carries. Put $$\gamma(x)={1\over q}\gamma(a).$$ Then by condition 2), and since $1\leq a_0\leq p-1$, there exists $x_0$ such that \begin{equation*}\label{e7} x_0^q\equiv a_0 \mod p, \ \ 1\leq x_0\leq p-1. \end{equation*} In other words, there exists $M_1(x_0)$ such that $x_0^q=a_0+M_1(x_0)p.$ Using the notations of Lemma \ref{l2}, and due to the fact that $qx_0^{q-1}$ is not a multiple of $p,$ there exists $x_1$ such that \begin{equation*}\label{e71} qx_0^{q-1}x_1+N_1(x_0)+M_1(x_0)\equiv a_1 \mod p, \ \ 1\leq x_1\leq p-1. \end{equation*} Therefore, there exists $M_2(x_0,x_1)$ such that $qx_0^{q-1}x_1+N_1(x_0)+M_1(x_0)=a_1+M_2(x_0,x_1)p.$ Proceeding in this way, we find $x_n$ such that $$ qx_0^{q-1}x_n+N_n(x_0,\dots,x_{n-1})+M_n(x_0,\dots,x_{n-1})\equiv a_n \mod p, \ \ 1\leq x_n\leq p-1, $$ and $M_{n+1}(x_0,\dots,x_n)$ such that \begin{equation}\label{e72} qx_0^{q-1}x_n+N_n(x_0,\dots,x_{n-1})+M_n(x_0,\dots,x_{n-1})=a_n+M_{n+1}(x_0,\dots, x_{n})p \end{equation} for any $n\in \mathbb{N}.$ Now from Lemma \ref{l2} and equality (\ref{e72}) it follows that $$\left(\sum_{i=0}^\infty x_ip^i\right)^q=x_0^q+\sum_{k=1}^\infty\left(qx_0^{q-1}x_k+N_k(x_0,x_1,\dots,x_{k-1})\right)p^k=$$ $$=a_0+M_1(x_0)p+\sum_{k=1}^\infty\left(a_k-M_k(x_0,\dots,x_{k-1})+M_{k+1}(x_0,\dots,x_k)p\right)p^k=a_0+\sum_{k=1}^\infty a_k p^k.$$ Hence, we have found a solution $x=\displaystyle\sum_{i=0}^\infty x_ip^i$ of equation (\ref{e1}) in its canonical form.\endproof
\begin{rk} Condition 2) of Theorem \ref{t2} is always satisfied if $p=2$; consequently $x^q=a$ has a solution in $\Q_2$ for any odd $q$ and any $a\in \Q_2$ with $\gamma(a)$ divisible by $q$. \end{rk}
\begin{cor}\label{c1} Let $q$ be a prime number such that $q<p$ and $\eta$ be a unity (i.e.
$|\eta|_p=1$) which is not a $q$-th power of any $p$-adic number. Then $p^i\eta^j$, $i,j=0,1,\dots,q-1$ ($i+j\ne 0$), is not a $q$-th power of any $p$-adic number. \end{cor}
\proof We shall check that the conditions of Theorem \ref{t2} fail under the hypothesis. It is easy to see that $\gamma(p^i\eta^j)=i$ for all $i=1,\dots,q-1$ and $j=0,\dots,q-1$; consequently $q$ does not divide $\gamma(p^i\eta^j)$, i.e., condition 1) is not satisfied. But for $\eta^j$, $i=0$, $j=2,\dots, q-1$, condition $1)$ is satisfied, therefore we shall check condition $2)$. Consider the decomposition $\eta=\eta_0+\eta_1p+\dots$; then $\eta^j=\eta_0^j+j\eta_0^{j-1}\eta_1p+\dots$. It is known (see Theorem \ref{t1}) that $\eta^j_0\equiv a_0^q \mod p$ has a solution $a_0$ if and only if ${\rm ind}_{a_0}\eta^j_0$ is divisible by $d=(p-1,q)$. Since $\eta$ is a unity which is not a $q$-th power of any $p$-adic number, the congruence $\eta_0\equiv a_0^q \mod p$ has no solution; thus ${\rm ind}_{a_0}\eta_0$ is not divisible by $d=(p-1,q)$. This property implies that for a prime $q$ with $q<p$ the prime number $p$ has the form $p=qk+1$. Consequently, $d=(p-1,q)=(qk,q)=q$. If ${\rm ind}_{a_0}\eta_0=dl+r$, then ${\rm ind}_{a_0}\eta_0^j\equiv j(ql+r)\mod(p-1)$, i.e., ${\rm ind}_{a_0}\eta_0^j= j(ql+r)+M(p-1)=q(jl+Mk)+jr$, but since $0<r<q-1$, $2\leq j\leq q-1$ and $q$ is a prime number, $jr$ is not divisible by $q$. Thus ${\rm ind}_{a_0}\eta_0^j$ is not divisible by $d=q$; hence condition $2)$ is not satisfied. \endproof
\begin{cor}\label{c2} Let $q$ be a prime number such that $q<p=qk+1$ for some $k\in \N$, and let $\eta$ be a unity which is not a $q$-th power of any $p$-adic number. Then any $p$-adic number $x$ can be written in one of the following forms: $x=\varepsilon_{ij}y_{ij}^q$, where $\varepsilon_{ij}\in \{p^i\eta^j: i,j=0,1,\dots,q-1\}$ and $y_{ij}\in \Q_p$. \end{cor}
\proof Let $\eta=\eta_0+\eta_1p+\dots$ and $\mu=\mu_0+\mu_1p+\dots$ be unities which are not $q$-th powers of $p$-adic numbers.
We shall show that there exists $i\in \{1,\dots,q-1\}$ such that $\mu=\eta^iy^q$ for some $y\in \Q_p$. Note that $\eta^i$ and ${1\over \eta^i}$, $i=1,\dots,q-1$, are not $q$-th powers of $p$-adic numbers. Consider $\mu=\mu_0+\mu_1p+\dots$ and ${1\over\eta^i}=c_{0i}+c_{1i}p+\dots$; then ${\mu\over \eta^i}=\mu_0c_{0i}+(\mu_1c_{0i}+\mu_0c_{1i})p+\dots$ It is easy to see that $\gamma(\mu/\eta^i)=0$; consequently condition $1)$ of Theorem \ref{t2} is satisfied. Indeed, if $\mu_0c_{0i}=pM$, then, since $p$ is a prime number, $p$ would divide $\mu_0$ or $c_{0i}$, which is impossible as both are nonzero digits. Now we shall check condition $2)$ of Theorem \ref{t2}. The equation $x_0^q\equiv \mu_0c_{0i}\mod p$ has a solution if and only if ${\rm ind}_{x_0}\mu_0c_{0i}$ is divisible by $q=(q,p-1)$. We have $c_{0i}\equiv c_{01}^i\mod p$; consequently ${\rm ind}_{x_0}\mu_0c_{0i}\equiv {\rm ind}_{x_0}\mu_0+i \,{\rm ind}_{x_0}c_{01} \mod (p-1)$. Assume ${\rm ind}_{x_0}\mu_0=s$, ${\rm ind}_{x_0}c_{01}=r$, $s,r=1,\dots, q-1$, and take $i$ such that $ir\equiv q-s \mod (p-1)$ (the existence of $i$ follows from the fact that $(r,p-1)=1$). It is clear that if $s$ varies from 1 to $q-1$ then $i$ also varies in $\{1,\dots,q-1\}$. Consequently we get ${\rm ind}_{x_0}\mu_0c_{0i}\equiv s+ir\mod (p-1)\equiv q \mod (p-1)$. Hence condition $2)$ of Theorem \ref{t2} is also satisfied, so there is an $i$ such that $\mu=\eta^i y_i^q$ for some $y_i\in \Q_p$.

For $x\in \Q_p$ with $x=p^{\gamma(x)}(x_0+x_1p+\dots)$, denote $\nu=x_0+x_1p+\dots$ If $\nu$ satisfies the conditions of Theorem \ref{t2}, then $\nu=y^q$ and $x=p^{\gamma(x)}y^q$. If $\nu$ does not satisfy the conditions of Theorem \ref{t2}, then (as was shown above) there exists $i$ such that $\nu=\eta^iy^q$, and in this case we get $x=p^{\gamma(x)}\eta^iy^q$. Taking $\gamma(x)=qN+j$ completes the proof.
\endproof
\begin{cor}\label{c3a} Let $q$ be a prime number such that $q<p\ne qk+1$ for any $k\in \N$. Then any $p$-adic number $x$ can be written in one of the following forms: $x=\varepsilon_{i}y_{i}^q$, where $\varepsilon_{i}\in \{p^i: i=0,1,\dots,q-1\}$ and $y_{i}\in \Q_p$. \end{cor}
\proof Using the arguments in the proof of Corollary \ref{c2}, we conclude that if $p\ne qk+1$ then any unity is a $q$-th power of a $p$-adic number. Hence for $x\in \Q_p$ with $\gamma(x)=qN+i$ we get $x=p^iy^q$, $y\in \Q_p$. \endproof

3.2. {\sl THE CASE $q=p$.} Now we analyze the solvability conditions for the equation $x^p=a$ in $\Q_p$. In this case the following is true.
\begin{thm}\label{t3} Let $q=p$. The equation $x^q=a$, $0\ne a=p^{\gamma(a)}(a_0+a_1p+\dots)$, $0\leq a_j\leq p-1$, $a_0\ne 0$, has a solution $x\in \Q_p$ if and only if

(i) $p$ divides $\gamma(a)$;

(ii) $a_0^p\equiv a_0+a_1p\mod p^2$.
\end{thm}
\proof {\it Necessity}. Assume that equation (\ref{e1}) has a solution $$ x=p^{\gamma(x)}\left(x_0+x_1p+\dots\right), \ \ 0\leq x_j\leq p-1, x_0\ne 0;$$ then using Lemma \ref{l2} we get \begin{equation}\label{e8} a=p^{p\gamma(x)}\left(x_0+x_1p+\dots\right)^p=p^{p\gamma(x)}\left(x_0^p+\sum_{k=1}^\infty(px_0^{p-1}x_k+N_k)p^k\right).
\end{equation} Using part $(ii)$ of Lemma \ref{l2}, we get $${\rm RHS \ \ of \ \ (\ref{e8})}=p^{p\gamma(x)}\left(x_0^p+\sum_{{k=1\atop p\mid k}}^\infty x_0^{p-1}x_kp^{k+1}+ \sum_{{k=1\atop p\mid k}}^\infty N_kp^k+\right.$$ $$\left.\sum_{{k=1\atop p\nmid k}}^\infty(x_0^{p-1}x_k+p^{-1}N_k)p^{k+1}\right)=$$ $$p^{p\gamma(x)}\left(x_0^p+\sum_{k=1}^\infty x_0^{p-1}x_{pk}p^{pk+1}+ \sum_{k=1}^\infty N_{pk}p^{pk}+\right.$$ $$\left.\sum_{i=1}^{p-1}\sum_{k=0}^\infty(x_0^{p-1}x_{pk+i}+p^{-1}N_{pk+i})p^{pk+i+1}\right)=$$ $$p^{p\gamma(x)}\left(x_0^p+\sum_{i=1}^{p-2}(x_0^{p-1}x_i+p^{-1}N_i)p^{i+1}+\right.$$ $$\left.\sum_{k=1}^\infty (x_0^{p-1}x_{pk-1}+N_{pk}+p^{-1}N_{pk-1})p^{pk} +\right.$$ \begin{equation}\label{e9} \left.\sum_{k=1}^\infty x_0^{p-1}x_{pk}p^{pk+1}+\sum_{i=1}^{p-2}\sum_{k=1}^\infty(x_0^{p-1}x_{pk+i}+p^{-1}N_{pk+i})p^{pk+i+1}\right). \end{equation} We have $$ N_{pk}=\sum_{{m_0,\dots,m_{pk-1}:\atop \sum_{i=0}^{pk-1}m_i=p,\ \ \sum_{i=1}^{pk-1}im_i=pk}} {p!\over m_0!\dots m_{pk-1}!}x_0^{m_0}\dots x_{pk-1}^{m_{pk-1}}=$$ \begin{equation}\label{e10} =p(p-1)x_0^{p-2}x_1x_{pk-1}+\tilde{N}_{pk}, \end{equation} where $\tilde{N}_{pk}$ does not depend on $x_{pk-1}$; moreover $p\nmid \tilde{N}_{pk}$. Using (\ref{e10}), from (\ref{e9}) we get $${\rm RHS \ \ of \ \ (\ref{e8})}=p^{p\gamma(x)} \left(x_0^p+\sum_{i=1}^{p-2}(x_0^{p-1}x_i+p^{-1}N_i)p^{i+1}+\right.$$ $$\left.\sum_{k=1}^\infty (x_0^{p-1}x_{pk-1}+\tilde{N}_{pk}+p^{-1}N_{pk-1})p^{pk} +\right.$$ $$ \sum_{k=1}^\infty (x_0^{p-1}x_{pk}-x^{p-2}_0x_1x_{pk-1})p^{pk+1}+$$ $$ \sum_{k=1}^\infty(x_0^{p-1}x_{pk+1}+x_0^{p-2}x_1x_{pk-1}+p^{-1}N_{pk+1})p^{pk+2}+$$ \begin{equation}\label{e11} \left. \sum_{i=2}^{p-2}\sum_{k=1}^\infty(x_0^{p-1}x_{pk+i}+p^{-1}N_{pk+i})p^{pk+i+1}\right). \end{equation} Consequently, $\gamma(a)=p\gamma(x)$, and $x^{p}_0\equiv a_0+a_1p\mod p^2$, $x_0=a_0$. {\it Sufficiency.} Assume that $a$ satisfies the conditions $(i)$ and $(ii)$. 
We construct a solution $x$ of the equation $x^p=a$ by a process of reduction to canonical form similar to the one in the proof of Theorem \ref{t2}. First put $$\gamma(x)={1\over p}\gamma(a).$$ Denote $x_0=a_0$ and let $M_1$ be such that $x_0^p=a_0+a_1p+M_1p^2.$ Proceeding in this way, since $x_0^{p-1}$ is not a multiple of $p$, and taking into account that the integers $N_k$ (see Lemma \ref{l2}) depend only on $x_0,x_1,\dots,x_{k-1}$ and $\tilde{N}_{pk}$ depends only on $x_0,x_1,\dots,x_{pk-2}$, we successively find $x_n$ and introduce a corresponding number $M_{n+1}$ for each $n\geq 1$ such that the following congruences hold: $$1)\,x_0^{p-1}x_i+p^{-1}N_i+M_i\equiv a_{i+1}\mod p,$$ therefore, there exists $M_{i+1}$ such that $x_0^{p-1}x_i+p^{-1}N_i+M_i=a_{i+1}+M_{i+1}p$ for $0\leq x_i\leq p-1, \ \ i=1,\dots,p-2;$ $$2)\,x_0^{p-1}x_{pk-1}+\tilde{N}_{pk}+p^{-1}N_{pk-1}+M_{pk-1}\equiv a_{pk}\mod p,$$ therefore, there exists $M_{pk}$ such that $x_0^{p-1}x_{pk-1}+\tilde{N}_{pk}+p^{-1}N_{pk-1}+M_{pk-1}=a_{pk}+M_{pk}p$ for $k=1,2,\dots;$ $$3)\,x_0^{p-1}x_{pk}-x^{p-2}_0x_1x_{pk-1}+M_{pk}\equiv a_{pk+1}\mod p,$$ therefore, there exists $M_{pk+1}$ such that $x_0^{p-1}x_{pk}-x^{p-2}_0x_1x_{pk-1}+M_{pk}=a_{pk+1}+M_{pk+1}p$ for $k=1,2,\dots;$ $$4)\,x_0^{p-1}x_{pk+1}+x_0^{p-2}x_1x_{pk-1}+p^{-1}N_{pk+1}+M_{pk+1}\equiv a_{pk+2}\mod p,$$ therefore, there exists $M_{pk+2}$ such that $x_0^{p-1}x_{pk+1}+x_0^{p-2}x_1x_{pk-1}+p^{-1}N_{pk+1}+M_{pk+1}=a_{pk+2}+M_{pk+2}p$ for $k=1,2,\dots;$ $$5)\,x_0^{p-1}x_{pk+i}+p^{-1}N_{pk+i}+M_{pk+i}\equiv a_{pk+i+1}\mod p, $$ therefore, there exists $M_{pk+i+1}$ such that $x_0^{p-1}x_{pk+i}+p^{-1}N_{pk+i}+M_{pk+i}=a_{pk+i+1}+M_{pk+i+1}p$ for $i=2,\dots,p-2,\ \ k=1,2,\dots;$ Now making the substitutions above into equality (\ref{e11}) we obtain $${\rm RHS \ \ of \ \ (\ref{e11})}=p^{p\gamma(x)} \left(a_0+a_1p+M_1p^2+\sum_{i=1}^{p-2}(a_{i+1}-M_i+M_{i+1}p)p^{i+1}+\right.$$ $$\sum_{k=1}^\infty (a_{pk}-M_{pk-1}+M_{pk}p)p^{pk}+\sum_{k=1}^\infty
(a_{pk+1}-M_{pk}+M_{pk+1}p)p^{pk+1}+$$ $$\left.\sum_{k=1}^\infty(a_{pk+2}-M_{pk+1}+M_{pk+2}p)p^{pk+2}+\sum_{i=2}^{p-2}\sum_{k=1}^\infty(a_{pk+i+1}-M_{pk+i}+M_{pk+i+1}p)p^{pk+i+1}\right)=$$ $$p^{p\gamma(x)}\left(\sum_{k=0}^\infty a_kp^k +M_1p^2-\sum_{k=1}^\infty (M_k-M_{k+1}p)p^{k+1}\right)=p^{\gamma(a)}\sum_{k=0}^\infty a_kp^k=a.$$ Hence we have found the solution $\displaystyle x=\sum_{k=0}^\infty x_kp^k$ of the equation $x^p=a.$\endproof \begin{cor}\label{c3} Let $p$ be a prime number. a) The numbers $\varepsilon \in {\mathcal E}_1=\{1\}\cup\{i+jp: i^p\not\equiv i+jp \mod p^2\}$, $\delta\in {\mathcal E}_2=\{p^j: j=0,\dots,p-1\}$ and the products $\varepsilon\delta$ are not $p$-th powers of $p$-adic numbers. b) Any $p$-adic number $x$ can be represented in the form $x=\varepsilon\delta y^p$ for some $\varepsilon\in {\mathcal E}_1$, $\delta\in {\mathcal E}_2$ and $y\in \Q_p$. \end{cor} \proof a) Follows from Theorem \ref{t3}. b) If $x=x_0+x_1p+\dots\ne y^p$ for any $y\in \Q_p$ then by Theorem \ref{t3} we have $\varepsilon=x_0+x_1p\in {\mathcal E}_1$. We shall show that ${x\over \varepsilon}={x_0+x_1p+x_2p^2+\dots\over x_0+x_1p}=b_0+b_1p+b_2p^2+\dots$ is a $p$-th power of some $y\in \Q_p$, i.e., we check the conditions of Theorem \ref{t3}: since $\gamma(x/\varepsilon)=0$, condition $(i)$ is satisfied; we have $x_0\equiv x_0b_0 \mod p$ and $x_0+x_1p\equiv x_0b_0+(x_0b_1+x_1b_0)p\mod p^2$, which implies that $b_0=1$ and $b_1=0$; consequently $b_0^p\equiv b_0+b_1p\mod p^2$, i.e., condition (ii) is satisfied. Thus if $x\in \Q_p$ has the form $x=y^p$, then $\varepsilon=\delta=1$. If $x=x_0+x_1p+\dots$ is not a $p$-th power of a $p$-adic number and $\gamma(x)=pN+j$, then we take $\varepsilon=x_0+x_1p$ and $\delta=p^j$; then $x=\varepsilon\delta y^p$ for some $y\in \Q_p$. \endproof Note that for some values of $j$ the congruence $i^p\equiv i+jp\mod p^2$ has no solution $i\in \{1,\dots,p-1\}$.
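Such values of $j$ can be found by a direct finite search: by Fermat's little theorem $i^p\equiv i \mod p$, so each $i$ realizes exactly one residue $j=(i^p-i)/p \mod p$. A short script (ours, added for illustration; the paper's own computation is not shown) lists the unrealized $j$ for a given prime $p$:

```python
def missing_j(p):
    """Values j in {0, ..., p-1} for which i^p = i + j*p (mod p^2)
    has no solution i in {1, ..., p-1}."""
    # Fermat: i^p = i (mod p), so ((i^p mod p^2) - i) / p is the j realized by i.
    realized = {((pow(i, p, p * p) - i) // p) % p for i in range(1, p)}
    return sorted(set(range(p)) - realized)
```

Its output (e.g. `missing_j(7)` returns `[1, 3, 5]`) agrees with the table that follows.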
For example, if $p=3$, then for $j=1$ the congruence has no solution. Using a computer we obtain the following table.\\ \begin{tabular}{|l|l|} \hline & \mbox{Values of}\, $j$ \, \mbox{s.t.}\, $i^p\equiv i+jp\mod p^2$ \, \mbox{has no solution}\, $i\in \{1,\dots,p-1\}$\\ \hline p=3 & 1\\ \hline p=5 & 2\\ \hline p=7 & 1, 3, 5\\ \hline p=11 & 1, 4, 5, 6, 9\\ \hline p=13& 2, 3, 4, 8, 9, 10\\ \hline p=17& 1, 5, 8, 11, 15\\ \hline p=19& 4, 7, 8, 9, 10, 11, 14\\ \hline p=23& 3, 4, 6, 9, 10, 12, 13, 16, 18, 19\\ \hline p=29& 3, 5, 10, 11, 12, 13, 15, 17, 18, 23, 25\\ \hline p=31& 1, 2, 5, 6, 8, 9, 11, 15, 19, 21, 22, 24, 25, 28, 29\\ \hline p=37& 1, 4, 5, 6, 7, 10, 13, 14, 16, 20, 22, 26, 29, 30, 31, 32, 35\\ \hline p=41& 2, 4, 6, 8, 10, 16, 24, 26, 30, 32, 34, 36, 38\\ \hline \end{tabular} \begin{rk} Using this table and Corollary \ref{c3} we obtain $p=3$: Any $x\in \Q_3$ has the form $x=\varepsilon\delta y^3$, where $\varepsilon\in \{1,4,5\}$, $\delta\in \{1,3,9\}$; $p=5$: Any $x\in \Q_5$ has the form $x=\varepsilon\delta y^5$, where $\varepsilon\in \{1,11,12,13,14\}$, $\delta\in \{1,5,25,125,625\}$; $p=7$: Any $x\in \Q_7$ has the form $x=\varepsilon\delta y^7$, where $\varepsilon\in \{1,8,9,10,11,12,13,22,23,24,25,\\ 26,27,28,36,37,38,39,40,41,42\}$, $\delta\in \{7^i: i=0,1,\dots,6\}$. \end{rk} 3.3. {\sl THE CASE $q=mp^s$.} As presented above, the proofs of the sufficiency parts of the solvability criteria in Theorems \ref{t2} and \ref{t3} are constructive. Thus we not only know that a solution exists, but also have an algorithm for constructing it in these cases. After cases 3.1 and 3.2 there remains the case $q=mp^s$ with $m, s\in \N$, $(m,p)=1$. Here we show that this case can be reduced to cases 3.1 and 3.2: we have to find the solvability condition for $x^{mp^s}=a$. Denoting $y=x^{p^s}$, we get $y^m=a$, which is the equation considered in case 3.1.
Assume that the solvability condition for the last equation is satisfied and that its solution is $y={\tilde y}$. Then we have to solve $x^{p^s}={\tilde y}$; here we denote $z=x^{p^{s-1}}$ and get $z^p={\tilde y}$. The last equation is the one considered in case 3.2. Suppose it has a solution $z={\tilde z}$ (i.e., the conditions of Theorem \ref{t3} are satisfied); then we get $x^{p^{s-1}}={\tilde z}$, which again reduces to case 3.2. Iterating this argument $s-1$ times, we obtain $x^p={\tilde a}$ for some ${\tilde a}$, which is also an equation corresponding to case 3.2. Consequently, this argument establishes the solvability condition for the equation $x^{mp^s}=a$, which is a system of the solvability conditions for the equations considered in cases 3.1 and 3.2. \section*{ Acknowledgements} The first author was supported by Ministerio de Ciencia e Innovaci\'on (European FEDER support included), grant MTM2009-14464-C02-02. The second and third authors thank the Department of Applied Mathematics, E.U.I.T. Forestal, University of Vigo, Pontevedra, Spain, for providing financial support for their visit to the Department. We thank F.~M.~Mukhamedov and M.~Saburov for useful discussions on a previous version of our paper. Taking their comments into account, we improved the style of the proofs of Theorems 3.2 and 3.7.
\section{Introduction} \la{sec:intro} Zel'dovich predicted in 1971 that a rotating black hole (BH) would radiate \cite{Zeldovich1,Zeldovich2}. His reasoning was based on the observation that the same physics that causes damping of an incident electromagnetic field by a static dielectric implies that, if the dielectric's surface moves faster than the incident field's phase velocity, then the incident field will be amplified at the expense of the dielectric's kinetic energy. Amplification of radiation by the sort of process that Zel'dovich described is often called ``superradiance'', a term introduced by Misner in 1972 \cite{Misner}. It is an irreversible process, distinct from the equilibrium phenomenon, also identified as ``superradiance'', first described by Dicke in \cite{Dicke} and about which we will have nothing to say here. Zel'dovich's thermodynamic argument was notably refined by Bekenstein and Schiffer in \cite{Bekenstein}. More recently, the irreversibility of electromagnetically superradiant systems has been carefully investigated in \cite{Maghrebi1, Maghrebi2}. For a thorough, modern review of rotational superradiance and its applications (with an emphasis on gravitational physics), see \cite{BCP}. Zel'dovich's argument applies to any object capable of damping a classical or bosonic degree of freedom. Some instances are Cherenkov radiation \cite{Cherenkov, QCherenkov}, sonic booms and Mach shocks \cite{shocks}, the Landau criterion for superflows \cite{Landau}, the Moon's tidal acceleration \cite{moon}, the shear flow instability by which the wind makes waves on the ocean's surface (see \Sec{sec:flow} in this paper), and mechanical instabilities of rotors.\footnote{One of the authors (AJ) thanks his student Carlos D\'iaz for help understanding the treatment of this particular subject in the mechanical engineering literature, where the analogy with superradiance has not been noted. 
A thermodynamic approach may help to generalize and simplify such analyses. \cite{DSTA}} The prediction of BH superradiance motivated Hawking's subsequent discovery that a static BH must radiate thermally \cite{Hawking}.\footnote{For first-hand historical accounts of this, see \cite{BHT,Kip-book}.} The novelty of our approach here is to provide a general and self-contained treatment of superradiance, based on the linear coupling of a quantum field to a rotating heat bath. Our computations rely on the formalism of the Markovian master equation (also called the ``Lindblad equation'', after one of its developers) for an open quantum system \cite{Davies, Lindblad, GKS}. Our results help illuminate the necessary connection between superradiance and the thermal dynamics of the bath, which for a stationary BH lead to Hawking radiation. We also clarify the connection between superradiance of bosonic quantum fields and classical self-oscillations, using as example a flow instability. This will allow us to clarify an important point about which there is some confusion in the hydrodynamics literature. \section{Model and approximations} \la{sec:model} Consider a quantum field, either bosonic or fermionic, interacting with a source that acts as an equilibrium heat bath. Let the heat bath rotate with angular velocity $\Omega$ about its symmetry axis, which we take to be the $z$-axis. The free quantum field is described by the set of annihilation and creation operators $a_{m \alpha}(\omega)$, $ a^\dagger_{m \alpha}(\omega)$, corresponding to the field modes $| \omega, m, \alpha \rangle$ and satisfying the (anti-)commutation relations \begin{eqnarray} \left[ a_{m \alpha}(\omega), a^\dagger_{m' \alpha'}(\omega') \right]_{\pm} &=& \delta_{\omega \omega'} \delta_{m m'} \delta_{\alpha \alpha'} , \nonumber \\ \left[ a_{m \alpha}(\omega), a_{m' \alpha'}(\omega') \right]_{\pm} &=& [a^\dagger_{m \alpha}(\omega), a^\dagger_{m' \alpha'}(\omega')]_{\pm} = 0 . 
\la{eq:accr1} \end{eqnarray} We have set $\hbar = 1$ and written \begin{eqnarray} \left[A, B \right]_+ &\equiv& [A, B] \equiv AB - BA \nonumber \\ \left[A, B \right]_- &\equiv& \{A, B\} \equiv AB + BA, \la{eq:acomm} \end{eqnarray} for the commutator and anti-commutator respectively, so that in \Eq{eq:accr1} the sign $(+)$ corresponds to bosons and $(-)$ to fermions. The quantum number $m = 0, \pm 1 , \pm 2 , ...$ indicates the angular momentum along the $z$-axis (which we take to be an axis of symmetry), $\omega \ge 0 $ the energy, and $\alpha$ the spin together with any other quantum numbers needed to specify the field's state. For the free field, the Hamiltonian is \begin{equation} H_{\rm f} = \sum_{\omega, m,\alpha} \omega\, a^\dagger_{m\alpha}(\omega) a_{m\alpha}(\omega) \la{eq:hamF} \end{equation} and the $z$-component of the angular momentum is \begin{equation} L_{\rm f}^z =\sum_{\omega, m,\alpha} m\, a^\dagger_{m\alpha}(\omega) a_{m\alpha}(\omega) . \la{eq:Lzf} \end{equation} The most general field-bath interaction that is linear in the field operators is given by an additional term in the Hamiltonian of the form \begin{equation} H_{\rm int} = \sum_{\omega, m,\alpha} \left( a_{m\alpha}(\omega)\otimes B^\dagger_{m \alpha}(\omega) + a^\dagger_{m \alpha}(\omega) \otimes B_{m \alpha}(\omega) \right), \la{eq:hamint} \end{equation} where $B_{m \alpha}(\omega)$ is a suitable bath operator. The bath has its own Hamiltonian $H_{\rm b}$ and its own $z$-component of the angular momentum $L_{\rm b}^z$. The linearity of \Eq{eq:hamint} with respect to the quantum field is a valid approximation as long as the field is sufficiently weak that its self-coupling (either induced by the medium's polarizability or direct as in gravity and non-Abelian gauge fields) can be neglected. All our computations will be in this weak-field regime, though we shall return to the issue of non-linearity in \Sec{sec:flow}. 
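For a single fermionic mode, the algebra of \eqref{eq:accr1} (lower sign, i.e., the anticommutator $[\cdot,\cdot]_-$ in the convention of \eqref{eq:acomm}) is realized by $2\times 2$ matrices on the two-dimensional Fock space; a minimal numerical check (ours, added for illustration, not part of the model) is:

```python
import numpy as np

# Fermionic annihilation operator on the truncated Fock basis {|0>, |1>}.
a = np.array([[0.0, 1.0],
              [0.0, 0.0]])
ad = a.T  # adjoint (the matrix is real)

anticomm = lambda A, B: A @ B + B @ A
assert np.allclose(anticomm(a, ad), np.eye(2))        # {a, a^dagger} = 1
assert np.allclose(anticomm(a, a), np.zeros((2, 2)))  # {a, a} = 0
```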
Assuming that the interaction described by \Eq{eq:hamint} is invariant under rotations with respect to the $z$-axis, the bath operators may be chosen so that \begin{equation} \left[ L^z_{\rm b} , B_{m \alpha}(\omega) \right] = -m B_{m \alpha}(\omega), \quad \left[ L^z_{\rm b} , B^\dagger_{m \alpha}(\omega) \right] = m B^\dagger_{m \alpha}(\omega), \la{eq:BLcomm} \end{equation} and therefore \begin{eqnarray} e^{i\phi L_{\rm b}^z} B_{m \alpha}(\omega) e^{-i\phi L_{\rm b}^z} &=& e^{i m\phi} B_{m \alpha}(\omega), \nonumber \\ e^{i\phi L_{\rm b}^z} B^\dagger_{m \alpha}(\omega) e^{-i\phi L_{\rm b}^z} &=& e^{-i m\phi} B^\dagger_{m \alpha}(\omega). \la{eq:eigen} \end{eqnarray} To take into account the bath's rotation we use an effective Hamiltonian of the form \begin{equation} H^{\rm eff}_{\rm b} = H_{\rm b} - \Omega L_{\rm b}^z, \la{eq:Heff} \end{equation} where the internal $H_{\rm b}$ is independent of the rotation or, at most, parametrically dependent on $\Omega$. The arbitrary sign in front of the $\Omega$ in \Eq{eq:Heff} is taken negative for later notational convenience. Equation \eqref{eq:Heff} is a valid approximation as long as the bath's coherent rotation does not involve energies sufficiently large to excite internal degrees of freedom. This is analogous to the Born-Oppenheimer approximation in molecular physics, in which the electronic degrees of freedom are taken to be decoupled from the molecule's rotational and vibrational degrees of freedom \cite{Weinberg}. We show in \Sec{sec:balance} that the kinetic energy of rotation can be converted into internal heating of the bath, but in our approximation this happens only via the coupling to the quantum field's low-energy modes. Note that the Hamiltonian for our model is not necessarily quadratic, because $B_{m \alpha}(\omega)$ in \Eq{eq:hamint} is arbitrary. The field's Markovian master equation that we derive for the weak-field limit in \Sec{sec:MME} applies, therefore, to models that are not exactly solvable. 
Even for exactly solvable models, our techniques help us to understand and to characterize the relevant thermodynamics. \section{Coupling spectrum and KMS condition} \la{sec:coupling} In the weakly coupled regime, all of the physically relevant properties of the bath are encoded in the second-order correlations of the bath operators $B_{m \alpha}(\omega)$, $B^\dagger_{m \alpha}(\omega)$, evaluated for the bath's stationary state \cite{Davies, AL2007}. Using a short-hand notation $B_k , B^\dagger_l$ with multi-index $k \equiv \{\omega, m, \alpha \}$, these correlations are expressed as matrix elements of the coupling spectrum \begin{equation} \gamma^0_{kl}(x) = \int_{-\infty}^{\infty} e^{ixt} \left\langle B^\dagger_k (t) B_l \right\rangle_{\rm b} dt , \la{eq:cspectrum} \end{equation} where $\langle \ldots \rangle_{\rm b}$ denotes the expectation value with respect to the given stationary state of the bath, while $B_k(t)$ is the bath observable $B_k$ as it evolves in the Heisenberg picture for a bath governed by its internal Hamiltonian $H_{\rm b}$. The matrix element $\gamma^0_{kl}(x)$ describes dissipative effects at frequency $x$, so that \Eq{eq:cspectrum} is an instance of the fluctuation-dissipation theorem \cite{Kubo}. For the non-rotating bath in an equilibrium state, the following Kubo-Martin-Schwinger (KMS) condition is satisfied \begin{equation} \gamma^0_{kl}(-x) = e^{-\beta x}\gamma^0_{-l-k}(x), \quad \mathrm{for} ~ B_{-k} \equiv B_k^\dagger, \la{eq:KMS} \end{equation} where $\beta$ is the inverse temperature of the stationary bath and $-k \equiv \{\omega, -m, \mathsf T \alpha \}$, with $\mathsf T \alpha$ the time-reversal of $\alpha$. Equation \eqref{eq:KMS} relates the rate of decay to the rate of its time-reverse (i.e., pumping) for a field coupled to the stationary bath \cite{AL2007, BR}. The coupling spectrum matrix is diagonal in $m$ because of rotational symmetry.
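As a concrete illustration (our choice; the paper keeps $\gamma^0$ general), consider a mode-independent Ohmic spectrum $\gamma^0(x) = x/(1-e^{-\beta x})$. A short numerical check confirms that it satisfies the KMS condition \eqref{eq:KMS}, and that the shifted spectrum $\gamma^0(x+m\Omega)$ of the rotating bath then obeys the same relation with $x$ replaced by $x - m\Omega$ on the right-hand side:

```python
import math

beta, m, Omega = 2.0, 3, 0.7

def gamma0(x):
    # Mode-independent Ohmic coupling spectrum; the x -> 0 limit is 1/beta.
    return 1.0 / beta if x == 0.0 else x / (1.0 - math.exp(-beta * x))

def gamma_rot(x):
    # Spectrum seen by the field when the bath rotates: gamma^0(x + m*Omega).
    return gamma0(x + m * Omega)

for x in (0.3, 1.1, 4.2):
    # Static KMS condition: gamma0(-x) = exp(-beta*x) * gamma0(x).
    assert abs(gamma0(-x) - math.exp(-beta * x) * gamma0(x)) < 1e-12
    # Rotating-frame version with the time-reversed mode (m -> -m):
    # gamma_rot(-x) = exp(-beta*(x - m*Omega)) * gamma0(x - m*Omega).
    assert abs(gamma_rot(-x)
               - math.exp(-beta * (x - m * Omega)) * gamma0(x - m * Omega)) < 1e-12
```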
The weak coupling approach eliminates from the Markovian master equation (discussed in the next section) the non-diagonal elements in $\omega$ (the ``secular approximation''). Finally, the non-diagonal elements in $\alpha$ can be removed by a suitable choice of the modes. For simplicity, we use the same notation for this diagonal basis as for the original modes. Replacing in \Eq{eq:cspectrum} the internal Hamiltonian by the effective one of \Eq{eq:Heff} and applying rotational invariance (see \Eq{eq:eigen}) we obtain a modified diagonal coupling spectrum for the rotating bath \begin{equation} \gamma^\Omega_{kk}(x) = \gamma^0_{kk} (x+m\Omega). \la{eq:cspectrum1} \end{equation} This coupling spectrum satisfies a modification of the KMS condition of \Eq{eq:KMS}, namely \begin{equation} \gamma^{\Omega}_{kk}(-x) = e^{-\beta (x-m\Omega)}\gamma^{\Omega}_{-k-k}(x). \la{eq:KMS1} \end{equation} \section{Markovian master equation} \la{sec:MME} Using Davies's weak coupling limit technique \cite{Davies}, one may derive the Markovian master equation (MME) for the density matrix $\rho(t)$ that describes the quantum field and acts on the corresponding Fock space. For the reader unfamiliar with this formalism, more details on the derivation and interpretation of the MME are provided in the appendix. For the model in \Sec{sec:model} we obtain that \begin{eqnarray} &\dot \rho(t)& \hskip -8 pt = -i \left[ H_{\rm f}, \rho(t) \right] + \mathcal L \rho(t) = -i \left[ H_{\rm f}, \rho(t) \right] \nonumber \\ && + \frac 1 2 \sum_{m, \alpha, \omega} \Gamma^{\Omega}_{m \alpha}(\omega) \Bigl\{ \left( \left[ a_{m \alpha}(\omega) \rho(t), a^\dagger_{m \alpha}(\omega) \right] + \left[ a_{m \alpha}(\omega), \rho(t)a^\dagger_{m \alpha}(\omega) \right] \right) \nonumber \\ && + e^{-\beta(\omega - m\Omega)} \left( \left[ a^\dagger_{m \alpha}(\omega) \rho(t), a_{m \alpha}(\omega) \right] + \left[ a^\dagger_{m \alpha}(\omega), \rho(t)a_{m \alpha}(\omega) \right] \right) \Bigl\} . 
\la{eq:MME} \end{eqnarray} Here $\mathcal L$ is short-hand notation for the dissipative part of the Lindblad superoperator, which induces non-unitarity in the time evolution of the quantum state described by $\rho(t)$ \cite{Lindblad,GKS}. For convenience, we have used the same symbol $H_{\rm f}$ for the renormalized Hamiltonian in \Eq{eq:MME}, which contains corrections due to the field-bath interaction, as for the bare Hamiltonian in \Eq{eq:hamF}. The symbol $\Gamma^{\Omega}_{m \alpha}(\omega)$ in \Eq{eq:MME} denotes the diagonal element of the coupling spectrum matrix of \Eq{eq:cspectrum1} with $k = \{ \omega, m ,\alpha \}$, evaluated at frequency $x = \omega$. \par The quantity \begin{equation} \gamma_\downarrow (k) \equiv \Gamma^{\Omega}_{m \alpha}(\omega) = \gamma_{k k}^0 ( \omega + m \Omega) \la{eq:decay} \end{equation} is the decay rate for the mode $k$, whereas \begin{equation} \gamma_\uparrow (k) \equiv e^{-\beta (\omega - m \Omega)} \Gamma^{\Omega}_{m \alpha}(\omega) = e^{-\beta (\omega - m \Omega)} \gamma_\downarrow (k) \la{eq:pumping} \end{equation} is the corresponding pumping rate. The ratio of the pumping to the damping rates in \Eq{eq:MME} may be expressed as a Boltzmann factor with ``local'' (i.e., $\omega$-dependent) inverse temperature $\beta_{\rm loc}[\omega]$, such that \begin{equation} e^{-\beta_{\rm loc}[\omega] \omega} \equiv \frac{\gamma_\uparrow (k)}{\gamma_\downarrow (k)} = e^{-\beta(\omega - m\Omega)} , \la{eq:Boltzmann} \end{equation} implying that \begin{equation} \beta_{\rm loc}[\omega] = \beta \left(1- \frac{m\Omega}{\omega} \right). \la{eq:Boltzmann1} \end{equation} Thus, \begin{equation} \beta_{\rm loc}[\omega] < 0 ~~ \Leftrightarrow ~~ \omega < m\Omega . \la{eq:superradiant} \end{equation} Such a negative local temperature for the rotating bath indicates a population inversion in the low-energy modes with $\omega < m \Omega$. 
We shall see in \Sec{sec:stability} that this leads to a superradiant instability for bosonic fields (in which the Hamiltonian of a single mode is unbounded). In quantum optics it is common to describe the population inversion of a lasing medium in terms of its negative local temperature (see \cite{localT} for a general theory). The modes with $\beta_{\rm loc}[\omega] < 0$ are those for which the energy $\omega'$ measured in the bath's comoving frame of reference is negative \cite{Zeldovich1}. It is not surprising, therefore, that their equilibrium populations should be inverted, but our treatment clarifies how this depends on the field-bath interaction: note that we had to assume rotational invariance of the interaction in \Eq{eq:hamint}, and a large energy-scale separation between the bath's rotational and internal degrees of freedom (\Eq{eq:Heff}). We shall see that a thermodynamically complete description of superradiance must incorporate a {\it positive feedback} between field and bath, which in the quantum picture is provided by stimulated emission. \section{Stable and unstable modes} \la{sec:stability} The average occupation number of a single mode \begin{equation} \bar n_{m \alpha}(\omega, t) \equiv \,{\rm Tr}\, \left[ \rho(t) a^\dagger_{m \alpha}(\omega) a_{m \alpha}(\omega) \right] \la{eq:pnumber} \end{equation} obeys the equation \begin{eqnarray} \dot{\bar n}_{m \alpha}(\omega, t) &=& - \left( \Gamma^{\Omega}_{m \alpha}(\omega) \left[ 1 - (\pm) e^{-\beta(\omega - m\Omega)} \right] \right) \bar n_{m \alpha}(\omega, t) \nonumber \\ && + \Gamma^{\Omega}_{m \alpha}(\omega)e^{-\beta(\omega - m\Omega)}, \la{eq:pnumber1} \end{eqnarray} where, as in Eqs.\ \eqref{eq:accr1} and \eqref{eq:acomm}, the sign $(+)$ corresponds to bosons and $(-)$ to fermions. 
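The qualitative content of \eqref{eq:pnumber1} in the bosonic case can be seen in a small numerical integration (an illustrative sketch with arbitrarily chosen parameters): a mode with $\omega > m\Omega$ relaxes to the Bose-Einstein population $[e^{\beta(\omega - m\Omega)} - 1]^{-1}$, while a mode with $\omega < m\Omega$ grows without bound.

```python
import math

def evolve_boson(Gamma, beta, omega, m, Omega, n0, t, steps=200_000):
    """Euler integration of the bosonic rate equation
       dn/dt = -Gamma*(1 - x)*n + Gamma*x,  with x = exp(-beta*(omega - m*Omega))."""
    x = math.exp(-beta * (omega - m * Omega))
    n, dt = n0, t / steps
    for _ in range(steps):
        n += dt * (-Gamma * (1.0 - x) * n + Gamma * x)
    return n

# Stable mode (omega > m*Omega): relaxes to the Bose-Einstein value.
n_stable = evolve_boson(Gamma=1.0, beta=1.0, omega=2.0, m=1, Omega=1.0,
                        n0=0.0, t=40.0)
assert abs(n_stable - 1.0 / (math.e - 1.0)) < 1e-3

# Superradiant mode (omega < m*Omega): exponential growth from an empty mode.
n_sr = evolve_boson(Gamma=1.0, beta=1.0, omega=1.0, m=2, Omega=1.0,
                    n0=0.0, t=3.0)
assert n_sr > 100.0
```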
The fermionic solution to \Eq{eq:pnumber1} for $ t\geq 0$ is \begin{eqnarray} \bar n_{m \alpha}(\omega, t) &=& \exp \left\{ -\Gamma^{\Omega}_{m \alpha}(\omega) \left[ 1 + e^{-\beta(\omega - m\Omega)} \right] t \right\} \bar n_{m \alpha}(\omega, 0) \nonumber \\ &+& \left[ 1- \exp \left\{ -\Gamma^{\Omega}_{m \alpha}(\omega) \left[ 1 + e^{-\beta(\omega - m\Omega)} \right] t \right\} \right] \bar n_{m \alpha}(\omega, \infty), \la{eq:fermions} \end{eqnarray} with asymptotic population \begin{equation} \bar n_{m \alpha}(\omega, \infty) = \left[ e^{\beta(\omega - m\Omega)} + 1 \right]^{-1} \la{eq:fermions1} \end{equation} corresponding to the Fermi-Dirac distribution with inverse temperature $\beta$ and chemical potential $m \Omega$. For bosons, on the other hand, only modes $| \omega, m, \alpha \rangle$ that satisfy \begin{equation} \omega > m \Omega \la{eq:stable} \end{equation} are stable. In that case the solution is \begin{eqnarray} \bar n_{m \alpha}(\omega, t) &=& \exp \left\{ -\Gamma^{\Omega}_{m \alpha}(\omega) \left[ 1 - e^{-\beta(\omega - m\Omega)} \right] t \right\} \bar n_{m \alpha}(\omega, 0) \nonumber \\ &+& \left[ 1- \exp \left\{ -\Gamma^{\Omega}_{m \alpha}(\omega) \left[ 1 - e^{-\beta(\omega - m\Omega)} \right] t \right\} \right] \bar n_{m \alpha}(\omega, \infty), \la{eq:bosons} \end{eqnarray} with asymptotic population \begin{equation} \bar n_{m \alpha}(\omega, \infty) = \left[ e^{\beta(\omega - m\Omega)} -1 \right]^{-1} , \la{eq:bosons1} \end{equation} corresponding to the Bose-Einstein distribution with inverse temperature $\beta$ and chemical potential $m\Omega$. Modes satisfying the condition \begin{equation} \omega < m\Omega , \la{eq:unstable} \end{equation} on the other hand, are unstable and their occupation numbers grow exponentially with time. This corresponds to Zel'dovich's rotational superradiance. It is interesting to consider the case of zero temperature, i.e., the limit $\beta \to \infty$ in \Eq{eq:pnumber1} for superradiant modes. 
Using the KMS conditions of Eqs.\ (\ref{eq:KMS}) and (\ref{eq:KMS1}), one obtains for the pumping rate \begin{equation} \gamma_{\uparrow} (k) = \Gamma^{\Omega}_{m \alpha}(\omega) e^{-\beta(\omega - m\Omega)} = \gamma^0_{-k-k}(m\Omega -\omega) \equiv \gamma_{\omega m \alpha} (m \Omega - \omega) , \la{eq:zerolimit} \end{equation} where $\gamma_{\omega m \alpha}(x)$, for $x \geq 0$, is the damping rate of the mode $| \omega, -m, \mathsf T\alpha \rangle$ at the frequency $x$. Note that this damping rate remains positive in the zero-temperature limit, while $\gamma_{\omega m \alpha}(-x)$ tends to zero. Thus, in the zero-temperature limit, the rate equation \eqref{eq:pnumber1} becomes \begin{equation} \dot{\bar n}_{m \alpha}(\omega, t) = \gamma_{\omega m \alpha} (m\Omega -\omega) \left[1 + \bar n_{m \alpha}(\omega, t) \right] . \la{eq:pnumber2} \end{equation} Equation \eqref{eq:pnumber2} implies that a rotating body at zero temperature will produce a continuous spectrum of radiation, with a non-trivial spatial distribution determined by the superradiant condition of \Eq{eq:unstable}. Our results are consistent with those obtained in \cite{Endlich} by the methods of effective quantum field theory. There the authors conclude that the probability of absorption by an object at rest entirely determines the superradiant amplification when the same object is rotating. We believe that our own formulation of superradiance in terms of the MME offers a more transparent thermodynamic interpretation and may therefore be more readily generalizable, including to non-relativistic and to classical systems. \section{Feedback and stimulated emission} \la{sec:feedback} The structure of \Eq{eq:MME} implies that the different field modes $|\omega, m, \alpha \rangle$ evolve independently.
Moreover, the diagonal matrix elements of $\rho(t)$, computed in the corresponding population number representation basis, give the probabilities $P_n(k;t)$ of finding $n$ particles in a given mode $|k \rangle \equiv| \omega, m, \alpha \rangle$. These probabilities evolve according to a Markovian birth-death process \begin{eqnarray} &\dot P_n(k;t)& \hskip -8 pt = \gamma_\downarrow (k) (n +1) P_{n+1}(k;t) + \gamma_\uparrow (k) \left[ 1\pm(n-1) \right] P_{n-1}(k;t) \nonumber \\ && - \left[ \gamma_\downarrow (k) n +\gamma_\uparrow (k) (1\pm n) \right] P_{n}(k;t). \la{eq:birth} \end{eqnarray} In the symbol $\pm$ on the right-hand side of \Eq{eq:birth}, the sign $(+)$ corresponds to bosons (with unbounded population \hbox{$n=0,1,2,\ldots$}) and the sign $(-)$ to fermions (with bounded population $n=0,1$). From the last term on the right-hand side of \Eq{eq:birth} we see that the probability, per unit time, of creating a new particle if $n$ particles are already present in a given mode depends on $n$ and is equal to $\gamma_{\uparrow}(k)(1\pm n)$, where the 1 corresponds to spontaneous emission. This $n$-dependence can be interpreted as resulting from a feedback between the field and the bath. This feedback is positive for bosons, due to stimulated emission. For fermions the feedback is negative, due to the Pauli exclusion principle. Superradiance is a process in which the kinetic energy of the heat bath's rotation powers coherent radiation modes, without requiring any resonant tuning. In a classical context, such processes are described as ``self-oscillations''. For a review of self-oscillation as a non-equilibrium process depending on a positive feedback between the oscillator and the power source, see \cite{SO}. \section{Energy and entropy balance} \la{sec:balance} The formal Gibbs state \begin{equation} \bar{\rho} = Z^{-1} e^{-\beta \left( H_{\rm f} - \Omega L_{\rm f}^z \right)} \la{eq:gibbs} \end{equation} is a stationary solution of \Eq{eq:MME}. 
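For a single stable bosonic mode this stationarity can be checked directly at the level of the birth-death rates of \eqref{eq:birth}: writing $x = \gamma_\uparrow/\gamma_\downarrow = e^{-\beta(\omega - m\Omega)} < 1$, the geometric (single-mode Gibbs) distribution $P_n = (1-x)x^n$ makes the right-hand side of \eqref{eq:birth} vanish for every $n$. A quick numerical verification (with parameters of our choosing):

```python
gamma_down, x = 1.3, 0.4   # x = gamma_up / gamma_down = exp(-beta*(omega - m*Omega)) < 1
gamma_up = x * gamma_down

def P(n):
    # Geometric (single-mode Gibbs / Bose) distribution.
    return (1.0 - x) * x**n

def rhs(n):
    # Right-hand side of the bosonic birth-death equation for dP_n/dt.
    gain = gamma_down * (n + 1) * P(n + 1)
    if n > 0:
        gain += gamma_up * n * P(n - 1)  # bosonic factor 1 + (n-1) = n
    loss = (gamma_down * n + gamma_up * (1 + n)) * P(n)
    return gain - loss

assert all(abs(rhs(n)) < 1e-12 for n in range(60))
```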
This allows us to apply the general formula for the entropy production based on the Spohn inequality \cite{Spohn, Alicki}, \begin{equation} \sigma(t) = - \,{\rm Tr}\, \left[ \mathcal L \rho(t) \left( \ln \rho(t) - \ln \bar{\rho} \right) \right] \geq 0 , \la{eq:spohn} \end{equation} in order to derive the entropy balance (i.e., the second law of thermodynamics). In units such that $k_B = 1$, this gives \begin{equation} \dot S(t) = \sigma(t) + \beta J , \la{eq:IIlaw} \end{equation} where $S(t) \equiv -\,{\rm Tr}\, \left[ \rho(t) \ln \rho(t) \right]$ is the entropy of the field, while \begin{equation} J = \frac{d}{dt} \,{\rm Tr}\, \left[ \rho(t) \left(H_{\rm f} - \Omega L_{\rm f}^z \right) \right] \la{eq:heat} \end{equation} is the heat current flowing from the bath and into the field. Identifying the internal energy $U$ of the field with the average value of its Hamiltonian, \begin{equation} U(t) = \,{\rm Tr}\, \left[ \rho(t) H_{\rm f} \right] , \la{eq:U} \end{equation} we obtain the energy balance (i.e., the first law of thermodynamics) \begin{equation} \dot U(t) = J + \Omega \frac{d}{dt} \,{\rm Tr}\, \left[ \rho(t) L_{\rm f}^z \right], \la{eq:Ilaw} \end{equation} where the second term on the right-hand side of \Eq{eq:Ilaw} is the power supplied by the rotating bath. Note that the Gibbs state of \Eq{eq:gibbs} cannot be normalized for bosonic modes with $\omega < m \Omega$ (i.e., for the superradiant modes). However, introducing intermediate cutoffs in particle numbers one can rigorously derive the results of Eqs.\ (\ref{eq:IIlaw}) and (\ref{eq:Ilaw}). Combining \Eq{eq:heat} with the kinetic equation \eqref{eq:pnumber2} provides a simple derivation of the fact that, at zero temperature, superradiance is accompanied by a heating of the bath.
Indeed, the corresponding expression for the heat current into the field, \begin{equation} J = \sum_{\{\omega< m \Omega, m, \alpha\}} (\omega - m\Omega) \cdot \gamma_{\omega m \alpha} (m\Omega -\omega) \cdot \left[1 + \bar n_{m \alpha}(\omega, t) \right], \la{eq:heat0} \end{equation} is evidently negative. In comparing our results to superradiant spectra computed by other authors for specific systems, one should bear in mind that we formulated the model of \Sec{sec:model} in terms of the field modes as they would exist within a cavity containing the heat bath. The non-unitarity represented by the superoperator $\cal L$ in \Eq{eq:MME} comes from the dynamical interaction with the bath, without any modification of the field's boundary conditions. Other treatments often frame superradiance as a scattering problem, with at least some of the non-unitarity given by the choice of boundary conditions within the bath; see, e.g., \cite{Manogue, Unruh}. \section{Black hole radiation} \la{sec:BH} For any quantum field placed around a BH, its modes can be separated into those localized far outside the BH (outer modes) and those close to or below the event horizon (inner modes). The particles occupying the inner modes form a bath for the outer ones. The interaction between both systems can be treated as a kind of tunneling process described by the quadratic Hamiltonian of the form of \Eq{eq:hamint}, with the operator $a^\dagger_{m \alpha}(\omega)$ creating a particle in an outer mode and $B_{m \alpha}(\omega)$ annihilating a particle in a certain superposition of inner modes. As shown by Hawking in \cite{Hawking}, the strong gravity of a BH creates indeterminacy between the annihilation operator $b_k$ and the creation operator $b^\dagger_k$ of an inner mode with multi-index $k$.
Hence, we can introduce a bath operator in \Eq{eq:hamint} of the form \begin{equation} B_k =\sum_{k'=\{\omega', m ,\alpha'\}} \left[ f_{k}(\omega') b_{k'} + g_{k}(\omega') b^\dagger_{-k'} \right] \la{eq:BHop} \end{equation} with some form factors $f_{k}(\omega), g_{k}(\omega)$ (as before, $-k$ stands for the time-reversal of the quantum numbers in $k$). There is no summation over $m' \neq m$ in \Eq{eq:BHop} because we assumed rotational symmetry. The key result of \cite{Hawking} is that the indeterminacy between the annihilation and creation operators is exponentially small and approximately given by \begin{equation} \frac{|g_{k}(\omega)|^2}{|f_{k}(\omega)|^2} = e^{-\beta_{\rm H} \omega} \la{eq:BHtemp} \end{equation} where $\beta_{\rm H}$ is the inverse Hawking temperature of the BH. Inserting Eqs.~\eqref{eq:BHop} and \eqref{eq:BHtemp} into \Eq{eq:cspectrum} for the vacuum state $| 0 \rangle$ of the inner modes (i.e. the state $| 0 \rangle$ such that $b_k | 0 \rangle = 0$ for all $k$'s), one obtains the KMS condition of \Eq{eq:KMS} with $\beta = \beta_{\rm H}$, thereby establishing that the static black hole behaves with respect to the external fields as a heat bath at the Hawking temperature. The results of \Sec{sec:stability} then imply that a Kerr BH will superradiate bosons obeying the condition of \Eq{eq:unstable}, with a spectrum determined by the form factors in \Eq{eq:BHop}. The negative energy of a superradiant mode as measured in the Kerr BH's co-moving frame (see \Sec{sec:MME}) reflects the presence of an ergosphere allowing extraction of the BH's angular momentum by the Penrose process \cite{Penrose}. 
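The step from \Eq{eq:BHtemp} to a thermal spectrum can be illustrated numerically. Assuming the bosonic Bogoliubov normalization $|f|^2 - |g|^2 = 1$ (an assumption of this sketch, not stated explicitly in the text), the mixing ratio $|g|^2/|f|^2 = e^{-\beta_{\rm H}\omega}$ fixes the occupation of each outer mode to the Planck form at the Hawking temperature:

```python
import numpy as np

def hawking_occupation(omega, beta_H):
    """Occupation <n> = |g|^2 of an outer mode, given the mixing ratio
    |g|^2/|f|^2 = exp(-beta_H*omega) and the Bogoliubov normalization
    |f|^2 - |g|^2 = 1 (so |g|^2 = ratio / (1 - ratio))."""
    ratio = np.exp(-beta_H * omega)
    return ratio / (1.0 - ratio)

omega = np.linspace(0.1, 5.0, 50)
n = hawking_occupation(omega, beta_H=2.0)
# this is exactly the Planck (KMS) spectrum 1/(exp(beta_H*omega) - 1)
assert np.allclose(n, 1.0 / np.expm1(2.0 * omega))
```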
In the thermodynamic relation for a Kerr-Newman BH with mass $M$, angular momentum $L$, charge $Q$, event horizon area $A$, and Hawking temperature $T_{\rm H}$, \begin{equation} dM = T_{\rm H} \frac{dA}{4} + \Omega dL + \Phi dQ , \la{eq:BH-thermo} \end{equation} the angular velocity $\Omega$ and the electrostatic potential $\Phi$ near the event horizon appear as chemical potentials. (In \Eq{eq:BH-thermo} the fundamental constants $c$, $\hbar$, and $G$ have all been set to 1.) Rotational and charged superradiance are processes that extract some of the BH's internal energy ($dM < 0$), while increasing the entropy within the BH's event horizon ($dA > 0$) \cite{charged-super}; see also \cite{BCP} and references therein. The energy source is non-thermal, but the irreversibility of superradiance relates it to the purely thermal Hawking radiation of a BH with $\Omega = 0$ and $\Phi = 0$. Since Hawking radiation does not depend on stimulated emission, it produces fermions as well as bosons.\footnote{The absence of fermionic superradiance (about which we will have more to say in \Sec{sec:flow}) appears, in the context of Hawking's area theorem \cite{area}, as tied to the fact that fermions violate the weak energy condition \cite{charged-super, neutrinos}.} The relation between BH superradiance and Hawking radiation is analogous to the relation between the coherent radiation produced by a laser and the thermal radiation that the unpumped lasing medium at non-zero temperature would emit into a surrounding vacuum (a radiation associated with the non-zero $\gamma_\uparrow (k)$'s). Though distinct, the two processes are tied by thermodynamic irreversibility. 
\section{Classical limit and flow instabilities} \la{sec:flow} The shear flow instability that explains how the wind makes waves on the surface of the ocean is an interesting classical analog of the superradiance of bosonic fields.\footnote{John McGreevy brought this to our attention some years ago.} It is instructive to consider how the thermodynamic arguments of \cite{Zeldovich2, Bekenstein, Maghrebi2} apply to this instability and to identify the hydrodynamical positive feedback, as well as the reasons why it leads to a result similar to that given by stimulated emission of bosons. \begin{figure} [t] \begin{center} \includegraphics[width=0.8 \textwidth]{shear.ps} \end{center} \caption{\small Illustration of the shear flow instability by which wind can generate waves on the surface of a body of water. Image adapted from \cite{Kip-talk}.\la{fig:shear}} \end{figure} The relevant instability can occur at the interface between two layers of fluid, as long as there is a difference in their respective tangential velocities. Let us consider an upper layer of air with mass $m$, which moves with horizontal velocity $V$ with respect to the water below, as shown in \Fig{fig:shear}. Non-relativistically, the air's momentum is $\vv p_{\rm wind} = m \vv V$ and its corresponding kinetic energy is \begin{equation} E_{\rm wind} = \frac{p^2}{2m}. \la{eq:windE} \end{equation} The air's viscosity provides a coupling between the air and the surface of the water. Consider a traveling wave on the water's surface, which in the linear regime may be described by a complex-valued displacement \begin{equation} \xi \sim e^{i(kx - \omega t)}, \la{eq:travel} \end{equation} where $x$ is the spatial coordinate in the horizontal direction, $k$ the wavenumber, and \begin{equation} v = \frac \omega k \la{eq:phasev} \end{equation} the phase velocity. Translational invariance implies momentum conservation \begin{equation} \dot {\vv p}_{\rm wind} = - \dot {\vv p}_{\rm wave}. 
\la{eq:pcons} \end{equation} We may express the rate at which the wave is gaining momentum as \begin{equation} \dot {\vv p}_{\rm wave} = f \hbar \vv k , \la{eq:pquant} \end{equation} where $\hbar \vv k$ is the momentum of a quantum of the wave and $f$ is the rate at which such quanta are being produced (for clarity, in this section we write the factors of $\hbar$ explicitly). Taking the time derivative of \Eq{eq:windE} and using Eqs.\ \eqref{eq:pcons} and \eqref{eq:pquant} gives \begin{equation} - \dot E_{\rm wind} = \frac{\vv p}{m} \cdot \dot {\vv p}_{\rm wave} = {\vv V} \cdot ( f \hbar \vv k) . \la{eq:windP} \end{equation} Since each wave quantum carries energy $\hbar \omega$, we also have that \begin{equation} \dot E_{\rm wave} = f \hbar \omega = f \hbar v k . \la{eq:waveP} \end{equation} Combining Eqs.\ \eqref{eq:windP} and \eqref{eq:waveP}, we conclude that \begin{equation} \left| \dot E_{\rm wind} \right| > \dot E_{\rm wave} ~ \Leftrightarrow ~ V > v . \la{eq:diss} \end{equation} The instability condition is \begin{equation} V > v . \la{eq:critic} \end{equation} From \Eq{eq:diss} we can see that this is because, in such a case, dissipative processes can, while conserving momentum, extract more kinetic energy from one fluid layer (in our case, from the wind) than goes into amplifying waves on the interface. That energy difference is available to generate entropy (here by viscous dissipation in the air) and the process therefore proceeds irreversibly. This thermodynamic analysis is trivially generalizable to systems with a discontinuity in rotational rather than translational velocity and to relativistic systems, as in \cite{Zeldovich2, Bekenstein, Maghrebi2}. 
Note that for a cylindrically symmetrical system with a linear dispersion relation for the waves, \Eq{eq:unstable} corresponds exactly to \Eq{eq:critic}, with $V$ the linear speed at which the surface of the inner cylinder moves with respect to the outer cylinder, and $v$ the phase speed of the waves along the azimuthal direction. (Zel'dovich explicitly invoked such a correspondence in \cite{Zeldovich1}.) Zel'dovich noted that quantization makes the argument particularly simple and concluded that this was an instance of how ``quantum mechanics helps understand classical mechanics'' \cite{Zeldovich1}.\footnote{He had previously published a brief, pseudonymous piece with that title \cite{Paradoksov}.} Only the bosonic case, with no upper bound on the number of quanta with wavenumber $k$, can approach a classical regime. It is interesting to consider how the feedback from stimulated emission, described in \Sec{sec:feedback}, appears in the classical flow instability. Classical self-oscillation is due to a backreacting force that {\it leads} the oscillation and therefore amplifies it \cite{SO}. In the shear flow instability of \Fig{fig:shear} this results from the bath moving faster than the wave's phase, which is equivalent to negativity of the energy \begin{equation} \omega' = \frac i \xi \frac{\partial \xi}{\partial t'} = \frac{i}{\xi} \left( \frac{\partial \xi}{\partial t} - V \frac{\partial \xi}{\partial x} \right) = \omega \left( 1 - \frac V v \right) \la{eq:omegap} \end{equation} in the bath's comoving frame (as it does not affect our reasoning, we omit the relativistic factor $(1 - V^2/c^2)^{-1/2}$ in \Eq{eq:omegap}). If $V > v$, then $\omega ' < 0$ and the oscillation sees the time lag resulting from intra-bath dissipation as a lead, thus establishing a positive feedback between the wave and the air pressure above it. 
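The bookkeeping of Eqs.\ \eqref{eq:windP}--\eqref{eq:omegap} can be condensed into a short numerical check (the wind and wavevector are taken collinear, and the numbers are arbitrary):

```python
def wave_growth_budget(V, v, f=1.0, k=1.0, hbar=1.0):
    """Energy rates and comoving frequency for collinear wind and wave."""
    dE_wind = -V * f * hbar * k              # Eq. (windP): rate lost by the wind
    dE_wave = v * f * hbar * k               # Eq. (waveP): rate gained by the wave
    omega_comoving = k * v * (1.0 - V / v)   # Eq. (omegap), with omega = v*k
    return dE_wind, dE_wave, omega_comoving

# V > v: the wind loses more energy than the wave gains (the surplus is
# dissipated) and the comoving frequency is negative -> instability
dE_wind, dE_wave, w = wave_growth_budget(V=2.0, v=1.0)
assert -dE_wind > dE_wave and w < 0.0

# V < v: amplification would require an extra energy source -> stable
dE_wind, dE_wave, w = wave_growth_budget(V=0.5, v=1.0)
assert -dE_wind < dE_wave and w > 0.0
```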
In the equation of motion for a scalar $\xi$, the condition $\omega' < 0$ also implies that the sign of the linear damping term $a (\partial \xi / \partial t)$ flips upon transformation to the bath's comoving frame (this was Zel'dovich's first argument for superradiance in \cite{Zeldovich1}). A fermionic field's damping appears in its equation of motion as an imaginary part of the invariant mass, which therefore does not change sign for $\omega' < 0$. The absence of fermionic superradiance may also be seen from the solutions to scattering off a potential barrier \cite{Manogue} or, in an AdS/CFT context, from the behavior of the poles of the retarded Green functions corresponding to boundary-theory operators \cite{McGreevy}. In classical hydrodynamics, one may see the instability of \Fig{fig:shear} as a consequence of the fact that, when $v < V$, the air ``trapped'' in a valley of the wave is slowed down, causing the air pressure there to increase (by Bernoulli's theorem) relative to the crests. This, combined with the lag of that air pressure oscillation with respect to the phase of the wave ---a lag resulting from the air's viscosity and therefore associated with dissipation in the air--- implies that the air pressure does net positive work on the wave over a full period. Mollo-Christensen emphasized in \cite{Mollo-Christensen} that an inviscid wind can do no net work over a full period of the water's motion. Textbook treatments find an instability for inviscid shear flow (the ``Kelvin-Helmholtz instability'') that in the center-of-momentum frame corresponds to a non-oscillatory divergence, and which predicts an unrealistically large critical wind speed for raising waves on the surface of a body of water \cite{KH-TB}. 
Long ago, Lamb pointed out that allowing for viscous dissipation in the air fixes the problem \cite{Lamb}, but without sufficient emphasis and clarity for the point to have been widely grasped.\footnote{It is well known in mechanical engineering that a system may be stable when dissipation is exactly zero and unstable for vanishingly small but positive dissipation. For more on such ``dissipation-induced instabilities'' and their connection to superradiance, see \cite{DSTA}.} The approach that we have advocated here may therefore help to clarify certain obscurities and confusions that persist in the literature on the roles of viscosity and turbulence in shear flow instabilities. Linear feedback causes the wave's amplitude to grow exponentially with time. Non-linear dissipative effects eventually limit the wave's growth, giving a steady amplitude that corresponds to a limit cycle in the classical phase space \cite{SO}. The non-linear approach to a limit cycle is evidently an irreversible process that erases information about initial conditions and transient noise. The initial, linear runaway is also irreversible, but to see that we must take into account the dynamics of the bath (the wind in \Fig{fig:shear}) in which entropy is generated. To better understand the quantum-classical correspondence for such systems, consider the bosonic birth-death process described by \Eq{eq:birth}, but with the probability of single-boson absorption per unit time replaced by a more general expression \begin{equation} \gamma_\downarrow (k) n ~ \rightarrow ~ \gamma_\downarrow (k) \left( n + \kappa n^2 + \ldots \right), ~~\hbox{for}~~ \kappa \geq 0. \la{eq:nonlin} \end{equation} This corresponds to going beyond the linear coupling of \Eq{eq:hamint}, from which we derived the simplest MME (\Eq{eq:MME}). A $\kappa > 0$ in \Eq{eq:nonlin} can, for instance, describe gain saturation in a laser \cite{Carmichael}. 
Denote by $\langle \cdots \rangle$ the average of an observable over the probabilities $P_n(k;t)$. For $\kappa > 0$ there is no closed kinetic equation for the average population $\langle n \rangle$ of a given mode, because of the presence of the higher-order term $\langle n^2 \rangle$. However, in the semi-classical limit the fluctuations of the population numbers may be neglected, so that one can replace $\langle n^2 \rangle$ by $\langle n \rangle^2$. This gives a kinetic equation for $\langle n \rangle$ that can be interpreted classically, with the quantum stimulated emission transformed into a non-linear feedback that gives rise to a limit cycle. This is an instance of non-linear classical mechanics emerging from the linear evolution of quantum states. The general relation between the kinetic equations derived from the Markovian birth-death process for the quantum system and the hydrodynamical description of the system lies beyond the scope of this paper. An intriguing conceptual question in this regard is how the quantum picture of feedback, based on stimulated emission, translates into a classical picture in terms of a back-reacting, phase-shifted force. But the argument presented here illustrates clearly how the large-$n$ limit of bosonic superradiance is directly connected to a hydrodynamic instability, and why dissipation is required in a consistent hydrodynamic description. \section{Discussion} \la{sec:discussion} Superradiance is by now a well-known effect in high-energy physics, general relativity, and quantum optics. The contribution of this article was to derive it in the formalism of the MME for the quantum field coupled to a heat bath with a rotational symmetry that admits an effective description of the form of \Eq{eq:Heff}. Rather than assuming the laws of thermodynamics, we have derived them (see \Sec{sec:balance}) from the dynamics of the field as an open quantum system in the Markovian limit. 
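The semi-classical limit described above can be made explicit. Replacing $\langle n^2 \rangle$ by $\langle n \rangle^2$ closes the kinetic equation into $\dot n = \gamma_\uparrow (1+n) - \gamma_\downarrow (n + \kappa n^2)$, where the gain term $\gamma_\uparrow(1+n)$ is the standard bosonic (spontaneous plus stimulated) form assumed here. For $\gamma_\uparrow > \gamma_\downarrow$ the population grows exponentially and then saturates at the fixed point of the quadratic right-hand side; the rates below are arbitrary illustrative values:

```python
def integrate_mean_field(g_up, g_dn, kappa, n0=1e-3, dt=1e-3, steps=50_000):
    """Forward-Euler integration of dn/dt = g_up*(1+n) - g_dn*(n + kappa*n^2),
    the closed semi-classical kinetic equation for <n>."""
    n = n0
    for _ in range(steps):
        n += dt * (g_up * (1.0 + n) - g_dn * (n + kappa * n * n))
    return n

g_up, g_dn, kappa = 2.0, 1.0, 0.01          # gain exceeds the linear loss
n_final = integrate_mean_field(g_up, g_dn, kappa)

# analytic fixed point of the quadratic right-hand side: the saturated
# amplitude of the limit cycle
disc = (g_up - g_dn) ** 2 + 4.0 * g_dn * kappa * g_up
n_star = ((g_up - g_dn) + disc ** 0.5) / (2.0 * g_dn * kappa)
assert abs(n_final - n_star) / n_star < 1e-6
```

The initial exponential runaway and the saturation at `n_star` are the quantum counterparts of the linear instability and the classical limit cycle discussed in the text.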
This clarified how superradiance depends on the feedback between the field and the bath, and on the generation of entropy in the bath, allowing us to clarify a point on which confusion persists even in the best textbook treatments of flow instabilities (see \Sec{sec:flow}). This approach also allowed us to fix the possible dependence of the superradiant spectrum on the rotational velocity $\Omega$, which appears exclusively through the frequency shift of \Eq{eq:cspectrum1}. In \Sec{sec:flow} we clarified the connection between superradiance as a quantum process that proceeds via stimulated emission, and classical self-oscillations such as flow instabilities. Our results sharpen the arguments of \cite{BCP, Unruh} that superradiance requires not just the presence of negative-energy states (associated, in a Kerr BH geometry, to the ergosphere), but also non-unitarity in the field's time evolution (given, in a BH geometry, by the event horizon). We saw that in the weak-field, linear-response approximation this non-unitarity, represented by the $\cal L$ in the MME of \Eq{eq:MME}, has a clear and consistent thermodynamic interpretation. In \Sec{sec:flow} we commented on how non-linearity is related to gain saturation and to the approach to a classical limit cycle, but a detailed understanding of this non-linear regime remains a challenge to be met by future research. The thermodynamic point of view allows us to abstract much of the detailed physics and to treat many distinct systems within a simple and unified framework. 
The wisdom of this approach has been demonstrated in recent decades by the fruitfulness of BH thermodynamics \cite{BH-thermo}.\footnote{Thorne's account of Zel'dovich's original argument for BH superradiance in \cite{Kip-book} brings to mind an aphorism that one of us (AJ) received as a student from Gil Refael: ``Thermodynamics is saying something without knowing anything.''} Incorporating into this thermodynamical picture a more sophisticated understanding of the evolution of open quantum systems may shed light on fundamental problems. \vskip 10 pt {\bf Acknowledgements:} We thank Mohammad Maghrebi for helpful comments and feedback on this work, and Carlos D\'iaz for help with the hydrodynamical literature and with understanding the basic distinction between the textbook Kelvin-Helmholtz instability and Zel'dovich's dissipation-induced instability applied to the same shear flow. We also thank Blai Garolera, Jos\'e Gracia-Bond\'ia, Bob Jaffe, Zohar Komargodski, Karl Landsteiner, John McGreevy, Surjeet Rajendran, and Joe V\'arilly for discussions. AJ acknowledges the hospitality of the Instituto de F\'isica Te\'orica (IFT), of the Universidad Aut\'onoma de Madrid (UAM) and the Consejo Superior de Investigaciones Cient\'ificas (CSIC) of Spain, while some of this work was being completed. That visit was supported by the European Union's Horizon 2020 research and innovation program under the Marie Sk{\l}odowska-Curie grant agreement no.\ 690575.
\section{Introduction} The Antennae (NGC 4038/39 = Arp 244) is a pair of late-type spirals in the course of merging. It is the youngest in Toomre's (1977) dynamical age sequence of 11 interacting and merging systems and comfortably nearby. This makes it a well-studied system across the entire wavelength range from X-rays to radio. The interaction has triggered a strong burst of star formation that has also produced a large number ($\sim 700$) of bright star clusters as first observed with HST by Whitmore \& Schweizer (1995) ({\bf WS95}). Many of those bright clusters seem to be young Globular Clusters ({\bf GCs}) due to their small effective radii and high luminosities. The Luminosity Function ({\bf LF}) of this bright star cluster system looks like a power law ${\rm \Phi(L) \sim L^{-1.78 \pm 0.05}}$ with no hint of a turnover at fainter magnitudes down to the completeness limit of ${\rm V = 22.3}$ mag which, at the distance of The Antennae (19.2 Mpc for H$_0 = 75$), corresponds to ${\rm M_V = -9.6}$ mag (WS95). In Fritze -- v. Alvensleben (1998) (hereafter {\bf Pap.I}) I analysed WS95's HST data with evolutionary synthesis models for single-burst, single-metallicity populations ({\bf SSPs}) using stellar evolutionary tracks for various metallicities from the Geneva group and a Scalo IMF from 0.15 to 60 M$_{\odot}$ (Fritze -- v. Alvensleben \& Burkert 1995, {\bf FB95}). In analogy to the YSC system in NGC 7252 which, like the Antennae, is an Sc -- Sc merger, we assume that the metallicity of the YSC population that forms out of the spirals' ISM is ${\rm \sim \frac{1}{2} \cdot Z_{\odot}}$ (Fritze -- v. Alvensleben \& Gerhard 1994). In Pap.I, individual ages are determined from (${\rm V - I}$) colours for the 550 star clusters with I -- band detections. 
The age distribution clearly reveals two populations of clusters, $\sim 70$ old GCs from the original spirals, and $\sim 480$ {\bf Young} Star Clusters ({\bf YSCs}) with ages in the range $(0 - 4) \cdot 10^8$ yr formed in the ongoing interaction-induced starburst. Only the secondary population of YSCs will be considered in the following. Meurer (1995) was the first to point out the possible importance of age spread effects on the future evolution of the LF of a YSC system. With individual YSC ages and the fading in various passbands as given by our SSP models, we were able to calculate the future evolution over a Hubble time of the LF and of the colour distribution of the YSC system under the unrealistic assumption that all clusters will survive. At an age of $\sim 12$ Gyr, when age differences of the order of $10^8$ yr no longer play any role, the LF is shown to be indistinguishable from a typical GC LF (i.e. Gaussian with a turnover at ${\rm M_V \sim -6.9}$ mag -- appropriate for the metallicity [Fe/H] $\sim -0.3$ (Ashman et al. 1995) -- and ${\rm \sigma(M_V) = 1.3}$ mag). The colour distribution will by then also be compatible with the one observed on old GC systems. While accounting for stellar mass loss, this modelling, however, did not take into account dynamical effects like two-body relaxation, dynamical friction, or disk shocking that might act heavily on a YSC system over a Hubble time. Already for the Milky Way, where the potential is rather well known, it is difficult and not uncontroversial to model the dynamical processes on individual clusters (e.g. Chernoff \& Weinberg 1990, Fukushige \& Heggie 1995, Gnedin \& Ostriker 1997, Vesperini 1997). This is, of course, even more difficult, if not impossible, in an ongoing merger like The Antennae. Before one can try and quantify the efficiency of various destruction mechanisms, the intrinsic MF underlying the presently observed LF has to be determined. 
Knowing individual cluster ages offers the possibility of using M/L values from evolutionary synthesis models to derive the Mass Function ({\bf MF}) underlying the presently observed LF, and this is what we attempt in this Letter. Since the age distribution of the YSCs is strongly peaked within $\leq 2 \cdot 10^8$ yr and the YSCs have a mean distance to the galaxy center of $\sim 3.5$ kpc, a YSC on average cannot have had more than 1 or 2 revolutions. We therefore do not expect the MF to already be significantly affected by cluster destruction processes. Rather, we expect the presently derived MF to reflect the MF produced by the cluster formation process. The mass spectra of molecular clouds, molecular cloud cores, open clusters, and the LF of giant HII regions (in non-interacting galaxies) are all power laws with exponents $\alpha$ in the range $\alpha \sim -1.5 \ \dots \ -1.7$ (e.g. Solomon et al. 1987, Lada et al. 1991, Kennicutt 1989, see Harris \& Pudritz 1994 and Elmegreen \& Efremov 1997 for overviews) as is the mass spectrum of open clusters in the Milky Way and the LMC (e.g. van den Bergh \& Lafontaine 1984, Elson \& Fall 1985). Both the LF and the MF of old GC systems are Gaussians with typical parameters ${\rm \langle M_V \rangle \sim -7.3}$ mag, $\sigma \sim 1.3$ mag, and ${\rm \langle Log~(M/M_{\odot}) \rangle \sim 5.5}$, ${\rm \sigma \sim 0.5}$, respectively (e.g. Ashman et al. 1995). The question immediately arises: Is the transition from a power law molecular cloud mass spectrum to a Gaussian old GC mass spectrum performed in the star/cluster formation process or by secular destruction effects within a GC system? Or is the mass spectrum of molecular clouds or molecular cloud cores in strongly interacting and starbursting galaxies already different from what it is in normal spirals? 
\section{The Mass Function of the YSCs in The Antennae} On the basis of individual YSC ages, we use our SSP models giving ${\rm M/L_{\lambda}}$ in the passbands $\lambda = $ UBVRIK as a function of time to derive masses of individual clusters from their observed V -- luminosities. This is done for all the 393 YSCs with ages $\leq 4 \cdot 10^8$ yr and V -- luminosities brighter than the completeness limit ${\rm M_V = -9.6}$ mag. It is stressed that our model ${\rm M/L}$ values include the decrease in cluster mass due to stellar mass loss (cf. Pap.I), but not that due to the evaporation of stars from the cluster. The MF we recover in this way from the presently observed LF is presented in Fig.1. A Gaussian with ${\rm \langle Log~(M_{YSC}/M_{\odot}) \rangle = 5.6}$ and ${\rm \sigma = 0.46}$, normalised to the number of YSCs in the histogram, is overplotted. The intrinsic MF we obtain for the YSCs brighter than the completeness limit in The Antennae clearly looks log-normal in shape with the maximum at a mean YSC mass of ${\rm \sim 4 \cdot 10^5~M_{\odot}}$. Stellar mass loss within the clusters from the present mean age of $\sim 2 \cdot 10^8$ yr through an age of $\sim 12$ Gyr will lead to a decrease in mass of $\lesssim 15~\%$ for a Scalo -- IMF ($< 10 \%$ for Salpeter). \begin{figure} \includegraphics[width=\columnwidth]{mf.eps} \caption{Mass Distribution of the YSCs in The Antennae: 393 YSCs brighter than the completeness limit. A Gaussian with ${\rm \langle Log~(M_{YSC}/M_{\odot}) \rangle = 5.6}$ and ${\rm \sigma = 0.46}$ is overplotted, normalised to the number of YSCs in the histogram.} \end{figure} Thus, without any destruction or evaporation effects, the mean mass of the secondary GC system in The Antennae at the age of a Hubble time would be ${\rm \sim 3.4 \cdot 10^5 ~M_{\odot}}$. A cluster with this mean mass would have ${\rm M_V = -6.9}$ mag at an age of 12 Gyr according to our models. 
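The age-to-mass step can be sketched as follows. The $M/L_V$ grid below is purely illustrative (in the actual analysis it comes from the SSP models of FB95), and the ages would come from the $(V-I)$ age-dating of Pap.I:

```python
import numpy as np

# Illustrative M/L_V(age) grid in solar units (NOT the FB95 model values)
ml_age_yr = np.array([1e7, 5e7, 1e8, 2e8, 4e8])
ml_v      = np.array([0.02, 0.06, 0.12, 0.20, 0.35])

MV_SUN = 4.83  # absolute V magnitude of the Sun

def cluster_mass(M_V, age_yr):
    """Cluster mass in M_sun from its absolute V magnitude and its age,
    via the age-interpolated model mass-to-light ratio."""
    L_V = 10.0 ** (-0.4 * (M_V - MV_SUN))         # luminosity in L_sun
    return np.interp(age_yr, ml_age_yr, ml_v) * L_V

# a cluster at the completeness limit, at the mean YSC age of ~2e8 yr
mass = cluster_mass(-9.6, 2e8)
assert 1e4 < mass < 1e6   # of order 10^5 M_sun, as in Fig. 1
```

Histogramming such masses for all 393 YSCs is what produces the distribution shown in Fig. 1.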
This is the position of the maximum of the Gaussian YSC LF we obtain at a hypothetical YSC age of 12 Gyr (cf. Fig. 6a in Pap.I). The agreement is no surprise since this is, in fact, the way we obtained the parameters for the Gaussian in Fig.1. We stress that these parameters are derived from the LF in Pap.I and are not the result of any fit to the cluster MF in Fig.1. Remarkably enough, the parameters of this Gaussian -- which in Fig.1 is seen to reasonably describe the MF of the secondary GC population -- are quite similar to those given by Ashman et al. (1995) for the Milky Way and M31 GC systems. Using evolutionary synthesis results from Worthey (1994) for ${\rm M/L_V}$, Ashman et al. find ${\rm \langle Log~(M/M_{\odot}) \rangle = 5.47}$ and ${\rm \sigma = 0.50}$ for the Milky Way and ${\rm \langle Log~(M/M_{\odot}) \rangle = 5.53}$ and ${\rm \sigma = 0.43}$ for M31. The MF in Fig. 1 thus seems compatible with the bulk of the YSCs really being young GCs rather than open clusters or associations, as was already indicated by their small effective radii and high luminosities (WS95). From the model side, uncertainties in the determination of YSC ages (and hence masses) on the basis of their ${\rm (V - I)}$ colours are dominated by uncertainties in the YSC metallicities. Age uncertainties due to metallicity uncertainty (${\rm \frac{1}{2} \cdot Z_{\odot} \lesssim Z_{YSC} \lesssim Z_{\odot}}$) are estimated to be of the order of $\pm 15~\%$. The uncertainty in M/L at ${\rm Z = \frac{1}{2} \cdot Z_{\odot}}$ due to the age uncertainty is $\sim 8$ \% and the uncertainty in M/L at all ages $\lesssim 4 \cdot 10^8$ yr due to the metallicity uncertainty is $\lesssim 5$ \%. This leads to an overall uncertainty in the M/L of $\sim 10$ \%. 
Measurement uncertainties are $\sim \pm 0.2$ mag for the observed ${\rm (V - I)}$ colours and $\lesssim 0.15$ mag for V magnitudes for YSCs brighter than the completeness limit. The uncertainties that the inhomogeneous reddening over the body of NGC 4038/39 brings along for the derived cluster ages and masses are difficult to quantify. Only a global average value is given for the internal reddening of the YSC population in NGC 4038/39 (${\rm E_{V-I} \sim 0.3}$ mag (WS95)) and applied before our age-dating, but dust lanes and unrelaxed structures are seen all over NGC 4038/39. The very good agreement ($\leq 10^7$ yr) between ages determined from ${\rm (U - V)}$ and ${\rm (V - I)}$ colours, however, indicates that for the bright clusters seen on the short U exposure the reddening does {\bf not} seem to significantly deviate from the average reddening we use. It should be noted in this context that inclusion of YSCs fainter than the completeness limit -- which tend to have significantly larger observational uncertainties -- does {\bf not} affect the agreement with the Gaussian in Fig.1 or its parameters (beyond the normalisation to the number of clusters in the histogram). {\bf We conclude that the secondary GC population in The Antennae is formed with a log-normal mass distribution very similar to the one in the Milky Way or M31. It is not necessarily the secular evolution but rather the cluster formation process that produces the Gaussian mass spectrum observed on old GC systems.} Since uncertainties in the MF from observational errors cannot be calculated in any straightforward way in the analysis presented here, we are currently trying an independent approach. 
We draw YSCs at random from different intrinsic MFs (Gaussians, power-laws), randomly assign ages to them from different age distributions (clusters formed uniformly over the burst duration or at an increasing or decreasing rate), and calculate their present luminosities and colours to which, then, observational errors can be added, again at random from the observed luminosity-dependent error distribution. Comparison of the resulting model LFs and colour distributions with the observed ones should then allow us to constrain the intrinsic MF (Kurth et al., {\sl in prep.}). A somewhat similar procedure is used for YSCs in NGC 7252 and NGC 3921 by Miller \& Fall (1997), who find that power-laws are preferred over Gaussians for the MFs of these YSC systems. \section{Comparison with YSCs in NGC 7252 and NGC 3921} NGC 7252 and NGC 3921 are the two oldest merger remnants from Toomre's (1977) list. Large enough YSC systems have been detected in both of them to define the bright end of their LFs (Whitmore et al. 1993, Miller et al. 1997, Schweizer et al. 1996). Distances, however, are larger than to NGC 4038/39 by factors 3 and 4 for NGC 7252 and NGC 3921, respectively, pushing the completeness limit to significantly higher luminosities. The higher mean ages of the YSCs in these cases ($650 - 750$ Myr for NGC 7252 and $250 - 750$ Myr for NGC 3921, depending on metallicity) add another argument in favour of them being young GCs, since they have survived ${\rm \gg 10 \cdot t_{cross}}$ (cf. Schweizer et al. 1996). The LF we {\bf calculate} for the YSCs in The Antennae at an age of $\sim 1$ Gyr does show a marginal turnover at the expected ${\rm \langle M_V \rangle} \sim -9.9$ mag for YSCs brighter than the completeness limit, indicating that by this age the distortion of the LF with respect to the MF due to age spread effects from the finite YSC formation timespan (of the order of $200 - 400$ Myr) is already less important. 
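The Monte Carlo procedure outlined at the start of this section can be sketched in a few lines. The fading law and error model below are simple placeholders (the actual calculation would use the SSP models and the observed, luminosity-dependent error distribution):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5000

# trial intrinsic MF: log-normal with the Gaussian parameters of Fig. 1
log_mass = rng.normal(5.6, 0.46, N)

# trial age distribution: uniform over the burst duration
age = rng.uniform(0.0, 4e8, N)

# placeholder fading law: M/L_V rising linearly with age
ml_v = 0.02 + 0.33 * (age / 4e8)

MV_SUN = 4.83
M_V = MV_SUN - 2.5 * np.log10(10.0 ** log_mass / ml_v)

# add photometric errors, then apply the completeness cut
M_V_obs = M_V + rng.normal(0.0, 0.15, N)
model_LF = M_V_obs[M_V_obs < -9.6]
assert 0 < len(model_LF) < N   # the cut removes part of the sample
```

Comparing histograms of `model_LF` against the observed LF for different trial MFs and age histories is then a straightforward goodness-of-fit exercise.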
In order to estimate whether a turn-over in their YSC LFs could be expected in NGC 7252 and NGC 3921, we assume that their YSC systems have the same MF as the YSC system in The Antennae. Then, we can calculate the mean absolute magnitude ${\rm \langle M_V \rangle}$ from ${\rm \langle M_{YSC} \rangle}$ at the above-quoted mean ages. We obtain ${\rm \langle M_V \rangle} = -10 \ \dots \ -9.5$ mag and ${\rm \langle M_V \rangle} = -9.5 \ \dots \ -9$ mag for YSCs in NGC 7252 and in NGC 3921, respectively. These luminosities are close to the 90 \% completeness limiting magnitudes of $-9.5$ (PC) and $-8.5$ (WF) for NGC 7252 (Miller et al. 1997) and of $-9.0$ for NGC 3921 (Schweizer et al. 1996). Together with the difficulty of disentangling the old GC population from the YSCs at these advanced ages, the fact that no turnover is detected in the LFs for NGC 7252 and NGC 3921 does not seem to rule out a Gaussian MF similar to the one we obtain in The Antennae. \section{Dynamical Evolution of the YSC System} Quantitatively, nothing is known about the external dynamical effects on GC systems in interacting and merging galaxies. For the Milky Way, where dynamical modelling on the basis of the Galactic potential is possible and has been done by many groups (e.g. Chernoff \& Weinberg 1990, Gnedin \& Ostriker 1997, Fukushige \& Heggie 1995, Vesperini \& Heggie 1997, Vesperini 1997, 1998) it is clear that the GC system observed today is only ``the hardiest survivors of a larger original population'' (Harris 1991). Vesperini \& Heggie (1997) present N-body simulations including effects of stellar evolution, two-body relaxation, disk shocking, and the tidal field of the Milky Way. Studying the secular evolution of a number of GC systems with different initial MFs, Vesperini (1998) shows that {\bf if the initial MF is log-normal then this log-normal shape and its parameters are conserved over a Hubble time} despite the destruction of a large part ($\sim 50~\%$) of the original GC population. 
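The mass-to-magnitude conversion used above for NGC 7252 and NGC 3921 is just the definition of $M/L_V$. The $M/L_V$ values below are illustrative numbers chosen only to be consistent with the magnitudes quoted in the text (they are not the model outputs):

```python
import math

MV_SUN = 4.83  # absolute V magnitude of the Sun

def mv_from_mass(mass_msun, ml_v):
    """Absolute V magnitude of a cluster, via L_V = mass / (M/L_V)."""
    return MV_SUN - 2.5 * math.log10(mass_msun / ml_v)

# mean YSC mass ~4e5 M_sun faded to ~700 Myr (illustrative M/L_V = 0.6)
mv_7252 = mv_from_mass(4.0e5, 0.6)
assert -10.0 < mv_7252 < -9.5    # the range quoted for NGC 7252

# reduced mean mass ~3.4e5 M_sun at ~12 Gyr (illustrative M/L_V = 6.9)
mv_old = mv_from_mass(3.4e5, 6.9)
assert -7.1 < mv_old < -6.7      # the old-GC turnover, M_V ~ -6.9
```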
While evaporation and dynamical friction preferentially destroy low and high mass clusters, respectively, both processes balance each other in the case of a log-normal ($=$ equilibrium) initial GC MF, so that no net selective destruction of specific GC masses results. If the GC destruction processes were similar in The Antennae and in the Milky Way, and if, as indicated in Fig.~1, the initial GC MF really were close to Vesperini's equilibrium MF, then the Gaussian MF could survive a Hubble time with its parameters ${\rm \langle Log~(M/M_{\odot}) \rangle}$ and ${\rm \sigma}$ essentially unchanged despite the likely destruction of a large fraction of the YSC system seen today. It will be interesting to analyse more YSC systems in the way we did in order to see whether and to what extent the YSC MF is universal or depends on parameters of the progenitor galaxies or the interaction event. Moreover, studying secondary GC systems along an age sequence of interacting, merging and merger remnant galaxies should allow us to directly ``observe'' both the time evolution of the LF and of the MF and thus offer a unique possibility to study the effects of dynamical processes {\sl in situ}. The turnover of an old GC population at ${\rm M_V \sim -7.2}$ mag occurs around ${\rm V \sim 24.5}$ at Virgo cluster distances and is well within the reach of 10m telescopes. HST imaging allows us to identify clusters in regions not too crowded for ground-based MOS. \section{Discussion and Open Questions} The uncertainties in the metallicities and -- more important -- in the individual reddening of YSCs in The Antennae can be substantially reduced with spectroscopic observations of YSCs selected from the HST images. With MOS facilities on 10m class telescopes this should be possible in the near future.
One question that poses itself immediately in the context of the cluster formation mechanism is whether the molecular cloud mass spectrum in a strongly interacting and starbursting galaxy like The Antennae really has the same power-law form as that in an undisturbed spiral forming stars at a level orders of magnitude lower. Elmegreen \& Efremov (1997) pointed out that the high pressure of the ambient ISM produced in a strong interaction might favor the production of more massive clouds. Information about the molecular cloud mass spectrum in merger-induced strong starbursting systems seems a prerequisite for the study of the star and cluster formation processes. First spectroscopic observations of a handful of YSCs in NGC 7252 (Schweizer \& Seitzer 1993, 1998) confirm the metallicity of ${\rm (\frac{1}{2} - 1) \cdot Z_{\odot}}$ (cf. FB95) we predicted on the basis of the progenitor galaxies' ISM abundances. This enhanced metallicity (with respect to the primary GC population) raises the question of to what extent the secondary GC formation process is comparable to the primary one in the early Universe. \section{Preliminary Conclusions} {\bf 1.} The MF of the YSCs in The Antennae seems to be log-normal with parameters ${\rm \langle Log~(M/M_{\odot}) \rangle}$ and ${\rm \sigma}$ very similar to those of the Milky Way GC system. \hfill\break {\bf 2.} This suggests that the cluster formation process, and not the dynamical evolution, produces the Gaussian MF. \hfill\break {\bf 3.} The close agreement of the YSC MF we obtain with Vesperini's equilibrium initial GC MF seems to indicate that the shape and parameters of this MF may survive a Hubble time despite destruction of a large fraction of today's YSCs. \hfill\break {\bf 4.} In the older merger remnants NGC 7252 and NGC 3921 the completeness limit is close to the turn-over luminosity expected if their MFs were similar to the one in The Antennae.
As long as the impact of observational colour and luminosity uncertainties on the MF we derive cannot be quantified rigorously, our conclusions have to remain preliminary. \begin{acknowledgements} I gratefully acknowledge a very prompt and constructive report from an anonymous referee and I'm deeply indebted to Bruce Elmegreen for critical and stimulating discussions. \end{acknowledgements}
\section{Introduction} Quantum state readout is a crucial component of any quantum computing architecture. For semiconductor quantum dots, charge state readout has been performed using quantum point contacts \cite{Field1993} and quantum dots \cite{Barthel2010} as detectors. Electronic spin states can also be resolved using spin-to-charge conversion, which relies on spin-selective tunneling and sensitive charge state detection \cite{Elzerman2004, Johnson2005}. To increase measurement bandwidths, conventional dc transport measurement approaches have to a large extent been replaced by rf and microwave frequency reflectometry setups \cite{Schoelkopf1998, Cottet2011, Petersson2010}. In particular, the circuit quantum electrodynamics (cQED) architecture allows for dispersive readout of superconducting qubits \cite{Blais2004, Wallraff2004, Sillanpaa2007, Reed2012}, as well as semiconductor charge and spin qubits \cite{Petersson2012, Delbecq2011, Frey2012, Mi2017, Bruhat2016, Stockklauser2017}. Both rf-reflectometry and cQED measurement implementations rely on costly room temperature microwave sources, rf components, and coaxial lines that occupy a significant amount of space in a dilution refrigerator. As one scales to many qubits, the resource requirements will increase dramatically. Moreover, to suppress the room temperature microwave background, a typical attenuation of 60--70 dB is required in the coax lines connecting the signal generator to the quantum device ($\sim$ 10 mK). To reduce the experimental complexity, the source would ideally be isolated from the 300 K environment. Over the past 10 years it has been shown that a variety of voltage-biased quantum devices generate microwave frequency photons. For example, voltage-biased Cooper pair boxes and superconducting quantum interference devices embedded in superconducting cavities have been shown to mase \cite{Astafiev2007, Cassidy2017}.
Cavity-coupled semiconductor double quantum dots (DQDs) can serve as an electrically tunable maser gain medium \cite{Stockklauser2015, Stehlik2016, Liu2017}. These devices are fabrication-compatible with other qubits and they can be integrated on the same chip. It is therefore of interest to determine if these devices, which already operate at millikelvin temperatures, can be utilized as microwave frequency sources in quantum computing experiments \cite{Berg2014}. \begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth]{Fig1_v3.pdf} \caption{\label{Fig: scheme} (a) The experiment consists of a cavity containing two DQDs. A dc-biased e-DQD mases and populates the cavity with microwave frequency photons. These photons are used to read out the charge state of the t-DQD. (b) Optical microscope image showing the cavity and the positions of the e-DQD and t-DQD. (c) Upper panel: Scanning electron micrograph of a nanowire DQD. Lower panel: The DQD confinement potential is defined using 5 bottom gates.} \end{center} \end{figure} In this Letter we show that microwave frequency photons generated by a cavity-coupled DQD can be used to sensitively read out the charge state of a second DQD that is located in the same cavity. A source-drain bias is applied across an emitter DQD (e-DQD) and results in above-threshold masing. The photons generated by the e-DQD are used to sense a target DQD (t-DQD) that is located in the same cavity. Charge dynamics in the t-DQD influence the maser emission, changing its output power and emission frequency, allowing for charge state readout. We explore three different readout approaches. In the first approach the total power emitted by the cavity is measured and used to reconstruct the t-DQD charge stability diagram.
In the second approach, the e-DQD emission frequency, which is dependent on the charge state of the t-DQD, is used to measure the t-DQD charge stability diagram by mapping out the maser emission frequency as a function of the t-DQD gate voltages. In the third approach, we measure the power emitted into a narrow band centered around the free-running maser frequency. Shifts in the emission frequency significantly change the narrow-band power, allowing us to detect t-DQD interdot charge transitions. While the target qubit in these experiments is a semiconductor DQD, it could in principle be replaced with any other cavity-coupled qubit, such as a superconducting transmon qubit. No external cavity drive is required in our experiments, further supporting the use of a DQD maser as a light source for quantum state readout in quantum computation architectures. \section{Device Description} Figure\ 1(a) captures the important elements of our experimental setup. The e-DQD and t-DQD are coupled to a microwave cavity. To serve as a microwave source, the e-DQD is source-drain biased, which results in single electron tunneling events and microwave frequency photon emission into the cavity mode \cite{Liu2017}. These cavity photons are used to sense the charge state of the t-DQD. An optical micrograph of the device is shown in Fig.~\ref{Fig: scheme}(b). The cavity consists of a $\lambda/2$ superconducting resonator with resonance frequency $f_{\rm c} = 7596$ MHz and linewidth $\kappa_{\rm tot}/2\pi$ = 1.77 MHz. The cavity is designed to have output (input) coupling rates $\kappa_{\rm out}/2\pi = $ 0.8 MHz ($\kappa_{\rm in}/2\pi = $ 0.04 MHz). The InAs nanowire DQDs are located at opposite ends of the cavity near the voltage anti-nodes. The confinement potential of the t-DQD is created by voltage biasing the five bottom gates labeled as $\rm{B^T_L}$, $\rm{L^T}$, ${\rm C^T}$, ${\rm R^T}$, and $\rm{B^T_R}$ in Fig.\ 1(c). 
Independent control of the e-DQD is achieved using a separate set of bottom gates. We further define source and drain ohmic contacts using electron beam lithography. In contrast with our previous work, the source contacts to the e-DQD and t-DQD are electrically decoupled such that the source-drain bias voltages can be independently controlled \cite{Liu2015}. The drain electrode of each DQD is connected to the microwave resonator. Coupling between a charge trapped in the DQD confinement potential and the electric field of the cavity leads to an electric dipole interaction with strength $g_c/2\pi \approx 30$ -- $40$ MHz \cite{Stehlik2016}. \section{Characterization of the Emitter Double Quantum Dot} We first characterize the microwave field emitted by the e-DQD. For these measurements, the t-DQD source-drain bias is set to $V^{\rm T}_{\rm SD} = 0$ and the gates are tuned such that the t-DQD is in Coulomb blockade, where the charge number is fixed. In this configuration the t-DQD will not have any impact on the cavity field. The e-DQD is configured to emit photons by applying a finite source-drain bias $V^{\rm E}_{\rm SD} = 2$ mV, which results in single electron tunneling events. The interdot charge transition leads to photon emission and, in a high quality factor cavity, a transition to a masing state \cite{Liu2017}. To measure the emitted radiation, the cavity output field is amplified using a high electron mobility transistor (HEMT) amplifier and detected with a spectrum analyzer. Figure\ \ref{Fig: LDQD emission}(a) shows the power spectral density $S(f)$ of the radiation emitted from the cavity, along with a fit to a Lorentzian. The best fit parameters yield the emission frequency $f_{\rm e} = 7595.68$ MHz and FWHM = 8~kHz. We obtain a total output power $P_{\rm out} =$ 0.16 pW by integrating over $S(f)$. The emission power yields an intra-cavity photon number $n_{\rm c} = P_{\rm out}/(hf_e\kappa_{\rm out})\approx 6000$ given $\kappa_{\rm out}/2\pi = 0.8$ MHz.
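The photon-number estimate above follows from a one-line calculation; as a sanity check it can be reproduced numerically. This is a sketch that uses only the values quoted in the text:

```python
import math

# Values quoted in the text
h = 6.62607015e-34               # Planck constant (J s)
f_e = 7595.68e6                  # emission frequency (Hz)
P_out = 0.16e-12                 # total emitted power: 0.16 pW (W)
kappa_out = 2 * math.pi * 0.8e6  # output coupling rate, kappa_out/2pi = 0.8 MHz (rad/s)

# n_c = P_out / (h * f_e * kappa_out)
n_c = P_out / (h * f_e * kappa_out)
print(f"intra-cavity photon number n_c ~ {n_c:.0f}")  # of order 6000, as quoted
```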
The FWHM is 200 times narrower than the bare cavity linewidth, which is suggestive of masing. The output field can be examined in more detail by measuring ($I$,$Q$) histograms. To acquire the histograms, the cavity output field is first amplified with a HEMT and then demodulated into the in-phase ($I$) and quadrature-phase ($Q$) components by a local reference set to a frequency $f_{\rm lo} = f_{\rm c}$ \cite{Liu2015}. Figure~\ref{Fig: LDQD emission}(b) shows an ($I$,$Q$) histogram obtained by accumulating $1.7\times10^7$ $(I,Q)$ samples at a rate of 12.3 MHz. The histogram has a ring shape that is consistent with coherent emission \cite{Liu2017}. Combined, these data sets show that the voltage-biased e-DQD can serve as a coherent source that populates the cavity with approximately 6000 photons. These photons may be used to read out the charge state of the t-DQD, as will be demonstrated in the following sections of the paper. \begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth]{Fig2_v2.pdf} \caption{\label{Fig: LDQD emission} (a) Power spectral density of the radiation emitted by the e-DQD (circles) and a fit to a Lorentzian (solid line) with FWHM = 8 kHz. (b) The $IQ$ histogram of the output field is consistent with a coherent source.} \end{center} \end{figure} \section{Target Double Quantum Dot Charge State Detection} In this section we compare several different approaches for measuring the charge stability diagram of the t-DQD. We first measure the stability diagram using standard cavity input-output readout, where an external tone is used to populate the cavity with photons. These data are then compared with charge stability diagrams that are obtained by measuring the total power emitted from the cavity when it is populated with e-DQD photons. Two additional transduction methods are examined that are based on the effect that charge dynamics in the t-DQD have on the emission properties of the e-DQD. 
Specifically, we show that the t-DQD charge stability diagram can be reconstructed by measuring the emission frequency of the e-DQD and the narrow band power emitted by the e-DQD. \subsection{Charge state readout through measurements of the cavity transmission} The conventional cavity input-output readout approach is illustrated in Fig.\ \ref{Fig: total power sensing}(a). Here the cavity is driven by an input tone of frequency $f_{\rm in}$ and power $P_{\rm in} \approx -112$~dBm that results in approximately $n_{\rm c}\approx10$ intra-cavity photons. The resulting cavity output is amplified with a HEMT and demodulated by a local reference having a frequency $f_{\rm lo} = f_{\rm in}$. Both the phase shift $\Delta \phi$ and power gain $G = CP_{\rm out}/P_{\rm in}$ can be extracted from the cavity transmission. Here the constant $C$ is set such that $G = 1$ with $f_{\rm in} = f_{\rm c}$ and both DQDs in Coulomb blockade \cite{Liu2015, Stehlik2016}. Figure\ \ref{Fig: total power sensing}(c) plots $G$ as a function of the t-DQD gate voltages with $f_{\rm in} = f_{\rm c}$ and $V^{\rm T}_{\rm SD} = 0$. For this data set the e-DQD is in idle mode, with $V^{\rm E}_{\rm SD} = 0$ and the gate voltages tuned to Coulomb blockade. These measurements reveal the t-DQD charge stability diagram, consistent with previous measurements of cavity-coupled InAs nanowire DQDs \cite{Stehlik2016}. \begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth]{Fig3_v3.pdf} \caption{\label{Fig: total power sensing} (a) Circuit used to measure the t-DQD charge stability diagram by driving the cavity with a weak input tone while the e-DQD is in Coulomb blockade. (b) Photons emitted from the e-DQD can be used to measure the charge stability diagram of the t-DQD in the absence of a cavity input tone. (c) Gain, $G$, measured with method shown in (a) as a function of $V^{\rm T}_{\rm L}$ and $V^{\rm T}_{\rm R}$, revealing the t-DQD charge stability diagram. 
(d) $P_{\rm out}$ measured with method shown in (b) as a function of $V^{\rm T}_{\rm L}$ and $V^{\rm T}_{\rm R}$ also reveals the t-DQD charge stability diagram. } \end{center} \end{figure} \subsection{Charge state readout through measurements of the total cavity output power} To make a comparison with cavity input-output readout we now turn off the cavity input tone and configure the e-DQD in the ``on state," such that it is emitting coherent radiation as shown in Fig.\ 2. We then measure the output power $P_{\rm out}$ and plot it as a function of $V^{\rm T}_{\rm L}$ and $V^{\rm T}_{\rm R}$ in Fig.~\ref{Fig: total power sensing}(d). Writing the cavity output field complex amplitude as $\alpha = I + iQ$, $P_{\rm out}$ is determined from measurements of $\langle \alpha^*\alpha\rangle = \langle I^2+Q^2\rangle$. The $(I,Q)$ data are processed using a digital filter of 2.6 MHz bandwidth that covers the entire cavity linewidth and therefore $ \langle I^2+Q^2\rangle$ captures the total emitted power \cite{EichlerPRA2012}. The scenario is equivalent to a power meter measuring over a wide bandwidth as illustrated in Fig.\ \ref{Fig: total power sensing}(b). The data in Fig.~\ref{Fig: total power sensing}(d) show that measurements of $P_{\rm out}$ can be used to extract the t-DQD charge stability diagram. \subsection{Impact of charge dynamics in the t-DQD on the emission properties of the e-DQD} We now more carefully examine the readout mechanism by studying the effect that the t-DQD charge configuration has on the emission properties of the e-DQD. Figure\ \ref{Fig: spectrum dependance}(a) shows a high resolution measurement of $P_{\rm out}$ near one of the t-DQD interdot charge transitions in Fig.\ \ref{Fig: total power sensing}(d). These data were acquired in the absence of a cavity input tone and with the e-DQD emitting photons. 
The left dot and right dot charge transitions are visible in these data, while the visibility of the interdot charge transition is significantly less than in the data shown in Fig.\ 3(c). \begin{figure*}[t] \begin{center} \includegraphics[width=2\columnwidth]{Fig4_v3.pdf} \caption{\label{Fig: spectrum dependance} (a) Total output power $P_{\rm out}$ as a function of $V^{\rm T}_{\rm L}$ and $V^{\rm T}_{\rm R}$ near a t-DQD interdot charge transition. Here the e-DQD emission is used to populate the cavity with measurement photons. (b) $S(f)$ measured with the t-DQD in Coulomb blockade (CB), at an interdot transition, and at a left dot charge transition (Lead). The dashed lines indicate the frequency window that the emission is integrated over in the narrowband measurement approach. (c) $f_{\rm e}$ as a function of $V^{\rm T}_{\rm L}$ and $V^{\rm T}_{\rm R}$. In these data both single dot and interdot charge transitions are visible.} \end{center} \end{figure*} To better understand what sets the visibility of the charge transitions in these data, we measure $S(f)$ of the emitted radiation with the gate voltages of the t-DQD tuned to different regions of the t-DQD charge stability diagram. Figure\ \ref{Fig: spectrum dependance}(b) shows measurements of $S(f)$ with the t-DQD configured in Coulomb blockade, at the interdot charge transition, and at a left dot charge transition. With the t-DQD configured in Coulomb blockade the emission peak in $S(f)$ is centered at $f_{\rm e}^0$ = 7595.68~MHz. When the t-DQD is configured to a left dot charge transition, the emission peak shifts down in frequency by 214 kHz, the peak power is reduced by approximately a factor of 10, and the peak in $S(f)$ is significantly broader. In comparison, with the t-DQD configured at the interdot charge transition the emission peak is only shifted down in frequency by 37 kHz. The emission peak has a height and width that is comparable to the data acquired with the t-DQD in Coulomb blockade. 
Therefore, it is difficult to resolve the interdot charge transitions in measurements of the total emitted power $P_{\rm out}$. However, since the emission peak shifts by an amount that is much greater than the FWHM $\sim 8$ kHz of the emission peak, measurements of the emission frequency may be used to reconstruct the t-DQD charge stability diagram. By fitting $S(f)$ to a Lorentzian at every point in the t-DQD charge stability diagram we can extract $f_{\rm e}$ as a function of $V^{\rm T}_{\rm L}$ and $V^{\rm T}_{\rm R}$. A plot of the extracted $f_e$ is shown in Fig.\ \ref{Fig: spectrum dependance}(c) and reveals the interdot charge transition much more clearly. Therefore a measurement of $f_{\rm e}$ can in principle be used to read out the device. The approach is similar to cQED readout of transmon qubits, where the state-dependent dispersive shift of the cavity is used for readout \cite{Wallraff2004, Blais2004}. It is important to note here that in general we do not know the phase of the maser emission, and that previous work showed that the coherence time of the maser is only on the order of 10 $\mu$s \cite{Liu2015}. Even with a long coherence time, $f_e = f_e(V^{\rm T}_{\rm L}, V^{\rm T}_{\rm R})$ is t-DQD dependent and thus the phase shift in the maser output $\Delta \phi(t) = \int \Delta f_e(t) dt$, where $f_e(t) = f_e(V^{\rm T}_{\rm L}(t), V^{\rm T}_{\rm R}(t))$, is dependent on the ``path'' of $(V^{\rm T}_{\rm L}(t), V^{\rm T}_{\rm R}(t))$. $\Delta \phi$ is then not a well-defined variable. Therefore we cannot simply measure the dispersive shift $\Delta \phi$, as is commonly achieved with phase-sensitive measurement approaches in cQED. \subsection{Charge state readout through narrow-band measurements of the total cavity output power} The previous section demonstrated that measurements of the emission frequency $f_{\rm e}$ can be used to reconstruct the t-DQD charge stability diagram.
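The per-point Lorentzian fit used to extract $f_{\rm e}$ can be sketched as follows, here on synthetic data with the centre frequency and FWHM quoted above. SciPy's `curve_fit` is assumed; the noise level, frequency grid, and starting guesses are illustrative, not from the experiment:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f_e, fwhm, amp, offset):
    """Lorentzian line shape used to extract the emission frequency f_e."""
    return amp * (fwhm / 2) ** 2 / ((f - f_e) ** 2 + (fwhm / 2) ** 2) + offset

# Synthetic spectrum mimicking the measured emission:
# centre 7595.68 MHz, FWHM 8 kHz (= 8e-3 MHz), plus white noise
rng = np.random.default_rng(0)
f = np.linspace(7595.58, 7595.78, 2001)   # MHz
S = lorentzian(f, f_e=7595.68, fwhm=8e-3, amp=1.0, offset=0.01)
S += 0.02 * rng.standard_normal(f.size)

# Fit with a deliberately offset initial guess for the centre
popt, _ = curve_fit(lorentzian, f, S, p0=[7595.69, 10e-3, 0.8, 0.0])
print(f"fitted f_e = {popt[0]:.5f} MHz, FWHM = {popt[1] * 1e3:.2f} kHz")
```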
However, extracting $f_{\rm e}$ from measurements of $S(f)$ is too time consuming (3--4 seconds per spectrum) to allow for efficient charge state readout. The challenge of devising a practical measurement that quickly extracts the state-dependent frequency shift has been solved in the standard readout schemes. For example, in cQED systems, state dependent shifts in the resonance frequency of the cavity can be measured by driving the cavity with a weak input tone at $f_{\rm in} = f_{\rm c}$ and detecting the large phase shift $\Delta \phi = \arctan(I,Q)$ of the cavity output field using heterodyne demodulation techniques. As a demonstration of the standard readout approach, Fig.~\ref{Fig: narrow rbw readout}(a) plots the phase shift $\Delta \phi$ as a function of $V^{\rm T}_{\rm L}$ and $V^{\rm T}_{\rm R}$. Single dot transitions associated with the left and right dots, as well as the interdot charge transition, are clearly visible in the phase response. Phase readout is not feasible when e-DQD emission is used to populate the cavity with photons since $f_{\rm e}$ exhibits fluctuations that randomize the phase. Moreover, since $f_{\rm e}$ is a quantity that depends on the t-DQD configuration the phase shift is not a well-defined quantity. Instead, a quantity analogous to the phase shift can be measured for fast readout. The emission spectrum [Fig.\ 4(b)] shifts in response to the charge state of the t-DQD, allowing us to simply measure the output power $P_{\rm out}$ within a narrow resolution bandwidth (RBW), as schematically illustrated in Fig.\ \ref{Fig: spectrum dependance}(b). The frequency range over which the power is integrated $f_{\rm e}^0 \pm$ RBW$/2$ should be smaller than the expected state-dependent shift in $f_{\rm e}$, yet large enough to tolerate the drift in $f_{\rm e}$ caused by charge fluctuations in the emitter \cite{LiuPRA2015, Liu2017}. 
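The narrow-band transduction can be illustrated with the closed-form integral of a normalised Lorentzian over the RBW window. Using the FWHM (8 kHz), RBW (30 kHz), and interdot shift (37 kHz) quoted in the text, a shift of 37 kHz moves most of the line out of the detection band. This is only a sketch: the measured factor-of-100 contrast also reflects the changes in peak power and linewidth described above.

```python
import math

def in_band_fraction(shift, fwhm, rbw):
    """Fraction of a unit-area Lorentzian line (width `fwhm`) that falls
    inside a detection window of width `rbw`, when the line centre sits
    `shift` away from the window centre. All arguments in the same units."""
    lo = -rbw / 2 - shift
    hi = rbw / 2 - shift
    return (math.atan(2 * hi / fwhm) - math.atan(2 * lo / fwhm)) / math.pi

fwhm, rbw = 8e3, 30e3                       # Hz, values from the text
frac_cb = in_band_fraction(0.0, fwhm, rbw)  # Coulomb blockade: line in band
frac_id = in_band_fraction(37e3, fwhm, rbw) # interdot: line shifted out of band
print(f"in-band fraction: CB {frac_cb:.2f}, interdot {frac_id:.3f}")
```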
We operate with RBW $>$ FWHM of the e-DQD emission spectrum to tolerate the drift in $f_{\rm e}$, and RBW $<\left|f_{\rm e}-f_{\rm e}^0\right|$ to allow sensitivity to changes in the emission spectrum due to the t-DQD charge state. Figure \ref{Fig: narrow rbw readout} (b) shows the output power $P_{\rm out}$ measured around $f_{\rm e}^0$ = 7595.68 MHz with a 30 kHz RBW. The state-dependent shift in the emission center frequency at a t-DQD interdot charge transition leads to a factor of 100 change in $P_{\rm out}$ within the measured bandwidth. \begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth]{Fig5_v2.pdf} \caption{\label{Fig: narrow rbw readout} (a) Cavity phase shift $\Delta \phi$ measured in response to a weak cavity input tone as a function of $V^{\rm T}_{\rm L}$ and $V^{\rm T}_{\rm R}$. (b) $P_{\rm out}$ integrated within a 30 kHz RBW as a function of $V^{\rm T}_{\rm L}$ and $V^{\rm T}_{\rm R}$. The integrated power in this narrow bandwidth is sensitive to changes in $f_{\rm e}$.} \end{center} \end{figure} \section{Summary and Outlook} In summary, we have shown that a voltage-biased DQD can be used as a light source for qubit readout in the cQED architecture. Readout based on measurements of the total output power, emission center frequency, and narrow-band output power were compared. While the total output power is sensitive to single dot charge transitions, it does not have sufficient sensitivity to resolve interdot charge transitions. Measurements of the emission center frequency reveal both single dot and interdot charge transitions, but this approach is slow and not well-suited for single shot readout. The narrow-band power measurement approach yields high sensitivity to both single dot and interdot charge transitions. In some applications, it may be desirable to place the e-DQD in a separate cavity. 
In the masing state, the e-DQD generates a large intra-cavity photon number $n_{\rm c} \sim 6000$, which may cause saturation effects and broaden the linewidth of the target transition. Separating the emitter from the target qubit would more easily allow the emitted field to be attenuated. Lastly, previous work has shown that the maser can be switched on and off rapidly \cite{Liu2017}. A switchable maser could be turned off during quantum control sequences and then rapidly activated for high power readout of the qubit state \cite{Reed2010}. We hope that this study will motivate further applications of nanoscale emitters in quantum computing readout architectures. \begin{acknowledgments} We thank M. J. Gullans and J. M. Taylor for helpful discussions and acknowledge support from the Packard Foundation, the National Science Foundation Grant No.\ DMR-1409556, and the Gordon and Betty Moore Foundation's EPiQS Initiative through Grant GBMF4535. Devices were fabricated in the Princeton University Quantum Device Nanofabrication Laboratory. \end{acknowledgments} \bibliographystyle{apsrev_lyy2017}
\section{Introduction} Current technological plans hint at mainstream adoption of highly charged ions (HCIs) for many uses in the near future~(see, e.g. the review~\cite{gillaspy01jpb}). Production of any ion stage of practically any naturally occurring element is possible at ion accelerators and/or electron beam ion traps. Furthermore, great progress has been made recently in trapping, cooling, and spectroscopy of HCIs (see, e.g.~\cite{draganic03prl,crespo08cjp,hobein11prl,mackel11prl} and the review~\cite{beiersdorfer09pscr}). In this paper we consider candidate transitions for an optical clock made using an HCI that has a configuration crossing in the ground state: a ``level crossing''. Level crossings in ions occur when the energy ordering of orbitals changes with increasing ion charge. The ion charge may be increased by considering ionisation along an isonuclear sequence or by considering an isoelectronic sequence with variable nuclear charge. The latter is somewhat simpler to deal with theoretically since the electronic structure does not usually change very much between adjacent ions. In this paper we discuss isoelectronic sequences at points where the electronic structure does change --- the level crossings --- and interesting properties can emerge. Near level crossings, the frequencies of transitions involving the crossing orbitals can be much smaller than the ionisation energy. This means that they can be within the optical range and have the potential to be excited by lasers, opening the possibility of performing high-precision spectroscopy and building optical clocks using HCI reference transitions. This work is also motivated by astronomical observations of quasar absorption spectra that hint that there is a spatial gradient in the value of the fine-structure constant, $\alpha = e^2/\hbar c$~\cite{webb11prl,king12mnras}.
Data samples from the Very Large Telescope and Keck Telescope~\cite{webb99prl,murphy03mnras} independently agree on the direction and the magnitude of this gradient, which is significant at a $4.2\sigma$ level. A consequence of the astronomical result is that since the solar system is moving along this spatial gradient, there may exist a corresponding temporal shift in $\alpha$ in the Earth frame at the level $\dot\alpha/\alpha\sim10^{-19}\ \textrm{yr}^{-1}$~\cite{berengut12epl}. Finding this variation using atomic clocks could independently corroborate the astronomical result in the laboratory. The best current terrestrial limit on time-variation of $\alpha$ was obtained by comparing the ratio of frequencies of the Al$^{+}$ clock and the Hg$^{+}$ clock over the course of a year~\cite{rosenband08sci}. The ratio is sensitive to $\alpha$-variation because the reference transitions in the two clocks have different sensitivity coefficients, $q$, defined as \begin{equation} \label{eq:q_def} q = \frac{d\omega}{dx}\bigg|_{x=0}\ , \end{equation} where $x = \alpha^2/\alpha^2_0 - 1$ is a normalised change in $\alpha^2$ from the current value $\alpha_0^2$, and $q$ and $\omega$ are measured in atomic units of energy. In this experiment the Al$^{+}$ clock is relatively insensitive to $\alpha$ variation (low $q$ coefficient), thus serving as an ``anchor'' line. On the other hand the Hg$^+$ clock is sensitive to $\alpha$-variation (high $q$ coefficient). Therefore, the ratio of these transition frequencies will change if $\alpha$ changes. The limit on the rate of change of $\alpha$ was measured as $\dot{\alpha}/\alpha = (-1.6\pm 2.3)\times 10^{-17}~\text{yr}^{-1}$. To compete with astrophysical measurements of the spatial gradient, the atomic clock limits must be improved by around two orders of magnitude. 
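The way differing sensitivity coefficients turn an $\alpha$ drift into a measurable drift of a clock frequency ratio follows from the definition of $q$ above: since $x = \alpha^2/\alpha_0^2 - 1 \approx 2\,\delta\alpha/\alpha$ and $\delta\omega = qx$, each clock shifts fractionally by $(2q/\omega)\,\delta\alpha/\alpha$, and the ratio drifts with enhancement factor $K = 2(q_1/\omega_1 - q_2/\omega_2)$. The sketch below uses illustrative $q$ and $\omega$ values for an Al$^+$/Hg$^+$-like pair taken from the literature (an assumption, since they are not quoted in this text):

```python
def ratio_sensitivity(q1, w1, q2, w2):
    """K = d ln(w1/w2) / d ln(alpha), dimensionless.
    Uses delta_w = q * x with x ~ 2 * dalpha/alpha, so
    d ln(w) = (2q/w) * dalpha/alpha for each clock."""
    return 2 * (q1 / w1 - q2 / w2)

# Illustrative values in cm^-1 (assumed, not quoted in this paper):
q_al, w_al = 146.0, 37393.0      # Al+ "anchor" line: small q
q_hg, w_hg = -52200.0, 35514.0   # Hg+ line: large negative q

K = ratio_sensitivity(q_al, w_al, q_hg, w_hg)
dalpha_over_alpha = 1e-18        # a hypothetical annual drift
print(f"K = {K:.2f}; fractional ratio drift = {K * dalpha_over_alpha:.2e} per yr")
```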
Several proposals have been made for atomic clocks that, if measured at the same level of accuracy as the Al$^+$/Hg$^+$ ratio, would give much stronger limits on $\alpha$-variation. These include: proposals to construct clocks using heavier elements with similar properties (e.g. the Tl$^{+}$ clock proposed by~\cite{dehmelt89pnas}); systems with large relative sensitivities to $\alpha$-variation exploiting the accidentally degenerate levels in Dy~\cite{dzuba99prl,cingoz07prl} or fine-structure anomalies in Te, Po, and Ce~\cite{dzuba05pra}; a variety of transitions in heavy elements with large $q$ values (e.g.~\cite{dzuba03pra,angstmann04pra0,porsev09pra,flambaum09pra}); and nuclear clocks based on the 7.6~eV isomeric transition in the $^{229}$Th nucleus that would have extraordinary sensitivity to variation of fundamental constants~\cite{peik03epl,flambaum06prl,berengut09prl,campbell12prl}. For a more complete review see~\cite{dzuba09cjp,berengut11jpcs}. Transitions near level crossings in HCIs can provide higher sensitivity to $\alpha$-variation than any other optical transitions seen in atomic systems~\cite{berengut10prl,berengut11prl}. Consider the following analytical formula for the relativistic shift of an energy level in the single-particle approximation~\cite{dzuba99prl}: \begin{equation} \label{eq:q_approx} q_n \approx -I_n \frac{(Z\alpha)^2}{\nu(j+1/2)} \,, \end{equation} where $I_n$ is the ionisation energy of the orbital (atomic units $\hbar = e = m_e = 1$), and $\nu$ is the effective principal quantum number. A transition in an HCI can have a large sensitivity because the difference in $q_n$ between the levels involved can be large. The enhancement comes from the coherent contributions of three factors: high nuclear charge $Z$, high ionisation degree $Z_\textrm{ion}$ (leading to large $I_n$), and significant differences in the configuration composition of the states involved (large changes in $j$ and $\nu$).
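The single-particle formula above is straightforward to evaluate in atomic units. The sketch below compares two hypothetical orbitals that differ only in $j$; the input values are illustrative, not taken from a real ion:

```python
ALPHA = 1 / 137.035999  # fine-structure constant (dimensionless)

def q_single_particle(I_n, Z, nu, j):
    """Relativistic shift coefficient in the single-particle approximation:
    q_n ~ -I_n * (Z*alpha)**2 / (nu * (j + 1/2)), energies in atomic units."""
    return -I_n * (Z * ALPHA) ** 2 / (nu * (j + 0.5))

# Illustrative comparison (assumed values): same ionisation energy and nu,
# but an s_1/2 vs a d_5/2 orbital in a Z = 60 ion
q_s = q_single_particle(I_n=2.0, Z=60, nu=2.0, j=0.5)
q_d = q_single_particle(I_n=2.0, Z=60, nu=2.0, j=2.5)
print(q_s, q_d)  # both negative; the s_1/2 orbital is shifted 3x more strongly
```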
For nearly-filled shells, an additional enhancement in the $\alpha$-sensitivity occurs due to each electron spending approximately half its time nearer to the nucleus than other electrons in the same shell. In these cases $q_n \sim I_n^{3/2}$~\cite{berengut11prl}. In this paper we perform a systematic search for level crossings in HCIs throughout the periodic table. We identify several ranges of $Z$ and $Z_\textrm{ion}$ where level crossings can be found, and perform configuration interaction calculations for some of the most promising systems. In \Sec{sec:scalings} we discuss how systematic effects that affect optical clocks are modified in the case of HCIs, and find that HCIs confer some benefits over near-neutral ions. Current experimental techniques might be applied to build a similar clock retaining high precision, but with much higher sensitivity to $\alpha$-variation. \section{Method} Our first task in this work is to identify HCIs with level crossings in the ground state. We start with neutral atoms and then increase $Z$, working along the isoelectronic sequence from the neutral-atom filling order towards the Coulomb filling order. The Madelung rule (also known as the Klechkowski rule) can be taken as a first approximation for determining the filling order of electron shells in neutral atoms. We show in the Appendix that this is a good approximation because deviations from this filling order in neutral atoms disappear with a small increase in the ion charge $Z_\textrm{ion}$. We also know that in very highly-charged ions, the energy levels of the electrons must approach the hydrogen-like (Coulomb) limit, where all orbitals with the same principal quantum number $n$ are nearly degenerate. \Fig{fig:ordering} presents the order of electron orbitals under both ordering schemes. 
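The two orderings can be generated mechanically: the Madelung order sorts orbitals by $n+l$ and then $n$, while the Coulomb limit sorts by $n$ and then $l$. A short script comparing the two orders (a sketch; the orbital list is truncated at the shells relevant here) recovers the crossings of \Fig{fig:ordering} and confirms that only $s-d$, $s-f$ and $p-f$ crossings are possible.

```python
# Madelung order (sort by n+l, then n) vs the hydrogen-like Coulomb order
# (sort by n, then l). Pairs whose relative order differs between the two
# schemes must cross somewhere along the isoelectronic sequence.

L_LABELS = "spdf"

orbitals = ([(n, 0) for n in range(1, 8)]     # 1s..7s
            + [(n, 1) for n in range(2, 8)]   # 2p..7p
            + [(n, 2) for n in range(3, 7)]   # 3d..6d
            + [(n, 3) for n in (4, 5)])       # 4f, 5f

madelung = sorted(orbitals, key=lambda nl: (nl[0] + nl[1], nl[0]))
coulomb = sorted(orbitals, key=lambda nl: (nl[0], nl[1]))

def label(nl):
    return f"{nl[0]}{L_LABELS[nl[1]]}"

crossings = set()
for i, a in enumerate(madelung):
    for b in madelung[i + 1:]:
        if coulomb.index(a) > coulomb.index(b):  # relative order flips -> crossing
            crossings.add((label(b), label(a)))

# The angular-momentum character of the possible crossings.
angular_types = {tuple(sorted((x[-1], y[-1]))) for x, y in crossings}
print(sorted(crossings))
print(sorted(angular_types))  # only s-d, s-f and p-f types appear
```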
Since any difference in the ordering as computed from the Madelung rule and that of the hydrogen-like limit must be resolved with increasing \ensuremath{Z_\textrm{ion}}, the `out-of-order' levels must cross at some \ensuremath{Z_\textrm{ion}}. From the transition between these limits it is seen that the only types of crossings available in HCIs are between orbitals with angular momenta $s-d$, $s-f$, and $p-f$. \begin{figure} \caption{A comparison of the ordering of electron orbitals: The first column is the order of filling as derived by applying the Madelung rule, while the second column is derived for a hydrogen-like atom (excluding $g$-wave and $h$-wave orbitals that cannot be occupied in the ground state of any real ion). The ordering of orbitals changes with increasing ion charge, $Z_\textrm{ion}$.} \label{fig:ordering} \includegraphics{Ordering.eps} \end{figure} Neutral atoms sometimes have ground state electronic configurations that deviate from the Madelung rule. In isoelectronic sequences starting with such atoms, other types of level crossings can occur (namely, $5d-4f$ and $6d-5f$). However, we find that the new crossings occur with the addition of just a few extra protons; no additional crossings are found in highly-charged ($\ensuremath{Z_\textrm{ion}} \gtrsim 5$) ions. Full details are presented in the Appendix. To find the ions along the isoelectronic sequence where level crossings lead to small transition frequencies we perform Dirac-Fock (relativistic Hartree-Fock) calculations. An example is presented in \Fig{fig:4f5p_crossing}, which shows the $4f$ and $5p$ valence orbitals of the indium isoelectronic sequence ($N=49$) calculated using Dirac-Fock in the $V^{N-1}$ approximation. It is seen that for low values of \ensuremath{Z_\textrm{ion}}\ the $4f$ orbitals lie above the $5p$ orbitals, but at $Z=59$ the $4f$ levels drop below the $5p_{3/2}$ orbital, and between $Z=59$ and 60 they cross the $5p_{1/2}$ orbital energy. 
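The crossing point can be located from such a scan by finding where the orbital-energy difference changes sign and interpolating linearly between neighbouring values of $Z$. The intervals below are the $N=49$ Dirac-Fock values of $E(5p_{1/2})-E(4f_{5/2})$ from \Tref{tab:4f5pDF}:

```python
# Locating a level crossing along an isoelectronic sequence: scan the
# interval E(5p_1/2) - E(4f_5/2) as a function of Z and interpolate the
# sign change. The intervals (cm^-1) are the N = 49 weighted Dirac-Fock
# values for La8+ ... Pm12+ quoted in the text.

def crossing_Z(zs, intervals):
    """Fractional Z where the interval changes sign, or None if it does not."""
    for z1, z2, d1, d2 in zip(zs, zs[1:], intervals, intervals[1:]):
        if d1 == 0.0:
            return float(z1)
        if d1 * d2 < 0:
            # Linear interpolation between the two scan points.
            return z1 + (z2 - z1) * (-d1) / (d2 - d1)
    return None

zs = [57, 58, 59, 60, 61]
e_5p12_minus_4f = [-114738.0, -70791.0, -20261.0, 36280.0, 98419.0]

z_cross = crossing_Z(zs, e_5p12_minus_4f)
print(f"5p_1/2 crosses 4f_5/2 near Z ~ {z_cross:.2f}")
```

The interpolated value falls between $Z=59$ and 60, in agreement with the discussion above.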
In general, this method produces acceptable estimates for the position of the crossings, as we will see by comparison with Configuration Interaction calculations in Sections~\ref{sec:fp_crossingOne} and \ref{sec:fp_crossingTwo}. \begin{figure}[tb] \caption{Dirac-Fock energies of the $4f_{5/2}$ (solid), $5p_{1/2}$ (dashed), and $5p_{3/2}$ (dotted) levels of the In ($N=49$) isoelectronic sequence.} \label{fig:4f5p_crossing} \includegraphics[width=0.45\textwidth]{E_4f5p.eps} \end{figure} Many-body perturbation theory (MBPT) corrections can be included, but our calculations show that this does not change the position of the crossing point (see \Fig{fig:MBPTComp}, which shows a detailed view of the crossing point of \Fig{fig:4f5p_crossing} with and without MBPT corrections). The indium sequence described above has one valence electron above closed shells (a cadmium core that we may consider frozen). In general, however, we can perform Dirac-Fock calculations even for several-valence-electron ions provided we scale the contribution of each subshell by its filling fraction. Again this gives reasonable accuracy for the ionisation energy (of order a few percent), which is good enough to identify level crossings. As we progress along an isoelectronic sequence, we increase $Z$ until the first crossing point is reached. After this point the electronic configuration will be altered, and to find other crossing points that occur later in the sequence further calculations must be performed with the modified electron configuration which assumes that the first crossing has occurred. In principle, it is possible to use the weighted Dirac-Fock method outlined above for an arbitrary number of electrons, but for partially-filled shells and electron-hole calculations there will usually be more than one possible DF electron configuration to use. 
One such example is Cr II~\cite{berengut11pra1}, where the $d$-shell electrons must be accounted for in the DF approximation, but it is not clear whether a $V^N$ scheme where $3d^5$ is included in the DF potential or a $V^{N-1}$ scheme where $3d^4$ is included will give better agreement with experiment (of course, in the limit of a complete basis set both approximations will give the same CI result). Furthermore, using a poor approximation for the DF potential may result in the Dirac-Fock calculation showing no available level crossings. In order to resolve this issue, we must perform at least minimal configuration interaction calculations to locate the crossing point and calculate approximate transition frequencies. \begin{figure}[tb] \caption{Energies of the $4f_{5/2}$ (solid), $5p_{1/2}$ (dashed), and $5p_{3/2}$ (dotted) levels of the In ($N=49$) isoelectronic sequence. Upper panel: detail of the level crossing in the Dirac-Fock approximation of \Fig{fig:4f5p_crossing}. Lower panel: the same level crossing calculated with many-body perturbation theory corrections included. The qualitative nature of the crossing point is not significantly affected by the MBPT corrections.} \label{fig:MBPTComp} \includegraphics[width=0.45\textwidth]{E_4f5p_MBPT.eps} \end{figure} HCIs with many valence electrons have some benefits for potential clock applications because of the availability of different angular momentum states and configurations. This is useful both for finding reference transitions with desirable properties and also for increasing the sensitivity of the transition to $\alpha$-variation, as the $q$ value for a $k$-electron transition is approximately $k$ times the $q$ value for a single-electron transition. This is illustrated, e.g., by the examples presented in~\cite{berengut10prl,berengut11prl}. Furthermore, using configuration mixing it is also possible to generate E1 transitions using multiple electrons in an $s-f$ crossing. 
In the following sections, we list all the available level crossings in the elements, based on the considerations discussed above. The calculations presented in this paper use the atomic structure code AMBiT~\cite{berengut06pra}, which includes Dirac-Fock (DF) and Configuration Interaction (CI) algorithms. While core-valence correlations can be included in a CI calculation via many-body perturbation theory using the CI+MBPT method~\cite{dzuba96pra}, for our current purposes this is not required, as discussed below. All CI calculations for two- and three-valence-electron ions are performed using a fairly small $B$-spline basis of the type developed in~\cite{johnson86prl,johnson88pra}, including valence orbitals only up to $7spdf$. MBPT corrections are much more important for the calculation of transition frequencies, $\omega$. More precise studies of those HCIs that are of interest to experimentalists will need to be performed using the full CI+MBPT theory. In \Tref{tab:Licomp} we compare experimental ionization energies with those calculated in the DF approximation for several levels of neutral lithium. For this simple case, we see that the ionization energies and intervals are accurate to $\sim1\%$ or better. \Tref{tab:Wcomp} compares the ground state ionization energies for selected ions along the tungsten ($Z = 74$) isonuclear sequence with available data. In HCIs, we see that we maintain roughly the same degree of accuracy; therefore, we can surmise that the positions of level crossings are fairly accurately determined from the DF calculations alone. On the other hand, we note that the energy of transitions between the levels participating in the optical level crossing may not be as easily determined for HCIs (indeed, even the ordering may be difficult to determine). This is because we are selecting HCIs where the difference between the ionization energies of these levels is strongly suppressed. 
For the typical scale of ionization energies in HCIs, $\sim 10^7~\textrm{cm}^{-1}$, an interval of $\sim 10^4~\textrm{cm}^{-1}$ is the result of a cancellation at the level of 99.9\%. To determine the ground state conclusively, an accuracy of better than 0.1\% in the ionisation energy is required. As a result, in HCIs near level crossings the electronic structure is not as well determined as in near-neutral ions, and the ground state is typically not identified to a high degree of confidence. For experimental purposes, however, the two (or more) possible ground states in HCIs with optical level crossings can all be considered metastable, as they have typical lifetimes ranging from seconds to the lifetime of the universe. \begin{table} \caption{Dirac-Fock calculation of ionization energy and energy intervals for neutral lithium, compared with available experimental data from~\cite{NIST}.} \label{tab:Licomp} \begin{ruledtabular} \begin{tabular}{lcrrr} \multicolumn{1}{c}{Level} & \multicolumn{1}{c}{$J$} & \multicolumn{2}{c}{Ionization Energy~(cm$^{-1}$)} & \multicolumn{1}{c}{\% Deviation} \\ && \multicolumn{1}{r}{DF Calc.} & \multicolumn{1}{r}{Expt.} & \\ \hline $2s$ & 1/2 & -43087 & -43487 & -0.919\\ $2p$ & 1/2 & -28232 & -28583 & -1.226\\ & 3/2 & -28232 & -28583 & -1.227\\ $3s$ & 1/2 & -16197 & -16281 & -0.513\\ $3p$ & 1/2 & -12460 & -12561 & -0.808\\ & 3/2 & -12459 & -12561 & -0.809\\ $3d$ & 3/2 & -12194 & -12204 & -0.079\\ & 5/2 & -12194 & -12204 & -0.079\\ $4s$ & 1/2 & -8444 & -8475 & -0.367\\ $4p$ & 1/2 & -6975 & -7017 & -0.604\\ & 3/2 & -6974 & -7017 & -0.604\\ $4d$ & 3/2 & -6859 & -6863 & -0.065\\ & 5/2 & -6859 & -6863 & -0.065 \end{tabular} \end{ruledtabular} \end{table} \begin{table} \caption{Dirac-Fock calculation of energy levels for selected ions belonging to the ionization sequence of tungsten, compared with available data.} \label{tab:Wcomp} \begin{ruledtabular} \begin{tabular}{lrrr} \multicolumn{1}{c}{Ion} & \multicolumn{2}{c}{Ionization 
Energy~($10^3$~cm$^{-1}$)} & \multicolumn{1}{c}{\% Deviation} \\ & \multicolumn{1}{r}{DF Calc.} & \multicolumn{1}{r}{Expt.~\cite{kramida09adndt}} & \\ \hline W$^{5+}$ & -509 & -522 & 2.49\\ W$^{11+}$ & -1846 & -1868 & 1.17\\ W$^{13+}$ & -2440 & -2345 & 4.05\\ W$^{27+}$ & -7075 & -7109 & 0.47\\ W$^{37+}$ & -13049 & -13080 & 0.23\\ W$^{45+}$ & -19487 & -19471 & 0.08\\ W$^{55+}$ & -43101 & -43133 & 0.07\\ W$^{73+}$ & -652346 & -651338 & 0.15 \end{tabular} \end{ruledtabular} \end{table} \section{Ground State Level Crossings} In this section we list all possible level crossings that occur because of the transition from the Madelung filling scheme to the Coulomb degenerate scheme. All crossings in \Fig{fig:ordering} are represented; however, most occur at relatively low ionisation stages or outside the range of relatively stable nuclei ($Z\gtrsim100$). The most interesting cases are those that occur in HCIs with $\ensuremath{Z_\textrm{ion}}\gtrsim5$: crossings $c$ ($4f-5s$), $d$ ($4f - 5p$), and $h$ ($5f - 6p$), which are studied in further detail in Sections~\ref{sec:fs_crossing}, \ref{sec:fp_crossingOne} and \ref{sec:fp_crossingTwo}, respectively. Once an ion is found with orbitals near a particular level crossing, nearby ions with the same crossing can generally be found by increasing the nuclear charge while simultaneously increasing the number of electrons by the same amount, provided that the orbital shells involved in the crossing are not completely filled. \paragraph{$3d - 4s$:} The earliest crossing possible in the periodic table occurs in the K isoelectronic sequence ($N=19$). The ground state configuration is [Ar]$4s$, but the ground state of Sc$^{2+}$ ($Z = 21$) is [Ar]$3d$. This crossing can be seen in the early transition metals, where it is well known that the $3d$ and $4s$ orbitals are nearly degenerate in neutral and near-neutral ions of these elements. All isoelectronic sequences beginning from neutral atoms with $19 \leq N \leq 28$ have this crossing. 
The $N=29$ isoelectronic sequence starts with Cu, where the ground state is $3d^{10}4s$; this sequence has no crossing since in the neutral atom the electron shells already fill in the Coulomb-limit order. \paragraph{$4d - 5s$:} For the Rb isoelectronic sequence, this crossing point occurs near $Z = 39$, which is Y$^{2+}$. Again this level crossing happens in near-neutral systems; it is available in isoelectronic sequences with $37 \leq N \leq 46$. For $N = 47$ the ground state already has Coulomb degenerate ordering. One ion with this crossing, the two-valence-electron ion Zr$^{2+}$, was discussed in~\cite{berengut11pra0}. \paragraph{$4f - 5s$:} The $5s$ and $4f$ level crossing occurs at a higher degree of ionisation than the previous two crossings. The lightest ions with this crossing occur in the $N=47$ isoelectronic sequence, which has a single electron above closed shells. The ions Nd$^{13+}$, Pm$^{14+}$ and Sm$^{15+}$ have optical transitions between these orbitals; they were studied in~\cite{berengut10prl}. The heaviest ions with this crossing occur when the $4f$ and $5s$ shells are nearly filled, i.e. in the isoelectronic sequences of Pm or Nd. These were studied in~\cite{berengut11prl}, where the ions Ir$^{16+}$ and Ir$^{17+}$ (ground state configurations $4f^{13}5s^2$ and $4f^{13}5s$, respectively) were found to have optical transitions from the ground state with extremely large $q$ values. The total number of ions with this crossing is around fifty. This level crossing is available in isoelectronic sequences with $47 \leq N \leq 61$. We discuss other examples with this crossing in \Sec{sec:fs_crossing}. \paragraph{$4f - 5p$:} The $5p_{1/2}$ and $5p_{3/2}$ orbitals are separated by a large fine-structure interval, which causes this level crossing to occur over a wider range of $Z$ (see \Fig{fig:4f5p_crossing}). For a single electron above a closed shell, this crossing occurs at around $Z = 59$. 
\Fig{fig:MBPTComp} illustrates the effect of including MBPT corrections on the position of the level crossing. This level crossing is available in isoelectronic sequences with $49 \leq N \leq 67$. The ions W$^{7+}$ and W$^{8+}$ ($N=67$ and 66, respectively), which have hole transitions between the nearly filled shells, were studied in detail in~\cite{berengut11prl}. We discuss other examples in \Sec{sec:fp_crossingOne}. \paragraph{$4f - 6s$:} This crossing point occurs much earlier in the ionization sequence than other $s - f$ crossings presented here since the difference in principal quantum number between the orbitals is $\Delta n = 2$. In the Cs isoelectronic sequence, this crossing occurs in Ce$^{3+}$; however, the $5d$ orbital also plays a role here and the $4f-6s$ level crossing is not seen in the ground state of this sequence. This level crossing is available in isoelectronic sequences with $55 \leq N \leq 69$. \paragraph{$5d - 6s$:} Just as in the $4d-5s$ case, the $5d$ and $6s$ orbitals cross at low ionization stage; for the Cs isoelectronic sequence ($N=55$) it occurs in doubly-ionised lanthanum ($Z=57$). On the other hand, $s^2-d^2$ transitions can have reasonably large $q$-values even in ions with relatively small ion stage, especially where hole transitions are used. Several interesting examples including Hf$^{2+}$, Hg$^{2+}$, and Hg$^{3+}$ were studied in~\cite{berengut11pra0}. This level crossing is available in isoelectronic sequences with $55 \leq N \leq 78$. \paragraph{$5f - 6s$:} This crossing is similar to the $4f-5s$ crossing previously discussed, and it was hoped that ions which showed this crossing would have very high $q$-values due to the large $Z^2$ enhancement factor. However, the $6s$ orbital is much more tightly bound than the $5f$ orbitals, and as a result the level crossing occurs at $Z = 105$ for the Au isoelectronic sequence and well beyond $105$ for the Tl isoelectronic sequence. 
While this level crossing occurs in isoelectronic sequences with $79 \leq N \leq 101$, it is unavailable in any stable nuclei. \paragraph{$5f - 6p$:} The $6p_{1/2}$ and $6p_{3/2}$ orbitals are very far apart in HCIs due to large fine-structure splitting (the $5f_{5/2}$ and $5f_{7/2}$ orbitals are much closer). This causes a bifurcation of this level crossing, with $5f$ crossing the $6p_{3/2}$ orbitals (in the excited state) near $Z = 93$ and $6p_{1/2}$ near $Z = 98$ for the Tl isoelectronic sequence. This level crossing is available in isoelectronic sequences with $81 \leq N \leq 101$. It was originally exploited in~\cite{berengut12arxiv1}, where it was shown that optical transitions in Cf$^{16+}$ ($N=82$ with two valence electrons) have the largest sensitivity to variation of the fine-structure constant seen in any atomic system. We discuss more examples in \Sec{sec:fp_crossingTwo}. \paragraph{$5f - 7s$:} As in the case of the $4f - 6s$ crossing, the difference in principal quantum number is $\Delta n = 2$. Since the binding energy decreases with increasing $n$, the $7s$ orbital is comparable in energy to $5f$, creating a crossing point early in the ionization sequence. Ac$^{2+}$, which is near this level crossing, was examined in~\cite{berengut11pra0}. This level crossing is available in isoelectronic sequences with $87 \leq N \leq 101$. \paragraph{$6d - 7s$ and $6f - 7s$:} The $6d - 7s$ level crossing occurs in low ionisation stages of isoelectronic sequences with $N \geq 87$. For example, in the Fr isoelectronic sequence it is seen in Ac$^{2+}$~\cite{berengut11pra0}, which has $7p$, $6d$, and $5f$ orbitals all within optical range of the $7s$ ground state. This level crossing is available in isoelectronic sequences with $N \geq 87$. The $6f - 7s$ crossing should exist in all sequences with $N \geq 87$ since the $6f$ shell is never occupied. 
However, the $6f$ orbital is at such high energy that the crossing occurs in very highly charged ions, with $Z > 100$. Therefore the crossing is not shown in \Fig{fig:ordering} since it will not occur in stable isotopes. \section{$4f-5s$ crossing} \label{sec:fs_crossing} In this section, we examine the $4f - 5s$ crossing in greater detail. As mentioned previously, this level crossing occurs in ions with a relatively high degree of ionization. \Tref{tab:4f5sCI} presents CI calculations of some ions near this crossing with up to three valence electrons. As can be seen from the tables, the range of values of the ion charge \ensuremath{Z_\textrm{ion}}\ for which the level crossing occurs remains relatively stable. This reinforces the general rule of thumb that, given an ion near a level crossing, simultaneously increasing or decreasing both the charge $Z$ and the number of electrons $N$ by the same amount will result in another ion near the same level crossing. The $4f - 5s$ level crossing is unique in that it is the only available level crossing between levels of different parity in HCIs. This hints at the possibility of optical E1 transitions in these ions, which could be useful for cooling and trapping of HCIs. Two points are worth noting, however: firstly, in HCIs the strength of E1 transitions is suppressed compared to near-neutral atoms (\Sec{sec:ejmjscaling}); and secondly, with \mbox{$\Delta l = 3$} for an $s-f$ transition, it will tend to proceed via configuration mixing, which greatly reduces its strength. Examples include $_{63}$Eu$^{14+}$, which has E1 transitions between the ground state, $4f^{2}5s$ ($J^P = 3.5^+$), and excited states $4f5s^{2}$ ($J^P = 2.5^-$) and $4f^3$ ($J^P = 4.5^-$), with energy intervals $17478~\text{cm}^{-1}$ and $28828~\text{cm}^{-1}$, respectively. 
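The rule of thumb above can be made concrete: adding one proton and one electron at a time keeps the ion charge $\ensuremath{Z_\textrm{ion}} = Z - N$ fixed while walking the same crossing through the periodic table. The sketch below generates such a family starting from the one-valence-electron ion Pm$^{14+}$ mentioned earlier (the element-symbol map is an assumption covering only the range needed here).

```python
# Walking a level crossing through the periodic table: adding a proton and
# an electron together keeps Z_ion = Z - N fixed, giving another ion near
# the same crossing. SYMBOLS is a small assumed map for this Z range only.

SYMBOLS = {60: "Nd", 61: "Pm", 62: "Sm", 63: "Eu", 64: "Gd", 65: "Tb"}

def same_crossing_family(Z, N, steps):
    """Labels of ions obtained by incrementing Z and N together."""
    return [f"{SYMBOLS[Z + k]}{Z - N}+ (Z={Z + k}, N={N + k})"
            for k in range(steps + 1)]

# Starting from Pm14+ (Z=61, N=47) at the 4f-5s crossing:
for ion in same_crossing_family(61, 47, 3):
    print(ion)
```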
Perhaps the most interesting examples presented in \Tref{tab:4f5sCI} are the two-valence-electron ion Sm$^{14+}$, which was studied in~\cite{berengut10prl}, and the three-valence-electron ion Eu$^{14+}$. Both of these ions have ground states with a half-open $5s$ shell, which means that both have optical $s-f$ and $f-s$ ground state transitions. The two E1 transitions in Eu$^{14+}$ mentioned previously are of this type, which means that they will have $q$-values of opposite sign. On the other hand, they may be too broad for high-precision clocks. Better reference transitions for clocks are strongly suppressed E1 transitions, suppressed M1 transitions, and E2 transitions. It is also possible to have level crossings in hole states, where one or two electrons are removed from otherwise closed shells and effectively give rise to a similar structure as one- or two-valence-electron systems. The specific cases of Ir$^{16+}$ and Ir$^{17+}$ were studied in~\cite{berengut11prl} for the hole case, which leaves the intermediate cases open. In general, intermediate ions with more than one electron result in large configuration spreading, significantly complicating the level structure of the ion. This is not true for the hole cases, which allow for simpler level structures while still providing the benefit of increased $Z$ and therefore high sensitivity to $\alpha$-variation. \begin{longtable}{@{\extracolsep{19pt}}lclcc} \caption{\label{tab:4f5sCI} Configuration interaction calculations for the level structure of highly charged ions with one, two or three valence electrons and $4f-5s$ intervals below $100\,000~\text{cm}^{-1}$. The ellipses ($\ldots$) indicate that there are more fine-structure states available, which we omit for brevity.}\\ \hline\hline N & Ion & Config. & $J^P$ & Energy (cm$^{-1}$)\\ \hline \endfirsthead \caption{(continued)}\\ \hline\hline N & Ion & Config. 
& $J^P$ & Energy (cm$^{-1}$)\\ \hline \endhead \hline \endfoot \hline \hline \endlastfoot 47 & $_{60}$Nd$^{13+}$ & $5s$ & 0.5$^+$ & 0 \\* & & $4f$ & 2.5$^-$ & 64084 \\* & & $4f$ & 3.5$^-$ & 68480 \\* 47 & $_{61}$Pm$^{14+}$ & $5s$ & 0.5$^+$ & 0 \\* & & $4f$ & 2.5$^-$ & 8902 \\* & & $4f$ & 3.5$^-$ & 14290 \\* 47 & $_{62}$Sm$^{15+}$ & $4f$ & 2.5$^-$ & 0 \\* & & $4f$ & 3.5$^-$ & 6485 \\* & & $5s$ & 0.5$^+$ & 51314 \\ 48 & $_{60}$Nd$^{12+}$ & $5s^{2}$ & 0$^+$ & 0\\* & & $4f5s$ & 2$^-$ & 86136\\* & & $4f5s$ & 3$^-$ & 87464\\* & & $4f5s$ & 4$^-$ & 90435\\* & & $4f5s$ & 3$^-$ & 96929\\ 48 & $_{61}$Pm$^{13+}$ & $5s^{2}$ & 0$^+$ & 0\\* & & $4f5s$ & 2$^-$ & 32742\\* & & $4f5s$ & 3$^-$ & 34261\\* & & $4f5s$ & 4$^-$ & 38030\\* & & $4f5s$ & 3$^-$ & 44299\\* & & $4f^{2}$ & 4$^+$ & 98912\\ 48 & $_{62}$Sm$^{14+}$ & $4f5s$ & 2$^-$ & 0 \\* & & $4f5s$ & 3$^-$ & 1697\\* & & $4f5s$ & 4$^-$ & 6381\\* & & $4f^{2}$ & 4$^+$ & 11223\\* & & $4f5s$ & 3$^-$ & 12460\\* & & $4f^{2}$ & 5$^+$ & 16095\\* & & \ldots & & \\* & & $5s^{2}$ & 0$^+$ & 25338\\* & & $4f^{2}$ & 4$^+$ & 31069\\* & & \ldots & & \\ 48 & $_{63}$Eu$^{15+}$ & $4f^{2}$ & 4$^+$ & 0\\* & & $4f^{2}$ & 5$^+$ & 5886\\* & & \ldots & & \\* & & $4f5s$ & 2$^-$ & 48780\\* & & $4f5s$ & 3$^-$ & 50643\\* & & \ldots & & \\ 49 & $_{62}$Sm$^{13+}$ & $4f5s^{2}$ & 2.5$^-$ & 0\\* & & $4f5s^{2}$ & 3.5$^-$ & 6189\\* & & $4f^{2}5s$ & 3.5$^+$ & 40211\\* & & $4f^{2}5s$ & 4.5$^+$ & 42454\\* & & \ldots & & \\ 49 & $_{63}$Eu$^{14+}$ & $4f^{2}5s$ & 3.5$^+$ & 0\\* & & $4f^{2}5s$ & 4.5$^+$ & 2601\\* & & $4f^{2}5s$ & 5.5$^+$ & 6663\\* & & $4f^{2}5s$ & 1.5$^+$ & 10711\\* & & \ldots & & \\* & & $4f5s^{2}$ & 2.5$^-$ & 17478\\* & & $4f^{2}5s$ & 3.5$^+$ & 20722\\* & & \ldots & & \\* & & $4f5s^{2}$ & 3.5$^-$ & 24854\\* & & $4f^{3}$ & 4.5$^-$ & 28828\\* & & \ldots & & \\ 49 & $_{64}$Gd$^{15+}$ & $4f^{3}$ & 4.5$^-$ & 0\\* & & $4f^{3}$ & 5.5$^-$ & 4768\\* & & $4f^{3}$ & 6.5$^-$ & 9711\\* & & $4f^{3}$ & 1.5$^-$ & 24137\\* & & \ldots & & \\* & & $4f^{2}5s$ & 3.5$^+$ & 
30172\\* & & $4f^{3}$ & 4.5$^-$ & 31911\\* & & \ldots & & \\ 49 & $_{65}$Tb$^{16+}$ & $4f^{3}$ & 4.5$^-$ & 0\\* & & $4f^{3}$ & 5.5$^-$ & 5702\\* & & $4f^{3}$ & 6.5$^-$ & 11527\\* & & $4f^{3}$ & 1.5$^-$ & 25637\\* & & \ldots & & \\* & & $4f^{2}5s$ & 3.5$^+$ & 94034\\* & & $4f^{2}5s$ & 4.5$^+$ & 97331\\ \end{longtable} \section{$4f - 5p$ crossing} \label{sec:fp_crossingOne} The $4f - 5p$ crossing differs from the $4f - 5s$ crossing in that the orbitals are of the same parity and the $5p$ orbital has strong fine-structure splitting. HCIs near this crossing can have M1 transitions even without configuration mixing since the single-electron $5p_{3/2} - 4f_{5/2}$ transition is M1-allowed (although not in the non-relativistic limit since $\Delta l = 2$). Since the ratio of M1/E1 transition strengths is larger in HCIs relative to near-neutral ions (due to the suppression of E1 transitions), these ions can have rich physics to exploit in clocks. Additionally, E2-allowed transitions are plentiful, and these can have linewidths which are more appropriate for reference transitions. With the possibility of up to 14 electrons in the $f$-shell and 6 electrons in the $p$-shell, there are many ions that have this crossing, from single-valence-electron examples like $_{59}$Pr$^{10+}$ to the nineteen-valence-electron (single hole) $_{74}$W$^{7+}$. \Tref{tab:4f5pDF} presents Dirac-Fock calculations of energy levels in the $V^{N-1}$ approximation, with a $5p^x$ shell included above the closed Cd ($N=48$) core. The $5p^x$ shell ($x = N - 49$) is included by weighting the potential of the filled $5p^6$ shell by the factor $x/6$. The position of the crossing from the Dirac-Fock estimate does not always agree with the configuration interaction calculation, but at least provides a reasonable starting point. \begin{table}[tb] \caption{\label{tab:4f5pDF} Weighted Dirac-Fock energy intervals calculated in the $V^{N-1}$ potential for highly charged ions near the $4f - 5p$ level crossing. 
The Dirac-Fock procedure includes a Cd core and a weighted $5p$ shell: [Kr] $5s^2 4d^{10} 5p^x$, with $x = N - 49$. } \begin{ruledtabular} \begin{tabular}{lccrrr} $N$ & $x$ & Ion & \multicolumn{3}{c}{Energy relative to $4f_{5/2}$ orbital (cm$^{-1}$)}\\ &&& $5p_{1/2}$ & $5p_{3/2}$ & $4f_{7/2}$ \\ \hline 49& 0 & $_{57}$La$^{8+}$ & -114738 & -86257 & 1698 \\ && $_{58}$Ce$^{9+}$ & -70791 & -37066 & 2406 \\ && $_{59}$Pr$^{10+}$ & -20261 & 19230 & 3200 \\ && $_{60}$Nd$^{11+}$ & 36280 & 82094 & 4085 \\ && $_{61}$Pm$^{12+}$ & 98419 & 151157 & 5066 \\ 50 & 1 & $_{60}$Nd$^{10+}$ & -130828 & -82538 & 4013 \\ && $_{61}$Pm$^{11+}$ & -77060 & -21675 & 4988 \\ && $_{62}$Sm$^{12+}$ & -17868 & 45257 & 6065 \\ && $_{63}$Eu$^{13+}$ & 46466 & 118020 & 7249 \\ 51 & 2 & $_{61}$Pm$^{10+}$ & -100727 & -47479 & 4916 \\ && $_{62}$Sm$^{11+}$ & -43378 & 17463 & 5987 \\ && $_{63}$Eu$^{12+}$ & 19191 & 88305 & 7164 \\ && $_{64}$Gd$^{13+}$ & 86736 & 164841 & 8456 \\ 52 & 3 & $_{61}$Pm$^{9+}$ & -123164 & -72027 & 4849 \\ && $_{62}$Sm$^{10+}$ & -67686 & -9103 & 5915 \\ && $_{63}$Eu$^{11+}$ & -6903 & 59797 & 7086 \\ && $_{64}$Gd$^{12+}$ & 58917 & 134449 & 8370 \\ 53 & 4 & $_{61}$Pm$^{8+}$ & -144342 & -95288 & 4787 \\ && $_{62}$Sm$^{9+}$ & -90768 & -34415 & 5849 \\ && $_{63}$Eu$^{10+}$ & -31796 & 32520 & 7014 \\ && $_{64}$Gd$^{11+}$ & 32282 & 105269 & 8290 \\ 54 & 5 & $_{62}$Sm$^{8+}$ & -112596 & -58445 & 5788 \\ && $_{63}$Eu$^{9+}$ & -55462 & 6497 & 6948 \\ && $_{64}$Gd$^{10+}$ & 6852 & 77323 & 8218 \\ && $_{65}$Tb$^{11+}$ & 74101 & 153821 & 9606 \\ 55 & 6 & $_{62}$Sm$^{7+}$ & -133139 & -81160 & 5734 \\ && $_{63}$Eu$^{8+}$ & -77874 & -18239 & 6889 \\ && $_{64}$Gd$^{9+}$ & -17346 & 50637 & 8153 \\ && $_{65}$Tb$^{10+}$ & 48176 & 125243 & 9533 \\ \end{tabular} \end{ruledtabular} \end{table} In \Tref{tab:4f5pCI} we present CI calculations for two- and three-valence-electron ions near the $4f - 5p$ crossing. 
Interesting examples here include the $_{60}$Nd$^{10+}$ ion, which has a mixed $4f5p$ ground state from which narrow transitions are available to $4f^2$ and $5p^2$ configurations. These would have $q$ values of opposite sign, and so a clock using these transitions would be a good probe of $\alpha$-variation. On the other hand, configuration mixing ensures that the three-valence-electron ions $_{60}$Nd$^{9+}$ and $_{61}$Pm$^{10+}$ have good E2 and M1 transitions well within the range of usual optical and near-IR lasers. At the heavier end of the spectrum of ions which have this crossing are W$^{7+}$ and W$^{8+}$, with one and two holes in otherwise filled orbitals, respectively. These were studied in~\cite{berengut11prl}. \begin{table} \caption{\label{tab:4f5pCI} Configuration interaction calculations for the level structure of HCIs with two or three valence electrons and $4f-5p$ intervals below $100\,000~\text{cm}^{-1}$. The ellipses ($\ldots$) indicate that there are more fine-structure states available, which we omit for brevity.} \begin{ruledtabular} \begin{tabular}{lclcc} N & Ion & Config. 
& J & Energy (cm$^{-1}$)\\ \hline 50 & $_{58}$Ce$^{8+}$ & $5p^{2}$ & 0 & 0\\ & & $5p^{2}$ & 1 & 23362\\ & & $5p^{2}$ & 2 & 31033\\ & & $4f5p$ & 3 & 92661\\ & & $4f5p$ & 4 & 98806\\ 50 & $_{59}$Pr$^{9+}$ & $5p^{2}$ & 0 & 0\\ & & $5p^{2}$ & 1 & 28273\\ & & $5p^{2}$ & 2 & 34999\\ & & $4f5p$ & 3 & 44738\\ & & $4f5p$ & 4 & 51669\\ & & $4f5p$ & 5 & 86593\\ 50 & $_{60}$Nd$^{10+}$ & $4f5p$ & 3 & 0\\ & & $4f5p$ & 2 & 3640\\ & & $4f5p$ & 4 & 7701\\ & & $5p^{2}$ & 0 & 9060\\ & & $4f^{2}$ & 5 & 33730\\ & & $4f^{2}$ & 6 & 36668\\ & & $5p^{2}$ & 1 & 42578\\ 51 & $_{59}$Pr$^{8+}$ & $5p^{3}$ & 1.5 & 0\\ & & $5p^{3}$ & 1.5 & 26953\\ & & $5p^{3}$ & 2.5 & 34494\\ & & $4f5p^{2}$ & 2.5 & 47413\\ & & $4f5p^{2}$ & 3.5 & 50927\\ & & $5p^{3}$ & 0.5 & 51929\\ & & $4f5p^{2}$ & 3.5 & 68470\\ & & \ldots & & \\ 51 & $_{60}$Nd$^{9+}$ & $4f5p^{2}$ & 2.5 & 0\\ & & $4f5p^{2}$ & 3.5 & 6429\\ & & $5p^{3}$ & 1.5 & 10613\\ & & $4f5p^{2}$ & 2.5 & 25124\\ & & $4f5p^{2}$ & 3.5 & 27641\\ & & \ldots & & \\ & & $4f^{2}5p$ & 4.5 & 58361\\ & & \ldots & & \\ 51 & $_{61}$Pm$^{10+}$ & $4f5p^{2}$ & 2.5 & 0\\ & & $4f^{2}5p$ & 4.5 & 3937\\ & & $4f5p^{2}$ & 3.5 & 6992\\ & & $4f^{2}5p$ & 3.5 & 9483\\ & & $4f^{2}5p$ & 5.5 & 10844\\ & & $4f^{2}5p$ & 2.5 & 13732\\ & & $4f^{2}5p$ & 3.5 & 16000\\ & & \ldots & & \\ & & $4f5p^{2}$ & 2.5 & 31646\\ & & \ldots & & \\ \end{tabular} \end{ruledtabular} \end{table} \section{$5f - 6p$ crossing} \label{sec:fp_crossingTwo} The $5f - 6p$ level crossing is similar in many ways to the $4f - 5p$ crossing, with some important differences. Since this crossing occurs in ions with very high $Z$, the fine-structure splitting of the $6p$ levels is very large, and near the level crossing it is usually much larger than the $5f - 6p$ interval. This provides advantages over the $4f - 5p$ crossing: a larger number of ions are available in which one of these crossings occurs, and the large fine-structure splitting simplifies the level structure. 
In cases where the $6p_{1/2}$ and $5f$ levels cross, such as near Cf$^{17+}$, there is an enhancement of sensitivity to $\alpha$-variation~\cite{berengut12arxiv1}. The lower component of the $p_{1/2}$ Dirac spinor has an $s_{1/2}$ structure and is not small because of the high $Z$. This means that the $p_{1/2}$ orbital has a $q$-value comparable to an $s$-wave orbital. \begin{table}[tb] \caption{\label{tab:5f6pDF} Weighted Dirac-Fock energy intervals calculated in the $V^{N-1}$ potential for highly charged ions near the $5f - 6p$ level crossing. The Dirac-Fock procedure includes a Hg core ($N=80$) and a weighted $6p$ shell: [Xe] $6s^2 5d^{10} 4f^{14} 6p^x$, with $x = N - 81$.} \begin{ruledtabular} \begin{tabular}{lcrrr} $N$ & Ion & \multicolumn{3}{c}{Energy relative to $5f_{5/2}$ orbital (cm$^{-1}$)} \\ &&$6p_{1/2}$ & $6p_{3/2}$ & $5f_{7/2}$\\ \hline 81 & $_{90}$Th$^{9+}$ & -182828 & -94604 & 4964 \\ 81 & $_{91}$Pa$^{10+}$ & -167192 & -65878 & 6440 \\ 81 & $_{92}$U$^{11+}$ & -148656 & -33232 & 8044 \\ 81 & $_{93}$Np$^{12+}$ & -127546 & 3067 & 9775 \\ 81 & $_{94}$Pu$^{13+}$ & -104132 & 42813 & 11636 \\ 81 & $_{95}$Am$^{14+}$ & -78639 & 85844 & 13630 \\ 81 & $_{96}$Cm$^{15+}$ & -51267 & 132033 & 15760 \\ 81 & $_{97}$Bk$^{16+}$ & -22195 & 181274 & 18030 \\ 81 & $_{98}$Cf$^{17+}$ & 8415 & 233481 & 20446 \\ 81 & $_{99}$Es$^{18+}$ & 40410 & 288583 & 23012 \\ 81 & $_{100}$Fm$^{19+}$ & 73643 & 346516 & 25732 \\ 82 & $_{95}$Am$^{13+}$ & -258013 & -86547 & 14560 \\ 82 & $_{96}$Cm$^{14+}$ & -237735 & -47179 & 16727 \\ 82 & $_{97}$Bk$^{15+}$ & -215633 & -4627 & 19033 \\ 82 & $_{98}$Cf$^{16+}$ & -191886 & 41006 & 21484 \\ 82 & $_{99}$Es$^{17+}$ & -166663 & 89635 & 24085 \\ 83 & $_{96}$Cm$^{13+}$ & -248665 & -63867 & 16320 \\ 83 & $_{97}$Bk$^{14+}$ & -227341 & -22372 & 18614 \\ 83 & $_{98}$Cf$^{15+}$ & -204330 & 22234 & 21052 \\ 83 & $_{99}$Es$^{16+}$ & -179808 & 69859 & 23638 \\ 84 & $_{96}$Cm$^{12+}$ & -259040 & -79947 & 15920 \\ 84 & $_{97}$Bk$^{13+}$ & -238508 & -39525 & 
18205 \\ 84 & $_{98}$Cf$^{14+}$ & -216246 & 4043 & 20630 \\ 84 & $_{99}$Es$^{15+}$ & -192434 & 50657 & 23202 \\ 85 & $_{96}$Cm$^{11+}$ & -262798 & -89202 & 15076 \\ 85 & $_{97}$Bk$^{12+}$ & -242718 & -49486 & 17332 \\ 85 & $_{98}$Cf$^{13+}$ & -220875 & -6600 & 19728 \\ 85 & $_{99}$Es$^{14+}$ & -197458 & 39345 & 22269 \\ 85 & $_{100}$Fm$^{15+}$ & -172644 & 88261 & 24960 \\ 86 & $_{97}$Bk$^{11+}$ & -259178 & -71990 & 17413 \\ 86 & $_{98}$Cf$^{12+}$ & -238450 & -30538 & 19816 \\ 86 & $_{99}$Es$^{13+}$ & -216091 & 14017 & 22363 \\ 86 & $_{100}$Fm$^{14+}$ & -192285 & 61579 & 25059 \\ 87 & $_{97}$Bk$^{10+}$ & -268654 & -87268 & 17032 \\ 87 & $_{98}$Cf$^{11+}$ & -248716 & -46899 & 19426 \\ 87 & $_{99}$Es$^{12+}$ & -227101 & -3390 & 21960 \\ 87 & $_{100}$Fm$^{13+}$ & -204002 & 43149 & 24643 \\ 87 & $_{101}$Md$^{14+}$ & -179597 & 92630 & 27480 \\ \end{tabular} \end{ruledtabular} \end{table} \begin{table} \caption{\label{tab:5f6pCI_2e} Configuration interaction estimates for the level structure of highly charged ions with two valence electrons and $5f-6p$ intervals below $100\,000~\text{cm}^{-1}$. Ellipses ($\ldots$) indicate that there are more fine-structure states that have been omitted. All levels have even parity.} \begin{ruledtabular} \begin{tabular}{lclcc} N & Ion & Config. 
& J & Energy (cm$^{-1}$)\\ \hline 82 & $_{95}$Am$^{13+}$ & $6p^{2}$ & 0 & 0\\ & & $5f6p$ & 3 & 89786\\ & & $5f6p$ & 2 & 97898\\ 82 & $_{96}$Cm$^{14+}$ & $6p^{2}$ & 0 & 0\\ & & $5f6p$ & 3 & 63664\\ & & $5f6p$ & 2 & 72221\\ & & $5f6p$ & 3 & 83564\\ & & $5f6p$ & 4 & 85846\\ 82 & $_{97}$Bk$^{15+}$ & $6p^{2}$ & 0 & 0\\ & & $5f6p$ & 3 & 36004\\ & & $5f6p$ & 2 & 44444\\ & & $5f6p$ & 3 & 58033\\ & & $5f6p$ & 4 & 59702\\ & & $5f^{2}$ & 4 & 90788\\ 82 & $_{98}$Cf$^{16+}$ & $6p^{2}$ & 0 & 0\\ & & $5f6p$ & 3 & 7452\\ & & $5f6p$ & 2 & 14775\\ & & $5f^{2}$ & 4 & 28824\\ & & $5f6p$ & 3 & 31436\\ & & $5f^{2}$ & 4 & 36157\\ & & \ldots & & \\ 82 & $_{99}$Es$^{17+}$ & $5f^{2}$ & 4 & 0\\ & & $5f^{2}$ & 2 & 4536\\ & & $5f6p$ & 3 & 5323\\ & & $5f^{2}$ & 5 & 20460\\ & & $5f^{2}$ & 4 & 21371\\ & & $5f6p$ & 2 & 22436\\ & & $6p^{2}$ & 0 & 22871\\ & & $5f^{2}$ & 3 & 24795\\ & & $5f6p$ & 3 & 35959\\ & & $5f^{2}$ & 6 & 42232\\ & & \ldots & & \\ 82 & $_{100}$Fm$^{18+}$ & $5f^{2}$ & 4 & 0\\ & & $5f^{2}$ & 2 & 9828\\ & & $5f^{2}$ & 5 & 22772\\ & & $5f^{2}$ & 4 & 27627\\ & & $5f^{2}$ & 3 & 28876\\ & & $5f6p$ & 3 & 37642\\ & & $5f^{2}$ & 6 & 39897\\ & & \ldots & & \\ & & $5f6p$ & 3 & 67819\\ & & $5f6p$ & 4 & 73676\\ & & \ldots & & \\ \end{tabular} \end{ruledtabular} \end{table} \begin{table}[!tb] \caption{\label{tab:5f6pCI_3e} Configuration interaction calculations for the level structure of HCIs with three valence electrons and $5f-6p$ intervals below $100\,000~\text{cm}^{-1}$. Ellipses ($\ldots$) indicate that there are more fine-structure states that have been omitted. All levels have odd parity.} \begin{ruledtabular} \begin{tabular}{lclcc} N & Ion & Config. 
& J & Energy (cm$^{-1}$)\\ \hline 83 & $_{92}$U$^{9+}$ & $6p^3$ & 1.5 & 0\\ & & $5f6p^{2}$ & 2.5 & 70210\\ & & $5f6p^{2}$ & 3.5 & 82945\\ 83 & $_{96}$Cm$^{13+}$ & $5f6p^{2}$ & 2.5 & 0\\ & & $5f6p^{2}$ & 3.5 & 18815\\ & & $5f^{2}6p$ & 4.5 & 83815\\ & & $5f^{2}6p$ & 2.5 & 97251\\ & & $5f^{2}6p$ & 3.5 & 97765\\ 83 & $_{97}$Bk$^{14+}$ & $5f6p^{2}$ & 2.5 & 0\\ & & $5f6p^{2}$ & 3.5 & 20858\\ & & $5f^{2}6p$ & 4.5 & 58127\\ & & $5f^{2}6p$ & 2.5 & 72447\\ & & $5f^{2}6p$ & 3.5 & 73189\\ & & $5f^{2}6p$ & 1.5 & 76551\\ & & $5f^{2}6p$ & 5.5 & 79687\\ & & \ldots & & \\ 83 & $_{98}$Cf$^{15+}$ & $5f6p^{2}$ & 2.5 & 0\\ & & $5f6p^{2}$ & 3.5 & 22742\\ & & $5f^{2}6p$ & 4.5 & 31188\\ & & $5f^{2}6p$ & 2.5 & 46699\\ & & $5f^{2}6p$ & 3.5 & 47136\\ & & $5f^{2}6p$ & 1.5 & 49751\\ & & $5f^{2}6p$ & 5.5 & 54895\\ & & \ldots & & \\ 83 & $_{99}$Es$^{16+}$ & $5f6p^{2}$ & 2.5 & 0\\ & & $5f^{2}6p$ & 4.5 & 4928\\ & & $5f^{2}6p$ & 3.5 & 19106\\ & & $5f^{2}6p$ & 1.5 & 22246\\ & & $5f^{2}6p$ & 2.5 & 23262\\ & & $5f6p^{2}$ & 3.5 & 26967\\ & & $5f^{2}6p$ & 5.5 & 30767\\ & & \ldots & & \\ & & $5f^{3}$ & 5.5 & 55606\\ & & \ldots & & \\ & & $5f^{3}$ & 6.5 & 64091\\ & & $5f^{3}$ & 1.5 & 65019\\ & & \ldots & & \\ 83 & $_{100}$Fm$^{17+}$ & $5f^{2}6p$ & 4.5 & 0\\ & & $5f^{3}$ & 4.5 & 8162\\ & & $5f^{2}6p$ & 2.5 & 11213\\ & & $5f^{2}6p$ & 1.5 & 12028\\ & & $5f^{2}6p$ & 3.5 & 18134\\ & & $5f^{3}$ & 5.5 & 22763\\ & & $5f^{2}6p$ & 2.5 & 28805\\ & & \ldots & & \\ & & $5f^{3}$ & 1.5 & 33451\\ & & \ldots & & \\ \end{tabular} \end{ruledtabular} \end{table} In \Tref{tab:5f6pDF} we present weighted Dirac-Fock orbital energies for ions near the $5f-6p$ crossing in the $V^{N-1}$ approximation. For the single-valence electron case ($N=81$) two crossings are seen. The first occurs between U$^{11+}$ and Np$^{12+}$ and corresponds to the $5f - 6p_{3/2}$ crossing, while the $5f - 6p_{1/2}$ crossing occurs near Cf$^{17+}$. 
This second crossing is only shown in \Tref{tab:5f6pDF} for $N=81$ because it is soon pushed to ions with $Z>100$ (although clearly it will still occur for two- or three-valence-electron ions). Configuration interaction calculations for some interesting HCIs with the $5f-6p$ crossing are shown in \Tref{tab:5f6pCI_2e} (two-valence-electron ions) and \Tref{tab:5f6pCI_3e} (three-valence-electron ions). As with the ions near the $4f-5p$ level crossing, many M1 and E2 transitions are available within the optical range corresponding to single-electron $p-f$ transitions. Clearly the difficulty with exploiting this crossing is that many of the elements with transitions near it are not stable and do not occur naturally. In \cite{berengut12arxiv1} we studied Cf$^{16+}$ in some detail since it is relatively stable (with isotopes whose half-lives reach several hundred years) and has the $6p_{1/2}-5f$ crossing mentioned previously. Using hole transitions is not possible with this crossing since there would need to be around 14 or 15 valence electrons (corresponding to the crossing of filled $6p_{1/2}^2$ and $5f^{14}$ shells, minus one or two electrons), and this would require $Z>100$, well past the somewhat stable elements. An interesting example that makes use of the $5f - 6p_{3/2}$ crossing is the three-valence-electron U$^{9+}$. Because of the large fine-structure splitting, the first two valence electrons fill the $6p_{1/2}^2$ subshell. The third valence electron is in the $6p_{3/2}$ subshell (ground state) but may be excited to the $5f_{5/2}$ and $5f_{7/2}$ orbitals. These transitions are shown in \Tref{tab:5f6pCI_3e}. The transitions (M1 at $70210\ \ensuremath{\textrm{cm}^{-1}}$ and E2 at $82945\ \ensuremath{\textrm{cm}^{-1}}$) will not be particularly sensitive to $\alpha$-variation, but $^{235}$U is also interesting because it has a 76 eV nuclear transition which may soon come within XUV laser range~\cite{cingoz12nat}.
The nuclear transition would have high sensitivity to variation of fundamental constants, and the electronic transition could then form an `anchor' (relatively insensitive) transition. \section{Approximate Scaling Laws} \label{sec:scalings} It is useful to have simple estimates of various properties of HCIs given existing knowledge of an appropriate neutral or near-neutral ion. In the following sections we derive approximate scaling laws for highly charged ions that state how a particular property will change along an isoelectronic sequence with increasing $Z$. Our approach is similar to that of~\cite{gillaspy01jpb}; however, we use the formalism of effective charges, which we believe will be more useful to experimentalists. A summary of our results is presented in \Tref{tab:scalings}. \subsection{Coefficients of Linear Fitting for Effective Charge} Recall the approximate formula for the non-relativistic energy of an electron in a screened Coulomb potential, $V(r) \sim -{Z_a}/{r}$, \begin{equation} E_n = - \frac{(\ensuremath{Z_\textrm{ion}} + 1)^2}{2\nu^2} = -\frac{\ensuremath{Z_a}^2}{2n^2}\label{eqn:nonrelenergy} \,, \end{equation} where $n$ is the integer principal quantum number and $\nu$ is the effective principal quantum number, which is introduced to keep agreement with experimentally observed energies. For the purposes of this work on highly charged ions, it is more convenient to introduce an effective charge \ensuremath{Z_a}\ as an alternative to $\nu$ that represents the (non-integer) effective screened charge of the potential that the external electron `sees'. In this formulation $n$ is kept as the usual integer principal quantum number. \ensuremath{Z_a}\ scales nearly linearly along an isoelectronic sequence as $Z$ (or equivalently \ensuremath{Z_\textrm{ion}}) increases. \ensuremath{Z_a}\ can easily be calculated using Dirac-Fock energies for any ion.
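As a quick numerical illustration, $\ensuremath{Z_a} = \sqrt{|2 n^2 E_n|}$ follows directly from an orbital energy in atomic units. The sketch below uses the hydrogenic $1s$ and $2s$ energies as check cases; it is an illustration of the relation, not a Dirac-Fock calculation:

```python
import math

def effective_charge(n, energy_au):
    """Effective charge Z_a = sqrt(|2 n^2 E|) for an orbital with
    principal quantum number n and orbital energy E in atomic units."""
    return math.sqrt(abs(2.0 * n**2 * energy_au))

# Hydrogen 1s (E = -0.5 a.u.): the electron sees exactly charge 1.
print(effective_charge(1, -0.5))   # -> 1.0
# Hydrogenic 2s of He+ (E = -0.5 a.u., n = 2): effective charge 2.
print(effective_charge(2, -0.5))   # -> 2.0
```

For a real ion the same one-liner is applied to the Dirac-Fock orbital energy of the valence electron.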
We also present fitting laws for \ensuremath{Z_a}\ as a function of \ensuremath{Z_\textrm{ion}}\ for several valence orbitals in \Tref{tab:Zeffcoeffs} that may be used to quickly obtain \ensuremath{Z_a}. The data were obtained from one-valence-electron Dirac-Fock calculations, and we fit for the linear coefficients $A$ and $B$ according to \begin{equation} \ensuremath{Z_a} = A\,\ensuremath{Z_\textrm{ion}} + B \,. \end{equation} Values for $A$ and $B$ are presented in \Tref{tab:Zeffcoeffs}. For large \ensuremath{Z_\textrm{ion}}\ ($5 \leq \ensuremath{Z_\textrm{ion}} \leq 20$, labelled H in \Tref{tab:Zeffcoeffs}), our calculations show that the linear approximation used above is in very good agreement with the calculated trend (see \Fig{fig:4fza}). Often experimental data are available for neutral or near-neutral ions, and extrapolating from these ions to HCIs requires a reasonable estimate of \ensuremath{Z_a}\ for these ions. Therefore we also present fits across the domain \mbox{$1 \leq \ensuremath{Z_\textrm{ion}} \leq 4$} (labelled L in \Tref{tab:Zeffcoeffs}) and values for the neutral atoms \mbox{$\ensuremath{Z_\textrm{ion}} = 0$} (labelled N in \Tref{tab:Zeffcoeffs}). \begin{figure}[tb] \caption{Calculated effective charge $\ensuremath{Z_a} = \sqrt{\left|2 n^2 E\right|}$ (circles) versus ion charge \ensuremath{Z_\textrm{ion}}\ for a valence $4f$ electron above a closed shell [Xe]\,$6s^2$ ($N=56$) core. The lines represent linear fits using the values tabulated in \Tref{tab:Zeffcoeffs} for the appropriate regions of $\ensuremath{Z_\textrm{ion}}$.} \label{fig:4fza} \includegraphics[width=0.45\textwidth]{4fza.eps} \end{figure} \begin{table}[htb] \caption{Table of coefficients $A$ and $B$ for the effective charge $\ensuremath{Z_a} = A\ensuremath{Z_\textrm{ion}} + B$.
For the regime column, L means the coefficients are more suitable for ions with $1 \leq \ensuremath{Z_\textrm{ion}} \leq 4$ and H means that the coefficients are tailored for high ion charge $5 \leq \ensuremath{Z_\textrm{ion}} \leq 20$, while N is the special case of neutral atoms where $\ensuremath{Z_\textrm{ion}} = 0$. These values were tabulated using the Dirac-Fock energies of singly-occupied electron orbitals above closed shells.} \label{tab:Zeffcoeffs} \begin{ruledtabular} \begin{tabular}{rcccc} Orbital & N & Regime & $A$ & $B$\\ \hline $4s$ & 19 & L & 1.233280 & 2.4530\\ & & H & 1.060127 & 3.3022\\ & & N & & 2.1725\\ $5s$ & 37 & L & 1.402867 & 3.0221\\ & & H & 1.130601 & 4.4015\\ & & N & & 2.6390\\ $6s$ & 55 & L & 1.542243 & 3.4892\\ & & H & 1.198431 & 5.2507\\ & & N & & 3.0283\\ $7s$ & 87 & L & 1.750825 & 4.1439\\ & & H & 1.325290 & 6.3323\\ & & N & & 3.5841\\ $4p$ & 31 & L & 1.362347 & 2.8994\\ & & H & 1.107311 & 4.1724\\ & & N & & 2.5049\\ $5p$ & 49 & L & 1.525760 & 3.5762\\ & & H & 1.179904 & 5.3294\\ & & N & & 3.0751\\ $6p$ & 81 & L & 1.774450 & 4.4369\\ & & H & 1.318796 & 6.7611\\ & & N & & 3.7917\\ $3d$ & 21 & L & 1.188580 & 4.0655\\ & & H & 1.049841 & 4.7758\\ & & N & & 3.9083\\ $4d$ & 39 & L & 1.447306 & 2.8779\\ & & H & 1.119397 & 4.5226\\ & & N & & 2.3191\\ $5d$ & 71 & L & 1.724165 & 3.4661\\ & & H & 1.227029 & 5.9896\\ & & N & & 2.6498\\ $4f$ & 57 & L & 1.801169 & 2.6428\\ & & H & 1.194715 & 5.6147\\ & & N & & 1.0082\\ $5f$ & 89 & L & 2.028852 & 2.4084\\ & & H & 1.276494 & 6.1027\\ & & N & & 1.2579 \end{tabular} \end{ruledtabular} \end{table} Our approach is similar to that of Slater~\cite{slater30pr} for calculating the effective charge of electrons with shielding and the work that followed it. We see that the effective charge \ensuremath{Z_a}\ is always bigger than $Z_i + 1$ (this correspondingly leads to $\nu < n$ for the other convention). 
This is because electrons spend a non-zero amount of time closer to the nucleus, and during that time experience a larger ion charge. It is worth noting that these two schemes are equivalent for the $4f$ electron in neutral La, which experiences an effective charge very close to $Z_i + 1 = 1$ because it is much further from the nucleus than the $s$, $p$ and $d$ electrons below it. Therefore the use of $Z_i + 1$ gives similar results to \ensuremath{Z_a}\ for electrons that are far removed from the potential of other electrons. \subsection{Scaling of EJ and MJ Matrix Elements} \label{sec:ejmjscaling} In this section we present analytical estimates for the scaling of the EJ and MJ transition matrix elements. We use the following formulae to calculate both the non-relativistic electric and magnetic multipole reduced matrix elements (a relativistic treatment gives the same results). In the equations below, we seek to retain only the dependence on \ensuremath{Z_a}\ in the relevant formulae whenever possible. The E1 matrix element is \begin{equation} \langle nl | r | n'l' \rangle = \int P_{nl} r P_{n'l'} dr \end{equation} where the radial wavefunction $P_{nl}$ far away from the core electrons is \begin{eqnarray} P_{nl} & = & N_{nl} \left( \frac{2Z_ar}{n} \right)^{l+1} e^{-\frac{Z_ar}{n}}F\left( -n+l+1, 2l+2, \frac{2Z_ar}{n} \right)\nonumber\\ N_{nl} & = & \frac{1}{n(2l+1)!} \sqrt{\frac{Z_a(n+l)!}{(n-l-1)!}}\nonumber\ . \end{eqnarray} This allows the \ensuremath{Z_a}\ dependence of the E1 integral to be calculated as \begin{equation} \left(\int_0^\infty r P_{i}P_{j} dr\right) \sim \left(Z_a\right)^{-1}\label{eqn:nonrelintegral}\ . \end{equation} The non-relativistic M1 matrix element does not scale with charge as it is a function of the angular momenta only. Therefore, while the E1 matrix element decreases with increasing \ensuremath{Z_a}, the M1 matrix element remains constant.
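The $1/Z_a$ scaling of the dipole integral can be checked numerically with a hydrogenic toy model. The sketch below evaluates $\int P_{1s}\, r\, P_{2p}\, dr$ for a pure Coulomb charge $Z$ using the analytic hydrogenic radial functions (an illustration of the scaling only, not a Dirac-Fock calculation); doubling the charge halves the matrix element:

```python
import numpy as np

def dipole_integral_1s2p(Z, rmax=60.0, npts=200_000):
    """Numerically evaluate <1s| r |2p> = int P_1s r P_2p dr
    for a hydrogenic charge Z, in atomic units."""
    r = np.linspace(1e-8, rmax / Z, npts)       # grid scaled with 1/Z
    dr = r[1] - r[0]
    P1s = 2.0 * Z**1.5 * r * np.exp(-Z * r)                           # r*R_1s
    P2p = Z**2.5 / (2.0 * np.sqrt(6.0)) * r**2 * np.exp(-Z * r / 2.0)  # r*R_2p
    return np.sum(P1s * r * P2p) * dr           # simple Riemann sum

# Analytic value at Z=1 is 768/(243*sqrt(6)) ~ 1.2903 a.u.,
# and the integral scales as 1/Z:
ratio = dipole_integral_1s2p(1.0) / dipole_integral_1s2p(2.0)
print(ratio)  # -> close to 2
```

The same numerical experiment with $\langle r^n \rangle$ reproduces the higher-multipole powers of $1/Z_a$ quoted below.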
In comparing highly charged ions (large \ensuremath{Z_a}) and near-neutral ions (small \ensuremath{Z_a}), we see that M1 transitions can be as strong as E1 transitions, since the latter decrease with increasing \ensuremath{Z_a}. A similar treatment was adopted in~\cite{bates49ptrsa}, with the effective principal quantum number $\nu$ (labelled $n^*$ in their equations) instead of the effective charge \ensuremath{Z_a}. For higher multipoles one obtains higher powers of the Coulomb radius \begin{equation} \label{eq:coulomb_radius} \left< r^n \right> \sim \left( \frac{a_B}{\ensuremath{Z_a}} \right)^n \end{equation} where $a_B$ is the Bohr radius, so that the general scaling law for EJ and MJ matrix elements is \begin{equation} \langle \kappa_i || q_J^{(E)} || \kappa_j \rangle \sim \left(Z_a\right)^{-J} \end{equation} and \begin{equation} \langle \kappa_i || q_J^{(M)} || \kappa_j \rangle \sim \left(Z_a\right)^{1-J}\ . \end{equation} In general, M(J+1) matrix elements have the same \ensuremath{Z_a}\ scaling as EJ matrix elements. \subsection{Scaling of Polarizability and Blackbody Radiation Shift} The blackbody radiation (BBR) shift for an adiabatic system can be calculated using the formula \begin{equation} \delta E = -\frac{1}{2}(831.9~\text{V/m})^2 \left( \frac{T(K)}{300} \right)^4 \alpha_0 (1+ \eta) \end{equation} where $\alpha_0$ is the static dipole polarizability and $\eta$ is a small dynamic correction due to the frequency distribution, which for the purposes of this estimate we will disregard. The valence scalar polarizability of an atom in a state $v$ can be expressed as a sum over all excited intermediate states $n$ allowed by E1 selection rules: \begin{equation} \alpha_0 = \frac{2}{3(2j_v + 1)} \sum_n \frac{\langle v || r || n \rangle \langle n || r || v \rangle}{E_n - E_v} \end{equation} We showed in \Eref{eqn:nonrelintegral} that the reduced matrix element $\langle v || r || n \rangle$ scales simply as $1/\ensuremath{Z_a}$.
Also, the dependence of the non-relativistic energy on $Z_a$ is given by \Eref{eqn:nonrelenergy} to be $Z_a^2$. Therefore all terms in the summation have the same dependence on $Z_a$ and the total dependence on $Z_a$ must necessarily be the same. We must then have \begin{equation} \delta E \sim \alpha_0 \sim \left(\frac{1}{Z_a}\right)^4 \label{eqn:bbrreduction}\ . \end{equation} \Eref{eqn:bbrreduction} suggests that in systems with high effective charge (large \ensuremath{Z_a}) such as highly charged ions, the BBR shift will be strongly suppressed as compared to neutral systems. \subsection{Scaling of the Hyperfine structure} Operators with large negative powers of radius will not follow the Coulomb radius scaling, \Eref{eq:coulomb_radius}, since the wavefunction at small distances cannot be described by $P_{nl}$. Instead we must use the approach of Fermi-Segr\'e~(see, e.g.~\cite{foldy58pr}) where the normalised squared wavefunction at the origin $\sim Z(\ensuremath{Z_\textrm{ion}}+1)^2/\nu^3$. Since $\nu = n(\ensuremath{Z_\textrm{ion}}+1)/\ensuremath{Z_a}$, we then come to the following scaling law for the hyperfine $A$ coefficient: \[ \frac{A}{g_I} \sim \frac{Z\ensuremath{Z_a}^3}{(\ensuremath{Z_\textrm{ion}}+1)} \] where we have factored out the nuclear $g$-factor $g_I$ which varies greatly between nuclei. We compare this scaling law with experimental data in \Tref{tab:scalingcomp}. A similar result may be derived for the electric quadrupole hyperfine constant $B$. We should also point out that the widths of hyperfine transitions will scale as $\omega^3 \sim A^3$, therefore relaxation of hyperfine structure will occur much faster in HCIs. \begin{table}[tb] \caption{Magnetic dipole hyperfine coefficients $A$ (calculated in~\cite{wu07chinesephys}) and their scaling with increasing $Z$ along the lithium isoelectronic sequence. 
Values of $Z_a$ were obtained from Dirac-Fock calculations using the relation $Z_a = \sqrt{|2n^2E|}$ where $n$ is the principal quantum number and $E$ is the orbital energy in atomic units. The notation $|_\text{p}$ means to use the values in the previous row of the table.} \label{tab:scalingcomp} \begin{ruledtabular} \begin{tabular}{lcccccc} Isotope & $I$ & $g_I$ & $A$~(MHz)~\cite{wu07chinesephys} & $Z_a$ & $\frac{(A/g_I)}{(A/g_I)|_\text{p}}$ & $\frac{\frac{ZZ^3_a}{Z_i + 1}}{\frac{ZZ^3_a}{Z_i + 1}\big|_{\text{p}}}$\\ \hline $_3^7$Li & 3/2 & 2.1709 & 399.34 & 1.25 & & \\ $_4^9$Be$^{+}$ & 3/2 & -0.7850 & -625.55 & 2.31 & 4.35 & 4.20 \\ $_{\ 5}^{11}$B$^{2+}$ & 3/2 & 1.7924 & 3603.77 & 3.33 & 2.53 & 2.49 \\ $_{\ 6}^{13}$C$^{3+}$ & 1/2 & 1.4048 & 5642.40 & 4.35 & 2.00 & 2.00 \\ $_{\ 7}^{15}$N$^{4+}$ & 1/2 & -0.5664 & -3973.68 & 5.36 & 1.74 & 1.74 \\ $_{\ 8}^{17}$O$^{5+}$ & 5/2 & -0.7575 & -8474.13 & 6.36 & 1.59 & 1.59 \\ $_{\ 9}^{19}$F$^{6+}$ & 1/2 & 5.2578 & 88106.93 & 7.37 & 1.50 & 1.50 \end{tabular} \end{ruledtabular} \end{table} \begin{table}[tb] \caption{\label{tab:scalings} Scaling dependences for HCIs for various sources of systematic shifts in optical clocks.} \begin{ruledtabular} \begin{tabular}{ll} $2^\textrm{nd}$ order Stark shift & $\sim 1/\ensuremath{Z_a}^4$ \\ Blackbody shift & $\sim 1/\ensuremath{Z_a}^4$ \\ $2^\textrm{nd}$ order Zeeman shift & suppressed\footnotemark[1] \\ Electric quadrupole shift & $\sim 1/Z_a^2$ \\ Fine-structure & $\sim Z^2\ensuremath{Z_a}^3/(\ensuremath{Z_\textrm{ion}}+1)$ \\ Hyperfine $A$ coefficient & $\sim Z\ensuremath{Z_a}^3/(\ensuremath{Z_\textrm{ion}}+1)$ \\ \end{tabular} \end{ruledtabular} \footnotetext[1]{The Zeeman shift is sensitive to the specific fine- and hyperfine-structure of the transition, but may be suppressed in HCIs due to a larger energy denominator.} \end{table} \section{Conclusion} In this paper, we have discussed all level crossings available in the periodic table and their characteristics. 
We separately discussed and identified several highly charged ions near level crossings and presented estimates for the energy intervals in some of these ions. We also calculated scaling laws in terms of the effective screened charge \ensuremath{Z_a}\ for transition matrix elements, energy intervals (including fine-structure), the blackbody radiation shift, and the hyperfine structure -- these provide a quick and reliable way to estimate the size of these atomic properties given knowledge of the same properties in a near-neutral ion. In order to facilitate these estimates, we have also tabulated empirical values for \ensuremath{Z_a}\ for singly occupied electron orbitals above closed shells. Our scaling laws predict the BBR shifts in HCIs will be strongly suppressed. On the other hand, the hyperfine structure is much more important. The potential future applications of HCIs as discussed in this paper\ are clear -- the strong dependence of transitions in HCIs on the variation of the fine-structure constant makes them good candidates for laboratory tests of cosmological $\alpha$-variation. Finally, the existence of level crossings leads to the availability of transitions that can be excited by optical lasers. This is an experimental advantage HCIs have over nuclear clocks, which have also been proposed to probe the variation of fundamental constants~\cite{peik03epl,campbell12prl}, but require lasers operating in the petahertz range to excite the nucleus. \begin{acknowledgments} This work was supported in part by the Australian Research Council. Supercomputer time was provided by an award under the Merit Allocation Scheme on the NCI National Facility at the Australian National University. \end{acknowledgments}
\section{Introduction} \vspace{-0.1cm} Translating a spoken language, in other words recognizing speech and automatically having one's words translated into another language, is extremely complex. The traditional approach to speech-to-text translation constructs separate automatic speech recognition (ASR) and machine translation (MT) systems, both of which are independently trained and tuned. Given a speech input, the ASR system processes and transforms the speech into the text in the source language, and then MT transforms the text in the source language into the corresponding text in the target language \cite{Satoshi-IEEE06}. The only unit of information shared between these components is words at the text level. Even though significant progress has been made and various commercial speech translation systems have been introduced, this approach continues to suffer from several major limitations. One of the drawbacks is that speech acoustics might involve both linguistic and paralinguistic information (e.g., prosody, intonation, accent, rhythm, emphasis, or emotion), but such paralinguistic information is not a factor in written communication, and much cannot even be expressed in words. Consequently, the words output by ASR have lost all of their paralinguistic information, and only the linguistic parts are translated by the MT system. Some studies have proposed including an additional component to handle paralinguistic translation, but this step introduces more complexity and delay \cite{Kano-InterSpeech13,Truong-InterSpeech16,1660081}. Another noted problem is that over half of the world's languages actually have no written form and are only spoken. For such languages, one solution is to translate directly from a phoneme-based transcription. However, the performance of a phoneme-based ASR is usually low, and errors in the ASR stage can propagate throughout the translation process \cite{Deng-ICASSP11}.
Therefore, it would be useful to find ways beyond the current conventional approach to directly translate from the speech of the source language to the text of the target language. Recently, deep learning has shown much promise in many tasks. A sequence-to-sequence attention-based neural network is one architecture that provides a powerful model for machine translation and speech recognition \cite{Chorowski-CoRR15,Bahdanau-CoRR14}. Several works have since extended such models to end-to-end speech translation (ST) tasks. Duong et al. \cite{Duong-NAACL16} directly trained attentional models on parallel speech data. But their work is only applicable to Spanish-English language pairs with similar syntax and word order (SVO-SVO). Furthermore, it focused on alignment performance. The only attempt to build a full-fledged end-to-end attentional-based speech-to-text translation system is that of B\'{e}rard et al. \cite{Alexandre-NIPS16}. But that work was only done on a small French-English synthetic corpus, because these languages share a similar word order (SVO-SVO). For such languages, only local movements are sufficient for translation. This paper proposes a first attempt to build an end-to-end attention-based ST system on syntactically distant language pairs that suffer from long-distance reordering phenomena. We train the attentional model on English-Japanese language pairs with SVO versus SOV word order. To guide the encoder-decoder attentional model to learn this difficult problem, we propose a structured-based curriculum learning strategy. Unlike conventional curriculum learning, which gradually emphasizes difficult data examples, we formalize CL strategies that start the training with an end-to-end encoder-decoder for speech recognition or text-based machine translation tasks and gradually train the network for end-to-end speech translation tasks by adapting the decoder or encoder parts.
\vspace{-0.1cm} \section{Related Works} \vspace{-0.1cm} \textit{Curriculum learning} (CL) is a learning paradigm inspired by the learning processes of humans and animals, which start from easier aspects and gradually progress to more difficult ones. Although the application of such training strategies to machine learning has been discussed by machine learning and cognitive science researchers since Elman (1993) \cite{Elman-Cognition93}, CL's first formulation in the context of machine learning was introduced by Bengio et al. (2009) \cite{Bengio-09}. Using CL might help avoid bad local minima, speed up training convergence, and improve generalization. These advantages have been empirically demonstrated in various tasks, including shape recognition \cite{Bengio-09}, object classification \cite{Gong-IEEE16}, and language modeling tasks \cite{Rush-CoRR15}. However, most studies focused on how to organize the sequence of the learning data examples in the context of single-task learning. Bengio et al. \cite{Bengio-09} proposed curriculum learning for multiple tasks. But again, all of the tasks still belonged to the same type of problem, which is object classification, where those tasks shared the same input and output spaces. In contrast with most previous CL studies, (1) we utilize a CL strategy not for simple recognition/classification problems, but for sequence-to-sequence based neural network learning problems in speech translation tasks; (2) the attentional-based neural network is not trained directly for the speech translation task using similar but more and more difficult speech translation data.
Instead, we formalize CL strategies that start the training with an end-to-end encoder-decoder for speech recognition or text-based machine translation tasks and gradually train the network for end-to-end speech translation tasks by adapting the decoder or encoder parts respectively; (3) the different tasks of speech recognition, text-based machine translation, and speech translation used in structured-based CL do not share the same input and output spaces, as they do in the CL of multiple tasks. \begin{figure*} \centering \fbox{\includegraphics[width=13cm]{./idea.eps}} \caption{Attention-based speech translation training phases with CL-based concept.} \label{fig1} \end{figure*} \vspace{-0.1cm} \section{Basic Attention-based Speech Translation} \label{sec:basic} \vspace{-0.1cm} We built our end-to-end speech translation system upon the standard attention-based encoder-decoder neural network architecture \cite{Bahdanau-CoRR14,Sutskever-CoRR14} that consists of encoder, decoder, and attention modules. Given input sequence $\mathbf{x} = [x_1, x_2, ..., x_N]$ with length $N$, the encoder produces a sequence of vector representations $h^{enc} = \left(h^{enc}_1,h^{enc}_2, ..., h^{enc}_N\right)$. Here we used a bidirectional recurrent neural network with long short-term memory (bi-LSTM) units \cite{Hochreiter:1997:LSM:1246443.1246450}, which consist of forward and backward LSTMs. The forward LSTM reads the input sequence from $x_1$ to $x_N$ and estimates forward $\overrightarrow{h^{enc}}$, while the backward LSTM reads the input sequence in reverse order from $x_N$ to $x_1$ and estimates backward $\overleftarrow{h^{enc}}$. Thus, for each input $x_n$, we obtain $h^{enc}_n$ by concatenating forward $\overrightarrow{h^{enc}}$ and backward $\overleftarrow{h^{enc}}$. The decoder, on the other hand, predicts target sequence $\mathbf{y} = [y_1,y_2, ..., y_T]$ with length $T$ by estimating conditional probability $p(\mathbf{y}|\mathbf{x})$.
Here, we use a uni-directional LSTM (forward only). Conditional probability $p(\mathbf{y}|\mathbf{x})$ is estimated based on the whole sequence of previous outputs: \vspace{-0.1cm} \begin{equation} p(y_t|y_1,y_2,...,y_{t-1},\mathbf{x})=\text{softmax}(W_y \tilde{h}^{dec}_t). \end{equation} Decoder hidden activation vector $\tilde{h}^{dec}_t$ is computed by applying linear layer $W_c$ over context information $c_t$ and current hidden state $h^{dec}_t$: \begin{equation} \tilde{h}^{dec}_t = \text{tanh}(W_c[c_t;h^{dec}_t]). \end{equation} Here $c_t$ is the context information of the input sequence when generating the current output at time $t$, estimated by the attention module over encoder hidden states $h^{enc}_n$: \begin{equation} c_t = \sum_{n=1}^{N} a_t(n) \, h^{enc}_n, \end{equation} where variable-length alignment vector $a_t$, whose size equals the length of input sequence $\mathbf{x}$, is computed by \begin{eqnarray} a_t(n) &=& \text{align}({h^{enc}_n}, h^{dec}_t) \nonumber \\ &=& \text{softmax}(\text{dot}({h^{enc}_n}, h^{dec}_t)). \end{eqnarray} This step assists the decoder in finding relevant information on the encoder side based on the current decoder hidden state. There are several variations for calculating $\text{align}({h^{enc}_n}, h^{dec}_t)$; here we simply use the dot product between the encoder and decoder hidden states \cite{LuongPM15}. In this study, we apply this basic architecture to various tasks: \begin{itemize} \item ASR system\\ Input sequence $\mathbf{x} = [x_1, ..., x_N]$ is the input speech sequence of the source language, and target sequence $\mathbf{y} = [y_1, ..., y_T]$ is the predicted corresponding transcription in the source language. \item MT system\\ Input sequence $\mathbf{x} = [x_1, ..., x_N]$ is the word sequence of the source language, and target sequence $\mathbf{y} = [y_1, ..., y_T]$ is the predicted corresponding word sequence in the target language.
\item ST system\\ Input sequence $\mathbf{x} = [x_1, ..., x_N]$ is the input speech sequence of the source language, and target sequence $\mathbf{y} = [y_1, ..., y_T]$ is the predicted corresponding word sequence in the target language. \end{itemize} \vspace{-0.1cm} \section{Attention-based Speech Translation with Curriculum Learning} \label{sec:proposed} \vspace{-0.1cm} The training process of the attention-based encoder-decoder model is inherently more difficult than that of a standard neural network model \cite{chan2016listen} because an attention-based model needs to jointly optimize three different modules (encoder, decoder, and attention) simultaneously. Utilizing the attention-based encoder-decoder architecture for a direct ST task is especially difficult because the model needs to solve two complex problems: (1) learning how to process a long speech sequence and map it to the corresponding words, similar to the issues focused on in the field of ASR \cite{Chorowski-CoRR15}; (2) learning how to make good alignment rules between source and target languages, similar to the issues discussed in the field of MT \cite{Bahdanau-CoRR14, Koehn-NAACL03}. Furthermore, we utilize the attention-based encoder-decoder architecture to construct an ST system for syntactically distant language pairs that suffer from long-distance reordering phenomena, training the attentional model on English-Japanese language pairs with SVO versus SOV word order. Therefore, to assist the encoder-decoder model in learning this difficult problem, we propose a structured-based curriculum learning strategy. In our CL strategy, the attention-based neural network is not trained directly for speech translation tasks using progressively more difficult speech translation data; instead, we formalize structured-based CL strategies that start the training with an end-to-end encoder-decoder for ASR or text-based MT tasks and gradually train the network for end-to-end ST tasks.
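As a concrete reference for the attention module that must be jointly optimized with the encoder and decoder, the dot-product attention of Section \ref{sec:basic} can be sketched in NumPy; the dimensions below are illustrative assumptions, not the settings used in this paper.

```python
import numpy as np

def dot_attention(h_enc, h_dec_t):
    """Dot-product attention: h_enc is an (N, d) array of encoder
    states, h_dec_t is the (d,) current decoder state."""
    scores = h_enc @ h_dec_t          # dot(h_enc_n, h_dec_t) for each n
    a_t = np.exp(scores - scores.max())
    a_t /= a_t.sum()                  # softmax -> alignment vector a_t
    c_t = a_t @ h_enc                 # context c_t = sum_n a_t(n) * h_enc_n
    return a_t, c_t

# toy example: N = 4 encoder states of dimension d = 3
rng = np.random.default_rng(0)
h_enc = rng.standard_normal((4, 3))
h_dec_t = rng.standard_normal(3)
a_t, c_t = dot_attention(h_enc, h_dec_t)
```

The alignment weights sum to one by construction, and the context vector has the same dimension as a single encoder state.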
In other words, we train the attentional encoder-decoder architecture by starting from a simpler task, switching a certain part of the structure (encoder or decoder) in each training phase, and setting it to a more difficult target task. In this way, the difficulty of the problem increases gradually in each training phase, as in CL strategies. Figure \ref{fig1} illustrates the attention-based speech translation training phases, and the details are described below. \begin{enumerate} \item \textbf{CL type 1: Start from an attention-based ASR system} Here the curriculum learning for each phase is designed as follows: \begin{enumerate}[(a)] \item \textbf{Fast track} \begin{description} \item[Phase 1] We train an attention-based encoder-decoder neural network for a standard ASR task, which predicts the corresponding transcription of the input speech sequence in the source language. \item[Phase 2] Next we replace the ASR decoder with a new decoder and retrain it to match the MT decoder's output. The model now predicts the corresponding word sequence in the target language given the input speech sequence of the source language. \end{description} \item \textbf{Slow track} \begin{description} \item[Phase 1] As before, we train the attention-based encoder-decoder neural network for a standard ASR task, which predicts the corresponding transcription of the input speech sequence in the source language. \item[Phase 2] Then we replace the ASR decoder with a new decoder and retrain it to match the MT encoder's output; this part works as an ASR-MT transcoder. The model's objective is now to predict a word representation (like the MT encoder's output) that represents the corresponding word sequence in the source language, given the input speech sequence of the source language. Here, as a loss function, we calculate the mean squared error between the output of the new decoder and the output of the MT encoder.
\item[Phase 3] Finally, we combine the MT attention and decoder modules to perform the speech translation task from the source speech sequence to the target word sequence and train the whole architecture using a softmax cross-entropy loss. \end{description} \end{enumerate} \item \textbf{CL type 2: Start from an attention-based MT system} Similar to CL type 1, we construct an attention-based ST system for both fast and slow tracks, but instead of starting with an ASR system, we start with the MT system. In this case, the model gradually adapts the encoder part from the MT encoder to more closely resemble the ASR encoder. \end{enumerate} \vspace{-0.1cm} \section{Experimental Set-Up and Results} \vspace{-0.1cm} \subsection{Experimental Set-Up} \vspace{-0.1cm} We conducted our experiments using the basic travel expression corpus (BTEC) \cite{BTEC,1677987}. The BTEC English-Japanese parallel corpus consists of 4.5k training sentences and 500 sentences in the test set. Since corresponding speech utterances for this text corpus are unavailable, we used the Google text-to-speech synthesizer\footnote{Google TTS: https://pypi.python.org/pypi/gTTS} to generate a speech corpus of the source language. The speech utterances were segmented into multiple frames with a 25-ms window size and a 10-ms step size. Then we extracted 23-dimensional filter bank features using Kaldi's feature extractor \cite{Povey_ASRU2011_2011} and normalized them to have zero mean and unit variance. As for the text corpus, using one-hot vectors would result in large sparse vectors due to the large vocabulary; in this study, we instead used word embeddings that learn dense representations of words in a low-dimensional vector space. We used this data to build attention-based ASR and MT systems, a direct ST system, and CL-based ST systems. Table \ref{tb:table} summarizes the network parameters.
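The feature normalization step described above (zero mean and unit variance in each of the 23 filter-bank dimensions) can be sketched as follows; the toy utterance shape and values are illustrative assumptions.

```python
import numpy as np

def normalize_features(feats):
    """Normalize (T, 23) filter-bank features to zero mean and
    unit variance per dimension before training."""
    mean = feats.mean(axis=0)
    std = feats.std(axis=0)
    return (feats - mean) / np.maximum(std, 1e-8)  # guard against zero variance

# toy utterance: 100 frames of 23-dimensional filter-bank features
rng = np.random.default_rng(1)
feats = 5.0 + 2.0 * rng.standard_normal((100, 23))
norm = normalize_features(feats)
```

In practice the normalization statistics would typically be accumulated over the training corpus rather than a single utterance.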
For all the systems, we used the same learning rate and applied the Adam optimizer \cite{DBLP:journals/corr/KingmaB14} to all of the models. \begin{table}[h] \center \footnotesize \begin{tabular}{|c|c|} \hline \multicolumn{2}{|c|}{ASR system}\\ \hline Input units & 23 \\ \hline Hidden units & 512 \\ \hline Output units & 27293 \\ \hline LSTM layer depth & 2 \\ \hline \hline \hline \multicolumn{2}{|c|}{MT system}\\ \hline Source vocabulary & 27293 \\ \hline Target vocabulary & 33155 \\ \hline Embed size & 128 \\ \hline Input units & 128 \\ \hline Hidden units & 512 \\ \hline Output units & 33155 \\ \hline LSTM layer depth & 2 \\ \hline\hline\hline \multicolumn{2}{|c|}{Optimization}\\ \hline Initial learning rate & 0.001 \\ \hline Learning rate decay factor & 1.8 \\ \hline \hline Optimizing method & Adam \cite{DBLP:journals/corr/KingmaB14}\\ \hline \end{tabular} \caption{Model settings for each system} \label{tb:table} \end{table} \subsection{Results and Discussion} We applied the attentional encoder-decoder architecture described in Section \ref{sec:basic} to train the ASR, MT, and direct ST systems. We also constructed an ASR+MT cascade system. For our proposed models, we applied the CL-based attentional encoder-decoder architecture described in Section \ref{sec:proposed} to train CL type 1 and CL type 2 (fast and slow tracks). Unfortunately, CL type 2 failed to converge. This might be due to the large divergence between the MT encoder, which operates in the text input space, and the ASR encoder, which operates in the speech input space. The successfully trained systems are listed below. \begin{description} \item[Baseline ASR:] speech-to-text model in the source language. \item[Baseline MT:] text-to-text translation model from the source language to the target language. \item[Baseline ASR+MT:] speech-to-text translation model built by cascading a speech-to-text model in the source language with a text-to-text translation model.
\item[Direct ST Enc-Dec:] direct end-to-end speech translation model using a single attention-based neural network. \item[Proposed ST Enc-Dec (CL type 1 - Fast Track):] end-to-end speech translation model trained using CL type 1 (fast track). \item[Proposed ST Enc-Dec (CL type 1 - Slow Track):] end-to-end speech translation model trained using CL type 1 (slow track). \end{description} Our ASR system achieved a 9.4$\%$ word error rate (WER). The remaining systems were evaluated for translation quality using the standard automatic evaluation metric BLEU+1 \cite{lin-och:2004:COLING}. First, we show how our proposed methods work during the training steps. Fig.~\ref{loss} illustrates the softmax cross-entropy loss over 15 epochs. The MT system has the easiest task, translating text in the source language into the corresponding target language, and its loss decreased quite quickly. On the other hand, direct ST training is hard, and therefore it gave the worst performance (its loss decreased by only $0.04$ from epochs 1 to 15). By using our CL-based proposed method, we can further decrease the loss. Specifically, the model trained with CL type 1 - Slow Track successfully outperformed the text-based MT system. \begin{figure} \centering \fbox{\includegraphics[width=7.8cm]{./loss.eps}} \vspace{-0.3cm} \caption{Softmax cross-entropy of each epoch} \label{loss} \centering \fbox{\includegraphics[width=7.8cm]{./bleup1.eps}} \vspace{-0.3cm} \caption{Translation accuracy of each model} \label{bleup1} \end{figure} Next, we investigated the translation quality of the models, summarized in Fig.~\ref{bleup1}. The results also reveal that training the direct attentional ST system is difficult. The Direct ST Enc-Dec model seems to over-fit the language model and could not handle the input speech utterances. The results also demonstrated that our proposed ST Enc-Dec (CL type 1 - Fast Track) model significantly improved over the baseline.
The best performance was achieved by the proposed ST Enc-Dec (CL type 1 - Slow Track) model, which even surpassed the text-based MT and cascade ASR+MT systems. This system is constructed with [ASRenc+att]+[ASRdec-MTenc]+[MTatt+dec] (Fig.~\ref{fig1}). The combination of the second and third parts actually resembles a conventional MT system. Therefore, from the MT system viewpoint, the additional components in the first part, which introduce more noise to the input of the MT system, might function as a denoising encoder-decoder that prevents over-fitting. \vspace{-0.1cm} \section{Conclusions} \vspace{-0.1cm} In this paper, we achieved English-Japanese end-to-end speech-to-text translation without being affected by ASR errors. Our proposals utilized structured-based CL strategies for training attention-based ST systems, in which we start with the training of an attentional ASR system and gradually train the network for end-to-end ST tasks by adapting the decoder part. Experimental results demonstrated that the learning is stable and that the translation quality outperformed the standard MT system, with the best performance achieved by our proposed model. Our current results, however, still rely on synthetic data. In the future, we will investigate the effectiveness of our proposed method on natural speech data, explore various language pairs and paralinguistic information, and expand the speech-to-text translation task to a speech-to-speech translation task. \vspace{-0.1cm} \section{Acknowledgements} \vspace{-0.1cm} Part of this work was supported by JSPS KAKENHI Grant Numbers JP17H06101, JP17H00747, and JP17K00237. \bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:intro} This is the fifth in a series of seven papers describing the Pan-STARRS1 Surveys, the data reduction techniques, and the resulting data products. This paper (Paper V) describes the final calibration process and the resulting photometric and astrometric quality. \citet[][Paper I]{chambers2017} provides an overview of the Pan-STARRS System, the design and execution of the Surveys, the resulting image and catalog data products, a discussion of the overall data quality and basic characteristics, and a brief summary of important results. \citet[][Paper II]{magnier2017c} describes how the various data processing stages are organised and implemented in the Imaging Processing Pipeline (IPP), including details of the processing database, which is a critical element in the IPP infrastructure. \citet[][Paper III]{waters2017} describes the details of the pixel processing algorithms, including detrending, warping, adding (to create stacked images), and subtracting (to create difference images), and the resulting image products and their properties. \citet[][Paper IV]{magnier2017a} describes the details of the source detection and photometry, including point-spread-function and extended source fitting models, and the techniques for ``forced" photometry measurements. \citet[][Paper VI]{flewelling2017} describes the details of the resulting catalog data and its organization in the Pan-STARRS database. \citet[][Paper VII]{huber2017} describes the Medium Deep Survey in detail, including the unique issues and data products specific to that survey. The Medium Deep Survey is not part of Data Release 1 (DR1). The Pan-STARRS1 filters and photometric system have already been described in detail in \cite{2012ApJ...750...99T}. {\color{red} {\em Note: These papers are being placed on arXiv.org to provide crucial support information at the time of the public release of Data Release 1 (DR1).
We expect the arXiv versions to be updated prior to submission to the Astrophysical Journal in January 2017. Feedback and suggestions for additional information from early users of the data products are welcome during the submission and refereeing process.}} \section{Pan-STARRS\,1} From May 2010 through March 2014, the Pan-STARRS Science Consortium used the 1.8m \PSONE\ telescope to perform a set of wide-field science surveys. These surveys were designed to address a range of science goals, including the search for hazardous asteroids, the study of the formation and architecture of the Milky Way galaxy, and the search for Type Ia supernovae to measure the history of the expansion of the universe. The wide-field \PSONE\ telescope consists of a 1.8~meter diameter $f$/4.4 primary mirror with a 0.9~m secondary, producing a 3.3 degree field of view \citep{2004SPIE.5489..667H}. The optical design yields low distortion and minimal vignetting even at the edges of the illuminated region. The optics, in combination with the natural seeing, result in generally good image quality: the median image quality for the 3$\pi$ survey is FWHM = (1.31, 1.19, 1.11, 1.07, 1.02) arcseconds for (\grizy), with a floor of $\sim0.7$ arcseconds. The \PSONE\ camera \citep{2009amos.confE..40T} is a mosaic of 60 edge-abutted $4800\times4800$ pixel back-illuminated CCID58 Orthogonal Transfer Arrays manufactured by Lincoln Laboratory \citep{2006amos.confE..47T,2008SPIE.7021E..05T}. The CCDs have 10~$\mu$m pixels subtending 0.258~arcsec and are 70$\mu$m thick. The detectors are read out using a StarGrasp CCD controller, with a readout time of 7 seconds for a full unbinned image \citep{2008SPIE.7014E..0DO}. The active, usable pixels cover $\sim 80$\% of the FOV. Nightly observations are conducted remotely from the Advanced Technology Research Center in Kula, the main facility of the University of Hawaii's Institute for Astronomy operations on Maui.
During the \PSONE\ Science Survey, images obtained by the \PSONE\ system were stored first on computers at the summit, then copied with low latency via internet to the dedicated data analysis cluster located at the Maui High Performance Computer Center in Kihei, Maui. Images obtained by \PSONE\ are automatically processed in real time by the \PSONE\ Image Processing Pipeline \citep[IPP,][]{magnier2017a}. Real-time analysis goals are aimed at feeding the discovery pipelines of the asteroid search and supernova search teams. The data obtained for the \PSONE\ Science Survey have also been used in three additional complete re-processings: Processing Versions 1, 2, and 3 (PV1, PV2, and PV3). The real-time processing of the data is considered ``PV0''. Except as otherwise noted, the PV3 analysis of the data is used for the purposes of this article. The data processing steps are described in detail by \cite{waters2017} and \cite{magnier2017a,magnier2017b}. In summary, individual images are detrended: non-linearity and bias corrections are applied, a dark current model is subtracted, and flat-field corrections are applied. The \yps-band images are also corrected for fringing: a master fringe pattern is scaled to match the observed fringing and subtracted. Mask and variance image arrays are generated with the detrend analysis and carried forward at each stage of the IPP processing. Source detection and photometry are performed for each chip independently. As discussed below, preliminary astrometric and photometric calibrations are performed for all chips in a single exposure in a single analysis. Chip images are geometrically transformed, based on the astrometric solution, onto a set of pre-defined pixel grids covering the sky, called skycells. These transformed images are called the warp images. Sets of warps for a given part of the sky and in the same filter may be added together to generate deeper `stack' images.
PSF-matched difference images are generated from combinations of warps and stacks; the details of the difference images and their calibration are outside the scope of this article. Astronomical objects are detected and characterized in the stack images. The details of the analysis of the sources in the stack images are discussed in \cite{magnier2017b}, but in brief these include PSF photometry, along with a range of measurements driven by the goals of understanding the galaxies in the images. Because of the significant mask fraction of the GPC1 focal plane, and the varying image quality both within and between exposures, the effective PSF of the PS1 stack images is highly variable. The PSF varies significantly on scales as small as a few to tens of pixels, making accurate PSF modelling essentially infeasible. The PSF photometry of sources in the stack images is thus degraded significantly compared to the quality of the photometry measured for the individual chip images. To recover most of the photometric quality of the individual chip images, while also exploiting the depth afforded by the stacks, the PV3 analysis makes use of forced photometry on the individual warp images. PSF photometry is measured on the warp images for all sources which are detected in the stack images. The positions determined in the stack images are used in the warp images, but the PSF model is determined for each warp independently based on brighter stars in the warp image. The only free parameter for each object is the flux, which may be insignificant or even negative for sources which are near the faint limit of the stack detections. When the fluxes from the individual warp images are averaged, a reliable measurement of the faint source flux is determined. This analysis is described in detail by Magnier et al.\ \cite{magnier2017b}. In this article, we discuss the photometric calibration of the individual exposures, the stacks, and the warp images.
We also discuss the astrometric calibration of the individual exposures and the stack images. \section{Astrometric Models} Three somewhat distinct astrometric models are employed within the IPP at different stages. The simplest model is defined independently for each chip: a simple TAN projection as described by \cite{2002AA...395.1077C} is used to relate sky coordinates to a cartesian tangent-plane coordinate system. A pair of low-order polynomials is used to relate the chip pixel coordinates to this tangent-plane coordinate system. The transforming polynomials are of the form: \begin{eqnarray} P & = & \sum_{i,j} C^P_{i,j} X^i_{\rm chip} Y^j_{\rm chip} \\ Q & = & \sum_{i,j} C^Q_{i,j} X^i_{\rm chip} Y^j_{\rm chip} \end{eqnarray} where $P,Q$ are the tangent plane coordinates, $X_{\rm chip}, Y_{\rm chip}$ are the coordinates on the 60 GPC1 chips, and $C^P_{i,j}, C^Q_{i,j}$ are the polynomial coefficients for each order. In the \code{psastro} analysis, $i + j \le N_{\rm order}$, where the order of the fit, $N_{\rm order}$, may be 1 to 3, under the restriction that sufficient stars are needed to constrain the order. A second form of astrometric model, which yields somewhat higher accuracy, consists of a set of connected solutions for all chips in a single exposure. This model also uses a TAN projection to relate the sky coordinates to a locally cartesian tangent plane coordinate system. A set of polynomials is then used to relate the tangent plane coordinates to a ``focal plane'' coordinate system, $L,M$: \begin{eqnarray} P & = & \sum_{i,j} C^P_{i,j} L^i M^j \\ Q & = & \sum_{i,j} C^Q_{i,j} L^i M^j \end{eqnarray} This set of polynomials accounts for effects such as optical distortion in the camera and distortions due to changing atmospheric refraction across the field of the camera. Since these effects are smooth across the field of the camera, a single pair of polynomials can be used for each exposure.
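A minimal sketch of evaluating one of these polynomial transformations, with the $i + j \le N_{\rm order}$ restriction on the exponents, is shown below; the coefficient values are illustrative, not fitted values.

```python
def eval_poly2d(coeffs, x, y, order):
    """Evaluate P = sum_{i+j<=order} C[(i,j)] * x**i * y**j, the form
    used for the chip -> tangent-plane transformation."""
    p = 0.0
    for i in range(order + 1):
        for j in range(order + 1 - i):   # enforce i + j <= order
            p += coeffs.get((i, j), 0.0) * x**i * y**j
    return p

# first-order example: P = 0.5 + 0.001 * x + 0.002 * y
c = {(0, 0): 0.5, (1, 0): 0.001, (0, 1): 0.002}
p = eval_poly2d(c, 100.0, 200.0, 1)      # 0.5 + 0.1 + 0.4 = 1.0
```

A real solution would carry a pair of such polynomials ($P$ and $Q$, or $L$ and $M$) with coefficients determined by the fit.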
As in the chip analysis above, the \code{psastro} code restricts the exponents with the rule $i + j \le N_{\rm order}$, where the order of the fit, $N_{\rm order}$, may be 1 to 3, under the restriction that sufficient stars are needed to constrain the order. For each chip, a second set of polynomials describes the transformation from the chip coordinate system to the focal plane coordinate system: \begin{eqnarray} L & = & \sum_{i,j} C^L_{i,j} X^i_{\rm chip} Y^j_{\rm chip} \\ M & = & \sum_{i,j} C^M_{i,j} X^i_{\rm chip} Y^j_{\rm chip} \end{eqnarray} A third form of the astrometry model is used in the context of the calibration determined within the DVO database system. We retain the two levels of transformations (chip $\rightarrow$ focal plane $\rightarrow$ tangent plane), but the relationship between the chip and focal plane is represented with only the linear terms in the polynomial, supplemented by a coarse grid of displacements, $\delta L, \delta M$, sampled across the coordinate range of the chip. This displacement grid may have a resolution of up to $6\times6$ samples across the chip. The displacement for a specific chip coordinate value is determined via bilinear interpolation between the nearest sample points. Thus, the chip to focal-plane transformation may be written as: \begin{eqnarray} L & = & C^L_{0,0} + C^L_{1,0} X_{\rm chip} + C^L_{0,1} Y_{\rm chip} + \delta L(X_{\rm chip}, Y_{\rm chip}) \\ M & = & C^M_{0,0} + C^M_{1,0} X_{\rm chip} + C^M_{0,1} Y_{\rm chip} + \delta M(X_{\rm chip}, Y_{\rm chip}) \end{eqnarray} {\bf WCS Keywords} When this polynomial representation is written to the output files, a set of WCS keywords is used to define the astrometric transformation elements.
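The bilinear interpolation used to evaluate the displacement grid $\delta L$ between its sample points can be sketched as follows; the $6\times6$ grid values here are illustrative.

```python
import numpy as np

def bilinear(grid, fx, fy):
    """Bilinearly interpolate a displacement grid (e.g. delta-L) at
    fractional grid coordinates (fx, fy)."""
    x0, y0 = int(np.floor(fx)), int(np.floor(fy))
    x1 = min(x0 + 1, grid.shape[0] - 1)
    y1 = min(y0 + 1, grid.shape[1] - 1)
    tx, ty = fx - x0, fy - y0
    return ((1 - tx) * (1 - ty) * grid[x0, y0] +
            tx * (1 - ty) * grid[x1, y0] +
            (1 - tx) * ty * grid[x0, y1] +
            tx * ty * grid[x1, y1])

# 6x6 grid of delta-L samples, zero except around one cell
dL = np.zeros((6, 6))
dL[2, 2], dL[3, 2], dL[2, 3], dL[3, 3] = 0.1, 0.2, 0.3, 0.4
mid = bilinear(dL, 2.5, 2.5)   # the midpoint of a cell averages its 4 corners
```

A chip coordinate would first be scaled into the fractional grid coordinates before the interpolation is applied.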
It is necessary to transform the simple polynomials above into an alternate form: \begin{eqnarray} P & = & \sum_{i,j} C^P_{i,j} (X_{\rm chip} - X_0)^i (Y_{\rm chip} - Y_0)^j \\ Q & = & \sum_{i,j} C^Q_{i,j} (X_{\rm chip} - X_0)^i (Y_{\rm chip} - Y_0)^j \end{eqnarray} \section{Real-time Calibration} \subsection{Overview} As images are processed by the data analysis system, every exposure is calibrated individually with respect to a photometric and astrometric database. The goal of this calibration step is to generate a preliminary astrometric calibration, to be used by the warping analysis to determine the geometric transformation of the pixels, and a preliminary photometric transformation, to be used by the stacking analysis to ensure the warps are combined using consistent flux units. The program used for the real-time calibration, \code{psastro}, loads the measurements of the chip detections from their individual \code{cmf}-format files. It uses the header information populated at the telescope to determine an initial astrometric calibration guess based on the telescope boresight right ascension, declination, and position angle as reported by the telescope \& camera subsystems. Using the initial guess, \code{psastro} loads astrometric and photometric data from the reference database. \subsection{Reference Catalogs} During the course of the PS1SC Survey, several reference databases have been used. For the first 20 months of the survey, \code{psastro} used a reference catalog with synthetic PS1 \grizy\ photometry generated by the Pan-STARRS IPP team based on combined photometry from Tycho (B, V), USNO (red, blue, IR), and 2MASS $J, H, K$. The astrometry in the database was from 2MASS. After 2012 May, a reference catalog generated from an internal re-calibration of the PV0 analysis of PS1 photometry and astrometry was used.
Coordinates and calibrated magnitudes of stars from the reference database are loaded by \code{psastro}. A model for the positions of the 60 chips in the focal plane is used to determine the expected astrometry for each chip based on the boresight coordinates and position angle reported by the header. Reference stars are selected from the full field of view of the GPC1 camera, padded by an additional 25\% to ensure a match can be determined even in the presence of substantial errors in the boresight coordinates. It is important to choose an appropriate set of reference stars: if too few are selected, the chance of finding a match between the reference and observed stars is diminished. In addition, since stars are loaded in brightness order, a selection which is too small is likely to contain only stars which are saturated in the GPC1 images. On the other hand, if too many reference stars are chosen, there is a higher chance of a false-positive match, especially as many of the reference stars may not be detected in the GPC1 image. The selection of the reference stars therefore includes limits on the brightest and faintest magnitudes of the stars selected. The astrometric analysis is necessarily performed first; after the astrometry is determined, an automatic byproduct is a reliable match between reference and observed stars, allowing a comparison of the magnitudes to determine the photometric calibration. The astrometric calibration is performed in two major stages: first, the chips are fitted independently, with independent models for each chip. This fit is sufficient to ensure a reliable match between reference stars and observed sources in the image. Next, the set of chip calibrations is used to define the transformation between the focal plane coordinate system and the tangent plane coordinate system. The chip-to-focal plane transformations are then determined under the single common focal plane to tangent plane transformation.
\subsection{Cross-Correlation Search} The first step of the analysis is to attempt to find the match between the reference stars and the detected objects. \code{psastro} uses 2D cross correlation to search for the match. The guess astrometric calibration is used to define a predicted set of $X^{\rm ref}_{\rm chip}, Y^{\rm ref}_{\rm chip}$ values for the reference catalog stars. For all possible pairs between the two lists, the values of \begin{eqnarray} \Delta X & = & X^{\rm ref}_{\rm chip} - X^{\rm obs}_{\rm chip}\\ \Delta Y & = & Y^{\rm ref}_{\rm chip} - Y^{\rm obs}_{\rm chip} \end{eqnarray} are generated. The $\Delta X, \Delta Y$ values are accumulated in a 2D histogram with a sampling of 50 pixels, and the peak pixel is identified. If the astrometry guess were perfect, this peak pixel would be expected to lie at (0,0) and contain all of the matched stars. However, the astrometric guess may be wrong in several ways. An error in the constant terms above, $C^P_{0,0}, C^Q_{0,0}$, shifts the peak to another pixel, from which $C^P_{0,0}, C^Q_{0,0}$ can easily be determined. An error in the plate scale or a rotation will smear out the peak pixel, potentially across many pixels in the 2D histogram. To find a good match in the face of plate scale and rotation errors, the cross correlation analysis above is performed for a series of trials in which the scale and rotation are perturbed from the nominal value by a small amount. For each trial, the peak pixel is found and a figure of merit is measured. The figure of merit is defined as $\frac{\sigma^2_x + \sigma^2_y}{N_p^4}$, where $\sigma^2_{x,y}$ are the second moments of $\Delta X, \Delta Y$ for the star pairs associated with the peak pixel, and $N_p$ is the number of star pairs in the peak. This figure of merit is thus most sensitive to a narrow distribution with many matched pairs.
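The 2D cross-correlation match and the figure of merit $(\sigma^2_x + \sigma^2_y)/N_p^4$ can be sketched as follows; the bin size matches the 50-pixel sampling described above, while the toy star positions and offsets are illustrative assumptions.

```python
import numpy as np

def match_offset(ref_xy, obs_xy, bin_size=50.0):
    """Histogram all pairwise (dx, dy) offsets between reference and
    observed positions, locate the peak bin, and score it with the
    figure of merit (sigma_x^2 + sigma_y^2) / N_p^4."""
    dx = (ref_xy[:, None, 0] - obs_xy[None, :, 0]).ravel()
    dy = (ref_xy[:, None, 1] - obs_xy[None, :, 1]).ravel()
    ix = np.floor(dx / bin_size).astype(int)
    iy = np.floor(dy / bin_size).astype(int)
    bins, counts = np.unique(np.stack([ix, iy], axis=1), axis=0,
                             return_counts=True)
    peak = bins[np.argmax(counts)]           # most populated (ix, iy) bin
    in_peak = (ix == peak[0]) & (iy == peak[1])
    n_p = in_peak.sum()
    merit = (dx[in_peak].var() + dy[in_peak].var()) / n_p**4
    return dx[in_peak].mean(), dy[in_peak].mean(), merit

# toy exposure: observed stars are reference stars shifted by (30, -20) pixels
rng = np.random.default_rng(2)
ref = rng.uniform(0, 4000, size=(40, 2))
obs = ref - np.array([30.0, -20.0]) + rng.normal(0, 0.5, size=ref.shape)
dx0, dy0, merit = match_offset(ref, obs)
```

The recovered mean offset of the peak bin gives the correction to the constant terms of the astrometric guess; repeating this over a grid of trial scales and rotations and keeping the best figure of merit mirrors the search described above.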
For the PS1 exposures, rotation offsets of (-1.0, -0.5, 0.0, 0.5, 1.0) degrees and plate scales of (+1\%, 0, -1\%) of the nominal plate scale are tested. The best match among these 15 cross-correlation tests is selected and used to generate a better astrometry guess for the chip. \subsection{Chip Polynomial Fits} The astrometry solution from the cross correlation step above is again used to select matches between the reference stars and observed stars in the image. The matching radius starts off quite large, and a series of fits is performed to generate the transformation between chip and tangent plane coordinates. Three clipping iterations are performed, with outliers $> 3 \sigma$ rejected on each pass, where $\sigma$ is determined from the distribution of the residuals in each dimension (X,Y) independently. After each fit cycle, the matches are redetermined using a smaller radius and the fit is re-tried. \subsection{Mosaic Astrometry Polynomial Fits} The astrometry solutions from the independent chip fits are used to generate a single model for the camera-wide distortion terms. The goal is to determine the two-stage fit (chip $\rightarrow$ focal plane $\rightarrow$ tangent plane). There are a number of degenerate terms between these two levels of transformation, most obviously between the parameters which define the constant offset from chip to focal plane ($C^{L,M}_{0,0}$) and those which define the offset from focal plane to tangent plane ($C^{P,Q}_{0,0}$). We set $C^{P,Q}_{0,0}$ to (0,0) to remove this degeneracy. The initial fit of the astrometry for each chip follows the distortion introduced by the camera: the apparent plate scale for each chip is the combination of the plate scale at the optical axis of the camera, modified by the local average distortion.
To isolate the effect of distortion, we choose a single common plate scale for the set of chips and re-define the chip $\rightarrow$ sky calibrations as a set of chip $\rightarrow$ focal plane transformations using that common pixel scale. We can now compare the observed focal plane coordinates, derived from the chip coordinates, and the tangent plane coordinates, derived from the projection of the reference coordinates. One caveat is that the chip reference coordinates are also degenerate with the fitted distortion. In order to avoid being sensitive to the exact positions of the chips at this stage, we measure the local gradient between the focal plane and tangent plane coordinate systems. We then fit the gradient with a polynomial of order 1 less than the polynomial desired for the distortion fit. The coefficients of the gradient fit are then used to determine the coefficients for the polynomials representing the distortion. Once the common distortion from the optics and atmosphere has been modeled, \code{psastro} determines polynomial transformations from the 60 chips to the focal plane coordinate system. In this stage, 5 iterations of the chip fits are performed. Before each iteration, the reference stars and detected objects are matched using the current best set of transformations. These fits start with low order (1) and a large matching radius. As the iterations proceed, the radius is reduced and the order is allowed to increase, up to 3rd order for the final iterations. \subsection{Real-time Photometric Calibration} After the astrometric calibration has finished, the photometric calibration is performed by \code{psastro}. When the reference stars are loaded, the apparent magnitude in the filter of interest is also loaded. Stars for which the reference magnitude is brighter than (\grizy) = (19, 19, 18.5, 18.5, 17.5) are used to determine the zero points by comparison with the instrumental magnitudes.
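The zero-point determination by comparison of reference and instrumental magnitudes can be sketched as follows (illustrative Python only, not the \code{psastro} code; the MAD-based clipping and its parameters are our assumptions):

```python
import numpy as np

def measure_zeropoint(ref_mag, inst_mag, clip=3.0, n_iter=3):
    """Outlier-rejecting median zero point from matched reference and
    instrumental magnitudes (sketch; clipping parameters are assumed)."""
    dm = ref_mag - inst_mag          # per-star zero point estimates
    keep = np.ones(dm.size, dtype=bool)
    for _ in range(n_iter):
        med = np.median(dm[keep])
        # Robust sigma from the median absolute deviation (MAD).
        sigma = 1.4826 * np.median(np.abs(dm[keep] - med))
        keep = np.abs(dm - med) < clip * sigma
    return np.median(dm[keep])
```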
For the PV3 analysis, an outlier-rejecting median is used to measure the zero point. For early versions of the analysis, when the reference catalog used synthetic magnitudes, it was necessary to search for the blue edge of the distribution: the synthetic magnitudes poorly predicted the magnitudes of stars in the presence of significant extinction or for the very red stars, making the blue edge somewhat more reliable. Note that we do not include an airmass correction in this zero point analysis: the airmass correction is folded into the observed zero point. The zero point may be measured separately for each chip or as a single value for the entire exposure; the latter option was used for the PV3 analysis. \subsection{Real-time outputs} The calibrations determined by \code{psastro} are saved as part of the header information in the output FITS tables. A single multi-extension FITS table is written using the \code{smf} format. In these files, the measurements from each chip are written as a separate FITS table. A second FITS extension for each chip is used to store the header information from the original chip image. The original chip header is modified so that the extension corresponds to an image with no pixel data: \code{NAXIS} is set to 0, even though \code{NAXIS1} and \code{NAXIS2} are retained with the original dimensions of the chip. A pixel-less primary header unit (PHU) is generated with a summary of some of the important and common chip-level keywords (e.g., \code{DATE-OBS}). The astrometric transformation information for each chip is saved in the corresponding header using standard (and some non-standard) WCS keywords. For the two-level astrometric model, the PHU header carries the astrometric transformation related to the projection and the camera-wide distortions. Photometric calibrations are written as a set of keywords to individual chip headers, and, if the calibration is performed at the exposure level, to the PHU.
The photometry calibration keywords are: \begin{itemize} \item \code{ZPT_REF} : the nominal zero point for this filter \item \code{ZPT_OBS} : the measured zero point for this chip / exposure \item \code{ZPT_ERR} : the measured error on \code{ZPT_OBS} \item \code{ZPT_NREF} : the number of stars used to measure \code{ZPT_OBS} \item \code{ZPT_MIN} : minimum reference magnitude included in analysis \item \code{ZPT_MAX} : maximum reference magnitude included in analysis \end{itemize} The keyword \code{ZPT_OBS} is used to set the initial zero point when the data from the exposure are loaded into the DVO database. \section{PV3 DVO Master Database} Data from the GPC1 chip images, the stack images, and the warp images are loaded into DVO using the real-time analysis astrometric calibration to guide the association of detections into objects. After the full PV3 DVO database was constructed, including all of the chip, stack, and warp detections, several external catalogs were merged into the database. First, the complete 2MASS PSC was loaded into a stand-alone DVO database, which was then merged into the PV3 master database. Next the DVO database of synthetic photometry in the PS1 bands (see Section~\ref{sec:synthdb}) was merged in. Next, the full Tycho database was added, followed by the AllWISE database. After the Gaia release in August 2016 \citep{2016AA...595A...2G}, we generated a DVO database of the Gaia positional and photometric information and merged that into the master DVO database. \section{Photometry Calibration} \subsection{Ubercal Analysis} The photometric calibration of the DVO database starts with the ``ubercal'' analysis technique as described by \cite{2012ApJ...756..158S}. This analysis is performed by the group at Harvard, loading data from the \code{smf} files into their instance of the Large Scale Database \citep[LSD,][]{2011AAS...21743319J}, a system similar to DVO used to manage the detections and determine the calibrations. 
Photometric nights are selected and all other exposures are ignored. Each night is allowed to have a single fitted zero point and a single fitted value for the airmass extinction coefficient per filter. The zero points and extinction terms are determined as a least squares minimization process using the repeated measurements of the same stars from different nights to tie nights together. Flat-field corrections are also determined as part of the minimization process. In the original (PV1) ubercal analysis, \cite{2012ApJ...756..158S} determined flat-field corrections for $2\times 2$ sub-regions of each chip in the camera and four distinct time periods (``seasons''). Later analysis (PV2) used an $8\times8$ grid of flat-field corrections to good effect. The ubercal analysis was re-run for PV3 by the Harvard group. For the PV3 analysis, under the pressure of time to complete the analysis, we chose to use only a $2\times 2$ grid per chip as part of the ubercal fit and to leave higher frequency structures to the later analysis. A 5th flat-field season consisting of nearly the last 2 years of data was also included for PV3. In retrospect, as we show below, the data from the latter part of the survey would probably benefit from additional flat-field seasons. By excluding non-photometric data and only fitting 2 parameters for each night, the Ubercal solution is robust and rigid. It is not subject to unexpected drift or sensitivity of the solution to the vagaries of the data set. The Ubercal analysis is also especially aided by the inclusion of multiple Medium Deep field observations every night, helping to tie down overall variations of the system throughput and acting as internal standard star fields. The resulting photometric system is shown by \cite{2012ApJ...756..158S} to have reliability across the survey region at the level of (8.0, 7.0, 9.0, 10.7, 12.4) millimags in (\grizy). As we discuss below, this conclusion is reinforced by our external comparison. 
The overall zero point for each filter is not naturally determined by the Ubercal analysis; an external constraint on the overall photometric system is required for each filter. \cite{2012ApJ...756..158S} used photometry of the MD09 Medium Deep field to match the photometry measured by \cite{2012ApJ...750...99T} on the reference photometric night of MJD 55744 (UT 02 July 2011). \cite{2014ApJ...795...45S} and \cite{2015ApJ...815..117S} have re-examined the photometry of Calspec standards observed by PS1. \cite{2014ApJ...795...45S} reject 2 of the 7 stars used by \cite{2012ApJ...750...99T} and add photometry of 5 additional stars. \cite{2015ApJ...815..117S} further reject measurements of Calspec standards obtained close to the center of the camera field of view, where the PSF size and shape change very rapidly. The result of this analysis modifies the overall system zero points by 20 - 35 millimags compared with the system determined by \cite{2012ApJ...756..158S}. \subsection{Applying the Ubercal Zero Points : Setphot} The ubercal analysis above results in a table of zero points for all exposures considered to be photometric, along with a set of low-resolution flat-field corrections. It is now necessary to use this information to determine zero points for the remaining exposures and to improve the resolution of the flat-field correction. This analysis is done within the IPP DVO database system. The ubercal zero points and the flat-field correction data are loaded into the PV3 DVO database using the program \code{setphot}.
This program converts the reported zero point and flat field values to the DVO internal representation in which the zero point of each image is split into three main components: \[ zp_{\rm total} = zp_{\rm nominal} + M_{\rm cal} + K_{\lambda}(\sec \zeta - 1) \] where $zp_{\rm nominal}$ and $K_{\lambda}$ are static values for each filter representing respectively the nominal zero point and the slope of the trend with respect to the airmass ($\zeta$) for each filter. These static values are listed in Table~\ref{tab:zpts}. When \code{setphot} was run, these static zero points were adjusted by the Calspec offsets listed in Table~\ref{tab:zpts}, based on the analysis of Calspec standards by \cite{2015ApJ...815..117S}. These offsets bring the photometric system defined by the ubercal analysis into alignment with \cite{2015ApJ...815..117S}. The value $M_{\rm cal}$ is the offset needed by each exposure to match the ubercal value, or to bring the non-ubercal exposures into agreement with the rest of the exposures, as discussed below. The flat-field information is encoded in a table of flat-field offsets as a function of time, filter, and camera position. Each image which is part of the ubercal subset is marked with a bit in the field \code{Image.flags}: \code{ID_IMAGE_PHOTOM_UBERCAL = 0x00000200}. When \code{setphot} applies the ubercal information to the image tables, it also updates the individual measurements associated with those images. In the DVO database schema, the normalized instrumental magnitude, $M_{\rm inst} = -2.5\log_{10} (DN / {\rm sec}) + 25.0$, is stored for each measurement. The value of 25.0 is an arbitrary (but fixed) constant offset to place the instrumental magnitudes into approximately the correct range.
Associated with each measurement are two correction magnitudes, $M_{\rm cal}$ and $M_{\rm flat}$, along with the airmass for the measurement, calculated using the altitude of the individual detection as determined from the Right Ascension, Declination, the observatory latitude, and the sidereal time. For a camera with the field of view of the PS1 GPC1, the airmass may vary significantly within the field of view, especially at low elevations. In the worst cases, at the celestial pole, the airmass range within a single exposure is XXX - XXX. The complete calibrated (`relative') magnitude is determined from the stored database values as: \[ M_{\rm rel} = M_{\rm inst} - 25.0 + zp_{\rm nominal} + M_{\rm cal} + M_{\rm flat} + K_\lambda (\sec \zeta - 1). \] The calibration offsets, $M_{\rm cal}$ and $M_{\rm flat}$, represent the per-exposure zero point correction and the slowly-changing flat-field correction respectively. These two values are split so the flat-field corrections may be determined and applied independently from the time-resolved zero point variations. Note that the above corrections are applied to each of the types of measurements stored in the database: PSF, aperture, and Kron. The calibration math remains the same regardless of the kind of magnitude being measured. Also note that for the moment, this discussion should only be considered as relevant to the chip measurements. Below we discuss the implications for the stack and warp measurements. When the ubercal zero points and flat-field data are loaded, \code{setphot} updates the $M_{\rm cal}$ values for all measurements which have been derived from the ubercal images. These measurements are also marked in the field \code{Measure.dbFlags} with the bit \code{ID_MEAS_PHOTOM_UBERCAL = 0x00008000}. At this stage, \code{setphot} also updates the values of $M_{\rm flat}$ for all GPC1 measurements in the appropriate filters.
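The reconstruction of a calibrated magnitude from the stored database quantities follows the formulas above directly (a sketch; the argument names are ours, not actual DVO schema fields):

```python
import math

def calibrated_magnitude(dn, exptime, zp_nominal, m_cal, m_flat,
                         k_lambda, sec_zeta):
    """Calibrated ('relative') magnitude from the stored components
    (sketch of the formulas in the text)."""
    # Normalized instrumental magnitude with the fixed +25 offset.
    m_inst = -2.5 * math.log10(dn / exptime) + 25.0
    return (m_inst - 25.0 + zp_nominal + m_cal + m_flat
            + k_lambda * (sec_zeta - 1.0))
```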
\subsection{Relphot Analysis} Relative photometry is used to determine the zero points of the exposures which were not included in the ubercal analysis. The relative photometry analysis has been described in the past by \cite{2013ApJS..205...20M}. We review that analysis here, along with specific updates for PV3. As described above, the instrumental magnitude and the calibrated magnitude are related by arithmetic magnitude offsets which account for effects such as the instrumental variations and atmospheric attenuation: \[ M_{rel} = m_{inst} + ZP + M_{cal} \] From the collection of measurements, we can generate an average magnitude for a single star (or other object): \[ M_{ave} = \frac{\sum_i M_{rel,i} w_i}{\sum_i w_i} \] We find that the color terms of the different chips can be ignored, and set the color-term coefficient $A$ to 0.0. Note that we only use a single mean airmass extinction term for all exposures -- the difference between the mean and the specific value for a given night is taken up as an additional element of the atmospheric attenuation. We write a global $\chi^2$ equation which we attempt to minimize by finding the best mean magnitudes for all objects and the best $M_{\rm cal}$ offset for each exposure: \[ \chi^2 = \sum_{i,j} (m_{inst}[i,j] + ZP + K \zeta + M_{clouds}[i] - M_{ave}[j])^2 w_{i,j} / \sum_{i,j} w_{i,j} \] If everything were fitted at once and allowed to float, this system of equations would have $N_{exposures} + N_{stars} \sim 2 \times 10^5 + N \times 10^9$ unknowns. We solve the system of equations by iteration, solving first for the best set of mean magnitudes in the assumption of zero clouds, then solving for the clouds implied by the differences from these mean magnitudes. Even with 1-2 magnitudes of extinction, the offsets converge to the milli-magnitude level within 8 iterations. Only brighter, high quality measurements are used in the relative photometry analysis of the exposure zero points.
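The alternating iteration described above (solve for mean magnitudes given the current cloud offsets, then for the cloud offsets implied by the residuals) can be sketched as follows (a minimal unweighted version; the production analysis applies inverse-variance weights and outlier rejection):

```python
import numpy as np

def relphot_iterate(m_inst, star_id, exp_id, n_iter=8):
    """Alternating solve for mean star magnitudes and per-exposure
    'cloud' offsets (unweighted sketch of the relphot iteration)."""
    n_star = star_id.max() + 1
    n_exp = exp_id.max() + 1
    clouds = np.zeros(n_exp)                 # start assuming zero clouds
    for _ in range(n_iter):
        # Mean magnitudes given the current cloud offsets.
        corrected = m_inst - clouds[exp_id]
        counts_s = np.bincount(star_id, minlength=n_star)
        m_ave = np.bincount(star_id, corrected, n_star) / counts_s
        # Cloud offsets implied by the residuals from the means.
        resid = m_inst - m_ave[star_id]
        counts_e = np.bincount(exp_id, minlength=n_exp)
        clouds = np.bincount(exp_id, resid, n_exp) / counts_e
    return m_ave, clouds
```

Note the overall zero point is degenerate in this toy version; in the real analysis it is pinned by the overweighted Ubercal exposures.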
We use only the brighter objects, limiting the density to a maximum of 4000 objects per square degree (lower in areas where we have more observations). When limiting the density, we prefer objects which are brighter (but not saturated), and those with the most measurements (to ensure better coverage over the available images). There are a few classes of outliers which we need to be careful to detect and avoid. First, any single measurement may be deviant for a number of reasons (e.g., it lands in a bad region of the detector, contamination by a diffraction spike or other optical artifact, etc). We attempt to exclude these poor measurements in advance by rejecting measurements which the photometric analysis has flagged as suspicious. We reject detections which are excessively masked; these include detections which are too close to other bright objects, diffraction spikes, ghost images, or the detector edges. However, these rejections do not catch all cases of bad measurements. After the initial iterations, we also perform outlier rejections based on the consistency of the measurements. For each star, we use a two pass outlier clipping process. We first define a robust median and sigma from the inner 50\% of the measurements. Measurements which are more than 5$\sigma$ from this median value are rejected, and the mean \& standard deviation (weighted by the inverse error) are recalculated. We then reject detections which are more than 3$\sigma$ from the recalculated mean. Suspicious stars are also excluded from the analysis. We exclude stars with reduced $\chi^2$ values more than 20.0, or more than 2$\times$ the median, whichever is larger. We also exclude stars with standard deviation (of the measurements used for the mean) greater than 0.005 mags or 2$\times$ the median standard deviation, whichever is greater.
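The two-pass clipping recipe above might be implemented along these lines (a sketch; interpreting the ``robust sigma from the inner 50\%'' as the Gaussian-equivalent IQR sigma is our assumption):

```python
import numpy as np

def two_pass_clip(mags, errs):
    """Two-pass outlier clipping for repeated measurements of one star.
    Pass 1: robust median/sigma, reject > 5 sigma.  Pass 2: recompute an
    inverse-error weighted mean and sigma, reject > 3 sigma (sketch)."""
    med = np.median(mags)
    q25, q75 = np.percentile(mags, [25.0, 75.0])
    sigma = (q75 - q25) / 1.349          # Gaussian-equivalent IQR sigma
    keep = np.abs(mags - med) < 5.0 * sigma
    # Pass 2: inverse-error weighted mean and sigma of the survivors.
    w = 1.0 / errs[keep]
    mean = np.average(mags[keep], weights=w)
    std = np.sqrt(np.average((mags[keep] - mean) ** 2, weights=w))
    return np.abs(mags - mean) < 3.0 * std
```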
Similarly for images, we exclude those with more than 2 magnitudes of extinction, or for which the standard deviation of the per-star zero points is greater than 0.075 mags or 2$\times$ the median value, whichever is greater. These cuts are somewhat conservative to limit us to only good measurements. The images and stars rejected above are not used to calculate the system of zero points and mean magnitudes. These cuts are updated several times as the iterations proceed. After the iterations have completed, the images which have been rejected are calibrated based on their overlaps with other images. We overweight the ubercal measurements in order to tie the relative photometry system to the ubercal zero points. Ubercal images and measurements from those images are not allowed to float in the relative photometry analysis. Detections from the Ubercal images are assigned weights of 10$\times$ their default (inverse-variance) weight. The calculation of the formal error on the mean magnitudes propagates this additional weight, so that the errors on the Ubercal observations dominate where they are present. \[ \mu = \frac{\sum m_i w_i \sigma_i^{-2}}{\sum w_i \sigma_i^{-2}} \] \[ \sigma^2_\mu = \frac{\sum w_i^2 \sigma_i^{-2}}{(\sum w_i \sigma_i^{-2})^2} \] The calculation of the relative photometry zero points is performed for the entire $3\pi$ data set in a single, highly parallelized analysis. As discussed above, the measurement and object data in the DVO database are distributed across a large number of computers in the IPP cluster: for PV3, 100 parallel hosts are used. These machines by design control data from a large number of unconnected small patches on the sky, with the goal of speeding queries for arbitrary chunks of the sky. As a result, this parallelization is entirely inappropriate as the basis of the relative photometry analysis.
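The overweighted mean and its formal error follow the two formulas above directly (a sketch; the function name and 10$\times$ boost argument are ours):

```python
import numpy as np

def weighted_mean(m, sigma, is_ubercal, boost=10.0):
    """Weighted mean magnitude with Ubercal measurements overweighted,
    following the mu and sigma_mu^2 formulas in the text (sketch)."""
    w = np.where(is_ubercal, boost, 1.0)     # extra weight for Ubercal
    wm = w / sigma**2
    mu = np.sum(m * wm) / np.sum(wm)
    var_mu = np.sum(w**2 / sigma**2) / np.sum(wm)**2
    return mu, np.sqrt(var_mu)
```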
For the relative photometry calculation (and later for the relative astrometry calculation), the sky is divided into a number of large, contiguous regions each bounded by lines of constant RA \& DEC, 73 regions in the case of the PV3 analysis. A separate computer, called a ``region host'', is responsible for each of these regions: that computer is responsible for calculating the mean magnitudes of the objects which land within its region and for determining the exposure zero points for exposures for which the center of the exposure lands in the region of responsibility. \begin{figure*}[htbp] \begin{center} \begin{minipage}{0.85\linewidth} \includegraphics[width=\textwidth,clip]{{pics/photflat.example}.png} \end{minipage} \hspace{-3.0in} \begin{minipage}{0.4\linewidth} \vspace{3.25in} \caption{\label{fig:photflat} High-resolution flat-field correction images for the 5 filters $grizy$.} \end{minipage} \end{center} \end{figure*} The iterations described above (calculate mean magnitudes, calculate zero points, calculate new measurements) are performed on each of the 73 region hosts in parallel. However, between certain iteration steps, the region hosts must share some information. After mean object magnitudes are calculated, the region hosts must share the object magnitudes for the objects which are observed by exposures controlled by neighboring region hosts. After image calibrations have been determined by each region host, the image calibrations must be shared with the neighboring region hosts so measurement values associated with objects owned by a neighboring region host may be updated. The complete workflow of the all-sky relative photometry analysis starts with an instance of the program running on a master computer. This machine loads the image database table and assigns the images to the 73 region hosts. A process is then launched on each of the region hosts which is responsible for managing the image calibration analysis on that host.
These processes in turn make an initial request for the photometry information (object and measurement) from the 100 parallel DVO partition machines. In practice, the processes on the region hosts are launched in series by the master process to avoid overloading the DVO partition machines with requests for photometry data from all region hosts at once. Once all of the photometry has been loaded, the region hosts perform their iterations, sharing the data which they need to share with their neighbors and blocking while they wait for the data they need to receive from their neighbors. The management of this stage is performed by communication between the region hosts. At the end of the iterations, the region hosts write out their final image calibrations. The master machine then loads the full set of image calibrations and then applies these calibrations back to all measurements in the database, updating the mean photometry as part of this process. The calculations for this last step are performed in parallel on the DVO partition machines. With the above software, we are able to perform the entire relphot analysis for the full 3$\pi$ region at once, avoiding any possible edge effects. The region host machines have internal memory ranging from 96GB to 192GB. Regions were drawn, and the maximum allowed density chosen, to match the memory usage to the memory available on each machine. A total of 9.8TB of RAM was available for the analysis, allowing for up to 6000 objects per square degree in the analysis. \begin{figure}[htbp] \begin{center} \includegraphics[width=\hsize,clip]{{pics/allsky.photom.sigma}.png} \caption{\label{fig:allsky.photom.sigma} Consistency of photometry measurements across the sky. Each panel shows a map of the standard deviation of photometry residuals for stars in each pixel. The median value of the measured standard deviations across the sky is $(\sigma_g, \sigma_r, \sigma_i, \sigma_z, \sigma_y) = (14, 14, 15, 15, 18)$ millimags.
These values reflect the typical single-measurement errors for bright stars.} \end{center} \end{figure} \subsubsection{Photometric Flat-field} For PV3, the relphot analysis was performed twice. The first analysis used only the flat-field corrections determined by the ubercal analysis, with a resolution of $2\times2$ flat-field values for each GPC1 chip (corresponding to $\approx$ 2400 pixels), and 5 separate flat-field ``seasons''. However, we knew from prior studies that there were significant flat-field structures on smaller scales. We used the data in DVO after the initial relphot calibration to measure the flat-field residual with much finer resolution: $124\times124$ flat-field values for each GPC1 chip ($40\times40$ pixels per point). We then used \code{setphot} to apply this new flat-field correction, as well as the ubercal flat-field corrections, to the data in the database. At this point, we re-ran the entire relphot analysis to determine zero points and to set the average magnitudes. Figure~\ref{fig:photflat} shows the high-resolution photometric flat-field corrections applied to the measurements in the DVO database. These flat-fields make low-level corrections of up to $\approx$ 0.03 magnitudes. Several features of interest are apparent in these images. First, at the center of the camera is an important structure caused by the telescope optics which we call the ``tent''. In this portion of the focal plane, the image quality degrades very quickly. The photometry is systematically biased because the point spread function model cannot follow the real changes in the PSF shape on these small scales. As is evident in the image, the effect is such that the flux measured using a PSF model is systematically low, as expected if the PSF model is too small. The square outline surrounding the ``tent'' is due to the $2\times2$ sampling per chip used for the Ubercal flat-field corrections.
The imprint of the Ubercal flat-field is visible throughout this high-resolution flat-field: in regions where the underlying flat-field structure follows a smooth gradient across a chip, the Ubercal flat-field partly corrects the structure, leaving behind a saw-tooth residual. The high-resolution flat-field corrects the residual structures well. Especially notable in the bluer filters is a pattern of quarter circles centered on the corners of the chips. These patterns are similar to the ``tree rings'' reported by the DES team and others (G. Bernstein REF \& REFS). The details of these tree rings are beyond the scope of this article, and will be explored in future work. Unlike the tree ring features discussed by these other authors, the features observed in the GPC1 photometry are not caused by an interaction of the flat-field with the effective pixel geometry. Instead, these photometric features are due to low-level changes in the PSF size which we attribute to variable charge diffusion (Magnier in prep). Other features include some poorly responding cells (e.g., in XY14) and effects at the edges of chips, possibly where the PSF model fails to follow the changes in the PSF. For stacks and warps, the image calibrations were determined after the relative photometry was performed on the individual chips. Each stack and each warp was tied via relative photometry to the average magnitudes from the chip photometry. In this case, no flat-field corrections were applied. For the stacks, such a correction would not be possible after the stack has been generated, because multiple chip coordinates contribute to each stack pixel coordinate. For the warps, it is in principle possible to map back to the corresponding chip, but the information was not available in the DVO database, and thus it was not possible at this time to determine the flat-field correction appropriate for a given warp.
This latter effect is one of several which degrade the warp photometry compared to the chip photometry at the bright end. \subsection{Photometry Calibration Quality} Figure~\ref{fig:allsky.photom.sigma} shows the standard deviations of the mean residual photometry for bright stars as a function of position across the sky. For each pixel in these images, we selected all objects with (14.5, 14.5, 14.5, 14.0, 13.0) $<$ ($g,r,i,z,y$) $<$ (17, 17, 17, 16.5, 15.5), with at least 3 measurements in $i$-band (to reject artifacts detected in a pair of exposures from the same night), with \code{PSF_QF} $> 0.85$ (to reject excessively-masked objects), and with $mag_{\rm PSF} - mag_{\rm Kron} < 0.1$ (to reject galaxies). We then generated histograms of the difference between the average magnitude and the apparent magnitude in an individual image for each filter for all stars in a given pixel in the images. From these residual histograms, we can then determine the median and the 68\%-ile range to calculate a robust standard deviation. This represents the bright-end systematic error floor for a measurement from a single exposure. The standard deviations are then plotted in Figure~\ref{fig:allsky.photom.sigma}. The 5 panels in Figure~\ref{fig:allsky.photom.sigma} show several features. The Galactic bulge is clearly seen in all five filters, with the impact strongest in the reddest bands. We attribute this to the effects of crowding and contamination of the photometry by neighbors. Large-scale, roughly square features $\approx$ 10 degrees on a side in these images can be attributed to the vagaries of weather: these patches correspond to the observing chunks. These images include both photometric and non-photometric exposures. It seems plausible that the non-photometric images from relatively poor quality nights elevate the typical errors.
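The robust standard deviation from the median and 68\%-ile range of the residual histogram can be computed as follows (a sketch):

```python
import numpy as np

def robust_sigma(residuals):
    """Robust standard deviation from half the 16th-84th percentile
    range, i.e. the 68%-ile range about the median (sketch)."""
    lo, hi = np.percentile(residuals, [16.0, 84.0])
    return 0.5 * (hi - lo)
```

For a pure Gaussian this reproduces the usual standard deviation, while remaining insensitive to a small fraction of gross outliers.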
On small scales, there are circular patterns $\approx$ 3 degrees in diameter corresponding to individual exposures; these represent residual flat-field structures not corrected by our stellar flat-fielding. The medians of the standard deviations in the five filters are $(\sigma_g,\sigma_r,\sigma_i,\sigma_z,\sigma_y) = (14, 14, 15, 15, 18)$ millimagnitudes. \subsection{Calculation of Object Photometry} \subsubsection{Iteratively Reweighted Least Squares Fitting (1D)} \subsubsection{Selection of Measurements} \subsubsection{Stack Photometry} \subsubsection{Warp Photometry} \begin{figure*}[htbp] \begin{center} \includegraphics[width=\hsize,clip]{{pics/KHexample}.png} \caption{\label{fig:KHexample} Illustration of the Koppenh\"ofer Effect on chip XY04. In each plot, the solid line shows the measured mean residual for stars detected on this chip as a function of the instrumental magnitude / FWHM$^2$. {\bf top left} X-direction before correction. {\bf top right} Y-direction before correction. {\bf bottom left} X-direction after correction. {\bf bottom right} Y-direction after correction. } \end{center} \end{figure*} \begin{figure}[htbp] \begin{center} \includegraphics[width=\hsize,clip]{{pics/KHmap}.png} \caption{\label{fig:KHmap} Map of the amplitude of the Koppenh\"ofer Effect on chips across the focal plane. In the affected chips, bright stars are up to 0.2 \note{arcsec} deviant from their expected positions. {\bf bottom left} X-direction before correction. {\bf bottom right} Y-direction before correction. {\bf top left} X-direction after correction. {\bf top right} Y-direction after correction. } \end{center} \end{figure} \section{Astrometry Calibration} Once the full PV3 dataset was loaded into the master PV3 DVO database, along with supporting databases, and the photometric calibrations were performed, relative astrometry could be performed on the database to improve the overall astrometric calibration.
In many respects the relative astrometric analysis is similar to the relative photometric analysis: the repeated measurements of the same object in different images are used to determine a high quality average position for the object. The new average positions are then used to determine improved astrometric calibrations for each of the images. These improved calibrations are used to set the observed coordinates of the measurements from those images, which are in turn used to improve the average positions of the objects. The whole process is repeated for several iterations. Like the photometric analysis, the astrometric analysis is performed in a parallel fashion with the same concept that specific machines are responsible for exposures and objects which land within their regions of responsibility, defined on the basis of lines of constant RA and DEC. Between iteration steps, the astrometric calibrations are shared between the parallel machines, as are the improved positions for objects controlled by one machine but detected in images controlled by another machine. Like the photometric analysis, the entire sky is processed in one pass. However, there are some important differences in the details. \subsection{Systematic Effects} First, the astrometric calibration has a larger number of systematic effects which must be corrected. These consist of: 1) the Koppenh\"ofer Effect, 2) Differential Chromatic Refraction, 3) static deviations in the camera. We discuss each of these in turn below. \subsubsection{Koppenh\"ofer Effect} The Koppenh\"ofer Effect was first identified in February 2011 by Johannes Koppenh\"ofer (MPE) as part of the effort to search for planet transits in the Stellar Transit Survey data. He noticed that the astrometry of bright stars and faint stars disagreed on overlapping chips at the boundary between the STS fields.
After some exploration, it was determined that the X coordinate of the brightest stars was offset from the expected location based on the faint stars for a subset of the GPC1 chips. The essence of the effect was that a large charge packet could be drawn prematurely over an intervening negative serial phase into the summing well, and this leakage was proportionately worse for brighter stars. The brighter the star, the more the charge packet was pushed ahead on the serial register. The amplitude of the effect was at most $0\farcs{}25$, corresponding to a shift of about one pixel. This effect was only observed in 2-phase OTA devices, with 22 of the 30 such devices suffering from this effect. By adjusting the summing well high voltage down from a default +7 V to +5.5 V on the 2-phase devices, the effect was prevented in exposures after 2011-05-03. However, this left 101,550 exposures (27\%) already contaminated by the effect. We measured the Koppenh\"ofer Effect by accumulating the residual astrometry statistics for stars in the database. For each chip, we measured the mean X and Y displacements of the astrometric residuals as a function of the instrumental magnitude of the star divided by the FWHM$^2$. We measured the trend for all chips in a number of different time ranges and found the effect to be quite stable in the period when it was present. The effect only appeared in the serial direction. Figure~\ref{fig:KHexample} shows the KE trend for a typical affected chip both before and after the correction. For the PV3 dataset, we re-measured the KE trends using stars in the Galactic pole regions after an initial relative astrometry calibration pass: the Galactic pole is necessary because the real-time astrometric calibration relies largely on the fainter stars which are not affected by the KE. The trend is then stored in a form which can be applied to the database measurements.
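The per-chip trend measurement described above amounts to a binned mean of the serial-direction residuals against the brightness proxy $m_{\rm inst}/{\rm FWHM}^2$. A minimal sketch of that statistic (the bin edges, array layout, and function name are illustrative, not the actual pipeline code):

```python
import numpy as np

def binned_mean_residual(flux_proxy, residual, bin_edges):
    """Mean astrometric residual in bins of instrumental mag / FWHM^2.

    flux_proxy : brightness proxy per star (instrumental mag / FWHM^2)
    residual   : serial-direction (X) astrometric residual per star, arcsec
    bin_edges  : monotonically increasing bin boundaries
    Returns one mean residual per bin (NaN for empty bins).
    """
    idx = np.digitize(flux_proxy, bin_edges) - 1
    means = np.full(len(bin_edges) - 1, np.nan)
    for b in range(len(bin_edges) - 1):
        in_bin = idx == b
        if in_bin.any():
            means[b] = residual[in_bin].mean()
    return means
```

A bright-star bin with a non-zero mean while the faint bins are consistent with zero is the signature of the effect; the per-chip curve of these means is what the spline correction is fitted to.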
\subsubsection{Differential Chromatic Refraction} Differential Chromatic Refraction (DCR) affects astrometry because the reference stars used to calibrate the images are not the same color (SED) as the rest of the stars in the image. For a given star of a color different from the reference stars, as exposures are taken at higher airmass, the apparent position of the star will be biased along the parallactic angle. While it is possible to build a model for the DCR impact based on the filter response functions and atmospheric refraction, we have instead elected to use an empirical correction for the DCR present in the PV3 database. We have measured the DCR trend using the astrometric residuals of millions of stars after performing an initial relative astrometry calibration. We define a blue DCR color ($g-i$) to be used when correcting the filters \gps,\rps,\ips, and a red DCR color ($z - y$) to be used when correcting the filters $zy$. In the process of performing the relative astrometry calibration, we record the median red and blue colors of the reference stars used to measure the astrometry calibration for each image. As we determine the astrometry parameters for each object in the database, we record the median red and blue reference star colors for all images used to determine the astrometry for a given object. For each star in the database, we know both the color of the star and the typical color of the reference stars used to calibrate the astrometry for that star. We measure the mean deviation of the residuals in the parallactic angle direction and the direction perpendicular to the parallactic angle. For each filter, we determine the DCR trend as a function of the difference between the star color and the reference star color, using the red or blue color appropriate to the particular filter, times the tangent of the zenith distance.
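Per measurement, the applied correction is therefore the fitted trend evaluated at (star color minus reference color) $\times \tan\zeta$, clipped to the range where the trend is well measured (the saturation discussed below). A minimal sketch, with a hypothetical single linear slope standing in for the fitted spline:

```python
import numpy as np

def dcr_offset(star_color, ref_color, tan_zeta, slope, trend_range):
    """Displacement along the parallactic angle due to DCR (arcsec).

    star_color, ref_color : DCR color of the star and the median
                            reference-star color (g-i or z-y by filter)
    tan_zeta    : tangent of the zenith distance
    slope       : assumed linear trend, arcsec per mag per unit tan(zeta);
                  the pipeline fits a spline rather than a single slope
    trend_range : (lo, hi) interval where the trend is well measured;
                  the argument is clipped to it (the "saturation")
    """
    x = np.clip((star_color - ref_color) * tan_zeta, *trend_range)
    return slope * x
```

With the $g$-band slope of 0.010 arcsec airmass$^{-1}$ mag$^{-1}$ quoted below, clipping the argument at an assumed $\pm1.9$ reproduces the quoted maximum $g$-band correction of 0.019 arcsec.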
Figure~\ref{fig:DCRexample} shows the DCR trend for the 5 filters \grizy, as well as the measured displacement in the direction perpendicular to the parallactic angle. We represent the trend with a spline fitted to this dataset. \begin{figure}[htbp] \begin{center} \includegraphics[width=\hsize,clip]{{pics/dcr.r2.g}.png} \caption{\label{fig:DCRexample} Example of the DCR trend in the g-band. {\bf top:} DCR trend in the parallactic direction {\bf bottom:} DCR trend perpendicular to the parallactic angle.} \end{center} \end{figure} The amplitude of the DCR trend in the five filters is $(g,r,i,z,y) = (0.010, 0.001, -0.003, -0.017, -0.021)$ arcsec airmass$^{-1}$ magnitude$^{-1}$. We saturate the DCR correction if the term $\mathrm{color}\times\tan(\zeta)$ for a given measurement is outside the range where the DCR correction is well measured. The maximum DCR correction applied to the five filters is $(g,r,i,z,y) = (0.019, 0.002, 0.003, 0.006, 0.008)$ arcseconds. \begin{figure*}[htbp] \begin{center} \includegraphics[width=0.85\textwidth,clip]{{pics/astroflat.gri}.png} \caption{\label{fig:astroflat.gri} High-resolution astrometric flat-field correction images for $gri$.} \end{center} \end{figure*} \begin{figure*}[htbp] \begin{center} \includegraphics[width=0.85\textwidth,clip]{{pics/astroflat.zy}.png} \caption{\label{fig:astroflat.zy} High-resolution astrometric flat-field correction images for $zy$.} \end{center} \end{figure*} \subsubsection{Astrometric Flat-field} After correction for both KE and DCR, we observe persistent residual astrometric deviations which depend on the position in the camera. We construct an astrometric ``flat-field'' response by determining the mean residual displacement in the X and Y (chip) directions as a function of position in the focal plane. We have measured the astrometric flat using a sampling resolution of 40x40 pixels, matching the photometric flat-field correction images.
Figures~\ref{fig:astroflat.gri} and \ref{fig:astroflat.zy} show the astrometric flat-field images for the five filters \grizy\ in each of the two coordinate directions. These plots show several types of features. The dominant pattern in the astrometric residual is roughly a series of concentric rings. The pattern is similar to the pattern of the focal surface residuals measured by (REF), which also has a concentric series of rings with similar spacing. The ``tent'' in the center of the focal surface is reflected in these astrometry residual plots. Our interpretation of the structure is that the deviations of the focal plane from the ideal focal surface introduce small-scale PSF changes, presumably coupled to the optical aberrations, which result in small changes in the centroid of the object relative to the PSF model at that location. Since the PSF model shape parameters are only able to vary at the level of a 6x6 grid per chip, the finer structures are not included in the PSF model. The PV2 analysis shows the ring structure more clearly, with a pattern much more closely following the focal surface deviations. In the PV2 analysis, the PSF model used at most a 3x3 grid per chip to follow the shape variations, so any changes caused by the optical aberrations would be less well modeled in the PV2 analysis, as we observe. A second pattern which is weakly seen in several chips consists of consistent displacements in the X (serial) direction for certain cells. This effect can be seen most clearly in chips XY45 and XY46. In the PV2 analysis, this pattern is also more clearly seen. In this case, the fact that the astrometric model used polynomials with a maximum of 3rd order per chip means the deviation of individual cells cannot be followed by the astrometric model. A third effect is seen at the edge of the chips, where there appears to be a tendency for the residual to follow the chip edge.
The origin of this is unclear, but it is likely caused by the astrometry model failing to follow the underlying variations because of the need to extrapolate to the edge pixels. Finally, we also mention an interesting effect {\em not} visible at the resolution of these astrometric flat-field images. Fine structures are observed at the $\approx 10$ pixel scale similar to the ``tree rings'' reported by the DES team and others (G. Bernstein REF \& REFS). The details of these tree rings are beyond the scope of this article, and will be explored in future work. Unfortunately, we discovered a problem with the astrometric flat-field correction too late to be repaired for DR1. As can be seen by inspection of Figures~\ref{fig:astroflat.gri} and \ref{fig:astroflat.zy}, there is significant pixel-to-pixel noise in the astrometric flat-field images. This pixel-to-pixel noise is caused by too few stars used in the measurement of the flat-field structure for the high-resolution sampling. As a result, the astrometric flat-field correction reduces systematic structures on large spatial scales, but at the expense of degrading the quality of an individual measurement. Only $i$-band has sufficient signal-to-noise per pixel to avoid significantly increasing the per-measurement position errors. Figure~\ref{fig:allsky.astrom.sigma} shows the standard deviations of the mean residual astrometry in $(\alpha,\delta)$ for bright stars as a function of position across the sky. For each pixel in these images, we selected all objects with $15 < i < 17$, with at least 3 measurements in $i$-band (to reject artifacts detected in a pair of exposures from the same night), with \code{PSF_QF} $> 0.85$ (to reject excessively-masked objects), and with $mag_{\rm PSF} - mag_{\rm Kron} < 0.1$ (to reject galaxies).
We then generated histograms of the difference between the object position predicted for the epoch of each measurement (based on the proper motion and parallax fit) and the observed position of that measurement, in both the Right Ascension and Declination directions (in linear arcseconds), for all stars in a given pixel in the images. From these residual histograms, we can then determine the median and the 68\%-ile range to calculate a robust standard deviation. This represents the bright-end systematic error floor for a measurement from a single exposure. The standard deviations are then plotted in Figure~\ref{fig:allsky.astrom.sigma}. The median value of the standard deviations across the sky is $(\sigma_\alpha, \sigma_\delta) = (22, 23)$ milliarcseconds. The Galactic plane is clearly apparent in these images. As with the photometry, we attribute this to failure of the PSF fitting due to crowding. The celestial North pole regions have somewhat elevated errors in both R.A. and DEC. This may be due to the larger typical seeing at these high airmass regions, but without further exploration this interpretation is uncertain. Several features can be seen which appear to be an effect of the tie to the Gaia astrometry: the stripes near the center of the DEC image and the right side of the R.A. image. The mesh of circular outlines is due to the outer edge of the focal plane where the astrometric calibration is poorly determined. As discussed above, the median values in the images are higher than expected based on our PV2 analysis of the astrometry: the median per-measurement error floor of $\approx 22$ mas is significantly worse than the $\approx 17$ mas value in that earlier analysis. We attribute this degradation to the noise introduced by the astrometric flat-field. This noise can likely be addressed before the DR2 release of the individual measurement data.
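The robust scatter statistic used for these maps — half the central 68\%-ile width of the residual histogram — can be sketched in a few lines; this is a generic implementation of the statistic described above, not the pipeline code itself:

```python
import numpy as np

def robust_sigma(residuals):
    """Robust standard deviation from the central 68%-ile range.

    Half the 16th-to-84th percentile width equals sigma for a Gaussian
    distribution, but is far less sensitive to outliers than np.std.
    """
    lo, hi = np.percentile(residuals, [16.0, 84.0])
    return 0.5 * (hi - lo)
```

Because the estimate depends only on the central quantiles, a small admixture of catastrophic outliers (mismatches, artifacts) leaves the result essentially unchanged, which is why it is preferred over the ordinary standard deviation for per-pixel maps.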
\begin{figure}[htbp] \begin{center} \includegraphics[width=\hsize,clip]{{pics/allsky.astrom.sigma}.png} \caption{\label{fig:allsky.astrom.sigma} Consistency of astrometry measurements across the sky. Each panel shows a map of the standard deviation of astrometry residuals for stars in each pixel. The median value of the standard deviations across the sky is $(\sigma_\alpha, \sigma_\delta) = (22, 23)$ milliarcseconds. These values reflect the typical single-measurement errors for bright stars. See the discussion regarding the astrometric flat-field, which is likely responsible for these elevated values. } \end{center} \end{figure} After the initial analysis to measure the KE corrections, DCR corrections, and astrometric flat-field corrections, we applied these corrections to the entire database. Within the schema of the database, each measurement has the raw chip coordinates (\code{Measure.Xccd,Yccd}) as well as the offset for that object based on each of these three corrections: \code{Measure.XoffKH,YoffKH, Measure.XoffDCR,YoffDCR, Measure.XoffCAM,YoffCAM}. The offsets are calculated for each measurement based on the observed instrumental chip magnitudes and FWHM for the Koppenh\"ofer Effect, on the average chip colors and the altitude \& azimuth of each measurement for the DCR correction, and on the chip coordinates for the astrometric flat-field corrections. The corrections are combined and applied to the raw chip coordinates and saved back in the database in the fields \code{Measure.Xfix,Yfix}. At this point, we are ready to run the full astrometric calibration. \subsection{Galactic Rotation and Solar Motion} The initial analysis of the PV2 astrometry used the 2MASS positions as an inertial constraint: the 2MASS coordinates were included in the calculation of the mean positions for the objects in the database, with weight corresponding to the reported astrometric errors.
In this analysis, the object positions used to determine the calibrations of the image parameters ignored proper motion and parallax. After the image calibrations were determined, individual objects were fitted for proper motion and possibly parallax, as discussed in detail below. Using the PV2 analysis of the astrometry calibration, we discovered large-scale systematic trends in the reported proper motions of background quasars. This motion had an amplitude of 10 - 15 milliarcseconds per year and clear trends with Galactic longitude. We also observed systematic errors of the mean positions with respect to the ICRF milliarcsecond radio quasar positions, with an amplitude of $\approx 60$ milliarcseconds, again with trends associated with Galactic longitude. Since the 2MASS data were believed to have minimal average deviations relative to the ICRF quasars, the latter seemed to be a real effect. We realized that both the proper motion and the mean position biases could be caused by a single common effect: the proper motion of the stars used as reference stars between the 2MASS epoch ($\approx 2000$) and the PS1 epoch ($\approx 2012$). Since we are fitting the image calibrations without fitting for the proper motions of the stars, we are in essence forcing those stars to have proper motions of 0.0. The background quasars would then be observed to have proper motions corresponding to the proper motions of the reference stars, but in the opposite direction. We demonstrated that the observed quasar proper motions agreed well with the distribution expected if the median distance to our reference stars was $\approx 500$ pc. For PV3, we desired to address this bias by including our knowledge about the distances to the reference stars and the expected typical proper motions for stars at those distances. With some constraint on the distance to each star, we can determine the expected proper motion based on a model of the Galactic rotation and solar motions.
We can then calculate the mean positions for the objects keeping the assumed proper motion fixed. When calibrating a specific image, the reference star mean position is then translated to the expected position at the epoch of that image. The image calibration is then performed relative to these predicted positions. This process naturally accounts for the proper motion of the reference stars. In order to make the calibrations consistent with the observed coordinates of an external inertial reference, we perform the iterative fits using the technique as described, but assign very high weights in the initial iterations to the inertial reference, and reduce the weights as the astrometric calibration iterations proceed. In order to perform this analysis, we need estimated distances for every reference star used in the analysis. Green et al (REF) performed SED fitting for 800M stars in the 3$\pi$ region using PV2 data. The goal of this work was to determine the 3D structure of the dust in the galaxy. By fitting model SEDs to stars meeting a basic data quality cut, they determined the best spectral type, and thus $T_{\rm eff}$, absolute $r$-band magnitude, distance modulus, and extinction $A_V$ (the desired output, used to determine the dust extinction as a function of distance throughout the galaxy). We use the distance modulus determined in this analysis to predict the proper motions. To convert the distances to proper motions, we use the Galactic rotation parameters ($A,B$) = (14.82,-12.37) km sec$^{-1}$ kpc$^{-1}$ and Solar motion parameters ($U_{\rm sol}, V_{\rm sol}, W_{\rm sol}$) = (9.32, 11.18, 7.61) km sec$^{-1}$ as determined by Feast \& Whitelock (REF) using Hipparcos data.
Proper motions are determined from the following: \begin{eqnarray} \mu^{\rm gal}_{l} & = & (A \cos (2 l) + B) \cos (b) \\ \mu^{\rm gal}_{b} & = & \frac{-A \sin (2 l) \sin (2 b)}{2} \\ \mu^{\rm sol}_{l} & = & \frac{U \sin(l) - V \cos(l)}{d} \\ \mu^{\rm sol}_{b} & = & \frac{(U \cos(l) + V \sin(l)) \sin(b) - W \cos(b)}{d} \end{eqnarray} where $d$ is the distance and $l,b$ are the Galactic coordinates of the star. Note that the proper motion induced by the Galactic rotation is independent of distance while the reflex motion induced by the solar motion decreases with increasing distance. Also note that this model assumes a flat rotation curve for objects in the thin disk; any reference stars which are part of the halo population will have proper motions which are not described by this model; the mostly random nature of the halo motions should act to increase the noise in the measurement, but should not introduce detectable motion biases. Also, if the distance modulus is not well determined, we can assume the object is simply following the Galactic rotation curve and set a fixed proper motion. If we do not have a distance modulus from the Green et al analysis, we assume a value of 500 pc. \subsection{Gaia Constraint} After the full relative astrometry analysis was performed for the PV3 database, the Gaia Data Release 1 became available \citep{2016A&A...595A...2G, 2016A&A...595A...4L}. This afforded us the opportunity to constrain the astrometry on the basis of the Gaia observations. Gaia DR1 objects which are bright enough to have proper motion and parallax solutions are in general saturated in the PS1 observations. Thus, we are limited to using the Gaia mean positions reported for the fainter stars. We extracted all Gaia sources not marked as a duplicate from the Gaia archive and generated a DVO database from this dataset. We then merged the Gaia DVO into the PV3 master DVO database.
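The thin-disk proper-motion model of the previous subsection, used here to assign preliminary proper motions to the reference stars, can be sketched directly from the four equations above. Summing the Galactic-rotation and solar-reflex terms and the $\kappa \simeq 4.74$ velocity-to-angular-rate conversion are our assumptions for this illustration; the paper does not spell out those steps:

```python
import math

# Feast & Whitelock constants quoted above.
A, B = 14.82, -12.37          # Oort constants, km/s/kpc
U, V, W = 9.32, 11.18, 7.61   # solar motion, km/s
KAPPA = 4.74047               # (km/s) per (mas/yr at 1 kpc) -- assumed

def model_proper_motion(l, b, d):
    """Predicted (mu_l, mu_b) in mas/yr for a thin-disk star.

    l, b : Galactic longitude and latitude in radians; d : distance in kpc.
    Implements the four equations above; Galactic-rotation and
    solar-reflex contributions are summed.
    """
    mu_l_gal = (A * math.cos(2 * l) + B) * math.cos(b)
    mu_b_gal = -A * math.sin(2 * l) * math.sin(2 * b) / 2.0
    mu_l_sol = (U * math.sin(l) - V * math.cos(l)) / d
    mu_b_sol = ((U * math.cos(l) + V * math.sin(l)) * math.sin(b)
                - W * math.cos(b)) / d
    return ((mu_l_gal + mu_l_sol) / KAPPA, (mu_b_gal + mu_b_sol) / KAPPA)
```

As noted in the text, the rotation term is distance-independent while the reflex term falls off as $1/d$, so distance errors mainly affect nearby reference stars.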
We re-ran the complete relative astrometry analysis using Gaia as an additional measurement. We applied the analysis described above, applying the estimated distances to determine preliminary proper motions. The Gaia mean epoch is reported as 2015.0, so all Gaia measurements were assigned this epoch. We wanted to ensure the Gaia measurements dominated the astrometric solutions, so we made the weight very high for the Gaia points: 1000$\times$ the nominal weight in the initial fits (to lock down the reference frame), decreasing to 100$\times$ the nominal weight for the last fits. We also retained the 2MASS measurements in the analysis, but gave them somewhat lower weights than Gaia: while the 2MASS data does not have the accuracy of Gaia, the coverage is known to be quite complete, while the Gaia DR1 has clear gaps and holes. Having 2MASS, even at a lower weight, helps to tile over those gaps. \begin{figure*}[htbp] \begin{center} \includegraphics[width=\hsize,clip]{{pics/gaia.photom}.png} \caption{\label{fig:gaia.photom} Comparison with Gaia photometry. {\bf Left} Mean of PS1 - Gaia, {\bf Right} Standard deviation of PS1 - Gaia. For pixels with $|b| > 30$ and $\delta > -30$, the standard deviation of the PS1 - Gaia mean values is 7 millimagnitudes, while the median of the standard deviations is 12 millimagnitudes. The former is a statement about the consistency of the Gaia and Pan-STARRS\,1 photometry, while the latter reflects the combined bright-end errors for both systems. } \end{center} \end{figure*} Figure~\ref{fig:gaia.photom} shows a comparison between the Pan-STARRS photometry in $g,r,i$ and the Gaia photometry in the $G$-band. To compare the PS1 photometry to the very broadband Gaia G filter, we have determined a transformation based on a 3rd order polynomial fit to $g-r$ and $g-i$ colors. This transformation reproduces Gaia photometry reasonably well for stars which are not too red. 
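The color transformation from PS1 $g,r,i$ to Gaia $G$ can be illustrated with a least-squares polynomial fit. The sketch below fits $G - r$ against the single color $g-i$; this is a simplified, hypothetical stand-in for the actual transformation, which is a 3rd-order fit in both $g-r$ and $g-i$, and the function names and form are ours:

```python
import numpy as np

def fit_G_transform(g, r, i, G, order=3):
    """Least-squares polynomial fit of G - r against g - i.

    Returns coefficients usable with np.polyval, so that
    G_pred = r + np.polyval(coeffs, g - i).
    """
    return np.polyfit(g - i, G - r, order)

def predict_G(coeffs, g, r, i):
    """Estimated Gaia G magnitude from PS1 g, r, i photometry."""
    return r + np.polyval(coeffs, g - i)
```

Restricting the fit to a modest color range, as is done for the comparison below, keeps such a polynomial transformation well behaved; very red stars, where $z$-band flux contributes to $G$, fall outside its validity.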
For a comparison, we have selected all PS1 stars with Gaia measurements meeting the following criteria: $14 < i < 19$, with at least 10 total measurements, within a modest color range $0.2 < g - r < 0.9$. We also restricted to objects with $i_{\rm PSF} - i_{\rm Kron} < 0.1$, using the average $i$ magnitudes determined from the individual exposures. For Figure~\ref{fig:gaia.photom}, we calculate the difference between the estimated $G$-band magnitude based on PS1 $g,r,i$ photometry and the $G$-band photometry reported by Gaia. For each pixel, we determine the histogram of these differences and calculate the median and the 68\%-ile range. In Figure~\ref{fig:gaia.photom}, these values are plotted as a color scale. The Galactic plane is clearly poorly matched between the two photometry systems. This may in part be due to the difficulty of predicting $G$-band magnitudes for stars which are significantly extincted: the $G$-band includes significant flux from the PS1 $z$-band which was not used in our transformation. Many other large-scale features in the median differences have structures similar to the Gaia scanning pattern (large arcs and long parallel lines). There are also structures related to the PS1 exposure footprint. These show up as a mottling on the $\approx 3$ degree scale (e.g., lower right below the Galactic plane). The amplitude of the residual structures is fairly modest. The standard deviation of the median difference values is 7 millimagnitudes. This number gives an indication of the overall photometric consistency of both Gaia and PS1 and implies that the systematic error floor for each survey is less than 7 millimags. \begin{figure*}[htbp] \begin{center} \includegraphics[width=\hsize,clip]{{pics/gaia.astrom}.png} \caption{\label{fig:gaia.astrom} Comparison with Gaia astrometry. {\bf Left} Mean of PS1 - Gaia, {\bf Right} Standard deviation of PS1 - Gaia. The median value of the standard deviations is $(\sigma_\alpha, \sigma_\delta) = (4, 3)$ milliarcseconds.
} \end{center} \end{figure*} Figure~\ref{fig:gaia.astrom} shows a comparison between the Pan-STARRS mean astrometry positions in $\alpha,\delta$ and the Gaia astrometry. For this comparison, we have selected all PS1 stars with Gaia measurements with $14 < i < 19$ and with at least 10 total measurements. For Figure~\ref{fig:gaia.astrom}, we calculate the difference between the position predicted by PS1 at the Gaia epoch (using the proper motion and parallax fit) and the position reported by Gaia. For each pixel, we determine the histogram of these differences in the R.A. and DEC directions, and calculate the median and the 68\%-ile range. In Figure~\ref{fig:gaia.astrom}, these values are plotted as a color scale. There is good consistency between the PS1 and Gaia astrometry. There are patterns from the Galactic plane (though not very strongly at the bulge). There are also clear features due to the PS1 exposure footprint (ring structure on $\approx 3$ degree scales). In the plots of the scatter, there are patterns which are related to the Gaia scanning rule. These are presumably regions with relatively low signal to noise in Gaia; they were also apparent in the plots of the statistics of the per-exposure measurement residuals (Figure~\ref{fig:allsky.astrom.sigma}). The standard deviations of the median differences are $(\sigma_\alpha, \sigma_\delta) = (4, 3)$ milliarcseconds.
\subsection{Calculation of Object Astrometry} \subsubsection{Iteratively Reweighted Least Squares Fitting} \subsubsection{Selection of Measurements} \section{Discussion} \section{Conclusion} \acknowledgments The Pan-STARRS1 Surveys (PS1) have been made possible through contributions of the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation under Grant No. AST-1238877, the University of Maryland, and E\"otv\"os Lor\'and University (ELTE) and the Los Alamos National Laboratory. \bibliographystyle{apj} \input{calibration.bbl} \end{document} \begin{verbatim} Plots: * illustration of the astrometric models (schematic) * astrometry cross-correlation example? * zero point history, including / excluding ubercal? (from Eddie) * applied flat-field images [FITS -> png] * Koppenhoffer plots [from presentations] * DCR plots [exist] * astrometric flat fields [FITS -> png] * PV3 vs Gaia [exit] * PV3 quasar motions [** need to extract **] * bright-end astrometry residuals [running cdhist code, but is the density too low?] * bright-end photometry residuals [running cdhist code, but is the density too low?] 
* careful discussion of calibration wrt scolnic et al \end{verbatim} List of Figures and their sources: * KH example & map: * kukui:/data/kukui.3/eugene/pv3.stats.20161202 * kh.data.20151203.v1/spline.final.fits : spline fits to the KH data * kh.data.20151203.v1.fits : densify images of residuals per chip : (dX,dY) & (T0, T1) = (pre fix, post fix) * mana.sh : kh.example - plot of XY04 * mana.sh : khmap (needs cleanup) * ipp094:/data/ipp094.0/eugene/pv3.cam.20150607/astrom.corrections : extractions and original scripts to make spline, etc * DCR plots: * need to rebuild density plots (density images used to make splines are poor for plots) * old examples: * /data/kukui.3/eugene/dcr.20141205 * dcr.r2.g.png * spline fits (DCR.example) * g : dP/dQ = 0.010, dPmax = 0.019 * r : dP/dQ = 0.001, dPmax = 0.002 * i : dP/dQ = -0.003, dPmax = -0.003 * z : dP/dQ = -0.017, dPmax = -0.006 * y : dP/dQ = -0.021, dPmax = -0.008 * astroflats: * kukui:/data/kukui.3/eugene/pv3.cam.20150607 * plots.sh : * photflat.20151127.fix.fits was made in: * kukui:/data/kukui.3/eugene/setphot.20151213 * Gaia comparisons: * ipp094:/data/ipp094.0/eugene/pv3.stats.20161022 * kukui:/data/kukui.3/eugene/pv3.stats.20161022 * photom & astrom residuals: kukui:/data/kukui.3/eugene/pv3.stats.20161202/maps.measure
\section*{Introduction} The main goal of this paper is to derive a formula for the character of the fusion product of two integrable irreducible representations of the affine Kac-Moody Lie algebra $\slth$. We first briefly recall the definition. The notion of the fusion product of cyclic representations $V_1,\ldots, V_n$ of the Lie algebra $\g$ was introduced in \cite{FL}. This object is a cyclic graded representation of the current algebra $\g\T\C[u]$. One starts with a tensor product of evaluation representations $V_1(z_1)\T\ldots\T V_n(z_n)$, where $z_i$ are pairwise distinct complex numbers. Introduce a filtration \begin{equation} \label{deffus} F_l=\mathrm{span} \{(g_1\T u^{i_1}\ldots g_k\T u^{i_k})\cdot v_1\T\ldots\T v_n, \ \sum i_\al\le l\}, \end{equation} where $v_i$ are cyclic vectors of $V_i$. Then the fusion product is the associated graded space $$V_1(z_1)*\ldots * V_n(z_n)=F_0\oplus\bigoplus_{l>0} F_l/F_{l-1}.$$ An important property is that in some cases $V_1(z_1)*\ldots * V_n(z_n)$ is independent of $\{z_i\}$ as a $\g\T\C[u]$-module. This is always true for \begin{itemize} \item $n=2$ and arbitrary cyclic modules (obvious); \item $\g=\slt$ and $n$ finite-dimensional modules (see \cite{FL}, \cite{FF1}); \item $\g={\mathfrak{sl}_n}$ and irreducible representations with special highest weights (see \cite{CL}, \cite{FKL}, \cite{Ked}); \item a simple Lie algebra $\g$ and special highest weight irreducible representations (see \cite{FoL}). \end{itemize} In these cases we omit the numbers $z_i$ and denote the corresponding fusion product simply by $V_1*\ldots * V_n$. Now let $\g$ be an affine Kac-Moody Lie algebra and let $V_i$ be integrable irreducible representations. This situation for $\g=\slth$ and $n=2$ was studied in \cite{FFJMT} in order to derive some results about monomial bases for vertex operators in minimal models. In particular a bosonic formula for the character of the fusion product of level $1$ and level $k$ representations was obtained.
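As a toy illustration of definition (\ref{deffus}) in the finite-dimensional setting (this example is ours, not taken from the works cited above):

```latex
% Take \g = \slt and V_1 = V_2 = \C^2, the two-dimensional module.
% The cyclic vector v_1\T v_2 has weight 2, so
% F_0 = U(\slt)\cdot(v_1\T v_2) is the three-dimensional irreducible
% module V(2); one application of e\T u fills in the remaining
% one-dimensional summand, which therefore sits in fusion degree 1:
\[
  \C^2 * \C^2 \;\simeq\; V(2)\,\oplus\, q\,V(0),
  \qquad
  \ch_{z,q}\, \C^2 * \C^2 \;=\; \bigl(z^{2}+1+z^{-2}\bigr) + q .
\]
```

Thus the fusion grading refines the classical decomposition $\C^2\T\C^2 \simeq V(2)\oplus V(0)$ by recording the $u$-degree at which each summand appears.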
Here we consider two integrable representations of arbitrary levels and derive a fermionic formula for the corresponding fusion product. We briefly describe our approach here. Let $\g=\slth=\slt\T\C[t,t^{-1}]\oplus\C K\oplus \C d$, where $K$ is a central element and $d$ is a degree operator. For $k\in\N$ and $0\le i\le k$ let $L_{i,k}$ be the corresponding irreducible integrable representation of $\slth$ with highest weight vector $v_{i,k}$ such that $h_0v_{i,k}=iv_{i,k}$, $Kv_{i,k}=kv_{i,k}$, $dv_{i,k}=0$, where $h$ is a standard generator of the Cartan subalgebra and for $x\in\slt$ we set $x_i=x\T t^i$. In what follows we use the notation $e,h,f$ for the standard basis of $\slt$. There exist three different gradings on the fusion product $L_{i_1,k_1}*L_{i_2,k_2}$: $\deg_z$ by the operator $h_0$, $\deg_q$ by the operator $d$ and $\deg_u$ coming from the fusion filtration $(\ref{deffus})$. This defines a character $\ch_{z,u,q} L_{i_1,k_1}* L_{i_2,k_2}$. To find this character we consider a principal subspace $W_{i,k}^N\hk L_{i,k}$ which is generated from the extremal vector $v_{i,k}^N$ by the action of the operators $e_i$, $i\in\Z$ (see \cite{FS}, \cite{FF2}). The weight of $v_{i,k}^N$ is the weight of $v_{i,k}$ shifted by the $N$-th power of the translation element from the Weyl group of $\slth$. An important fact is that $L_{i,k}$ is the limit of $W_{i,k}^N$ as $N\to\infty$. This gives $$L_{i_1,k_1}* L_{i_2,k_2}=\lim_{N\to\infty} W_{i_1,k_1}^N* W_{i_2,k_2}^N.$$ So we only need to find the character of $W_{i_1,k_1}^N* W_{i_2,k_2}^N$. We note that this space is a cyclic representation of an abelian Lie algebra with basis $e_i, e_i\T u$, $i\in\Z$, with cyclic vector $v_{i_1,k_1}\T v_{i_2,k_2}$. We give a precise description of $W_{i_1,k_1}^N* W_{i_2,k_2}^N$ in terms of generators and relations. Let $\la_0\ge\la_1\ge\cd \ge \la_s>0$ be a set of positive integers.
We consider a corresponding partition $\bl=\{(i,j),\ i,j\in\Z, i,j\ge 0, i\le\la_j\}$ and define an algebra \begin{equation} A_\bl=\C[a_0,a_{-1},\ld; b_0,b_{-1},\ld]/\bra a(z)^ib(z)^j, (i,j)\notin\bl\ket, \end{equation} where $a(z)=\sum_{k\ge 0} a_{-k} z^k$, $b(z)=\sum_{l\ge 0} b_{-l} z^l$. This algebra is a two-dimensional generalization of the algebra $\C[e_i]_{i\le 0}/e(z)^{k+1}$ which plays an important role in the representation theory of $\slth$ (see for example \cite{FS}). Recall that one of the most useful tools in the study of $A_k$ and similar algebras is a vertex operator realization (see for example \cite{FK}, \cite{FF3}, \cite{FST}). We also use this technique in our situation. Namely, for $\bl$ such that $\la_{i-1}-\la_i\le \la_i-\la_{i+1}$ we embed $A_\bl$ into a multi-dimensional lattice vertex operator algebra and compute the character of $A_\bl$. In addition we show that there exists $\bl$ (depending on $k_1$ and $k_2$) such that up to a certain identification of the generators $e_i$, $e_i\T u$ and $a_j, b_j$ the space $W_{i_1,k_1}^N* W_{i_2,k_2}^N$ is isomorphic to the quotient of $A_\bl$ by some ideal. This ideal depends on $i_1$ and $i_2$ and is generated by certain coefficients of the series $a(z)^i$ and $b(z)^j$. Using the realization above we derive a fermionic (Gordon type) formula for the character of the fusion product of principal subspaces and therefore (as a limit) the character of the fusion product of two integrable irreducible representations. For fermionic formulas in the case of finite-dimensional algebras see for example \cite{FF1, FJKLM1, FJKLM2, AK}. We also expect the existence of a bosonic (alternating sign) formula for the fusion product of integrable representations similar to the one given in \cite{FFJMT} in the $\slth$ case with one of the representations of level $1$. Let us briefly explain the importance of such a formula.
Let $\g$ be an affine Kac-Moody Lie algebra, $L_\mu, L_\nu$ be its integrable highest weight representations. Then one has a decomposition of the tensor product $$L_\mu\T L_\nu=\bigoplus C_{\mu,\nu}^\pi L_\pi$$ into the direct sum of integrable highest weight modules $L_\pi$, where $C_{\mu,\nu}^\pi$ are the spaces of multiplicities (see \cite{K1}). The fusion filtration $(\ref{deffus})$ defines a filtration on $C_{\mu,\nu}^\pi$, because all $F_l$ in $(\ref{deffus})$ are representations of $\g=\g\T 1\hk\g\T \C[t]$. We note that the spaces of multiplicities appear in the coset conformal field theories (see \cite{DMS}). In \cite{FFJMT} the character of the filtered space $C_{\mu,\nu}^\pi$ was given in bosonic form in a particular case, and the resulting formula was applied to the study of vertex operators in Virasoro minimal models. We hope that in the general case (at least for $\g=\slth$) it is also possible to write an analogous bosonic formula and establish a connection with the corresponding coset conformal field theory. At the end of the introduction we comment on the connection of the affine fusion product with two-dimensional affine Demazure modules. Recall that in some special cases affine Demazure modules (see for example \cite{Kum}) can be identified with the fusion product of finite-dimensional irreducible representations of the corresponding simple algebra (see \cite{FF2}, \cite{CL}, \cite{FKL}, \cite{FoL}). In addition some integrable irreducible representations (in particular the basic one) can be realized as an inductive limit of these fusion products (see \cite{FF2, FoL}). All these statements allow one to consider affine fusion products of two representations as the simplest examples of double affine Demazure modules. To construct other examples one needs to "fuse" $N$ irreducible representations for arbitrary $N$. In particular it seems to be important to prove the independence of the corresponding fusion products from the evaluation parameters.
We hope to return to these questions elsewhere. Our paper is organized in the following way. In Section $1$ we recall the definition and collect the main properties of lattice vertex operator algebras and principal subspaces. In Section $2$ we study a family of commutative algebras labeled by Young diagrams and construct their vertex operator realization. In Section $3$ we apply the results of Section $2$ to the computation of the character of the fusion product of two irreducible representations. {\it Acknowledgements.}\\ The research of BF is partially supported by RFBR Grants 04-01-00303 and 05-01-01007, INTAS 03-51-3350 and NSh 2044.2003.2. The research of EF is partially supported by the RFBR Grant 06-01-00037 and LSS 4401.2006.2. \section{Lattice vertex operator algebras} In this section we recall the main properties of lattice vertex operator algebras (VOA for short) and derive some statements about principal subspaces. The main references are \cite{K2}, \cite{BF}, \cite{dong}. Let $L$ be a lattice of finite rank equipped with a symmetric bilinear form $(\cdot,\cdot): L\times L\to \Z$ such that $(\al,\al)>0$ for all $\al\in L\setminus \{0\}$. Let $\h=L\T_{\Z} \C$. The form $(\cdot,\cdot)$ induces a bilinear form on $\h$, for which we use the same notation. Let $$\widehat\h=\h\T\C[t,t^{-1}]\oplus \C K$$ be the corresponding multi-dimensional Heisenberg algebra with the bracket $$[\al\T t^i, \be\T t^j]=i\delta_{i,-j}(\al,\be) K,\ [K,\al\T t^i]=0,\ \al,\be\in\h.$$ For $\al\in\h$ define the Fock representation $\pi_\al$ generated by a vector $|\al\ket$ such that $$(\be\T t^n) |\al\ket=0,\ n>0; \qquad (\be\T 1) |\al\ket=(\be,\al) |\al\ket; \qquad K|\al\ket=|\al\ket.$$ We now define a VOA $V_L$ associated with $L$. We deal only with the even case, i.e. $(\al,\be)\in 2\Z$ for all $\al,\be\in L$ (in the general case the construction leads to the so-called super VOA).
As a vector space $$V_L\simeq \bigoplus_{\al\in L} \pi_\al.$$ The $q$-degree on $V_L$ is defined by \begin{equation} \label{defqdeg} \deg_q |\al\ket= \frac{(\al,\al)}{2},\quad \deg_q (\be\T t^n)=-n. \end{equation} The main ingredients of the VOA structure on $V_L$ are the bosonic vertex operators $\Gamma_\al(z)$ which correspond to the highest weight vectors $|\al\ket$. One sets \begin{equation} \label{defvo} \Gamma_\al(z)=S_\al z^{\al\T 1} \exp(-\sum_{n<0} \frac{\al\T t^n}{n} z^{-n}) \exp(-\sum_{n>0} \frac{\al\T t^n}{n} z^{-n}), \end{equation} where $z^{\al\T 1}$ acts on $\pi_\be$ by $z^{(\al,\be)}$ and the operator $S_\al$ is defined by $$S_\al |\be\ket=c_{\al,\be} |\al+\be\ket;\quad [S_\al,\be\T t^n]=0, \ \al,\be\in\h,$$ where $c_{\al,\be}$ are some nonvanishing constants. The Fourier decomposition is given by $$\Gamma_\al(z)=\sum_{n\in\Z} \Gamma_\al(n) z^{-n-(\al,\al)/2}.$$ In particular, \begin{equation} \label{htoh} \Gamma_\al(-(\al,\al)/2-(\al,\be))|\be\ket=c_{\al,\be}|\al+\be\ket. \end{equation} One of the main properties of vertex operators is the following commutation relation: \begin{equation} \label{vertop} [\al\T t^n,\Gamma_\be(z)]=(\al,\be)z^n \Gamma_\be(z). \end{equation} Another important formula describes the product of two vertex operators: \begin{multline} \Gamma_\al(z)\Gamma_\be(w)=(z-w)^{(\al,\be)} S_\al S_\be z^{(\al+\be)\T 1}\times\\ \exp(-(\sum_{n<0} \frac{\al\T t^n}{n} z^{-n} + \frac{\be\T t^n}{n} w^{-n})) \exp(-(\sum_{n>0} \frac{\al\T t^n}{n} z^{-n} + \frac{\be\T t^n}{n} w^{-n})). \end{multline} This leads to the following proposition: \begin{prop} \begin{equation} \label{vorel} (\Gamma_\al(z))^{(k)}(\Gamma_\be(z))^{(l)}=0 \text{ if } k+l<(\al,\be), \end{equation} where the superscript $(k)$ denotes the $k$-th derivative of the corresponding series. In addition if $(\al,\be)=0$ then $$\Gamma_\al(z)\Gamma_\be(z) \text{ is proportional to } \Gamma_{\al+\be}(z).$$ \end{prop} We now recall some basic facts about the representation theory of $V_L$.
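The vanishing statement in the proposition is elementary calculus on the prefactor $(z-w)^{(\al,\be)}$: every mixed derivative of total order less than $(\al,\be)$ retains a positive power of $(z-w)$ and hence vanishes on the diagonal $z=w$. A quick numerical sketch of this mechanism (with a hypothetical polynomial $g$ standing in for the regular part of the product; the polynomial is invented for the example, not taken from the text):

```python
from math import perm
from collections import defaultdict

def mul(f, g):
    """Multiply two bivariate polynomials given as {(deg_z, deg_w): coeff}."""
    h = defaultdict(int)
    for (a, b), c in f.items():
        for (d, e), k in g.items():
            h[(a + d, b + e)] += c * k
    return dict(h)

def deriv_at_diag(f, k, l):
    """Value of d^k/dz^k d^l/dw^l f at z = w = 1 (math.perm(a, k) = 0 when k > a)."""
    return sum(c * perm(a, k) * perm(b, l) for (a, b), c in f.items())

m = 3                                   # plays the role of (al, be)
zw = {(1, 0): 1, (0, 1): -1}            # the polynomial z - w
g = {(0, 0): 1, (1, 1): 2, (0, 2): 1}   # hypothetical regular part, g(1, 1) = 4
f = g
for _ in range(m):
    f = mul(f, zw)                      # f = (z - w)^3 * g

# all mixed derivatives of total order < m vanish on the diagonal
assert all(deriv_at_diag(f, k, l) == 0
           for k in range(m) for l in range(m) if k + l < m)
# at total order m the derivative is generically nonzero: here 3! * g(1, 1)
assert deriv_at_diag(f, 3, 0) == 6 * 4
```

At total order $(\al,\be)$ the only surviving Leibniz term is the one where all derivatives hit the prefactor, which is why the bound in the proposition is sharp.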
This VOA is known to be rational, i.e. every $V_L$-module is completely reducible. The number of irreducible representations is finite. These representations are labeled by the elements of $L'/L$, where $L'$ is the dual lattice \begin{equation} \label{duallat} L'=\{\be\in L\T_\Z \Q:\ (\al,\be)\in\Z\ \forall \al\in L\}. \end{equation} Namely for $\gamma\in L'/L$ define $$V^\gamma_L=\bigoplus_{\be\in L+\gamma} \pi_\be.$$ For example, the vertex operators $\Gamma_\al(z)$ act on each $V^\gamma_L$ via the formula $(\ref{defvo})$ (because $(\al,\be+\gamma)\in\Z$ for all $\be\in L$). The $q$-degree on $V^\gamma_L$ is defined as in ($\ref{defqdeg}$). In what follows we fix a set $\al_1,\ldots,\al_N$ of linearly independent vectors generating the lattice $L$. We denote the nondegenerate matrix of scalar products by $M$: $m_{ij}=(\al_i,\al_j)$, and assume that $m_{ij}\in \N$. \begin{lem} For any vector $\bv\in \Z_{\ge 0}^N$ there exist $\gamma_\bv\in L'/L$ and $\be_\bv\in L+ \gamma_\bv$ such that \begin{equation} \label{begcond} \Gamma_{\al_i} (-n) |\be_\bv\ket=0 \text{ for } n<v_i, 1\le i\le N; \quad \Gamma_{\al_i} (-v_i) |\be_\bv\ket=c_i |\be_\bv+\al_i\ket, \end{equation} with some nontrivial constants $c_i$. \end{lem} \begin{proof} We only need to find $\be_\bv$ such that \begin{equation} \label{degcond} \deg_q |\be_\bv+\al_i\ket - \deg_q |\be_\bv\ket=v_i,\ 1\le i\le N. \end{equation} Note that $\be_\bv\in L\T_\Z \Q$, so $\be_\bv$ is a rational linear combination of the $\al_i$ and we can consider $\be_\bv$ as a vector in $\Q^N$. Then $(\ref{degcond})$ is equivalent to \begin{equation} \label{bebv} \frac{m_{ii}}{2}+ (M\be_\bv)_i=v_i. \end{equation} In view of $m_{ii}/2\in\Z$ we obtain that $\be_\bv\in L'$ satisfying $(\ref{bebv})$ indeed exists, because $(\ref{duallat})$ can be rewritten as $$L'=\{\be\in L\T_\Z \Q:\ M\be\in\Z\}.$$ Then $\gamma_\bv$ is defined as the class of $\be_\bv$. \end{proof} We now define principal subspaces.
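The proof above is constructive: $(\ref{bebv})$ is just a linear system over $\Q$. As a sanity check, here is a minimal numeric sketch for a hypothetical rank-$2$ Gram matrix (the matrix $M$ and the vector $\bv$ are invented for the example, not taken from the text):

```python
from fractions import Fraction as F

# Hypothetical 2x2 Gram matrix of the lattice generators (even diagonal)
M = [[F(2), F(1)], [F(1), F(2)]]
v = [F(0), F(1)]  # target vector v in Z_{>=0}^N

# Solve M beta = v - diag(M)/2, i.e. condition (bebv)
rhs = [v[i] - M[i][i] / 2 for i in range(2)]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
beta = [(M[1][1] * rhs[0] - M[0][1] * rhs[1]) / det,
        (M[0][0] * rhs[1] - M[1][0] * rhs[0]) / det]

# beta lies in the dual lattice L': M beta has integer entries
Mbeta = [M[i][0] * beta[0] + M[i][1] * beta[1] for i in range(2)]
assert all(x.denominator == 1 for x in Mbeta)
# and the degree condition deg|beta + alpha_i> - deg|beta> = v_i holds
assert all(M[i][i] / 2 + Mbeta[i] == v[i] for i in range(2))
print(beta)  # [Fraction(-2, 3), Fraction(1, 3)]
```

The integrality of $M\be_\bv$ is exactly the dual-lattice condition, as in the last displayed formula of the proof.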
For $\bv\in \Z_{\ge 0}^N$ consider the subspace $W_L(\bv)\hk V^{\gamma_\bv}_L$ generated from the vector $|\be_\bv\ket$ by the action of the operators $\Gamma_{\al_i}(-n_i)$ with $n_i\ge v_i$ ($1\le i\le N$). Our goal is to describe $W_L(\bv)$ (in particular we want to find its character). We first realize this subspace as a quotient of a polynomial algebra. Namely define $W'_L(\bv)$ as the quotient of $\C[a_i(-n)]_{\substack{1\le i\le N\\ n\ge v_i}}$ by the relations $$a_i(z)^{(k)}a_j(z)^{(l)},\ k+l<m_{ij},$$ where $a_i(z)=\sum_{n\ge v_i} z^n a_i(-n)$. $W_L'(\bv)$ is generated by the coefficients $$a_i(-n),\quad 1\le i\le N, n\ge v_i.$$ We note that $W_L'(\bv)=\bigoplus_{\bn\in\Z_{\ge 0}^N} W_{L,\bn}'(\bv)$, where $W_{L,\bn}'(\bv)$ is the subspace spanned by monomials in the $a_i(k)$ such that the number of factors of the type $a_{i_0}(k)$ with fixed $i_0$ is exactly $n_{i_0}$. The character of $W_{L,\bn}'(\bv)$ is naturally defined by $\deg_q a_i(k)=-k.$ \begin{lem} \label{charlem} \begin{equation} \label{charact} \ch_q W_{L,\bn}'(\bv)= \frac{q^{\bn M\bn/2+\sum_{i=1}^N n_i(v_i-m_{ii}/2)}}{(q)_\bn}, \end{equation} where $(q)_\bn=\prod_{j=1}^N (q)_{n_j}$, $(q)_n=\prod_{j=1}^n (1-q^j)$. \end{lem} \begin{proof} We use the dual space approach (see \cite{FS, FJKLM1}). For $\theta\in (W_{L,\bn}'(\bv))^*$ define a polynomial $f_\theta\in\C[z_{i,n}]_{\substack{1\le i\le N\\ 1\le n\le n_i}}$ by $$f_\theta=\theta\left(\prod_{i=1}^N \prod_{n=1}^{n_i} a_i(z_{i,n})\right).$$ From the exact form of the relations in $W_L'(\bv)$ one can see that the space $$\{f_\theta:\ \theta\in (W_{L,\bn}'(\bv))^*\}$$ coincides with the space of polynomials which are divisible by \begin{multline} \label{pol} \prod_{i=1}^N\prod_{n=1}^{n_i} z_{i,n}^{v_i} \prod_{i=1}^N \prod_{1\le n< m\le n_i} (z_{i,n}-z_{i,m})^{m_{ii}} \times\\ \prod_{1\le i<j\le N} \prod_{\substack{1\le n\le n_i\\ 1\le m\le n_j}} (z_{i,n}-z_{j,m})^{m_{ij}}.
\end{multline} The character of such polynomials coincides with the right hand side of $(\ref{charact})$. \end{proof} In the next proposition we show that the spaces $W_L(\bv)$ and $W_L'(\bv)$ are isomorphic. \begin{prop} \label{charvo} The map $\be_\bv\mapsto 1$, $\Gamma_{\al_i}(n)\mapsto a_i(n)$ induces the isomorphism $$W_L(\bv)\simeq W_L'(\bv).$$ In particular for any $\bn=(n_1,\ldots,n_N)\in\Z_{\ge 0}^N$ \begin{equation} \label{chvo} \ch_q (W_L(\bv)\cap \pi_{\be_\bv+n_1\al_1+\ldots +n_N\al_N})= \frac{q^{\bn M\bn/2+\sum_{i=1}^N n_i(v_i-m_{ii}/2)}}{(q)_\bn} q^{\deg_q \be_\bv}. \end{equation} \end{prop} \begin{proof} Because of $(\ref{vorel})$ it suffices to prove the equality $(\ref{chvo})$. We note that $$|\be_\bv+n_1\al_1+\ldots +n_N\al_N\ket \in W_L(\bv)$$ (see $(\ref{htoh})$). In addition \begin{multline*} \deg_q|\be_\bv+\sum_{i=1}^N n_i\al_i\ket- \deg_q|\be_\bv\ket= \frac{1}{2}\left((\be_\bv+\bn) M (\be_\bv+\bn)- \be_\bv M \be_\bv\right)=\\ \frac{1}{2}(\bn M \bn +2 \bn M \be_\bv)= \frac{1}{2}\bn M \bn + \sum_{i=1}^N n_i (v_i-\frac{m_{ii}}{2}), \end{multline*} where the last equality is true because of $(\ref{bebv})$. Therefore the minimal power of $q$ in the left hand side of $(\ref{chvo})$ is equal to $$\bn M\bn/2+\sum_{i=1}^N n_i(v_i-m_{ii}/2).$$ For $\theta\in (W_L(\bv)\cap \pi_{\be_\bv +n_1\al_1+\ldots + n_N\al_N})^*$ set $$f_\theta(z_{i,n})_{\substack{1\le i\le N\\ 1\le n\le n_i}}= \theta(\prod_{i=1}^N\prod_{n=1}^{n_i} \Gamma_{\al_i}(z_{i,n})).$$ In view of $(\ref{vertop})$ we obtain that $W_L(\bv)\cap \pi_{\be_\bv + n_1\al_1+\ldots +n_N\al_N}$ is invariant under the action of the operators $\al_i\T t^k$, $k\ge 0$, $1\le i\le N$.
For the action on the dual space one has $$(\al_i\T t^k)f_\theta= (\sum_{j=1}^N m_{ij} \sum_{n=1}^{n_j} z_{j,n}^k) f_\theta.$$ Using the nondegeneracy of $M$ and the fact that the polynomials $\sum_{n=1}^{n_j} z_{j,n}^k$, $k\ge 0$ generate the ring of symmetric polynomials in the variables $z_{j,n}$ with fixed $j$, we obtain that the character of $W_L(\bv)\cap \pi_{\be_\bv+n_1\al_1+\ldots +n_N\al_N}$ is greater than or equal to the right hand side of $(\ref{chvo})$. Now using $(\ref{vorel})$ and Lemma $\ref{charlem}$ we obtain our proposition. \end{proof} \section{Commutative algebras} Let $\la_0\ge\la_1\ge\cd \ge \la_s>0$ be a set of positive integers. We consider a corresponding partition $\bl=\{(i,j),\ i,j\in\Z_{\ge 0}, i\le\la_j\}$ and define an algebra \begin{equation} A_\bl=\C[a_0,a_{-1},\ld; b_0,b_{-1},\ld]/\bra a(z)^ib(z)^j, (i,j)\notin\bl\ket, \end{equation} where $a(z)=\sum_{k\ge 0} a_{-k} z^k$, $b(z)=\sum_{l\ge 0} b_{-l} z^l$. This means that our algebra is generated by two sets of variables $a_k$ and $b_l$ and the ideal of relations is generated by the coefficients of the series $a(z)^ib(z)^j$, $(i,j)\notin\bl$. For example, $a(z)^{\la_0+1}=b(z)^{s+1}=0$. $A_\bl$ is graded by $$\deg_z a_n= \deg_z b_n=1,\ \ \deg_u a_n=0, \deg_u b_n=1,\ \ \deg_q a_n=\deg_q b_n=-n.$$ We want to find the character of $A_\bl$, which is given by \begin{multline} \ch_{z,u,q} A_\bl=\\ \sum_{i,j,k \ge 0} z^i u^j q^k \dim \{x\in A_\bl:\ \deg_z x=i, \deg_u x=j, \deg_q x=k\}. \end{multline} We first recall the corresponding result in the one-dimensional case (see \cite{FS, FJKLM1}). \begin{prop} \label{filtr} Let $A_k=\C[a_0,a_{-1},\ld]/a(z)^{k+1}$. Then there exists a Gordon filtration $F_\mu$ of $A_k$ (labeled by Young diagrams $\mu$) such that the adjoint graded algebra is generated by the coefficients of series $a^{[i]}(z)$, which are the images of the powers $a(z)^i$, $1\le i\le k$.
In addition the defining relations in the adjoint graded algebra are of the form \begin{equation} \label{rel} a^{[i]}(z)^{(l)} a^{[j]}(z)^{(r)}=0 \text{ if } l+r< 2\min(i,j). \end{equation} Here the superscript $(l)$ is used for the $l$-th derivative of the corresponding series. Relations $(\ref{rel})$ give the following Gordon type formula: \begin{equation} \label{G} \ch_{z,q} A_k =\sum_{\bn\in \Z_{\ge 0}^k} (zq^{-1})^{|\bn|} \frac{q^{\bn A\bn/2}}{(q)_\bn}, \end{equation} where $A_{i,j}=2\min(i,j)$, $|\bn|=\sum_{i=1}^k in_i$ and \begin{equation} (q)_\bn=\prod_{i=1}^k (q)_{n_i}, \ (q)_j=\prod_{s=1}^j (1-q^s). \end{equation} \end{prop} The following theorem gives a two-dimensional generalization of $(\ref{G})$ for special $\bl$. \begin{theorem} \label{mth} We consider $\la_0\ge \ld\ge \la_s > 0$ with the condition \begin{equation} \la_{i-1}-\la_i\le \la_i-\la_{i+1},\ i=1,\ld, s-1. \end{equation} Then \begin{equation} \label{mf} \ch_{z,u,q} A_\bl=\sum_{\bn\in \Z^{\la_0}_{\ge 0}, \bm\in\Z_{\ge 0}^{s}} u^{|\bm|} (zq^{-1})^{|\bn|+|\bm|} \frac{q^{\bn A\bn/2 + \bn B\bm + \bm A\bm/2}} {(q)_\bn (q)_\bm}, \end{equation} where $A_{i,j}=2\min(i,j)$ and $B_{i,j}=\max(0,i-\la_j)$. \end{theorem} We first construct an algebra whose character is given exactly by the right hand side of $(\ref{mf})$. Let $\A_\bl$ be the algebra generated by the coefficients of the abelian currents $$a^{[i]}(z)=\sum_{n\ge 0} a^{[i]}_{-n} z^n,\ 1\le i\le \la_0,\quad b^{[j]}(z)=\sum_{n\ge 0} b^{[j]}_{-n} z^n,\ 1\le j\le s.$$ These currents are subject only to the relations \begin{gather} \label{f} a^{[i]}(z)^{(l)} a^{[j]}(z)^{(r)}=0 \text{ if } l+r< 2\min(i,j),\\ \label{s} b^{[i]}(z)^{(l)} b^{[j]}(z)^{(r)}=0 \text{ if } l+r< 2\min(i,j),\\ \label{t} a^{[i]}(z)^{(l)} b^{[j]}(z)^{(r)}=0 \text{ if } l+r< i-\la_j. \end{gather} We define three gradings on $\A_\bl$: \begin{gather*} \deg_z a^{[i]}_n=\deg_z b^{[i]}_n=i,\\ \deg_u a^{[i]}_n=0, \deg_u b^{[i]}_n=i,\\ \deg_q a^{[i]}_n=\deg_q b^{[i]}_n=-n.
\end{gather*} This defines the character $\ch_{z,u,q} \A_\bl$ of $\A_\bl$. \begin{lem} \label{dual} $\ch_{z,u,q} \A_\bl$ coincides with the right hand side of $(\ref{mf})$. \end{lem} \begin{proof} Follows from Lemma $\ref{charlem}$. \end{proof} We now prove our theorem in two steps, comparing the left and right hand sides of $(\ref{mf})$. \begin{lem} For any $\bl$ the left hand side of $(\ref{mf})$ is less than or equal to the right hand side. \end{lem} \begin{proof} Consider a filtration $F_\mu(A_{\la_0})\cdot \C[b_i]_{i=0}^\infty$ on $A_\bl$ (this filtration comes from the filtration from Proposition $\ref{filtr}$ on the subalgebra of $A_\bl$ generated by the coefficients of $a(z)$). The adjoint graded algebra $A'_\bl$ is generated by the coefficients of the series $a^{[i]}(z)$ (the images of $a(z)^i$) and $b(z)$. We now consider a filtration on $A'_\bl$ coming from the filtration from Proposition $\ref{filtr}$ on the subalgebra of $A'_\bl$ generated by the $b_j$. We denote the adjoint graded algebra by $A^{gr}_\bl$. This algebra is generated by the coefficients of the currents \begin{equation} \label{gen} a^{[i]}(z), b^{[j]}(z), \quad i\le \la_0,\ j\le s. \end{equation} First note that in view of $(\ref{rel})$ the currents $(\ref{gen})$ are subject to the relations \begin{gather} \label{1} a^{[i]}(z)^{(l)} a^{[j]}(z)^{(r)}=0 \text{ if } l+r< 2\min(i,j).\\ \label{2} b^{[i]}(z)^{(l)} b^{[j]}(z)^{(r)}=0 \text{ if } l+r< 2\min(i,j). \end{gather} We show that in addition \begin{equation} \label{3} a^{[i]}(z)^{(l)} b^{[j]}(z)^{(r)}=0 \text{ if } l+r< \max(0,i-\la_j). \end{equation} Note that it is enough to show that the relations \begin{equation} \label{simp} (a(z)^i)^{(l)} (b(z)^j)^{(r)}=0 \text{ if } l+r< \max(0,i-\la_j) \end{equation} hold in $A_\bl$. We use induction on $i$. For $i=\la_j+1$ we need to show that $a(z)^{\la_j+1} b(z)^j=0$. But this is a relation in $A_\bl$. Now suppose that ($\ref{simp}$) is proved for all $i\le i_0$.
We want to show that \begin{equation} \label{tp} (a(z)^{i_0+1})^{(l)} (b(z)^j)^{(r)}=0 \text{ if } l+r< i_0-\la_j+1. \end{equation} If $l+r <i_0-\la_j$ then $(\ref{tp})$ holds by the induction assumption, because \begin{equation} (a(z)^{i_0}a(z))^{(l)} (b(z)^j)^{(r)}= \sum_{\gamma=0}^l (a(z)^{i_0})^{(\gamma)} (b(z)^j)^{(r)} x_\gamma(z) \end{equation} for some $x_\gamma$. Suppose $l+r=i_0-\la_j$. \\ {\bf Case $1$.} $l\ne 0$. Then for some $x(z)$ $$(a(z)^{i_0+1})^{(l)} (b(z)^j)^{(r)}= a(z)^{i_0+1-l} (b(z)^j)^{(r)}x(z).$$ In view of $i_0+1-l\le i_0$ we can use the induction assumption, which gives $$a(z)^{i_0+1-l} (b(z)^j)^{(r)}=0,$$ because $l+r=i_0-\la_j$ and so $r<(i_0+1-l)-\la_j$.\\ {\bf Case $2$.} $l=0$. We need to show that $a(z)^{i_0+1} (b(z)^j)^{(i_0-\la_j)}=0$. Note that $a(z)^{i_0+1} b(z)^j=0$ for $i_0\ge \la_j$. Therefore $$(a(z)^{i_0+1} b(z)^j)^{(i_0-\la_j)}=0.$$ But the following equality holds in $A_\bl$: $$(a(z)^{i_0+1} b(z)^j)^{(i_0-\la_j)}=a(z)^{i_0+1} (b(z)^j)^{(i_0-\la_j)}.$$ In fact $$(a(z)^{i_0+1} b(z)^j)^{(i_0-\la_j)}=\sum_{l=0}^{i_0-\la_j} \bin{i_0-\la_j}{l} (a(z)^{i_0+1})^{(l)} (b(z)^j)^{(i_0-\la_j-l)}.$$ But if $l\ne 0$ then $$(a(z)^{i_0+1})^{(l)} (b(z)^j)^{(i_0-\la_j-l)}=0$$ because of Case $1$. We thus obtain $$a(z)^{i_0+1} (b(z)^j)^{(i_0-\la_j)}=0.$$ Equalities $(\ref{tp})$ and $(\ref{simp})$ are proved. We now consider the algebra generated by the currents $(\ref{gen})$ subject only to the relations $(\ref{1})$, $(\ref{2})$, $(\ref{3})$. Because of Lemma $\ref{dual}$ the character of this algebra is given by the right hand side of $(\ref{mf})$. This finishes the proof of our lemma. \end{proof} To prove the equality in $(\ref{mf})$ we construct a vertex operator realization of the algebra $A_\bl$. Consider a vector space $\h\simeq\R^N$ equipped with the standard scalar product $(\cdot,\cdot)$.
We fix $N$ such that there exists a set of linearly independent vectors $p_1,\ld, p_{\la_0}, q_1,\ld,q_s\in\R^N$ with the scalar products $(p_i,p_j)=2\delta_{i,j}$, $(q_i,q_j)=2\delta_{i,j}$ and \begin{equation} \label{scal} (p_i,q_j)=1 \text{ if } \la_{j-1}\ge i >\la_j;\quad (p_i,q_j)=0 \text{ otherwise }. \end{equation} (For example, take $N=\la_0+s$ and let $e_1,\ld,e_N$ be some orthonormal basis. Put $$q_j=\sqrt{2} e_j, 1\le j\le s;\quad p_i=\frac{1}{\sqrt{2}} e_j+ \sqrt{\frac{3}{2}} e_{s+i},\ j \text{ such that } \la_{j-1}\ge i >\la_j.$$ Then these vectors obviously satisfy (\ref{scal}).) In what follows we fix a lattice $L$ generated by the vectors $p_i,q_j$. Let $\Gamma_{p_i}(z)$, $\Gamma_{q_j}(z)$ be the corresponding bosonic vertex operators. Set \begin{equation} \label{tatb} \tilde a(z)=\sum_{i=1}^{\la_0} \Gamma_{p_i}(z),\quad \tilde b(z)=\sum_{j=1}^s \Gamma_{q_j}(z). \end{equation} \begin{lem} \label{grel} Suppose that \begin{equation} \label{cond} \la_{i-1}-\la_i\le \la_i-\la_{i+1},\ i=1,\ld, s-1. \end{equation} Then $\tilde a(z)^i \tilde b(z)^j=0$ for $(i,j)\notin\bl$. \end{lem} \begin{proof} It suffices to check that \begin{equation} \label{van} \Gamma_{p_{l_1}}(z) \ldots \Gamma_{p_{l_{\la_i+1}}}(z) \Gamma_{q_{r_1}}(z) \ldots \Gamma_{q_{r_i}}(z)=0 \end{equation} for any $l_1,\ldots,l_{\la_i+1}$, $r_1,\ldots,r_i$. Note that we can assume that $$l_1<\ldots <l_{\la_i+1},\quad r_1 <\ldots <r_i,$$ because $(p_l,p_l)=2=(q_r,q_r)$ and therefore $\Gamma_{p_l}(z)^2=\Gamma_{q_r}(z)^2=0$. In view of $(\ref{scal})$ we have $$\Gamma_{p_l}(z)\Gamma_{q_r}(z)=0 \text{ for } \la_{r-1}\ge l >\la_r.$$ So $(\ref{van})$ holds if there exists $k\le \la_i+1$ such that $\la_{r_t-1}\ge l_k >\la_{r_t}$ for some $t\le i$. The number of indices $l$ with $\la_{r_t-1}\ge l >\la_{r_t}$ for some $t\le i$ equals $$\sum_{t=1}^i (\la_{r_t-1}-\la_{r_t}).$$ Because of the condition $(\ref{cond})$ (and since $r_t\ge t$) this sum is greater than or equal to $\la_0-\la_i$.
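The counting step above can be checked by brute force. In the sketch below (sample partitions invented for the test, not taken from the text), writing $r_1<\ldots<r_i$ for the indices of the $\Gamma_q$-factors, we verify that for every partition satisfying $(\ref{cond})$ the intervals $(\la_{r_t},\la_{r_t-1}]$ together contain at least $\la_0-\la_i$ indices:

```python
from itertools import combinations

def check(lam):
    """lam = [la_0, ..., la_s]; requires the gaps la_{j-1} - la_j to be non-decreasing."""
    s = len(lam) - 1
    gaps = [lam[j - 1] - lam[j] for j in range(1, s + 1)]
    assert all(gaps[j] <= gaps[j + 1] for j in range(s - 1)), "condition (cond) fails"
    for i in range(1, s + 1):
        for r in combinations(range(1, s + 1), i):   # r_1 < ... < r_i
            covered = sum(lam[t - 1] - lam[t] for t in r)
            assert covered >= lam[0] - lam[i]

for lam in ([5, 4, 2], [7, 6, 4, 1], [3, 3, 2], [4, 2]):
    check(lam)
print("ok")
```

The inequality holds because $r_t\ge t$ and, under $(\ref{cond})$, the differences $\la_{j-1}-\la_j$ do not decrease in $j$, so each summand dominates $\la_{t-1}-\la_t$.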
So we obtain $(\ref{van})$: the number of factors of the type $\Gamma_{p_l}(z)$ equals $\la_i+1$, while at most $\la_i$ distinct indices $l_k$ can avoid all the intervals above. The lemma is proved. \end{proof} This lemma can be used for the proof of $(\ref{mf})$. But in the last section we will need a modification of Theorem $\ref{mth}$. So we formulate and prove a slight generalization of it. Let $$ \bc=(c_1,\ldots, c_{\la_0})\in \Z_{\ge 0}^{\la_0},\quad \bd=(d_1,\ldots, d_s)\in\Z_{\ge 0}^s.$$ We consider the ideal $I_{\bl;\bc,\bd}\hk A_\bl$ generated by the conditions \begin{gather} \label{*} a(z)^i\div z^{c_i+2c_{i-1}+\ldots + ic_1}, \qquad 1\le i\le \la_0,\\ \label{**} b(z)^j\div z^{d_j+2d_{j-1}+\ldots + jd_1}, \qquad 1\le j\le s. \end{gather} This means that $I_{\bl;\bc,\bd}$ is generated by the coefficients of $a(z)^i$ in front of the powers $z^r$, $0\le r< \sum_{l=1}^i lc_{i+1-l}$ and by the coefficients of $b(z)^j$ in front of the powers $z^r$, $0\le r< \sum_{l=1}^j ld_{j+1-l}$. We define $$A_{\bl;\bc,\bd}=A_\bl/I_{\bl;\bc,\bd}.$$ \begin{theorem} Let \begin{equation} \la_{i-1}-\la_i\le \la_i-\la_{i+1},\ i=1,\ld, s-1. \end{equation} Then \begin{multline} \label{gmf} \ch_{z,u,q} A_{\bl;\bc,\bd}=\\ \sum_{\bn\in \Z^{\la_0}_{\ge 0}, \bm\in\Z_{\ge 0}^{s}} u^{|\bm|} (zq^{-1})^{|\bn|+|\bm|}\times\\ \frac{q^{\bn A\bn/2 + \bn B\bm + \bm A\bm/2}} {(q)_\bn (q)_\bm} q^{\sum\limits_{1\le i\le j\le\la_0} (j-i+1)n_j c_i + \sum\limits_{1\le i\le j\le s} (j-i+1)m_jd_i}, \end{multline} where $A_{i,j}=2\min(i,j)$ and $B_{i,j}=\max(0,i-\la_j)$. \end{theorem} \begin{proof} Recall that $L$ is the lattice generated by the vectors $p_i$, $q_j$. Consider a vector $\bv\in L$ $$\bv=c_1p_1+(c_1+c_2)p_2 + \ldots + (c_1+\ldots + c_{\la_0})p_{\la_0} + d_1 q_1 + \ldots + (d_1+\ldots +d_s)q_s$$ and the corresponding principal subspace $W_L(\bv)$. Denote $$\tilde\Gamma_\al(z)=\sum_{n\ge 0} \Gamma_{\al}(-n) z^n,$$ which differs from the usual vertex operator by a certain power of $z$.
Then for \begin{equation} \label{point} \tilde a(z)=\sum_{i=1}^{\la_0} \tilde \Gamma_{p_i}(z),\quad \tilde b(z)=\sum_{j=1}^s \tilde \Gamma_{q_j}(z) \end{equation} one gets $$\tilde a(z)^i\cdot |\be_\bv\ket\div z^{ic_1+\ldots +c_i},\quad \tilde b(z)^j\cdot |\be_\bv\ket\div z^{jd_1+\ldots +d_j}.$$ Combining these relations with Lemma $\ref{grel}$ we obtain a surjection $A_{\bl;\bc,\bd}\to W_L(\bv)$. We now construct a degeneration of $W_L(\bv)$ in order to show that this surjection is an isomorphism. Consider a family of spaces $W_L(\bv,\ve)$ ($\ve\in\R$, $\ve>0$) generated from the vector $|\be_\bv\ket$ by the coefficients of the series \begin{equation} \label{degen} \tilde a_\ve(z)=\sum_{i=1}^{\la_0} \ve^i \tilde\Gamma_{p_i}(z),\quad \tilde b_\ve(z)=\sum_{j=1}^s \ve ^j \tilde\Gamma_{q_j}(z). \end{equation} For any positive $\ve$ the space $W_L(\bv,\ve)$ is isomorphic to $W_L(\bv)$. Denote the limit $\lim_{\ve\to 0} W_L(\bv,\ve)$ by $W_L(\bv,0)$. Then the $q$-characters of $W_L(\bv,0)$ and $W_L(\bv)$ coincide. We note that \begin{gather} \label{ave} \lim_{\ve\to 0} \tilde a_\ve(z)^i\ve^{-i(i+1)/2}= \prod_{l=1}^i \tilde\Gamma_{p_l}(z), \\ \label{bve} \lim_{\ve\to 0} \tilde b_\ve(z)^j\ve^{-j(j+1)/2}= \prod_{r=1}^j \tilde\Gamma_{q_r}(z), \end{gather} where $1\le i\le \la_0$ and $1\le j\le s$. Denote the right hand side of $(\ref{ave})$ by $\tilde a^{[i]}(z)$ and the right hand side of $(\ref{bve})$ by $\tilde b^{[j]}(z)$. We have $$\tilde a^{[i]}(z)=\tilde\Gamma_{p_1+\ldots +p_i}(z),\quad \tilde b^{[j]}(z)=\tilde\Gamma_{q_1+\ldots +q_j}(z).$$ Because of \begin{gather} (p_1+\ldots + p_i, p_1+\ldots + p_j)= (q_1+\ldots + q_i, q_1+\ldots + q_j)=2\min(i,j),\\ (p_1+\ldots + p_i, q_1+\ldots + q_j) =\max(0,i-\la_j) \end{gather} we obtain that the character of the algebra generated by $\tilde a^{[i]}(z)$, $\tilde b^{[j]}(z)$ is equal to the right hand side of $(\ref{gmf})$ (see Proposition $\ref{charvo}$).
This gives that $\ch_{z,u,q} A_{\bl;\bc,\bd}$ is greater than or equal to the right hand side of $(\ref{gmf})$. To finish the proof we use Lemma $\ref{charlem}$, which gives the upper bound for the character of $A_{\bl;\bc,\bd}$ coinciding with the right hand side of $(\ref{gmf})$. The theorem is proved. \end{proof} \begin{cor} The statement of Theorem $\ref{mth}$ is true. \end{cor} We will need the following corollary of the proof of the previous theorem. \begin{cor} \label{vor} The vertex operators $(\ref{point})$ provide a vertex operator realization of the algebra $A_{\bl;\bc,\bd}$. \end{cor} \section{Fusion products} \subsection{Principal subspaces} In this section we apply the results of the previous section to the study of fusion products of integrable irreducible representations of $\slth$. We first recall the main definitions. Let $\g$ be some Lie algebra, $V_1,\ldots, V_n$ be its cyclic representations with cyclic vectors $v_1,\ldots, v_n$, and let $Z=(z_1,\ldots, z_n)$ be a set of pairwise distinct complex numbers. Denote by $V_i(z_i)$ the corresponding evaluation representations of $\g\otimes\C[u]$. We consider a filtration $F_l$ on the tensor product $\bigotimes_{j=1}^n V_j(z_j)$ defined by $$F_l=\mathrm{span}\{ x_1\otimes u^{i_1}\cdots x_p\otimes u^{i_p} (v_1\T \ldots \T v_n):\ i_1+\ldots + i_p \le l, x_j\in\g\}.$$ The adjoint graded representation of $\g\otimes\C[u]$ is called the fusion product of the $V_i$ and is denoted by $V_1(z_1)*\ldots *V_n(z_n)$. Note that fusion products possess a natural $u$-grading defined by $\deg v=l$ for $v\in F_l/F_{l-1}$. \begin{conj} Let $\g$ be an affine Kac-Moody algebra, $V_i$ be its integrable representations. Then the corresponding fusion product does not depend on $Z$ as a representation of $\g\otimes\C[u]$. \end{conj} This conjecture is obvious in the case $n=2$ (for an arbitrary Lie algebra $\g$). In what follows we study the case $\g=\slth$ and $n=2$. We first fix some notation.
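To make the definition of the fusion product concrete, here is a minimal sketch (a toy computation invented for illustration, not taken from the text) for two copies of the two-dimensional module $\C[e]/e^2$ over the abelian Lie algebra $\C e$, with evaluation points $z_1=1$, $z_2=0$. The check below confirms, in this smallest case, a relation of the Clebsch-Gordan type appearing in \cite{FF1}: $(e\T 1)(e\T u)=0$ in the associated graded space.

```python
# Basis of C[e]/e^2 (x) C[e]/e^2: pairs (p, q) standing for e^p (x) e^q.
def act(op, vec):
    """Apply w1*(e (x) 1) + w2*(1 (x) e), op = (w1, w2), to a vector {basis: coeff}."""
    w1, w2 = op
    out = {}
    for (p, q), c in vec.items():
        if p == 0:
            out[(1, q)] = out.get((1, q), 0) + w1 * c
        if q == 0:
            out[(p, 1)] = out.get((p, 1), 0) + w2 * c
    return {k: c for k, c in out.items() if c != 0}

v = {(0, 0): 1}      # cyclic vector v_1 (x) v_2
E0 = (1, 1)          # e (x) u^0 acts as e (x) 1 + 1 (x) e
E1 = (1, 0)          # e (x) u^1 acts as z1*(e (x) 1) + z2*(1 (x) e) = e (x) 1

# F_0 is spanned by products of E0 applied to v:
F0 = [v, act(E0, v), act(E0, act(E0, v))]
assert F0[1] == {(1, 0): 1, (0, 1): 1}
assert F0[2] == {(1, 1): 2}

# (e (x) 1)(e (x) u) v lands in F_0, so it vanishes in F_1/F_0:
w = act(E0, act(E1, v))
assert w == {(1, 1): 1}   # = (1/2)*F0[2], hence w lies in F_0
```

In this toy case the filtration stabilizes at $F_1$: the $u$-degree $0$ part is three-dimensional and the $u$-degree $1$ part is one-dimensional.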
Let $\slth=\slt\otimes\C[t,t^{-1}]\oplus \C K \oplus\C d$, where $K$ is the central element, and $d$ is the degree element. Let $e,h,f$ be the standard basis of $\slt$. For $x\in\slt$ we set $x_i=x\T t^i\in\slth$. Let $V_1=L_{i_1,k_1}$ and $V_2=L_{i_2,k_2}$ be two integrable irreducible $\slth$-modules with $0\le i_1\le k_1$, $0\le i_2\le k_2$. We fix a vector $v_{i,k}\in L_{i,k}$ which satisfies \begin{gather*} f_0v_{i,k}=e_1v_{i,k}=0,\qquad U(\slth)v_{i,k}=L_{i,k},\\ h_0 v_{i,k}=-iv_{i,k},\ K v_{i,k}=kv_{i,k},\ dv_{i,k}=0, \end{gather*} ($v_{i,k}$ is a highest weight vector with respect to the nilpotent algebra of annihilation operators generated by $f_0$ and $e_1$). The principal subspace $W_{i,k}\hk L_{i,k}$ is defined by $$W_{i,k}=\C[e_0,e_{-1},\ldots]\cdot v_{i,k}.$$ This space is $(z,q)$ bi-graded by $$\deg_z e_i=1,\ \deg_q e_i=-i,\ \deg_z v_{i,k}=\deg_q v_{i,k}=0.$$ $W_{i,k}$ is a representation of the abelian algebra $\A$ spanned by the $e_i$, $i\le 0$. So the fusion product $W_{i_1,k_1}* W_{i_2,k_2}$ is a representation of $\A\oplus \A\T u$. Let $w_{i_1,k_1;i_2,k_2}\in W_{i_1,k_1}* W_{i_2,k_2}$ be the image of $v_{i_1,k_1}\T v_{i_2,k_2}$. Then the action of the operators $e_i$ and $e_i\T u$ ($i\le 0$) on $w_{i_1,k_1;i_2,k_2}$ generates the fusion product. Our goal is to describe the ideal of relations in $\C[e_i,e_i\T u]_{i\le 0}$. We will show that this ideal is isomorphic to $I_{\bl;\bc,\bd}$ for special values of the parameters. Namely let $I_{i_1,k_1;i_2,k_2}\hk \C[e_j, e_j\T u]_{j\le 0}$ be the ideal of relations, i.e.
$$W_{i_1,k_1}* W_{i_2,k_2}\simeq \C[e_i, e_i\T u]_{i\le 0}/I_{i_1,k_1;i_2,k_2}.$$ \begin{prop} \label{prisom} The isomorphism of algebras \begin{equation} \label{isom} \C[e_j, e_j\T u]_{j\le 0}\to \C[a_j,b_j]_{j\le 0},\quad e_i\mapsto a_i, \ e_i\T u\mapsto b_i \end{equation} induces an isomorphism of ideals $$I_{i_1,k_1;i_2,k_2}\to I_{\la^{(k_1,k_2)};\delta^{(i_1+i_2+1)}, \delta^{(\min(i_1,i_2)+1)}},$$ where $\la^{(k_1,k_2)}=(k_1+k_2,k_1+k_2-2,\ldots, |k_1-k_2|)$ and $(\delta^{(i)})_j=\delta_{i,j}$. \end{prop} We start with the following lemma. \begin{lem} \label{surj} We have an embedding of ideals (with respect to the isomorphism $(\ref{isom})$) $$I_{i_1,k_1;i_2,k_2}\hk I_{\la^{(k_1,k_2)};\delta^{(i_1+i_2+1)}, \delta^{(\min(i_1,i_2)+1)}},$$ or, equivalently, a surjection $$A_{\la^{(k_1,k_2)};\delta^{(i_1+i_2+1)},\delta^{(\min(i_1,i_2)+1)}}\to W_{i_1,k_1}* W_{i_2,k_2}.$$ \end{lem} \begin{proof} Let $I_{i,k}\hk \C[e_j]_{j\le 0}$ be the ideal of relations in $W_{i,k}$, i.e. $$I_{i,k}=\{p(e_0,e_{-1},\ldots): p v_{i,k}=0\}, \ W_{i,k}\simeq \C[e_j]_{j\le 0}/ I_{i,k}.$$ Denote $e(z)=\sum_{i\le 0} e_i z^{-i}$. Then $I_{i,k}$ is generated by the coefficients of the series $e(z)^{k+1}$ and the first $l-i$ coefficients of the series $e(z)^l$, $l>i$ (the defining relations in $W_{i,k}$ are $e(z)^{k+1}=0$ and $e(z)^l\div z^{l-i}$, $l>i$). We recall that $W_{i_1,k_1}* W_{i_2,k_2}$ is the adjoint graded space with respect to the filtration on the tensor product of evaluation representations $W_{i_1,k_1}(z_1)\T W_{i_2,k_2}(z_2)$, $z_1\ne z_2$. We set $$e(z)=\sum_{j\ge 0} e_{-j}z^j,\quad e^{(1)}(z)=e(z)\T \Id, \quad e^{(2)}(z)=\Id\T e(z).$$ Then obviously \begin{equation} \label{i1+i2} (e^{(1)}(z)+e^{(2)}(z))^l (v_{i_1,k_1}\T v_{i_2,k_2}) \div z^{l-i_1-i_2} \text{ for } l> i_1+i_2 \end{equation} in the tensor product $W_{i_1,k_1}\T W_{i_2,k_2}$. This gives $(\ref{*})$ for $\bc=\delta^{(i_1+i_2+1)}$. Now let $z_1=1, z_2=0$.
Then $e_j\T u$ acts on the tensor product $W_{i_1,k_1}(1)\T W_{i_2,k_2}(0)$ as $e_j\T \Id$. We claim that in $W_{i_1,k_1}* W_{i_2,k_2}$ the following equation is true: \begin{equation} \label{mini1i2} e^{(1)}(z)^j\div z^{j-\min(i_1,i_2)} \text{ for } j> \min(i_1,i_2). \end{equation} In fact, $e^{(1)}(z)^l\div z^{l-i_1}$. We also have \begin{multline*} e^{(2)}(z)^l=(e^{(1)}(z)+e^{(2)}(z)-e^{(1)}(z))^l=\\ (-e^{(1)}(z))^l+ \sum_{j=1}^l \genfrac{(}{)}{0pt}{0}{l}{j} (-e^{(1)}(z))^{l-j} (e^{(1)}(z)+e^{(2)}(z))^j. \end{multline*} Therefore in $W_{i_1,k_1}* W_{i_2,k_2}$ we have $$e^{(1)}(z)^l=(-e^{(2)}(z))^l\div z^{l-i_2}.$$ This gives $(\ref{**})$ for $\bd=\delta^{(\min(i_1,i_2)+1)}$. We now check that \begin{equation} \label{lak1k2} (e^{(1)}(z)+e^{(2)}(z))^{k_1+k_2-2i+1} e^{(1)}(z)^i=0,\ i=0,1,\ldots, \min(k_1,k_2). \end{equation} Consider the one-dimensional fusion product $(\C[e]/e^{k_1+1}) * (\C[e]/e^{k_2+1})$. As shown in \cite{FF1} the relations in this fusion product are of the form $$(e^{(1)}+e^{(2)})^{k_1+k_2-2i+1} (e^{(1)})^i=0,\ i=0,1,\ldots, \min(k_1,k_2),$$ where $e^{(1)}=e\T\Id$, $e^{(2)}=\Id\T e$. This gives $(\ref{lak1k2})$. Combining together $(\ref{i1+i2})$, $(\ref{mini1i2})$ and $(\ref{lak1k2})$ we obtain our lemma. \end{proof} Now our goal is to prove that $$ W_{i_1,k_1}* W_{i_2,k_2}\simeq A_{\la^{(k_1,k_2)};\delta^{(i_1+i_2+1)},\delta^{(\min(i_1,i_2)+1)}}. $$ We use the fermionic realization of $W_{i,k}$ (see \cite{FF2}). Let us briefly recall the construction. Let $\phi_i, \psi_i$, $i\le 0$ be anticommuting variables, $$\phi(z)=\sum_{i\ge 0} \phi_{-i} z^i,\quad \psi(z)=\sum_{i\ge 0} \psi_{-i} z^i.$$ We denote by $\Lambda=\Lambda(\phi_i,\psi_i)_{i\le 0}$ the exterior algebra in the variables $\phi_i, \psi_i$. Then there exists an embedding $\imath_{i,k} :W_{i,k}\hk \Lambda^{\T k}$ such that \begin{equation} \label{vik} v_{i,k}\mapsto \wt v_{i,k}= \underbrace{1 \T\ldots\T 1}_i \T \phi_0\T\ldots\T \phi_0.
\end{equation} This embedding is defined via the identification $e(z)=\sum_{n=1}^k \stackrel{n}{\phi}(z)\stackrel{n}{\psi}(z)$, where $$\stackrel{n}{\phi}(z)=\underbrace{\Id\T\ldots\T\Id}_{n-1}\T \phi(z)\T\Id\T\ldots\T\Id$$ and similarly for $\stackrel{n}{\psi}(z)$. Note that $$\imath_{i,k} (W_{i,k})\hk \Lambda_{ev}\cdot \wt v_{i,k},$$ where $\Lambda_{ev}$ is the even part of $\Lambda$ generated by the products $\psi_i\psi_j$, $\psi_i\phi_j$, $\phi_i\phi_j$. The algebra $\Lambda_{ev}$ is naturally $(z,q)$ bi-graded with $$\deg_z \psi_i\psi_j=1,\ \deg_q \psi_i\psi_j=i+j$$ and similarly for the other generators. Fixing $\deg_z \wt v_{i,k}= \deg_q \wt v_{i,k}=0$ we obtain a bi-grading on $\Lambda_{ev}\cdot \wt v_{i,k}$ such that $\imath_{i,k}$ is an embedding of $(z,q)$ bi-graded spaces. Note that our construction gives an embedding $$\imath_{i_1,k_1}\T \imath_{i_2,k_2}: W_{i_1,k_1}\T W_{i_2,k_2}\hk \Lambda^{\T (k_1+k_2)}.$$ Our goal is to construct a continuous family of algebras $B(\ve)\hk \Lambda^{\T (k_1+k_2)}$, $0\le \ve \le 1$, which ``connects'' $W_{i_1,k_1}\T W_{i_2,k_2}$ and $A_{\la^{(k_1,k_2)};\delta^{(i_1+i_2+1)},\delta^{(\min(i_1,i_2)+1)}}$ inside $\Lambda^{\T (k_1+k_2)}$. So we need a fermionic realization of the algebra $$A_{\la^{(k_1,k_2)};\delta^{(i_1+i_2+1)},\delta^{(\min(i_1,i_2)+1)}}.$$ \begin{lem} The map $\jmath$ defined by \begin{gather} \label{1to} 1\mapsto \wt v_{i_1,k_1}\T \wt v_{i_2,k_2},\\ \label{a(z)} a(z)\mapsto \wt a(z)= \sum_{n=1}^{k_1+k_2} \stackrel{n}{\phi}(z)\stackrel{n}{\psi}(z),\\ \label{b(z)} b(z)\mapsto \wt b(z)= \begin{cases} \sum_{n=1}^{k_1} \stackrel{n}{\phi}(z)\stackrel{n+k_1}{\psi}(z), \ i_1\le i_2,\\ \sum_{n=1}^{k_1} \stackrel{n+k_1}{\phi}(z)\stackrel{n}{\psi}(z), \ i_1>i_2.
\end{cases} \end{gather} provides an embedding $$A_{\la^{(k_1,k_2)};\delta^{(i_1+i_2+1)},\delta^{(\min(i_1,i_2)+1)}}\hk \Lambda^{\T (k_1+k_2)}.$$ \end{lem} \begin{proof} Our Lemma is an immediate consequence of the vertex operator realization of $A_{\la^{(k_1,k_2)};\delta^{(i_1+i_2+1)},\delta^{(\min(i_1,i_2)+1)}}$ (see Corollary $\ref{vor}$) and a version of the boson-fermion correspondence (see for example \cite{BF}). We give a sketch of the proof here. Recall that the boson-fermion correspondence allows one to realize lattice VOAs inside the space built up from the fermionic particles of the type $\phi_i$ and $\psi_i$. In particular, the space of states of the corresponding VOA (the direct sum of Fock modules) is replaced by the space of semi-infinite forms or by the tensor products of such spaces. In this realization, the fields $\phi(z)$ and $\phi(z)\psi(z)$ correspond to the one-dimensional vertex operators $\Gamma_\al(z)$ with the length of $\al$ equal to $1$ or $2$ respectively. To proceed to the multi-dimensional even case one must consider independent fields of the type $\stackrel{n}{\phi}(z)\stackrel{m}{\psi}(z)$. The independence means that these fields represent vertex operators $\Gamma_{\be_{n,m}}(z)$ and the scalar products are given by $(\be_{n,m},\be_{k,l})=\delta_{n,k}+\delta_{m,l}.$ We now apply this boson-fermion correspondence to the proof of our lemma. Let $p_1,\ldots, p_{k_1+k_2}$, $q_1,\ldots,q_{k_1}$ be vectors such that \begin{gather} \stackrel{n}{\phi}(z)\stackrel{n}{\psi}(z)=\Gamma_{p_{k_1+k_2+1-n}} (z), 1\le n\le k_1+k_2\\ \stackrel{m}{\phi}(z)\stackrel{m+k_1}{\psi}(z)=\Gamma_{q_m}(z), \ i_1\le i_2, \quad \stackrel{m+k_1}{\phi}(z)\stackrel{m}{\psi}(z)=\Gamma_{q_m}(z), \ i_1>i_2, \end{gather} where $1\le m\le k_1$. Then $(p_n,q_m)=0$ unless $k_1+k_2-2(m-1)\ge n > k_1+k_2-2m$. In the latter case the corresponding scalar product equals $1$.
Therefore, according to Corollary $\ref{vor}$, to finish the proof of our Lemma we only need to check the initial conditions $$\wt a(z)^{i_1+i_2+l} \wt v_{i_1,k_1}\T \wt v_{i_2,k_2} \div z^l,\quad \wt b(z)^{\min(i_1,i_2)+l} \wt v_{i_1,k_1}\T \wt v_{i_2,k_2} \div z^l.$$ But this just follows from $\stackrel{n}{\phi}(z) \wt v_{i,k}\div z$ for $n>i$ (see $(\ref{vik})$). \end{proof} {\bf Proof of Proposition \ref{prisom}: } \begin{equation} \label{isomfus} W_{i_1,k_1}* W_{i_2,k_2}\simeq A_{\la^{(k_1,k_2)};\delta^{(i_1+i_2+1)},\delta^{(\min(i_1,i_2)+1)}}. \end{equation} \begin{proof} By Lemma $\ref{surj}$, it suffices to check that the character of the left hand side of $(\ref{isomfus})$ coincides with the character of the right hand side. We construct a continuous family of algebras $B(\ve)\hk \Lambda^{\T (k_1+k_2)}$, $0\le \ve <1$, which ``connects'' $W_{i_1,k_1}\T W_{i_2,k_2}$ and $A_{\la^{(k_1,k_2)};\delta^{(i_1+i_2+1)},\delta^{(\min(i_1,i_2)+1)}}$. We want $B(\ve)$ to satisfy \begin{description} \item[A] $B(0)=(\imath_{i_1,k_1}\T \imath_{i_2,k_2}) (W_{i_1,k_1}\T W_{i_2,k_2})$ \item[B] $B(\ve)\simeq B(0)$ as bi-graded vector spaces \item[C] $\lim_{\ve\to 1} B(\ve)=\jmath A_{\la^{(k_1,k_2)};\delta^{(i_1+i_2+1)},\delta^{(\min(i_1,i_2)+1)}}$ \end{description} Note that the existence of such a deformation proves our proposition. Set \begin{equation} \label{b} \wt b^\ve(z)= \begin{cases} \sum_{n=1}^{k_1} \stackrel{n}{\phi}(z) (\ve\stackrel{n}{\psi}(z)+(1-\ve)\stackrel{n+k_1}{\psi}(z)), \ i_1\le i_2,\\ \sum_{n=1}^{k_1} (\ve\stackrel{n}{\phi}(z)+(1-\ve)\stackrel{n+k_1}{\phi}(z)) \stackrel{n}{\psi}(z), \ i_1>i_2. \end{cases} \end{equation} We denote by $B(\ve)\hk \Lambda^{\T (k_1+k_2)}$ the subspace generated by the coefficients of $\wt a(z)$ and $\wt b^\ve(z)$ from the vector $\wt v_{i_1,k_1}\T \wt v_{i_2,k_2}$. We now check ${\bf A}$, ${\bf B}$ and ${\bf C}$. ${\bf A}$ is obvious because $\wt b^0(z)$ reduces to the current $\wt b(z)$.
We now check ${\bf B}$. First note that for any $0< \ve_1\le \ve_2<1$ we have $B(\ve_1)\simeq B(\ve_2)$. In fact, fix some $0<\ve <1$ and redefine \begin{gather*} 1/2\stackrel{n}{\psi'}(z)=(1-\ve)\stackrel{n}{\psi}(z),\ 1\le n\le k_1,\\ 1/2\stackrel{m}{\psi'}(z)=\ve\stackrel{m}{\psi}(z),\ k_1+1\le m\le k_1+k_2,\\ 2\stackrel{n}{\phi'}(z)=(1-\ve)^{-1}\stackrel{n}{\phi}(z),\ 1\le n\le k_1,\\ 2\stackrel{m}{\phi'}(z)=\ve^{-1}\stackrel{m}{\phi}(z), \ k_1+1\le m\le k_1+k_2, \end{gather*} for $i_1\le i_2$ and \begin{gather*} 1/2\stackrel{n}{\phi'}(z)=(1-\ve)\stackrel{n}{\phi}(z),\ 1\le n\le k_1,\\ 1/2\stackrel{m}{\phi'}(z)=\ve\stackrel{m}{\phi}(z),\ k_1+1\le m\le k_1+k_2, \\ 2\stackrel{n}{\psi'}(z)=(1-\ve)^{-1}\stackrel{n}{\psi}(z),\ 1\le n\le k_1,\\ 2\stackrel{m}{\psi'}(z)=\ve^{-1}\stackrel{m}{\psi}(z), \ k_1+1\le m\le k_1+k_2, \end{gather*} for $i_1>i_2$. Then one can easily check that $\wt a(z)$ does not change and $\wt b^\ve(z)$ becomes $\wt b^{1/2}(z)$ (up to a nonzero constant). Therefore, $B(\ve)\simeq B(1/2)$ for any $0<\ve <1$. Now note that \begin{equation} \label{sep} (2\wt b^{1/2} (z))^{k_1+1} \wt v_{i_1,k_1}\T \wt v_{i_2,k_2}= (\wt a(z)-2\wt b^{1/2} (z))^{k_2+1} \wt v_{i_1,k_1}\T \wt v_{i_2,k_2}=0 \end{equation} and \begin{gather} \label{i1} (2\wt b^{1/2} (z))^l \wt v_{i_1,k_1}\T \wt v_{i_2,k_2}\div z^{l-i_1}, \ l>i_1,\\ \label{i2} (\wt a(z) - 2\wt b^{1/2} (z))^l \wt v_{i_1,k_1}\T \wt v_{i_2,k_2}\div z^{l-i_2}, \ l>i_2.
\end{gather} In fact, one has $$2\wt b^{1/2}(z)= \begin{cases} \sum_{n=1}^{k_1} \stackrel{n}{\phi}(z) (\stackrel{n}{\psi}(z)+\stackrel{n+k_1}{\psi}(z)), \ i_1\le i_2,\\ \sum_{n=1}^{k_1} (\stackrel{n}{\phi}(z)+\stackrel{n+k_1}{\phi}(z)) \stackrel{n}{\psi}(z), \ i_1>i_2, \end{cases}$$ and $$\wt a(z)- 2\wt b^{1/2}(z)= \begin{cases} \sum_{n=k_1+1}^{k_1+k_2} (\stackrel{n}{\phi}(z)- \stackrel{n-k_1}{\phi}(z)) \stackrel{n}{\psi}(z), \ i_1\le i_2,\\ \sum_{n=k_1+1}^{k_1+k_2} \stackrel{n}{\phi}(z) (\stackrel{n}{\psi}(z)- \stackrel{n-k_1}{\psi}(z)), \ i_1>i_2. \end{cases}$$ Now $(\ref{sep}), (\ref{i1}), (\ref{i2})$ follow from the above exact expressions of the currents. Relations $(\ref{sep}), (\ref{i1}), (\ref{i2})$ provide a surjection $B(0)\to B(1/2)$ ($2\wt b^{1/2}(z)$ and $\wt a(z)-2\wt b^{1/2}(z)$ correspond to the currents $e^{(1)}(z)$ and $e^{(2)}(z)$ in $B(0)\simeq W_{i_1,k_1}\T W_{i_2,k_2}$). This gives ${\bf B}$ (because all $B(\ve)$ with $0<\ve<1$ are isomorphic and our deformation is continuous). To show ${\bf C}$, we note that in view of formulas $(\ref{1to}), (\ref{a(z)}), (\ref{b(z)})$ and $(\ref{b})$ we have an embedding $$\jmath A_{\la^{(k_1,k_2)};\delta^{(i_1+i_2+1)},\delta^{(\min(i_1,i_2)+1)}}\hk \lim_{\ve\to 1} B(\ve).$$ But all $B(\ve)$ (including $B(0)$) are isomorphic and $$\ch_{z,q} B(0)\le \ch_{z,q} A_{\la^{(k_1,k_2)};\delta^{(i_1+i_2+1)},\delta^{(\min(i_1,i_2)+1)}}.$$ Therefore, using Lemma $\ref{surj}$ we obtain ${\bf C}$. \end{proof} \begin{cor} \label{w1w2} Let $k_1\le k_2$. Then \begin{multline} \ch_{z,u,q} (W_{i_1,k_1}* W_{i_2,k_2})= \sum_{\bn\in \Z^{k_1+k_2}_{\ge 0}, \bm\in\Z_{\ge 0}^{k_1}} u^{|\bm|} z^{|\bn|+|\bm|}\times\\ q^{-|\bn|-|\bm|+\sum\limits_{j=i_1+i_2+1}^{k_1+k_2} (j-i_1-i_2)n_j + \sum\limits_{j=\min(i_1,i_2)+1}^{k_1} (j-\min(i_1,i_2))m_j}\times\\ \frac{q^{\bn A\bn/2 + \bn B\bm + \bm A\bm/2}} {(q)_\bn (q)_\bm}, \end{multline} where $A_{i,j}=2\min(i,j)$ and $B_{i,j}=\max(0,i-k_1-k_2+2j)$.
\end{cor} \subsection{The limit construction.} In this subsection we derive a fermionic formula for the character of the fusion product $L_{i_1,k_1}* L_{i_2,k_2}$. Note that $L_{i,k}$ is bi-graded by the operators $d$ and $h_0$. Therefore the $z,u,q$-character of the fusion product is naturally defined. Let $v_{i,k}^N\in L_{i,k}$, $N\in\Z$ be the set of extremal vectors (the weight of $v_{i,k}^N$ is the weight of $v_{i,k}$ shifted by the $N$-th power of the translation element from the Weyl group of $\slth$). We fix $h_0v^N_{i,k}=(-i-2Nk)v_{i,k}^N$. Introduce the $N$-th principal subspace $W_{i,k}^N\hk L_{i,k}$ by $$W_{i,k}^N=\C[e_{2N}, e_{2N-1},\ldots]\cdot v_{i,k}^N$$ (note that $e_{2N+1}v_{i,k}^N=0$). We recall that there exists an isomorphism \begin{equation} \label{shift} W_{i,k}\simeq W_{i,k}^N,\quad v_{i,k}\mapsto v_{i,k}^N, \ e_j\mapsto e_{j+2N} \end{equation} and $$L_{i,k}=W_{i,k}^0\hk W_{i,k}^1 \hk W_{i,k}^2 \hk\ldots.$$ Using this limit construction, we can write a fermionic formula for the character of the fusion product of integrable modules. \begin{cor} \begin{multline} \label{limform} \ch_{z,u,q} (L_{i_1,k_1}* L_{i_2,k_2})= \lim_{N\to\infty} z^{-i_1-i_2-2N(k_1+k_2)} q^{N^2(k_1+k_2)+N(i_1+i_2)}\times\\ \sum_{\bn\in \Z^{k_1+k_2}_{\ge 0}, \bm\in\Z_{\ge 0}^{k_1}} u^{|\bm|} z^{2(|\bn|+|\bm|)} q^{(-2N-1)(|\bn|+|\bm|)}\times\\ q^{\sum\limits_{j=i_1+i_2+1}^{k_1+k_2} (j-i_1-i_2)n_j + \sum\limits_{j=\min(i_1,i_2)+1}^{k_1} (j-\min(i_1,i_2))m_j} \frac{q^{\bn A\bn/2 + \bn B\bm + \bm A\bm/2}} {(q)_\bn (q)_\bm}. \end{multline} \end{cor} \begin{proof} This follows from Corollary \ref{w1w2}, the isomorphism $(\ref{shift})$ and the equalities $$h_0 v_{i,k}^N=(-i-2Nk)v_{i,k}^N,\quad dv_{i,k}^N=(kN^2+Ni)v_{i,k}^N.$$ We only note that the factor $2$ in $z^{2(|\bn|+|\bm|)}$ comes from the relation $[h_0,e_i]=2e_i$ (in $W_{i,k}$ we had $\deg_z e_i=1$) and the power $q^{(-2N-1)(|\bn|+|\bm|)}$ comes from the shift $e_j\to e_{j+2N}$ (see $(\ref{shift})$).
\end{proof} Introduce a new set of variables $s_i$ instead of $n_i$ in $(\ref{limform})$ by $$n_i=s_i-s_{i+1},\ 1\le i <k_1+k_2,\quad n_{k_1+k_2}=s_{k_1+k_2}+N- \frac{|\bm|}{k_1+k_2}.$$ Then for the power of $z$ in $(\ref{limform})$ we have $$-i_1-i_2-2N(k_1+k_2)+2(|\bn|+|\bm|)=-i_1-i_2+2\sum_{i=1}^{k_1+k_2} s_i.$$ We now rewrite the power of $q$ in $(\ref{limform})$ in terms of the new variables. Note that \begin{gather*} \bn A\bn/2=\sum_{i=1}^{k_1+k_2} (n_i+\ldots +n_{k_1+k_2})^2= \sum_{i=1}^{k_1+k_2} (s_i+N-\frac{|\bm|}{k_1+k_2})^2,\\ \bn B\bm=|\bm|(2N-2\frac{|\bm|}{k_1+k_2})+\sum_{i+2j\ge k_1+k_2+1} m_j s_i,\\ |\bn|+|\bm|=N(k_1+k_2)+\sum_{i=1}^{k_1+k_2} s_i. \end{gather*} Therefore the power of $q$ in $(\ref{limform})$ is equal to \begin{multline} \label{P} \sum_{i=1}^{k_1+k_2} s_i^2- \frac{|\bm|}{k_1+k_2}(|\bm|+k_1+k_2-i_1-i_2+ 2\sum_{i=1}^{k_1+k_2} s_i)+\\ \sum_{i+2j\ge k_1+k_2+1} m_j s_i+ \bm A\bm/2 - \sum_{i=1}^{i_1+i_2} s_i + \sum_{j=\min(i_1,i_2)+1}^{k_1} (j-\min(i_1,i_2))m_j. \end{multline} (Note that the powers which contain $N$ cancel.) We thus obtain the following theorem. \begin{theorem} \begin{multline*} \ch_{z,u,q} L_{i_1,k_1}* L_{i_2,k_2}=\\ \frac{1}{(q)_\infty} \sum_{\substack{\bs\in\Z^{k_1+k_2}, \bm\in\Z_{\ge 0}^{k_1}\\ s_1\ge \ldots \ge s_{k_1+k_2}}} u^{|\bm|} z^{-i_1-i_2+2\sum_{i=1}^{k_1+k_2} s_i} \frac{q^{P(\bs,\bm)}}{(q)_\bm \prod_{i=1}^{k_1+k_2-1} (q)_{s_i}} , \end{multline*} where $P(\bs,\bm)$ is given by $(\ref{P})$ and $(q)_\infty=\prod_{i=1}^\infty (1-q^i)$. \end{theorem} \begin{proof} We only note that the factor $(q)_\infty$ comes from the limit $$\lim_{N\to\infty} (q)_{n_{k_1+k_2}}= \lim_{N\to\infty}(q)_{s_{k_1+k_2}+N-\frac{|\bm|}{k_1+k_2}}.$$ \end{proof}
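As an independent sanity check of the change of variables above (a sketch, not part of the original argument): assuming that $|\bx|$ denotes the weighted sum $\sum_j j\,x_j$, which is the reading under which the displayed identities for $\bn A\bn/2$, $\bn B\bm$ and $|\bn|+|\bm|$ hold, both the cancellation of the $N$-dependent powers of $q$ and the formula $(\ref{P})$ for $P(\bs,\bm)$ can be verified symbolically for small $(k_1,k_2,i_1,i_2)$:

```python
# Symbolic check: after the substitution n_i = s_i - s_{i+1} (i < K),
# n_K = s_K + N - |m|/K with K = k1 + k2, the exponent of q in the limit
# formula is N-independent and equals P(s, m).
# Assumption: |x| is the weighted sum sum_j j*x_j.
import sympy as sp

def check(k1, k2, i1, i2):
    K = k1 + k2
    N = sp.Symbol('N')
    s = sp.symbols(f's1:{K + 1}')                 # s_1, ..., s_K
    m = sp.symbols(f'm1:{k1 + 1}')                # m_1, ..., m_{k1}
    wm = sum((j + 1) * m[j] for j in range(k1))   # |m| (weighted sum)
    # substituted n variables
    n = [s[i] - s[i + 1] for i in range(K - 1)]
    n.append(s[K - 1] + N - sp.Rational(1, K) * wm)
    wn = sum((i + 1) * n[i] for i in range(K))    # |n| (weighted sum)
    A = lambda a, b: 2 * sp.Min(a, b)
    B = lambda a, b: sp.Max(0, a - K + 2 * b)
    # the quadratic forms; the 1/2 factors of the paper are included here
    nAn = sp.Rational(1, 2) * sum(A(i+1, j+1) * n[i] * n[j]
                                  for i in range(K) for j in range(K))
    mAm = sp.Rational(1, 2) * sum(A(i+1, j+1) * m[i] * m[j]
                                  for i in range(k1) for j in range(k1))
    nBm = sum(B(i+1, j+1) * n[i] * m[j] for i in range(K) for j in range(k1))
    mn = min(i1, i2)
    # exponent of q in the limit formula, before letting N go to infinity
    Q = (N**2 * K + N * (i1 + i2) + (-2*N - 1) * (wn + wm)
         + sum((j - i1 - i2) * n[j - 1] for j in range(i1 + i2 + 1, K + 1))
         + sum((j - mn) * m[j - 1] for j in range(mn + 1, k1 + 1))
         + nAn + nBm + mAm)
    # the claimed N-independent exponent P(s, m)
    P = (sum(x**2 for x in s)
         - sp.Rational(1, K) * wm * (wm + K - i1 - i2 + 2 * sum(s))
         + sum(m[j - 1] * s[i - 1] for i in range(1, K + 1)
               for j in range(1, k1 + 1) if i + 2*j >= K + 1)
         + mAm - sum(s[:i1 + i2])
         + sum((j - mn) * m[j - 1] for j in range(mn + 1, k1 + 1)))
    return sp.expand(Q - P) == 0

assert check(1, 2, 1, 1) and check(2, 2, 0, 2)
```

In every small case we tried, the difference expands to zero identically in $N$, $s_i$, $m_j$, consistent with the remark that the powers containing $N$ cancel.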
\section{Conclusion} \vspace{-10pt} We have shown that building intermediate representations that preserve the Euclidean structure of the 3D objects we try to model is beneficial. It enables us to outperform state-of-the-art approaches to single-view reconstruction. We have also investigated the use of multiple views, for which our representations are also well suited. In future work, we will extend our approach to handle video sequences, for which camera poses can be regressed using either SLAM-type methods or learning-based ones. We expect that the benefits we have observed in the single-view case will carry over and allow full scene reconstruction. \section*{Broader impact} Our work is relevant to a variety of applications. In robotics, autonomous camera-equipped agents, for which a volumetric estimate of the environment can be useful, would benefit from this. In medical applications, it would allow aggregating 2D scans to form 3D models of organs. It could also prove useful in industrial applications, such as creating 3D designs from 2D sketches. More generally, constructing easily handled differentiable representations of surfaces such as the ones we propose opens the way to assisted design and shape optimization. As for any method enabling information extraction from images in an automated manner, malicious use is possible, especially raising privacy concerns. Accidents and malevolent use of autonomous agents are also a risk. To reduce accident threats, we encourage the research community to propose explainable models that perform more reconstruction than recognition, the latter regime arguably being more prone to adversarial attacks. \section{Related work} Most recent SVR methods rely on a 2D-CNN to create an image description that is then passed to a 3D shape decoder that generates a 3D output.
What differentiates them is the nature of their output, which is strongly related to the structure of their shape decoder, and their approach to local feature extraction. We briefly describe these below. \vspace{-2mm} \paragraph{Shape Decoders} The first successful deep SVR models relied on 3D convolutions to regress voxelized shapes~\cite{Choy16b}. This restricts them to coarse resolutions because of their cubic computational and memory cost. This drawback can be mitigated using local subdivision schemes~\cite{hane2017hierarchical, Tatarchenko17}. MarrNet~\cite{wu2017marrnet} and Pix3D~\cite{sun2018pix3d} regress voxelized shapes as well but also incorporate depth, normal, and silhouette predictions as intermediate representations. They help disentangle shape from appearance and are used to compute a re-projection consistency loss. The depth, normal, and silhouette maps are, however, not exploited in a geometric manner at inference time because they are encoded as flat vectors. PSGN~\cite{Fan17a} regresses sparse scalar values, directly interpreted as 3D coordinates of a point cloud with fixed size and mild continuity. AtlasNet~\cite{Groueix18} introduces a per-patch surface parametrization and samples a point cloud from a set of learned parametric surfaces. One limitation, however, is that the patches it produces sometimes overlap each other or collapse during training \cite{Bednarik20}. To combine the strengths of voxel and mesh representations, Mesh R-CNN~\cite{gkioxari2019mesh} uses a hybrid shape decoder that first regresses coarse voxels, which are then refined into mesh vertices using graph convolutions. Our approach is in the same spirit, with two key differences. First, our coarse occupancy grid is used to instantiate folding patches and to sample 3D surface points in the AtlasNet~\cite{Groueix18} manner. However, unlike in AtlasNet, the locations of the sampled 3D points and the folding creating them are tightly coupled.
Second, we regress shapes in object space, thus leveraging stronger object priors. A competing approach is to rely on implicit shape representations. For example, the network of~\cite{mescheder2019occupancy} computes occupancy maps that represent smooth watertight shapes at arbitrary resolutions. DISN~\cite{xu2019disn} uses instead a Signed Distance Field (SDF). Shapes are encoded as the zero-crossing of the field, and explicit 3D meshes can be recovered using the Marching Cubes~\cite{Lorensen87} algorithm. \vspace{-2mm} \paragraph{Local Feature Extraction} Most SVR methods discussed above rely on a vectorized embedding passing from image encoder to shape decoder. This embedding typically ignores image feature localization and produces a global image descriptor. As shown in~\cite{Tatarchenko19}, such approaches are therefore prone to behaving like classifiers that simply retrieve shapes from a learned catalog. Hence, no true geometric reasoning takes place, and recognition occurs at the scale of whole objects while fine details are ignored. \input{fig/failure} There have been several attempts at preserving feature localization from the input image by passing local vectors from 2D feature maps of the image encoder to the shape decoder. In~\cite{wang2018pixel2mesh, gkioxari2019mesh}, features from the 2D plane are propagated to the mesh convolution network that operates in the camera space. In DISN~\cite{xu2019disn}, features from the 2D plane are extracted and serve as local inputs to an SDF regressor, directly in object space. Unfortunately, features extracted in this manner do not incorporate any notion of depth, and local shape regressors get the same input all along a camera ray. As a result, and as shown in Fig.~\ref{fig:perceptual_pooling_failure}, DISN can reconstruct shapes with the correct outline when projected into the original viewpoint, but that are nevertheless incorrect.
In practice, this occurs when the network relies on both global and local features, but not when it relies on global features only. In other words, it seems that local features allow the network to take an undesirable shortcut by making silhouette recovery excessively easy, especially when the background is uniform. The depth constraint is too weakly enforced by the latent space, and must be carried out by the fully connected network regressing the signed distance value. By contrast, our approach avoids this pitfall, as shown in Fig.~\ref{fig:perceptual_pooling_failure}(f). This is made possible by two key differences: (i) the shape decoder relies on 3D convolutions to handle global spatial arrangement before fully connected networks locally regress shape parts, and (ii) predicted depth maps are made available as inputs to the shape decoder. \section{Introduction} Most state-of-the-art deep geometric learning Single-View Reconstruction (SVR) approaches rely on encoder-decoder architectures that output either explicit shape parametrizations \cite{gkioxari2019mesh, Groueix18, Wang18} or implicit representations \cite{mescheder2019occupancy, xu2019disn,Chen19c}. However, the representations they learn rarely preserve the Euclidean structure of the 3D space objects exist in, and rather rely on a global vector embedding of the input image at a semantic level. In this paper, we show that building a geometry-preserving 3-dimensional representation helps the network concurrently learn global shape regularities and perform local reasoning in the object coordinate space and, as a result, boosts performance. This corroborates the observation that choosing the right coordinate frame for the output of a deep network matters a great deal~\cite{Tatarchenko19}. In our work, we use camera projection matrices to explicitly link camera- and object-centric coordinate frames. This allows us to reason about geometry and learn object priors in a common 3D coordinate system.
More specifically, we use regressed camera pose information to back-project 2D feature maps to 3D feature grids at several scales. This is achieved within our novel architecture that comprises a 2D image encoder and a 3D shape decoder. They feature symmetrical downsampling and upsampling parts and communicate through multi-scale skip connections, as in the U-Net architecture~\cite{Ronneberger15}. However, unlike in other approaches, the bottleneck is made of 3D feature grids, and we use back-projection layers~\cite{ji2017surfacenet,iskakov2019learnable,Sitzmann19} to lift 2D feature maps to 3D grids. As a result, feature localization from the input view is preserved. In other words, our feature embedding has a Euclidean structure and is aligned with the object coordinate frame. Fig.~\ref{fig:arch} depicts this process. In reference to its characteristics, we dub our architecture UCLID-Net. Earlier attempts at passing 2D features to a shape decoder via local feature extraction~\cite{wang2018pixel2mesh,xu2019disn} enabled spatial information to flow to the decoder in a non-semantic manner, often with limited impact on the final result. In these approaches, the same local feature is attributed to all points lying along a camera ray. By contrast, UCLID-Net uses 3D convolutions to volumetrically process local features before passing them to the local shape decoders. This allows them to make different contributions at different places along camera rays. To further promote geometrical reasoning, it never computes a global vector encoding of the input image. Instead, it relies on localized feature grids, either 2D in the image plane or 3D in object space. Finally, the geometric nature of the 3D feature grids enables us to exploit estimated depth maps and further boost reconstruction performance. We demonstrate, both on synthetic ShapeNet images, which are often used for benchmarking purposes, and on real-world images, that our approach outperforms state-of-the-art ones.
Our contribution is therefore a demonstration that creating a Euclidean-structure-preserving latent space provides a clear benefit for single-image reconstruction, and a practical approach to taking advantage of it. Finally, the single-view pipeline naturally extends to multi-view reconstruction, for which we also provide an example. \input{fig/arch} \section{Experiments} \subsection{Experimental Setup} \vspace{-2mm} \paragraph{Datasets.} Given the difficulty of annotation, there are relatively few 3D datasets for geometric deep learning. We use the following two: \hspace{0.4cm} {\bf ShapeNet Core}~\cite{chang2015shapenet} features 38,000 shapes belonging to 13 object categories. Within each category, objects are aligned with each other, and we rescale them to fit into a $[-1, 1]^3$ bounding box. For training and validation purposes, we use the RGB renderings from 36 viewpoints provided in DISN~\cite{xu2019disn}, which have more variation and higher resolution than those of~\cite{Choy16b}. We use the same testing and training splits but re-generated the depth maps because the provided ones are clipped along the z-axis. \hspace{0.4cm} {\bf PIX3D}~\cite{sun2018pix3d} is a collection of pairs of real images of furniture with ground truth 3D models and pose annotations. With 395 3D shapes and 10,069 images, it contains far fewer samples than ShapeNet. We therefore use it for validation only, on approximately 2.5k images of chairs. \vspace{-2mm} \paragraph{Baselines and Metrics.} We test our UCLID-Net against several state-of-the-art approaches: AtlasNet~\cite{Groueix18} provides a set of 25 patches sampled as a point cloud, Pixel2Mesh~\cite{wang2018pixel2mesh} regresses a mesh with fixed topology, Mesh R-CNN~\cite{gkioxari2019mesh} regresses a mesh with varying topological structure, and DISN~\cite{xu2019disn} uses an implicit shape representation in the form of a signed distance function.
For Pixel2Mesh, we use the improved reimplementation from~\cite{gkioxari2019mesh} with a deeper backbone, which we refer to as Pixel2Mesh\textsuperscript{+}. All methods are retrained on the dataset described above, each according to their original training procedures. We report our results and those of the baselines in terms of five separate metrics, Chamfer L1 and L2 Distances (CD--$L_1$, CD--$L_2$), Earth Mover's Distance (EMD), shell-IoU (sIoU), and average F-Score for a distance threshold of 5\% (F@5\%), which we describe in more detail in the supplementary material. \subsection{Comparative Results} \vspace{-2mm} \paragraph{ShapeNet.} In Fig.~\ref{fig:example_image_reconstruction}, we provide qualitative UCLID-Net reconstruction results. In Tab.~\ref{tab:benchmark}(a), we compare it quantitatively against our baselines. UCLID-Net outperforms all other methods. We provide the results in aggregate and refer the interested reader to the supplementary material for per-category results. As in~\cite{xu2019disn}, all metrics are computed on shapes scaled to fit a unit radius sphere, and CD--$L_2$ and EMD values are scaled by $10^3$ and $10^2$, respectively. Note that these results were obtained using the depth maps and camera poses regressed by our auxiliary regressors. In other words, the input was only the image. We will see in the ablation study below that they can be further improved by supplying the ground-truth depth maps, which points towards a potential for further performance gains by using a more sophisticated depth regressor than the one we currently use. 
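For concreteness, two of the point-based metrics above can be sketched in a few lines of NumPy. This is our own minimal reference implementation, not the evaluation code used for the tables; normalizations, point-cloud sizes, and the threshold convention are illustrative and may differ from those detailed in the supplementary material.

```python
# Minimal reference sketches of two point-cloud metrics:
# symmetric Chamfer-L2 distance and the F-score at a distance threshold tau.
import numpy as np

def _nn_dists(a, b):
    """Distance from each point of a to its nearest neighbor in b."""
    # (|a|, |b|) matrix of pairwise Euclidean distances
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1)

def chamfer_l2(a, b):
    """Symmetric Chamfer distance with squared Euclidean terms."""
    return np.mean(_nn_dists(a, b) ** 2) + np.mean(_nn_dists(b, a) ** 2)

def f_score(pred, gt, tau=0.05):
    """Harmonic mean of precision and recall at distance threshold tau."""
    precision = np.mean(_nn_dists(pred, gt) < tau)
    recall = np.mean(_nn_dists(gt, pred) < tau)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

pts = np.random.rand(1024, 3)
assert chamfer_l2(pts, pts) == 0.0 and f_score(pts, pts) == 1.0
```

The quadratic pairwise-distance matrix is fine for a few thousand points; evaluation pipelines typically swap it for a k-d tree nearest-neighbor query.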
\begin{figure}[t] \centering \includegraphics[width=.16\textwidth]{figures/in_bench.png} \includegraphics[width=.16\textwidth]{figures/in_chair.png} \includegraphics[width=.16\textwidth]{figures/in_lamp.png} \includegraphics[width=.16\textwidth]{figures/in_plane.png} \includegraphics[width=.16\textwidth]{figures/in_rifle.png} \includegraphics[width=.16\textwidth]{figures/in_table.png} \includegraphics[width=.16\textwidth]{figures/out_bench.png} \includegraphics[width=.16\textwidth]{figures/out_chair.png} \includegraphics[width=.16\textwidth]{figures/out_lamp.png} \includegraphics[width=.16\textwidth]{figures/out_plane.png} \includegraphics[width=.16\textwidth]{figures/out_rifle.png} \includegraphics[width=.16\textwidth]{figures/out_table.png} \caption{\label{fig:example_image_reconstruction}\textbf{ShapeNet objects reconstructed by UCLID-Net.} Top row: Input view. Bottom row: Final point cloud. The points are colored according to the patch that generated them.} \end{figure} \input{fig/comparative} \begin{figure}[htbp] \begin{center} \setlength{\tabcolsep}{4pt} \begin{tabular}{ccc|ccc} \hspace{-0.3cm}\includegraphics[width=.15\textwidth]{figures/pix3d_frame20.png}& \hspace{-0.4cm}\includegraphics[width=.15\textwidth]{figures/pix3d_reco_DISN_20.png}& \hspace{-0.4cm}\includegraphics[width=.15\textwidth]{figures/pix3d_reco_ours_20.png}& \hspace{-0.0cm}\includegraphics[width=.15\textwidth]{figures/pix3d_frame21.png}&\includegraphics[width=.15\textwidth]{figures/pix3d_reco_DISN_21.png}& \hspace{-0.6cm}\includegraphics[width=.15\textwidth]{figures/pix3d_reco_ours_21.png}\\ \end{tabular} \end{center} \vspace{-0.6mm} \caption{\label{fig:pix3d}\textbf{Reconstructions on Pix3D photographs:} from left to right, twice: input, DISN, ours.} \end{figure} \vspace{-2mm} \paragraph{Pix3D.} In Fig.~\ref{fig:pix3d}, we provide qualitative UCLID-Net reconstruction results. In Tab.~\ref{tab:benchmark}(b), we compare it quantitatively against our baselines. 
We conform to the evaluation protocol of~\cite{sun2018pix3d} and report the Chamfer-L1 distance (CD--$L_1$) and EMD on point clouds of size 1024. The CD--$L_1$ and EMD values are scaled by $10^2$. UCLID-Net again outperforms all other methods. The only difference with the ShapeNet case is that both DISN and UCLID-Net used the available camera models whereas none of the other methods leverages camera information. \subsection{From Single- to Multi-View Reconstruction} \begin{figure} \begin{subfigure}{0.19\textwidth} \includegraphics[width=\linewidth]{figures/multi_view/chair/vp_1_in.png} \caption{} \end{subfigure}% \hspace*{\fill} \begin{subfigure}{0.19\textwidth} \includegraphics[width=\linewidth]{figures/multi_view/chair/vp_2_in.png} \caption{} \end{subfigure}% \hspace*{\fill} \begin{subfigure}{0.19\textwidth} \includegraphics[width=\linewidth]{figures/multi_view/chair/vp_1_out.png} \caption{} \end{subfigure} \begin{subfigure}{0.19\textwidth} \includegraphics[width=\linewidth]{figures/multi_view/chair/vp_2_out.png} \caption{} \end{subfigure} \begin{subfigure}{0.19\textwidth} \includegraphics[width=\linewidth]{figures/multi_view/chair/vp_1+2_out.png} \caption{} \end{subfigure} \vspace{-3mm} \caption{\label{fig:multi_view} \textbf{Two-views reconstruction.} (a,b) Two input images of the same chair from ShapeNet. (c) Reconstruction using only the first one. (d) Reconstruction using only the second one. (e) Improved reconstruction using both images.} \end{figure} A further strength of UCLID-Net is that its internal feature representations make it suitable for multi-view reconstruction. Given depth and feature grids provided by the image encoder from multiple views of the same object, their simple point-wise addition at each scale enables us to combine them in a spatially relevant manner. 
For input views $a$ and $b$, the encoder produces the collections of feature/depth grids $\{(G^{F_1}_a, G^D_{1,a}),\ldots, (G^{F_S}_a, G^D_{S,a})\}$ and $\{(G^{F_1}_b, G^D_{1,b}),\ldots, (G^{F_S}_b, G^D_{S,b})\}$. In this setting, we feed $\{(G^{F_1}_a+G^{F_1}_b, G^D_{1,a}+G^D_{1,b}),\ldots, (G^{F_S}_a+G^{F_S}_b, G^D_{S,a}+G^D_{S,b})\}$ to the shape decoder and let it merge details from both views. For best results, the decoder is fine-tuned to account for the change in magnitude of its inputs. As can be seen in Fig.~\ref{fig:multi_view}, this delivers better reconstructions than those obtained from each view independently. \subsection{Ablation Study} \label{sec:Ablation} \input{fig/ablation} To quantify the impact of regressing camera poses and depth maps, we conducted an ablation study on the ShapeNet car category. In Fig.~\ref{fig:ablation}(a), we report CD-$L_2$ and EMD for different network configurations. Here, \textit{CAR} is trained and evaluated on the cars subset, with inferred depth maps and camera poses. \textit{CAM} is trained and evaluated with inferred depth maps, but ground truth camera poses. \textit{CAD} is trained and evaluated with ground truth camera poses and depth maps. Finally, \textit{ALL} is trained on 13 object categories with inferred depth maps and cameras, as in all the experiments above, but evaluated on cars only. Using ground truth annotations for depth and pose improves reconstruction quality. The margin is not significant, which indicates that the regressed poses and depth maps are mostly good enough. Nevertheless, our pipeline is versatile enough to take advantage of additional information, such as a depth map from a laser scanner or an accurate camera model obtained using classic photogrammetry techniques, when it is available. Note also that \textit{ALL} performs marginally better than \textit{CAR}. Training the network on multiple classes does not degrade performance when evaluated on a single class.
In fact, having other categories in the training set increases the overall data volume, which seems to be beneficial. In Fig.~\ref{fig:ablation}(b), we present an interesting failure case. The visible armrest is correctly carved out while the occluded one is reconstructed as being solid. While incorrect, this result indicates that UCLID-Net has the ability to reason locally and does not simply retrieve a shape from the training database, as described in~\cite{Tatarchenko19}. \section{Method} At the heart of UCLID-Net is a representation that preserves the Euclidean structure of the 3D world in which the shape we want to reconstruct lives. To encode the input image into it and then decode it into a 3D shape, we use the architecture depicted by Fig.~\ref{fig:arch}. A CNN image encoder computes feature maps at $S$ different scales while auxiliary networks produce a depth map estimate $D$ and a camera projection model $P: \mathbb{R}^3 \rightarrow \mathbb{R}^2$. $P$ allows us to back-project image features into 3D space along camera rays and $D$ to localize the features at the probable location of the surface on each of these rays. The 2D feature maps and depth maps are back-projected to 3D grids that serve as input to the shape decoder, as shown by Fig.~\ref{fig:backproj}. This yields a coarse voxelized shape that is then refined into a point cloud. If estimates of either the pose $P$ or the depth map $D$ happen to be available {\it a priori}, we can use them instead of regressing them. We will show in the results section that this provides a small performance boost when they are accurate but not a very large one because our predictions tend to be good enough for our purposes, that is, lifting the features to the 3D grids. The back-projection mechanism we use is depicted by Fig.~\ref{fig:backproj}. It is similar to that of~\cite{ji2017surfacenet,iskakov2019learnable,Sitzmann19} and has a major weakness when used for single-view reconstruction. 
All voxels along a camera ray receive the same feature information, which can result in failures such as the one depicted by Fig.~\ref{fig:perceptual_pooling_failure} if passed as is to local shape decoders. To remedy this, we concatenate feature grids with voxelized depth maps. The result is then processed as a whole using 3D convolutions before being passed to local decoders. In the remainder of this section, we first introduce the basic back-projection mechanism, and then describe how our shape decoder fuses feature grids with depth information using a 3D CNN before locally regressing shapes. \input{fig/backproj} \subsection{Back-Projecting Feature and Depth Maps} \label{sec:backproj} We align all objects in the dataset to be canonically oriented within each class, centered at the origin, and scaled to fill bounding box $[-1, 1]^3$. Given such a 3D object, a CNN produces a 2D feature map $F \in \mathbb{R}^{f \times H \times W}$ for input image $I$. Using $P$, the camera projection used to render it into image $I \in \mathbb{R}^{3 \times H \times W}$, we back-project $F$ into object space as follows. As in~\cite{ji2017surfacenet,iskakov2019learnable}, we subdivide bounding box $[-1, 1]^3$ into $G^F \in \mathbb{R}^{f \times N \times N \times N}$, a regular 3D grid. Each voxel $(x,y,z)$ contains the $f$-dimensional feature vector \begin{equation} G^F_{xyz} = F\{P\begin{pmatrix}x\\y\\z\end{pmatrix}\} \; , \label{eq:backproj} \end{equation} where $\{\cdot\}$ denotes bilinear interpolation on the 2D feature map. As illustrated by Fig.~\ref{fig:backproj}, back-projecting can be understood as illuminating a grid of voxels with light rays that are cast by the camera and pass through the 2D feature map. This preserves the geometric structure of the surface, and 2D features are positioned consistently in 3D space. 
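To make the back-projection of Eq.~(\ref{eq:backproj}) concrete, the following is a minimal NumPy sketch. It is an illustration only: it uses nearest-neighbor lookup instead of bilinear interpolation, a brute-force loop over voxels, and an arbitrary projection callable standing in for $P$; the grid size and feature dimension are not the ones used by UCLID-Net.

```python
import numpy as np

def backproject_features(F, P, N):
    """Back-project a 2D feature map F of shape (f, H, W) into a
    (f, N, N, N) grid over the [-1, 1]^3 bounding box.

    P maps a 3D point (array of shape (3,)) to continuous pixel
    coordinates (u, v). Nearest-neighbor sampling stands in for the
    bilinear interpolation used in the paper.
    """
    f, H, W = F.shape
    G = np.zeros((f, N, N, N))
    # Voxel centers of the regular grid over [-1, 1]^3.
    coords = np.linspace(-1.0, 1.0, N)
    for ix, x in enumerate(coords):
        for iy, y in enumerate(coords):
            for iz, z in enumerate(coords):
                u, v = P(np.array([x, y, z]))
                ui, vi = int(round(u)), int(round(v))
                # Every voxel along a camera ray samples the same pixel.
                if 0 <= vi < H and 0 <= ui < W:
                    G[:, ix, iy, iz] = F[:, vi, ui]
    return G
```

With an orthographic projection along the $z$-axis, all voxels sharing the same $(x,y)$ receive identical features, which makes the ray ambiguity discussed above explicit.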
In practice, we back-project 2D feature maps $(F_1,\ldots, F_S)$ of decreasing spatial resolutions, which yield 3D feature grids $(G^{F_1},\ldots, G^{F_S})$ of decreasing sizes $(N_1,\ldots, N_S)$. We linearly scale the projected coordinates to account for decreasing resolution. We process depth maps in a different manner to exploit the available depth value at each pixel. Given a 2D depth map $D \in \mathbb{R}_+^{H \times W}$ of an object seen from a camera with projection matrix $P$, we first back-project the depth map to the corresponding 3D point cloud in object space. This point cloud is used to populate binary occupancy grids such as the one depicted by Fig.~\ref{fig:depthmap_backproj}(a). As for feature maps, we use this mechanism to produce a set of binary depth grids $(G^{D}_1,\ldots, G^{D}_S)$ of decreasing sizes $(N_1,\ldots, N_S)$. \input{fig/depth} \subsection{Hybrid Shape Decoder} \label{sec:decoder} The feature grids discussed above contain learned features but lack an explicit notion of depth. The values in their voxels are the same along a camera ray. By contrast, the depth grids structurally carry depth information in a binary occupancy grid but without any explicit feature information. One approach to merging these two kinds of information would be to clamp projected features using depth. However, this is not optimal for two reasons. First, the depth maps can be imprecise and the decoder should learn to correct for that. Second, it can be advantageous to push feature information not only to the visible parts of the surfaces but also to the occluded ones. Instead, we devised a shape decoder that takes as input the pairs of feature and depth grids at different scales $\{(G^{F_1}, G^D_1)\, ..., (G^{F_S}, G^D_S)\}$ that we introduced in Section~\ref{sec:backproj} and outputs a point cloud. Our decoder uses residual layers that rely on regular 3D convolutions and transposed ones to aggregate the input pairs in a bottom-up manner. 
We denote by $layer_s$ the layer at scale $s$, and by $concat$ concatenation along the feature dimension of same-size 3D grids. $layer_s$ takes as input a feature grid of size $N_s$ and outputs a grid $H_{s-1}$ of size $N_{s-1}$. If $N_{s-1} > N_s$, $layer_s$ performs upsampling; if $N_{s-1} = N_s$, the resolution remains unchanged. At the lowest scale, $layer_S$ constructs its output from feature grid $G^{F_S}$ and depth grid $G^D_S$ as \begin{equation} H_{S-1} = layer_S(concat(G^{F_S}, G^D_S)) \; . \end{equation} At subsequent scales $1 \leq s < S$, the output of the previous layer is also used and we write \begin{equation} H_{s-1} = layer_s(concat(G^{F_s}, G^D_s, H_s)) \; . \end{equation} The 3D convolutions ensure that voxels in the final feature grid $H_0$ can receive information emanating from different lines of sight and are therefore key to addressing the limitations of methods that only rely on local feature extraction~\cite{xu2019disn}. $H_0$ is passed to two downstream Multi Layer Perceptrons (MLPs), which we will refer to as $occ$ and $fold$. $occ$ returns a coarse surface occupancy grid. Within each voxel predicted to be occupied, $fold$ creates one local patch that refines the prediction of $occ$ and recovers high-frequency details in the manner of AtlasNet~\cite{Groueix18}. Both MLPs process each voxel of $H_0$ independently. Fig.~\ref{fig:example_reconstruction}(b) depicts their output in a specific case. We describe them in more detail in the supplementary material. Let $\widetilde{O}= occ(H_0)$ be the occupancy grid generated by $occ$ and % \begin{equation} \widetilde{X} = \bigcup_{\substack{xyz\\ \widetilde{O}_{xyz} > \tau}} \left \{ \begin{pmatrix}x\\y\\z\end{pmatrix} + fold(u,v|(H_0)_{xyz}) \mid (u,v) \in \Lambda \right \} \end{equation} be the union of the point clouds generated by $fold$ in each individual $H_0$ voxel in which the occupancy is above a threshold $\tau$. 
As in \cite{Groueix18,yang2018foldingnet}, $fold$ continuously maps a discrete set of 2D parameters $\Lambda \subset \left [ 0,1 \right ]^2$ to 3D points in space, which makes it possible to sample it at any resolution. During training, we minimize a weighted sum of the cross-entropy between $\widetilde{O}$ and the ground-truth surface occupancy and of the Chamfer-$L_2$ distance between $\widetilde{X}$ and a point cloud sampling of the ground-truth 3D model. \subsection{Implementation Details} In practice, our UCLID-Net architecture has $S=4$ scales with grid sizes $N_1=N_2=28$, $N_3=14$, $N_4=7$. The image encoder is a ResNet18~\cite{He16a}, in which we replaced the batch normalization layers by instance normalization ones~\cite{Ulyanov16a}. Feature map $F_s$ is the output of the $s$-th residual layer. The shape decoder mirrors the encoder, but in the 3D domain. It uses residual blocks, with transposed convolutions to increase resolution when required. The last feature grid $H_0$ of the decoder has spatial resolution $N_0=28$, with 40 feature channels. The first 8 features serve as input to $occ$, and the last 32 to $fold$. $occ$ is made of a single fully connected layer while $fold$ comprises 7 layers and performs two successive folds as in~\cite{Yang18a}. The network is implemented in PyTorch, and trained for 150 epochs using the Adam optimizer, with initial learning rate $10^{-3}$, decreased to $10^{-4}$ after 100 epochs. We take the camera to be a simple pinhole one with fixed intrinsic parameters and train a CNN to regress rotation and translation from RGB images. Its architecture and training are similar to what is described in~\cite{xu2019disn} except that we replaced its VGG-16 backbone by a ResNet18. To regress depth maps from images, we train another off-the-shelf CNN with a feature pyramid architecture \cite{chen2018depth}. These auxiliary networks are trained independently from the main UCLID-Net, but using the same training samples. 
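To illustrate how the two heads assemble the output point cloud $\widetilde{X}$ from $H_0$, here is a minimal NumPy sketch. The callables standing in for $occ$ and $fold$, the grid size, and the sampling of $\Lambda$ are illustrative assumptions; in the actual network both heads are MLPs as described above.

```python
import numpy as np

def decode_point_cloud(H0, occ, fold, tau=0.5, n_uv=3):
    """Assemble a point cloud from the final feature grid H0 of shape
    (C, N, N, N): `occ` scores each voxel independently, and `fold`
    maps 2D samples (u, v) in [0, 1]^2, conditioned on the voxel
    feature, to a local 3D offset. Both callables are placeholders
    for the MLPs used in the paper.
    """
    N = H0.shape[1]
    # Voxel centers of the regular grid over [-1, 1]^3.
    axis = np.linspace(-1.0, 1.0, N)
    # A regular sampling Lambda of the unit square.
    uv = [(u, v) for u in np.linspace(0, 1, n_uv)
                 for v in np.linspace(0, 1, n_uv)]
    points = []
    for ix in range(N):
        for iy in range(N):
            for iz in range(N):
                feat = H0[:, ix, iy, iz]
                if occ(feat) > tau:  # coarse occupancy decision
                    center = np.array([axis[ix], axis[iy], axis[iz]])
                    for (u, v) in uv:
                        # Points are expressed relative to their voxel.
                        points.append(center + fold(u, v, feat))
    return np.array(points)
```

Because $fold$ is a continuous map, increasing \texttt{n\_uv} samples each local patch more densely without retraining anything.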
\section{Supplementary material} \subsection{Metrics} This subsection defines the metrics and loss functions used in the main paper. \subsubsection{Chamfer-L1} The Chamfer-L1 (CD--$L_1$) pseudo-distance $d_{CD_1}$ between point clouds $X=\left \{ x_i | 1\leq i\leq N, x_i \in \mathbb{R}^3 \right \}$ and $Y=\left \{ y_j | 1\leq j\leq M, y_j \in \mathbb{R}^3 \right \}$ is the following: \begin{equation} d_{CD_1}(X, Y) = \frac{1}{\left | X \right |}\cdot \sum_{x\in X} \mathrm{min}_{y\in Y} \left \| x-y \right \|_2 + \frac{1}{\left | Y \right |}\cdot \sum_{y\in Y} \mathrm{min}_{x\in X} \left \| x-y \right \|_2 , \end{equation} where $\left \| . \right \|_2$ is the Euclidean distance. We use CD--$L_1$ as a validation metric on the Pix3D dataset, according to the original procedure. It is applied on shapes normalized to bounding box $[-0.5, 0.5]^3$, and sampled with 1024 points. \subsubsection{Chamfer-L2} The Chamfer-L2 (CD--$L_2$) pseudo-distance $d_{CD_2}$ between point clouds $X$ and $Y$ is the following: \begin{equation} d_{CD_2}(X, Y) = \frac{1}{\left | X \right |}\cdot \sum_{x\in X} \mathrm{min}_{y\in Y} \left \| x-y \right \|_2^2 + \frac{1}{\left | Y \right |}\cdot \sum_{y\in Y} \mathrm{min}_{x\in X} \left \| x-y \right \|_2^2 , \end{equation} i.e. CD--$L_2$ is the average of the \textit{squares} of the closest-neighbor matching distances. We use CD--$L_2$ as a validation metric on the ShapeNet dataset. It is applied on shapes normalized to unit radius sphere, and sampled with 2048 points. \subsubsection{Earth Mover's distance} The Earth Mover's Distance (EMD) is a distance that can be used to compare point clouds as well: \begin{equation} d_{EMD}(X, Y) = \underset{T\in \wp (N,M)}{\mathrm{min}} \sum_{1\leq i\leq N, 1\leq j\leq M} T_{i,j} \times \left \| x_i - y_j \right \|_2 , \end{equation} where $\wp (N,M)$ is the set of all possible uniform \textit{transport plans} from a point cloud of $N$ points to one of $M$ points, i.e. 
$\wp (N,M)$ is the set of all $N\times M$ matrices with real coefficients larger than or equal to $0$, such that the sum of each row equals $1/N$ and the sum of each column equals $1/M$. The high computational cost of EMD implies that it is mostly used for validation only, and in an approximate form. On ShapeNet, we use the implementation from~\cite{qi2018emd} on point clouds normalized to unit radius sphere, and sampled with 2048 points. On Pix3D, we use the implementation from~\cite{sun2018emd} on point clouds normalized to bounding box $[-0.5, 0.5]^3$, and sampled with 1024 points. \subsubsection{F-score} The F-Score is introduced in~\cite{Tatarchenko19}, as an evaluation of the distance between two object surfaces sampled as point clouds. Given a ground truth and a reconstructed surface, the F-Score at a given threshold distance $d$ is the harmonic mean of precision and recall, with: \begin{itemize} \item \textbf{precision} being the percentage of reconstructed points lying within distance $d$ to a point of the ground truth; \item \textbf{recall} being the percentage of ground truth points lying within distance $d$ to a point of the reconstructed surface. \end{itemize} We use the F-Score as a validation metric on the ShapeNet dataset. It is applied on shapes normalized to unit radius sphere, and sampled with 10000 points. The distance threshold is fixed at 5\% of the side-length of bounding box $[-1, 1]^3$, i.e. $d=0.1$. \subsubsection{Shell Intersection over Union} We introduce shell-Intersection over Union (sIoU). It is the intersection over union computed on voxelized surfaces, obtained as the binary occupancy grids of reconstructed and ground truth shapes. As opposed to volumetric IoU, which is dominated by the interior parts of the objects, sIoU accounts only for the overlap between object surfaces instead of volumes. We use the sIoU as a validation metric on the ShapeNet dataset. 
The occupancy grid divides the $[-1,1]^3$ bounding box at resolution $50\times50\times50$, and is populated by shapes normalized to unit radius sphere. \subsection{Network details} \vspace{-5px} Here we present some details of the architecture and training procedure for UCLID-Net. We will make our entire code base publicly available. \paragraph{3D CNN} UCLID-Net uses $S=4$ scales, and feature map $F_s$ is the output of the $s$-th residual layer of the ResNet18~\cite{He16a} encoder, passed through a 2D convolution with kernel size 1 to reduce its feature channel dimension before being back-projected. In the 3D CNN, $layer_4$, $layer_3$, and $layer_2$ are composed of 3D convolutional blocks, mirroring the composition of a residual layer in the ResNet18 image encoder, with: \begin{itemize} \item 2D convolutions replaced by 3D convolutions; \item 2D downsampling layers replaced by 3D transposed convolutions. \end{itemize} The final $layer_1$ is a single 3D convolution. Each $concat$ operation repeats depth grids twice along their single binary feature dimension before concatenating them to feature grids. Tab.~\ref{tab:layer_size} summarizes the size of feature maps and grids appearing in Fig.~\ref{fig:arch}. \input{supp_fig/layer_size} \paragraph{Local shape regressors} The last feature grid $H_0$ produced by the 3D CNN is passed to two downstream Multi Layer Perceptrons (MLPs). First, a coarse voxel shape is predicted by MLP $occ$. Then, within each predicted occupied voxel, a local patch is folded in the manner of AtlasNet~\cite{Groueix18}, by MLP $fold$. Both MLPs locally process each voxel of $H_0$ independently. MLP $occ$ outputs a surface occupancy grid $\widetilde{O}$ such that \begin{equation} \widetilde{O}_{xyz} = occ((H_0)_{xyz}) \end{equation} at every voxel location $(x,y,z)$. 
$\widetilde{O}$ is compared against ground truth occupancy grid $O$ using binary cross-entropy: \begin{equation} \mathcal{L}_{BCE}(\widetilde{O}, O) = - \sum_{xyz} \left [ O_{xyz} \cdot \log(\widetilde{O}_{xyz}) + (1-O_{xyz}) \cdot \log(1 - \widetilde{O}_{xyz}) \right ] \end{equation} $\mathcal{L}_{BCE}$ provides supervision for training the 2D image encoder convolutions, the 3D decoder convolutions and MLP $occ$. Then the second MLP, $fold$, learns a 2D parametrization of 3D surfaces within voxels whose predicted occupancy is larger than a threshold $\tau$. As in \cite{Groueix18, yang2018foldingnet}, such a learned parametrization is physically explained by folding a flat sheet of paper (or a patch) in space. It continuously maps a discrete set of 2D parameters $(u,v) \in \Lambda$ to 3D points in space. A patch can be sampled at arbitrary resolution. In our case, we use a single MLP whose input is locally conditioned on the value of $(H_0)_{xyz}$. The predicted point cloud $\widetilde{X}$ is defined as the union of all point samples over all folded patches: \begin{equation} \widetilde{X} = \bigcup_{\substack{xyz\\ \widetilde{O}_{xyz} > \tau}} \left \{ \begin{pmatrix}x\\y\\z\end{pmatrix} + fold(u,v|(H_0)_{xyz}) \mid (u,v) \in \Lambda \right \} \end{equation} Notice that 3D points are expressed relative to the coordinates of their voxel. As a result, we can explicitly restrict the spatial extent of a patch to the voxel it belongs to. We use the Chamfer-L2 pseudo-distance to compare $\widetilde{X}$ to a ground truth point cloud sampling of the shape $X$: $\mathcal{L}_{CD}(\widetilde{X}, X) = d_{CD_2}(\widetilde{X}, X)$. $\mathcal{L}_{CD}$ provides supervision for training the 2D image encoder convolutions, the 3D decoder convolutions and MLP $fold$. The total loss function is a weighted combination of the two losses $\mathcal{L}_{BCE}$ and $\mathcal{L}_{CD}$. 
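A minimal NumPy sketch of this weighted objective is given below. The relative weight and the use of a mean rather than a sum in the cross-entropy are illustrative assumptions; the paper only specifies a weighted combination of $\mathcal{L}_{BCE}$ and $\mathcal{L}_{CD}$.

```python
import numpy as np

def chamfer_l2(X, Y):
    """Symmetric Chamfer-L2 between point clouds X (n, 3) and Y (m, 3):
    mean squared nearest-neighbor distance, in both directions."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def total_loss(O_pred, O_gt, X_pred, X_gt, bce_weight=1.0):
    """Weighted combination of occupancy cross-entropy and Chamfer-L2.
    `bce_weight` and the mean reduction are illustrative assumptions,
    not the exact values used for training UCLID-Net."""
    eps = 1e-7
    O_pred = np.clip(O_pred, eps, 1 - eps)  # numerical safety for the logs
    bce = -np.mean(O_gt * np.log(O_pred) + (1 - O_gt) * np.log(1 - O_pred))
    return bce_weight * bce + chamfer_l2(X_pred, X_gt)
```

When prediction and ground truth point clouds coincide, the Chamfer term vanishes and only the occupancy term drives the gradient.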
In practice, for training, each patch of $\widetilde{X}$ is sampled with $|\Lambda|=10$ uniformly drawn parameters, and $X$ is composed of 5000 points. \paragraph{Pre-training} UCLID-Net is first trained for one epoch using the occupancy loss $\mathcal{L}_{BCE}$ only. \paragraph{Normalization layers} In the ResNet18 that serves as our image encoder, we replace the batch-normalization layers by instance normalization ones. We empirically found that this provides greater stability during training, and improves final performance. \paragraph{Regressing depth maps} We slightly adapt the off-the-shelf network architecture used for regressing depth maps~\cite{chen2018depth}. We modify the backbone CNN to be a ResNet18 with instance normalization layers. Additionally, we perform less down-sampling by removing the initial pooling layer. As a result, the input size is $224\times 224$ and the output size is $112\times 112$. \paragraph{Regressing cameras} We similarly adapt the off-the-shelf network architecture used for regressing cameras in~\cite{xu2019disn}: the backbone VGG is replaced by a ResNet18 with instance normalization layers. \subsection{Per-category results on ShapeNet} Here we report per-category validation metrics for UCLID-Net and baseline methods: AtlasNet~\cite{Groueix18} (AN), Pixel2Mesh\textsuperscript{+} and Mesh R-CNN~\cite{wang2018pixel2mesh, gkioxari2019mesh} (P2M\textsuperscript{+} and MRC), DISN~\cite{xu2019disn} and UCLID-Net (ours). Tab.~\ref{tab:cd} reports the Chamfer-L2 validation metric, Tab.~\ref{tab:emd} the Earth Mover's Distance, Tab.~\ref{tab:siou} the Shell Intersection over Union and Tab.~\ref{tab:fscore} the F-Score at 5\% distance threshold (i.e. $d=0.1$). \input{supp_fig/comparative_cd} \input{supp_fig/comparative_emd} \input{supp_fig/comparative_siou} \input{supp_fig/comparative_fscore}
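For reference, the F-Score used in these comparisons can be sketched as follows. This is a brute-force NumPy version of the definition given above; the default threshold matches the $d=0.1$ used in our evaluation, and the point cloud sizes in any usage are illustrative.

```python
import numpy as np

def f_score(pred, gt, d=0.1):
    """F-score at distance threshold d between two sampled surfaces:
    harmonic mean of precision (fraction of predicted points within d
    of the ground truth) and recall (fraction of ground-truth points
    within d of the prediction)."""
    # Brute-force pairwise Euclidean distances, shape (n_pred, n_gt).
    dists = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    precision = (dists.min(axis=1) < d).mean()
    recall = (dists.min(axis=0) < d).mean()
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

A perfect reconstruction yields an F-score of 1, while a reconstruction entirely farther than $d$ from the ground truth yields 0.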
\section{Introduction} Supersolitons are deformed solitary waves that are distinguishable through their three local minima and three local maxima in the electric field. Since the first reports on supersolitons, \cite{DubKol1,DubKol2,DubKol3} an increasing number of plasma models that support supersolitons have been identified. \cite{VerhEtAl2013a, VerhEtAl2013b,VerhEtAl2013c,MaharEtAl2013,OlivEtAl2015, VerhOliv2017} Many of these models describe magnetospheric plasmas. Nevertheless, very few actual satellite observations of possible supersoliton profiles have been reported. \cite{VerhEtAl2013b,DubKol2018} The limitations of spacecraft data mean that the time evolution of these structures cannot be traced. It is therefore nearly impossible to distinguish between supersolitons and regular soliton collisions. The observed supersoliton-like structures\cite{VerhEtAl2013b,DubKol2018} are typically sandwiched between regular solitons, or more complicated electric field structures. This is not entirely unexpected, as solitons in space plasmas are usually observed in clusters.\cite{TemEtAl1982,BoehmEtAl1984,BoundsEtAl1999,PickEtAl2009} These observations suggest that supersolitons in space plasmas would frequently collide with other solitons. Therefore, it is important to understand the collision properties of supersolitons. A fluid simulation was recently performed by Kakad \textit{et al.} \cite{KakadEtAl2016} in order to investigate the properties of supersolitons. They simulated the formation of a supersoliton from a Gaussian initial density disturbance. The generated supersolitons are therefore stable and provide insight into the possible formation of supersolitons. However, the collision properties of the resulting supersolitons were not considered. To date, theoretical studies have solely relied on pseudopotential analysis due to Sagdeev. 
\cite{Sagd1966} This approach is useful to obtain supersoliton solutions from which exact information about their amplitudes, velocities, and parametric regions of existence can be deduced. Unfortunately, the study of collision properties falls outside the scope of Sagdeev analysis. To study collision properties, we will apply the reductive perturbation analysis of Washimi and Taniuti. \cite{WashTan1966} Previously, it was suggested that reductive perturbation analysis cannot be used to obtain small-amplitude supersolitons. \cite{VerhEtAl2013b,VerhHell2015} However, at that time, the existence of supercritical plasma compositions \cite{VerhEtAl2016} had not yet been reported. More recently, supercritical plasma compositions have been shown to be related to small-amplitude supersolitons. \cite{OlivEtAl2017} In this paper, we show how this relationship can be used to study small-amplitude supersolitons by means of reductive perturbation analysis. This requires an extension of the earlier reductive perturbation analysis \cite{VerhEtAl2016} for a fluid plasma model consisting of cold ions and two-temperature Boltzmann electrons. The analysis leads to a generalized Korteweg-de Vries equation that admits supersoliton solutions. That equation is a higher-order variant of the standard Gardner equation. Since we have not come across this equation previously, we refer to it as the modified Gardner (mG) equation. The solutions obtained from this study agree exactly with those of an earlier small-amplitude study based on Sagdeev potential analysis.\cite{OlivEtAl2017} The main advantage of the reductive perturbation analysis is that one may use the resulting evolution equation to analyze the collision properties of the supersolitons in the small-amplitude regime. This is done in two ways. Firstly, we show that the mG equation is not completely integrable. As a result, it follows that supersoliton collisions are inelastic. 
Secondly, we use the mG equation to simulate the collision between solitons and overtaking supersolitons. These simulations suggest that such collisions may reduce the supersoliton to a regular soliton with smaller amplitude. It should be noted that our study is limited to very small regions in parameter space, very small amplitudes, and velocities that only marginally exceed the acoustic speed. \cite{OlivEtAl2017} Indeed, a comprehensive study of supersoliton collisions can only be undertaken through full fluid simulations. However, our results show that the collision properties of small-amplitude supersolitons are very different from those of regular solitons. The paper is organized as follows: In Section 2 we present the fluid model. In Section 3 we apply reductive perturbation analysis to derive the mG equation. We also establish the necessary conditions for the existence of supersoliton solutions. In Section 4 we normalize the mG equation and list its conservation laws. Moreover, we discuss why the equation is not completely integrable. Consequently, one should not expect collisions of solitons and supersolitons to be elastic. In Section 5, we use the mG equation to simulate the collision of a supersoliton that overtakes a regular soliton. Some conclusions are drawn in Section 6 together with an outlook on future work. \section{Fluid model} We consider a plasma consisting of cold fluid ions and a two-temperature Boltzmann electron species. 
The normalized fluid equations are given by\cite{VerhEtAl2016} \begin{equation} \frac{\partial n}{\partial t}+\frac{\partial}{\partial x}\left(nu\right)=0, \label{eq:fluids 1} \end{equation} \begin{equation} \frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x}+\frac{\partial\phi}{\partial x}=0, \label{eq:fluids 2} \end{equation} \begin{equation} \frac{\partial^{2}\phi}{\partial x^{2}}+n-f\exp\left(\alpha_{c}\phi\right) -\left(1-f\right)\exp\left(\alpha_{h}\phi\right)=0, \label{eq:fluids 3} \end{equation} where $n$ denotes the ion number density normalized with respect to the equilibrium ion density $N_{i}$, and $u$ the fluid velocity normalized with respect to the ion-acoustic speed $c_{ia}=\sqrt{K_{\mathrm{B}}T_{\mathrm{eff}}/m_{i}}$ with ion mass $m_{i}$. Here, $K_{\mathrm{B}}$ denotes the Boltzmann constant and $T_{\mathrm{eff}}=T_{c}/\left[f+\left(1-f\right)\sigma\right]$ denotes the effective temperature with electron temperature ratio $\sigma=T_{c}/T_{h}$ for cool (hot, resp.) electron temperature $T_{c}$ $\left(T_{h}, \mathrm{resp.}\right)$. The equilibrium cool electron density $f$ is normalized with respect to $N_{i}$. In addition, $\phi$ denotes the electrostatic potential normalized with respect to $K_{\mathrm{B}}T_{\mathrm{eff}}/e$ where $e$ is the electron charge, while \begin{equation} \alpha_{c}=\frac{1}{f+\left(1-f\right)\sigma}, \end{equation} and \begin{equation} \alpha_{h}=\frac{\sigma}{f+\left(1-f\right)\sigma}. \end{equation} Finally, length $x$ and time $t$ are normalized with respect to the Debye length $\lambda_{\mathrm{D}}=\sqrt{\varepsilon_{0}K_{\mathrm{B}}T_{\mathrm{eff}}/\left(N_{i}e^{2}\right)}$ and the reciprocal of the plasma frequency, $\omega_{pi}^{-1}=\sqrt{\varepsilon_{0}m_{i}/\left(N_{i}e^{2}\right)}$, respectively. \section{Reductive perturbation analysis} In order to retain fourth-order nonlinear effects, we follow Ref. 
\citenum{VerhEtAl2016} and introduce a stretched coordinate system \begin{equation} \xi=\varepsilon^{3/2}\left(x-t\right), \quad \tau=\varepsilon^{9/2}t. \label{eq:Stretching} \end{equation} In addition, we expand the ion number density and velocity, and the electrostatic potential as follows: \begin{equation} \left\{ \begin{array}{ccc} n & = & 1+\varepsilon n_{1}+\varepsilon^{2} n_{2}+\varepsilon^{3} n_{3}+\varepsilon^{4} n_{4}+\cdots, \\ \\ u & = & \varepsilon u_1 + \varepsilon^2 u_2 + \varepsilon^3 u_3 + \varepsilon^4 u_4 + \cdots,\\ \\ \phi & = & \varepsilon \phi_1 + \varepsilon^2 \phi_2 + \varepsilon^3 \phi_3 + \varepsilon^4 \phi_4 + \cdots . \end{array} \right. \label{eq:expansion} \end{equation} Since we are interested in solitons and supersolitons, we impose the following boundary conditions: \begin{equation} n\rightarrow1,u\rightarrow0,\:\phi\rightarrow0\text{ when }|\xi|\rightarrow\infty. \label{eq:BCs} \end{equation} By substituting the expressions (\ref{eq:Stretching}) and (\ref{eq:expansion}) into the fluid equations (\ref{eq:fluids 1})--(\ref{eq:fluids 3}), one obtains differential equations at different orders of $\varepsilon$. For brevity, we do not present these long expressions. We start with the continuity equation (\ref{eq:fluids 1}). 
By substituting the expansions (\ref{eq:Stretching}) and (\ref{eq:expansion}) into the continuity equation, and collecting terms up to $\varepsilon^{11/2}$, one obtains the following equations: \begin{equation} n_{1\xi} = u_{1\xi}.\label{eq:Continuity expansion 1} \end{equation} \begin{equation} n_{2\xi}=u_{2\xi}+\left(n_1u_1\right)_\xi.\label{eq:Continuity expansion 2} \end{equation} \begin{equation} n_{3\xi}=u_{3\xi}+\left(n_1u_2+n_2u_1\right)_\xi.\label{eq:Continuity expansion 3} \end{equation} \begin{equation} n_{4\xi}=n_{1\tau}+u_{4\xi}+\left(n_1u_3+n_2u_2+n_3u_1\right)_\xi.\label{eq:Continuity expansion 4} \end{equation} The subscripts $\xi$ and $\tau$ are used to denote partial derivatives $\partial/\partial\xi$ and $\partial/\partial\tau$, respectively. In addition, higher order partial derivatives are denoted with multiple subscripts throughout the paper. For example, we use $\phi_{1\xi\xi}$ to denote $\partial^2\phi_1/\partial\xi^2$. The first three equations (\ref{eq:Continuity expansion 1})--(\ref{eq:Continuity expansion 3}) can be simplified by means of a simple integration. By taking the boundary conditions (\ref{eq:BCs}) into account, it follows that \begin{equation} n_1=u_1,\label{eq:Continuity simplified 1} \end{equation} \begin{equation} n_2=u_2+n_1u_1, \label{eq:Continuity simplified 2} \end{equation} and \begin{equation} n_3=u_3+n_1u_2+n_2u_1.\label{eq:Continuity simplified 3} \end{equation} A similar treatment of the momentum equation (\ref{eq:fluids 2}) produces the following set of equations: \begin{equation} u_1 = \phi_1, \label{eq:Momentum simplified 1} \end{equation} \begin{equation} u_2=\phi_2+\frac{1}{2}u_1^2, \label{eq:Momentum simplified 2} \end{equation} \begin{equation} u_3 = \phi_3+u_1u_2, \label{eq:Momentum simplified 3} \end{equation} and \begin{equation} u_{4\xi}=u_{1\tau}+\phi_{4\xi}+u_1u_{3\xi}+u_2u_{2\xi}+u_3u_{1\xi}. 
\label{eq:Momentum simplified 4} \end{equation} The set of equations (\ref{eq:Momentum simplified 1})--(\ref{eq:Momentum simplified 4}) can be combined to eliminate the $u$ dependence from the set of equations (\ref{eq:Continuity expansion 4})--(\ref{eq:Continuity simplified 3}). It follows that \begin{equation} n_1 = \phi_1, \label{eq:Density order 1} \end{equation} \begin{equation} n_2 = \phi_2 + \frac{3}{2}\phi_1^2, \label{eq:Density order 2} \end{equation} \begin{equation} n_3 = \phi_3 + 3\phi_1\phi_2+\frac{5}{2}\phi_1^3, \label{eq:Density order 3} \end{equation} and \begin{equation} n_{4\xi} = \phi_{4\xi}+2\phi_{1\tau}+\left(2\phi_1\phi_3+\frac{1}{2}\phi_2^2+\frac{3}{2}\phi_1^2\phi_2+\frac{5}{8}\phi_1^4\right)_\xi. \label{eq:Density order 4} \end{equation} We now turn to Poisson's equation (\ref{eq:fluids 3}). By applying the expansions (\ref{eq:Stretching}) and (\ref{eq:expansion}), using a Taylor series to expand the exponential functions, and retaining terms up to order $\varepsilon^4$, one obtains the following equation: \begin{equation} \begin{array}{ccc} \displaystyle{\varepsilon^4\phi_{1\xi\xi}+\varepsilon n_1+\varepsilon^2n_2+\varepsilon^3n_3+\varepsilon^4n_4-\varepsilon A_1\phi_1-\varepsilon^2A_1\phi_2}\\ \\ \displaystyle{-\varepsilon^3A_1\phi_3-\varepsilon^4A_1\phi_4-\frac{A_2}{2}\varepsilon^2\phi_1^2-A_2\varepsilon^3\phi_1\phi_2-A_2\varepsilon^4\phi_1\phi_3}\\ \\ \displaystyle{-\frac{A_2}{2}\varepsilon^4\phi_2^2-\frac{A_3}{6}\varepsilon^3\phi_1^3-\frac{A_3}{2}\varepsilon^4\phi_1^2\phi_2-\frac{A_4}{24}\varepsilon^4\phi_1^4=0,\qquad\qquad} \end{array} \label{eq:Poisson expanded} \end{equation} where \begin{equation} A_j = f\alpha_c^j + \left(1-f\right)\alpha_h^j. \end{equation} The equations (\ref{eq:Density order 1})--(\ref{eq:Density order 3}) must be substituted into (\ref{eq:Poisson expanded}). To use (\ref{eq:Density order 4}), we differentiate (\ref{eq:Poisson expanded}) with respect to $\xi$. 
Since $A_1=1$ for any choice of $f$ and $\sigma$, (\ref{eq:Poisson expanded}) becomes \begin{equation} \begin{array}{ccc} \displaystyle{\varepsilon^4\phi_{1\xi\xi\xi}+2\varepsilon^4\phi_{1\tau}+\left[\frac{3-A_2}{2}\varepsilon^2\phi_1^2+\left(3-A_2\right)\varepsilon^3\phi_1\phi_2\qquad\right.}\\ \\ \displaystyle{+\frac{15-A_3}{6}\varepsilon^3\phi_1^3+\left(3-A_2\right)\varepsilon^4\phi_1\phi_3+\frac{15-A_3}{2}\varepsilon^4\phi_1^2\phi_2\qquad\qquad}\\ \\ \displaystyle{\left.+\frac{3-A_2}{2}\varepsilon^4\phi_2^2+\frac{105-A_4}{24}\varepsilon^4\phi_1^4\right]_\xi=0.\qquad\qquad\qquad\qquad\qquad} \end{array} \label{eq:Poisson expanded 2} \end{equation} For the supercritical plasma composition $f=\frac{1}{6}\left(3-\sqrt{6}\right)$ and $\sigma=5-2\sqrt{6}$, one has $A_2=3$ and $A_3=15$, so that the terms of orders $\varepsilon^2$ and $\varepsilon^3$ in (\ref{eq:Poisson expanded 2}) vanish. Here we consider plasma compositions near the supercritical composition. To do so, we look for compositions that satisfy the following criteria: \begin{equation} A_2 = 3-\varepsilon^2 B_2,\,\,\,\,\,A_3 = 15-\varepsilon B_3. \label{eq:Near supercritical comps} \end{equation} We thus require that $A_2$ is close to $3$ up to order $\varepsilon^2$ and that $A_3$ only differs from $15$ by a quantity of order $\varepsilon.$ Obviously, $B_2$ and $B_3$ must both be of order $1.$ If we substitute (\ref{eq:Near supercritical comps}) into (\ref{eq:Poisson expanded 2}), and retain terms of order $\varepsilon^4$, we obtain the following equation: \begin{equation} \phi_{1\tau}+\frac{1}{2}\phi_{1\xi\xi\xi}+\frac{B_2}{2}\phi_1\phi_{1\xi}+\frac{B_3}{4}\phi_1^2\phi_{1\xi}+\frac{105-A_4}{12}\phi_1^3\phi_{1\xi}=0. \label{eq:mG 1} \end{equation} For further analysis of (\ref{eq:mG 1}), we consider the lowest-order approximation of the electrostatic potential \begin{equation} \Phi=\varepsilon\phi_1. 
\end{equation} In addition, we introduce the following changes of coordinates: \begin{equation} t=\varepsilon^{-9/2}\tau,\,\,\,\eta=\varepsilon^{-3/2}\xi=x-t. \end{equation} Then (\ref{eq:mG 1}) becomes \begin{equation} \Phi_{t}+\frac{1}{2}\Phi_{\eta\eta\eta}+a\Phi\Phi_{\eta}+b\Phi^2\Phi_{\eta}+c\Phi^3\Phi_{\eta}=0, \label{eq:mG} \end{equation} where \begin{equation} a=\frac{3-A_2}{2},\,\,\,\,b=\frac{15-A_3}{4},\,\,\,\,c=\frac{105-A_4}{12}. \label{eq:mG coefficients} \end{equation} To the best of our knowledge, (\ref{eq:mG}) has not been reported before in the literature. We will refer to it as the modified Gardner (mG) equation since it is a quartic version of the standard Gardner equation where $c=0$. To find solitary wave solutions, we introduce a moving frame, \begin{equation} \zeta = \eta-vt, \end{equation} and integrate the resulting ordinary differential equation twice to obtain the energy-like equation, \begin{equation} \frac{1}{2}\left(\frac{\partial\Phi}{\partial\zeta}\right)^2+V\left(\Phi\right)=0, \label{eq:Energy eq} \end{equation} where \begin{equation} V\left(\Phi\right)= -v\Phi^2+\frac{a}{3}\Phi^3+\frac{b}{6}\Phi^4+\frac{c}{10}\Phi^5. \end{equation} Note that the above Sagdeev potential $V\left(\Phi\right)$ agrees with the one obtained in a small-amplitude study \cite{OlivEtAl2017} based on a Taylor series expansion of the Sagdeev potential. We briefly summarize the main results from that paper: \begin{enumerate} \item For the model under consideration, a supercritical plasma composition exists for $\sigma=5-2\sqrt{6}$ and $f=\left(3-\sqrt{6}\right)/6,$ yielding $A_2 = 3$ and $A_3 = 15.$ Using (\ref{eq:mG coefficients}), it follows that $a = b = 0$ in (\ref{eq:mG}). \item For supersolitons to exist, the following conditions must be satisfied: \begin{equation} b<0,\qquad ac>0,\qquad ac<\frac{8}{27}b^2.
\label{eq:Existence crit} \end{equation} \item For a plasma that satisfies these criteria, supersolitons exist at velocities \begin{equation} v_{\mathrm{min}}<v<v_{\mathrm{max}}, \end{equation} where \begin{equation} v_{\mathrm{max}}=v_{+}, \end{equation} \begin{equation} v_{\mathrm{min}} = \left\{ \begin{array}{c} v_{DL}\quad\text{ if }\quad\quad\quad\quad{\displaystyle \frac{ac}{b^{2}}\leq\frac{5}{18}},\\ \\ v_{-}\quad\text{ if }\quad{\displaystyle \frac{5}{18}<\frac{ac}{b^{2}}<\frac{8}{27}}, \end{array}\right. \end{equation} \begin{equation} v_{DL}=\frac{5b{\displaystyle \left(\frac{5b^{2}-27ac}{27}\right)}-200{\displaystyle \left(\frac{5b^{2}-18ac}{180}\right)}^{3/2}}{27c^{2}}, \label{eq:DL} \end{equation} and \begin{equation} v_{\pm}=\frac{{\displaystyle \frac{2b}{27}}\left(16b^{2}-81ac\right)\pm4\left({\displaystyle \frac{8b^{2}-27ac}{18}}\right)^{3/2}}{27c^{2}}. \end{equation} In (\ref{eq:DL}), $v_{DL}$ corresponds to the velocity of a double layer solution. \item A comparison between the small-amplitude study and the analysis based on the fully nonlinear Sagdeev potential was performed. Based on that comparison, a region (in the compositional parameter space) for the existence of small-amplitude supersolitons was found. This region was established for plasma compositions very close to the supercritical plasma composition. \end{enumerate} The mG equation can now be used to study collisions of overtaking supersolitons of small amplitudes. \section{Non-integrability of the mG equation} Some of the coefficients in (\ref{eq:mG}) can be removed by scaling: \begin{equation} t\rightarrow\alpha t,\text{ }\eta\rightarrow\beta\eta,\text{ }\Phi\rightarrow\gamma\Phi. \end{equation} By choosing the parameters $\alpha, \beta,$ and $\gamma$ appropriately, one obtains a normalized equation, \begin{equation} \Phi_{t} + \frac{1}{2}\Phi_{\eta\eta\eta} \pm\Phi\Phi_{\eta} + d\Phi^2\Phi_\eta \pm\Phi^3\Phi_{\eta} = 0. 
\label{eq:mG normalized} \end{equation} The signs in (\ref{eq:mG normalized}) and the choices of $\alpha$, $\beta$ and $\gamma$ depend on the signs of $a$, $b$ and $c$. For the model under consideration, one can easily show \cite{VerhEtAl2016} that $A_4=81$ at the supercritical composition, so that $c=2$. It can also easily be verified that $c>0$ for plasma compositions near the supercritical composition. Based on the existence criteria (\ref{eq:Existence crit}), we restrict ourselves to compositions where $a>0$, $b<0$ and $c>0$. Choosing the coefficients \begin{equation} \alpha=\left(\frac{c}{a}\right)^{3/4},\text{ }\beta=\left(\frac{c}{a}\right)^{1/4}, \text{ }\gamma=-\sqrt{\frac{a}{c}}, \end{equation} yields \begin{equation} \Phi_{t} + \frac{1}{2}\Phi_{\eta\eta\eta} + \Phi\Phi_{\eta} + D\Phi^2\Phi_\eta + \Phi^3\Phi_{\eta} = 0, \label{eq:mG normalized 2} \end{equation} with \begin{equation} D=-\sqrt{\frac{b^2}{ac}}=-\sqrt{\frac{3\left(15-A_3\right)^2}{2\left(3-A_2\right)\left(105-A_4\right)}}. \end{equation} To compute conservation laws of (\ref{eq:mG normalized 2}), we follow the approach of Verheest and Hereman \cite{VerhHer1994}, which yields two conservation laws: \begin{equation} \Phi_t + \left( \frac{1}{2}\Phi^2 + \frac{D}{3} \Phi^3 + \frac{1}{4}\Phi^4 + \frac{1}{2} \Phi_{\eta\eta} \right)_\eta = 0, \end{equation} and \begin{equation} \left(\Phi^2\right)_t + \left( \frac{2}{3}\Phi^3 + \frac{D}{2} \Phi^4 + \frac{2}{5}\Phi^5 + \Phi\Phi_{\eta\eta} - \frac{1}{2} \Phi_\eta^2 \right)_\eta = 0. \end{equation} Using symbolic software developed by Poole and Hereman \cite{PooleHereman2011}, an extensive search for polynomial conservation laws of (\ref{eq:mG normalized 2}) did not yield any additional results, which suggests that (\ref{eq:mG normalized 2}) is not completely integrable.
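The two conservation laws above can be checked symbolically. The following sketch (ours, using SymPy rather than the symbolic package of Poole and Hereman cited above) substitutes $\Phi_t$ from (\ref{eq:mG normalized 2}) into each density--flux pair and verifies that the residual vanishes identically:

```python
import sympy as sp

eta, t, D = sp.symbols('eta t D')
Phi = sp.Function('Phi')(eta, t)
dPhi = lambda n: sp.diff(Phi, eta, n)  # n-th eta-derivative of Phi

# Phi_t as dictated by the normalized mG equation (eq:mG normalized 2)
Phi_t = -(sp.Rational(1, 2) * dPhi(3) + (Phi + D * Phi**2 + Phi**3) * dPhi(1))

# First conservation law: density Phi, flux X1
X1 = Phi**2 / 2 + D * Phi**3 / 3 + Phi**4 / 4 + dPhi(2) / 2
res1 = sp.expand(Phi_t + sp.diff(X1, eta))

# Second conservation law: density Phi^2, so its time derivative is 2*Phi*Phi_t
X2 = (sp.Rational(2, 3) * Phi**3 + D * Phi**4 / 2 + sp.Rational(2, 5) * Phi**5
      + Phi * dPhi(2) - dPhi(1)**2 / 2)
res2 = sp.expand(2 * Phi * Phi_t + sp.diff(X2, eta))
```

Both residuals reduce to zero on solutions, confirming that the densities $\Phi$ and $\Phi^2$ are conserved.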
Equation (\ref{eq:mG normalized 2}) does not pass the Painlev\'e integrability test either as confirmed with the code of Baldwin and Hereman.\cite{BaldwinHereman2006} One should therefore not expect that solitary wave solutions of (\ref{eq:mG normalized 2}) would collide elastically and thus retain their shapes upon collisions. \section{Simulation of small-amplitude supersoliton collisions} We can now use the mG equation to simulate collisions between solitons and supersolitons. To do this, we construct a supersoliton solution and a slower soliton by numerically integrating the energy equation (\ref{eq:Energy eq}). The faster supersoliton solution is then shifted $\eta_0$ units to the left and added to the soliton solution. While the principle of superposition does not apply to nonlinear equations, it is assumed that the stability of the solutions ensures that the soliton and supersoliton propagation remains unaffected provided that the two solutions are sufficiently far apart. It should be mentioned that the solutions must be constructed on a sufficiently large interval. Due to the instability of the energy integral (\ref{eq:Energy eq}), the numerical integration is not accurate enough to provide such solutions. We therefore applied the results from an asymptotic study \cite{OlivEtAl2017_2} to construct sufficiently long tails for the solutions. After constructing the appropriate initial potential $\Phi$, the mG equation was integrated using a fourth-order Runge-Kutta method. We used finite differences to approximate the spatial derivatives and applied periodic boundary conditions. To avoid interference from the periodic boundary assumption, we had to choose a sufficiently large interval length. The simulations reveal that the supersoliton breaks up during the collision, so that only regular solitons emerge after the collision. To illustrate this, we discuss a typical result obtained from simulations with $D=-\sqrt{3.6}$. 
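The time-stepping scheme just described (fourth-order Runge--Kutta in time, central finite differences in space, periodic boundaries) can be sketched as follows. This is a toy illustration only: the grid, time span and Gaussian initial pulse are illustrative choices, much smaller than the production runs discussed next, rather than the soliton/supersoliton initial data used in the paper.

```python
import numpy as np

def mg_rhs(phi, dx, D):
    """Right-hand side of the normalized mG equation,
    Phi_t = -(1/2) Phi_xxx - (Phi + D Phi^2 + Phi^3) Phi_x,
    using periodic central differences."""
    phi_x = (np.roll(phi, -1) - np.roll(phi, 1)) / (2 * dx)
    phi_xxx = (np.roll(phi, -2) - 2 * np.roll(phi, -1)
               + 2 * np.roll(phi, 1) - np.roll(phi, 2)) / (2 * dx**3)
    return -0.5 * phi_xxx - (phi + D * phi**2 + phi**3) * phi_x

def rk4_step(phi, dt, dx, D):
    """One classical fourth-order Runge-Kutta step."""
    k1 = mg_rhs(phi, dx, D)
    k2 = mg_rhs(phi + 0.5 * dt * k1, dx, D)
    k3 = mg_rhs(phi + 0.5 * dt * k2, dx, D)
    k4 = mg_rhs(phi + dt * k3, dx, D)
    return phi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Toy run on a small periodic grid with a weak Gaussian pulse
L, N, D = 100.0, 1024, -np.sqrt(3.6)
dx = L / N
x = np.arange(N) * dx
phi = 0.05 * np.exp(-((x - L / 2) / 5.0) ** 2)
mass0 = phi.sum() * dx          # discrete "mass", conserved by the PDE
for _ in range(200):
    phi = rk4_step(phi, 1e-4, dx, D)
mass = phi.sum() * dx
```

With periodic boundaries the discrete mass stays essentially constant, a quick sanity check on the implementation.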
The initial disturbance consists of a supersoliton with velocity $v=0.5\left(v_{DL}+v_{\mathrm{max}}\right)\approx0.1218$ that is shifted $\eta_0=75$ units to the left, and a slower soliton with velocity $v=0.1$. For this simulation, the interval length is $L=1200$ and a grid with $N=25600$ points is used. Therefore, the grid spacing is $\Delta\eta\approx0.047$. An integration increment of $\Delta t=10^{-4}$ is used. \begin{figure} \begin{centering} \includegraphics[scale=0.45]{Fig1.eps} \par\end{centering} \caption{Simulation of a supersoliton overtaking a regular soliton. The electrostatic potential $\Phi$ is plotted as a function of $\eta$ and $t$.} \label{fig:Collision 1} \end{figure} The results are shown in Figure~\ref{fig:Collision 1}, where the magnitude of the solution $\Phi$ is plotted as a function of $\eta$ and $t$. Here we see that the supersoliton (initially on the left) widens around $t \approx 1000$, before the collision takes place. The fact that the supersoliton breaks up during this time is not obvious from the figure. The amplitude of the collision peaks around $t=3600$, before two solitons with smaller amplitudes emerge. It is therefore clear that the collision is inelastic. To see the breaking up of the supersoliton more clearly, in Figure~\ref{fig:Collision 2} we graphed the $\eta$ profiles of the electric field, $E = -\partial\Phi/\partial\eta,$ at different values of $t.$ In panel (a) of Figure~\ref{fig:Collision 2}, the initial condition is shown. The characteristic ``wiggles'' of the supersoliton are clearly visible to the left of the regular soliton. As the supersoliton approaches the regular soliton, the supersoliton starts to deform. This is shown in panel (b) for $t=1100$. The supersoliton breaks up to form a regular soliton. Panel (c) shows the solution at $t=1400$, after the supersoliton deformed to become a regular soliton. The collision of the resulting two solitons is shown in panels (d)--(f) of Figure~\ref{fig:Collision 2}.
The faster soliton overtakes the slower one, resulting in a transient solution as shown in panel (d) for $t=3000$. Eventually, the faster soliton re-emerges in front of the slower one, as shown in panel (e) for $t=4400$. Beyond $t=4400$, the separation between the two solitons increases, as depicted in panel (f) for $t=5500$. \begin{figure} \includegraphics[scale=0.35]{Fig2.eps} \caption{Simulation of a supersoliton overtaking a regular soliton. The electric field $E$ is shown at different times $t$, as specified on top of each panel.} \label{fig:Collision 2} \end{figure} \section{Conclusions and future work} In this paper, we applied reductive perturbation analysis to study small-amplitude supersolitons in a plasma consisting of cold ions and two-temperature Boltzmann electrons. To do so, we considered near-supercritical plasma compositions. We derived a generalized Korteweg-de Vries equation, referred to as the modified Gardner equation. For that equation, we derived the necessary conditions for small-amplitude supersolitons to exist. We also used the equation to study the collision properties of small-amplitude supersolitons, both theoretically and through simulations. Theoretically, we showed that in contrast to the KdV and mKdV equations, the mG equation is not completely integrable. Hence, collisions of small-amplitude supersolitons will be inelastic. Numerical simulations of the collisions between solitons and supersolitons show that the supersolitons break up during the collision to form a regular soliton. This is very different from elastic collisions of regular solitons. These results show that, in the small-amplitude regime, supersolitons are not as robust as regular solitons, and may break up during collisions. This suggests that their life spans may be much shorter than those of regular solitons and might explain the low number of supersoliton observations in space plasmas. However, caution must be taken in the interpretation of these results.
Indeed, for these conclusions to be valid, our results must be extended beyond the small-amplitude regime. To do so, one has to study the collision properties of supersolitons in laboratory experiments or numerical simulations. In addition, head-on collisions lie beyond the scope of this analysis. In conclusion, we hope that our results will generate interest in the topic of supersoliton collisions, and that this study can be used as a benchmark for further investigations. \section*{Acknowledgement} CO wishes to acknowledge the financial assistance of the National Research Foundation (NRF) towards this research. Opinions expressed and conclusions arrived at, are those of the authors and are not necessarily to be attributed to the NRF.
\section{Introduction} \label{sec:intro} In the past several decades, variational methods and optimization techniques have been extensively studied for solving image reconstruction problems. In particular, a number of regularizers, including total variation (TV), $l_p$ norm ($p\in [0,1]$), low rank, group Lasso, and nonlocal TV, have been proposed to improve the classical Tikhonov-type regularizers. Advanced optimization techniques were also developed to solve these nonsmooth and/or nonconvex reconstruction models for better computational efficiency, often by leveraging the special structures of the regularizers. However, the image reconstruction quality heavily depends on these hand-crafted regularizers, which are still overly simple and incapable of capturing the complex structural features of images. Moreover, the slow convergence and subtle parameter tuning of the optimization algorithms have hindered their applications in real-world image reconstruction problems. Recent years have witnessed the tremendous success of deep learning in a large variety of real-world application fields \cite{DLH14,HSW89,LPW17,Yarotsky}. At the heart of deep learning are deep neural networks (DNNs), which have provable representation power, and the substantial amount of data now available for training them. Deep learning has mostly been used as a data-driven approach since the DNNs can be trained with little or no knowledge about the underlying functions to be approximated. However, there are several major issues with such standard deep learning approaches: (i) Generic DNNs may fail to approximate the desired functions if the training data is scarce; (ii) The training of these DNNs is prone to overfitting, noise, and outliers; and (iii) The trained DNNs are mostly ``black boxes'' without rigorous mathematical justification and can be very difficult to interpret.
\subsection{Background of learnable optimization algorithms} To mitigate the aforementioned issues of DNNs, a class of \emph{learnable optimization algorithms} (LOAs) has been proposed recently. The main idea of LOA is to map a known iterative optimization algorithm to a DNN. The DNN is restricted to have a small number of blocks, where each block (also called a phase) mimics one iteration of the algorithm but with certain components replaced by network layers, and the network parameter $\theta$ of the DNN is learned such that the outputs of the DNN fit the desired solutions given in the training data. Consider the standard setting of supervised learning with a set of $N$ pairs of training data, $\{(\mathbf{b}^{(s)}, \hat{\mathbf{x}}^{(s)}): s \in [N] \}$, such that $\mathbf{b}^{(s)}$ is the input data of the DNN, for instance, a noisy and/or corrupted image or compressed or encoded data, and $\hat{\mathbf{x}}^{(s)}$ is the corresponding ground truth high quality image that the output of the DNN is expected to match. We study a general framework of LOA which can be described by a bi-level optimization problem, where the upper-level minimization is described by the loss function $\mathcal{L}$, and the lower-level minimization forms the constraint, as follows: \begin{equation}\label{eq:loa} \min_{\theta}\ \frac{1}{N} \sum_{s = 1}^N \mathcal{L}(\mathbf{x}^{(s)}_{\theta}, \hat{\mathbf{x}}^{(s)}),\quad \text{s.t.}\ \ \mathbf{x}^{(s)}_{\theta} = \argmin_{\mathbf{x} \in \mathcal{X}}\, \{\phi(\mathbf{x}; \mathbf{b}^{(s)}, \theta) := f(\mathbf{x}; \mathbf{b}^{(s)}, \theta) + r(\mathbf{x}; \theta) \} \end{equation} where $f$ is the data fidelity term to ensure that the reconstructed image $\mathbf{x}$ is faithful to the given data $\mathbf{b}$, and $r$ is the regularization that may incorporate proper prior information of $\mathbf{x}$. The regularization $r(\cdot; \theta)$ (and possibly $f$ also) is realized as a DNN with parameter $\theta$ to be learned. 
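A toy instance of \eqref{eq:loa} may help fix ideas: take $f(\mathbf{x};\mathbf{b})=\tfrac12\|A\mathbf{x}-\mathbf{b}\|^2$, a hypothetical one-parameter regularizer $r(\mathbf{x};\theta)=\tfrac{\theta}{2}\|\mathbf{x}\|^2$, approximate the lower-level minimizer by $K$ unrolled gradient steps, and fit $\theta$ in the upper level by a crude grid search. All sizes and the grid are illustrative assumptions; a real LOA replaces the $\theta\mathbf{x}$ term by network layers and the grid search by back-propagation through the unrolled blocks.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 20)) / np.sqrt(30)
x_true = rng.standard_normal(20)
b = A @ x_true + 0.05 * rng.standard_normal(30)

def unrolled_net(theta, K=300, step=0.2):
    """K unrolled gradient-descent 'phases' for the lower-level problem
    min_x 0.5||Ax - b||^2 + (theta/2)||x||^2; each loop pass is one block."""
    x = np.zeros(20)
    for _ in range(K):
        x = x - step * (A.T @ (A @ x - b) + theta * x)
    return x

# Upper level: choose theta by minimizing the supervised loss ||x_theta - x_true||
thetas = np.linspace(0.0, 0.5, 51)
losses = [np.linalg.norm(unrolled_net(th) - x_true) for th in thetas]
theta_star = float(thetas[int(np.argmin(losses))])
```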
The loss function $\mathcal{L}$ measures the (averaged) difference between $\mathbf{x}_{\theta}$, which is the minimizer of $\phi(\cdot; \mathbf{b},\theta)$, and the given ground truth $\hat{\mathbf{x}}$ for every $s$. The optimal parameter $\theta$ of $r$ is then obtained by solving the bi-level optimization \eqref{eq:loa}. Typically, the actual minimizer $\mathbf{x}_{\theta}$ is replaced by the output of an LOA-based DNN (different from the DNN $r$) which mimics an iterative optimization scheme for solving the lower-level minimization in the constraint of \eqref{eq:loa}. LOA is also widely known as the \emph{unrolling method}, as the iteration scheme of the optimization algorithm is ``unrolled'' into multiple blocks of the LOA-based DNN. However, despite their promising performance in practice, existing LOA-based DNNs only superficially resemble the steps of optimization algorithms, and hence they do not really yield a convergent algorithm or correspond to solving any interpretable variational model such as the one in the constraint of \eqref{eq:loa}. As a result, they lack theoretical justifications and convergence guarantees. \subsection{Our goal and approach} Our goal in this work is to develop a general LOA framework for solving nonsmooth and nonconvex image reconstruction problems. The proposed LOA has the following properties: \emph{Versatility}---our method is flexible and allows users to plug in various kinds of deep neural networks for learning the objective function; \emph{Convergence}---we can ensure convergence of our network as its architecture follows exactly the proposed algorithm for solving the nonsmooth and nonconvex optimization in \eqref{eq:loa}; and \emph{Performance}---our method can adaptively learn the regularization function from the training data, such that it is competitive and can even outperform the state-of-the-art methods in terms of both reconstruction accuracy and efficiency in practice.
To this end, we consider learning the lower-level minimization in \eqref{eq:loa} for image reconstruction, such that its solution is close to the ground truth high quality image in the training data. Specifically, we use a composite structure for the regularization $r$, namely the $l_{2,1}$ norm of a learnable feature mapping $\mathbf{g}$ realized by a deep neural network. Both $f$ and $\mathbf{g}$ are smooth but (possibly) nonconvex, and the overall objective function is nonsmooth and nonconvex. We propose a descent-type algorithm to solve this nonsmooth nonconvex problem as follows: (i) We tackle the nonsmoothness by employing Nesterov's smoothing technique \cite{nesterov2005smooth} with automatic diminishing smoothing effect; (ii) We propose two successive residual-type updates (first on $f$ and then on $r$), a key idea proven very effective in deep network training \cite{ResNet}, and compute the convex combination of the two updates for the next iteration; and (iii) We employ an iterate selection policy based on the objective function value to ensure convergence. Moreover, we prove that a subsequence generated by the proposed LOA has accumulation points, and all of them are Clarke stationary points of the nonsmooth nonconvex problem. \subsection{Notations and organization} \label{subsec:notation} We denote $[n]:=\{1,\dots,n\}$ for $n\in \mathbb{N}$. We use regular lower-case letters to denote scalars and scalar-valued functions, and boldfaced lower-case letters for vectors and vector-valued functions. Unless otherwise noted, all vectors are column vectors. The inner product of two vectors $\mathbf{x}$ and $\mathbf{y}$ is denoted by $\langle \mathbf{x}, \mathbf{y} \rangle$, $\|\mathbf{x}\| = \|\mathbf{x}\|_2$ stands for the $l_2$ norm of $\mathbf{x}$, $\|\mathbf{A}\|$ denotes the induced $l_2$ norm of the matrix $\mathbf{A}$, and $\mathbf{A}^{\top}$ is the transpose of $\mathbf{A}$.
For any set $\mathcal{S} \subset \mathbb{R}^n$, we denote $\dist(\mathbf{y}, \mathcal{S}) := \inf\{ \|\mathbf{y} - \mathbf{x}\| \ \vert \ \mathbf{x} \in \mathcal{S} \}$, and $\mathcal{S} + \mathbf{y} := \{\mathbf{x} + \mathbf{y}\ \vert \ \mathbf{x} \in \mathcal{S}\}$. The remainder of this paper is organized as follows. In Section \ref{sec:related}, we review the recent literature on LOA and general nonsmooth nonconvex optimization methods. In Section \ref{sec:proposed}, we present our LOA based on a descent-type algorithm to solve the nonsmooth nonconvex image reconstruction problem with comprehensive convergence analysis. In Section \ref{sec:experiment}, we conduct a number of numerical experiments on natural and medical image datasets to show the promising performance of our proposed method. We provide several concluding remarks in Section \ref{sec:conclusion}. \section{Related Work} \label{sec:related} \subsection{Learnable optimization algorithms} Learnable optimization algorithms (LOAs) are a class of methods developed in recent years to imitate the iterations of optimization algorithms as blocks in a deep neural network with certain components replaced by learnable layers. Existing LOAs can be roughly categorized into two groups. The first group of LOAs that appeared in the literature is motivated by the similarity between the iterative scheme of a traditional optimization algorithm (e.g., the proximal gradient algorithm) and a feed-forward neural network. Provided instances of training data, such as ground truth solutions, an LOA replaces certain components of the optimization algorithm with parameters to be learned from the data. The pioneering work \cite{KY2010} in this group of LOAs is based on the well-known iterative shrinkage thresholding algorithm (ISTA) for solving the LASSO problem.
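For reference, ISTA for the LASSO problem $\min_{\mathbf{x}} \tfrac{1}{2}\|\Phi \mathbf{x}-\mathbf{b}\|^2+\lambda\|\mathbf{x}\|_1$ alternates a gradient step on the quadratic term with elementwise soft-thresholding. The following is a minimal NumPy sketch; the problem sizes, sparsity pattern, and $\lambda$ are illustrative choices, not taken from the cited works.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(Phi, b, lam, n_iter=1000):
    """Classical ISTA for min_x 0.5 ||Phi x - b||^2 + lam ||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy sparse recovery: a 3-sparse signal from 50 random measurements
rng = np.random.default_rng(0)
Phi = rng.standard_normal((50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -0.7, 0.5]
b = Phi @ x_true
x_hat = ista(Phi, b, lam=0.01)
```

LISTA keeps this fixed-depth iterative structure but replaces the fixed matrices $\Phi^{\top}/L$ and the thresholds by weights learned from training pairs.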
In \cite{KY2010}, a learned ISTA network, called LISTA, is proposed to replace $\Phi^{\top}$ by a weight matrix to be learned from instance data to reduce the iteration complexity of the original ISTA. The asymptotic linear convergence rate of LISTA is established in \cite{CLWY2018} and \cite{LCW2019}. Several variants of LISTA were also developed using low rank or group sparsity \cite{SBS2015}, $\ell_0$ minimization \cite{XWG16}, and learned approximate message passing \cite{BSR2017}. The idea of LISTA has been extended to solve composite problems with linear constraints, known as the differentiable linearized alternating direction method of multipliers (D-LADMM) \cite{XWZ2019}. These LOA methods, however, still employ handcrafted regularization and require a closed-form solution of the proximal operator of the regularization term. To improve reconstruction quality, the other group of LOAs follows a different approach by replacing the lower-level minimization in \eqref{eq:loa} with a DNN whose structure is inspired by a numerical optimization algorithm for solving the minimization problem. For example, recall that the standard proximal gradient method applies a gradient descent step on the smooth function $f$ at the current iterate, and then the proximal mapping of $r$ to obtain the next iterate. In this case, an LOA can be obtained by replacing the proximal mapping of $r$ with a multilayer perceptron (MLP), which can be learned using training data. As such, one avoids explicit formation of the regularization $r$ for \eqref{eq:loa}. This paradigm has been embedded into half-quadratic splitting in DnCNN \cite{zhang2017learning}, ADMM in \cite{chang2017one,meinhardt2017learning}, and primal-dual methods in \cite{AO18,LCW2019,meinhardt2017learning,wang2016proximal} to solve the subproblems.
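A minimal sketch of this learned-proximal paradigm follows, with the learned mapping deliberately replaced by a hand-made 3-tap moving average as a stand-in (an assumption for illustration; the cited works plug in trained denoising networks):

```python
import numpy as np

def denoise(x):
    """Stand-in for a learned denoiser: a 3-tap periodic moving average.
    In the LOA literature this slot is filled by a trained CNN or MLP."""
    return (np.roll(x, 1) + x + np.roll(x, -1)) / 3.0

# Half-quadratic-splitting-style iteration for min_x 0.5||x - b||^2 + r(x):
# alternate the denoiser ("prox of r") with the closed-form data-fidelity prox.
rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))
b = clean + 0.3 * rng.standard_normal(64)   # noisy observation
mu = 1.0
x = b.copy()
for _ in range(20):
    z = denoise(x)                 # regularization step (denoiser replaces prox)
    x = (b + mu * z) / (1.0 + mu)  # data-consistency step
mse_noisy = np.mean((b - clean) ** 2)
mse_pnp = np.mean((x - clean) ** 2)
```

Even with this crude denoiser, the alternation reduces the error relative to the noisy input, which is the essential mechanism the learned versions exploit.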
To improve the generic black-box CNNs above, several LOA methods are proposed to incorporate certain prior knowledge about the solution in the design of $r$, and then unroll numerical optimization algorithms as deep neural networks so as to preserve their efficient structures with proven efficiency, such as the ADMM-Net \cite{NIPS2016_6406}, Variational Network \cite{HKK18} and ISTA-Net \cite{Zhang2018ISTANetIO}. These methods also prescribe the phase number $K$, map each iteration of the corresponding numerical algorithm to one phase of the network, and learn specific components of the phases in the network using training data. Despite the promising performance in a variety of applications, these LOAs are only superficially related to the original optimization algorithms. The LOAs themselves do not follow any provably convergent algorithm or correspond to the solution of any properly defined variational problem. Moreover, certain acceleration techniques proven to be useful for numerical optimization algorithms are not effective in their LOA counterparts. For example, the acceleration approach based on momentum \cite{Nesterov} can significantly improve the iteration complexity of traditional (proximal) gradient descent methods, but does not yield noticeable improvement when deployed in LOAs. This can be observed from the similar performance of ISTA-Net \cite{Zhang2018ISTANetIO} and FISTA-Net \cite{zhang2017learning}. One possible reason is that the LOA has nonconvex components, for which a linear combination of past iterates is potentially a worse extrapolation point in optimization \cite{li2015accelerated}. In parallel to the development of LOAs, the performance of deep networks for image reconstruction has been continuously improved in recent years due to various network engineering and training techniques.
For example, ISTA-Net$^+$ \cite{Zhang2018ISTANetIO} employs the residual network structure \cite{ResNet} and results in substantially increased reconstruction accuracy over ISTA-Net. The residual structure is also shown to improve network performance in a number of recent works, such as ResNet-v2 \cite{he2016identity}, WRN \cite{zagoruyko2016wide}, and ResNeXt \cite{xie2017aggregated}. In image compression and reconstruction, the learnable sampling module is always implemented as a single convolutional layer without activation \cite{SJZ17, Shi_2019_CVPR, zhou2019multi,ZZG20,zhang2020amp}. Efficient block compressed sensing for high-dimensional data \cite{gan2007block} can be achieved by controlling the convolution kernel size and stride \cite{SJZ17}. Joint reconstruction to reduce blocky effects in image compressive sensing is proposed in \cite{SJZ17}, and the learned sampling operator is shown to automatically satisfy the orthogonality property in \cite{ZZG20}. A multi-channel method is proposed in \cite{zhou2019multi} to elaborately manage the sensing resources by assigning different sampling ratios to image blocks. To obtain any desired sampling ratio, scalable sampling and reconstruction is achieved through a greedy measurement-based selection algorithm in \cite{Shi_2019_CVPR}. \subsection{Nonsmooth and nonconvex optimization} Nonsmooth nonconvex optimization has been extensively studied in recent years. One of the most common methods is the proximal gradient method (also known as forward-backward splitting or FBS) \cite{FM81, ABS13, N13}. Several variants, including the accelerated proximal gradient method \cite{LL15} and FBS with an inertial force \cite{BCL16, OCB14}, have been proposed to improve the convergence rate. Iteratively reweighted algorithms have also been developed, which solve a sequence of proximal operator problems \cite{GZL13, ODB14, ZK14}.
These algorithms are effective when the nonsmooth components involved in the subproblems are simple, i.e., the associated proximal operators have closed forms or are easy to solve. There are also a number of optimization algorithms developed for certain structured nonsmooth nonconvex problems. For instance, for a class of composite optimization problems involving $h(\mathbf{c}(\mathbf{x}))$, where $h$ is convex but nonsmooth, and $\mathbf{c}$ is smooth but (possibly) nonconvex, several linearized proximal-type algorithms are proposed such that $\mathbf{c}$ is approximated by its linearization. This renders a convex subproblem in each iteration, which can be solved with exact \cite{LW16, DL16, OFB19} or inexact \cite{GST19} function and gradient evaluations. If the problem nonconvexity is due to a difference of convex (DC) functions, a number of optimization methods known as DC algorithms (DCA) have been developed \cite{SOS16, WCP18, BBC19, NOS18}. DCA approximates a nonconvex DC problem by a sequence of convex ones, such that each iteration only involves a convex optimization problem. Several DC decomposition and approximation methods have been introduced for different applications \cite{LP05, PL14}. Recently, proximal linearized algorithms for DC programming have been proposed to iteratively solve the subproblem, where one of the convex components is replaced by its linear approximation together with a proximal term \cite{SOS16, BBC19}. In \cite{WCP18}, extrapolation is integrated into the proximal linearized algorithm for possible acceleration of the proximal DCA. In \cite{NOS18}, an inexact generalized proximal linearized algorithm is developed, where the Euclidean distance is replaced with a quasi distance in the proximal operator, and the proximal point is replaced with an approximate proximal point. However, the subproblem of DCA may not have a closed-form solution and thus can still require inner iterations to solve.
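A one-dimensional toy example (ours, not drawn from the cited works) shows the DCA mechanics in the favorable case where the convex subproblem does have a closed form:

```python
import numpy as np

# DCA on the double-well objective phi(x) = x^4/4 - x^2, written as a
# difference of convex functions g(x) = x^4/4 and h(x) = x^2.
# Each DCA iteration linearizes h at x_k and minimizes the convex surrogate
#   x_{k+1} = argmin_x  x^4/4 - h'(x_k) * x,
# whose first-order condition x^3 = 2 x_k gives x_{k+1} = cbrt(2 x_k).
x = 1.0
for _ in range(60):
    x = float(np.cbrt(2.0 * x))
# The iterates increase monotonically toward the stationary point x* = sqrt(2).
```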
To solve general nonconvex and nonsmooth problems, a common approach is to use the smoothing technique, possibly in combination with gradient descent and a line search strategy; see \cite{R98,ZC09,C12,BC17} and the references therein. The main idea of the smoothing technique is to construct a class of smooth nonconvex problems (e.g., using convolution) to approximate the original problem, where the approximation accuracy (smoothing level) is controlled by a smoothing parameter. Then one can apply gradient descent or projected gradient descent with line search to solve the approximate problem with a fixed smoothing level, then reduce the smoothing parameter and solve the problem again, and so on. The descent algorithm developed in this work for nonsmooth and nonconvex optimization is largely inspired by \cite{C12}. However, unlike \cite{C12}, our goal is to construct a deep image reconstruction network in the framework of \eqref{eq:loa}, where (part of) the objective function is unknown and needs to be learned from the training data, such that the trained network has convergence guarantees in theory and compares favorably to the state-of-the-art in reconstruction quality in practice. \section{LDA and Convergence Analysis} \label{sec:proposed} In this section, we propose a novel learnable descent algorithm (LDA) to solve the nonsmooth and nonconvex optimization problem for image reconstruction: \begin{equation}\label{eq:phi} \min_{\mathbf{x} \in \mathcal{X}} \ \phi(\mathbf{x}) := f(\mathbf{x}) + r(\mathbf{x}), \end{equation} where $\mathbf{x}$ is the image to be reconstructed, $\mathcal{X}$ is the admissible set of $\mathbf{x}$, e.g., $\mathcal{X} = \mathbb{R}^n$, $n$ is the number of pixels in $\mathbf{x}$, $f$ stands for the data fidelity term of $\mathbf{x}$, and $r$ represents the regularization term to be learned.
We leverage the sparse selection property of the $l_1$ norm and parametrize the regularization term $r$ as the composition of the $l_{2,1}$ norm and a feature extraction operator $\mathbf{g}(\mathbf{x})$ to be learned. Specifically, $\mathbf{g}: \mathbb{R}^n \to \mathbb{R}^{md}$ such that $\mathbf{g}(\mathbf{x}) = (\mathbf{g}_1(\mathbf{x}),\dots,\mathbf{g}_m(\mathbf{x}))$, where $\mathbf{g}_i(\mathbf{x}) \in \mathbb{R}^{d}$ is the $i$-th feature vector of $\mathbf{x}$ for $i=1,\dots,m$. That is, we set $r$ in \eqref{eq:phi} to \begin{equation}\label{eq:r} r(\mathbf{x}) := \|\mathbf{g}(\mathbf{x})\|_{2,1} = \sum_{i = 1}^{m} \|\mathbf{g}_i(\mathbf{x})\|. \end{equation} In this paper, $\mathbf{g}$ is realized by a deep neural network whose parameters are learned from training data. The regularization $r$ in \eqref{eq:r} can be interpreted as follows: we learn a smooth nonlinear mapping $\mathbf{g}$ to extract sparse features of $\mathbf{x}$, and apply the $l_{2,1}$ norm, which has proven to be a robust and effective sparse feature regularization. In addition, we make several assumptions on $f$ and $\mathbf{g}$ throughout this work. \begin{itemize} \item \textbf{Assumption 1 (A1)} $f$ is differentiable and (possibly) nonconvex, and $\nabla f$ is $L_f$-Lipschitz continuous. \item \textbf{Assumption 2 (A2)} Every component of $\mathbf{g}$ is differentiable and (possibly) nonconvex, $\nabla \mathbf{g}$ is $L_g$-Lipschitz continuous, and $\sup_{\mathbf{x} \in \mathcal{X}} \| \nabla \mathbf{g}(\mathbf{x})\| \leq M $ for some constant $M>0$. \item \textbf{Assumption 3 (A3)} $\phi$ is coercive, and $\phi^* = \min_{\mathbf{x} \in \mathcal{X}} \phi(\mathbf{x}) > -\infty$. \end{itemize} \begin{remark} The assumptions (A1)--(A3) are mild for imaging applications.
The Lipschitz continuity of $\nabla f$ and $\nabla \mathbf{g}$ is standard in optimization and most imaging applications; the smoothness of $\mathbf{g}$ and the boundedness of $\nabla \mathbf{g}$ are satisfied for all standard deep neural networks with smoothly differentiable activation functions such as sigmoid, tanh, and ELU; and the coercivity of $\phi$ generally holds in image reconstruction, e.g., the DC component carrying the overall image intensity information (e.g., $\|\mathbf{x}\|_1$) is contained in the data, so deviation from this value makes the value of $f$ tend to infinity.
\end{remark}
Other than the requirement in A2, the design of the network architecture and the choice of activation functions in $\mathbf{g}$ are rather flexible. A typical choice of $\mathbf{g}$ is a convolutional neural network (CNN), which maps an input image $\mathbf{x} \in \mathbb{R}^n$ (a gray-scale image with $n$ pixels) to a collection of $m$ feature vectors $\{\mathbf{g}_i(\mathbf{x}): 1 \le i \le m\} \subset \mathbb{R}^d$.
\subsection{Smooth Approximation of Nonsmooth Regularization}
\label{subsec:smooth}
To tackle the nonsmooth and nonconvex regularization term $r(\mathbf{x})$ in \eqref{eq:r}, we first employ Nesterov's smoothing technique \cite{nesterov2005smooth} to smooth the $l_{2,1}$ norm in \eqref{eq:r} for any fixed $\mathbf{g}(\mathbf{x})$:
\begin{equation}\label{eq:Rx}
r(\mathbf{x}) = \max_{\mathbf{y} \in \mathcal{Y}}\ \langle \mathbf{g}(\mathbf{x}), \mathbf{y} \rangle,
\end{equation}
where $\mathbf{y} \in \mathcal{Y}$ is the dual variable and $\mathcal{Y}$ is the dual space defined by
\[ \mathcal{Y} := \cbr[1]{ \mathbf{y} = (\mathbf{y}_1,\dots,\mathbf{y}_m) \in \mathbb{R}^{md}\ \vert \ \mathbf{y}_i=(y_{i1},\dots,y_{id}) \in \mathbb{R}^{d},\ \| \mathbf{y}_i \| \leq 1, \ i \in [m] }.
\] For any $\varepsilon>0$, we consider the smooth version $r_{\varepsilon}$ of $r$ by perturbing the dual form \eqref{eq:Rx} as follows: \begin{equation}\label{eq:r_eps_def} r_{\varepsilon}(\mathbf{x}) = \max_{\mathbf{y} \in \mathcal{Y}} \ \langle \mathbf{g}(\mathbf{x}), \mathbf{y} \rangle - \frac{\varepsilon}{2} \|\mathbf{y}\|^2. \end{equation} Then one can readily show that \begin{equation}\label{eq:r_eps_bound} r_{\varepsilon}(\mathbf{x})\leq r(\mathbf{x}) \leq r_{\varepsilon}(\mathbf{x}) +\frac{m\varepsilon}{2},\quad \forall\, \mathbf{x} \in \mathbb{R}^n. \end{equation} Note that the perturbed dual form in \eqref{eq:r_eps_def} has a closed form solution: denoting \begin{equation}\label{eq:y_eps} \mathbf{y}_\varepsilon^* = \argmax_{\mathbf{y} \in \mathcal{Y}}\ \langle \mathbf{g}(\mathbf{x}) , \mathbf{y} \rangle - \frac{\varepsilon}{2} \| \mathbf{y} \|^2, \end{equation} then solving \eqref{eq:y_eps}, we obtain the closed form of $\mathbf{y}_\varepsilon^*=((\mathbf{y}_\varepsilon^*)_1,\dots,(\mathbf{y}_\varepsilon^*)_m)$ where \begin{equation}\label{eq:y_eps_sol} (\mathbf{y}_\varepsilon^*)_i = \begin{cases} \frac{1}{\varepsilon} \mathbf{g}_i(\mathbf{x}), & \mbox{if} \ \|\mathbf{g}_i(\mathbf{x}) \| \leq \varepsilon, \\ \frac{\mathbf{g}_i(\mathbf{x})}{\|\mathbf{g}_i(\mathbf{x})\|}, & \mbox{otherwise}, \end{cases}\qquad \mbox{for}\ i\in [m]. \end{equation} Plugging \eqref{eq:y_eps_sol} back into \eqref{eq:r_eps_def}, we have \begin{equation}\label{eq:r_eps} r_{\varepsilon}(\mathbf{x}) = \sum_{i \in I_0} \frac{1}{2\varepsilon} \|\mathbf{g}_i(\mathbf{x})\|^2 + \sum_{i \in I_1} \del[2]{\|\mathbf{g}_i(\mathbf{x})\| - \frac{\varepsilon}{2} }, \end{equation} where the index set $I_0$ and its complement $I_1$ at $\mathbf{x}$ for the given $\mathbf{g}$ and $\varepsilon$ are defined by \begin{equation*} I_0 = \{ i \in [m] \ \vert \ \|\mathbf{g}_i(\mathbf{x})\| \leq \varepsilon \}, \ \ \ I_1 = [m] \setminus I_0. 
\end{equation*} Moreover, it is easy to show from \eqref{eq:r_eps} that \begin{equation}\label{eq:d_r_eps} \nabla r_{\varepsilon}(\mathbf{x}) = \nabla \mathbf{g}(\mathbf{x})^{\top} \mathbf{y}_{\varepsilon}^* = \sum_{i \in I_0} \nabla \mathbf{g}_i(\mathbf{x})^{\top} \frac{\mathbf{g}_i(\mathbf{x})}{\varepsilon} + \sum_{i \in I_1} \nabla \mathbf{g}_i(\mathbf{x})^{\top} \frac{\mathbf{g}_i(\mathbf{x})}{\|\mathbf{g}_i(\mathbf{x})\|} , \end{equation} where $\nabla \mathbf{g}_i(\mathbf{x})\in \mathbb{R}^{d \times n}$ is the Jacobian of $\mathbf{g}_i$ at $\mathbf{x}$. The smoothing technique above yields a smooth approximation of the nonsmooth function $r(\mathbf{x})$, which allows for rigorous analysis of iteration complexity and provable asymptotic convergence to the original nonsmooth problem \eqref{eq:phi}, as we will show in Section \ref{subsec:convergence}. \subsection{Proposed Descent Algorithm} \label{subsec:alg} In this subsection, we propose a novel descent type algorithm for solving the minimization problem \eqref{eq:phi} with the regularization $r$ defined in \eqref{eq:r}. The idea is to apply a modified gradient descent algorithm to minimize the objective function $\phi$ with the nonsmooth $r$ replaced by the smooth $r_{\varepsilon}$ as follows: \begin{equation}\label{eq:phi_eps} \phi_\varepsilon(\mathbf{x}) := f(\mathbf{x}) + r_{\varepsilon} (\mathbf{x}), \end{equation} with $\varepsilon$ automatically decreasing to $0$ as the iteration progresses. Note that $\phi_{\varepsilon}$ in \eqref{eq:phi_eps} is differentiable since both $\nabla f$ and $\nabla r_{\varepsilon}$ (defined in \eqref{eq:d_r_eps}) exist. Moreover, $\phi_{\varepsilon}(\mathbf{x}) \le \phi(\mathbf{x}) \le \phi_{\varepsilon}(\mathbf{x}) + \frac{m \varepsilon}{2}$ for any $\mathbf{x} \in \mathcal{X}$ due to \eqref{eq:r_eps_bound}. 
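To make the smoothing concrete, the sketch below instantiates $r_{\varepsilon}$, the closed-form dual solution \eqref{eq:y_eps_sol}, and the gradient \eqref{eq:d_r_eps} for a hypothetical \emph{linear} feature map $\mathbf{g}(\mathbf{x}) = W\mathbf{x}$ (a stand-in for the learned network), and checks the sandwich bound \eqref{eq:r_eps_bound} as well as the gradient against a finite difference.

```python
import numpy as np

# Sketch of the smoothed regularizer with a hypothetical linear feature map
# g(x) = Wx (a stand-in for the learned network); m feature vectors of length d.
rng = np.random.default_rng(1)
n, m, d = 10, 4, 3
W = rng.standard_normal((m * d, n))

def g(x):                         # stacked feature vectors g_1(x), ..., g_m(x)
    return (W @ x).reshape(m, d)

def r(x):                         # nonsmooth l_{2,1} regularizer
    return float(np.sum(np.linalg.norm(g(x), axis=1)))

def r_eps(x, eps):                # smoothed (Huber-like) value per feature
    norms = np.linalg.norm(g(x), axis=1)
    return float(np.sum(np.where(norms <= eps, norms**2 / (2 * eps),
                                 norms - eps / 2)))

def grad_r_eps(x, eps):           # gradient via the closed-form dual y*
    G = g(x)
    norms = np.linalg.norm(G, axis=1)
    y = G / np.maximum(norms, eps)[:, None]   # y_i = g_i/eps or g_i/||g_i||
    return W.T @ y.ravel()        # = sum_i (Jacobian of g_i)^T y_i since g is linear

x, eps = rng.standard_normal(n), 0.5
# Sandwich bound: r_eps(x) <= r(x) <= r_eps(x) + m*eps/2.
assert r_eps(x, eps) <= r(x) <= r_eps(x, eps) + m * eps / 2
# Central finite difference agrees with the closed-form partial derivative.
h = 1e-6
e0 = np.zeros(n)
e0[0] = 1.0
fd = (r_eps(x + h * e0, eps) - r_eps(x - h * e0, eps)) / (2 * h)
assert abs(fd - grad_r_eps(x, eps)[0]) < 1e-4
```

For the learned nonlinear $\mathbf{g}$ of the paper, only the Jacobian-transpose product in `grad_r_eps` would change; the dual formula for $\mathbf{y}_\varepsilon^*$ is identical.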
In light of the substantial improvement in practical performance by ResNet \cite{ResNet}, we choose to split $f$ and $r_{\varepsilon}$ and perform two residual type updates as follows: In the $k$-th iteration with $\varepsilon = \varepsilon_k >0$, we first compute \begin{equation} \label{eq:z} \mathbf{z}_{k+1} = \mathbf{x}_k - \alpha_k \nabla f(\mathbf{x}_k), \end{equation} where $\alpha_k$ is the step size to be specified later. Then we compute two candidates for $\mathbf{x}_{k+1}$, denoted by $\mathbf{u}_{k+1}$ and $\mathbf{v}_{k+1}$, as follows: \begin{subequations} \begin{align} \mathbf{u}_{k+1} &= \argmin_{\mathbf{x}}\ \langle \nabla f(\mathbf{x}_{k}) , \mathbf{x} - \mathbf{x}_{k}\rangle + \frac{1}{2\alpha_k} \| \mathbf{x} - \mathbf{x} _{k} \| ^2 + \langle \nabla r_{\varepsilon_{k}} (\mathbf{z}_{k+1}) , \mathbf{x} - \mathbf{z}_{k+1}\rangle + \frac{1}{2\beta_k} \| \mathbf{x} - \mathbf{z} _{k+1} \| ^2, \label{eq:u} \\ \mathbf{v}_{k+1} &= \argmin_{\mathbf{x}}\ \langle \nabla f(\mathbf{x}_{k}) , \mathbf{x} - \mathbf{x}_{k}\rangle + \langle \nabla r_{\varepsilon_{k}} (\mathbf{x}_{k}) , \mathbf{x} - \mathbf{x}_{k}\rangle + \frac{1}{2\alpha_k} \| \mathbf{x} - \mathbf{x} _{k} \| ^2, \label{eq:v} \end{align} \end{subequations} where $\beta_k$ is another step size along with $\alpha_k$. Note that both minimization problems in \eqref{eq:u} and \eqref{eq:v} have closed form solutions: \begin{subequations} \begin{align} \mathbf{u}_{k+1} &= \mathbf{z}_{k+1} - \tau_k \nabla r_{\varepsilon_{k}} (\mathbf{z}_{k+1}) \label{eq:u_closed} \\ \mathbf{v}_{k+1} &= \mathbf{z}_{k+1} - \alpha_k \nabla r_{\varepsilon_{k}} (\mathbf{x}_{k}) \label{eq:v_closed} \end{align} \end{subequations} where $\nabla r_{\varepsilon_{k}} $ is defined in \eqref{eq:d_r_eps} and $\tau_k = \frac{\alpha_k \beta_k}{\alpha_k + \beta_k}$. 
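The two candidate updates admit a direct sanity check: $\mathbf{u}_{k+1}$ from \eqref{eq:u_closed} must zero the gradient of the quadratic model in \eqref{eq:u}. The sketch below verifies this with toy, hypothetical surrogates (a quadratic $f$ and a linear feature map with scalar features).

```python
import numpy as np

# One LDA update with toy hypothetical surrogates: quadratic f and a linear
# feature map with scalar features (d = 1), so grad r_eps has a closed form.
rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
W = rng.standard_normal((2 * n, n))
eps, alpha, beta = 0.1, 0.05, 0.02

grad_f = lambda x: A.T @ (A @ x - b)
def grad_r_eps(x):
    G = W @ x
    return W.T @ (G / np.maximum(np.abs(G), eps))

x_k = rng.standard_normal(n)
z = x_k - alpha * grad_f(x_k)               # gradient step on f
tau = alpha * beta / (alpha + beta)
u = z - tau * grad_r_eps(z)                 # candidate u_{k+1}
v = z - alpha * grad_r_eps(x_k)             # candidate v_{k+1}

# u must zero the gradient of the quadratic model defining it:
# grad f(x_k) + (u - x_k)/alpha + grad r_eps(z) + (u - z)/beta = 0.
model_grad = grad_f(x_k) + (u - x_k) / alpha + grad_r_eps(z) + (u - z) / beta
assert np.linalg.norm(model_grad) < 1e-9
```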
Then we choose, between $\mathbf{u}_{k+1}$ and $\mathbf{v}_{k+1}$, the one with the smaller function value $\phi_{\varepsilon_k}$ as the next iterate $\mathbf{x}_{k+1}$:
\begin{equation}\label{eq:x_select}
\mathbf{x}_{k+1} =
\begin{cases}
\mathbf{u}_{k+1} &\text{if $\phi_{\varepsilon_k}(\mathbf{u}_{k+1}) \leq \phi_{\varepsilon_k}(\mathbf{v}_{k+1})$}, \\
\mathbf{v}_{k+1} & \text{otherwise}.
\end{cases}
\end{equation}
This algorithm is summarized in Algorithm \ref{alg:lda}. Line 7 of Algorithm \ref{alg:lda} presents a \emph{reduction criterion}: if $\| \nabla \phi_{\varepsilon_{k}}(\mathbf{x}_{k+1}) \| < \sigma \gamma \varepsilon_{k}$ is satisfied, then the smoothing parameter $\varepsilon_{k}$ is shrunk by the factor $\gamma \in (0,1)$. In Algorithm \ref{alg:lda}, $\mathbf{u}_{k+1}$ in \eqref{eq:u} can be considered as the convex combination of two successive residual-type updates: the first update is $\mathbf{z}_{k+1} = \mathbf{x}_k - \alpha_k \nabla f(\mathbf{x}_k)$ as defined by \eqref{eq:z}---a gradient descent step on $f$ at $\mathbf{x}_k$; the second is $\mathbf{p}_{k+1} = \mathbf{z}_{k+1} - \beta_k \nabla r_{\varepsilon_{k}}(\mathbf{z}_{k+1})$---another gradient descent step on $r_{\varepsilon_{k}}$ at $\mathbf{z}_{k+1}$; and finally $\mathbf{u}_{k+1} = \frac{\beta_k}{\alpha_k+\beta_k} \mathbf{z}_{k+1} + \frac{\alpha_k}{\alpha_k + \beta_k} \mathbf{p}_{k+1}$---the convex combination of $\mathbf{z}_{k+1}$ and $\mathbf{p}_{k+1}$. In this way, $f$ and $r_{\varepsilon_{k}}$ are separated so that each can participate in a residual-type update, which has proven very effective for imaging applications \cite{ResNet}. Note that $\mathbf{v}_{k+1}$ in \eqref{eq:v} is the standard gradient descent step on $\phi_{\varepsilon_{k}}$ at $\mathbf{x}_k$, which safeguards the convergence of the algorithm.
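The convex-combination reading of $\mathbf{u}_{k+1}$ above can be verified with a two-line computation (the vector `gr` below is a random stand-in for $\nabla r_{\varepsilon_k}(\mathbf{z}_{k+1})$).

```python
import numpy as np

# u_{k+1} = z_{k+1} - tau_k * gr equals the convex combination of z_{k+1} and
# p_{k+1} = z_{k+1} - beta_k * gr; `gr` is a random stand-in for the gradient.
rng = np.random.default_rng(3)
z = rng.standard_normal(5)
gr = rng.standard_normal(5)
alpha, beta = 0.07, 0.02
tau = alpha * beta / (alpha + beta)

p = z - beta * gr
u_combo = (beta * z + alpha * p) / (alpha + beta)
u_closed = z - tau * gr
assert np.allclose(u_combo, u_closed)
```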
\begin{algorithm}[t]
\caption{Learnable Descent Algorithm (LDA) for the Nonsmooth Nonconvex Problem \eqref{eq:phi}}
\label{alg:lda}
\begin{algorithmic}[1]
\STATE \textbf{Input:} Initial $\mathbf{x}_0$, $0<\gamma<1$, and $\varepsilon_0,\sigma>0$. Maximum number of iterations $K$ or tolerance $\epsilon_{\mathrm{tol}}>0$.
\FOR{$k=0,1,2,\dots,K$}
\STATE $\mathbf{z}_{k+1} = \mathbf{x}_k - \alpha_k \nabla f(\mathbf{x}_k)$
\STATE $\mathbf{u}_{k+1} = \mathbf{z}_{k+1} - \tau_k \nabla r_{\varepsilon_{k}} (\mathbf{z}_{k+1})$
\STATE $\mathbf{v}_{k+1} = \mathbf{z}_{k+1} - \alpha_k \nabla r_{\varepsilon_{k}} (\mathbf{x}_k)$
\STATE $\mathbf{x}_{k+1} = \begin{cases} \mathbf{u}_{k+1} & \mbox{if}\ \phi_{\varepsilon_k}(\mathbf{u}_{k+1}) \leq \phi_{\varepsilon_k}(\mathbf{v}_{k+1}) \\ \mathbf{v}_{k+1} & \mbox{otherwise} \end{cases}$
\STATE If $\|\nabla \phi_{\varepsilon_k}(\mathbf{x}_{k+1})\| < \sigma \gamma {\varepsilon_k}$, set $\varepsilon_{k+1}= \gamma {\varepsilon_k}$; otherwise, set $\varepsilon_{k+1}={\varepsilon_k}$.
\STATE If $\sigma {\varepsilon_{k+1}} < \epsilon_{\mathrm{tol}}$, terminate.
\ENDFOR
\STATE \textbf{Output:} $\mathbf{x}_{k+1}$.
\end{algorithmic}
\end{algorithm}
\subsection{Convergence and Complexity Analysis}
\label{subsec:convergence}
In this subsection, we provide a comprehensive convergence analysis, including the iteration complexity, of the proposed LDA (Algorithm \ref{alg:lda}). Since the objective function in \eqref{eq:phi} is nonsmooth and nonconvex, we adopt the notion of \emph{Clarke subdifferential} \cite{Clarke83} (also called the \emph{limiting subdifferential} or simply \emph{subdifferential}) to characterize the optimality of solutions.
\begin{definition}[Clarke subdifferential]
\label{def:clarke_subdiff}
Suppose that $f: \mathbb{R}^{n} \rightarrow(-\infty,+\infty]$ is locally Lipschitz.
The Clarke subdifferential of $f$ at $\mathbf{x}$ is defined as
\[ \partial f(\mathbf{x}) := \cbr[2]{\mathbf{w} \in \mathbb{R}^{n}\ \bigg\vert \ \langle \mathbf{w}, \mathbf{v} \rangle \leq \limsup_{\mathbf{z} \rightarrow \mathbf{x},\, t \downarrow 0} \frac{f(\mathbf{z}+t \mathbf{v})-f(\mathbf{z})}{t}, \ \ \forall\, \mathbf{v} \in \mathbb{R}^{n} }. \]
\end{definition}
\begin{definition}[Clarke stationary point]
\label{def:clarke_cp}
For a locally Lipschitz function $f$, a point $\mathbf{x} \in \mathbb{R}^n$ is called a Clarke stationary point of $f$ if $0 \in \partial f(\mathbf{x})$.
\end{definition}
Note that for a differentiable function $f$, we have $\partial f (\mathbf{x})= \{\nabla f(\mathbf{x})\}$. For the nondifferentiable (nonsmooth) function $r$ defined in \eqref{eq:r}, we can also compute its Clarke subdifferential, as in the following lemma.
%
\begin{lemma}\label{lem:r_subdiff}
Let $r(\mathbf{x})$ be defined as in \eqref{eq:r}. Then the Clarke subdifferential of $r$ at $\mathbf{x}$ is
\begin{equation}\label{eq:r_subdiff}
\partial r(\mathbf{x}) = \cbr[2]{\sum_{i\in I_0}\nabla \mathbf{g}_i(\mathbf{x})^{\top} \mathbf{w}_i + \sum_{i \in I_1}\nabla \mathbf{g}_i(\mathbf{x})^{\top}\frac{\mathbf{g}_i(\mathbf{x})}{\|\mathbf{g}_i(\mathbf{x})\|} \ \bigg\vert \ \mathbf{w}_i \in \mathbb{R}^d, \ \|\Pi(\mathbf{w}_i; \mathcal{C}(\nabla \mathbf{g}_i(\mathbf{x})))\|\leq 1,\ \forall\, i \in I_0 } ,
\end{equation}
where $I_0=\{i \in [m] \ | \ \|\mathbf{g}_i(\mathbf{x}) \|= 0 \}$, $I_1=[m] \setminus I_0$, and $\Pi(\mathbf{w};\mathcal{C}(\mathbf{A}))$ is the projection of $\mathbf{w}$ onto the column space of $\mathbf{A}$.
\end{lemma}
\begin{proof}
We observe that $r(\mathbf{x}) = \sum_{i=1}^m r_i(\mathbf{x})$ where $r_i(\mathbf{x}):= \| \mathbf{g}_i (\mathbf{x})\|$. Hence we can consider the Clarke subdifferential of each $r_i(\mathbf{x})$.
If $i\in I_1$, then it is clear that $r_i(\mathbf{x})$ is differentiable and $\partial r_i(\mathbf{x}) = \nabla \mathbf{g}_i(\mathbf{x})^{\top} \frac{\mathbf{g}_i(\mathbf{x})}{\|\mathbf{g}_i(\mathbf{x})\|}$. If $i\in I_0$, then for any $\mathbf{v}$, we have
\begin{align*}
&\abs[3]{\frac{r_i(\mathbf{z} + t \mathbf{v}) - r_i(\mathbf{z})}{t} - \|\nabla \mathbf{g}_i(\mathbf{x}) \mathbf{v}\| } = \abs[3]{\frac{\|\mathbf{g}_i(\mathbf{z} + t \mathbf{v})\| - \|\mathbf{g}_i(\mathbf{z})\|}{t} - \|\nabla \mathbf{g}_i(\mathbf{x}) \mathbf{v}\| } \\
\le\ & \frac{\|\mathbf{g}_i(\mathbf{z} + t \mathbf{v}) - \mathbf{g}_i(\mathbf{z}) -t \nabla \mathbf{g}_i(\mathbf{x}) \mathbf{v}\|}{t} = \norm[3]{ \frac{1}{t} \int_0^t \nabla \mathbf{g}_i(\mathbf{z} + s \mathbf{v}) \mathbf{v} \dif s - \nabla \mathbf{g}_i(\mathbf{x}) \mathbf{v}} \\
\le\ & \frac{1}{t} \int_0^t \norm[1]{\nabla \mathbf{g}_i (\mathbf{z}+s\mathbf{v})\mathbf{v} - \nabla \mathbf{g}_i(\mathbf{z})\mathbf{v} + \nabla \mathbf{g}_i(\mathbf{z})\mathbf{v} - \nabla \mathbf{g}_i(\mathbf{x}) \mathbf{v} } \dif s \\
\le\ & \frac{1}{t}\int_0^t L_g(\|\mathbf{v}\|^2 s +\|\mathbf{z} - \mathbf{x}\|\|\mathbf{v}\|) \dif s = L_g \del[2]{\frac{t}{2}\|\mathbf{v}\| + \| \mathbf{z} - \mathbf{x} \|}\|\mathbf{v}\| \to 0
\end{align*}
as $(\mathbf{z},t)\to (\mathbf{x},0)$, where the last inequality uses the $L_g$-Lipschitz continuity of $\nabla \mathbf{g}$ in (A2). This implies that
\[ \| \nabla \mathbf{g}_i(\mathbf{x}) \mathbf{v} \| = \lim_{(\mathbf{z},t) \to (\mathbf{x},0)} \frac{r_i(\mathbf{z} + t \mathbf{v}) - r_i(\mathbf{z})}{t} = \limsup_{\mathbf{z} \rightarrow \mathbf{x},\, t \downarrow 0} \frac{r_i(\mathbf{z} + t \mathbf{v}) - r_i(\mathbf{z})}{t}.
\]
Therefore, for any $\mathbf{w} \in \mathbb{R}^d$ satisfying $\|\Pi(\mathbf{w}; \mathcal{C}(\nabla \mathbf{g}_i(\mathbf{x})))\| \le 1$, we have
\begin{align*}
& \langle \nabla \mathbf{g}_i(\mathbf{x})^{\top} \mathbf{w}, \mathbf{v} \rangle = \langle \mathbf{w}, \nabla \mathbf{g}_i(\mathbf{x}) \mathbf{v} \rangle = \langle \Pi(\mathbf{w}; \mathcal{C}(\nabla \mathbf{g}_i(\mathbf{x}))), \nabla \mathbf{g}_i(\mathbf{x}) \mathbf{v} \rangle \le \| \nabla \mathbf{g}_i(\mathbf{x}) \mathbf{v} \| = \limsup_{\mathbf{z} \rightarrow \mathbf{x},\, t \downarrow 0} \frac{r_i(\mathbf{z} + t \mathbf{v}) - r_i(\mathbf{z})}{t}
\end{align*}
where the second equality is due to $\nabla \mathbf{g}_i(\mathbf{x}) \mathbf{v} \in \mathcal{C}(\nabla \mathbf{g}_i(\mathbf{x}))$. On the other hand, for any $\mathbf{w} \in \mathbb{R}^d$ satisfying $\|\Pi(\mathbf{w}; \mathcal{C}(\nabla \mathbf{g}_i(\mathbf{x})))\| > 1$, there exists $\mathbf{v} \in \mathbb{R}^n$ such that $\nabla \mathbf{g}_i(\mathbf{x}) \mathbf{v} = \Pi(\mathbf{w}; \mathcal{C}(\nabla \mathbf{g}_i(\mathbf{x})))$ and
\[ \langle \nabla \mathbf{g}_i(\mathbf{x})^{\top} \mathbf{w}, \mathbf{v} \rangle = \langle \Pi(\mathbf{w}; \mathcal{C}(\nabla \mathbf{g}_i(\mathbf{x}))), \nabla \mathbf{g}_i(\mathbf{x}) \mathbf{v} \rangle = \| \nabla \mathbf{g}_i(\mathbf{x}) \mathbf{v} \|^2 > \| \nabla \mathbf{g}_i(\mathbf{x}) \mathbf{v} \| = \limsup_{\mathbf{z} \rightarrow \mathbf{x},\, t \downarrow 0} \frac{r_i(\mathbf{z} + t \mathbf{v}) - r_i(\mathbf{z})}{t}. \]
Therefore, by Definition \ref{def:clarke_subdiff}, we obtain the Clarke subdifferential $\partial r(\mathbf{x})$ as in \eqref{eq:r_subdiff}.
\end{proof}
%
We immediately have the subdifferential $\partial \phi$ due to \eqref{eq:r_subdiff} and the differentiability of $f$:
\begin{equation}\label{eq:phi_subdiff}
\partial \phi(\mathbf{x}) = \partial r(\mathbf{x}) + \nabla f(\mathbf{x}).
\end{equation}
The following lemma also provides the Lipschitz constant of $\nabla r_{\varepsilon}$.
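The Lipschitz constant $\sqrt{m}L_g + M^2/\varepsilon$ stated in the next lemma can be probed numerically in the hypothetical special case of a linear feature map $\mathbf{g}(\mathbf{x}) = W\mathbf{x}$, for which $L_g = 0$ and $M = \|W\|_2$.

```python
import numpy as np

# Probe the bound ||grad r_eps(x1) - grad r_eps(x2)|| <= (M^2/eps) ||x1 - x2||
# for a hypothetical linear feature map (L_g = 0, M = ||W||_2).
rng = np.random.default_rng(4)
n, m, d = 8, 5, 2
W = rng.standard_normal((m * d, n))
M = np.linalg.norm(W, 2)
eps = 0.3

def grad_r_eps(x):
    G = (W @ x).reshape(m, d)
    norms = np.linalg.norm(G, axis=1)
    y = G / np.maximum(norms, eps)[:, None]
    return W.T @ y.ravel()

L_bound = M**2 / eps                # sqrt(m) * L_g vanishes for linear g
for _ in range(200):
    x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
    lhs = np.linalg.norm(grad_r_eps(x1) - grad_r_eps(x2))
    assert lhs <= L_bound * np.linalg.norm(x1 - x2) + 1e-9
```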
\begin{lemma}\label{lem:r_Lip}
The gradient $\nabla r_{\varepsilon}$ of $r_{\varepsilon}$ defined in \eqref{eq:r_eps_def} is Lipschitz continuous with constant $\sqrt{m} L_g+\frac{M^2}{\varepsilon}$.
\end{lemma}
\begin{proof}
For any $\mathbf{x}_1,\mathbf{x}_2 \in \mathcal{X}$, we first define $\mathbf{y}_1$ and $\mathbf{y}_2$ as follows,
\begin{align*}
\mathbf{y}_1 &=\argmax_{\mathbf{y} \in \mathcal{Y}}\ \langle \mathbf{g}(\mathbf{x}_1),\,\mathbf{y} \rangle -\frac{\varepsilon}{2} \| \mathbf{y} \|^2, \\
\mathbf{y}_2 &=\argmax_{\mathbf{y} \in \mathcal{Y}}\ \langle \mathbf{g}(\mathbf{x}_2),\,\mathbf{y} \rangle -\frac{\varepsilon}{2} \| \mathbf{y} \|^2,
\end{align*}
which are well defined since the maximization problems above are strongly concave in $\mathbf{y}$ and hence have unique solutions. Due to the concavity of these problems and the optimality conditions of $\mathbf{y}_1$ and $\mathbf{y}_2$, we have
\begin{align*}
& \langle \mathbf{g}(\mathbf{x}_1)-\varepsilon \mathbf{y}_1,\,\mathbf{y}_2-\mathbf{y}_1\rangle\leq 0, \\
& \langle \mathbf{g}(\mathbf{x}_2)-\varepsilon \mathbf{y}_2,\,\mathbf{y}_1-\mathbf{y}_2\rangle\leq 0.
\end{align*}
Adding the two inequalities above yields
\begin{align*}
\langle \mathbf{g}(\mathbf{x}_1)-\mathbf{g}(\mathbf{x}_2)-\varepsilon \left(\mathbf{y}_1-\mathbf{y}_2\right),\,\mathbf{y}_2-\mathbf{y}_1\rangle\leq 0,
\end{align*}
which, together with the Cauchy--Schwarz inequality, implies
\begin{equation*}
\varepsilon\, \| \mathbf{y}_2-\mathbf{y}_1 \| ^2 \le \langle \mathbf{g}(\mathbf{x}_1)-\mathbf{g}(\mathbf{x}_2),\,\mathbf{y}_1-\mathbf{y}_2\rangle \le \| \mathbf{g}(\mathbf{x}_1)-\mathbf{g}(\mathbf{x}_2) \| \cdot \| \mathbf{y}_1-\mathbf{y}_2 \|.
\end{equation*}
Therefore, $\varepsilon\, \| \mathbf{y}_1-\mathbf{y}_2 \| \le \| \mathbf{g}(\mathbf{x}_1)-\mathbf{g}(\mathbf{x}_2) \|$. Recall that $\nabla r_{\varepsilon}(\mathbf{x}_j) = \nabla \mathbf{g}(\mathbf{x}_j)^{\top} \mathbf{y}_j$ for $j=1,2$.
Therefore, we have
\begin{align*}
& \,\| \nabla r_{\varepsilon}(\mathbf{x}_1) - \nabla r_{\varepsilon}(\mathbf{x}_2) \| = \left \| \nabla \mathbf{g}(\mathbf{x}_1)^\top \mathbf{y}_1-\nabla \mathbf{g}(\mathbf{x}_2)^\top \mathbf{y}_2\right \| \\
= \ & \left \| \left(\nabla \mathbf{g}(\mathbf{x}_1)^\top \mathbf{y}_1-\nabla \mathbf{g}(\mathbf{x}_2)^\top \mathbf{y}_1\right)+\left(\nabla \mathbf{g}(\mathbf{x}_2)^\top \mathbf{y}_1-\nabla \mathbf{g}(\mathbf{x}_2)^\top \mathbf{y}_2\right)\right \| \\
\le \ & \left \| \left(\nabla \mathbf{g}(\mathbf{x}_1) -\nabla \mathbf{g}(\mathbf{x}_2) \right)^\top \mathbf{y}_1 \right \| + \| \nabla \mathbf{g}(\mathbf{x}_2) \| \left \| \mathbf{y}_1- \mathbf{y}_2 \right \| \\
\le \ & \left \| \nabla \mathbf{g}(\mathbf{x}_1) -\nabla \mathbf{g}(\mathbf{x}_2) \right \| \cdot \| \mathbf{y}_1 \| + \frac{1}{\varepsilon}\cdot \| \nabla \mathbf{g}(\mathbf{x}_2) \| \cdot \| \mathbf{g}(\mathbf{x}_1)-\mathbf{g}(\mathbf{x}_2) \| \\
\le \ & L_g \| \mathbf{x}_1 - \mathbf{x}_2 \| \cdot \| \mathbf{y}_1 \| + \frac{M}{\varepsilon}\cdot \| \nabla \mathbf{g}(\mathbf{x}_2) \| \cdot \| \mathbf{x}_1 - \mathbf{x}_2 \| ,
\end{align*}
where the last inequality is due to the $L_g$-Lipschitz continuity of $\nabla \mathbf{g}$ for the first term, and, for the second term, the bound $\|\mathbf{g}(\mathbf{x}_1) - \mathbf{g}(\mathbf{x}_2)\| = \norm[1]{\int_0^1 \nabla \mathbf{g}(\mathbf{x}_2 + s(\mathbf{x}_1 - \mathbf{x}_2)) (\mathbf{x}_1-\mathbf{x}_2) \dif s} \le M \| \mathbf{x}_1 - \mathbf{x}_2 \|$, which follows from the fundamental theorem of calculus and $\sup_{\mathbf{x} \in \mathcal{X}} \| \nabla \mathbf{g}(\mathbf{x}) \| \leq M$.
Since $\max_{\mathbf{y} \in \mathcal{Y}} \| \mathbf{y} \| = \sqrt{m}$, we have \[ \| \nabla r_{\varepsilon}(\mathbf{x}_1) - \nabla r_{\varepsilon}(\mathbf{x}_2) \| \le \left \| \nabla \mathbf{g}(\mathbf{x}_1)^\top \mathbf{y}_1-\nabla \mathbf{g}(\mathbf{x}_2)^\top \mathbf{y}_2\right \| \leq\del[2]{\sqrt{m} L_g+\frac{M^2}{\varepsilon} }\, \| \mathbf{x}_1-\mathbf{x}_2 \|, \] which completes the proof. \end{proof} Now we return to Algorithm \ref{alg:lda}. We first consider its behavior if a constant $\varepsilon>0$ is used, i.e., an iterative scheme that only executes its Lines 3--6. \begin{lemma}\label{lem:inner} Let $\varepsilon, \eta>0$, $\delta_1 \ge \delta_2 > 1$ and $\mathbf{x}_0 \in \mathcal{X}$ be arbitrary. Suppose $\{\mathbf{x}_{k}\}$ is the sequence generated by repeating Lines 3--6 of Algorithm \ref{alg:lda} with $\varepsilon_k = \varepsilon$ and step sizes $\frac{1}{\delta_1 L_{\varepsilon}} \le \alpha_k \le \frac{1}{\delta_2 L_{\varepsilon}}$ for all $k\ge 0$, where $L_{\varepsilon} = L_f + \sqrt{m} L_g + \frac{M^2}{\varepsilon}$, and $\phi^*:=\min_{\mathbf{x} \in \mathcal{X}} \phi(\mathbf{x})$. Then the following statements hold: \begin{enumerate} \item $\| \nabla \phi_{\varepsilon}(\mathbf{x}_k) \| \to 0$ as $k\to \infty$. \item $\min\{ k \in \mathbb{N} \ \vert \ \| \nabla \phi_{\varepsilon}(\mathbf{x}_{k+1}) \| \le \eta \} \le \frac{\delta_1 \delta_2 L_{\varepsilon}(2\phi_{\varepsilon}(\mathbf{x}_0) - 2 \phi^* + m\varepsilon)}{(\delta_2 - 1 ) \eta^2}$. \end{enumerate} \end{lemma} \begin{proof} 1. Due to the optimality condition of $\mathbf{v}_{k+1}$ in \eqref{eq:v}, we have \begin{equation}\label{eq:v_opt} \langle\nabla \phi_{\varepsilon}(\mathbf{x}_{k}), \mathbf{v}_{k+1}-\mathbf{x}_{k}\rangle+\frac{1}{2\alpha_k} \| \mathbf{v}_{k+1}-\mathbf{x}_{k} \| ^2 \leq 0. 
\end{equation}
In addition, $\nabla \phi_{\varepsilon}$ is $L_\varepsilon$-Lipschitz continuous due to Lemma \ref{lem:r_Lip}, which implies that
\begin{equation}\label{lipF}
\phi_{\varepsilon}(\mathbf{v}_{k+1}) \leq \phi_{\varepsilon}(\mathbf{x}_{k}) + \langle\nabla \phi_{\varepsilon}(\mathbf{x}_{k}), \mathbf{v}_{k+1}-\mathbf{x}_{k}\rangle+\frac{L_{\varepsilon}}{2} \| \mathbf{v}_{k+1}-\mathbf{x}_{k} \| ^2.
\end{equation}
Combining \eqref{eq:v_opt}, \eqref{lipF} and $\mathbf{v}_{k+1} = \mathbf{x}_k - \alpha_k \nabla \phi_{\varepsilon}(\mathbf{x}_k)$ in \eqref{eq:v_closed} yields
\begin{equation}\label{eq:diff}
\phi_{\varepsilon}(\mathbf{v}_{k+1})-\phi_{\varepsilon}(\mathbf{x}_{k})\leq - \del[2]{\frac{1}{2\alpha_k} -\frac{L_{\varepsilon}}{2}} \| \mathbf{v}_{k+1}-\mathbf{x}_{k} \|^2 = - \frac{\alpha_k(1- \alpha_k L_{\varepsilon})}{2} \| \nabla \phi_{\varepsilon} (\mathbf{x}_k)\|^2 \le 0,
\end{equation}
where we used the fact that $\alpha_k L_{\varepsilon} \le \frac{1}{\delta_2} < 1$ to obtain the last inequality. According to the selection rule \eqref{eq:x_select}, if $\phi_{\varepsilon}(\mathbf{u}_{k+1})\leq \phi_{\varepsilon}(\mathbf{v}_{k+1})$, then $\mathbf{x}_{k+1}=\mathbf{u}_{k+1}$ and $\phi_{\varepsilon}(\mathbf{x}_{k+1})=\phi_{\varepsilon}(\mathbf{u}_{k+1})\leq \phi_{\varepsilon}(\mathbf{v}_{k+1})$; if $\phi_{\varepsilon}(\mathbf{v}_{k+1})< \phi_{\varepsilon}(\mathbf{u}_{k+1})$, then $\mathbf{x}_{k+1}=\mathbf{v}_{k+1}$ and $\phi_{\varepsilon}(\mathbf{x}_{k+1})=\phi_{\varepsilon}(\mathbf{v}_{k+1})$. Therefore, in either case, \eqref{eq:diff} implies $\phi_{\varepsilon}(\mathbf{x}_{k+1}) - \phi_{\varepsilon}(\mathbf{x}_k) \le \phi_{\varepsilon}(\mathbf{v}_{k+1}) - \phi_{\varepsilon}(\mathbf{x}_k) \le 0 $, and hence
\begin{equation} \label{eq:recursive}
\phi_{\varepsilon}(\mathbf{x}_{k+1}) \le \phi_{\varepsilon}(\mathbf{v}_{k+1}) \le \phi_{\varepsilon}(\mathbf{x}_{k}) \le \cdots \le \phi_{\varepsilon}(\mathbf{x}_{0}),
\end{equation}
for all $k\ge 0$.
Moreover, rearranging \eqref{eq:diff} and recalling that $\frac{1}{\delta_1 L_{\varepsilon}} \le \alpha_k \le \frac{1}{\delta_2 L_{\varepsilon}}$ yield
\begin{equation}\label{eq:diff2}
\frac{\delta_2 - 1}{2\delta_1 \delta_2 L_{\varepsilon}} \| \nabla \phi_{\varepsilon} (\mathbf{x}_k)\|^2 \le \frac{\alpha_k(1- \alpha_k L_{\varepsilon})}{2} \| \nabla \phi_{\varepsilon} (\mathbf{x}_k)\|^2 \le \phi_{\varepsilon}(\mathbf{x}_{k})-\phi_{\varepsilon}(\mathbf{v}_{k+1}) \le \phi_{\varepsilon}(\mathbf{x}_{k})-\phi_{\varepsilon}(\mathbf{x}_{k+1}).
\end{equation}
Summing up \eqref{eq:diff2} for $k=0,\dots,K$ and using the fact that $\phi_{\varepsilon}(\mathbf{x}) \ge \phi(\mathbf{x})-\frac{m\varepsilon}{2} \ge \phi^* - \frac{m\varepsilon}{2}$ for every $\mathbf{x} \in \mathcal{X}$, we know that
\begin{equation}
\sum_{k=0}^{K} \| \nabla \phi_{\varepsilon} (\mathbf{x}_k) \| ^2 \leq \frac{2\delta_1 \delta_2 L_{\varepsilon}(\phi_{\varepsilon}(\mathbf{x}_0) - \phi_{\varepsilon}(\mathbf{x}_{K+1}))}{\delta_2 - 1} \le \frac{\delta_1 \delta_2 L_{\varepsilon}(2\phi_{\varepsilon}(\mathbf{x}_0) - 2\phi^* + m \varepsilon)}{\delta_2 - 1}.
\end{equation}
Note that the right-hand side is a finite constant, and hence by letting $K\to\infty$ we know that $\| \nabla \phi_{\varepsilon} (\mathbf{x}_k) \| \to 0$, which proves the first statement.
2. Denote $\kappa := \min\{ k \in \mathbb{N} \ \vert \ \| \nabla \phi_{\varepsilon}(\mathbf{x}_{k+1}) \| \le \eta \}$; then we know that $\| \nabla \phi_{\varepsilon} (\mathbf{x}_{k+1}) \| > \eta$ for all $k \le \kappa-1$. Hence we have
\[ \kappa \eta^2 \le \sum_{k=0}^{\kappa-1} \| \nabla \phi_{\varepsilon} (\mathbf{x}_{k+1}) \|^2 =\sum_{k=1}^{\kappa} \| \nabla \phi_{\varepsilon} (\mathbf{x}_k) \| ^2 \leq \frac{\delta_1 \delta_2 L_{\varepsilon}(2\phi_{\varepsilon}(\mathbf{x}_0) - 2\phi^* + m \varepsilon)}{\delta_2 - 1}, \]
which implies the second statement.
\end{proof}
Now we consider the complete version of Algorithm \ref{alg:lda}.
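The behavior described by Lemma \ref{lem:inner} can be observed on a toy convex instance (quadratic $f$ and a linear feature map, both hypothetical stand-ins for the learned model): repeating Lines 3--6 with a fixed $\varepsilon$ decreases $\phi_{\varepsilon}$ monotonically and drives $\|\nabla\phi_{\varepsilon}\|$ toward zero.

```python
import numpy as np

# Repeat Lines 3-6 with fixed eps on a toy instance: phi_eps is monotonically
# nonincreasing and the gradient norm shrinks. A, b, W are hypothetical stand-ins.
rng = np.random.default_rng(5)
n, m = 6, 10
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # well-conditioned fidelity
b = rng.standard_normal(n)
W = 0.3 * rng.standard_normal((m, n))               # scalar features (d = 1)
eps = 0.5
L_f = np.linalg.norm(A.T @ A, 2)
M = np.linalg.norm(W, 2)
L_eps = L_f + M**2 / eps          # Lipschitz constant of grad phi_eps (L_g = 0)
alpha = 1.0 / (2.0 * L_eps)       # delta_1 = delta_2 = 2
beta = alpha
tau = alpha * beta / (alpha + beta)

f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
def r_eps(x):
    a = np.abs(W @ x)
    return np.sum(np.where(a <= eps, a**2 / (2 * eps), a - eps / 2))
def grad_r_eps(x):
    G = W @ x
    return W.T @ (G / np.maximum(np.abs(G), eps))
phi_eps = lambda x: f(x) + r_eps(x)
grad_phi_eps = lambda x: grad_f(x) + grad_r_eps(x)

x = rng.standard_normal(n)
vals = [phi_eps(x)]
for _ in range(2000):
    z = x - alpha * grad_f(x)
    u = z - tau * grad_r_eps(z)
    v = z - alpha * grad_r_eps(x)
    x = u if phi_eps(u) <= phi_eps(v) else v
    vals.append(phi_eps(x))

assert all(w2 <= w1 + 1e-12 for w1, w2 in zip(vals, vals[1:]))  # monotone
assert np.linalg.norm(grad_phi_eps(x)) < 1e-6                   # gradient -> 0
```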
The first result we have is on the monotonicity of $\phi_{\varepsilon_{k}}(\mathbf{x}_{k}) + \frac{m \varepsilon_{k}}{2}$ in $k$.
\begin{lemma} \label{lem:phi_decay}
Suppose that the sequence $\{\mathbf{x}_k\}$ is generated by Algorithm \ref{alg:lda} with $\frac{1}{\delta_1 L_{\varepsilon_k}} \le \alpha_k \le \frac{1}{\delta_2 L_{\varepsilon_k}}$ and any initial $\mathbf{x}_0$. Then for any $k\ge 0$ there is
\begin{equation}\label{eq:phi_decay}
\phi_{\varepsilon_{k+1}}(\mathbf{x}_{k+1}) + \frac{m \varepsilon_{k+1}}{2} \le \phi_{\varepsilon_{k}}(\mathbf{x}_{k+1}) + \frac{m \varepsilon_{k}}{2} \le \phi_{\varepsilon_{k}}(\mathbf{x}_{k}) + \frac{m \varepsilon_{k}}{2}.
\end{equation}
\end{lemma}
\begin{proof}
Due to \eqref{eq:recursive}, the second inequality holds immediately. So we focus on the first inequality. For any $\varepsilon>0$ and $\mathbf{x}$, denote
\begin{equation}\label{eq:rei}
r_{\varepsilon, i}(\mathbf{x}) :=
\begin{cases}
\frac{1}{2 \varepsilon} \| \mathbf{g}_i(\mathbf{x})\|^2, & \mbox{if} \ \|\mathbf{g}_i(\mathbf{x})\| \le \varepsilon, \\
\| \mathbf{g}_i(\mathbf{x})\| - \frac{\varepsilon}{2}, & \mbox{if} \ \|\mathbf{g}_i(\mathbf{x})\| > \varepsilon .
\end{cases}
\end{equation}
Then it is clear that $\phi_{\varepsilon}(\mathbf{x}) = \sum_{i=1}^m r_{\varepsilon,i}(\mathbf{x}) + f(\mathbf{x})$. To prove the first inequality, it suffices to show that, for each $i \in [m]$,
\begin{equation} \label{eq:r_decay}
r_{\varepsilon_{k+1},i}(\mathbf{x}_{k+1}) + \frac{\varepsilon_{k+1}}{2} \le r_{\varepsilon_{k},i}(\mathbf{x}_{k+1}) + \frac{\varepsilon_{k}}{2}.
\end{equation}
If $\varepsilon_{k+1} = \varepsilon_{k}$, then the two quantities above are identical and the first inequality holds. Now suppose $\varepsilon_{k+1} = \gamma \varepsilon_{k} < \varepsilon_k$.
%
We then consider the relation between $\| \mathbf{g}_i(\mathbf{x}_{k+1})\|$, $\varepsilon_{k+1}$ and $\varepsilon_k$ in three cases: (i) If $\| \mathbf{g}_i(\mathbf{x}_{k+1}) \| > \varepsilon_{k} > \varepsilon_{k+1}$, then by the definition in \eqref{eq:rei}, there is
\[ r_{\varepsilon_{k+1},i}(\mathbf{x}_{k+1}) + \frac{\varepsilon_{k+1}}{2} = \| \mathbf{g}_i(\mathbf{x}_{k+1}) \| = r_{\varepsilon_{k},i}(\mathbf{x}_{k+1}) + \frac{\varepsilon_{k}}{2}. \]
(ii) If $\varepsilon_{k} \ge \| \mathbf{g}_i(\mathbf{x}_{k+1}) \| > \varepsilon_{k+1}$, then \eqref{eq:rei} implies
\[ r_{\varepsilon_{k+1},i}(\mathbf{x}_{k+1}) + \frac{\varepsilon_{k+1}}{2} = \| \gbf_i(\mathbf{x}_{k+1}) \| \le \frac{\|\gbf_i(\mathbf{x}_{k+1})\|^2}{2\varepsilon_{k}} + \frac{\varepsilon_{k}}{2} = r_{\varepsilon_{k},i}(\mathbf{x}_{k+1}) + \frac{\varepsilon_{k}}{2}, \]
where the inequality follows from the AM--GM inequality (i.e., $a \le \frac{a^2}{2\varepsilon} + \frac{\varepsilon}{2}$ for any $a \ge 0$ and $\varepsilon > 0$). (iii) If $\varepsilon_{k} > \varepsilon_{k+1} \ge \| \mathbf{g}_i(\mathbf{x}_{k+1}) \|$, then we know that $\frac{\|\gbf_i(\mathbf{x}_{k+1})\|^2}{2\varepsilon} + \frac{\varepsilon}{2}$---as a function of $\varepsilon$---is non-decreasing for all $\varepsilon \ge \| \gbf_i(\mathbf{x}_{k+1})\|$, which implies \eqref{eq:r_decay}. Therefore, in each of the three cases, \eqref{eq:r_decay} holds and hence
\[ r_{\varepsilon_{k+1}}(\mathbf{x}_{k+1}) + \frac{m\varepsilon_{k+1}}{2} = \sum_{i=1}^m \del[2]{ r_{\varepsilon_{k+1},i}(\mathbf{x}_{k+1}) + \frac{\varepsilon_{k+1}}{2}} \le \sum_{i=1}^m \del[2]{ r_{\varepsilon_{k},i}(\mathbf{x}_{k+1}) + \frac{\varepsilon_{k}}{2} } = r_{\varepsilon_{k}}(\mathbf{x}_{k+1}) + \frac{m\varepsilon_{k}}{2}, \]
which implies the first inequality of \eqref{eq:phi_decay}.
\end{proof}
Now we are ready to prove the iteration complexity of Algorithm \ref{alg:lda} for any $\epsilon_{\mathrm{tol}}>0$.
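Before the formal complexity statement, the segment structure can be illustrated by running the full loop (Lines 3--8) on a toy convex instance (all choices below are hypothetical stand-ins): $\varepsilon_k$ stays constant within a segment, is multiplied by $\gamma$ whenever the reduction criterion fires, and Line 8 eventually terminates the loop.

```python
import numpy as np

# Full LDA loop on a toy convex instance (hypothetical quadratic f and linear
# scalar features): eps_k = eps_0 * gamma^l within the l-th segment, shrinks by
# gamma when the criterion of Line 7 fires, and Line 8 stops the iteration.
rng = np.random.default_rng(6)
n, m = 6, 10
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
W = 0.1 * rng.standard_normal((m, n))
L_f = np.linalg.norm(A.T @ A, 2)
M = np.linalg.norm(W, 2)

grad_f = lambda x: A.T @ (A @ x - b)
def grad_r_eps(x, eps):
    G = W @ x
    return W.T @ (G / np.maximum(np.abs(G), eps))
def phi_eps(x, eps):
    a = np.abs(W @ x)
    huber = np.where(a <= eps, a**2 / (2 * eps), a - eps / 2)
    return 0.5 * np.sum((A @ x - b) ** 2) + np.sum(huber)

x = rng.standard_normal(n)
eps, gamma, sigma, tol = 1.0, 0.5, 1.0, 1e-2
n_segments = 0
terminated = False
for k in range(200000):
    L_eps = L_f + M**2 / eps
    alpha = 1.0 / (2.0 * L_eps)
    tau = alpha / 2.0                        # beta_k = alpha_k
    z = x - alpha * grad_f(x)
    u = z - tau * grad_r_eps(z, eps)
    v = z - alpha * grad_r_eps(x, eps)
    x = u if phi_eps(u, eps) <= phi_eps(v, eps) else v
    if np.linalg.norm(grad_f(x) + grad_r_eps(x, eps)) < sigma * gamma * eps:
        eps *= gamma                         # Line 7: shrink the smoothing level
        n_segments += 1
    if sigma * eps < tol:                    # Line 8: stopping test
        terminated = True
        break

assert terminated                            # finite termination
assert abs(eps - gamma**n_segments) < 1e-12  # eps_k = eps_0 * gamma^l
```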
Note that Lemma \ref{lem:inner} implies that the reduction criterion in Line 7 of Algorithm \ref{alg:lda} must be satisfied again within finitely many iterations after it was last met, and hence $\varepsilon_k$ will eventually be small enough to satisfy Line 8 and terminate the algorithm. Let $k_{l}$ be the iteration counter at which the criterion in Line 7 of Algorithm \ref{alg:lda} is met for the $l$-th time (we set $k_{0}=-1$); then we can partition the iteration counters $k=0,1,2,\dots$ into segments accordingly, such that $\varepsilon_k = \varepsilon_{k_{l} +1} = \varepsilon_0 \gamma^l$ for $k=k_{l} +1,\dots,k_{l+1}$ in the $l$-th segment. From Lemma \ref{lem:inner}, we can bound the length of each segment and hence the total number of iterations, which is the sum of these lengths. These results are given in the following theorem.
\begin{theorem} \label{thm:eta_shrinking}
Suppose that $\{\mathbf{x}_{k}\}$ is the sequence generated by Algorithm \ref{alg:lda} with any initial $\mathbf{x}_0$ and step size $\frac{1}{\delta_1 L_{\varepsilon_{k}}} \le \alpha_k \le \frac{1}{\delta_2 L_{\varepsilon_{k}}}$. Then the following statements hold:
\begin{enumerate}
\item The number of iterations, $k_{l+1} - k_{l}$, for the $l$-th segment is bounded by
\begin{equation} \label{eq:inner_bound}
k_{l+1} - k_{l} \le c_1 \gamma^{-2l} + c_2 \gamma^{-3l},
\end{equation}
where the constants $c_1$ and $c_2$ are defined by
\begin{equation}\label{eq:c_def}
c_1 = \frac{\delta_1 \delta_2 (L_f + \sqrt{m}L_g)(2\phi(\mathbf{x}_0) - 2\phi^* + m\varepsilon_0)}{(\delta_2 - 1) \sigma^2 \varepsilon_0^2 \gamma^2},\quad c_2 = \frac{\delta_1 \delta_2 M^2 (2\phi(\mathbf{x}_0) - 2\phi^* + m\varepsilon_0)}{(\delta_2 - 1) \sigma^2 \varepsilon_0^3 \gamma^2}.
\end{equation}
\item The total number of iterations for Algorithm \ref{alg:lda} to terminate with $\epsilon_{\mathrm{tol}}>0$ is bounded by
\begin{equation} \label{eq:total_iter}
\frac{c_1 \sigma^2 \varepsilon_0^2}{ 1 - \gamma^2} \epsilon_{\mathrm{tol}}^{-2} + \frac{c_2 \sigma^3 \varepsilon_0^3}{1-\gamma^3} \epsilon_{\mathrm{tol}}^{-3} - \frac{c_1 \gamma^2 + c_2 \gamma^3 - (c_1 + c_2) \gamma^5}{(1-\gamma^2)(1 - \gamma^3)} = O(\epsilon_{\mathrm{tol}}^{-3}).
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
1. Due to Lemma \ref{lem:phi_decay}, we know that, for all $k\ge0$, there is
\begin{equation}\label{eq:phi_eps_bound}
\phi_{\varepsilon_{k+1}} (\mathbf{x}_{k+1}) + \frac{m \varepsilon_{k+1}}{2} \le \phi_{\varepsilon_{k}}(\mathbf{x}_k) + \frac{m\varepsilon_{k}}{2} \le \cdots \le \phi_{\varepsilon_0}(\mathbf{x}_0) + \frac{m \varepsilon_0}{2} \le \phi(\mathbf{x}_0) + \frac{m \varepsilon_0}{2}
\end{equation}
where we used the fact that $\phi_{\varepsilon}(\mathbf{x}) \le \phi(\mathbf{x})$ for all $\varepsilon>0$ and $\mathbf{x} \in \mathcal{X}$ in the last inequality. Therefore $k_{l+1} - k_{l}$ satisfies the bound in Lemma \ref{lem:inner} (Statement 2) with $\varepsilon = \varepsilon_{k_{l}+1} = \varepsilon_0 \gamma^l$, $\eta = \sigma \gamma \varepsilon_{k_{l}+1} = \sigma \varepsilon_0 \gamma^{l+1}$ and initial $\mathbf{x}_{k_{l}+1}$.
Namely, we have \begin{align*} k_{l+1} - k_{l} \le\ & \frac{2\delta_1 \delta_2 (L_f + \sqrt{m} L_g + \frac{M^2}{\varepsilon}) (\phi_{\varepsilon}(\mathbf{x}_{k_{l} + 1}) - \phi^* + \frac{m\varepsilon}{2})}{(\delta_1 -1) \eta^2} \\ \le\ & \frac{\delta_1 \delta_2 (L_f + \sqrt{m} L_g) (2\phi(\mathbf{x}_{0}) - 2\phi^* + m\varepsilon_0)}{(\delta_1 -1) \eta^2} + \frac{\delta_1 \delta_2 M^2 (2\phi(\mathbf{x}_{0}) - 2\phi^* + m\varepsilon_{0})}{(\delta_1 -1) \varepsilon \eta^2}\\ =\ & c_1 \gamma^{-2l} + c_2 \gamma^{-3l}, \end{align*} where we used \eqref{eq:phi_eps_bound} to obtain $\phi_{\varepsilon}(\mathbf{x}_{k_{l}+1}) + \frac{m \varepsilon}{2} \le \phi(\mathbf{x}_0) + \frac{m \varepsilon_0}{2}$ for $\varepsilon = \varepsilon_{k_{l}+1}$ in the second inequality and the definitions of $c_1$ and $c_2$ in \eqref{eq:c_def} to obtain the last equality. 2. Let $\ell$ be the number of times the reduction criterion in Line 7 of Algorithm \ref{alg:lda} is satisfied before the algorithm is terminated by Line 8. Then $ \sigma \varepsilon_0 \gamma^{\ell-1} \ge \epsilon_{\mathrm{tol}}$. Hence we have $\ell - 1 \le \log_{\gamma}\big( (\sigma\varepsilon_0)^{-1}\epsilon_{\mathrm{tol}} \big)$, which implies that the total number of iterations for Algorithm \ref{alg:lda} to terminate with $\epsilon_{\mathrm{tol}}$ is \begin{equation*} \sum_{l=0}^{\ell - 1} (k_{l+1} - k_{l}) \le \sum_{l=0}^{\ell - 1} (c_1 \gamma^{-2l} + c_2 \gamma^{-3l}) \le \frac{c_1(\gamma^{-2(\ell-1)} - \gamma^2)}{1-\gamma^2} + \frac{c_2(\gamma^{-3(\ell-1)}- \gamma^3)}{1-\gamma^3}, \end{equation*} which readily reduces to \eqref{eq:total_iter}. This completes the proof. \end{proof} If we set $\epsilon_{\mathrm{tol}}=0$ and $K=\infty$ in Algorithm \ref{alg:lda}, then LDA will generate an infinite sequence $\{\mathbf{x}_k\}$. We focus on the subsequence $\{\mathbf{x}_{k_{l}+1}\}$ which selects the iterates when the reduction criterion in Line 7 is satisfied for $k = k_{l}$ and $\varepsilon_k$ is reduced. 
Then we can show that every accumulation point of this subsequence is a Clarke stationary point, as stated in the following theorem. \begin{theorem} Suppose that $\{\mathbf{x}_{k}\}$ is the sequence generated by Algorithm \ref{alg:lda} with any initial $\mathbf{x}_0$ and step size $\frac{1}{\delta_1 L_{\varepsilon_{k}}} \le \alpha_k \le \frac{1}{\delta_2 L_{\varepsilon_{k}}}$, $\epsilon_{\mathrm{tol}}=0$ and $K=\infty$. Let $\{\mathbf{x}_{k_{l}+1}\}$ be the subsequence where the reduction criterion in Line 7 of Algorithm \ref{alg:lda} is met for $k=k_l$ and $l=1,2,\dots$. Then the following statements hold: \begin{enumerate} \item $\{\mathbf{x}_{k_{l} + 1}\}$ has at least one accumulation point. \item Every accumulation point of $\{\mathbf{x}_{k_{l} + 1}\}$ is a Clarke stationary point of \eqref{eq:phi}. \end{enumerate} \end{theorem} \begin{proof} 1. Due to Lemma \ref{lem:phi_decay} and $\phi(\mathbf{x}) \le \phi_{\varepsilon}(\mathbf{x}) + \frac{m \varepsilon}{2}$ for all $\varepsilon>0$ and $\mathbf{x} \in \mathcal{X}$, we know that \begin{equation*} \phi(\mathbf{x}_k) \le \phi_{\varepsilon_{k}}(\mathbf{x}_k) + \frac{m\varepsilon_{k}}{2} \le \cdots \le \phi_{\varepsilon_0}(\mathbf{x}_0) + \frac{m\varepsilon_0}{2} < \infty. \end{equation*} Since $\phi$ is coercive, we know that $\{\mathbf{x}_k\}$ is bounded. Hence $\{ \mathbf{x}_{k_{l}+1}\}$ is also bounded and has at least one accumulation point. 2. Note that $\mathbf{x}_{k_{l}+1}$ satisfies the reduction criterion in Line 7 of Algorithm \ref{alg:lda}, i.e., $\| \nabla \phi_{\varepsilon_{k_{l}}} (\mathbf{x}_{k_{l}+1}) \| \le \sigma \gamma \varepsilon_{k_{l}} = \sigma \varepsilon_0 \gamma^{l+1} \to 0$ as $l \to \infty$. For notational simplicity, we let $\{\mathbf{x}_{j+1}\}$ denote any convergent subsequence of $\{\mathbf{x}_{k_{l} +1}\}$ and $\varepsilon_j$ the corresponding $\varepsilon_{k}$ used in the iteration to generate $\mathbf{x}_{j+1}$. 
Then there exists $\hat{\mathbf{x}} \in \mathcal{X}$ such that $\mathbf{x}_{j+1} \to \hat{\mathbf{x}}$, $\varepsilon_{j} \to 0$, and $\nabla \phi_{\varepsilon_{j}}(\mathbf{x}_{j+1}) \to 0$ as $j\to \infty$. Recall that the Clarke subdifferential of $\phi$ at $\hat{\mathbf{x}}$ is given by \eqref{eq:phi_subdiff}: \begin{equation}\label{eq:d_phi_xhat} \partial \phi(\hat{\mathbf{x}}) = \cbr[2]{\sum_{i \in I_0} \nabla \gbf_i(\hat{\mathbf{x}})^{\top} \mathbf{w}_i + \sum_{ i \in I_1} \nabla \gbf_i(\hat{\mathbf{x}})^{\top} \frac{\gbf_i(\hat{\mathbf{x}})}{\| \gbf_i(\hat{\mathbf{x}}) \|} + \nabla f(\hat{\mathbf{x}}) \ \bigg\vert \ \| \Pi(\mathbf{w}_i; \mathcal{C}(\nabla \gbf_i(\hat{\mathbf{x}}))) \| \le 1,\ \forall\, i\in I_0}, \end{equation} where $I_0 = \{i\in[m]\ \vert \ \|\gbf_i(\hat{\mathbf{x}})\| = 0 \}$ and $I_1 = [m] \setminus I_0$. Then we know that there exists $J$ sufficiently large, such that \[ \varepsilon_{j} < \frac{1}{2}\min \{ \|\gbf_i(\hat{\mathbf{x}})\| \ \vert \ i\in I_1\} \le \frac{1}{2} \|\gbf_i(\hat{\mathbf{x}})\| \le \| \gbf_i(\mathbf{x}_{j+1}) \|, \quad \forall\, j\ge J,\quad \forall\, i\in I_1, \] where we used the facts that $\min \{ \|\gbf_i(\hat{\mathbf{x}})\| \ \vert \ i\in I_1\}>0$ and $\varepsilon_{j} \to 0$ in the first inequality, and $\mathbf{x}_{j+1} \to \hat{\mathbf{x}}$ and the continuity of $\gbf_i$ for all $i$ in the last inequality. Furthermore, we denote \begin{equation*} \mathbf{s}_{j,i} := \begin{cases} \frac{\gbf_i(\mathbf{x}_{j+1})}{\varepsilon_{j}}, & \mbox{if}\ \|\gbf_i(\mathbf{x}_{j+1})\| \le \varepsilon_{j}, \\ \frac{\gbf_i(\mathbf{x}_{j+1})}{\| \gbf_i(\mathbf{x}_{j+1}) \|}, & \mbox{if}\ \|\gbf_i(\mathbf{x}_{j+1})\| > \varepsilon_{j}. 
\end{cases} \end{equation*} Then we have \begin{equation}\label{eq:d_phi_epsj} \nabla \phi_{\varepsilon_{j}}(\mathbf{x}_{j+1}) = \sum_{i \in I_0} \nabla \gbf_i(\mathbf{x}_{j+1})^{\top} \mathbf{s}_{j,i} + \sum_{ i \in I_1} \nabla \gbf_i(\mathbf{x}_{j+1})^{\top} \frac{\gbf_i(\mathbf{x}_{j+1})}{\| \gbf_i(\mathbf{x}_{j+1}) \|} + \nabla f(\mathbf{x}_{j+1}). \end{equation} Comparing \eqref{eq:d_phi_xhat} and \eqref{eq:d_phi_epsj}, we can see that the last two terms on the right hand side of \eqref{eq:d_phi_epsj} converge to those of \eqref{eq:d_phi_xhat}, respectively, due to $\mathbf{x}_{j+1} \to \hat{\mathbf{x}}$ and the continuity of $\gbf_i,\nabla \gbf_i, \nabla f$. Moreover, noting that $\|\Pi(\mathbf{s}_{j,i}; \mathcal{C}(\nabla \gbf_i(\hat{\mathbf{x}})))\| \le \| \mathbf{s}_{j,i} \| \le 1$, we can see that the first term on the right hand side of \eqref{eq:d_phi_epsj} also converges to the set formed by the first term of \eqref{eq:d_phi_xhat} due to the continuity of $\gbf_i$ and $\nabla \gbf_i$. Hence we know that \[ \dist( \nabla \phi_{\varepsilon_{j}}(\mathbf{x}_{j+1}), \partial \phi(\hat{\mathbf{x}})) \to 0, \] as $j \to \infty$. Since $\nabla \phi_{\varepsilon_{j}}(\mathbf{x}_{j+1}) \to 0$ and $\partial \phi(\hat{\mathbf{x}})$ is closed, we conclude that $0 \in \partial \phi(\hat{\mathbf{x}})$. 
\end{proof} \section{Numerical Experiments} \label{sec:experiment} \subsection{Network architecture and parameter setting} \label{subsec:network} Throughout our experiments, we parameterize $\mathbf{g}$ in \eqref{eq:r} as a simple 4-layer convolutional neural network with componentwise activation function $\sigma$ and no bias as follows: \begin{equation}\label{eq:g_net} \begin{cases} \mbox{For any $\mathbf{x}$, compute}\ \mathbf{g}(\mathbf{x}) = \mathbf{h}_4, \\ \mbox{where}\ \mathbf{h}_0 = \mathbf{x},\ \mbox{and} \\ \mathbf{h}_{l} = \sigma(\mathbf{W}_{l-1} \mathbf{h}_{l-1}),\quad l=1,2,3,4,\\ \end{cases} \quad \mbox{and} \quad \sigma (x) = \begin{cases} 0, & \mbox{if} \ x \leq -\delta, \\ \frac{1}{4\delta} x^2 + \frac{1}{2} x + \frac{\delta}{4}, & \mbox{if} \ -\delta < x < \delta, \\ x, & \mbox{if} \ x \geq \delta, \end{cases} \end{equation} where $\delta = 0.01$ in our experiment. In \eqref{eq:g_net}, $\mathbf{W}_l$ represents the convolution in the $l$-th layer. We set the kernel size to $3\times 3 \times d$ for all layers, where $d=32$ is the depth of the convolution kernel. In our experiments, we set stride to 1, and use zero-padding to preserve image size. Then $\mathbf{W}_0$ can be interpreted as a $dn\times n$ matrix with $3^2 \times 32$ learnable parameters and $\mathbf{W}_{l}$ as $dn\times dn$ for $l=1,2, 3$ each with $3^2 \times 32^2$ learnable parameters. In this case $m=n$ is the number of pixels in the image. Note that $\mathbf{g}$ satisfies Assumption A2 due to the boundedness of $\sigma'$ and the fixed $\mathbf{W}_l$ once learned. The regularization is $r(\mathbf{x}) = \|\mathbf{g}(\mathbf{x})\|_{2,1}$ as in \eqref{eq:r}, and $r_{\varepsilon}$ and $\nabla r_{\varepsilon}$ are given in \eqref{eq:r_eps} and \eqref{eq:d_r_eps}, respectively. During training, we prescribe the iteration number $K=15$ for Algorithm \ref{alg:lda} which seems to reach a good compromise between network depth and performance in practice. 
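For concreteness, the piecewise activation $\sigma$ in \eqref{eq:g_net} can be checked numerically with a short sketch (a minimal NumPy sketch; the function name is ours and not part of any released code):

```python
import numpy as np

def smoothed_relu(x, delta=0.01):
    """Componentwise activation from the network definition: zero for
    x <= -delta, identity for x >= delta, and the quadratic blend
    x^2/(4*delta) + x/2 + delta/4 in between."""
    x = np.asarray(x, dtype=float)
    mid = x**2 / (4 * delta) + x / 2 + delta / 4
    return np.where(x <= -delta, 0.0, np.where(x >= delta, x, mid))
```

The quadratic middle piece makes $\sigma$ continuously differentiable with $\sigma'(x) = x/(2\delta) + 1/2$ bounded in $[0,1]$, which is what the boundedness requirement in Assumption A2 relies on.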
We adopt a warm start strategy by first training LDA with $K=3$ for 500 epochs, and then add 2 more phases and train the network for another 200 epochs, and so on, until we finish with $K=15$. The step sizes $\alpha_k$ and $\tau_k$ are also to be learned and allowed to vary across different phases. The threshold $\varepsilon_{k}$ is updated according to Algorithm \ref{alg:lda}, where the starting $\varepsilon_{0}$ is to be learned. We let $\theta$ denote the set of trainable parameters of LOA in Algorithm \ref{alg:lda}, including the convolutions $\{\mathbf{W}_l\}_{l=0}^3$, the step sizes $\{\alpha_k, \tau_k\}_{k = 0}^K$ and the starting $\varepsilon_0$. Given $N$ training data pairs $\{(\mathbf{b}^{(s)}, \hat{\mathbf{x}}^{(s)} )\}_{s=1}^{N}$, where each $\hat{\mathbf{x}}^{(s)}$ is the ground truth data and $\mathbf{b}^{(s)}$ is the measurement of $\hat{\mathbf{x}}^{(s)}$, we solve $\theta$ by minimizing the loss function in \eqref{eq:loa} using the Adam Optimizer with $\beta_1=0.9$ and $\beta_2=0.999$ and Xavier Initializer implemented in TensorFlow \cite{abadi2016tensorflow}. All the experiments are performed on a desktop computer with Intel i7-6700K CPU at 3.40 GHz, 16 GB of memory, and an Nvidia GTX-1080Ti GPU of 11GB graphics card memory. \subsection{Experimental results on image reconstruction} \begin{figure}[t] \centering \includegraphics[width=1.0\textwidth]{fig/independant-converted.pdf} \includegraphics[width=0.8\textwidth]{fig/joint.pdf} \includegraphics[width=0.85\textwidth]{fig/Recovery_Net_Phase_k-converted.pdf} \caption{The flowchart of the block compressed sensing natural image reconstruction. An image is partitioned into patches of size $n$, each of which, denoted by $\hat{\mathbf{x}}$, is compressed by the sampling matrix $\mathbf{A}$ into data $\mathbf{b}= \mathbf{A}\hat{\mathbf{x}} \in \mathbb{R}^{cn}$. 
Top: The compressed data $\mathbf{b}$ is obtained using a prescribed sampling matrix $\mathbf{A}$ and is mapped to $\mathbf{x}_0$ as the initial value of the $K$-phase LDA reconstruction network; Middle: The sampling matrix $\mathbf{A}$ is jointly learned with the network parameters by appending $\mathbf{b}= \mathbf{A}\hat{\mathbf{x}} \in \mathbb{R}^{cn}$ as a linear layer before the LDA; Bottom: Detailed illustration of the $k$-th phase Recovery Net. } \label{fig:flowchart} \end{figure} \subsubsection{Reconstruction on natural image compressed sensing} \label{subsubsec:cs} We first consider the natural image block compressed sensing (block CS) problem \cite{gan2007block} to recover images (image patches) from compressed data. In block CS, an image is partitioned into small blocks of size $n=33\times 33$, each of which (treated as a vector $\hat{\mathbf{x}}\in \mathbb{R}^{n}$) is left-multiplied by a prescribed sensing matrix $\mathbf{A} \in \mathbb{R}^{cn \times n}$ to obtain the compressed data $\mathbf{b}= \mathbf{A} \hat{\mathbf{x}} \in \mathbb{R}^{cn}$, where $c \in (0,1)$ is the compression ratio (CS ratio) \cite{dinh2013measurement,fowler2012block,gan2007block}. The flowchart of this process, including the compressed sensing part using the prescribed sampling matrix $\mathbf{A}$ and the reconstruction by a $K$-phase LDA network, is shown in the top panel of Figure \ref{fig:flowchart}. We test the proposed LDA (Algorithm \ref{alg:lda}) on \textit{91 Images} for training and \textit{Set11} for testing \cite{kulkarni2016reconnet}. The training set $\mathcal{D}$ consists of $N = 88,912$ pairs of the form $(\mathbf{b},\hat{\mathbf{x}}) \in \mathbb{R}^{cn} \times \mathbb{R}^n$, where $\hat{\mathbf{x}}$ is randomly cropped from the images. Experiments are performed with three different CS ratios: $c=10\%, 25\%, 50\%$. 
The matrix $\mathbf{A}$ is set to a random Gaussian matrix whose rows are orthogonalized, and the initial $\mathbf{x}_0$ is set to $\mathbf{x}_0 = \mathbf{Q} \mathbf{b}$, where $\mathbf{Q} = \hat{\mathbf{X}}\mathbf{B}^{\top}(\mathbf{B}\Bbf^{\top})^{-1}$ and $\hat{\mathbf{X}} = [\hat{\mathbf{x}}^{(1)}, ... , \hat{\mathbf{x}}^{(N)}]$, $\mathbf{B} = [\mathbf{b}^{(1)}, ... , \mathbf{b}^{(N)}]$, following \cite{Zhang2018ISTANetIO}. We follow the same criterion when generating the testing data pairs from \textit{Set11}. All testing results are evaluated using the average Peak Signal-to-Noise Ratio (PSNR) of the reconstructions. We compare with two classical image reconstruction methods, i.e., TVAL3 \cite{li2013efficient} and D-AMP \cite{metzler2016denoising}, and three state-of-the-art methods based on deep learning approaches, i.e., IRCNN \cite{zhang2017learning}, ReconNet \cite{kulkarni2016reconnet} and ISTA-Net$^+$ \cite{Zhang2018ISTANetIO}. The comparison results on \textit{Set11} \cite{kulkarni2016reconnet} are reported in Table \ref{tab:cs-result}, where the results of the first five methods are quoted from \cite{Zhang2018ISTANetIO}. The number of learnable parameters in each network is also shown in the last column of Table \ref{tab:cs-result}. In general, a network with more parameters has higher capacity and yields lower reconstruction error (e.g., ISTA-Net$^+$ with varying parameters across different phases yields higher PSNR than that with parameters shared by all phases), but may also suffer from parameter overfitting. As LDA uses the same set of network parameters in all phases, except the step size, which differs in each phase but is only a scalar to be learned, it requires far fewer parameters than IRCNN and ISTA-Net$^+$. From Table \ref{tab:cs-result}, we can see that the proposed LDA achieves higher reconstruction accuracy while using a relatively small number of network parameters. 
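The initialization $\mathbf{x}_0 = \mathbf{Q}\mathbf{b}$ above is simply the least-squares linear estimator fitted on the training pairs. A toy NumPy sketch (with made-up sizes, not the paper's $33\times 33$ blocks) is:

```python
import numpy as np

# Sketch of the initialization x0 = Q b, where Q = Xhat B^T (B B^T)^{-1}
# is the least-squares linear map from the training measurements B = A Xhat
# back to the ground-truth patches Xhat.
rng = np.random.default_rng(0)
n, cn, N = 16, 4, 200                     # toy sizes for illustration only
A = rng.standard_normal((cn, n))          # toy Gaussian sensing matrix
Xhat = rng.standard_normal((n, N))        # toy "ground-truth" patches
B = A @ Xhat                              # compressed training data
Q = Xhat @ B.T @ np.linalg.inv(B @ B.T)   # cf. the definition of Q above
x0 = Q @ B[:, 0]                          # initialization for the first sample
```

Since $\mathbf{Q}$ solves the normal equations, the residual $(\hat{\mathbf{X}} - \mathbf{Q}\mathbf{B})\mathbf{B}^\top$ vanishes, confirming that this initialization is the best linear fit on the training set.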
\begin{table}[t] \caption{Average PSNR (dB) of reconstructions obtained by the compared methods and the proposed LDA on \textit{Set11} dataset with CS ratios 10\%, 25\% and 50\% and the number of learnable network parameters (\#Param) using a prescribed compressed sensing matrix $\mathbf{A}$. Subscript $*$ indicates that network parameters are shared across different phases. } \label{tab:cs-result} \centering \begin{tabular}{lcccr} \toprule \textbf{Method} & \textbf{10\%} & \textbf{25\%} & \textbf{50\%}& \textbf{\#Param}\\ \midrule TVAL3 \cite{li2013efficient} & 22.99 & 27.92& 33.55 & NA \\ D-AMP \cite{metzler2016denoising} & 22.64 & 28.46 &35.92 & NA \\ IRCNN \cite{zhang2017learning} & 24.02 & 30.07 & 36.23& 185,472\\ ReconNet \cite{kulkarni2016reconnet} & 24.28 & 25.60 & 31.50& \textbf{22,914}\\ ISTA-Net$_*^+$ \cite{Zhang2018ISTANetIO} & 26.51 & 32.08 & 37.59& 37,450\\ ISTA-Net$^+$ \cite{Zhang2018ISTANetIO} & 26.64 & 32.57 & 38.07& 336,978\\ \textbf{LDA} & \textbf{27.42} & \textbf{32.92} & \textbf{38.50}& {27,967}\\ \bottomrule \end{tabular} \end{table} \begin{figure}[ht] \centering \subfigure{\includegraphics[width=0.22\textwidth]{fig/Monarch.pdf}\label{gt_butterfly}} \subfigure{\includegraphics[width=0.22\textwidth]{fig/CS1.pdf}} \subfigure{\includegraphics[width=0.22\textwidth]{fig/SCS1.pdf}} \subfigure{\includegraphics[width=0.22\textwidth]{fig/LDA.pdf}} \\ \setcounter{subfigure}{0} \subfigure[\scriptsize{Reference}]{\includegraphics[width=0.22\textwidth]{fig/Monarch-detail.pdf}} \subfigure[\scriptsize{CS-Net (28.31)}]{\includegraphics[width=0.22\textwidth]{fig/CS1-detail.pdf}} \subfigure[\scriptsize{SCS-Net (28.88)}]{\includegraphics[width=0.22\textwidth]{fig/SCS1-detail.pdf}} \subfigure[\scriptsize{LDA (30.54)}]{\includegraphics[width=0.22\textwidth]{fig/LDA1-detail.pdf}} \caption{Block CS reconstruction of butterfly image with CS ratio 10\% obtained by CS-Net, SCS-Net and the proposed LDA. 
Images in the bottom row are zoomed-in views of the corresponding ones in the top row. PSNR values are shown in parentheses.} \label{fig:butteryfly} \subfigure{\includegraphics[width=0.22\textwidth]{fig/lena256.pdf}\label{gt_lena}} \subfigure{\includegraphics[width=0.22\textwidth]{fig/CS10.pdf}} \subfigure{\includegraphics[width=0.22\textwidth]{fig/SCS10.pdf}} \subfigure{\includegraphics[width=0.22\textwidth]{fig/LDA10.pdf}} \\ \setcounter{subfigure}{0} \subfigure[\scriptsize{Reference}]{\includegraphics[width=0.22\textwidth]{fig/lena256-detail.pdf}} \subfigure[\scriptsize{CS-Net (28.97)}]{\includegraphics[width=0.22\textwidth]{fig/CS10-detail.pdf}} \subfigure[\scriptsize{SCS-Net (29.29)}]{\includegraphics[width=0.22\textwidth]{fig/SCS10-detail.pdf}} \subfigure[\scriptsize{LDA (30.19)}]{\includegraphics[width=0.22\textwidth]{fig/LDA10-detail.pdf}} \caption{Block CS reconstruction of Lena image with CS ratio 10\% obtained by CS-Net, SCS-Net and the proposed LDA. Images in the bottom row are zoomed-in views of the corresponding ones in the top row. PSNR values are shown in parentheses.} \label{fig:lena} \end{figure} \begin{figure}[ht] \centering \subfigure{\includegraphics[width=0.22\textwidth]{fig/Parrots.pdf}\label{gt_parrot}} \subfigure{\includegraphics[width=0.22\textwidth]{fig/CS2.pdf}} \subfigure{\includegraphics[width=0.22\textwidth]{fig/SCS2.pdf}} \subfigure{\includegraphics[width=0.22\textwidth]{fig/LDA2.pdf}} \\ \setcounter{subfigure}{0} \subfigure[\scriptsize{Reference}]{\includegraphics[width=0.22\textwidth]{fig/Parrots-detail.pdf}} \subfigure[\scriptsize{CS-Net (28.00)}]{\includegraphics[width=0.22\textwidth]{fig/CS2-detail.pdf}} \subfigure[\scriptsize{SCS-Net (28.10)}]{\includegraphics[width=0.22\textwidth]{fig/SCS2-detail.pdf}} \subfigure[\scriptsize{LDA (29.54)}]{\includegraphics[width=0.22\textwidth]{fig/LDA2-detail.pdf}} \caption{Block CS reconstruction of parrot image with CS ratio 10\% obtained by CS-Net, SCS-Net and the proposed LDA. 
Images in the bottom row are zoomed-in views of the corresponding ones in the top row. PSNR values are shown in parentheses.} \label{fig:parrot} \end{figure} \begin{table}[ht] \centering \caption{Average PSNR (dB) of reconstructions obtained by the compared methods and the proposed LDA on \textit{Set11} dataset with CS ratios 10\%, 30\% and the number of parameters (\#Param) in the reconstruction part of the network using jointly learned compressed sensing matrix $\mathbf{A}$.} \begin{tabular}{lccr} \toprule \textbf{Method} & \textbf{10\%} & \textbf{30\%} & \textbf{\#Param} \\ \midrule CS-Net \cite{SJZ17} & 28.10 & 33.86 & 370,560\\ SCS-Net \cite{Shi_2019_CVPR} & 28.48 & 34.62 & 587,520\\ BCS-Net \cite{zhou2019multi} & 29.43 & 35.60 & 1,117,440\\ AMP-Net \cite{zhang2020amp} & 29.45 & 35.90 & 229,254\\ \textbf{LDA} & \textbf{30.03} & \textbf{36.47} &\textbf{27,967} \\ \bottomrule \end{tabular} \label{tab:jointcs} \end{table} \subsubsection{Joint compression and reconstruction of natural images} We test LDA (Algorithm \ref{alg:lda}) on the problem of joint image compression and reconstruction, which is considered in several recent CS image reconstruction works \cite{Shi_2019_CVPR,zhang2020amp,zhou2019multi,ZZG20}. In this experiment, we prescribe the CS ratio $c \in (0,1)$ and let the compressed sensing matrix $\mathbf{A} \in \mathbb{R}^{cn\times n}$ be learned together with the reconstruction network. More precisely, we let a ground truth image (patch) $\hat{\mathbf{x}}$ first pass through a linear layer $\mathbf{b}= \mathbf{A} \hat{\mathbf{x}}$, where $\mathbf{A}$ is also to be learned. Here $\mathbf{A}$ can be implemented as a convolutional operation with $cn$ kernels of size $\sqrt{n} \times \sqrt{n}$ and stride $\sqrt{n} \times \sqrt{n}$, and hence, once applied to an image patch, it returns a $cn$-vector. 
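The strided-convolution implementation of the sampling layer described above is equivalent to applying $\mathbf{A}$ to each vectorized non-overlapping block. A toy NumPy sketch of this equivalence (function name and sizes are ours, not the paper's $32\times 32$ blocks):

```python
import numpy as np

# Applying A (cn x n) to each non-overlapping blk x blk block of an image is
# the same as a convolution with cn kernels of size blk x blk and stride blk.
def block_sample(img, A, blk):
    H, W = img.shape
    out = []
    for i in range(0, H, blk):
        for j in range(0, W, blk):
            patch = img[i:i+blk, j:j+blk].reshape(-1)  # vectorize one block
            out.append(A @ patch)                      # b = A xhat for this block
    return np.stack(out)                               # one cn-vector per block

rng = np.random.default_rng(1)
blk, c = 4, 0.25                     # toy: 4x4 blocks, CS ratio 25%
n = blk * blk
A = rng.standard_normal((int(c * n), n))
img = rng.standard_normal((8, 8))    # 2x2 = 4 blocks
b = block_sample(img, A, blk)        # shape: (4 blocks, cn measurements)
```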
The sampling layer is followed by an initialization layer $\mathbf{x}_0 = \tilde{\mathbf{A}}\mathbf{b}$, where $\tilde{\mathbf{A}} \in \mathbb{R}^{n\times cn}$ is implemented as a transposed convolutional operation \cite{dumoulin2016guide}. Then $\mathbf{x}_0$ serves as the input to LDA. Moreover, we add $(1/N)\cdot\sum_{s = 1}^N \|\tilde{\mathbf{A}}\mathbf{A} \mathbf{x}^{(s)} - \mathbf{x}^{(s)}\|^2$ with weight $0.01$ to the loss function in \eqref{eq:loa}, such that $\mathbf{A}$ and $\tilde{\mathbf{A}}$ are learned jointly with the network parameters during training. The training dataset in our experiment consists of 89,600 image patches of size $96 \times 96$, where all these patches are the luminance components randomly cropped from images in the BSD500 training and testing sets ($200 + 200$ images) \cite{arbelaez2010contour}. Each image patch consists of 9 non-overlapping blocks of size $n=32\times 32$, where each block can be sampled independently by $\mathbf{A}$. We use \textit{Set11} for testing. For comparison, we also test four recent methods in this experiment: CS-Net \cite{SJZ17}, SCS-Net \cite{Shi_2019_CVPR}, BCS-Net \cite{zhou2019multi}, AMP-Net \cite{zhang2020amp}. All the compared methods are applied to \textit{Set11}, and the average PSNR values are shown in Table \ref{tab:jointcs}. Table \ref{tab:jointcs} also shows the number of learnable parameters of the reconstruction network part of each method. In addition to these parameters, all methods also need to learn the sampling matrix $\mathbf{A}$ with $cn \times n = 104,448$ variables when $c = 0.1$ and another $104,448$ variables for $\tilde{\mathbf{A}}$ used in the initialization, except that BCS-Net requires over 2.2M parameters for sampling and initialization. BCS-Net learns a set of sampling matrices with different rates and dynamically assigns the sampling resources depending on the embedded saliency information of each block \cite{zhou2019multi}. 
From Table \ref{tab:jointcs}, we can see that LDA outperforms all these state-of-the-art methods by a large margin, while requiring only a fraction of the learnable parameters of most compared methods. \subsubsection{Magnetic resonance image reconstruction} In this experiment, we consider the reconstruction problem in compressed sensing magnetic resonance imaging (CS-MRI). In CS-MRI, we set $\mathbf{A} = \mathcal{P}\mathcal{F}$, where $\mathcal{P}$ is a binary selection matrix representing the Fourier space ($k$-space) sampling trajectory, and $\mathcal{F}$ is the discrete Fourier transform. The ground truth image is shown in Figure \ref{fig:mri_rec_err}(a). We use radial masks $\mathcal{P}$ with three different sampling ratios, $10 \%$, $20 \%$ and $30 \%$, in this experiment. The one with 20\% sampling ratio is shown in Figure \ref{fig:mri_rec_err}(e). We randomly select $150$ 2D images from the brain MRI datasets \cite{mridata}, then extract the main center region of interest (size $190\times190$) of every image as the ground truth images $\hat{\mathbf{x}}$, and set the data to $\mathbf{b} = \mathbf{A} \hat{\mathbf{x}}$. Then we randomly select $100$ images for training and use the other $50$ for testing. During training, for each of the sampling ratios 10\%, 20\%, and 30\%, we train LDA for phase numbers $K=3,5,\dots,11$, and the PSNR obtained for each case is shown in the left panel of Figure \ref{fig:5}. For comparison, we also apply ISTA-Net$^+$ \cite{Zhang2018ISTANetIO} to the same data. We use $\mathbf{x}_0 = \mathbf{0}$ as the initialization for both ISTA-Net$^+$ and LDA. The quantitative comparison results are shown in Table \ref{www}, where the PSNR, the relative error (RelErr) of the reconstruction $\mathbf{x}$ to the ground truth $\hat{\mathbf{x}}$, defined by $\|\mathbf{x} - \hat{\mathbf{x}}\|/\|\hat{\mathbf{x}}\|$, and the structural similarity index (SSIM) \cite{wang2004image} are provided for each of the three sampling ratios. 
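The CS-MRI forward operator $\mathbf{A} = \mathcal{P}\mathcal{F}$ described above can be sketched in a few lines of NumPy (a minimal sketch with names of our choosing; the radial masks used in the experiment are replaced here by a full mask as a sanity check):

```python
import numpy as np

# CS-MRI forward model A = P F: a binary k-space mask P applied to the
# 2D discrete Fourier transform F of the image.
def mri_forward(x, mask):
    return mask * np.fft.fft2(x, norm="ortho")   # b = P F x (masked k-space)

def mri_adjoint(b, mask):
    return np.fft.ifft2(mask * b, norm="ortho")  # A^* b (zero-filled recon)

rng = np.random.default_rng(2)
x = rng.standard_normal((32, 32))
full = np.ones((32, 32))                         # 100% sampling: A is unitary
b = mri_forward(x, full)
x_rec = mri_adjoint(b, full).real                # recovers x exactly
```

With a radial mask covering only 10--30\% of $k$-space, the zero-filled adjoint alone is badly aliased, which is why a learned reconstruction network is needed.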
These results show that LDA generates more accurate images using only $8 \%$ of the learnable parameters of ISTA-Net$^+$. Figure \ref{fig:mri_rec_err} shows the pointwise absolute error of the reconstructed images obtained by ISTA-Net$^+$ and LDA under these three sampling ratios, where brighter pixels indicate larger errors. From Figure \ref{fig:mri_rec_err}, it can be seen that LDA attains much lower error and hence better reconstruction quality. \begin{table}[hbt] \centering \caption{Average PSNR (dB), RelErr, and SSIM of the reconstructions obtained by ISTA-Net$^+$ and LDA on CS-MRI dataset with sampling ratios 10\%, 20\%, and 30\%.} \label{www} \begin{tabular}{cccccccccc} \toprule \multirow{2}{*}{\textbf{Method}} & \multicolumn{3}{c}{\textbf{10\%}} & \multicolumn{3}{c}{\textbf{20\%}} & \multicolumn{3}{c}{\textbf{30\%}} \\ & PSNR & RelErr & SSIM & PSNR & RelErr & SSIM & PSNR & RelErr & SSIM \\ \midrule ISTA-Net$^+$ & 32.62 & 0.0950 & 0.9312 & 39.84 & 0.0430 & 0.9816 & 43.53 & 0.0295 & 0.9892 \\ \textbf{LDA} & \textbf{34.20} & \textbf{0.0790} & \textbf{0.9462} & \textbf{41.03} & \textbf{0.0363} & \textbf{0.9852} & \textbf{46.12} & \textbf{0.0214} & \textbf{0.9931}\\ \bottomrule \end{tabular} \end{table} \begin{figure}[htp] \centering \subfigure[Reference]{\includegraphics[width=0.227\textwidth]{fig/gt.pdf}} \subfigure[ISTA-Net$^+$ (31.79) 10\%]{\includegraphics[width=0.22\textwidth]{fig-10/cha_ista_10.pdf}} \subfigure[ISTA-Net$^+$ (39.12) 20\%]{\includegraphics[width=0.22\textwidth]{fig/cha_ista_20.pdf}} \subfigure[ISTA-Net$^+$ (42.84) 30\%]{\includegraphics[width=0.22\textwidth]{fig-30/cha_ista_30.pdf}}\\ \subfigure[Radial mask]{\includegraphics[width=0.227\textwidth]{mask/mask20.pdf}} \subfigure[LDA (33.02) 10\%]{\includegraphics[width=0.22\textwidth]{fig-10/cha_gd_10.pdf}} \subfigure[LDA (40.14) 20\%]{\includegraphics[width=0.22\textwidth]{fig/cha_gd_20.pdf}} \subfigure[LDA (45.03) 30\%]{\includegraphics[width=0.22\textwidth]{fig-30/cha_gd_30.pdf}} 
\caption{Pointwise absolute error of the brain MR image reconstruction with 10\%, 20\% and 30\% sampling ratio using ISTA-Net$^+$ and the proposed LDA under the same color scale. Brighter pixels indicate larger errors. PSNR values are shown in parentheses. The ground truth reference image and a radial mask with 20\% sampling ratio are shown in (a) and (e), respectively. } \label{fig:mri_rec_err} \end{figure} \subsection{Experimental results on convergence and learned feature map} \subsubsection{Comparison with standard gradient descent} The proposed LDA performs two residual-type updates, one on the data fidelity $f$ and the other on the regularization $r$ (and the smoothed version $r_{\varepsilon}$), which is motivated by the effectiveness of the ResNet structure. In this experiment, we also unroll the standard gradient descent iteration by turning off the $\mathbf{u}$ step of LDA, and an accelerated inertial version by setting $\mathbf{x}_{k+1} = \mathbf{x}_k - \alpha_k \nabla f(\mathbf{x}_k) + \theta_k (\mathbf{x}_k - \mathbf{x}_{k-1})$, where $\theta_k$ is also learned. We call these two networks GD-Net and AGD-Net, respectively. We test all three methods following the same experimental setting as in Section \ref{subsubsec:cs}, and show the average PSNR versus phase (iteration) number of these methods in Figure \ref{fig:5}. As we can see, LDA achieves a much higher PSNR than both GD-Net and AGD-Net, which perform very similarly to each other. In particular, although AGD has improved iteration complexity in the standard convex optimization setting, its network version does not seem to inherit this effectiveness for deep learning applications. A similar comparison has been made between ISTA-Net and FISTA-Net, which are based on ISTA and FISTA, with the latter algorithm provably having improved iteration complexity, yet their deep network versions have nearly identical performance \cite{zhang2017learning}. 
This is also partly due to the nonconvexity of the learned objective function, for which inertial gradient descent may produce improper extrapolation and fail to improve efficiency. \begin{figure}[htb] \includegraphics[width=0.34\textwidth]{compare-mri.pdf} \includegraphics[width=0.352\textwidth]{psnr_vs_k_crop.pdf} \includegraphics[width=0.3\textwidth]{phi2.pdf} \caption{\emph{Left}: PSNR of reconstructions obtained by LDA versus phase number $K$ on three sampling ratios $10 \%, 20 \%$ and $ 30 \%$ for the brain MR image. \emph{Middle}: PSNR of reconstructions obtained by GD-Net, AGD-Net, and LDA versus phase number $K$ on the block CS image reconstruction with CS ratio $10\%$. \emph{Right}: Function value $\phi_{\varepsilon_{k}}(\mathbf{x}_k)$ of LDA versus iteration number, where the last 15 iterations are obtained by continuing to run LDA using the feature map $\mathbf{g}$ learned in the first 15 iterations. } \label{fig:5} \end{figure} \subsubsection{Convergence behavior of LDA} As in the standard approach of deep neural network training, we set the phase (iteration) number to $K=15$ in LDA in the experiments above. On the other hand, we proved that the iterates generated by LDA converge to a Clarke stationary point in Section \ref{subsec:convergence}. This provides a theoretical guarantee that LDA is indeed minimizing an objective function in which the regularization is learned, and LDA is expected to perform stably even beyond the trained phases. To demonstrate this stability empirically, we continue to run LDA for another 15 phases using the feature map $\mathbf{g}(\mathbf{x})$ learned in the first 15 phases and step sizes $\alpha_k$ computed by a standard backtracking line search such that $\phi_{\varepsilon_{k}}(\mathbf{v}_{k+1}) - \phi_{\varepsilon_{k}}(\mathbf{x}_k) \le - \tau \| \mathbf{v}_{k+1} - \mathbf{x}_k\|^2$ for $\tau = 0.35$ with step size reduction rate 0.5. 
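The backtracking rule used for this extended run (shrink the step size by 0.5 until the sufficient-decrease condition $\phi_{\varepsilon_k}(\mathbf{v}_{k+1}) - \phi_{\varepsilon_k}(\mathbf{x}_k) \le -\tau\|\mathbf{v}_{k+1}-\mathbf{x}_k\|^2$ holds) can be sketched as follows, with a toy quadratic standing in for $\phi_{\varepsilon_k}$ (function names are ours):

```python
import numpy as np

# Backtracking line search: halve alpha until the sufficient-decrease
# condition phi(v) - phi(x) <= -tau * ||v - x||^2 is met.
def backtrack_step(phi, grad, x, alpha0=1.0, tau=0.35, shrink=0.5):
    alpha = alpha0
    while True:
        v = x - alpha * grad(x)
        if phi(v) - phi(x) <= -tau * np.sum((v - x) ** 2):
            return v, alpha
        alpha *= shrink

phi = lambda x: 5.0 * np.sum(x ** 2)   # toy smooth objective (not the paper's)
grad = lambda x: 10.0 * x
x = np.array([1.0, -2.0])
v, alpha = backtrack_step(phi, grad, x)
```

For this toy quadratic the condition first holds at $\alpha = 0.125$ after three halvings, and the accepted step strictly decreases the objective, mirroring how the extended LDA iterations keep decreasing $\phi_{\varepsilon_k}$.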
The objective function value $\phi_{\varepsilon_k}(\mathbf{x}_k)$ versus iteration $k$ is shown in the right panel of Figure \ref{fig:5}. As we can see, LDA continues to reduce the function value stably during these extended iterations, even beyond the trained phases, consistent with our convergence analysis in Section \ref{subsec:convergence}. \subsubsection{Learned feature map in LDA} A main advantage of LDA is that the feature map $\mathbf{g}$ in the regularization is learned adaptively from the training data, which makes the network more interpretable. This data-driven approach automates the design of feature maps, which are often more expressive and effective than the manually crafted features used in classical image reconstruction models. In Figure \ref{fig:vs_tv}, we plot, at every pixel $i$, the norm of the image gradient (a 2D vector computed by forward finite differences), which serves as the feature in TV based image reconstruction, together with $\|\mathbf{g}_i(\mathbf{x})\|$ at pixel $i$ from the regularization learned in LDA. We can see that the learned feature map $\mathbf{g}$ captures more of the important structural details of the images, such as the antennae of the butterfly, the lip, chin, and shoulder of Lena, and the bill of the parrot. These details are crucial for species detection and facial recognition; they are accurately recovered by the learned feature map $\mathbf{g}$ but are heavily blurred or completely missing in the simple gradient image used by TV regularization. This also explains the better image quality obtained by LDA compared to classical TV based image reconstruction methods. 
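The hand-crafted TV feature referred to above, i.e., the per-pixel norm of the forward finite-difference gradient, can be sketched as follows (a minimal NumPy sketch; the learned counterpart replaces this fixed map with $\|\mathbf{g}_i(\mathbf{x})\|$ from the trained network):

```python
import numpy as np

# Per-pixel feature used by TV regularization: the 2-norm of the image
# gradient, computed with forward finite differences (zero at the borders).
def tv_feature_map(img):
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]   # forward difference, x-direction
    gy[:-1, :] = img[1:, :] - img[:-1, :]   # forward difference, y-direction
    return np.sqrt(gx ** 2 + gy ** 2)       # gradient norm at every pixel

img = np.zeros((8, 8))
img[:, 4:] = 1.0                            # a vertical step edge
fmap = tv_feature_map(img)                  # responds only along the edge
```

On this toy edge image, the map responds exactly at the jump between columns 3 and 4 and is zero elsewhere, illustrating why TV features preserve sharp edges but carry no richer structural information.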
\begin{figure} \centering \includegraphics[width=0.25\textwidth]{fig/TV3.pdf} \includegraphics[width=0.25\textwidth]{fig/TV.pdf} \includegraphics[width=0.25\textwidth]{fig/TV2.pdf} \includegraphics[width=0.25\textwidth]{fig/figure_mp3.pdf} \includegraphics[width=0.25\textwidth]{fig/feature_map.pdf} \includegraphics[width=0.25\textwidth]{fig/feature_mp2.pdf} \caption{The norm of the gradient at every pixel in TV based image reconstruction (top row) and the norm of the feature map $\mathbf{g}$ at every pixel learned in LDA (bottom row). Important details, such as the antennae of the butterfly, the lip, chin, and shoulder of Lena, and the bill of the parrot, are faithfully recovered by LDA.} \label{fig:vs_tv} \end{figure} \clearpage \section{Conclusion} \label{sec:conclusion} We proposed a general learning based framework for solving nonsmooth and nonconvex image reconstruction problems, where the regularization function is modeled as the composition of the $l_{2,1}$ norm and a smooth but nonconvex feature mapping parametrized by a deep convolutional neural network. We developed a provably convergent descent-type algorithm to solve the nonsmooth nonconvex minimization problem by leveraging Nesterov's smoothing technique and the idea of residual learning, and we learn the network parameters such that the outputs of the algorithm match the references in the training data. Our method is versatile, as one can employ various modern network structures in the regularization, and the resulting network inherits the guaranteed convergence of the algorithm. The proposed network is applied to a variety of real-world image reconstruction problems, and the numerical results demonstrate the outstanding performance and efficiency of our method. \bibliographystyle{abbrv}
\section{Introduction} In an ideal ferroelectric the symmetries of the crystal only allow for the expansion of the free energy in even powers of the polarization, producing two degenerate ground states, one for each polarization direction \cite{Devonshire1954}. In the presence of an electric field, the degeneracy breaks due to a linear coupling between the field and polarization, favoring one state over the other depending on the sign of the field, which should produce ferroelectric polarization-field hysteresis loops with two equal but opposite coercive fields. In theory this should also hold for ferroelectric superlattices that have compositional inversion symmetry \cite{Neaton2003, Johnston2005, Dawber2005}. However, in practice, electric polarization asymmetry, where one polarization state is preferred over another, or in other words, a built-in bias, is often seen in ferroelectric thin films and superlattices \cite{zhangPRB2014,agarACSNano2015,leeAdvMat2012,zhaoJAP2015,HHwuJAP2013}. Many contributing factors have been suggested as sources of polarization asymmetry. Broadly, they can be classified as (i) compositional, e.g., asymmetric electrodes, polarization-strain gradient coupling \cite{zhangPRB2014,KarthikPRB2013,annurev-matsci-071312-121634,10.1142/9789814719322_0002}, chemical and/or compositional gradients \cite{agarACSNano2015}, and asymmetric interfaces within the superlattice \cite{CalloriPRL2012,PhysRevB.80.224110,PhysRevLett.101.087601}, or (ii) related to defects such as Pb vacancies \cite{gaoNatcomm2011}, oxygen vacancies \cite{gaoNatcomm2011}, Pb-O vacancy complexes \cite{Scott1991}, or other defect dipoles \cite{HHwuJAP2013}. It is typically difficult to decisively identify the source of polarization asymmetry in any given material system, and concrete solutions to control and tune this built-in bias are lacking.
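The ideal picture described above, an even-power free energy whose degeneracy is broken only by the linear polarization-field coupling, can be illustrated with a minimal Landau-type sketch. The coefficients below are arbitrary illustrative values, not parameters fitted to any real ferroelectric.

```python
import numpy as np

def free_energy(P, E=0.0, a=-1.0, b=1.0):
    """F(P) = a*P^2 + b*P^4 - E*P: even powers of the polarization
    plus the linear polarization-field coupling term."""
    return a * P**2 + b * P**4 - E * P

P = np.linspace(-1.5, 1.5, 20001)   # symmetric polarization grid

# Zero field: the two wells are degenerate ground states.
F0 = free_energy(P)
assert abs(F0[P > 0].min() - F0[P < 0].min()) < 1e-6

# A nonzero field tilts the double well and favors one polarization;
# a built-in bias corresponds to such a tilt being present at E = 0.
F1 = free_energy(P, E=0.2)
assert F1[P > 0].min() < F1[P < 0].min()
```

Equal but opposite coercive fields follow from this symmetry: the field needed to destabilize the $+P$ well equals that needed for the $-P$ well, which is exactly what a built-in bias violates.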
In artificially layered superlattices variables such as the chemical composition, the number of interfaces, and even the inversion symmetry at these interfaces \cite{Callori2012} can be precisely controlled, providing an excellent opportunity to systematically study the built-in bias. \begin{figure} \includegraphics[width=8.5cm]{Directionsplusbias.pdf} \caption{Definition of the up/down direction in the superlattice samples. Up is considered to be away from the substrate and corresponds to the growth direction. Electrical voltage is applied to the top electrode with respect to the grounded bottom electrode. A positive voltage favors down polarization; a negative voltage favors up. In this work the built-in bias is defined as the offset of the capacitance-voltage loop from zero volts. Experimental data measured in a PTO/STO superlattice. } \label{fig:definitions} \end{figure} In this paper, we study in detail the origins of this bias in PbTiO\textsubscript{3} based superlattice structures. In particular we focus on PbTiO\textsubscript{3}/SrTiO\textsubscript{3} (PTO/STO) and PbTiO\textsubscript{3}/SrRuO\textsubscript{3} (PTO/SRO) systems. Our experiments show that the built-in bias in PTO/STO systems is systematically different from that in PTO/SRO superlattices. In order to understand the differences, we gain insight into the physical mechanisms underlying this phenomenon by performing a detailed first-principles study on the role of dipolar defects in these two systems. We find that the electrostatic interactions induced by Pb-O divacancies in these superlattice systems can explain the experimental observations. To demonstrate the value of our findings, as well as to confirm the theoretical predictions, we design a system where we take advantage of the presence of Pb-O divacancies to engineer a superlattice with no built-in bias.
\section{Experimental Results and Discussion} \subsection{Composition dependence of built-in bias} Epitaxial growth of n\textsubscript{1}PbTiO\textsubscript{3} (u.c.)/n\textsubscript{2}SrTiO\textsubscript{3} (u.c.) and n\textsubscript{1}PbTiO\textsubscript{3} (u.c.)/n\textsubscript{3}SrRuO\textsubscript{3} (u.c.) superlattices was achieved using an off-axis RF magnetron sputtering deposition system on (001) TiO\textsubscript{2} terminated SrTiO\textsubscript{3} substrates. Here we use the commonly used notation where n\textsubscript{1}/n\textsubscript{2} refers to the thickness of the layers within the superlattice bilayer repeats in unit cells. The total superlattice film thickness for all samples in this paper is $\approx$100 nm. A key parameter useful in describing the evolution of properties in these superlattice systems is the PTO volume fraction $(\frac{n_{1}}{n_{1}+n_{2}})$. Experimentally, electrical properties of ferroelectrics are measured using a variety of techniques and built-in bias can be defined in various ways. A key measurement is the polarization-voltage hysteresis loop. This can be done through a continuous sweep of the voltage from zero, to a maximum field in one direction, then to the same field in the opposite direction, before bringing the field back to zero. During this sweep the current flowing can be integrated to give the switched polarization. This assumes that the current is dominated by switching current, whereas in practice there can be substantial dielectric charging and leakage currents. These additional components can be removed using a PUND pulse switching approach, leaving pure switching polarization. The loop shown in Fig.~\ref{fig:definitions} is a result obtained using the PUND approach. Another important measurement is the capacitance-voltage measurement.
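The PUND subtraction described above can be sketched numerically. The current traces below are synthetic stand-ins chosen only to illustrate the bookkeeping, not measured data.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2001)          # pulse time axis (arbitrary units)

# In a PUND sequence, the first ("P") pulse switches the polarization,
# so its current contains switching plus dielectric charging/leakage;
# the repeated ("U") pulse at the same voltage switches nothing.
dielectric = 0.3 * np.exp(-t / 0.05)     # non-switching current, both pulses
switching  = 2.0 * np.exp(-t / 0.10)     # extra current, P pulse only

i_p = switching + dielectric             # first ("P") pulse current
i_u = dielectric                         # second ("U") pulse current

# Integrating the P - U difference isolates the switched charge; the
# switched polarization follows after dividing by the electrode area
# (omitted here).
q_switched = np.trapz(i_p - i_u, t)
```

The same subtraction applies to the negative-going N and D pulses, giving the pure switching polarization for both field directions.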
In a capacitance-voltage measurement, the capacitance is measured using a small AC field superimposed on a larger DC voltage, which is swept around in much the same way as in the polarization-voltage hysteresis measurement, though usually significantly slower. Empirically, we have found that the center point of the polarization-voltage hysteresis loop and the crossing point in capacitance-voltage loops are closely aligned and represent a useful measurement of the built-in bias of a sample. However, in general it is easier to clearly identify the latter. Hence experimental values of the built-in bias reported in this paper are obtained from capacitance-voltage loop data, as shown in Fig.~\ref{fig:definitions}. \begin{figure*} \hspace*{-25.0pt}% \subfloat{ \subfigimg[width=0.33\linewidth]{\textbf{a}}{figure2_b.png}% } \hspace*{-20.0pt}% \subfloat{ \subfigimg[width=0.33\linewidth]{\textbf{b}}{figure2_c.png}% } \hspace*{-20.0pt}% \subfloat{ \subfigimg[width=0.33\linewidth]{\textbf{c}}{figure2_d.png}% } \hspace*{-20.0pt}% \caption{(a) The scaling of the built-in bias in the PTO/STO superlattice system as a function of the PTO volume fraction. (b) Bias dependence on the number of superlattice repeats at a fixed volume fraction in PTO/STO superlattices, and (c) bias as a function of volume fraction in PTO/SRO superlattices.} \label{datasummary} \end{figure*} The first trend that we identify experimentally, shown in Fig. \ref{datasummary} (a), is that the built-in bias rapidly increases as a function of PTO volume fraction both for samples with asymmetric electrodes (SRO-PTO/STO-Pd) as well as symmetric electrodes (SRO-PTO/STO-SRO). To investigate the role that interfaces play in the built-in bias of our superlattice thin films, we prepared a series of superlattice samples with the same PbTiO\textsubscript{3} volume fraction (0.83) but different bilayer thickness, and, as a result, a different number of interfaces.
To minimize bias due to asymmetric top and bottom electrodes in this series, 30 nm top and bottom SrRuO\textsubscript{3} layers were deposited in situ followed by photolithography and an Ar gas dry etch to pattern the top SrRuO\textsubscript{3}. The samples prepared were a 20/4 PTO/STO, a 10/2 PTO/STO and a 5/1 PTO/STO composed of 10, 21 and 42 bilayer repeats respectively. Naively, one can imagine that if the interface is the main source of asymmetry in the superlattice, the more interfaces that are present the larger the built-in bias should be. However, as shown in Fig. \ref{datasummary} (b) we found that the built-in bias actually decreased with the increasing number of interfaces in the superlattice. As a contrast to the PTO/STO superlattices, we grew a series of PTO/SRO superlattices, which have an additional compositional asymmetry at the interface\cite{CalloriPRL2012}. This asymmetry occurs because PTO/SRO superlattices have variation in both the A and B site cations, unlike PTO/STO superlattices where variation is confined to the A site, which does not break compositional inversion symmetry. This compositional breaking of inversion symmetry manifests itself as a built-in bias that occurs even if the samples are free of defects and have symmetric top and bottom electrodes. However, experimentally we find that the bias offset in these PTO/SRO samples is actually significantly smaller than in the PTO/STO system. Perhaps the most interesting feature of the built-in bias in PTO/SRO is that we observe a transition from positive to negative bias in samples with PTO volume fractions of 0.9 and above, in stark contrast to the PTO/STO samples where a positive built-in bias of several volts is observed in samples with these PTO volume fractions. Our previous work \cite{Callori2012} showed that PTO/SRO superlattices with lower PTO volume fractions will display more asymmetric ferroelectric properties, i.e.,
larger built-in biases, and that the effects of this asymmetry should decrease with increasing PTO volume fractions. We might then expect that PTO/SRO samples with high PTO volume fraction will display characteristics similar to those observed in PTO/STO superlattices, so the observation of an opposite bias sign in these samples is somewhat surprising. To understand this result we need to look deeply at the origin of bias in these systems. \subsection{Oxygen vacancies as a source of built-in bias} The most common defects in these materials are oxygen vacancies, and hence it is natural to explore whether they can explain the observed experimental behavior. In order to explore the potential impact of oxygen vacancies on the built-in bias we performed two sets of annealing experiments. The first experiment involved annealing a 10/2 PTO/STO superlattice at a relatively low temperature of 400$^{\circ}$C. The annealing time and annealing environment were varied between experiments. X-ray diffraction $2\theta-\theta$ measurements (XRD) performed before and after annealing (shown in Supplemental Fig. S1(a)) show that superlattice peaks associated with periodic layering remain sharp with high intensity after annealing, indicating that no significant structural change occurs at 400$^{\circ}$C. When annealed at this temperature in vacuum, the bias did not change. However, after annealing in air for 2 hours, the bias dropped from around 1.02 V to 0.83 V and remained at that level after continued annealing in air for up to 12 hrs. This observation is most likely due to a decreased oxygen vacancy concentration in the sample, as one would expect after annealing in air. The second annealing experiment we performed measured the bias after annealing in vacuum at higher temperature. The experiment was done on a 5/1 PTO/STO superlattice, which was selected due to the 1 unit cell STO layer which we anticipated would be most damaged by the high temperature anneal.
This time, after annealing in vacuum at 600$^{\circ}$C, the superlattice peaks vanished in the $2\theta-\theta$ XRD measurements performed before and after annealing (shown in Supplemental Fig. S1(b)). This indicates that the layered structure in the superlattice underwent significant disruption, in which the high temperature treatment allows inter-diffusion of Pb and Sr into a more favorable disordered configuration\cite{Cooper07}, akin to a solid solution, rather than the ordered layered structure that was imposed by the growth process. However, even with this degradation of the superlattice structure, neither the polarization nor the built-in bias changes. No oxygen was introduced during the annealing treatment in vacuum, and hence the total number of vacancies in the sample remains the same. Unlike the Pb and Sr atoms at the superlattice interfaces, the vacancies do not appear to redistribute under temperature. \begin{figure} \begin{centering} \includegraphics[width=8.5cm]{figure4.png} \par\end{centering} \caption[Cycling test to explore the effect of oxygen vacancies.]{ The built-in bias as a function of the number of voltage cycles for several superlattice samples. The bias offset decreases as the number of voltage cycles increases.} \label{annealfatigue} \end{figure} On the other hand, oxygen vacancies can be redistributed through repetitive electrical cycling\cite{scott2000oxygen,dawber2000model}. We drove such a redistribution by applying a 10 kHz, 4 V AC signal (larger than the coercive field) to our samples. After the electrical cycling we imposed a 10-minute delay time for the sample to settle before performing a capacitance butterfly loop measurement to obtain the built-in bias. Results are presented in Fig.~\ref{annealfatigue}: the magnitude of the built-in bias decreases with the number of voltage cycles for both the PTO/STO and PTO/SRO samples.
These experiments suggest that defects related to oxygen vacancies are clearly implicated in the built-in bias. In the following sections we explore mechanisms that can explicitly couple the presence of oxygen vacancies to the built-in bias. \subsection{PbO divacancies in superlattices} The study of oxygen vacancies in oxide perovskites continues to be an active research field \cite{PhysRevResearch.2.023313}. Oxygen vacancies in their simplest form can be considered as charged point defects. However, a uniform distribution of point charges does not produce an electric field, and point defects will only lead to a built-in bias when there is a non-uniform distribution, e.g., a local gradient, of them. Charged point defects can encourage the formation of tail-to-tail or head-to-head domains\cite{Park1998,PhysRevB.94.100101}, but these also do not create a built-in bias due to their intrinsic symmetry. On the other hand, ordered dipolar defects naturally lead to one polarization state being preferred over the other, which would lead to a built-in bias \cite{Arlt1988, Scott1991}. In PTO, both Pb and O vacancies have been shown to be stable point defects \cite{Zhang2006} and can readily form in the growth of PTO thin films. Taking all these into consideration, in this work we choose to model nearest neighbor Pb-O divacancies as the most probable sources of bias. This choice also reflects that they are found to be lower in energy than next-nearest neighbor divacancies \cite{Pykk2000,Cockayne2004}. As we will see, the excellent correlation between computational and experimental results indicates that our choice is well founded. The PbO divacancies we consider in the thin film and superlattice systems are formed by removing a Pb atom from a PbO plane and the nearest O atom from the adjacent TiO$_2$ plane. The dipole moment vector of this divacancy lies along the [101] direction and points toward the oxygen vacancy site.
Its out-of-plane projection results in the up (down) dipole responsible for the observed bias, as shown in Fig.~\ref{fig:stosro_vacloc}. A priori, it might be considered that PbO divacancies are equally likely to form as either up or down defects, as long as there is no external driving force to break the up/down symmetry. Under these circumstances, PbO up/down divacancies would form with equal probability, resulting in no net effect on the observed bias. \begin{figure} \centering \includegraphics[width=8.5cm]{PTOSTOMech.pdf} \caption{Process through which a preferred orientation of the defect dipoles is developed in the growth of PTO/STO superlattices.} \label{fig:PTOSTOmeach} \end{figure} However, in our experiments there is a natural asymmetry introduced by the layer-by-layer growth process. This is shown in Fig. \ref{fig:PTOSTOmeach}. The superlattices are always grown on top of a STO substrate. Because of this, in PTO/STO superlattices, the first grown interface is STO(bottom)/PTO(top). Bulk PTO has a larger lattice constant than bulk STO. Hence the in-plane strain in PTO on STO is compressive at standard deposition temperatures where PTO is cubic\cite{liu2020role,doi:10.1063/1.4939790}. The compressive strain on PTO can favor vacancy formation to enable release of the lattice mismatch-induced stress. This is also supported by the higher volatility of Pb, which favors the subsequent formation of O vacancies to ensure charge neutrality. This implies that there will be a preferred interface where the divacancies form with a fixed direction of the dipole moment. In particular, Pb vacancies will be formed at the first grown PbO layer, with O vacancies formed at the TiO$_2$ layer underneath, with the dipole moment pointing down towards the substrate.
As divacancies continue to form within the bulk of growing PTO layers, they will tend to align themselves with the existing divacancies that formed at the interface layer, and the collective alignment of the divacancy dipole moments would lead to the experimentally observed bias, hence explaining the observation that the built-in bias increases with increasing PTO volume fraction. While it is not possible to directly capture this growth-induced asymmetry in our simulations, we postulate that it is this phenomenon which allows the divacancies to preferentially form in one direction, becoming a significant source of the bias in our samples. In the following sections we present the modelling of these divacancies, with results that qualitatively and quantitatively support the experimental observations. \section{Modeling and Discussion} \subsection{Methods} Our simulations of Pb-O divacancies involve density functional theory (DFT) within the local density approximation (LDA) using both numerical atomic orbitals, as implemented in \textsc{siesta} \cite{Soler2002}, and augmented plane waves, as implemented in \textsc{lautrec}. For \textsc{siesta}, we used norm-conserving pseudopotentials generated using the Troullier-Martins\cite{Troullier1991} scheme. The exchange-correlation functional used was PZ\cite{Perdew1981}. The electrons treated explicitly with a single-$\zeta$ were 5d$^{10}$ (Pb), 4s$^{2}$4p$^{6}$ (Sr, Ru) and 3s$^{2}$3p$^{6}$ (Ti). The electrons treated with a double-$\zeta$ were 6p$^{2}$ (Pb), 5s$^{2}$ (Sr), 4s$^{2}$3d$^{2}$ (Ti), 5s$^{1}$4d$^{7}$ (Ru) and 2s$^{2}$2p$^{4}$ (O). The Brillouin zone was sampled using a $6\times6\times1$ Monkhorst-Pack \cite{Monkhorst1976} mesh with a plane-wave-equivalent cutoff energy of 400 Ry. The force and pressure tolerances were 0.04 eV/\AA{} and 0.0006 eV/\AA$^{3}$, respectively. For \textsc{lautrec}, we used the projector augmented wave (PAW) scheme \cite{Blchl1994}.
The exchange-correlation functional used was PW92\cite{Perdew1992} which is equivalent to PZ because they are fit to the same Ceperley-Alder data. The valence electrons explicitly treated for Pb, Sr, Ti, Ru and O were 6s$^{2}$5d$^{10}$6p$^{2}$, 4s$^{2}$4p$^{6}$5s$^{2}$, 3s$^{2}$3p$^{6}$4s$^{2}$3d$^{2}$, 4s$^{2}$4p$^{6}$5s$^{1}$4d$^{7}$ and 2s$^{2}$2p$^{4}$, respectively. The Brillouin zone was sampled using a $2\times2\times1$ Monkhorst-Pack mesh with a plane wave cutoff energy of 40 Ry. \subsection{PTO/STO superlattices} \begin{figure} \includegraphics[width=8.5cm]{stosro_vacloc_3d} \caption{Location of the distinct types of Pb-O divacancies and their dipole moments near the interface in the PTO/STO and PTO/SRO superlattice. Grey and red spheres represent Pb and O ions, respectively. Blue and purple octahedra are centered on Ti and Ru ions, respectively.} \label{fig:stosro_vacloc} \end{figure} \begin{figure*} \hspace*{-25.0pt}% \subfloat{ \subfigimg[width=0.33\linewidth]{\textbf{a}}{pto5sto2_dw.png}% \label{fig:pto5sto2_dw} } \hspace*{-20.0pt}% \subfloat{ \subfigimg[width=0.33\linewidth]{\textbf{b}}{pto7sro1_dw.png}% \label{fig:pto7sro1_dw} } \hspace*{-20.0pt}% \subfloat{ \subfigimg[width=0.33\linewidth]{\textbf{c}}{pto10sro1_dw.png}% \label{fig:pto10sro1_dw} } \hspace*{-20.0pt}% \caption{Double well potentials of (a) PTO/STO 5/2, (b) PTO/SRO 7/1 and (c) PTO/SRO 10/1 superlattices. The intermediate points between the two local minima are calculated by interpolating the atomic positions. The plots have all been shifted to have the same zero of energy, fixed to the ground state of the system, for ease of comparison. % $N$\textsubscript{PTO} is the number of PTO layers in the supercell. % The dashed lines are best fits to a sixth order polynomial and each curve is plotted relative to its lowest energy state.} \label{fig:double_wells} \end{figure*} Two distinct types of Pb-O divacancy sites exist in PTO/STO superlattices, as shown in the left schematic of Fig. 
\ref{fig:stosro_vacloc}. We first investigated the energy landscape of a single PbO divacancy in a (PTO)\textsubscript{5}/(STO)\textsubscript{2} superlattice with an in-plane $2\sqrt{2}\times2\sqrt{2}$ supercell. In particular, we compute the stability of interfacial and bulk divacancies as a function of the polarization state of PTO. To simulate growth on a STO substrate, the in-plane lattice constant was constrained to the STO equilibrium lattice constant, determined from first principles to be 3.87 \AA. As previously stated, the divacancies we consider are formed by removing a Pb atom in a PbO plane and an O atom in the adjacent TiO\textsubscript{2} plane. Divacancies in bulk PTO are positioned in the center of the PTO layer, as far away as possible from the interface. The dipole moment of the divacancy has a non-negligible in-plane component. However, the experimentally applied electric field on the system is only in the [001] direction and therefore the in-plane component does not contribute to the coupling of the out-of-plane polarization with the electric field. The details of calculating the out-of-plane polarization can be found in Supplementary Section SII.\cite{PhysRevB.73.075121,PhysRevB.75.205121} We initialize our superlattice in the $P4mm$ space group, which naturally gives rise to polar distortions along the [001] direction. This is achieved by distorting the TiO\textsubscript{2} planes, displacing the Ti atom above (below) and the O atoms below (above) the planar center, which corresponds to the system being polarized up (down). Relaxation of the system, in general, leads to two metastable polarization states of opposite sign. For both bulk and interfacial divacancies, the more stable of the two states in each case aligns the bulk polarization with the dipole moment of the divacancy, as shown in Fig. \ref{fig:pto5sto2_dw}.
In addition, we found that the most stable location of the divacancy is at the interface between the PTO and STO, with an energy 0.04 eV lower than that of the bulk divacancy. These results are largely what would be expected from electrostatic considerations. It is interesting to note that for interfacial defects, the double well potential is shifted horizontally by a significant amount. However, it can be shown that this horizontal polarization shift has no effect on the experimental measurement of hysteresis (see Supplemental Fig. S2), nor does it cause any bias. This is because only the difference between the two polarization states is measured experimentally, through the integrated current as the system's polarization is switched. On the other hand, as shown in our theoretical modeling of hysteresis loops in Supplemental Fig. S2, the difference in energy between the two minima of the double well (vertical asymmetry) is proportional to the bias. Experimental results shown in Fig.~\ref{datasummary}(b) indicate that the bias does not increase with increasing number of interfaces. On the contrary, for the superlattices considered, the bias decreases roughly linearly, by more than a factor of two, as the number of interfaces increases. Our results support and explain this observation, because indeed the bulk divacancies have a larger vertical asymmetry than the interface divacancies, as seen in Fig.~\ref{fig:pto5sto2_dw}. Putting together the theoretical results and the symmetry breaking imposed by the growth process, we can conclude that the role of the interfaces is to break the symmetry within the bulk by orienting the bulk defect dipoles parallel to the interface defect dipoles. Effectively, the interfaces polarize the bulk. For a constant PTO/STO volume fraction, decreasing the number of interfaces implies increasing the amount of superlattice that can be considered PTO bulk.
This results in a larger measured bias, assuming a constant density of defects in the bulk regions. \subsection{PTO/SRO superlattices} In PTO/SRO superlattices, due to the lack of compositional inversion symmetry \cite{CalloriPRL2012}, there are a total of 5 unique divacancy sites. These are presented in the right model of Fig. \ref{fig:stosro_vacloc}. In the figure we also indicate the direction of the pristine superlattice intrinsic polarization for the corresponding atomic arrangement. This intrinsic polarization asymmetry originates from the lack of compositional inversion symmetry \cite{CalloriPRL2012}. The corresponding pristine double well potentials are shown in the two bottom figures for (PTO)\textsubscript{7}/(SRO)\textsubscript{1} and (PTO)\textsubscript{10}/(SRO)\textsubscript{1} superlattice systems. The plots show how the intrinsic asymmetry, which stabilizes the naturally preferred polarization over the opposite one, decreases with increasing PTO concentration, eventually converging towards the fully symmetric double well \cite{CalloriPRL2012}. For the two superlattice structures we modelled a $2\sqrt{2}\times2\sqrt{2}$ in-plane supercell area with an in-plane lattice constant constrained to the STO equilibrium lattice constant of 3.87 \AA{}. The divacancies were initialized in the same way as before. For divacancies formed at the interface, there are three unique possible locations, which also uniquely determine the direction of the divacancy dipole moment. The three interfacial divacancies, shown in Fig. \ref{fig:stosro_vacloc}, are in the PTO unit cell closer to the SrO layer (\textit{PTO Interface I}), in the PTO unit cell closer to the RuO\textsubscript{2} layer (\textit{PTO Interface II}) and in the PbO-RuO\textsubscript{2} layers, or equivalently, in the PbRuO\textsubscript{3} (PRO) unit cell (\textit{PRO Interface}).
We note that divacancy dipole moments in \textit{PTO Interface I} and \textit{PTO Interface II} are parallel with the direction of the naturally preferred polarization, caused by the compositional asymmetry at the interfaces of the superlattice. However, divacancy dipole moments formed in the \textit{PRO Interface} are antiparallel to the naturally preferred polarization. For divacancies formed in the PTO central region, we must consider both orientations, because, in contrast to divacancies in PTO/STO, they are not symmetric under permutation due to the compositional inversion symmetry breaking. We found that for all divacancies, except the one formed in the \textit{PRO Interface}, we were only able to stabilize the naturally preferred polarization state, i.e. the double well potential becomes a single well and the system under those conditions can be characterized as a polar insulator\cite{Callori2012}. This occurs because the system, when poled into the polarization opposite to that of the divacancy, relaxes back into the naturally preferred polarization state for the considered volume fractions. However, something interesting occurs when the divacancy is located at the \textit{PRO Interface}. In this case, the two nonequivalent polarizations can be stabilized. In the 7/1 volume fraction system, the naturally preferred polarization (pointing down) is still the global minimum, despite the dipole moment of the defect opposing it, as shown in Fig. \ref{fig:pto7sro1_dw}. However, the energy difference between the metastable state, i.e. the higher energy minimum of the asymmetric double well potential, and the stable state decreases by $\sim$1 eV relative to the pristine double well. This occurs because the defect dipole moment in the metastable state is aligned with the system polarization, but anti-aligned with the naturally preferred one, which still dominates the overall polarization double well asymmetry.
With larger PTO volume fractions, in the 10/1 superlattice, we observe a change of behavior. In this case a dipolar defect in the \textit{PRO Interface} is sufficient to reverse the preferred polarization direction so that it is now in the opposite direction to the polarization preference produced by the compositional symmetry breaking, as shown in Fig. \ref{fig:pto10sro1_dw}. We have considered all possible defect locations, and their corresponding results are presented in Supplemental Section V. Only when the defect is located at the \textit{PRO Interface} do we observe a stable polarization sign change with increasing PTO volume fraction. These results would explain the experimental observations, which show how the bias in PTO/SRO superlattices presents a sign change with increasing PTO concentration. The bias reversal can be understood as a compositionally dominated bias at low volume fractions and a defect dominated bias at high volume fractions. These findings would explain the experimental results if there were a strong preference for the development of defect dipoles in the \textit{PRO Interface} over other possible sites. However, we note that the configuration where the defect is located in the \textit{PRO Interface} is 0.97 eV higher in energy relative to when the defect is located in the PTO layers. We have addressed this inconsistency through a calculation of the stability of divacancies, presented in Supplemental Fig. S8. The main discrepancy between our simulations and experiments is that, due to computational limitations, the defect concentrations considered are 1--2 orders of magnitude larger than standard experimental values. The results of our modeling of how electrostatic boundary conditions influence the stability of the divacancies in Supplemental Section S.V(A) show that the stability of the defect in the \textit{PRO Interface} increases with decreasing defect concentration.
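The crossover described above can be summarized by a toy model in which the compositional asymmetry of the double well decays with the PTO volume fraction $f$ while the opposing defect-dipole contribution stays roughly fixed. All magnitudes below are hypothetical, chosen only to reproduce the qualitative sign change, not fitted to the DFT results.

```python
def well_asymmetry(f, comp_scale=1.0, defect=-0.2):
    """Toy vertical asymmetry of the double well: a compositional term
    that vanishes as f -> 1 plus a fixed, opposing defect-dipole term.
    (Illustrative magnitudes only, not fitted to the calculations.)"""
    compositional = comp_scale * (1.0 - f)   # weakens at high PTO fraction
    return compositional + defect

# Low PTO volume fraction: the compositional asymmetry dominates.
assert well_asymmetry(0.7) > 0
# High PTO volume fraction: the defect term dominates and the sign of
# the asymmetry, and hence the bias, reverses.
assert well_asymmetry(0.95) < 0
```

Since the vertical asymmetry of the double well is proportional to the bias, the zero crossing of this toy function corresponds to the experimentally observed bias reversal around high PTO volume fractions.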
\section{Experimental design of built-in bias free superlattices} The previous sections have used a combination of experimental and theoretical data to explain the microscopic origin of the observed built-in bias in PTO/STO and PTO/SRO superlattices. One practical consequence of understanding the bias origin is that if our insights are correct, we could design bias free samples, not by preventing the formation of defects, but by taking advantage of their presence. This is facilitated by the differing signs of defect-induced bias in these two families of superlattices. Using as a guide the bias offset versus composition of the PTO/STO superlattices and the PTO/SRO superlattices plotted in Fig. \ref{datasummary}, we produced a demonstration (n\textsubscript{1}/n\textsubscript{2}/n\textsubscript{3}/n\textsubscript{4})PTO/STO/PTO/SRO combination superlattice, as illustrated in Fig. \ref{combo} (a). If the biases from one bilayer simply add to those from another, then with the correct n\textsubscript{3}/n\textsubscript{2} PTO/STO and n\textsubscript{1}/n\textsubscript{4} PTO/SRO combinations, the total built-in bias will average out and be near 0 V. Fig. \ref{combo}(a) is a cross-sectional HRTEM image of a (15/1/4/2)PTO/STO/PTO/SRO sample showing good epitaxial growth and layering of just such a hybrid superlattice. In Fig. \ref{combo}(b) C-V measurements show good ferroelectric switching as well as a built-in bias near zero. The fact that the built-in bias of the PTO/STO and PTO/SRO bilayers can effectively cancel is strong evidence for our argument that the alignment of Pb-O divacancy dipoles is determined as each layer is grown. \begin{figure} \centering \includegraphics[width=8.5cm]{./Combofigurenew.pdf} \caption{A hybrid superlattice that combines PbTiO\textsubscript{3}/SrTiO\textsubscript{3} with PbTiO\textsubscript{3}/SrRuO\textsubscript{3}. (a) A HRTEM cross-sectional image of a 15/1/4/2 PTO/STO/PTO/SRO superlattice.
(b) The CV butterfly loop of the hybrid superlattice sample and the two building block superlattices that were combined to make it.} \label{combo} \end{figure} \section{Conclusions} This study has shown that interfacial Pb-O divacancies in PbTiO$_3$-based superlattices produce a large built-in bias, which depends on the composition of the superlattices. We have correlated the position and orientation of the divacancies in the superlattice with the observed bias using a combination of experimental and first-principles simulation results. We have also shown that the divacancy-induced built-in bias co-exists with other sources of symmetry breaking, such as compositional inversion symmetry breaking, which is present in PbTiO$_3$/SrRuO$_3$ superlattices. While in superlattices without inversion symmetry breaking the bias always has the same sign, in PbTiO$_3$/SrRuO$_3$ superlattices the sign of the bias depends on the concentration of PbTiO$_3$ relative to SrRuO$_3$. The origin of this dependency is linked to the coupling between the dipole moment of the divacancies and the ferroelectric polarization in the material, which in turn is uniquely fixed by the superlattice growth. Finally, we demonstrate that it is possible to engineer bias-free systems without having to eliminate the defects in the oxide superlattices, giving significantly more freedom in the space of experimental parameters that can be used for growth. \section{Acknowledgements} We acknowledge funding from the National Science Foundation DMR-1334867 and DMR-1334428. SD and MVF-S. thank Stony Brook Research Computing and Cyberinfrastructure, and the Institute for Advanced Computational Science at Stony Brook University for access to the high-performance SeaWulf computing system, which was made possible by a \$1.4M National Science Foundation grant (\#1531492). We acknowledge very helpful discussions with Onur Erten and Cyrus Dreyer.
\section{Introduction} In this paper we consider the Landesman-Lazer type problem for the boundary value problem: \begin{equation}\label{eq1}\left\{\begin{array}{ll} -\Delta u=\lambda u+f(x,u), \hs\hs x\in \Omega;\\[1ex] u(x)=0, \hspace{1cm}\hs\, \hspace{1cm}\Hs x\in\partial\Omega \end{array}\right. \end{equation} as $\lambda$ varies near resonance, where $\Omega\subset\mathbb{R}^N$ is a bounded domain, and $f\in C^1(\overline\Omega\times\mathbb{R})$ satisfies the Landesman-Lazer type condition: \begin{enumerate} \item[(LC)] $f$ is bounded; furthermore, \begin{equation}\label{LC1} \liminf\limits_{t\rightarrow+\infty}f(x,t)\geq\overline{f}>0,\hspace{1cm} \limsup\limits_{t\rightarrow-\infty}f(x,t)\leq-\underline{f}<0\end{equation} uniformly for $x\in\overline\Omega$ (where $\overline f$ and $\underline f$ are independent of $x$). \end{enumerate} Such problems can be seen as nonlinear perturbations of the corresponding linear ones, and have aroused much interest in the past decades; see \cite{CL,CM,FGP,LW,PM,PP,ST,H.N,BC,BEG,K,CC,U,MS1,MS,SW,CW} and the references therein. If $\lambda$ is not an eigenvalue of the operator $A=-\Delta$ (associated with the homogeneous Dirichlet boundary condition), it can easily be shown that the solution set of \eqref{eq1} is bounded. This basic fact in turn allows us to show that the problem has at least one solution by different means, in particular by means of fixed point theory and topological degree. Here we are interested in the multiplicity of solutions of \eqref{eq1} as $\lambda$ varies near an eigenvalue $\mu_k$ of $A$. The motivation comes from the work of Mawhin and Schmitt \cite{MS1,MS}, Schmitt and Wang \cite{SW} and Chang and Wang \cite{CW}, etc. 
In \cite{MS} the authors proved under appropriate Landesman-Lazer type conditions that if $\mu_k$ is of odd multiplicity, then the problem has at least two distinct solutions for $\lambda$ on one side of $\mu_k$ but close to $\mu_k$, and at least one solution for $\lambda$ on the other side. Later the restriction on the multiplicity of $\mu_k$ in this result was removed by Schmitt and Wang in a general framework on bifurcation of potential operators in \cite{SW}. A purely dynamical argument for Schmitt and Wang's result on \eqref{eq1} can also be found in \cite{LLZ}. For the first eigenvalue $\mu_1$, it was shown in \cite{MS1} (see \cite[Theorems 4 and 5]{MS1}) and \cite{CW} (see \cite[Section 3]{CW}) that the problem has at least three distinct solutions for $\lambda$ in a one-sided neighborhood of $\mu_1$, two of which go to infinity while the third remains bounded as $\lambda\rightarrow\mu_1$. Our main purpose in the present work is to extend this elegant result to any eigenvalue $\mu_k$ of $A$. Specifically, let $$ \beta_k=\min\{\mu_k-\mu_{k-1},\mu_{k+1}-\mu_k\},\hspace{1cm} k=1,2,\cdots $$ (here we assign $\beta_1=\mu_2-\mu_1$), and assume that \begin{equation}\label{SMC}M\mu_1^{-1/2}L_f\int_0^\infty\big(2+\tau^{-\frac{1}{2}}\big)e^{-\frac{1}{4}\beta_k\tau}d\tau<1,\end{equation} where $M\geq1$ is a constant depending only upon the operator $A$, and $L_f$ is the Lipschitz constant of $f$. We will show under the above smallness requirement on $L_f$ that there exists $0<\theta \leq \beta_k/4$ such that \eqref{eq1} has at least three distinct solutions $u_{\lambda}^{i}$ ($i=0,1,2$) for each $\lambda\in [\mu_k-\theta ,\mu_k)$, and that $u_{\lambda}^{1}$ and $u_{\lambda}^{2}$ go to $\infty$ as $\lambda\rightarrow \mu_k$ whereas $u_\lambda^0$ remains bounded on $[\mu_k-\theta ,\mu_k)$. 
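For later convenience we note that the integral in \eqref{SMC} can be evaluated in closed form; the computation is elementary and is recorded here only to make the smallness condition explicit. Using $\int_0^\infty e^{-a\tau}d\tau=1/a$ and $\int_0^\infty\tau^{-1/2}e^{-a\tau}d\tau=\Gamma(1/2)\,a^{-1/2}=\sqrt{\pi/a}$ with $a=\beta_k/4$, we obtain $$\int_0^\infty\big(2+\tau^{-\frac{1}{2}}\big)e^{-\frac{1}{4}\beta_k\tau}d\tau=\frac{8}{\beta_k}+2\sqrt{\frac{\pi}{\beta_k}},$$ so that \eqref{SMC} is equivalent to $M\mu_1^{-1/2}L_f\big(8/\beta_k+2\sqrt{\pi/\beta_k}\big)<1$. In particular, the left-hand side manifestly tends to $0$ as $\beta_k\rightarrow+\infty$.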
A ``dual'' version of the result also holds true if we replace \eqref{LC1} by \begin{equation}\label{LC2} \limsup\limits_{t\rightarrow +\infty}f(x,t)\leq-\overline{f}<0,\hspace{1cm} \liminf\limits_{t\rightarrow -\infty}f(x,t)\geq\underline{f}>0.\end{equation} It is worth noticing that for a given globally Lipschitz continuous function $f$, since $\beta_k\rightarrow+\infty$ as $k\rightarrow\infty$ (hence the integral in \eqref{SMC} goes to $0$), the condition \eqref{SMC} is automatically fulfilled provided $k$ is sufficiently large. Our method here is as follows. Instead of transforming \eqref{eq1} into an operator equation and applying the topological degree or other means such as variational methods, we view the problem as the stationary problem of the parabolic equation \begin{equation}\label{eq02}\left\{\begin{array}{ll} u_t-\Delta u=\lambda u+f(x,u), \hs\,\, x\in \Omega;\\[1ex] \,u(x)=0, \hspace{1cm}\Hs \hspace{1cm} \hspace{1cm} x\in\partial\Omega. \end{array}\right. \end{equation} Then we give an existence result on some global invariant manifolds ${\mathcal M}_\lambda^c$ for the semiflow $\Phi_\lambda$ generated by \eqref{eq02} for $\lambda$ near each eigenvalue $\mu_k$. Such a manifold ${\mathcal M}_\lambda^c$ contains all the invariant sets of the system. This allows us to reduce the system on ${\mathcal M}_\lambda^c$ and prove, using the shape theory of attractors \cite{Kap}, that there exists $0<\theta \leq \beta_k/4$ such that the system bifurcates from infinity a compact isolated invariant set $K_\lambda^\infty$ which takes the shape of a sphere $\mathbb{S}^{m-1}$ for $\lambda\in\Lambda_k^-=[\mu_k-\theta,\mu_k)$, where $m$ is the algebraic multiplicity of $\mu_k$. Since $\Phi_\lambda$ is a gradient system, it can be shown that $K_\lambda^\infty$ necessarily contains two distinct equilibria of $\Phi_\lambda$. These equilibria are precisely solutions of \eqref{eq1}. 
Thus we conclude that the Landesman-Lazer type problem \eqref{eq1} bifurcates from infinity two distinct solutions as $\lambda$ varies in $\Lambda_k^-$. Combining this result with some known ones in \cite[Theorem 5.2]{LLZ}, we immediately complete the proof of our main results promised above. Let us mention that our approach is of a purely dynamical nature and differs from those in the literature. It allows us to obtain a clearer picture of the dynamic bifurcation from infinity of the parabolic problem \eqref{eq02} near resonance, which, from the point of view of dynamical systems theory, is naturally of independent interest. This paper is organized as follows. In Section 2 we present an existence result on global invariant manifolds of \eqref{eq02} in an abstract framework of evolution equations in Banach spaces. In Section 3 we give a more precise description of the dynamic bifurcation from infinity of \eqref{eq02} and prove our main results. \section{Existence of invariant manifolds for nonlinear evolution equations} Let $X$ be a Banach space with norm $\|\.\|$, and let $A$ be a sectorial operator on $X$ with compact resolvent. Consider the semilinear equation \begin{equation}\label{e:2.0}x_t+Ax=\lambda x+f(x)\end{equation} in $X$. In this section we present an existence result on global invariant manifolds for the equation when $\lambda$ varies near the real part $\lambda_0=\hbox{Re}\,\mu_0$ of an eigenvalue $\mu_0$ of the operator $A$. \subsection{Mathematical setting} Denote $\sigma(A)$ the spectrum of $A$ and write $\mbox{Re}\,\sigma(A):=\{\mbox{Re}\,\mu:\,\,\mu\in \sigma(A)\}.$ Pick a number $a>0$ such that $$\hbox{Re}\,\sigma(A+aI)>0.$$ Let $\Lambda=A+aI$. For each $\alpha\geq 0$, define $X^\alpha=D(\Lambda^\alpha)$. $X^\alpha$ is equipped with the norm $\|\.\|_\alpha$ defined by $$\|x\|_\alpha=\|\Lambda^\alpha x\|,\hs x\in X^\alpha.$$ It is well known that the definition of $X^\alpha$ is independent of the choice of $a$. 
Let $\lambda_0\in\mbox{Re}\,\sigma(A)$. Since $A$ has compact resolvent, $\lambda_0$ is isolated in $\mbox{Re}\,\sigma(A)$. Hence $\sigma(A)$ has a spectral decomposition $\sigma(A)=\sigma_1\cup \sigma_2\cup\sigma_3$ with $\sigma_2=\{\mu\in\sigma(A):\,\,\hbox{Re}\,\mu=\lambda_0\}$ and $$\sigma_1=\{\mu\in\sigma(A):\,\,\hbox{Re}\,\mu<\lambda_0\},\hs \sigma_3=\{\mu\in \sigma(A):\,\,\hbox{Re}\,\mu>\lambda_0\}.$$ Let $$ \gamma_1=\max\{\mbox{Re}\,\lambda:\,\,\lambda\in\sigma_1\},\hs \gamma_3=\min\{\mbox{Re}\,\lambda:\,\,\lambda\in\sigma_3\}. $$ Then $\gamma_1<\lambda_0<\gamma_3$. The space $X$ has a corresponding direct sum decomposition $X=X_1\oplus X_2\oplus X_3$ with $X_1$ and $X_2$ being finite dimensional. Denote $\Pi_i:X\rightarrow X_i$ the projection from $X$ to $X_i$ ($i=1,2,3$). Let $\beta=\min\{\lambda_0-\gamma_1,\,\gamma_3-\lambda_0\},$ and write $$B=B(\lambda):=A-\lambda I,\hs B_i=B|_{X_i}.$$ We infer from Henry \cite[Theorems 1.5.3 and 1.5.4]{D.H} that there exists $M\geq1$ (depending only upon $A$) such that for $\alpha\in[0,1]$ \begin{equation}\label{b1}\|e^{-B_1 t}\|\leq Me^{\frac{3}{4}\beta t},\hs \|\Lambda^\alpha e^{-B_1 t}\|\leq Me^{\frac{3}{4}\beta t},\hspace{1cm} t\leq0,\end{equation} \begin{equation}\label{b3}\| e^{-B_2 t}\|\leq Me^{\frac{\beta}{4}|t|},\hs \|\Lambda^\alpha e^{-B_2 t}\|\leq Me^{\frac{\beta}{4}|t|},\hspace{1cm} t\in\mathbb{R},\end{equation} and \begin{equation}\label{b2}\|\Lambda^\alpha e^{-B_3 t}\|\leq Mt^{-\alpha}e^{-\frac{3}{4}\beta t},\hs \|\Lambda^\alpha e^{-B_3 t}\Pi_3\Lambda^{-\alpha}\|\leq Me^{-\frac{3}{4}\beta t},\hspace{1cm} t>0.\end{equation} (The latter estimates in \eqref{b1} and \eqref{b3} are due to the finite dimensionality of the spaces $X_1$ and $X_2$.) 
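For the reader's convenience we indicate where the exponents in \eqref{b1}-\eqref{b2} come from; this is a standard spectral-gap computation, implicit in the references just cited. If $|\lambda-\lambda_0|\leq\beta/4$, then by the definitions of $\gamma_1$, $\gamma_3$ and $\beta$, $$\mbox{Re}\,\sigma(B_1)\leq\gamma_1-\lambda\leq(\lambda_0-\beta)-\Big(\lambda_0-\frac{\beta}{4}\Big)=-\frac{3}{4}\beta,\hspace{1cm} \mbox{Re}\,\sigma(B_3)\geq\gamma_3-\lambda\geq\frac{3}{4}\beta,$$ while $|\mbox{Re}\,\sigma(B_2)|=|\lambda_0-\lambda|\leq\beta/4$. The bounds in \eqref{b1} and \eqref{b3} then follow from these spectral estimates together with the finite dimensionality of $X_1$ and $X_2$, and \eqref{b2} follows from the standard smoothing estimate for analytic semigroups.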
Given $\mu\geq 0$, define a Banach space ${\mathscr X}_\mu$ as $$\mathscr{X}_\mu=\left\{x\in C(\mathbb{R};X^\alpha): \,\,\sup_{t\in\mathbb{R}}e^{-\mu |t|}\|x(t)\|_\alpha<\infty\right\},$$ which is equipped with the norm $\|\.\|_{\mathscr{X}_\mu}$: $$\|x\|_{\mathscr{X}_\mu}=\sup_{t\in\mathbb{R}}e^{-\mu |t|}\|x(t)\|_\alpha,\hspace{1cm} \forall\,x\in {\mathscr X}_\mu.$$ The equation \eqref{e:2.0} can be rewritten as \begin{equation}\label{e1}x_t+B x=f(x).\end{equation} For our purposes here, from now on we always assume \begin{enumerate} \item[(F1)] $f\in C(X^\alpha,X)$ and is globally Lipschitz for some $\alpha\in[0,1)$. \end{enumerate} It is easy to see that this condition also implies that there exists $C>0$ such that \begin{equation}\label{eC} \|f(x)\|\leq C(\|x\|_\alpha+1),\hspace{1cm}\forall\,x\in X^\alpha. \end{equation} Under the assumption (F1) the Cauchy problem of \eqref{e1} is well-posed in $X^\alpha$. Specifically, for each $x_0\in X^\alpha$ the equation \eqref{e1} has a unique strong solution $x(t)=\phi_\lambda(t;x_0)$ with initial value $x(0)=x_0$ which exists globally on $\mathbb{R}^+$; see, e.g., \cite[Corollary 3.3.5]{D.H}. Set $\Phi_\lambda(t)x_0=\phi_\lambda(t;x_0)$. Then $\Phi_\lambda$ is a (global) semiflow on $X^\alpha$. \subsection{A basic lemma} The following lemma will play a fundamental role in the construction of invariant manifolds. \begin{lemma}\label{le4.2} Let $\beta/4<\mu<3\beta/4$. A function $x\in\mathscr{X}_\mu$ is a solution of \eqref{e1} on $\mathbb{R}$ if and only if it solves the following integral equation \begin{equation}\label{e4} \begin{split} x(t)&=e^{-B_2 t}\Pi_2 x(0)+\int_{0}^te^{-B_2(t-\tau)}\Pi_2f(x(\tau))d\tau\\ &\quad+\int_{-\infty}^{t}e^{-B_3(t-\tau)}\Pi_3 f(x(\tau))d\tau\\ &\quad-\int_{t}^{\infty}e^{-B_1(t-\tau)}\Pi_1 f(x(\tau))d\tau. 
\end{split} \end{equation} \end{lemma} \begin{remark} In case $\sigma_1=\emptyset$ the integral equation \eqref{e4} reduces to \begin{equation*} \begin{split} x(t)&=e^{-B_2 t}\Pi_2 x(0)+\int_{0}^te^{-B_2(t-\tau)}\Pi_2 f(x(\tau))d\tau\\ &\quad+\int_{-\infty}^{t}e^{-B_3(t-\tau)}\Pi_3 f(x(\tau))d\tau.\end{split} \end{equation*}\end{remark} {\bf Proof of Lemma \ref{le4.2}.} Let $x\in \mathscr{X}_\mu$ be a solution of $(\ref{e1})$ on $\mathbb{R}$. Write $x=x_1+x_2+x_3$, where $x_i=\Pi_i x$ ($i=1,2,3$). Then for any $t,t_0\in\mathbb{R}$, \begin{equation}\label{e6b} x_i(t)=e^{-B_i(t-t_0)}x_i(t_0)+\int_{t_0}^te^{-B_i(t-\tau)}\Pi_i f(x(\tau))d\tau,\hs i=1,2,3.\end{equation} For $i=1$, if $t\leq t_0$ then by \eqref{b1} we see that \begin{equation}\label{e8} \begin{split} \|e^{-B_1(t-t_0)}x_1(t_0)\|_\alpha&\leq Me^{\frac{3}{4}\beta(t-t_0)}\|x(t_0)\|_\alpha\\ &=Me^{\frac{3}{4}\beta t}e^{-(\frac{3}{4}\beta-\mu)t_0}e^{-\mu t_0} \|x(t_0)\|_\alpha\\ & \leq Me^{\frac{3}{4}\beta t}e^{-(\frac{3}{4}\beta-\mu)t_0}\|x\|_{\mathscr{X}_\mu}\ra0, \hs\hbox{as }t_0\rightarrow+\infty.\end{split} \end{equation} Thus setting $t_0\rightarrow+\infty$ in \eqref{e6b} we obtain that $$x_1(t)=-\int^\infty_te^{-B_1(t-\tau)}\Pi_1 f(x(\tau))d\tau.$$ For $i=3$, if $t_0\leq 0$ then by \eqref{b2} we have \begin{equation*} \begin{split} \|e^{-B_3(t-t_0)}x_3(t_0)\|_\alpha&\leq Me^{-\frac{3}{4}\beta (t-t_0)} \|x(t_0)\|_\alpha\\ &=Me^{-\frac{3}{4}\beta t}e^{(\frac{3}{4}\beta-\mu)t_0}\(e^{\mu t_0} \|x(t_0)\|_\alpha\)\\ &\leq Me^{-\frac{3}{4}\beta t}e^{(\frac{3}{4}\beta-\mu)t_0}\|x\|_{\mathscr{X}_\mu}\ra0,\hs \mbox{as }t_0\rightarrow-\infty. \end{split} \end{equation*} Hence by \eqref{e6b} we deduce that $$x_3(t)=\int^{t}_{-\infty}e^{-B_3(t-\tau)}\Pi_3 f(x(\tau))d\tau.$$ For $i=2$, taking $t_0=0$ in \eqref{e6b} yields $$ x_2(t)=e^{-B_2t}x_2(0)+\int_{0}^te^{-B_2(t-\tau)}\Pi_2 f(x(\tau))d\tau. $$ Combining the above results together one immediately concludes the validity of the equation \eqref{e4}. 
Conversely, for each $x\in\mathscr{X}_\mu$ satisfying $(\ref{e4})$ one can easily verify that $x$ is a solution of $(\ref{e1})$ on $\mathbb{R}$. $\Box$ \subsection{Existence of global invariant manifolds} We are now ready to state and prove our main result in this section. Denote $$X^\alpha_i:=X_i\cap X^\alpha,\hspace{1cm} i=1,2,3,$$ and let $X^\alpha_{ij}=X^\alpha_i\oplus X^\alpha_j$ ($i\ne j$). \begin{theorem}\label{th1} Suppose that the Lipschitz constant $L_f$ of $f$ satisfies \begin{equation}\label{eq1.6}ML_f\int_0^\infty\big(2+\tau^{-\alpha}\big)e^{-\frac{\beta}{4}\tau}d\tau <1.\end{equation} Then for each $\lambda\in (\lambda_0-\beta/4,\lambda_0+\beta/4):=J$, the semiflow $\Phi_\lambda$ defined by \eqref{e:2.0} has a global invariant manifold ${\mathcal M}^c_\lambda$ given by $${\mathcal M}^c_\lambda=\{y+\xi_\lambda(y):y\in X^\alpha_2\},$$ where $\xi_\lambda:X^\alpha_2\rightarrow X^\alpha_{13}$ is Lipschitz continuous uniformly in $\lambda\in J$. \end{theorem} \vskip8pt\noindent{\bf Proof of Theorem \ref{th1}.} We work with the equivalent form \eqref{e1} of the equation. For each $\lambda\in J$ and $y\in X_2^\alpha$, one can use the right-hand side of equation $(\ref{e4})$ to define a contraction mapping ${\mathcal T}:={\mathcal T}_{\lambda,y}$ on $\mathscr{X}_{\beta/2}$ as follows: \begin{equation*} \begin{split} {\mathcal T} x(t)&=e^{-B_2 t}y+\int_{0}^te^{-B_2(t-\tau)}\Pi_2 f(x(\tau))d\tau\\ &\quad+\int_{-\infty}^{t}e^{-B_3(t-\tau)}\Pi_3 f(x(\tau))d\tau\\ &\quad-\int_{t}^{\infty}e^{-B_1(t-\tau)}\Pi_1 f(x(\tau))d\tau. \end{split} \end{equation*} We first verify that $\mathcal{T}$ maps $\mathscr{X}_{\beta/2}$ into itself. For notational convenience, we write $$0\wedge t=\min\{0,t\},\hs 0\vee t=\max\{0,t\}$$ for $t\in\mathbb{R}$. Let $x\in \mathscr{X}_{\beta/2}$. 
By \eqref{b1}-\eqref{b2} and \eqref{eC} we have \begin{equation}\label{e:2.15a} \begin{split} \|\mathcal {T}x(t)\|_\alpha&\leq Me^{\frac{\beta}{4}|t|}\|y\|_\alpha+MC\int_{0\wedge t}^{0\vee t}e^{\frac{\beta}{4}|t-\tau|}\big(\|x(\tau)\|_\alpha+1\big)d\tau\\ &\quad+ MC\int_{-\infty}^t (t-\tau)^{-\alpha} e^{-\frac{3\beta}{4}(t-\tau)}\big(\|x(\tau)\|_\alpha+1\big)d\tau\\ &\quad+MC\int_t^\infty e^{\frac{3\beta}{4}(t-\tau)}\big(\|x(\tau)\|_\alpha+1\big)d\tau, \end{split}\end{equation}where $C$ is the constant in \eqref{eC}. Simple computations show that \begin{equation*} \frac{\beta}{4}|t-\tau|-\frac{\beta}{2}|t|=-\frac{\beta}{4}|t-\tau|-\frac{\beta}{2}|\tau|,\hspace{1cm} \tau\in [0\wedge t,\,0\vee t]. \end{equation*} It is also easy to see that \begin{equation*} \begin{split} e^{-\frac{\beta}{2}|t|}=e^{-\frac{\beta}{2}|\tau+(t-\tau)|}\leq e^{\frac{\beta}{2}|t-\tau|}e^{-\frac{\beta}{2}|\tau|},\hspace{1cm} t,\,\tau\in\mathbb{R}. \end{split} \end{equation*} Thus by \eqref{e:2.15a} we find that \begin{equation*} \begin{split} e^{-\frac{\beta}{2}|t|}\|\mathcal {T}x(t)\|_\alpha \leq\, &Me^{-\frac{\beta}{4}|t|}\|y\|_\alpha+MC\int_{0\wedge t}^{0\vee t}e^{-\frac{\beta}{4}|t-\tau|}\big[e^{-\frac{\beta}{2}|\tau|}\big(\|x(\tau)\|_\alpha+1\big)\big]d\tau\\ &+ MC\int_{-\infty}^t (t-\tau)^{-\alpha}e^{\frac{\beta}{2}|t-\tau|} e^{-\frac{3\beta}{4}(t-\tau)}\big[e^{-\frac{\beta}{2}|\tau|}\big(\|x(\tau)\|_\alpha+1\big)\big]d\tau\\ &+MC\int_t^\infty e^{\frac{\beta}{2}|t-\tau|}e^{\frac{3\beta}{4}(t-\tau)}\big[e^{-\frac{\beta}{2}|\tau|}\big(\|x(\tau)\|_\alpha+1\big)\big] d\tau\\ \leq\,&Me^{-\frac{\beta}{4}|t|}\|y\|_\alpha+MC\big(\|x\|_{\mathscr{X}_{\beta/2}}+1\big)\int_{0\wedge t}^{0\vee t}e^{-\frac{\beta}{4}|t-\tau|}d\tau\\ &+MC\big(\|x\|_{\mathscr{X}_{\beta/2}}+1\big)\int_{-\infty}^t (t-\tau)^{-\alpha} e^{-\frac{\beta}{4}(t-\tau)}d\tau\\ &+MC\big(\|x\|_{\mathscr{X}_{\beta/2}}+1\big)\int_t^\infty e^{\frac{\beta}{4}(t-\tau)} d\tau\\ \leq \,& M\|y\|_\alpha+M_\beta C\big(\|x\|_{\mathscr{X}_{\beta/2}}+1\big),\hspace{1cm} \forall\,t\in\mathbb{R}, \end{split} \end{equation*} where \begin{equation}\label{Mb} M_\beta=M\int_0^\infty\big(2+\tau^{-\alpha}\big)e^{-\frac{\beta}{4}\tau}d\tau.\end{equation} It follows that $\|\mathcal {T}x\|_{\mathscr{X}_{\beta/2}}<\infty$, that is, $\mathcal{T}x\in\mathscr{X}_{\beta/2}$. Next, we check that $\mathcal{T}$ is contractive. Let $x,\,x'\in\mathscr{X}_{\beta/2}$. In a quite similar fashion as above it can be shown that \begin{equation}\label{eq2.2} \begin{split} &\hs\,\, e^{-\frac{\beta}{2}|t|}\|\mathcal {T}x(t)-\mathcal {T}x'(t)\|_\alpha\\ &\leq ML_f\int_{0\wedge t}^{0\vee t}e^{-\frac{\beta}{4}|t-\tau|}\(e^{-\frac{\beta}{2}|\tau|}\|x(\tau)-x'(\tau)\|_\alpha \)d\tau\\ &\quad+ ML_f\int_{-\infty}^t(t-\tau)^{-\alpha} e^{-\frac{\beta}{4}(t-\tau)}\(e^{-\frac{\beta}{2}|\tau|}\|x(\tau)-x'(\tau)\|_\alpha \)d\tau\\ &\quad+ML_f\int^\infty_t e^{\frac{\beta}{4}(t-\tau)}\(e^{-\frac{\beta}{2}|\tau|}\|x(\tau)-x'(\tau)\|_\alpha\)d\tau\\ &\leq M_\beta L_f\|x-x'\|_{\mathscr{X}_{\beta/2}},\hspace{1cm}\forall\, t\in\mathbb{R}, \end{split} \end{equation} where (and below) $M_\beta$ is the number given in \eqref{Mb}. Therefore $$ \|\mathcal {T}x-\mathcal {T}x'\|_{\mathscr{X}_{\beta/2}}\leq M_\beta L_f\|x-x'\|_{\mathscr{X}_{\beta/2}}. $$ The condition \eqref{eq1.6} then asserts that ${\mathcal T}$ is contractive. Thanks to the Banach fixed-point theorem, ${\mathcal T}={\mathcal T}_{\lambda,y}$ has a unique fixed point $\gamma_{y}:=\gamma_{\lambda,y}\in \mathscr{X}_{\beta/2}$ which, by the definition of ${\mathcal T}$, solves the integral equation \begin{equation}\label{equ5} \begin{split} \gamma_y(t)&=e^{-B_2t}y+\int_{0}^te^{-B_2(t-\tau)}\Pi_2 f(\gamma_y(\tau))d\tau\\ &\quad+\int_{-\infty}^{t}e^{-B_3(t-\tau)}\Pi_3 f(\gamma_y(\tau))d\tau\\[0.5ex] &\quad-\int_{t}^{\infty}e^{-B_1(t-\tau)}\Pi_1 f(\gamma_y(\tau))d\tau. 
\end{split} \end{equation} (Hence $\gamma_y(t)$ is a solution of $(\ref{e1})$ on $\mathbb{R}$ with $\Pi_2\gamma_y(0)=y$.) Let $y,z\in X_2^\alpha$ and $t\in\mathbb{R}$. Similar to \eqref{eq2.2}, by $(\ref{equ5})$ we find that \begin{equation*} \begin{split} &e^{-\frac{\beta}{2} |t|}\|\gamma_{y}(t)-\gamma_{z}(t)\|_\alpha\\ \leq\,&Me^{-\frac{\beta}{4} |t|}\|y-z\|_\alpha+ ML_f\int_{0\wedge t}^{0\vee t}e^{-\frac{\beta}{4}|t-\tau|}\big(e^{-\frac{\beta}{2}|\tau|}\|\gamma_{y}(\tau)-\gamma_{z}(\tau)\|_\alpha \big)d\tau\\ &+ ML_f\int_{-\infty}^t(t-\tau)^{-\alpha} e^{-\frac{\beta}{4}(t-\tau)}\big(e^{-\frac{\beta}{2}|\tau|}\|\gamma_{y}(\tau)-\gamma_{z}(\tau)\|_\alpha \big)d\tau\\ &+ML_f\int^\infty_t e^{\frac{\beta}{4}(t-\tau)}\big(e^{-\frac{\beta}{2}|\tau|}\|\gamma_{y}(\tau)-\gamma_{z}(\tau)\|_\alpha \big)d\tau\\ \leq\,& M\|y-z\|_\alpha+ M_\beta L_f\|\gamma_{y}-\gamma_{z}\|_{\mathscr{X}_{\beta/2}}. \end{split} \end{equation*} Hence $$ \|\gamma_{y}-\gamma_{z}\|_{\mathscr{X}_{\beta/2}}\leq\, M\|y-z\|_\alpha+ M_\beta L_f\|\gamma_{y}-\gamma_{z}\|_{\mathscr{X}_{\beta/2}}. $$ Therefore \begin{equation}\label{eq2} \begin{split} \|\gamma_{y}(0)-\gamma_{z}(0)\|_\alpha\leq\|\gamma_{y}-\gamma_{z}\|_{\mathscr{X}_{\beta/2}}\leq \tilde L_0\|y-z\|_\alpha,\end{split} \end{equation} where $\tilde L_0={M}/({1-M_\beta L_f})$ is a constant independent of $\lambda\in J$. Now we define a mapping $\xi_\lambda: X_2^\alpha\rightarrow X_{13}^\alpha$ as $$ \xi_\lambda(y)=\gamma_y(0)-y,\hspace{1cm} y\in X_2^\alpha. $$ Setting $t=0$ in \eqref{equ5} one finds that \begin{equation}\label{eq2.5} \xi_\lambda(y)=\int_{-\infty}^0e^{B_3\tau}\Pi_3 f(\gamma_{y}(\tau))d\tau-\int_0^{\infty}e^{B_1\tau}\Pi_1 f(\gamma_{y}(\tau))d\tau \end{equation} for $y\in X_2^\alpha$. Let $L_0=\tilde L_0+1$. 
It follows by \eqref{eq2} that $$\|\xi_\lambda(y)-\xi_\lambda(z)\|_\alpha\leq L_0\|y-z\|_\alpha, \hspace{1cm} y,z\in X_2^\alpha.$$ Let $${\mathcal M}^c_\lambda=\{y+\xi_\lambda(y):y\in X_2^\alpha\}.$$ Then ${\mathcal M}^c_\lambda$ is an invariant manifold of \eqref{e:2.0}. Clearly ${\mathcal M}_\lambda^c$ is homeomorphic to $X_2^\alpha$. $\Box$ \vskip8pt \section{Bifurcation and multiplicity of \eqref{eq1}} Let us now look at the bifurcation and multiplicity of the Landesman-Lazer type problem \eqref{eq1} when $\lambda$ crosses any eigenvalue of the operator $A=-\Delta$. For this purpose, we first turn our attention to the dynamic bifurcation of the parabolic problem \begin{equation}\label{e:3.1}\left\{\begin{array}{ll} u_t-\Delta u=\lambda u+f(x,u), \hs\,\, x\in \Omega;\\[1ex] \,u(x)=0, \hspace{1cm}\Hs \hspace{1cm} \hspace{1cm} x\in\partial\Omega, \end{array}\right. \end{equation} where $f\in C^1(\overline\Omega\times\mathbb{R})$ is globally Lipschitz with Lipschitz constant $L_f$ and satisfies the (LC) condition in Section 1. \subsection{Mathematical setting} Let $H=L^2(\Omega)$ and $V=H_0^1(\Omega)$. By $(\cdot,\cdot)$ and $|\cdot|$ we denote the usual inner product and norm on $H$, respectively. The inner product and norm on $V$, denoted by $((\.,\.))$ and $\|\cdot\|$, respectively, are defined as $$ ((u,v))= \int_{\Omega}\nabla u\.\nabla v\mathrm{d}x,\hs \|u\|=\(\int_{\Omega}|\nabla u|^2\mathrm{d}x\)^{1/2} $$ for $u,v\in V$. (The notation $\|\.\|$ is also used to denote the norm of any linear operator. We hope this will cause no confusion.) Denote $A$ the operator $-\Delta$ associated with the homogeneous Dirichlet boundary condition. Then $A$ is a sectorial operator on $H$ and has a compact resolvent. It is well known that $D(A^{1/2})=V$. Let $$0<\mu_1<\mu_2<\cdots<\mu_k<\cdots$$ be the eigenvalues of $A$. 
Then $$\mu_{k+1}-\mu_k\rightarrow+\infty,\hs\mbox{as } k\rightarrow\infty$$ (see, e.g., \cite[Chapter 4]{M}). Denote $W_k$ the eigenspace corresponding to $\mu_k$. System \eqref{e:3.1} can be rewritten as an abstract evolution equation in $V$: \begin{equation}\label{e3.2} u_t+B u= \tilde{f}(u),\hs u=u(t)\in V,\end{equation} where $B:=B_\lambda=A-\lambda I,$ and $\tilde{f}:V\rightarrow H$ is the Nemitski operator given by $$ \tilde{f}(u)(x)=f(x,u(x)),\hspace{1cm} u\in V. $$ One trivially verifies that \begin{equation}\label{eq1.8}|\tilde{f}(u_1)- \tilde{f}(u_2)| \leq\tilde L \|u_1-u_2\|,\hspace{1cm}\forall\, u_1, u_2\in V,\end{equation} where $\tilde L =L_f/\sqrt{\mu_1}$. \vs Set $$ \beta_k=\min\{\mu_k-\mu_{k-1},\mu_{k+1}-\mu_k\},\hspace{1cm} k=1,2,\cdots. $$ (Here we assign $\beta_1=\mu_2-\mu_1$.) For each $k$, if $\lambda\in(\mu_k-\beta_k,\mu_k+\beta_k)$ then the spectrum $\sigma_\lambda(B)$ of the operator $B=B_\lambda$ has a spectral decomposition $$\sigma_\lambda(B)=\sigma_\lambda^u\cup\sigma_\lambda^c\cup\sigma_\lambda^s,$$ where $\sigma_\lambda^c=\{\mu_k-\lambda\} $, and $$\sigma_\lambda^u=\{\mu_j-\lambda:\,\,j<k\},\hs \sigma_\lambda^s=\{\mu_j-\lambda:\,\,j>k\}.$$ Clearly $$ \sigma_\lambda^u\subset (-\infty,0),\hs\mbox{and }\,\sigma_\lambda^s\subset (0,\infty). $$ The space $H$ has a corresponding orthogonal decomposition $H=\oplus_{i=u,c,s}H_i$ (independent of $\lambda$) with $H_i\perp H_j$ if $i\ne j$. Set $H_{us}=H_u\oplus H_s$. Let $B_i$ be the restriction of $B$ on $H_i$, and denote $\Pi_i:H\rightarrow H_i$ the projection, where $i=u,c,s,us$. \subsection{Dynamic bifurcation from infinity} Denote $\Phi_\lambda(t)$ the semiflow generated by the initial value problem of \eqref{e3.2} on $V$, namely, for each $u_0\in V$, $u(t)=\Phi_\lambda(t)u_0$ is the (unique) strong solution of \eqref{e3.2} in $V$ with $u(0)=u_0$. 
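As a simple illustration of these quantities (not used in the proofs), consider the one-dimensional case $\Omega=(0,\pi)$. There $\mu_k=k^2$, with eigenfunctions $\sin(kx)$, so that $$\mu_{k+1}-\mu_k=2k+1\rightarrow+\infty,\hs \beta_k=\min\{k^2-(k-1)^2,\,(k+1)^2-k^2\}=2k-1\ \ (k\geq2),$$ while $\beta_1=\mu_2-\mu_1=3$. In particular, the integral in \eqref{SMC} tends to $0$ along this sequence, so the smallness condition holds for every globally Lipschitz $f$ once $k$ is large enough.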
Set $$V_i:=H_i\cap V,\hspace{1cm} i=u,s,c,us.$$ Denote $$M_{\beta_k}=M\int_0^\infty\big(2+\tau^{-\frac{1}{2}}\big)e^{-\frac{1}{4}\beta_k\tau}d\tau,\hspace{1cm} k\geq1.$$ Then we have the following result about the dynamic bifurcation of \eqref{e3.2}. \begin{theorem}\label{th3.1}Given $k\geq 1$, suppose the Lipschitz constant $L_f$ of $f$ satisfies \begin{equation}\label{e3b} M_{\beta_k}L_f/\sqrt{\mu_1}<1. \end{equation} Then there exists $0<\theta <\beta_k/4$ such that when $\lambda\in \Lambda_k^-:=[\mu_k-\theta ,\mu_k)$, $\Phi_\lambda$ has a compact invariant set $K^\infty_\lambda$ which takes the shape of the $(m-1)$-dimensional sphere $\mathbb{S}^{m-1}$. Furthermore, \begin{equation}\label{e3a}\lim_{\lambda\rightarrow\mu_k^-}\min\{\|v\|:\,\,v\in K^\infty_\lambda\}=\infty.\end{equation}\end{theorem} \begin{remark} In the above theorem we have employed a topological concept, shape, without giving its definition. Informally speaking, this notion can be seen as a generalization of that of homotopy type, and is used to describe the topological structure of ``bad spaces'' such as invariant sets and attractors, for which it is difficult to talk about homotopy type. It is well known that spaces having the same homotopy type enjoy the same shape. The interested reader is referred to \cite{Kap} for details. \end{remark} \begin{remark} Since $\beta_k\rightarrow\infty$ as $k\rightarrow\infty$, it is easy to see by definition that $M_{\beta_k}\rightarrow 0$ as $k\rightarrow\infty$. Therefore, for any globally Lipschitz function $f$, the smallness requirement \eqref{e3b} is automatically satisfied as long as $k$ is large enough. \end{remark} To prove Theorem \ref{th3.1}, we need some auxiliary results. The proposition below is a straightforward application of Theorem \ref{th1}. \begin{proposition}\label{p2} Suppose $f$ satisfies \eqref{e3b}. 
Then for each $\lambda\in(\mu_k-\beta_k/4,\,\mu_k+\beta_k/4):=J_k$, the semiflow $\Phi_\lambda$ has a global invariant manifold ${\mathcal M}^c_\lambda$ given by \begin{equation}\label{e3c}{\mathcal M}^c_\lambda=\{y+\xi_\lambda(y):y\in V_c\},\end{equation} where $\xi_\lambda:V_c\rightarrow V_{us}$ is globally Lipschitz with Lipschitz constant \begin{equation}\label{e3d} L_{\xi_\lambda}\leq L_0,\hspace{1cm}\forall\,\lambda\in J_k \end{equation} for some $L_0>0$ independent of $\lambda\in J_k$. \end{proposition} Given a function $v$ on $\Omega$, we use $v_\pm$ to denote the positive and negative parts of $v$, respectively, $$v_\pm=\max\{\pm v(x),0\},\hs x\in \Omega.$$ Then $v=v_+-v_-$. The following fundamental result on $f$ is taken from \cite{LLZ} (see also \cite[Section 6]{LiD}). \begin{proposition}\label{p1}\cite{LLZ} Suppose $f$ satisfies the Landesman-Lazer type condition $(LC)$. Then for any $R,\varepsilon>0$, there exists $s_0>0$ such that $$\int_\Omega f(x,sv+u)vdx\geq \int_\Omega \(\overline{f}v_++\underline{f}v_-\)dx-\varepsilon$$ for all $s\geq s_0$, $v\in \overline{B} (1)$ and $u\in \overline{B} (R)$, where $B (r)$ denotes the ball in $H$ centered at $0$ with radius $r$.\end{proposition} Henceforth we always assume that $L_f$ satisfies \eqref{e3b}, so for each $\lambda$ with $|\lambda-\mu_k|<\beta_k/4$, the semiflow $\Phi_\lambda$ has an invariant manifold ${\mathcal M}_\lambda^c$ given by \eqref{e3c}. If we reduce the system \eqref{e3.2} on ${\mathcal M}_\lambda^c$, it takes the form \begin{equation}\label{eq2.1}w_t+B_c w=\Pi_c\tilde{f}(w+\xi_\lambda(w)),\hs w\in V_c.\end{equation} Let $\phi_\lambda$ denote the semiflow generated by \eqref{eq2.1} on $V_c$. Since $V_c$ is finite dimensional, all the norms on $V_c$ are equivalent. Hence for convenience, we equip $V_c$ with the norm $|\.|$ of $H$ in the following argument. 
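We also record a routine estimate that will be invoked in the proof of Lemma \ref{le1} below. Since $f$ is bounded by condition (LC), we have $|\tilde f(u)|\leq|\Omega|^{1/2}\sup_{\overline\Omega\times\mathbb{R}}|f|$ for all $u\in V$, and the representation \eqref{eq2.5} of $\xi_\lambda$, combined with the estimates \eqref{b1} and \eqref{b2} (applied with $\alpha=1/2$ and $\beta=\beta_k$), yields, up to a constant coming from the equivalence of the norms $\|\cdot\|$ and $\|\cdot\|_{1/2}$, $$\sup_{\lambda\in J_k}\,\sup_{y\in V_c}\|\xi_\lambda(y)\|\leq M\,|\Omega|^{1/2}\Big(\sup_{\overline\Omega\times\mathbb{R}}|f|\Big)\int_0^\infty\big(1+\tau^{-\frac{1}{2}}\big)e^{-\frac{3}{4}\beta_k\tau}d\tau<\infty.$$ In particular, $\xi_\lambda$ is bounded uniformly in $y$ and in $\lambda\in J_k$.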
Given $0\leq a<b\leq\infty$, denote $$\Xi [a,b]:=\{x\in V_c:\,\,a\leq|x| \leq b\}.$$ \begin{lemma}\label{le1} Under the hypotheses \eqref{e3b} and $(LC)$, there exist $R_0\geq 0$ and $c_0>0$ such that the following assertions hold. \begin{enumerate} \item[$(1)$] If $\lambda\in[\mu_k,\mu_k+\beta_k/4)$, then for any solution $w(t)$ of \eqref{eq2.1} in $\Xi [R_0,\infty]$, we have \begin{equation}\label{e5.10} \frac{d}{dt}|w| ^2\geq c_0 |w| . \end{equation} \item[$(2)$] For any $R>R_0$, there exists $0<\varepsilon<\beta_k/4$ such that if $\lambda\in[\mu_k-\varepsilon,\mu_k)$, then \eqref{e5.10} holds true for any solution $w(t)$ of \eqref{eq2.1} in $\Xi [R_0,R]$. \item[$(3)$] There exists $\theta>0$ such that for each $\lambda\in[\mu_k-\theta ,\mu_k)$, the semiflow $\phi_\lambda$ has a positively invariant set $N_\lambda=\Xi [a_\lambda,b_\lambda]$ with $a_\lambda,\,b_\lambda\rightarrow\infty$ as $\lambda\rightarrow \mu_k^-$. \end{enumerate} \end{lemma} \vs {\bf Proof.} Taking the inner product of (\ref{eq2.1}) with $w$ in $H$ yields \begin{equation}\label{e6.16}\frac{1}{2}\frac{d}{dt}|w| ^2+(\mu_k-\lambda)|w| ^2= \big(\Pi_c\tilde{f}(w+\xi_\lambda(w)),w\big)= \big(\tilde{f}(w+\xi_\lambda(w)),w\big). \end{equation} Let us first estimate the last term in \eqref{e6.16}. As the norm $|\.|_{L^1(\Omega)}$ of $L^1(\Omega)$ and that of $H$ are equivalent on $V_c$, one easily sees that \begin{equation}\label{e6.26}\begin{array}{ll} \min\{|v|_{L^1(\Omega)}:\,\,v\in V_c,\,\,|v| =1\}:=r>0.\end{array} \end{equation} Pick a number $\delta>0$ with $\delta\leq\min\{\overline f,\underline f\}$. We infer from the representation of $\xi_\lambda$ and the boundedness of $f$ that $\xi_\lambda$ is uniformly bounded in $\lambda$. 
Thus by virtue of Proposition \ref{p1} there exists $s_0>0$ such that when $s\geq s_0$, \begin{equation}\label{e6.20}\begin{split}\big(\tilde{f}(sv+\xi_\lambda(sv)),v\big)&=\int_\Omega f(x,sv+\xi_\lambda(sv))v\,dx\\[1ex] &\geq \int_\Omega\(\overline fv_++\underline fv_-\)dx-\frac{1}{2}r\delta \end{split}\end{equation} for all $v\in \overline{B} (1)$. Now we rewrite $$\mbox{ $w=sv$,\,\, where $s=|w| $.}$$ Then $v\in \overline{B} (1)$. Suppose $s\geq s_0$. By \eqref{e6.20} one finds that \begin{equation*}\begin{split} \big(\tilde{f}\big(w+\xi_\lambda(w)),w\big)&=s\int_\Omega f\(x,sv+\xi_\lambda(sv)\)v\,dx\\[1ex] &\geq s\(\int_\Omega\(\overline f v_++\underline f v_-\)dx-\frac{1}{2}r\delta \).\end{split} \end{equation*} Observing that \begin{equation*}\begin{split} &\int_\Omega\(\overline fv_++\underline fv_-\)dx -\frac{1}{2}r\delta \\[1ex] \geq &\delta \int_\Omega|v|\,dx-\frac{1}{2}r\delta\geq \delta r-\frac{1}{2}r\delta= \frac{1}{2}r\delta,\end{split}\end{equation*} where the second inequality follows from \eqref{e6.26} since $|v| =1$, we conclude that \begin{equation}\label{e5.22} \big(\tilde{f}\big(w+\xi_\lambda(w)),w\big)\geq \frac{1}{2}r\delta s=\frac{1}{2}r\delta |w| . \end{equation} Combining \eqref{e5.22} with \eqref{e6.16}, we find that \begin{equation}\label{e5.23} \frac{d}{dt}|w| ^2\geq 2\(\lambda-\mu_k\) |w| ^2+r\delta |w| \end{equation} as long as $|w(t)| \geq s_0$. \vs Set $R_0=s_0$ and $c_0=r\delta/2$. Assume $\lambda\in[\mu_k,\mu_k+\beta_k/4)$. Then $\lambda-\mu_k\geq 0$, and we infer from \eqref{e5.23} that if $|w| \geq R_0$ then $$ \frac{d}{dt}|w| ^2\geq r\delta |w| >c_0|w| .$$ Hence assertion (1) holds. \vs Now assume $\lambda<\mu_k$. Let $R>R_0$. Choose an $\varepsilon>0$ sufficiently small so that $\varepsilon R^2<r\delta s_0/4$. 
Then if $\lambda\in[\mu_k-\varepsilon,\mu_k)$, for any solution $w(t)$ of \eqref{eq2.1} in $\Xi [R_0,R]$, by \eqref{e5.23} we deduce that $$\begin{array}{ll} \frac{d}{dt}|w| ^2&\geq -2|\lambda-\mu_k|\,R^2+r\delta |w| \\[1ex] &\geq -2\varepsilon R^2 +2c_0|w| \\[1ex] &= \big(c_0|w| -2\varepsilon R^2\big)+c_0|w| \\ &\geq c_0|w| ,\end{array} $$ where the last inequality holds because $c_0|w| \geq c_0s_0=r\delta s_0/2>2\varepsilon R^2$. This justifies assertion (2). Note that \eqref{e5.10} implies that $\Xi [R,\infty]$ is positively invariant for $\phi_\lambda$ when $\lambda\in[\mu_k-\varepsilon,\mu_k)$. \vs Let $R_j=R_0+j$ ($j=1,2,\cdots$). Then for each $j$, we deduce by assertion (2) that there exists $\varepsilon_j>0$ such that if $\lambda\in[\mu_k-\varepsilon_j,\mu_k)$, then \eqref{e5.10} holds true for any solution $w(t)$ of \eqref{eq2.1} in $\Xi [R_0,R_j]$. Hence $\Xi [R_j,\infty]$ is positively invariant for $\phi_\lambda$ when $\lambda\in[\mu_k-\varepsilon_j,\mu_k)$. On the other hand, by the boundedness of $f$ we have \begin{equation}\label{eq3.6} \begin{split}\big(\tilde{f}(w+\xi_\lambda(w)),w\big)&\leq \big|\tilde{f}(w+\xi_\lambda(w))\big| \,|w| \leq C\,|w| \\ &\leq \frac{\mu_k-\lambda}{2}|w|^2 +C(\lambda),\end{split} \end{equation} where $C(\lambda)\rightarrow +\infty$ as $\lambda\rightarrow \mu_k^-$. Combining \eqref{eq3.6} with \eqref{e6.16}, we find that $$\frac{d}{dt}|w| ^2\leq-(\mu_k-\lambda)|w| ^2+2C(\lambda). $$ Thanks to the classical Gronwall lemma, \begin{equation}\label{e3e}|w(t)| ^2\leq e^{-(\mu_k-\lambda)t}|w(0)| ^2+\(1-e^{-(\mu_k-\lambda)t}\)\frac{2C(\lambda)}{\mu_k-\lambda},\hs t\geq0.\end{equation} By \eqref{e3e} it is easy to verify that if $$\rho\geq \sqrt{2C(\lambda)/(\mu_k-\lambda)}:=\rho_\lambda,$$ then $\{v\in V_c:\,\,|v|\leq \rho\}$ is positively invariant. We may assume $\varepsilon_1>\varepsilon_2>\cdots>\varepsilon_j\rightarrow 0$. Then $$[\mu_k-\varepsilon_1,\mu_k)=\bigcup_{j\geq 1}[\mu_k-\varepsilon_j,\,\mu_k-\varepsilon_{j+1}).$$ Set $\theta =\varepsilon_1$, and let $\lambda\in[\mu_k-\theta ,\mu_k)$. 
If $\lambda\in [\mu_k-\varepsilon_j,\,\mu_k-\varepsilon_{j+1})$, we take $$ a_\lambda=R_j,\hs b_\lambda=R_j+\rho_\lambda. $$ Clearly $a_\lambda,b_\lambda\rightarrow\infty$ as $\lambda\rightarrow\mu_k^-$. We infer from the above argument that $\Xi[a_\lambda,b_\lambda]$ is positively invariant under the semiflow $\phi_\lambda$, hence assertion (3) holds true. $\Box$ \vskip8pt Now let us turn to the proof of Theorem \ref{th3.1}. \vs\noindent {\bf Proof of Theorem \ref{th3.1}.} Let $k\geq1$, and let $\theta $ be the number given in Lemma \ref{le1}. Assume $\lambda\in[\mu_k-\theta ,\mu_k)$. Then Lemma \ref{le1} (3) asserts that $N_\lambda=\Xi[a_\lambda,b_\lambda]$ is a positively invariant set of $\phi_\lambda$. Set $$ {\mathcal A}_\lambda^\infty=\bigcap_{\tau\geq 0}\overline{\bigcup_{t\geq\tau}\phi_\lambda(t)N_\lambda}. $$ By basic results of attractor theory (see e.g. \cite{Hale,Tem}) we know that ${\mathcal A}_\lambda^\infty$ is the global attractor of $\phi_\lambda$ restricted to the phase space $X=N_\lambda$. Since $N_\lambda$ has the homotopy type of the $(m-1)$-dimensional sphere $\mathbb{S}^{m-1}$, it has the shape of $\mathbb{S}^{m-1}$. Therefore, by the shape theory of attractors in \cite{Kap} (see also \cite{S}), ${\mathcal A}^\infty_\lambda$ has the shape of $\mathbb{S}^{m-1}$ as well. Let $$K^\infty_\lambda=\{w+\xi_\lambda(w): w\in {\mathcal A}_\lambda^\infty\}. $$ Then $K^\infty_\lambda\subset {\mathcal M}^c_\lambda$ is a compact invariant set of the original system $\Phi_\lambda$ which has the shape of an $(m-1)$-dimensional sphere. The conclusion in \eqref{e3a} directly follows from the fact that $a_\lambda,b_\lambda\rightarrow+\infty$ as $\lambda\rightarrow \mu_k^-$. The proof of the theorem is complete. $\Box$ \subsection{Bifurcation and multiplicity of \eqref{eq1}} We are now ready to state and prove the main result of this work. 
\begin{theorem}\label{th3.2} Assume $f$ satisfies the Landesman-Lazer type condition $(LC)$. Let $\mu_k$ be an eigenvalue of $A$, and suppose $L_f$ satisfies \eqref{e3b}. Then there exists $0<\theta < \beta_k/4$ such that the problem \eqref{eq1} has at least three distinct solutions $u_{\lambda}^{i}$ $(i=0,1,2)$ for each $\lambda\in\Lambda_k^-=[\mu_k-\theta ,\mu_k)$, where $u_{\lambda}^{1}$ and $u_{\lambda}^{2}$ go to infinity as $\lambda\rightarrow \mu_k^-$ whereas $u_\lambda^0$ remains bounded on $\Lambda^-_k$. \end{theorem} \begin{remark} It is known under the hypotheses of Theorem \ref{th3.2} that \eqref{eq1} has at least one solution for all $\lambda\in\mathbb{R}$; see, e.g., \cite[Theorem 5.3]{LLZ}. \end{remark} {\bf Proof.} Let $\mu_k$ be an eigenvalue of $A$, and let $\theta $ be the number given in Theorem \ref{th3.1}. Then Theorem \ref{th3.1} asserts that for each $\lambda\in \Lambda_k^-=[\mu_k-\theta ,\mu_k)$, the system $\Phi_\lambda$ has a compact invariant set $K^\infty_\lambda$. We first show that $K^\infty_\lambda$ contains at least two distinct solutions $u_{\lambda}^{1}$ and $u_{\lambda}^{2}$ of \eqref{eq1}. Since $K^\infty_\lambda$ has the shape of $\mathbb{S}^{m-1}$, it consists of at least two distinct points $u$ and $v$. If both $u$ and $v$ are equilibrium points of $\Phi_\lambda$, then we are done with $u_{\lambda}^{1}=u$ and $u_{\lambda}^{2}=v$. Thus we assume, say, that $v$ is not an equilibrium of $\Phi_\lambda$. Let $\gamma=\gamma(t)$ be a complete solution of $\Phi_\lambda$ contained in $K^\infty_\lambda$ with $\gamma(0)=v$. As $\Phi_\lambda$ is a gradient system (and hence there is no homoclinic structure in $K^\infty_\lambda$), the $\omega$-limit set $\omega(\gamma)$ and the $\alpha$-limit set $\alpha(\gamma)$ of $\gamma$ do not intersect. Because $\omega(\gamma)$ and $\alpha(\gamma)$ consist of equilibrium points, we deduce that $\Phi_\lambda$ has at least two distinct equilibria in $K^\infty_\lambda$. 
Consequently \eqref{eq1} has two distinct solutions $u_{\lambda}^{1}$ and $u_{\lambda}^{2}$. By virtue of \eqref{e3a} it is clear that \begin{equation}\label{eq3.7}\lim_{\lambda\rightarrow\mu_k^-}\|u_{\lambda}^{i}\|=\infty,\hspace{1cm} i=1,2. \end{equation} We also infer from \cite[Theorem 5.3]{LLZ} that for each $\lambda\in\Lambda^-_k$, $\Phi_\lambda$ has an equilibrium $u^0_\lambda$ which remains bounded as $\lambda$ varies in $\Lambda^-_k$. In view of \eqref{eq3.7} it can be assumed that $u^0_\lambda\ne u^i_\lambda$ for $i=1,2$. Then $u^i_\lambda$ ($i=0,1,2$) are solutions of \eqref{eq1} fulfilling the requirements of the theorem. $\Box$ \vskip8pt \noindent{\bf Acknowledgments.} \vs The authors gratefully acknowledge the support of the National Natural Science Foundation of China under Grants 11471240 and 11071185. {\small \begin{thebibliography}{99} \bibitem{BC} A. Bahri and J.M. Coron, On a nonlinear elliptic equation involving the critical Sobolev exponent: The effect of the topology of the domain, {\sl Comm. Pure Appl. Math.}, 41(1988), 253-294. \bibitem{BEG} P. B\'{e}nilan, L. Boccardo, T. Gallou\"{e}t, R. Gariepy, M. Pierre and J.L. Vazquez, An $L^1$ theory of existence and uniqueness of nonlinear elliptic equations, {\sl Ann. Sc. Norm. Super. Pisa Cl. Sci.}, 22(1995), 241-273. \bibitem{H.N} H. Br\'{e}zis and L. Nirenberg, Positive solutions of nonlinear elliptic equations involving critical Sobolev exponents, {\sl Comm. Pure Appl. Math.}, 36(1983), 437-477. \bibitem{CC} W. Chen and C. Li, Classification of solutions of some nonlinear elliptic equations, {\sl Duke Math. J.}, 63(1991), 615-622. \bibitem{CL} X. Chang and Y. Li, Existence and multiplicity of nontrivial solutions for semilinear elliptic Dirichlet problems across resonance, {\sl Topol. Methods Nonlinear Anal.}, 36(2010), 285-310. \bibitem{CW} K.C. Chang and Z.Q. Wang, Notes on the bifurcation theorem, {\sl J. 
Fixed Point Theory Appl.}, 1(2007), 195-208. \bibitem{CM} R. Chiappinelli, J. Mawhin and R. Nugari, Bifurcation from infinity and multiple solutions for some Dirichlet problems with unbounded nonlinearities, {\sl Nonlinear Anal.}, 18(1992), 1099-1112. \bibitem{FGP} M. Filippakis, L. Gasi\'{n}ski and N.S. Papageorgiou, A multiplicity result for semilinear resonant elliptic problems with nonsmooth potential, {\sl Nonlinear Anal.}, 61(2005), 61-75. \bibitem{Hale} J.K. Hale, Asymptotic Behavior of Dissipative Systems, Mathematical Surveys and Monographs 25, AMS, Providence, RI, 1988. \bibitem{D.H} D. Henry, Geometric Theory of Semilinear Parabolic Equations, {\sl Lecture Notes in Math.} 840, Springer-Verlag, 1981. \bibitem{K} N.V. Krylov, Controlled Diffusion Processes, Springer-Verlag, Berlin, 1980. \bibitem{Kap} L. Kapitanski and I. Rodnianski, Shape and Morse theory of attractors, {\sl Comm. Pure Appl. Math.}, 53(2000), 218-242. \bibitem{LiD} D.S. Li, G.L. Shi and X.F. Song, Linking theorems of local semiflows on complete metric spaces, https://arxiv.org/abs/1312.1868, 2015. \bibitem{LLZ} C.Q. Li, D.S. Li and Z.J. Zhang, Dynamic bifurcation from infinity of nonlinear evolution equations, {\sl SIAM J. Appl. Dyn. Syst.}, 16(2017), 1831-1868. \bibitem{LW} D.S. Li and Z.Q. Wang, Local and global dynamic bifurcation of nonlinear evolution equations, {\sl Indiana Univ. Math. J.}, in press; arXiv:1612.08128, 2016. \bibitem{M} R. McOwen, Partial Differential Equations: Methods and Applications, Prentice Hall, Upper Saddle River, NJ, 1996. \bibitem{MS} J. Mawhin and K. Schmitt, Landesman-Lazer type problems at an eigenvalue of odd multiplicity, {\sl Results Math.}, 14(1988), 138-146. \bibitem{MS1} J. Mawhin and K. Schmitt, Nonlinear eigenvalue problems with the parameter near resonance, {\sl Ann. Polon. Math.}, 51(1990), 241-248. \bibitem{PM} F.D. Paiva and E. Massa, Semilinear elliptic problems near resonance with a nonprincipal eigenvalue, {\sl J. Math. Anal. 
Appl.}, 342(2008), 638-650. \bibitem{PP} N.S. Papageorgiou and F. Papalini, Multiple solutions for nearly resonant nonlinear Dirichlet problems, {\sl Potential Anal.}, 37(2012), 247-279. \bibitem{S} J. Sanjurjo, Global topological properties of the Hopf bifurcation, {\sl J. Differential Equations}, 243(2007), 238-255. \bibitem{R.Y} G.R. Sell and Y.C. You, Dynamics of Evolution Equations, Springer-Verlag, New York, 2002. \bibitem{SW} K. Schmitt and Z.Q. Wang, On bifurcation from infinity for potential operators, {\sl Differential Integral Equations}, 4(1991), 933-943. \bibitem{ST} J. Su and C. Tang, Multiplicity results for semilinear elliptic equations with resonance at higher eigenvalues, {\sl Nonlinear Anal. TMA}, 44(2001), 311-321. \bibitem{Tem} R. Temam, Infinite Dimensional Dynamical Systems in Mechanics and Physics, 2nd edition, Springer-Verlag, New York, 1997. \bibitem{U} J. Urbas, On the existence of nonclassical solutions for two classes of fully nonlinear elliptic equations, {\sl Indiana Univ. Math. J.}, 39(1990), 355-382. \end{thebibliography}} \end{document}
\section{Introduction}On-board real-time processing of data through embedded systems plays a crucial role in applying images acquired from portable platforms (e.g., \glspl{gls:UAV}) to applications requiring instant responses, such as search and rescue missions, urban management, traffic monitoring, and parking lot utilization. Methods based on \glspl{gls:CNN}, for example, FPN~\cite{fpn}, FasterRCNN~\cite{fasterrcnnNIPS2015}, R-FCN~\cite{NIPS2016_6465}, \glspl{gls:SSD}~\cite{DBLP:conf/eccv/LiuAESRFB16}, and Yolov2~\cite{redmon2017yolo9000}, have shown promising results in many object detection tasks. Despite their high detection precision, these methods are computationally demanding, and their models are usually bulky due to the deep backbone networks being used. Employing \glspl{gls:CNN} for on-board real-time applications requires developing time- and computation-efficient methods because of the limited processing resources available on-board. A number of networks with less complex structures have been developed recently, such as GoogleNet~\cite{inception}, Xception~\cite{xception}, ResNeXt~\cite{resnext}, MobileNet~\cite{mobilenetv1}, PeleeNet~\cite{pelee}, SqueezeNet~\cite{squeeznet}, and ShuffleNet~\cite{ZhaZho17ShuffleNet}, which provide comparable or even superior results relative to the other \glspl{gls:CNN}. For real-time object detection applications (\eg vehicle detection), a few recent works have proposed methods such as MobileNet~\cite{mobilenetv1} with SSD~\cite{8099834}, PVANET~\cite{pvanet}, and Tiny-Yolo~\cite{redmon2017yolo9000}, which are computationally efficient enough to be deployed on mobile platforms. Zhang et al.~\cite{ZhaZho17ShuffleNet} employed ShuffleNet as the backbone network, which uses point-wise grouped convolutions and channel shuffle to greatly reduce the computations while maintaining the accuracy. 
The authors reported a better performance compared with MobileNet using the Faster-RCNN detection approach. Kim et al.~\cite{pvanet} developed PVANET by concatenating a $3\times3$ conv layer with its negation as a building block for the initial feature extraction stage. Recently, Wang et al.~\cite{pelee} proposed PeleeNet, which uses a combination of parallel multi-size kernel convolutions as a 2-way dense layer together with a module similar to the Squeeze module. They additionally applied a residual block after the feature extraction stage to improve the accuracy using the SSD~\cite{DBLP:conf/eccv/LiuAESRFB16} approach. The authors reported more accurate results compared to MobileNet and ShuffleNet on the Pascal VOC dataset despite the smaller model size and computation cost of PeleeNet. Redmon and Farhadi~\cite{redmon2017yolo9000} proposed Yolov2, a fast yet accurate object detection method. However, their method is still computationally too heavy for real-time processing on an embedded platform. Tiny Yolov2, the smaller version of Yolov2, is even faster but lacks high-level feature extraction capability, which results in poor performance. Huang et al.~\cite{8099834} evaluated the SSD detection approach together with SqueezeNet and MobileNet as the backbone networks. Although SSD with the SqueezeNet backbone results in a smaller model than with MobileNet, its results are less accurate and its computation is slightly more expensive. In general, replacing the backbone network with SqueezeNet, MobileNet, or any other efficient network enhances computational efficiency but can degrade the accuracy if no further modification is performed. In this paper, we propose ShuffleDet, a real-time vehicle detection approach to be used on-board by mobile platforms such as \glspl{gls:UAV}. The ShuffleDet network is composed of ShuffleNet and a modified variant of SSD based on channel shuffling and grouped convolution. 
We design a unit to appropriately transfer the parameters of a model pretrained on terrestrial imagery to the aerial imagery domain. We call this unit \gls{gls:DAB}; it includes deformable convolutions~\cite{dai17dcn} and Inception-ResNetv2 units~\cite{inceptionresnetv2}. To the best of our knowledge, grouped convolution and channel shuffling have not been used before in real-time vehicle detection based on \gls{gls:UAV} imagery. ShuffleDet runs at 14 frames per second (FPS) on an NVIDIA Jetson TX2 while having a computational complexity of 3.8 \glspl{gls:GFLOP}. Experimental results on the CARPK and PUCPR+ datasets~\cite{HsiehLH17} demonstrate that ShuffleDet achieves a good trade-off between accuracy and speed for mobile platforms while being computation- and time-efficient. \section{Method} In this section, a detailed description of the network architecture is presented. We use ShuffleNet~\cite{ZhaZho17ShuffleNet}, which was designed for object recognition, to extract high-level features as our backbone network. \begin{figure*}[h] \centering \includegraphics[width=\textwidth]{figures/overflow} \caption{Illustration of the ShuffleDet architecture. The backbone network is ShuffleNet. Modified inception layers are applied as extra layers. $C$ stands for channel. The \gls{gls:DAB} unit is deployed to adapt to the new domain of \gls{gls:UAV} imagery using a residual block containing deformable convolution layers\protect\footnotemark.} \label{fig:overflow} \end{figure*} \footnotetext{UAV photo is from \url{https://www.quantum-systems.com/tron}} ShuffleNet~\cite{ZhaZho17ShuffleNet} shows that by utilizing grouped or depth-wise separable convolutions, one can greatly reduce the computational demand while still maintaining a decent representation ability. However, a major bottleneck can arise when $1\times1$ convolution layers are replaced with stacked grouped convolutions, which can degrade the accuracy of the network. 
This is because only a limited portion of the input channels is utilized by each output channel. To solve this issue, channel shuffling was proposed in \cite{ZhaZho17ShuffleNet}, which we also use inside the ShuffleDet architecture. Figure~\ref{fig:overflow} illustrates the network architecture of ShuffleDet. In stage 1, a $3\times3$ convolutional layer with a stride of 2 is applied to the input image, which downsamples the input by a factor of 2. This layer is followed by a maxpooling layer with a stride of 2 and a kernel of $3\times3$. This maxpooling operation discards half of the input information, which is critical since the vehicles in our case are small objects~\cite{azimiACCV,azimi2018ISPRS,azimi2018advanced, azimi2018aerial}. However, without this operation the computation cost would be multiplied. Therefore, we keep the maxpooling layer and try to recover the performance, in particular via the \glspl{gls:DAB} units discussed later. After the maxpooling, three stages containing multiple ShuffleNet units follow. Stages 2 and 4 contain 3 ShuffleNet units each, while stage 3 in the middle is composed of 7 units. Stages 1 to 4 together lead to a 32x down-sampling factor. The ShuffleNet unit illustrated in Figure~\ref{fig:overflow} acts as a residual bottleneck unit. With a stride of 2, average pooling is applied in the primary branch in parallel with a depthwise convolution of stride 2 in the residual branch. To ensure that all of the input channels are connected to the output channels, channel shuffling is performed before the depthwise convolution. A $1\times1$ grouped convolution is applied before the channel shuffling as a bottleneck in order to reduce the number of feature maps in the output for efficient computation. It has been shown~\cite{ZhaZho17ShuffleNet,quantization} that grouped convolutions also improve the accuracy. 
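As a concrete illustration (our own NumPy sketch, not code from the paper, assuming a $(C, H, W)$ feature-map layout), the snippet below shows both the cross-group information blockage of stacked grouped $1\times1$ convolutions and the reshape-transpose-reshape channel shuffle that removes it:

```python
import numpy as np

def grouped_pointwise(x, w, groups):
    """1x1 grouped convolution on x of shape (C_in, H, W); w has shape
    (groups, C_out_per_group, C_in_per_group). Each output group is
    computed from its own slice of input channels only."""
    c_in_g = x.shape[0] // groups
    outs = [np.tensordot(w[g], x[g * c_in_g:(g + 1) * c_in_g], axes=1)
            for g in range(groups)]
    return np.concatenate(outs, axis=0)

def channel_shuffle(x, groups):
    """View C as (groups, C // groups), swap the two axes, flatten back,
    so a following grouped convolution sees channels from every group."""
    c, h, w = x.shape
    return (x.reshape(groups, c // groups, h, w)
             .transpose(1, 0, 2, 3)
             .reshape(c, h, w))

# Perturbing a channel of group 0 leaves groups 1 and 2 untouched ...
x = np.zeros((6, 2, 2))
w = np.ones((3, 2, 2))            # 3 groups, 2-in / 2-out channels per group
x_pert = x.copy()
x_pert[0] += 1.0                  # channel 0 lives in group 0
delta = grouped_pointwise(x_pert, w, 3) - grouped_pointwise(x, w, 3)

# ... while the shuffle interleaves groups: channels 0..5 -> 0,2,4,1,3,5
order = channel_shuffle(np.arange(6, dtype=float).reshape(6, 1, 1), 3)
```

Here `delta` equals 1 on the two output channels of group 0 and exactly 0 elsewhere, which is precisely the cross-group blockage; after a shuffle, each group of the next grouped convolution receives one channel from every previous group.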
The second grouped convolution brings the number of feature maps back to the number of input channels for a more accurate representation capability. With a stride of 2, the outputs of the average pooling and of the second grouped convolution are concatenated, while with a stride of 1, the pooling is omitted, the depthwise convolution is performed with stride 1, and the outputs are summed up instead of concatenated. Figure \ref{fig:overflow} shows the detailed structure of the ShuffleNet units with and without a stride of 2. Stages 1, 2, 3, and 4 are employed as intermediate input layers to enhance the heat-map resolution. Our detection module is primarily inspired by the SSD approach. In order to enrich the features extracted from the intermediate layers, we add extra feature layers in stage 5. In our case, the output from stage 4 is passed through stage 5, as illustrated in Figure~\ref{fig:overflow}. This is compatible with the multi-box strategy explained in the SSD method. In total, we extract 7 feature maps of different sizes from the backbone network. To enhance the performance, instead of employing a conventional convolution layer for each extra layer as in the SSD method, we use a modified Reduction-B module from Inception-ResNet-v2~\cite{inceptionresnetv2} in stage 5. Unlike ResNet and VGG, inception modules have not been explored much in the object detection task due to their higher computation cost. We stack 4 modified inception modules as stage 5 for feature map extraction at different levels. Unlike the original Inception-ResNet-v2, we add $1\times1$ conv layers after the maxpooling and concatenation layers. The maxpooling layer reduces the spatial resolution and dimension as a bottleneck. The $1\times1$ convolution in return expands the dimension to insert further non-linearity into the network, resulting in a better performance. The same philosophy is used for the latter $1\times1$ conv layer. 
Applying the inception module adds more computational cost to the network. To compensate for its load, we replace the $3\times3$ convolution layers with $3\times3$ depthwise convolutions. The depthwise convolution even improves the performance slightly, while requiring only $\dfrac{1}{N} + \dfrac{1}{k^2}$ of the computation cost of regular conv layers, where $N$ is the number of output channels and $k$ is the kernel size. Furthermore, we divide the input channels equally among the branches. The number of output channels of each layer is the concatenation of equally divided output channels from each branch. These modifications keep the model size as well as the computational complexity small. We observe that using these modified inception modules enhances the performance. We conjecture that, unlike the original SSD, which uses $1\times1$ and $3\times3$ conv layers in series as extra layers, the parallel multi-size kernels in the inception modules capture features of different sizes simultaneously, \eg $1\times1$ kernels detect small vehicles and $3\times3$ kernels bigger ones, which could be the reason for this enhancement. This shows that by widening the network and augmenting the cardinality, we can achieve better results at only a marginal increase in computational complexity. Moreover, by using multi-size kernels, one does not need to worry about which kernel size is more appropriate. In order to regress bounding boxes and predict object classes from the extra layers as illustrated in \cref{fig:overflow}, the baseline SSD processes each feature map by only a single $3\times3$ convolution layer followed by {\fontfamily{qcr}\selectfont permute} and {\fontfamily{qcr}\selectfont flatten} layers in the multi-box detection layer. This includes feature maps from only one of the high-resolution layers, which leads to a weak performance in detecting small-scale vehicles. The feature maps from higher-resolution layers, \eg stages 2 and 3 in our case, are responsible for detecting small-scale vehicles. 
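The cost saving quoted above for the depthwise replacement can be checked numerically. The sketch below is our own accounting of multiply-accumulate operations; note that the $1/N + 1/k^2$ figure corresponds to a depthwise plus $1\times1$ pointwise pair, and the layer dimensions are illustrative assumptions:

```python
def conv_cost(h, w, c_in, c_out, k):
    """Multiply-accumulates of a standard k x k convolution."""
    return h * w * c_in * c_out * k * k

def depthwise_separable_cost(h, w, c_in, c_out, k):
    """Depthwise k x k (one filter per input channel) followed by a
    1 x 1 pointwise convolution producing c_out channels."""
    depthwise = h * w * c_in * k * k
    pointwise = h * w * c_in * c_out
    return depthwise + pointwise

# Example: a 16x16 feature map, 64 -> 128 channels, 3x3 kernel.
ratio = (depthwise_separable_cost(16, 16, 64, 128, 3)
         / conv_cost(16, 16, 64, 128, 3))
# ratio equals 1/c_out + 1/k**2 = 1/128 + 1/9 exactly
```

The ratio is independent of the spatial size and the number of input channels, which is why the saving holds uniformly across the extra layers.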
Stage 1 is ignored due to its high computational complexity. The corresponding high-resolution feature maps are semantically weak and not deep enough to be capable of detecting small-scale vehicles. The ResNet and VGG19 works indicate that employing deeper features enhances the object recognition accuracy. However, such backbone networks are too computationally heavy to be deployed on the on-board processors of UAVs, which work under strict power constraints. As an alternative, we propose using a residual module which we call DAB, as shown in Figure~\ref{fig:overflow}. The combination of $1\times1$ convolution and $3\times3$ deformable convolution operations enriches the features further while introducing only a low computation burden. We choose a portion of the input channels to keep the computation cost low. The portions $\{1/8, 1/8, 1/8, 1/4, 1/2, 1/2, 1\}$ are used for the input channels of the output layers from stage 2 to the last extra layer, and inside the DAB unit we assign the portions $\{1/5, 4/5, 4/5\}$ of the input channels to the branches, as illustrated in Figure~\ref{fig:overflow}. The output channels remain similar to the original SSD. The only difference is the extra multi-box feature map introduced from stage 2. SSD calculates the number of default boxes per class by $W\times H \times B$, in which $W$ and $H$ are the width and height of the feature map and $B$ is taken from the set $\{4, 6, 6, 4, 4\}$ for each feature map. We choose $B=4$ for stage 2, leading to 28642 boxes per class. In aerial imagery, vehicles appear very small and almost always in a rectangular geometric shape. Moreover, the pre-trained ShuffleNet has been trained on ground imagery, while our images belong to the different domain of aerial imagery. Therefore, the pre-trained weights should be adapted to the new domain. We use deformable convolution as introduced in \cite{dai17dcn} to take into account the new domain and the geometric properties of the vehicles. 
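The per-class default-box budget $W\times H\times B$ above can be tallied as follows. The feature-map sizes and $B$ values in the example are illustrative assumptions for a $512\times512$ input, not the paper's exact configuration:

```python
def num_default_boxes(map_sizes, boxes_per_cell):
    """Total number of SSD default boxes per class: the sum of W * H * B
    over all multi-box feature maps."""
    return sum(w * h * b for (w, h), b in zip(map_sizes, boxes_per_cell))

# Hypothetical multi-box maps: a stage-2-like map at 64x64 with B = 4
# and a stage-3-like map at 32x32 with B = 6.
total = num_default_boxes([(64, 64), (32, 32)], [4, 6])  # 16384 + 6144
```

This makes explicit why adding the high-resolution stage-2 map dominates the box count: a single $64\times64$ map with $B=4$ already contributes 16384 boxes.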
Deformable convolution adds offsets to the conventional conv layer in order to learn from the geometric shape of the objects. It is not limited to a fixed kernel grid, and the offsets are learned during training by adding only an inexpensive conv layer that computes the offset field. The deformable conv layer shows considerable improvement when images acquired from low-flying UAVs are used. However, the impact is smaller for images from high-altitude platforms such as helicopters or airplanes. According to \cite{dai17dcn}, the computation cost of deformable convolutions is negligible. Finally, we apply a {\fontfamily{qcr}\selectfont ReLU} layer to the element-wise added features in the DAB to add more non-linearity. In general, a naive implementation of ShuffleNet with SSD has 2.94 GFLOPs, while ShuffleDet has 3.8 GFLOPs. Despite this increase in computation cost, ShuffleDet has considerably higher accuracy. As vehicles appear as small objects in UAV images, we choose default prior boxes with smaller scales, similar to \cite{azimiACCV}. Eventually, \gls{gls:NMS} is employed to suppress irrelevant detection boxes. It is worth mentioning that during training hard negative mining is employed with a ratio of $3:1$ between negative and positive samples, which leads to more stable and faster training. We also apply batch normalization after each module in the DAB as well as after the extra feature layers. \section{Experiments and Discussion} In this section, we provide an ablation evaluation of our proposed approach and compare it to state-of-the-art CNN-based vehicle detection methods. The experiments were conducted on the CARPK and PUCPR+ datasets~\cite{HsiehLH17}, which contain 1573 and 125 images of $1280\times720$ pixels, respectively. The vehicles in the images are annotated by horizontal bounding boxes. To have a fair comparison with the different baseline methods, we follow the same strategy as theirs for splitting the datasets into training and testing sets. 
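These benchmarks score vehicle counting by MAE and RMSE over per-image counts; a minimal sketch of the two metrics (our own implementation of the standard definitions) is:

```python
import math

def mae(pred_counts, true_counts):
    """Mean absolute error between predicted and ground-truth counts."""
    n = len(true_counts)
    return sum(abs(p - t) for p, t in zip(pred_counts, true_counts)) / n

def rmse(pred_counts, true_counts):
    """Root mean squared error between predicted and ground-truth counts."""
    n = len(true_counts)
    return math.sqrt(sum((p - t) ** 2
                         for p, t in zip(pred_counts, true_counts)) / n)

# Toy example with two images: predictions off by 2 and 0 vehicles.
example_mae = mae([10, 8], [12, 8])    # 1.0
example_rmse = rmse([10, 8], [12, 8])  # sqrt(2)
```

Because RMSE squares the per-image errors, it penalizes images with large miscounts more heavily than MAE, which is why the two numbers are reported together in the tables below.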
Moreover, we train ShuffleNet as the backbone network on the ImageNet-2012~\cite{imagenet} dataset, achieving performance similar to the original ShuffleNet work. The results are compared to the benchmark using MAE and RMSE, similar to the baseline~\cite{HsiehLH17}. In addition, we use data augmentation in a way similar to the original SSD work. \subsection{Experimental Setup} We use Caffe to implement our proposed algorithm. The network is trained on an NVIDIA Titan XP GPU and evaluated on an NVIDIA Jetson TX2 as an embedded edge device. For the optimization, we use stochastic gradient descent with a base learning rate of 0.001, gamma 0.1, and momentum 0.9 to train the network for 120k iterations. The learning rate is reduced by a factor of 10 after 80k and 100k iterations. Moreover, the images are resized to $512\times512$ pixels along with their annotations. Additionally, we initialize the first four layers with our pre-trained ShuffleNet weights and the rest with Gaussian noise. For the grouped convolutions, we set the number of groups to 3 throughout the experiments. Furthermore, an \gls{gls:NMS} threshold of 0.3 and a confidence score threshold of 0.5 are used. \subsection{Ablation Evaluation} In this section, we present an ablation study on the effect of the submodules of our approach. Table~\ref{tab:1} shows the impact of the modified inception modules compared to the original baseline. According to the results, introducing the first modified inception module decreases the RMSE by more than 3 points, indicating the importance of wider modules in the first extra layers, which are critical for small object detection. Replacing the baseline's extra layers with more modified inception modules further improves the performance. This highlights the role of the higher-resolution extra layers in the vehicle detection task. \begin{table*} \centering \caption{Evaluation of the modified inception modules (mincep) in stage 5 on the CARPK dataset. The DAB units are in place. 
The smaller the RMSE, the better the performance.} \begin{adjustbox}{width=0.7\textwidth} \label{tab:1} \begin{tabular}{c|c||c|c|c|c|c} method & RMSE & small scales & mincep-1 & mincep-2 & mincep-3 & mincep-4 \\ \toprule\toprule ShuffleNet-SSD-512 & 63.57 & - & - & - & - & - \\ ShuffleDet & 52.75 & - & - & - & - & - \\ ShuffleDet & 45.26 & \checkmark & - & - & - & - \\ ShuffleDet & 41.89 & \checkmark & \checkmark & - & - & - \\ ShuffleDet & 40.47 & \checkmark & \checkmark & \checkmark & - & - \\ ShuffleDet & 39.67 & \checkmark & \checkmark & \checkmark & \checkmark & - \\ ShuffleDet & 38.46 & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\ \end{tabular} \end{adjustbox} \end{table*} Table~\ref{tab:2} presents the evaluation of the DAB units, in which we observe a significant reduction in RMSE (almost 5 points) already with the first DAB unit on stage 2. This further indicates the significance of including the higher-resolution layers. Furthermore, the results show that adding DAB units to the extra layers additionally enhances the performance, though to a lesser degree. This indicates that applying the DAB unit to the high-resolution layers can lead to a significant improvement in detecting small vehicles, allowing a better utilization of the deformable convolutions to adapt to the vehicle geometries. \begin{table*} \centering \caption{Evaluation of using the DAB units on the CARPK dataset. We refer to the modified inception layers as {\fontfamily{qcr}\selectfont mincep}. The modified inception modules and small scales are in place.} \begin{adjustbox}{width=0.8\textwidth} \label{tab:2} \begin{tabular}{c|c||c|c|c|c|c|c} method & RMSE & DAB-stage2 & DAB-stage3 & DAB-stage4 & DAB-mincep-1 & DAB-mincep-2 & DAB-mincep-3 \\ \toprule\toprule ShuffleNet-SSD-512 & 63.57 & - & - & - & - & - & - \\ ShuffleDet & 49.26 & - & - & - & - & - & - \\ ShuffleDet & 44.17 & \checkmark & - & - & - & - & - \\ ShuffleDet & 42.02 & \checkmark & \checkmark & - & - & - & - \\ ShuffleDet & 40.75 & \checkmark & \checkmark & \checkmark & - & - & - \\ ShuffleDet & 39.81 & \checkmark & \checkmark & \checkmark & \checkmark & - & - \\ ShuffleDet & 39.14 & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & - \\ ShuffleDet & 38.46 & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\ \end{tabular} \end{adjustbox} \end{table*} We choose $s_{min}=0.05$ and $s_{max}=0.4$ as the minimum and maximum vehicle scales, with aspect ratios $\{2, 3, 1/2, 1/3\}$, as hyper-parameters of the original SSD. This improves the performance significantly, by almost 7 RMSE points according to Table~\ref{tab:1}. It is worth noting that ShuffleNet-SSD-512 has a complexity cost of 2.94 GFLOPs while ShuffleDet has 3.8 GFLOPs. This shows that ShuffleDet adds only a marginal computation cost while achieving a significant boost in accuracy. Figure~\ref{fig:carpk} shows sample results of ShuffleDet on the CARPK and PUCPR+ datasets. \begin{figure*}[h] \begin{subfigure}{0.47\textwidth} \includegraphics[width=\textwidth]{figures/CARPK} \caption{} \end{subfigure} \hfill \begin{subfigure}{0.47\textwidth} \includegraphics[width=\textwidth]{figures/PUCPR} \caption{} \end{subfigure} \caption{Sample vehicle detection results using ShuffleDet on (a) the CARPK dataset and (b) the PUCPR+ dataset.} \label{fig:carpk} \end{figure*} \subsection{Comparison with the benchmark} In this part, we compare our method with the benchmark. Tables~\ref{tab:3}~and~\ref{tab:4} show that our method achieves competitive performance while having a significantly lower computation cost than the state of the art. In comparison with the original implementations of Faster-RCNN~\cite{fasterrcnnNIPS2015} and Yolo~\cite{yolov1}, our method achieves significantly better results. ShuffleDet achieves results comparable to the state of the art, within about 2 RMSE points, on the CARPK dataset. 
The big gap between SSD-512, MobileNet-SSD-512, and ShuffleDet is mostly due to our tuned scales and aspect ratios. A similar effect can be observed between the original implementations of Faster-RCNN with and without small RPNs. \begin{table*} \centering \caption{Evaluation of ShuffleDet against the benchmark on the PUCPR+ dataset. Lower is better.} \begin{adjustbox}{width=0.75\textwidth} \label{tab:3} \begin{tabular}{c||c|c|c|c} method& backbone & GFLOPs & MAE & RMSE \\ \toprule\toprule YOLO\cite{yolov1} & custom & 26.49 & 156.00 & 200.42 \\ Faster-RCNN\cite{fasterrcnnNIPS2015} & VGG16 & 118.61 & 111.40 & 149.35\\ Faster R-CNN (RPN-small)\cite{fasterrcnnNIPS2015} & VGG16 & 118.61 & 39.88 & 47.67\\ One-Look Regression\cite{DBLP:journals/corr/MundhenkKSB16} & - & - & 21.88 & 36.73\\ Hsieh et al.\cite{HsiehLH17} & VGG16 & - & 22.76 & 34.46\\ SSD-512\cite{DBLP:conf/eccv/LiuAESRFB16}& VGG16 & 88.16 & 123.75& 168.24\\ MobileNet-SSD-512\cite{8099834}& MobileNet & 3.2 & 175.26 & 225.12\\ our ShuffleDet & ShuffleNet & 3.8 & 41.58 & 49.68\\ \end{tabular} \end{adjustbox} \end{table*} \begin{table*} \centering \caption{Evaluation of ShuffleDet against the benchmark on the CARPK dataset.
Lower is better.} \begin{adjustbox}{width=0.75\textwidth} \label{tab:4} \begin{tabular}{c||c|c|c|c} method& backbone & GFLOPs & MAE & RMSE \\ \toprule\toprule YOLO\cite{yolov1} & custom & 26.49 & 48.89 & 57.55 \\ Faster-RCNN\cite{fasterrcnnNIPS2015} & VGG16 & 118.61 & 47.45 & 57.39\\ Faster R-CNN (RPN-small)\cite{fasterrcnnNIPS2015} & VGG16 & 118.61 & 24.32 & 37.62\\ One-Look Regression\cite{DBLP:journals/corr/MundhenkKSB16} & - & - & 59.46 & 66.84\\ Hsieh et al.\cite{HsiehLH17} & VGG16 & - & 23.80 & 36.79\\ SSD-512\cite{DBLP:conf/eccv/LiuAESRFB16}& VGG16 & 88.16 & 48.02 & 57.42\\ MobileNet-SSD-512\cite{8099834}& MobileNet & 3.2 & 57.34 & 65.24\\ our ShuffleDet & ShuffleNet & 3.8 & 26.75 & 38.46\\ \end{tabular} \end{adjustbox} \end{table*} Moreover, ShuffleDet outperforms Faster-RCNN and YOLO while being significantly more computation efficient: 3.8 GFLOPs compared to 118.61 and 26.49 GFLOPs, respectively. While Faster-RCNN runs on Jetson TX2 at 1 FPS, tiny YOLOv2 at 8 FPS, YOLOv2 at 4 FPS, and the original SSD (88.16 GFLOPs) at 5 FPS, our ShuffleDet network runs at 14 FPS, showing great potential for real-time on-board processing of UAV imagery. In addition, our approach achieves almost 70\% and 50\% better performance than MobileNet-SSD-512 and the naive implementation of ShuffleNet-SSD, respectively, on the CARPK dataset. \section{Generalization Ability} To evaluate the generalization ability of our method, we train it on the 3K-DLR-Munich dataset~\cite{KangMattyus}. This dataset contains aerial images of $5616\times3744$ pixels over the city of Munich. Due to the large size of each image, similar to~\cite{azimiACCV}, we chop the images into patches of $512\times512$ pixels with a 100-pixel overlap. To prepare the final results, for each image, we merge the detection results of the patches and then apply non-maximum suppression. Figure~\ref{fig:dlr} illustrates a detection result of our algorithm on the 3K-DLR-Munich dataset.
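The tiling-and-merging step above can be sketched in a few lines. This is a minimal illustration with helper names of our own choosing; the $512\times512$ patch size, 100-pixel overlap, and 0.3 NMS threshold are the values stated in the text.

```python
import numpy as np

def patch_origins(size, patch=512, overlap=100):
    """Top-left coordinates of 1-D patch positions covering `size` pixels."""
    stride = patch - overlap
    xs = list(range(0, max(size - patch, 0) + 1, stride))
    if xs[-1] + patch < size:  # add one last patch flush with the border
        xs.append(size - patch)
    return xs

def nms(boxes, scores, thresh=0.3):
    """Greedy non-maximum suppression on (x1, y1, x2, y2) boxes in global coordinates."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # intersection of the highest-scoring box with all remaining ones
        iw = np.clip(np.minimum(x2[i], x2[order[1:]]) - np.maximum(x1[i], x1[order[1:]]), 0, None)
        ih = np.clip(np.minimum(y2[i], y2[order[1:]]) - np.maximum(y1[i], y1[order[1:]]), 0, None)
        iou = iw * ih / (areas[i] + areas[order[1:]] - iw * ih)
        order = order[1:][iou <= thresh]
    return keep

# a 5616x3744 image is covered by the Cartesian product of these origins;
# per-patch detections are shifted back to global coordinates by adding the
# patch origin before the final NMS pass
xs, ys = patch_origins(5616), patch_origins(3744)
```

Each patch is fed to the detector independently; shifting every detected box by its patch origin and running a single NMS pass over the union removes the duplicates created in the 100-pixel overlap regions.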
\begin{figure*}[h] \centering \includegraphics[width=0.7\textwidth]{figures/1} \caption{Vehicle detection result using ShuffleDet on the 3K-DLR-Munich dataset.} \label{fig:dlr} \end{figure*} Table~\ref{tab:dlr} compares the performance of ShuffleDet and two implementations of Faster-RCNN on the 3K-DLR-Munich dataset. According to the table, ShuffleDet achieves accuracy competitive with the Faster-RCNN methods while its inference is more than an order of magnitude more time efficient. The consistent behavior of our proposed approach on the 3K-DLR-Munich dataset indicates that it can be applied to different datasets. ShuffleDet is capable of processing high-resolution aerial images at 2 FPS on the Jetson TX2 platform, while Faster-RCNN with VGG16 or ResNet-50 takes several seconds per image. \begin{table*} \centering \caption{Evaluation of ShuffleDet on the 3K-DLR-Munich dataset. Inference time is measured on Jetson TX2 as an edge device.} \begin{adjustbox}{width=0.75\textwidth} \label{tab:dlr} \begin{tabular}{c||c|c|c|c} method& backbone & GFLOPs & mAP & inference time\\ \toprule\toprule Faster-RCNN~\cite{fasterrcnnNIPS2015} & VGG-16 & 118.61 & 67.45\% & 7.78s \\ Faster-RCNN~\cite{fasterrcnnNIPS2015} & ResNet-50 & 22.06 & 69.23\% & 7.34s \\ our ShuffleDet & ShuffleNet & \textbf{3.8} & 62.89\% & \textbf{524ms}\\ \end{tabular} \end{adjustbox} \end{table*} \section{Conclusions} In this paper, we presented ShuffleDet, a real-time vehicle detection algorithm appropriate for on-board processing of UAV imagery. ShuffleDet is based on channel shuffling and grouped convolution in its feature extraction stage. To evaluate the effect of the different modules of ShuffleDet, an ablation study was performed to assess their impact on accuracy and time efficiency. Joint channel shuffling and grouped convolution significantly improve the inference speed. Inception modules with depthwise convolutions enhance the accuracy while introducing only a marginal computation burden.
Moreover, we showed that residual modules with deformable convolutions are effective for enhancing semantic representation with a small number of layers, as well as for domain adaptation. Experimental results on the CARPK and PUCPR+ datasets indicate that ShuffleDet performs competitively with state-of-the-art methods while being much more time and computation efficient. Additionally, the consistent behavior of ShuffleDet on the 3K-DLR-Munich dataset demonstrates its generalization ability. Furthermore, the implementation of ShuffleDet on Jetson TX2 runs at 14 FPS, showing the great potential of our approach for on-board real-time vehicle detection in UAVs. \renewcommand{\bibname}{References} \bibliographystyle{splncs04}
\section{Introduction}\label{sec:intro} The aim of this paper is to give an elementary proof of certain identities on binomials and state an answer to~\cite[Remark~8.2]{hko}. The identity which we prove in this paper stems from the representation theory of real semi-simple Lie groups which admit discrete series. To describe this, let $G$ be a real semi-simple Lie group and $K$ be its maximal compact subgroup. Take a unitary representation $\pi$ of $G$. Consider the map \[ \phi: \pi \to C^{\infty}(K\backslash G/K; \tau \otimes \tau^{*}). \] For a $K$-finite vector $v$, $\phi(v)$ satisfies, by definition, $\phi(v)(kgk')=\tau(k^{-1})\tau^{*}(k')\phi(v)(g)$ $(k,k'\in K, g\in G)$, and we call $\phi(v)$ a matrix coefficient of $\pi$ with respect to $\tau$. Because $G$ has the Cartan decomposition $G=KAK$, where $A$ is a maximal split torus in $G$, the radial part, {\em i.e.}, the restriction of a matrix coefficient to $A$, is regarded as a $\tau$-valued function on a Euclidean domain. Now we assume $\operatorname{rank}(G)=\operatorname{rank}(K)$ so that $G$ admits a discrete series representation, say $\pi$. If $G$ is of hermitian type and $\pi$ is holomorphic, then the radial part of $\phi(v)$ is described by Laurent polynomials in certain hyperbolic functions, as is known from the theory of Bergman kernels on symmetric domains. If the Gel'fand-Kirillov dimension of $\pi$ is high enough, the radial part can also be highly transcendental. But when the dimension is relatively low, the radial part is expected to be tractable. In fact, it turns out to be feasible when $G$ is a unitary group of degree $4$ defined by the form $|z_{1}|^2+|z_{2}|^2-|z_{3}|^2-|z_{4}|^2$, and $\pi$ is the second lowest discrete series~(in~\cite{hko}, it is called \textit{a middle discrete series}). Because this situation is the origin of our binomial relations, we describe the details. In this case $\dim A=2$; the radial part is a $2$-variable function.
We find that a certain transform forces the function into a separation of variables; one side is described by a polynomial, and the other side is essentially a Gaussian hypergeometric function ${}_{2}F_{1}$. The binomial sums $\beta_m(s,l,a,b)$ we treat here are nothing but the coefficients appearing on the polynomial side. The desired identity rephrases the fact that the value of the matrix coefficient at the unit matrix is a unit. Because we believe this kind of computation with matrix coefficients should work in somewhat broader contexts and is likely to produce similar identities containing involved binomial coefficients, we hope our computation helps those who try to prove them. \medbreak Next, some terminology is defined before stating the main theorem. Let $\Bbb{Z}$, $\Bbb{N}$ and ${\Bbb{Z}_{>0}}$ denote the set of integers, nonnegative integers and positive integers, respectively. If $k$ is a positive integer and $r_1$, $\dots$, $r_k$ are integers such that $n=r_1+\cdots+r_k$ is a nonnegative integer, the \defterm{multinomial coefficient} $\binom{n}{r_1,\dots,r_k}$ is, by definition, \begin{equation} \begin{cases} \frac{n!}{r_1!\cdots r_k!} &\text{ if all $r_i\geq0$,}\\ 0 &\text{ otherwise.} \end{cases} \end{equation} In particular, when $k=2$, $\binom{n}{r}=\binom{n}{r,n-r}$ is called the \defterm{binomial coefficient}. (For many interesting identities these famous coefficients satisfy, see \cite{K}.) When $a$, $b$, $s$, $l$ and $m$ are integers such that $s\geq a\geq 0$ and $s\geq l$, we write \begin{equation} \beta_{m}(s,l,a,b)=\sum_{n=0}^{|b|+m} \binom{s-a}{b_{+}+m-n}\binom{a}{b_{-}+m-n}\binom{s-l+n}{n}, \label{eq:beta} \end{equation} where $b_{+}$ and $b_{-}$ are defined by \begin{align} b_{+}+b_{-}=|b|,\qquad b_{+}-b_{-}=b. \end{align} In other words, $b_{\pm}$ is defined to be $\frac{|b|\pm b}{2}$. The aim of this paper is to give an elementary proof of the following theorem. \begin{theorem} \label{th:main} Let $s$ and $l$ be nonnegative integers such that $s\geq l\geq0$.
Let $j$ be an integer. Then we have \begin{align} &\sum_{m=0}^{\lfloor (l-1)/2\rfloor}\sum_{i=2m}^{l-1} (-2)^{i-2m}\left\{\binom{s-l}{i-2m,j-l+m,s-i-j+m} \, \beta_{m}(s,l,i+j-l,l-i)\right. \nonumber\\ &\qquad\qquad+\left.\binom{s-l}{i-2m,j-i+m,s-l-j+m} \, \beta_{m}(s,l,l+j-i,i-l)\right\} \nonumber\\ &\qquad\qquad+\sum_{m=0}^{\lfloor l/2\rfloor}(-2)^{l-2m}\binom{s-l}{l-2m,j-l+m,s-l-j+m} \, \beta_{m}(s,l,j,0)=\binom{s}{j}, \label{eq:main} \end{align} where $\lfloor x\rfloor$ stands for the largest integer less than or equal to $x$ for any real number $x$. Note that the left-hand side apparently includes the parameter $l$, but the right-hand side is eventually independent of $l$. \end{theorem} \section{Proof of the identity}\label{sec:proof} First we summarize certain recurrence properties of $\beta_m$ as follows. \begin{lemma} \label{lem:beta} Let $s$, $l$, $a$ and $b$ be integers such that $s\geq l$ and $s\geq a\geq 0$. Then the following identities hold. \begin{enumerate} \item[(i)] If $b>0$, then \begin{equation} \label{eq01} \beta_{m}(s,l,a,b)=\beta_{m}(s+1,l,a,b)-\beta_{m}(s+1,l,a+1,b-1). \end{equation} \item[(ii)] If $a>0$ and $b\geq0$, then \begin{equation} \label{eq02} \beta_{m}(s,l,a-1,b)=\beta_{m}(s+1,l,a,b)-\beta_{m-1}(s+1,l,a-1,b+1). \end{equation} \item[(iii)] If $b\leq0$, then \begin{equation} \label{eq03} \beta_{m}(s,l,a,b)=\beta_{m}(s+1,l,a,b)-\beta_{m-1}(s+1,l,a+1,b-1). \end{equation} \item[(iv)] If $a>0$ and $b<0$, then \begin{equation} \label{eq04} \beta_{m}(s,l,a-1,b)=\beta_{m}(s+1,l,a,b)-\beta_{m}(s+1,l,a-1,b+1). \end{equation} \end{enumerate} \end{lemma} \begin{demo}{Proof} We use only the well-known recurrence equation for binomial coefficients, which reads \begin{equation} \binom{n+1}{r}=\binom{n}{r}+\binom{n}{r-1}, \label{eq:binom-rec} \end{equation} and perform direct computations to prove these identities. First we prove (\ref{eq01}).
Applying the recurrence (\ref{eq:binom-rec}) to the first and third binomial coefficients of (\ref{eq:beta}), we obtain that $\beta_{m}(s,l,a,b)$ equals \begin{align*} &\sum_{n\geq0} \binom{s-a+1}{b+m-n}\binom{a}{m-n}\binom{s-l+n+1}{n} -\sum_{n\geq0} \binom{s-a}{b+m-n-1}\binom{a}{m-n}\binom{s-l+n+1}{n}\\ &\qquad\qquad-\sum_{n\geq0} \binom{s-a}{b+m-n}\binom{a}{m-n}\binom{s-l+n}{n-1}. \end{align*} If we apply the recurrence $\binom{a}{m-n}=\binom{a+1}{m-n}-\binom{a}{m-n-1}$ to the second term, then we obtain this equals \begin{align*} &\beta_{m}(s+1,l,a,b) -\sum_{n\geq0}\binom{s-a}{b+m-n-1}\binom{a+1}{m-n}\binom{s-l+n+1}{n} \\&\qquad -\sum_{n\geq0}\binom{s-a}{b+m-n}\binom{a}{m-n}\binom{s-l+n}{n-1} +\sum_{n\geq0} \binom{s-a}{b+m-n-1}\binom{a}{m-n-1}\binom{s-l+n+1}{n}. \end{align*} The last two terms kill each other and consequently we obtain \begin{align*} &\beta_{m}(s,l,a,b)=\beta_{m}(s+1,l,a,b)-\beta_{m}(s+1,l,a+1,b-1). \end{align*} This proves the first identity. The other identities can be proven similarly. The details are left to the reader. \end{demo} Let $s$, $l$, $m$, $j$ be integers such that $s\geq l$ and $m\geq0$. We define $\Lambda_{m}(s,l,j)$ by \begin{align} \Lambda_{m}(s,l,j) &=\sum_{i=2m}^{l-1}(-2)^{i-2m}\, \binom{s-l}{i-2m,j-l+m,s-i-j+m}\, \beta_{m}(s,l,i+j-l,l-i) \nonumber\\ &+\sum_{i=2m}^{l}(-2)^{i-2m}\, \binom{s-l}{i-2m,j-i+m,s-l-j+m}\, \beta_{m}(s,l,l+j-i,i-l). \label{eq:Lambda} \end{align} Then $\Lambda_{m}(s,l,j)$ satisfies the following recurrence equation. \begin{lemma} \label{lem:Lambda} Let $s$, $l$, $m$, $j$ be integers such that $s\geq l$ and $j\geq0$. Then \begin{align} &\Lambda_{m}(s,l,j)+\Lambda_{m}(s,l,j-1)=\Lambda_{m}(s+1,l,j) +\Phi_{m}(s,l,j)-\Phi_{m-1}(s,l,j) \label{eq:lemma} \end{align} where \begin{align} &\Phi_{m}(s,l,j)=\sum_{i=2m+1}^{l-1}(-2)^{i-2m-1}\Biggl\{\binom{s-l}{i-2m-1,j-l+m,s-i-j+m+1}\, \beta_{m}(s+1,l,i+j-l,l-i)\nonumber\\ &\qquad\qquad+\binom{s-l}{i-2m-1,j-i+m,s-l-j+m+1}\, \beta_{m}(s+1,l,l+j-i,i-l)\Biggr\}. 
\label{eq:Phi} \end{align} \end{lemma} \begin{demo}{Proof} By (\ref{eq01}) and (\ref{eq03}), we obtain $\Lambda_{m}(s,l,j)$ equals \begin{align*} &\sum_{i=2m}^{l-1}(-2)^{i-2m} \binom{s-l}{i-2m,j-l+m,s-i-j+m} \\&\qquad\qquad\times \biggl\{\,\beta_{m}(s+1,l,i+j-l,l-i) -\beta_{m}(s+1,l,i+j-l+1,l-i-1)\,\biggr\}\\ &+\sum_{i=2m}^{l}(-2)^{i-2m} \binom{s-l}{i-2m,j-i+m,s-l-j+m} \\&\qquad\qquad\times \biggl\{\beta_{m}(s+1,l,l+j-i,i-l) -\beta_{m-1}(s+1,l,l+j-i+1,i-l-1)\biggr\}. \end{align*} Similarly, using (\ref{eq02}) and (\ref{eq04}), we can rewrite $\Lambda_{m}(s,l,j-1)$ as \begin{align*} &\sum_{i=2m}^{l}(-2)^{i-2m} \binom{s-l}{i-2m,j-l+m-1,s-i-j+m+1}\\ &\qquad\times \biggl\{\,\beta_{m}(s+1,l,i+j-l,l-i) -\beta_{m-1}(s+1,l,i+j-l-1,l-i+1)\,\biggr\}\\ &+\sum_{i=2m}^{l-1}(-2)^{i-2m} \binom{s-l}{i-2m,j-i+m-1,s-l-j+m+1}\\ &\qquad\times \biggl\{\,\beta_{m}(s+1,l,l+j-i,i-l) -\beta_{m}(s+1,l,l+j-i-1,i-l+1)\,\biggr\}. \end{align*} Adding these two identities, we obtain that $\Lambda_{m}(s,l,j)+\Lambda_{m}(s,l,j-1)$ is equal to \begin{align*} &\sum_{i=2m}^{l}(-2)^{i-2m}\, A\,\beta_{m}(s+1,l,i+j-l,l-i) +\sum_{i=2m}^{l-1}(-2)^{i-2m}\, B\,\beta_{m}(s+1,l,l+j-i,i-l)\\ &-\sum_{i=2m}^{l-1}(-2)^{i-2m} \binom{s-l}{i-2m,j-l+m,s-i-j+m} \,\beta_{m}(s+1,l,i+j-l+1,l-i-1)\\ &-\sum_{i=2m}^{l}(-2)^{i-2m} \binom{s-l}{i-2m,j-i+m,s-l-j+m} \,\beta_{m-1}(s+1,l,l+j-i+1,i-l-1)\\ &-\sum_{i=2m}^{l}(-2)^{i-2m} \binom{s-l}{i-2m,j-l+m-1,s-i-j+m+1} \,\beta_{m-1}(s+1,l,i+j-l-1,l-i+1)\\ &-\sum_{i=2m}^{l-1}(-2)^{i-2m} \binom{s-l}{i-2m,j-i+m-1,s-l-j+m+1} \,\beta_{m}(s+1,l,l+j-i-1,i-l+1), \end{align*} where \begin{align*} A&=\binom{s-l}{i-2m,j-l+m,s-i-j+m}+\binom{s-l}{i-2m,j-l+m-1,s-i-j+m+1},\\ B&=\binom{s-l}{i-2m,j-i+m,s-l-j+m}+\binom{s-l}{i-2m,j-i+m-1,s-l-j+m+1}. 
\end{align*} If we replace $i$ by $i+1$ or $i-1$ in the last four terms, then this sum becomes \begin{align*} &\sum_{i=2m}^{l}(-2)^{i-2m}\, A\,\beta_{m}(s+1,l,i+j-l,l-i) +\sum_{i=2m}^{l-1}(-2)^{i-2m}\, B\,\beta_{m}(s+1,l,l+j-i,i-l)\\ &-\sum_{i=2m+1}^{l}(-2)^{i-2m-1} \binom{s-l}{i-2m-1,j-l+m,s-i-j+m+1} \, \beta_{m}(s+1,l,i+j-l,l-i)\\ &-\sum_{i=2m-1}^{l-1}(-2)^{i-2m+1} \binom{s-l}{i-2m+1,j-i+m-1,s-l-j+m} \, \beta_{m-1}(s+1,l,l+j-i,i-l)\\ &-\sum_{i=2m-1}^{l-1}(-2)^{i-2m+1} \binom{s-l}{i-2m+1,j-l+m-1,s-i-j+m} \, \beta_{m-1}(s+1,l,i+j-l,l-i)\\ &-\sum_{i=2m+1}^{l}(-2)^{i-2m-1} \binom{s-l}{i-2m-1,j-i+m,s-l-j+m+1} \, \beta_{m}(s+1,l,l+j-i,i-l). \end{align*} Using \begin{align*} &A+\binom{s-l}{i-2m-1,j-l+m,s-i-j+m+1}=\binom{s-l+1}{i-2m,j-l+m,s-i-j+m+1},\\ &B+\binom{s-l}{i-2m-1,j-i+m,s-l-j+m+1}=\binom{s-l+1}{i-2m,j-i+m,s-l-j+m+1}, \end{align*} we see that $\Lambda_{m}(s,l,j)+\Lambda_{m}(s,l,j-1)$ is equal to \begin{align*} &\sum_{i=2m}^{l-1}(-2)^{i-2m} \binom{s-l+1}{i-2m,j-l+m,s-i-j+m+1} \, \beta_{m}(s+1,l,i+j-l,l-i)\\ &+\sum_{i=2m}^{l}(-2)^{i-2m} \binom{s-l+1}{i-2m,j-i+m,s-l-j+m+1} \, \beta_{m}(s+1,l,l+j-i,i-l)\\ &+\sum_{i=2m+1}^{l-1}(-2)^{i-2m-1} \binom{s-l}{i-2m-1,j-l+m,s-i-j+m+1} \, \beta_{m}(s+1,l,i+j-l,l-i)\\ &-\sum_{i=2m-1}^{l-1}(-2)^{i-2m+1} \binom{s-l}{i-2m+1,j-i+m-1,s-l-j+m} \, \beta_{m-1}(s+1,l,l+j-i,i-l)\\ &-\sum_{i=2m-1}^{l-1}(-2)^{i-2m+1} \binom{s-l}{i-2m+1,j-l+m-1,s-i-j+m} \, \beta_{m-1}(s+1,l,i+j-l,l-i) \\ &+\sum_{i=2m+1}^{l-1}(-2)^{i-2m-1} \binom{s-l}{i-2m-1,j-i+m,s-l-j+m+1} \, \beta_{m}(s+1,l,l+j-i,i-l), \end{align*} which is equal to the right-hand side of \eqref{eq:lemma}. This completes the proof of the lemma. \end{demo} \bigbreak \noindent \begin{demo}{Proof of Theorem~\ref{th:main}} Assume $s\geq l\geq0$. If we put \begin{equation*} \Gamma(s,l,j)=\sum_{m=0}^{\infty}\Lambda_{m}(s,l,j), \end{equation*} then, by \eqref{eq:lemma}, it is easy to see that \begin{equation} \Gamma(s+1,l,j)=\Gamma(s,l,j)+\Gamma(s,l,j-1) \label{eq:rec-eq} \end{equation} holds. 
In addition, if $j<0$ or $j>s$, then we have $\Lambda_{m}(s,l,j)=0$ for all $m\geq0$ since the multinomial coefficients vanish in the definition \eqref{eq:Lambda}. If $j=0$, then we also have \[ \Lambda_{m}(s,l,j) =\begin{cases} \beta_{0}(s,l,l,-l)=1 &\text{ if $m=0$,}\\ 0 &\text{ if $m>0$.} \end{cases} \] Hence, we have \begin{equation} \Gamma(s,l,j) =\begin{cases} 0 &\text{ if $j<0$ or $j>s$,}\\ 1 &\text{ if $j=0$.} \end{cases} \label{eq:initial} \end{equation} From \eqref{eq:rec-eq} and \eqref{eq:initial}, we conclude that $\Gamma(s,l,j)=\binom{s}{j}$. This completes the proof. \end{demo} \section{Concluding remarks} An interesting question we can ask is ``Can one make a $q$-analogue of the identity \eqref{eq:main}?''. (For $q$-series, the reader can refer to \cite{A1}.) We made an attempt in this direction, which has not yet been successful. For example, define $\beta_{m}(s,l,a,b)$ by \begin{equation} \beta_{m}(s,l,a,b)=\sum_{n=0}^{|b|+m}q^{n(n-|b|+l-2m)} \qbinom{s-a}{b_{+}+m-n}\qbinom{a}{b_{-}+m-n}\qbinom{s-l+n}{n}, \label{eq:beta-q} \end{equation} where \begin{equation*} \qbinom{r_{1}+\cdots+r_{k}}{r_{1},\dots,r_{k}} =\begin{cases} \frac{\qint{r_{1}+\cdots+r_{k}}!}{\qint{r_1}!\cdots\qint{r_k}!} &\text{ if all $r_i\geq0$,}\\ 0 &\text{ otherwise,} \end{cases} \qquad\qquad \qbinom{n}{r}=\qbinom{n}{r,n-r}, \end{equation*} with $\qint{n}!=(1+q)\cdots(1+q+\cdots+q^{n-1})$. Then one can prove that $\beta_{m}(s,l,a,b)$ satisfies the following simple recurrence equations. \begin{prop} \label{lem:beta-q} Let $s$, $l$, $a$ and $b$ be integers such that $s\geq l$ and $s\geq a\geq 0$. Then the following identities hold.
\begin{enumerate} \item[(i)] If $b>0$, then \begin{equation} \label{eq01q} \beta_{m}(s,l,a,b)=\beta_{m}(s+1,l,a,b)-q^{s-a-b-m+1}\beta_{m}(s+1,l,a+1,b-1). \end{equation} \item[(ii)] If $a>0$ and $b\geq0$, then \begin{equation} \label{eq02q} \beta_{m}(s,l,a-1,b)=\beta_{m}(s+1,l,a,b)-q^{a-m}\beta_{m-1}(s+1,l,a-1,b+1). \end{equation} \item[(iii)] If $b\leq0$, then \begin{equation} \label{eq03q} \beta_{m}(s,l,a,b)=\beta_{m}(s+1,l,a,b)-q^{s-a-m+1}\beta_{m-1}(s+1,l,a+1,b-1). \end{equation} \item[(iv)] If $a>0$ and $b<0$, then \begin{equation} \label{eq04q} \beta_{m}(s,l,a-1,b)=\beta_{m}(s+1,l,a,b)-q^{a+b-m}\beta_{m}(s+1,l,a-1,b+1). \end{equation} \end{enumerate} \end{prop} Nevertheless, at this point, we do not know how to define $\Lambda_{m}(s,l,j)$ so that it would satisfy a simple recurrence equation. \medbreak Another interesting question we can ask is the following. One can see that the left-hand side of \eqref{eq:main} is a sum (a double or triple sum), but the right-hand side is very simple, i.e., just a binomial coefficient $\binom{s}{j}$. So one may ask whether algorithms such as the WZ algorithm (see \cite{Z2}) would be able to handle it. (We would like to thank Prof. C.~Krattenthaler for his helpful comment.)
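As a sanity check (independent of the proof), Theorem~\ref{th:main} can be verified numerically for small parameters. The sketch below is a direct transcription of the definition of $\beta_m$ and of the left-hand side of the identity; the helper names are our own.

```python
from math import comb, factorial

def binom(n, r):
    """Binomial coefficient with the paper's convention: 0 outside 0 <= r <= n."""
    return comb(n, r) if 0 <= r <= n else 0

def multinom(n, *parts):
    """Multinomial coefficient; 0 if any part is negative (parts sum to n by construction)."""
    if any(p < 0 for p in parts):
        return 0
    out = factorial(n)
    for p in parts:
        out //= factorial(p)
    return out

def beta(m, s, l, a, b):
    """beta_m(s, l, a, b) with b_plus = (|b|+b)/2 and b_minus = (|b|-b)/2."""
    bp, bm = (abs(b) + b) // 2, (abs(b) - b) // 2
    return sum(binom(s - a, bp + m - n) * binom(a, bm + m - n) * binom(s - l + n, n)
               for n in range(abs(b) + m + 1))

def lhs(s, l, j):
    """Left-hand side of the main identity."""
    total = 0
    for m in range((l - 1) // 2 + 1):  # floor((l-1)/2); empty range when l = 0
        for i in range(2 * m, l):
            c = (-2) ** (i - 2 * m)
            total += c * multinom(s - l, i - 2 * m, j - l + m, s - i - j + m) \
                       * beta(m, s, l, i + j - l, l - i)
            total += c * multinom(s - l, i - 2 * m, j - i + m, s - l - j + m) \
                       * beta(m, s, l, l + j - i, i - l)
    for m in range(l // 2 + 1):
        total += (-2) ** (l - 2 * m) \
                 * multinom(s - l, l - 2 * m, j - l + m, s - l - j + m) \
                 * beta(m, s, l, j, 0)
    return total

# the left-hand side equals C(s, j) and is independent of l
assert all(lhs(s, l, j) == comb(s, j)
           for s in range(7) for l in range(s + 1) for j in range(s + 1))
```

Such a brute-force check over a range of $(s,l,j)$ is also a convenient way to test candidate $q$-analogues of the identity before attempting a proof.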
\section{Introduction} On angular scales $\lower.5ex\hbox{\gtsima} 100$ arcsec (corresponding to multipole number $\ell \lower.5ex\hbox{\ltsima} 10^4$) the power spectrum of the source-subtracted Near InfraRed Background (NIRB) fluctuations has an amplitude that exceeds by $\lower.5ex\hbox{\gtsima} 100$ times the signal expected from all galaxies below the detection limit and first stars. The origin of such ``clustering excess" (i.e. due to correlations in the sources' spatial distribution) has represented a puzzle since its discovery (\citealt{2005Natur.438...45K,2007ApJ...654L...5K,2011ApJ...742..124M, 2012ApJ...753...63K, 2012Natur.490..514C, 2014Sci...346..732Z, 2015ApJ...807..140S, 2012ApJ...756...92C, 2013MNRAS.431..383Y}). Two scenarios have been proposed to interpret the clustering excess. The first advocates the contribution from intrahalo light (IHL), i.e. relatively old stars stripped from their parent galaxies following merging events. These stars therefore reside in between dark matter halos and constitute a low-surface brightness haze around galaxies. The IHL is expected to come mostly from low-redshift ($1+z\lower.5ex\hbox{\ltsima} 1.5$) systems \citep{2012Natur.490..514C,2014Sci...346..732Z}. The second scenario is instead based on the presence of a class of early, highly obscured accreting black holes of intermediate mass ($\sim10^{4-6} M_\odot$) at $z\lower.5ex\hbox{\gtsima} 13$ \citep{2013MNRAS.433.1556Y,2014MNRAS.440.1263Y}. As a suitable mechanism to produce such objects does exist -- the so-called Direct Collapse Black Holes (DCBH, for a concise overview of the problem see \citealt{2014MNRAS.443.2410F}) -- and the interpretation of the supermassive black holes observed at $z=6$ seemingly requires massive seeds \citep{2011RvMA...23..189V}, this hypothesis seems particularly worth exploring. Both scenarios successfully explain the observed clustering excess, albeit with apparently demanding requirements.
In fact, if the excess is to be explained by intrahalo light, then a large fraction of the stars at low-$z$ must reside outside systems that we would normally classify as ``galaxies'' \citep{2014Sci...346..732Z}. On the other hand, in the DCBH scenario the abundance of seed black holes produced until $z\sim13$ must represent a sizeable fraction of the estimated present-day black hole abundance, as deduced from local scaling relations \citep{2013ARA&A..51..511K} and recently revised by \citet{2015A&A...574L..10C}. However, it is important to point out that both scenarios are not in conflict with any known observational evidence. The two scenarios, however, differ strongly in the interpretation of the observed cross-correlation between 3.6 $\mu$m NIRB fluctuations and those measured at (0.5-2) keV in the cosmic X-ray background (CXB) by \citet{2013ApJ...769...68C}. While accreting black holes naturally produce X-ray emission, no obvious similar mechanism can be identified for intrahalo stars, thus making it difficult to explain the observed cross-correlation. The DCBH scenario might have its own problems. They could possibly arise from the recent measurement of NIRB fluctuations at 1.1 and 1.6 $\mu$m obtained by {\tt CIBER}\footnote{\url{http://ciber.caltech.edu}}. \cite{2014Sci...346..732Z} showed that these fluctuations do correlate with those observed at 3.6 $\mu$m by {\tt Spitzer}, thus suggesting a common source. If confirmed, this could possibly represent a problem for DCBHs, as their formation must stop after $z\simeq 13$, based on a number of physical arguments discussed in \citet{2014MNRAS.440.1263Y}. Intergalactic absorption at wavelengths shorter than the Ly$\alpha$ line would then prevent DCBHs from contributing at 1.1 and 1.6 $\mu$m.
Here, we show that the cross-correlation between {\tt CIBER} and {\tt Spitzer} bands does not represent a problem for the DCBH hypothesis, as it very likely arises from a well-known contaminant: Diffuse Galactic Light (DGL), i.e. dust-scattered or thermally emitted light in the Milky Way\footnote{Often the term DGL specifically refers to stellar light scattered by dust, while longer wavelength ($\lower.5ex\hbox{\gtsima} 5 \mu$m) thermal emission from grains is referred to as the ``Galactic cirrus".}. \cite{2014Sci...346..732Z} pointed out that DGL strongly contributes to the {\tt CIBER} 1.1 and 1.6 $\mu$m auto-correlation power spectra on large scales. Motivated by this evidence, we show that the 1.1$\times$3.6 $\mu$m and 1.6$\times$3.6 $\mu$m cross-correlations at the largest scales ($\lower.5ex\hbox{\gtsima} 1\degree$) can be purely ascribed to the DGL and that there is no tension between the high-redshift DCBH scenario and {\tt CIBER} observations. Actually, the sub-dominant contribution of DGL in the 3.6$\mu$m band helps {\it decrease} the DCBH abundance required to explain the clustering excess. \section{methods}\label{methods} In order to estimate the DGL component in the 3.6 $\mu$m auto-correlation power spectrum we proceed in the following way. First, we use the fact that on small scales (and in all bands) shot noise completely dominates the 1-halo and 2-halo clustering terms both in the auto- and cross-correlation power spectra. Therefore we can safely assume that at these scales \begin{equation} C_l\approx C_{\rm SN}, \end{equation} and derive the value of $C_{\rm SN}$ by fitting the five {\tt CIBER} data points at the smallest scales in the 1.1 $\mu$m and 1.6 $\mu$m auto-correlation power spectra, and in the 1.1(1.6)$\times$3.6 $\mu$m cross-correlation ones. Results of the fit are reported in Table~\ref{fittings}.
Similarly, on the largest scales the signal is dominated by the DGL component, so that \begin{equation} C_l \approx C_{\rm DGL} = A_{\rm DGL} \left(\frac{l}{1000}\right)^{\alpha}. \end{equation} Here we assume, following \citet{2014Sci...346..732Z,2015NatCo...6E7945M}, that the DGL power spectrum has a power-law form with a fixed slope $\alpha = - 3.05$ for both the DGL auto-correlation and cross-correlation power spectra, as derived from the combined fit of {\tt CIBER} and {\tt HST} data\footnote{Note that the constraints are mainly provided by {\tt HST} data.} \citep{2015NatCo...6E7945M}. Results of the fit of the six {\tt CIBER} data points on the largest scales are also given in Table~\ref{fittings}. We checked that our results do not change significantly when we let the value of $\alpha$ vary within its 1$\sigma$ errors ($-3.05 \pm 0.07$). \begin{table} \begin{center} \caption{Best fitting parameters for 1.1 and 1.6 $\mu$m auto-, and 1.1$\times$3.6, 1.6$\times$3.6 $\mu$m cross-correlation power spectra. We use the 5 (6) data points at the smallest (largest) angular scales for shot noise (DGL) component fitting.} \begin{tabular}{lcc} \hline \hline power spectrum & ${\rm log}(C_{\rm SN})$ & ${\rm log}(A_{\rm DGL})$ \\ \hline 1.1$\times$1.1 $\mu$m&$-6.11^{+0.007}_{-0.008}$& $-4.5^{+0.3}_{-1.7}$ \\ 1.6$\times$1.6 $\mu$m&$-6.33^{+0.007}_{-0.01}$&$-4.6^{+0.3}_{-1.8}$ \\ 1.1$\times$3.6 $\mu$m&$-7.69^{+0.01}_{-0.006}$& $-6.4^{+0.3}_{-1.1}$ \\ 1.6$\times$3.6 $\mu$m&$-7.79^{+0.009}_{-0.008}$& $-6.4^{+0.3}_{-1.1}$\\ \hline \hline \end{tabular}\\ \label{fittings} \end{center} \end{table} We further implicitly assume here that the DGL fluctuations at two wavelengths are perfectly correlated (i.e. unity correlation coefficient). This seems a reasonable assumption given that the three bands are very closely spaced in wavelength.
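A minimal numerical sketch of the amplitude estimate above, on synthetic data: with the slope held fixed at $\alpha=-3.05$, the best-fit $A_{\rm DGL}$ in log space reduces to the mean offset of $\log C_l$ from the fixed power law, and the perfect-correlation assumption then propagates the fitted amplitudes to the 3.6 $\mu$m band. The function names and the synthetic multipole values are our own illustration.

```python
import numpy as np

ALPHA = -3.05  # DGL slope held fixed, from the combined CIBER + HST fit

def fit_dgl_amplitude(ell, cl, alpha=ALPHA):
    """Best-fit A_DGL for C_l = A_DGL * (l/1000)^alpha with the slope held fixed.

    In log space the amplitude is the mean offset from the fixed power law."""
    return 10.0 ** np.mean(np.log10(cl) - alpha * np.log10(ell / 1000.0))

def dgl_auto_36(a_cross, a_auto):
    """A^bb = (A^ab)^2 / A^aa, assuming a unit DGL correlation coefficient."""
    return a_cross ** 2 / a_auto

# synthetic large-scale band powers drawn from the assumed power law
ell = np.array([200.0, 400.0, 700.0, 1000.0, 1500.0, 2000.0])
a_true = 10.0 ** -6.4                    # e.g. the tabulated 1.1x3.6 um cross amplitude
cl = a_true * (ell / 1000.0) ** ALPHA
a_fit = fit_dgl_amplitude(ell, cl)       # recovers a_true on noise-free data

# propagating the tabulated log-amplitudes (-6.4 cross, -4.5 auto) gives
# log A_DGL = 2*(-6.4) - (-4.5) = -8.3 at 3.6 um
log_a_36 = np.log10(dgl_auto_36(10.0 ** -6.4, 10.0 ** -4.5))
```

On real data one would fit the six largest-scale points of each measured spectrum and propagate the (asymmetric) uncertainties; the noise-free example above only illustrates the algebra.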
At this point we can derive the contribution of DGL to the 3.6 $\mu$m auto-correlation power spectrum: under the perfect-correlation assumption, \begin{equation} A_{\rm DGL}^{\rm ab} = \sqrt{ A_{\rm DGL}^{\rm aa}\times A_{\rm DGL}^{\rm bb}}, \end{equation} which gives \begin{equation}\label{a3.6} A_{\rm DGL}^{\rm bb}=(A_{\rm DGL}^{\rm ab})^2/A_{\rm DGL}^{\rm aa}. \end{equation} In the previous expressions, $b$ corresponds to the 3.6 $\mu$m band; $a$ corresponds either to the 1.1 or 1.6 $\mu$m band. \begin{figure*} \centering{ \subfigure{\includegraphics[scale=0.4]{fig1a.eps}} \subfigure{\includegraphics[scale=0.4]{fig1b.eps}} \subfigure{\includegraphics[scale=0.4]{fig1c.eps}} \subfigure{\includegraphics[scale=0.4]{fig1d.eps}} \caption{{\it Upper:} The 1.1 (left) and 1.6 $\mu$m (right) {\tt CIBER} auto-correlation power spectra (filled circles); the solid line shows the sum of the best-fit shot noise and DGL contribution, with the 1$\sigma$ variance indicated by shaded areas. Also plotted are the HST power spectra from \citet{2015NatCo...6E7945M} (filled triangles); note that in the 1.1 $\mu$m panel the {\tt HST} points are actually measured at 1.25 $\mu$m. {\it Lower:} Same as above for the cross-correlation power spectra. } \label{auto} } \end{figure*} \begin{figure*} \centering{ \subfigure{\includegraphics[scale=0.4]{fig2a.eps}} \subfigure{\includegraphics[scale=0.4]{fig2b.eps}} \subfigure{\includegraphics[scale=0.4]{fig2c.eps}} \subfigure{\includegraphics[scale=0.4]{fig2d.eps}} \caption{The 3.6 $\mu$m auto-correlation power spectra derived from the 1.1 (1.6) $\mu$m auto-power spectra and the 1.1(1.6)$\times$3.6 $\mu$m cross-correlation power spectra by assuming perfect correlation. \textit{Upper panels}: sum of DGL and low-$z$ galaxies contributions.
\textit{Bottom}: we add the DCBH contribution (thin lines are for DCBH only, thick lines are the sum) consistent with a mass density of active DCBHs of $4\times10^5~M_\odot$Mpc$^{-3}$ at peak (solid curves and corresponding shaded regions) or $2.7 \times 10^5~M_\odot$ Mpc$^{-3}$ at peak (dashed curves with corresponding shaded regions). In each panel the filled circles (triangles) with error bars are data from \citet{2012Natur.490..514C} \citep{2012ApJ...753...63K}. } \label{auto36} } \end{figure*} Note that the DGL is derived from the (1.1)1.6$\times$3.6 $\mu$m cross-correlation power spectra in \citet{2014Sci...346..732Z}. In that work {\tt Spitzer} observations were analyzed using a shallower source subtraction depth (mag $\simeq 22$) than in the original work by \citet{2012Natur.490..514C}, who instead used a deeper threshold (mag $\simeq 24$). As a result, the 3.6 $\mu$m auto-power spectrum in \citet{2014Sci...346..732Z} is largely dominated by shot noise and no interesting signal is seen. For this reason we have decided to compare the derived auto-power spectrum with the 3.6 $\mu$m auto-power in \citet{2012Natur.490..514C}. This should not introduce any artifacts, as the DGL component must be independent of the point source subtraction depth. Also, note that the DGL fitting has been derived for the same fields, thus eliminating possible effects introduced by the dependence of the DGL signal on Galactic coordinates. \section{Results}\label{results} We compare the curve $[A_{\rm DGL}(l/1000)^\alpha+C_{\rm SN}]l^2/(2\pi)$ and its 1$\sigma$ variance with observations in Fig. \ref{auto}. We find that both the {\tt CIBER} measured auto-correlation power spectra and the cross-correlation power spectra can be nicely matched by the sum of shot noise and DGL without the need for additional components.
Indeed, while some extra power can be seen in {\tt CIBER} data at $3000 \lower.5ex\hbox{\ltsima} l \lower.5ex\hbox{\ltsima} 2\times10^4$, such a bump is not seen in \citet{2015NatCo...6E7945M} where resolved sources are subtracted down to a much deeper limiting magnitude. Thus it is likely that this bump in {\tt CIBER} data is due to foreground sources not resolved in the \citet{2014Sci...346..732Z} analysis. As described in the previous Section, we estimate the contribution of the DGL to the 3.6 $\mu$m auto-correlation power spectrum from both 1.1 $\mu$m and 1.6 $\mu$m data. We obtain a value for the DGL amplitude ${\rm log}(A_{\rm DGL})=-8.3_{-3.3}^{+1.1}$ and ${\rm log}(A_{\rm DGL})=-8.2_{-3.5}^{+1.1}$ from the 1.1 $\mu$m and 1.6 $\mu$m data, respectively. In the top panels of Fig. \ref{auto36} we show the 3.6 $\mu$m fluctuation spectrum measured by {\tt Spitzer} (\citealt{2012Natur.490..514C} for the same field, and \citealt{2012ApJ...753...63K} in a different field as comparison), along with the contribution from DGL and $z<5$ galaxies (clustering and shot noise) as computed by following the methods in \citet{2012ApJ...752..113H}. The solid line shows the sum of these two components while the light shaded area indicates the 1$\sigma$ uncertainty. Shot noise and DGL account quite well for the observed power on both large and small scales. However, the clustering excess is still clearly visible at intermediate scales, i.e. at $300\lower.5ex\hbox{\ltsima} l \lower.5ex\hbox{\ltsima} 2\times10^4$. In principle, such extra power might be partially accounted for by the large errors associated with the estimated DGL component. However, the maximally allowed DGL power spectrum largely overestimates the observed signal at $l<2\times 10^3$. Moreover, we recall that if the 3.6 $\mu$m fluctuations were purely due to the combination of low-$z$ galaxies and DGL signals, the observed 3.6 $\mu$m - CXB cross-correlation would remain unexplained.
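As a sanity check, the two quoted 3.6 $\mu$m amplitudes follow from Eq.~(\ref{a3.6}) applied in log space to the central best-fit values of Table~\ref{fittings} (errors are not propagated in this sketch):

```python
def log_a_dgl_36(log_a_cross, log_a_auto):
    """log10 of A_DGL^{bb} = (A_DGL^{ab})^2 / A_DGL^{aa} (Eq. a3.6)."""
    return 2.0 * log_a_cross - log_a_auto

# Cross amplitudes (1.1)1.6 x 3.6 are both -6.4; autos are -4.5 and -4.6
print(round(log_a_dgl_36(-6.4, -4.5), 2))   # -8.3  (1.1 micron route)
print(round(log_a_dgl_36(-6.4, -4.6), 2))   # -8.2  (1.6 micron route)
```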
Given the results on the DGL obtained so far, it is important to re-examine the DCBH scenario in \citet{2013MNRAS.433.1556Y} to assess whether DCBHs can simultaneously provide the extra power at intermediate scales \textit{and} the observed 3.6 $\mu$m-CXB cross-correlation. A DCBH forms out of pristine gas within atomic-cooling halos ($T_{\rm vir} \ge 10^4$~K) that are irradiated by a strong external flux in the Lyman-Werner or near-infrared bands ($J_{\rm LW}\gsim10^{1-5}\times10^{-21}$ erg s$^{-1}$cm$^{-2}$Hz$^{-1}$sr$^{-1}$, see e.g. \citealt{2010MNRAS.402.1249S}). In such halos the Ly$\alpha$ transition is the only efficient cooling mechanism, since H$_2$ formation is suppressed. As a result the gas contracts almost isothermally and eventually directly collapses into a central black hole seed of mass $\sim10^{4-6} ~M_\odot$. The seed continues to accrete the surrounding gas, and radiates energy from radio to X-ray with a spectrum derived by, e.g., \citet{2015MNRAS.454.3771P}. We will consider two cases: (i) the fiducial model described in \citet{2013MNRAS.433.1556Y}, with a mass density in active DCBHs $\rho_\bullet\approx 4\times10^5~M_\odot$Mpc$^{-3}$ at peak, and (ii) a reduced model in which the abundance is about 2/3 of the fiducial one, $\rho_\bullet\approx 2.7\times10^5~M_\odot$Mpc$^{-3}$. Note that the DCBH mass density is $\sim M_{\rm BH}\times n_{\rm BH}$, so there are degeneracies between $M_{\rm BH}$ and $n_{\rm BH}$. Here we only vary $M_{\rm BH}$, while keeping the other parameters as in the fiducial model. Results are shown in the bottom panels of Fig. \ref{auto36}. It is clear that the DCBH fiducial model can provide the required power at intermediate scales and, when added to the DGL and low-$z$ galaxy contributions (solid line), gives a good fit to {\tt Spitzer} data at all scales. Moreover, given the uncertainties in the DGL signal determination, even the reduced DCBH model can be accommodated. We then turn to the CXB. Fig.
\ref{IRX} reports the cross-correlation spectrum between CXB in $(0.5-2.0)$ keV \citep{2013ApJ...769...68C} and 3.6 $\mu$m IR fluctuations. We also plot the expected DCBH contribution for the fiducial (reduced) model assuming a typical X-ray equivalent HI column density of $N_{\rm H} = 1.3\times10^{25}$~cm$^{-2}$, and the sum of the DCBH + low-$z$ sources (AGNs, galaxies and hot gas, modeled by \citealt{2014ApJ...785...38H}). Clearly, DCBHs can provide the extra power at large scales required by the data (see \citealt{2013MNRAS.433.1556Y} for more details). We also find that the predicted CXB auto-correlation power spectrum of DCBHs for both the fiducial and reduced model is still well below the measured level \citep{2013ApJ...769...68C}, and does not exceed the unresolved fraction of the CXB intensity at 1.5 keV measured by \citet{2012A&A...548A..87M}. \begin{figure} \centering{ \includegraphics[scale=0.4]{fig3.eps} \caption{$(0.5-2.0)$ keV CXB -- 3.6 $\mu$m IR cross-correlation power spectrum. Points are observations from \citet{2013ApJ...769...68C}; dashed curves show the contribution from DCBHs; solid curves are the sum of DCBH and remaining $z<6$ sources (AGNs, galaxies and hot gas, from \citealt{2014ApJ...785...38H}). Thick (thin) lines are for the fiducial (reduced) model with $\rho_\bullet = 4\times10^5~M_\odot$Mpc$^{-3}$ ($\rho_\bullet= 2.7\times10^5~M_\odot$Mpc$^{-3}$) at peak.} \label{IRX} } \end{figure} \section{Conclusions} The DCBH scenario accounting for the observed ``clustering excess'' over the known galaxies signal in the NIRB power spectrum has been questioned by the recent detection of a correlation of the two {\tt CIBER} 1.1/1.6 $\mu$m bands with the 3.6 $\mu$m {\tt Spitzer} one. This correlation is hardly explained by early DCBHs that, due to intergalactic absorption, cannot contribute to the shortest wavelength bands.
We have shown that the new correlation is caused instead by a Diffuse Galactic Light (DGL) component arising from Galactic stellar light scattered by dust. In particular, we have found that: \begin{itemize} \item The (1.1)1.6$\times$3.6 $\mu$m cross-correlation power spectra can be fitted nicely by a DGL component that dominates the large scale NIRB fluctuations, and a shot noise component from the remaining low-$z$ galaxies accounting for the small scale power. \item By assuming perfect correlations (unity correlation coefficient) at different wavelengths, the derived best-fitting DGL fluctuations dominate the 3.6 $\mu$m auto-correlation power spectra on scales $l\lower.5ex\hbox{\ltsima} 300$. Between $300\lower.5ex\hbox{\ltsima} l \lower.5ex\hbox{\ltsima} 2\times10^4$ extra sources in addition to the DGL and the remaining low-$z$ galaxies are required to interpret the ``clustering excess''. \item If the 3.6 $\mu$m NIRB fluctuations are only from DGL + remaining low-$z$ galaxies, the observed 3.6-CXB cross-correlation is hard to explain. Introducing DCBHs gives a much better fit to the 3.6 $\mu$m auto-correlation power spectra, and naturally explains the 3.6-CXB cross-correlation. It also predicts a much decreased clustering term in NIRB-CXB correlations at $\lower.5ex\hbox{\ltsima} 1.6$~$\mu$m. \item Finally, we point out that the inclusion of the DGL allows the required DCBH abundance/mass to be decreased by up to about 30\%. \end{itemize} We conclude that the DCBH scenario remains a viable interpretation of the puzzling NIRB clustering excess, making the NIRB a superb tool to investigate the pristine cosmic epochs in which the supermassive black hole seeds formed. \section*{ACKNOWLEDGMENTS} We thank the Euclid-LIBRAE Team for insightful discussions. We are also indebted to M. Zemcov and A. Cooray for help with the interpretation of the {\tt CIBER} data, and to A. Cooray and K. Mitchell-Wynne for useful comments.
\section{Introduction} The robustness against adversarial attacks of (deep) neural networks (NNs) for classification tasks has become one of the most discussed topics in machine learning research since it was discovered \cite{GoodfellowEtAl:ExplainingAndHarnessingAdversarialExamples:arXiv2015,Szegedy2013}. By making almost imperceptible changes to the input of a NN, attackers are able to force a misclassification of the input or even switch the prediction to any desired class. With machine learning taking a more important role within our society, the security of machine learning models in general is under more scrutiny than ever. To define an adversarial example, we use a definition similar to \cite{ElsayedBengioEtAl:LargeMarginDeepNetworksClassification:NIPS2018_7364}. Suppose we use a set of scoring functions $f_{j}:\mathcal{X\rightarrow}\mathbb{R}$ which assign a score to each class $j\in\mathcal{C}=\left\{ 1,\ldots,N_{c}\right\} $ given an input $\mathbf{x}$ of the data space $\mathcal{X}$. Moreover, the predicted class label $c^{*}\left(\mathbf{x}\right)$ for $\mathbf{x}$ is determined by a winner-takes-all rule $c^{*}\left(\mathbf{x}\right)=\arg\max_{j}f_{j}\left(\mathbf{x}\right)$ and we have access to a labeled data point $\left(\mathbf{x},y\right)$ which is correctly classified as $c^{*}\left(\mathbf{x}\right)=y$. An adversarial example $\tilde{\mathbf{x}}$ of the sample $\mathbf{x}$ is defined as the minimal required perturbation of $\mathbf{x}$ by $\boldsymbol{\epsilon}$ to find a point at the decision boundary or in the classification region of a different class than $y$, i.\,e. \begin{equation} \min_{\boldsymbol{\epsilon}}\left\Vert \boldsymbol{\epsilon}\right\Vert ,\textrm{ s.t. 
}f_{j}\left(\tilde{\mathbf{x}}\right)\geq f_{y}\left(\tilde{\mathbf{x}}\right)\textrm{ and }\tilde{\mathbf{x}}=\mathbf{x}+\boldsymbol{\epsilon}\in\mathcal{X}\textrm{ and \ensuremath{j\neq y.}}\label{eq:AdversarialExample} \end{equation} Note that the magnitude of the perturbation is measured with respect to a given norm $\left\Vert \cdot\right\Vert $. If $f_{j}\left(\tilde{\mathbf{x}}\right)\approx f_{y}\left(\tilde{\mathbf{x}}\right)$, an adversarial example close to the decision boundary is found. Thus, adversarials are also related to the analysis of the decision boundaries in a learned model. It is important to define the difference between the ability to generalize and the robustness of a model \cite{stutz2018disentangling}. Assume a model trained on a finite number of data points drawn from an unknown data manifold in $\mathcal{X}$. Generalization refers to the ability to correctly classify an \emph{arbitrary} point \emph{from} the unknown data manifold (so-called on-manifold samples). The robustness of a model refers to the ability to correctly classify on-manifold samples that were \emph{arbitrarily disturbed}, e.\,g. by injecting Gaussian noise. Depending on the kind of noise these samples are on-manifold or off-manifold adversarials (not located on the data manifold). Generalization and robustness have to be learned explicitly because one does not imply the other. Although Learning Vector Quantization (LVQ), as originally suggested by \noun{T.~Kohonen} in \cite{kohonen88j}, is frequently claimed to be one of the most robust crisp classification approaches, its robustness has not been actively studied yet. This claim is based on the characteristics of LVQ methods to partition the data space into Voronoi cells (receptive fields), according to the best matching prototype vector.
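The nearest-prototype decision rule behind this partition can be sketched in a few lines (toy prototypes and labels, not a trained model):

```python
import numpy as np

# Toy prototype set: two prototypes for class 0, one for class 1
prototypes = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 2.0]])
labels = np.array([0, 0, 1])

def predict(x, w=prototypes, c=labels):
    """Winner-takes-all: label of the closest prototype (squared Euclidean)."""
    d = np.sum((w - x) ** 2, axis=1)
    return int(c[np.argmin(d)])

print(predict(np.array([0.1, -0.2])))   # closest to prototype 0 -> class 0
print(predict(np.array([2.1, 1.9])))    # closest to prototype 2 -> class 1
```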
For the Generalized LVQ (GLVQ) \cite{sato96a}, considered as a differentiable cost function based variant of LVQ, robustness is theoretically anticipated because it maximizes the hypothesis margin in the \emph{input space} \cite{Crammer2002a}. This changes if the squared Euclidean distance in GLVQ is replaced by adaptive dissimilarity measures such as in Generalized Matrix LVQ (GMLVQ) \cite{Schneider2009_MatrixLearning} or Generalized Tangent LVQ (GTLVQ) \cite{Villmann2016h}. They first apply a projection and measure the dissimilarity in the corresponding \emph{projection space}, also denoted as feature space. A general robustness assumption for these models is therefore less evident. The \textbf{observations} of this paper are: \textbf{(1)}~GLVQ and GTLVQ have a high robustness because of their hypothesis margin maximization in an appropriate space. \textbf{(2)}~GMLVQ is susceptible to adversarial attacks and hypothesis margin maximization does not guarantee a robust model in general. \textbf{(3)}~By increasing the number of prototypes the robustness \emph{and} the generalization ability of an LVQ model increase. \textbf{(4)}~Adversarial examples generated for GLVQ and GTLVQ often make semantic sense by interpolating between digits. \section{Learning Vector Quantization\label{sec:Learning-Vector-Quantization}} LVQ assumes a set $\mathcal{W}=\left\{ \mathbf{w}_{1},\ldots,\mathbf{w}_{N_{w}}\right\} $ of prototypes $\mathbf{w}_{k}\in\mathbb{R}^{n}$ to represent and classify the data $\mathbf{x}\in\mathcal{X}\subseteq\mathbb{R}^{n}$ regarding a chosen dissimilarity $d\left(\mathbf{x},\mathbf{w}_{k}\right)$. Each prototype is responsible for exactly one class $c\left(\mathbf{w}_{k}\right)\in\mathcal{C}$ and each class is represented by at least one prototype. The training dataset is defined as a set of labeled data points $X=\left\{ \left(\mathbf{x}_{i},y_{i}\right)|\mathbf{x}_{i}\in\mathcal{X},y_{i}\in\mathcal{C}\right\} $.
The scoring function for the class $j$ yields $f_{j}\left(\mathbf{x}\right)=-\min_{k:c\left(\mathbf{w}_{k}\right)=j}d\left(\mathbf{x},\mathbf{w}_{k}\right)$. Hence, the predicted class $c^{*}\left(\mathbf{x}\right)$ is the class label $c\left(\mathbf{w}_{k}\right)$ of the closest prototype $\mathbf{w}_{k}$ to $\mathbf{x}$. \subsubsection*{Generalized LVQ:} GLVQ is a cost function based variant of LVQ such that stochastic gradient descent learning (SGDL) can be performed as optimization strategy \cite{sato96a}. Given a training sample $\left(\mathbf{x}_{i},y_{i}\right)\in X$, the two \emph{closest} prototypes $\mathbf{w}^{+}\in\mathcal{W}$ and $\mathbf{w}^{-}\in\mathcal{W}$ with correct label $c\left(\mathbf{w}^{+}\right)=y_{i}$ and incorrect label $c\left(\mathbf{w}^{-}\right)\neq y_{i}$ are determined. The dissimilarity function is defined as the squared Euclidean distance $d_{E}^{2}\left(\mathbf{x},\mathbf{w}_{k}\right)=\left(\mathbf{x}-\mathbf{w}_{k}\right)^{T}\left(\mathbf{x}-\mathbf{w}_{k}\right)$. The cost function of GLVQ is \begin{equation} E_{GLVQ}\left(X,\mathcal{W}\right)=\sum_{\left(\mathbf{x}_{i},y_{i}\right)\in X}l\left(\mathbf{x}_{i},y_{i},\mathcal{W}\right)\label{eq:cost GLVQ} \end{equation} with the local loss $l\left(\mathbf{x}_{i},y_{i},\mathcal{W}\right)=\varphi\left(\mu\left(\mathbf{x}_{i},y_{i},\mathcal{W}\right)\right)$ where $\varphi$ is a monotonically increasing differentiable activation function. The classifier function $\mu$ is defined as \begin{equation} \mu\left(\mathbf{x}_{i},y_{i},\mathcal{W}\right)=\frac{d^{+}\left(\mathbf{x}_{i}\right)-d^{-}\left(\mathbf{x}_{i}\right)}{d^{+}\left(\mathbf{x}_{i}\right)+d^{-}\left(\mathbf{x}_{i}\right)}\in\left[-1,1\right]\label{eq:classifier function} \end{equation} where $d^{\pm}\left(\mathbf{x}_{i}\right)=d_{E}^{2}\left(\mathbf{x}_{i},\mathbf{w}^{\pm}\right)$. 
Thus, $\mu\left(\mathbf{x}_{i},y_{i},\mathcal{W}\right)$ is negative for a correctly classified training sample $\left(\mathbf{x}_{i},y_{i}\right)$ and positive otherwise. Since $l\left(\mathbf{x}_{i},y_{i},\mathcal{W}\right)$ is differentiable, the prototypes $\mathcal{W}$ can be learned by a SGDL approach. \subsubsection*{Generalized Matrix LVQ:} By substituting the dissimilarity measure $d_{E}^{2}$ in GLVQ with an adaptive dissimilarity measure \begin{equation} d_{\boldsymbol{\Omega}}^{2}\left(\mathbf{x},\mathbf{w}_{k}\right)=d_{E}^{2}\left(\boldsymbol{\Omega}\mathbf{x},\boldsymbol{\Omega}\mathbf{w}_{k}\right),\label{eq:GMLVQ} \end{equation} GMLVQ is obtained \cite{Schneider2009_MatrixLearning}. The relevance matrix $\boldsymbol{\Omega}\in\mathbb{R}^{r\times n}$ is learned during training in parallel to the prototypes. The parameter $r$ controls the projection dimension of $\boldsymbol{\Omega}$ and must be defined in advance. \subsubsection*{Generalized Tangent LVQ:} In contrast to the previous methods, GTLVQ \cite{Villmann2016h} defines the prototypes as affine subspaces in $\mathbb{R}^{n}$ instead of points. More precisely, the set of prototypes is defined as $\mathcal{W}_{T}=\left\{ \left(\mathbf{w}_{1},\mathbf{W}_{1}\right),\ldots,\left(\mathbf{w}_{N_{w}},\mathbf{W}_{N_{w}}\right)\right\} $ where $\mathbf{W}_{k}\in\mathbb{R}^{n\times r}$ is the $r$-dimensional basis and $\mathbf{w}_{k}$ is the translation vector of the affine subspace. Together with the parameter vector $\boldsymbol{\theta}\in\mathbb{R}^{r}$, they form the prototype as affine subspace $\mathbf{w}_{k}+\mathbf{W}_{k}\boldsymbol{\theta}$. The tangent distance is defined as \begin{equation} d_{T}^{2}\left(\mathbf{x},\left(\mathbf{w}_{k},\mathbf{W}_{k}\right)\right)=\min_{\boldsymbol{\theta}\in\mathbb{R}^{r}}d_{E}^{2}\left(\mathbf{x},\mathbf{w}_{k}+\mathbf{W}_{k}\boldsymbol{\theta}\right)\label{eq:GTLVQ} \end{equation} where $r$ is a hyperparameter. 
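Both the classifier function $\mu$ and the tangent distance admit compact implementations. The sketch below uses toy vectors and assumes an orthonormal basis $\mathbf{W}_{k}$, for which the minimizer of Eq.~(\ref{eq:GTLVQ}) is the least-squares solution $\boldsymbol{\theta}^{*}=\mathbf{W}_{k}^{T}(\mathbf{x}-\mathbf{w}_{k})$:

```python
import numpy as np

def mu(d_plus, d_minus):
    """GLVQ classifier function in [-1, 1]; negative iff d+ < d-."""
    return (d_plus - d_minus) / (d_plus + d_minus)

def tangent_dist_sq(x, w, W):
    """d_T^2(x, (w, W)) = min_theta ||x - w - W @ theta||^2, W orthonormal."""
    r = x - w
    theta = W.T @ r                   # closed-form minimizer for orthonormal W
    return float(np.sum((r - W @ theta) ** 2))

print(mu(1.0, 4.0))                   # -0.6: sample correctly classified
x = np.array([1.0, 2.0, 3.0])
w = np.zeros(3)
W = np.array([[1.0], [0.0], [0.0]])   # 1-d subspace along the first axis
print(tangent_dist_sq(x, w, W))       # 13.0 = 2^2 + 3^2, the off-subspace residual
```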
Substituting $d_{E}^{2}$ in GLVQ with $d_{T}^{2}$ and redefining the set of prototypes to $\mathcal{W}_{T}$ yields GTLVQ. The affine subspaces defined by $\left(\mathbf{w}_{k},\mathbf{W}_{k}\right)$ are learned by SGDL. \section{Experimental Setup} In this section adversarial attacks as well as robustness metrics are introduced and the setup of the evaluation is explained. The setup used here follows the one presented in \cite{SchottEtAl:AdversariallyRobustNNforMNIST:2018} with a few minor modifications to the study of LVQ methods. All experiments and models were implemented using the \noun{Keras} framework in \noun{Python} on top of \noun{Tensorflow}.\footnote{\noun{Tensorflow}: \url{www.tensorflow.org}; \noun{Keras:} \url{www.keras.io}} All evaluated LVQ models are made available as pretrained \noun{Tensorflow }graphs and as part of the \noun{Foolbox zoo}\footnote{\url{https://foolbox.readthedocs.io/en/latest/modules/zoo.html}} at \url{https://github.com/LarsHoldijk/robust_LVQ_models}\noun{.} The \noun{Foolbox} \cite{RauberEtAl:FoolboxPythonToolboxBenchmarkAdversarialRobustness:ICML2017} implementations with default settings were used for the attacks. The evaluation was performed using the MNIST dataset as it is one of the most used datasets for robust model evaluation in the literature. Despite being considered by many as a solved `toy' dataset with state-of-the-art (SOTA) deep learning models reaching close to perfect classification accuracy, the defense of adversarial attacks on MNIST is still far from being trivial \cite{SchottEtAl:AdversariallyRobustNNforMNIST:2018}. The dataset consists of handwritten digits in the data space $\mathcal{X=}\left[0,1\right]^{n}$ with $n=28\cdot28$. We trained our models on the 60K training images and evaluated all metrics and scores on the \emph{complete} 10K test images. 
\subsection{Adversarial Attacks\label{subsec:Adversarial-Attacks}} Adversarial attacks can be grouped into two different approaches, white-box and black-box, distinguished by the amount of knowledge about the model available to the attacker. White-box or gradient-based attacks are based on exploiting the interior gradients of the NNs, while black-box attacks rely only on the output of the model, either the logits, the probabilities or just the predicted discrete class labels. Each attack is designed to optimize the adversarial image regarding a given norm. Usually, the attacks are defined to optimize over $L^{p}$ norms (or $p$-norms) with $p\in\left\{ 0,2,\infty\right\} $ and, therefore, are called $L^{p}$-attacks. In the evaluation, nine attacks including white-box and black-box attacks were compared. The white-box attacks are: Fast Gradient Sign Method (FGSM) \cite{GoodfellowEtAl:ExplainingAndHarnessingAdversarialExamples:arXiv2015}, Fast Gradient Method (FGM), Basic Iterative Method (BIM) \cite{KurakinEtAl:AdversarialExamplesPhysicalWorld:arXiv2017}, Momentum Iterative Method (MIM) \cite{dong2018boosting} and Deepfool \cite{Moosavi-Dezfooli2015}. The black-box attacks are: Gaussian blur, Salt-and-Pepper (S\&P), Pointwise \cite{SchottEtAl:AdversariallyRobustNNforMNIST:2018} and Boundary \cite{BrendelEtAl:DecisionBasedAdversarialAttacksforBlackBoxMachineLearningModels:Proc.ICLR2018}. See Tab.~\ref{tab:robustness_comparison} for the $L^{p}$ definition of each attack. Note that some of the attacks are defined for more than one norm. \subsection{Robustness Metrics} The robustness of a model is measured by four different metrics, all based on the \emph{adversarial distances} $\delta_{A}\left(\mathbf{x},y\right)$. 
Given a labeled test sample $\left(\mathbf{x},y\right)$ from a test set $T$ and an adversarial $L^{p}$-attack $A$, $\delta_{A}\left(\mathbf{x},y\right)$ is defined as: \textbf{(1)}~zero if the data sample is misclassified $c^{*}\left(\mathbf{x}\right)\neq y$; \textbf{(2)}~$\left\Vert \boldsymbol{\epsilon}\right\Vert _{p}=\left\Vert \tilde{\mathbf{x}}-\mathbf{x}\right\Vert _{p}$ if $A$ found an adversary $\tilde{\mathbf{x}}$ and $c^{*}\left(\mathbf{x}\right)=y$; \textbf{(3)}~$\infty$ if no adversary was found by $A$ and $c^{*}\left(\mathbf{x}\right)=y$. For each attack $A$ the \emph{median-$\delta_{A}$} score is defined as $\textrm{median}\left\{ \delta_{A}\left(\mathbf{x},y\right)|\left(\mathbf{x},y\right)\in T\right\} ,$ describing an averaged $\delta_{A}$ over $T$ robust to outliers.\footnote{Hence, \emph{median-$\delta_{A}$} can be $\infty$ if for over $50\%$ of the samples no adversary was found.} The \emph{median-$\delta_{p}^{*}$} score is computed for all $L^{p}$-attacks as the $\textrm{median}\left\{ \delta_{p}^{*}\left(\mathbf{x},y\right)|\left(\mathbf{x},y\right)\in T\right\} $ where $\delta_{p}^{*}\left(\mathbf{x},y\right)$ is defined as $\min\left\{ \delta_{A}\left(\mathbf{x},y\right)|A\text{\textrm{ is a }}L^{p}\textrm{-attack}\right\} $. This score is a worst-case evaluation of the \emph{median-$\delta_{A}$}, assuming that each sample is disturbed by the respective worst-case attack $A_{p}^{*}$ (the attack with the smallest distance). Additionally, the threshold accuracies \emph{acc-}$A$ and \emph{acc-}$A_{p}^{*}$ of a model over $T$ are defined as the percentage of adversarial examples found with $\delta_{A}\left(\mathbf{x},y\right)\leq t_{p}$, using either the given $L^{p}$-attack $A$ for all samples or the respective worst-case attack $A_{p}^{*}$. This metric represents the remaining accuracy of the model when only adversaries under a given threshold are considered valid.
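On a toy set of adversarial distances these metrics read as follows; interpreting acc-$A$ as the fraction of samples with $\delta_{A}>t_{p}$ (the remaining accuracy) is our reading of the definition above:

```python
import numpy as np

# Toy adversarial distances over a 5-sample "test set":
# 0 = clean misclassification, finite = attack succeeded, inf = attack failed
deltas = np.array([0.0, 0.4, 1.2, 2.5, np.inf])

median_delta = np.median(deltas)          # outlier-robust average: 1.2

def threshold_acc(deltas, t):
    """Remaining accuracy: fraction of samples the attack does not break
    within the perturbation budget t."""
    return float(np.mean(deltas > t))

print(median_delta)                        # 1.2
print(threshold_acc(deltas, 1.5))          # 0.4 (2 of 5 samples survive)
```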
We used the following thresholds for our evaluation: $t_{0}=12$, $t_{2}=1.5$ and $t_{\infty}=0.3$. \subsection{Training Setup and Models} \begin{table}[t] \caption{The results of the robustness evaluation. Attacks are clustered by their $L^{p}$ class, the boxes denote the type of the attack (white- or black-box). Accuracies are given in percentages and the \#prototypes is recorded per class. All scores are evaluated on the test set. For each model we report the clean accuracy (clean acc.), the median-$\delta_{A}$ (left value) and acc-$A$ score (right value) for each attack and the worst-case (worst-c.) analysis over all $L^{p}$-attacks by presenting the median-$\delta_{p}^{*}$ (left value) and acc-$A_{p}^{*}$ score (right value). Higher scores mean higher robustness of the model. The median-$\delta_{A}$ of the most robust model in each attack is highlighted in bold. Overall, the model with the best (highest) worst-case median-$\delta_{p}^{*}$ is underlined and highlighted.\label{tab:robustness_comparison}} \centering{}{\scriptsize{}}% \begin{longtable}{ll|>{\centering}p{0.5cm}>{\centering}p{0.5cm}||>{\centering}p{0.5cm}>{\centering}p{0.5cm}||>{\centering}p{0.5cm}>{\centering}p{0.5cm}|>{\centering}p{0.5cm}>{\centering}p{0.5cm}||>{\centering}p{0.5cm}>{\centering}p{0.5cm}|>{\centering}p{0.5cm}>{\centering}p{0.5cm}||>{\centering}p{0.5cm}>{\centering}p{0.5cm}|>{\centering}p{0.5cm}>{\centering}p{0.5cm}} & & \multicolumn{2}{c||}{{\footnotesize{}CNN}} & \multicolumn{2}{c||}{{\footnotesize{}Madry}} & \multicolumn{4}{c||}{{\footnotesize{}GLVQ}} & \multicolumn{4}{c||}{{\footnotesize{}GMLVQ}} & \multicolumn{4}{c}{{\footnotesize{}GTLVQ}}\tabularnewline \hline \multicolumn{2}{l|}{{\footnotesize{}\#prototypes}} & & & & & \multicolumn{2}{c|}{{\footnotesize{}1}} & \multicolumn{2}{c||}{{\footnotesize{}128}} & \multicolumn{2}{c|}{{\footnotesize{}1}} & \multicolumn{2}{c||}{{\footnotesize{}49}} & \multicolumn{2}{c|}{{\footnotesize{}1}} & 
\multicolumn{2}{c}{{\footnotesize{}10}}\tabularnewline \hline \multicolumn{2}{l|}{{\footnotesize{}Clean acc.}} & & {\footnotesize{}99} & & {\footnotesize{}99} & & {\footnotesize{}83} & & {\footnotesize{}95} & & {\footnotesize{}88} & & {\footnotesize{}93} & & {\footnotesize{}95} & & {\footnotesize{}97}\tabularnewline \hline \multirow{7}{*}{\begin{turn}{90} {\footnotesize{}$L^{2}$} \end{turn}} & {\footnotesize{}FGM} \hspace*{\fill}{\tiny{}$\square$} & {\footnotesize{}2.1} & \textcolor{black}{\footnotesize{}73} & \textbf{\footnotesize{}$\boldsymbol{\infty}$} & {\footnotesize{}96} & {\footnotesize{}$\infty$} & {\footnotesize{}63} & {\footnotesize{}$\infty$} & {\footnotesize{}76} & {\footnotesize{}0.6} & {\footnotesize{}7} & {\footnotesize{}0.8} & {\footnotesize{}15} & {\footnotesize{}$\infty$} & {\footnotesize{}71} & {\footnotesize{}$\infty$} & {\footnotesize{}81}\tabularnewline & {\footnotesize{}Deepfool} \hspace*{\fill}{\tiny{}$\square$} & {\footnotesize{}1.9} & \textcolor{black}{\footnotesize{}70} & \textbf{\footnotesize{}5.5} & {\footnotesize{}94} & {\footnotesize{}1.6} & {\footnotesize{}53} & {\footnotesize{}2.3} & {\footnotesize{}73} & {\footnotesize{}0.5} & {\footnotesize{}26} & {\footnotesize{}0.7} & {\footnotesize{}27} & {\footnotesize{}2.3} & {\footnotesize{}73} & {\footnotesize{}2.5} & {\footnotesize{}81}\tabularnewline & {\footnotesize{}BIM} \hspace*{\fill}{\tiny{}$\square$} & {\footnotesize{}1.5} & \textcolor{black}{\footnotesize{}50} & \textbf{\footnotesize{}4.9} & {\footnotesize{}94} & {\footnotesize{}1.5} & {\footnotesize{}50} & {\footnotesize{}2.1} & {\footnotesize{}68} & {\footnotesize{}0.6} & {\footnotesize{}6} & {\footnotesize{}0.7} & {\footnotesize{}8} & {\footnotesize{}2.1} & {\footnotesize{}68} & {\footnotesize{}2.3} & {\footnotesize{}77}\tabularnewline & {\footnotesize{}Gaussian} \hspace*{\fill}{\tiny{}$\blacksquare$} & {\footnotesize{}6.4} & \textcolor{black}{\footnotesize{}99} & {\footnotesize{}6.6} & {\footnotesize{}98} & {\footnotesize{}6.8} 
& {\footnotesize{}83} & {\footnotesize{}6.7} & {\footnotesize{}68} & {\footnotesize{}6.3} & {\footnotesize{}88} & {\footnotesize{}6.2} & {\footnotesize{}92} & \textbf{\footnotesize{}7.1} & {\footnotesize{}94} & {\footnotesize{}6.9} & {\footnotesize{}97}\tabularnewline & {\footnotesize{}Pointwise} \hspace*{\fill}{\tiny{}$\blacksquare$} & {\footnotesize{}4.2} & \textcolor{black}{\footnotesize{}96} & {\footnotesize{}2.1} & {\footnotesize{}80} & {\footnotesize{}4.5} & {\footnotesize{}79} & {\footnotesize{}5.4} & {\footnotesize{}92} & {\footnotesize{}1.6} & {\footnotesize{}54} & {\footnotesize{}2.4} & {\footnotesize{}78} & {\footnotesize{}5.5} & {\footnotesize{}92} & \textbf{\footnotesize{}5.6} & {\footnotesize{}95}\tabularnewline & {\footnotesize{}Boundary} \hspace*{\fill}{\tiny{}$\blacksquare$} & {\footnotesize{}1.9} & \textcolor{black}{\footnotesize{}76} & {\footnotesize{}1.5} & {\footnotesize{}52} & {\footnotesize{}2.1} & {\footnotesize{}61} & \textbf{\footnotesize{}3.2} & {\footnotesize{}76} & {\footnotesize{}0.6} & {\footnotesize{}7} & {\footnotesize{}0.8} & {\footnotesize{}7} & {\footnotesize{}2.8} & {\footnotesize{}78} & {\footnotesize{}3.1} & {\footnotesize{}86}\tabularnewline \cline{2-18} & \textbf{\footnotesize{}worst-c.} & {\footnotesize{}1.5} & \textcolor{black}{\footnotesize{}50} & {\footnotesize{}1.5} & {\footnotesize{}52} & {\footnotesize{}1.5} & {\footnotesize{}49} & {\footnotesize{}2.1} & {\footnotesize{}68} & {\footnotesize{}0.5} & {\footnotesize{}3} & {\footnotesize{}0.6} & {\footnotesize{}3} & {\footnotesize{}2.1} & {\footnotesize{}68} & \textbf{\footnotesize{}\uline{2.2}} & {\footnotesize{}77}\tabularnewline \hline \hline \multirow{5}{*}{\begin{turn}{90} {\footnotesize{}$L^{\infty}$} \end{turn}} & {\footnotesize{}FGSM} \hspace*{\fill}{\tiny{}$\square$} & {\footnotesize{}.17} & \textcolor{black}{\footnotesize{}7} & \textbf{\footnotesize{}.52} & {\footnotesize{}96} & {\footnotesize{}.17} & {\footnotesize{}11} & {\footnotesize{}.29} & 
{\footnotesize{}43} & {\footnotesize{}.04} & {\footnotesize{}0} & {\footnotesize{}.05} & {\footnotesize{}0} & {\footnotesize{}.22} & {\footnotesize{}18} & {\footnotesize{}.25} & {\footnotesize{}26}\tabularnewline & {\footnotesize{}Deepfool} \hspace*{\fill}{\tiny{}$\square$} & {\footnotesize{}.16} & \textcolor{black}{\footnotesize{}1} & \textbf{\footnotesize{}.49} & {\footnotesize{}95} & {\footnotesize{}.13} & {\footnotesize{}7} & {\footnotesize{}.22} & {\footnotesize{}21} & {\footnotesize{}.04} & {\footnotesize{}27} & {\footnotesize{}.05} & {\footnotesize{}19} & {\footnotesize{}.19} & {\footnotesize{}9} & {\footnotesize{}.22} & {\footnotesize{}19}\tabularnewline & {\footnotesize{}BIM} \hspace*{\fill}{\tiny{}$\square$} & {\footnotesize{}.12} & \textcolor{black}{\footnotesize{}0} & \textbf{\footnotesize{}.41} & {\footnotesize{}94} & {\footnotesize{}.12} & {\footnotesize{}3} & {\footnotesize{}.20} & {\footnotesize{}9} & {\footnotesize{}.04} & {\footnotesize{}0} & {\footnotesize{}.05} & {\footnotesize{}0} & {\footnotesize{}.17} & {\footnotesize{}3} & {\footnotesize{}.20} & {\footnotesize{}5}\tabularnewline & \textcolor{black}{\footnotesize{}MIM} \hspace*{\fill}{\tiny{}$\square$} & \textcolor{black}{\footnotesize{}.13} & \textcolor{black}{\footnotesize{}0} & \textbf{\textcolor{black}{\footnotesize{}.38}} & \textcolor{black}{\footnotesize{}93} & \textcolor{black}{\footnotesize{}.12} & \textcolor{black}{\footnotesize{}3} & {\footnotesize{}.19} & {\footnotesize{}9} & \textcolor{black}{\footnotesize{}.04} & \textcolor{black}{\footnotesize{}0} & {\footnotesize{}.05} & {\footnotesize{}0} & \textcolor{black}{\footnotesize{}.17} & \textcolor{black}{\footnotesize{}3} & {\footnotesize{}.20} & {\footnotesize{}5}\tabularnewline \cline{2-18} & \textbf{\footnotesize{}worst-c.} & {\footnotesize{}.12} & \textcolor{black}{\footnotesize{}0} & \textbf{\footnotesize{}\uline{.38}} & {\footnotesize{}93} & {\footnotesize{}.11} & {\footnotesize{}2} & {\footnotesize{}.19} & {\footnotesize{}5} & 
{\footnotesize{}.03} & {\footnotesize{}0} & {\footnotesize{}.04} & {\footnotesize{}0} & {\footnotesize{}.17} & {\footnotesize{}3} & {\footnotesize{}.19} & {\footnotesize{}4}\tabularnewline \hline \hline \multirow{3}{*}{\begin{turn}{90} {\footnotesize{}$L^{0}$} \end{turn}} & {\footnotesize{}Pointwise} \hspace*{\fill}{\tiny{}$\blacksquare$} & {\footnotesize{}19} & \textcolor{black}{\footnotesize{}73} & {\footnotesize{}4} & {\footnotesize{}1} & {\footnotesize{}22} & {\footnotesize{}64} & {\footnotesize{}32} & {\footnotesize{}79} & {\footnotesize{}3} & {\footnotesize{}6} & {\footnotesize{}6} & {\footnotesize{}18} & {\footnotesize{}34} & {\footnotesize{}80} & \textbf{\footnotesize{}35} & {\footnotesize{}85}\tabularnewline & {\footnotesize{}S\&P} \hspace*{\fill}{\tiny{}$\blacksquare$} & {\footnotesize{}65} & \textcolor{black}{\footnotesize{}94} & {\footnotesize{}17} & {\footnotesize{}63} & {\footnotesize{}126} & {\footnotesize{}77} & \textbf{\footnotesize{}188} & {\footnotesize{}92} & {\footnotesize{}8} & {\footnotesize{}37} & {\footnotesize{}17} & {\footnotesize{}61} & {\footnotesize{}155} & {\footnotesize{}91} & {\footnotesize{}179} & {\footnotesize{}95}\tabularnewline \cline{2-18} & \textbf{\footnotesize{}worst-c.} & {\footnotesize{}19} & \textcolor{black}{\footnotesize{}73} & {\footnotesize{}4} & {\footnotesize{}1} & {\footnotesize{}22} & {\footnotesize{}64} & {\footnotesize{}32} & {\footnotesize{}79} & {\footnotesize{}3} & {\footnotesize{}6} & {\footnotesize{}6} & {\footnotesize{}18} & {\footnotesize{}34} & {\footnotesize{}80} & \textbf{\footnotesize{}\uline{35}} & {\footnotesize{}85}\tabularnewline \end{longtable}{\scriptsize\par} \end{table} All models, except the Madry model, were trained with the Adam optimizer \cite{KingmaBa:AdamMethodStochasticGradient:ICLR2015} for 150 epochs using basic data augmentation in the form of random shifts by $\pm2$\,pixels and random rotations by $\pm15^{{^\circ}}$. 
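As a reference for the optimizer mentioned above, the Adam update can be sketched for a single scalar weight as follows. This is a minimal illustration with the usual default hyperparameters; the values are illustrative, and it is not the training code used for the experiments.

```python
import math

def adam_step(w, g, m, v, t, lr=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single scalar weight w with gradient g.

    m and v are the running first/second moment estimates; t is the
    step counter, starting at 1 for the first update.
    """
    m = beta1 * m + (1 - beta1) * g        # biased first moment estimate
    v = beta2 * v + (1 - beta2) * g * g    # biased second moment estimate
    m_hat = m / (1 - beta1 ** t)           # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v

# First step: m_hat = g and v_hat = g**2, so the step size is close to lr.
w, m, v = adam_step(w=0.5, g=2.0, m=0.0, v=0.0, t=1)
```

In practice this update is applied per parameter over all 150 epochs, with any learning-rate decay handled separately.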
\subsubsection*{NN Models:\label{subsec:Baseline-models}} Two NNs are used as baseline models for the evaluation. The first model is a convolutional NN, denoted as CNN, with two convolutional layers and two fully connected layers. The convolutional layers have 32 and 64 filters with a stride of one and a kernel size of 3$\times$3. Both are followed by max-pooling layers, each with a window size and stride of 2$\times$2. None of the layers use padding. The first fully connected layer has 128 neurons and a dropout rate of 0.5. All layers use the ReLU activation function except for the final fully connected output layer, which uses a softmax function. The network was trained using the categorical cross-entropy loss and an initial learning rate of $10^{-4}$ with a decay of 0.9 at plateaus. The second baseline model is the current SOTA model for MNIST in terms of robustness, proposed in \cite{Madry2017} and denoted as Madry. This model relies on a special kind of adversarial training by considering it as a min-max optimization game: before the loss function is minimized over a given training batch, the original images are partially substituted by perturbed images with $\left\Vert \boldsymbol{\epsilon}\right\Vert _{\infty}\leq0.3$ such that the loss function is \emph{maximized} over the given batch. The Madry model was downloaded from \url{https://github.com/MadryLab/mnist_challenge}. \subsubsection*{LVQ Models:} All three LVQ models were trained using an initial learning rate of 0.01 with a decay of 0.5 at plateaus and with $\varphi$ defined as the identity function. The prototypes (translation vectors) of all methods were class-wise initialized by k-means over the training dataset. For GMLVQ, we defined $\boldsymbol{\Omega}$ with $n=r$ and initialized $\boldsymbol{\Omega}$ as a scaled identity matrix with Frobenius norm one. After each update step, $\boldsymbol{\Omega}$ was normalized to again have Frobenius norm one.
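The handling of $\boldsymbol{\Omega}$ described above can be sketched as follows: normalize $\boldsymbol{\Omega}$ to Frobenius norm one and measure the dissimilarity $d_{\boldsymbol{\Omega}}^{2}(\mathbf{x},\mathbf{w})=\left\Vert \boldsymbol{\Omega}(\mathbf{x}-\mathbf{w})\right\Vert ^{2}$, the standard GMLVQ distance. This is a minimal Python illustration with toy dimensions and vectors, not the trained model.

```python
import math

def frobenius_normalize(omega):
    """Scale a matrix (given as a list of rows) to Frobenius norm one."""
    norm = math.sqrt(sum(v * v for row in omega for v in row))
    return [[v / norm for v in row] for row in omega]

def d_omega_sq(omega, x, w):
    """Squared GMLVQ dissimilarity ||Omega (x - w)||^2."""
    diff = [xi - wi for xi, wi in zip(x, w)]
    proj = [sum(o * d for o, d in zip(row, diff)) for row in omega]
    return sum(p * p for p in proj)

# Scaled-identity initialization with Frobenius norm one, as described above.
n = 4
omega = frobenius_normalize([[1.0 if i == j else 0.0 for j in range(n)]
                             for i in range(n)])
```

After each update step, the same `frobenius_normalize` call restores the norm-one constraint. With $\boldsymbol{\Omega}=\mathbf{I}/\sqrt{n}$, the dissimilarity reduces to the squared Euclidean distance scaled by $1/n$.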
The basis matrices $\mathbf{W}_{k}$ of GTLVQ were defined by $r=12$ and initialized by a singular value decomposition with respect to each initialized prototype $\mathbf{w}_{k}$ over the set of class corresponding training points \cite{Villmann2016h}. The prototypes were not constrained to $\mathcal{X}$ (`box constrained') during the training, resulting in possibly non-interpretable prototypes as they can be points in $\mathbb{R}^{n}$.\footnote{A restriction to $\mathcal{X}$ leads to an accuracy decrease of less than $1\%$.} Two versions of each LVQ model were trained: one with one prototype per class and one with multiple prototypes per class. For the latter the numbers of prototypes were chosen such that all LVQ models have roughly 1M parameters. The chosen number of prototypes per class are given in Tab.~\ref{tab:robustness_comparison} by \#prototypes. \section{Results} \begin{figure} \begin{centering} \begin{tabular}{lcccccccccc} CNN & \includegraphics{images/cnn_0_8} & \includegraphics{images/cnn_1_6} & \includegraphics{images/cnn_2_1} & \includegraphics{images/cnn_3_8} & \includegraphics{images/cnn_4_8} & \includegraphics{images/cnn_5_3} & \includegraphics{images/cnn_6_5} & \includegraphics{images/cnn_7_3} & \includegraphics{images/cnn_8_3} & \includegraphics{images/cnn_9_4}\tabularnewline Madry & \includegraphics{images/madry_0_7} & \includegraphics{images/madry_1_5} & \includegraphics{images/madry_2_1} & \includegraphics{images/madry_3_8} & \includegraphics{images/madry_4_9} & \includegraphics{images/madry_5_3} & \includegraphics{images/madry_6_0} & \includegraphics{images/madry_7_4} & \includegraphics{images/madry_8_2} & \includegraphics{images/madry_9_4}\tabularnewline GLVQ & \includegraphics{images/glvq_0_5} & \includegraphics{images/glvq_1_4} & \includegraphics{images/glvq_2_3} & \includegraphics{images/glvq_3_7} & \includegraphics{images/glvq_4_2} & \includegraphics{images/glvq_5_0} & \includegraphics{images/glvq_6_5} & 
\includegraphics{images/glvq_7_4} & \includegraphics{images/glvq_8_3} & \includegraphics{images/glvq_9_4}\tabularnewline GMLVQ & \includegraphics{images/gmlvq_0_2} & \includegraphics{images/gmlvq_1_6} & \includegraphics{images/gmlvq_2_5} & \includegraphics{images/gmlvq_3_2} & \includegraphics{images/gmlvq_4_6} & \includegraphics{images/gmlvq_5_3} & \includegraphics{images/gmlvq_6_8} & \includegraphics{images/gmlvq_7_9} & \includegraphics{images/gmlvq_8_5} & \includegraphics{images/gmlvq_9_8}\tabularnewline GTLVQ & \includegraphics{images/gtlvq_0_7} & \includegraphics{images/gtlvq_1_4} & \includegraphics{images/gtlvq_2_3} & \includegraphics{images/gtlvq_3_0} & \includegraphics{images/gtlvq_4_9} & \includegraphics{images/gtlvq_5_3} & \includegraphics{images/gtlvq_6_5} & \includegraphics{images/gtlvq_7_4} & \includegraphics{images/gtlvq_8_3} & \includegraphics{images/gtlvq_9_4}\tabularnewline \end{tabular} \par\end{centering} \caption{For each model, adversarial examples generated by the attacks (from left to right): Gaussian, Deepfool ($L_{2}$), BIM ($L_{2}$), Boundary, Pointwise ($L_{0}$), S\&P, FGSM, Deepfool ($L_{\infty}$), BIM ($L_{\infty}$) and MIM. For the LVQ models, the version with more prototypes per class was used. The ten digits were randomly selected under the condition that every digit was classified correctly by all models. The original images are 0, 1, ..., 9 from left to right. The red digits in the lower right corners indicate the model's prediction after the adversarial attack.\label{fig: images}} \end{figure} The results of the model robustness evaluation are presented in Tab.~\ref{tab:robustness_comparison}. Fig.~\ref{fig: images} displays adversarial examples generated for each model. Below, the four most notable observations that can be made from the results are discussed.
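All LVQ models above classify by the nearest prototype, and the discussion of the results relies on the hypothesis margin of \cite{Crammer2002a}: half the difference between the distances to the closest wrong-class and the closest correct-class prototype. A minimal Python sketch with toy 2-D prototypes (illustrative only, not the trained models):

```python
import math

def euclidean(x, w):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, w)))

def classify(x, prototypes):
    """Nearest-prototype (GLVQ) decision; prototypes = [(vector, label), ...]."""
    return min(prototypes, key=lambda p: euclidean(x, p[0]))[1]

def hypothesis_margin(x, label, prototypes):
    """Half the gap between the closest wrong-class and closest
    correct-class prototype (Crammer et al., 2002)."""
    d_same = min(euclidean(x, w) for w, l in prototypes if l == label)
    d_other = min(euclidean(x, w) for w, l in prototypes if l != label)
    return 0.5 * (d_other - d_same)

# Toy 2-D example with one prototype per class.
protos = [([0.0, 0.0], 0), ([4.0, 0.0], 1)]
x = [1.0, 0.0]
```

For $\mathbf{x}=(1,0)$ the hypothesis margin is $\frac{1}{2}(3-1)=1$; in this two-prototype toy case it coincides with the distance to the decision boundary at $x_{1}=2$.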
\subsubsection*{Hypothesis margin maximization in the input space produces robust models (GLVQ and GTLVQ are highly robust): \label{subsec:Hypothesis-margin-maximization in the input space}} Tab.~\ref{tab:robustness_comparison} shows outstanding robustness against adversarial attacks for GLVQ and GTLVQ. GLVQ with multiple prototypes per class and GTLVQ with either one or multiple prototypes per class outperform the NN models by a large margin for the $L^{0}$- and $L^{2}$-attacks, while having a considerably lower clean accuracy. This is not only the case for individual black-box attacks but also for the worst-case scenarios. For the $L^{0}$-attacks this difference is especially apparent. A possible explanation is that the robustness of GLVQ and GTLVQ results from the hypothesis margin maximization in the input space \cite{Crammer2002a}.\footnote{Note that the results of \cite{Crammer2002a} hold for GTLVQ as it can be seen as a version of GLVQ with infinitely many prototypes learning the affine subspaces.} In \cite{Crammer2002a} it was stated that the hypothesis margin is a lower bound for the sample margin, which is, \emph{if defined in the input space}, used in the definition of adversarial examples (\ref{eq:AdversarialExample}). \emph{Hence, if we maximize the hypothesis margin in the input space, we guarantee a high sample margin and, therefore, a robust model.} A first attempt to transfer this principle was made in \cite{ElsayedBengioEtAl:LargeMarginDeepNetworksClassification:NIPS2018_7364} to create a robust NN by a first-order approximation of the sample margin in the input space. However, the Madry model still outperforms GLVQ and GTLVQ in the $L^{\infty}$-attacks, as expected. This result is easily explained using the manifold-based definition of adversarial examples and the adversarial training procedure of the Madry model, which optimizes the robustness against $\left\Vert \boldsymbol{\epsilon}\right\Vert _{\infty}\leq0.3$.
Considering the manifold definition, one could say that Madry augmented the original \noun{MNIST} manifold to include small $L^{\infty}$ perturbations. Doing so, Madry creates a new \emph{training}-manifold in addition to the original \noun{MNIST} manifold. In other words, the $L^{\infty}$ robustness of the adversarially trained Madry model can be seen as its generalization on the new training-manifold (this becomes clear if one considers the high acc-$A$ scores for $L^{\infty}$). For this reason, the Madry model is only robust against off-manifold examples that are on the generated training-manifold. As soon as off-training-manifold examples are considered, the accuracy drops fast. This was also shown in \cite{SchottEtAl:AdversariallyRobustNNforMNIST:2018}, where the accuracy of the Madry model is significantly lower when considering a threshold $t_{\infty}>0.3$.\footnote{For future work, a more extensive evaluation should be considered, including not only the norm for which a single attack was optimized but rather a combination of all three norms. This gives a better insight into the characteristics of the attack and the defending model. The $L^{0}$ norm can be interpreted as the number of pixels that have to change, the $L^{\infty}$ norm as the maximum deviation of a pixel and the $L^{2}$ norm as a kind of average pixel change. As attacks are optimized for a certain norm, only considering this norm might give a skewed impression of their attacking capability. In addition, calculating a threshold accuracy that includes only adversaries below all three thresholds may give an interesting and more meaningful metric.} Furthermore, the Madry model has outstanding robustness scores for gradient-based attacks in general. We attribute this effect to a potential obfuscation of gradients as a side effect of the adversarial training procedure.
While \cite{athalye2018obfuscated} was not able to find concrete evidence of gradient obfuscation due to adversarial training in the Madry model, it did list black-box attacks outperforming white-box attacks as a signal of its occurrence. \subsubsection*{Hypothesis margin maximization in a space different from the input space does not necessarily produce robust models (GMLVQ is susceptible to adversarial attacks):} In contrast to GLVQ and GTLVQ, GMLVQ has the lowest robustness score across all attacks and all methods. Taking the strong relation of GTLVQ and GMLVQ into account \cite{Villmann2016h}, this is a remarkable result.\footnote{GTLVQ can be seen as a localized version of GMLVQ with the constraint that the $\boldsymbol{\Omega}$ matrices must be orthogonal projectors. } One potential reason is that GMLVQ maximizes the hypothesis margin in a projection space, which in general differs from the input space. The margin maximization in the projection space is used to construct a model with good generalization abilities, which is why GMLVQ usually outperforms GLVQ in terms of accuracy (see the clean accuracy for GLVQ and GMLVQ with one prototype per class). However, a large margin in the projection space does not guarantee a large margin in the input space. Thus, GMLVQ does not implicitly optimize the separation margin, as used in the definition of an adversarial example (\ref{eq:AdversarialExample}), in the input space. Hence, GMLVQ is a good example to show that a model which generalizes well is not necessarily robust. Another effect which explains the observed lack of robustness of GMLVQ is its tendency to oversimplify (to collapse data dimensions) without regularization. Oversimplification may induce heavy distortions in the mapping between input and projection space, potentially creating dimensions in which a small perturbation in the input space can be mapped to a large perturbation in the projection space.
These dimensions are later used to efficiently place the adversarial attack. This effect is closely related to theory known from metric learning, where oversimplification was used in \cite{globerson2006metric} to optimize a classifier over $d_{\boldsymbol{\Omega}}^{2}$ which \emph{maximally collapses (concentrates) the classes to single points} (related to the prototypes in GMLVQ). It was empirically shown that this effect helps to achieve good generalization. To improve the robustness of GMLVQ, penalizing the collapse of dimensions may be a successful approach. A method to achieve this is to force the eigenvalue spectrum of the mapping to follow a uniform distribution, as proposed in \cite{Villmann2010l}. This regularization technique would also strengthen the transferability of the margin between the projection and the input space. Unfortunately, it requires the possibly numerically unstable computation of the derivative of the determinant of a product of $\boldsymbol{\Omega}$, which has so far made it impossible to train an appropriate model for MNIST using this regularization. The fact that GTLVQ is a constrained version of GMLVQ gives additional reason to believe that regularizations\,/\,constraints are able to force a model to be more robust. \subsubsection*{Increasing the number of prototypes improves the ability to generalize and the robustness:} For all three LVQ models the robustness improves if the number of prototypes per class increases. Additionally, increasing the number of prototypes leads to a better ability to generalize. This observation provides empirical evidence supporting the results of \cite{stutz2018disentangling}, where it was stated that generalization and robustness are not necessarily contradictory goals, a topic recently under discussion. With multiple prototypes per class, the robustness of the GLVQ model improves by a significantly larger margin than that of GTLVQ.
This can be explained by the high accuracy of GTLVQ with one prototype per class, which indicates that the data manifold of \noun{MNIST} is almost flat and can therefore be described by one tangent, so that introducing more prototypes does not improve the model's generalization ability. If we add more prototypes in GLVQ, the prototypes will start to approximate the data manifold and, with that, implicitly the tangent prototypes used in GTLVQ. With more prototypes per class, the scores of GLVQ will therefore most likely converge towards those of GTLVQ. \subsubsection*{GLVQ and GTLVQ require semantically correct adversarial examples:} Fig.~\ref{fig: images} shows a large semantic difference between the adversarial examples generated for GLVQ\,/\,GTLVQ and the other models. A large portion of the adversarial examples generated for the GLVQ and GTLVQ models look like interpolations between the original digit and another digit.\footnote{A similar effect was observed in \cite{SchottEtAl:AdversariallyRobustNNforMNIST:2018} for k-NN models.} This effect is especially visible for the Deepfool, BIM and Boundary attacks. In addition, the Pointwise attack has to generate features from other digits to fool the models, e.\,g. the horizontal bar of a two in the case of GLVQ and the closed ring of a nine for GTLVQ (see digit four). In other words, for GLVQ and GTLVQ some of the attacks generate adversaries that more closely resemble on-manifold samples than off-manifold ones. For the other models, the adversaries are more like off-manifold samples (or, in the case of Madry, off-training-manifold samples). \section{Conclusion} In this paper we extensively evaluated the robustness of LVQ models against adversarial attacks. Most notably, we have shown that there is a large difference in the robustness of the different LVQ models, even though they all perform a hypothesis margin maximization.
GLVQ and GTLVQ show high robustness against adversarial attacks, while GMLVQ scores the lowest across all attacks and all models. The discussion related to this observation has led to four important \textbf{conclusions}: \textbf{(1)}~For (hypothesis) margin maximization to lead to robust models, the space in which the margin is maximized matters: it must be the same space in which the attack is placed. \textbf{(2)}~Collapsed dimensions are beneficial for the generalization ability of a model. However, they can be harmful to the model's robustness. \textbf{(3)}~It is possible to derive a robust model by applying a fitting regularization\,/\,constraint. This can be seen in the relation between GTLVQ and GMLVQ and is also studied for NNs \cite{croce2018provable}. \textbf{(4)}~Our experimental results with an increased number of prototypes support the claim of \cite{stutz2018disentangling} that the ability to generalize and robustness are in principle not contradictory goals. In summary, the overall robustness of LVQ models is impressive. Using only one prototype per class and no purposefully designed adversarial training, GTLVQ is on par with SOTA robustness on MNIST. With further research, the robustness of LVQ models against adversarial attacks can be a valid reason to deploy them instead of NNs in security-critical applications. \bibliographystyle{unsrt} \input{WSOM2019.bbl} \end{document}
\section{Introduction} This short document presents a contrived example of how to format the authors' information for {\it IJCAI--19 Proceedings}. \section{Author names} Each author name must be followed by: \begin{itemize} \item A newline {\tt \textbackslash{}\textbackslash{}} command for the last author. \item An {\tt \textbackslash{}And} command for the second to last author. \item An {\tt \textbackslash{}and} command for the other authors. \end{itemize} \section{Affiliations} After all authors, start the affiliations section by using the {\tt \textbackslash{}affiliations} command. Each affiliation must be terminated by a newline {\tt \textbackslash{}\textbackslash{}} command. Make sure that you include the newline on the last affiliation too. \section{Mapping authors to affiliations} In some scenarios, the affiliation of each author is clear without any further indication (\emph{e.g.}, all authors share the same affiliation, or all authors have a single and different affiliation). In these situations you don't need to do anything special. In more complex scenarios you will have to clearly indicate the affiliation(s) for each author. This is done by using numeric math superscripts {\tt \$\{\^{}$i,j, \ldots$\}\$}. You must use numbers, not symbols, because those are reserved for footnotes in this section (should you need them). Check the authors' definition in this example for reference. \section{Emails} This section is optional, and can be omitted entirely if you prefer. If you want to include e-mails, you should either include all authors' e-mails or just the contact author(s)' ones. Start the e-mails section with the {\tt \textbackslash{}emails} command. After that, write all emails you want to include separated by a comma and a space, following the same order used for the authors (\emph{i.e.}, the first e-mail should correspond to the first author, the second e-mail to the second author and so on).
You may ``contract'' consecutive e-mails on the same domain as shown in this example (write the users' part within curly brackets, followed by the domain name). Only e-mails of the exact same domain may be contracted. For instance, contracting ``[email protected]'' and ``[email protected]'' is not allowed because the domains are different. \end{document} \section{Introduction} In recent years, neural networks have shown outstanding performance in several tasks, at the expense of a high number of model parameters or weights \cite{Simonyan2014}, \cite{He2016}, \cite{Szegedy2017}. As the initial main goal was to maximize performance, such as classification accuracy, the model size was not considered to be an issue, especially given the significant computational and memory resources of typical servers used for training and inference. However, alternative platforms such as mobile devices and edge-based embedded devices come with limited computational and memory capabilities, thus imposing strict limitations on the size of neural networks. In order to reduce either the storage requirements of neural networks, or their inference complexity, or both, several algorithms have addressed the reduction of parameters, while retaining most of the original model's performance. This is possible as neural networks seem to contain a significant amount of redundancy in their weights. One popular approach aims at pruning weights and convolution filters which are considered least important based on their contribution, with a subsequent fine-tuning phase which adapts the network to the architectural changes introduced by the pruning operation. In \cite{Lecun1989}, the contribution of each weight is determined as its effect on the training error when setting its value to zero. Quantization approaches focus on reducing the precision or bit-depth of weights, as in \cite{Hubara2016}.
For example, the authors in \cite{Vanhoucke2019} showed that quantizing weights to 8 bits results in significant speed-up while incurring minimal accuracy losses. The work in \cite{Han2016} combines different approaches, namely pruning, quantization and Huffman coding. Regarding pruning, the contribution of a weight is determined by its absolute numerical value, thus weights with low absolute values with respect to a threshold are pruned. In addition, the authors quantize the non-pruned weights by clustering them into a set of clusters using k-means (256 clusters for convolutions and 32 clusters for fully-connected layers). Yet another approach is the low-rank factorization of weight matrices, where a matrix is factorized or decomposed into two lower-rank matrices. In \cite{Chen2018}, the authors note that a small rank can limit the expressiveness of the model and thus propose to learn an input-dependent factorization. Here, we propose to use a loss that was originally proposed in \cite{Hoyer} to achieve sparsity in a signal. This allows training a neural network to have as many zero-valued weights as possible. Moreover, in this paper we show that at the same time the loss also approximates a quantization operation which maps the values of non-zero weights to a small discrete set of possible values. The paper is organized as follows. Section \ref{related_work} surveys previous works which are most closely related to the method proposed in this paper. Section \ref{proposed_method} describes the proposed method in detail. In particular, it introduces the compressibility loss, its properties, and the post-training quantization strategy. Section \ref{results} describes our experiments and results, provides in-depth analyses of the effects of sparsity and quantization, and reports on the application of curriculum learning to the compressibility loss. Section \ref{conclusions} discusses the obtained results and future directions.
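As a point of reference for the quantization approaches mentioned above, uniform affine quantization to 8 bits can be sketched as follows; this is only an illustration of the general idea, not the scheme of any specific cited work.

```python
def quantize_uniform(weights, bits=8):
    """Uniform affine quantization of floats to integer levels in [0, 2**bits - 1]."""
    lo, hi = min(weights), max(weights)
    levels = (1 << bits) - 1
    scale = (hi - lo) / levels or 1.0     # guard against a constant weight list
    q = [round((w - lo) / scale) for w in weights]
    dq = [lo + qi * scale for qi in q]    # de-quantized approximation
    return q, dq

q, dq = quantize_uniform([-1.0, -0.25, 0.0, 0.5, 1.0])
# Per-weight reconstruction error is bounded by scale / 2.
```

Clustering-based (non-uniform) schemes such as the k-means approach of \cite{Han2016} replace the evenly spaced levels with learned centroids.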
\section{Related Work} \label{related_work} There have been several previous works on sparsifying weights in neural networks by using a sparsity constraint during training. In \cite{Lebedev2016}, the authors use group-sparsity regularization on convolutional filters, which has the effect of shrinking some groups to zero. In particular, they use an $l_{2,1}$-norm regularizer. In \cite{Zhou2016}, the authors state that in the FFT domain (used to accelerate matrix multiplications on GPUs) sparse matrices are no longer sparse; thus, they propose a method for pruning neurons instead of individual weights. To this end, they impose a sparsity constraint during training directly on the neurons, for which two options are experimented with: a tensor low-rank constraint and a group-sparsity constraint (here, too, an $l_{2,1}$-norm regularizer). In \cite{Gomez2018}, the authors propose a targeted dropout where the weights of the neural network are first sorted in descending order according to their absolute values, and the bottom $\gamma$ portion of weights is randomly (with probability $\alpha$) and independently disabled/dropped during training. The work aims only at sparsity and does not consider the compressibility of the remaining weights. In \cite{Leclerc2018}, the authors propose a method to train a neural network with an $L_1$ loss on a switch variable which takes a value between 0 and 1 and multiplies the weight value. During training, weights that are very small are actually pruned, and the network adaptively changes its architecture. In \cite{Choi2018}, a universal neural network compression method was proposed that applies uniform random dithering and lattice quantization to neural network weights, followed by a fine-tuning process.
\section{Proposed Method} \label{proposed_method} \subsection{Compressibility Loss} The neural network compression method is based on the following compressibility loss, which was partially introduced in \cite{Hoyer}: \begin{equation} L(\bm{x})=\frac{||\bm{x}||_1}{||\bm{x}||_2} \label{loss} \end{equation} where $||\bm{x}||_1$ and $||\bm{x}||_2$ stand for the $L_1$ and $L_2$ norms of $\bm{x}$, respectively. Originally, this loss was proposed as a negated measure of sparsity. Next, we show via the following theorem that, besides sparsity, the loss also helps achieve a very low entropy for the non-zero part of the signal. \begin{theorem} Let $\bm{x} \in \mathbb{R}^d$ be any non-zero vector, then at any critical point of $L(\bm{x})$, $\bm{x}$ is a ternary signal which satisfies $x_i \in \{ -c,0,c \}$ where $c=\frac{(||\bm{x}||_2)^2}{||\bm{x}||_1}$. \label{main} \end{theorem} \begin{proof} Let us begin by taking the derivative of $L(\bm{x})$ with respect to $\bm{x}$ as formulated in Eq. \ref{deriv}. \begin{equation} \frac{\partial L(\bm{x})}{\partial \bm{x}} = \frac{sign(\bm{x})}{||\bm{x}||_2}-\frac{\bm{x}||\bm{x}||_1}{(||\bm{x}||_2)^3} \label{deriv} \end{equation} Note that in Eq. \ref{deriv}, the $sign(\cdot)$ function applies element-wise. Then, we equate the derivative to zero to find the critical points, which gives the equality in Eq. \ref{derivzero}. \begin{equation} \bm{x} = \frac{sign(\bm{x}) (||\bm{x}||_2)^2}{||\bm{x}||_1} \label{derivzero} \end{equation} We analyse Eq. \ref{derivzero} by considering the following three cases. \begin{itemize} \item $x_i>0$: In this case Eq. \ref{derivzero} reduces to: \begin{equation} x_i=\frac{(||\bm{x}||_2)^2}{||\bm{x}||_1} \label{case1} \end{equation} \item $x_i<0$: In this case Eq. \ref{derivzero} reduces to: \begin{equation} x_i=-\frac{(||\bm{x}||_2)^2}{||\bm{x}||_1} \label{case2} \end{equation} \item $x_i=0$: In this case Eq.
\ref{derivzero} reduces to: \begin{equation} x_i=0 \label{case3} \end{equation} \end{itemize} Equations \ref{case1}, \ref{case2} and \ref{case3} together prove Theorem \ref{main}. \end{proof} \begin{corollary} The objective $L(\bm{x})$ at a critical point takes the value of $\sqrt{n}$ where $n$ is the number of nonzero elements in $\bm{x}$. \label{maincorol} \end{corollary} \begin{proof} According to Theorem \ref{main}, at a critical point, $x_i \in \{ -c,0,c \}$ where $c=\frac{(||\bm{x}||_2)^2}{||\bm{x}||_1}$ holds; therefore, the following also holds. \begin{equation} ||\bm{x}||_1=n\frac{(||\bm{x}||_2)^2}{||\bm{x}||_1} \label{corolproof1} \end{equation} Then, Eq. \ref{corolproof1} can be re-arranged as follows. \begin{equation} \sqrt{n}=\frac{||\bm{x}||_1}{||\bm{x}||_2} \label{corolproof2} \end{equation} Eq. \ref{corolproof2} proves Corollary \ref{maincorol}. \end{proof} In order to make use of the loss in Eq. \ref{loss} and the result of Theorem \ref{main}, one has to prove that the critical points are in fact local minima, as stated in Theorem \ref{local}. \begin{theorem} All critical points $\bm{x}$ of $L(\bm{x})$ are local minima along the directions whose direction vector is non-zero in at least one coordinate $i$ which satisfies $x_i=0$. \label{local} \end{theorem} The proof of Theorem \ref{local} is provided in Appendix \ref{proofsec}. \subsection{Properties of Compressibility Loss} The compressibility loss differs from the commonly used $L_2$ and $L_1$ regularizers, which shrink the weights and thereby help to achieve sparsification. By merely shrinking the weights, these regularizers do not make the nonzero weights more compressible. The compressibility loss not only sparsifies the weights, but also ensures that the non-zero elements are highly compressible by enforcing the signal to be ternary at critical points. In addition, as can be seen from Theorem \ref{main}, at a critical point, the non-zero part of the weight vector ($\pm{c}$) can take any value.
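The ternary structure predicted by Theorem \ref{main} and Corollary \ref{maincorol} is easy to verify numerically. The sketch below implements the loss of Eq. \ref{loss} and its gradient, Eq. \ref{deriv}, in plain Python and evaluates them on a ternary toy vector with $n=4$ nonzeros.

```python
import math

def compressibility_loss(x):
    """L(x) = ||x||_1 / ||x||_2  (Eq. 1)."""
    l1 = sum(abs(v) for v in x)
    l2 = math.sqrt(sum(v * v for v in x))
    return l1 / l2

def compressibility_grad(x):
    """Gradient of L(x) (Eq. 2): sign(x)/||x||_2 - x ||x||_1 / ||x||_2^3."""
    l1 = sum(abs(v) for v in x)
    l2 = math.sqrt(sum(v * v for v in x))
    sign = lambda v: (v > 0) - (v < 0)
    return [sign(v) / l2 - v * l1 / l2 ** 3 for v in x]

# A ternary vector with n = 4 nonzeros: the gradient vanishes and
# L(x) = sqrt(4) = 2, as predicted.
x = [0.7, 0.0, -0.7, 0.7, 0.0, 0.7]
```

On this vector the gradient is zero and the loss equals $\sqrt{4}=2$; any non-ternary vector yields a non-vanishing gradient.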
Moreover, compressibility loss is a better indicator of sparsity. At the same level of sparsity, independent of the values of nonzeros, the compressibility loss gives the same value at critical points. On the contrary, for $L_2$ and $L_1$ losses this is not ensured. \subsection{Compression Method} \label{compmet} We propose a neural network compression method which consists of retraining a given network architecture using the compressibility loss. A straightforward way to train a network to be compressible would be to apply the compressibility loss separately to each weight layer $\bm{w}^{(i)}, i \in [1,l]$ in the neural network. The compressibility loss of this method can be formulated as in Eq. \ref{alt1}. \begin{equation} L_{c_1}= \sum_{i=1}^l \lambda_i L(\bm{w}^{(i)}) \label{alt1} \end{equation} \begin{table*} \centering \renewcommand{\arraystretch}{1.2} \begin{tabular}{|P{1.3cm}|P{0.6cm}|P{0.8cm}|P{0.6cm}|P{0.8cm}|P{0.6cm}|P{0.8cm}|P{0.6cm}|P{0.8cm}|P{0.6cm}|P{0.8cm}|P{0.6cm}|P{0.8cm}|P{0.6cm}|P{0.8cm}|P{0.6cm}|P{0.8cm}|P{0.6cm}|P{0.8cm}} \hline \multirow{2}{1.6cm}{\textbf{Sparsity}} & \multicolumn{2}{c|}{\textbf{Original}} & \multicolumn{2}{c|}{\textbf{CNET$_{\text{0.005}}$}} & \multicolumn{2}{c|}{\textbf{CNET$_{\text{0.01}}$}} & \multicolumn{2}{c|}{\textbf{CNET$_{\text{0.02}}$}} & \multicolumn{2}{c|}{\textbf{CNET$_{\text{0.03}}$}} & \multicolumn{2}{c|}{\textbf{CNET$_{\text{0.045}}$}}\\ \cline{2-13} & \textbf{Acc.} & \textbf{Comp.} & \textbf{Acc.} & \textbf{Comp.} & \textbf{Acc.} & \textbf{Comp.} & \textbf{Acc.} & \textbf{Comp.} & \textbf{Acc.} & \textbf{Comp.} & \textbf{Acc.} & \textbf{Comp.}\\ \hline 0$\%$ & 91.51 & 1 & 91.36 & 1 & 90.97 & 1 & 90.30 & 1 & 89.46 & 1 & 88.41 & 1 \\ \hline 30$\%$ & 88.96 & 1.42 & 91.36 & 1.36 & 90.97 & 1.36 & 90.30 & 1.35 & 89.46 & 1.36 & 88.41 & 1.37 \\ \hline 40$\%$ & 83.72 & 1.66 & 91.36 & 1.61 & 90.97 & 1.56 & 90.30 & 1.57 & 89.46 & 1.58 & 88.41 & 1.59 \\ \hline 50$\%$ & 61.76 & 1.98 & 91.26 & 1.94 & 90.97 & 1.88 & 90.30 & 
1.86 & 89.46 & 1.88 & 88.41 & 1.88 \\ \hline 60$\%$ & 30.68 & 2.45 & 91.02 & 2.43 & 90.98 & 2.41 & 90.30 & 2.31 & 89.46 & 2.33 & 88.41 & 2.32 \\ \hline 70$\%$ & 10.04 & 3.23 & 89.37 & 3.24 & 90.71 & 3.22 & 90.30 & 3.16 & 89.46 & 3.10 & 88.41 & 3.06 \\ \hline 80$\%$ & 10.00 & 4.88 & 67.18 & 4.84 & 86.83 & 4.73 & 90.02 & 4.75 & 89.46 & 4.77 & 88.41 & 4.61 \\ \hline 90$\%$ & 10.00 & 9.33 & 12.29 & 9.18 & 33.10 & 9.24 & 82.68 & 9.30 & 84.85 & 9.32 & 88.09 & 9.31 \\ \hline \end{tabular} \caption{Sparsity Effect: Accuracy (Acc.) and Mask Compression Ratio (Comp.) comparison to baseline at different sparsity levels.} \label{basecomp1} \end{table*} This method would require manually specifying the $\lambda_i$ for the expected compressibility at each layer of the neural network. A simple implementation would be to achieve the same level of compressibility in each layer by setting all $\lambda_i$ to the same value. However, this would assume that each layer has the same amount of redundancy, which is not necessarily true. Another drawback of this method is that non-zero elements at critical points may take different values in different layers, which is not desirable from a compressibility point of view. For example, let us assume an ideal case where two weight vectors in two layers ended up at a critical point of the compressibility loss, i.e. $w_i^{(1)} \in \{-c_0,0,c_0\}$ and $w_i^{(2)} \in \{-c_1,0,c_1\}$. Although both vectors are ternary, the concatenation of these vectors is not necessarily ternary, i.e. $c_0=c_1$ does not necessarily hold. This limits the compressibility to a layer-wise level. Because of the limitations of the above method, we propose to apply the compressibility loss to a single vector $\bm{w_{net}}$ obtained by flattening and concatenating every weight tensor in the neural network. The compressibility loss of this method can be formulated as in Eq. \ref{alt2}.
\begin{equation} L_{c_2}= \lambda L(\bm{w_{net}}) \label{alt2} \end{equation} Applying the loss to a single concatenated weight vector ensures compressibility at the neural-network level, as opposed to the layer-wise level of the first method. For example, in the ideal case of ending up at a local minimum of the compressibility loss, one can guarantee that any weight in the neural network would satisfy $w_{net_i} \in \{-c,0,c\}$, hence the entire neural network's weight vector is ternary. Moreover, when using the compressibility loss in combination with a task-specific loss (such as categorical cross-entropy loss for image classification), the sparsity level at each layer would be adjusted automatically according to task performance. That is, the sparsity level of each layer would be proportional to the redundancy of that layer with respect to the final task, thus avoiding assumptions on the redundancy of each layer. Based on these advantages, throughout the paper we only use the compressibility loss applied to the entire network as in Eq. \ref{alt2}. \subsubsection{Post-Training Pruning and Non-uniform Quantization} After training, we prune the neural network weights by simply setting the weights whose magnitudes are smaller than a threshold to zero. After pruning, we perform a non-uniform quantization based on $k$-means clustering. This is applied to the non-pruned weights of the entire neural network. The number of clusters $k$ is selected based on the desired quantization level. \subsubsection{Coding and Compression} Instead of including the pruned weights (zero values) in the encoded weight vector, we create a separate binary mask which indicates zeros and non-zeros in the weight vector. The mask is flattened, bit-packed and compressed using the NumPy npz algorithm \cite{npz2019}. Then, the $k$-means labels of the weight elements are stored in another tensor and npz-compressed. Finally, the cluster centroids for $k$-means are stored and npz-compressed.
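The pruning, quantization and coding pipeline described above can be sketched as follows. The function names and the minimal 1-D $k$-means routine are illustrative choices of ours, not the paper's actual implementation:

```python
import io
import numpy as np

def kmeans_1d(x, k, iters=20, seed=0):
    """Minimal 1-D k-means, standing in for a library clusterer."""
    rng = np.random.default_rng(seed)
    centroids = rng.choice(x, size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            members = x[labels == j]
            if members.size:
                centroids[j] = members.mean()
    # Final assignment against the converged centroids.
    labels = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)
    return labels, centroids

def compressed_size(w_net, threshold, k=256):
    """Prune below `threshold`, quantize the survivors with k-means, and
    store the bit-packed mask, 8-bit labels and centroids via npz."""
    mask = np.abs(w_net) > threshold          # 1 = kept, 0 = pruned
    labels, centroids = kmeans_1d(w_net[mask], k)
    buf = io.BytesIO()
    np.savez_compressed(buf, mask=np.packbits(mask),
                        labels=labels.astype(np.uint8), centroids=centroids)
    return buf.getbuffer().nbytes

def baseline_size(w_net):
    """npz-compressed size of the original, uncompressed weight vector."""
    buf = io.BytesIO()
    np.savez_compressed(buf, w=w_net)
    return buf.getbuffer().nbytes
```

The reported compression ratio is then simply the ratio of these two file sizes.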
The compression ratio is calculated as the ratio of the total file size of the above files over the file size of the npz-compressed original neural network weights. \section{Experimental Results} \label{results} We conduct experiments on the image classification task using the ResNet32 architecture. The baseline method consists of training on the CIFAR-10 dataset using categorical cross-entropy loss. Our method adds the compressibility loss (given in Eq. \ref{alt2}) during training. We conduct trainings with different $\lambda$ settings for the compressibility loss and report the accuracy and compression levels at different sparsity levels. A sparsity level is simply achieved by pruning the weights by the threshold that achieves the desired sparsity level. We refer to our method as compressible network (CNET) and indicate the $\lambda$ setting as a subscript, i.e. $CNET_{\lambda}$. \subsection{Sparsity Effect} In order to check our method's effect on the resulting weight sparsity, we first report the accuracies and compression ratios without the post-training non-uniform quantization method described in Section \ref{compmet}. Therefore the reported accuracies in Table \ref{basecomp1} are not affected by quantization, but only by pruning. Moreover, the reported compression ratio only evaluates the effect due to mask-coding of zeros. In this experiment, non-zero values are not quantized at all but only npz compressed. As observed from Table \ref{basecomp1}, as we increase $\lambda$, the accuracy without pruning decreases gradually, yet the accuracy becomes more robust to pruning. For example, with $\lambda=0.005$, the accuracy without pruning is almost the same as that of the original network; however, after sparsity level $60\%$, the accuracy drops steadily. With $\lambda=0.045$, the accuracy without pruning is $3\%$ lower than that of the original network; however, the network is robust to pruning and there is almost no accuracy drop until sparsity level $90\%$.
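Pruning by "the threshold that achieves the desired sparsity level" amounts to taking a magnitude quantile of the weight vector; a small sketch (the function name is ours):

```python
import numpy as np

def prune_to_sparsity(w_net, sparsity):
    """Zero the smallest-magnitude weights so that a fraction `sparsity`
    of the weight vector becomes zero."""
    # The magnitude quantile at `sparsity` is the pruning threshold.
    threshold = np.quantile(np.abs(w_net), sparsity)
    pruned = np.where(np.abs(w_net) > threshold, w_net, 0.0)
    return pruned, threshold
```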
The compression ratios are similar for all models at the same sparsity levels. This is expected since the only compression is due to mask-coding the zeros, and non-zeros are not compressed. The only difference in compression ratios may result from the very small deviation in sparsity levels. \subsection{Quantization Effect} The second experiment highlights the compressibility of the non-zero weights in our method. Compression of non-zeros is achieved by the method described in Section \ref{compmet}. At sparsity level $0\%$ we do not perform compression and report the accuracy directly. At other sparsity levels, we always apply the compression method described in Section \ref{compmet}. \begin{table*} \centering \renewcommand{\arraystretch}{1.2} \begin{tabular}{|P{1.3cm}|P{0.6cm}|P{0.8cm}|P{0.6cm}|P{0.8cm}|P{0.6cm}|P{0.8cm}|P{0.6cm}|P{0.8cm}|P{0.6cm}|P{0.8cm}|P{0.6cm}|P{0.8cm}|P{0.6cm}|P{0.8cm}|P{0.6cm}|P{0.8cm}|P{0.6cm}|P{0.8cm}} \hline \multirow{2}{1.6cm}{\textbf{Sparsity}} & \multicolumn{2}{c|}{\textbf{Original}} & \multicolumn{2}{c|}{\textbf{CNET$_{\text{0.005}}$}} & \multicolumn{2}{c|}{\textbf{CNET$_{\text{0.01}}$}} & \multicolumn{2}{c|}{\textbf{CNET$_{\text{0.02}}$}} & \multicolumn{2}{c|}{\textbf{CNET$_{\text{0.03}}$}} & \multicolumn{2}{c|}{\textbf{CNET$_{\text{0.045}}$}}\\ \cline{2-13} & \textbf{Acc.} & \textbf{Comp.} & \textbf{Acc.} & \textbf{Comp.} & \textbf{Acc.} & \textbf{Comp.} & \textbf{Acc.} & \textbf{Comp.} & \textbf{Acc.} & \textbf{Comp.} & \textbf{Acc.} & \textbf{Comp.}\\ \hline 0$\%$ & 91.51 & 1 & 91.36 & 1 & 90.97 & 1 & 90.30 & 1 & 89.46 & 1 & 88.41 & 1 \\ \hline 30$\%$ & 88.95 & 5.45 & 91.33 & 7.12 & 90.84 & 8.60 & 90.28 & 10.88 & 89.03 & 12.98 & 88.24 & 14.77 \\ \hline 40$\%$ & 83.75 & 6.14 & 91.31 & 8.29 & 90.67 & 8.93 & 90.33 & 11.02 & 89.27 & 13.04 & 88.43 & 14.81 \\ \hline 50$\%$ & 61.27 & 7.09 & 91.19 & 8.47 & 90.81 & 9.83 & 90.24 & 11.48 & 89.05 & 13.28 & 88.44 & 15.05 \\ \hline 60$\%$ & 29.42 & 8.48 & 90.98 & 9.81 & 90.69 & 11.95 & 90.15 & 12.65 & 89.29 & 14.11 & 88.25 & 15.79 \\ \hline 70$\%$ & 10.04 & 10.70 & 89.51 & 12.14 & 90.70 & 13.51 & 90.16 & 16.06 & 89.38 & 16.15 & 88.43 & 17.44 \\ \hline 80$\%$ & 10.00 & 14.72 & 71.77 & 16.65 & 86.82 & 17.92 & 90.09 & 19.37 & 89.21 & 23.40 & 88.23 & 22.14 \\ \hline 90$\%$ & 10.00 & 15.33 & 12.62 & 28.4 & 33.28 & 30.48 & 76.91 & 33.37 & 84.93 & 32.90 & 87.89 & 35.14 \\ \hline \end{tabular} \caption{Quantization Effect: Accuracy (Acc.) and Mask and Quantization Compression Ratio (Comp.) comparison to baseline at different sparsity levels.} \label{basecomp2} \end{table*} \begin{table*} \centering \begin{tabular}{ |c|c|c|c|c|c|c| } \hline \textbf{method} & \textbf{Original} & \textbf{CNET$_{\text{0.005}}$} & \textbf{CNET$_{\text{0.01}}$} & \textbf{CNET$_{\text{0.02}}$} & \textbf{CNET$_{\text{0.03}}$} & \textbf{CNET$_{\text{0.045}}$} \\ \hline \textbf{entropy} & 7.54 & 6.75 & 6.41 & 6.13 & 5.07 & 4.00 \\ \hline \hline \end{tabular} \caption{Entropy Experiments: Entropy at $70\%$ sparsity for different $\lambda$ settings.} \label{entropytable} \end{table*} For $k$-means clustering we always use 256 cluster centers to achieve 8-bit representations. Since our method is trained with the compressibility loss, the non-zero elements of the resulting $\bm{w_{net}}$ have low entropy; thus some clusters are more densely populated than others and samples are more compact around cluster centroids. Remember that in the ideal case of ending up in a local minimum of the compressibility loss, $\bm{w_{net}}$ would have been ternary, thus the non-zero part would have been binary. This would have resulted in two clusters whose elements are exactly equal to the corresponding centroids, and zero performance loss due to quantization. In practice we never reach this case; however, the low-entropy and compact-clusters effect is observed from the high gains in compression ratio compared to the original model. In particular, when the compressibility loss is weighted higher, i.e. 
when $\lambda$ is increased, at the same sparsity levels, the compression ratio shows a strong tendency to increase. For example, at $90\%$ sparsity, CNET$_{0.005}$ achieves a compression ratio of 28.4 and CNET$_{0.045}$ one of 35.14, whereas the compression ratio obtained for the original model is only 15.33. This results from the compressibility aspect of our method for the non-zero elements. Remember here that the quantization is only applied to non-zero elements and, at the same sparsity level, CNET$_{0.005}$ and CNET$_{0.045}$ have the same number of non-zero elements; thus the difference in compression ratio is directly due to the compressibility of the non-zero elements. In order to highlight the quantization effect even further, we also measure the entropy of the probability distribution function $P$ obtained by normalizing the vector of cluster populations. The entropy is simply measured via Eq. \ref{entropyeq}. It is clearly observed from Table \ref{entropytable} that as the weight on the loss is increased, the entropy decreases, which is a direct indicator of increased compressibility. \begin{equation} \label{entropyeq} h=-\sum_i P_i\log_2(P_i) \end{equation} \begin{table} \centering \begin{tabular}{ |c|c|c|c|c|c|c| } \hline \textbf{Cl. No.} & 256 & 128 & 64 & 32 & 16\\ \hline \textbf{Acc.} & 87.89 & 88.23 & 84.61 & 81.65 & 45.4 \\ \hline \textbf{Comp.} & 35.14 & 39.44 & 43.87 & 48.51 & 57.48 \\ \hline \hline \end{tabular} \caption{Cluster Number (Cl. No.) Experiments: accuracy (Acc.) and compression ratio (Comp.) for the method CNET$_{0.045}$ at $90\%$ sparsity} \label{cluster_exp} \end{table} Finally, we also perform experiments on the robustness of a particular model (CNET$_{0.045}$ at $90\%$ sparsity) to the number of $k$-means centroids. This experiment also highlights the quantization effect of our method and enables achieving higher compression ratios with lower quantization centroid numbers. 
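The entropy measure used above can be computed directly from the $k$-means labels; a minimal sketch with names of our choosing:

```python
import numpy as np

def cluster_entropy(labels):
    """Entropy (in bits) of the normalized cluster-population vector P."""
    counts = np.bincount(np.asarray(labels))
    p = counts[counts > 0] / counts.sum()   # drop empty clusters
    return float(-np.sum(p * np.log2(p)))
```

A uniform spread over 256 clusters gives 8 bits, while an ideal ternary solution, whose non-zero part occupies only two equally sized clusters, would give 1 bit.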
It can be observed from Table \ref{cluster_exp} that it is possible to achieve higher compression ratios without loss (the 128-cluster scenario); moreover, an accuracy only $10\%$ lower than that of the original model can be achieved at a compression ratio of around 50. \subsection{On the Effect of Mask Compression} We performed an experiment which does not apply mask compression but treats the zeros in the weights as a separate cluster with centroid 0. Then npz compression is applied to the labels and to the centroids. We performed the experiment at 50$\%$ sparsity for CNET$_{0.045}$ and achieved a compression ratio of 13.32, whereas with mask compression one can achieve a 15.05 compression ratio. This experiment justifies the use of mask compression and the compression method described in Section \ref{compmet}. \subsection{Gradually Increasing $\lambda$ During Training} In this experiment, we have gradually increased $\lambda$ during training. In particular, we have started with $\lambda=0$ and linearly increased it by $0.007$ at the end of each epoch. With this approach we could achieve very high compression ratios, as one can see in Table \ref{ramp}. \subsection{Comparison With the State-of-the-Art} We compare our method with three recent state-of-the-art methods, namely Targeted Dropout (TDrop) \cite{Gomez2018}, Smallify \cite{Leclerc2018} and Universal DNN compression (UDNN) \cite{Choi2018}. While TDrop and Smallify are designed to achieve a high level of sparsity, they do not consider the quantization aspect of the non-zero weights. UDNN is mostly focused on the quantization aspect. Therefore we compare our method to Smallify and TDrop based on sparsity, and to UDNN based on compression ratios. For TDrop, the $\alpha=0.66$, $\gamma=0.75$ setting was used, which achieved the best top accuracy according to the original paper. The Smallify results were copied from \cite{Gomez2018}. First, we report the performance of CNET$_{0.045}$, Smallify and TDrop at different sparsity levels. 
One can observe from Table \ref{soa1} that, with fixed parameter settings, CNET and Smallify perform better than TDrop at the highest sparsity level of $90\%$. The Smallify method seems to outperform ours at the sparsity levels reported in Table \ref{soa1}. However, our end goal is not to obtain a certain sparsity, but to achieve higher compression rates, where sparsity is only one aspect. Another important note here is that the baseline performance of TDrop is reported to be 94.30, whereas our and Smallify's baseline performance was around 91.50; therefore, one should account for the 3-point difference between the two baselines when comparing. \begin{table} \centering \begin{tabular}{ |c|c|c|c|c|c|c| } \hline \textbf{Sparsity} & 90$\%$ & 95$\%$ & 96$\%$ & 97$\%$ & 98$\%$\\ \hline \textbf{Acc.} & 86.44 & 85.79 & 86.19 & 85.12 & 76.57 \\ \hline \textbf{Comp.} & 46.85 & 57.41 & 65.48 & 78.46 & 102.76 \\ \hline \hline \end{tabular} \caption{CNET$_{\text{ramp}}$ experiments with gradually increasing $\lambda$ during training: accuracy (Acc.) and compression ratio (Comp.) at indicated sparsity levels.} \label{ramp} \end{table} \begin{table} \centering \begin{tabular}{ |c|c|c|c|c|c|c| } \hline \textbf{Sparsity} & \textbf{30\%} & \textbf{50\%} & \textbf{70\%} & \textbf{80\%} & \textbf{90\%}\\ \hline \textbf{TDrop} & 93.89 & 93.84 & 93.84 & 92.31 & 46.57 \\ \hline \textbf{Smallify} & 91.52 & 91.51 & 91.57 & 91.55 & 91.45 \\ \hline \textbf{CNET$_\text{0.045}$} & 88.24 & 88.44 & 88.43 & 88.23 & 87.89 \\ \hline \hline \end{tabular} \caption{TDrop, Smallify and CNET$_{0.045}$ comparison: accuracies at indicated sparsity levels.} \label{soa1} \end{table} The next comparison is at higher sparsity levels, where TDrop and CNET were run in the ramp setting, i.e., during training the parameters of both methods were gradually increased to slowly adapt to sparsification/compression. The results are shared in Table \ref{soa2}. 
\begin{table} \centering \begin{tabular}{ |c|c|c|c|c|c|c| } \hline \textbf{Sparsity} & \textbf{90\%} & \textbf{95\%} & \textbf{96\%} & \textbf{97\%} & \textbf{98\%}\\ \hline \textbf{TDrop$_{\text{ramp}}$} & 88.70 & 88.73 & 88.74 & 88.67 & 88.70 \\ \hline \textbf{Smallify} & 91.48 & 90.30 & 87.54 & 70.29 & 33.04 \\ \hline \textbf{CNET$_{\text{ramp}}$} & 86.44 & 85.79 & 86.19 & 85.12 & 76.57 \\ \hline \hline \end{tabular} \caption{TDrop$_{\text{ramp}}$, Smallify and CNET$_{\text{ramp}}$ comparison: accuracies at indicated sparsity levels.} \label{soa2} \end{table} One can observe from Table \ref{soa2} that TDrop and CNET with the ramp setting can achieve very robust performance at very high sparsity levels, whereas Smallify fails to do so. It is observed that at ramp settings TDrop is more robust than CNET at the extremely high sparsity level of $98\%$. We speculate that this might be due to the inherent fine-tuning in the TDrop method, where the weights are actually pruned during training, whereas in our method this is not the case. We aim to include improvements in this aspect in our future work. \begin{figure} \includegraphics[width=\linewidth]{UDNN.pdf} \caption{Comparison with UDNN} \label{fig:UDNNcomp} \end{figure} Since TDrop and Smallify do not report any experiments on compression ratio, we also compare our method (CNET$_{\text{ramp}}$) to UDNN, for which compression ratios were reported. The experimental results for UDNN were provided by the authors of the paper. We choose the variants of both methods that achieve the highest compression rates. The results are provided in Fig. \ref{fig:UDNNcomp}. One can observe that at the given range of compression ratios, our method outperforms UDNN in terms of accuracy by a large margin. 
\section{Conclusions} \label{conclusions} We have adopted a previously proposed negated sparsity measure and shown, both theoretically and experimentally, that when this measure is used as a compressibility loss, it results in more compressible neural network weights, both in terms of sparsity and in terms of the low entropy of the non-zero weights. We have also shown that our method is comparable or superior to state-of-the-art methods on neural network compression. \bibliographystyle{named}
\section{Introduction} VERITAS and other Imaging Air Cherenkov Telescopes (IACTs) record images of Cherenkov showers initiated by gamma rays and other types of cosmic rays. Those images are parameterized, typically assuming the images are elliptical. The parametrization of the shower is used to reconstruct the energy and sky position of the particle that initiated the air shower. However, a large number of other cosmic rays are also recorded, dominated by hadron-initiated showers, which are the major source of backgrounds for the experiment. Most of the cosmic-ray showers can be removed from the analysis by using shape-based discrimination from their parameterization, but a number still remain after these gamma-ray selection cuts. IACTs are designed to detect $\gamma$ rays, which point back to their point of origin. This assumption is used to subtract the remaining background. Typically, the remaining hadron showers are subtracted by selecting another region of the sky where no gamma-ray sources are expected and scaling this region to estimate the remaining background in the region of interest (ROI) for a gamma-ray source. The significance within the ROI is typically calculated with Eq. 17 of Li \& Ma 1983 \cite{1983ApJ...272..317L}: \begin{equation} S = \sqrt{2}\left(N_{ON}\ln{\frac{(1+\alpha)N_{ON}}{\alpha(N_{ON} +N_{OFF})}} + N_{OFF}\ln{\frac{(1+\alpha)N_{OFF}}{N_{ON} +N_{OFF}}}\right)^{1/2} \end{equation} where $N_{ON}$ is the number of counts in the ON region of interest, $N_{OFF}$ is the total number of counts in one or more background (or OFF) regions, and $\alpha$ is a scaling factor between events in the ON region and the OFF regions. 
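As an illustration, Eq. 17 can be implemented directly; this sketch (with naming of our own) adds a sign convention so that deficits in the ON region yield negative significances, as is common for sky maps:

```python
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Li & Ma (1983) Eq. 17 significance for ON/OFF counting."""
    n_on, n_off = float(n_on), float(n_off)
    term_on = n_on * np.log((1.0 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1.0 + alpha) * n_off / (n_on + n_off))
    # Guard against tiny negative values from floating-point rounding.
    s = np.sqrt(max(2.0 * (term_on + term_off), 0.0))
    return np.sign(n_on - alpha * n_off) * s
```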
The generalized equation for calculating $\alpha$ is: \begin{equation} \alpha = \frac{\int Acc_{ON}(x,y,t,E)dx dy dt dE}{\int Acc_{OFF}(x,y,t,E)dx dy dt dE} \end{equation} where $Acc$ is the acceptance function, defined as the probability, after triggering and all offline cuts, of an event being reconstructed with a certain position and energy \cite{2007A&A...466.1219B}. The indices $ON$ and $OFF$ refer to the acceptances sampled in the ON and OFF region(s), respectively. This work focuses on the Ring Background Method (RBM) \cite{2007A&A...466.1219B}, where the background is defined as a single annulus around the ROI, but this work should be applicable to all background estimation methods available for IACTs in the literature. For the RBM, the acceptance is typically estimated using all data not in the ROI or in any background exclusion region. Regions of the field of view (FOV) with gamma-ray sources and bright stars are excluded from background and acceptance estimates. For more details on how the acceptance is generated for the RBM, see Section 4. In the analysis of VERITAS and other IACT data, one expects a distribution of statistical significances of sky map bins with a mean of zero and a width of unity in the absence of a VHE signal. However, it is not uncommon to see significance distributions of width greater than unity, indicating that the background is poorly estimated and that the significances in the ROI are incorrect for sufficiently wide distributions. As a result, the criterion for a $\gamma$-ray detection (typically 5$\sigma$ above the background) can no longer be relied upon, as the chances of a false positive are increased. This work explores the reasons for wider significance distributions and solutions to this issue in simulations and in VERITAS data. This work is organized as follows: the next section describes and shows a simulation that was used to recreate the problem and explore different aspects of it. 
The following section describes a procedure to correct for zenith gradients in the camera, followed by a description of a `modified' Li\&Ma equation which takes the systematic error of the background into account in the significance calculation. Finally, we will show the results of validation of these techniques on VERITAS data. \section{Simulations} Significance distributions calculated with Eq. 17 of Li\&Ma were simulated to see if the observed wider significance distributions could be recreated. These were generated with the assumption that there was no significant signal in the field of view and therefore the mean rate measured in the ROI is the same as in the background region(s) when scaled by $\alpha$. The wider significance distributions were recreated when adding a systematic error in $\alpha$. The algorithm for the simulations is as follows: \begin{enumerate} \item Generate $10^{4}$ random values for $\alpha$. In this case, we assume that $\alpha$ is Gaussian distributed with a mean of 0.1 and a width of 0.001. \item Increase $N_{ON}$ and $N_{OFF}$ each by a random amount $10^{4}$ times (once for each $\alpha$ value generated) for a time step $\Delta t$ according to: \begin{equation} N_{ON,i+1} = N_{ON,i} + Poisson(R_{bg}\Delta t) \end{equation} \begin{equation} N_{OFF,i+1} = N_{OFF,i} + Poisson(R_{bg}\Delta t/\alpha) \end{equation} where $N_{ON,0} = N_{OFF,0} = 0$ and $Poisson(x)$ is a randomly generated integer from a Poisson distribution with mean of $x$. \item Calculate the significance for each of the $10^{4}$ sets of $N_{ON}$, $N_{OFF}$, using the mean $\alpha$ value of 0.1 in Equation 2.1. \item Fit the distribution of the significances generated in the previous steps to a Gaussian. That width is recorded in Figure 1. \item Repeat steps 2 through 4 for any number of time steps. The results shown in Figure 1 used a $\Delta t$ of 1 hour and a maximum time step of 100 hours. 
\end{enumerate} \begin{figure} \centering \includegraphics[scale=0.37]{LiMaSim_bg2p0_timesteps_sigwidth.pdf} \includegraphics[scale=0.37]{LiMaSim_bg2p0_sigDist_tmax100hrs_wlabel.pdf} \caption{Example simulation results for a background rate of 2.0 cosmic rays per minute. Left: Width of the significance distribution as a function of exposure time for a `perfect' $\alpha$ and an $\alpha$ smeared by a Gaussian distribution. Right: Final significance distributions after an exposure of 100 hours. } \end{figure} If $\alpha$ is `perfect', i.e., the same $\alpha$ value is used in Equation 2.1 as in Equation 2.4, then the width will \textit{never} deviate far from 1.0, regardless of the number of time steps. However, if we `smear' $\alpha$ by adding (or subtracting) a random small amount, $\delta\alpha$, generated from a Gaussian distribution, then the significance distribution width is $\sim$1.0 at small time steps, and eventually gets wider as time elapses. Figure 1 shows the results of the MC study with significance width as a function of exposure time. Based on the simulation study, a $\delta\alpha$ value randomly generated from a Gaussian with a width $\sim0.001$, assuming a fixed $\alpha$ value of 0.1, reproduces the width of the significance distribution seen in the standard VERITAS analysis at similar background rates. This points to some sort of systematic error in $\alpha$ that is not accounted for. It is possible to tighten the $\gamma$-ray selection cuts to narrow the distribution, as this keeps the fluctuations and the uncertainties in $\alpha$ small. However, this option often raises the energy threshold and reduces the total statistics in the sample, so it is not the best option for many analyses, particularly those requiring a low energy threshold. Since $\alpha$ is derived from the acceptance functions, it is reasonable to look at factors that could affect IACT acceptance functions. 
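The simulation above can be reproduced with a short script; this sketch (function names are ours) draws the total counts of each trial in a single step, which is statistically equivalent to accumulating hourly Poisson increments:

```python
import numpy as np

def significance_width(rate_bg, alpha, d_alpha, hours, n_trials=2000, seed=0):
    """Width of the Li&Ma significance distribution when each trial's true
    alpha is smeared by a Gaussian while the mean alpha is used in Eq. 2.1."""
    rng = np.random.default_rng(seed)
    true_alpha = rng.normal(alpha, d_alpha, n_trials)
    t = hours * 60.0                                  # exposure in minutes
    n_on = rng.poisson(rate_bg * t, n_trials).astype(float)
    n_off = rng.poisson(rate_bg * t / true_alpha, n_trials).astype(float)
    # Li & Ma Eq. 17, evaluated with the (mis-stated) mean alpha.
    arg = 2.0 * (n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
                 + n_off * np.log((1 + alpha) * n_off / (n_on + n_off)))
    s = np.sign(n_on - alpha * n_off) * np.sqrt(np.maximum(arg, 0.0))
    return float(np.std(s))
```

With a background rate of 2 events per minute and $\delta\alpha = 0.001$, the width stays near unity at short exposures and grows well above it by 100 hours, as in Figure 1.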
\section{Zenith Acceptance Correction} As discussed in the previous section, issues related to the acceptance function are assumed to be responsible for the width of the significance distributions seen in VERITAS data. The standard VERITAS analysis packages make the assumption that the acceptance function is solely a function of $\rho$, the angular distance from the tracking position of the array to the reconstructed event position. A zenith angle gradient was found in the data, which distorts the radial dependence of the acceptance function. Correcting the acceptance for the zenith gradient showed an improvement in the significance distribution width, bringing it closer to the expected distribution. This will be shown in the validation section. In order to qualitatively measure the acceptance gradient in the sky map, we define a new parameter called \textit{flatness}, which is defined as the ratio of the number of events in a given sky map bin to the acceptance in that bin: \begin{equation} f_{i} = \frac{N_{i}/Acc_{i}}{\langle N/Acc \rangle}\sim Constant \end{equation} where $f_{i}$ is the flatness, $N_{i}$ is the number of counts in a sky map bin, and the index $i$ denotes a particular sky map bin. Note that the ratio of counts to acceptance is divided by the mean of all bins so that flatness is always centered on 1.0. If the acceptance function accurately describes the background, then flatness should be approximately constant over the entire FOV for every sky map bin not in an \textit{a priori} defined background exclusion region. An example of a flatness map is shown in Figure 2 (center). The next step is to produce sky maps related to zenith angle. For scaling purposes, the difference between the zenith angle of the reconstructed event position and the zenith angle of the telescope tracking position is used (Figure 2, left). 
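The flatness map defined above can be computed per sky-map bin as follows; the function name and the NaN handling of excluded bins are choices of ours for illustration:

```python
import numpy as np

def flatness_map(counts, acceptance, excluded):
    """Counts/acceptance per sky-map bin, normalized by the FOV-wide mean.
    Bins in exclusion regions (excluded == True) are returned as NaN."""
    ratio = np.where(excluded | (acceptance <= 0), np.nan,
                     counts / np.maximum(acceptance, 1e-12))
    return ratio / np.nanmean(ratio)
```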
The contents of each bin in the zenith map and the flatness map are plotted against each other, and the Pearson correlation coefficient is used to determine the strength of any camera gradients. This part of the procedure was tested on Segue 1 for zenith angle, azimuth angle and night-sky background (NSB); only zenith angle showed any strong correlation. After the gradient was quantified for zenith angle, we then developed a procedure to correct for it in the data. It should be noted that both of the other major IACTs currently in operation, MAGIC \cite{Mag_Bg} and HESS \cite{HESS_Bg}, use acceptance functions that are dependent on zenith angle, so taking the zenith angle into account in the acceptance is in itself not a novel idea. The method used here makes use of the flatness/zenith relationship (Figure 2, right). The scatter plot of $\Delta Zn$ and flatness described in the previous section is binned into a histogram. The histogram is normalized by the value at $\Delta Zn = 0$ to better quantify the size of the effect across the camera and is then fit to a polynomial. A fourth-degree polynomial is used to remove possible second-order effects, but a linear fit is most likely also acceptable. The polynomial fit is then used to re-weight the acceptance function: \begin{equation} Acc'_{i} = Acc_{i}(1 + p_{1}/p_{0}\Delta Zn + p_{2}/p_{0}\Delta Zn^{2} + p_{3}/p_{0}\Delta Zn^{3} + p_{4}/p_{0}\Delta Zn^{4}) \end{equation} where $Acc_{i}$ is the acceptance in the $i$th RA/Dec bin, $p_{n}$ is the $n$th coefficient from the flatness fit, and $\Delta Zn$ is the average reconstructed zenith angle subtracted from the average telescope tracking zenith angle. Once the zenith-corrected acceptance function $Acc'_{i}$ is obtained and the correction is applied to each bin of the acceptance map, it is used in Equation 1.2 to recalculate $\alpha$. The new $\alpha$ values are then used to recalculate the significance from Equation 1.1. 
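The re-weighting described above can be sketched as follows; the function name is ours, and NumPy's polynomial fit stands in for the actual fitting code. Dividing the fitted polynomial by its value at $\Delta Zn = 0$ reproduces the $p_{n}/p_{0}$ normalization:

```python
import numpy as np

def zenith_corrected_acceptance(acc, delta_zn, flatness, deg=4):
    """Re-weight an acceptance map by a polynomial fit of flatness vs.
    delta-zenith, normalized at delta_zn = 0."""
    coeffs = np.polyfit(delta_zn.ravel(), flatness.ravel(), deg)
    poly = np.poly1d(coeffs)
    return acc * poly(delta_zn) / poly(0.0)
```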
\begin{figure} \centering \includegraphics[scale=0.2]{DeltaZn.pdf} \includegraphics[scale=0.2]{Flatness.pdf} \includegraphics[scale=0.23]{DeltaZnSegueHFit_pol4.png} \caption{Visualization of the zenith gradient in VERITAS Segue 1 data. Left: Sky map of the average $\Delta$Zn, which is defined as the difference between the zenith of the reconstructed event position and the zenith of the telescope tracking position. Center: Sky map showing the $N_{i}/Acc_{i}$ of the VERITAS sky map, with all background exclusion regions removed. Right: Histogram of flatness as a function of $\Delta$Zn. A fit to a 4th-degree polynomial is shown in red.} \end{figure} \section{Modified Li\&Ma Significance Equation Accounting for Background Systematics} Acceptance functions are typically derived from data, and there is an intrinsic systematic uncertainty associated with them. A `Modified' version of the Li\&Ma equation is needed to correctly take into account the $\alpha$ systematic error, which we will refer to as $\sigma_{\alpha}$ \cite{2015APh....67...70S}. The solution presented here is analytic, but fitting approaches using TMinuit have also been taken in the literature \cite{Dickinson2012}. The full derivation of the Modified Li\&Ma equation is in \cite{2015APh....67...70S}, along with details of its performance on simulations. It follows a maximum likelihood framework with $\alpha$ treated as a nuisance parameter. The Modified Li\&Ma equation is: \begin{equation} S_{Mod} = \frac{N_{on} - \alpha N_{off}}{|N_{on} - \alpha N_{off}|}\sqrt{S^{2}_{LiMa}(N_{on},N_{off},\alpha') + \Big(\frac{\alpha' - \alpha}{\sigma_{\alpha}}\Big)^{2} } \end{equation} where $S_{LiMa}$ is the Li\&Ma significance calculation in Equation 1.1 and $\alpha'$ is the solution to the equation: \begin{equation} 0 = \alpha'^{3} - \alpha'^{2}(\alpha - 1) - \alpha'(\alpha - \sigma_{\alpha}^{2}N_{off}) - \sigma_{\alpha}^{2}N_{on} \end{equation} This is a cubic equation with up to three real solutions. 
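A sketch of the resulting significance calculation, using NumPy to solve the cubic and the likelihood criterion described below to choose among multiple positive real roots (function names are ours):

```python
import numpy as np

def li_ma(n_on, n_off, alpha):
    """Unsigned Li & Ma Eq. 17 significance."""
    t = (n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
         + n_off * np.log((1 + alpha) * n_off / (n_on + n_off)))
    return np.sqrt(max(2.0 * t, 0.0))

def modified_li_ma(n_on, n_off, alpha, sigma_alpha):
    """Significance with a Gaussian systematic on alpha, profiling out
    alpha' via the cubic equation above."""
    # Coefficients of  a'^3 - a'^2(a-1) - a'(a - s^2 N_off) - s^2 N_on = 0.
    roots = np.roots([1.0,
                      -(alpha - 1.0),
                      -(alpha - sigma_alpha ** 2 * n_off),
                      -sigma_alpha ** 2 * n_on])
    candidates = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]

    def log_like(a):   # profile likelihood used to pick between roots
        return (n_on * np.log(a)
                + (n_on + n_off) * np.log((n_on + n_off) / (1.0 + a))
                - 0.5 * ((a - alpha) / sigma_alpha) ** 2)

    a_p = max(candidates, key=log_like)
    return (np.sign(n_on - alpha * n_off)
            * np.sqrt(li_ma(n_on, n_off, a_p) ** 2
                      + ((a_p - alpha) / sigma_alpha) ** 2))
```

For $\sigma_{\alpha} \to 0$ this reproduces the standard Li\&Ma value, while a sizable $\sigma_{\alpha}$ reduces the significance, as discussed in the validation section.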
It is worth noting that in the case where $\sigma_{\alpha} = 0$, the solutions reduce to $\alpha' = \alpha$, $0$ and $-1$, and we recover Equation 1.1. If there are multiple positive and real solutions, then the solution that maximizes the likelihood is used: \begin{equation} \Lambda(\alpha') = N_{on}\log\alpha' + (N_{on} + N_{off})\log\Big(\frac{N_{on} + N_{off}}{1+\alpha'}\Big) - \frac{1}{2}\Big(\frac{\alpha' - \alpha}{\sigma_{\alpha}}\Big)^{2} \end{equation} It is assumed here that the main source of $\sigma_{\alpha}$ is the statistical uncertainty of the acceptance curves in VEGAS, one of the VERITAS standard analysis codes \cite{VEGASRef}. The procedure for estimating $\sigma_{\alpha}$ follows, which requires a description of how this code determines $\alpha$. On a run-by-run basis, an acceptance curve is calculated as a function of angular distance from the array pointing position to the reconstructed event position. That curve is then smoothed several times and is given a weight based on the number of events passing cuts in that run. The weighted acceptance curve is evaluated at each sky map position in either RA/Dec or Galactic coordinates. This process is repeated for all observation runs. $\alpha$ is then calculated for each position from Equation 1.2, assuming $E$ and $t$ are constant, with the ON region defined as a circular region around the region of interest (ROI) and the OFF region as an annulus centered on the same ROI. The process of $\sigma_{\alpha}$ estimation follows a similar procedure to the $\alpha$ calculation. The RMS of the acceptance curve is used and the same background weight is applied. The weighted RMS is then added to each sky map position, and the process is repeated for all runs. 
$\sigma_{\alpha}$ is then calculated by error propagation of $\alpha$: \begin{equation} \alpha = \frac{\int Acc_{ON}(x,y,t,E)dx dy dt dE}{\int Acc_{OFF}(x,y,t,E)dx dy dt dE} = \frac{\sum Acc_{ON}(x_{i},y_{i})}{\sum Acc_{OFF}(x_{i},y_{i})} \end{equation} \begin{equation} \sigma_{\alpha} = \alpha\sqrt{ \Big(\frac{\sum\delta Acc_{ON}(x_{i},y_{i})}{\sum Acc_{ON}(x_{i},y_{i})} \Big)^{2} + \Big(\frac{\sum\delta Acc_{OFF}(x_{i},y_{i})}{\sum Acc_{OFF}(x_{i},y_{i})} \Big)^{2} } \end{equation} \section{Validation on VERITAS Data} We test the results of these systematic studies on VERITAS data to see whether they improve the significance distributions. The zenith correction and modified Li\&Ma equation were tested on various data sets. Segue 1 and PKS 1424+240 were tested using soft spectral cuts, since the scientific goals of both benefit from having a low energy threshold; IC 443 was tested with harder cuts and an extended integration region, since it is an extended SNR for VERITAS. Figure 3 shows the results of the validation on PKS 1424+240. Table 1 summarizes the validation results for the target position and the best fit of the significance distributions to a Gaussian function. In all cases, the significance distribution widths decrease as the zenith correction and the modified Li\&Ma equation are applied; when both are applied, the width is within 10\% of unity. In the case where both are applied, the zenith correction is applied first, before determining $\sigma_{\alpha}$ for the Modified Li\&Ma equation. The zenith correction does not greatly affect the significance at the source position. Application of the modified Li\&Ma equation does decrease the target position significance, which may be read as a loss of sensitivity. The reader should keep in mind, however, that if the significance distribution is sufficiently wider than unity, the significance is overestimated.
Instead of a loss of sensitivity, this should be interpreted as the \textit{true} significance of the target observation. \begin{table}[t] \begin{center} \scalebox{0.75}{ \begin{tabular}{l | l | l | c | c | c | c | c | c | c } \hline Target & Zn Accept. & Mod. Li\&Ma & $N_{on}$ & $N_{off}$ & $\alpha$ & $\sigma_{\alpha}$ & S ($\sigma$) & Sig. Mean & Sig. Width\\ \hline\hline PKS 1424+240 & No & No & 10042 & 74779 & 0.1176 & N/A & 12.27 & -0.07 & 1.43\\ PKS 1424+240 & No & Yes & 10042 & 74779 & 0.1176 & 0.0011 & 9.39 & -0.07 & 1.25\\ PKS 1424+240 & Yes & No & 10042 & 74779 & 0.1167 & N/A & 12.95 & -0.08 & 1.17\\ PKS 1424+240 & Yes & Yes & 10042 & 74779 & 0.1167 & 0.0011 & 9.93 & -0.07 & 1.02\\ \hline Segue 1 & No & No & 17010 & 106291 & 0.1573 & N/A & 2.102 & 0.02 & 1.72\\ Segue 1 & No & Yes & 17010 & 106291 & 0.1573 & 0.0017 & 1.292 & 0.01 & 1.31\\ Segue 1 & Yes & No & 17010 & 106291 & 0.1590 & N/A & 0.782 & 0.01 & 1.23\\ Segue 1 & Yes & Yes & 17010 & 106291 & 0.1590 & 0.0017 & 0.479 & 0.01 & 0.99\\ \hline IC443 ($\theta^{2} <$0.04) & No & No & 2365 & 11199 & 0.1859 & N/A & 5.56 & -0.03 & 1.10 \\ IC443 ($\theta^{2} <$0.04) & No & Yes & 2365 & 11199 & 0.1859 & 0.0043 & 4.00 & -0.04 & 0.95 \\ IC443 ($\theta^{2} <$0.04) & Yes & No & 2365 & 11199 & 0.1870 & N/A & 5.30 & 0.08 & 1.08 \\ IC443 ($\theta^{2} <$0.04) & Yes & Yes & 2365 & 11199 & 0.1870 & 0.0043 & 3.81 & 0.07 & 0.93 \\ \hline IC443 ($\theta^{2} <$0.09) & No & No & 4913 & 11199 & 0.4040 & N/A & 4.79 & -0.04 & 1.16 \\ IC443 ($\theta^{2} <$0.09) & No & Yes & 4913 & 11199 & 0.4040 & 0.0093 & 2.92 & -0.04 & 0.94 \\ IC443 ($\theta^{2} <$0.09) & Yes & No & 4913 & 11199 & 0.4064 & N/A & 4.45 & 0.09 & 1.12 \\ IC443 ($\theta^{2} <$0.09) & Yes & Yes & 4913 & 11199 & 0.4064 & 0.0093 & 2.71 & 0.08 & 0.91 \\ \hline \end{tabular} } \caption{Summary of validation results at the target position, and the mean and width of the significance distributions of the validation samples.
The significance distributions listed here exclude bins centered within the ROI and near bright stars.} \end{center} \end{table} \begin{figure} \centering \includegraphics[scale=0.15]{pks1424_hFit_sigMap_nZn.png} \includegraphics[scale=0.15]{pks1424_hFit_sigMap_nZn_modLiMa.png} \includegraphics[scale=0.15]{pks1424_hFit_sigMap_wZn.png} \includegraphics[scale=0.15]{pks1424_hFit_sigMap_wZn_modLiMa.png} \includegraphics[scale=0.3]{pks1424_hFit_sigDistAll.png} \caption{Top row: significance maps of PKS 1424+240. The z-axis scale is fixed between -6$\sigma$ and +6$\sigma$ in each map. Upper far left: significance map without zenith correction or modified Li\&Ma. Upper center left: significance map with modified Li\&Ma. Upper center right: significance map with zenith correction. Upper far right: significance map with both modified Li\&Ma and zenith correction. Bottom row: significance distributions of the field of view around PKS 1424+240 excluding the VHE source and bright stars, in the same order as the top row.} \end{figure} \section{Conclusions and Future Work} This work has addressed the systematics associated with the background weighting parameter, $\alpha$, and has not discussed reducing the background rates themselves. It is possible that other, as yet unaccounted for, background systematics exist. Improved $\gamma$/hadron separation would decrease the overall systematic uncertainty in the background estimation from these unaccounted for systematics, and should also improve the width of the significance distributions in addition to any sensitivity boost gained from a lower background rate. We therefore encourage combining the methods outlined here with boosted decision trees \cite{Krause2016} or similar methods. We have shown that, by employing the methods outlined here, the significance distributions in VERITAS and other IACT analyses can be greatly improved.
The significance width of the validation data set of PKS 1424+240 is reduced from 1.43 to 1.02, and that of Segue 1 is improved from 1.72 to 0.99. Future work should investigate other factors that could affect camera acceptances, such as optically bright stars, night sky background and azimuth angle, and how best to account for these effects. The techniques in this work, and future work in this area, become increasingly important as targets are observed by the current generation of IACTs for hundreds of hours or more, and for the upcoming CTA observatory \cite{CTARef}. \section{Acknowledgments} For the full acknowledgments, please visit https://veritas.sao.arizona.edu/.
\section{Introduction} During the course of their evolution, massive stars have strong winds which eject matter into their surroundings. During their post-main sequence evolution, these stars can move back and forth between the red and the blue side of the Hertzsprung-Russell (HR)~diagram, with little time spent at intermediate effective temperatures (e.g., Langer~{\cite{L91b}}). Hydrodynamic considerations imply that each such transition produces a circumstellar shell: when the star moves from the blue to the red side of the HR~diagram, the slow red supergiant (RSG) wind will be stalled by the high pressure of the previously created hot wind bubble, and will accumulate into a shell at the location where this pressure equals the RSG wind ram pressure (Garc\'{i}a-Segura et al.~{\cite{GLM96b}}). We call such a more or less stationary shell the RSG~shell. When the star moves from the red to the blue side of the HR~diagram, the wind speed increases and the blue supergiant (BSG) wind plows up the preceding RSG wind into a rapidly expanding shell, which we call the BSG~shell. Consequently, we expect a spectacular circumstellar phenomenon for stars undergoing so-called blue loops: the loop triggers the formation of an expanding BSG~shell, which will at some point smash into the previously formed stationary RSG~shell. While both the RSG and the BSG~shell by themselves may be difficult to observe, their violent interaction may release enough energy to produce an observable nebula. Despite this simple and intriguing expectation, there have so far been only a few attempts to obtain quantitative predictions for the outcome of the described shell interaction (see Blondin et al.~{\cite{BL93}}, Martin et al.~{\cite{MA95}}, Podsiadlowski et al.~{\cite{PM05}}). Within an effort to describe this phenomenon through generic calculations, which use detailed stellar evolution models as input for the circumstellar hydrodynamic modeling (Chi\c{t}\v{a} et
al., in preparation), we focus here on the results for a rotating 12$\,{\rm M}_\odot$ single star. \section{Computational method} As input for our circumstellar hydrodynamic calculations, we use the results of a stellar evolution calculation for a star of 12~$\,{\rm M}_\odot$ and a metallicity of $Z=0.02$. Specifically, we utilize Model F12B from Heger \& Langer~({\cite{HL00}}), which has an initial rotational velocity of 328~$\, {\rm km}\, {\rm s}^{-1}$. The code used to compute this model includes OPAL opacities, detailed nuclear networks, mass loss according to Nieuwenhuijzen \& Jager~({\cite{NJ90}}), the physics of rotation for the stellar interior, and rotationally modulated stellar winds, as described in Heger, Langer \& Woosley~({\cite{HLW00}}). \begin{table} \caption{Ejected mass ($\Delta M$), momentum ($\Delta p$) and kinetic energy ($\Delta E$) during the various evolutionary phases of our stellar model. The evolutionary phase is identified in the first column: main sequence phase~(MS), first red supergiant phase~(RSGI), phase of rapid rotation~(RR), blue supergiant stage~(BSG), and second red supergiant phase~(RSGII). The approximate duration of each phase is given in the second column. } \label{table:F12B} \centering \begin{tabular}{c c c c c} \hline Phase & $\Delta$t & $\Delta$M & $\Delta$p & $\Delta$E \\ & $10^3\,$yr & $\,{\rm M}_\odot$ & $10^{38}\,{\rm g\,cm}\,{\rm s}^{-1}$ & $10^{45}\,$erg \\ \hline MS & 19200 & 0.43 & 396 & 1480 \\ RSG~I & 825 & 0.33 & 71 & 38 \\ RR & 25 & 0.02 & 7.2 & 6.0 \\ BSG & 550 & 0.11 & 52 & 68 \\ RSG~II & 225 & 0.13 & 25 & 12 \\ \hline \end{tabular} \end{table} The evolution of the stellar model in the HR diagram is shown in Fig.~\ref{FigHrd}. At core-H exhaustion, it moves to the RSG regime where it remains for 825\,000 yr ($\sim$60~\% of the core-He burning life time), before it undergoes a blue loop.
It then stays in the BSG regime of the HR~diagram for the remaining part of core helium burning, before it moves back to the RSG regime where it explodes as a Type~II supernova. As shown by Heger \& Langer~({\cite{HL98}}), as the convective envelope retreats during the onset of the blue loop, all its angular momentum is concentrated in a small amount of mass in the top layers of the star by the time convection vanishes. Blue loops therefore provide a natural way to bring the stellar surface close to critical rotation. This also happens in our chosen stellar model (Fig.~\ref{FigMvvo}). The limit of critical rotation is reached during the red-blue transition, which produces a brief period of strong, slow and anisotropic mass loss (Table~1). The strong mass loss then reduces the rotation rate of the stellar surface (Langer~{\cite{L98}}), and the star settles at a rotation velocity of about~50$\, {\rm km}\, {\rm s}^{-1}$ in the BSG regime. To simulate the evolution of the circumstellar matter (CSM) we use the ZEUS~3D code developed by Stone \& Norman~({\cite{SN92}}) and Clark~({\cite{Clk96}}). ZEUS~3D is an explicit non-conservative code that solves the hydrodynamical equations as finite difference equations on a fixed, staggered mesh grid. Optically thin radiative cooling is included by solving the energy equation implicitly according to Mac Low et al.~({\cite{M89}}), and by using the plasma cooling curve of MacDonald \& Bailey~({\cite{M81}}). We compute the evolution of the CSM during the main sequence and the early RSG stage in 1D, with 4500 grid points over a radius of 45~pc, and we assume an interstellar medium density of 1~cm$^{-3}$. After 100\,000\,yr into the first RSG stage, we map the 1D model onto a 2D spherical grid to compute its further evolution. The inflow inner boundary condition is applied at 0.025~pc, and the outer boundary remains at 45~pc.
The radial component of the grid is resolved with 1000 grid points, where 900 grid points are used for the inner 5~pc, and 100 grid points for the outer 40~pc. The angular coordinate of 90 degrees is resolved with 200 grid points. The method used here was applied before by Garc\'{i}a-Segura et al.~({\cite{GF96}}, ~{\cite{GML96a}} and ~{\cite{GLM96b}}) and van Marle et al.~({\cite{MLG05}} and ~{\cite{MLG07}}). We use the time-dependent mass loss rate and the terminal wind speed from the stellar evolution model as input at the central mesh point of the hydrodynamic calculations. The wind speed is obtained from the stellar escape velocity using the scaling proposed by Eldridge~({\cite{E05}}). The wind anisotropy is described using the equations of Bjorkman \& Cassinelli~({\cite{BC93}}), as in Langer et al.~({\cite{LGM99}}). For near-critically rotating stars, this provides a slow and dense equatorial outflow, and a fast wind in polar directions. We note that while the Bjorkman-Cassinelli mechanism has been criticized in the context of line driven winds (Owocki et al.~{\cite{O96}}), it is unclear whether line driving plays a major role in the situation of near-critical rotation. The effect of photoionization is included in the simulations by calculating the Str\"{o}mgren radius along each radial grid line as described in Garc\'{i}a-Segura et al.~ ({\cite{GLRF99}}) and van Marle et al.~({\cite{MLG05}}, ~{\cite{MLG07}} and ~{\cite{MLYG08}}). The number of ionizing photons is computed according to the effective temperatures and surface gravities of the stellar evolution model, by interpolating in a grid of model atmospheres for massive OB~stars of solar metallicity computed with the FASTWIND non-LTE code (Puls et al.~{\cite{Pu05}}) as described in Lefever et al.~({\cite{Le07}}).
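The per-ray photoionization treatment admits a compact illustration: along each radial grid line, the Str\"{o}mgren radius is the radius at which the recombinations integrated along the ray exhaust the available ionizing photon budget. The following Python sketch is generic (it is not the actual ZEUS~3D routine; the case-B recombination coefficient and the grid are illustrative):

```python
import numpy as np

ALPHA_B = 2.6e-13   # assumed case-B recombination coefficient [cm^3 s^-1]

def stroemgren_radius(r, n, q_ion):
    """Stroemgren radius along one radial grid line.

    Ionization balance per unit solid angle:
        q_ion / (4 pi) = integral_0^{R_S} n(r)^2 * alpha_B * r^2 dr
    r: radii [cm] (increasing), n: hydrogen number density [cm^-3],
    q_ion: ionizing photon rate of the star [photons/s]."""
    integrand = n**2 * ALPHA_B * r**2
    # cumulative recombination rate per steradian out to each grid radius
    cum = np.concatenate(([0.0],
                          np.cumsum(0.5 * (integrand[1:] + integrand[:-1])
                                    * np.diff(r))))
    budget = q_ion / (4.0 * np.pi)
    if cum[-1] < budget:
        return r[-1]        # the ray is ionized out to the grid edge
    return float(np.interp(budget, cum, r))
```

For a uniform density this reproduces the classical result $R_S = [3 Q /(4\pi n^2 \alpha_B)]^{1/3}$; with a structured density along the ray (e.g. a dense shell), the ionization front simply stalls where the cumulative recombinations first match the photon budget.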
\section{Results} \begin{figure*}[t] \centering \includegraphics[width=0.65 \textwidth, angle=270]{proj3low.eps} \caption{Emission structures from our 2D hydro simulation, for the same moments in time as panels 2 to 4 of Fig.~\ref{FigDenstemp}, i.e., 9000\,yr, 15\,000\,yr, and 18\,000\,yr after the onset of the BSG wind. {\bf Upper panel}: Emissivity of the gas with $10^{4} < T < 10^{6}\,$K, in the simulation plane. {\bf Lower panel}: Projections of the 3D emission obtained by assuming rotational symmetry of the 2D structures of the upper panel, viewed with an inclination angle of 60$^{\circ}$, constructed with a resolution of 400$\times$400 pixels. } \label{FigEmis} \end{figure*} During its main sequence phase our 12$\,{\rm M}_\odot$ star creates a hot bubble in the interstellar medium which, at core hydrogen exhaustion, is characterized by a radius of 30\,pc and an internal energy density of $10^{-12}\, {\rm erg}$\,cm$^{-3}$. Once the star has become a RSG, a slow ($\sim 50\, {\rm km}\, {\rm s}^{-1}$), dense and isotropic wind is injected into the computational domain (Fig.~\ref{FigMvvo}). This RSG wind accumulates at a distance of $\sim 1.5\,$pc where its ram pressure is balanced by the hot bubble pressure, and forms a RSG shell (cf. Garc\'{i}a-Segura et al.~{\cite{GLM96b}}). At the end of the first RSG phase, this shell contains about $0.26\,{\rm M}_\odot$. It is rather extended ($\Delta r\simeq 1\,$pc), and its central parts have condensed due to cooling. At the onset of the blue loop, the central star reaches close-to-critical rotation, and ejects a dense equatorial disk (Heger \& Langer 1998). While mass and time scales differ, this phenomenon occurs quite analogously to the simulation of the outburst of $\eta\,$Carinae by Langer et al. (~{\cite{LGM99}}). As in that case, the ensuing BSG wind sweeps up the preceding slow wind material into an ``hour glass'' structure (Fig.~\ref{FigDenstemp}).
On a time scale of a few $10^4\,$yr, this hour glass expands into the sphere defined by the RSG shell, with a maximum velocity of $\sim$130$\,\, {\rm km}\, {\rm s}^{-1}$ (Fig.~\ref{FigVelshell}). The faster polar parts of the hour glass hit the inner edge of the RSG shell first. The collision creates a hot ($T\simeq 10^5$\,K) and dense ($n\simeq 10\,$cm$^{-3}$) pair of polar caps. As time proceeds, the collision zone moves to lower latitudes of the RSG shell and becomes more confined in latitude. At the same time, the interaction of the BSG wind with the equatorial disk defines a second, ring shaped collision zone in the equatorial plane, which expands with time with a velocity of 18$\,\, {\rm km}\, {\rm s}^{-1}$. Figure~\ref{FigEmis} shows snapshots of the emissivity map, according to the cooling curve employed in our hydro simulations, for three slices in time, along with projection maps constructed from rotationally symmetric 3D structures obtained from the 2D maps. Here, only emission from gas in the temperature range between $10^{4}$~K and $10^{6}$~K is considered, which is the dominant component. Hotter gas, which is formed at the reverse shock of the collision, might be observable in the X-ray regime; the peak luminosity of this component in our model is $10^{33}$\,erg\,s$^{-1}$, which is achieved about 50\,000\,yr after the onset of the collision. At an early interaction stage, the radiation is dominated by two polar caps and one equatorial ring, later on by two mid-latitude rings and one fading smaller equatorial ring, and finally by two rings at rather low latitude. Those two rings gradually move to the equatorial plane while fading. The full time dependence of the emission structure is shown in an accompanying movie which is available from the A\&A website. The energy budget for the collision of the polar caps of the hour glass with the RSG shell follows directly from the stellar properties.
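The projection maps in the lower panels of Fig.~\ref{FigEmis} are obtained by rotating the 2D emissivity about the symmetry axis and integrating along inclined, optically thin lines of sight. A minimal re-implementation of such a projection (our own sketch; the inclination convention and the reflection symmetry assumed for the 2D quadrant map are our assumptions) reads:

```python
import numpy as np

def project_axisymmetric(emis_fn, rmax, incl_deg=60.0, npix=400, nlos=400):
    """Project an axisymmetric, optically thin emissivity onto the sky.

    emis_fn(r, theta): emissivity as a function of spherical radius and
    polar angle; assumed symmetric about the rotation axis and the
    equator, so theta is folded into [0, pi/2] as for a quadrant map.
    incl_deg: assumed tilt of the symmetry axis along the line of sight."""
    i = np.radians(incl_deg)
    u = np.linspace(-rmax, rmax, npix)      # image-plane coordinates
    v = np.linspace(-rmax, rmax, npix)
    s = np.linspace(-rmax, rmax, nlos)      # line-of-sight coordinate
    ds = s[1] - s[0]
    U, V = np.meshgrid(u, v, indexing="ij")
    image = np.zeros((npix, npix))
    for sk in s:                            # integrate along the line of sight
        y = V * np.cos(i) - sk * np.sin(i)
        z = V * np.sin(i) + sk * np.cos(i)  # z = symmetry axis of the nebula
        r = np.sqrt(U**2 + y**2 + z**2)
        mu = np.clip(np.abs(z) / np.maximum(r, 1e-30), 0.0, 1.0)
        image += emis_fn(r, np.arccos(mu)) * ds
    return image
```

For an optically thin source the integrated flux in the image is independent of the inclination, which makes a convenient consistency check.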
The polar caps have an emissivity of $l\simeq 10^{-21}\,$erg\,cm$^{-3}$\,s$^{-1}$ in a volume of $V \simeq 4\pi r^2 \Delta r = 4\times10^{54}\,$cm$^3$ (with $r=0.5\,$pc and $\Delta r = 0.04\,$pc; see Figure~\ref{FigEmis}). Thus, they shine with a total luminosity of $4\times10^{33}\,$erg\,s$^{-1}$, i.e. roughly one solar luminosity, with a time scale of $\tau_{\rm rad}=u/l \simeq 9000\,$yr, where $u={3\over 2} n k T$ is the internal energy density of the gas, and $T\simeq 10^5\,$K and $n\simeq 13\,$cm$^{-3}$ (corresponding to $\rho \simeq 10^{-23}\,$g\,cm$^{-3}$; Fig.~\ref{FigDenstemp}). The total radiated energy of the polar caps is about $E_{\rm rad} \simeq \tau_{\rm rad} L \simeq 10^{45}\,$erg. This corresponds well to the kinetic energy release due to the braking of the polar caps, which reach their maximum velocity of $\varv \simeq 130\, {\rm km}\, {\rm s}^{-1}$ at the time of collision, after which it is reduced to $\varv \simeq 50\, {\rm km}\, {\rm s}^{-1}$ (Fig.~\ref{FigVelshell}). That is, $\Delta E_{\rm kin} = {1\over 2} \Delta M \Delta \varv^2 \simeq 8\times10^{44}\,$erg, with $\Delta M \simeq 1.2\times10^{-2}\,{\rm M}_\odot$ and $\Delta \varv\simeq 80\, {\rm km}\, {\rm s}^{-1}$. This kinetic energy can be compared with the BSG wind kinetic energy, which, for $\dot M \simeq 10^{-6.8} \, \mso~{\rm yr}^{-1}$ and $\varv_{\rm wind}\simeq 300\, {\rm km}\, {\rm s}^{-1}$ (Fig.~\ref{FigMvvo}), yields $\sim 1.2\times10^{45}\,$erg over a time period of 9000\,yr. Thus, the polar caps shine because the hour glass shaped BSG shell collides with the spherical RSG shell. A similar consideration could be made for the inner ring, which is produced by the collision of the BSG wind with the equatorial disk ejected by the central star during the phase of near critical rotation. The disk properties depend on the wind properties of the star during this phase; their latitude dependence, in particular, cannot be expected to be reliably predicted within the current assumptions.
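These order-of-magnitude estimates are easy to check numerically; the few lines of Python below (cgs units, all input values taken from the text) reproduce the quoted volume, luminosity, cooling time and energies:

```python
import numpy as np

K_B = 1.38e-16     # Boltzmann constant [erg/K]
M_SUN = 1.99e33    # solar mass [g]
YR = 3.156e7       # year [s]
PC = 3.086e18      # parsec [cm]

# polar-cap properties as quoted in the text
emis = 1e-21                        # emissivity l [erg cm^-3 s^-1]
r, dr = 0.5 * PC, 0.04 * PC         # cap radius and thickness [cm]
V = 4.0 * np.pi * r**2 * dr         # emitting volume [cm^3]
lum = emis * V                      # total luminosity [erg/s]

n, T = 13.0, 1e5                    # density [cm^-3], temperature [K]
u = 1.5 * n * K_B * T               # internal energy density [erg cm^-3]
tau_rad = u / emis                  # radiative cooling time scale [s]
E_rad = tau_rad * lum               # total radiated energy [erg]

# kinetic energy released as the caps are braked from 130 to 50 km/s
dM = 1.2e-2 * M_SUN                 # braked mass [g]
dv = 80.0e5                         # velocity decrease [cm/s]
E_kin = 0.5 * dM * dv**2

print(f"V = {V:.2e} cm^3, L = {lum:.2e} erg/s")
print(f"tau_rad = {tau_rad / YR:.0f} yr, E_rad = {E_rad:.2e} erg")
print(f"E_kin = {E_kin:.2e} erg")
```

The output lands within the rounding of the quoted values: $V\approx 4\times10^{54}\,$cm$^3$, $L\approx 4\times10^{33}\,$erg\,s$^{-1}$, $\tau_{\rm rad}\approx 9000\,$yr, $E_{\rm rad}\approx 10^{45}\,$erg and $\Delta E_{\rm kin}\approx 8\times10^{44}\,$erg.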
The total mass of the disk is determined by the mass loss of the star at critical rotation. \section{Discussion} \begin{figure}[!h] \centering \resizebox{\hsize}{!}{\includegraphics[width=0.95 \textwidth]{cartoonsab.eps}} \caption{ Schematic representation of the interaction of the hour glass shaped BSG shell (blue) with the RSG shell (orange). The collision regions are marked with red color; they form the brightest parts of the nebula.} \label{FigCartoon} \end{figure} Figure~\ref{FigCartoon} illustrates a simplified picture of the formation of multiple ring nebulae, according to our model. It contains two kinematic components: a stationary, spherical RSG shell and an expanding hour glass structure. The strongly emitting parts of the structure are the collision surfaces, which are marked in red in the figure. We believe that both kinematic components are realized by nature in the circumstellar medium of massive stars. RSG shells are unambiguously predicted (Garc\'{i}a-Segura~{\cite{G07}}) and while they are not yet observationally confirmed, there seems to be no way to avoid their formation. Expanding hour glass structures, on the other hand, are a well documented feature in circumstellar nebulae of low and high mass stars (see Nota et al.~{\cite{N95}}; Langer et al.~{\cite{LGM99}}) and are thought to be confined by a circumstellar disk in the equatorial plane of the central star. A number of predictions emerge from this simple model. First, the collision starts about 10$^4$\,yr after the onset of the blue loop. This timing is set by the expansion speed of the BSG shell and the radius of the RSG shell. Second, the life time of the nebula is determined by the duration of the collision phase, as the emission time scale is shorter than that. In our example, this is about $10^4$\,yr, or about 1\% of the core helium burning life time. This provides an upper limit to the expected number of triple ring nebulae. 
Third, the rotation rate of the central star during the collision is high for a BSG, since it has only just recovered from critical rotation. At the time of maximum brightness of the nebula, the equatorial rotation rate of our central star is about 80$\, {\rm km}\, {\rm s}^{-1}$ (Fig.~\ref{FigMvvo}). Fourth, as all the material in the nebula is ejected after the first dredge-up phase of the central star, the nebula material is nitrogen-rich, here enhanced by a factor of~6.5, and carbon and oxygen depleted by factors of~6.5 and~1.5, respectively. We note that the level of N-enrichment predicted by current stellar evolution models is quite uncertain (see Hunter et al. {\cite{Hu08}}), but a RSG phase is still expected to produce some nitrogen enhancement. Due to the assumption of efficient rotational mixing (Heger \& Langer~{\cite{HL00}}), the star and nebula in our model are more enriched than expected from non-rotating stellar models. Fifth, one ingredient of our simple model, namely the RSG shell, is expected for massive stars, but not for the low mass stars which produce planetary nebulae. Therefore, while quite analogous expanding hour glass structures are observed for both cases (Langer~{\cite{L00}}), multiple ring nebulae formed through the collision process shown in Fig.~\ref{FigCartoon} are expected around massive stars, but not as planetary nebulae. In this sense, the polar caps observed around the blue supergiant Sher~25 might be considered as the first indirect empirical confirmation of a RSG shell. Previous models of multiple ring nebulae were mostly constructed in the context of the triple-ring structure observed around SN\,1987A (Burrows et al.~{\cite{B95}}; Crotts \& Heathcote~{\cite{CH00}}).
While single star models often fail to explain important features (e.g., Martin \& Arnett~{\cite{MA95}}; Meyer~{\cite{Me97}}; Woosley et al.~{\cite{W97}}), many invoke rather complex binary phenomena (e.g., Podsiadlowski et al.~{\cite{P91}}; Blondin \& Lundqvist~{\cite{BL93}}; Lloyd et al.~{\cite{L95}}; Morris \& Podsiadlowski~{\cite{PM05}}). Whereas we do not attempt to reproduce the circumstellar medium of SN~1987A, a single star approach with suitable choices for the major parameters in our model (initial mass, initial rotation rate, metallicity) appears promising and will be pursued in the near future. The current failure of single star models to produce suitable blue loops and blue supergiant pre-supernova models may have more to do with missing physics in stellar evolution models than with evidence for a binary progenitor of SN\,1987A (Woosley et al.~{\cite{W97}}). Various multiple ring nebulae around blue supergiants have been observed in the last 20~years (Smith et al.~{\cite{SBW07}}). While our generic numerical model was not designed to correspond to any of them, many of the general properties of these nebulae are well reproduced. Most striking is the agreement of the emission geometries. While the nebula around the B1.5~Ia supergiant Sher~25 shows two polar caps and one equatorial ring (Brandner et al.~{\cite{BGCW97}}), and the other objects discussed by Smith et al.~{\cite{SBW07}} rather show narrow rings, including the ``twin'' of the SN~1987A nebula around HD~168625 (Smith~{\cite{Sm07}}), all these structures occur as an evolutionary sequence in our model. Expansion velocities of the inner ring ($\sim 18\, {\rm km}\, {\rm s}^{-1}$) and the outer collision products ($\sim 50\, {\rm km}\, {\rm s}^{-1}$), the spatial scale of about 1\,pc, and the kinematic nebula age agree rather well with empirical values.
The rotation velocity of our stellar model fits well with the derived value of $\sim 70\, {\rm km}\, {\rm s}^{-1}$ for Sher~25 (Hendry et al.~{\cite{He08}}). Central star and nebula of our model are nitrogen enriched, as are most of the observed nebulae. We note that the emission in our model is caused by compressional heating, which may be in conflict with evidence for photoionization being the dominant process in some observed multiple ring nebulae (see Smith et al.~{\cite{SBW07}}). And indeed, looking at the density distributions shown in Fig.~\ref{FigDenstemp}, which might resemble emission geometries in the pure photoionization case, the situation appears more complex. In our simulation, the thick RSG shell ($\Delta r \simeq r \simeq 1\,$pc) collapses in two parts (at $r\simeq 1.2\,$pc and $r\simeq 1.7\,$pc) due to a cooling instability. However, this collapse is questionable since it requires a long timescale --- our shell has an age of close to $10^6\,$yr, while in many cases the shell will be much younger at the time of collision --- and since the employed cooling function is uncertain for temperatures below $10\,000\,$K. Without this collapse, its density would be only about $2\times10^{-25}\,$g\,cm$^{-3}$ (or 0.1 particles/cm$^3$), which may render it unobservable even if it were photoionized. However, even in the case of the collapsed RSG shell as in our simulation, the collision leads to a clear density enhancement. In panel~2 of Fig.~\ref{FigDenstemp}, we see that in our model, the density enhancement in the polar caps is about a factor~5. This is to be considered a lower limit, as higher resolution models might approach the theoretically expected enhancement factor of about~100, which follows from the (well realized) isothermal shock approximation and a Mach number of about~10.
The lower panels of Fig.~\ref{FigDenstemp} show that in order to represent the rings of SN~1987A, further refinements are required, which we think might be achieved by altering the properties of the RSG shell. For this particular case, this may indeed be justified, as the life time of the final RSG stage of the progenitor of SN~1987A might have been quite short (Woosley~{\cite{W88}}, Langer~{\cite{L91a}}). Although our model does not fit any of the observed cases in detail, the approximate agreement with the most general properties of this class of objects encourages the production of tailored models for individual nebulae as a next step. Our results indicate that stars with multiple ring nebulae might just have left the RSG branch, as stellar evolution models argued for the case of SN~1987A (Woosley~{\cite{W88}}, Langer~{\cite{L91a}}), and furthermore that binarity may not be required to obtain multiple ring emission geometries. \begin{acknowledgements} We are grateful to the anonymous referee for helpful remarks which led to significant improvements of this paper. We would like to thank Karolien Lefever for providing us with a grid of atmospheric models for hot stars. We would like to thank Anthony Marston, Nathan Smith and Bob van Veelen for helpful discussions. A.J.v.M. acknowledges support from NSF grant AST-0507581. AH was supported by the DOE Program for Scientific Discovery through Advanced Computing (SciDAC; grants DOE-FC02-01ER41176 and DOE-FC02-06ER41438) and performed this work under the auspices of the National Nuclear Security Administration of the U.S. Department of Energy at Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396. This work was supported by the Dutch Stichting Nationale Computerfaciliteiten (NCF). \end{acknowledgements}
\section{Device Fabrication} CVD (chemical vapor deposition)-grown graphene is used to produce the large-scale films required for this study. The large-domain monolayer graphene is grown on a copper substrate and then transferred to a Si/SiO$_2$ substrate, which is used as a back gate. To transfer, a polymer (PMMA, poly(methyl methacrylate)) layer is first spin-coated onto one surface of the copper to protect the graphene. The other side of the copper undergoes an O$_2$ plasma ashing for 30 s at 90 W RF power to fully expose the copper underneath. The copper is then placed exposed-side down into an ammonium persulfate (APS) chemical bath to be etched away \cite{Suk}, followed by a rinse in de-ionized (DI) water. The polymer layer floats and supports the graphene as the copper is etched, leaving a PMMA/graphene film on the surface of the last water bath. The silicon substrate is then placed in the water and used to pick up the polymer/graphene film at a $\sim 45\degree$ angle to minimize the residual water. The transferred film is baked on a hot plate at $150\degree$C to remove wrinkles and water residue. Once the film has fully dried, the polymer supporting layer is dissolved in DCM (dichloromethane) to reveal the complete transferred graphene layer. The sample is patterned into a rectangular shape for thermal measurements (5 x 50 $\mu m$) with standard electron beam lithography (EBL) techniques, using PMMA as the electron resist. For our device, we defined the patterns using an FEI XL 30 SEM (scanning electron microscope) equipped with an NPGS system \cite{Nabity}. Multiple stages of EBL were used to make a single device. The first stage deposits a design of gold bonding pads and leads that will interface with the graphene region. During this stage a grid of small gold markers is also written. These serve as alignment markers for subsequent EBL steps that require finer resolution. Second, an etch write is done to define the shape of the sample.
Oxygen plasma (30 s) is sufficient to etch away any unwanted graphene not covered by PMMA. Finally, a third stage of EBL is used to pattern the superconducting leads directly connecting to the sample. This stage is left until the end, as the low melting point ($\sim150\degree$C) of the lead-indium (PbIn) alloy prohibits additional lithography steps, since curing the PMMA requires higher temperatures. \begin{figure} \includegraphics[width=3.15in]{FigureS1_2.pdf}% \caption{\label{fig:std} (a) Applied heating power vs inverse temperature at the far end of the strip $T^{-1}(L)$, assumed to be equal to the phonon temperature $T_{ph}$. Data is shown for different values of gate voltage $V_G$: $-20$~V (black), $-10$~V (red), $0$~V (green), $10$~V (blue), $20$~V (light blue), $30$~V (magenta). Power is plotted on a logarithmic scale. } \end{figure} \begin{figure} \includegraphics[width=3.15in]{FigureS2.pdf}% \caption{\label{fig:std} Heating power versus temperature for the case where the graphene strip acting as a heater and the strip acting as a thermometer are separated by $3~\mu$m. In this case, heat exchange can only be mediated via the Si/SiO$_2$ substrate. } \end{figure} Finally, lead-indium (PbIn) superconducting contacts were evaporated onto the device as described in the main text. Building upon our previous work with lead contacts, here we instead used an alloy of lead and indium \cite{Jeong}. Pb oxidizes rapidly upon exposure to air, which severely degrades the contacts. Combining the Pb with In reduces the metal's oxidation, without any significant reduction in the critical temperature. We first create the alloy by melting both materials together in a vacuum deposition chamber. Holding the temperature of the crucible above their melting points and below the evaporation point for 10 min allows the metals to intermix before being co-evaporated.
Roughly 100 nm of this PbIn alloy is deposited at a high rate of 2 nm/s and moderate vacuum ($\sim 10^{-5}$ mbar) to ensure small metal grain sizes. LN$_2$ is used to keep the substrate cool. \begin{figure*} \includegraphics[width=6.0in]{FigureS3.pdf}% \caption{\label{fig:std} Evolution of the temperature profile as a function of electrical conductivity and electron-phonon coupling. (a-c) Temperature profiles for electrical conductivities of $\sigma\,=\,0.9$ mS, 1.8 mS, and 3.6 mS. The electron-phonon coupling is held constant at 0.32 W.m$^{-2}$.K$^{-4}$. Panel (b) is identical to figure 2 of the main paper and shows actual data points for convenience. (d-f) Temperature profiles for an electron-phonon coupling strength of $\Sigma$ = 0.16, $\Sigma$ = 0.32 and $\Sigma$ = 0.64 W.m$^{-2}$.K$^{-4}$. The electrical conductivity is held constant at 1.8~mS. Panel (e) is identical to panel (b), duplicated for convenience.} \end{figure*} \begin{figure} \includegraphics[width=3.0in]{FigureS4.pdf}% \caption{\label{fig:std} Electron temperature versus distance from the heating source at different applied powers. Continuous curves correspond to solutions of the heat equation (2) for powers ranging from 0 to 0.1 nW. Here, we assume the disordered case with $\delta\,=\,3$, and find that the fit quality is worse than with $\delta\,=\,4$. } \end{figure} \section{Characterization of Substrate Heating} As explained in the main paper, the temperature increases significantly at the far end of the graphene strip, $T(L)$, which we attribute to an increase in the phonon temperature $T_{ph}$ in graphene. Figure S1a shows $T(L)$ as a function of the applied power. The cooling power at the far end of the strip appears to follow a faster-than-power-law dependence (Fig. S1). This dependence and the lack of a trend with gate voltage are presently not understood.
To quantify the effects of heating on the Si/SiO$_2$ substrate, we used a different sample that also combined a heater and a thermometer. In brief, we find that the local heating of the substrate (and the sample holder) is negligible at the heating powers applied in this paper. Two graphene Josephson junctions separated by about 3 $\mu$m were fabricated on top of the same type of Si/SiO$_2$ substrate as studied in the main text. One of the junctions served as a thermometer, and the other as a heater. First, we measured the critical current $I_C$ of the thermometer junction as a function of the overall sample holder temperature, as controlled by a global heater and a resistance thermometer mounted on the sample holder. Next, with the global heater off, a heating current $I_H$ was applied to the heater junction. The critical current of the thermometer junction, $I_C$, was measured as a function of the Joule heating power $P=I_H^2 R_H$. (Here, $R_H$ is the resistance of the heater.) Thus, a curve of temperature vs heating power, $T(P)$, could be calculated (Figure S2). (For the detailed methodology see Refs. \onlinecite{Multiterminal, Phonon}). We see that when the two devices are connected only via the substrate, a heating power of $P\sim400$ pW is required to reach a temperature of $500$~mK (Figure S2). Returning to the sample studied in the main text, we conclude that the rise of the substrate temperature should be negligible, and it does not explain the increased electron temperature at distances of a few tens of microns from the heater.
\section{Modeling} The local electron temperature $T(x)$ along the graphene strip is expected to solve the stationary non-linear heat equation with local heating $P$ and electron-phonon coupling $\Sigma$, all expressed per unit of area: \begin{equation} 0=P-\Sigma (T^{\delta} - T_{ph}^{\delta} ) + \mathcal{L} \sigma \nabla \cdot (T \cdot \nabla T) \end{equation} Here $T_{ph}$ represents the phonon temperature, $\sigma$ the electrical conductivity, $\mathcal{L}$ the Lorenz number and $\delta$ an exponent. A change of variable $y=(T/T_{ph})^{2}$ and the definition of the length scale $a= \sqrt{\mathcal{L} \sigma/2\Sigma T_{ph}^{\delta-2}}$ allow us to rewrite the differential equation as: \begin{equation} \frac{d^{2}y}{dx^{2}} = \frac{1}{a^{2}}(y^{\delta/2} - 1) - \frac{2 p(x)}{\mathcal{L} \sigma T_{ph}^{2}} \end{equation} As stated previously, the vanishing heat flow at $x=0$ and $x=L$ yields the boundary conditions $y'(0)=y'(L)=0$, with $L=50$ $\mu$m. We approximate the Joule heating power density $p(x)$ as constant and non-zero only between $x=2$ $\mu$m and $x=4$ $\mu$m, which corresponds to the extent of the heating contacts. We then solve the differential equation iteratively with $y'(L)=0$ for a dense array of trial values of $y(L)$; the final solution $y(x)$ is the one that satisfies $y'(0)=0$ as well. In order to illustrate trends in the temperature profile, in Figure S3 we present solutions of the heat equation for different values of the electrical conductivity and the electron-phonon coupling $\Sigma$. As the electrical conductivity increases, heat diffusion through the electron bath is facilitated, which results in a temperature increase farther from the source, and a shallower temperature gradient close to it. When the electron-phonon coupling $\Sigma$ increases, the local electron temperature is of course lower and decays to $T_{ph}$ much faster.
In order to estimate the electron-phonon coupling $\Sigma$, we generate the temperature profile $T(x)$ for an array of values of $\Sigma$ and use a least-squares fitting procedure to extract $\Sigma\approx 0.32\pm0.09$ W.m$^{-2}$.K$^{-4}$. Finally, for disordered graphene, the cooling power of phonons is enhanced and its scaling with temperature has an exponent $\delta\,=\,3$. As explained in the main text, the crossover temperature to that regime is expected to be on the order of 1 K. Figure S4 shows a fit to our data using solutions of Equation (2) with $\delta\,=\,3$, with the optimal fit found for $\Sigma\,=\,0.29$ W.K$^{-3}$.m$^{-2}$. As expected, the temperature gradient for a given $T$ is a little less steep than for $\delta\,=\,4$. Overall, we conclude that $\delta\,=\,4$ describes our data better than $\delta\,=\,3$.
\section{INTRODUCTION} The search for $\gamma$-rays from radio galaxies is important for the understanding of the dynamics and structure of jets in active galactic nuclei (AGN). Even though radio galaxies are AGN with jets, their jets are not oriented toward the observer, and therefore the radiation produced by the jet is not Doppler-boosted towards higher energies and luminosities, making them more challenging to detect in the very high energy (VHE: $E>100$~GeV) regime. The discovery of VHE $\gamma$-rays from the radio galaxy M~87 by the HEGRA collaboration~\citep{Aharonian2003}, detected later by VERITAS~\citep{Acciari2008a}, and from NGC~5128 (Centaurus~A) by the HESS collaboration~\citep{Aharonian2009} has shown that non-blazar AGN can produce very energetic photons through non-thermal processes. Radio galaxies are classified into two main families based on the morphology of their radio emission~\citep{FanaroffRiley}, whether it is core dominated (FR~I) or lobe dominated (FR~II), with differences in the radio energetics and in the discrete spectral properties~\citep{Zirbel1995}. The large number of features that FR~I radio galaxies share with BL Lac type blazars suggests a possible unification of the two sub-classes of AGN, in which FR~I radio galaxies are BL Lac objects observed at larger jet viewing angles~\citep{UrryPadovani}. Evidence for synchrotron emission from radio to X-ray energies from both the extended structures and the core is well explained by relativistic particles moving in a beamed relativistic jet~\citep{Ghisellini1993}. A commonly considered mechanism for HE-VHE (HE: high energy, 100~MeV$<E<$100~GeV) radiation is the synchrotron-self-Compton (SSC) process~\citep{Jones1974}, where the optical and UV synchrotron photons are up-scattered by the same relativistic electrons in the jet.
Predictions concerning the inverse Compton (IC) component have long been established for the $\gamma$-ray emission~\citep{BloomMarscher1996} and frequency-dependent variability~\citep{Ghisellini1989}. Besides leptonic scenarios, several models also consider a hadronic origin for non-thermal emission in jets. Accelerated protons can initiate electromagnetic cascades or photomeson processes~\citep{Mannheim1993}, or directly emit synchrotron radiation \citep{Aharonian2002, Reimer2004} and produce $\gamma$-rays through collisions with ambient gas \citep{Beall1999, Pohl2000}. Modelling the blazar jet emission with a homogeneous SSC mechanism may imply particularly high Lorentz factors, $\Gamma \gtrsim 50$, with consequent high Doppler factors and small beaming angles $\theta \simeq 1^\circ$ \citep{Kraw2002}. Such a small beaming angle is in conflict with the unification scheme according to which FR~I radio galaxies and BL~Lac objects are the same kind of object observed at different viewing angles. Moreover, these high values for the Doppler factor are in disagreement with the small apparent velocities observed in the sub-parsec region of the TeV BL Lac objects Mrk~421 and Mrk~501 \citep{Marscher1999}. These considerations suggest a more complicated geometry, for example a decelerating flow in the jet with a consequent gradient in the Lorentz factor of the accelerated particles and a smaller average $\Gamma$ \citep{Georganopoulos2003}. As a result of this gradient, the fast upstream particles interact with the downstream seed photons with an amplified energy density, because of the Doppler boost due to the relative Lorentz factor $\Gamma_\mathrm{rel}$. The IC process then requires less extreme values for the Lorentz factor and allows larger values for the beaming angle. 
In a similar way, a jet spine-sheath structure consisting of a faster internal spine surrounded by a slower layer has also been suggested for the broadband non-thermal emission of VHE BL Lac objects~\citep{Ghisellini2005}. An inhomogeneous jet with a slow component may explain the HE-VHE emission observed in radio galaxies at larger angles ($\theta_\mathrm{layer} = 1/\Gamma_\mathrm{layer} \sim 20^\circ$). Observation of the VHE component of radio galaxies is therefore significant for AGN jet modeling. In this work an overview of the observations of radio galaxies by VERITAS is presented. \section{OBSERVATIONS} \subsection{NGC~1275} NGC~1275 (Perseus~A, 3C~84) is a nearby \mbox{($z = 0.018$)} radio galaxy located in the center of the Perseus cluster. It is one of the most unusual early-type galaxies in the nearby Universe. Its radio emission is core dominated, but it also has strong emission lines. In addition, the emission line system shows a double structure, corresponding to both a high-velocity and a low-velocity system. The puzzling nature of NGC~1275 makes it difficult to definitively classify it in a standard AGN sub-class. It has recently been detected in high-energy $\gamma$-rays by \emph{Fermi}~\cite{Abdo2009}, with a flux between 100~MeV and 25~GeV described by a power law with photon index $-2.17$. VERITAS observed the source between January and February 2009, collecting a total of 7.8~hours of good-quality data. Additional \emph{Fermi}-LAT data simultaneous with the VERITAS observational campaign have been analyzed, yielding a flux lower by a factor of 1.37 and a similar photon index ($-2.15$) compared to the \emph{Fermi}-LAT spectrum published in 2008. A differential upper limit at the decorrelation energy of $\sim$340~GeV is calculated and is incompatible \mbox{($P(\chi^2)=3.6\times10^{-4}$)} with an extrapolation of the \emph{Fermi}-measured spectrum (fig.~\ref{fig:ngc1275}). A deviation from the power-law regime is therefore a likely explanation.
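The size of the incompatibility is easy to make concrete: relative to a pure power law, an exponential cutoff $\exp(-E/E_c)$ at a few tens of GeV suppresses the extrapolated differential flux at the decorrelation energy by many orders of magnitude. The back-of-the-envelope sketch below is illustrative only, using round cutoff scales; a sub-exponential cutoff $\exp[-(E/E_c)^b]$ with $b<1$ would be suppressed less strongly at the same $E_c$.

```python
import math

E = 340.0  # GeV, roughly the decorrelation energy of the VERITAS upper limit

# Suppression of dN/dE ~ E^-Gamma * exp(-E/Ec) relative to the pure power law,
# for two illustrative cutoff scales of order tens to ~100 GeV.
for Ec in (20.0, 120.0):
    ratio = math.exp(-E / Ec)
    print(f"Ec = {Ec:5.1f} GeV: flux suppressed by exp(-E/Ec) = {ratio:.2e}")
```

Even the milder $E_c\sim120$ GeV case suppresses the flux at 340 GeV by more than an order of magnitude, so a non-detection above a few hundred GeV is naturally consistent with a cutoff well below that energy.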
Three possible models have been considered: a power law with an exponential cutoff, with a sub-exponential cutoff and a broken power law, all arising from different electron energy distributions in the jet, since absorption by the extra-galactic background light (EBL) is excluded due to the proximity of the galaxy. The three possibilities are equally compatible \mbox{($P(\chi^2)=0.2$)} with the VERITAS upper limit. The estimated cutoff energies are $E_\mathrm{exp}\approx20$~GeV and $E_\mathrm{subexp}\approx 120$~GeV in the case of an exponential and sub-exponential cutoff respectively, and $E_\mathrm{b}\approx 16$~GeV in the case of a broken power law. For details of the analysis see~\cite{Acciari2009b}. The result of the observation is rather interesting. It shows, for the first time, that there can be a deviation from the power-law regime in a radio-galaxy spectrum at an energy of the order of $\sim100$~GeV or lower. This is the first example of a scientific result obtained by VERITAS in conjunction with \emph{Fermi}. \begin{figure*}[t] \centering \includegraphics[width=135mm]{fig1.eps} \caption{NGC~1275 spectrum and the VERITAS upper limit on the differential flux at the decorrelation energy 338~GeV (standard cuts). The solid circles with error bars are the measurement by the \emph{Fermi} $\gamma$-ray space telescope during the VERITAS observation campaign. Empty circles with error bars are the measurement presented in~\cite{Abdo2009} from the energy-binned analysis. The solid line is the power-law fit to the \emph{Fermi} data. The dashed line is the extrapolation of the power law. The dotted-dashed line is the fit of a power law with an exponential cutoff at 18~GeV. The double-dotted dashed line is the fit of a power law with a sub-exponential cutoff at 120~GeV and the dotted line is the smooth broken power law fit with a break energy at 16~GeV.
All fits are done on the \emph{Fermi} data analyzed in this work (solid circles).} \label{fig:ngc1275} \end{figure*} \subsection{3C~111} 3C~111 is a nearby ($z=0.0485$) FR~II radio source whose central component is coincident with a broad-line Seyfert~1 galaxy. The radio morphology shows a double-lobe/single-jet structure with the jet emerging at an angle of $\sim63^\circ$~\cite{Linfield1984}. The central component is variable on time scales of a few months. Hints of superluminal behaviour are observed~\cite{Preuss1988, Preuss1990}. The radio spectrum is flat with an index of $-0.3$ between 6~cm and 80~cm~\cite{Becker1991}. Strong emission is detected in the mm and infra-red bands too~\cite{Bloom1994, Golombeck1988}. In the X-ray band, 3C~111 has been detected by many instruments with a long-term variability within a factor of 5~\cite{Reynolds1998}. The broadband spectral energy distribution (SED) shows a double-peaked structure (see fig.~\ref{fig:3c111} right) with typical blazar-like features, and the source has been suggested to be a misaligned blazar~\cite{Sguera2005}. The radio galaxy 3C~111 has been suggested as a counterpart for the unidentified EGRET $\gamma$-ray source 3EG~J0416+3650~\cite{Hartman1999}. This is a $\gamma$-ray source located at $l=162^\circ.2$, $b=9^\circ.97$, i.e. close to the galactic plane, with a 95\% confidence error radius of 38'.2. However, since the optical position of the radio galaxy is outside the 99\% confidence level contour of the EGRET $\gamma$-ray source ($\sim 76$~arcmin separation), the probability of the association between the two sources is rather low ($P=0.019$)~\cite{Mattox2001}. However, an additional hint supporting the association between 3C~111 and 3EG~J0416+3650 can be found in~\cite{Sguera2005}. Due to the large uncertainty in the EGRET $\gamma$-ray source position, 12~X-ray and radio sources can be found inside the $3\,\sigma$ confidence error box~(see fig.~\ref{fig:3c111} left).
Nevertheless, among the 12 sources the radio galaxy 3C~111 is the only object known to emit in both radio and X-rays, and it has the hardest and strongest X-ray flux. The $5.3\,\sigma$ detection reported by EGRET, with an average flux above 100~MeV of \mbox{$1.3\times 10^{-7}$ cm$^{-2}$ s$^{-1}$} and a simple power-law photon index of $-2.59$, makes this $\gamma$-ray source interesting for an instrument like VERITAS. If the source were detected, VERITAS, given its higher angular resolution, would be able to definitively associate the $\gamma$-ray emitter with the underlying object. VERITAS observed the radio galaxy 3C~111 during fall 2008 at a zenith angle range between $15^\circ$ and $30^\circ$. All data taken under bad weather conditions or affected by technical problems have been discarded, leaving a total of about 11~hours for analysis. No VHE signal has been detected; a 99\% confidence level upper limit above the analysis threshold of 300~GeV has been derived. The result is reported in table~\ref{tab:1}. \begin{figure*}[t] \centering \includegraphics[width=70mm]{fig2.ps} \includegraphics[width=70mm]{fig3.ps} \caption{(\emph{left}) The BeppoSAX-MECS (2-10~keV) image superimposed on the EGRET $\gamma$-ray probability contours at 50\%, 68\%, 95\%, 99\% and 99.9\% confidence level. Crosses: ROSAT faint sources; diamonds: ESS sources; plusses: NVSS radio sources; squares: GBT radio sources. (\emph{right}) Broadband SED of 3C~111. Open circles: radio; open squares: mm-band; filled triangles: IRAS; filled circles: optical; open pentagons: infra-red (2MASS); filled squares: BeppoSAX; open triangles: EGRET; arrows: $2\,\sigma$ upper limits by COMPTEL. Both figures from~\cite{Sguera2005}.} \label{fig:3c111} \end{figure*} \subsection{M~87} M~87 is a radio galaxy located in the Virgo cluster at a distance of 16~Mpc~\cite{Macri1999}.
Originally detected at TeV energies at $4\,\sigma$ significance by HEGRA~\cite{Aharonian2003} and later detected above $5\,\sigma$ by HESS~\cite{Aharonian2006}, it has also been detected by VERITAS~\cite{Acciari2008a}. The substructures of the jet are well studied at X-ray, optical and radio wavelengths~\cite{Wilson2002}, with an estimated angle of $20^\circ - 40^\circ$ between the jet and the line of sight. Its proximity and spatially-resolved structures at all wavelengths make M~87 a unique laboratory to study jet physics, especially the mechanisms related to VHE $\gamma$-ray production. Given its peculiarity, an extensive VERITAS-led coordinated multi-wavelength observational campaign, involving all major imaging air Cherenkov telescopes (IACTs) currently operating (VERITAS, MAGIC and HESS) and other X-ray and radio instruments, namely Chandra and the VLBA, was performed in 2008. Correlation studies of this broad-band observational campaign resulted in the identification of the region responsible for the origin of the $\gamma$-ray emission~\cite{Acciari2009b}. A dedicated contribution has therefore been presented at this Symposium~\cite{Wagner2009}. \section{CONCLUSIONS} VERITAS observed three radio galaxies during the last two years. Only in one case, the already-known $\gamma$-ray emitter M~87, did the observation result in a VHE $\gamma$-ray detection. Given the peculiarity of the radio galaxy M~87, which makes it a unique laboratory for the study of blazar astrophysics, in particular for jet-related processes, VERITAS coordinated an observational campaign together with the major IACT partners and other X-ray and radio partners. The broad-band observational campaign resulted in the discovery of the region responsible for the $\gamma$-ray emission. The observation of the other two radio galaxies, 3C~111 and NGC~1275, did not result in a VHE detection.
However, joint work with \emph{Fermi}-LAT on NGC~1275 resulted in the identification of a deviation from the power-law regime at an energy of the order of 100~GeV or lower, a previously unknown feature of the $\gamma$-ray component of radio galaxies. \begin{table}[t] \begin{center} \caption{VERITAS upper limits on the VHE flux of the observed radio galaxies. The five columns represent: the source name; the period of observation; the energy threshold for that specific analysis; the total observation time of good-quality data; the 99\% confidence level integral flux upper limit in cm$^{-2}$~s$^{-1}$.} \begin{tabular}{|l|c|c|c|c|} \hline \textbf{Source} & \textbf{Obs. Period} & \textbf{$E_\mathrm{th}$} & \textbf{$T_\mathrm{obs}$} & \textbf{Flux U.L.} \\ \hline \textbf{3C~111} & 10/08 - 12/08 & 300~GeV & 11~hr & $3.5\times 10^{-12}$\\ \hline \textbf{NGC~1275} & 01/09 - 02/09 & 190~GeV & 8~hr & $5.11\times 10^{-12}$ \\ \hline \textbf{M~87} & \multicolumn{4}{|c|}{See~\cite{Wagner2009}}\\ \hline \end{tabular} \label{tab:1} \end{center} \end{table} \bigskip \begin{acknowledgments} This research is supported by grants from the US Department of Energy, the US National Science Foundation, and the Smithsonian Institution, by NSERC in Canada, by Science Foundation Ireland, and by STFC in the UK. We acknowledge the excellent work of the technical support staff at the FLWO and the collaborating institutions in the construction and operation of the instrument. \end{acknowledgments} \bigskip
\section{Introduction} Predicting human, goods and information mobility between locations is an important topic in complex human behavior \cite{Ba19,Ba10}, transportation science \cite{OW11,HU18}, sociology \cite{hu09}, economic geography \cite{RT03} and regional economics \cite{Lm18,Ka00,pa07}, and it also has practical applications in urban planning \cite{ld17,ba08}, population migration \cite{to95}, cargo transportation \cite{ka10}, traffic engineering \cite{he01}, infectious disease epidemiology \cite{hu004,eu004,bal09,wag09} and emergency management \cite{bag11,lu12,ru13}. For more than 100 years, researchers have proposed a variety of models for predicting the mobility of people between locations. The most influential model is the gravity model, which is analogous to Newton's law of gravitation, i.e., the flow between two places is proportional to their populations and decays as a power of their distance. The gravity model is simple in form and has been successfully used to predict railway freight volume \cite{Z46}, subway passengers \cite{gon12}, highway traffic flow \cite{JWS08}, air travel \cite{gr07}, commuting \cite{VBS06} and population migration \cite{to95}. Subsequently, researchers derived the gravity model from the perspective of destination selection behavior using deterministic utility theory \cite{N69}, stochastic utility theory \cite{D75} and game theory \cite{yz19}. Another classic model that is also established from the perspective of destination selection behavior is the intervening opportunity (IO) model \cite{S40}. Different from the gravity model, the IO model takes the total number of opportunities (often proportional to population) between the origin and the destination (named intervening opportunities), instead of the actual distance between the two places, as a key factor in determining human mobility. The concept of intervening opportunities provides a new direction for constructing human mobility prediction models \cite{BBG17}.
Inspired by the IO model, Simini et al. establish a parameter-free human mobility model named the radiation model \cite{SGMB12}. The radiation model assumes that when seeking job offers, the commuter will choose the closest workplace to his/her home, whose benefit is higher than the best offer available in his/her home county, i.e., the benefit of home is higher than the benefits of the intervening opportunities and lower than the benefit of the workplace. The radiation model can better predict the commuting behavior between counties. Some researchers improve the radiation model and propose various commuting prediction models, such as the radiation model with selection \cite{SMN13}, generalized radiation model \cite{KLGQ15}, the flow and jump model \cite{VTN18}, travel cost optimized radiation model \cite{var16} and a cost-based radiation model \cite{REWGT14}. Yan et al. propose a population-weighted opportunities (PWO) model \cite{YZFDW14} by mining human daily travel data from several cities, such as the GPS trajectories from vehicles and call detail records from mobile phones. The PWO model assumes that the probability of an individual selecting a destination is proportional to the number of opportunities at the destination and inversely proportional to the total population at the locations whose distances to the destination are shorter than or equal to the distance from the individual's origin to the destination, which can better predict intracity trips. Yan et al. further combine the PWO model with the continuous-time random walks model \cite{mon65} to obtain a universal model of individual and population \cite{YWGL17}, which realizes the prediction of intracity and intercity mobility patterns at both the individual and population levels. {\color{black}Huang et al. propose a novel human mobility model that can capture real-time human mobility in a sustainable and economical manner, which broadens our view \cite{HU18}.} Sim et al.
establish a deliberate social tie (DST) model \cite{sim15} from the perspective of social interactions. The DST model assumes that an individual seeks out social ties only with other individuals whose attribute values are higher than the attribute value of the individual and the attribute values of the intervening opportunities. Motivated by the DST model, Liu and Yan propose an opportunity priority selection (OPS) model that assumes that the destination selected by the individual is the location that presents a higher benefit than the benefit of the origin and the benefits of the intervening opportunities \cite{LY19}. In general, all of the IO class models \cite{SGMB12,SMN13,KLGQ15,VTN18,var16,REWGT14,YZFDW14,YWGL17,sim15,LY19} share two common assumptions: (i) using an agent to represent all of the individuals; (ii) when selecting a destination, the agent will compare the benefits of different locations. The difference between these IO class models is that the rules for comparing benefits of different locations are different. Although the radiation class models \cite{SGMB12,SMN13,KLGQ15,VTN18,var16,REWGT14} can accurately predict commuting behavior and other IO class models \cite{YZFDW14,YWGL17,sim15,LY19} can accurately predict intracity and/or intercity mobility, an IO class model that can simultaneously describe the individual's destination selection behavior at different spatiotemporal scales is still lacking. In this paper, we propose a universal opportunity (UO) model to characterize an individual's destination selection behavior. The basic idea of the model is that when an individual selects a destination, she/he will comprehensively compare the benefits of the origin, the destination and the intervening opportunities. Furthermore, we use various mobility data sets to demonstrate the predictive power of our model. 
The results show that the model can accurately predict movements on different spatiotemporal scales, such as intracity trips, intercity travel, intercity freight, commuting, job hunting and migration. Moreover, our model can also cover the classical radiation model and OPS model, presenting a new universal framework for predicting human mobility in different scenarios. \section{Results} \subsection{Model} We assume that, as in the radiation model \cite{SGMB12} and the OPS model \cite{LY19}, when an individual chooses a destination she/he first evaluates the benefits of the opportunities at each location \cite{pan03}, where each benefit is randomly drawn from a distribution $p(z)$. After that, the individual comprehensively compares the benefits of the origin, the destination and the intervening opportunities and selects a location as the destination. To characterize this comprehensive comparison of the benefits of the locations, we use two parameters, $\alpha$ and $\beta$. Parameter $\alpha$ reflects the individual's tendency to choose a destination whose benefit is higher than the benefits of the origin and the intervening opportunities. Parameter $\beta$ reflects the individual's tendency to choose a destination whose benefit is higher than the benefit of the origin while the benefit of the origin is higher than the benefits of the intervening opportunities.
According to the above assumption, the probability that location $j$ is selected by the individual at location $i$ is \begin{equation} \label{eq1} Q_{ij}=\int_0^{\infty}{\mathrm{Pr}_{m_{i}+{\alpha}{\cdot}s_{ij}}(z)} {\mathrm{Pr}_{{\beta}{\cdot}s_{ij}}(<z)}\mathrm{Pr}_{m_{j}}(>z) \mathrm{d}z, \end{equation} where $m_{i}$ is the number of opportunities at location $i$, $m_{j}$ is the number of opportunities at location $j$, $s_{ij}$ is the number of intervening opportunities \cite{S40} (i.e., the sum of the number of opportunities at all locations whose distances from $i$ are shorter than the distance from $i$ to $j$), ${\mathrm{Pr}_{m_{i}+{\alpha}{\cdot}s_{ij}}(z)}$ is the probability that the maximum benefit obtained after $m_{i}+{\alpha}{\cdot}s_{ij}$ samplings is exactly $z$, ${\mathrm{Pr}_{{\beta}{\cdot}s_{ij}}(<z)}$ is the probability that the maximum benefit obtained after ${\beta}{\cdot}s_{ij}$ samplings is less than $z$, and $\mathrm{Pr}_{m_{j}}(>z)$ is the probability that the maximum benefit obtained after $m_{j}$ samplings is greater than $z$; $\alpha$ and $\beta$ are both non-negative and $\alpha+\beta\leq1$. {\color{black}Since $\mathrm{Pr}_{x}(<z)=p(<z)^{x}$, we obtain \begin{equation} \label{eq2} \mathrm{Pr}_{x}(z)=\frac{\mathrm{d}\mathrm{Pr}_{x}(<z)}{\mathrm{d}z}=x p(<z)^{x-1}\frac{\mathrm{d}p(<z)}{\mathrm{d}z}. \end{equation} Eq.
(\ref{eq1}) can be written as \begin{equation} \label{eq3} \begin{aligned} Q_{ij}=&\int_0^{\infty}{\mathrm{Pr}_{m_{i}+{\alpha}{\cdot}s_{ij}}(z)} {\mathrm{Pr}_{{\beta}{\cdot}s_{ij}}(<z)}\mathrm{Pr}_{m_{j}}(>z) \mathrm{d}z\\ =&(m_i+{\alpha} s_{ij}) \int_0^{1}(p(<z)^{m_i+({\alpha}+{\beta})s_{ij}-1}\\ &-p(<z)^{m_i+({\alpha}+{\beta})s_{ij}+m_{j}-1})\mathrm{d}p(<z)\\ =&(m_i+{\alpha} s_{ij}) (\frac{p(<z)^{m_i+({\alpha}+{\beta})s_{ij}}}{m_i+({\alpha}+{\beta})s_{ij}}\Big\vert_0^{1}\\ &-\frac{p(<z)^{m_i+({\alpha}+{\beta})s_{ij}+m_j}}{m_i+({\alpha}+{\beta})s_{ij}+m_j}\Big\vert_0^{1})\\ =&\frac{({m_i+{\alpha}s_{ij}}){m_j}}{[{m_i+({\alpha}+{\beta})s_{ij}}][{m_i+({\alpha}+{\beta})s_{ij}+m_j}]}. \end{aligned} \end{equation}} Then, the probability of the individual at location $i$ choosing location $j$ is \begin{equation} \label{eq4} P_{ij}=\frac{Q_{ij}}{\sum\limits_j Q_{ij}}\propto\frac{({m_i+{\alpha}s_{ij}}){m_j}}{[{m_i+({\alpha}+{\beta})s_{ij}}][{m_i+({\alpha}+{\beta})s_{ij}+m_j}]}. \end{equation} Further, if we know the total number of individuals $O_i$ who travel from location $i$, the flux $T_{ij}$ from location $i$ to location $j$ can be calculated as \begin{equation} \label{eq5} T_{ij}=O_iP_{ij}= O_i\frac{\frac{({m_i+{\alpha}s_{ij}}){m_j}}{([{m_i+({\alpha}+{\beta})s_{ij}}][{m_i+({\alpha}+{\beta})s_{ij}+m_j}])}}{\sum\limits_j\frac{({m_i+{\alpha}s_{ij}}){m_j}}{([{m_i+({\alpha}+{\beta})s_{ij}}][{m_i+({\alpha}+{\beta})s_{ij}+m_j}])}}. \end{equation} This is the final form of the model and we name it the universal opportunity (UO) model. {\color{black} The $\alpha$ and $\beta$ parameters in the UO model reflect the two behavioral tendencies of the individual when choosing potential destinations (where the opportunity benefit is higher than the benefit of the origin). From Eq. (\ref{eq3}), we can see that the larger the value of parameter $\alpha$, the greater the probability that distant potential destinations will be selected by the individual. 
We name this behavioral tendency the exploratory tendency. On the other hand, the larger the value of parameter $\beta$, the greater the probability that near potential destinations will be selected by the individual. We name this behavioral tendency the cautious tendency.} We choose average travel distance and normalized entropy as two fundamental metrics to discuss the influence of the two parameters $\alpha$ and $\beta$ on individual destination selection behavior. The average travel distance reflects the bulk density of individual destination selection \cite{BHG06,GHB08,Y13,roth11}, and normalized entropy reflects the heterogeneity of individual destination selection \cite{ea10}. As shown in Fig. \ref{fig1}, the two fundamental metrics exhibit the same regularities as the two parameters change, whether the number of destination opportunities follows a uniform or a random distribution. When $\alpha=0$, $\beta=1$, the average travel distance is the shortest, and the normalized entropy value is the smallest; when $\alpha=0$, $\beta=0$, the average travel distance is the longest, and the normalized entropy value is the largest. From the definitions of the two parameters, we can easily explain these regularities. When $\alpha$ is closer to 0 and $\beta$ is closer to 1, the individual {\color{black} is more cautious, and the probability of choosing near potential destinations is higher}, so the average travel distance is shorter and the heterogeneity is stronger.
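These regularities can be checked numerically with a direct implementation of Eq.~(\ref{eq4}) on a toy one-dimensional lattice. The lattice, the uniform opportunity counts and the helper-function names below are illustrative assumptions rather than part of the model; the normalized entropy is computed over the destination distribution of a single origin.

```python
import math

def uo_probabilities(origin, coords, m, alpha, beta):
    """Destination-choice probabilities P_ij of the UO model (Eq. 4).

    coords: list of (x, y) positions; m: opportunities per location.
    s_ij is the total number of opportunities at locations strictly
    closer to the origin than the candidate destination j.
    """
    n = len(coords)
    d = lambda a, b: math.dist(coords[a], coords[b])
    q = {}
    for j in range(n):
        if j == origin:
            continue
        s_ij = sum(m[k] for k in range(n)
                   if k != origin and k != j and d(origin, k) < d(origin, j))
        mi, mj = m[origin], m[j]
        denom = (mi + (alpha + beta) * s_ij) * (mi + (alpha + beta) * s_ij + mj)
        q[j] = (mi + alpha * s_ij) * mj / denom
    total = sum(q.values())
    return {j: v / total for j, v in q.items()}

def average_distance(origin, coords, p):
    return sum(pj * math.dist(coords[origin], coords[j]) for j, pj in p.items())

def normalized_entropy(p):
    return -sum(pj * math.log(pj) for pj in p.values() if pj > 0) / math.log(len(p))
```

On a uniform lattice, the cautious corner ($\alpha=0$, $\beta=1$) yields a shorter average travel distance and smaller normalized entropy than the corner $\alpha=\beta=0$, in line with the trend described above.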
{\color{black} When $\alpha$ is closer to 1 and $\beta$ is closer to 0, the individual is more exploratory, and the probability of choosing distant potential destinations is higher, so the average travel distance increases while the heterogeneity decreases.} When $\alpha$ and $\beta$ are both closer to 0, the individual attaches more importance to the benefit that a location brings to him/her and cares little about the distance ordering of the locations, so the average travel distance is longer and the homogeneity is stronger. \begin{figure} \flushright \includegraphics[width=1.02\linewidth]{fig1.pdf} \caption{{\bf Average travel distance and normalized entropy versus different parameter combinations.} ({\it a}-{\it b}) Average travel distance and normalized entropy values corresponding to different parameter combinations. Here, the number of destination opportunities is a uniform distribution. ({\it c}-{\it d}) Same average travel distance and normalized entropy values as in ({\it a}-{\it b}), but the number of destination opportunities is a random distribution.} \label{fig1} \end{figure} Moreover, when $\alpha$ and $\beta$ take extreme values (i.e., the three vertices of the triangle in Fig. \ref{fig1}), we can derive three special human mobility models. When $\alpha=0$, $\beta=0$, we name this model the opportunity only (OO) model. In this model, the individual chooses the location whose benefit is higher than the benefit of the origin. Then, the probability of the individual at location $i$ choosing location $j$ as the destination is \begin{equation} \label{eq6} P_{ij}=\frac{m_j/(m_i+m_j)}{\sum\limits_j m_j/(m_i+m_j)}\propto\frac{m_j}{{m_i+m_j}}. \end{equation} When $\alpha=1$, $\beta=0$, our model can be simplified to the OPS model, in which the individual chooses the location whose benefit is higher than the benefit of the origin and the benefits of the intervening opportunities.
Then, the probability of the individual at location $i$ choosing location $j$ as the destination is \begin{equation} \label{eq7} P_{ij}=\frac{m_j/(m_i+s_{ij}+m_j)}{\sum\limits_j m_j/(m_i+s_{ij}+m_j)}\propto\frac{m_j}{{m_i+s_{ij}+m_j}}. \end{equation} When $\alpha=0$, $\beta=1$, our model can be simplified to the radiation model, in which the individual chooses the location whose benefit is higher than the benefit of the origin, while the benefits of the intervening opportunities are lower than the benefit of the origin. Then, the probability of the individual at location $i$ choosing location $j$ as the destination is \begin{equation} \label{eq8} \begin{aligned} P_{ij}=&\frac{m_im_j/[({m_i+s_{ij}})({m_i+s_{ij}+m_j})]}{\sum\limits_j m_im_j/[({m_i+s_{ij}})({m_i+s_{ij}+m_j})]}\\ \propto&\frac{m_im_j}{({m_i+s_{ij}})({m_i+s_{ij}+m_j})}. \end{aligned} \end{equation} From equations (\ref{eq6})-(\ref{eq8}), we can see that the OO model, the OPS model and the radiation model are all special cases of our UO model. \subsection{Prediction} We use {\color{black} fourteen} empirical data sets, including commuting trips between United States' counties (USC), {\color{black} commuting trips between the provinces of Italy (ITC), commuting trips between the subregions of Hungary (HUC),} freight between Chinese cities (CNF), internal job hunting in China (CNJ), internal migrations in the US (USM), intercity travels in China (CNT), intercity travels in the US (UST), {\color{black} intercity travels in Belgium (BLT),} intracity trips in Suzhou (SZT), {\color{black} intracity trips in Beijing (BJT), intracity trips in Shenzhen (SHT), intracity trips in London (LOT) and intracity trips in Berlin (BET)} (see {\bf Methods}), to validate the predictive ability of the UO model. We first extract the flux $T_{ij}$ from location $i$ to location $j$ from the data set and obtain the real mobility matrix.
Then, we exploit the S{\o}rensen similarity index \cite{YZFDW14} (SSI, see {\bf Methods}) to calculate the similarity between the real mobility matrix and the mobility matrix predicted by the UO model under different parameter combinations. The results are shown in Fig. \ref{fig2}. {\color{black}Figure \ref{fig2}{\it o} shows the optimal values of the parameters $\alpha$ and $\beta$ corresponding to the highest SSI for the fourteen data sets.} \begin{figure*} \centering \includegraphics[width=1.0\linewidth]{fig2.pdf} \caption{{\bf Results for empirical data sets.} ({\it a}-{\it {\color{black} n}}) We exploit SSI to calculate the similarity between the real mobility matrix and the predicted mobility matrix under different parameter combinations for the {\color{black}fourteen} data sets. Here, the color bar represents the SSI, where a dark red (blue) dot indicates a higher (lower) SSI. ({\it {\color{black} o}}) The optimal values of the parameters $\alpha$ and $\beta$ corresponding to the highest SSI for the {\color{black}fourteen} data sets.} \label{fig2} \end{figure*} \begin{figure*} \flushleft \includegraphics[width=0.96\linewidth]{fig3.pdf} \caption{{\bf Comparing the prediction accuracy of the UO model, the radiation model, the OPS model and the OO model in terms of SSI.}} \label{fig-3} \end{figure*} \begin{table*}\centering \caption{{\color{black}{\bf Comparison of model prediction accuracy.} SSI is the S{\o}rensen similarity index between the real mobility matrix and the mobility matrix predicted by different models. RMSE is the root mean square error of the predicted mobility matrix.
UO, RM, OPS, and OO stand for the universal opportunity model, the radiation model, the opportunity priority selection model and the opportunity only model, respectively.} } \label{tb-1} {\color{black} \begin{tabular}{p{1.6cm}p{1.6cm}p{1.6cm}p{1.6cm}p{1.6cm}p{1.99cm} p{1.99cm}p{1.99cm}p{1.99cm}} \hline Data set & SSI-UO & SSI-RM & SSI-OPS & SSI-OO & RMSE-UO & RMSE-RM & RMSE-OPS & RMSE-OO \\ \hline USC & 0.610 & 0.603 & 0.384 & 0.042 & 2158.766 & 2308.054 & 2948.205 & 3654.402 \\ ITC & 0.648 & 0.641 & 0.447 & 0.158 & 1600.862 & 1696.488 & 2132.627 & 3033.019 \\ HUC & 0.549 & 0.504 & 0.504 & 0.186 & 477.254 & 546.878 & 429.377 & 612.904 \\ CNF & 0.676 & 0.561 & 0.587 & 0.289 & 111.201 & 184.724 & 128.789 & 183.655 \\ CNJ & 0.739 & 0.449 & 0.738 & 0.567 & 185.709 & 481.072 & 189.816 & 297.379 \\ USM & 0.767 & 0.434 & 0.759 & 0.632 & 1126.110 & 3275.661 & 1218.255 & 1521.585 \\ CNT & 0.702 & 0.518 & 0.698 & 0.452 & 441.063 & 829.869 & 438.463 & 731.153 \\ UST & 0.748 & 0.607 & 0.729 & 0.518 & 55.851 & 95.013 & 65.513 & 115.795 \\ BLT & 0.796 & 0.639 & 0.791 & 0.611 & 26.236 & 58.641 & 26.339 & 48.080 \\ SZT & 0.757 & 0.358 & 0.732 & 0.463 & 7.871 & 47.801 & 9.133 & 12.553 \\ BJT & 0.748 & 0.268 & 0.697 & 0.489 & 6.567 & 68.039 & 12.291 & 12.040 \\ SHT & 0.760 & 0.358 & 0.734 & 0.470 & 48.196 & 368.901 & 71.152 & 91.000 \\ LOT & 0.661 & 0.416 & 0.657 & 0.476 & 4.309 & 20.031 & 4.603 & 8.104 \\ BET & 0.646 & 0.421 & 0.642 & 0.447 & 3.288 & 11.271 & 3.356 & 5.323 \\ \hline \end{tabular} } \end{table*} {\color{black} It can be seen from Fig. \ref{fig2}{\it a}-{\it d} that for USC, ITC, HUC and CNF, when $\alpha$ is close to 0 and $\beta$ is close to 1, the SSI is relatively large. The reason is that for commuting data sets (USC, ITC and HUC), the commuting distance or time is very important for commuters. 
As a result, most people tend to choose near potential destinations when finding a job based on their place of residence or adjusting their place of residence after finding a job. This cautious destination selection tendency also exists in freight. Freight to far destinations will lead to an increase in transportation costs and a decrease in the freight frequency, which will have a negative impact on freight revenue. Thus, unless the destination opportunity benefit is very high, the individual tends to choose a near destination rather than a far destination for freight. For the migration and job hunting data sets (USM and CNJ), when $\alpha$ is close to 1 and $\beta$ is close to 0, the SSI is relatively large, as shown in Fig. \ref{fig2}{\it e}-{\it f}. The reason is that both job seekers and migrants pay more attention to the destination opportunity benefit rather than the distance to the destination. In other words, they are more exploratory but less cautious. Even if a high benefit destination is far away, it will still be selected by individuals with a relatively high probability. The reason is that the distance to the destination has a smaller impact on long temporal scale mobility behaviors, such as migration and job hunting, than on daily commuting behaviors. For intercity travel data sets (CNT, UST and BLT), when $\alpha$ and $\beta$ are both near the middle of the diagonal line of the triangle, the SSI is relatively large, as shown in Fig. \ref{fig2}{\it g}-{\it i}. For most people, intercity travel is occasional and not as frequent as commuting. Travelers are less inclined than commuters to choose near potential destinations but they tend to explore distant potential destinations. Thus, the exploratory tendency parameter $\alpha$ of intercity travels is much larger than that of commuting. On the other hand, the importance of the travel cost of intercity travels is higher than that of the cost of migration. 
Thus, the cautious tendency parameter $\beta$ of intercity travels is larger than that of migration. For the intracity trip data sets (SZT, BJT, SHT, LOT and BET), when $\alpha$ and $\beta$ are both close to 0, the SSI is relatively large, as shown in Fig. \ref{fig2}{\it j}-{\it n}. The reason is that compared with the intercity mobility behavior on a large spatial scale, the spatial scale of intracity mobility behavior is small. In this scenario, the individual is not necessarily concerned about the travel distance and focuses more on the benefit that the location will directly bring to him/her. Thus, the optimal values of $\alpha$ and $\beta$ are both close to 0, as shown in Fig. \ref{fig2}{\it o}. } We next compare the predictive accuracy of the mobility fluxes of the UO model with the radiation model, the OPS model and the OO model. In terms of SSI, as shown in Fig. \ref{fig-3} {\color{black}and Table \ref{tb-1}}, the UO model performs best. However, {\color{black} the radiation model and the OPS model provide relatively accurate predictions only for some data sets. For example, the radiation model can predict commuting and freight trips relatively accurately but cannot accurately predict other types of mobility. The reason is that the individual tends to choose near potential destinations rather than distant potential destinations in commuting and freight, where travel costs are more important. From Fig. \ref{fig2}{\it o}, we can see that for commuting and freight data sets, the optimal parameter $\beta$ (which reflects the individual's cautious tendency) of the UO model is close to 1, and the optimal parameter $\alpha$ (which reflects the individual's exploratory tendency) is close to 0. Therefore, the prediction accuracy of the radiation model, in which the individual only chooses the closest potential destination (i.e., $\alpha=0, \beta=1$), is close to that of the UO model in commuting and freight data sets.
However, the prediction accuracy of the radiation model is considerably lower than that of the UO model in job hunting, migration and noncommuting travel data sets. The reason is that the individual is more likely to choose distant potential destinations in these data sets. In these cases, the prediction accuracy of the OPS model, in which the individual tends to choose distant potential destinations, is closer to that of the UO model.} {\color{black} We further use a widely used statistical index, the root mean square error (RMSE), to measure the prediction errors of the UO model and the other three models, and Table \ref{tb-1} lists the results. From the table, we can see that in most cases, the RMSE of the UO model is smaller than that of the other benchmark models, although the RMSE is not the parameter optimization objective of the UO model.} These results indicate that the three models only capture the individual's destination selection behavior at a specific spatiotemporal scale. Yet our UO model can accurately describe the individual's destination selection behavior at different spatiotemporal scales. \section{Discussion} Although previous IO class models are widely used to predict the mobility of people between locations \cite{SGMB12,SMN13,KLGQ15,VTN18,var16,REWGT14,YZFDW14,YWGL17,sim15,LY19}, these models can only achieve accurate prediction at specific spatiotemporal scales. In this paper, we developed a UO model to predict human mobility at different spatiotemporal scales. Our model establishes a new framework in IO class models and covers the classical radiation model \cite{SGMB12} and the OPS model \cite{LY19}. {\color{black} Although the UO model has two parameters, they are different from the parameters in some regression analysis or machine learning models, whose role is simply to improve the prediction accuracy of the model.
These two parameters essentially describe the two tendencies, i.e., the exploratory tendency and the cautious tendency, of an individual's destination selection behavior. They not only enable the UO model to better predict human mobility at different spatiotemporal scales than the parameter-free models but also help us better understand the underlying mechanism of the individual's destination selection behavior in different types of human mobility.} Many phenomena in the field of complex systems are strongly related to human mobility \cite{BBG17}. For example, the spread of disease is directly affected by human travel distance between locations and the population size of locations \cite{hu004,eu004,bal09,watt05,kra16,kit10,vib06}. The UO model can accurately describe the individual's destination selection behavior at different spatiotemporal scales, which has potential applications for understanding the spread of disease among humans. Not only that, but the IO model can also describe an individual's selection behavior in social networks such as friend networks and scientific collaboration networks. In friend networks, the individual tends to choose friends who are close to him/her and have a high sense of identity \cite{sim15,ill13}. In scientific collaboration networks, the individual tends to choose nearby scholars who have high scientific influence \cite{pan12}. These phenomena indicate that when one seeks to build beneficial ties, she/he will take into account both the distance and the benefits of the opportunities. The UO model can describe how individuals select whom to interact with, providing a new perspective for social network analysis. Despite its good performance in predicting human mobility, the UO model has room for further improvement. For example, most existing IO class models use an agent to represent all of the individuals and neglect the diversity of individual selection behavior \cite{Y13,song10,bj11,panl15,Gal16,zhao16}.
Building a mobility prediction model for each individual may reflect this diversity in detail. However, it is extremely cumbersome and cannot capture the commonalities among individuals' mobility patterns. One possible approach is to first cluster individuals according to their mobility behavior characteristics \cite{lou15,lian11,lian14} and then extend our UO model for different classes of individuals, which may predict human mobility more accurately. \section{Material and methods} \subsection{Data sets} (1) {\color{black}Commuting trips. The commuting trip data sets include the commuting trips between United States' counties \cite{SGMB12} (USC), the commuting trips between the provinces of Italy \cite{VTN18} (ITC) and the commuting trips between the subregions of Hungary \cite{VTN18} (HUC), which were downloaded from http://www.census.gov/population/www/cen2000/commuting/index.html, http://www.stat.it/storage/cartografia/matrici\_pendolarismo/matrici\_pendolarismo\_2011.zip and http://www.ksh.hu, respectively. Since we focus on mobility among zones (counties, provinces or subregions), all the residences/workplaces within a zone are regarded as the same with an identical zone label. Then, we can accumulate the total number $T_{ij}$ of trips from zone $i$ to zone $j$, as is also done for the following data sets.} (2) Freight between Chinese cities (CNF). The CNF data set is extracted from the travel records of freight between Chinese cities from 19 May 2015 to 23 May 2015. When freight is loaded or unloaded, the coordinates and time are recorded automatically by a GPS-based device installed in the truck. All the loading/unloading locations within a city are regarded as the same with an identical zone label. (3) Internal job hunting in China (CNJ).
The CNJ data set is extracted from more than 160 million job hunters' resumes from 2006 to 2016 and was downloaded from https://www.zhaopin.com. The resumes contain job hunters' work experience, from which we can obtain a job hunter's former workplaces. All the workplaces within a city are regarded as the same with an identical zone label. (4) Internal migrations in the US (USM). The USM data set is extracted from the Statistics of Income Division of the Internal Revenue Service (IRS) in the US from 2011 to 2012 and was downloaded from https://www.irs.gov/statistics/soi-tax-stats-migration-data. The IRS data contain records of all individual income tax forms filed in each year, from which we can determine who has, or has not, moved residence/workplace locations in the intervening fiscal year \cite{BBG17}. All the residence/workplace locations within a state are regarded as the same with an identical zone label. (5) {\color{black}Intercity travels. The intercity travel data sets include intercity travels in China (CNT), intercity travels in the US (UST) and intercity travels in Belgium (BLT).} The CNT data set is extracted from check-in records of the Sina Weibo website for users in mainland China \cite{YWGL17}. The UST data set is extracted from check-in records of the Foursquare website for users in the continental US \cite{lev12}. {\color{black}The BLT data set is extracted from check-in records of the website Gowalla for users in Belgium \cite{cho11}.} {\color{black}These data sets contain} each user's spatial and temporal information, from which we can obtain the user's location. All the check-in locations within a city are regarded as the same with an identical zone label. (6) {\color{black}Intracity trips.
The intracity trip data sets include intracity trips in Suzhou (SZT), intracity trips in Beijing (BJT), intracity trips in Shenzhen (SHT), intracity trips in London (LOT) and intracity trips in Berlin (BET).} The SZT data set is extracted from the mobile phone call detail records in Suzhou, a city of China. The data contain the time and positions of users making phone calls or sending text messages. {\color{black}The BJT data set \cite{liang12} and the SHT data set \cite{liang12} are extracted from the travel records of taxi passengers in Beijing and Shenzhen, respectively. When a passenger gets on or gets off a taxi, the coordinates and time are recorded automatically by a GPS-based device installed in the taxi. The LOT data set \cite{cho11} and the BET data set \cite{cho11} are extracted from check-in records at Gowalla in London and Berlin, respectively. Because of the absence of natural partitions in cities (in contrast to states or counties), the city is divided into zones, each of which is 1 km $\times$ 1 km (for SZT, 0.01 longitude $\times$ 0.01 latitude).} All the locations within a zone are regarded as the same with an identical zone label \cite{YZFDW14}. \subsection{Normalized entropy} We use normalized entropy to reflect the heterogeneity of individual destination selection: \begin{equation} \label{eq9} E_{i}=\dfrac{-\sum\limits_{j=1}^{N}p_{ij}\log(p_{ij})}{\log(N)}, \end{equation} where $E_{i}$ is the normalized entropy of location $i$, $p_{ij}$ is the probability that the individual at location $i$ chooses location $j$ as his/her destination, and $N$ is the number of locations. \subsection{S{\o}rensen similarity index} The S{\o}rensen similarity index \cite{S48} is a similarity measure between two samples.
Here, we apply a modified version \cite{YZFDW14} of the index to measure whether real fluxes are correctly reproduced (on average) by theoretical models, defined as \begin{equation} \label{eq10} \mathrm{SSI} = \frac{1}{N(N-1)}\sum^{N}_{i}{\sum^{N}_{j \neq i}{\frac{2 \min (T_{ij},T^{'}_{ij})}{T_{ij}+T^{'}_{ij}} }}, \end{equation} where $N$ is the number of locations, $T_{ij}$ is the predicted flux from location $i$ to $j$ and $T^{'}_{ij}$ is the empirical flux. Obviously, if each $T_{ij}$ is equal to $T^{'}_{ij}$ the index is 1, and if all $T_{ij}$ are far from the real values, the index is close to 0. \noindent\textbf{Acknowledgements:} X.-Y.Y. was supported by NSFC under grant nos. 71822102, 71621001 and 71671015.
\section{Preliminaries} \subsubsection*{Finite Automata} An \emph{NFA} is a quintuple $\A = (Q,\Sigma,\delta,I,F)$, where $Q$ is the finite set of states, $\Sigma$ is the finite alphabet, $\delta \subseteq Q \times \Sigma \times Q$ is the transition relation, $I \subseteq Q$ is the set of initial states, and $F \subseteq Q$ is the set of accepting states. We write $q \xrightarrow{a} r$ to denote that $(q,a,r) \in \delta$. A finite sequence $q_0 \xrightarrow{a_1} q_1 \xrightarrow{a_2} \cdots \xrightarrow{a_n} q_n$ is called a \emph{run}; it can be summarized as $q_0 \xrightarrow{a_1 \cdots a_n} q_n$. The NFA~$\A$ \emph{recognizes} the language $L(\A) := \{w \in \Sigma^* \mid \exists\, q_0 \in I \,.\, \exists\, f \in F \,.\, q_0 \xrightarrow{w} f\}$. The NFA~$\A$ is a \emph{DFA} if $|I| = 1$ and for every $q \in Q$ and $a \in \Sigma$ there is exactly one $q'$ with $q \xrightarrow{a} q'$. The NFA~$\A$ is a \emph{UFA} if for every word $w = a_1 \cdots a_n \in \Sigma^*$ there is at most one \emph{accepting} run for~$w$, i.e., a run $q_0 \xrightarrow{a_1} q_1 \xrightarrow{a_2} \cdots \xrightarrow{a_n} q_n$ with $q_0 \in I$ and $q_n \in F$. Clearly, any DFA is a UFA. \subsubsection*{Notation $\tO$ and $\tOmega$} We use the notation $\tO(f(n))$ and $\tOmega(f(n))$ to hide polylogarithmic factors; i.e., $\tO(f(n)) = f(n) \log^{O(1)} f(n)$ and $\tOmega(f(n)) = f(n) / \log^{O(1)} f(n)$. \section{UFA Complementation} \label{sec:complement} Given two finite automata $\A_1, \A_2$, recognizing languages $L_1, L_2 \subseteq \Sigma^*$, respectively, the \emph{state complexity} of union (or intersection, or complement, etc.) is how many states may be needed for an automaton that recognizes $L_1 \cup L_2$ (or $L_1 \cap L_2$, or $\Sigma^* \setminus L_1$, etc.). It depends on the type of automaton considered, such as NFAs, DFAs, or UFAs. 
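These automata notions can be made concrete with a small simulator (an illustrative sketch, not part of the paper): an NFA is run by propagating, for each state, the number of runs reaching it, and a UFA is then an NFA for which no word has more than one accepting run.

```python
from itertools import product

def accepting_runs(states, delta, initial, final, word):
    """Number of accepting runs of the NFA (states, delta, initial, final)
    on `word`, where delta is a set of (state, letter, state) triples.
    The NFA accepts the word iff the result is >= 1; in a UFA the result
    is at most 1 for every word."""
    # runs[q] = number of runs from an initial state to q on the prefix read so far
    runs = {q: int(q in initial) for q in states}
    for a in word:
        runs = {r: sum(runs[q] for (q, b, t) in delta if b == a and t == r)
                for r in states}
    return sum(runs[q] for q in final)

def is_unambiguous_up_to(states, delta, initial, final, alphabet, max_len):
    """Finite check of the UFA property on all words of length <= max_len
    (establishing unambiguity in general requires an argument over the
    product automaton, not mere enumeration)."""
    return all(accepting_runs(states, delta, initial, final, w) <= 1
               for n in range(max_len + 1)
               for w in product(alphabet, repeat=n))
```

For a DFA, `accepting_runs` is always 0 or 1, since each prefix determines a unique run.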
The state complexity has been well studied for various types of automata and language operations, see, e.g., \cite{JirasekJS18} and the references therein for some known results. For example, it was shown in~\cite{HolzerK03} that complementing an NFA with $n$~states may require $\Theta(2^n)$ states. However, the state complexity for UFAs is not yet fully understood. It was shown only in~2018 by Raskin~\cite{Raskin18} that the state complexity for UFAs and complement is not polynomial: \begin{proposition}[\cite{Raskin18}] \label{prop:Raskin18} For any $n \in \N$ there exists a UFA~$\A$ with $n$~states and unary alphabet~$\Sigma$ (i.e., $|\Sigma|=1$) such that any NFA that recognizes $\Sigma^* \setminus L(\A)$ has at least $n^{(\log \log \log n)^{\Omega(1)}}$ states. \end{proposition} This super-polynomial blowup (even for unary alphabet and even if the output automaton is allowed to be ambiguous) refuted a conjecture that it may be possible to complement UFAs with a polynomial blowup~\cite{Colcombet15}. A non-trivial upper bound (for general alphabets and outputting a UFA) was shown by Jir{\'{a}}sek et al.~\cite{JirasekJS18}: \begin{proposition}[\cite{JirasekJS18}] \label{prop:Jirasek} Let $\A$ be a UFA with $n \ge 7$ states that recognizes a language $L \subseteq \Sigma^*$. Then there exists a UFA with at most $n \cdot 2^{0.786 n}$ states that recognizes the language $\Sigma^* \setminus L$. \end{proposition} An almost tight analysis \cite{IK21} of Jir{\'{a}}sek et al.'s construction yields a slight improvement: \begin{proposition}[\cite{IK21}] \label{prop:IK21} Let $\A$ be a UFA with $n \ge 0$ states that recognizes a language $L \subseteq \Sigma^*$. Then there exists a UFA with at most $\sqrt{n+1} \cdot 2^{n/2}$ states that recognizes the language $\Sigma^* \setminus L$. 
\end{proposition} In this section we improve the lower bound from \cref{prop:Raskin18}: \begin{theorem} \label{thm:complement} For infinitely many $N \in \N$ there is a UFA~$\A$ with $N$~states and alphabet $\Sigma = \{0,1\}$ and finite $L(\A) \subseteq \Sigma^*$ such that any NFA that recognizes $\Sigma^* \setminus L(\A)$ has at least $N^{\tOmega(\log N)}$ states. \end{theorem} Like \cref{prop:Raskin18}, the lower bound holds even for NFAs (not just UFAs) that recognize the complement language. Unlike \cref{prop:Raskin18}, the lower bound in \cref{thm:complement} uses a binary alphabet, i.e., $|\Sigma| = 2$. In the rest of the section we prove \cref{thm:complement}. The proof uses concepts and results from communication complexity, in particular a recent result from \cite{Balodis2021FOCS}. \subsection{Communication Complexity} \label{sub:com-com} Let $D = C_1 \lor \cdots \lor C_m$ be an $n$-variate boolean formula in disjunctive normal form (DNF). DNF~$D$ has \emph{width}~$k$ if every $C_i$ is a conjunction of at most $k$ literals. We call such~$D$ a \emph{$k$-DNF}. For conjunctive normal form (CNF) formulas the width and $k$-CNFs are defined analogously. DNF~$D$ is said to be \emph{unambiguous} if for every input $x \in \{0,1\}^n$ at most one of the conjunctions~$C_i$ evaluates to true, $C_i( x ) = 1$. For any boolean function $f : \{0,1\}^n \to \{0,1\}$ define \begin{itemize} \item $\C_1(f)$ as the least $k$ such that $f$ can be written as a $k$-DNF; \item $\C_0(f)$ as the least $k$ such that $f$ can be written as a $k$-CNF; \item $\UC_1(f)$ as the least $k$ such that $f$ can be written as an unambiguous $k$-DNF. \end{itemize} Note that $\C_0(f) = \C_1(\neg f)$. The following is a recent result~\cite{Balodis2021FOCS}: \begin{theorem}[{\cite[Theorem~1]{Balodis2021FOCS}}] \label{thm:Puzzle-I} For infinitely many~$n$ there exists a boolean function $f: \{0,1\}^n \to \{0,1\}$ with $\UC_1(f) = n^{\Omega(1)}$ and $\C_0(f) = \tOmega(\UC_1(f)^2)$. 
\end{theorem} In words, for infinitely many~$k$ there is an unambiguous $k$-DNF such that any equivalent CNF requires width~$\tOmega(k^2)$. The bound is almost tight, as every unambiguous $k$-DNF has an equivalent $k^2$-CNF; see~\cite[Section~3]{Goos15}. We need results on two-party communication complexity; see \cite{KushilevitzNisan} for the standard textbook. Consider a ``two-party'' function $F : X \times Y \to \{0,1\}$. A set $A \times B \subseteq X \times Y$ (with $A \subseteq X$ and $B \subseteq Y$) is called a \emph{rectangle}. Rectangles $R_1, \ldots, R_k$ \emph{cover} a set $S \subseteq X \times Y$ if $\bigcup_i R_i = S$. For $b \in \{0,1\}$, the \emph{cover number} $\Cov_b(F)$ is the least number of rectangles that cover $F^{-1}(b)$. The \emph{nondeterministic (resp., co-nondeterministic) communication complexity} of~$F$ is defined as $\Non_1(F) := \log_2 \Cov_1(F)$ (resp., $\Non_0(F) := \log_2 \Cov_0(F)$). Note that $\Non_0(F) = \Non_1(\neg F)$. The nondeterministic communication complexity can be interpreted as the number of bits that two parties, holding inputs $x \in X$ and $y \in Y$, respectively, need to communicate in a nondeterministic (i.e., based on guessing and checking) protocol in order to establish that $F(x,y) = 1$; see~\cite[Chapter~2]{KushilevitzNisan} for details. The following is a ``lifting'' theorem, which allows us to transfer lower bounds on the DNF width of a boolean function to the nondeterministic communication complexity of a two-party function. 
\begin{theorem}[{\cite[Theorem~4]{Goos15}}] \label{thm:lifting} For any $n \in \N$ there is a function $g : \{0,1\}^b \times \{0,1\}^b \to \{0,1\}$ with $b = \Theta(\log n)$ such that for any function $f : \{0,1\}^n \to \{0,1\}$ the function $F : \{0,1\}^{b n} \times \{0,1\}^{b n} \to \{0,1\}$ defined by \[ F((x_1, \ldots, x_n), (y_1, \ldots, y_n)) = f(g(x_1,y_1), \ldots, g(x_n,y_n)) \ \text{for } x_i,y_j \in \{0,1\}^b \] satisfies $\Non_0(F) = \Omega(\C_0(f) \cdot b)$ (and thus also $\Non_1(F) = \Omega(\C_1(f) \cdot b)$). \end{theorem} Finally, we need the following simple lemma: \begin{lemma} \label{lem:NFA-CC} If a two-party function $F: \{0,1\}^m \times \{0,1\}^m \to \{0,1\}$ admits an NFA with $s$~states, i.e., there is an NFA $\A$ with $s$~states and $L(\A) = \{x y \in \{0,1\}^{2 m} \mid F(x,y) = 1\}$, then $\Cov_1(F) \le s$. \end{lemma} \begin{proof} Let $\A = (Q,\Sigma,\delta,I,F)$ be an NFA with $L(\A) = \{x y \in \{0,1\}^{2 m} \mid F(x,y) = 1\}$. We show that $F^{-1}(1)$ is covered by at most $|Q|$ rectangles. Indeed, $F^{-1}(1)$ equals \[ \bigcup_{q \in Q} (\{x \in \{0,1\}^m \mid \exists\, q_0 \in I \,.\, q_0 \xrightarrow{x} q\}) \times (\{y \in \{0,1\}^m \mid \exists\, f \in F \,.\, q \xrightarrow{y} f\}) \,. \] (Alternatively, in terms of a nondeterministic protocol, the first party, holding $x \in \{0,1\}^m$, produces a run for~$x$ from an initial state to a state~$q$ and then sends the name of~$q$, which takes $\log_2 |Q|$ bits, to the other party. The other party then produces a run for~$y$ from $q$ to an accepting state.) \end{proof} \subsection{Proof of \texorpdfstring{\cref{thm:complement}}{Theorem~\ref{thm:complement}}} For $n \in \N$, let $f: \{0,1\}^n \to \{0,1\}$ be the function from \cref{thm:Puzzle-I}, i.e., $f$ has an unambiguous $k$-DNF with $k = n^{\Omega(1)}$ (hence, $\log n = O(\log k)$) and $\C_0(f) = \tOmega(k^2)$. 
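The function~$f$ from \cref{thm:Puzzle-I} is far beyond brute force, but the width notions it relies on can be sanity-checked on toy functions. The following Python sketch is purely illustrative (representing a conjunction as a partial assignment is our own choice, and all names are hypothetical): it tests whether a given DNF computes a boolean function unambiguously, i.e., with at most one conjunction evaluating to true on every input.

```python
from itertools import product

# A conjunction is a dict {variable index: required bit}; a DNF is a list of them.
def width(dnf):
    return max((len(c) for c in dnf), default=0)

def is_unambiguous_dnf_for(f, dnf, n):
    """True iff `dnf` computes f on all 2^n inputs and, on every input,
    at most one conjunction evaluates to true (unambiguity)."""
    for x in product([0, 1], repeat=n):
        hits = sum(all(x[i] == b for i, b in c.items()) for c in dnf)
        if hits > 1 or (hits == 1) != bool(f(x)):
            return False
    return True
```

For instance, XOR on two variables has an unambiguous 2-DNF, $(x_0 \land \neg x_1) \lor (\neg x_0 \land x_1)$, whereas the natural 1-DNF for OR, $x_0 \lor x_1$, is ambiguous because both terms fire on input $(1,1)$.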
Let $g: \{0,1\}^b \times \{0,1\}^b \to \{0,1\}$ with $b = \Theta(\log n)$ and $F: \{0,1\}^{b n} \times \{0,1\}^{b n} \to \{0,1\}$ be the two-party functions from \cref{thm:lifting}, with $F((x_1, \ldots, x_n), (y_1, \ldots, y_n)) = f(g(x_1,y_1), \ldots, g(x_n,y_n))$. The UFA~$\A$ from the statement of \cref{thm:complement} will recognize $F^{-1}(1)$. First we argue that $F$ has an unambiguous DNF of small width. Indeed, $g$ and $\neg g$ have unambiguous $2 b$-DNFs, which can be extracted from the deterministic decision tree of~$g$. By plugging these unambiguous $2 b$-DNFs for $g$ and~$\neg g$ into the unambiguous $k$-DNF for~$f$ (and ``multiplying out''), one obtains an unambiguous $2 b k$-DNF, say~$D$, for~$F$. Over the $2 b n$ variables of~$F$, there exist at most $(2(2 b n) + 1)^{2 b k}$ different conjunctions of at most $2 b k$ literals. So $D$ consists of at most $n^{O(b k)}$ conjunctions. From~$D$ we obtain a UFA~$\A$ that recognizes $F^{-1}(1) \subseteq \{0,1\}^{2 b n}$, as follows. Each initial state of~$\A$ corresponds to a conjunction in~$D$. When reading the input $x \in \{0,1\}^{2 b n}$, the UFA checks that the corresponding assignment to the variables satisfies the conjunction represented by the initial state. This check requires at most $O(b n)$ states for each initial state. Thus, $\A$ has at most $n^{O(b k)} = 2^{\tO(k)} =: N$ states in total. On the other hand, by \cref{thm:lifting}, we have $\Non_0(F) = \Omega(\C_0(f) \cdot b) = \tOmega(k^2)$. So by \cref{lem:NFA-CC} any NFA that recognizes $F^{-1}(0)$ has at least $2^{\tOmega(k^2)}$ states. Any NFA that recognizes $\{0,1\}^* \setminus L(\A)$ can be transformed into an NFA that recognizes $F^{-1}(0) = \{0,1\}^{2 b n} \setminus L(\A)$ by taking a product with a DFA that has $2 b n + 2$ states. It follows that any NFA that recognizes $\{0,1\}^* \setminus L(\A)$ has at least $2^{\tOmega(k^2)} / (2 b n + 2) = 2^{\tOmega(k^2)} = N^{\tOmega(\log N)}$ states. 
\qed \section{Separation of Regular Languages by UFAs} In~\cite[Conjecture~2]{Colcombet15}, Colcombet conjectured that for any NFAs $\A_1, \A_2$ with $L(\A_1) \cap L(\A_2) = \emptyset$ there is a polynomial-sized UFA~$\A$ with $L(\A_1) \subseteq L(\A)$ and $L(\A) \cap L(\A_2) = \emptyset$. Related \emph{separability} questions are classical in formal language theory and have attracted renewed attention; see, e.g., \cite{CzerwinskiL19} and the references therein. Separating automata have also been used recently to elegantly describe quasi-polynomial time algorithms for solving parity games in an automata-theoretic framework; see \cite[Chapter~3]{automata-toolbox} and~\cite{CzerwinskiD19}. In this section we refute the above-mentioned conjecture by Colcombet, even when $L(\A_2) = \Sigma^* \setminus L(\A_1)$: \begin{theorem} \label{thm:separation} For any $N \in \N$ there are NFAs $\A_1, \A_2$ with $N$~states and alphabet $\Sigma = \{0,1\}$ and finite $L(\A_1)$ and $L(\A_2) = \Sigma^* \setminus L(\A_1)$ such that any UFA that recognizes $L(\A_1)$ has at least $N^{\Omega(\log N)}$ states. \end{theorem} Loosely speaking, in our construction, NFAs $\A_1, \A_2$ recognize \emph{(sparse) set disjointness} and its complement. For $n \in \N$ write $[n] := \{1, \ldots, n\}$ and define for $k \le n$ \begin{align*} \Disj^n_k\ &:=\ \{(S,T) \mid S \subseteq [n],\ T \subseteq [n],\ |S| = |T| = k,\ S \cap T = \emptyset\}\,. \end{align*} Define also $\enc{\Disj^n_k} := \{\enc{S} \enc{T} \mid (S,T) \in \Disj^n_k\}$ where $\enc{S} \in \Sigma^n = \{0,1\}^n$ is such that the $i$th letter of $\enc{S}$ is~$1$ if and only if $i \in S$, and similarly for $\enc{T}$. Note that $\enc{S}, \enc{T}$ each contain $k$ times the letter~$1$. To prove~\cref{thm:separation} it suffices to prove the following lemma. \begin{lemma} \label{lem:separation} For any $n \in \N$ let $k := \lceil \log_2 n \rceil$.
There are NFAs $\A_1, \A_2$ with $n^{O(1)}$ states and alphabet $\Sigma = \{0,1\}$ and $L(\A_1) = \enc{\Disj^n_k}$ and $L(\A_2) = \Sigma^* \setminus \enc{\Disj^n_k}$. Any UFA that recognizes $\enc{\Disj^n_k}$ has at least $n^{\Omega(\log n)}$ states. \end{lemma} In the rest of the section we prove \cref{lem:separation}. We use known results from communication complexity to show that any UFA for $\enc{\Disj^n_k}$ needs super-polynomially many states. We will give a self-contained proof of the existence of polynomial-sized NFAs for $\enc{\Disj^n_k}$ and its complement, but the main argument also comes from communication complexity, as we remark below at the end of the section. \subsection{Communication Complexity} Recall from \cref{sub:com-com} the notions of rectangles and rectangles covering a set. For a two-party function $F : X \times Y \to \{0,1\}$, the \emph{partition number} $\Par_1(F)$ is the least number of \emph{pairwise disjoint} rectangles that cover~$F^{-1}(1)$. Note that $\Cov_1(F) \le \Par_1(F)$. The \emph{unambiguous communication complexity} of~$F$ is defined as $\Una_1(F) := \log_2 \Par_1(F)$. Note that $\Non_1(F) \le \Una_1(F)$. Denote by $M(F) \in \{0,1\}^{X \times Y}$ the \emph{communication matrix}, with entries $M(F)_{x,y} = F(x,y)$. Denote by $\rk(M)$ the rank over the reals of a matrix~$M$. The following lemma, the ``rank bound'', is often used for lower bounds on the \emph{deterministic} communication complexity (a concept we do not need here), but it holds even for unambiguous communication complexity: \begin{lemma} \label{lem:rank-bound} Let $F : X \times Y \to \{0,1\}$. Then $\rk(M(F)) \le \Par_1(F)$. \end{lemma} \begin{proof} For $k = \Par_1(F)$, let $A_1 \times B_1, \ldots, A_k \times B_k$ be pairwise disjoint rectangles that cover~$F^{-1}(1)$. Each $A_i \times B_i$ defines a rank-$1$ matrix $M(i) \in \{0,1\}^{X \times Y}$ with $M(i)_{x,y} = 1$ if and only if $x \in A_i$ and $y \in B_i$. 
It follows from the pairwise disjointness that $M(F) = \sum_{i=1}^k M(i)$. Hence $\rk(M(F)) \le \sum_{i=1}^k \rk(M(i)) = k = \Par_1(F)$. \end{proof} The following lemma and its proof are analogous to \cref{lem:NFA-CC}. \begin{lemma} \label{lem:UFA-CC} If a two-party function $F: \{0,1\}^m \times \{0,1\}^m \to \{0,1\}$ admits a UFA with $s$~states, i.e., there is a UFA $\A$ with $s$~states and $L(\A) = \{x y \in \{0,1\}^{2 m} \mid F(x,y) = 1\}$, then $\Par_1(F) \le s$. \end{lemma} \begin{proof} Let $\A = (Q,\Sigma,\delta,I,F)$ be a UFA with $L(\A) = \{x y \in \{0,1\}^{2 m} \mid F(x,y) = 1\}$. We show that $F^{-1}(1)$ is covered by at most $|Q|$ pairwise disjoint rectangles. Indeed, $F^{-1}(1)$ equals \[ \bigcup_{q \in Q} (\{x \in \{0,1\}^m \mid \exists\, q_0 \in I \,.\, q_0 \xrightarrow{x} q\}) \times (\{y \in \{0,1\}^m \mid \exists\, f \in F \,.\, q \xrightarrow{y} f\}) \] and the rectangles do not overlap, as $\A$ is unambiguous. \end{proof} \subsection{Proof of \texorpdfstring{\cref{lem:separation}}{Lemma~\ref{lem:separation}}} First we prove the statement on UFAs. Write $\binom{[n]}{k} := \{S \subseteq [n] \mid |S| = k\}$. Let $F : \binom{[n]}{k} \times \binom{[n]}{k} \to \{0,1\}$ be the two-party function with $F(S,T) = 1$ if and only if $(S,T) \in \Disj^n_k$. It is shown, e.g., in \cite[Example~2.12]{KushilevitzNisan} that the communication matrix $M(F)$ has full rank $\binom{n}{k}$. Let $F': \{0,1\}^n \times \{0,1\}^n \to \{0,1\}$ be such that $F'(x,y) = 1$ if and only if $x y \in \enc{\Disj^n_k}$. Then $M(F)$ is a principal submatrix of~$M(F')$, so $\binom{n}{k} \le \rk(M(F'))$. Using \cref{lem:rank-bound,lem:UFA-CC} it follows that any UFA, say~$\A$, that recognizes $\enc{\Disj^n_k}$ has at least $\binom{n}{k} \ge (\frac{n}{k})^k$ states. With $k := \lceil \log_2 n \rceil$, it follows that $\A$ has $n^{\Omega(\log n)}$ states. It is easy to see that there is an NFA, $\A_2$, with $n^{O(1)}$~states and $L(\A_2) = \Sigma^* \setminus \enc{\Disj^n_k}$. 
Indeed, we can assume that the input is of the form $\enc{S} \enc{T}$; otherwise $\A_2$ accepts. NFA~$\A_2$ guesses $i \in [n]$ such that $i \in S \cap T$ and then checks it. Finally, we show that there is an NFA, $\A_1$, with $n^{O(1)}$~states and $L(\A_1) = \enc{\Disj^n_k}$. We can assume that the input is of the form $\enc{S} \enc{T}$; otherwise $\A_1$ rejects. NFA~$\A_1$ ``hard-codes'' polynomially many sets $Z_1, \ldots, Z_\ell \subseteq [n]$. It guesses $i \in [\ell]$ such that $S \subseteq Z_i$ and $Z_i \cap T = \emptyset$ and then checks it. It remains to show that there exist $\ell = n^{O(1)}$ sets $Z_1, \ldots, Z_\ell \subseteq [n]$ such that for any $(S,T) \in \Disj^n_k$ there is $i \in [\ell]$ with $S \subseteq Z_i$ and $Z_i \cap T = \emptyset$. The argument uses the probabilistic method and is due to~\cite{Razborov90}; see also \cite[Example~2.12]{KushilevitzNisan}. We reproduce it here due to its elegance and brevity. Fix $(S,T) \in \Disj^n_k$. Say that a set $Z \subseteq [n]$ \emph{separates}~$(S,T)$ if $S \subseteq Z$ and $Z \cap T = \emptyset$. A random set $Z \subseteq [n]$ (each $i$ is in~$Z$ with probability~$1/2$) separates~$(S,T)$ with probability~$2^{-2 k}$. Thus, choosing $\ell := \left\lceil 2^{2 k} \ln \binom{n}{k}^2 \right\rceil = n^{O(1)}$ random sets~$Z \subseteq [n]$ independently, the probability that none of them separates~$(S,T)$ is \[ (1 - 2^{-2 k})^\ell \ < \ e^{-2^{-2 k} \ell} \ \le \ \binom{n}{k}^{-2}\,. \] By the union bound, since $|\Disj^n_k| < \binom{n}{k}^2$, the probability that there exists $(S,T) \in \Disj^n_k$ such that none of $\ell$ random sets separates~$(S,T)$ is less than~$1$. Equivalently, the probability that for all $(S,T) \in \Disj^n_k$ at least one of $\ell$ random sets separates~$(S,T)$ is positive. It follows that there are $Z_1, \ldots, Z_\ell \subseteq [n]$ such that each~$(S,T) \in \Disj^n_k$ is separated by some~$Z_i$. \qed The proof above is based on known arguments from communication complexity. 
Indeed, they show, for $k = \lceil \log_2 n \rceil$ and the function~$F$ from above, that $\Una_1(F) \in \Omega(\log^2 n)$ and $\Non_0(F) \in O(\log n)$ and $\Non_1(F) \in O(\log n)$. This gap is in a sense the largest possible, as $\Una_1(F) = O(\Non_0(F) \cdot \Non_1(F))$ holds for all two-party functions~$F$. We even have $\Det(F) = O(\Non_0(F) \cdot \Non_1(F))$, where $\Det(F) \ge \Una_1(F)$ is the \emph{deterministic} communication complexity \cite[Theorem~2.11]{KushilevitzNisan}. \section{Conclusions} In the main results, \cref{thm:complement,thm:separation}, we have obtained super-polynomial but quasi-polynomial lower bounds on UFA complementation and separation. These bounds are not known to be tight; indeed, in both cases the best known upper bound is exponential. At the same time, we have transferred techniques from communication complexity relatively directly. More concretely, both main theorems hinge on a finite language $\{x y \mid F(x,y) = 1\}$ where $F$ is a two-party function whose communication complexity is in a sense extreme. This suggests two kinds of opportunities for future work: \begin{itemize} \item Can other techniques from communication complexity improve the lower bounds further? Perhaps by somehow iterating a two-party function or via multi-party communication complexity? \item Can techniques for proving upper bounds on communication complexity be adapted to prove upper bounds on the size of automata? \end{itemize} \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} \smallskip In the standard paradigm of hierarchical structure formation, the angular momentum (AM) growth of a dark matter protohalo is driven by the large-scale gravitational tidal torque until maximum expansion. This ``tidal torque theory (TTT)'' (e.g., \citealt{doroshkevich70}; \citealt{white84}) has been shown to agree well with the predictions from cosmological $N$-body simulations (e.g., \citealt{porciani02a,porciani02b}). The AM of a halo is often characterized using the dimensionless spin parameter \citep{bullock01j}: \begin{equation} \label{eq:spin} \lambda = \frac{j}{\sqrt{2}\VvR_{\rm vir}} \end{equation} where $j=J/M$ is the specific angular momentum (sAM) and where $R_{\rm vir}$ and $V_{\rm vir}$ are the virial radius and velocity of the halo. \footnote{\cite{peebles69} originally defined the spin parameter for dark matter haloes as $\lambda = J|E|^{1/2}G^{-1}M^{-5/2}$, where $J$ is the magnitude of the total dark-matter AM within the virial radius; $M$ is the virial mass; and $E$ is the total energy of the halo. Calculating the total energy $E$ is computationally expensive, and the potential energy is not uniquely defined, hindering practical use of the definition. For smooth spherical haloes with all particles on circular orbits, the two definitions are linked via $\lambda_{\rm Peebles} = \lambda_{\rm Bullock} f_c^{-1/2}$, with $f_c$ measuring the deviation of $E$ from that of a singular isothermal sphere. The \citeauthor{bullock01j} definition enables one to compare the sAM of different components in a halo; while the \citeauthor{peebles69} definition can only be used considering all the mass inside a halo.} The spin parameter as defined in \equ{spin} can refer to any part of the halo, and to any component of the galaxy inside the halo, such as the gas and/or stars in the disc and/or the whole galaxy, using the specific angular momentum of the component of interest. 
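As a concrete illustration of \equ{spin}, the following Python sketch computes the Bullock spin parameter from particle data. The unit convention (kpc, km/s, $M_\odot$) and all names here are our own assumptions for the example, not taken from any analysis pipeline used in this paper.

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun (assumed units)

def bullock_spin(masses, positions, velocities, m_vir, r_vir):
    """Bullock spin parameter lambda = j / (sqrt(2) V_vir R_vir), with
    j = |sum_i m_i r_i x v_i| / sum_i m_i  and  V_vir = sqrt(G M_vir / R_vir).
    positions in kpc, velocities in km/s, masses in Msun."""
    J = np.sum(masses[:, None] * np.cross(positions, velocities), axis=0)
    j = np.linalg.norm(J) / masses.sum()
    v_vir = np.sqrt(G * m_vir / r_vir)
    return j / (np.sqrt(2.0) * v_vir * r_vir)
```

For example, a single particle on a planar orbit at 20 kpc with a tangential velocity of 100 km/s, in a $10^{12}\,M_\odot$ halo of $R_{\rm vir}=200$ kpc, yields $\lambda \approx 0.05$, i.e., of the order of typical halo spins.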
\smallskip Galaxies form as cold gas condenses within the potential wells of dark matter haloes \citep{wr78,blum84}. Since the baryons and the dark matter have similar spatial distributions in the cosmic web, they are expected to gain comparable amounts of sAM through the large-scale tidal torques. It used to be commonly assumed that the AM of the gas is conserved during the collapse, such that the galactic disc is expected to have a similar spin to that of its hosting halo, $\lambda_{\rm gal}\simeq\lambda_{\rm halo}$ (\citealt{fe80}; \citealt{mmw98}; \citealt{bullock01j}). In addition, the rotation curves of galaxies are close to flat, so for disc galaxies the rotation speed at some characteristic radius of the galaxy, $V_{\rm rot}$, is comparable to the virial velocity $V_{\rm vir}$. These establish a link between the characteristic radius of the galaxy and the virial radius of the halo. The sAM of a galaxy can be written as \begin{equation} \label{eq:jgal} \jgal \simeq R_{\rm e}V_{\rm rot} , \end{equation} where $R_{\rm e}$ is the 3D half stellar mass radius of a galaxy.\footnote{The more familiar form of \equ{jgal} in the literature (e.g., \citealt{mmw98}) is, for exponential discs, \begin{equation} \jd = 2R_{\rm d}V_{\rm c}, \nonumber \end{equation} where $\jd$ is the sAM of the disc, $R_{\rm d}$ is the scale radius, and the rotation curve is flat at the level $V_{\rm c}$. Here we opt for a more general expression. For exponential discs, $R_{\rm e}\approx1.68R_{\rm d}$.} The radius of the galaxy can be expressed as \begin{align} R_{\rm e} & \simeq \frac{\jgal}{\jh} \frac{\jh}{R_{\rm vir}V_{\rm vir}}\frac{V_{\rm vir}}{V_{\rm rot}}R_{\rm vir} \nonumber \\ & = f_j \lambda_{\rm halo} R_{\rm vir}. \label{eq:Re1} \end{align} Here the factor $f_j\equiv \lambda_{\rm gal}/\lambda_{\rm halo}$, with $\lambda_{\rm gal}$ denoting $\jgal/(\sqrt{2}R_{\rm vir}V_{\rm vir})$, is taken as unity if the gas sAM is assumed to be conserved.
This is adopted in most semi-analytic models of galaxy formation (e.g., \citealt{somerville08b}; \citealt{guo11}; \citealt{benson12}) when trying to predict disc sizes. \smallskip Cosmological $N$-body simulations show that halo spin follows log-normal distributions with a median of $\langle\lambda_{\rm halo}\rangle\simeq0.035$ and a standard deviation of $\sigma_{\log_{10}\lambda_{\rm halo}} \simeq 0.25$ (e.g., \citealt{bullock01j}, \citealt{bett07}). Hydro-cosmological simulations show that $\lambda_{\rm gal}$ and $\lambda_{\rm halo}$ follow similar, log-normal distributions\footnote{though there are differences between the distributions of spins of the different components of the galaxy}. For example, the spin of gas within $0.1R_{\rm vir}$ has a median value of $\sim0.04$ and a standard deviation of $\sim$0.25 dex (e.g., \citealt{danovich15}). Observed star-forming disc galaxies in the redshift range $z=0.5$-3 indicate a similar result \citep{burkert16}. The spin parameters of the discs were evaluated using their sAM based on measured H$_\alpha$ kinematics and normalized by the halo virial velocities and radii inferred from the kinematics and stellar mass. This study found that $\lambda_{\rm gas}$ obeys a log-normal distribution similar to those found for $\lambda_{\rm halo}$ (and $\lambda_{\rm gas}$) in simulations. \smallskip At face value, the similarity between the distributions $P(\lambda_{\rm gal})$ and $P(\lambda_{\rm halo})$ may naively suggest that $\lambda_{\rm gal}$ of a given galaxy reflects $\lambda_{\rm halo}$ of the host halo. However, one should note that the two spin parameters, $\lambda_{\rm gal}$ and $\lambda_{\rm halo}$, are quantities that refer to different spatial and temporal scales -- the former mostly represents the sAM of the {\it inner} region and of recently arrived gas, while the latter represents the sAM of dark matter within the {\it whole} virial radius and is an integral over accretion throughout history. 
Generally, mass accreted more recently into the halo has higher sAM than the mass accreted earlier. For example, at the virial radius of a halo, the spin parameter of the dark matter and cold gas streams are of the order of 0.1-0.2 in simulations \citep{danovich15}, significantly higher than the spin of the overall halo. Therefore, the fact that the distributions of $\lambda_{\rm gal}$ and $\lambda_{\rm halo}$ are similar, actually hints against AM conservation. \smallskip \cite{danovich15} used cosmological simulations to study the evolution of gas sAM as gas streams from the cosmic web and feeds high-z galaxies. They described this evolution in four stages, as summarized in a cartoon in their Figure 20. They first revealed that outside the haloes, since the gas streams are thinner than the dark matter streams, they have higher quadrupole moments of the inertia tensor. Therefore, upon entering $R_{\rm vir}$, the sAM of cold gas is highly correlated to, but $\sim1.5$ times higher than, the sAM of the incoming dark matter. Inside $R_{\rm vir}$, gas first maintains its AM in the outer halo while the incoming dark matter mixes with the existing halo of lower spin; then instreaming gas loses AM due to dissipation and torques from the inner galactic disc. Finally, torques and mass exchanges within the gaseous discs cause further changes in the disc sAM. That is, overall, the gas gains and loses sAM with respect to the dark matter at different stages due to various mechanisms. Any mechanism that causes AM exchange or difference between the inner and outer parts of a halo, or any time variation of the AM supply in the cosmic web, can cause $f_j$ to deviate from unity. Since these processes differ in strength from galaxy to galaxy and from time to time, $f_j$ is expected to be stochastic, and the initial tight correlation between the sAM of incoming gas and dark matter is expected to be weakened or possibly smeared out. 
If $f_j$ is stochastic or it is anti-correlated with $\lambda_{\rm halo}$, the simple recipe for galaxy size as given by \equ{Re1} is problematic. \smallskip \cite{kravtsov13} compiled a sample of nearby galaxies of mixed morphologies and inferred the relation between galaxy size and halo virial radius using abundance matching. He found that the data are scattered about a linear relation, $R_{\rm e}\simeq AR_{\rm vir}$, with $A=0.015$ on average, and a scatter $\sigma_{\log A}\approx0.25$dex. Interestingly, not only the median of the proportionality factor is of the order of what is expected for $\lambda_{\rm halo}$, but also the scatter is similar to $\sigma_{\log\lambda_{\rm halo}}$. \cite{huang17} and \cite{somerville17} extended the study of the kind of \cite{kravtsov13} to $z=0$-3. They confirmed the form of the linear relation, but reported slightly larger proportionality factors, with noticeable dependences on redshift. \cite{somerville17} showed in detail that the dispersion in the conditional size distribution in stellar mass bins is in excellent agreement with the simple ansatz $R_{\rm e} \propto \lambda_{\rm halo} R_{\rm vir}$. However, as they point out, this does not prove that there is a strong one-to-one correlation between $R_{\rm e}$ and $\lambda_{\rm halo} R_{\rm vir}$. Given that $\lambda_{\rm halo}$ is expected to be almost constant over time and as a function of mass, the dependence of the proportionality factor on mass and redshift hints that other parameters in addition to $\lambda_{\rm halo}$ must play a role in determining galaxy size. \smallskip These concerns cast doubt on the simple assumptions of a strong one-to-one correlation between the spins of a galaxy and its host halo, and the role of halo spin in determining the galaxy size. This motivates us to examine these issues directly in cosmological hydrodynamical simulations. We use two suites of simulations with very different subgrid physics for this study. 
If no correlation between $\lambda_{\rm gal}$ and $\lambda_{\rm halo}$ is found, we will look for a revised recipe for predicting galaxy size in semi-analytic models. \smallskip The outline of this paper is as follows. In \se{method}, we describe the simulations. In \se{dist}, we compare the distributions of spin of different components -- cold gas, stars, and dark matter halo. In \se{correlation}, we characterize the correlations between galaxy spin and halo spin. In \se{correlation2}, we present the correlations between gas spin and stellar spin. In \se{evolution}, we explore the possible effects of wet compaction (a compact star-forming phase that many galaxies undergo at high-$z$) and mergers on the $\lambda_{\rm gal}$-$\lambda_{\rm halo}$ correlation (or the lack thereof). In \se{size}, we study the relation between galaxy size and halo virial radius. In \se{discussion}, we compare our results to previous studies. In \se{conclusion}, we summarize our conclusions. \section{Method} \label{sec:method} \smallskip In this study, we use two suites of zoom-in hydro-cosmological simulations, VELA and NIHAO, with different hydro-gravitational solvers, resolutions, and recipes for cooling, star formation, and stellar feedback. \subsection{VELA} \smallskip VELA is a suite of zoom-in hydro-cosmological simulations of 34 moderately massive central galaxies. It utilizes the Adaptive Refinement Tree (ART) code \citep{krav97,krav03}, which follows the evolution of a gravitating $N$-body system and the Eulerian gas dynamics using an adaptive mesh. At the subgrid level, the code incorporates the key physical processes relevant for galaxy formation, including gas cooling by atomic hydrogen and helium as well as by metals and molecular hydrogen, photoionization heating by the UV background with partial self-shielding, star formation, stellar mass-loss, metal enrichment of the interstellar medium, and stellar feedback.
\smallskip Star formation is stochastic in cells with gas temperature $T < 10^4$ K and densities $n_{\rm H} > 1$ cm$^{-3}$, at a rate consistent with the Kennicutt-Schmidt law \citep{Kennicutt98}. Supernovae and stellar winds are implemented by local injection of thermal energy at a constant rate, as in \cite{ck09}, \cite{cdb10}, and \cite{ceverino12}. Radiative stellar feedback is implemented with no significant infrared trapping, motivated by \cite{dk13}, as described in \cite{ceverino14}. \smallskip The simulation adopts the Wilkinson Microwave Anisotropy Probe 5 cosmological parameters ($\Omega_{\rm m}= 0.27$, $\Omega_{\Lambda} = 0.73$, $\Omega_{\rm b} = 0.045$, $h = 0.7$, $\sigma_8 = 0.82$) \citep{WMAP5}. The dark matter particle mass in the zoom-in region is $8.3 \times 10^4 M_\odot$, and star particles have a minimum mass of $10^3 M_\odot$. The AMR cells are refined to a minimum size in the range 17-35 pc at all times in the dense regions. The force resolution is two grid cells, as required for computing the gradient of the gravitational potential. Each cell is split into eight cells once it contains a mass in stars and dark matter higher than $2.6 \times 10^5 M_\odot$, equivalent to three dark matter particles, or once it contains a gas mass higher than $1.5 \times 10^6 M_\odot$. The output timesteps that are analyzed are uniform in scale factor, with $\Delta a = 0.01$. \smallskip Among the 34 galaxies, 28 galaxies reach $z=2$, with the majority reaching $z=1$ and three of them reaching $z=0.8$. The VELA galaxies are selected to be systems that, at $z=1$, show no on-going major mergers, and are in the mass range\footnote{Throughout the paper, halo mass is defined as the total mass within a sphere of radius $R_{\rm vir}$ that encompasses an overdensity of $\Delta_{\rm vir}(z)$ times the critical density of the Universe \citep{bn98}.} of $M_{\rm vir}=2\times10^{11}$--$2\times10^{12}M_\odot$, with a median of $4.6\times10^{11}M_\odot$.
At $z = 2$, the sample spans the halo mass range of $(1$--$9)\times10^{11}M_\odot$, and the stellar mass range of $(0.2$--$5.7)\times10^{10}M_\odot$. If they were to evolve to $z=0$, their mass range would bracket the Milky-Way mass. More details concerning the simulations are presented in \cite{ceverino14} and \cite{zolotov15}. \subsection{NIHAO} \smallskip We also use a subset of the Numerical Investigation of a Hundred Astrophysical Objects (NIHAO) project \citep{wang15}, consisting of 13 central galaxies that are Milky-Way sized or slightly more massive at $z=0$ ($M_{\rm vir} = 7 \times 10^{11}$--$3 \times 10^{12} M_\odot$), evolved using the SPH code Gasoline 2.0 \citep{wadsley17}. The code includes a subgrid model for turbulent mixing of metals and energy \citep{wadsley08}, ultraviolet heating, ionization and metal cooling \citep{shen10}. Star formation and feedback follow the model used in the MaGICC simulations \citep{stinson13}, adopting a threshold for star formation of $n_{\rm H} > 10.3$ cm$^{-3}$. Stars feed energy back into the interstellar medium via blast-wave supernova feedback \citep{stinson06} and early stellar feedback from massive stars. \smallskip The simulations are run in a flat $\Lambda$CDM cosmology with parameters from the Planck Collaboration \cite{planck15} ($\Omega_{\rm m} = 0.3175$, $\Omega_{\Lambda} = 0.6824$, $\Omega_{\rm b} = 0.0490$, $h=0.671$, $\sigma_8 = 0.8344$, $n = 0.9624$). Particle masses and force softenings are chosen to resolve the mass profile to below 1 per cent of the virial radius at all masses, ensuring that galaxy half-light radii are well resolved. For the 13 galaxies that we use, the particle mass is $1.735 \times 10^6M_\odot$ for dark matter, and $3.166 \times 10^5M_\odot$ for gas. The force softening length is 931.4 pc for dark matter, and 397.9 pc for baryons (comoving). The output is uniform in cosmic time, with $\Delta t\simeq 215$ Myr, approximately $\Delta a \simeq 0.014$ at $z\sim1$.
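For orientation, the virial quantities quoted throughout are tied together by the overdensity definition of halo mass. The Python sketch below is a rough illustration only: it adopts a fixed overdensity $\Delta=200$ in place of the redshift-dependent $\Delta_{\rm vir}(z)$, and an approximate present-day critical density, so its numbers are indicative rather than the values used in the simulations.

```python
import numpy as np

RHO_CRIT0 = 277.5  # approximate critical density today, in h^2 Msun / kpc^3

def virial_radius(m_vir, delta=200.0, h=0.7):
    """R_vir [kpc] of a sphere enclosing `delta` times the critical density:
    M_vir = (4/3) pi delta rho_crit R_vir^3.  delta=200 is a stand-in for
    the redshift-dependent Delta_vir(z)."""
    rho = delta * RHO_CRIT0 * h**2  # Msun / kpc^3
    return (3.0 * m_vir / (4.0 * np.pi * rho)) ** (1.0 / 3.0)

def virial_velocity(m_vir, r_vir):
    """V_vir = sqrt(G M_vir / R_vir) in km/s, for M_vir in Msun, R_vir in kpc."""
    G = 4.30091e-6  # kpc (km/s)^2 / Msun
    return np.sqrt(G * m_vir / r_vir)
```

With these assumptions, a $10^{12}\,M_\odot$ halo at $z=0$ has $R_{\rm vir}$ of roughly 200 kpc and $V_{\rm vir}$ of roughly 145 km/s, consistent in order of magnitude with the halo masses quoted above.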
\begin{figure} \includegraphics[width=0.48\textwidth]{SpinDistributions.eps} \caption{ Cumulative distributions of the spin for host halo (within $R_{\rm vir}$), cold gas (within 0.1$R_{\rm vir}$), and stars (within 0.1$R_{\rm vir}$) of the VELA ({\it upper}) and NIHAO ({\it lower}) simulations. The lines represent the best-fit log-normal distributions. $\lambda_{\rm gas}$ has a mean value slightly lower than $\lambda_{\rm halo}$, and exhibits a marginally larger scatter, while $\lambda_{\rm star}$ is much lower than the other components and follows a much broader distribution. The NIHAO $\lambda_{\rm star}$ and $\lambda_{\rm halo}$ distributions are similar to those of VELA; while $\lambda_{\rm gas}$ in NIHAO is 0.15 dex lower. } \label{fig:spinCDF} \end{figure} \begin{figure*} \includegraphics[width=\textwidth]{spin_stacked.eps} \caption{ The spin of a galaxy (stars+cold gas within 0.1$R_{\rm vir}$) versus the spin of its host halo (DM within $R_{\rm vir}$), for the VELA ({\it left}) and NIHAO ({\it right}) simulations. Each dot is {\it a galaxy at one snapshot}. Here we have corrected for systematic dependence on halo mass or redshift (see text). The cross marks the median and the 16th and 84th percentiles. The Pearson correlation coefficient $\mathcal{R}$ is quoted at the upper right corner. In both simulations, there is negligible correlation between $\lambda_{\rm gal}$ and $\lambda_{\rm halo}$. } \label{fig:correlation} \end{figure*} \subsection{Comments} \smallskip The NIHAO galaxies reach $z=0$, while more than half of the VELA galaxies reach $z=1$, some reaching $z=0.8$. Both suites consist of central galaxies, selected to be far from larger neighbours throughout their assembly history, and would be $\sim L_\star$ or sub-$L_\star$ galaxies at $z=0$, if left evolving in isolation. \smallskip In what follows, in order to probe potential redshift dependence, we bin the snapshots into three redshift ranges, $z=$0-0.8, 0.8-2, and 2-7. 
To achieve better statistics, we consider {\it each galaxy at each snapshot} as an {\it independent} measurement. The median halo masses of the NIHAO snapshots are $\log(M_{\rm vir}/M_\odot)=11.82^{+0.21}_{-0.17}$, $11.59^{+0.29}_{-0.27}$, and $10.86^{+0.49}_{-0.43}$, in the three redshift ranges, respectively. Here the upper and lower limits indicate the 16th and 84th percentiles. The median halo masses of the VELA snapshots are $\log(M_{\rm vir}/M_\odot)=11.51^{+0.32}_{-0.25}$ and $10.99^{+0.45}_{-0.66}$ for $z=$0.8-2 and 2-7, respectively. \smallskip The two suites differ in many subgrid recipes, and in particular in numerical resolution and in the implementation of stellar feedback. The VELA simulations have relatively weak stellar feedback. The stellar-to-halo mass ratio in the VELA simulations is in ballpark agreement (considering the scatter in these relations) with the results of abundance matching relations from \cite{behroozi13a} and \cite{moster13}. The stellar feedback of the NIHAO simulations is much stronger than in VELA, and is tuned to match the $z=0$ stellar mass versus halo mass relation from abundance matching. The two simulations are therefore complementary in terms of revealing potential dependence of the results on the feedback strength. Neither VELA nor NIHAO has included AGNs. The effects of AGNs may start to become important for $L_\star$-galaxies. The most massive end of our sample may be affected and therefore should be taken with caution. \smallskip The NIHAO galaxies are mostly on the star formation main sequence, while the VELA simulations exhibit more diverse sSFR. Very few galaxies in our sample are completely quenched.
There is a plethora of studies discussing the correlation between morphology and angular momentum loss, but this is {\it not} the focus of this paper -- we generally do not distinguish early-type and late-type galaxies in the following, unless there is a significant morphology-dependence in the results. We verify though that our main results hold if we only use the galaxies on or off the main sequence. \subsection{Measuring Spin} \smallskip Galaxy centers are defined as follows. For the VELA simulations, we identify the cell of the highest baryonic density, draw a sphere of 1kpc around it, and take the center-of-mass (CoM) of the stars in the 1kpc-sphere as the center position, $\mathbf{r_0}$. The CoM velocity of these stars is taken as the bulk velocity, $\mathbf{v_0}$, which is then used to define the rest frame for calculating the AM. \smallskip For the NIHAO galaxies, we take the center of the host halo as given by the {\tt AHF} halo catalog \citep{knollmann09} as an initial guess, and run a shrinking-sphere algorithm on the stars within 5kpc from the initial center until the shrinking sphere contains 50 particles. We take the outcome of the shrinking-sphere algorithm as $\mathbf{r_0}$, and the CoM velocity of the stars in the 5kpc-sphere as $\mathbf{v_0}$. The size of the sampling sphere, 1kpc and 5kpc for VELA and NIHAO, respectively, is chosen such that it is well above the spatial resolution of the simulation and is significantly smaller than $R_{\rm vir}$ (to avoid contamination from massive satellite galaxies). \smallskip From visual inspections of the projected density maps of dark matter, gas, and stars, we find that the centers defined above are sensible -- although they are basically the locations of the highest stellar mass density, they overlap with the positions of the highest dark matter and gas mass densities within a couple of softening lengths, even during major mergers.
We also verify that our main results are not sensitive to the definition of centers: using sampling spheres of 0.01$R_{\rm vir}$ or using the centers of dark matter instead of stars yields statistically indistinguishable results. \smallskip The spin parameters of cold\footnote{Density $n_{\rm H} > 1$ cm$^{-3}$ and temperature $T<10^4$K.} gas ($\lambda_{\rm gas}$) and stars ($\lambda_{\rm star}$) are defined by \equ{spin}, with the sAM $j$ measured within 10\% of the virial radius $R_{\rm vir}$. Dark matter halo spin is defined within $R_{\rm vir}$. The AM $J$ is defined about the origin $(\mathbf{r}_0,\mathbf{v}_0)$. Throughout, we consider a {\it galaxy} to be the stars and cold gas within 10\% of the virial radius of the host halo, and usually present the galaxy spin $\lambda_{\rm gal}$ measured from stars and cold gas combined, unless the spins of the stars and gas show different trends. \smallskip There is a corresponding dark-matter-only (DMO) simulation for each NIHAO galaxy. The DMO simulations adopt the same initial conditions as the hydro ones, and replace the gas particles with dark matter particles of the same mass. We measure the dark matter properties in the simulations with baryons, and refer to checks against the DMO outputs where necessary. \section{Distribution of spin for different components} \label{sec:dist} \smallskip \fig{spinCDF} shows the cumulative distributions of the spins of the cold gas, stars and dark matter halo, for all snapshots after $z=7$. The spin parameters of the different components are all well-described by log-normal distributions. Halo spin obeys a log-normal distribution with a mean of $\langle\lambda_{\rm halo}\rangle=0.037$, and a standard deviation of $\sigma_{\log\lambda}\simeq0.2$-0.25, very similar to those found in $N$-body simulations (e.g. \citealt{bullock01j}; \citealt{bett07}; \citealt{mc11}; \citealt{somerville17}; \citealt{lee17a}).
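As an illustration of the spin measurement and log-normal fit described above, the following sketch assumes the \citet{bullock01j} definition $\lambda = j/(\sqrt{2}\,V_{\rm vir}R_{\rm vir})$ for \equ{spin}, with particle arrays already expressed relative to the centre $(\mathbf{r}_0,\mathbf{v}_0)$; the function names and array layout are hypothetical, not those of our actual pipeline.

```python
import numpy as np

def bullock_spin(pos, vel, mass, r_vir, v_vir, frac=1.0):
    """Spin parameter lambda = j / (sqrt(2) V_vir R_vir), with the
    specific AM j measured from particles within frac * r_vir.
    pos, vel: (N, 3) arrays relative to the centre (r0, v0)."""
    r = np.linalg.norm(pos, axis=1)
    sel = r < frac * r_vir
    # total angular momentum J = sum_i m_i (r_i x v_i)
    J = np.sum(mass[sel][:, None] * np.cross(pos[sel], vel[sel]), axis=0)
    j = np.linalg.norm(J) / np.sum(mass[sel])          # specific AM
    return j / (np.sqrt(2.0) * v_vir * r_vir)

def lognormal_fit(lams):
    """Best-fit log-normal parameters: mean and standard deviation
    of log10(lambda) over the sample."""
    loglam = np.log10(np.asarray(lams, float))
    return loglam.mean(), loglam.std()
```

The quantities $\langle\lambda\rangle$ and $\sigma_{\log\lambda}$ quoted in the text correspond to the mean and standard deviation of $\log_{10}\lambda$ over all snapshots.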
\smallskip In both simulations, the stellar spin is $\sim5$ times lower than the cold gas spin, which, in turn, is slightly lower than the halo spin. The cold gas spin is only 30-50\% lower than the halo spin, with $\langle\lambda_{\rm gas}\rangle=0.026$ (VELA) and 0.018 (NIHAO). This indicates that the radial specific angular momentum profile of the gas is higher than that of the dark matter, to the extent that $j_{\rm gas}(0.1R_{\rm vir})$ is comparable to $j_{\rm dm}(R_{\rm vir})$. Indeed, as discussed in \citet{danovich15}, the gas streams have higher sAM than the dark matter streams at accretion through $R_{\rm vir}$, within a factor of two, but as they reach the inner halo, they are spun down by the torque from the galactic disc. The scatter of $\lambda_{\rm gas}$ is somewhat larger than that of $\lambda_{\rm halo}$, by $\sim$0.05 dex in both suites. \smallskip The spins of cold gas ($\lambda_{\rm gas}$) and of stars ($\lambda_{\rm star}$) in the NIHAO simulations are systematically lower than those in VELA. There are several possible reasons. First, regarding the difference in gas spin, the gas properties differ between the two suites. Due to the relatively high density threshold for star formation, the gas in the NIHAO simulations can condense to higher densities than in VELA before turning into stars. Due to the stronger stellar feedback, NIHAO galaxies have a stronger exchange of AM between the cold gas in the galaxy and the hot halo. Second, wet compaction, a process that high-$z$, stream-fed galaxies generally undergo, is efficient at raising $\lambda_{\rm gas}$ (see \se{compaction} for more details). The strength of compaction is expected to be sensitive to numerical resolution and feedback, so the NIHAO galaxies, with much coarser resolution and stronger feedback than VELA, generally show much weaker compactions. Finally, artificial losses of AM can occur in SPH simulations with limited resolution \citep{kaufmann06}.
With a force softening length of a few hundred parsecs, and gas particle masses of a few times $10^5M_\odot$, the NIHAO simulations could suffer non-negligible numerical AM loss. \section{Spins of galaxy versus halo} \label{sec:correlation} \begin{figure*} \includegraphics[width=1.05\textwidth]{spin_Mzbins_bardm.eps} \caption{ The spin of a galaxy (stars+gas within 0.1$R_{\rm vir}$) versus the spin of its host halo (DM within $R_{\rm vir}$), in different bins of halo mass ({\it upper}: $M_{\rm vir}>10^{11.4}M_\odot$; {\it lower}: $M_{\rm vir}<10^{11.4}M_\odot$) and redshift, in the VELA simulation ({\it left}) and in the NIHAO simulation ({\it right}). Each dot is {\it a galaxy at one snapshot}. The big circles mark the medians, with error bars indicating the 16th and 84th percentiles. The Pearson correlation coefficient $\mathcal{R}$ is quoted at the upper left corner. The solid lines are linear regressions of the form $\log\lambda_{\rm gal} = a + (1+b)\log\lambda_{\rm halo}$, with the best-fit parameters indicated. The VELA simulation exhibits negligible correlation throughout all the $M_{\rm vir}$ and $z$ bins. $\lambda_{\rm gal}$ is higher by a factor of $\sim$2 in systems with $M_{\rm vir} > 10^{11.4}M_\odot$, or equivalently post compaction (see text in \se{compaction}). In the NIHAO simulation, a weak correlation emerges at $z\la1$. } \label{fig:correlation_Mzbins} \end{figure*} \smallskip In this section, we characterize the correlation between the {\it amplitudes} of the spins of the galaxies and their host haloes, as well as the alignment of the spin {\it vectors}. \subsection{Spin amplitude} \smallskip \fig{correlation} presents galaxy spin versus host halo spin for all the galaxies at all snapshots at $z<7$. Here, we have tried to remove potential trends with halo mass or redshift as follows. First, the sample is binned into three redshift ranges, $z=0$-0.8, 0.8-2, and 2-7, and two halo mass ranges, $M_{\rm vir}>10^{11.4}M_\odot$ and $<10^{11.4}M_\odot$.
We then calculate the median $\lambda_{\rm gal}$ and $\lambda_{\rm halo}$ for each mass and redshift bin, and for the whole sample. Finally, in the $\lambda_{\rm gal}$-$\lambda_{\rm halo}$ plane, the data points of each bin are shifted by the offset between the median point of the corresponding bin and that of the whole sample. As can be seen, there is almost no correlation between $\lambda_{\rm gal}$ and $\lambda_{\rm halo}$ in either NIHAO or VELA. In fact, as shown in \fig{correlation_Mzbins}, there is negligible correlation in almost every $(M_{\rm vir},z)$ bin at $z\ga0.8$ -- the Pearson correlation coefficient $\mathcal{R}$ seldom exceeds $0.3$. A correlation seems to emerge in the lowest redshift bin of the NIHAO simulations, but it is still weak, with $\mathcal{R}=0.48$. \smallskip \cite{danovich15} showed that the sAM of gas and dark matter are strongly correlated at virial crossing.\footnote{In fact, \cite{danovich15} found $\lambda_{\rm gas} \simeq $1.5$\lambda_{\rm dm}$, with both $\lambda_{\rm gas}$ and $\lambda_{\rm dm}$ measured at $\ga R_{\rm vir}$. Gas spin at accretion is slightly higher, due to the higher quadrupole moment resulting from the early dissipative contraction of gas into the central cores of the thick dark matter filaments.} Therefore, the lack of correlation between $\lambda_{\rm gal}$ and $\lambda_{\rm halo}$ means that the spin of the baryons evolves differently from that of the dark matter inside the virial radius. That is, the angular momentum retention ratio, $f_j\equiv\lambda_{\rm gal}/\lambda_{\rm halo}$, must deviate from a constant of order unity, and vary from one galaxy to another and from time to time. We expect $f_j$ to depend systematically on $\lambda_{\rm halo}$ -- any mechanism that causes an anti-correlation between $f_j$ and $\lambda_{\rm halo}$ can most efficiently erase the initial correlation between the sAM of baryons and dark matter in the cosmic web.
Assuming for simplicity that the anti-correlation is parametrized by $f_j \propto \lambda_{\rm halo}^b$ with a negative $b$, one can write \begin{equation} \label{eq:regression} \log\lambda_{\rm gal} = a + (1+b)\log\lambda_{\rm halo}, \end{equation} where $a$ is the zero point of the relation, and $(1+b)$ is also a measure of the correlation strength between $\lambda_{\rm gal}$ and $\lambda_{\rm halo}$ (in addition to the Pearson $\mathcal{R}$), ranging from proportionality ($1+b=1$) to no correlation ($1+b=0$). \smallskip The solid lines in \fig{correlation_Mzbins} are linear regressions of the form of \equ{regression}. Clearly, $-1<b<0$ across all the redshift and halo mass bins. Hence, to smear out an initial correlation between the sAM of baryons and dark matter in the cosmic web, some mechanisms must operate inside the halo such that initially high-$\lambda_{\rm halo}$ systems end up having lower $\lambda_{\rm gal}$, and perhaps also that initially low-$\lambda_{\rm halo}$ systems end up with higher $\lambda_{\rm gal}$. We discuss two possible mechanisms of this sort, wet compaction and mergers, in \se{evolution}. \smallskip The left-hand panels of \fig{correlation_Mzbins} show that, in the VELA simulations, $\lambda_{\rm gal}$ is higher in more massive haloes. As will be discussed in more detail in \se{compaction}, this is basically a manifestation of galaxy compaction -- galaxies above the mass threshold $M_{\rm vir}\sim10^{11.4}M_\odot$ are typically post compaction, where the sAM is higher due to an extended ring that has formed from newly accreted gas. \smallskip By inspecting the spins of stars and cold gas separately, we verify that the results are qualitatively the same, with a weak to null correlation between either baryonic component within $0.1R_{\rm vir}$ and the dark matter halo within $R_{\rm vir}$.
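The detrending and regression procedure above can be sketched in a few lines. This is a minimal illustration with hypothetical input arrays; in practice the bin labels correspond to the $(M_{\rm vir},z)$ bins described in the text.

```python
import numpy as np

def detrend_by_bins(logx, logy, bins):
    """Shift each (mass, redshift) bin so that its median lands on the
    global median, removing systematic mass/redshift trends before
    measuring the correlation."""
    logx = np.asarray(logx, float).copy()
    logy = np.asarray(logy, float).copy()
    gx, gy = np.median(logx), np.median(logy)
    for b in np.unique(bins):
        sel = bins == b
        logx[sel] += gx - np.median(logx[sel])
        logy[sel] += gy - np.median(logy[sel])
    return logx, logy

def fit_spin_relation(log_lam_halo, log_lam_gal):
    """Least-squares fit of log lam_gal = a + (1+b) log lam_halo,
    returning (a, b, pearson_R)."""
    slope, a = np.polyfit(log_lam_halo, log_lam_gal, 1)
    R = np.corrcoef(log_lam_halo, log_lam_gal)[0, 1]
    return a, slope - 1.0, R
```

A fitted slope near zero ($1+b\approx0$, i.e. $b\approx-1$) corresponds to the no-correlation case discussed above.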
\subsection{Alignment} \begin{figure} \includegraphics[width=0.5\textwidth]{ThetaGalHaloDistributions.eps} \caption{ Cumulative distribution of the cosine of the angle between the angular momentum vector of the galaxy (stars + cold gas within 0.1$R_{\rm vir}$) and that of the host halo (dark matter within $R_{\rm vir}$), for VELA ({\it left}) and NIHAO ({\it right}) galaxies in different redshift and halo mass bins. Dotted lines mark the fraction of systems with $\theta<30^\circ$. In VELA, the median $\cos\theta=0.53$-0.67, and seems to decrease (i.e., the alignment becomes marginally worse) at lower redshift. The NIHAO galaxies exhibit better alignment than VELA, with a similar, weak redshift trend. The lower-$M_{\rm vir}$ bin of NIHAO likely suffers from small-number statistics. } \label{fig:alignment} \end{figure} \smallskip Although the amplitudes of the spins are barely correlated, are the spin vectors of baryons and dark matter randomly oriented? To answer this question, we plot in \fig{alignment} the cumulative distributions of $\cos\theta_{\rm gal,halo} = \jjgal \cdot \jjh / |\jjgal||\jjh|$ for different halo mass and redshift bins. \smallskip Generally, the median $\cos\theta_{\rm gal,halo}$ is in the range of 0.6-0.7. Approximately $40$\% ($20$-$30$\%) of the systems have $\cos\theta_{\rm gal,halo}>$ 0.71 (0.87), corresponding to an angle of $\theta_{\rm gal,halo}<45^\circ$ ($30^\circ$). At a given halo mass, the alignment becomes marginally weaker at later times. \smallskip Comparing the two simulation suites, for the more massive haloes, NIHAO exhibits a slightly better alignment, with a median $\cos \theta_{\rm gal, halo}=$0.65-0.72, while VELA shows a median of $\cos \theta_{\rm gal, halo}=$0.50-0.62, depending on redshift. For the less massive cases, there seems to be a significant fraction with $\cos\theta\la0$.
We note, though, that the NIHAO results at $M_{\rm vir}<10^{11.4}M_\odot$ suffer from small-number statistics, and we therefore opt not to overinterpret them. \smallskip Although not shown here, we find the alignment between cold gas and halo to be better than that between stars and halo. In particular, the NIHAO galaxies with $M_{\rm vir}>10^{11.4}M_\odot$ have a median $\cos\theta_{\rm gas, halo}$ of 0.71-0.73 and a median $\cos\theta_{\rm stars, halo}$ of 0.64-0.69 (depending weakly on redshift), compared to the corresponding VELA results of 0.59-0.68 and 0.48-0.62, respectively. For VELA galaxies with $M_{\rm vir}<10^{11.4}M_\odot$, the medians of $\cos\theta_{\rm gas, halo}$ and $\cos\theta_{\rm stars, halo}$ are 0.57-0.63 and 0.53-0.57, respectively. \smallskip It is intriguing that the amplitudes of the spins are uncorrelated although the alignment of the spin vectors is relatively good. It may well be that, because the gas streams and the stellar disc are generally coplanar \citep{danovich12}, the torques that cause angular momentum gain or loss do not randomize the directions of the spin vectors. \subsection{Spin of galaxy versus inner halo} \begin{figure} \includegraphics[width=0.45\textwidth]{CorrelationProfile_NIHAO.eps} \caption{ Correlation between the spin of cold gas measured within $0.1R_{\rm vir}$ and the spin of dark matter within radius $r$, as a function of radius $r$, for the NIHAO galaxies. The {\it solid} line represents the median result over the redshift range $z=0-7$. The dark matter spin is measured in the (fiducial) hydrodynamical simulation, with the shaded region bracketed by the dotted lines representing the 16th and 84th percentiles.
The {\it dashed} line represents the median result when the dark matter spin is measured in the dark-matter-only (DMO) simulation that is complementary to the NIHAO simulation, which uses the same initial conditions but replaces the gas particles in the fiducial NIHAO simulation with dark matter particles of equal mass. Gas spin is strongly correlated with the spin of dark matter out to $\sim0.2R_{\rm vir}$ if the dark matter spin is measured in the hydrodynamical simulation, while there is negligible correlation between the gas spin and the dark matter spin measured in the DMO simulation. } \label{fig:correlation_inner} \end{figure} \begin{figure*} \includegraphics[width=\textwidth]{spin_Mzbins_gasstar.eps} \caption{ The spin of cold gas versus the spin of stars, for VELA ({\it left}) and NIHAO ({\it right}) galaxies in different bins of halo mass and redshift. In both VELA and NIHAO, the spins of the baryonic components are strongly correlated, and the correlation is stronger at later times. On average, the spin of either baryonic component is higher in more massive (post-compaction) systems. } \label{fig:correlation_gasstar} \end{figure*} \smallskip The correlation between the galaxy spin and the {\it whole} halo spin is weak, but the two parameters sample very different spatial scales. It may well be that the spin of the dark matter in the inner part of the halo, where the galaxy dwells, correlates better with the galaxy spin. To check this, we focus on the NIHAO galaxies, and measure the Pearson correlation coefficient between the gas spin, $\lambda_{\rm gas}$, and the spin of the dark matter within radius $r$, $\lambda_{\rm dm}(<r)$, at each snapshot, looking for a radius within which the dark matter spin is a good proxy for the gas spin. \smallskip The result is shown in \fig{correlation_inner}. On average, there is a strong correlation ($\mathcal{R}\sim0.7$) out to $r\sim0.2R_{\rm vir}$.
Beyond $0.2R_{\rm vir}$, the correlation quickly drops to negligible. The scatter of the correlation profile largely reflects redshift dependence: the correlation is stronger at lower redshift, as already hinted in the right-hand panels of \fig{correlation_Mzbins}. \smallskip From the point of view of semi-analytic modelling, we are more interested in knowing whether or not one can use halo properties measured in $N$-body simulations to predict galaxy properties. Therefore, to test if $\lambda_{\rm dm}(<0.2R_{\rm vir})$ is an adequate galaxy spin indicator, we repeat the exercise, re-measuring $\lambda_{\rm dm}(<r)$ in the dark-matter-only (DMO) simulations complementary to the NIHAO sample. The DMO simulations adopt the same initial conditions as the fiducial NIHAO sample, and replace the gas particles with dark matter particles of the same mass. As shown by the dashed line in \fig{correlation_inner}, the $\lambda_{\rm dm}(<r)$ measured in the DMO simulation barely correlates with the gas spin, irrespective of radius $r$. Hence, the DMO inner halo spin, $\lambda_{\rm dm}(<0.2R_{\rm vir})$, cannot serve as a proxy for galaxy spin as would be needed in semi-analytic modelling; baryonic processes influence the inner halo spin and induce a correlation. Which specific mechanisms cause this correlation is beyond the scope of this study. Speculatively, the same processes could simultaneously be related to the null correlation between the galaxy spin and the whole halo spin. \section{Spins of gas versus stars} \label{sec:correlation2} \smallskip Here we measure the correlation between the spins of stars and cold gas in the galaxy. On one hand, a correlation is expected, reflecting the fact that stars have formed from cold gas and that the accreted stars and gas might have suffered similar torques. On the other hand, the stars may reflect the spin of the gas at earlier times, which may be different from that of the newly accreted gas.
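The correlation profile of \fig{correlation_inner} amounts to computing, at each sampled radius, the Pearson coefficient between the gas spin and the enclosed dark matter spin over all snapshots. The sketch below is illustrative only; the input arrays (one gas spin per snapshot, and the dark matter spin evaluated on a grid of radii) are hypothetical placeholders for the simulation measurements.

```python
import numpy as np

def correlation_profile(lam_gas, lam_dm_profiles, radii):
    """Pearson R between log10 of the gas spin (within 0.1 R_vir) and
    log10 of the dark matter spin lam_dm(<r), at each radius in `radii`.

    lam_gas:          (N,)  gas spin, one value per snapshot
    lam_dm_profiles:  (N, M) dark matter spin within radii[j] * R_vir
    """
    lg = np.log10(np.asarray(lam_gas, float))
    out = np.empty(len(radii))
    for j, _ in enumerate(radii):
        out[j] = np.corrcoef(lg, np.log10(lam_dm_profiles[:, j]))[0, 1]
    return out
```

A profile built this way should approach $\mathcal{R}\sim1$ where the two spins track each other and fluctuate around zero where they are independent.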
\subsection{Spin amplitude} \smallskip As shown in \fig{correlation_gasstar}, for the two simulation suites, the spins of the baryonic components are correlated, with $\mathcal{R}\sim0.6$-0.8. In terms of redshift trend, both simulations seem to show a mildly stronger correlation at lower redshift. As for halo mass dependence, in the VELA sample, both $\lambda_{\rm gas}$ and $\lambda_{\rm star}$ are higher by $\sim$50\% for systems with $M_{\rm vir}>10^{11.4}M_\odot$ than for $M_{\rm vir}<10^{11.4}M_\odot$. NIHAO shows the same qualitative trend, although the baryonic spins in NIHAO are overall slightly lower than in VELA. NIHAO again exhibits marginally stronger correlations than VELA in general. \subsection{Alignment} \smallskip \fig{alignment_gasstar} shows the cumulative distributions of $\cos\theta_{\rm gas,star} = \jjgas \cdot \jjstar / |\jjgas||\jjstar|$. The gas and stellar spin vectors are generally well aligned. The median $\cos\theta_{\rm gas,star}$ is 0.96-0.98 (0.96-0.99) in the VELA (NIHAO) simulation at $M_{\rm vir}>10^{11.4}M_\odot$, and is 0.88-0.92 in the VELA simulation at $M_{\rm vir}<10^{11.4}M_\odot$. The gas and stellar spin vectors are better aligned in more massive systems, and there seems to be a weak trend that the alignment becomes marginally better at later times. \smallskip We notice that a non-negligible fraction of galaxies have {\it counter-rotating} gas and stellar components, especially the less massive ones. In particular, the fraction of galaxies with $\theta_{\rm gas, stars} > 120^\circ$ is 5\% (9\%) and 0\% (3\%) in the VELA (NIHAO) simulation, for $M_{\rm vir}<10^{11.4}M_\odot$ and $>10^{11.4}M_\odot$, respectively.
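The alignment statistics used here and in the previous section reduce to simple vector arithmetic, as in the following sketch (the helper names are hypothetical; the inputs are the angular momentum vectors of any two components):

```python
import numpy as np

def cos_alignment(j1, j2):
    """cos(theta) = j1 . j2 / (|j1| |j2|) between two AM vectors."""
    j1 = np.asarray(j1, float)
    j2 = np.asarray(j2, float)
    return float(np.dot(j1, j2) / (np.linalg.norm(j1) * np.linalg.norm(j2)))

def counter_rotating_fraction(cos_thetas, limit_deg=120.0):
    """Fraction of systems whose misalignment angle exceeds limit_deg,
    e.g. theta > 120 deg for counter-rotating gas/stars."""
    cos_thetas = np.asarray(cos_thetas, float)
    return float(np.mean(cos_thetas < np.cos(np.radians(limit_deg))))
```

Applying `counter_rotating_fraction` to the per-snapshot $\cos\theta_{\rm gas,star}$ values yields the counter-rotating fractions quoted in the text.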
\begin{figure} \includegraphics[width=0.5\textwidth]{ThetaGasStarDistributions.eps} \caption{ Cumulative distribution of the cosine of the angle between the angular momentum vectors of cold gas and stars, both measured within $0.1R_{\rm vir}$, for VELA ({\it left}) and NIHAO ({\it right}) galaxies in different redshift and halo mass bins. The redshift and halo mass bins are the same as used in previous figures, as indicated. Dotted lines indicate the fraction of systems with $\theta<30^\circ$. In both simulation suites, the spin vectors of the baryonic components are well aligned. The alignment is better in more massive systems, and strengthens marginally at later times. } \label{fig:alignment_gasstar} \end{figure} \section{Origin of the null correlation between galaxy and halo spins} \label{sec:evolution} \smallskip In this section, we discuss two mechanisms that can cause $f_j$ to anti-correlate with $\lambda_{\rm halo}$, and speculate on other possible processes that can decouple $\lambda_{\rm gal}$ from $\lambda_{\rm halo}$. \subsection{Effect of compaction} \label{sec:compaction} \begin{figure*} \includegraphics[width=\textwidth]{SpinCompactionRelation.eps} \caption{ The spin of cold gas, stars (within $0.1R_{\rm vir}$), and dark matter halo (within $R_{\rm vir}$) versus $\Delta t_{\rm BN}/t_{\rm vir}$ in the VELA simulation, where $\Delta t_{\rm BN}/t_{\rm vir}$ is the time since the BN snapshot in units of the virial time. The spins of the baryons rise after the compaction event, due to the formation of an extended ring, while $\lambda_{\rm halo}$ remains roughly constant. Color marks halo mass, indicating that compactions in VELA generally occur at $\ga10^{11-11.5}M_\odot$. } \label{fig:compaction} \end{figure*} \smallskip One possible origin for the anti-correlation between $f_j$ and $\lambda_{\rm halo}$ is the dramatic compaction event that most galaxies undergo.
\cite{db14} argued analytically, and \cite{zolotov15} and \cite{tacchella16_prof} showed using simulations, that most galaxies undergo phases of dissipative gas contraction, triggered by mergers or counter-rotating accretion streams, into compact, star-forming systems, termed ``blue nuggets" (BN). Such objects have been observed \citep{barro13,barro14_bn_rn,barro14_kin,barro15_kin,barro17_uni,barro16_kin}. Observationally, these compact star-forming nuclei may be obscured by dust and are not necessarily blue in color. In the VELA simulations, compaction triggers inside-out quenching once above a threshold mass -- stellar mass $10^{9.5-10}M_\odot$ and halo mass $10^{11-11.5}M_\odot$. \smallskip Wet compaction tends to occur when the incoming material has low sAM \citep{db14}. During the subsequent blue-nugget phase, the central gas is depleted by star formation and the associated outflows. These outflows preferentially eject low-AM gas, while the new incoming gas with higher spin settles in an extended ring (e.g. Fig. 7 in \citealt{zolotov15}). As a result, the gas spin in the galaxy rises sharply during the BN phase. This implies that a galaxy with a low $\lambda_{\rm gas}$ at halo entry (presumably reflecting a low $\lambda_{\rm halo}$, \citealt{danovich15}) that undergoes compaction ends up having a higher $\lambda_{\rm gas}$, namely $\lambda_{\rm gal}/\lambda_{\rm halo}>1$. \smallskip The spin of stars exhibits a similar behavior. During the BN phase, new stars form in the center following the compact gas, but after the BN phase the stellar effective radius grows gradually, partly due to new stars that form in the outer ring with high AM and partly due to new ex-situ stars from minor mergers, likely with higher AM. \smallskip We illustrate this effect using the VELA simulations. We detect the main BN phase by identifying the snapshot with the most prominent increase in the surface gas density within 1kpc (see Dekel et al.
2018, in prep., for more details), and investigate the evolution of the spins of the different components before and after the main BN phase. \fig{compaction} shows $\lambda_{\rm gas}$, $\lambda_{\rm star}$ and $\lambda_{\rm halo}$ versus $\Delta t_{\rm BN}/t_{\rm vir}$, the time since the BN phase in units of the virial time. Clearly, compaction affects the spins of the baryons as described above, but not the spin of the dark halo. Since compactions are more common at high $z$, this, combined with the correlation between compaction and low $\lambda_{\rm halo}$ reported in \citeauthor{zolotov15}, explains the anti-correlation between $f_j$ and $\lambda_{\rm halo}$ at high $z$. Compaction also generally occurs at $M_{\rm vir}\simeq10^{11-11.5}M_\odot$, explaining the higher $\lambda_{\rm gal}$ in more massive haloes, as we have seen in the left-hand panels of \fig{correlation_Mzbins}. We caution, though, that the post-compaction ring formation is hypothetical. The suppression of star formation by AGN feedback may become important for post-compaction systems and quickly turn BNs into compact quiescent systems (``red nuggets''). Without AGNs, the VELA simulations may over-estimate the significance of the post-compaction ring phase. \subsection{Effect of mergers} \label{sec:merger} \smallskip Another possible source for the anti-correlation of $f_j$ and $\lambda_{\rm halo}$ is the variation of spin during a major merger.\footnote{Since the compaction is in many cases associated with a merger, the two may affect $f_j$ simultaneously.} The accretion of a large satellite makes $\lambda_{\rm halo}$ rise, as the spin is temporarily dominated by the orbital angular momentum of the merging dark matter haloes. The galaxy spin $\lambda_{\rm gal}$ is temporarily unaffected unless the satellite survives the tidal disruption and reaches the central baryonic range.
In such a case, within a couple of halo dynamical times, $\lambda_{\rm halo}$ relaxes to its normal value as some of the high-AM material ends up beyond $R_{\rm vir}$ \citep{lee17b}, but $\lambda_{\rm gal}$ rises, as the orbital angular momentum of the baryonic components now dominates the galaxy spin. This two-phase process is expected to introduce an anti-correlation between $f_j$ and $\lambda_{\rm halo}$. \smallskip We illustrate this effect using the NIHAO simulations. We detect halo mergers as mass increments of the main progenitor of more than 10\% (i.e., major and minor mergers).\footnote{A mass ratio of 1:10 between the satellite and the central is chosen such that 1) orbit decay due to dynamical friction is efficient; 2) the time interval between two mergers is long enough to allow for the characteristic evolution pattern of spins as described above.} If there are multiple detections of mergers within two halo dynamical times, $t_{\rm vir}\equiv R_{\rm vir}/V_{\rm vir}$, we only keep the earliest one, to avoid double counting the re-accretion of splashback satellites. This gives us a clean, but not necessarily complete, sample of massive mergers. \fig{merger} shows the spin ratio $f_j$, with respect to that at the moment of a halo merger, $f_j(0)$, as a function of the time after the halo merger, $\Delta t_{\rm HM}/t_{\rm vir}$. Clearly, $f_j$ drops abruptly upon the accretion of a large satellite, and starts to recover after $\sim2t_{\rm vir}$. We verify that the VELA simulations also show this decrease-and-recovery behavior of $f_j$. As with compaction, massive mergers are also more frequent at high $z$, helping to partly explain the anti-correlation between $f_j$ and $\lambda_{\rm halo}$ at high $z$. We caution that both effects are more common at higher masses, and the low-mass bin perhaps requires other explanations.
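The merger detection described above can be sketched as follows. The function and array names are hypothetical; the inputs stand for the per-snapshot main-progenitor history (cosmic time, virial mass, and virial time $t_{\rm vir}=R_{\rm vir}/V_{\rm vir}$).

```python
import numpy as np

def detect_mergers(t, m_vir, t_vir, ratio=0.10):
    """Flag snapshots where the main progenitor's virial mass grows by
    more than `ratio` relative to the previous snapshot, keeping only
    the earliest detection within a 2 * t_vir window, so that the
    re-accretion of a splashback satellite is not double counted.

    Returns the indices of snapshots flagged as massive mergers."""
    t = np.asarray(t, float)
    m_vir = np.asarray(m_vir, float)
    growth = np.diff(m_vir) / m_vir[:-1]          # fractional mass growth
    candidates = np.where(growth > ratio)[0] + 1  # snapshot after the jump
    kept, last_t = [], -np.inf
    for i in candidates:
        if t[i] - last_t > 2.0 * t_vir[i]:
            kept.append(int(i))
            last_t = t[i]
    return kept
```

As noted in the text, this yields a clean but not necessarily complete merger sample: rapid smooth accretion can masquerade as a merger, and mergers closer together than $2t_{\rm vir}$ are merged into one event.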
\begin{figure} \includegraphics[width=0.45\textwidth]{fjMergerRelation.eps} \caption{ The ratio $f_j\equiv \lambda_{\rm gal}/\lambda_{\rm halo}$, with respect to $f_j$ at the moment of a massive halo merger (see text), as a function of the time after the merger, $\Delta t_{\rm HM}/t_{\rm vir}$, for the NIHAO galaxies. Thin grey lines are individual cases; the thick line with error bars indicates the median and the 16th and 84th percentiles. $f_j$ decreases immediately after a halo merger and starts to recover after $\sim$2 virial times, both phases giving rise to an anti-correlation between $f_j$ and $\lambda_{\rm halo}$. } \label{fig:merger} \end{figure} \subsection{Other reasons for the lack of correlation} \smallskip While the two mechanisms discussed above generate some anti-correlation, we find that they are not enough to explain the full effect. For example, removing the post-halo-merger snapshots within 4$t_{\rm vir}$ in the VELA simulation results in a positive, but rather weak, correlation between $\lambda_{\rm gal}$ and $\lambda_{\rm halo}$, with $\mathcal{R}\approx0.3$. The two mechanisms are tightly related -- in fact, about 40\% of compactions are preceded by massive mergers, and the rest are associated with minor mergers, disk instabilities, counter-rotating streams, or other mechanisms (Dekel et al. 2018, in preparation). Here we speculate on a few other possible processes that may smear out the $\lambda_{\rm gal}$-$\lambda_{\rm halo}$ correlation but do not necessarily cause an anti-correlation between $f_j$ and $\lambda_{\rm halo}$. We refer interested readers to \cite{danovich15} for a comprehensive discussion of the different stages of AM build-up for galaxies at high $z$. \smallskip First, $\lambda_{\rm gal}$ and $\lambda_{\rm halo}$ are quantities reflecting different time domains. The spin of gas reflects the sAM of recently accreted cold gas, while the spin of the dark matter halo is an integration over the full assembly history.
Therefore, variations in the incoming streams from the cosmic web affect $\lambda_{\rm gas}$ more and $\lambda_{\rm halo}$ less. This is in line with our finding that $\lambda_{\rm gal}$ and $\lambda_{\rm halo}$ are particularly uncorrelated at high $z$, when both gas accretion and depletion (star formation) are much faster. \smallskip Second, gas ejection from the galaxy center due to stellar feedback tends to remove low-spin gas, mixes the gas in the hot halo, and recycles the gas if it cools (e.g., \citealt{defelippis17}). Gas outflows can also occur from the disk outskirts, removing high-spin gas that returns with low spin, thus lowering the overall spin (DeGraf et al. in prep.). Hence, stronger feedback generally means more AM exchange between the inner and outer halo. However, stronger feedback also means less clumpiness, so the processes that facilitate AM transfer, such as dynamical friction, ram pressure, and torques generated by the perturbed disk under violent disk instability \citep{dsc09}, may be less efficient. We note that the NIHAO simulations, which have stronger stellar feedback than VELA, show better alignment between the spin vectors, and a marginally stronger $\lambda_{\rm gal}$-$\lambda_{\rm halo}$ correlation. Therefore, it seems that the net effect of strong feedback works {\it in favour of} a better correlation or alignment. However, this interpretation is hindered by the fact that NIHAO has poorer resolution, which also reduces the clumpiness of the galaxies \citep{buck17}. \smallskip Third, torques from the stellar disk on the inspiraling gas ring can spin down the galaxy, without affecting $\lambda_{\rm halo}$. Balancing this effect, compaction gives rise to a central bulge, and thus less torque on the inspiraling gas and less angular momentum loss.
\smallskip To conclude, the evolution of $\lambda_{\rm gal}$ is the net effect of many coupled processes, and the null correlation between $\lambda_{\rm gal}$ and $\lambda_{\rm halo}$ is not very surprising. \section{Galaxy size predictor revisited} \label{sec:size} \smallskip What we have learnt so far poses a challenge to the classic galaxy-size predictor \begin{equation} \label{eq:Re} R_{\rm e} \simeq f_j \lambda_{\rm halo} R_{\rm vir}. \end{equation} We showed that the proportionality factor $f_j$ ($\equiv\lambda_{\rm gal}/\lambda_{\rm halo}$) has very large scatter, and can vary on short time scales due to compaction, mergers, and other processes. The factor $f_j$ depends systematically on $\lambda_{\rm halo}$: when parametrized as $f_j\propto\lambda_{\rm halo}^{-b}$, the simulations show that $b\approx1$ at high $z$, suggesting that $\lambda_{\rm halo}$ is not a good predictor of galaxy size, at least at $z\ga1$. \smallskip Here we check the validity of \equ{Re} with the VELA and NIHAO simulations. The left-hand panels of \figs{size_VELA}{size_NIHAO} show $R_{\rm e}/R_{\rm vir}$ versus $\lambda_{\rm halo}$ for the VELA and NIHAO galaxies across all snapshots at $z<7$. Note that in \fig{size_NIHAO} and for the rest of the paper, we have included the full NIHAO sample of $\sim100$ galaxies, whose $M_{\rm vir}$ values range from $10^{9.5}M_\odot$ to $10^{12.5}M_\odot$. Compared to the MW-sized subsample, the full sample shows the same trends regarding $R_{\rm e}$ versus $R_{\rm vir}$, and improves the statistics. \smallskip There is almost no correlation between $R_{\rm e}/R_{\rm vir}$ and $\lambda_{\rm halo}$, independent of redshift. Interestingly, the right-hand panels of \figs{size_VELA}{size_NIHAO} show that both simulation suites still nicely reproduce the observed relation $R_{\rm e} \approx A R_{\rm vir}$, with the overall best-fit $A\approx0.024$. 
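Such a zero-point fit can be illustrated with a short sketch. Here a synthetic toy catalogue stands in for the simulation outputs; only the best-fit value $A=0.024$ and the rough $\sim$0.25 dex scatter come from the text, everything else is assumed:

```python
import numpy as np

def fit_zero_point(r_e, r_vir):
    """Zero-point A of R_e = A * R_vir, estimated as the median of
    R_e / R_vir; the median is robust to the log-normal scatter."""
    return float(np.median(np.asarray(r_e) / np.asarray(r_vir)))

# Synthetic toy catalogue (not the simulation data): sizes scattered
# log-normally by ~0.25 dex around the quoted best fit A = 0.024.
rng = np.random.default_rng(1)
r_vir = rng.uniform(50.0, 300.0, size=2000)              # kpc
r_e = 0.024 * r_vir * 10**rng.normal(0.0, 0.25, 2000)    # kpc

A = fit_zero_point(r_e, r_vir)   # recovers a value close to 0.024
```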
In addition, both suites show a clear redshift trend: the proportionality factor $A$ increases from $\simeq0.02$ at $z\la1$ to $0.03$--$0.04$ at $z\sim3$, and to approximately 0.05 at $z\ga5$. This redshift trend qualitatively agrees with what \citet{somerville17} found using the GAMA and CANDELS surveys and abundance matching, but is somewhat too strong. \begin{figure*} \includegraphics[width=0.75\textwidth]{SizeSpinRelation_2panels.eps} \caption{ Galaxy size (3D half-stellar mass radius) to host halo virial radius ratio versus halo spin ({\it left}), and galaxy size versus host halo virial radius ({\it right}), color-coded by redshift, in the VELA simulations. Circles with error bars indicate the median and the 16th and 84th percentiles. The dashed line in the left-hand panel represents a reference relation, $R_{\rm e}=0.5\lambda_{\rm halo}R_{\rm vir}$. \citet{somerville17} showed that this relation, when combined with the stellar to halo mass relation from abundance matching, reproduces the observed $R_{\rm e}$-$M_\star$ relations across redshift up to $z\sim3$. Dotted lines in the right-hand panel are loci of $R_{\rm e} = A R_{\rm vir}$, with $A = 0.01, 0.02, \ldots, 0.06$. The overall best-fit $A$ is 0.025 (red dashed line), while $A$ clearly increases with redshift, in qualitative agreement with observation. } \label{fig:size_VELA} \end{figure*} \begin{figure*} \includegraphics[width=0.75\textwidth]{SizeSpinRelation_NIHAO_hyd_2panels.eps} \caption{ The same as \fig{size_VELA}, but for the NIHAO simulations. Overall, $R_{\rm e} = A R_{\rm vir}$ with $A=0.023$, similar to that of VELA. The redshift dependence of $A$ is in qualitative agreement with VELA. } \label{fig:size_NIHAO} \end{figure*} \smallskip That is, in the simulations, the relation $R_{\rm e} \approx AR_{\rm vir}$ naturally arises, although $A$ is not a function of $\lambda_{\rm halo}$. The interesting questions become: {\it what determines the value of $A$? 
Are there any secondary halo properties (secondary to halo mass or $R_{\rm vir}$) that can capture the scatter in $A$? What gives rise to the redshift dependence? } Our goal here is to generalize from the simulations an empirical recipe for predicting galaxy size using solely halo properties, which can be useful for semi-analytic or semi-empirical models. \subsection{A new empirical galaxy size predictor} \smallskip It turns out that, at fixed halo mass, smaller galaxies tend to live in more concentrated haloes, where the halo concentration parameter is defined as $c\equiv R_{\rm vir}/r_{\rm s}$, with $r_{\rm s}$ the scale radius of the best-fit \cite{nfw97} profile. We measure the concentration parameter by fitting an NFW circular velocity profile to the circular velocity profile $V_{\rm c}(r)$ of the dark matter component of the simulated galaxy, as detailed in Appendix \ref{sec:concentration}. Assuming for simplicity a power-law dependence on $c$, we find that galaxy size scales with halo radius and concentration as \begin{equation} \label{eq:Re_new} R_{\rm e} = A^\prime c^\gamma R_{\rm vir}, \end{equation} with $\gamma\approx-0.7$ in both simulation suites. As such, \equ{Re_new} is a tighter relation than $R_{\rm e} = AR_{\rm vir}$, and the factor $A^\prime$ is almost independent of redshift or mass. \smallskip We note that the role of the concentration dependence is two-fold. First, at fixed halo mass and redshift, the size of {\it individual} galaxies anti-correlates with halo concentration. The anti-correlation is well approximated by $c^{-0.7}$. We illustrate this point in Appendix \ref{sec:cDependence}. Second, there is a redshift dependence associated with $c^{-0.7}$, which captures the evolution of the average $R_{\rm e}$-to-$R_{\rm vir}$ ratio. It is well established that halo concentration is a function of halo mass and redshift. 
Using $N$-body simulations of the Planck cosmology, \citet{dutton14} provide an empirical concentration-mass-redshift relation, given by \begin{equation} \label{eq:concentration} \log \langle c\rangle = a + b\log(M_{\rm vir}/10^{12}h^{-1} M_\odot) \end{equation} where $a=0.537+0.488\exp(-0.718z^{1.08})$ and $b=-0.097+0.024z$. For $M_{\rm vir}\sim 10^{12}M_\odot$, this relation is well approximated by $\langle c\rangle \propto (1+z)^{-0.75}$ up to $z\sim3$. Therefore, with the factor $c^{-0.7}$, \equ{Re_new} indicates that $A\propto(1+z)^{0.5}$, as found in both VELA and NIHAO. This is illustrated in more detail in Appendix \ref{sec:comparison}. \smallskip \fig{size_concentration_VELA} and \fig{size_concentration_NIHAO} show $R_{\rm e}$ versus $R_{\rm vir}$ in the left-hand panels, and $R_{\rm e}$ versus the concentration-corrected halo radius, $(c/10)^{-0.7}R_{\rm vir}$, in the right-hand panels, for VELA and NIHAO respectively. Clearly, the concentration scaling leads to a tighter and more universal relation for both suites. \smallskip From the perspective of semi-analytic models, which build upon dark-matter-only (DMO) simulations, we also check the validity of \equ{Re_new} using concentrations and virial radii measured from the matching DMO snapshots of the NIHAO simulations. This is shown in \fig{size_concentration_NIHAO_DMO}. Comparing \fig{size_concentration_NIHAO} and \fig{size_concentration_NIHAO_DMO}, we can see that the same recipe holds, although the best-fit $A^\prime$ is somewhat different, reflecting the average halo response to baryonic physics. We conclude that the galaxy half-mass radius {\it in the simulations} can be empirically modeled as $A^\prime (c/10)^{-0.7}R_{\rm vir}$, with $A^\prime$ of the order of 0.02 and slightly dependent on the details of baryonic physics. \smallskip Intuitively, the dependence of galaxy size on halo concentration can be rationalized as follows. 
When considering a fixed halo mass, the concentration measures the depth of the gravitational potential well. For an NFW profile, $\Phi_0 = -V_{\rm vir}^2 c/f(c)$, where $\Phi_0$ is the gravitational potential at the center of the halo, $V_{\rm vir}$ is the virial velocity, and $f(x)=\ln(1+x) - x/(1+x)$. In this regard, the success of \equ{Re_new} simply indicates that galaxy size contains information about the depth of the host potential, such that smaller galaxies live in deeper potential wells, possibly due to the fact that these haloes tend to form earlier (e.g., \citealt{wechsler02,zhao09}). While the above reasoning is suggestive, the cause of the concentration dependence of $R_{\rm e}/R_{\rm vir}$ is posed here as a theoretical challenge, especially the origin of the slope $\gamma \simeq -0.7$ in \equ{Re_new}. \begin{figure*} \includegraphics[width=0.75\textwidth]{SizeSpinRelation_concentration.eps} \caption{ {\it Left}: galaxy size (3D half-stellar mass radius) $R_{\rm e}$ versus halo virial radius $R_{\rm vir}$, for the VELA simulations across redshifts ($z=0.8$--$7$), color-coded by halo concentration. The red dashed line is the best-fit relation of the functional form $R_{\rm e}=AR_{\rm vir}$, as indicated. {\it Right}: galaxy size $R_{\rm e}$ versus the concentration-corrected halo radius, $(c/10)^{-0.7}R_{\rm vir}$. The red dashed line is the best-fit relation of the form $R_{\rm e}=A^\prime c^{\gamma}R_{\rm vir}$, with $\gamma=-0.7$ fixed. Circles with error bars indicate the median and the 16th and 84th percentiles. The {\it bottom} panels show the residuals with respect to the best-fit model. The quoted numbers are: the root-mean-square of the medians with respect to the best-fit models, and the average 1$\sigma$ scatter of $R_{\rm e}$ in the bins of halo radius. The concentration scaling makes the relation between galaxy size and halo radius tighter and more universal. 
} \label{fig:size_concentration_VELA} \end{figure*} \begin{figure*} \includegraphics[width=0.75\textwidth]{SizeSpinRelation_NIHAO_hyd_concentration.eps} \caption{ The same as \fig{size_concentration_VELA}, but for the NIHAO simulations ($z=0-7$). The same concentration scaling, $R_{\rm e}\propto c^{-0.7}R_{\rm vir}$, works equally well for NIHAO, with a similar zero-point to that found in VELA. } \label{fig:size_concentration_NIHAO} \end{figure*} \begin{figure*} \includegraphics[width=0.75\textwidth]{SizeSpinRelation_NIHAO_DMO_concentration.eps} \caption{ Similar to \fig{size_concentration_NIHAO}, but with the halo properties measured in the matching dark-matter-only simulations. The data points are sparser than in \fig{size_concentration_NIHAO} because for every four hydro output snapshots there is only one dark-matter-only output. The same empirical relation $R_{\rm e}\propto c^{-0.7}R_{\rm vir}$ holds, although the scatter is somewhat larger. The difference between the zero point here (0.025) and in \fig{size_concentration_NIHAO} (0.019) reflects the average halo response to baryonic processes in the NIHAO simulations. } \label{fig:size_concentration_NIHAO_DMO} \end{figure*} \section{Discussion} \label{sec:discussion} \subsection{Comparison with previous studies} \label{sec:discussion1} \smallskip In this section, we compare our results regarding the correlations of spin amplitudes and spin vectors with those reported in the literature. \smallskip \cite{teklu15} use the Magneticum Pathfinder simulations to study the connection between the kinematic morphology of galaxies and their specific baryonic AM and host halo spin. Although their primary focus is the morphology dependence, Fig.~11 therein shows that the sAM of the halo $\jh$ and the sAM of the cold gas $\jgas$ are barely correlated at $z>1$, for either disks or spheroids. This is in agreement with what we find in VELA and NIHAO. 
\smallskip In addition, \cite{teklu15} show that at lower redshifts a weak correlation emerges, consistent with what NIHAO implies. We have verified using the Illustris simulations \citep{genel14} that at $z\ga1$ the spin of the baryons and the spin of the host halo are barely correlated, and that at lower $z$ a weak correlation develops, primarily between $\lambda_{\rm star}$ and $\lambda_{\rm halo}$. The low-$z$ behavior is also confirmed by \cite{rg17}, who report a correlation in the Illustris simulation between the degree of rotation-support of stars and host halo spin for $z=0$ galaxies with $M_\star<10^{11}M_\odot$. \smallskip We note that both the Magneticum Pathfinder simulations and the Illustris simulations take AGNs into account, and exhibit results similar to VELA and NIHAO. Therefore, AGNs seem to have a rather weak effect on the $\lambda_{\rm gal}$-$\lambda_{\rm halo}$ correlation. \smallskip We find the alignment of galaxy spin and halo spin to be marginally weaker at later times, and the alignment of gas spin and stellar spin to become slightly better at later times. The same qualitative trends are found by \citet{zs17} in the Illustris simulation, where the spins of dark matter, gas and stars are all measured within the whole virial radius. \smallskip The median angle between the spin vectors of the cold gas and the host halo, $\langle \theta_{\rm gas, halo}\rangle$, is 43-45$^\circ$ in the NIHAO simulations, weakly dependent on redshift. This is in good agreement with most of the reported values in the literature: \cite{hahn10} found 49$^\circ$ at $z=0$; \cite{teklu15} found 45-49$^\circ$ at $z=0.1$, depending weakly on galaxy morphology. \cite{sharma12} reported a significantly smaller value of 30$^\circ$. 
The median $\langle \theta_{\rm gas, halo}\rangle$ is 47-54$^\circ$ for VELA galaxies with $M_{\rm vir}>10^{11.4}M_\odot$, and is 51-55$^\circ$ for $M_{\rm vir}<10^{11.4}M_\odot$, depending on redshift. These results are on the high side of the literature values. \smallskip The median angle between the spin vectors of the stars and the host halo is $\langle \theta_{\rm stars, halo}\rangle = 46$-$50^\circ$ in the NIHAO simulations, also in good agreement with previous studies. \cite{croft09} found 44$^\circ$ at $z=1$; \cite{hahn10} measured 49$^\circ$ at $z=0$; \cite{teklu15} found 46-57$^\circ$ at $z=0.1$; \cite{bett10} reported a significantly smaller value of 34$^\circ$ at $z=0$; in comparison, at $z<0.8$, $\langle \theta_{\rm stars, halo} \rangle=49^\circ$ in NIHAO. The VELA results are again on the high side, with $\langle \theta_{\rm stars, halo} \rangle=61^\circ$ at $z=0.8-2$. \smallskip By tracing the Lagrangian volumes of the inner 10\% of the virial radii at $z=0$, \cite{zavala16} found that the angular momentum loss of the baryons tightly correlates with that of the dark matter, ever since the turn-around time of the dark matter. This is in line with our finding that the spins of the inner halo and the galaxy are correlated. \smallskip \cite{hahn10} and \cite{teklu15} found the median angle between the spin vectors of the baryonic components to be $\langle\theta_{\rm gas,stars}\rangle \approx6-8^\circ$ for late-type galaxies, with very weak redshift dependence in the range $z=$0-2. This is bracketed by the NIHAO result of 5$^\circ$ at $z<2$ and the VELA result of 13$^\circ$ at $0.8<z<2$. \smallskip Starkenburg et al. (2018, in prep) study the counter-rotating galaxies in the Illustris simulation in detail. They find the counter-rotating fraction to be very low. 
The fraction of galaxies with the angle between the spin vectors of the gas and stars larger than 120$^\circ$ is 0.43\%, independent of halo mass for $M_{\rm vir}=10^{11.4-12.2}M_\odot$ and decreasing at higher masses (private communication). Note that the Illustris sample is generally more massive than the galaxies used in this study, so it is possible that counter-rotation is more common in low-mass systems. In fact, for $M_{\rm vir}>10^{11.4}M_\odot$, no counter-rotation defined in the same way is detected in VELA, consistent with Starkenburg et al. \subsection{Redshift dependence of $R_{\rm e}$ in comparison with observations} \label{sec:discussion2} \smallskip We note that, while \equ{Re_new} with $\gamma\simeq-0.7$ accurately describes the simulation results, the implied redshift dependence of $R_{\rm e}/R_{\rm vir}$ seems too strong compared to that inferred from halo abundance matching \citep{somerville17}. In particular, the concentration scaling $c^{-0.7}$ yields approximately $R_{\rm e}/R_{\rm vir} \propto (1+z)^{0.5}$, while the observationally inferred redshift trend is approximately $\propto (1+z)^{0.3}$. This is illustrated in Appendix \ref{sec:comparison}. \smallskip The key observational benchmark for a galaxy size predictor is the $R_{\rm e}$--$M_\star$ relation, which exhibits clear redshift evolution such that, at fixed $M_\star$, galaxies are more compact at higher $z$ (e.g., \citealt{vanderwel14_MR}; \citealt{somerville17}). In the context of empirical modeling of observations, predicting the $R_{\rm e}$-$M_\star$ relation requires combining the stellar mass-halo mass relation from abundance matching with the galaxy size-halo radius relation from theory. 
In practice, starting from the halo catalog of an $N$-body simulation, one converts $M_{\rm vir}$ to $M_\star$ using the $M_\star$-$M_{\rm vir}$ relation, and computes $R_{\rm e}$ using $R_{\rm vir}$ and halo structural parameters ($\lambda_{\rm halo}$, $c$) according to the size predictor. \smallskip As noted, \equ{Re_new} exhibits a $z$-dependence that is too strong. At $z\sim2$, the average galaxy size predicted by $R_{\rm e} = 0.02(c/10)^{-0.7}R_{\rm vir}$ is $\sim$50\% higher than the 3D half-mass radius deduced from the CANDELS observations by \citet{somerville17}. \smallskip We opt not to interpret this tension too literally, for two reasons. First, the half-mass radius as deduced from the observed 2D radius may be biased as a function of redshift, and second, the $M_\star$--$M_{\rm vir}$ relation may carry non-negligible uncertainties, as follows. The deprojection from 2D to 3D may be strongly biased. The conversion of the projected half-light radius ($R_{\rm e,2D}$) to the 3D half-stellar-mass radius ($R_{\rm e}$) involves two factors: \begin{equation}\label{eq:2Dto3D} R_{\rm e,2D} = f_{\rm p} f_{\rm k} R_{\rm e}, \end{equation} where $f_{\rm p}$ corrects for projection and $f_{\rm k}$ accounts for the conversion from light-weighting to mass-weighting. While $f_{\rm k}$ ($\sim$1.2) seems to be a weak function of mass and redshift \citep{dutton11, lange15}, $f_{\rm p}$ depends significantly on the structure and shape of galaxies and is very likely a strong function of redshift and mass. For spherically-symmetric spheroids or oblate systems, $f_{\rm p}$ is typically between 0.68 (de Vaucouleurs) and 1 (face-on exponential disk), but for elongated (prolate) systems, $f_{\rm p}$ can easily be much smaller than unity. Qualitatively, this can be understood by considering a cigar-shaped galaxy projected along the major axis. 
It is easy to show that, for ellipsoids with an intrinsic axis ratio of $b/a\sim0.4$ and exponential density profiles, $f_{\rm p}$ has a value of $\sim0.5$. The fraction of prolate galaxies increases towards higher redshifts and lower masses: at $z\sim1$, more than half of all the galaxies with $M_\star\sim10^{9}M_\odot$ are prolate (\citealt{vanderwel14_MR}, Zhang et al. in prep). This is supported by simulations. In fact, VELA galaxies typically have $b/a\sim0.4$ at $z=2$--2.5 \citep{ceverino15_shape,tomassetti16}. \smallskip \citet{somerville17} in practice applied $f_{\rm p}f_{\rm k}=1\times1.2 = 1.2$ for late-type galaxies and $f_{\rm p}f_{\rm k}=0.68 \times 1.15 = 0.78$ for early-type galaxies. That is, for the relevant mass range ($M_\star\la10^{10.5}M_\odot$), where late-type galaxies dominate, $R_{\rm e}$ is always approximately 20\% smaller than $R_{\rm e,2D}$, while for prolate systems, which dominate at high redshifts, it should be significantly larger than $R_{\rm e,2D}$. Taking the effect of projecting elongated systems into account will increase the $R_{\rm e}$ deduced from observations and potentially alleviate the tension. \smallskip The $M_\star$-$M_{\rm vir}$ relations may carry non-negligible systematic uncertainties. Almost all abundance matching studies assume a \citet{chabrier03} initial mass function (IMF), as found for the Milky Way, at all redshifts. However, the IMF may not be universal. In fact, even in the local universe, there is no consensus on whether the IMF is Milky-Way like or bottom heavy (e.g., \citealt{dutton12}). If the IMF is closer to Salpeter (1955) at $z\sim2$, the stellar mass would be higher by $\sim$0.25 dex with respect to the standard abundance matching result based on the Chabrier IMF. This alone almost fully accounts for the 50\% overprediction of the $R_{\rm e}$-$M_\star$ relation. 
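To make the deprojection of \equ{2Dto3D} explicit, here is a minimal sketch. The $f_{\rm p}$ and $f_{\rm k}$ values follow the ones quoted above; the prolate $f_{\rm k}=1.2$ is an assumption, since the text only quotes $f_{\rm p}\sim0.5$ for prolate systems:

```python
# Projection (f_p) and light-to-mass (f_k) factors quoted in the text:
# late-type 1.0 x 1.2, early-type 0.68 x 1.15, and f_p ~ 0.5 for
# prolate systems with b/a ~ 0.4 (the prolate f_k = 1.2 is assumed).
F_P = {"late": 1.00, "early": 0.68, "prolate": 0.5}
F_K = {"late": 1.20, "early": 1.15, "prolate": 1.2}

def deproject_size(r_e_2d, morphology):
    """3D half-mass radius R_e from the projected half-light radius,
    inverting R_e,2D = f_p * f_k * R_e."""
    return r_e_2d / (F_P[morphology] * F_K[morphology])
```

For an observed $R_{\rm e,2D}=2$ kpc, the late-type correction gives $R_{\rm e}\simeq1.7$ kpc, while the prolate correction gives $R_{\rm e}\simeq3.3$ kpc, illustrating how elongation can flip the sign of the correction.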
\smallskip If the goal, despite these caveats, is to reproduce the observational estimates as they are, using the standard abundance matching result, one can simply introduce an extra redshift dependence (in addition to, and in the opposite direction to, that associated with $c(z)^{-0.7}$) by generalizing \equ{Re_new} to \begin{equation} \label{eq:Re_new2} R_{\rm e} = A^\prime (1+z)^{\beta} c^{\gamma} R_{\rm vir}. \end{equation} We find that $\beta\approx-0.2$ serves as an adequate correction for $\gamma=-0.7$. We clarify again that \equ{Re_new2} is a deviation from \equ{Re_new}, which accurately describes the redshift dependence in the simulations. \Equ{Re_new2} should be adopted if the tension between the simulated sizes and the sizes deduced from observations at high redshift is confirmed to be valid. \subsection{Other comments} \label{sec:discussion3} \smallskip We also note that, although the concentration parameter has been used in conventional galaxy size predictors of the form of \equ{Re}, its role was rather limited. In recipes of the sort of \citet{mmw98}, concentration comes in via a factor of order unity that describes the adiabatic contraction of the halo due to the gravity from the galactic disk. Here, with \equ{Re_new2} and $\gamma\approx-0.7$, the $c$-dependence is more pronounced. \smallskip The concentration dependence may also {\it partially} explain the morphology dependence of the $R_{\rm e}$-$R_{\rm vir}$ relation. In the context of conditional abundance matching, the specific star formation rate (color) of a galaxy contains information about the host halo formation time, in the sense that older galaxies dwell in haloes that formed earlier at a given mass \citep{hearin13}. Galaxy color correlates with morphology, and concentration reflects halo formation time. That is, \equ{Re_new2} hints that quiescent galaxies are more compact at a given halo mass (virial radius) than star-forming ones. 
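For concreteness, the generalized predictor of \equ{Re_new2}, combined with the \citet{dutton14} concentration relation of \equ{concentration}, can be sketched as follows. This is a minimal illustration, with the defaults $A^\prime=0.02$, $\beta=-0.2$, $\gamma=-0.7$ taken from the text:

```python
import numpy as np

def mean_concentration(m_vir, z):
    """Mean halo concentration <c>(M_vir, z) from the Dutton & Maccio
    (2014) fit quoted in the text; m_vir in units of 1e12 h^-1 Msun."""
    a = 0.537 + 0.488 * np.exp(-0.718 * z**1.08)
    b = -0.097 + 0.024 * z
    return 10.0**(a + b * np.log10(m_vir))

def galaxy_size(r_vir, c, z, A=0.02, beta=-0.2, gamma=-0.7):
    """Effective radius R_e = A (1+z)^beta (c/10)^gamma R_vir; setting
    beta = 0 recovers the purely concentration-based predictor."""
    return A * (1.0 + z)**beta * (c / 10.0)**gamma * r_vir

# Example: a ~1e12 h^-1 Msun halo at z = 0 has <c> ~ 10.6, so for
# R_vir = 200 kpc the predicted size is close to 0.02 R_vir = 4 kpc.
c0 = mean_concentration(1.0, 0.0)
re0 = galaxy_size(200.0, c0, 0.0)
```

Setting `beta=0` reproduces the stronger redshift trend of \equ{Re_new}, since $\langle c\rangle$ decreases with $z$ at fixed mass and enters with the negative power $\gamma$.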
\smallskip The most prominent difference between VELA and NIHAO in \figs{size_concentration_VELA}{size_concentration_NIHAO} is that the scatter is larger in VELA, for both the $R_{\rm e}$-$R_{\rm vir}$ relation and the $R_{\rm e}$-$c^{-0.7}R_{\rm vir}$ relation. This is likely because the NIHAO sample consists almost exclusively of large, late-type galaxies, while the VELA suite covers a wider range of morphologies. The fact that the $R_{\rm e}$--$c^{-0.7}R_{\rm vir}$ relation still exhibits significant scatter hints that there might be residual dependence of $R_{\rm e}$ on morphological type. It remains an interesting open question for future studies whether additional halo properties can help tighten the relation between galaxy size and halo virial radius further. \section{Conclusion} \label{sec:conclusion} \smallskip In this paper, we use two suites of cosmological hydrodynamical simulations to study the correlation between the spin of a galaxy and the spin of its host halo, and examine the relation between galaxy effective radius and halo virial radius. The two suites differ significantly in numerical resolution and in the strength of stellar feedback, yet show similar results regarding the correlation of spins and the galaxy size--halo size relation, from which we draw the following conclusions. \smallskip \noindent (i) The distributions of galaxy spin follow log-normal shapes similar to those of their host haloes. The median values differ for the different components: the spin of the stars within $0.1R_{\rm vir}$ has a value of $\sim0.005$--$0.007$, while the spin of the cold gas within $0.1R_{\rm vir}$ is $\sim0.02$--$0.03$, slightly lower than, but of the same order as, the dark matter halo spin ($\sim0.037$), in qualitative agreement with what is inferred from the $H_\alpha$ kinematics of massive star forming galaxies at high-$z$ \citep{burkert16}. 
\smallskip \noindent (ii) The similarity of the spin distributions does not translate to a correlation between the spin of each given galaxy and the spin of its host halo. Both simulation suites show that the galaxy spin $\lambda_{\rm gal}$ and the host halo spin $\lambda_{\rm halo}$ are barely correlated, especially at $z\ga1$. This null correlation is qualitatively the same if the spin of the galaxy is measured for the cold gas or the stars separately. There seems to be a weak correlation between $\lambda_{\rm gal}$ and $\lambda_{\rm halo}$ at $z\la1$. Given that the specific angular momenta of the cold gas and the dark matter are correlated at accretion into the halo virial radius, this indicates that the gas angular momentum is not conserved during the gas inflow into the galaxy and its evolution within the galaxy. The spin of the inner part of the dark matter halo shows a correlation with that of the galaxy, but this is a consequence of baryonic effects on the dark matter halo, such that the inner halo spin from $N$-body simulations cannot serve as a proxy for the galaxy spin. \smallskip \noindent (iii) The angular momentum retention factor, $f_j$ ($\equiv \lambda_{\rm gal}/\lambda_{\rm halo}$), has a value of $\sim0.5$ on average, with large, stochastic variations from galaxy to galaxy and from one time to another, and it anti-correlates with $\lambda_{\rm halo}$. Wet compaction or mergers could potentially give rise to the anti-correlation between $f_j$ and $\lambda_{\rm halo}$. Low-$\lambda_{\rm halo}$ galaxies tend to develop a wet compaction \citep{db14}, which causes the initially low-spin system to end up with a higher $\lambda_{\rm gas}$, by depleting the low-angular-momentum gas in the compact star-forming nucleus phase, and acquiring higher-angular-momentum gas that settles in an extended ring. The merger of massive satellites causes $\lambda_{\rm halo}$ and $\lambda_{\rm gal}$ to rise and fall {\it in turn}, over a time scale of several halo dynamical times. 
Based on the picture of angular momentum gain and loss described in \cite{danovich15}, we also speculated on other possible mechanisms, which do not necessarily cause an anti-correlation between $f_j$ and $\lambda_{\rm halo}$, but smear out the $\lambda_{\rm gal}$-$\lambda_{\rm halo}$ correlation. \smallskip \noindent (iv) Contrary to the uncorrelated spin amplitudes, the spin orientations of a galaxy and its host halo are correlated. Overall, half of the cases have $\cos\theta\ga 0.6$, where $\theta$ is the angle between the galaxy spin vector and the halo spin vector. In a given halo mass bin, the alignment becomes marginally weaker at later times. This suggests that the mechanisms that smear out the correlation of the spin amplitudes do not totally randomize the alignment. The spin alignment is consistent with the finding that the inflow is predominantly in a preferred plane \citep{danovich12}, such that the torques exerted are preferentially along the spin vector, thus affecting its amplitude but not its direction. \smallskip \noindent (v) The NIHAO simulations, which have stronger stellar feedback than the VELA simulations, show a marginally stronger correlation between the spin of the galaxy and that of its host halo, and a slightly better alignment between the spin vectors. This may suggest that stronger feedback works in favor of a better correlation, via stronger angular momentum exchange between the inner part and the outskirts of a galaxy. We caution, though, that this interpretation is hindered by the fact that NIHAO has poorer resolution, which, as with strong feedback, also reduces the clumpiness of galaxies and is thus disadvantageous to angular momentum exchange. \smallskip \noindent (vi) The spins of the cold gas and the stars in a galaxy are correlated, with $\mathcal{R}\sim0.6-0.8$, strengthening at later times. The spin vectors of the baryonic components are well aligned, with about half of the cases having $\cos\theta_{\rm gas, star}\ga0.97$. 
The alignment is better for more massive systems. A non-negligible fraction of the cases have counter-rotating gas and stars, especially in lower-mass systems. \smallskip \noindent (vii) We find that the halo spin parameter is not significantly correlated with galaxy size in either simulation suite, challenging the conventional, semi-analytic galaxy size estimator, $R_{\rm e}\simeq 0.5\lambda_{\rm halo}R_{\rm vir}$. Nevertheless, both VELA and NIHAO reproduce the empirical relation derived from abundance matching, $R_{\rm e}\simeq AR_{\rm vir}$, with the proportionality factor $A\simeq0.02$ at low-$z$ and increasing towards high-$z$. We find that in the simulations, galaxy size can be well described by the relation \begin{equation}\label{eq:Re_new_conclusion} R_{\rm e} = 0.02 (c/10)^{-0.7} R_{\rm vir}, \end{equation} where $c$ is the halo concentration. The concentration dependence serves two purposes. First, at fixed halo mass and redshift, the size of individual galaxies anti-correlates with halo concentration as $c^{-0.7}$. Second, there is a redshift dependence associated with the $c^{-0.7}$ factor, which comes in via the concentration-mass-redshift relation and accurately captures the redshift evolution of $A$ in the simulations. This redshift trend, however, is too strong compared to that inferred from observations using abundance matching and a simplified deprojection of sizes, and thus seems to cause an over-prediction of galaxy size at high redshift at a given $M_\star$. Although there might be some caveats that could potentially change the results deduced from observations, if reproducing the observed $R_{\rm e}$-$M_\star$ relations across redshifts is the primary concern, one can apply an extra redshift dependence to the size predictor, making it $R_{\rm e} = 0.02 (1+z)^{-0.2} (c/10)^{-0.7} R_{\rm vir}$. 
We clarify again that this is a deviation from \equ{Re_new_conclusion}, which accurately describes the redshift dependence in the simulations, and should be adopted if the tension between the simulated sizes and the sizes deduced from observations at high redshift is confirmed to be valid. This would imply that the simulated radii are inaccurate at high redshift. Alternatively, one can assume that the tension is an artifact of the process of analyzing the simulations, that the simulations are reliable, and adopt \equ{Re_new_conclusion}. The empirical relation can be tested with future observations where the concentration and virial radius are measured from gravitational lensing. It remains an open question how the potential well of the host halo regulates the size of the galaxy to give rise to the specific $c^{-0.7}$ scaling. \section*{Acknowledgments} \smallskip We acknowledge stimulating discussions with Andreas Burkert and Reinhard Genzel. This work was partly supported by the grants ISF 124/12, I-CORE Program of the PBC/ISF 1829/12, BSF 2014-273, PICS 2015-18, and NSF AST-1405962. FJ is supported by the Planning and Budgeting Committee (PBC) fellowship of the Council for Higher Education in Israel. JP is supported by the grant HST-AR-14578.001-A. The VELA simulations were performed at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory, and at NASA Advanced Supercomputing (NAS) at NASA Ames Research Center. DC is supported by the ERC Advanced Grant, STARLIGHT: Formation of the First Stars (project number 339177). The NIHAO simulations were performed on the High Performance Computing resources at New York University Abu Dhabi; on the THEO cluster of the Max-Planck-Institut f\"ur Astronomie and on the HYDRA clusters at the Rechenzentrum in Garching. \bibliographystyle{mn2e}
\section{Introduction} One of the biggest questions in fundamental physics is the nature of dark matter (DM). The possibility that DM is a thermal Weakly Interacting Massive Particle (WIMP), whose abundance is determined by $2 \to 2$ annihilations into Standard Model (SM) bath particles, is exciting, but has eluded detection thus far. The WIMP is particularly intriguing because it is very predictive---its abundance is determined only by its interactions with the SM, which informs us how it may be detected. The WIMP paradigm has been a guide towards the properties of DM, such as its mass and interactions. In particular, within the WIMP freezeout mechanism, there is an upper bound on the DM mass from perturbative unitarity, of $ {\cal O}(100)\,{\rm TeV}$~\cite{Griest:1989wd}. The reason is that the WIMP annihilation rate is proportional to an exponentially decreasing DM density, and so the amount of dark matter that can be annihilated away before freezeout is limited by the theoretical size of the cross-section. Of course, DM may be much heavier than this bound, if it is not a WIMP. Such models include non-thermal dynamics, decoupled dark sectors, inflationary and gravitational production, nonstandard cosmological histories, and large entropy production~\cite{Hui:1998dc,Kolb:1998ki,Chung:1998rq,Chung:2001cb,Feng:2008mu,Harigaya:2014waa,Davoudiasl:2015vba,Randall:2015xza,Dev:2016xcp,Harigaya:2016vda,Berlin:2016vnh,Berlin:2016gtr,Bramante:2017obj,Berlin:2017ife,Hamdan:2017psw,Cirelli:2018iax,Babichev:2018mtd,Hashiba:2018tbu,Hooper:2019gtx}. In none of these cases, however, is the DM abundance solely determined by its interactions with the SM. The common lore is that an elementary DM candidate that is thermally coupled to the SM, within a standard cosmological history, cannot have mass well above the WIMP perturbative unitarity bound. 
Exceptions include composite DM, but even these do not go much beyond the above bound~\cite{Harigaya:2016nlg,Geller:2018biy,Smirnov:2019ngs}. In this {\it Letter} we present a new freezeout mechanism within a standard cosmological history. The DM is an elementary particle that is thermalized with the SM at high temperatures, its relic abundance is determined via its freezeout from the SM bath, and the DM mass can be as high as $10^{14}$ GeV for s-wave processes, without violating the perturbative unitarity limit. In future work, we show how Planck-scale DM can be reached for velocity-dependent processes~\cite{future1}. The general idea is as follows. The dark matter consists of $N$ approximately degenerate states, $\chi_i$ ($i=1,\ldots,N$). These states co-scatter~\cite{DAgnolo:2017dbv} off of the SM bath, but only in a chain of nearest-neighbor interactions \begin{equation} \chi_i + {\rm sm} \leftrightarrow \chi_{i+1}+ {\rm sm}, \label{chainintro} \end{equation} while the $N^{\rm th}$ state co-decays~\cite{Bandyopadhyay:2011qm,Dror:2016rxc,Kopp:2016yji,Okawa:2016wrr,Dror:2017gjq,Dery:2019jwf} in equilibrium with the SM, \begin{equation} \chi_N \to {\rm sm} + {\rm sm}\,. \label{codecay} \end{equation} Here $\chi_1$ is the DM candidate. This setup is summarized in Fig.~\ref{figchain}. The processes in Eq.~\eqref{chainintro} can maintain chemical equilibrium much longer than annihilations can, because the interaction rate for the scattering $ \Gamma_{\rm sctr} = n_{\rm SM} \left< \sigma v \right>_{\rm sctr} \label{eq:sctr} $ never becomes exponentially suppressed. The in-equilibrium decay allows for the whole system to have vanishing chemical potential for a long time---if the $N^{\rm th}$ particle were annihilating with the bath, the system would inherit the unitarity bound from co-annihilations. Finally, the chain, which will typically require $N \gtrsim 5\textrm{--}20$, depending on the DM mass, ensures the stability of the DM $\chi_1$. 
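The contrast drawn here, that $\Gamma_{\rm sctr}=n_{\rm SM}\left<\sigma v\right>$ falls only as a power of $T$ while an annihilation rate $n_\chi\left<\sigma v\right>$ is Boltzmann suppressed, can be made concrete with equilibrium number densities (a schematic comparison in natural units; normalizations and degrees of freedom are illustrative):

```python
import math

def n_relativistic(T, g=4.0):
    """Equilibrium density of relativistic SM bath states, ~ g zeta(3)/pi^2 T^3."""
    return 1.20206 / math.pi**2 * g * T**3

def n_nonrelativistic(T, m, g=2.0):
    """Equilibrium density of a heavy species, ~ g (mT/2pi)^{3/2} e^{-m/T}."""
    return g * (m * T / (2.0 * math.pi)) ** 1.5 * math.exp(-m / T)

def rate_ratio(x, m=1.0, sv=1.0):
    """Gamma_ann / Gamma_sctr at temperature T = m/x, for a common <sigma v>."""
    T = m / x
    return (n_nonrelativistic(T, m) * sv) / (n_relativistic(T) * sv)

# Gamma_sctr falls only as T^3, while Gamma_ann dies off as e^{-x}:
# rate_ratio(5) ~ 2e-2, whereas rate_ratio(50) ~ 2e-20
```

The scattering rate therefore keeps the chain chemically coupled to the bath far beyond the point where an annihilation rate would have shut off.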
\section{General Idea} Consider a DM particle $\chi_1$, whose density changes via scattering with a light SM bath particle, \begin{equation} \chi_1 + {\rm sm} \leftrightarrow \chi_2+ {\rm sm}, \label{chain2} \end{equation} where $\chi_2$ has a similar mass to $\chi_1$. This process can maintain chemical equilibrium much longer than annihilations with the same interaction strength, because the interaction rate for the scattering does not depend on the DM density, and therefore the rate does not become exponentially suppressed. The DM is able to maintain equilibrium to smaller temperatures, becoming more Boltzmann suppressed than a WIMP for the same size cross-section. However, the $\chi_1$ abundance can only fall exponentially for as long as the $\chi_2$ abundance does (for instance, by maintaining chemical equilibrium with the SM bath). Thus, in order to go beyond the unitarity bound on annihilations, some other process is still needed to reduce the $\chi_2$ density. If this process is annihilations (such as $ \chi_2+ \chi_2 \leftrightarrow~{\rm sm} + {\rm sm} $), then it will freeze out too early, and the unitarity bound on annihilations will apply again. If instead $\chi_2$ decays in equilibrium with the SM bath, chemical equilibrium can be maintained for much longer. Thus, our proposed mechanism is a combination of co-decay and co-scattering dark matter. However, one can easily see that the combination of the scattering process Eq.~\eqref{chain2} and the in-equilibrium decay process $\chi_2 \to {\rm sm} + {\rm sm}$ will necessarily lead to the fast decay of $\chi_1$ via an off-shell $\chi_2$. In the presence of a co-scattering chain, in which scatters take place only between nearest neighbors, \begin{equation} \chi_i + {\rm sm} \leftrightarrow \chi_{i+1}+ {\rm sm}, \label{chain} \end{equation} with the chain co-decaying in equilibrium through $\chi_N$, \begin{equation} \chi_N \to {\rm sm} + {\rm sm}\,. 
\end{equation} $\chi_1$ can instead be long lived; here $\chi_1$ is still the DM, but its decay width is suppressed due to the large phase space needed to decay to the SM ($\chi_1$ decays to $2N$ SM particles). This is summarized in Fig.~\ref{figchain}. \begin{figure} \[ \begin{tikzpicture}[baseline=(current bounding box.center)] \begin{feynman}[small, baseline=(a)] \node[blob] (a); \vertex [above left =of a] (i1) {\(\chi_1\)}; \vertex [below left=of a] (i2) {\(\rm sm\)}; \vertex [above right =of a] (f1) {\(\chi_2\)}; \vertex [below right=of a] (f2) {\(\rm sm\)}; \diagram* { (i1) -- (a)-- (f1), (i2) -- (a) -- (f2), }; \end{feynman} \end{tikzpicture} ~~ \begin{tikzpicture}[baseline=(current bounding box.center)] \begin{feynman}[small, baseline=(a)] \node[blob] (a); \vertex [above left =of a] (i1) {\(\chi_2\)}; \vertex [below left=of a] (i2) {\(\rm sm\)}; \vertex [above right =of a] (f1) {\(\chi_3\)}; \vertex [below right=of a] (f2) {\(\rm sm\)}; \diagram* { (i1) -- (a)-- (f1), (i2) -- (a) -- (f2), }; \end{feynman} \end{tikzpicture} \cdots \begin{tikzpicture}[baseline=(current bounding box.center)] \begin{feynman}[small, baseline=(a)] \node[blob] (a); \vertex [above left =of a] (i1) {\(\chi_{_{N-1}}\)}; \vertex [below left=of a] (i2) {\(\rm sm\)}; \vertex [above right =of a] (f1) {\(\chi_{_N}\)}; \vertex [below right=of a] (f2) {\(\rm sm\)}; \vertex [right=.8cm of f1] (c) ; \node [below=.2cm of c, dot] (b) ; \vertex [ right=.75cm of b] (f3) {\(\rm sm\)}; \vertex [ below=.55cm of b] (d) ; \vertex [ right=.2cm of d] (f4) {\(\rm sm\)}; \diagram* { (i1) -- (a)-- (f1) -- (b) -- (f3), (i2) -- (a) -- (f2), (b) -- (f4), }; \end{feynman} \end{tikzpicture} \] \caption{\label{figchain} Freezeout mechanism: the dark matter consists of many nearly degenerate particles which scatter with the Standard Model bath in a nearest-neighbor chain, and maintain chemical equilibrium with the Standard Model bath by in-equilibrium decays and inverse decays.} \end{figure} \section{Two particle 
case} As a toy example, we first work through the simplest case of $N=2$ dark matter particles with mass $m$. While this example is not cosmologically viable, it demonstrates the basic idea of the mechanism. Consider two degenerate states $\chi_1$ and $\chi_2$, which co-scatter~\cite{DAgnolo:2017dbv} off of SM bath particles via the process \begin{equation} \chi_1+ {\rm sm} \leftrightarrow \chi_2+ {\rm sm}. \end{equation} The dark matter candidate is $\chi_1$, while $\chi_2$ decays in equilibrium with the SM bath. The Boltzmann equations for the system are \begin{eqnarray} \dot{n}_1 + 3 H n_1 &=& n_{\rm sm} \langle \sigma v \rangle (n_2 -n_1) , \nonumber\\ \dot{n}_2+ 3 H n_2 &=& n_{\rm sm} \langle \sigma v \rangle (n_1 -n_2) - (n_2 - n_2^{\rm eq}) \Gamma_{2}, \end{eqnarray} where $\langle \sigma v \rangle$ is a thermally averaged cross section for the scattering process, $n_{\rm eq}$ is the equilibrium number density of $\chi$, and $\Gamma_2$ is the decay rate of $\chi_2$ in the thermal bath. This system is similar to the co-scattering scenario~\cite{DAgnolo:2017dbv}, but here the DM bath is kept in equilibrium with the SM bath via decays and inverse-decays, rather than annihilations. Ultimately, since decays and inverse-decays can stay in equilibrium much longer, this will allow for much heavier DM. Unlike the case for freezeout via annihilations, the instantaneous freezeout approximation will not give a good estimate of the relic abundance. This is because the rate for $\chi_1$ scattering, $\Gamma \sim n_{\rm sm} \langle \sigma v\rangle $, is not dropping off exponentially fast with the expansion, and therefore freezeout takes a long time. However, an approximate analytic solution to the relic abundance can still be determined from the Boltzmann equations, as we now detail. 
If the decay rate $\Gamma_2$ is larger than the Hubble expansion parameter when the temperature $T$ of the universe is equal to the DM mass, the number density of $\chi_2$ closely follows its equilibrium value. An approximate solution to the $\chi_1$ density can then be found by considering the single equation \begin{eqnarray} Y_1' = \frac{\lambda}{x^2}(-Y_1 + Y_{\rm eq}), \label{n1} \end{eqnarray} where $Y_i = n_i /s$ with $s$ the entropy density, $x=m/T$ and $\lambda = (n_{\rm sm} \langle \sigma v \rangle / H)|_{x=1}$. We have also assumed that the thermally averaged cross-section is velocity independent and took the number of relativistic degrees of freedom $g_\star=g_{\star s}$ to be constant. The asymptotic value of the relic abundance is \begin{eqnarray} Y_1(\infty) \approx \frac{45 g_\chi}{2^{3/2} \pi^3 g_{\star s}} \lambda e^{-2\sqrt{\lambda}} \equiv Y_{\infty}(\lambda)\,, \label{relic_N2} \end{eqnarray} where $g_\chi$ is the number of internal degrees of freedom of a $\chi$ particle. From the Boltzmann equation \eqref{n1}, $\chi_1$ departs equilibrium when ${\lambda}/{x_{\rm fo}^2} \sim 1$. The fact that the relic abundance then scales as $Y_1(\infty) \sim e^{- 2 x_{\rm fo}}$ and not as $e^{- x_{\rm fo}}$ (as one finds in the instantaneous freezeout approximation), is the result of the slow freezeout. At this point $\chi_1$ creation stops, but $\chi_1$ can continue to scatter away. Neglecting the inverse process, solving \begin{equation} Y_1' = - \frac{\lambda}{x^2}Y_1 , \end{equation} yields that $Y_1(x) = Y_1(x_{\rm fo}) e^{\frac{\lambda}{x} - \frac{\lambda}{x_{\rm fo}}} {\to}\, e^{- 2 x_{\rm fo}}$. One also sees from this solution that the abundance stops changing significantly when $x_{\rm fin}\sim \lambda = x_{\rm fo}^2$. From the above estimation, one can find $\langle \sigma v \rangle$ that reproduces the observed relic abundance of dark matter, while satisfying the unitarity bound. 
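The closed-form depletion solution just quoted can be used to validate a numerical treatment, and illustrates how slow the freezeout is (a self-contained sketch; the choice $\lambda=400$, i.e.\ $x_{\rm fo}=\sqrt{\lambda}=20$, is illustrative):

```python
import math

def deplete(lam, x0, x1, y0=1.0, dx=0.01):
    """RK4 integration of dY/dx = -(lam / x^2) Y from x0 to x1."""
    def f(x, y):
        return -(lam / x**2) * y
    x, y = x0, y0
    for _ in range(int(round((x1 - x0) / dx))):
        k1 = f(x, y)
        k2 = f(x + dx / 2, y + dx / 2 * k1)
        k3 = f(x + dx / 2, y + dx / 2 * k2)
        k4 = f(x + dx, y + dx * k3)
        y += dx / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += dx
    return y

lam, x_fo = 400.0, 20.0                       # lambda = x_fo^2 at departure
y_num = deplete(lam, x_fo, 200.0)
y_exact = math.exp(lam / 200.0 - lam / x_fo)  # Y(x)/Y(x_fo) = e^{lam/x - lam/x_fo}
```

The numerical and exact solutions agree, the abundance keeps dropping until $x\sim\lambda$, and the total post-freezeout depletion approaches $e^{-\lambda/x_{\rm fo}}=e^{-x_{\rm fo}}$, which is the origin of the $e^{-2x_{\rm fo}}$ scaling above.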
For instance, parameterizing the cross-section as $\langle \sigma v \rangle \equiv \alpha^2 / m_\chi^2$, one finds that for $\alpha \simeq 1$, ${m_\chi = 6 \times 10^{14}}$ GeV reproduces the correct relic abundance (assuming the DM scatters off of 4 SM degrees of freedom and that $g_{\star s}= 106.75$). \section{ $N>2$ Degenerate Case} \begin{figure}[t] \centering \includegraphics[scale=0.43]{Yield} \caption{Relic abundance of each particle with temperature evolution. Here the masses are assumed to be degenerate, and the decay rate of $\chi_N$ is chosen as $\Gamma_N = 10^2 H(m)$ to ensure that it decays in-equilibrium. The horizontal dashed line is the observed relic abundance of DM, while the solid black curve is the equilibrium abundance.} \label{fig:Yield} \end{figure} Although the simplest $N=2$ example does not work, it indicates a clear path forward to a viable setup: suppressing the decay rate of $\chi_1$. We thus consider $N >2 $ particles with the same type of nearest-neighbor interactions as before. Similarly, we assume that only $\chi_N$ is able to decay into SM particles, that the masses are degenerate, and that the cross section for each interaction is the same. The equations of the system are \begin{eqnarray} Y'_1 &=& \frac{\lambda}{x^2} ( - Y_1 + Y_2), \label{first_eq} \nonumber\\ Y'_j &=& \frac{\lambda}{x^2} ( Y_{j-1} -2 Y_{j} + Y_{j+1} ) , \nonumber\\ Y_N' &=& \frac{\lambda}{x^2} ( Y_{N-1} - Y_N) - x \lambda_d (Y_N - Y_N^{\rm eq}) , \label{last_eq} \end{eqnarray} where the second equation is valid for $j=2,\cdots,N-1$, $\lambda = (n_{\rm sm} \langle \sigma v \rangle / H)|_{x=1}$, and $\lambda_d = ( \Gamma_N / H)_{x=1}$. Here prime denotes a derivative with respect to $x$. We also ignore the time variation of $g_\star$ and $g_{\star s}$, and assume that the cross section is constant in the non-relativistic limit. Note also that we are interested in parameter space where $\lambda,\,\lambda_d \gg 1$. 
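At late times ($Y_N^{\rm eq}\to 0$), Eqs.~\eqref{last_eq} become linear, and the surviving profile is the slowest-decaying mode of the chain: a discrete Laplacian with a reflecting wall at $j=1$ and an absorbing site at $j=N$. This can be checked by power iteration (a pure-Python sketch; the exact discrete mode, $\cos[\pi(j-1/2)/(2N-1)]$, is our own diagonalization, and tends to the continuum $\cos(\pi j/2N)$ profile derived below):

```python
import math

def slowest_mode(N, eps=0.1, iters=20000):
    """Leading eigenvector of I + eps*A, where A is the chain Laplacian
    with a reflecting wall at j=1 and an absorbing site at j=N."""
    n = N - 1                    # dynamical sites j = 1..N-1 (Y_N is pinned)
    y = [1.0] * n
    for _ in range(iters):
        new = []
        for j in range(n):
            left = y[j - 1] if j > 0 else y[0]      # reflecting: ghost = Y_1
            right = y[j + 1] if j < n - 1 else 0.0  # absorbing: Y_N ~ 0
            new.append(y[j] + eps * (left - 2.0 * y[j] + right))
        norm = new[0]
        y = [v / norm for v in new]
    return y

N = 10
prof = slowest_mode(N)
cos_prof = [math.cos(math.pi * (j - 0.5) / (2 * N - 1)) for j in range(1, N)]
cos_prof = [c / cos_prof[0] for c in cos_prof]   # normalize to the j=1 site
```

Power iteration reproduces the discrete cosine mode to high accuracy, and for $N=10$ it already lies within a few percent of the continuum profile.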
This system can be solved in similar fashion to the $N=2$ case, by assuming $Y_N = Y_{\rm eq}$ and diagonalizing the differential equations. However, we can more simply obtain a solution with the new $N$ scaling by taking the large-$N$ limit and treating flavor space as continuous. For $N\gg1$, the system behaves as a random walk between $\chi_j$ states, with a reflecting wall at $j=1$ and an absorbing cliff at $j=N$ (corresponding to the decay). Treating the index $j$ as a continuous variable $\ell = \pi j / [2 (N-1)]$, the system is described by the diffusion equation \begin{eqnarray} (\partial_\tau - D \partial_\ell^2) Y_\ell(\tau) = 0, \end{eqnarray} where $\tau = - 1/x$ and $D = \pi^2 \lambda/ [4 (N-1)^2]$ is a diffusion coefficient, and the boundary conditions are \begin{eqnarray} \partial_\ell Y |_{\ell = 0} = 0, \qquad Y_{\pi/2} (\tau) = Y^{\rm eq}(\tau)\,. \end{eqnarray} From the form of the diffusion equation, we see that the system depends on the cross-section only through the combination $D \simeq \pi^2 \lambda/ 4 N^2$. Solving subject to these boundary conditions, the asymptotic profile is ${Y_\ell (\infty) \propto \cos \ell}$. The solution of the diffusion equations gives the relic abundance \begin{equation} Y_j(\infty) \simeq Y_{\infty}(D) \frac{4}{\pi} \cos \Big( \frac{\pi}{2} \frac{j}{N} \Big)\,, \label{relicY} \end{equation} where $Y_{\infty}$ is defined in Eq.~\eqref{relic_N2}. In Fig.~\ref{fig:Yield} we plot the yield of the $\chi_i$ particles for $N=10$, solving the full Boltzmann equations, Eqs.~\eqref{last_eq}. We find that the relic abundances match the analytical estimate of Eq.~\eqref{relicY}. Now, one must check that $\chi_1$ is sufficiently long lived to be a viable DM candidate. 
A model-independent bound on the DM lifetime is $\tau > 5 \times 10^{18}$~sec~\cite{Audren:2014bca} (for earlier studies, see Refs.~\cite{Ichiki:2004vi,PalomaresRuiz:2007ry,Lattanzi:2007ux,Lattanzi:2008ds,Gong:2008gi,DeLopeAmigo:2009dc,Peter:2010au,Peter:2010sz,Wang:2010ma,Peter:2010jy,Huo:2011nz,Aoyama:2011ba}). However, if the DM decays to SM particles, constraints on the lifetime may be much stronger, $\tau > 10^{27}$~sec~(see e.g. Refs.~\cite{Cirelli:2012ut,Essig:2013goa,Blanco:2018esa}). The decay of $\chi_1$ to SM particles takes place only through $N-1$ off-shell $\chi_i$ particles, into $2N$ SM particles, with decay width \begin{eqnarray} \Gamma_1 = \frac{1}{2m} \int d\Phi_{2N} |{\cal M}|^2\,. \end{eqnarray} Treating the SM particles as massless, the total $2N$-body phase space is~\cite{Kleiss:1985gy} \begin{eqnarray} \int d\Phi_{2N} = \frac{1}{S}\frac{2\pi}{\Gamma(2N) \Gamma(2N-1)} \frac{m^{4(N-1)}}{(16\pi^2)^{2N-1}}\, , \end{eqnarray} where $S$ is a symmetry factor accounting for identical particles in the final state. We may approximate the squared matrix element of the decay as~\footnote{We approximate internal propagators as $1/m^4$. One might wonder if the largest contribution arises from internal propagators being almost on-shell. Such possibility arises only with a small phase space density, so the parametric dependence of the decay rate on $m$ does not change. Nevertheless, internal propagators generally lead to larger $\Gamma_1$, but this changes our estimate on $N$ only by $\Delta N \sim 1\textrm{--}2$ for the range of $N$ of our interest. } \begin{eqnarray} |{\cal M}|^2 \simeq S^2 \frac{1}{m^{4N-4}} |{\cal M}|_{\chi {\rm sm} \to \chi {\rm sm}}^{2(N-1)} |{\cal M}|^2_{\chi_N \to {\rm sm}+ {\rm sm}} \end{eqnarray} where the factor $S^2$ accounts for expected number of diagrams. The decay rate of $\chi_1$ is then estimated as \begin{eqnarray} \frac{\Gamma_1}{\Gamma_N} \simeq \frac{S}{(2N-1)!(2N-2)!} \left( \frac{\alpha^2}{16\pi^3} \right)^{N-1}\,. 
\label{decaychain} \end{eqnarray} \begin{figure}[t!] \centering \includegraphics[scale=0.4]{Minimal_N} \caption{Number of dark matter particles versus DM mass. The colored solid lines depict the minimal number of particles when $\chi_1$ decays through the chain~(see Eq.~\ref{decaychain}), while the gray solid lines depict the same within an explicit model~(see Eq.~\ref{decaymixing}). For both, we require that $\tau_1 > 10^{27}$~sec. The purple line corresponds to $\chi_1$ decay into $2N$ identical SM particles, while the green line corresponds to a pair of $N$ identical SM particles in the final state. The dashed lines are contours of constant $\alpha$ that reproduce the observed relic abundance.} \label{fig:minima_N} \end{figure} Assuming the $\chi_N$ decay is in equilibrium, $\Gamma_N > H|_{x=1}$, requiring the stability of the DM, $\tau > 10^{27}$ sec, places a lower bound on the number of dark matter particles $N$. For instance, assuming that the DM decays to $2N$ identical SM particles, corresponding to $S=(2N)!$, we find for $m= 10^{7}$~GeV that $N\ge 5$, and for $m= 10^{13}$~GeV that $N\ge14$. In Fig.~\ref{fig:minima_N}, we plot the minimum value of $N$, as a function of $m_\chi$, that satisfies the lifetime requirement. \section{Non-degenerate masses and couplings} Up until now, we have considered the case of exactly degenerate masses, but it is natural to consider small mass splittings between the particles. This will have two important effects: creating forbidden channels and allowing the heavier states to decay directly into the lighter states. In standard two-particle forbidden annihilations, the mass splitting is negligible if it is smaller than the temperature when the abundance is still evolving, {\it i.e.}, $\delta m/m < x_{\rm fo}^{-1}$. 
For the chain interactions, a similar condition is found: \begin{equation} (m_{\rm max} - m_{\rm min})/m_{\rm min} < x_{\rm fin}^{-1} = (\pi^2 \lambda/ 4 N^2)^{-1}, \end{equation} where $m_{\rm max}$ and $m_{\rm min}$ are the maximum and minimum particle masses in the chain. If the mass splitting is larger than this, then the evolution will enter a forbidden regime. The abundance of the lightest particle (the DM candidate) decouples when the Boltzmann suppression factor due to the mass splitting becomes significant and the chain reaction departs from chemical equilibrium. After this point, the abundance of the heavier particles continues its Boltzmann suppression, $Y_{\rm heavy} \sim e^{-\Delta m/T}$. Consequently, the relic abundances of the heavier states are much smaller than that of the lightest state when in the forbidden regime. We leave a detailed study of this forbidden regime to future work. The second important effect of the mass splitting is that the heavier states can quickly cascade decay to the lightest state (which is still very long lived). Depending on the size of the mass splitting, the decays may be in or out of equilibrium. If the decays are in equilibrium, the calculation of the relic abundance will mostly remain unchanged. If the decays are out of equilibrium, the heavier states will simply transfer their abundances to the lightest state, increasing the DM abundance by a factor of $\mathcal{O}(N)$. It is also reasonable to consider that the interaction strengths may vary across the chain. In this case, the Boltzmann equations are best solved by diagonalizing the system of differential equations. The relic abundance is approximately given by the degenerate case, but with the cross-section replaced by the smallest eigenvalue cross-section in the chain. \section{Phenomenology} One of the main phenomenological signatures of the mechanism is the decay of long-lived relics. 
Late time decaying DM ($\tau > 10^{10}$~yr) can potentially be probed for all masses. DM decays can change the CMB anisotropies and spectrum~\cite{Slatyer:2016qyl}, but a detailed study for very heavy dark matter has not been performed. Decays of super heavy DM can source ultra-high-energy cosmic rays (UHECR), with energies $\gtrsim 10^9$~GeV. UHECR can be observed in diffuse gamma ray satellites, such as FERMI-LAT~\cite{Ackermann:2014usa}, high energy neutrino experiments, such as IceCube~\cite{Abbasi:2011ji}, and dedicated UHECR observatories, such as Auger~\cite{Abreu:2011zze}. For studies of the indirect detection of heavy decaying DM, see Refs.~\cite{Berezinsky:1997hy,Kuzmin:1998uv,Barbot:2002gt,Esmaili:2012us,Murase:2012xs,Feldstein:2013kka,Rott:2014kfa,Aloisio:2015lva,Kalashev:2016cre,Kalashev:2017ijd,Blanco:2018esa,Alcantara:2019sco}. Prospects for discovering UHECR are expected to improve with the proposed POEMMA mission~\cite{Olinto:2017xbi}. Depending on the parameters of the theory (especially the size of the mass splittings), some of the $\chi_i$ may decay at various cosmological epochs. For instance, a component of the DM could dissociate light elements during Big Bang Nucleosynthesis~(see Ref.~\cite{Kawasaki:2017bqm} and references therein). Additionally, DM decay can lead to spectral distortions in the CMB~\cite{Hu:1993gc,Chluba:2011hw,Poulin:2016anj}, which would be probed by the proposed PiXiE experiment~\cite{Kogut:2011xw}. Partially decaying DM has also been shown to alleviate several cosmological tensions, such as the Hubble tension~\cite{Vattis:2019efj,Pandey:2019plg} and small scale structure puzzles~(see e.g. Refs.~\cite{Wang:2014ina,Enqvist:2015ara}). For further studies of multicomponent dark matter with varying lifetimes, see Refs.~\cite{Dienes:2011ja,Dienes:2018yoq}. A detailed exploration of the phenomenology of our framework will be presented in upcoming work~\cite{future}. 
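Before turning to a concrete model, the scaling of the stability bound, Eq.~\eqref{decaychain}, can be sketched numerically. The choices below ($\Gamma_N = H(m)$ at $T=m$, $g_\star=106.75$, $S=(2N)!$, $\tau_1>10^{27}$~sec) follow the text, but $\alpha$ is left as a free input; along the relic-abundance contours of Fig.~\ref{fig:minima_N}, $\alpha$ is mass-dependent, so this is a scaling illustration rather than a reproduction of the figure:

```python
import math

HBAR_S = 6.582e-25      # GeV * s
M_PL   = 1.22e19        # GeV
G_STAR = 106.75

def hubble(m):
    """Radiation-era Hubble rate at T = m, in GeV."""
    return 1.66 * math.sqrt(G_STAR) * m**2 / M_PL

def lifetime_chi1(m, N, alpha):
    """tau_1 in seconds, from Gamma_1/Gamma_N of Eq. (decaychain)
    with Gamma_N = H(m) and symmetry factor S = (2N)!."""
    ratio = (math.factorial(2 * N)
             / (math.factorial(2 * N - 1) * math.factorial(2 * N - 2))
             * (alpha**2 / (16 * math.pi**3)) ** (N - 1))
    return HBAR_S / (hubble(m) * ratio)

def minimal_N(m, alpha, tau_min=1e27):
    """Smallest chain length giving tau_1 > tau_min (seconds)."""
    N = 2
    while lifetime_chi1(m, N, alpha) < tau_min:
        N += 1
    return N
```

Under these assumptions, $\alpha=1$ gives $N_{\rm min}=12$ at $m=10^7$~GeV and $N_{\rm min}=14$ at $m=10^{13}$~GeV; the text quotes $N\ge5$ and $N\ge14$ for the parameter choices of Fig.~\ref{fig:minima_N}.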
\section{A toy model} Having described the general mechanism, we now present a simple model realization. Consider the Lagrangian \begin{equation} {\cal L} \supset -\frac{1}{2}m_i \chi_i \chi_i - \delta m \chi_{i} \chi_{i+1} - y S \chi_i\chi_i - \mu S |H|^2 - \tilde{y} \chi_N L H, \label{model1} \end{equation} where $\chi_i$ ($i=1,\cdots,N$) are left-handed Weyl fermions, $S$ is a SM singlet scalar field, $L$ is the lepton doublet, and $H$ is the SM Higgs. The model is technically natural, since it respects a $Z_2^N$ symmetry that is broken by $\delta m$. Any correction to off-diagonal masses must then depend on some power of $\delta m$. For simplicity, we take $\delta m$ to be flavor independent; relaxing the assumption does not qualitatively alter the results. The mass matrix is given by \begin{eqnarray} M_{ij} = m_i \delta_{ij} + \delta m ( \delta_{i,j+1} + \delta_{i,j-1}) . \end{eqnarray} The above structure is similar to a tight-binding model along a one-dimensional wire in a quantum mechanical system. It is well-known that the wavefunctions in such a system are localized at each site~\cite{PhysRev.109.1492}. Assuming ${\Delta m \equiv ( m_{\rm max} - m_{\rm min} ) \gg \delta m}$, the localization length is~\cite{IZRAILEV2012125,Craig:2017ppp} \begin{eqnarray} \xi_{\rm loc}^{-1} \simeq \ln \frac{\Delta m}{2 \delta m } - 1. \end{eqnarray} Due to the localization, the mass eigenstate $\psi_i$ can be approximated as \begin{eqnarray} \psi_i \sim \sum_j \chi_j \exp\left[ - \frac{|i-j|}{\xi_{\rm loc}} \right]\,. \end{eqnarray} In the mass basis, the generated nearest-neighbor interactions are \begin{equation} \mathcal{L} \supset y e^{-1/\xi_{\rm loc}} S \psi_{i} \psi_{i+1}\,, \end{equation} while non-nearest-neighbor interactions are exponentially suppressed. The mixing can generate the direct decay ${\psi_1 \to H + L}$, with width \begin{eqnarray} \left( \frac{\Gamma_1}{\Gamma_N} \right)_{\rm mixing} \simeq e^{ - 2N/\xi_{\rm loc} }\,. 
\label{decaymixing} \end{eqnarray} Taking $\Gamma_N = H(m)$, we find \begin{equation} \xi_{\rm loc}^{-1} \gtrsim \frac{1}{N}\left( 62 + \log \frac{m}{10^{10}~\rm GeV}\right) \end{equation} to ensure that $\tau \gtrsim 10^{27}$ sec. The ratio between $\Delta m$ and $\delta m$ controls both the relic abundance and the lifetime of DM in this toy model. For the longevity of DM, a small localization length is preferred, while, for the correct relic abundance, the localization length should not be too small, since that would suppress the nearest-neighbor interaction. In Fig.~\ref{fig:minima_N}, we plot the minimum number of heavy particles $\chi_i$ needed for the stability and the observed abundance of dark matter. \section{Summary} In this {\it Letter}, we presented a new freezeout mechanism for super heavy DM that freezes out with the SM within a standard cosmological history. The relic abundance is determined solely via its interactions with the SM. For a velocity-independent cross-section, we showed the DM mass could be as large as $m\sim10^{14}$~GeV within the perturbative unitarity limit. In an upcoming paper we show how velocity-dependent interactions, such as when the scattering is mediated by a light mediator, allow for DM to be as heavy as the Planck scale~\cite{future1}. \\ \begin{acknowledgments} {\em Acknowledgments ---} We are grateful to Tim Cohen, Raffaele D'Agnolo, Jared Evans, Yuval Grossman, Kenny C.Y. Ng, Jinhong Park, Josh Ruderman, Juri Smirnov, and the Weizmann Institute HEP lunch group for useful discussions. We especially thank Yonit Hochberg for useful discussions and comments on the manuscript. The work of EK is supported by the Israel Science Foundation (grant No.1111/17), by the Binational Science Foundation (grant No. 2016153) and by the I-CORE Program of the Planning Budgeting Committee (grant No. 1937/12). \end{acknowledgments} \bibliographystyle{h-physrev5}
\section{Introduction} Many fish stock assessments derive information about stock trends from analyses of longline catch and effort records (IPHC, ICCAT, IATTC references). The classical relative abundance index, Catch Per Unit Effort (CPUE), for populations monitored using longline gear, is defined as the average number of individuals of the target species caught per hook and minute of soak time. This commonly used CPUE index of abundance ignores the variability introduced by the competition for baited hooks within and between species. \cite{Somerton95} propose two different indices, based on instantaneous rates of catch from longline survey records, which take interspecific competition into account. \cite{Haimovici+07} claim that CPUE and the instantaneous rate give the same results. Another important source of variability in the abundance indices which has received relatively little attention is the presence of empty hooks, i.e., hooks returning without bait or fish. This paper generalizes the use of instantaneous rates of catch from longline surveys as relative abundance indices (as proposed by \cite{Somerton95} and \cite{Rothschild67}) and evaluates alternative approaches to dealing with empty hooks in the formulation of longline survey stock trend indices. The paper is divided into three main sections after the introduction. Section \ref{sec:review} reviews existing methods for deriving relative abundance indices from longline catch and effort records. Section \ref{sec:EmptyHooks} presents our generalization of the previous methods to account for empty hooks. Some simulation studies are conducted to compare the indices under different levels of interspecific competition and sources of empty hooks. Section \ref{sec:Results} gives the results of the simulation studies and illustrates the behaviour of the indices for monitoring the abundance of quillback rockfish ({\it Sebastes maliger}). The full technical details are outlined in the appendix. 
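For reference, the classical index defined above is a one-line computation (argument names are ours):

```python
def cpue(n_target, n_hooks, soak_time):
    """Catch Per Unit Effort: target fish caught per hook per unit soak time."""
    return n_target / (n_hooks * soak_time)
```

For example, 30 target fish on 100 hooks over a 6-minute soak gives an index of 0.05, and doubling the effort at fixed catch halves the index.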
\section{Review of methods to derive abundance indices from longline records} \label{sec:review} \subsection{Catch Per Unit Effort (CPUE)} In a longline survey CPUE is defined as the ratio between the number of fish of the target species caught $N_T$ and the number of hooks $N$ times the soak time $S$: \begin{equation*}CPUE= \frac{\# Target}{\# hooks \times Soak Time} =\frac{N_T}{N \, S} .\end{equation*} This index ignores the effects of competition and gear saturation. \subsection{The simple exponential model} \citet{Somerton95} have proposed two alternative approaches to deal with the issues of hook competition and gear saturation. Only one is useful in our context, because dealing with gear saturation requires observed capture times, which are not recorded in the available dataset. The number of available baits on the longline is assumed to decrease at an exponential rate that measures the overall pressure on the hooks. This overall pressure may be split into a sum of relative abundance indices per species. Using this approach, the $k^{\rm th}$ catch $C_{r,k}$ for a soak time $S$ and for species $r=1,\ldots,R$ is given by: \begin{equation} \label{eq:SEmDef} C_{r,k}=\frac{\lambda_r}{\lambda} N(1-e^{-\lambda S} )+\varepsilon_{r,k}, \quad\mbox{with}\quad \varepsilon_{r,k}\overset{i.i.d}{\sim}\mathcal{N}(0,\sigma^2), \end{equation} where $N$ stands for the initial number of hooks, $\lambda=\sum_{r}\lambda_r$ is the overall pressure on hooks, given as the sum of the specific abundance indices per species, and $k$ is the index of the set. The parameters $(\lambda_1,\ldots,\lambda_R)$ define a relative abundance index for each of the $R$ species\footnote{Usually one particular species is the target species, say $r=1$, while the other species, $r>1$, are non-target species. 
In this context it is easier to consider only $\lambda_1=\lambda_T$, the relative abundance index for the target species, and $\lambda_2=\lambda_{NT}$, a relative abundance index which summarizes all the other species.}. A few of the drawbacks of this Simple Exponential Model (SEM) are the assumptions of normality and homoscedasticity of the error terms. It is intuitive that the variability should be higher for species with higher relative abundance, i.e., the variance of the error term should not be constant but should depend in some way on $\lambda_r$. Furthermore, the catch $C_{r,k}$ is a discrete number (a count of fish), potentially small, so the normal assumption is not accurate in this case. \subsection{Multinomial Exponential Model} This section reviews the underlying ideas of \citet{Somerton95} and proposes an alternative model which more closely mimics the behavior of the fish. This model was originally proposed by \cite{Rothschild67} and describes how the catch of a target species could be reduced by the catch of other species. Let us define $T_T$ as the time taken to catch an individual from the target species on one particular hook. $T_T$ is assumed to follow an exponential distribution of rate $\lambda_T$, i.e. \begin{equation*}\P \left( T_T \geq u \right) = e^{-\lambda_T u}.\end{equation*} $T_{NT}$ is an exponential random variable with parameter $\lambda_{NT}$ and models the time taken to catch an individual from any of the non-target species. We can define $T=\min\left\lbrace T_{T}, T_{NT}\right\rbrace$ as the time taken to catch an individual regardless of its species. By a standard property of the exponential distribution, $T$ is exponentially distributed with rate $\lambda=\lambda_T + \lambda_{NT}$. This property justifies the decomposition of the overall relative abundance as a sum of specific abundances given by \citet{Somerton95}. 
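The competing-risks structure just described, that the first capture time is the minimum of independent exponentials with total rate $\lambda=\lambda_T+\lambda_{NT}$ and that captures split between species in proportion $\lambda_T:\lambda_{NT}$, can be checked by simulation (a sketch; rates, soak time, and hook number are arbitrary illustrative values):

```python
import math
import random

def simulate_set(lam_t, lam_nt, n_hooks, soak, rng):
    """Simulate one longline set under the exponential-race model.
    Returns (baited, target, non_target) hook counts after the soak."""
    n_b = n_t = n_nt = 0
    for _ in range(n_hooks):
        t_t = rng.expovariate(lam_t)    # capture time by the target species
        t_nt = rng.expovariate(lam_nt)  # capture time by any other species
        if min(t_t, t_nt) > soak:
            n_b += 1                    # hook returns still baited
        elif t_t < t_nt:
            n_t += 1                    # target species won the race
        else:
            n_nt += 1                   # non-target species won the race
    return n_b, n_t, n_nt

rng = random.Random(1)
lam_t, lam_nt, soak, n = 0.5, 1.0, 1.0, 200_000
n_b, n_t, n_nt = simulate_set(lam_t, lam_nt, n, soak, rng)
# expected fractions: e^{-lam S}, (lam_t/lam)(1-e^{-lam S}), (lam_nt/lam)(1-e^{-lam S})
```

With these rates, roughly 22\% of hooks return baited and the catch splits $1:2$ between target and non-target species, as the $\lambda_T:\lambda_{NT}$ ratio dictates.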
\\ After the soak period $S$, for one hook, there are only three possible outcomes: \begin{itemize} \item ${\left \lbrace I=0 \right \rbrace} = \left \lbrace \mbox{ The hook is still baited.}\right\rbrace$. This means that the time to a capture is greater than the soak time. This event occurs with probability \begin{equation*}\P(I=0)=\P(T>S)={e^{-\lambda\, S}}.\end{equation*} \item ${\left \lbrace I\ne0 \right \rbrace} = \left \lbrace \mbox{ The hook is no longer baited.}\right\rbrace$. \begin{equation*}\P(I\ne 0)=\P(T<S)={1-e^{-\lambda\, S}}.\end{equation*} Given that the hook is no longer baited, there are two possible outcomes: the catch is either from the target species, i.e. $\left \lbrace I=T \right \rbrace$, which occurs with probability \begin{equation*} \P(I=T) ={\P( T_T<T_{NT} \vert T<S )}{\P( T<S )} = {\frac{\lambda_T}{\lambda}}{ (1-e^{-\lambda\, S})}, \end{equation*} or the catch is from a non-target species, corresponding to the event $\left \lbrace I=NT \right \rbrace$, which occurs with probability: \begin{equation*} \P(I=NT) ={\P( T_{NT}<T_T \vert T<S)}{ \P( T<S )} ={\frac{\lambda_{NT}}{\lambda}} {(1-e^{-\lambda\, S})}. \end{equation*} \end{itemize} Assuming that all the hooks on a longline behave independently, the likelihood is given by \begin{equation} \label{eq:lik1} L(\lambda_T, \lambda_{NT}) = \left ( \begin{array}{c} N \\ N_B \end{array} \right) \left ( \begin{array}{c} N_{T}+N_{NT}\\ N_{T} \end{array} \right) (e^{-\lambda\, S})^{N_B} (1-e^{-\lambda\, S})^{N_{T}+N_{NT}} \left(\frac{\lambda_T}{\lambda}\right)^{N_{T}} \left(\frac{\lambda_{NT}}{\lambda}\right)^{N_{NT}}, \end{equation} where \begin{itemize} \item $N$ is the number of hooks on the longline, \item $N_B$ is the number of baited hooks at the end of the soak time, \item $N_{T}$ is the number of individuals of the target species caught, \item $N_{NT}$ is the number of individuals of the non-target species caught.
\end{itemize} The combinatorial terms arise since all the hooks are considered independent and the order of the catches along the longline has no importance. This model was originally proposed by \citet{Rothschild67}, although it is presented here with a slightly different approach. It is called the Multinomial Exponential Model (MEM) since the vector $(N_B, N_T, N_{NT})$ follows a multinomial distribution whose probability vector involves an exponential term. \bigskip If $\lambda_{NT}$ is larger than $\lambda_T$, this corresponds to a high level of competition: for a given relative abundance $\lambda_T$ of the target species, the catch decreases as $\lambda_{NT}$, the non-target species relative abundance, increases. \subsection{Links between the indices} \subsubsection{Links between MEM and SEM} The expected number of target species fish caught $N_T$ is the same under the MEM and SEM assumptions and is given by: \begin{equation*}\mathbb{E} \left ( N_{T} \right) = N \frac{\lambda_T}{\lambda} \left(1- e^{-\lambda S}\right).\end{equation*} Moreover, the models share the same parameters, $\lambda_T$ and $\lambda_{NT}$. The main difference is the error term. In the SEM, the error term is normally distributed with a variance given by: \begin{equation*}\mathbb{V}ar_{SEM}(N_T)=\mathbb{V}ar_{SEM}(N_{NT})=\sigma^2,\end{equation*} while in the MEM the numbers of fish caught have a multinomial distribution and the variances are given by: \begin{equation*}\left \lbrace \begin{array}{l} \mathbb{V}ar_{MEM}(N_T)=N \frac{\lambda_T}{\lambda} (1-e^{-\lambda S}) \left ( 1 - \frac{\lambda_T}{\lambda} (1-e^{-\lambda S}) \right) \\ \mathbb{V}ar_{MEM}(N_{NT})=N \frac{\lambda_{NT}}{\lambda} (1-e^{-\lambda S}) \left ( 1 - \frac{\lambda_{NT}}{\lambda} (1-e^{-\lambda S}) \right) . \end{array}\right. \end{equation*} Furthermore, $N_T$ and $N_{NT}$ are assumed to be independent in the Simple Exponential Model but not in the Multinomial Exponential Model.
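As a numerical illustration, the MEM likelihood of a single set can be evaluated with standard library tools only. The counts below are hypothetical; the closed-form values $\hat{\lambda}=\log(N/N_B)/S$ and $\hat{\lambda}_T=\hat{\lambda}\,N_T/(N-N_B)$ used to locate the maximum follow from the multinomial structure:

```python
import math

def mem_loglik(lam_t, lam_nt, n_hooks, n_baited, n_t, n_nt, soak):
    """Log-likelihood of the simple MEM: a multinomial over
    {still baited, target caught, non-target caught}."""
    lam = lam_t + lam_nt
    p_b = math.exp(-lam * soak)
    p_t = lam_t / lam * (1.0 - p_b)
    p_nt = lam_nt / lam * (1.0 - p_b)
    # log multinomial coefficient N! / (N_B! N_T! N_NT!)
    log_coef = (math.lgamma(n_hooks + 1) - math.lgamma(n_baited + 1)
                - math.lgamma(n_t + 1) - math.lgamma(n_nt + 1))
    return (log_coef + n_baited * math.log(p_b)
            + n_t * math.log(p_t) + n_nt * math.log(p_nt))

# Hypothetical counts for one set of 225 hooks and a 2 h soak:
N, NB, NT, NNT, S = 225, 120, 20, 85, 2.0
lam_hat = math.log(N / NB) / S        # overall pressure on hooks
lt_hat = NT / (N - NB) * lam_hat      # target species index
lnt_hat = NNT / (N - NB) * lam_hat    # non-target species index
```

Perturbing either index away from the closed-form values can only decrease the log-likelihood, since the reparametrization of the multinomial probabilities is one-to-one.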
\bigskip \subsubsection{Links between CPUE and MEM} Under the MEM assumption, the expected CPUE of the target species is given by: \begin{equation*}\mathbb{E} \left (CPUE \right)=\frac{\lambda_T}{\lambda\, S } (1-e^{-\lambda S})\underset{\lambda S \to 0}{\longrightarrow} \lambda_T.\end{equation*} If the overall density index $\lambda$ is small enough, meaning that there is little competition, CPUE and the MEM index give the same results. This theoretical result is consistent with the expected behavior. \section{Dealing with empty hooks} \label{sec:EmptyHooks} In longline experiments, it is common for some hooks to return empty: the hook is no longer baited, but there is no fish on it. There are several possible explanations for these empty hooks, such as mechanical removal of the bait during gear setting or retrieval, consumption of the bait by invertebrates or fish without being hooked, or removal of the hooked fish by predators. \medskip In this paper, we consider the hypothesis that empty hooks arise only from the escape of fish. Therefore, the question about empty hooks reduces to ``How should the empty hooks be allocated to the different species?'' These empty hooks provide information that we can use to improve the quality of our abundance indices. This section describes modifications of the MEM to incorporate this information and details some statistical properties of the indices built using these versions of the MEM. It also describes different ways to include the empty hooks information in the SEM. \subsection{Full version of the Multinomial Exponential Model} We propose a modified version of the Multinomial Exponential Model to account for empty hooks. As opposed to the previous version of the MEM, each hooked fish now has a probability of escaping, equal to $p_T$ for the target species and $p_{NT}$ for the non-target species.
\\ We use three additional variables to fully specify the model: $N_E$ is the number of observed empty hooks; $N_E^{(T)}$ (respectively $N_E^{(NT)}$) stands for the number of target species (respectively non-target species) individuals which have escaped; these two random variables are not observed. Assuming, as for the simple version of the MEM, that all hooks are independent, we can describe the outcomes conditionally (Figure \ref{Fig:Outcome}). \begin{itemize} \item The number $N_B$ of baited hooks retrieved at the end of the soak time is the realisation of a binomial random variable with probability of success $e^{-\lambda S}$: $$ N_{B} \sim \mathcal{B}\left(N, e^{-\lambda S}\right).$$ \item Among the $N-N_B$ hooks which have lost their bait, the total number of target species individuals hooked is $N_T + N_E^{(T)}$, which is also binomially distributed: $$N_T+ N_E^{(T)}\vert N_B \sim \mathcal{B}\left(N-N_B, \frac{\lambda_T}{\lambda}\right).$$ \item Given $N_T+ N_E^{(T)}$, the number of target species individuals still on the longlines at retrieval is $N_T$, which is also binomially distributed: $$N_T\vert N_T + N_E^{(T)} \sim \mathcal{B}\left(N_T +N_E^{(T)} , (1-p_T) \right).$$ \item Given $N_{NT}+ N_E^{(NT)}$, the number of non-target species individuals still on the longlines at retrieval is $N_{NT}$, which also has a binomial distribution: $$N_{NT}\vert N_{NT} + N_E^{(NT)} \sim \mathcal{B}\left(N_{NT} +N_E^{(NT)} , (1-p_{NT}) \right).$$ \end{itemize} \begin{figure}[h!] \includegraphics[width=8cm]{outcomes} \caption{Conditional description of the model. The observed quantities are solid lines, the hidden quantities are dashed lines.} \label{Fig:Outcome}% \end{figure} The full version of the Multinomial Exponential Model is summarized by the probability tree in Figure \ref{Fig:Outcome}. $N_E^{(NT)}$ and $N_E^{(T)}$ are unobserved quantities but their sum $N_E$ is observed.
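This conditional hierarchy can be simulated directly. The following sketch uses hypothetical parameter values (not estimates from the survey) and checks empirically that the mean target catch matches $N\,(1-e^{-\lambda S})\,\frac{\lambda_T}{\lambda}\,(1-p_T)$:

```python
import math
import random

rng = random.Random(1)

# Hypothetical parameters, not estimates from the survey data:
LAM_T, LAM_NT, P_T, P_NT, SOAK = 0.05, 0.15, 0.1, 0.3, 10.0
N_HOOKS = 225

def simulate_set():
    """Follow the conditional description: baited / caught / escaped."""
    lam = LAM_T + LAM_NT
    n_b = n_t = n_nt = n_e = 0
    for _ in range(N_HOOKS):
        if rng.random() < math.exp(-lam * SOAK):
            n_b += 1                       # hook still baited
            continue
        if rng.random() < LAM_T / lam:     # bait taken by a target fish
            if rng.random() < P_T:
                n_e += 1                   # target fish escaped -> empty hook
            else:
                n_t += 1
        else:                              # bait taken by a non-target fish
            if rng.random() < P_NT:
                n_e += 1                   # non-target fish escaped
            else:
                n_nt += 1
    return n_b, n_t, n_nt, n_e

sets = [simulate_set() for _ in range(2000)]
mean_t = sum(s[1] for s in sets) / len(sets)
lam = LAM_T + LAM_NT
expected_t = N_HOOKS * (1 - math.exp(-lam * SOAK)) * LAM_T / lam * (1 - P_T)
```

The same generator is convenient for checking any estimator of $(\lambda_T, \lambda_{NT}, p_T, p_{NT})$, since the true values are known.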
Appendix \ref{an:Distribution} gives the main steps to derive the likelihood of this model: \begin{align} \label{eq:MEMlogLike} L(\lambda_T, \lambda_{NT}, p_T, p_{NT} ) = & \frac{N!}{N_B !\, N_T ! \, N_{NT}!\, N_E ! } \left(e^{-\lambda S}\right)^{N_B} \left(1-e^{-\lambda S}\right)^{N-N_B} \cr &\left( \frac{\lambda_T}{\lambda}(1-p_{T})\right)^{N_T}\left( \frac{\lambda_{NT}}{\lambda}(1-p_{NT})\right)^{N_{NT}}\left( \frac{\lambda_{T}p_T + \lambda_{NT}p_{NT}}{\lambda}\right)^{N_{E}}. \end{align} The full version of the Multinomial Exponential Model may thus be written as a multinomial distribution: \begin{align*} & (N_B, N_T, N_{NT}, N_{E}) \sim \mathcal{M}\left (N, \boldsymbol{\alpha} \right)\\ \mbox{with } &\boldsymbol{\alpha} = \left(e^{-\lambda S},\ (1-e^{-\lambda S})\frac{\lambda_T}{\lambda} (1-p_T), \ (1-e^{-\lambda S})\frac{\lambda_{NT}}{\lambda} (1-p_{NT}),\ (1-e^{-\lambda S})\frac{\lambda_T p_T + \lambda_{NT}p_{NT}}{\lambda} \right). \end{align*} In this full version the model is not identifiable, since an equivalent version can be expressed with only three parameters, $\lambda$, $\lambda_T (1-p_T)$ and $\lambda_{NT} (1-p_{NT})$ (see Appendix \ref{an:Identifiable} for more details). Some additional information is required to estimate the parameters of this model. It is possible to add biological knowledge on the probability of escape through a prior distribution in a Bayesian framework, but no information of this kind is available for our case study. This point is discussed in section \ref{sec:Discussion}. In a frequentist approach some particular solutions have to be chosen. In this paper we focus on two reasonable choices: \begin{enumerate} \item MEM1: empty hooks come only from non-target species, so $p_{T}$ is assumed to be 0. Most of the time the target species is less abundant than all the non-target species taken together. Allocating the empty hooks to the non-target species will at worst lead to an underestimation of the target species abundance.
Furthermore, the baited hooks are designed to catch and retain the target species. \item MEM2: another reasonable choice is to assume that the probability of escape is the same for target and non-target species, i.e. $p_T=p_{NT}$. An empty hook has had its bait stolen by a fish, but no information about the species of this fish is available, so the empty hooks are allocated according to the relative densities of each group. \end{enumerate} \subsection{Maximum likelihood estimation of MEM} As all the longline sets are assumed to be independent, the complete likelihood is simply the product over sets of the likelihood given by formula \ref{eq:MEMlogLike}. At this stage we have to consider two different situations: variable soak times or a common soak time. If the soak times differ across longline sets, no analytical formula for the estimators is available and the estimation step has to be performed using a non-linear optimization algorithm. Usually, longline experiments are designed to share the same soak time in order to reduce the sources of variation in the experiment. In this case, analytical formulas can be derived for MEM1 and MEM2 because all the information can be summarized by the vector $(N_{B+}, N_{T+},N_{NT+}, N_{E+})$, which corresponds to $(\sum_{l=1}^L N_{B_l},\sum_{l=1}^L N_{T_l},\sum_{l=1}^L N_{NT_l},\sum_{l=1}^L N_{E_l})$, where $l$ indexes the longline sets. Even if the design of the experiment prescribes a constant soak time for each set, the actual soak time can differ slightly due to weather conditions or practical reasons. If the difference is small, it is judicious to consider a single soak time (the mean for instance) to avoid the need for a numerical optimization, which produces some instability in the estimations.
As detailed in appendix \ref{an:Estimations}, the maximum likelihood estimators are given by: \begin{equation} \begin{array}{cc} MEM1 & MEM2\\ \left\lbrace \begin{array}{rl} \hat{\lambda}_{T}&=\frac{N_{T+}}{N_+-N_{B+}} \frac{1}{S}\log {\left( \frac{N_+}{N_{B+}}\right)}\\ \hat{\lambda}_{NT}&=\frac{N_{NT+}+N_{E+}}{N_+-N_{B+}} \frac{1}{S} \log {\left(\frac{N_+}{N_{B+}}\right)}\\ \hat{p}_{NT}&=\frac{N_{E+}}{N_{E+}+N_{NT+}},\quad \hat{p}_{T}=0 \end{array} \right. & \left\lbrace \begin{array}{rl} \hat{\lambda}_{T}&=\frac{N_{T+}}{N_{T+}+N_{NT+}} \frac{1}{S}\log {\left( \frac{N_+}{N_{B+}}\right)}\\ \hat{\lambda}_{NT}&=\frac{N_{NT+}}{N_{T+} + N_{NT+}} \frac{1}{S}\log {\left(\frac{N_+}{N_{B+}}\right)}\\ \hat{p}_{T}&=\frac{N_{E+}}{N_{E+}+N_{T+}+N_{NT+}}= \hat{p}_{NT} \end{array} \right. \\ \end{array} \label{eq:MLE} \end{equation} Maximum likelihood estimators are asymptotically unbiased and the covariance matrix is the inverse of the Fisher Information Matrix \citep{Severini00}. If the total number of hooks $N$ is large enough, the joint distributions of these estimators can be approximated by a Multivariate Normal distribution. 
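A direct implementation of these closed-form estimators, assuming a soak time shared by all sets (the pooled counts used in the example are hypothetical):

```python
import math

def mem1_mle(n_total, n_b, n_t, n_nt, n_e, soak):
    """Closed-form MEM1 estimators: empty hooks attributed to
    the non-target species (p_T = 0)."""
    lam_hat = math.log(n_total / n_b) / soak
    unbaited = n_total - n_b               # hooks that lost their bait
    return {
        "lambda_T": n_t / unbaited * lam_hat,
        "lambda_NT": (n_nt + n_e) / unbaited * lam_hat,
        "p_NT": n_e / (n_e + n_nt),
    }

def mem2_mle(n_total, n_b, n_t, n_nt, n_e, soak):
    """Closed-form MEM2 estimators: equal escape probability
    for target and non-target species."""
    lam_hat = math.log(n_total / n_b) / soak
    caught = n_t + n_nt
    return {
        "lambda_T": n_t / caught * lam_hat,
        "lambda_NT": n_nt / caught * lam_hat,
        "p": n_e / (n_e + caught),
    }
```

For both variants the two abundance estimates sum to $\hat{\lambda}=\log(N_+/N_{B+})/S$, and MEM2 attributes at least as large a share to the target species as MEM1, since part of the empty hooks is credited to it.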
Asymptotically the covariance matrix for MEM1 is given by: \begin{equation} \label{eq:CovMEM1} Cov_{MEM1} = \frac{\lambda_T\lambda_{NT}}{N (1-e^{-\lambda S})}\left ( \begin{array}{ccc} \frac{(1-e^{-\lambda S})^2}{S^2 e^{-\lambda S}\lambda^2} \frac{\lambda_T}{\lambda_{NT}} + 1 & \frac{(1-e^{-\lambda S})^2}{S^2 e^{-\lambda S}\lambda^2} - 1 & 0 \\ \frac{(1-e^{-\lambda S})^2}{S^2 e^{-\lambda S}\lambda^2} - 1 & \frac{(1-e^{-\lambda S})^2}{S^2 e^{-\lambda S}\lambda^2} \frac{\lambda_{NT}}{\lambda_{T}} + 1& 0 \\ 0 & 0 & \frac{p_{NT} (1-p_{NT}) \lambda }{\lambda_T \lambda_{NT}^2 } \\ \end{array} \right), \end{equation} and for MEM2 by \begin{equation} \label{eq:CovMEM2} Cov_{\small MEM_2} = \frac{\lambda_T \lambda_{NT}}{N (1-e^{-\lambda S})}\left ( \begin{array}{ccc} \frac{ (1-e^{-\lambda S})^2}{S^2 e^{-\lambda S}\lambda^2} \frac{\lambda_T}{\lambda_{NT}} + \frac{1}{1-p}& \frac{(1-e^{-\lambda S})^2}{S^2 e^{-\lambda S}\lambda^2} -\frac{1}{1-p} & 0 \\ \frac{(1-e^{-\lambda S})^2}{S^2 e^{-\lambda S}\lambda^2} -\frac{1}{1-p}& \frac{(1-e^{-\lambda S})^2}{S^2 e^{-\lambda S}\lambda^2} \frac{\lambda_{NT}}{\lambda_T} + \frac{1}{1-p} & 0 \\ 0 & 0 & \frac{p (1-p)}{\lambda_T \lambda_{NT}} \\ \end{array} \right). \end{equation} The asymptotic variance proposed by \cite{Rothschild67} for the simple version of the MEM (with $p_T=p_{NT}=0$) should coincide with the above formulas when $p$ or $p_{NT}$ is replaced by 0. Nevertheless, the two results are not compatible, even though the estimators are. We suspect a mistake in the formula proposed by \cite{Rothschild67}. \subsubsection{Bayesian framework for MEM} The Multinomial Exponential Model can also be estimated in a Bayesian framework. The full specification of the Bayesian version of the MEM requires the definition of prior distributions for the parameters $(\lambda_T,\lambda_{NT}, p_{T}, p_{NT})$. The relative abundance index is always less than 1.
Therefore, in our study, the priors have been chosen to be poorly informative and independent: \begin{equation*} \lambda_{T} \sim \beta(1, 1) \quad \lambda_{NT} \sim \beta(1,1). \end{equation*} If there is no informative prior on the probability of escape, the model is still non-identifiable. In a Bayesian framework, a non-identifiable parameter can be diagnosed by checking that its posterior distribution is the same as its prior distribution: nothing has been learned from the data. To avoid this problem, informative prior distributions could be defined using biological knowledge or field experiments. This aspect has not been investigated in this work. We use the specific forms of the model (MEM1 or MEM2) to remove the identifiability problem. The estimation procedure has been implemented using the JAGS software \citep{JAGS}. An example of JAGS code is provided in section \ref{sec:jagscodeMEM1} for MEM1 and in section \ref{sec:jagscodeMEM2} for MEM2, and all other codes are available on request. \subsection{Estimation for other indices} This section discusses the estimation step for all the indices previously presented and highlights the key points of the procedure. \subsubsection{Multiple CPUE} In a given region, several longline sets may be deployed, so the estimation step requires fitting one index using several observations. If only one set is deployed, the CPUE is obtained as the catch divided by the product of the soak time and the number of hooks. A simple generalization of this index is detailed in Appendix \ref{sec:cpue} and proposes to compute the CPUE as \begin{equation*}CPUE=\frac{\sum_{l=1}^L N_{T_l}}{\sum_{l} S_l N_l},\end{equation*} where $l=1,\ldots,L$ indexes the longline sets. If the numbers of hooks or the soak times differ, this generalized CPUE is a weighted average of the individual CPUEs.
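This weighted form amounts to dividing the total target catch by the total hook-time fished over all sets; a minimal sketch (the catch and effort figures in the example are hypothetical):

```python
def generalized_cpue(catches, soaks, hooks):
    """Weighted CPUE over several longline sets: total target catch
    divided by total hook-time fished. catches[l], soaks[l] and
    hooks[l] describe set l."""
    effort = sum(s * n for s, n in zip(soaks, hooks))
    return sum(catches) / effort

# Two hypothetical sets with equal soak times but unequal hook counts:
cpue = generalized_cpue([10, 2], [2.0, 2.0], [200, 50])  # 12 / 500 hook-hours
```

With these figures the weighted CPUE is $12/500=0.024$, whereas a plain average of the two per-set CPUEs ($0.025$ and $0.020$) would give $0.0225$, over-weighting the 50-hook set.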
A simple average is not appropriate since, for example, it would attribute the same weight to an experiment with 200 hooks as to another experiment with only 50 hooks. \subsubsection{Simple Exponential model} In the following, two versions of the SEM are derived depending on how empty hooks are considered: \begin{enumerate} \item SEM1: empty hooks are assumed to arise only from the non-target species, and $N_E$ and $N_{NT}$ are pooled together. \item SEM2: empty hooks are considered as a ``third'' species, and an additional relative abundance index $\lambda_E$ is defined. \end{enumerate} \cite{Hovgard00} proposes to estimate $\lambda$ using \begin{equation*} \hat{\lambda}_{Hov} =\frac{ -\log{ (N_{B}/ N)}}{S}. \end{equation*} If all longline sets share the same soak time and the same initial number of hooks $N$, the MLE for this model are almost the same as for the MEM, with the exception of the treatment of empty hooks. \begin{align*} SEM1 & \left\lbrace \begin{array}{rl} \hat{\lambda}_{T}&=\frac{N_{T+}}{N_+-N_{B+}} \frac{1}{S}\log {\left( \frac{N_+}{N_{B+}}\right)}\\ \hat{\lambda}_{NT}&=\frac{N_{NT+}+N_{E+}}{N_+-N_{B+}} \frac{1}{S} \log {\left(\frac{N_+}{N_{B+}}\right)}\\ \hat{\sigma}^2 &=\frac{1}{2L} \sum_{l=1}^L \left ( N_{T_l} - N \frac{\hat{\lambda}_T}{\hat{\lambda}} (1-e^{-\hat{\lambda} S} )\right)^2 + \left ( N_{NT_l} +N_{E_l}- N \frac{\hat{\lambda}_{NT}}{\hat{\lambda}} (1-e^{-\hat{\lambda} S} )\right)^2 \end{array} \right.
\\ SEM2 & \left\lbrace \begin{array}{rl} \hat{\lambda}_{T}&=\frac{N_{T+}}{N_+-N_{B+}} \frac{1}{S}\log {\left( \frac{N_+}{N_{B+}}\right)}\\ \hat{\lambda}_{NT}&=\frac{N_{NT+}}{N_+-N_{B+}} \frac{1}{S} \log {\left(\frac{N_+}{N_{B+}}\right)}\\ \hat{\lambda}_{E}&=\frac{N_{E+}}{N_+-N_{B+}} \frac{1}{S} \log {\left(\frac{N_+}{N_{B+}}\right)}\\ \hat{\sigma}^2 &=\frac{1}{3L} \sum_{l=1}^L \left \lbrace \left ( N_{T_l} - N \frac{\hat{\lambda}_T}{\hat{\lambda}} (1-e^{-\hat{\lambda} S} )\right)^2 + \left ( N_{NT_l} - N \frac{\hat{\lambda}_{NT}}{\hat{\lambda}} (1-e^{-\hat{\lambda} S} )\right)^2 + \right.\\ &\hspace{1cm}\left.\left ( N_{E_l} - N \frac{\hat{\lambda}_{E}}{\hat{\lambda}} (1-e^{-\hat{\lambda} S} )\right)^2\right\rbrace \end{array} \right. \\ \label{eq:SEMMLE} \end{align*} To estimate $\sigma$ at least two longline sets are required: otherwise the estimates correspond to a perfect fit and there is no residual variability.\\ We are only interested in building abundance indices for the target species and, since the estimates of $\hat{\lambda}_T$ in SEM1 and SEM2 are the same, in the following we call this index SEM. When the soak times or the initial numbers of hooks differ, a numerical optimization algorithm has to be used to compute the MLE for SEM1 and SEM2. This approach should be avoided if possible due to numerical instability. From a practical point of view, the \verb+nlm+ function available in the \verb+R+ software \citep{Rcite} behaves poorly on this problem. In this work we directly maximize the log-likelihood using the function \verb+optim+.\\ The analytical formulas for $\lambda_T$ are exactly the same for SEM and MEM1: if the soak times and the initial number of hooks are the same for all sets, there is absolutely no difference between these two indices concerning the relative abundance of the target species, which is not true for the non-target species.
But this analytical formula is only valid for the SEM when both the soak times and the initial numbers of hooks are shared, while the MEM only requires shared soak times. \subsection{Simulation studies} Bias and variability of the estimators are evaluated in this section through simulation studies. Several plausible scenarios have been studied to give robust results and advice about the behaviour of all the indices. Specifying data generators is the first step of a simulation study. In our specific case we have, at first glance, different possibilities. We could use the Simple Exponential Model, but this model does not simulate any empty hooks and produces non-integer catches because of the normal assumption. The two other solutions (i.e. MEM1 and MEM2) are therefore used to study the different scenarios. This choice of data simulator obviously gives an advantage to MEM1 when the data are simulated with MEM1, and to MEM2 in the other case. For one set of fixed parameters (that is $\lambda_T$, $\lambda_{NT}$, $L$ the total number of sets, $S$ the soak time, $N$ the number of hooks on a longline) 5000 simulated datasets are generated and the corresponding estimated values of $\lambda_T$ and $\lambda_{NT}$ are computed. A relative bias and a coefficient of variation are derived from these simulations. To study the impact of the estimation via a numerical algorithm, we also compute the estimators by maximizing the log-likelihood using a non-linear optimization algorithm. The values of $\lambda_T$ and $\lambda_{NT}$ need to be chosen to reflect plausible situations. In this work, four values of each parameter have been used and all sixteen combinations of these two parameters have been addressed. Three different scenarios for empty hooks have been simulated: \begin{enumerate}[Sc.1)] \item There are no empty hooks: every hooked fish is retained. This situation corresponds to $p_{NT}=p_{T}=0$. \item The ability to escape is the same across species.
The probability of escape is set to $20\%$, which corresponds to $p_{T}=p_{NT}=0.2$. \item Non-target species are better at escaping. The probability of escape is set to $p_{NT}=0.2$ for the non-target species and to $p_{T}=0.02$ for the target species. \end{enumerate} The results of this simulation study are presented in section \ref{sec:resSim}. \subsection{Description of the B.C. inshore rockfish longline survey} \label{sec:presdata} Since 2003, Fisheries and Oceans Canada has conducted an annual research longline survey in the Strait of Georgia with the fisheries research vessel {\bfseries {\em CCGS Neocaligus}}. Different regions of the Strait are covered each year, resulting in each statistical area (PMFA) being surveyed every two to three years. A 2 km by 2 km grid is overlaid on all inshore rockfish habitat up to 100 m in depth, as determined using Canadian Hydrographic Service charts. These blocks are stratified by depth into shallow (41-70 m) and deep (71-100 m), and $8\%$ of the blocks in a given statistical area are randomly selected for fishing each year (see Lochead and Yamanaka 2007 for further details). The snap-type longline gear consists of 1800 ft of leaded groundline with 225 circle hooks (13/0) spaced 8 ft apart. Each hook is attached to the snap by a 1.2 ft perlon gangion, crimped at both ends, and with a swivel at the hook. The hooks are baited with Argentinean squid. The soak time for each set is two hours, measured from the deployment of the last anchor when setting the gear to the retrieval of the first anchor on board when hauling. As the gear is retrieved, the condition of each hook is recorded as returning with bait, with catch, empty (i.e. without bait or catch on the hook), or unknown if the hook does not return. Catch is recorded to the species level for both fish and invertebrates.
\section{Results} \label{sec:Results} \subsection{Simulation studies} \label{sec:resSim} \subsubsection{Competition but no empty hooks} Figure \ref{Fig:BiasnoEmpty} presents the bias of the different indices in the absence of empty hooks. SEM, MEM1 and MEM2 produce exactly the same results because there are no empty hooks and the analytical formulas have been used. \begin{center} \begin{figure}[h!] \includegraphics[width=11cm]{BiasNoEmpty} \caption{Relative bias, defined as the absolute value of the bias divided by the true value $\vert\lambda_T-\hat{\lambda}_T\vert / \lambda_T$, expressed as a percentage and computed over 5000 simulations for 220 hooks per longline and 20 sets. The bias for SEM1, SEM2, MEM1 and MEM2 is the same in this case. On the right, the bias computed for the CPUE index shows that it increases with the relative abundance of the non-target species.} \label{Fig:BiasnoEmpty}% \end{figure} \end{center} The estimates are unbiased for the abundance indices built on the exponential models, so competition is effectively taken into account. On the other hand, the bias of the CPUE index increases with increasing relative density of the non-target species. This result confirms that the CPUE index strongly depends on competition and should be avoided. This behavior is the same in all situations addressed in this simulation study. The coefficient of variation presented in Table \ref{Tab:CVnoEmpty} depends on the amount of data, through $N$ and $L$, but also on the relative abundance parameters $\lambda_T$ and $\lambda_{NT}$. This coefficient can be very high in low relative abundance situations.
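This unbiasedness can be reproduced with a condensed version of the simulation study (Sc.1, a single parameter combination, a hypothetical soak time, and far fewer replicates than the 5000 used here):

```python
import math
import random

rng = random.Random(2)

# One parameter combination from the simulation design;
# the soak time of 120 is a hypothetical value:
LAM_T, LAM_NT, SOAK = 5e-4, 5e-3, 120.0
N_HOOKS, N_SETS, N_REP = 220, 20, 200

def simulate_mem1_estimate():
    """Generate pooled counts under Sc.1 (no escapes) and return
    the closed-form MEM estimate of lambda_T."""
    lam = LAM_T + LAM_NT
    p_b = math.exp(-lam * SOAK)
    n = N_SETS * N_HOOKS
    n_b = n_t = 0
    for _ in range(n):
        if rng.random() < p_b:
            n_b += 1                      # hook still baited
        elif rng.random() < LAM_T / lam:
            n_t += 1                      # target species caught
    lam_hat = math.log(n / n_b) / SOAK
    return n_t / (n - n_b) * lam_hat

estimates = [simulate_mem1_estimate() for _ in range(N_REP)]
mean_est = sum(estimates) / N_REP
rel_bias = abs(mean_est - LAM_T) / LAM_T
```

Replacing the estimator with the raw CPUE in the same loop reproduces the competition-driven bias discussed above.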
\begin{table}[htbp] \begin{tabular}{|c|c|c|c|c|}\hline & \multicolumn{4}{c|}{ $\lambda_{NT} $}\\\cline{2-5} $\lambda_T$ & 5e-04 & 0.001 & 0.005 & 0.01 \cr \hline 1e-05 & 43.2 & 44.8 & 50.9 & 56.7 \cr \hline 5e-05 & 19.5 & 20.1 & 22.2 & 25.3 \\ \hline 1e-04 & 13.8 & 14.4 & 15.9 & 17.9 \\ \hline 5e-04 & 6.2 & 6.4 & 7.4 & 8.1 \\ \hline \end{tabular} \caption{Coefficient of variation ($\%$) of the estimates of $\lambda_T$ using the MEM models. Results for SEM, MEM1 and MEM2 are exactly the same since the soak time is shared by all longline sets and there are no empty hooks.}\label{Tab:CVnoEmpty} \end{table} \begin{figure}[htbp] \includegraphics[width=9cm, height=7cm]{CV_Expected} \caption{Coefficient of variation of the estimates of $\lambda_T$ for MEM1, computed over 5000 simulations, as a function of the expected number of target species catches.} \label{Fig:CVnoEmpty}% \end{figure} \medskip The coefficient of variation of the estimators of the relative abundance decreases with the expected catch, which is, in this study, $L\, N \frac{\lambda_T}{\lambda}\left( 1- e^{-\lambda S}\right)$. The relation between these two quantities is illustrated by Figure \ref{Fig:CVnoEmpty}. This expected catch quantifies the information actually available on $\lambda_T$. The bias study shows that CPUE behaves poorly when interspecific competition occurs. In the following, the results concerning this index are not shown. \subsubsection{Competition and empty hooks} SEM and MEM have the same behavior when no empty hooks are present in the dataset. Simulating empty hooks allows a comparison of the behavior of these indices in the presence of empty hooks. In our simulation context, the soak time and the initial number of hooks are constant over all sets, so the analytical formulas can be used to compute the indices, and SEM and MEM1 produce the same results.
Since MEM1 and MEM2 rely on two different hypotheses concerning the origin of the empty hooks, they give different results. When all species are equally good at escaping (Sc.2), MEM2 produces unbiased estimators of the relative abundance while SEM1, SEM2 and MEM1 tend to underestimate it. In the simulations used to produce Figure \ref{Fig:BiasUniform}, the probability of escape for non-target and target fish has been set to $20\%$, which explains the $20\%$ underestimation of the relative abundance $\lambda_{T}$. \begin{center} \begin{figure}[htbp] \includegraphics[width=11cm]{MEM2True} \caption{Simulations under Sc.2. MEM1, SEM1 and SEM2 produce the same results and tend to underestimate the relative abundance. MEM2 is the ``true'' model in this situation and tends to produce unbiased estimates.} \label{Fig:BiasUniform}% \end{figure} \end{center} When the non-target species are better at escaping, MEM2 tends to overestimate the relative abundance of the target species. The results presented in Figure \ref{Fig:Pref} are produced with $p_T=0.02$ and $p_{NT}=0.2$. MEM2 attributes a proportion of the empty hooks to the target species; this proportion depends on $\lambda_T$ and $\lambda_{NT}$. Since in this simulation most of the empty hooks arise from the non-target species, the higher the relative abundance of the non-target species, the more the relative abundance of the target species is overestimated. The bias in the estimates for MEM1 is constant and equals $2\%$, which corresponds to the target fish missed through the empty hooks. \begin{center} \begin{figure}[htbp] \includegraphics[width=11cm]{Preferential} \caption{Simulations under Sc.3: MEM1 and SEM underestimate the true relative abundance by exactly the probability of escape of the target species.
MEM2 highly overestimates the relative abundance.} \label{Fig:Pref}% \end{figure} \end{center} \subsubsection{Numerical instability} In order to study the numerical instability of the optimization algorithm, the maximum likelihood estimators have been computed both through the analytical formulas and using a numerical optimization algorithm on the same dataset (with shared $S$ and shared $N$). Whichever scenario is used, the optimization algorithm behaves well, i.e., there is less than a $5\%$ difference between the estimates computed using the analytical formulas and those obtained by numerical optimization, except when the ratio between the relative abundance of the non-target species $\lambda_{NT}$ and that of the target species $\lambda_T$ is very high. In the extreme case, with $\lambda_{NT}=0.01$ and $\lambda_T=10^{-5}$, the average difference between the two estimates varies from $10\%$ to $40\%$. This poor behavior occurs when the log-likelihood is too flat around its maximum. Some examples of this behavior of the optimization step are illustrated on the real data in section \ref{sec:RockfishResults}. \subsection{Rockfish survey results} \label{sec:RockfishResults} Figure \ref{Fig:Area13} shows the different relative abundance indices obtained using the scientific survey described in section \ref{sec:presdata}. The confidence intervals have been computed using a bootstrap procedure with 5000 resamples. \begin{center} \begin{figure}[htbp] \includegraphics[width=9cm, height=8cm]{S_TrendA13} \caption{Four indices computed for area 13 for the quillback population. The numerical optimization has some stability issues for year 2007; the results should otherwise be the same as the MEM1 estimates.} \label{Fig:Area13} \end{figure} \end{center} The estimate for the numerical version of the SEM index exhibits a considerable difference in trend and the confidence interval associated with this estimate is huge. This is due to numerical instability.
Indeed, very few quillback were caught in 2007 in area 13, as suggested by the decrease in the trend for all the other indices. Therefore, the numerical optimization procedure behaves poorly, which produces a poor estimate and also a large variability in the bootstrap procedure. Within the Strait of Georgia survey, the northern areas (12 and 13) have been surveyed together in the same years (2003, 2004 and 2007) and have alternated with surveys in the southern areas (14-20). Since management is not applied at the area scale, data from areas 12 and 13 have been pooled to form one dataset. The corresponding relative abundance time series is shown in Figure \ref{Fig:Area1213}. \begin{center} \begin{figure}[h] \includegraphics[width=9cm, height=8cm]{S_TrendA12_13} \caption{Four indices computed for areas 12 and 13.} \label{Fig:Area1213} \end{figure} \end{center} Pooling the data from both areas produces more precise estimates, as shown in Table \ref{tab:CVquillback}, and solves the numerical instability of the SEM index. The coefficient of variation is almost halved when using the whole dataset. We did not include the other areas of the Strait of Georgia (14-20) in the study since they were only surveyed once, in 2005. \begin{table} \begin{tabular}{|c||c|c|c||c|c|c|}\hline \multicolumn{7}{|c|}{Coefficient of variation $(\%)$}\cr \hline Index & \multicolumn{3}{c||}{Area 13} & \multicolumn{3}{c|}{Area 12 and 13}\cr & 2003 & 2004 & 2007 & 2003 & 2004 & 2007 \cr \hline MEM1 & $24.1$ & $26.6$ & $22.7$ & $15.9$ & $12.6$ & $13.6$ \cr MEM2 & $23.5$ & $29$ & $23.1$ & $15.6$ & $13.6$ & $13.3$ \cr SEM N & $25.8$ & $30.2$ & $29.8$ & $15.6$ & $13.1$ & $13.9$ \cr CPUE & $26.4$ & $28.1$ & $24.7$ & $16$ & $13.4$ & $14.1$ \cr \hline \end{tabular} \caption{Coefficient of variation of the different relative abundance indices, expressed in percent. The variability strongly decreases when data are pooled together.
The large value of the coefficient of variation for 2007 for the numerical version of SEM is due to optimization instability. } \label{tab:CVquillback} \end{table} Given the uncertainty on the relative abundance indices, no statistically significant change in the relative abundance of the quillback population in the Strait of Georgia can be detected. \section{Discussion} \label{sec:Discussion} The first and most important conclusion of this study is that when interspecific competition was accounted for in simulated data, the classical longline CPUE estimators gave strongly biased estimates of stock trends in all cases evaluated. In contrast, the exponential model-based estimators showed much less bias. Therefore it is advisable to avoid the use of the classical CPUE as much as possible, since this index does not take competition into account. From one year to another, the level of competition can vary, and two CPUE indices computed in two different years are then not comparable, which is unacceptable for a relative abundance index. In the absence of empty hooks, the abundance indices built on SEM or MEM are comparable even if the fundamental assumptions of SEM and MEM are very different. If $N_{T}$ is viewed as a sum over all hooks of indicator variables (one per hook that has caught a target individual), the central limit theorem states that, if the number of hooks is large enough, $N_T$ is approximately normally distributed. Independence between hooks is not required; weak dependence is sufficient (see \citet{Billingsley95} for central limit theorems under weak dependence conditions). SEM does not assume independence between hooks, but it treats $N_T$ and $N_{NT}$ as independent variables, which is clearly false, especially when this model is used to take competition into account: competition is precisely the mechanism by which the number of non-target individuals caught, $N_{NT}$, decreases the number of target individuals caught, $N_T$.
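This dependence can be illustrated with a small simulation (catch probabilities are purely illustrative): treating each hook as a single draw among target, non-target and empty outcomes, as in MEM, makes $N_T$ and $N_{NT}$ negatively correlated, whereas drawing the two counts independently, as SEM assumes, removes that coupling.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hooks, n_sets = 100, 20000
p = [0.3, 0.5, 0.2]  # P(target), P(non-target), P(empty) per hook (illustrative)

# MEM-style sets: each hook is one draw among the three outcomes,
# so a hook taken by a non-target fish is unavailable to target fish
mem = rng.multinomial(n_hooks, p, size=n_sets)
r_mem = np.corrcoef(mem[:, 0], mem[:, 1])[0, 1]

# SEM-style sets: N_T and N_NT drawn as independent counts
sem_t = rng.binomial(n_hooks, p[0], size=n_sets)
sem_nt = rng.binomial(n_hooks, p[1], size=n_sets)
r_sem = np.corrcoef(sem_t, sem_nt)[0, 1]

# r_mem is clearly negative (competition for hooks); r_sem is near zero
```

With these rates the multinomial correlation is $-\sqrt{p_T p_{NT}/((1-p_T)(1-p_{NT}))}\approx -0.65$, while the independent draws give a correlation near zero.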
In contrast, MEM models the dependence between $N_T$ and $N_{NT}$ but assumes independence between hooks. Ignoring empty hooks will produce poor indices because most empty hooks result from fish escapement, which should be accounted for in the relative abundance index. Empty hooks are of major importance when building abundance indices, even if there is no perfect solution to deal with them. Biological knowledge can be very useful for deciding how to deal with empty hooks, and the Bayesian framework offers an intuitive way to use this kind of information to remove the non-identifiability problem. But this biological knowledge is hard to obtain, since the escapement of fish from hooks is difficult to study. MEM1 and MEM2 require fully explicit choices concerning the empty hooks. Even though this choice is hard to make, it is preferable to make it explicitly. In the SEM model the choice is made by default and is not even explicit. We recommend that practitioners design studies to collect information about the ability of target and common non-target species to escape, and work in a Bayesian framework using the code provided in Annex \ref{an:Winbugs2}. The major advantage is that the uncertainty about empty hooks is translated into uncertainty about the relative abundance indices. If absolutely no information is available, the recommendation is to use SEM1, since the bias on the relative abundance index does not depend on the abundance of non-target species. Furthermore, a multiplicative constant bias, as seen for MEM1, does not bias the estimates of the parameters of a population dynamics model, since the multiplicative constant is absorbed by the coefficient of proportionality which links abundance indices and biomass. One major drawback of all the models formulated in this paper is the assumption of constant relative abundance along the longline set and of independence between hooks, both of which are clearly violated in practice.
For instance, \cite{Sigler00} has shown that, at least for sablefish ({\it Anoplopoma fimbria}), the catch rate decreases with time. Different approaches could be explored to avoid this assumption. The first would be to record the change of habitat along the longline set and to use it as a covariate in the model. Another possibility would be to refine the modeling of the abundance index. For instance, we could consider a local relative abundance index $\lambda_{Th}$ at hook $h$, defined as the main relative abundance $\lambda_T$ plus a noise term. The noise term could follow, for example, an autoregressive model, so as to exploit information from neighbouring hooks. This extra variability could account for the variability of the habitat. Another perspective for dealing with the variability along the longline is to define the abundance index as a piecewise-constant function along the longline and to detect changes in this function using the tools of change-point detection \citep{Lavielle01}. The possible variation of $\lambda_T$ during the soak time is also a question of interest. Some species could be more attracted by a fresh bait, and $\lambda_T$ would therefore be expected to decrease with time. The attractiveness of baits has been studied in chapter 8 of \cite{Ferno+95}, but currently there is no established way to take this into account when building abundance indices. \bibliographystyle{apalike}
\subsection{Gathering a Large-Scale Video Dataset} Gathering consists of two rough stages: identifying an overcomplete set of candidate videos using generic queries, and filtering out irrelevant videos. \vspace{2mm} \noindent {\bf Generating a set of query candidates:} We began with a set of 11 categories: boardgames, DIY, making drinks, making food, furniture assembly, gardening, doing housework, packing, doing puzzles, repairing, and studying. We generated 13.2K queries using frequent words, Wordnet hyponyms and templated queries (e.g., ``DIY cookies home 2014''), and searched YouTube. These queries yield $\sim$6.5M video responses (an estimated 86 years), which we must filter down to videos containing hands interacting with objects. \vspace{2mm} \noindent {\bf Filtering:} Manually screening such a large dataset is impractical, and we therefore use a learned model based on video thumbnails. In particular, we use three learning targets: (a) what fraction of 100 evenly-spaced frames have high responses from a Faster-RCNN hand detector trained on \cite{Fouhey18}; (b) what fraction of frames are judged as containing interaction by human workers; and (c) whether the frames are cartoons. These cannot be evaluated on the whole dataset, so we train models (see supplemental material) that map thumbnails to a prediction of each. This can be rapidly evaluated at scale on our full dataset's thumbnails, and our final dataset is the intersection of the subsets that are likely to contain hands and to depict interaction, with likely cartoons removed. These systems are not used subsequently; all later annotations are made at the frame level and are independent of the video-level filtering mechanism. \begin{table}[t] \centering {\footnotesize \caption{Comparison of 100DOH with hand datasets.
Our dataset is far larger and has a rich annotation of contact state with objects.} \label{tab:dataimagecompare} \begin{tabular}{@{~}l@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~~}c@{~}} \toprule Name & \# Im & \# Hands & Side & Contact & Objects & Source \\ 100DOH & 100K & 189.6K & \checkmark & \checkmark & \checkmark & YouTube \\ VLOG \cite{Fouhey18} & 5K & 26.1K & X & Per-Image & X & YouTube \\ VIVA \cite{VIVA} & 5.5K & 13.2K & \checkmark & X & X & Capture \\ Ego \cite{Bambach15} & 4.8K & 15K & \checkmark & X & X & Capture \\ VGG \cite{Mittal11} & 2.7K & 4.2K & X & X & X & Flickr, TV \\ TV-Hand \cite{Narasimhaswamy2019} & 9.5K & 8.6K & X & X & X & TV \\ COCO-Hand \cite{Narasimhaswamy2019} & 26.5K & 45.7K & X & X & X & Flickr \\ \bottomrule \end{tabular}} \end{table} \begin{figure*}[!t] \centering \includegraphics[width=0.95\linewidth]{figures/method.pdf} \caption{Our system can act as a foundation to understand interacting hands on the Internet. Our system takes a single RGB image and detects hands (irrespective of scale) and for every hand predicts: a box, side, contact state, and a box around the object it is touching. We can then (1) obtain a parse of hand state; and (2) use the hand box and side to feed a reconstruction system like \cite{Hasson19}. To help make better use of Internet reconstructions, we introduce a self-supervised system that assesses mesh quality, which we train on our data.} \label{fig:method} \end{figure*} \subsection{Image Dataset} This yields a video dataset -- 100 Days of Hands (100DOH) -- of 27.3K videos across 11 categories with 131 days of footage of everyday interaction. We use this to build a new 100K frame-level dataset that is primarily ($\sim85$\%) a subset of 100DOH and ($\sim15$\%) a $3\times$ extended and relabeled version of the hand dataset in \cite{Fouhey18}. We chose randomly among frames, filtering out (and retaining for later use) images containing no hands.
We include VLOG because we initially tried building off of VLOG, but realized that we needed more diverse underlying data. \vspace{2mm} \noindent {\bf Annotation:} For every hand in each image, we obtained the following annotations: {\bf (a)} a bounding box around the hand; {\bf (b)} side: left / right, which is crucial for mesh reconstruction; {\bf (c)} the hand contact state (\{no contact, self-contact, other person contact, in contact with a portable object, in contact with a non-portable object\}), which provides insight into what the person is doing; and {\bf (d)} a bounding box around the object the person is contacting, {\it irrespective of name}. The annotation of non-named bounding boxes is crucial: in-the-wild data is known to have a heavy tail, dooming categorization. Annotation began by counting hands, then marking hand bounding boxes and sides simultaneously; in addition to standard qualification, consensus, and sentinel techniques, hand bounding boxes were annotated, then verified, then re-annotated if hands were missing. Catching these missed boxes is crucial since, with data at this scale, hand detection performance can reach a mAP of 90\%, so annotation misses would otherwise dominate the measured error. Then hand state and object bounding box were annotated, again using qualifications, consensus, and sentinels. We only included images on which we could get conclusive judgments from workers. In total, in the 100K images, there are 189.6K annotated hands, which are in contact with 110.1K objects. \vspace{2mm} \noindent {\bf Splits:} We split by YouTube uploader id to make an 80/10/10\% train/val/test split where each uploader appears in only one split. This split is backwards compatible with the VLOG hand data: no VLOG test image appears in our training set.
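An uploader-level split of this kind can be reproduced by hashing uploader ids, so that every uploader, and hence every frame, lands in exactly one split. The hash function and bookkeeping below are a minimal sketch, not our exact procedure:

```python
import hashlib

def split_for_uploader(uploader_id: str) -> str:
    """Deterministically assign an uploader id to train/val/test (80/10/10)."""
    h = int(hashlib.sha256(uploader_id.encode()).hexdigest(), 16) % 100
    if h < 80:
        return "train"
    if h < 90:
        return "val"
    return "test"

# Every frame inherits its uploader's split, so no uploader straddles splits
frames = [("uploaderA", "frame1"), ("uploaderA", "frame2"), ("uploaderB", "frame3")]
splits = {frame: split_for_uploader(uploader) for uploader, frame in frames}
```

Because the assignment depends only on the uploader id, adding new videos later cannot move an uploader across splits.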
\vspace{2mm} \noindent {\bf Comparison to existing data:} We compare our dataset in raw statistics with other comparable datasets of videos in Table~\ref{tab:datacompare} and image-based datasets for studying hands in Table~\ref{tab:dataimagecompare} (empirical cross-dataset evaluations demonstrating the utility of the dataset appear in Section~\ref{sec:experiments}). While unlabeled, our video dataset provides a valuable source of large-scale demonstrations of hands engaged in interaction. Our image dataset fills a gap of providing object contact and side information at vast scale. Additionally, as seen in the paper, 100DOH hands appear at a wide variety of scales. \begin{comment} \subsection{Gathering Data (1/2 page only)} Youtube VLOGs are so popular today. There are a lot of video topics about instructions, life experience or just records on Youtube. So, we believe that there are plenty of resources there on Youtube, that can be collected by us. Considering that there are a bunch of videos that contains hands, if we can find a way to filter them, we can make use of them. Before we set about collecting the videos, we brainstormed some action category that are of most interest to us. \noindent {\bf Video Candidates:} At beginning, the 12 categories are playing boardgames, diy, making drinks, making food, furniture assembly, gardening, doing housework, packing, doing puzzle, repairing, study. In order to collect the video candidates for all the videos, we generate queries by using Wordnet or frequent keywords for each category. For example, we use Wordnet to search for the hyponyms of "food" and then choose verb(eg. make, cook), location(eg. restaurant, kitchen, home), year(eg. 2014-2018) and sometimes some adjective to combine with it to generate our queries. For the repairing category, it is not suitable to take hyponyms from Wordnet, so we just browse the youtube pages about repairing and gather the frequent words for it. 
Finally, for each categeory, we generate 1,200 queries and search them via Google Youtube API to get responses. \noindent {\bf Feature Representation:} In order to find a feature to represent the video, we found that the thumbnails are of great value to us. Since that the thumbnail for each video is aimed to represent it, it is logical for us to collect feature from its 4 thumbnails to represent the video. We extract Alexnet feature {\tt pool5} for each of the 4 thumbnails. Then, we define the video feature as the average of the 4 {\tt pool5} feature plus the mean, min, max of distances between them. \noindent {\bf Hand-Score:} The definition of Hand-Score is the percentage of the equal-spaced 100 frames containing hands. We use Faster-RCNN as the hand detector. Then, we prepare 1000 samples with feature from Alexnet and Hand-Score from Faster-RCNN to train a linear-SVR to do Hand-Score regression for all the videos. Finally, we rank all the videos by their Hand-Score. \noindent {\bf Interaction-Score:} The focus of the dataset is hand-object interaction. This is manually labeled. Again train SVR. \subsection{Dataset Annotation} Our goal is to use the raw videos, but to better understand this, we label 100K images. These are formed from 80K of 100DOH and 20K from VLOG. \begin{enumerate} \item Hand detection: left/right + boxes \item Hand state: class label per-box \item Hand-object: box per-box \item Second pass spotting \end{enumerate} \noindent {\bf Error rate analysis:} Manually examined 1k final data points. False positives: (hopefully 0). False negatives: (hopefully small). Typical failure modes. \subsection{Dataset} \begin{enumerate} \item Comparison table goes here. \item Splits are by youtube id and are backwards compatible with VLOG. \end{enumerate} \subsection{Dataset Analysis} We evaluate the merit of our new dataset of hands by comparing across dataset and alternate methods for detecting hands in the wild. 
Many of our existing annotations aren't covered in existing datasets, so we can't compare. \noindent {\bf Cross-dataset hand detection analysis:} Using the same base model, we train and test on a variety of datasets. Result -- can obtain effectively same performance on other datasets (or better?). Datasets we totally trash: \cite{VIVA,Bambach15,Mittal11,Fouhey18} The following datasets cannot be used: epic-kitchens \cite{Damen18} ({\bf Why?})? the fpv thing with the white stuff on peoples' hands \cite{GarciaHernando18} ({\bf Why?}). \noindent {\bf Comparison to full-body pose estimation:} One common question is whether we need a specialized hand-detection system given the success of full-body pose estimation \cite{Cao2017}. \noindent {\bf Hand State Estimation:} Comparison to \cite{Fouhey18} somehow? \end{comment} \subsection{Hand Bounding-Box} We evaluate the merit of our new dataset of hands by evaluating cross-dataset performance for hand detection with a standard fixed detector, and by comparing with full-body pose estimation. \begin{table} \centering \caption{Average Precision as we vary the definition of a correct detection. Using only 15K samples dramatically degrades performance in every category, and 45K samples still produces a large drop on getting all outputs correct. } \label{tab:datascale} \begin{tabular}{l@{~~~~}c@{~~~~}c@{~~~~}c@{~~~~}c@{~~~~}c@{~~~~}c} \toprule & Hand & Obj & H+Side & H+State & H+O & All \\ \midrule \input{scale_exp_fix0.1.txt} \bottomrule \end{tabular} \end{table} \vspace{2mm} \noindent {\bf Cross-dataset hand detection analysis:} We begin by training the same base model -- a standard F-RCNN \cite{Ren2015} with a Resnet-101 \cite{He2015} backbone -- on a number of datasets and evaluating same- and cross-dataset performance. We use F-RCNN for simplicity and due to its widespread use as a commodity detection system.
We only evaluate on datasets where {\it all} hands in a frame are annotated with boxes \cite{VIVA,Bambach15,Mittal11,Fouhey18}. We do not compare with \cite{GarciaHernando18} since it only annotates {\it one} hand (and note \cite{Damen18} has no hand boxes). We evaluate hand detection results using an IoU of 0.5 (i.e., PASCAL \cite{Everingham10}) since, unlike objects with clear boundaries (e.g., fire hydrants or cars), the precise boundary between hand and wrist is unclear in the wild. Table~\ref{tab:crossdatasethands} shows that a model trained on 100DOH generalizes well across datasets and nearly matches performance obtained by training and testing on every other hand dataset: at worst, it obtains 92.9\% of the mAP of training and testing on the same data (on VGG). VLOG and VGGHands (both gathered from large-scale diverse data) generalize reasonably well, but far worse than 100DOH, with minimum relative mAPs of 81.1\% and 68.1\% respectively; models trained on VIVA and EgoHands perform well on egocentric datasets but generalize poorly to non-egocentric views, which is unsurprising but worth quantifying (VIVA's relatively good performance on EgoHands may be because the third-person hands in EgoHands resemble those in VIVA). The annotation format of VGG and TV is a quadrilateral rather than an axis-aligned rectangle, and we preprocess the labels to be axis-aligned to make all annotations consistent; this diminishes results slightly. We show some sample cross-dataset results in Figure~\ref{fig:qualitative1}. \vspace{2mm} \noindent {\bf Comparison to full-body pose estimation:} One common question (also asked by \cite{Narasimhaswamy2019}) is whether we need specialized hand detection given the success of full-body pose estimation systems and datasets \cite{Cao2017}. We evaluate this by computing precision/recall for hand detection using OpenPose \cite{Cao2017}.
We convert body joint configurations and hand detections to a common evaluation scheme by defining true positives as follows: (\cite{Cao2017}) when the pose detector places the hand (estimated as $\wB+0.2(\wB-\eB)$, where $\wB$ and $\eB$ are the wrist and elbow locations) within a ground-truth box with the same center but twice the width and height; ({\it ours}) when the bounding box detector puts its center point inside the ground-truth box (a stricter standard). Due to truncated people, \cite{Cao2017} achieves a low average precision of around 43.0\% and effectively maxes out recall at 49.5\%. At this recall, Faster-RCNN still has a precision of 99.7\%. While current pose estimators trained on current datasets are effective when the body is mainly visible, dedicated hand detectors appear to still be necessary. \vspace{2mm} \noindent {\bf Statistical Baselines:} We additionally test whether the dataset can be solved with simple statistical baselines. We computed the median box for all hands as well as a median box for left and right hands. These obtain APs of 0.08\% and 0.11\% respectively, which shows that the hands are widely distributed across the image and vary in size. \subsection{Full Hand State} We next evaluate our full hand state detection system in isolation. Here, we show that the scale of our dataset is beneficial by comparing with results obtained by training on smaller subsets of the data. \vspace{2mm} \noindent {\bf Qualitative Results:} We show some results of the full system in Figures \ref{fig:qualitative1} and \ref{fig:random}. In general, our system does a good job at recognizing hands and sides despite the wide variety of scales and contexts present in the data. While it often gets the full state right, this is a clearly harder task with lots of room for improvement.
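The two true-positive criteria used in the pose-estimation comparison above can be made concrete as follows. This is a sketch: the function names are ours, and we assume ground-truth boxes are given as center plus width and height.

```python
import numpy as np

def pose_hand_hit(wrist, elbow, gt_box):
    """OpenPose-style criterion: the extrapolated hand point w + 0.2*(w - e)
    must land in the ground-truth box enlarged to twice its width and height
    (same center), so the allowed half-extents equal the full box size."""
    w, e = np.asarray(wrist, float), np.asarray(elbow, float)
    hand = w + 0.2 * (w - e)
    cx, cy, bw, bh = gt_box  # ground truth as (center_x, center_y, width, height)
    return bool(abs(hand[0] - cx) <= bw and abs(hand[1] - cy) <= bh)

def detector_hit(det_center, gt_box):
    """Detector criterion: the detected box center must fall inside the
    (unenlarged) ground-truth box, a stricter requirement."""
    cx, cy, bw, bh = gt_box
    return abs(det_center[0] - cx) <= bw / 2 and abs(det_center[1] - cy) <= bh / 2
```

Note the asymmetry: the pose criterion is given a tolerance region four times the area of the ground-truth box, while the detector criterion is not.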
\vspace{2mm} \noindent {\bf Failure modes:} Common failure modes include: getting the precise contact state right, especially when the hand is near an object, which video may improve; associating the correct object with the hand, especially with multiple people interacting with multiple objects, which a more complex inference technique might improve; and getting the correct hand side for egocentric hands (e.g., on \cite{Damen18}), which more egocentric training data may improve. \vspace{2mm} \noindent {\bf Metrics:} We evaluate the full prediction using mAP by modifying the criterion for a true positive. We begin by evaluating the hand ({\it Hand}) and interacted object ({\it Obj}) individually. We then count a hand as a true positive only if it also has the correct hand side ({\it H+Side}), state ({\it H+State}), and the correct object associated with it ({\it H+O}). Finally, we count a hand as a true positive only if all of the above are correct ({\it All}). \begin{figure} \includegraphics[width=\linewidth]{figures/random_handobj_result_crop.pdf} \caption{{\bf Random} results on video frames from our 100 DOH dataset. Our approach detects hands reliably in a variety of scales, configurations, and contexts.} \label{fig:random} \end{figure} \vspace{2mm} \noindent {\bf Quantitative Results:} We are the first to tackle this problem, and therefore there are no existing methods to compare with. We therefore test to what extent our large-scale data is important and compare to the same method trained on subsets of the data: {\it Full} (90K trainval), {\it 45K} (50\% of trainval), and {\it 15K} (17\% of trainval). We report results in Table~\ref{tab:datascale}. In every category, tripling from 15K to 45K produces large gains, while further doubling to 90K produces more incremental gains. However, when looking to correctly identify the full hand state, these mistakes combine to yield a steep performance hit (7\% AP) compared to using the full data.
Together, these underscore the need for large-scale data, especially as one looks to tackle tasks requiring correct estimation of {\it multiple} aspects of hands. \vspace{2mm} \noindent {\bf Analysis as a Function of Scale:} We evaluate the performance on different hand scales. We separate images into different bins according to the average hand size (measured as the square root of the percent of pixels) and evaluate each bin. Tiny hands are naturally harder to find, and hand AP rapidly goes up from 78.2\% to 90.3\% as scale goes from 10\% to 20\%; performance then remains stable until 70\%, where it slightly drops. Additional results appear in the supplemental. \subsection{Hand State for Reconstruction} \label{sec:experimentsReconstruction} One of the exciting outcomes of having a system that can reliably identify hand state is that it directly enables automatically applying mesh reconstruction techniques to consumer videos. We present two experiments that assess our method's contributions toward this goal -- identifying side and our self-supervised mesh assessment technique. We use human judgments to evaluate success. \begin{comment} \begin{table} \centering \caption{Mesh Quality assessment performance (AUROC). Our self-supervised system is able to readily identify incorrect meshes.} \begin{tabular}{c@{~~}c@{~~}c@{~~~~~~}c} \toprule \multicolumn{3}{c}{Supervised} & Unsup \\ MLP & Na\"ive Bayes & Multiv. Gauss & Multiv. Gauss \\ \midrule 89.8 & 89.0 & 86.7 & 60.0 \\ \bottomrule \end{tabular} \end{table} \end{comment} \vspace{2mm} \noindent {\bf Data:} We use 2K images from the test set. Our system produced 3,861 detections, which we reconstructed using \cite{Hasson19} for both the correct and the incorrect hand side, resulting in 7,722 meshes. Crowdsourced workers re-annotated the detected hand to preclude mistakes and then assessed each mesh five times as correct/incorrect (definitions in the supplemental).
Workers were deliberately {\bf not} told to inspect the sides of hands. Workers passed a qualification test; we used sentinels to monitor performance; and results from all reconstructions were annotated simultaneously and in randomized order. \vspace{2mm} \noindent {\bf Quantitative Results (Side):} We first tested whether having the side was important -- an alternate hypothesis is that MANO might repurpose thumbs for pinkies, for instance. We show a few select qualitative examples in Figure \ref{fig:goodbad}. Unsurprisingly, despite not being told to examine side, workers were far more likely to think hands reconstructed with the {\it detected} side were correct (57.8\%) compared to the opposite side (29.1\%). \vspace{2mm} \noindent {\bf Quantitative Results (Quality):} We then used this data to evaluate whether we can successfully identify correct reconstructions. We binarized worker judgments by majority vote and computed AUROC. The proposed method (an MLP trained on positives/negatives identified by self-consistency) obtains an AUROC of 90\% on this data. We compared with two baselines to put this result in context. Gaussian Na\"ive Bayes on the same training data does similarly (89\%), showing that the positive/negative labels are important, not the learning algorithm. Simply fitting a multivariate Gaussian on all generated hands and using the log-likelihood does far worse (60\%), which underscores the importance of the labels. \begin{figure} \includegraphics[width=\linewidth]{figures/goodbad_3row_cr_2.pdf} \caption{Mesh reconstruction and assessment. Our system detects the location and side of hands in Internet videos, which \cite{Hasson19} uses to predict a mesh. Many predictions do not succeed. Our self-supervised system for mesh assessment identifies plausible (useful for downstream tasks) and implausible hands (worth discarding).
} \label{fig:goodbad} \end{figure} \subsection{Future prediction} \label{sec:experimentsFuture} We took our networks trained on the training set of 100DOH and tested them on videos from the test set, finding points at which contact changes. We then reconstruct 3K examples, showing qualitative results and computing quantitative results via human judgment. \vspace{2mm} \noindent {\bf Qualitative Results:} We show a few select qualitative examples in Figure \ref{fig:handpred}. Overall, we observe that our method often does a good job of identifying the angle from which the hand should grasp the object. While our approach often finds plausible grasps, the myriad ways a human {\it can} grasp an object and the difficulty of predicting a full mesh make this a challenging task. \vspace{2mm} \noindent {\bf Human Judgment:} We showed 3K results to crowd-workers in a two-choice test, comparing each result to a random hand from the training set, to examine whether our system extracts the signal. Note that the random baseline is frequently correct by chance, since very many grasps usually suffice (consider how many ways your hand can touch a soda can). Workers selected which result they thought was more plausible given the image. Presentation order was randomized, and we employed qualifications and sentinels; examples where workers could not come to an agreement were considered ties. Of the 60\% with a conclusive result (some inconclusive results are due to the input not depicting a clear object), our system was preferred 72\% of the time. \begin{figure}[t] \includegraphics[width=\linewidth]{figures/prediction_crop.pdf} \caption{Hand prediction results. Our system enables extracting pairs of images of uncontacted objects and {\it good} reconstructed meshes. We show results of a system trained to infer meshes from images. (Rows 1,2): Selected results.
(Rows 3,4): Random results that crowd workers preferred over a random grasp (Row 3) or did not (Row 4, preferring the random grasp instead).} \label{fig:handpred} \end{figure} \section{Introduction} \label{sec:introduction} \input{introduction.tex} \section{Related Work} \label{sec:related} \input{related.tex} \section{Dataset} \label{sec:dataset} \input{dataset.tex} \section{Finding Hands \& Objects in Interaction} \label{sec:method} \input{method.tex} \section{Experiments} \label{sec:experiments} \input{experiments.tex} \section{Conclusion} We have presented a method for obtaining information about hand contact state in the scene, a large-scale dataset for training this method, and demonstrated applications of our technique. We are barely scratching the surface of what can be learned from large-scale Internet video, and we hope that our rich representation can help the field collectively explore this area. \vspace{2mm} \noindent {\bf Acknowledgments}: This work was supported by: the Advanced Machine Learning Collaborative Grant from Procter \& Gamble in collaboration with Matthew Barker, PhD; and a gift from Nokia Solutions and Networks Oy. {\small \bibliographystyle{ieee_fullname} \subsection{Hand and Object Detection} \label{subsec:methodbase} We build our system on top of a standard object detection system, Faster-RCNN \cite{Ren2015} (FRCNN), by adding auxiliary predictions and losses per bounding box. We deliberately chose FRCNN for its reputation as a standard foundation for detection tasks; we see additional improvements to the base network as orthogonal to our contributions. Specifically, we build on FRCNN trained to identify two objects -- human hands and contacted objects. As in standard Faster-RCNN, the network predicts, for each anchor box, whether the anchor box is an object, what its category is, and bounding box regression adjustments to the anchor box; these remain unchanged.
We predict a series of auxiliary outputs directly from the same ROI-pooled features as the standard classification outputs. We now describe these outputs and the losses we use to train the additional layers. We predict hand side $\sB \in \mathbb{R}^2$ and contact state $\cB \in \mathbb{R}^5$ via two additional fully connected layers; the outputs represent left-vs-right and \{none / self / other / portable / non-portable\} respectively. Both are trained by minimizing standard cross-entropy losses $L_\textrm{side}$ and $L_\textrm{state}$. To link up boxes between hands and objects, we predict an association from a hand to an object, similar to Gkioxari et al. \cite{Gkioxari18}, by predicting an {\it offset vector}, factored into a unit vector $\vB \in \mathbb{R}^2$ plus a magnitude $m \in \mathbb{R}$, via two fully connected layers. Given the ground-truth vector from the center of the bounding box of a hand to the center of the bounding box of the object the hand is contacting, we write it as a unit vector $\vB' \in \mathbb{R}^2$ and a magnitude $m' \in \mathbb{R}$. We minimize the distance between the two direction vectors, $L_\textrm{ori}(\vB,\vB') = ||\vB - \vB'||^2_2$, as well as the squared difference between the magnitudes, $L_\textrm{mag}(m,m') = (m-m')^2$. Formulating the relationship as predicting an object per hand allows multiple hands to contact the same object; while it does preclude a hand contacting multiple objects, we find this case is rare and leave it to future work. We obtain a final discrete parse in terms of a set of hands in contact/correspondence with a set of objects through a greedy optimization on the network output. Given a new image, we infer all the hand and object boxes, as well as their side and contact scores and association vectors.
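A minimal sketch of the ground-truth association targets and the two losses defined above (the box format and function names are ours, not the actual training code):

```python
import numpy as np

def association_losses(hand_box, obj_box, v_pred, m_pred):
    """L_ori and L_mag for one hand/object pair.

    Boxes are (x1, y1, x2, y2); the ground-truth offset runs from the
    hand-box center to the center of the contacted object's box.
    """
    def center(box):
        return np.array([(box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0])

    offset = center(obj_box) - center(hand_box)
    m_gt = np.linalg.norm(offset)
    v_gt = offset / m_gt                                     # unit direction v'
    l_ori = float(np.sum((np.asarray(v_pred) - v_gt) ** 2))  # ||v - v'||_2^2
    l_mag = float((m_pred - m_gt) ** 2)                      # (m - m')^2
    return l_ori, l_mag

# A perfect prediction incurs zero loss on both terms
off = np.array([3.0, 1.0])  # hand center (1,1) -> object center (4,2)
l_ori, l_mag = association_losses((0, 0, 2, 2), (3, 1, 5, 3),
                                  off / np.linalg.norm(off), np.linalg.norm(off))
```

Factoring the offset into direction and magnitude keeps the two error terms at comparable scales regardless of how far the object is from the hand.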
We convert these soft predictions into a discrete prediction by suppressing unlikely hand/object detections and then associating each confident hand with the object whose center most closely matches the hand's bounding box center plus its offset vector. \vspace{2mm} \par \noindent {\bf Training details.} The standard FRCNN losses are minimized as usual; we minimize $L_\textrm{side}$, $L_\textrm{state}$ over anchor boxes corresponding to ground-truth hands and $L_\textrm{ori}$ and $L_\textrm{mag}$ over anchor boxes corresponding to ground-truth hands in contact. We scale the loss terms to compensate for their widely varying magnitudes but otherwise did not tune loss scales (details in supplemental material). We use a ResNet-101 \cite{He2015} backbone, initialized with Imagenet \cite{ILSVRC15}, and train it for 8 epochs with a learning rate of $10^{-3}$ and a batch size of 1. \subsection{Applications to Reconstruction} \label{subsec:reconstruction} Our system, out-of-the-box, directly enables the large-scale automatic deployment of techniques for mapping hands to 3D meshes, which supplement our outputs. As a concrete demonstration, we build off of the technique of \cite{Hasson19} that maps images to the MANO \cite{Romero17} low-dimensional parameterization of hands via a Resnet-18 \cite{He2015}; this parameterization comprises $[\thetaB,\betaB]$ representing hand pose and shape, which can be converted to a 3D mesh via the differentiable MANO model. Our system provides the necessary inputs (locations {\it plus side}); building a more complex system that integrates with the detection system is an interesting future direction and technically feasible, but beyond the scope of a single paper. While this enables many interesting downstream tasks, these tasks would be harmed by incorrect reconstructions, and so we present a simple technique for recognizing these failures. We use the idea of checking a network's equivariance as a signal for confidence from \cite{Bahat2018}. 
Specifically, given an image, we reconstruct the hand from six rotated copies of the image, reproject the joints, rotate them back, and compute the mean L2 distance between corresponding joints. We generate these for 3 frames per training video (70.9K images), sort by consistency, and set the examples in the top 30\% as positives and the bottom 30\% as negatives. We train a two-layer multilayer perceptron (hidden layer sizes 100, 50) on $[\thetaB]$, minimizing the binary cross-entropy; this classifier can be run at inference time to quickly identify poorly estimated frames. We quantify its effectiveness in Section \ref{sec:experimentsReconstruction}. \subsection{Proof of Concept: From Object to Grasp} \label{subsec:grasp} Once we can identify hands in contact in videos and reconstruct them, we can generate training data for identifying {\it how} hands might contact an object. After associating hands to tracks, we search our training set for moments in time where a hand makes contact with an object. On either side of such a moment lie a timestamp $t_\textrm{before}$, where the hand is not yet in contact, and a timestamp $t_\textrm{after}$, where it is. At $t_\textrm{after}$, our system provides side and bounding box for both hand and object, a mesh (via \cite{Hasson19}), and our self-supervised mesh assessment score. We can use the object box at $t_\textrm{after}$ to crop the image pre-contact at time $t_\textrm{before}$. We apply a number of filters, including removing examples with overlapping hands and scenes where the object appeared to move (detected by change in appearance). We can then learn a mapping from an image of an uncontacted object to a hand-in-contact. We use 203.4K training samples to build a system. We fine-tuned an Imagenet-pretrained \cite{ILSVRC15} Resnet-18 \cite{He2015} capped by an MLP to predict, for each mesh, hand pose and side, supervised by standard L2 losses along with supervision from hand vertices similar to \cite{Hasson19}. 
We found that this, like many regression formulations (e.g., see \cite{zhang2016colorful}), averaged out between the multiple modes. To prevent averaged hands, we generated a 10-hand codebook from training samples, represented each hand with the nearest of the 10 classes, and predicted these classes instead. For simplicity, we trained another Resnet-18 to predict these classes, minimizing a cross-entropy loss.
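The codebook step above can be sketched as follows. This is an illustrative re-implementation, not the authors' code: \texttt{build\_codebook} is a plain Lloyd-style k-means (standing in for whatever clustering procedure was actually used) that groups training pose vectors into $k=10$ entries, and \texttt{quantize} maps each pose to its nearest entry, producing the discrete class labels on which the classification network is trained with cross-entropy.

```python
import numpy as np

def quantize(poses, codebook):
    """Index of the nearest codebook entry for each pose (the training target)."""
    dists = np.linalg.norm(poses[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)

def build_codebook(poses, k=10, iters=50, seed=0):
    """Cluster pose vectors of shape (n, d) into a k-entry codebook via Lloyd's k-means."""
    rng = np.random.default_rng(seed)
    poses = np.asarray(poses, dtype=float)
    # Initialize centers with k distinct training poses.
    centers = poses[rng.choice(len(poses), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = quantize(poses, centers)
        for j in range(k):
            members = poses[labels == j]
            if len(members):  # keep the old center if a cluster empties out
                centers[j] = members.mean(axis=0)
    return centers
```

At inference, a predicted class index is simply mapped back to the corresponding codebook hand, which avoids the mode-averaging of direct regression.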
\section{Introduction} The determination of topological properties of $(LF)$-spaces of functions is an important problem in functional analysis that usually demands a delicate treatment. In the case of weighted inductive limits of spaces of (vector-valued) continuous and holomorphic functions the subject has a long tradition that goes back to the work of Bierstedt, Meise, and Summers \cite{B-M,B-M-S}. These kinds of spaces naturally arise in numerous fields of analysis like linear partial differential equations, Fourier analysis, or analytic representations of (ultra)distribution spaces. Inspired by this line of research, we introduce and study in the first part of this paper two general classes of weighted inductive limits of spaces of \emph{ultradifferentiable} functions. These spaces can be viewed as natural counterparts of the space $\mathcal{O}_C(\mathbb{R}^d)$ of ``very slowly increasing smooth functions'' in the ultradifferentiable setting; one type corresponding to the Beurling case and the other one to the Roumieu case. The second part of the article is devoted to studying the topological duals of these spaces. Our first goal is to characterize these duals in terms of the growth of convolution averages of their elements, thereby generalizing various classical results of Schwartz \cite{Schwartz} from distributions to ultradistributions; see \cite{D-P-P-V,P-P-V} for earlier work in this direction. Schwartz (and the authors of \cite{D-P-P-V,P-P-V}) use the parametrix method while we develop here a completely different approach based on descriptions of these ultradistribution spaces in terms of the \emph{short-time Fourier transform (STFT)}. This approach allows us to work under much milder assumptions than those needed when working with the parametrix method. 
In this respect, we mention the interesting paper \cite{B-O} by Bargetz and Ortner in which the mapping properties of the STFT on $\mathcal{O}'_C(\mathbb{R}^d)$ are established by using Schwartz' theory of vector-valued distributions. Our next aim is to study the topological properties of the duals under consideration and characterize when they are ultrabornological. This can be regarded as the ultradistributional analogue of the last part of Grothendieck's doctoral thesis \cite{Grothendieck}. Our method however entirely differs from the one employed by Grothendieck; again, our arguments exploit the STFT. Our results apply to the important case of spaces of convolutors for Gelfand-Shilov spaces. Such convolutor spaces have already been considered in \cite{D-P-Ve, D-P-P-V}; we shall improve various of the results shown there. For instance, we also treat here the quasianalytic case, and, moreover, Corollary \ref{topology-OCD} and Corollary \ref{topology-OCD-Roumieu} essentially solve the question posed after \cite[Thm.\ 3.3]{D-P-P-V}. It is worth mentioning that convolutor spaces appear naturally in the study of abstract partial differential and convolution equations, see e.g.\ \cite{Chazarain, C-Z,Kostic, S-Z}. The plan of the article is as follows. In the preliminary Section \ref{sect-prel} we recall some notions from the theory of $(LF)$-spaces that will be frequently used throughout the first part of the paper. We also discuss there the mapping properties of the STFT on Gelfand-Shilov spaces and their duals. In Section \ref{sect-reg-ultra} we introduce two new types of weighted inductive limits of ultradifferentiable functions and characterize when these spaces are complete in terms of the defining family of weight functions. Our arguments make use of a result of Albanese \cite{Albanese} on the completeness of weighted inductive limits of spaces of Fr\'echet-valued continuous functions (which will be discussed in Subsection \ref{sect-reg-cont}). 
Section \ref{sect-duals} deals with the duals of our weighted inductive limits of spaces of ultradifferentiable functions. As mentioned before, a key to our arguments is to employ the STFT. Subsection \ref{Char STFT dual inductive} provides a detailed study of the mapping properties of the STFT on our spaces. We establish the sought convolution average characterizations in Subsection \ref{conv average subsection}. The characterization in terms of convolution averages suggests a natural way to define a locally convex topology on these duals and, based upon the results from Section \ref{sect-reg-ultra} and the mapping properties of the STFT, we give in Subsection \ref{subsection topological properties} necessary and sufficient conditions for these spaces to be ultrabornological. \section{Preliminaries}\label{sect-prel} In the first part of this preliminary section we recall several regularity conditions for $(LF)$-spaces and state how they are related to each other; see \cite{Wengenroth-96} and \cite[Chap.\ 6]{Wengenroth} for more detailed accounts on the subject. Secondly, we discuss a result of Albanese \cite{Albanese} concerning the completeness of weighted inductive limits of spaces of Fr\'echet-valued continuous functions. This result will play a key role in Section \ref{sect-reg-ultra}. Next, we collect some facts about the Gelfand-Shilov spaces $\mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ and $\mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d)$ and their duals. We also discuss the mapping properties of the short-time Fourier transform on these spaces; they will be employed in Section \ref{sect-duals}. \subsection{Regularity conditions for $(LF)$-spaces}\label{sect-reg-cond} A Hausdorff l.c.s. 
(locally convex space) $E$ is called an $(LF)$-space if there is a sequence $(E_n)_{n \in \mathbb{N}}$ of Fr\'echet spaces with $E_n \subset E_{n + 1}$ and continuous inclusion mappings such that $E = \bigcup_{n \in \mathbb{N}} E_n$ and the topology of $E$ coincides with the finest locally convex topology for which all inclusion mappings $E_n \rightarrow E$ are continuous. We call $(E_n)_{n}$ a defining inductive spectrum for $E$ and write $E = \varinjlim E_n$. We emphasize that, for us, $(LF)$-spaces are Hausdorff by definition. If the sequence $(E_n)_{n}$ consists of $(FS)$-spaces, $E$ is called an $(LFS)$-space. Similarly, $E$ is said to be an $(LB)$-space if the sequence $(E_n)_{n}$ consists of Banach spaces. An $(LF)$-space $E=\varinjlim E_n$ is said to be \emph{regular} if every bounded set $B$ in $E$ is contained and bounded in $E_n$ for some $n \in \mathbb{N}$. By Grothendieck's factorization theorem (see e.g.\ \cite[p.\ 225 (4)]{Kothe}), every quasi-complete $(LF)$-space is regular. We now discuss several related concepts. An $(LF)$-space $E = \varinjlim E_n$ is said to satisfy condition $(wQ)$ if for every $n \in \mathbb{N}$ there are a neighborhood $U$ of $0$ in $E_n$ and $m > n$ such that for every $k > m$ and every neighborhood $W$ of $0$ in $E_m$ there are a neighborhood $V$ of $0$ in $E_k$ and $C > 0$ with $V \cap U \subseteq CW$. If $ (\| \, \cdot \, \|_{n,N})_{N \in \mathbb{N}}$ is a fundamental sequence of seminorms for $E_n$, then $E$ satisfies $(wQ)$ if and only if \begin{gather*} \forall n \in \mathbb{N} \, \exists m > n \, \exists N \in \mathbb{N} \, \forall k > m \, \forall M \in \mathbb{N} \, \exists K \in \mathbb{N} \, \exists C > 0\, \forall e \in E_n: \\ \|e\|_{m,M} \leq C(\|e\|_{n,N} + \|e\|_{k,K}). \end{gather*} Clearly, every $(LB)$-space satisfies $(wQ)$. Moreover, every regular $(LF)$-space satisfies $(wQ)$ \cite[Thm.\ 4.7]{Vogt-92}. Next, we introduce two strong regularity conditions. 
An $(LF)$-space $E$ is said to be \emph{boundedly retractive} if for every bounded set $B$ in $E$ there is $n \in \mathbb{N}$ such that $B$ is contained in $E_n$ and $E$ and $E_n$ induce the same topology on $B$, while $E$ is said to be \emph{sequentially retractive} if for every null sequence in $E$ there is $n \in \mathbb{N}$ such that the sequence is contained and converges to zero in $E_n$. Finally, $E$ is said to be \emph{boundedly stable} if for every $n \in \mathbb{N}$ and every bounded set $B$ in $E_n$ there is $m \geq n$ such that for every $k \geq m$ the spaces $E_m$ and $E_k$ induce the same topology on $B$. Notice that, in view of Grothendieck's factorization theorem, these conditions do not depend on the defining inductive spectrum of $E$. This justifies calling an $(LF)$-space boundedly retractive, etc., if one (and thus all) of its defining inductive spectra has this property. These concepts are related to each other in the following way: \begin{theorem}[{\cite[Thm.\ 6.4 and Cor.\ 6.5]{Wengenroth}}] \label{reg-cond} Let $E$ be an $(LF)$-space. The following statements are equivalent: \begin{itemize} \item[$(i)$] $E$ is boundedly retractive. \item[$(ii)$] $E$ is sequentially retractive. \item[$(iii)$] $E$ is boundedly stable and satisfies $(wQ)$. \end{itemize} In such a case, $E$ is complete. \end{theorem} In fact, the conditions in Theorem \ref{reg-cond} are also equivalent to the fact that $E$ is acyclic or that $E$ satisfies Retakh's condition $(M)$. We refer to \cite{Wengenroth-96} and the discussion above \cite[Thm.\ 6.4, p.\ 112]{Wengenroth} for the history of this important theorem. \subsection{Weighted inductive limits of spaces of Fr\'echet-valued continuous functions}\label{sect-reg-cont} Let $X$ be a completely regular Hausdorff topological space. A (pointwise) decreasing sequence $\mathcal{V} := (v_n)_{n \in \mathbb{N}}$ of positive continuous functions on $X$ is called a \emph{decreasing weight system on X}. 
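A guiding example on $X = \mathbb{R}^d$ is the system of exponential weights
$$
v_n(x) = e^{-n|x|}, \qquad n \in \mathbb{N}.
$$
Since $v_m(x)/v_n(x) = e^{-(m-n)|x|}$ vanishes at $\infty$ whenever $m > n$, this system satisfies condition \eqref{decay-weights-1} below; furthermore, for $n < m < k$ the choice $\theta = (m-n)/(k-n) \in (0,1)$ gives $v_m = v_n^{1-\theta}v_k^{\theta}$, so that it also satisfies the condition $(\Omega)$ introduced in this subsection.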
Given a Fr\'echet space $E$ with a fundamental sequence of seminorms $(\| \, \cdot \, \|_N)_{N}$ we define $Cv_n(X;E)$ as the Fr\'echet space consisting of all $f \in C(X;E)$ such that $$ \sup_{x \in X} \|f(x)\|_N v_n(x) < \infty $$ for all $N \in \mathbb{N}$. We set $$ \mathcal{V}C(X;E) := \varinjlim_{n \in \mathbb{N}} Cv_n(X;E), $$ an $(LF)$-space (it is Hausdorff because the topology of $\mathcal{V}C(X;E)$ is finer than the one induced by $C(X;E)$ and the latter space is Hausdorff). If $E = \mathbb{C}$, we simply write $\mathcal{V}C(X;\mathbb{C}) = \mathcal{V}C(X)$, an $(LB)$-space. We remark that $\mathcal{V}C(X)$ is always complete \cite{B-B} while it is boundedly retractive if and only if $\mathcal{V}$ is \emph{regularly decreasing} \cite[p.\ 118 Thm.\ 7]{Bierstedt}, i.e., for every $n \in \mathbb{N}$ there is $m \geq n$ such that for every $k > m$ and every subset $Y$ of $X$ it holds that $$ \inf_{y \in Y} \frac{v_m(y)}{v_n(y)} > 0 \Longrightarrow \inf_{y \in Y} \frac{v_k(y)}{v_n(y)} > 0. $$ For example, constant weight systems and weight systems $\mathcal{V} = (v_n)_{n}$ satisfying \begin{equation} \forall n \in \mathbb{N} \, \exists m > n \, : \, \mbox{$v_m/v_n$ vanishes at $\infty$} \label{decay-weights-1} \end{equation} are regularly decreasing. Albanese characterized the completeness of the $(LF)$-space $\mathcal{V}C(X;E)$ in the ensuing way: \begin{theorem}[{\cite[Thm.\ 2.3]{Albanese}}]\label{completeness-F} Let $\mathcal{V} = (v_n)_{n}$ be a decreasing weight system. For a non-normable Fr\'echet space $E$ with a fundamental increasing sequence of seminorms $(\| \, \cdot \, \|_N)_{N}$ the following statements are equivalent: \begin{itemize} \item[$(i)$] $\mathcal{V}C(X;E)$ is boundedly retractive. \item[$(ii)$] $\mathcal{V}C(X;E)$ is (quasi-)complete. \item[$(iii)$] $\mathcal{V}C(X;E)$ is regular. \item[$(iv)$] $\mathcal{V}C(X;E)$ satisfies $(wQ)$. 
\item[$(v)$] The pair $(E, \mathcal{V})$ satisfies $(S_2)^*$, i.e., \begin{gather*} \forall n \in \mathbb{N} \, \exists m > n \, \exists N \in \mathbb{N} \, \forall k > m \, \forall M \in \mathbb{N} \, \exists K \in \mathbb{N} \, \exists C > 0\, \forall e \in E \, \forall x \in X: \\ v_m(x)\|e\|_M \leq C(v_n(x)\|e\|_N + v_k(x)\|e\|_K). \end{gather*} \end{itemize} \end{theorem} \begin{remark}\label{S-2-inv} Let $\mathcal{V}$ be a decreasing weight system. If $E$ and $F$ are topologically isomorphic Fr\'echet spaces, then $(E, \mathcal{V})$ satisfies $(S_2)^*$ if and only if $(F, \mathcal{V})$ does so. \end{remark} We now state separate conditions on $E$ and $\mathcal{V}$ which ensure that the pair $(E, \mathcal{V})$ satisfies $(S_2)^*$. Since this is very similar to the analysis of the conditions $(S^*_1)$ and $(S^*_2)$ in the splitting theory of Fr\'echet spaces \cite{Vogt-87}, we omit all proofs. A Fr\'echet space $E$ with a fundamental increasing sequence of seminorms $(\| \, \cdot \, \|_N)_{N}$ is said to satisfy $(DN)$ if $$ \exists N \in \mathbb{N} \, \forall M > N \, \exists K > M \, \, \exists C > 0 \, \forall e \in E: \, \|e\|^2_M \leq C \|e\|_N\|e\|_K, $$ while it is said to satisfy $(\Omega)$ if $$ \forall N \in \mathbb{N} \, \exists M > N \, \forall K > M \, \exists \theta \in (0,1) \, \exists C > 0 \, \forall e' \in E': \, \|e'\|^*_M \leq C (\|e'\|^*_N)^{1-\theta}( \|e'\|^*_K)^{\theta}, $$ where $$ \|e'\|^*_N := \sup \{ |\langle e', e \rangle | \, : \, e \in E, \, \|e\|_N \leq 1 \} \in [0,\infty], \qquad e' \in E'. $$ A decreasing weight system $\mathcal{V} = (v_n)_{n}$ is said to satisfy $(\Omega)$ if $$ \forall n \in \mathbb{N} \, \exists m > n \, \forall k > m \, \exists \theta \in (0,1) \, \exists C > 0 \, \forall x \in X: \, v_m(x) \leq C v_n(x)^{1-\theta}v_k(x)^{\theta}. 
$$ This terminology is justified because one can show that $\mathcal{V}$ satisfies $(\Omega)$ if and only if the strong dual of $\mathcal{V}C(X)$ satisfies $(\Omega)$; indeed, by using an obvious analogue of \cite[Lemma 20]{B-D}, this follows from a similar argument as in \cite[Lemma 2.1]{Albanese}. We then have: \begin{proposition} [{cf.\ \cite[Thm.\ 5.1]{Vogt-87}}]\label{(DN)-Omega-1} Let $\mathcal{V}$ be a decreasing weight system and let $E$ be a Fr\'echet space. If $\mathcal{V}$ satisfies $(\Omega)$ and $E$ satisfies $(DN)$, then $(E,\mathcal{V})$ satisfies $(S_2)^\ast$. \end{proposition} If $E$ is a power series space, the conditions in Proposition \ref{(DN)-Omega-1} turn out to be necessary as well. In the rest of this subsection $\beta = (\beta_j)_{j \in \mathbb{N}}$ will stand for a strictly increasing sequence of positive numbers such that $\beta_j \rightarrow \infty$. We define $\Lambda_\infty(\beta)$ ($\Lambda_0(\beta)$, respectively) as the Fr\'echet space consisting of all sequences $(c_j)_j \in \mathbb{C}^\mathbb{N}$ such that $$ \left(\sum_{j = 0}^\infty |c_j|^2e^{2n\beta_j}\right)^{1/2} < \infty \qquad \left(\left(\sum_{j = 0}^\infty |c_j|^2e^{-2\beta_j/n}\right)^{1/2} <\infty \right) $$ for all $n \in \mathbb{N}$. Furthermore, we shall always assume that $\beta$ is shift-stable, i.e., \begin{equation} \sup_{j \in \mathbb{N}} \frac{\beta_{j+1}}{\beta_j} < \infty. \label{stable} \end{equation} \begin{proposition}[{cf.\ \cite[Thm.\ 4.1]{Vogt-87}}]\label{E-fixed-1} Let $\mathcal{V}$ be a decreasing weight system. Then, $\mathcal{V}$ satisfies $(\Omega)$ if and only if $(\Lambda_\infty(\beta),\mathcal{V})$ satisfies $(S_2)^\ast$. \end{proposition} Finally, we introduce increasing weight systems. They will also play an essential role in the rest of this article. An increasing sequence $\mathcal{W} := (w_N)_{N \in \mathbb{N}}$ of positive continuous functions on $X$ is called an \emph{increasing weight system on X}. 
Sometimes we shall impose the following condition on $\mathcal{W}$ (cf.\ \eqref{decay-weights-1}): \begin{equation} \forall N \in \mathbb{N} \, \exists M > N \, : \mbox{$w_N/w_M$ vanishes at $\infty$}. \label{decay-weights} \end{equation} We define $\mathcal{W}C(X)$ as the Fr\'echet space consisting of all $f \in C(X)$ such that $$ \| f \|_{w_N} := \sup_{x \in X}|f(x)|w_N(x) < \infty $$ for all $N \in \mathbb{N}$. Let $E = \varinjlim E_n$ be an $(LB)$-space. The pair $(\mathcal{W}, E)$ is said to satisfy $(S_2)^*$ if \begin{gather*} \forall n \in \mathbb{N} \, \exists m > n \, \exists N \in \mathbb{N} \, \forall k > m \, \forall M \in \mathbb{N} \, \exists K \in \mathbb{N} \, \exists C > 0\, \forall e \in E_n \, \forall x \in X: \\ \|e\|_{E_m} w_M(x) \leq C(\|e\|_{E_n}w_N(x) + \|e\|_{E_k}w_K(x)). \end{gather*} \begin{remark}\label{S-2-inv-1} Let $\mathcal{W}$ be an increasing weight system. If $E$ and $F$ are topologically isomorphic $(LB)$-spaces, then $(\mathcal{W},E)$ satisfies $(S_2)^*$ if and only if $(\mathcal{W},F)$ does so, as follows from Grothendieck's factorization theorem. In particular, condition $(S_2)^*$ does not depend on the defining inductive spectrum of $E$. \end{remark} We now wish to formulate an analogue of Proposition \ref{E-fixed-1} for increasing weight systems. The weight system $\mathcal{W} = (w_N)_N$ is said to satisfy $(DN)$ if $$ \exists N \in \mathbb{N} \, \forall M > N \, \exists K > M \, \exists C > 0 \, \forall x \in X: \, w^2_M(x) \leq C w_N(x)w_K(x). $$ As before, this terminology is justified because one can show that $\mathcal{W}$ satisfies $(DN)$ if and only if $\mathcal{W}C(X)$ satisfies $(DN)$. \begin{proposition}[{cf.\ \cite[Thm.\ 4.3]{Vogt-87}}]\label{W-fixed-1} Let $\mathcal{W}$ be an increasing weight system. Then, $\mathcal{W}$ satisfies $(DN)$ if and only if $(\mathcal{W}, \Lambda'_0(\beta))$ satisfies $(S_2)^\ast$. 
\end{proposition} \subsection{Gelfand-Shilov spaces and the short-time Fourier transform.}\label{gelfand-shilov-spaces} Let $(M_p)_{p \in \mathbb{N}}$ be a sequence of positive real numbers and define $m_p := M_p / M_{p -1}$, $p \geq 1$. We call $M_p$ a \emph{weight sequence} if $\lim_{p \to \infty} m_p = \infty$. Furthermore, we will make use of some of the following conditions: \begin{enumerate} \item[$ (M.1)$ ] $M_p^2 \leq M_{p-1}M_{p+1}$, $p \geq 1$. \item[$(M.2)$ ] $M_{p+q} \leq C_0H^{p+q}M_pM_q$, $p,q \in \mathbb{N}$, for some $C_0,H \geq 1$. \item[ $(M.2)^\ast $] $2m_p \leq m_{Qp}$, $p \geq 1$, for some $Q \in \mathbb{N}$. \item[$(M.3)$ ] $\sum_{p = q}^\infty 1/m_p< Cq/m_{q}$, $q \geq 1$, for some $C \geq 1$. \end{enumerate} The reader can consult \cite{Komatsu,B-M-M} for the meaning of these conditions. It is worth mentioning that $(M.1)$ and $(M.3)$ imply $(M.2)^*$ \cite[Prop.\ 1.1]{Petzsche88}. We write $M_\alpha = M_{|\alpha|}$, $\alpha \in \mathbb{N}^d$. As usual, the relation $M_p\subset N_p$ between two weight sequences means that there are $C,h>0$ such that $M_p\leq Ch^{p}N_{p},$ $p\in\mathbb{N}$. The stronger relation $M_p\prec N_p$ means that the latter inequality remains valid for every $h>0$ and a suitable $C=C_{h}>0$. The \emph{associated function of $M_p$} is defined as $$ M(t):=\sup_{p\in\mathbb{N}}\log\frac{t^pM_0}{M_p},\qquad t > 0, $$ and $M(0):=0$. We define $M$ on $\mathbb{R}^d$ as the radial function $M(x) = M(|x|)$, $x \in \mathbb{R}^d$. Under $(M.1)$, the assumption $(M.2)$ holds \cite[Prop.\ 3.6]{Komatsu} if and only if \begin{equation*} 2M(t) \leq M(Ht) + \log C_0, \qquad t \geq 0, \end{equation*} while, under $(M.1)$ and $(M.2)$, the assumption $(M.2)^*$ \cite[Prop.\ 13]{B-M-M} holds if and only if \begin{equation*} M(2t) \leq H'M(t) + \log C_0', \qquad t \geq 0, \end{equation*} for some $C_0',H' \geq 1$. Let $M_p$ and $A_p$ be two weight sequences. We denote by $A$ the associated function of $A_p$. 
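A typical example to keep in mind is the family of Gevrey sequences
$$
M_p = p!^{s}, \qquad s > 0:
$$
condition $(M.1)$ follows from the log-convexity of $p!$, condition $(M.2)$ holds with $H = 2^{s}$ since $(p+q)! \leq 2^{p+q}\,p!\,q!$, and, for $s > 1$, condition $(M.3)$ holds as well because $m_p = p^{s}$ and $\sum_{p \geq q} p^{-s} \leq C q^{1-s} = Cq/m_q$. Moreover, a computation based on Stirling's formula shows that the associated function of $p!^{s}$ satisfies $M(t) \asymp t^{1/s}$ as $t \to \infty$.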
For $h, \lambda > 0$ we write $\mathcal{S}^{M_p,h}_{A_p,\lambda}(\mathbb{R}^d)$ for the Banach space consisting of all $\varphi \in C^\infty(\mathbb{R}^d)$ such that $$ \| \varphi \|_{\mathcal{S}^{M_p,h}_{A_p,\lambda}} :=\sup_{\alpha \in \mathbb{N}^d} \sup_{x \in \mathbb{R}^d} \frac{h^{|\alpha|}|\varphi^{(\alpha)}(x)|e^{A(\lambda x)}}{M_{\alpha}} < \infty. $$ We define $$ \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d) := \varprojlim_{h \to \infty} \mathcal{S}^{M_p,h}_{A_p,h}(\mathbb{R}^d), \qquad \mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d) := \varinjlim_{h \to 0^+} \mathcal{S}^{M_p,h}_{A_p,h}(\mathbb{R}^d). $$ Elements of their dual spaces $\mathcal{S}'^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ and $\mathcal{S}'^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d) $ are called \emph{tempered ultradistributions of Beurling and Roumieu type}, respectively. In the sequel we shall write $\ast$ instead of $(M_p)$ or $\{M_p\}$ and $\dagger$ instead of $(A_p)$ or $\{A_p\}$ if we want to treat both cases simultaneously. In addition, we shall often first state assertions for the Beurling case followed in parenthesis by the corresponding statements for the Roumieu case. Let $M_p$ and $A_p$ be weight sequences. We introduce the following set of conditions on $M_p$ and $A_p$: \begin{equation} \mbox{$M_p$ and $A_p$ satisfy $(M.1)$ and $(M.2)$, $p! \prec M_pA_p$, and $\mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ is non-trivial.} \label{group-cond} \end{equation} A sufficient condition for the non-triviality of $\mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ is $p!^\sigma \subset M_p$ and $p!^\tau \subset A_p$ for some $\sigma, \tau > 0$ with $\sigma + \tau > 1$ \cite[p.\ 235]{G-S}. Other non-triviality conditions can be found in \cite{Debrouwere-VindasUH2016}. 
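In particular, the conditions \eqref{group-cond} are fulfilled for the Gevrey choices $M_p = p!^{\sigma}$ and $A_p = p!^{\tau}$ with $\sigma, \tau > 0$ and $\sigma + \tau > 1$: these sequences satisfy $(M.1)$ and $(M.2)$, non-triviality follows from the sufficient condition just quoted, and $p! \prec p!^{\sigma+\tau} = M_pA_p$ because $p!^{\sigma + \tau - 1}$ eventually dominates every geometric sequence $h^{-p}$, $h > 0$.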
Under the general conditions \eqref{group-cond}, we have the ensuing properties: \begin{itemize} \item[$(i)$] The Fourier transform is a topological isomorphism from $\mathcal{S}^\ast_\dagger(\mathbb{R}^d)$ onto $\mathcal{S}^\dagger_\ast(\mathbb{R}^d)$, where we fix the constants in the Fourier transform as follows $$ \mathcal{F}(\varphi)(\xi) =\widehat{\varphi}(\xi) := \int_{\mathbb{R}^d}\varphi(x) e^{-2\pi i x\xi}{\rm d}x . $$ We define the Fourier transform from $\mathcal{S}'^\dagger_\ast(\mathbb{R}^d)$ onto $\mathcal{S}'^\ast_\dagger(\mathbb{R}^d)$ via duality. \item[$(ii)$] $\mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)\hookrightarrow \mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d) $ (the symbol ``$\hookrightarrow$'' stands for dense and continuous inclusion) \cite[Lemma 2.4]{P-P-V}. \item[$(iii)$] $\mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ is an $(FN)$-space while $\mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d)$ is a $(DFN)$-space \cite[Prop.\ 2.11]{P-P-V}. \end{itemize} We shall use these properties without explicitly referring to them. Next, we discuss the mapping properties of the short-time Fourier transform on the spaces $\mathcal{S}^\ast_\dagger(\mathbb{R}^d)$ and $\mathcal{S}'^\ast_\dagger(\mathbb{R}^d)$. The translation and modulation operators are denoted by $T_xf = f(\:\cdot\: - x)$ and $M_\xi f = e^{2\pi i \xi \cdot} f$, for $x, \xi \in \mathbb{R}^d$. We also write $\check{f} = f(-\ \cdot\ )$ for reflection about the origin. The short-time Fourier transform (STFT) of a function $f \in L^2(\mathbb{R}^d)$ with respect to a window function $\psi \in L^2(\mathbb{R}^d)$ is defined as $$ V_\psi f(x,\xi) := (f, M_\xi T_x\psi)_{L^2} = \int_{\mathbb{R}^d} f(t) \overline{\psi(t-x)}e^{-2\pi i \xi t} {\rm d}t , \qquad (x, \xi) \in \mathbb{R}^{2d}. $$ It holds that $\|V_\psi f\|_{L^2(\mathbb{R}^{2d})} = \|\psi \|_{L^2}\|f\|_{L^2}$. In particular, the linear mapping $V_\psi : L^2(\mathbb{R}^d) \rightarrow L^2(\mathbb{R}^{2d})$ is continuous. 
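To illustrate these formulas, let us record the classical Gaussian example in dimension $d = 1$: for $f(t) = \psi(t) = e^{-\pi t^2}$, completing the square in the Gaussian integral gives
$$
V_\psi f(x,\xi) = 2^{-1/2}\, e^{-\pi i x \xi}\, e^{-\pi(x^2 + \xi^2)/2}, \qquad (x,\xi) \in \mathbb{R}^2,
$$
whence $\|V_\psi f\|^2_{L^2(\mathbb{R}^2)} = 1/2 = \|\psi\|^2_{L^2}\|f\|^2_{L^2}$, in accordance with the identity above.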
The adjoint of $V_\psi$ is given by the weak integral $$ V^\ast_\psi F = \int \int_{\mathbb{R}^{2d}} F(x,\xi) M_\xi T_x\psi {\rm d}x {\rm d}\xi , \qquad F \in L^2(\mathbb{R}^{2d}). $$ If $\psi \neq 0$ and $\gamma \in L^2(\mathbb{R}^d)$ is a synthesis window for $\psi$, that is, $(\gamma, \psi)_{L^2} \neq 0$, then \begin{equation} \frac{1}{(\gamma, \psi)_{L^2}} V^\ast_\gamma \circ V_\psi = \operatorname{id}_{L^2(\mathbb{R}^d)}. \label{reconstruction-L2} \end{equation} We refer to \cite{Grochenig} for further properties of the STFT. Throughout the rest of this subsection $M_p$ and $A_p$ will stand for weight sequences satisfying \eqref{group-cond}. As usual, given two l.c.s.\ $E$ and $F$ we write $E\widehat{\otimes}F$ for the completion of the tensor product $E\otimes F$ with respect to the $\varepsilon$- or $\pi$-topology provided that either $E$ or $F$ is nuclear. Let $N_p$ and $B_p$ be two other weight sequences satisfying \eqref{group-cond}. We denote by $B$ the associated function of $B_p$. For $h>0$ we write $X_h$ for the Banach space consisting of all $\varphi \in C^\infty(\mathbb{R}^{d_1+d_2})$ such that $$ \sup_{(\alpha, \beta) \in \mathbb{N}^{d_1+d_2}} \sup_{(x, t) \in \mathbb{R}^{d_1+d_2}} \frac{h^{|\alpha| + |\beta|}|\partial^\beta_t \partial^\alpha_x\varphi(x,t)|e^{A(hx) + B(ht)}}{M_\alpha N_\beta} < \infty; $$ one can then show that (cf.\ \cite[Prop.\ 2.12]{P-P-V}) $$ \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^{d_1}_x) \widehat{\otimes} \mathcal{S}^{(N_p)}_{(B_p)}(\mathbb{R}^{d_2}_t) \cong \varprojlim_{h \to \infty} X_h, \qquad \mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^{d_1}_x) \widehat{\otimes} \mathcal{S}^{\{N_p\}}_{\{B_p\}}(\mathbb{R}^{d_2}_t) \cong \varinjlim_{h\to 0^{+}} X_{h}, $$ as locally convex spaces. In particular, we have that $ \mathcal{S}^\ast_\dagger(\mathbb{R}^{d_1}) \widehat{\otimes} \mathcal{S}^\ast_\dagger(\mathbb{R}^{d_2}) \cong \mathcal{S}^\ast_\dagger(\mathbb{R}^{d_1+ d_2})$. Furthermore, we have the following isomorphisms of l.c.s. 
$$ ( \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^{d_1}_x) \widehat{\otimes} \mathcal{S}^{(N_p)}_{(B_p)}(\mathbb{R}^{d_2}_t))' \cong \mathcal{S}'^{(M_p)}_{(A_p)}(\mathbb{R}^{d_1}_x) \widehat{\otimes} \mathcal{S}'^{(N_p)}_{(B_p)}(\mathbb{R}^{d_2}_t) $$ and $$ ( \mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^{d_1}_x) \widehat{\otimes} \mathcal{S}^{\{N_p\}}_{\{B_p\}}(\mathbb{R}^{d_2}_t))' \cong \mathcal{S}'^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^{d_1}_x) \widehat{\otimes} \mathcal{S}'^{\{N_p\}}_{\{B_p\}}(\mathbb{R}^{d_2}_t). $$ Naturally, the partial Fourier transforms $$ \mathcal{F}_t : \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^{d_1}_x) \widehat{\otimes} \mathcal{S}^{(N_p)}_{(B_p)}(\mathbb{R}^{d_2}_t) \rightarrow \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^{d_1}_x) \widehat{\otimes} \mathcal{S}^{(B_p)}_{(N_p)}(\mathbb{R}^{d_2}_\xi) $$ and $$ \mathcal{F}_t : \mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^{d_1}_x) \widehat{\otimes} \mathcal{S}^{\{N_p\}}_{\{B_p\}}(\mathbb{R}^{d_2}_t) \rightarrow \mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^{d_1}_x) \widehat{\otimes} \mathcal{S}^{\{B_p\}}_{\{N_p\}}(\mathbb{R}^{d_2}_\xi), $$ given by $$ \mathcal{F}_t(\varphi)(x, \xi) := \int_{\mathbb{R}^{d_2}}\varphi(x,t) e^{-2\pi i t\xi}{\rm d}t , $$ are topological isomorphisms. \begin{proposition}\label{STFT-testfunctions} Let $\psi \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$. The following mappings are continuous: $$ V_\psi: \mathcal{S}^\ast_\dagger(\mathbb{R}^d) \rightarrow \mathcal{S}^\ast_\dagger(\mathbb{R}^d_x) \widehat{\otimes} \mathcal{S}^\dagger_\ast(\mathbb{R}^d_\xi) $$ and $$ V^\ast_\psi: \mathcal{S}^\ast_\dagger(\mathbb{R}^d_x) \widehat{\otimes} \mathcal{S}^\dagger_\ast(\mathbb{R}^d_\xi) \rightarrow \mathcal{S}^\ast_\dagger(\mathbb{R}^d). 
$$ \end{proposition} \begin{proof}Consider the continuous linear mappings $$ \cdot \otimes \overline{\psi} :\mathcal{S}^\ast_\dagger(\mathbb{R}^d_t) \rightarrow\mathcal{S}^\ast_\dagger(\mathbb{R}^{2d}_{t,y}): \varphi(t) \rightarrow \varphi(t)\otimes \overline{\psi}(y) $$ and $$ T: \mathcal{S}^\ast_\dagger(\mathbb{R}^{2d}_{t,y}) \rightarrow \mathcal{S}^\ast_\dagger(\mathbb{R}^{2d}_{t,x}): \chi(t,y) \rightarrow \chi(t,t -x). $$ That $V_\psi$ is continuous then follows from the representation $V_\psi = \mathcal{F}_t \circ T \circ (\cdot \otimes \overline{\psi})$. The continuity of $V^{\ast}_{\psi}$ is an immediate consequence of (the proof of) Lemma \ref{double-int-test} shown below. \end{proof} The STFT of an ultradistribution $f \in \mathcal{S}'^\ast_\dagger(\mathbb{R}^d)$ with respect to a window function $\psi \in \mathcal{S}^\ast_\dagger(\mathbb{R}^d)$ is defined as \begin{equation*} V_\psi f(x,\xi):= \langle f, \overline{M_\xi T_x\psi}\rangle= e^{-2\pi i \xi x}(f \ast M_\xi \check{\overline {\psi}})(x), \qquad (x, \xi) \in \mathbb{R}^{2d}. \end{equation*} Clearly, $V_\psi f$ is a smooth function on $\mathbb{R}^{2d}$. We \emph{define} the adjoint STFT of $F \in \mathcal{S}'^\ast_\dagger(\mathbb{R}^d) \widehat{\otimes} \mathcal{S}'^\dagger_\ast(\mathbb{R}^d)$ as $$ \langle V^\ast_\psi F, \varphi \rangle := \langle F, \overline{V_\psi\overline{\varphi}} \rangle, \qquad \varphi \in \mathcal{S}^\ast_\dagger(\mathbb{R}^d). $$ Proposition \ref{STFT-testfunctions}, the reconstruction formula (\ref{reconstruction-L2}), and the same argument given in \cite[Sect.\ 3]{K-P-S-V2016} yield: \begin{proposition}\label{STFT-duals} Let $\psi \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$. 
The mappings $$ V_\psi: \mathcal{S}'^\ast_\dagger(\mathbb{R}^d) \rightarrow \mathcal{S}'^\ast_\dagger(\mathbb{R}^d_x) \widehat{\otimes} \mathcal{S}'^\dagger_\ast(\mathbb{R}^d_\xi) $$ and $$ V^\ast_\psi: \mathcal{S}'^\ast_\dagger(\mathbb{R}^d_x) \widehat{\otimes} \mathcal{S}'^\dagger_\ast(\mathbb{R}^d_\xi) \rightarrow \mathcal{S}'^\ast_\dagger(\mathbb{R}^d) $$ are continuous. Moreover, if $\gamma \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ is a synthesis window for $\psi$, then $$ \frac{1}{(\gamma, \psi)_{L^2}} V^\ast_\gamma \circ V_\psi = \operatorname{id}_{ \mathcal{S}'^\ast_\dagger(\mathbb{R}^d)} $$ and the desingularization formula \begin{equation} \langle f, \varphi \rangle = \frac{1}{(\gamma, \psi)_{L^2}} \int \int_{\mathbb{R}^{2d}}V_\psi f(x, \xi) V_{\overline{\gamma}}\varphi(x, - \xi) {\rm d}x {\rm d}\xi \label{regularization-via-STFT} \end{equation} holds for all $f \in \mathcal{S}'^\ast_\dagger(\mathbb{R}^d)$ and $\varphi \in \mathcal{S}^\ast_\dagger(\mathbb{R}^d)$. \end{proposition} \section{The Gelfand-Shilov type spaces $\mathcal{B}^*_\mathcal{W}(\mathbb{R}^d)$ and $\mathcal{B}^*_\mathcal{V}(\mathbb{R}^d)$}\label{sect-reg-ultra} We now introduce new classes of Gelfand-Shilov type spaces as weighted spaces of ultradifferentiable functions. Let $w$ be a nonnegative function on $\mathbb{R}^d$ and let $M_p$ be a weight sequence. For $h > 0$ we write $\mathcal{B}^{M_p,h}_w(\mathbb{R}^d)$ for the seminormed space consisting of all $\varphi \in C^\infty(\mathbb{R}^d)$ such that $$ \| \varphi \|_{\mathcal{B}^{M_p,h}_w} := \sup_{\alpha \in \mathbb{N}^d} \sup_{x \in \mathbb{R}^d} \frac{h^{|\alpha|}|\varphi^{(\alpha)}(x)|w(x)}{M_\alpha} < \infty. $$ If $w$ is positive and $w^{-1}$ is locally bounded, then $\mathcal{B}^{M_p,h}_w(\mathbb{R}^d)$ is a Banach space. These requirements are fulfilled if $w$ is positive and continuous. 
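For orientation, we note a simple instance of these norms (anticipating the examples given at the end of this section): for the choice $w(x) = e^{A(x)}$, with $A$ the associated function of a weight sequence $A_p$, the norm becomes $$ \| \varphi \|_{\mathcal{B}^{M_p,h}_w} = \sup_{\alpha \in \mathbb{N}^d} \sup_{x \in \mathbb{R}^d} \frac{h^{|\alpha|}|\varphi^{(\alpha)}(x)|e^{A(x)}}{M_\alpha}, $$ that is, precisely the type of norm defining the Gelfand-Shilov spaces from Subsection \ref{gelfand-shilov-spaces}.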
We set $$ \mathcal{B}^{(M_p)}_w(\mathbb{R}^d) := \varprojlim_{h \rightarrow \infty} \mathcal{B}^{M_p,h}_w(\mathbb{R}^d), \qquad \mathcal{B}^{\{M_p\}}_w(\mathbb{R}^d) := \varinjlim_{h \rightarrow 0^+} \mathcal{B}^{M_p,h}_w(\mathbb{R}^d). $$ Let $\mathcal{W} = (w_N)_{N}$ be an increasing weight system on $\mathbb{R}^d$. We define $$ \mathcal{B}^{M_p,h}_\mathcal{W}(\mathbb{R}^d) := \varprojlim_{N \in \mathbb{N}} \mathcal{B}^{M_p,h}_{w_N}(\mathbb{R}^d) $$ and $$ \mathcal{B}^{(M_p)}_\mathcal{W}(\mathbb{R}^d) := \varprojlim_{h \rightarrow \infty} \mathcal{B}^{M_p,h}_\mathcal{W}(\mathbb{R}^d), \qquad \mathcal{B}^{\{M_p\}}_\mathcal{W}(\mathbb{R}^d) := \varinjlim_{h \rightarrow 0^+} \mathcal{B}^{M_p,h}_\mathcal{W}(\mathbb{R}^d). $$ The spaces $\mathcal{B}^{M_p,h}_\mathcal{W}(\mathbb{R}^d)$ and $\mathcal{B}^{(M_p)}_\mathcal{W}(\mathbb{R}^d)$ are Fr\'echet spaces while $\mathcal{B}^{\{M_p\}}_\mathcal{W}(\mathbb{R}^d)$ is an $(LF)$-space. Similarly, given a decreasing weight system $\mathcal{V} = (v_n)_{n}$ on $\mathbb{R}^d$ we define $$ \mathcal{B}^{(M_p)}_\mathcal{V}(\mathbb{R}^d) := \varinjlim_{n \in \mathbb{N}} \mathcal{B}^{(M_p)}_{v_n}(\mathbb{R}^d), \qquad \mathcal{B}^{\{M_p\}}_\mathcal{V}(\mathbb{R}^d) := \varinjlim_{n \in \mathbb{N}} \mathcal{B}^{\{M_p\}}_{v_n}(\mathbb{R}^d). $$ The space $\mathcal{B}^{(M_p)}_\mathcal{V}(\mathbb{R}^d)$ is an $(LF)$-space while $\mathcal{B}^{\{M_p\}}_\mathcal{V}(\mathbb{R}^d)$ is an $(LB)$-space. The main goal of this section is to characterize the regularity properties of the $(LF)$-spaces $\mathcal{B}^{\{M_p\}}_\mathcal{W}(\mathbb{R}^d)$ and $\mathcal{B}^{(M_p)}_\mathcal{V}(\mathbb{R}^d)$ in terms of the weight systems $\mathcal{W}$ and $\mathcal{V}$, respectively. As a first step, we discuss when these spaces are boundedly stable. \begin{proposition} \label{boundedly-stable} Let $M_p$ be a weight sequence and let $\mathcal{W} = (w_N)_{N}$ be an increasing weight system. 
Then, $\mathcal{B}^{\{M_p\}}_\mathcal{W}(\mathbb{R}^d)$ is boundedly stable. \end{proposition} \begin{proof} Let $h > 0$ be arbitrary and let $B$ be a bounded set of $\mathcal{B}^{M_p,h}_\mathcal{W}(\mathbb{R}^d)$. We shall show that, for all $0 < l \leq k < h$, the spaces $\mathcal{B}^{M_p,k}_\mathcal{W}(\mathbb{R}^d)$ and $\mathcal{B}^{M_p,l}_\mathcal{W}(\mathbb{R}^d)$ induce the same topology on $B$. Clearly, it is enough to prove that the filter of neighborhoods of each point of $B$ induced by $\mathcal{B}^{M_p,l}_\mathcal{W}(\mathbb{R}^d)$ is finer than the one induced by $\mathcal{B}^{M_p,k}_\mathcal{W}(\mathbb{R}^d)$; we may of course assume without loss of generality that the point under consideration is $0\in B$. A basis of neighborhoods of $0$ in $\mathcal{B}^{M_p,k}_\mathcal{W}(\mathbb{R}^d)$ is given by $$ U(N,\varepsilon) = \{ \varphi \in \mathcal{B}^{M_p,k}_\mathcal{W}(\mathbb{R}^d) \, : \, \|\varphi\|_{\mathcal{B}^{M_p,k}_{w_N}} \leq \varepsilon \}, \qquad N \in \mathbb{N}, \varepsilon > 0. $$ Let $N \in \mathbb{N}$ and $\varepsilon > 0$ be arbitrary. Let $C > 0$ be such that $\|\varphi\|_{\mathcal{B}^{M_p,h}_{w_N}} \leq C$ for all $\varphi \in B$. Next, choose $N_0 \in \mathbb{N}$ so large that $(k/h)^{N_0} \leq \varepsilon /C$. Finally, set $\delta = (l/k)^{N_0}\varepsilon$ and $$ V = \{ \varphi \in \mathcal{B}^{M_p,l}_\mathcal{W}(\mathbb{R}^d) \, : \, \|\varphi\|_{\mathcal{B}^{M_p,l}_{w_N}} \leq \delta \}, $$ a neighborhood of $0$ in $\mathcal{B}^{M_p,l}_\mathcal{W}(\mathbb{R}^d)$. For $\varphi \in B \cap V$ we have that \begin{align*} \|\varphi\|_{\mathcal{B}^{M_p,k}_{w_N}} &= \max \left \{ \sup_{|\alpha| \leq N_0 } \frac{k^{|\alpha|} \| \varphi^{(\alpha)}\|_{w_N}}{M_\alpha}, \sup_{|\alpha| \geq N_0 } \frac{k^{|\alpha|} \| \varphi^{(\alpha)}\|_{w_N}}{M_\alpha} \right \} \\ &\leq \max \{ (k/l)^{N_0} \delta, (k/h)^{N_0} C \} \leq \varepsilon, \end{align*} which means that $B \cap V \subseteq U(N,\varepsilon)$. 
$$ \end{proof} \begin{proposition} \label{boundedly-stable-1} Let $M_p$ be a weight sequence and let $\mathcal{V} = (v_n)_{n}$ be a decreasing weight system that is regularly decreasing. Then, $\mathcal{B}^{\ast}_\mathcal{V}(\mathbb{R}^d)$ is boundedly stable. \end{proposition} \begin{proof} We only prove the Beurling case; the Roumieu case is similar. Recall that, since $\mathcal{V}$ is regularly decreasing, $\mathcal{V}C(\mathbb{R}^d)$ is boundedly stable (cf.\ Subsection \ref{sect-reg-cont}). Let $n \in \mathbb{N}$ be arbitrary and choose $m \geq n$ such that for all $k \geq m$ the spaces $Cv_k(\mathbb{R}^d)$ and $Cv_m(\mathbb{R}^d)$ induce the same topology on the bounded sets of $Cv_n(\mathbb{R}^d)$. Now let $B$ be a bounded set of $\mathcal{B}^{(M_p)}_{v_n}(\mathbb{R}^d)$. We shall show that, for all $k \geq m$, the spaces $\mathcal{B}^{(M_p)}_{v_m}(\mathbb{R}^d)$ and $\mathcal{B}^{(M_p)}_{v_k}(\mathbb{R}^d)$ induce the same topology on $B$. As before, we may assume that $0\in B$ and we only verify that the filter of neighborhoods of $0$ induced by $\mathcal{B}^{(M_p)}_{v_k}(\mathbb{R}^d)$ in $B$ is finer than that induced by $\mathcal{B}^{(M_p)}_{v_m}(\mathbb{R}^d)$. A basis of neighborhoods of $0$ in $\mathcal{B}^{(M_p)}_{v_m}(\mathbb{R}^d)$ is given by $$ U(h,\varepsilon) = \{ \varphi \in \mathcal{B}^{(M_p)}_{v_m}(\mathbb{R}^d) \, : \, \|\varphi \|_{\mathcal{B}^{M_p,h}_{v_m}} \leq \varepsilon \}, \qquad h,\varepsilon > 0. $$ Let $h,\varepsilon > 0$ be arbitrary. Since $B$ is a bounded subset of $\mathcal{B}^{(M_p)}_{v_n}(\mathbb{R}^d)$, the set $$ B' = \left \{ \frac{h^{|\alpha|}\varphi^{(\alpha)}}{M_\alpha} \, : \, \alpha \in \mathbb{N}^d, \varphi \in B \right \} $$ is bounded in $Cv_n(\mathbb{R}^d)$. Consider the following neighborhood of $0$ in $Cv_m(\mathbb{R}^d)$: $$ U' = \{ f \in Cv_m(\mathbb{R}^d) \, : \, \|f\|_{v_m} \leq \varepsilon\}.
$$ As $Cv_m(\mathbb{R}^d)$ and $Cv_k(\mathbb{R}^d)$ induce the same topology on $B'$, there is $\delta > 0$ such that, for $$ V' = \{ f \in Cv_k(\mathbb{R}^d) \, : \, \|f\|_{v_k} \leq \delta\}, $$ it holds that $B' \cap V' \subseteq U'$. Set $$ V = \{ \varphi \in \mathcal{B}^{(M_p)}_{v_k}(\mathbb{R}^d) \, : \, \|\varphi \|_{\mathcal{B}^{M_p,h}_{v_k}} \leq \delta \}, $$ a neighborhood of $0$ in $\mathcal{B}^{(M_p)}_{v_k}(\mathbb{R}^d)$. Let $\varphi \in B \cap V$. Then, $h^{|\alpha|}\varphi^{(\alpha)}/M_\alpha \in B' \cap V' \subseteq U'$ for all $\alpha \in \mathbb{N}^d$ and, thus, $\varphi \in U(h,\varepsilon)$. Hence $B \cap V \subseteq U(h,\varepsilon)$. \end{proof} For later use, we remark that Proposition \ref{boundedly-stable} and Proposition \ref{boundedly-stable-1} can be improved if we assume that $\mathcal{W}$ and $\mathcal{V}$ satisfy \eqref{decay-weights} and \eqref{decay-weights-1}, respectively. Namely, in such a case all of the spaces $\mathcal{B}^*_\mathcal{W}(\mathbb{R}^d)$ and $\mathcal{B}^*_\mathcal{V}(\mathbb{R}^d)$ are even Schwartz. To prove this, we need the following simple lemma whose verification is left to the reader. \begin{lemma}\label{compact-inclusion} Let $M_p$ be a weight sequence, let $w$ and $v$ be positive continuous functions on $\mathbb{R}^d$ such that $v/w$ vanishes at $\infty$, and let $0 < k < h$. Then, the inclusion mapping $\mathcal{B}^{M_p,h}_w(\mathbb{R}^d) \rightarrow \mathcal{B}^{M_p,k}_v(\mathbb{R}^d)$ is compact. \end{lemma} Lemma \ref{compact-inclusion} yields: \begin{proposition} \label{LFS-1} Let $M_p$ be a weight sequence and let $\mathcal{W} = (w_n)_{n}$ be an increasing weight system satisfying \eqref{decay-weights}. Then, $\mathcal{B}^{(M_p)}_\mathcal{W}(\mathbb{R}^d)$ is an $(FS)$-space while $\mathcal{B}^{\{M_p\}}_\mathcal{W}(\mathbb{R}^d)$ is an $(LFS)$-space.
\end{proposition} \begin{proposition} \label{LFS-2} Let $M_p$ be a weight sequence and let $\mathcal{V} = (v_n)_{n}$ be a decreasing weight system satisfying \eqref{decay-weights-1}. Then, the space $\mathcal{B}^{(M_p)}_\mathcal{V}(\mathbb{R}^d)$ is an $(LFS)$-space while $\mathcal{B}^{\{M_p\}}_\mathcal{V}(\mathbb{R}^d)$ is a $(DFS)$-space. \end{proposition} \begin{proof} The fact that $\mathcal{B}^{\{M_p\}}_\mathcal{V}(\mathbb{R}^d)$ is a $(DFS)$-space follows directly from Lemma \ref{compact-inclusion}. Next, we consider the space $\mathcal{B}^{(M_p)}_\mathcal{V}(\mathbb{R}^d)$. We may assume without loss of generality that $v_{n+1}/v_n$ vanishes at $\infty$ for all $n \in \mathbb{N}$. Fix $n \in \mathbb{N}$ and set $v^{1}_n := \sqrt{v_nv_{n+1}}$. Notice that $v^{1}_n$ is a positive continuous function such that $v_{n+1} \leq v_n^{1} \leq v_n$ and $v_{n+1}/v^1_n$ vanishes at $\infty$. Hence we can inductively define a sequence $(v^N_n)_{N \in \mathbb{N}}$ of positive continuous functions such that $v_{n + 1} \leq v^1_{n} \leq \ldots \leq v^N_n \leq v^{N+1}_n \leq \ldots \leq v_{n}$ and $v^N_n/v^{N+1}_n$ vanishes at $\infty$ for each $N \in \mathbb{N}$. We have that $$ \mathcal{B}^{(M_p)}_\mathcal{V}(\mathbb{R}^d) = \varinjlim_{n \in \mathbb{N}} \varprojlim_{N \in \mathbb{N}} \mathcal{B}^{M_p,N}_{v^N_n}(\mathbb{R}^d) $$ as locally convex spaces. Moreover, the Fr\'echet spaces $$ \varprojlim_{N \in \mathbb{N}} \mathcal{B}^{M_p,N}_{v^N_n}(\mathbb{R}^d) $$ are Schwartz because of Lemma \ref{compact-inclusion}. \end{proof} We are ready to study the regularity properties of the $(LF)$-spaces $\mathcal{B}^{\{M_p\}}_\mathcal{W}(\mathbb{R}^d)$ and $\mathcal{B}^{(M_p)}_\mathcal{V}(\mathbb{R}^d)$. First we need to introduce some more notation. Let $M_p$ be a non-quasianalytic weight sequence and let $K$ be a regular compact subset of $\mathbb{R}^d$. 
As customary \cite{Komatsu}, we denote by $\mathcal{D}^{M_p,h}_K$, $h >0$, the Banach space consisting of all $\varphi \in C^\infty(\mathbb{R}^d)$ with $\operatorname{supp} \varphi \subseteq K$ such that $$ \| \varphi \|_{\mathcal{D}^{M_p,h}_K} := \sup_{\alpha \in \mathbb{N}^d} \sup_{x \in K} \frac{h^{|\alpha|}|\varphi^{(\alpha)}(x)|}{M_\alpha} < \infty $$ and set $$ \mathcal{D}^{(M_p)}_K = \varprojlim_{h \to \infty}\mathcal{D}^{M_p,h}_K, \qquad \mathcal{D}^{\{M_p\}}_K = \varinjlim_{h \to 0^+}\mathcal{D}^{M_p,h}_K. $$ We shall also use the following mild condition on an increasing weight system $\mathcal{W} = (w_N)_N$: \begin{equation} \forall N \in \mathbb{N} \, \exists M \geq N \, \exists C > 0 \, \forall x \in \mathbb{R}^d \,: \, \sup_{y \in [-1,1]^d} w_N(x +y) \leq Cw_M(x). \label{translation-weak} \end{equation} \begin{theorem}\label{reg-Beurling} Let $M_p$ be a weight sequence and let $\mathcal{W} = (w_N)_{N}$ be an increasing weight system. Consider the following conditions: \begin{itemize} \item[$(i)$] $\mathcal{W}$ satisfies $(DN)$. \item[$(ii)$] $\mathcal{B}^{\{M_p\}}_\mathcal{W}(\mathbb{R}^d)$ is boundedly retractive. \item[$(iii)$] $\mathcal{B}^{\{M_p\}}_\mathcal{W}(\mathbb{R}^d)$ is complete. \item[$(iv)$] $\mathcal{B}^{\{M_p\}}_\mathcal{W}(\mathbb{R}^d)$ is regular. \item[$(v)$] $\mathcal{B}^{\{M_p\}}_\mathcal{W}(\mathbb{R}^d)$ satisfies $(wQ)$. \end{itemize} Then, $(i) \Rightarrow (ii) \Rightarrow (iii) \Rightarrow (iv) \Rightarrow (v) \Rightarrow (ii)$. Moreover, if $M_p$ satisfies $(M.1)$, $(M.2)$, and $(M.3)$, and $\mathcal{W}$ satisfies \eqref{translation-weak}, then $(v) \Rightarrow (i)$. \end{theorem} \begin{proof} The chain of implications $(ii) \Rightarrow (iii) \Rightarrow (iv) \Rightarrow (v)$ holds for general $(LF)$-spaces (cf.\ Subsection \ref{sect-reg-cond}), while $(v) \Rightarrow (ii)$ follows from Theorem \ref{reg-cond} and Proposition \ref{boundedly-stable}. We now show $(i) \Rightarrow (ii)$. Set $E = \mathcal{W}C(\mathbb{R}^d)$. 
We first assume that $E$ is normable. Then there is $N_0 \in \mathbb{N}$ such that $E$ is topologically isomorphic to $Cw_{N_0}(\mathbb{R}^d)$. Consequently, $\mathcal{B}^{\{M_p\}}_\mathcal{W}(\mathbb{R}^d)$ is topologically isomorphic to the $(LB)$-space $\mathcal{B}^{\{M_p\}}_{w_{N_0}}(\mathbb{R}^d)$. Applying Proposition \ref{boundedly-stable} to the constant weight system $\mathcal{W} = (w_{N_0})_{N \in \mathbb{N}}$, we obtain that $\mathcal{B}^{\{M_p\}}_{w_{N_0}}(\mathbb{R}^d)$ is boundedly stable and, thus, boundedly retractive by Theorem \ref{reg-cond}. Next, we assume that $E$ is non-normable. By Theorem \ref{reg-cond} it suffices to show that $\mathcal{B}^{\{M_p\}}_\mathcal{W}(\mathbb{R}^d)$ is sequentially retractive. Let $(\varphi_j)_{j \in \mathbb{N}}$ be a null sequence in $\mathcal{B}^{\{M_p\}}_\mathcal{W}(\mathbb{R}^d)$. We set $X = \mathbb{N}^d$ (endowed with the discrete topology) and $$ \mathcal{V} = (v_n)_{n}, \qquad v_n(\alpha) = n^{-|\alpha|}, \qquad \alpha \in \mathbb{N}^d. $$ Notice that $\mathcal{V}$ satisfies $(\Omega)$. Clearly, the mapping $$ T: \mathcal{B}^{\{M_p\}}_\mathcal{W}(\mathbb{R}^d) \rightarrow \mathcal{V}C(\mathbb{N}^d;E): \varphi \mapsto \left( \frac{\varphi^{(\alpha)}}{M_\alpha}\right)_{\alpha \in \mathbb{N}^d} $$ is continuous. Moreover, by Theorem \ref{completeness-F} and Proposition \ref{(DN)-Omega-1}, $\mathcal{V}C(\mathbb{N}^d;E)$ is sequentially retractive. Hence, there is $n \in \mathbb{N}$ such that the sequence $(T(\varphi_j))_{j}$ is contained and converges to zero in $Cv_n(\mathbb{N}^d;E)$, which precisely means that $(\varphi_j)_{j}$ is contained and converges to zero in $\mathcal{B}^{M_p,1/n}_{\mathcal{W}}(\mathbb{R}^d)$. Finally, we assume that $M_p$ satisfies $(M.1)$, $(M.2)$, and $(M.3)$, and that $\mathcal{W}$ satisfies \eqref{translation-weak}, and show $(v) \Rightarrow (i)$. 
By \cite[Cor.\ 4.10]{M-T} we have that $\mathcal{D}^{\{M_p\}}_{[-1,1]^d} \cong \Lambda'_0(\beta)$ as l.c.s., where $\beta = ({M(j^{1/d})})_{j}$. The sequence $\beta$ satisfies \eqref{stable} because of \cite[Lemma 4.1]{Komatsu}. Hence, by Proposition \ref{W-fixed-1} and Remark \ref{S-2-inv-1}, it suffices to show that $(\mathcal{W}, \mathcal{D}^{\{M_p\}}_{[-1,1]^d})$ satisfies $(S_2)^*$. Since $\mathcal{B}^{\{M_p\}}_\mathcal{W}(\mathbb{R}^d)$ satisfies $(wQ)$, we have that \begin{gather*} \forall n \in \mathbb{N} \, \exists m > n \, \exists N \in \mathbb{N} \, \forall k > m \, \forall M \in \mathbb{N} \, \exists K \in \mathbb{N} \, \exists C > 0\, \forall \varphi \in \mathcal{B}^{M_p, 1/n}_\mathcal{W}(\mathbb{R}^d) \, : \\ \|\varphi\|_{\mathcal{B}^{M_p, 1/m}_{w_M}} \leq C\Big(\|\varphi\|_{\mathcal{B}^{M_p, 1/n}_{w_N}} + \|\varphi\|_{\mathcal{B}^{M_p, 1/k}_{w_K}}\Big). \end{gather*} Let $\varphi \in \mathcal{D}^{M_p,h}_{[-1,1]^d}$, $h > 0$, and $x \in \mathbb{R}^d$ be arbitrary. Then, the translation $T_x\varphi \in \mathcal{B}^{M_p,h}_\mathcal{W}(\mathbb{R}^{d})$ and it holds that $$ \inf_{y \in [-1,1]^d}w_N(x+y) \|\varphi\|_{\mathcal{D}^{M_p,h}_{[-1,1]^d}} \leq \| T_x\varphi\|_{\mathcal{B}^{M_p,h}_{w_N}} \leq \sup_{y \in [-1,1]^d}w_N(x+y) \|\varphi\|_{\mathcal{D}^{M_p,h}_{[-1,1]^d}} $$ for all $N \in \mathbb{N}$. Therefore, condition \eqref{translation-weak} implies that \begin{gather*} \forall n \in \mathbb{N} \, \exists m > n \, \exists N \in \mathbb{N} \, \forall k > m \, \forall M \in \mathbb{N} \, \exists K \in \mathbb{N} \, \exists C > 0\, \forall \varphi \in \mathcal{D}^{M_p, 1/n}_{[-1,1]^d} \, \forall x \in \mathbb{R}^d : \\ \|\varphi\|_{\mathcal{D}^{M_p,{1/m}}_{[-1,1]^d}} w_M(x) \leq C\Big(\|\varphi\|_{\mathcal{D}^{M_p,{1/n}}_{[-1,1]^d}}w_N(x) + \|\varphi\|_{\mathcal{D}^{M_p,{1/k}}_{[-1,1]^d}} w_K(x)\Big). \end{gather*} \end{proof} Let $\mathcal{V} = (v_n)_{n}$ be a decreasing weight system. 
We introduce the condition: \begin{equation} \forall n \in \mathbb{N} \, \exists m \geq n \, \exists C > 0 \, \forall x \in \mathbb{R}^d \,: \, \sup_{y \in [-1,1]^d} v_m(x +y) \leq Cv_n(x). \label{translation-weak-1} \end{equation} \begin{theorem}\label{reg-Roumieu-1} Let $M_p$ be a weight sequence and let $\mathcal{V} = (v_n)_{n}$ be a decreasing weight system. Consider the following conditions: \begin{itemize} \item[$(i)$] $\mathcal{V}$ satisfies $(\Omega)$. \item[$(ii)$] $\mathcal{B}^{(M_p)}_\mathcal{V}(\mathbb{R}^d)$ is boundedly retractive. \item[$(iii)$] $\mathcal{B}^{(M_p)}_\mathcal{V}(\mathbb{R}^d)$ is complete. \item[$(iv)$] $\mathcal{B}^{(M_p)}_\mathcal{V}(\mathbb{R}^d)$ is regular. \item[$(v)$] $\mathcal{B}^{(M_p)}_\mathcal{V}(\mathbb{R}^d)$ satisfies $(wQ)$. \end{itemize} Then, $(i) \Rightarrow (ii) \Rightarrow (iii) \Rightarrow (iv) \Rightarrow (v)$ and, if $\mathcal{V}$ is regularly decreasing, $(v) \Rightarrow (ii)$. Moreover, if $M_p$ satisfies $(M.1)$, $(M.2)$, and $(M.3)$, and $\mathcal{V}$ satisfies \eqref{translation-weak-1}, then $(v) \Rightarrow (i)$. \end{theorem} \begin{proof} Again, the implications $(ii) \Rightarrow (iii) \Rightarrow (iv) \Rightarrow (v)$ hold for general $(LF)$-spaces (cf.\ Subsection \ref{sect-reg-cond}), while $(v) \Rightarrow (ii)$ (under the extra assumption that $\mathcal{V}$ is regularly decreasing) follows from Theorem \ref{reg-cond} and Proposition \ref{boundedly-stable-1}. We now show $(i) \Rightarrow (ii)$. By Theorem \ref{reg-cond} it suffices to show that $\mathcal{B}^{(M_p)}_\mathcal{V}(\mathbb{R}^d)$ is sequentially retractive. Let $(\varphi_j)_{j \in \mathbb{N}}$ be a null sequence in $\mathcal{B}^{(M_p)}_\mathcal{V}(\mathbb{R}^d)$. We define $E$ as the Fr\'echet space consisting of all $(c_\alpha)_{\alpha} \in \mathbb{C}^{\mathbb{N}^d}$ such that $$ \sup_{\alpha \in \mathbb{N}^d} h^{|\alpha|} |c_\alpha| < \infty $$ for all $h > 0$. Notice that $E$ satisfies $(DN)$. 
Clearly, the mapping $$ T: \mathcal{B}^{(M_p)}_\mathcal{V}(\mathbb{R}^d) \rightarrow \mathcal{V}C(\mathbb{R}^d;E): \varphi \mapsto \left( \frac{\varphi^{(\alpha)}}{M_\alpha}\right)_{\alpha \in \mathbb{N}^d} $$ is continuous. Moreover, by Theorem \ref{completeness-F} and Proposition \ref{(DN)-Omega-1}, $\mathcal{V}C(\mathbb{R}^d;E)$ is sequentially retractive. Hence there is $n \in \mathbb{N}$ such that the sequence $(T(\varphi_j))_{j}$ is contained and converges to zero in $Cv_n(\mathbb{R}^d;E)$, which precisely means that $(\varphi_j)_{j}$ is contained and converges to zero in $\mathcal{B}^{(M_p)}_{v_n}(\mathbb{R}^d)$. Next, we assume that $M_p$ satisfies $(M.1)$, $(M.2)$, and $(M.3)$, and that $\mathcal{V}$ satisfies \eqref{translation-weak-1}, and show $(v) \Rightarrow (i)$. By \cite[Cor.\ 4.3]{M-T} we have that $\mathcal{D}^{(M_p)}_{[-1,1]^d} \cong \Lambda_\infty(\beta)$ as l.c.s., where $\beta = ({M(j^{1/d})})_{j}$. The sequence $\beta$ satisfies \eqref{stable} because of \cite[Lemma 4.1]{Komatsu}. Hence, by Proposition \ref{E-fixed-1} and Remark \ref{S-2-inv}, it suffices to show that $(\mathcal{D}^{(M_p)}_{[-1,1]^d}, \mathcal{V})$ satisfies $(S_2)^*$; but this can be established as in the last part of the proof of Theorem \ref{reg-Beurling}. \end{proof} We end this section by giving several examples. Let $A_p$ be a weight sequence. We define the following weight systems \begin{equation} \mathcal{W}_{(A_p)} := (e^{A(N\cdot)})_{N \in \mathbb{N}}, \qquad \mathcal{W}_{\{A_p\}} := (e^{-A(\cdot /N)})_{N \in \mathbb{N}}, \label{WS-1} \end{equation} \begin{equation} \mathcal{V}_{(A_p)} := (e^{-A(n\cdot)})_{n \in \mathbb{N}}, \qquad \mathcal{V}_{\{A_p\}} := (e^{A(\cdot /n)})_{n \in \mathbb{N}}.
\label{WS-2} \end{equation} We use the ensuing notation for the associated Gelfand-Shilov type spaces \begin{equation} \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d) = \mathcal{B}^{(M_p)}_{\mathcal{W}_{(A_p)}}(\mathbb{R}^d), \qquad \mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d) = \mathcal{B}^{\{M_p\}}_{\mathcal{V}_{\{A_p\}}}(\mathbb{R}^d), \label{GS-1} \end{equation} \begin{equation} \mathcal{S}^{(M_p)}_{\{A_p\}}(\mathbb{R}^d) := \mathcal{B}^{(M_p)}_{\mathcal{V}_{\{A_p\}}}(\mathbb{R}^d), \qquad \mathcal{S}^{\{M_p\}}_{(A_p)}(\mathbb{R}^d) := \mathcal{B}^{\{M_p\}}_{\mathcal{W}_{(A_p)}}(\mathbb{R}^d), \label{GS-2} \end{equation} \begin{equation} \mathcal{O}_C^{(M_p),(A_p)}(\mathbb{R}^d) := \mathcal{B}^{(M_p)}_{\mathcal{V}_{(A_p)}}(\mathbb{R}^d), \qquad \mathcal{O}_C^{\{M_p\},\{A_p\}}(\mathbb{R}^d):= \mathcal{B}^{\{M_p\}}_{\mathcal{W}_{\{A_p\}}}(\mathbb{R}^d), \label{OC-1} \end{equation} \begin{equation} \mathcal{O}_C^{(M_p),\{A_p\}}(\mathbb{R}^d) := \mathcal{B}^{(M_p)}_{\mathcal{W}_{\{A_p\}}}(\mathbb{R}^d), \qquad \mathcal{O}_C^{\{M_p\},(A_p)}(\mathbb{R}^d) := \mathcal{B}^{\{M_p\}}_{\mathcal{V}_{(A_p)}}(\mathbb{R}^d). \label{OC-2} \end{equation} The spaces \eqref{GS-1} are the classical Gelfand-Shilov spaces already introduced in Subsection \ref{gelfand-shilov-spaces}, while the spaces \eqref{GS-2} may be viewed as mixed type Gelfand-Shilov spaces. The spaces \eqref{OC-1} and \eqref{OC-2} are the natural analogue of the space $\mathcal{O}_C(\mathbb{R}^d)$ with respect to the spaces \eqref{GS-1} and \eqref{GS-2}, respectively; we also mention that related spaces\footnote{Indeed our $\mathcal{O}_C^{(M_p),(M_p)}(\mathbb{R}^d)$ coincides with $\mathcal{O}_C^{(M_p)}(\mathbb{R}^d)$ from \cite{D-P-P-V}; on the other hand, it should be noticed that our space $\mathcal{O}_C^{ \{M_p\},\{M_p\}}(\mathbb{R}^d)$ differs from the one denoted by $\mathcal{O}_C^{\{M_p\}}(\mathbb{R}^d)$ in \cite{D-P-P-V}.} have been studied in \cite[Sect.\ 3]{D-P-P-V}. 
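As an illustration of \eqref{GS-1}, consider the standard Gevrey example $M_p = p!^{s}$ and $A_p = p!^{\sigma}$ with $s, \sigma > 0$ (it will not be needed in what follows). In this case $A(t) \asymp t^{1/\sigma}$, and one readily checks that $\varphi \in \mathcal{S}^{\{p!^s\}}_{\{p!^\sigma\}}(\mathbb{R}^d)$ if and only if there are $C, h, \lambda > 0$ such that $$ |\varphi^{(\alpha)}(x)| \leq C h^{|\alpha|} |\alpha|!^{s} e^{-\lambda |x|^{1/\sigma}}, \qquad \alpha \in \mathbb{N}^d, \ x \in \mathbb{R}^d; $$ this is the classical Gelfand-Shilov space of Roumieu type.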
In order to be able to apply Theorem \ref{reg-Beurling} and Theorem \ref{reg-Roumieu-1} to the special cases under consideration, we first study the properties of the weight systems \eqref{WS-1} and \eqref{WS-2}. As customary, we write $a_p:= A_p/A_{p-1}$, $p \geq 1$, and $$ a(t) := \sum_{a_p \leq t} 1, \qquad t \geq 0, $$ for the counting function of the sequence $a_p$. If $A_p$ satisfies $(M.1)$, then \cite[Eq.\ (3.11)]{Komatsu} \begin{equation} A(t) = \int_0^{t} \frac{a(\lambda)}{\lambda} {\rm d}\lambda , \qquad t \geq 0. \label{representation-ass} \end{equation} \begin{lemma} Let $A_p$ be a weight sequence satisfying $(M.1)$. Then, \begin{itemize} \item[$(i)$] $\mathcal{W}_{(A_p)}$ and $\mathcal{W}_{\{A_p\}}$ satisfy \eqref{decay-weights} and \eqref{translation-weak}, while $\mathcal{V}_{(A_p)}$ and $\mathcal{V}_{\{A_p\}}$ satisfy \eqref{decay-weights-1} and \eqref{translation-weak-1}. \item[$(ii)$] $\mathcal{W}_{(A_p)}$ satisfies $(DN)$. \item[$(iii)$] $\mathcal{V}_{\{A_p\}}$ satisfies $(\Omega)$. \item[$(iv)$] $\mathcal{W}_{\{A_p\}}$ satisfies $(DN)$ if and only if \begin{equation} \forall h > 0 \, \exists k > 0 \, \exists C > 0 \, \forall t \geq 0 \, : \, A(t) + A(kt) \leq 2A(ht) + C. \label{Lang-cond} \end{equation} \item[$(v)$] If $A_p$ satisfies $(M.2)$, then $\mathcal{V}_{(A_p)}$ satisfies $(\Omega)$ if and only if $A_p$ satisfies $(M.2)^*$. \end{itemize} \end{lemma} \begin{proof} $(i)$ Obvious. $(ii)$ It suffices to notice that for all $h > 0$ we have that $$ 2A(ht) = \sup_{p \in \mathbb{N}} \log \frac{(ht)^{2p}A_0^2}{A^2_p} = \sup_{p \in \mathbb{N}} \left( \log \frac{t^pA_0}{A_p} + \log \frac{(h^2t)^pA_0}{A_p}\right) \leq A(t) + A(h^2t). $$ $(iii)$ It suffices to show that $$ \forall h > 0 \, \exists k < h \, \forall l < k \, \exists C > 0 \, \forall t \geq 0 \, : \, (C+1)A(kt) \leq CA(ht) + A(lt), $$ which is equivalent to $$ A(kt) - A(lt) \leq C(A(ht) - A(kt)). $$ Set $k = h/e$. 
By \eqref{representation-ass} we have that $$ A((ht)/e) - A(lt) \leq \log(h/(le)) a((ht)/e) \leq \log(h/(le))(A(ht) - A((ht)/e)). $$ $(iv)$ Clear. $(v)$ The direct implication follows from the fact that $A_p$ satisfies $(M.2)^*$ if and only if (cf.\ Subsection \ref{gelfand-shilov-spaces}) $$ A(2t) \leq H'A(t) + \log C'_0, \qquad t \geq 0, $$ for some $H',C'_0 \geq 1$. Conversely, assume that $A_p$ satisfies $(M.2)^*$. It suffices to show that $$ \forall h > 0 \, \exists k > h \, \forall l > k \, \exists C, C' > 0 \, \forall t \geq 0 \, : \, -(C+1)A(kt) \leq -CA(ht) - A(lt) + C', $$ which is equivalent to $$ A(lt) - A(kt) \leq C(A(kt) - A(ht)) + C'. $$ Condition $(M.2)^*$ implies that there are $C, m > 0$ such that $$ a(lt) \leq ma(ht) + C, \qquad t \geq 0. $$ Set $k = he$. Hence \begin{align*} A(lt) -A(het) &\leq \log (l/(he))a(lt) \\ &\leq \log (l/(he))ma(ht) + \log (l/(he)) C \\ &\leq \log (l/(he))m(A(het) - A(ht)) + \log (l/(he))C. \end{align*} \end{proof} We then have, \begin{corollary} \label{cor 1 complete special cases} Let $M_p$ and $A_p$ be weight sequences satisfying $(M.1)$. Then, \begin{itemize} \item[$(i)$] $\mathcal{S}^{(M_p)}_{\{A_p\}}(\mathbb{R}^d)$ and $\mathcal{S}^{\{M_p\}}_{(A_p)}(\mathbb{R}^d)$ are complete. \item[$(ii)$] Assume that $A_p$ satisfies $(M.2)$. $\mathcal{O}_C^{(M_p),(A_p)}(\mathbb{R}^d)$ is complete if $A_p$ satisfies $(M.2)^\ast$. If, in addition, $M_p$ satisfies $(M.1)$, $(M.2)$, and $(M.3)$, then $\mathcal{O}_C^{(M_p),(A_p)}(\mathbb{R}^d)$ is complete if and only if $A_p$ satisfies $(M.2)^\ast$. \item[$(iii)$] $\mathcal{O}_C^{\{M_p\},\{A_p\}}(\mathbb{R}^d)$ is complete if $A_p$ satisfies \eqref{Lang-cond}. If $M_p$ additionally satisfies $(M.1)$, $(M.2)$, and $(M.3)$, then $\mathcal{O}_C^{\{M_p\},\{A_p\}}(\mathbb{R}^d)$ is complete if and only if $A_p$ satisfies \eqref{Lang-cond}.
\end{itemize} \end{corollary} \begin{remark}\label{rk incomplete} Condition \eqref{Lang-cond} is satisfied by the so-called $q$-Gevrey sequences $A_p = q^{p^2}$, $q > 1$. On the other hand, it is very important to point out that if $A_p$ satisfies $(M.1)$ and $(M.2)$, then \eqref{Lang-cond} cannot hold for $A_p$. For example, this is always the case for the Gevrey sequences $A_p = p!^\sigma$, $\sigma > 0$. We can thus supplement Corollary \ref{cor 1 complete special cases} as follows, \end{remark} \begin{corollary} \label{cor 2 complete special cases} Suppose that $M_p$ satisfies $(M.1)$, $(M.2)$, and $(M.3)$, while $A_p$ satisfies $(M.1)$ and $(M.2)$. Then, the space $\mathcal{O}_C^{\{M_p\},\{A_p\}}(\mathbb{R}^d)$ is incomplete. \end{corollary} \section{The ultradistribution spaces $\mathcal{B}'^*_\mathcal{W}(\mathbb{R}^d)$ and $\mathcal{B}'^*_\mathcal{V}(\mathbb{R}^d)$}\label{sect-duals} The aim of this section is to study the dual spaces $\mathcal{B}'^*_\mathcal{W}(\mathbb{R}^d)$ and $\mathcal{B}'^*_\mathcal{V}(\mathbb{R}^d)$. Our first goal is to characterize these spaces in terms of the growth of the convolution averages of their elements. In order to do so, we start by studying the STFT on these spaces. Motivated by our convolution characterization, we introduce three natural locally convex topologies on the spaces $\mathcal{B}'^*_\mathcal{W}(\mathbb{R}^d)$ and $\mathcal{B}'^*_\mathcal{V}(\mathbb{R}^d)$ and show that they are all identical. Finally, with the aid of the results from Section \ref{sect-reg-ultra}, we give necessary and sufficient conditions for these spaces to be ultrabornological. We start by introducing the class of weight systems that will be considered throughout this section. Let $A_p$ be a weight sequence.
An increasing weight system $\mathcal{W} = (w_N)_{N}$ is said to be \emph{$(A_p)$-admissible} if $$ \forall N \in \mathbb{N} \, \exists \lambda > 0 \, \exists M \geq N \, \exists \, C > 0 \, \forall x,t \in \mathbb{R}^d \, : \, w_N(x+t) \leq C w_M(x) e^{A(\lambda t)}, $$ while it is said to be \emph{$\{A_p\}$-admissible} if $$ \forall N \in \mathbb{N} \, \forall \lambda > 0 \, \exists M \geq N \, \exists \, C > 0 \, \forall x,t \in \mathbb{R}^d \, : \, w_N(x+t) \leq C w_M(x) e^{A(\lambda t)}. $$ Likewise, a decreasing weight system $\mathcal{V} = (v_n)_{n}$ is said to be \emph{$(A_p)$-admissible} if $$ \forall n \in \mathbb{N} \, \exists \lambda > 0 \, \exists m \geq n \, \exists \, C > 0 \, \forall x,t \in \mathbb{R}^d \, : \, v_m(x+t) \leq C v_n(x) e^{A(\lambda t)}, $$ while it is said to be \emph{$\{A_p\}$-admissible} if $$ \forall n \in \mathbb{N} \, \forall \lambda > 0 \, \exists m \geq n \, \exists \, C > 0 \, \forall x,t \in \mathbb{R}^d \, : \, v_m(x+t) \leq C v_n(x) e^{A(\lambda t)}. $$ If $B_p$ is a weight sequence such that $A_p \subset B_p$, then $\mathcal{W}_{(B_p)}$ is $(A_p)$-admissible while $\mathcal{W}_{\{B_p\}}$ is $\{A_p\}$-admissible. Similarly, $\mathcal{V}_{(B_p)}$ is $(A_p)$-admissible while $\mathcal{V}_{\{B_p\}}$ is $\{A_p\}$-admissible. We also need the following strengthened versions of \eqref{decay-weights} and \eqref{decay-weights-1}: \begin{equation} \forall N \in \mathbb{N} \, \exists M > N \, : \, w_N(x)/w_M(x) = O(|x|^{-(d+1)}), \label{decay-weights-L1} \end{equation} and \begin{equation} \forall n \in \mathbb{N} \, \exists m > n \, : \, v_m(x)/v_n(x) = O(|x|^{-(d+1)}). \label{decay-weights-L1-1} \end{equation} If $B_p$ is a weight sequence satisfying $(M.1)$ and $(M.2)$, then $\mathcal{W}_{(B_p)}$ and $\mathcal{W}_{\{B_p\}}$ satisfy \eqref{decay-weights-L1} while $\mathcal{V}_{(B_p)}$ and $\mathcal{V}_{\{B_p\}}$ satisfy \eqref{decay-weights-L1-1}.
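For the sake of completeness, let us verify that $\mathcal{W}_{(B_p)}$ is $(A_p)$-admissible whenever $A_p \subset B_p$; the remaining admissibility assertions can be checked analogously. Since associated functions are nonnegative and increasing, $B(N(x+t)) \leq B(2Nx) + B(2Nt)$ for all $x, t \in \mathbb{R}^d$, whence $$ w_N(x+t) = e^{B(N(x+t))} \leq w_{2N}(x)e^{B(2Nt)}, \qquad x,t \in \mathbb{R}^d. $$ As $A_p \subset B_p$ gives $B(t) \leq A(Lt) + \log C$ for suitable $L, C > 0$, we conclude that $w_N(x+t) \leq Cw_{2N}(x)e^{A(2NLt)}$, as required.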
Unless otherwise stated, $M_p$ and $A_p$ will from now on \emph{always} stand for weight sequences satisfying \eqref{group-cond}. On the other hand, $\mathcal{W} = (w_N)_{N}$ and $\mathcal{V} = (v_n)_{n}$ will \emph{always} denote an increasing and decreasing weight system satisfying \eqref{decay-weights-L1} and \eqref{decay-weights-L1-1}, respectively, which are assumed to be $(A_p)$-admissible in the Beurling case and $\{A_p\}$-admissible in the Roumieu case. \begin{lemma}\label{density-lemma} Let $w$ and $v$ be positive continuous functions on $\mathbb{R}^d$ such that $v/w$ vanishes at $\infty$ and \begin{equation} \label{bound v-w for STFT} v(x+t) \leq Cw(x)e^{A(\lambda t)}, \qquad x,t \in \mathbb{R}^d, \end{equation} for some $C, \lambda > 0$. Then, for $0 < kH < h$, the space $\mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ is dense in $\mathcal{B}^{M_p,h}_{w}(\mathbb{R}^d)$ with respect to the norm $\| \, \cdot \, \|_{\mathcal{B}^{M_p,k}_v}$. \end{lemma} \begin{proof} Let $\varphi \in \mathcal{B}^{M_p,h}_{w}(\mathbb{R}^d)$ be arbitrary. Choose $\psi, \chi \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ with $\psi(0) = 1$ and $\int_{\mathbb{R}^d}\chi(x) {\rm d}x = 1$. We define $\psi_n =\psi(\cdot/n)$, $\chi_n = n^d\chi(n \cdot)$, and $\varphi_n = \chi_n \ast (\psi_n \varphi) \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$, $n \geq 1$. We now show that $\varphi_n \rightarrow \varphi$ in $\mathcal{B}^{M_p,k}_v(\mathbb{R}^d)$. Choose $l > 0$ so large that $h^{-1}+ l^{-1} \leq (kH)^{-1}$. Notice that \begin{equation} \| \varphi_n - \varphi\|_{\mathcal{B}^{M_p,k}_v} \leq \| \varphi_n - \psi_n\varphi\|_{\mathcal{B}^{M_p,k}_v} + \| \psi_n\varphi - \varphi\|_{\mathcal{B}^{M_p,k}_v}. \label{triangle} \end{equation} We start by estimating the second term on the right-hand side of \eqref{triangle}.
We have that \begin{align*} \| \psi_n\varphi - \varphi\|_{\mathcal{B}^{M_p,k}_v} & \leq \sup_{\alpha \in \mathbb{N}^d} \sup_{x \in \mathbb{R}^d} \frac{k^{|\alpha|}v(x)}{M_\alpha}|\psi(x/n)-1| |\varphi^{(\alpha)}(x)| \\ &+ \frac{1}{n}\sup_{\alpha \in \mathbb{N}^d} \sup_{x \in \mathbb{R}^d} \frac{k^{|\alpha|}v(x)}{M_\alpha}\sum_{\beta \leq \alpha, \beta \neq 0}\binom{\alpha}{\beta} |\psi^{(\beta)}(x/n)|| \varphi^{(\alpha-\beta)}(x)| \\ &\leq \|\varphi\|_{\mathcal{B}^{M_p,k}_w}\sup_{x \in \mathbb{R}^d}\frac{v(x)}{w(x)}|\psi(x/n)-1| + \frac{1}{n}\|\psi\|_{\mathcal{S}^{M_p,l}_{A_p,0}}\|\varphi\|_{\mathcal{B}^{M_p,h}_v}, \end{align*} which tends to zero because $\psi(0) = 1$ and $v/w$ vanishes at $\infty$. Next, we estimate the first term at the right-hand side of \eqref{triangle}. Clearly, \begin{align*} \| \psi_n\varphi \|_{\mathcal{B}^{M_p,kH}_w} &\leq \sup_{\alpha \in \mathbb{N}^d} \sup_{x \in \mathbb{R}^d} \frac{(kH)^{|\alpha|}w(x)}{M_\alpha}\sum_{\beta \leq \alpha}\binom{\alpha}{\beta} |\psi^{(\beta)}(x/n)||\varphi^{(\alpha-\beta)}(x)| \\ &\leq \|\psi\|_{\mathcal{S}^{M_p,l}_{A_p,0}}\|\varphi\|_{\mathcal{B}^{M_p,h}_w} \end{align*} for all $n\in \mathbb{N}$. Hence \begin{align*} & \| \varphi_n - \psi_n\varphi\|_{\mathcal{B}^{M_p,k}_v} \\ & \leq \sup_{\alpha \in \mathbb{N}^d} \sup_{x \in \mathbb{R}^d}\frac{k^{|\alpha|}v(x)}{M_\alpha}\int_{\mathbb{R}^d} |\chi(t)| |(\psi_n\varphi)^{(\alpha)}(x-(t/n)) - (\psi_n\varphi)^{(\alpha)}(x)| {\rm d}t \\ & \leq \frac{1}{n}\sup_{\alpha \in \mathbb{N}^d} \sup_{x \in \mathbb{R}^d}\frac{k^{|\alpha|}v(x)}{M_\alpha}\int_{\mathbb{R}^d} |\chi(t)||t| \sum_{j=1}^d \int_{0}^1|(\psi_n\varphi)^{(\alpha+e_j)}(x-(\gamma t/n))|{\rm d}\gamma {\rm d}t \\ &\leq \frac{1}{n}dC\| \psi_n\varphi \|_{\mathcal{B}^{M_p,kH}_w}\sup_{\alpha \in \mathbb{N}^d} \sup_{x \in \mathbb{R}^d}\frac{k^{|\alpha|}M_{\alpha+1}}{(kH)^{|\alpha|+1}M_\alpha}\int_{\mathbb{R}^d} |\chi(t)||t| e^{A(\lambda t)}{\rm d}t \\ &\leq \frac{C'}{n}. 
\end{align*} \end{proof} \begin{corollary}\label{dense-inclusion} We have the following dense continuous inclusions $$ \mathcal{S}^{\ast}_\dagger(\mathbb{R}^d) \hookrightarrow \mathcal{B}^{\ast}_{\mathcal{W}}(\mathbb{R}^d) \rightarrow \mathcal{W}C(\mathbb{R}^d) \hookrightarrow \mathcal{S}'^{\ast}_\dagger(\mathbb{R}^d) $$ and $$ \mathcal{S}^{\ast}_\dagger(\mathbb{R}^d) \hookrightarrow \mathcal{B}^{\ast}_{\mathcal{V}}(\mathbb{R}^d) \rightarrow \mathcal{V}C(\mathbb{R}^d) \hookrightarrow \mathcal{S}'^{\ast}_\dagger(\mathbb{R}^d). $$ \end{corollary} Corollary \ref{dense-inclusion} of course tells us that we may view the dual spaces $\mathcal{B}'^{\ast}_{\mathcal{W}}(\mathbb{R}^d)$ and $\mathcal{B}'^{\ast}_{\mathcal{V}}(\mathbb{R}^d)$ as vector subspaces of $\mathcal{S}'^{\ast}_\dagger(\mathbb{R}^d)$. \subsection{Characterization via the STFT}\label{Char STFT dual inductive} The goal of this subsection is to characterize $\mathcal{B}^{\ast}_{\mathcal{W}}(\mathbb{R}^d)$, $\mathcal{B}^{\ast}_{\mathcal{V}}(\mathbb{R}^d)$, and their dual spaces in terms of the STFT. We start with three lemmas. \begin{lemma}\label{time-freq-norm} Let $\psi \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ and let $w$ and $v$ be positive functions on $\mathbb{R}^d$ for which \eqref{bound v-w for STFT} holds. Then, $$ \|M_\xi T_x \psi \|_{\mathcal{B}^{M_p,h}_v} \leq C\|\psi\|_{\mathcal{S}^{M_p,2h}_{A_p,\lambda}} w(x)e^{M(4\pi h \xi)}.
$$ \end{lemma} \begin{proof} We have that \begin{align*} &\|M_\xi T_x \psi \|_{\mathcal{B}^{M_p,h}_v} \\ &\leq \sup_{\alpha \in \mathbb{N}^d}\sup_{t \in \mathbb{R}^d} \frac{h^{|\alpha|}v(t)}{M_\alpha}\sum_{\beta \leq \alpha}\binom{\alpha}{\beta}(2\pi |\xi|)^{|\beta|}|\psi^{(\alpha - \beta)}(t-x)| \\ &\leq Cw(x)\sup_{\alpha \in \mathbb{N}^d}\sup_{t \in \mathbb{R}^d} \frac{1}{2^{|\alpha|}}\sum_{\beta \leq \alpha}\binom{\alpha}{\beta}\frac{(4\pi h |\xi|)^{|\beta|}}{M_\beta}\frac{(2h)^{|\alpha|-|\beta|}|\psi^{(\alpha - \beta)}(t-x)|e^{A(\lambda(t-x))}}{M_{\alpha-\beta}} \\ &\leq C\|\psi\|_{\mathcal{S}^{M_p,2h}_{A_p,\lambda}} w(x)e^{M(4\pi h \xi)}. \end{align*} \end{proof} \begin{lemma}\label{STFT-test} Let $\psi \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ and let $w$ and $v$ be positive measurable functions on $\mathbb{R}^d$ for which \eqref{bound v-w for STFT} holds. Then, there is $C' > 0$ such that $$ |V_\psi \varphi(x,\xi)| v(x)e^{M(\pi h\xi/\sqrt{d})}\leq C' \| \varphi\|_{\mathcal{B}^{M_p,h}_w}, \qquad \varphi \in \mathcal{B}^{M_p,h}_w(\mathbb{R}^d). $$ \end{lemma} \begin{proof} For all $\alpha \in \mathbb{N}^d$ it holds that \begin{align*} |\xi^{\alpha}V_{\psi}\varphi(x, \xi)|v(x) &\leq \frac{C}{(2\pi)^{|\alpha|}} \sum_{\beta \leq \alpha} \binom{\alpha}{\beta} \int_{\mathbb{R}^d} |\varphi^{(\beta)}(t)|w(t) |\psi^{(\alpha - \beta)}(t-x)|e^{A(\lambda(t-x))} {\rm d}t \\ &\leq C' \|\varphi\|_{\mathcal{B}^{M_p,h}_w} (\pi h)^{-|\alpha|} M_\alpha. \end{align*} Hence, \begin{align*} |V_{\psi}\varphi(x,\xi)|v(x) &\leq M_0C' \|\varphi\|_{\mathcal{B}^{M_p,h}_w} \inf_{p \in \mathbb{N}} \frac{M_p}{(\pi h|\xi|/\sqrt{d})^{p}M_0} \leq M_0C' \|\varphi\|_{\mathcal{B}^{M_p,h}_w} e^{-M(\pi h\xi/\sqrt{d})}. \end{align*} \end{proof} \begin{lemma}\label{double-int-test} Let $v$, $w$, and $u$ be positive measurable functions on $\mathbb{R}^d$ such that the pair $v,w$ satisfies \eqref{bound v-w for STFT} and $w(x)/u(x) = O(|x|^{-(d+1)})$.
Let $\psi \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ and suppose that $F$ is a measurable function on $\mathbb{R}^{2d}$ such that $$ \sup_{(x,\xi) \in \mathbb{R}^{2d}}|F(x,\xi)|u(x)e^{M(k\xi)} < \infty. $$ Then, the function $$ t \rightarrow \int \int_{\mathbb{R}^{2d}} F(x,\xi) M_\xi T_x\psi(t) {\rm d}x {\rm d}\xi $$ belongs to $\mathcal{B}^{M_p,k/(4H\pi)}_v(\mathbb{R}^d)$. \end{lemma} \begin{proof} Set $h = k/(4H\pi)$. Lemma \ref{time-freq-norm} implies that \begin{align*} &\left \| \int \int_{\mathbb{R}^{2d}} F(x,\xi) M_\xi T_x\psi {\rm d}x {\rm d}\xi \right \|_{\mathcal{B}^{M_p,h}_v} \\ &\leq \int \int_{\mathbb{R}^{2d}} |F(x,\xi)| \| M_\xi T_x\psi \|_{\mathcal{B}^{M_p,h}_v} {\rm d}x {\rm d}\xi \\ &\leq C\|\psi\|_{\mathcal{S}^{M_p,2h}_{A_p,\lambda}} \int \int_{\mathbb{R}^{2d}} |F(x,\xi)| w(x)e^{M(4\pi h \xi)} {\rm d}x {\rm d}\xi \\ &\leq CC' \|\psi\|_{\mathcal{S}^{M_p,2h}_{A_p,\lambda}} \int \int_{\mathbb{R}^{2d}} \frac{w(x)}{u(x)}e^{-M(k\xi) + M(k\xi/H)} {\rm d}x {\rm d}\xi \\ &\leq C_0CC' \|\psi\|_{\mathcal{S}^{M_p,2h}_{A_p,\lambda}} \int_{\mathbb{R}^{d}} \frac{w(x)}{u(x)}{\rm d}x \int_{\mathbb{R}^d}e^{-M(k\xi/H)}{\rm d}\xi < \infty. \end{align*} \end{proof} We are now able to characterize the spaces $\mathcal{B}^{\ast}_{\mathcal{W}}(\mathbb{R}^d)$, $\mathcal{B}^{\ast}_{\mathcal{V}}(\mathbb{R}^d)$, and their duals via the STFT. \begin{proposition}\label{STFT-test-char} Let $\psi \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d) \backslash \{0\}$ and $f \in \mathcal{S}'^\ast_\dagger(\mathbb{R}^d)$. Then, $f \in \mathcal{B}^{(M_p)}_{\mathcal{W}}(\mathbb{R}^d)$ ($f \in \mathcal{B}^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d)$) if and only if \begin{equation} \forall h > 0 \, \forall N \in \mathbb{N} \, (\exists h > 0 \, \forall N \in \mathbb{N}) \, : \, \sup_{(x,\xi) \in \mathbb{R}^{2d}}|V_\psi f(x,\xi)|w_N(x)e^{M(h\xi)} < \infty. 
\label{bounds-STFT-test-char} \end{equation} \end{proposition} \begin{proof} The direct implication follows immediately from Lemma \ref{STFT-test}. Conversely, choose $\gamma \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ such that $(\gamma, \psi)_{L^2} = 1$. By \eqref{regularization-via-STFT} we have that, for all $\varphi \in \mathcal{S}^\ast_\dagger(\mathbb{R}^d)$, it holds that \begin{align*} \langle f, \varphi \rangle &= \int \int_{\mathbb{R}^{2d}}V_\psi f(x, \xi) V_{\overline{\gamma}}\varphi(x, - \xi) {\rm d}x {\rm d}\xi \\ &= \int \int_{\mathbb{R}^{2d}}V_\psi f(x, \xi) \left(\int_{\mathbb{R}^{d}} \varphi(t) M_\xi T_x \gamma(t) {\rm d}t \right) {\rm d}x {\rm d}\xi \\ &= \int_{\mathbb{R}^{d}} \left( \int \int_{\mathbb{R}^{2d}} V_\psi f(x,\xi) M_\xi T_x \gamma(t) {\rm d}x {\rm d}\xi \right) \varphi(t) {\rm d}t , \end{align*} where the switching of the integrals in the last step is permitted because of \eqref{bounds-STFT-test-char}. Hence, $$ f = \int \int_{\mathbb{R}^{2d}} V_\psi f(x,\xi) M_\xi T_x \gamma {\rm d}x {\rm d}\xi $$ and we conclude that $f \in \mathcal{B}^{\ast}_{\mathcal{W}}(\mathbb{R}^d)$ by applying Lemma \ref{double-int-test} to $F = V_\psi f$. \end{proof} \begin{proposition}\label{STFT-char-1} Let $\psi \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d) \backslash \{0\}$ and $f \in \mathcal{S}'^\ast_\dagger(\mathbb{R}^d)$. Then, $f \in \mathcal{B}^{(M_p)}_{\mathcal{V}}(\mathbb{R}^d)$ ($f \in \mathcal{B}^{\{M_p\}}_{\mathcal{V}}(\mathbb{R}^d)$) if and only if $$ \exists n \in \mathbb{N} \, \forall h>0 \, (\exists n \in \mathbb{N} \, \exists h> 0) \, : \, \sup_{(x,\xi) \in \mathbb{R}^{2d}}|V_\psi f(x,\xi)|v_n(x)e^{M(h\xi)} < \infty. $$ \end{proposition} \begin{proof} This can be shown in the same way as Proposition \ref{STFT-test-char}. \end{proof} \begin{proposition}\label{STFT-dual} Let $\psi \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d) \backslash \{0\}$ and $f \in \mathcal{S}'^\ast_\dagger(\mathbb{R}^d)$. 
Then, $f \in \mathcal{B}'^{(M_p)}_{\mathcal{W}}(\mathbb{R}^d)$ ($f \in \mathcal{B}'^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d)$) if and only if $$ \exists h > 0 \, \exists N \in \mathbb{N} \, (\forall h > 0 \, \exists N \in \mathbb{N}) \, : \, \sup_{(x,\xi) \in \mathbb{R}^{2d}}\frac{|V_\psi f(x,\xi)|}{w_N(x)e^{M(h\xi)}} < \infty. $$ \end{proposition} \begin{proof} The direct implication follows immediately from Lemma \ref{time-freq-norm}. Conversely, choose $\gamma \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ such that $(\gamma, \psi)_{L^2} = 1$. Lemma \ref{STFT-test} implies that $$ \langle \tilde{f}, \varphi \rangle := \int \int_{\mathbb{R}^{2d}}V_\psi f(x, \xi) V_{\overline{\gamma}}\varphi(x, - \xi) {\rm d}x {\rm d}\xi , \qquad \varphi \in \mathcal{B}^\ast_\mathcal{W}(\mathbb{R}^d), $$ defines a continuous linear functional on $\mathcal{B}^\ast_\mathcal{W}(\mathbb{R}^d)$. Since $\tilde{f}_{| \mathcal{S}^\ast_\dagger(\mathbb{R}^d)} = f$ (cf.\ \eqref{regularization-via-STFT}), we obtain that $f \in \mathcal{B}'^{\ast}_{\mathcal{W}}(\mathbb{R}^d)$. \end{proof} In a similar fashion one shows, \begin{proposition}\label{STFT-dual-1} Let $\psi \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d) \backslash \{0\}$ and $f \in \mathcal{S}'^\ast_\dagger(\mathbb{R}^d)$. Then, $f \in \mathcal{B}'^{(M_p)}_{\mathcal{V}}(\mathbb{R}^d)$ ($f \in \mathcal{B}'^{\{M_p\}}_{\mathcal{V}}(\mathbb{R}^d)$) if and only if $$ \forall n \in \mathbb{N} \, \exists h>0 \, (\forall n \in \mathbb{N} \, \forall h> 0) \, : \, \sup_{(x,\xi) \in \mathbb{R}^{2d}}\frac{|V_\psi f(x,\xi)|}{v_n(x)e^{M(h\xi)}} < \infty. 
$$ \end{proposition} \begin{corollary}\label{STFT-reg-dual-GS} Let $\psi \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d) \backslash \{0\}$ and let $\gamma \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ be a synthesis window for $\psi$. Then, $$ \langle f, \varphi \rangle = \frac{1}{(\gamma, \psi)_{L^2}} \int \int_{\mathbb{R}^{2d}}V_\psi f(x, \xi) V_{\overline{\gamma}}\varphi(x, - \xi) {\rm d}x {\rm d}\xi $$ for all $f \in \mathcal{B}'^{\ast}_{\mathcal{W}}(\mathbb{R}^d)$ and $\varphi \in \mathcal{B}^{\ast}_{\mathcal{W}}(\mathbb{R}^d)$ ($f \in \mathcal{B}'^{\ast}_{\mathcal{V}}(\mathbb{R}^d)$ and $\varphi \in \mathcal{B}^{\ast}_{\mathcal{V}}(\mathbb{R}^d)$), where the integral at the right-hand side is absolutely convergent. \end{corollary} As an application of Corollary \ref{STFT-reg-dual-GS} we now give a projective description of the space $\mathcal{B}^{\{M_p\}}_{\mathcal{V}}(\mathbb{R}^d)$ (cf.\ \cite{Pilipovic-94}), a result that will be used later on. We need some preparation. Let $\mathcal{V} = (v_n)_{n}$ be a decreasing weight system on a completely regular Hausdorff space $X$. The \emph{maximal Nachbin family associated with $\mathcal{V}$}, denoted by $\overline{V}(\mathcal{V}) = \overline{V}$, is given by the space of all nonnegative upper semicontinuous functions $v$ on $X$ such that $$ \sup_{x \in X} \frac{v(x)}{v_n(x)} < \infty $$ for all $n \in \mathbb{N}$. The \emph{projective hull of $\mathcal{V}C(X)$} is then defined as $$ C\overline{V}(X) = \varprojlim_{v \in \overline{V}} Cv(X), $$ where $Cv(X)$ is the seminormed space consisting of all $f \in C(X)$ such that $$ \|f\|_v := \sup_{x \in X} |f(x)|v(x) < \infty. $$ The spaces $\mathcal{V}C(X)$ and $C\overline{V}(X)$ are equal as sets and have the same bounded sets \cite{Bierstedt} (see also Lemma \ref{proj-desc-1} below). The problem of projective description in this context is to find conditions on $\mathcal{V}$ which ensure that these spaces coincide topologically.
If $\mathcal{V}$ is regularly decreasing, this is always the case\ \cite[Cor.\ 9, p.\ 121]{Bierstedt}. We now collect some useful facts about the maximal Nachbin family associated with $\mathcal{V}$. \begin{lemma}\label{proj-desc-1} Let $f$ be a nonnegative function on $X$ and let $\mathcal{V} = (v_n)_{n}$ be a decreasing weight system. Then, \begin{itemize} \item[$(i)$] $\sup_{x \in X}f(x)v_n(x) < \infty$ for some $n \in \mathbb{N}$ if and only if $ \sup_{x \in X}f(x)v(x) < \infty$ for all $v \in \overline{V}$. \item[$(ii)$] $\sup_{x \in X}f(x)/v(x) < \infty$ for some $v \in \overline{V}$ if and only if $\sup_{x \in X}f(x)/v_n(x) < \infty$ for all $n \in \mathbb{N}$. \end{itemize} \end{lemma} \begin{proof} The direct implications are clear; we only need to show the ``if'' parts. $(i)$ Suppose that $\sup_{x \in X}f(x)v_n(x) < \infty$ does not hold for any $n \in \mathbb{N}$. Choose a sequence $(x_k)_{k \in \mathbb{N}}$ of points in $X$ such that $f(x_k)v_k(x_k) \geq k$ for all $k \in \mathbb{N}$. Set $C_n = \sup_{k \leq n} v_k(x_k)/v_n(x_k)$, $n \in \mathbb{N}$, and $v = \inf_{n \in \mathbb{N}} C_n v_n \in \overline{V}$. Since $$ f(x_k)C_n v_n(x_k) \geq k, \qquad k,n \in \mathbb{N}, $$ we have that $\sup_{x \in X}f(x)v(x) = \infty$, a contradiction. $(ii)$ There are $C_n > 0$ such that $f \leq C_n v_n$ for all $n \in \mathbb{N}$. Then, the function $v = \inf_{n \in \mathbb{N}} C_nv_n \in \overline{V}$ satisfies the requirement. \end{proof} Likewise, \begin{lemma}\label{proj-desc-2} Let $X$ and $Y$ be completely regular Hausdorff spaces, let $f$ be a nonnegative function on $X \times Y$, let $\mathcal{V} = (v_n)_{n}$ be a decreasing weight system on $X$, and let $\mathcal{U} = (u_n)_{n}$ be a decreasing weight system on $Y$.
Then, \begin{itemize} \item[$(i)$] $\sup_{(x,y) \in X \times Y}f(x,y)v_n(x)u_n(y) < \infty$ for some $n \in \mathbb{N}$ if and only if \newline $ \sup_{(x,y) \in X \times Y}f(x,y)v(x)u(y) < \infty$ for all $v \in \overline{V}(\mathcal{V})$ and $u \in \overline{V}(\mathcal{U})$. \item[$(ii)$]$ \sup_{(x,y) \in X \times Y}f(x,y)/(v(x)u(y)) < \infty$ for some $v \in \overline{V}(\mathcal{V})$ and $u \in \overline{V}(\mathcal{U})$ if and only if $\sup_{(x,y) \in X \times Y}f(x,y)/(v_n(x)u_n(y)) < \infty$ for all $n \in \mathbb{N}$. \end{itemize} \end{lemma} \begin{lemma}\label{proj-desc-3} Let $w$ be a positive function on $X$ and let $\mathcal{V} = (v_n)_{n}$ be a decreasing weight system satisfying $$ \forall n \in \mathbb{N} \, \exists m \geq n \, \exists C > 0 \, \forall x \in X \, : \, v_m(x) \leq Cw(x)v_n(x). $$ Then, $$ \forall v \in \overline{V} \, \exists \overline{v} \in \overline{V} \, \forall x \in X \, : \, v(x) \leq w(x)\overline{v}(x). $$ \end{lemma} \begin{proof} Let $(n_k)_{k \in \mathbb{N}}$ be an increasing sequence of natural numbers such that $v_{n_{k+1}} \leq C_kwv_{n_k}$ for all $k \in \mathbb{N}$ and some $C_k > 0$. Next, choose $C'_k > 0$ such that $v \leq C'_k v_{n_k}$ for all $k \in \mathbb{N}$. Set $\overline{v} = \inf_{k \in \mathbb{N}} C_kC'_{k+1}v_{n_k} \in \overline{V}$. We have that $$ v \leq \inf_{k \in \mathbb{N}} C'_{k+1}v_{n_{k+1}} \leq w \inf_{k \in \mathbb{N}} C_kC'_{k+1}v_{n_{k}} = w\overline{v}. $$ \end{proof} Similarly, \begin{lemma}\label{proj-desc-4} Let $A_p$ be a weight sequence and let $\mathcal{V} = (v_n)_{n}$ be an $\{A_p\}$-admissible decreasing weight system. Then, $$ \forall v \in \overline{V} \, \forall \lambda > 0 \, \exists \overline{v} \in \overline{V} \, \forall x,t \in \mathbb{R}^d \, : \, v(x+t) \leq \overline{v}(x)e^{A(\lambda t)}. $$ \end{lemma} We write $\mathcal{R}$ for the set of all increasing sequences $(h_j)_{j \in \mathbb{N}}$ of positive numbers tending to infinity. 
There is a natural order on $\mathcal{R}$ defined by $h_j \prec k_j$ if $h_j\leq k_j$ for all $j \in \mathbb{N}$, and with it $(\mathcal{R},\prec)$ becomes a directed set. Given a weight sequence $M_p$ with associated function $M$ and $h_j \in \mathcal{R}$, we denote by $M^{h_j}$ and $M_{h_j}$ the associated function of the sequence $M_p / \prod_{j = 0}^ph_j$ and $M_p \prod_{j = 0}^p h_j$, respectively. \begin{lemma}\label{projectivedescriptionlemma} Let $M_p$ be a weight sequence satisfying $(M.1)$ and $(M.2)$. Then, \begin{itemize} \item[$(i)$] for every $v \in \overline{V}(\mathcal{V}_{(M_p)})$ there is $h_j \in \mathcal{R}$ such that $$ \sup_{\xi \in \mathbb{R}^d} v(\xi)e^{M^{h_j}(\xi)} < \infty. $$ \item[$(ii)$] for every $v \in \overline{V}(\mathcal{V}_{\{M_p\}})$ there is $h_j \in \mathcal{R}$ such that $$ \sup_{\xi \in \mathbb{R}^d} v(\xi)e^{-M_{h_j}(\xi)} < \infty. $$ \end{itemize} \end{lemma} \begin{proof} $(i)$ We first show that there is a \emph{supordinate} function $\varepsilon$, i.e., a continuous strictly increasing function $\varepsilon: [0,\infty) \rightarrow [0,\infty)$ with $\varepsilon(0) = 0$ and $ t= o(\varepsilon (t))$, such that \begin{equation} \sup_{\xi \in \mathbb{R}^d} v(\xi)e^{M(\varepsilon(|\xi|))} < \infty. \label{condition-supordinate} \end{equation} Conditions $(M.1)$ and $(M.2)$ imply that for all $h> 0$ there is $t> 0$ such that $v(\xi) \leq e^{-M(h\xi)}$ for all $|\xi| > t$. Hence we can inductively determine a strictly increasing sequence $(t_n)_{n \in \mathbb{N}}$ with $t_0 = 0$ and $t_n \rightarrow \infty$ that satisfies $$ v(\xi) \leq e^{-M((n+1)\xi)}, \qquad |\xi| \geq t_n, n \geq 1. $$ Let $l_n$ denote the line through the points $(t_n, nt_n)$ and $(t_{n+1}, (n+1)t_{n+1})$, and define $\varepsilon(t) = l_n(t)$ for $t \in [t_n, t_{n+1})$. The function $\varepsilon$ is supordinate and satisfies \eqref{condition-supordinate}. 
Therefore it suffices to show that for every supordinate function $\varepsilon$ there is a sequence $h_j \in \mathcal{R}$ and $C > 0$ such that $M^{h_j}(t) \leq M(\varepsilon(t)) + C$ for all $t \geq 0$. We may assume without loss of generality that $\varepsilon(t)/t$ tends increasingly to $\infty$, for otherwise we can find another supordinate function $\tilde \varepsilon$ with $\tilde \varepsilon \leq \varepsilon$ that does satisfy this property. Define $h_p$, $p \geq 1$, as the unique solution of $\varepsilon(m_p/h_p) = m_p$ and $h_0 = h_1$. The sequences $m_p/h_p$ and $h_p$ tend increasingly to infinity. Let $t \geq \max \{m_1/h_1, M_1h_1/M^2_0\}$ be arbitrary and suppose that $m_p/h_p \leq t \leq m_{p+1}/h_{p+1}$ for some $p \geq 1$. Since $h_p t \leq \varepsilon(t)$, we have that \begin{align*} M^{h_j}(t) &\leq \sup_{q \geq 1} \log \prod_{j = 1}^q \frac{h_jt}{m_j} + \log h_0 = \log \prod_{j = 1}^p \frac{h_jt}{m_j} + \log h_0 \leq \log \prod_{j = 1}^p \frac{\varepsilon(t)}{m_j} + \log h_0 \\ &\leq M(\varepsilon(t)) + \log h_0 . \end{align*} $(ii)$ See \cite[Lemma 4.5$(i)$]{D-V-VEmbeddingUltra16}. \end{proof} We are now able to state and prove the projective description of the space $\mathcal{B}^{\{M_p\}}_{\mathcal{V}}(\mathbb{R}^d)$. \begin{proposition}\label{projectivedescription} A function $\varphi \in C^\infty(\mathbb{R}^d)$ belongs to $\mathcal{B}^{\{M_p\}}_{\mathcal{V}}(\mathbb{R}^d)$ if and only if $$ \| \varphi \|_{\mathcal{B}^{M_p,h_j}_{v}} := \sup_{\alpha \in \mathbb{N}^d}\sup_{x \in \mathbb{R}^d} \frac{|\varphi^{(\alpha)}(x)|v(x)}{M_{\alpha}\prod_{j = 0}^{|\alpha|}h_j} < \infty $$ for all $v \in \overline{V}$ and $h_j \in \mathcal{R}$. Moreover, the topology of $\mathcal{B}^{\{M_p\}}_{\mathcal{V}}(\mathbb{R}^d)$ is generated by the system of seminorms $\{\| \, \cdot \, \|_{\mathcal{B}^{M_p,h_j}_{v}} \, : \, v \in \overline{V}, h_j \in \mathcal{R} \}$. 
\end{proposition} \begin{proof} The first part follows from Lemma \ref{proj-desc-2} and the fact that for a sequence of positive numbers $(a_j)_{j \in \mathbb{N}}$ it holds that $\sup_{j \in \mathbb{N}} a_j n^j < \infty$ for all $n \in \mathbb{N}$ if and only if $\sup_{j \in \mathbb{N}} a_j \prod_{m = 0}^jh_m < \infty$ for some $h_j \in \mathcal{R}$ \cite[Lemma 3.4]{Komatsu3}. Next, we show the topological assertion. Clearly, every seminorm $\| \, \cdot \, \|_{\mathcal{B}^{M_p,h_j}_{v}}$ acts continuously on $\mathcal{B}^{\{M_p\}}_{\mathcal{V}}(\mathbb{R}^d)$. Conversely, let $p$ be an arbitrary seminorm on $\mathcal{B}^{\{M_p\}}_{\mathcal{V}}(\mathbb{R}^d)$. There is a strongly bounded set $B \subset \mathcal{B}'^{\{M_p\}}_{\mathcal{V}}(\mathbb{R}^d)$ such that $$ p(\varphi) \leq \sup_{f \in B} |\langle f, \varphi \rangle|, \qquad \varphi \in \mathcal{B}^{\{M_p\}}_{\mathcal{V}}(\mathbb{R}^d). $$ Let $\psi, \gamma \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d) \backslash \{0\}$ with $(\gamma, \psi)_{L^2} = 1$. Lemma \ref{time-freq-norm} implies that $$ \sup_{f \in B} \sup_{(x,\xi) \in \mathbb{R}^{2d}}\frac{|V_\psi f(x,\xi)|}{v_n(x)e^{M(\xi/n)}} < \infty $$ for all $n \in \mathbb{N}$. Lemma \ref{proj-desc-2} and Lemma \ref{projectivedescriptionlemma} therefore imply that $$ |V_\psi f(x,\xi)| \leq v(x)e^{M_{h_j}(\xi)}, \qquad f \in B, $$ for some $v \in \overline{V}$ and $h_j \in \mathcal{R}$. By \cite[Lemma 2.3]{Prangoski} there is $h'_j \in \mathcal{R}$ such that $h'_j \leq h_j$ for $j$ large enough and $$ \prod_{j = 0}^{p+q}h'_j \leq 2^{p+q} \prod_{j = 0}^{p}h'_j \prod_{j = 0}^{q}h'_j, \qquad p,q \in \mathbb{N}, $$ which implies that the weight sequence $M_p\prod_{j = 0}^ph'_j$ satisfies $(M.1)$ and $(M.2)$ (with the constant $2H$ instead of $H$).
Furthermore, by Lemma \ref{proj-desc-3}, there is $\overline{v} \in \overline{V}$ such that $v(x)/\overline{v}(x) = O(|x|^{-(d+1)})$, while Lemma \ref{proj-desc-4} yields the existence of $\overline{\overline{v}} \in \overline{V}$ such that $$ \overline{v}(x+t) \leq \overline{\overline{v}}(x)e^{A(t)}, \qquad x,t \in \mathbb{R}^d. $$ Hence Corollary \ref{STFT-reg-dual-GS} and Lemma \ref{STFT-test} imply that \begin{align*} \sup_{f \in B} |\langle f, \varphi \rangle| &\leq \sup_{f \in B} \int \int_{\mathbb{R}^{2d}}|V_\psi f(x, \xi)| |V_{\overline{\gamma}}\varphi(x, - \xi)| {\rm d}x {\rm d}\xi \\ &\leq C\int \int_{\mathbb{R}^{2d}} v(x)e^{M_{h'_j}(\xi)} |V_{\overline{\gamma}}\varphi(x, - \xi)| {\rm d}x {\rm d}\xi \\ &\leq C'\sup_{(x,\xi) \in \mathbb{R}^{2d}} |V_{\overline{\gamma}}\varphi(x,\xi)| \overline{v}(x)e^{M_{h'_j/(2H)}(\xi)} \\ &\leq C''\| \varphi \|_{\mathcal{B}^{M_p,\pi h'_j/(2H\sqrt{d})}_{\overline{\overline{v}}}}. \qedhere \end{align*} \end{proof} \subsection{Characterization via convolution averages}\label{conv average subsection} In this subsection we characterize the elements $f$ of $\mathcal{B}'^{\ast}_{\mathcal{W}}(\mathbb{R}^d)$ and $\mathcal{B}'^{\ast}_{\mathcal{V}}(\mathbb{R}^d)$ in terms of the growth of the convolution averages $f \ast \varphi$ with respect to $\varphi \in \mathcal{S}^\ast_\dagger(\mathbb{R}^d)$. We start with the following simple lemma whose verification is left to the reader. \begin{lemma}\label{translation-norm} Let $w$ and $v$ be positive functions on $\mathbb{R}^d$ satisfying \eqref{bound v-w for STFT}. Then, $$ \| T_x\varphi \|_{\mathcal{B}^{M_p,h}_v} \leq Cw(x) \|\varphi\|_{\mathcal{S}^{M_p,h}_{A_p,\lambda}}, \qquad \varphi \in \mathcal{S}^{M_p,h}_{A_p,\lambda}(\mathbb{R}^d). $$ \end{lemma} \begin{lemma}\label{STFT-conv} Let $\psi \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$.
For every $h_j \in \mathcal{R}$ ($h > 0$) there is a bounded set $B \subset \mathcal{S} ^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ ($B \subset \mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d)$) such that $$ |V_\psi f(x,\xi)|e^{-M^{h_j}(\xi)} \leq \sup_{\varphi \in B} |(f \ast \varphi)(x)|, \qquad f \in \mathcal{S}'^{(M_p)}_{(A_p)}(\mathbb{R}^d), $$ $$ \left(|V_\psi f(x,\xi)|e^{-M(h\xi)} \leq \sup_{\varphi \in B} |(f \ast \varphi)(x)|, \qquad f \in \mathcal{S}'^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d) \right). $$ \end{lemma} \begin{proof} The set $$ B = \{ e^{-M^{h_j}(\xi)}M_\xi \check{\overline{\psi}} \, : \, \xi \in \mathbb{R}^d \} \qquad \left( B = \{ e^{-M(h\xi)} M_\xi \check{\overline{\psi}} \, : \, \xi \in \mathbb{R}^d \} \right) $$ is bounded in $\mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ (in $\mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d)$). Hence $$ |V_\psi f(x,\xi)|e^{-M^{h_j}(\xi)} = |(f \ast M_\xi \check{\overline{\psi}})(x)|e^{-M^{h_j}(\xi)} \leq \sup_{\varphi \in B} |(f \ast \varphi)(x)| $$ $$ \left(|V_\psi f(x,\xi)|e^{-M(h\xi)} = |(f \ast M_\xi \check{\overline{\psi}})(x)| e^{-M(h\xi)} \leq \sup_{\varphi \in B} |(f \ast \varphi)(x)|\right) $$ for all $f \in \mathcal{S}'^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ ($f \in \mathcal{S}'^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d)$). \end{proof} Given an increasing (decreasing) weight system $\mathcal{W} = (w_N)_{N}$ ($\mathcal{V} = (v_n)_{n}$) we define its \emph{dual weight system} as the decreasing (increasing) weight system given by $\mathcal{W}^\circ := (w^{-1}_N)_{N}$ ($\mathcal{V}^\circ := (v^{-1}_n)_{n}$). \begin{theorem}\label{conv-average-Beurling} Let $f \in \mathcal{S}'^\ast_\dagger(\mathbb{R}^d)$. The following statements are equivalent: \begin{itemize} \item[$(i)$] $f \in \mathcal{B}'^{\ast}_{\mathcal{W}}(\mathbb{R}^d)$. \item[$(ii)$] $f \ast \varphi \in \mathcal{W}^\circ C(\mathbb{R}^d)$ for all $\varphi \in \mathcal{S}^\ast_\dagger(\mathbb{R}^d)$. 
\item[$(iii)$] $f \ast \varphi \in \mathcal{W}^\circ C(\mathbb{R}^d)$ for all $\varphi \in \mathcal{S}^\ast_\dagger(\mathbb{R}^d)$ and the mapping $$ \ast_f : \mathcal{S}^\ast_\dagger(\mathbb{R}^d) \rightarrow \mathcal{W}^\circ C(\mathbb{R}^d): \varphi \rightarrow f \ast \varphi $$ is continuous. \item[$(iv)$] $f \ast \varphi \in \mathcal{B}^\ast_{\mathcal{W}^\circ}(\mathbb{R}^d)$ for all $\varphi \in \mathcal{S}^\ast_\dagger(\mathbb{R}^d)$. \item[$(v)$] $f \ast \varphi \in \mathcal{B}^\ast_{\mathcal{W}^\circ}(\mathbb{R}^d)$ for all $\varphi \in \mathcal{S}^\ast_\dagger(\mathbb{R}^d)$ and the mapping $$ \ast_f : \mathcal{S}^\ast_\dagger(\mathbb{R}^d) \rightarrow \mathcal{B}^\ast_{\mathcal{W}^\circ}(\mathbb{R}^d): \varphi \rightarrow f \ast \varphi $$ is continuous. \end{itemize} \end{theorem} \begin{proof} $(i) \Rightarrow (ii)$ Consequence of Lemma \ref{translation-norm}. $(ii) \Rightarrow (iii)$ Follows from De Wilde's closed graph theorem and the fact that the mapping $\ast_f: \mathcal{S}^\ast_\dagger(\mathbb{R}^d) \rightarrow \mathcal{S}'^\ast_\dagger(\mathbb{R}^d)$ is continuous. $(iii) \Rightarrow (iv)$ We first show the Beurling case. By Grothendieck's factorization theorem there is $N \in \mathbb{N}$ such that the mapping $\ast_f: \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d) \rightarrow Cw^{-1}_N(\mathbb{R}^d)$ is well-defined and continuous. Let $\varphi \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ be arbitrary. For every $h> 0$ the set $$ B_h = \left\{ \frac{h^{|\alpha|}\varphi^{(\alpha)}}{M_\alpha} \, : \, \alpha \in \mathbb{N}^d \right\} $$ is bounded in $\mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$. Hence $$ \| f \ast \varphi \|_{\mathcal{B}^{M_p,h}_{w_N^{-1}}} = \sup_{\alpha \in \mathbb{N}^d} \sup_{x \in \mathbb{R}^d}\frac{h^{|\alpha|}|(f\ast \varphi^{(\alpha)})(x)|}{M_\alpha w_N(x)} = \sup_{\chi \in B_h}\| f \ast \chi\|_{w^{-1}_N} < \infty. $$ Next, we consider the Roumieu case. 
Let $\varphi \in \mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d)$ be arbitrary and suppose that $\varphi \in \mathcal{S}^{M_p,h}_{A_p,\lambda}(\mathbb{R}^d)$ for some $h, \lambda > 0$. The set $$ B = \left\{ \frac{(h/H)^{|\alpha|}\varphi^{(\alpha)}}{M_\alpha} \, : \, \alpha \in \mathbb{N}^d \right\} $$ is bounded in $\mathcal{S}^{M_p,h/H}_{A_p,\lambda}(\mathbb{R}^d)$. There is $N \in \mathbb{N}$ such that the mapping $\ast_f: \mathcal{S}^{M_p,h/H}_{A_p,\lambda}(\mathbb{R}^d) \rightarrow Cw^{-1}_N(\mathbb{R}^d)$ is well-defined and continuous. Hence $$ \| f \ast \varphi \|_{\mathcal{B}^{M_p,h/H}_{w_N^{-1}}} = \sup_{\alpha \in \mathbb{N}^d} \sup_{x \in \mathbb{R}^d}\frac{(h/H)^{|\alpha|}|(f\ast \varphi^{(\alpha)})(x)|}{M_\alpha w_N(x)} = \sup_{\chi \in B}\| f \ast \chi \|_{w^{-1}_N} < \infty. $$ $(iv) \Rightarrow (v)$ Follows from De Wilde's closed graph theorem and the fact that the mapping $\ast_f: \mathcal{S}^\ast_\dagger(\mathbb{R}^d) \rightarrow \mathcal{S}'^\ast_\dagger(\mathbb{R}^d)$ is continuous. $(v) \Rightarrow (iii)$ Obvious. $(iii) \Rightarrow (i)$ Let $\psi \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d) \backslash \{0\}$. By Proposition \ref{STFT-dual} it suffices to show $$ \exists h > 0 \, \exists N \in \mathbb{N} \, ( \forall h > 0 \, \exists N \in \mathbb{N}) \,: \, \sup_{(x,\xi) \in \mathbb{R}^{2d}}\frac{|V_\psi f(x,\xi)|}{w_N(x)e^{M(h\xi)}} < \infty. $$ We first show the Beurling case. The above estimate is equivalent to (cf.\ Lemma \ref{proj-desc-1} and Lemma \ref{projectivedescriptionlemma}) $$ \exists N \in \mathbb{N} \, \forall h_j \in \mathcal{R} \, : \, \sup_{(x,\xi) \in \mathbb{R}^{2d}}\frac{|V_\psi f(x,\xi)|}{w_N(x)e^{M^{h_j}(\xi)}} < \infty. $$ By Grothendieck's factorization theorem there is $N \in \mathbb{N}$ such that the mapping $\ast_f: \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d) \rightarrow Cw^{-1}_N(\mathbb{R}^d)$ is well-defined and continuous. 
Lemma \ref{STFT-conv} implies that for every $h_j \in \mathcal{R}$ there is a bounded set $B \subset \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ such that $$ |V_\psi f(x,\xi)|e^{-M^{h_j}(\xi)} \leq \sup_{\varphi \in B} |(f \ast \varphi)(x)| \leq \sup_{\varphi \in B}\| f \ast \varphi\|_{w^{-1}_N} w_N(x). $$ Next, we consider the Roumieu case. Let $h > 0$ be arbitrary. By Lemma \ref{STFT-conv} we find a bounded set $B \subset \mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d)$ such that $$ |V_\psi f(x,\xi)|e^{-M(h\xi)} \leq \sup_{\varphi \in B} |(f \ast \varphi)(x)|. $$ Suppose that $B$ is contained and bounded in $\mathcal{S}^{M_p,k}_{A_p,\lambda}(\mathbb{R}^d)$ for some $k,\lambda > 0$. There is $N \in \mathbb{N}$ such that the mapping $\ast_f: \mathcal{S}^{M_p,k}_{A_p,\lambda}(\mathbb{R}^d) \rightarrow Cw^{-1}_N(\mathbb{R}^d)$ is well-defined and continuous. Hence $$ |V_\psi f(x,\xi)| e^{-M(h\xi)} \leq \sup_{\varphi \in B}\| f \ast \varphi\|_{w^{-1}_N} w_N(x). $$ \end{proof} \begin{theorem}\label{conv-average-Roumieu} Let $f \in \mathcal{S}'^\ast_\dagger(\mathbb{R}^d)$. The following statements are equivalent: \begin{itemize} \item[$(i)$] $f \in \mathcal{B}'^{\ast}_{\mathcal{V}}(\mathbb{R}^d)$. \item[$(ii)$] $f \ast \varphi \in \mathcal{V}^\circ C(\mathbb{R}^d)$ for all $\varphi \in \mathcal{S}^\ast_\dagger(\mathbb{R}^d)$. \item[$(iii)$] $f \ast \varphi \in \mathcal{V}^\circ C(\mathbb{R}^d)$ for all $\varphi \in \mathcal{S}^\ast_\dagger(\mathbb{R}^d)$ and the mapping $$ \ast_f : \mathcal{S}^\ast_\dagger(\mathbb{R}^d) \rightarrow \mathcal{V}^\circ C(\mathbb{R}^d): \varphi \rightarrow f \ast \varphi $$ is continuous. \item[$(iv)$] $f \ast \varphi \in \mathcal{B}^\ast_{\mathcal{V}^\circ}(\mathbb{R}^d)$ for all $\varphi \in \mathcal{S}^\ast_\dagger(\mathbb{R}^d)$.
\item[$(v)$] $f \ast \varphi \in \mathcal{B}^\ast_{\mathcal{V}^\circ}(\mathbb{R}^d)$ for all $\varphi \in \mathcal{S}^\ast_\dagger(\mathbb{R}^d)$ and the mapping $$ \ast_f : \mathcal{S}^\ast_\dagger(\mathbb{R}^d) \rightarrow \mathcal{B}^\ast_{\mathcal{V}^\circ}(\mathbb{R}^d): \varphi \rightarrow f \ast \varphi $$ is continuous. \end{itemize} \end{theorem} \begin{proof} The proof is similar to that of Theorem \ref{conv-average-Beurling} and therefore omitted. \end{proof} In the next two corollaries we employ the notation $\ddagger = (B_p)$ or $\{B_p\}$ to treat the Beurling and Roumieu case simultaneously. However, $\ast$ and $\ddagger$ are always either both of Roumieu type or both of Beurling type, that is, we only consider the classical Gelfand-Shilov spaces \eqref{GS-1} and not the mixed type spaces \eqref{GS-2}. Notice that these corollaries improve results from\footnote{Our spaces $\mathcal{O}'^{\ast,\ast}_C(\mathbb{R}^d)$ are the same as the convolutor spaces denoted by $\mathcal{O}'^{\ast}_C(\mathbb{R}^d)$ in \cite{D-P-P-V}. We point out however that, due to an error carried from \cite[Prop.\ 2]{D-P-Ve} to the proof of \cite[Thm.\ 3.2]{D-P-P-V} in the Roumieu case, the space $X=\mathcal{O}^{\{M_p\}}_C(\mathbb{R}^d)$ defined on \cite[p.\ 407]{D-P-P-V} is not a predual of $\mathcal{O}'^{\{M_p\}}_C(\mathbb{R}^d)$ since one only has $X'\subsetneq \mathcal{O}'^{\{M_p\}}_C(\mathbb{R}^d)$.} \cite{D-P-P-V,P-P-V} that were obtained under much stronger assumptions and with rather different methods (via the parametrix method). \begin{corollary} \label{cor conv char 1} Let $B_p$ be a weight sequence satisfying $(M.1)$ and $(M.2)$ such that $A_p \subset B_p$. For $f \in \mathcal{S}'^{\ast}_{\dagger}(\mathbb{R}^d)$ the following statements are equivalent: \begin{itemize} \item[$(i)$] $f \in \mathcal{S}'^{\ast}_{\ddagger}(\mathbb{R}^d)$.
\item[$(ii)$] $f \ast \varphi \in \mathcal{V}_{(B_p)} C(\mathbb{R}^d)$ ($f \ast \varphi \in \mathcal{W}_{\{B_p\}}C(\mathbb{R}^d)$) for all $\varphi \in \mathcal{S}^{\ast}_{\dagger}(\mathbb{R}^d)$. \item[$(iii)$] $f \ast \varphi \in \mathcal{O}^{\ast,\ddagger}_C(\mathbb{R}^d)$ for all $\varphi \in \mathcal{S}^{\ast}_{\dagger}(\mathbb{R}^d)$. \end{itemize} \end{corollary} \begin{corollary} \label{cor conv char 2} Let $B_p$ be a weight sequence satisfying $(M.1)$ and $(M.2)$ such that $A_p \subset B_p$. For $f \in \mathcal{S}'^{\ast}_{\dagger}(\mathbb{R}^d)$ the following statements are equivalent: \begin{itemize} \item[$(i)$] $f \in \mathcal{O}_C'^{\ast,\ddagger}(\mathbb{R}^d)$. \item[$(ii)$] $f \ast \varphi \in \mathcal{W}_{(B_p)} C(\mathbb{R}^d)$ ($f \ast \varphi \in \mathcal{V}_{\{B_p\}}C(\mathbb{R}^d)$) for all $\varphi \in \mathcal{S}^{\ast}_{\dagger}(\mathbb{R}^d)$. \item[$(iii)$] $f \ast \varphi \in \mathcal{S}^{\ast}_{\ddagger}(\mathbb{R}^d)$ for all $\varphi \in \mathcal{S}^{\ast}_{\dagger}(\mathbb{R}^d)$. \end{itemize} \end{corollary} \begin{remark} \label{rk conv char} If $M_p$ is non-quasianalytic, it is clear that one may replace ``for all $\varphi\in \mathcal{S}^{\ast}_{\dagger}(\mathbb{R}^{d})$'' by ``for all $\varphi\in \mathcal{D}^{\ast}(\mathbb{R}^{d})$'' in all statements from Theorem \ref{conv-average-Beurling}, Theorem \ref{conv-average-Roumieu}, Corollary \ref{cor conv char 1}, and Corollary \ref{cor conv char 2}. \end{remark} \subsection{Topological properties} \label{subsection topological properties} For an $(LF)$-space $E = \varinjlim E_n$ we define $$ \mathfrak{S} = \{ B \subset E \, : \, B \mbox{ is contained and bounded in $E_n$ for some $n \in \mathbb{N}$} \}. $$ We write $bs(E',E)$ for the $\mathfrak{S}$-topology on $E'$ (the topology of uniform convergence on sets of $\mathfrak{S}$). Grothendieck's factorization theorem implies that $bs(E',E)$ does not depend on the defining inductive spectrum of $E$. 
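The following standard comparison between the $bs$-topology and the strong topology is invoked repeatedly below; we record it here as a sketch for the reader's convenience.

```latex
% Standard fact, recorded for later use (sketch):
\begin{remark}
Since every member of $\mathfrak{S}$ is bounded in $E$, the topology
$bs(E',E)$ is always coarser than the strong topology $b(E',E)$.
Moreover, if the $(LF)$-space $E = \varinjlim E_n$ is \emph{regular},
that is, every bounded subset of $E$ is contained and bounded in some
step $E_n$, then $\mathfrak{S}$ is precisely the family of all bounded
subsets of $E$, and hence
\[
bs(E',E) = b(E',E).
\]
\end{remark}
```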
The first goal of this subsection is to show the ensuing two theorems. \begin{theorem}\label{topology-Beurling} The following three topologies coincide on $\mathcal{B}'^{\ast}_{\mathcal{W}}(\mathbb{R}^d)$: \begin{itemize} \item[$(i)$] The initial topology with respect to the mapping $$ \mathcal{B}'^{\ast}_{\mathcal{W}}(\mathbb{R}^d) \rightarrow L_b(\mathcal{S}^\ast_\dagger(\mathbb{R}^d), \mathcal{B}^\ast_{\mathcal{W}^\circ}(\mathbb{R}^d)): f \rightarrow \ast_f. $$ \item[$(ii)$] The initial topology with respect to the mapping $$ \mathcal{B}'^{\ast}_{\mathcal{W}}(\mathbb{R}^d) \rightarrow L_b(\mathcal{S}^\ast_\dagger(\mathbb{R}^d), \mathcal{W}^\circ C(\mathbb{R}^d)): f \rightarrow \ast_f. $$ \item[$(iii)$] $b(\mathcal{B}'^{(M_p)}_{\mathcal{W}}(\mathbb{R}^d),\mathcal{B}^{(M_p)}_{\mathcal{W}}(\mathbb{R}^d))$ ($bs(\mathcal{B}'^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d),\mathcal{B}^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d))$). \end{itemize} \end{theorem} \begin{proof} The first topology is clearly finer than the second one. In order to prove that the second topology is finer than the third one, we need to show that for every bounded set $B \subset \mathcal{B}^{(M_p)}_{\mathcal{W}}(\mathbb{R}^d)$ (for every $h> 0$ and every bounded set $B \subset \mathcal{B}^{M_p,h}_{\mathcal{W}}(\mathbb{R}^d)$) there is a bounded set $B' \subset \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ ($B' \subset \mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d)$), $v \in \overline{V}(\mathcal{W}^{\circ})$, and $C > 0$ such that $$ \sup_{\varphi \in B} |\langle f, \varphi \rangle| \leq C \sup_{\varphi \in B'} \| f \ast \varphi \|_{v} $$ for all $f \in \mathcal{B}'^{(M_p)}_{\mathcal{W}}(\mathbb{R}^d)$ ($f \in \mathcal{B}'^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d)$). Choose $\psi, \gamma \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ such that $(\gamma, \psi)_{L^2} = 1$. We first treat the Beurling case. 
Lemma \ref{STFT-test} implies that $$ \sup_{\varphi \in B} \sup_{(x, \xi) \in \mathbb{R}^{2d}}|V_{\overline{\gamma}} \varphi(x,\xi)| w_N(x)e^{M(h\xi)} < \infty $$ for all $N \in \mathbb{N}$ and $h> 0$, which, by Lemma \ref{proj-desc-2} and Lemma \ref{projectivedescriptionlemma}, is equivalent to $$ \sup_{\varphi \in B} \sup_{(x, \xi) \in \mathbb{R}^{2d}}\frac{|V_{\overline{\gamma}} \varphi(x,\xi)|e^{M^{h_j}(\xi)}}{v(x)} < \infty $$ for some $v \in \overline{V}(\mathcal{W}^\circ)$ and $h_j \in \mathcal{R}$. By Lemma \ref{proj-desc-3} there is $\overline{v} \in \overline{V}(\mathcal{W}^\circ)$ such that $ v(x)/\overline{v}(x) = O(|x|^{-(d+1)})$. Hence Corollary \ref{STFT-reg-dual-GS} and Lemma \ref{STFT-conv} yield that \begin{align*} \sup_{\varphi \in B} |\langle f, \varphi \rangle| &\leq \sup_{\varphi \in B}\int \int_{\mathbb{R}^{2d}}|V_\psi f(x, \xi)| |V_{\overline{\gamma}}\varphi(x, - \xi)| {\rm d}x {\rm d}\xi \\ &\leq C\sup_{(x,\xi) \in \mathbb{R}^{2d}} |V_\psi f(x, \xi)|\overline{v}(x)e^{-M^{h_j/H}(\xi)} \\ &\leq C\sup_{\varphi \in B'} \| f \ast \varphi \|_{\overline{v}}, \end{align*} for some bounded set $B' \subset \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$. Next, we consider the Roumieu case. Set $k = \pi h/\sqrt{d}$. Lemma \ref{STFT-test} implies that $$ \sup_{\varphi \in B} \sup_{(x, \xi) \in \mathbb{R}^{2d}}|V_{\overline{\gamma}} \varphi(x,\xi)| w_N(x)e^{M(k\xi)} < \infty $$ for all $N \in \mathbb{N}$, which, by Lemma \ref{proj-desc-2}, is equivalent to $$ \sup_{\varphi \in B} \sup_{(x, \xi) \in \mathbb{R}^{2d}}\frac{|V_{\overline{\gamma}} \varphi(x,\xi)|e^{M(k\xi)}}{v(x)} < \infty $$ for some $v \in \overline{V}(\mathcal{W}^\circ)$. Using Lemma \ref{proj-desc-3}, there is $\overline{v} \in \overline{V}(\mathcal{W}^\circ)$ such that $v(x)/\overline{v}(x) = O(|x|^{-(d+1)})$. 
Hence, in view of Corollary \ref{STFT-reg-dual-GS} and Lemma \ref{STFT-conv}, \begin{align*} \sup_{\varphi \in B} |\langle f, \varphi \rangle| &\leq \sup_{\varphi \in B}\int \int_{\mathbb{R}^{2d}}|V_\psi f(x, \xi)| |V_{\overline{\gamma}}\varphi(x, - \xi)| {\rm d}x {\rm d}\xi \\ &\leq C\sup_{(x,\xi) \in \mathbb{R}^{2d}} |V_\psi f(x, \xi)|\overline{v}(x)e^{-M(k\xi/H)} \\ &\leq C\sup_{\varphi \in B'} \| f \ast \varphi \|_{\overline{v}}, \end{align*} for some bounded set $B' \subset \mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d)$. Finally, we show that the third topology is finer than the first one. We start with the Beurling case. Since the strong dual of $\mathcal{B}^{(M_p)}_{\mathcal{W}}(\mathbb{R}^d)$ is bornological (cf.\ Proposition \ref{LFS-1}), it suffices to show that every strongly bounded set $B$ in $\mathcal{B}'^{(M_p)}_{\mathcal{W}}(\mathbb{R}^d)$ is also bounded for the first topology. Let $N \in \mathbb{N}$ and $C,h > 0$ be such that $$ |\langle f, \varphi \rangle | \leq C\|\varphi \|_{\mathcal{B}^{M_p,h}_{w_N}}, \qquad \varphi \in \mathcal{B}^{(M_p)}_{\mathcal{W}}(\mathbb{R}^d), $$ for all $f \in B$. There is $M > N$ such that $$ w_N(x+t) \leq C' w_M(x) e^{A(\lambda t)}, \qquad x,t \in \mathbb{R}^d, $$ for some $C', \lambda > 0$. Hence Lemma \ref{translation-norm} implies that for every bounded set $B' \subset \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ and every $k > 0$ it holds that \begin{align*} \sup_{f \in B} \sup_{\varphi \in B'} \|f \ast \varphi\|_{B^{M_p,k}_{w_M}} &\leq C\sup_{\varphi \in B'} \sup_{\alpha \in \mathbb{N}^d} \sup_{x \in \mathbb{R}^d} \frac{k^{|\alpha|}\|T_{x}(\check{\varphi}^{(\alpha)})\|_{\mathcal{B}^{M_p,h}_{w_N}}}{M_\alpha w_M(x)} \\ &\leq CC' \sup_{\varphi \in B'} \sup_{\alpha \in \mathbb{N}^d} \frac{k^{|\alpha|}\|\varphi^{(\alpha)}\|_{\mathcal{S}^{M_p,h}_{A_p, \lambda}}}{M_\alpha} < \infty, \end{align*} whence $B$ is bounded for the first topology. Next, we consider the Roumieu case. 
By Proposition \ref{projectivedescription} it suffices to show that for every bounded set $B \subset \mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d)$, every $v \in \overline{V}(\mathcal{W}^\circ)$ and every $h_j \in \mathcal{R}$, there are $h > 0$ and a bounded set $B' \subset \mathcal{B}^{M_p,h}_{\mathcal{W}}(\mathbb{R}^d)$ such that $$ \sup_{\varphi \in B} \|f \ast \varphi \|_{\mathcal{B}^{M_p,h_j}_{v}} \leq \sup_{\varphi \in B'} |\langle f, \varphi \rangle|, $$ for all $f \in \mathcal{B}'^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d)$. Let $k, \lambda > 0$ be such that $B$ is contained and bounded in $\mathcal{S}^{M_p,k}_{A_p, \lambda}(\mathbb{R}^d)$. Lemma \ref{translation-norm} implies that $$ B' = \left\{ \frac{T_{x}(\check{\varphi}^{(\alpha)})v(x)}{M_\alpha \prod_{j = 0}^{|\alpha|}h_j } \, : \, \varphi \in B, \alpha \in \mathbb{N}^d, x \in \mathbb{R}^d \right\} $$ is bounded in $\mathcal{B}^{M_p,k/H}_{\mathcal{W}}(\mathbb{R}^d)$. Hence \begin{align*} \sup_{\varphi \in B} \|f \ast \varphi \|_{\mathcal{B}^{M_p,h_j}_{v}} &\leq \sup_{\varphi \in B} \sup_{\alpha \in \mathbb{N}^d} \sup_{x \in \mathbb{R}^d} \frac{|\langle f, T_{x}(\check{\varphi}^{(\alpha)}) \rangle| v(x)}{M_\alpha \prod_{j = 0}^{|\alpha|}h_j} \\ &\leq \sup_{\varphi \in B'} |\langle f, \varphi \rangle|. \end{align*} \end{proof} \begin{theorem}\label{topology-Roumieu} The following three topologies coincide on $\mathcal{B}'^{\ast}_{\mathcal{V}}(\mathbb{R}^d)$: \begin{itemize} \item[$(i)$] The initial topology with respect to the mapping $$ \mathcal{B}'^{\ast}_{\mathcal{V}}(\mathbb{R}^d) \rightarrow L_b(\mathcal{S}^\ast_\dagger(\mathbb{R}^d), \mathcal{B}^\ast_{\mathcal{V}^\circ}(\mathbb{R}^d)): f \rightarrow \ast_f. $$ \item[$(ii)$] The initial topology with respect to the mapping $$ \mathcal{B}'^{\ast}_{\mathcal{V}}(\mathbb{R}^d) \rightarrow L_b(\mathcal{S}^\ast_\dagger(\mathbb{R}^d), \mathcal{V}^\circ C(\mathbb{R}^d)): f \rightarrow \ast_f. 
$$ \item[$(iii)$] $bs(\mathcal{B}'^{(M_p)}_{\mathcal{V}}(\mathbb{R}^d),\mathcal{B}^{(M_p)}_{\mathcal{V}}(\mathbb{R}^d))$ ($b(\mathcal{B}'^{\{M_p\}}_{\mathcal{V}}(\mathbb{R}^d),\mathcal{B}^{\{M_p\}}_{\mathcal{V}}(\mathbb{R}^d))$). \end{itemize} \end{theorem} \begin{proof} The proof is similar to (but simpler than) the one of Theorem \ref{topology-Beurling} and therefore omitted. \end{proof} We endow $\mathcal{B}'^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d)$ and $\mathcal{B}'^{(M_p)}_{\mathcal{V}}(\mathbb{R}^d)$ with one of the three identical topologies considered in Theorem \ref{topology-Beurling} and Theorem \ref{topology-Roumieu}, respectively. \begin{lemma}\label{density-S-in-dual} The embedding \begin{equation} \iota: \mathcal{S}^\ast_\dagger(\mathbb{R}^d) \rightarrow \mathcal{B}'^\ast_{\mathcal{W}}(\mathbb{R}^d): \varphi \rightarrow \left(\psi \rightarrow \int_{\mathbb{R}^d} \varphi(x)\psi(x){\rm d}x \right) \label{density-S-def} \end{equation} has dense range. \end{lemma} \begin{proof} By Corollary \ref{dense-inclusion} it suffices to show that $$ \iota: \mathcal{B}^\ast_{\mathcal{W}^\circ}(\mathbb{R}^d) \rightarrow \mathcal{B}'^\ast_{\mathcal{W}}(\mathbb{R}^d): \varphi \rightarrow \left(\psi \rightarrow \int_{\mathbb{R}^d} \varphi(x)\psi(x){\rm d}x \right) $$ has dense range. Choose $\chi \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)$ with $\int_{\mathbb{R}^d}\chi(x) {\rm d}x = 1$ and set $\chi_n = n^d\chi(n \cdot)$, $n \geq 1$. Fix $f \in \mathcal{B}'^\ast_{\mathcal{W}}(\mathbb{R}^d)$. By Theorem \ref{conv-average-Beurling} we have that $f \ast \chi_n \in \mathcal{B}^\ast_{\mathcal{W}^\circ}(\mathbb{R}^d)$. We claim that $\iota(f \ast \chi_n) \rightarrow f$ in $\mathcal{B}'^\ast_{\mathcal{W}}(\mathbb{R}^d)$, or, equivalently, that $ \ast_{\iota(f \ast \chi_n)} \rightarrow \ast_f$ in $L_b(\mathcal{S}^\ast_\dagger(\mathbb{R}^d), \mathcal{B}^\ast_{\mathcal{W}^\circ}(\mathbb{R}^d))$. 
Since $\ast_{\iota(f \ast \chi_n)} = \ast_f \circ \ast_{\chi_n}$, where $ \ast_{\chi_n} \in L(\mathcal{S}^\ast_\dagger(\mathbb{R}^d),\mathcal{S}^\ast_\dagger(\mathbb{R}^d))$ is defined via $\ast_{\chi_n}(\varphi) = \chi_n \ast \varphi$, $\varphi \in \mathcal{S}^\ast_\dagger(\mathbb{R}^d)$, the claim follows from the fact that $\ast_{\chi_n} \rightarrow \operatorname{id}$ in $L_b(\mathcal{S}^\ast_\dagger(\mathbb{R}^d),\mathcal{S}^\ast_\dagger(\mathbb{R}^d))$. \end{proof} \begin{lemma} The embedding $$ \iota: \mathcal{S}^\ast_\dagger(\mathbb{R}^d) \rightarrow \mathcal{B}'^\ast_{\mathcal{V}}(\mathbb{R}^d): \varphi \rightarrow \left(\psi \rightarrow \int_{\mathbb{R}^d} \varphi(x)\psi(x){\rm d}x \right) $$ has dense range. \end{lemma} \begin{proof} This can be shown in the same way as Lemma \ref{density-S-in-dual}. \end{proof} The ensuing two theorems may be considered as quantified versions of Grothendieck's theorem \cite[Chap.\ II, Thm.\ 16, p.\ 131]{Grothendieck} in the setting of tempered ultradistributions. \begin{theorem}\label{Roumieu-dual} $\mathcal{B}'^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d)$ is a $(PLS)$-space\footnote{A l.c.s.\ is said to be a $(PLS)$-space if it can be written as the projective limit of a projective spectrum consisting of $(DFS)$-spaces. We refer to the survey article \cite{Domanski-artykul} for more information on $(PLS)$-spaces. } whose strong dual is canonically isomorphic to $\mathcal{B}^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d)$, i.e., the embedding \begin{equation} \mathcal{B}^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d) \rightarrow (\mathcal{B}'^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d))'_b: \varphi \rightarrow (f \rightarrow \langle f, \varphi \rangle) \label{canonical-embedding} \end{equation} is a topological isomorphism. Moreover, consider the following conditions: \begin{itemize} \item[$(i)$] $\mathcal{W}$ satisfies $(DN)$. 
\item[$(ii)$] $\mathcal{B}^{\{M_p\}}_\mathcal{W}(\mathbb{R}^d)$ satisfies one of the equivalent conditions $(ii)$--$(v)$ from Theorem \ref{reg-Beurling}. \item[$(iii)$] $\mathcal{B}'^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d)$ is ultrabornological. \item[$(iv)$] $\mathcal{B}'^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d)$ is equal to the strong dual of $\mathcal{B}^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d)$. \end{itemize} Then, $(i) \Rightarrow (ii) \Rightarrow (iii) \Rightarrow (iv) \Rightarrow (ii)$. Furthermore, if $M_p$ satisfies $(M.1)$, $(M.2)$, and $(M.3)$, then $(ii) \Rightarrow (i)$. \end{theorem} \begin{proof} We first show that $\mathcal{B}'^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d)$ is a $(PLS)$-space. $L_b(\mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d), \mathcal{B}^{\{M_p\}}_{\mathcal{W}^\circ}(\mathbb{R}^d))$ is a $(PLS)$-space because of Proposition \ref{LFS-1} and the fact that $L_b(E,F) \cong L_b(F'_b,E'_b)$ is a $(PLS)$-space for general $(DFS)$-spaces $E$ and $F$ \cite[Prop.\ 4.3]{D-L}. Since a closed subspace of a $(PLS)$-space is again a $(PLS)$-space, it therefore suffices to show that the embedding $\mathcal{B}'^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d) \rightarrow L_b(\mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d), \mathcal{B}^{\{M_p\}}_{\mathcal{W}^\circ}(\mathbb{R}^d)): f \rightarrow \ast_f$ has closed range. Let $(f_i)_i \subset \mathcal{B}'^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d)$ be a net such that $\ast_{f_i} \rightarrow S$ in $L_b(\mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d), \mathcal{B}^{\{M_p\}}_{\mathcal{W}^\circ}(\mathbb{R}^d))$. Define $$ \langle f, \varphi \rangle := S(\check{\varphi})(0), \qquad \varphi \in \mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d). $$ Clearly, $f \in \mathcal{S}'^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d)$. 
Moreover, for each $\varphi \in \mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d)$ we have that \begin{align*} (f\ast \varphi)(x) &= \langle f, T_x \check{\varphi} \rangle = S(T_{-x}\varphi) (0) = \lim_{i} (f_i \ast T_{-x} \varphi)(0) \\ &= (T_{-x} \lim_{i} (f_i \ast \varphi))(0) = (T_{-x} S(\varphi))(0) = S(\varphi)(x). \end{align*} This shows that $S = \ast_f$ and, by Theorem \ref{conv-average-Beurling}, $f \in \mathcal{B}'^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d)$. Next, we show that \eqref{canonical-embedding} is a topological isomorphism. Clearly, the $bs$-topology is coarser than the strong topology and finer than the weak-$\ast$ topology on $\mathcal{B}'^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d)$. The fact that $\mathcal{B}^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d)$ is barreled (as it is an $(LF)$-space) therefore implies that a subset of $\mathcal{B}'^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d)$ is equicontinuous if and only if it is $bs$-bounded, which in turn yields that \eqref{canonical-embedding} is a strict morphism. We now show that it is surjective. Let $\Phi \in (\mathcal{B}'^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d))'$ be arbitrary and set $f = \Phi \circ \iota \in \mathcal{S}'^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d)$, where $\iota$ is defined via \eqref{density-S-def}. We claim that $f \in \mathcal{B}^{\{M_p\}}_\mathcal{W}(\mathbb{R}^d)$, i.e., that there is $\chi \in \mathcal{B}^{\{M_p\}}_\mathcal{W}(\mathbb{R}^d)$ such that $\langle f , \varphi \rangle = \int_{\mathbb{R}^d}\chi(x)\varphi(x) {\rm d}x $ for all $\varphi \in \mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d)$. Hence $\Phi(\iota(\varphi)) = \int_{\mathbb{R}^d}\chi(x)\varphi(x) {\rm d}x = \langle \iota(\varphi), \chi \rangle$ for all $\varphi \in \mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d)$ and the result would follow from the fact that $\iota( \mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d))$ is dense in $\mathcal{B}'^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d)$ (Lemma \ref{density-S-in-dual}). 
We now prove the claim with the aid of Proposition \ref{STFT-test-char}. Let $\psi \in \mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d) \backslash \{0\}$. Since $\Phi$ is continuous, there are $h > 0$ and a bounded set $B \subset \mathcal{B}^{M_p,h}_\mathcal{W}(\mathbb{R}^d)$ such that \begin{align*} |V_\psi f(x,\xi)| = |\langle f, \overline{M_\xi T_x\psi}\rangle| &= |\Phi(\iota( \overline{M_\xi T_x\psi}))| \leq \sup_{\varphi \in B} |\langle \iota( \overline{M_\xi T_x\psi}), \varphi \rangle| \\ &= \sup_{\varphi \in B} \left | \int_{\mathbb{R}^d} \varphi(t) \overline{M_\xi T_x\psi(t)} {\rm d}t \right | = \sup_{\varphi \in B} |V_\psi \varphi(x,\xi)|. \end{align*} Hence, the required bounds for $|V_\psi f|$ directly follow from Lemma \ref{STFT-test}. We now turn to the chain of implications. $(i) \Rightarrow (ii)$ and $(ii) \Rightarrow (i)$ (under the extra assumption that $M_p$ satisfies $(M.1)$, $(M.2)$, and $(M.3)$) have already been shown in Theorem \ref{reg-Beurling}. $(ii) \Rightarrow (iii)$ Since for general $(LF)$-spaces $E$ it holds that $bs(E',E) = b(E',E)$ if $E$ is regular, we obtain that $\mathcal{B}'^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d)$ is equal to the strong dual of $\mathcal{B}^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d)$. Hence $(iii)$ follows from Proposition \ref{LFS-1} and the fact that the strong dual of a complete Schwartz space is ultrabornological \cite[p.\ 43]{Schwartz-57}. $(iii) \Rightarrow (iv)$ The strong topology on $\mathcal{B}'^{\{M_p\}}_{\mathcal{W}}(\mathbb{R}^d)$ is clearly finer than the $bs$-topology. As the strong dual of an $(LF)$-space is strictly webbed \cite[Prop.\ IV.3.3]{DeWilde}, they are identical by De Wilde's open mapping theorem. $(iv) \Rightarrow (ii)$ Since the mapping \eqref{canonical-embedding} is a topological isomorphism, $\mathcal{B}^{\{M_p\}}_\mathcal{W}(\mathbb{R}^d)$ is reflexive and, thus, quasi-complete. 
\end{proof} \begin{theorem}\label{Beurling-dual} $\mathcal{B}'^{(M_p)}_{\mathcal{V}}(\mathbb{R}^d)$ is a $(PLS)$-space whose strong dual is canonically isomorphic to $\mathcal{B}^{(M_p)}_{\mathcal{V}}(\mathbb{R}^d)$. Moreover, consider the following conditions: \begin{itemize} \item[$(i)$] $\mathcal{V}$ satisfies $(\Omega)$. \item[$(ii)$] $\mathcal{B}^{(M_p)}_\mathcal{V}(\mathbb{R}^d)$ satisfies one of the equivalent conditions $(ii)$--$(v)$ from Theorem \ref{reg-Roumieu-1}. \item[$(iii)$] $\mathcal{B}'^{(M_p)}_\mathcal{V}(\mathbb{R}^d)$ is ultrabornological. \item[$(iv)$] $\mathcal{B}'^{(M_p)}_{\mathcal{V}}(\mathbb{R}^d)$ is equal to the strong dual of $\mathcal{B}^{(M_p)}_{\mathcal{V}}(\mathbb{R}^d)$. \end{itemize} Then, $(i) \Rightarrow (ii) \Rightarrow (iii) \Rightarrow (iv) \Rightarrow (ii)$. Furthermore, if $M_p$ satisfies $(M.1)$, $(M.2)$, and $(M.3)$, then $(ii) \Rightarrow (i)$. \end{theorem} \begin{proof} The proof is similar to the one of Theorem \ref{Roumieu-dual} and therefore omitted. \end{proof} The next two corollaries settle the question posed after \cite[Thm.\ 3.3, p.\ 413]{D-P-P-V}. \begin{corollary}\label{topology-OCD} Let $M_p$ and $A_p$ be weight sequences satisfying our standard assumptions \eqref{group-cond}. Then, $\mathcal{O}_C'^{(M_p),(A_p)}(\mathbb{R}^d)$ is ultrabornological if $A_p$ satisfies $(M.2)^\ast$. If, in addition, $M_p$ satisfies $(M.1)$, $(M.2)$, and $(M.3)$, then $\mathcal{O}_C'^{(M_p),(A_p)}(\mathbb{R}^d)$ is ultrabornological if and only if $A_p$ satisfies $(M.2)^\ast$. 
\end{corollary} Naturally, in view of Theorem \ref{topology-Roumieu} and Theorem \ref{Beurling-dual}, the space $\mathcal{O}_C'^{(M_p),(A_p)}(\mathbb{R}^d)$ is ultrabornological if and only if the strong topology on $\mathcal{O}_C'^{(M_p),(A_p)}(\mathbb{R}^d)$ coincides with the initial topology with respect to the mapping $$ \mathcal{O}_C'^{(M_p),(A_p)}(\mathbb{R}^d)\to L_b(\mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d),\mathcal{S}^{(M_p)}_{(A_p)}(\mathbb{R}^d)): \ f\to \ast_{f}. $$ On the other hand, by Remark \ref{rk incomplete}, the situation in the Roumieu case is completely different from that of Corollary \ref{topology-OCD}. \begin{corollary} \label{topology-OCD-Roumieu} Let $M_p$ be a weight sequence satisfying $(M.1)$, $(M.2)$, and $(M.3)$, and let $A_p$ be a weight sequence satisfying $(M.1)$ and $(M.2)$. Then, $\mathcal{O}_C'^{\{M_p\},\{A_p\}}(\mathbb{R}^d)$ is not ultrabornological; in particular, the strong topology on $\mathcal{O}_C'^{\{M_p\},\{A_p\}}(\mathbb{R}^d)$ is strictly finer than the initial topology with respect to the mapping $$\mathcal{O}_C'^{\{M_p\},\{A_p\}}(\mathbb{R}^d)\to L_b(\mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d),\mathcal{S}^{\{M_p\}}_{\{A_p\}}(\mathbb{R}^d)): \ f\to \ast_{f}.$$ \end{corollary}
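To make the hypotheses of the two corollaries concrete, we mention a standard example (a sketch only; it is not used elsewhere in the argument).

```latex
% Standard illustrative example (sketch):
\begin{remark}
The Gevrey sequences $M_p = p!^{s}$, $s > 1$, satisfy $(M.1)$, $(M.2)$,
and $(M.3)$, and their associated function satisfies
$M(t) \asymp t^{1/s}$ as $t \to \infty$. If moreover $A_p = p!^{r}$
with $1 < r \leq s$, then $A_p$ satisfies $(M.1)$ and $(M.2)$ and
$A_p \subset M_p$; in particular, one may take $B_p = M_p$ in
Corollary \ref{cor conv char 1} and Corollary \ref{cor conv char 2}.
\end{remark}
```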
\section{Introduction and statements of results} In the theory of composites or micro-structures, it is important to find inclusion shapes which produce the minimal energy. In relation to such shapes Eshelby \cite{esh57} showed that if the inclusion is of ellipsoidal shape, then for any uniform loading the strain inside the inclusion is uniform. We call this remarkable property Eshelby's uniformity property. Eshelby then conjectured in \cite{esh61} that ellipsoids are the only shape (structure) with such a uniformity property. Eshelby's conjecture may be interpreted in two different ways: \medskip \noindent {\bf Weak Eshelby's conjecture}. If the strain is constant inside $\Omega$ for all loadings, then $\Omega$ is an ellipse (2D) or an ellipsoid (3D). \medskip \noindent {\bf Strong Eshelby's conjecture}. If the strain is constant inside $\Omega$ for a single loading, then $\Omega$ is an ellipse (2D) or an ellipsoid (3D). \medskip \noindent The strong Eshelby conjecture of course implies the weak one. The strong Eshelby conjecture has been proved to be true in two dimensions by Sendeckyj \cite{sen70} (see also \cite{KM_ARMA_08, Liu_PRSA_08} for alternative proofs). However, it is only recently that the weak Eshelby conjecture was proved to be true in three dimensions: by Kang-Milton \cite{KM_ARMA_08} and Liu \cite{Liu_PRSA_08}. We refer to the above mentioned papers (and \cite{Kan_MM_09}) for a comprehensive account of developments on the Eshelby conjecture. Regarding the strong Eshelby conjecture in three dimensions, important progress has been made by Liu. He showed in \cite{Liu_PRSA_08} that the conductivity version of the strong Eshelby conjecture completely fails to be true (see \cite{Liu_PRSA_08} for a precise statement). However, the strong Eshelby conjecture (for elasticity) has not been proved or disproved. In this paper we consider the strong Eshelby conjecture. 
Even though we are not able to resolve the conjecture completely, we obtain results which are stronger than the weak version of Eshelby's conjecture (and weaker than the strong version). We show that if the strain inside the inclusion is constant and in addition the eigenvalues of the constant strain are either all the same or all distinct, then the inclusion is of ellipsoidal shape. We then use this result to show that if the strains inside the inclusion are uniform for two linearly independent loadings, then the inclusion must be of ellipsoidal shape. It is worth emphasizing that the weak Eshelby conjecture requires 6 linearly independent loadings while the strong Eshelby conjecture requires only a single loading. In order to present the results in a more precise way let us introduce some notation. Let $\Omega$ be a bounded domain with a Lipschitz boundary in $\mathbb{R}^d$, $d=2,3$. The domain $\Omega$ is occupied by a homogeneous isotropic elastic material whose Lam\'e parameters are $\tilde\lambda$ and $\tilde \mu$. We assume that the background (the matrix) is also homogeneous and isotropic, and its Lam\'e parameters are $\lambda$ and $ \mu$. Then the elasticity tensors for the matrix and the inclusion can be written respectively as \begin{equation} \mathbb{C}^{0}:=\lambda\mathbf{I}\otimes\mathbf{I}+2\mu \mathbb{I} \quad\mbox{and} \quad \mathbb{C}^{1}:=\tilde{\lambda}\mathbf{I} \otimes \mathbf{I} + 2\tilde{\mu}\mathbb{I}, \end{equation} where ${\mathbf{I}}$ is the $d\times d$ identity matrix (2-tensor) and $\mathbb{I}$ is the identity 4-tensor. The elasticity tensor for $\mathbb{R}^d$ in the presence of the inclusion $\Omega$ is then given by \begin{equation} \mathbb{C}_\Omega:=(1-1_\Omega) \mathbb{C}^0+1_\Omega \mathbb{C}^1, \end{equation} where $1_\Omega$ is the indicator function of $\Omega$. 
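Since ${\mathbf{I}}\otimes{\mathbf{I}}$ and $\mathbb{I}$ act on a symmetric matrix ${\mathbf A}$ by $({\mathbf{I}}\otimes{\mathbf{I}}){\mathbf A} = (\textrm{tr}\,{\mathbf A})\,{\mathbf{I}}$ and $\mathbb{I}{\mathbf A} = {\mathbf A}$, the tensors above act explicitly as follows (a routine consequence of the definitions, recorded for later reference):

```latex
% Explicit action of the elasticity tensors on a symmetric matrix A
% (routine consequence of the definitions of C^0 and C^1):
\[
\mathbb{C}^{0}{\mathbf A}
  = \lambda(\textrm{tr}\,{\mathbf A})\,{\mathbf I} + 2\mu\,{\mathbf A},
\qquad
(\mathbb{C}^{1}-\mathbb{C}^{0}){\mathbf A}
  = (\tilde{\lambda}-\lambda)(\textrm{tr}\,{\mathbf A})\,{\mathbf I}
    + 2(\tilde{\mu}-\mu)\,{\mathbf A}.
\]
```

In particular, on trace-free matrices $\mathbb{C}^{1}-\mathbb{C}^{0}$ acts simply as multiplication by $2(\tilde{\mu}-\mu)$.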
Let $\kappa$ and $\tilde{\kappa}$ be the bulk moduli of $\mathbb{R}^{d}\setminus\overline{\Omega}$ and $\Omega$, respectively, namely, $$ d\kappa = d\lambda+2\mu \quad\mbox{and}\quad d\tilde{\kappa}= d\tilde{\lambda}+2\tilde{\mu}, \quad d=2,3. $$ It is always assumed that the strong convexity condition holds, {\it i.e.}, \begin{equation}\label{eq:trans-2} \mu>0,\quad \kappa >0,\quad\tilde{\mu}>0\quad\mbox{and}\quad \tilde{\kappa}>0\;. \end{equation} We also assume that \[ (\lambda-\tilde{\lambda})(\mu-\tilde{\mu})>0\;, \] which implies that $\mathbb{C}^{1}-\mathbb{C}^{0}$ is either positive or negative definite as an operator on the space $M_d^s$ of all $d \times d$ symmetric matrices. We consider the following problem of the Lam\'e system of linear elasticity: For a given non-zero symmetric $d \times d$ matrix ${\mathbf A}$ \begin{equation}\label{main-eqn} \left\{ \begin{array}{ll} \nabla\cdot \mathbb{C}_{\Omega} \mathcal{E}({\mathbf u})=0 \quad & \mbox{in } \mathbb{R}^d \\ \noalign{\medskip} {\mathbf u}({\mathbf x})- {\mathbf A} {\mathbf x}=O(|{\mathbf x}|^{1-d}) \quad & \mbox{as }|{\mathbf x}|\rightarrow \infty, \end{array} \right. \end{equation} where $\mathcal{E}({\mathbf u})$ is the strain tensor, {\it i.e.}, $$ \mathcal{E}({\mathbf u}):= \frac{1}{2} (\nabla {\mathbf u} + \nabla {\mathbf u}^{T}) \quad (T \mbox{ for transpose}). $$ The matrix ${\mathbf A}$ represents a uniform loading at infinity. In this paper we prove the following improvements of the weak Eshelby conjecture for three-dimensional elasticity. \begin{thm} \label{mainthm} Let $\Omega$ be a simply connected bounded domain in $\mathbb{R}^3$ with a Lipschitz boundary. If the strain tensor $\mathcal{E}({\mathbf u})$ of the solution ${\mathbf u}$ to \eqnref{main-eqn} is constant in $\Omega$ for a nonzero symmetric matrix ${\mathbf A}$ and $\mathcal{E}({\mathbf u})$ within $\Omega$ has eigenvalues which are either all distinct or all the same, then $\Omega$ is an ellipsoid. 
\end{thm} \begin{thm} \label{mainthm2} Let $\Omega$ be a simply connected bounded domain in $\mathbb{R}^3$ with a Lipschitz boundary. If the strain tensors of solutions to \eqnref{main-eqn} for two linearly independent ${\mathbf A}$'s are constant in $\Omega$, then $\Omega$ is an ellipsoid. \end{thm} The second main result of this paper is on the shape of the inclusion whose elastic moment tensor (elastic polarizability tensor) has an extremal property. In order to explain the second result, we take the following definition of the Elastic Moment Tensor (henceforth denoted as the EMT) \cite[Lemma 10.3]{AK_book_07}: Let ${\mathbf A}$ be a $d \times d$ matrix and let ${\mathbf u}_{{\mathbf A}}$ be the solution to \eqnref{main-eqn} corresponding to ${\mathbf A}$. Then the EMT $\mathbb{M}$ associated with the inclusion $\Omega$ and the elasticity tensors $\mathbb{C}^0$ and $\mathbb{C}^1$ is a 4-tensor defined by \begin{equation} \label{EMT_def} \mathbb{M} {\mathbf A} = \int_\Omega (\mathbb{C}^1 - \mathbb{C}^0) \mathcal{E}({\mathbf u}_{{\mathbf A}}) \, d {\mathbf x}. \end{equation} The EMT may be defined in many different but equivalent ways. It is worth noticing that if the strain $\mathcal{E}({\mathbf u}_{{\mathbf A}})={\mathbf B}$ is constant in $\Omega$, then \begin{equation} \label{EMT2} \mathbb{M} {\mathbf A} = |\Omega|(\mathbb{C}^1 - \mathbb{C}^0) {\mathbf B}, \end{equation} where $|\Omega|$ denotes the volume of $\Omega$. The EMT enjoys several important properties. For example, it is symmetric and positive-definite or negative-definite on the space $M_d^s$ of $d\times d$ symmetric matrices, depending on the sign of $\tilde\mu-\mu$. The notion of EMT is used in a variety of contexts such as detection of small elastic inclusions for non-destructive evaluation and medical imaging \cite{ACI_SIIS_08, AGKL, AGKL08, AKNT_JE_02, AK_book_07, KKL_IP_03, KKL_IP_07} and effective medium theory \cite{AK_book_07, AKL_IUM_06, milton}. 
Let us introduce more notation in order to recall the optimal trace bounds (the Hashin-Shtrikman bounds) for the EMT. Let \[ \mathbf{\Lambda}_{1}:=\frac{1}{d}{\mathbf{I}}\otimes{\mathbf{I}}, \quad{\mathbf{\Lambda}}_{2}:=\mathbb{I}-{\mathbf{\Lambda}}_{1}. \] Then the elasticity tensor $\mathbb{C}^{0}$ may be written as \[ \mathbb{C}^{0}= d \kappa \mathbf{\Lambda_{1}}+2\mu\mathbf{\Lambda_{2}}, \] and likewise for $\mathbb{C}^1$. Since for any $d\times d$ symmetric matrix ${\mathbf A}$, ${\mathbf{I}}\otimes{\mathbf{I}}({\mathbf A}) =\textrm{tr}\, ({\mathbf A}){\mathbf{I}}$ and ${\mathbb{I}}({\mathbf A})={\mathbf A}$, one can immediately see that \begin{equation}\label{eq:mbda11} {\mathbf{\Lambda}}_{1}{\mathbf{\Lambda}}_{1}={\mathbf{\Lambda}}_{1},\quad{\mathbf{\Lambda}}_{2} {\mathbf{\Lambda}}_{2}={\mathbf{\Lambda}}_{2},\quad{\mathbf{\Lambda}}_{1}{\mathbf{\Lambda}}_{2}=0. \end{equation} We are now able to recall the optimal trace bounds for the EMT. For $d=2,3$, let \begin{align} K_1 &:= \frac{1}{d (\tilde{\kappa}-\kappa)} \frac{d\tilde{\kappa}+2(d-1)\mu}{d\kappa+2(d-1)\mu} , \label{eq:kone} \\ K_2 &:= \frac{1}{2(\tilde{\mu}-\mu)} \left[ \frac{d^{2}+d-2}{2}+2\left(\tilde{\mu}-\mu\right) \left(\frac{d-1}{2\mu}+\frac{d-1}{d\kappa +2(d-1)\mu}\right)\right]. \label{eq:ktwo} \end{align} The following trace bounds were obtained by Lipton \cite{Lip_JMPS_93} (see also \cite{CK_AMO_08}): Suppose $|\Omega|=1$ and let $\mathbb{M}$ be the EMT associated with $\Omega$, then we have \begin{align} \textrm{tr}\, \left(\mathbf{\Lambda}_{1}\mathbb{M}^{-1}\mathbf{\Lambda}_{1}\right) & \leq K_1 , \label{eq:bd-itrace-1} \\ \textrm{tr}\, \left(\mathbf{\Lambda}_{2}\mathbb{M}^{-1}\mathbf{\Lambda}_{2}\right) & \leq K_2, \label{eq:bd-itrace-2} \end{align} provided that $\tilde{\kappa} -\kappa >0$. (If $\tilde{\kappa} -\kappa <0$, the inequalities change the direction.) 
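The projector identities \eqref{eq:mbda11} and the decomposition $\mathbb{C}^{0}= d \kappa \mathbf{\Lambda_{1}}+2\mu\mathbf{\Lambda_{2}}$ can be checked mechanically. The following numerical sketch (an illustration only; the Lam\'e parameters are arbitrary sample values, and the bulk modulus is normalized so that $d\kappa = d\lambda + 2\mu$, the normalization under which the decomposition above holds) verifies them for $d = 3$, with 4-tensors realized as arrays and compositions contracted over the middle pair of indices:

```python
import numpy as np

d = 3
delta = np.eye(d)

# (I (x) I)_{ijkl} = delta_{ij} delta_{kl}
II = np.einsum('ij,kl->ijkl', delta, delta)
# identity 4-tensor on symmetric matrices:
# Id_{ijkl} = (delta_{ik} delta_{jl} + delta_{il} delta_{jk}) / 2
Id = 0.5 * (np.einsum('ik,jl->ijkl', delta, delta)
            + np.einsum('il,jk->ijkl', delta, delta))

Lam1 = II / d          # Lambda_1
Lam2 = Id - Lam1       # Lambda_2

def comp(A, B):
    # composition of 4-tensors: (A B)_{ijkl} = sum_{pq} A_{ijpq} B_{pqkl}
    return np.einsum('ijpq,pqkl->ijkl', A, B)

# the identities (eq:mbda11)
assert np.allclose(comp(Lam1, Lam1), Lam1)
assert np.allclose(comp(Lam2, Lam2), Lam2)
assert np.allclose(comp(Lam1, Lam2), 0.0)

# spectral form of the background tensor: C^0 = d*kappa*Lam1 + 2*mu*Lam2,
# with the bulk modulus kappa determined by d*kappa = d*lam + 2*mu
lam, mu = 2.0, 1.0      # arbitrary sample Lame parameters
kappa = (d * lam + 2 * mu) / d
C0 = lam * II + 2 * mu * Id
assert np.allclose(C0, d * kappa * Lam1 + 2 * mu * Lam2)
```

In other words, $\mathbf{\Lambda}_{1}$ and $\mathbf{\Lambda}_{2}$ are the orthogonal projections onto the hydrostatic and deviatoric parts of a symmetric matrix, which is the content of \eqref{eq:mbda11}.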
Since $\mathbf{\Lambda}_{1}\mathbb{M}^{-1}\mathbf{\Lambda}_{1}$ and $\mathbf{\Lambda}_{2}\mathbb{M}^{-1}\mathbf{\Lambda}_{2}$ are block diagonal components for $\mathbb{M}^{-1}$, one can see that $$ \textrm{tr}\, \mathbb{M}^{-1} = \textrm{tr}\, (\mathbf{\Lambda}_{1}\mathbb{M}^{-1}\mathbf{\Lambda}_{1}) + \textrm{tr}\, (\mathbf{\Lambda}_{2}\mathbb{M}^{-1}\mathbf{\Lambda}_{2}), $$ and hence \begin{equation}\label{eq:lowbound} \textrm{tr}\, \mathbb{M}^{-1} \le K_1 + K_2. \end{equation} Note that $\mathbf{\Lambda}_{1}\mathbb{M} \mathbf{\Lambda}_{1}$ and $\mathbf{\Lambda}_{2}\mathbb{M} \mathbf{\Lambda}_{2}$ are the bulk and shear parts of $\mathbb{M}$, respectively. We also note that \eqnref{eq:bd-itrace-1} and \eqnref{eq:bd-itrace-2} are lower bounds for $\mathbb{M}$ since they are upper bounds for $\mathbb{M}^{-1}$. It is worth emphasizing that upper bounds for $\mathbb{M}$ are also derived in \cite{Lip_JMPS_93}. In \cite{CK_AMO_08}, it is shown that inclusions $\Omega$ whose EMT has trace close to the upper bound must be infinitely thin. The upper and lower bounds for the EMT may also be derived as a low volume fraction limit of the Hashin-Shtrikman bounds for the effective moduli of two-phase composites, which were obtained by Zhikov \cite{Zhi88, Zhi91} and Milton-Kohn \cite{MK88}. Benveniste \cite{Ben87} obtained upper and lower bounds for EMTs in the case when the EMT happens to be isotropic. (See also \cite{MMM81}.) In this paper we are interested in the shape of the inclusion whose EMT satisfies the equality in either \eqnref{eq:bd-itrace-1} or \eqnref{eq:bd-itrace-2}. This is an isoperimetric inequality for the EMT. In this direction we prove the following theorem: \begin{thm} \label{thm:main-thm} Let $\Omega$ be a simply connected bounded domain in $\mathbb{R}^3$ with a Lipschitz boundary. Suppose $|\Omega|=1$ and let $\mathbb{M}$ be the EMT associated with $\Omega$. 
If the equality holds in either \eqnref{eq:bd-itrace-1} or \eqnref{eq:bd-itrace-2}, then $\Omega$ is an ellipse in two dimensions and an ellipsoid in three dimensions. \end{thm} We remark that optimal shapes for a cavity (hole) in two dimensions were investigated by Cherkaev {\it et al} \cite{CGMS98} and Milton {\it et al} \cite{MSM03}. The dimension of the space of symmetric 4-tensors in the three dimensional space is 21, and hence the equalities \eqnref{eq:bd-itrace-1} and \eqnref{eq:bd-itrace-2} are satisfied on a 19 ($21-2$) dimensional surface in tensor space. However, ellipsoid geometries (with unit volume) cover only a 5 dimensional manifold within that 19 dimensional space. It is interesting to notice the similarity of Theorem \ref{thm:main-thm} to the P\'olya-Szeg\"o conjecture, which asserts that the inclusion whose polarization tensor has the smallest trace is a disk or a ball. The P\'olya-Szeg\"o conjecture was proved to be true by Kang-Milton \cite{KM_CONM_06, KM_ARMA_08}. As with the P\'olya-Szeg\"o conjecture, Theorem \ref{thm:main-thm}, which concerns elasticity, will be proved using Eshelby's conjecture. In order to prove Theorem \ref{thm:main-thm}, we will show that if equality holds in \eqnref{eq:bd-itrace-1}, then the strain tensor corresponding to a certain uniform loading ${\mathbf A}$ (with a special structure) is constant in $\Omega$, while if the equality holds in \eqnref{eq:bd-itrace-2}, then the strain tensors corresponding to five (two in 2D) linearly independent uniform loadings are constant in $\Omega$. Thus in two dimensions the strong Eshelby conjecture immediately implies that the inclusion is an ellipse. However, in three dimensions, the weak Eshelby conjecture does not guarantee that the inclusion is an ellipsoid. In order to apply the weak Eshelby conjecture, we need to have equalities in both \eqnref{eq:bd-itrace-1} and \eqnref{eq:bd-itrace-2}, or the equality in the whole lower trace bound \eqnref{eq:lowbound}.
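The dimension count made above is elementary arithmetic and can be spelled out explicitly (this is only bookkeeping, not part of the proof):

```python
# Symmetric 4-tensors on R^3 act on the space M_3^s of symmetric 3 x 3 matrices,
# so they may be identified with symmetric 6 x 6 matrices; each of the two trace
# equalities cuts one dimension, while unit-volume ellipsoids form a
# (3 semi-axes + 3 rotation angles - 1 volume constraint)-parameter family.
d = 3
dim_Mds = d * (d + 1) // 2                       # dim M_3^s = 6
dim_sym4 = dim_Mds * (dim_Mds + 1) // 2          # = 21
dim_surface = dim_sym4 - 2                       # = 19
dim_ellipsoids = 3 + 3 - 1                       # = 5
print(dim_sym4, dim_surface, dim_ellipsoids)
```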
But we are able to show additionally that if the equality holds in \eqnref{eq:bd-itrace-1} then the eigenvalues of the strain tensor are all the same, and that if the equality holds in \eqnref{eq:bd-itrace-2} then the strains corresponding to five linearly independent loadings are constant. Thus, thanks to Theorem \ref{mainthm} and Theorem \ref{mainthm2}, we are able to conclude that the inclusion is of ellipsoidal shape. This paper is organized as follows. In section 2, we show that the displacement vectors can be decomposed in a way similar to the Helmholtz decomposition. This is done using the single layer potential for the Lam\'e system. Theorem \ref{mainthm} is proved in section 3, Theorem \ref{mainthm2} in section 4, and Theorem \ref{thm:main-thm} in section 5. The appendix contains the proof of Lemma \ref{discrim}, which is used to prove Theorem \ref{mainthm2}. \section{Single layer potential} Let us first recall the notion of the single layer potential for the Lam\'e operator $\mathcal{L}_{\mathbb{C}}{\mathbf u}:=\nabla \cdot \mathbb{C} \mathcal{E}({\mathbf u})$. The Kelvin matrix ${\bf \Gamma} = ( \Gamma_{ij} )_{i,j=1}^3$ of the fundamental solution to the Lam{\'e} operator $\mathcal{L}_{\mathbb{C}}$ in three dimensions is given by \begin{equation} \label{Kelvin} \Gamma_{ij} ({\mathbf x}) := - \frac{\alpha_1}{4 \pi} \frac{\delta_{ij}}{|{\mathbf x}|} - \frac{\alpha_2}{4 \pi} \frac{x_i x_j}{|{\mathbf x}|^3}, \quad {\mathbf x} \neq 0\;, \end{equation} where \begin{equation} \label{ab} \alpha_1= \frac{1}{2} \left ( \frac{1}{\mu} + \frac{1}{2\mu + \lambda} \right ) \quad\mbox{and}\quad \alpha_2= \frac{1}{2} \left ( \frac{1}{\mu} - \frac{1}{ 2 \mu + \lambda} \right )\;.
\end{equation} The single layer potential of the vector-valued density function ${\mathbf f}$ on $\partial \Omega$ associated with the Lam{\'e} parameters $(\lambda, \mu)$ is defined by \begin{equation} \mathcal{S}_\Omega [{\mathbf f}] ({\mathbf x}) := \int_{\partial \Omega} {\bf \Gamma} ({\mathbf x}-{\mathbf y}) {\mathbf f} ({\mathbf y})\, d \sigma ({\mathbf y})\;, \quad {\mathbf x} \in \mathbb{R}^3\; . \label{singlelayer} \end{equation} Differentiating under the integral sign and using the identity $\nabla_{\mathbf x} \left(({\mathbf x}-{\mathbf y}) \cdot \mathbf{f}({\mathbf y})\right)=\mathbf{f}({\mathbf y})$, we have \begin{align*} \mathcal{S}_\Omega[\mathbf{f}] ({\mathbf x}) & = - \frac{\alpha_1}{4\pi} \int_{\partial \Omega} \frac{\mathbf{f}({\mathbf y})}{|{\mathbf x}-{\mathbf y}|} d\sigma({\mathbf y}) - \frac{\alpha_2}{4\pi} \int_{\partial \Omega} \frac{{\mathbf x}-{\mathbf y}}{|{\mathbf x}-{\mathbf y}|^3} ({\mathbf x}-{\mathbf y}) \cdot \mathbf{f}({\mathbf y}) d\sigma({\mathbf y})\\ & = - \frac{\alpha_1}{4\pi} \int_{\partial \Omega} \frac{\mathbf{f}({\mathbf y})}{|{\mathbf x}-{\mathbf y}|} d\sigma({\mathbf y}) + \frac{\alpha_2}{4\pi} \nabla \int_{\partial \Omega} \frac{({\mathbf x}-{\mathbf y}) \cdot \mathbf{f}({\mathbf y})}{|{\mathbf x}-{\mathbf y}|} d\sigma({\mathbf y}) \\ & \quad\quad - \frac{\alpha_2}{4\pi} \int_{\partial \Omega} \frac{1}{|{\mathbf x}-{\mathbf y}|} \nabla_{\mathbf x} \left(({\mathbf x}-{\mathbf y}) \cdot \mathbf{f}({\mathbf y})\right) d\sigma({\mathbf y}) \\ & = - \frac{\alpha_1+\alpha_2}{4\pi} \int_{\partial \Omega} \frac{\mathbf{f}({\mathbf y})}{|{\mathbf x}-{\mathbf y}|} d\sigma({\mathbf y}) + \frac{\alpha_2}{4\pi} \nabla \int_{\partial \Omega} \frac{({\mathbf x}-{\mathbf y}) \cdot \mathbf{f}({\mathbf y})}{|{\mathbf x}-{\mathbf y}|} d\sigma({\mathbf y}) \\ & = - \frac{\alpha_1+\alpha_2}{4\pi} \int_{\partial \Omega} \frac{\mathbf{f}({\mathbf y})}{|{\mathbf x}-{\mathbf y}|} d\sigma({\mathbf y}) + \frac{\alpha_2}{4\pi} \nabla \nabla \cdot \int_{\partial \Omega} |{\mathbf x}-{\mathbf y}| \mathbf{f}({\mathbf y}) d\sigma({\mathbf y}).
\end{align*} Since $\Delta |{\mathbf x}| = 2|{\mathbf x}|^{-1}$, we have \begin{equation} \label{scalform1} \mathcal{S}_\Omega[\mathbf{f}] ({\mathbf x}) = - \frac{\alpha_1+\alpha_2}{8\pi} \Delta \int_{\partial \Omega} |{\mathbf x}-{\mathbf y}| \mathbf{f}({\mathbf y}) d\sigma({\mathbf y}) + \frac{\alpha_2}{4\pi} \nabla \nabla \cdot \int_{\partial \Omega} |{\mathbf x}-{\mathbf y}| \mathbf{f}({\mathbf y}) d\sigma({\mathbf y}) \;. \end{equation} Let \begin{equation} \mathcal{H}_\Omega[\mathbf{f}]({\mathbf x}) := \frac{1}{4\pi}\int_{\partial \Omega} |{\mathbf x}-{\mathbf y}| \mathbf{f}({\mathbf y}) d\sigma({\mathbf y}) \;. \end{equation} Then, in summary, we have \begin{equation} \label{scalform2} \mathcal{S}_\Omega[\mathbf{f}] ({\mathbf x}) = - \frac{\alpha_1+\alpha_2}{2} \Delta \mathcal{H}_\Omega[\mathbf{f}]({\mathbf x}) + \alpha_2 \nabla \nabla \cdot \mathcal{H}_\Omega[\mathbf{f}]({\mathbf x}) \;. \end{equation} It is worth emphasizing that $\Delta^2 \mathcal{H}_\Omega[\mathbf{f}]=0$, {\it i.e.}, $\mathcal{H}_\Omega[\mathbf{f}]$ is biharmonic, in $\Omega$ and $\mathbb{R}^3 \setminus \overline{\Omega}$. Thus \eqnref{scalform2} shows that the solution to the Lam\'e system in the bounded domain $\Omega$ or in the exterior $\mathbb{R}^3 \setminus \overline{\Omega}$ can be decomposed into a part harmonic in $\Omega$ or $\mathbb{R}^3 \setminus \overline{\Omega}$ and a gradient part. Suppose that the solution ${\mathbf u}$ to \eqnref{main-eqn} inside $\Omega$ is given by \begin{equation}\label{inside} {\mathbf u} ({\mathbf x}) = {\mathbf B} {\mathbf x} + \mathbf{v}, \quad {\mathbf x} \in \Omega \end{equation} for some constant symmetric matrix ${\mathbf B}$ and a constant vector $\mathbf{v}$.
Then the solution is given by \begin{equation} \label{rep-1} \mathbf{u} ({\mathbf x}) = \begin{cases} {\mathbf A}{\mathbf x} + \mathcal{S}_\Omega [\mathbf{f}] ({\mathbf x})\;, \quad & {\mathbf x} \in \mathbb{R}^3 \setminus \overline \Omega\;, \\ {\mathbf B} {\mathbf x} + \mathbf{v} \;, \quad & {\mathbf x} \in \Omega\;, \end{cases} \end{equation} where \begin{equation} \mathbf{f}= (\mathbb{C}^1 - \mathbb{C}^0) \mathcal{E}({\mathbf B}{\mathbf x}) {\mathbf n} \; . \end{equation} Here ${\mathbf n}=(n_1,n_2,n_3)$ is the unit outward normal vector field to $\partial\Omega$. See \cite[Section 4]{KM_ARMA_08}. Note that \begin{equation} \label{vecscal} (\mathbb{C}^1 - \mathbb{C}^0) \mathcal{E}({\mathbf B}{\mathbf x}) {\mathbf n} = [(\tilde\lambda -\lambda) \mbox{tr} \, ({\mathbf B}) \mathbf{I} + 2(\tilde\mu-\mu) {\mathbf B}] {\mathbf n} . \end{equation} Let us put \begin{equation} \label{Bstar} {\mathbf B}^* := (\tilde\lambda -\lambda) \mbox{tr} \, ({\mathbf B}) \mathbf{I} + 2(\tilde\mu-\mu) {\mathbf B}, \end{equation} so that $\mathbf{f}$ in \eqnref{rep-1} is given by \begin{equation}\label{fBstar} \mathbf{f} = {\mathbf B}^* {\mathbf n}. \end{equation} According to \eqnref{scalform2}, we have \begin{equation} \mathcal{S}_\Omega[{\mathbf B}^*{\mathbf n}] ({\mathbf x}) = - \frac{\alpha_1+\alpha_2}{2} \Delta \mathcal{H}_\Omega[{\mathbf B}^*{\mathbf n}]({\mathbf x}) + \alpha_2 \nabla \nabla \cdot \mathcal{H}_\Omega[{\mathbf B}^*{\mathbf n}]({\mathbf x}) \;. \end{equation} One can easily see that \begin{equation} \mathcal{H}_\Omega[{\mathbf B}^*{\mathbf n}]({\mathbf x}) = {\mathbf B}^* \mathcal{H}_\Omega[{\mathbf n}]({\mathbf x})= -{\mathbf B}^* \nabla p_\Omega({\mathbf x}), \quad {\mathbf x} \in \Omega, \end{equation} where $p_\Omega$ is defined by \begin{equation}\label{pom} p_\Omega({\mathbf x}) := \frac{1}{4\pi}\int_{\Omega} |{\mathbf x}-{\mathbf y}| d{\mathbf y} \;, \quad {\mathbf x} \in \mathbb{R}^3 \; .
\end{equation} Therefore we have $$ \mathcal{S}_\Omega[{\mathbf B}^*{\mathbf n}] ({\mathbf x}) = \frac{\alpha_1+\alpha_2}{2} {\mathbf B}^* \nabla \Delta p_\Omega({\mathbf x}) - \alpha_2 \nabla \nabla \cdot {\mathbf B}^* \nabla p_\Omega({\mathbf x}) . $$ For a $3 \times 3$ symmetric matrix ${\mathbf B}$, let $\Delta_{\mathbf B}:= \nabla \cdot {\mathbf B} \nabla$. We note that $\Delta_{{\mathbf I}} = \Delta$, the usual Laplacian. We then define \begin{equation} \label{wom} w_\Omega^{\mathbf B} ({\mathbf x}) := \Delta_{\mathbf B} p_\Omega({\mathbf x}), \quad {\mathbf x} \in \mathbb{R}^3. \end{equation} In particular, we write $w_\Omega=w_\Omega^{{\mathbf I}}$. Then, one can easily see that \begin{equation} w_\Omega ({\mathbf x}) = \frac{2}{4\pi} \int_{\Omega} \frac{1}{|{\mathbf x}-{\mathbf y}|} d{\mathbf y}, \quad {\mathbf x} \in \mathbb{R}^3, \label{scal} \end{equation} which is ($2$ times) the Newtonian potential of $\Omega$. It is appropriate now to recall the proof of the weak Eshelby conjecture by Kang and Milton. In \cite{KM_ARMA_08}, the matter was reduced to the statement: `The Newtonian potential is quadratic in $\Omega$ if and only if $\Omega$ is an ellipsoid', which was proved by Dive \cite{div31} and Nikliborc \cite{nik32} in relation to the Newtonian potential problem (see also \cite{df86}). This statement can be rephrased as \begin{equation}\label{dive2} \mbox{$w_\Omega$ is quadratic in $\Omega$ if and only if $\Omega$ is an ellipsoid.} \end{equation} If we further put $\alpha:= \frac{\alpha_1+\alpha_2}{2\alpha_2}$, then we have \begin{equation}\label{linear2} \mathcal{S}_\Omega[{\mathbf B}^*{\mathbf n}] ({\mathbf x})= \frac{1}{\alpha_2} \left[ \alpha {\mathbf B}^* \nabla w_\Omega({\mathbf x}) - \nabla w_\Omega^{{\mathbf B}^*}({\mathbf x}) \right] . \end{equation} We emphasize that $\alpha > 1$.
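Two facts used above, $\Delta |{\mathbf x}| = 2|{\mathbf x}|^{-1}$ and $\alpha>1$, are easy to confirm numerically. The following sketch uses finite differences and sample Lam\'e constants (arbitrary values with $\mu>0$); the closed form $\alpha=(2\mu+\lambda)/(\mu+\lambda)$ is our own simplification of the definition of $\alpha$ via \eqnref{ab}:

```python
import numpy as np

def laplacian(f, x, h=1e-4):
    """Second-order central-difference Laplacian of a scalar field f at x."""
    x = np.asarray(x, dtype=float)
    out = 0.0
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        out += (f(x + e) - 2.0 * f(x) + f(x - e)) / h**2
    return out

x0 = np.array([1.0, 2.0, 2.0])                   # |x0| = 3, so Delta|x| should be 2/3
assert abs(laplacian(np.linalg.norm, x0) - 2.0 / 3.0) < 1e-5

lam, mu = 0.7, 1.3                               # sample Lame constants, mu > 0
a1 = 0.5 * (1.0 / mu + 1.0 / (2.0 * mu + lam))   # alpha_1
a2 = 0.5 * (1.0 / mu - 1.0 / (2.0 * mu + lam))   # alpha_2
alpha = (a1 + a2) / (2.0 * a2)
assert np.isclose(alpha, (2.0 * mu + lam) / (mu + lam))  # closed form
assert alpha > 1.0                               # alpha - 1 = mu/(mu + lam) > 0
print("alpha =", alpha)
```

Since $\alpha-1=\mu/(\mu+\lambda)$, the inequality $\alpha>1$ holds whenever $\mu>0$ and $\mu+\lambda>0$.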
\section{Proof of Theorem \ref{mainthm}} Suppose that the solution ${\mathbf u}$ to \eqnref{main-eqn} is linear in $\Omega$ and given by \eqnref{rep-1}. Then by \eqnref{fBstar} we have $$ \mathcal{S}_\Omega[{\mathbf B}^*{\mathbf n}] ({\mathbf x}) = ({\mathbf B}-{\mathbf A}){\mathbf x} + {\mathbf v}, \quad {\mathbf x} \in \Omega. $$ It then follows from \eqnref{linear2} that \begin{equation}\label{linear10} \alpha {\mathbf B}^* \nabla w_\Omega({\mathbf x}) - \nabla w_\Omega^{{\mathbf B}^*}({\mathbf x}) = \alpha_2 ({\mathbf B}-{\mathbf A}){\mathbf x} + \alpha_2 {\mathbf v} , \quad {\mathbf x} \in \Omega. \end{equation} Note that if the eigenvalues of ${\mathbf B}$ are either all the same or all distinct, then so are those of ${\mathbf B}^*$. After a rotation if necessary, we may assume that ${\mathbf B}^*$ is diagonal, say \begin{equation} {\mathbf B}^* = \mbox{diag} [b_1, b_2, b_3]. \end{equation} (i) Suppose first that all eigenvalues of ${\mathbf B}^*$ are the same, {\it i.e.}, $b_1=b_2=b_3=b$. In this case, since $w_\Omega^{{\mathbf B}^*} = b w_\Omega$, it follows from \eqnref{linear10} that $$ b (\alpha-1) \nabla w_\Omega = \mbox{linear in } \Omega. $$ Since $\alpha>1$, $w_\Omega$ is quadratic in $\Omega$, and hence $\Omega$ is an ellipsoid by \eqnref{dive2}. (ii) Suppose now that all eigenvalues of ${\mathbf B}^*$ are distinct, {\it i.e.}, $b_i \neq b_j$ if $i \neq j$. In this case, \eqnref{linear10} yields that $$ \pd{}{x_j} \left(\alpha b_j w_\Omega - w_\Omega^{{\mathbf B}^*} \right) = \mbox{linear in } \Omega, \quad j=1,2,3, $$ and hence $$ \alpha b_j w_\Omega - w_\Omega^{{\mathbf B}^*} \approx f_j ({\mathbf x}) \quad \mbox{in } \Omega, \quad j=1,2,3, $$ for some function $f_j$ which is independent of $x_j$. Here and afterwards $\approx$ denotes equality up to a quadratic function.
It then follows that \begin{equation}\label{aomb} \alpha w_\Omega \approx \frac{f_1-f_2}{b_1-b_2}, \quad w_\Omega^{{\mathbf B}^*} \approx \frac{b_2 f_1- b_1 f_2}{b_1-b_2}, \end{equation} and \begin{equation}\label{fonetwo} (b_3-b_2) f_1 + (b_1-b_3) f_2 + (b_2-b_1) f_3 \approx 0. \end{equation} Since $f_j$ is independent of $x_j$ for $j=1,2,3$, one can easily see that \eqnref{fonetwo} holds only when $f_1$, $f_2$ and $f_3$ take the form \begin{align*} f_1 ({\mathbf x}) &\approx \frac{m(x_3)-n(x_2)}{b_3-b_2}, \\ f_2 ({\mathbf x}) &\approx \frac{r(x_1) -m(x_3)}{b_1-b_3}, \\ f_3 ({\mathbf x}) &\approx \frac{n(x_2)-r(x_1)}{b_2-b_1}, \end{align*} for some functions $m$, $n$ and $r$. It then follows from \eqnref{aomb} that \begin{align} \alpha w_\Omega &\approx \frac{m(x_3)}{(b_3-b_2)(b_1-b_3)} + \frac{n(x_2)}{(b_2-b_1)(b_3-b_2)} + \frac{r(x_1)}{(b_1-b_3)(b_2-b_1)}. \label{awom} \end{align} Since $\Delta w_\Omega$ is constant in $\Omega$, we have $$ \frac{m''(x_3)}{(b_3-b_2)(b_1-b_3)} + \frac{n''(x_2)}{(b_2-b_1)(b_3-b_2)} + \frac{r''(x_1)}{(b_1-b_3)(b_2-b_1)} = \mbox{constant}. $$ Thus $r$, $n$, and $m$ are quadratic functions of $x_1$, $x_2$, and $x_3$, respectively, and hence $w_\Omega$ is quadratic in $\Omega$. Thus $\Omega$ is an ellipsoid. This completes the proof. \hfill \ensuremath{\square} \section{Proof of Theorem \ref{mainthm2}} In order to prove Theorem \ref{mainthm2}, we use the following lemma, whose proof will be given in the appendix. \begin{lem}\label{discrim} Let ${\mathbf B}_1$ and ${\mathbf B}_2$ be two symmetric $3\times 3$ matrices. If ${\mathbf B}_1+t{\mathbf B}_2$ has a multiple eigenvalue for all real numbers $t$, then ${\mathbf B}_1$ and ${\mathbf B}_2$ can be diagonalized by the same orthogonal matrix.
\end{lem} Let ${\mathbf A}_1$ and ${\mathbf A}_2$ be two linearly independent symmetric $3 \times 3$ matrices and suppose that solutions ${\mathbf u}_1$ and ${\mathbf u}_2$ to \eqnref{main-eqn} with ${\mathbf A}={\mathbf A}_1$ and ${\mathbf A}={\mathbf A}_2$ are linear in $\Omega$. Put ${\mathbf B}_j := \mathcal{E}({\mathbf u}_j)$, $j=1,2$. Since the EMT is positive or negative definite on $M_d^s$ (Theorem 10.6 of \cite{AK_book_07}), \eqnref{EMT2} shows that ${\mathbf B}_1$ and ${\mathbf B}_2$ are linearly independent. According to \eqnref{fBstar} we have \begin{align*} \mathcal{S}_\Omega[{\mathbf B}_1^*{\mathbf n}] ({\mathbf x}) = ({\mathbf B}_1-{\mathbf A}_1){\mathbf x} + {\mathbf v}_1, \quad {\mathbf x} \in \Omega,\\ \mathcal{S}_\Omega[{\mathbf B}_2^*{\mathbf n}] ({\mathbf x}) = ({\mathbf B}_2-{\mathbf A}_2){\mathbf x} + {\mathbf v}_2, \quad {\mathbf x} \in \Omega. \end{align*} It then follows from \eqnref{linear2} that \begin{align}\label{linear103}\begin{cases} \alpha {\mathbf B}_1^* \nabla w_\Omega({\mathbf x}) - \nabla w_\Omega^{{\mathbf B}_1^*}({\mathbf x}) = \alpha_2 ({\mathbf B}_1-{\mathbf A}_1){\mathbf x} + \alpha_2 {\mathbf v}_1 , \\ \alpha {\mathbf B}_2^* \nabla w_\Omega({\mathbf x}) - \nabla w_\Omega^{{\mathbf B}_2^*}({\mathbf x}) = \alpha_2 ({\mathbf B}_2-{\mathbf A}_2){\mathbf x} + \alpha_2 {\mathbf v}_2, \end{cases}\end{align} for $ {\mathbf x} \in \Omega.$ Let us suppose that all of ${\mathbf B}_1$, ${\mathbf B}_2$, and ${\mathbf B}_1+t{\mathbf B}_2$ $(t \in \mathbb{R})$ have an eigenvalue of multiplicity 2 (otherwise we apply Theorem \ref{mainthm} to conclude that $\Omega$ is an ellipsoid). By Lemma \ref{discrim}, ${\mathbf B}_1$ and ${\mathbf B}_2$ can be diagonalized by a single orthogonal matrix. Thus we may assume that ${\mathbf B}_1$ and ${\mathbf B}_2$ are diagonal. 
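As a side illustration of Lemma \ref{discrim} (a numerical sketch with sample matrices, not part of the proof): a simultaneously diagonal pair with matching repeated entries keeps a multiple eigenvalue along the whole pencil ${\mathbf B}_1+t{\mathbf B}_2$, whereas a generic symmetric matrix in place of ${\mathbf B}_2$ loses the multiplicity for some $t$.

```python
import numpy as np

# min_gap(B) = smallest gap between consecutive eigenvalues of a symmetric B;
# it vanishes exactly when B has a multiple eigenvalue.
def min_gap(B):
    ev = np.sort(np.linalg.eigvalsh(B))
    return np.min(np.diff(ev))

B1 = np.diag([1.0, 1.0, 2.0])       # sample matrices with a common eigenbasis
B2 = np.diag([3.0, 3.0, 5.0])
ts = np.linspace(-4.0, 4.0, 81)

# B1 + t*B2 = diag(1+3t, 1+3t, 2+5t): a double eigenvalue for every t
assert all(min_gap(B1 + t * B2) < 1e-9 for t in ts)

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
C = (M + M.T) / 2                    # a generic symmetric matrix
# for a generic C the pencil B1 + t*C has three distinct eigenvalues for some t
assert any(min_gap(B1 + t * C) > 1e-6 for t in ts)
print("Lemma illustration passed")
```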
Then from \eqnref{Bstar} ${\mathbf B}_1^*$ and ${\mathbf B}_2^*$ are also diagonal and we may let $$ {\mathbf B}_1^*=\mbox{diag}[b_1,b_1,c_1],\quad {\mathbf B}_2^*=\mbox{diag}[b_2,b_2,c_2], $$ where $b_1\ne c_1$ and $b_2\ne c_2$. Since ${\mathbf B}_1^*$ and ${\mathbf B}_2^*$ are linearly independent, we have $$ b_1c_2 \ne c_1 b_2. $$ By \eqnref{linear103}, we have \begin{align} &\alpha b_1 w_\Omega - w^{{\mathbf B}_1^*}_\Omega \approx f(x_3),\label{l1}\\ &\alpha c_1 w_\Omega - w^{{\mathbf B}_1^*}_\Omega \approx g(x_1,x_2),\label{l2}\\ &\alpha b_2 w_\Omega - w^{{\mathbf B}_2^*}_\Omega \approx h(x_3),\label{l3}\\ &\alpha c_2 w_\Omega - w^{{\mathbf B}_2^*}_\Omega \approx l(x_1,x_2),\label{l4} \end{align} for some functions $f$, $g$, $h$, and $l$. Here again $\approx$ denotes the equality up to a quadratic function. By \eqnref{wom}, we have from \eqnref{l1} and \eqnref{l3} \begin{equation} (\alpha-1)b_2 f(x_3)-(\alpha-1)b_1h(x_3) \approx (\alpha-1)(b_1c_2-b_2c_1) \frac{\partial^2p_\Omega}{\partial x_3^2}, \end{equation} and from \eqnref{l2} and \eqnref{l4} \begin{equation} (b_2-\alpha c_2)g(x_1,x_2)-(b_1-\alpha c_1)l(x_1,x_2) \approx (1-\alpha)(b_1c_2-b_2c_1) \frac{\partial^2p_\Omega}{\partial x_3^2}. \end{equation} It then follows that \begin{equation}\label{p3} \frac{\partial^2p_\Omega}{\partial x_3^2} \approx 0. \end{equation} We then obtain from \eqnref{l1}-\eqnref{l4} that \begin{align*} (\alpha-1) b_1 w_\Omega \approx f(x_3),\quad (\alpha-1) b_2 w_\Omega \approx h(x_3), \end{align*} and $$ (\alpha c_1-b_1) w_\Omega \approx g(x_1,x_2), \quad (\alpha c_2-b_2) w_\Omega \approx l(x_1,x_2). $$ Thus we conclude that $w_\Omega \approx 0$, and hence $\Omega$ is an ellipsoid. This completes the proof. 
\hfill \ensuremath{\square} \section{Proof of Theorem \ref{thm:main-thm}} The space $M_d^s$ is equipped with the inner product ${\mathbf A}:{\mathbf B}$, where ${\mathbf A}:{\mathbf B}$ denotes the contraction of two matrices ${\mathbf A}$ and ${\mathbf B}$, {\it i.e.}, ${\mathbf A}:{\mathbf B}=\sum_{i,j} a_{ij}b_{ij}=\textrm{tr}({\mathbf A}^{T}{\mathbf B})$, where $\textrm{tr}({\mathbf A})$ denotes the trace of ${\mathbf A}$. For $d=2,3$, let $d_*:= \frac{d(d+1)}{2}$, which is the dimension of $M_d^s$. Let ${\mathbf B}_1=\frac{1}{\sqrt{d}} \mathbf{I}$ be a basis for $\mathbf{\Lambda}_1(M_d^s)$ (of unit length), and $\{ {\mathbf B}_2, \ldots, {\mathbf B}_{d_*} \}$ be an orthonormal basis for $\mathbf{\Lambda}_2(M_d^s)$. Then $\{ {\mathbf B}_1, \ldots, {\mathbf B}_{d_*} \}$ is an orthonormal basis for $M_d^s$, {\it i.e.}, $$ {\mathbf B}_i : {\mathbf B}_j = \delta_{ij}, $$ where $\delta_{ij}$ is Kronecker's delta. Note that for any symmetric $4$-tensor $\mathbb{T}$, we have \begin{equation} \textrm{tr}\, \mathbb{T} = \sum_{k=1}^{d_*} \mathbb{T} {\mathbf B}_k : {\mathbf B}_k \, . \end{equation} We deal with the case when $\mathbb{C}^1 - \mathbb{C}^0$ is positive definite so that $\mathbb{M}$ is a symmetric positive-definite linear operator on $M^s_d$. The other case can be treated in exactly the same way. Let us first invoke some facts proved in \cite{CK_AMO_08}.
Introduce a $4$-tensor $\widetilde{\mathbb{M}}$ by \begin{equation} \label{wmm} \begin{array}{rl} \displaystyle {\mathbf A}:\widetilde{\mathbb{M}} {\mathbf A} & = \displaystyle \min_{{\mathbf{v}}\in H^{1}\left(\mathbb{R}^{d}\right)}\int_{\mathbb{R}^{d}}\mathbb{C}_{\Omega} \left(\mathcal{E}({\mathbf{v}})+1_{\Omega}\mathbb{G} {\mathbf A}\right):\left(\mathcal{E}({\mathbf{v}})+1_{\Omega}\mathbb{G} {\mathbf A}\right)dx\\ \noalign{\medskip} & \displaystyle \quad \quad+\left|\Omega\right|\left(\mathbb{C}^{1}-\mathbb{C}^{0}\right)\left(\mathbb{C}^{1}\right)^{-1}\mathbb{C}^{0}{\mathbf A}:{\mathbf A} \end{array} \end{equation} for ${\mathbf A} \in M_d^s$, where \begin{equation} \mathbb{G} :=\mathbb{I}-(\mathbb{C}^{1})^{-1}\mathbb{C}^{0}. \end{equation} Note that the minimum in \eqnref{wmm} is attained by $\mathbf{v}=\mathbf{u} - {\mathbf A}\mathbf{x}$, where ${\mathbf u}$ is the solution of \eqnref{main-eqn}. It is proved in \cite[Corollary 3.2]{CK_AMO_08} that \begin{equation}\label{eq:wmmmm} \mathbf{\Lambda}_1 \widetilde{\mathbb{M}} \mathbf{\Lambda}_1 = \mathbf{\Lambda}_1 \mathbb{M} \mathbf{\Lambda}_1 \quad \mbox{and} \quad \mathbf{\Lambda}_2 \widetilde{\mathbb{M}} \mathbf{\Lambda}_2 = \mathbf{\Lambda}_2 \mathbb{M} \mathbf{\Lambda}_2 \, . \end{equation} In particular, we have \begin{equation} \textrm{tr}\, (\mathbf{\Lambda}_{1}\widetilde{\mathbb{M}}^{-1}\mathbf{\Lambda}_{1})= \textrm{tr}\, (\mathbf{\Lambda}_{1}\mathbb{M}^{-1}\mathbf{\Lambda}_{1}) \quad \mbox{and} \quad \textrm{tr}\, (\mathbf{\Lambda}_{2}\widetilde{\mathbb{M}}^{-1}\mathbf{\Lambda}_{2})= \textrm{tr}\, (\mathbf{\Lambda}_{2}\mathbb{M}^{-1}\mathbf{\Lambda}_{2}). \end{equation} Let $\mathbb{C}$ be an isotropic $4$-tensor, {\it i.e.}, \[ \mathbb{C}=\lambda\mathbf{I}\otimes\mathbf{I} +2\mu\mathbb{I} =d \kappa \mathbf{\Lambda_{1}}+2\mu\mathbf{\Lambda_{2}}, \] for some $\lambda$ and $\mu$ satisfying \eqnref{eq:trans-2}. 
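The decomposition of an isotropic tensor used above can be checked numerically; in the sketch below the moduli are arbitrary sample values, and we use the standard identification $\kappa=\lambda+2\mu/d$:

```python
import numpy as np

# Check that C A = lambda tr(A) I + 2 mu A equals
# d*kappa*Lambda_1(A) + 2*mu*Lambda_2(A) with kappa = lambda + 2*mu/d.
d = 3
lam, mu = 0.8, 1.1                   # illustrative Lame constants
kappa = lam + 2.0 * mu / d

rng = np.random.default_rng(2)
M = rng.standard_normal((d, d))
A = (M + M.T) / 2                    # a random symmetric matrix

L1A = (np.trace(A) / d) * np.eye(d)  # Lambda_1(A)
L2A = A - L1A                        # Lambda_2(A)

CA_iso = lam * np.trace(A) * np.eye(d) + 2.0 * mu * A
CA_dec = d * kappa * L1A + 2.0 * mu * L2A
assert np.allclose(CA_iso, CA_dec)
print("isotropic decomposition ok")
```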
Let $L^{2}\left(\mathbb{R}^{d},M_{d}^{s}\right)$ be the space of square integrable functions on $\mathbb{R}^d$ valued in $M_{d}^{s}$, and let $H^1(\mathbb{R}^d, M_d^s)$ be the corresponding Sobolev space. For ${\mathbf P} \in L^{2}\left(\mathbb{R}^{d},M_{d}^{s}\right)$, we define $F_\mathbb{C}({\mathbf P})$ by \begin{equation} \label{fcom} F_{\mathbb{C}}({\mathbf P}):= - \mathcal{E} \mathcal{L}_{\mathbb{C}}^{-1} (\nabla\cdot {\mathbf P}), \end{equation} where $\mathcal{L}_{\mathbb{C}} = \nabla\cdot \mathbb{C} \mathcal{E}$. In other words, if $\Phi$ is the unique solution in $H^1(\mathbb{R}^d, M_d^s)$ to \begin{equation}\label{LCC} \mathcal{L}_{\mathbb{C}}\left(\Phi \right) + \nabla \cdot {\mathbf P} = 0, \end{equation} then $F_{\mathbb{C}}({\mathbf P})$ is given by $$ F_{\mathbb{C}}({\mathbf P})= \mathcal{E} \left(\Phi \right). $$ If $\Phi$ is the solution to \eqnref{LCC}, then $$ \int_{\mathbb{R}^{d}} \mathbb{C} \left(\mathcal{E} \left(\Phi \right) + \mathbb{C}^{-1}{\mathbf P}\right) : \mathcal{E} \left(\Psi \right) =0 $$ for all $\Psi \in H^1(\mathbb{R}^d, M_d^s)$, and hence by taking $\Psi=\Phi$ we have \begin{equation}\label{eq:FCnegdef} \int_{\mathbb{R}^{d}} {\mathbf P} : F_{\mathbb{C}}({\mathbf P}) = - \int_{\mathbb{R}^{d}} \mathbb{C} F_{\mathbb{C}}({\mathbf P}):F_{\mathbb{C}}({\mathbf P}) \, . \end{equation} We prove Theorem~\ref{thm:main-thm} using the following two propositions, whose proofs will be given at the end of this section. \begin{prop}\label{pro:AC0} Let \begin{equation}\label{eq:EAC0} E_{\mathbf A}(\mathbb{C}^0,{\mathbf P})= \int_{\Omega} {\mathbf P}:F_{\mathbb{C}^0}\left(1_\Omega{\mathbf P}\right)+\int_{\Omega}\left(\mathbb{C}^{0} - \mathbb{C}^{1}\right)^{-1}{\mathbf P}:{\mathbf P} + 2\int_{\Omega}{\mathbf P}:{\mathbf A} \, . \end{equation} Then the following holds: \begin{equation}\label{eq:HS-3} {\mathbf A}:\widetilde{\mathbb{M}} {\mathbf A} = \sup_{{\mathbf P}\in L^{2}(\mathbb{R}^{d}:M_{d}^{s}) } E_{\mathbf A}(\mathbb{C}^0,{\mathbf P}).
\end{equation} Furthermore, this supremum is attained by ${\mathbf P}=1_\Omega \left(\mathbb{C}^{1}-\mathbb{C}^{0}\right) \mathcal{E}(\mathbf{u})$, where $\mathbf{u}$ is the solution of \eqnref{main-eqn}. \end{prop} We then show that inclusions attaining the lower trace bounds have a particular structure, as explained in the proposition below. \begin{prop}\label{pro:trace-opt} If equality in \eqnref{eq:bd-itrace-1} holds, then we have \begin{equation} \label{tropt1} E_{\mathbf A}(\mathbb{C}^0,1_\Omega {\mathbf B}_1) = \sup_{ {\mathbf P}\in L^{2}(\mathbb{R}^{d}:M_{d}^{s}) } E_{\mathbf A}(\mathbb{C}^0,{\mathbf P}) \end{equation} with ${\mathbf A}= \mathbb{M}^{-1} {\mathbf B}_1$. If equality in \eqnref{eq:bd-itrace-2} holds, then we have \begin{equation} \label{tropt2} E_{\mathbf A}(\mathbb{C}^0,1_\Omega {\mathbf B}_k) = \sup_{ {\mathbf P}\in L^{2}(\mathbb{R}^{d}:M_{d}^{s})} E_{\mathbf A}(\mathbb{C}^0,{\mathbf P}) \end{equation} with ${\mathbf A}= \mathbb{M}^{-1} {\mathbf B}_k$, for $k=2, \ldots, d_*$. \end{prop} \medskip \noindent{\sl Proof of Theorem \ref{thm:main-thm}}. Introduce a bilinear form $\mathcal{F}_{\mathbb{C}^0}({\mathbf Q},{\mathbf R})$ by $$ \mathcal{F}_{\mathbb{C}^0}({\mathbf Q},{\mathbf R}) = \int_{\Omega} {\mathbf Q}:F_{\mathbb{C}^0}\left(1_\Omega{\mathbf R}\right) +\int_{\Omega}\left(\mathbb{C}^{0} - \mathbb{C}^{1}\right)^{-1}{\mathbf Q}:{\mathbf R} \, .
$$ It follows from \eqnref{eq:FCnegdef} that \begin{align*} \mathcal{F}_{\mathbb{C}^0}(1_\Omega {\mathbf Q},1_\Omega {\mathbf Q}) & = - \int_{\Omega} \mathbb{C}^0 F_{\mathbb{C}^0}\left(1_\Omega{\mathbf Q}\right) :F_{\mathbb{C}^0}\left(1_\Omega{\mathbf Q}\right) - \int_{\Omega}\left(\mathbb{C}^{1} - \mathbb{C}^{0}\right)^{-1}\left(1_\Omega{\mathbf Q}\right):\left(1_\Omega{\mathbf Q}\right) \\ &\leq - \int_{\Omega}\left(\mathbb{C}^{1} - \mathbb{C}^{0}\right)^{-1}\left(1_\Omega{\mathbf Q}\right):\left(1_\Omega{\mathbf Q}\right) \\ &\leq - K \left\| 1_\Omega{\mathbf Q}\right\|^2_{L^{2}(\mathbb{R}^{d}:M_{d}^{s})} \end{align*} for some positive constant $K$. The last inequality holds due to the positive-definiteness of $\mathbb{C}^{1} - \mathbb{C}^{0}$. As a consequence, $\mathcal{F}_{\mathbb{C}^0}$ is negative definite when restricted to $H=\{{\mathbf P} \in L^{2}(\mathbb{R}^{d}:M_{d}^{s}) \mbox{ supported in } \Omega \}$. Therefore $E_{\mathbf A}(\mathbb{C}^0,{\mathbf Q})=\mathcal{F}_{\mathbb{C}^0}({\mathbf Q},{\mathbf Q}) + 2\int_{\Omega}{\mathbf Q}:{\mathbf A} $ is a strictly concave functional on $H$, and therefore admits at most one maximizer in $H$. We observe that since $\mathbb{C}^{1}-\mathbb{C}^{0}$ is isotropic, if ${\mathbf B}$ is diagonal and all the eigenvalues are the same, so is $(\mathbb{C}^{1}-\mathbb{C}^{0})^{-1} {\mathbf B}$. If ${\mathbf B}$ is trace-free and all the eigenvalues are distinct, so is $(\mathbb{C}^{1}-\mathbb{C}^{0})^{-1} {\mathbf B}$. Suppose that equality holds in \eqnref{eq:bd-itrace-1}. It then follows from Proposition~\ref{pro:AC0}, Proposition~\ref{pro:trace-opt}, and the uniqueness of the maximizer in $H$ that $$ \left(\mathbb{C}^{1}-\mathbb{C}^{0}\right)\mathcal{E}(\mathbf{u}_1) = {\mathbf B}_1 \textrm{ in } \Omega, $$ where ${\mathbf u}_1$ is the solution to \eqnref{main-eqn} with ${\mathbf A}=\mathbb{M}^{-1} {\mathbf B}_1$. Recall that ${\mathbf B}_1=\frac{1}{\sqrt{d}} {\mathbf I}$.
Therefore, $\mathcal{E}({\mathbf u}_1)$ is constant in $\Omega$ and all the eigenvalues of $\mathcal{E}({\mathbf u}_1)$ are the same. Thus $\Omega$ is an ellipse or an ellipsoid due to Theorem \ref{mainthm}. Suppose now that equality holds in \eqnref{eq:bd-itrace-2}. Then for similar reasons we can deduce that for each $k=2, \ldots, d_*$, $$ \left(\mathbb{C}^{1}-\mathbb{C}^{0}\right)\mathcal{E}(\mathbf{u}_k) = {\mathbf B}_k \textrm{ in } \Omega, $$ where ${\mathbf u}_k$ is the solution to \eqnref{main-eqn} with ${\mathbf A}=\mathbb{M}^{-1} {\mathbf B}_k$. Thus $\Omega$ is an ellipse or an ellipsoid due to Theorem \ref{mainthm2}. This completes the proof. \hfill \ensuremath{\square} \medskip \noindent{\sl Proof of Proposition \ref{pro:AC0}}. Following the notation of \cite{CK_AMO_08}, we define $W_{\mathbf A}(\mathbb{C},{\mathbf P})$, for ${\mathbf A} \in M_d^s$, by $$ W_{\mathbf A}(\mathbb{C},{\mathbf P})= \int_{\mathbb{R}^{d}}{\mathbf P}:F_{\mathbb{C}}{\mathbf P}+\int_{\mathbb{R}^{d}}\left(\mathbb{C}- \mathbb{C}_{\Omega}\right)^{-1}{\mathbf P}:{\mathbf P} + 2\int_{\Omega}\!{\mathbf P}:\left(\mathbb{C}^{1}-\mathbb{C}\right)^{-1}\left(\mathbb{C}^{1}-\mathbb{C}^{0}\right){\mathbf A} . $$ It is proved in \cite[Proposition 4.1]{CK_AMO_08}, following the variational strategy given in \cite{KM_IMA_86} for the derivation of Hashin-Shtrikman type bounds, that for any isotropic elasticity tensor $\mathbb{C}<\mathbb{C}^{0} (< \mathbb{C}^1)$ we have \begin{equation}\label{eq:HS-1} {\mathbf A}:\widetilde{\mathbb{M}} {\mathbf A}= {\mathbf A}:\left(\mathbb{C}^{1}-\mathbb{C}^{0}\right)\left(\mathbb{C}-\mathbb{C}^{1}\right)^{-1}\left(\mathbb{C}-\mathbb{C}^{0}\right){\mathbf A} + \sup_{{\mathbf P}\in L^{2}\left(\mathbb{R}^{d}:M_{d}^{s}\right)}W_{\mathbf A}(\mathbb{C},{\mathbf P}). 
\end{equation} Note that the supremum is attained by \begin{equation}\label{eq:sigmaopt} {\mathbf P} = 1_\Omega \left(\mathbb{C}^{1}-\mathbb{C}^{0}\right) {\mathbf A} + \left(\mathbb{C}_\Omega-\mathbb{C}\right)\left(\mathcal{E}(\mathbf{u})-{\mathbf A}\right), \end{equation} where ${\mathbf u}$ is the solution to \eqnref{main-eqn} with this ${\mathbf A}$. Since $(\mathbb{C}^{1}-\mathbb{C}^{0}) (\mathbb{C}-\mathbb{C}^{1})^{-1} (\mathbb{C}-\mathbb{C}^{0})$ is positive definite, by sending $\mathbb{C}$ to $\mathbb{C}^0$ and restricting the supremum to fields ${\mathbf P}$ such that $1_\Omega {\mathbf P} ={\mathbf P}$, we obtain \begin{equation} {\mathbf A}:\widetilde{\mathbb{M}} {\mathbf A} \ge \sup_{ {\mathbf P}\in L^{2}(\mathbb{R}^{d},M_{d}^{s})} E_{\mathbf A}(\mathbb{C}^0,{\mathbf P}) \, . \end{equation} For any ${\mathbf P} \in L^{2}\left(\mathbb{R}^{d},M_{d}^{s}\right)$ and any positive definite isotropic elasticity tensor $\mathbb{C} < \mathbb{C}^0$, we define $E_{\mathbf A}(\mathbb{C},{\mathbf P})$, for ${\mathbf A} \in M_d^s$, by $$ E_{\mathbf A}(\mathbb{C},{\mathbf P})= \int_{\Omega}\! {\mathbf P}:\!F_{\mathbb{C}}\left(1_\Omega{\mathbf P}\right)+\int_{\Omega}\!\left(\mathbb{C} - \mathbb{C}^{1}\right)^{-1}\!{\mathbf P}:{\mathbf P} + 2\int_{\Omega}\!{\mathbf P}:\!\left(\mathbb{C}^{1}-\mathbb{C}\right)^{-1}\!\left(\mathbb{C}^{1}-\mathbb{C}^{0}\right)\!{\mathbf A} . $$ Note that this definition is consistent with that of $E_{\mathbf A}(\mathbb{C}^0,{\mathbf P})$ given in \eqnref{eq:EAC0} by passing to the limit in $\mathbb{C}$.
Introducing the decomposition ${\mathbf P} = {\mathbf P}_\Omega +{\mathbf P}_U$, with ${\mathbf P}_\Omega 1_\Omega ={\mathbf P}_\Omega$ and ${\mathbf P}_U 1_\Omega \equiv 0$, we have \begin{eqnarray*} W_{\mathbf A}(\mathbb{C},{\mathbf P})&=&E_{\mathbf A}(\mathbb{C},{\mathbf P}_\Omega) + W_{\mathbf A}(\mathbb{C},{\mathbf P}_U) \\ &+& \int_{\mathbb{R}^{d}}{\mathbf P}_\Omega:F_{\mathbb{C}}{\mathbf P}_U + \int_{\mathbb{R}^{d}}{\mathbf P}_U:F_{\mathbb{C}}{\mathbf P}_\Omega. \end{eqnarray*} Let $\mathbb{C}=\mathbb{C}^0 - \varepsilon \mathbb{I}$, where $\varepsilon>0$. Then we have $$ W_{\mathbf A}(\mathbb{C},{\mathbf P})= E_{\mathbf A}(\mathbb{C},{\mathbf P}_\Omega) - \varepsilon^{-1}\|{\mathbf P}_U\|^2_{L^2\left(\mathbb{R}^{d}\right)} + R({\mathbf P}_U,{\mathbf P}_\Omega), $$ where $$ R\left({\mathbf P}_U,{\mathbf P}_\Omega\right) := \int_{\mathbb{R}^{d}}{\mathbf P}_\Omega:F_{\mathbb{C}}{\mathbf P}_U + \int_{\mathbb{R}^{d}}{\mathbf P}_U:F_{\mathbb{C}}{\mathbf P}_\Omega + \int_{\mathbb{R}^{d}}{\mathbf P}_U:F_{\mathbb{C}}{\mathbf P}_U . $$ By integration by parts and by the Cauchy-Schwarz inequality, we readily obtain that for $\varepsilon$ small enough $R\left({\mathbf P}_U,{\mathbf P}_\Omega\right)$ satisfies $$ \left|R\left({\mathbf P}_U,{\mathbf P}_\Omega\right) \right| \leq K \|{\mathbf P}_U\|_{L^2\left(\mathbb{R}^{d}\right)} \left( \|{\mathbf P}_U\|_{L^2\left(\mathbb{R}^{d}\right)} + \|{\mathbf P}_\Omega\|_{L^2\left(\mathbb{R}^{d}\right)} \right), $$ where the constant $K$ is independent of ${\mathbf P}_U$, ${\mathbf P}_\Omega$, and $\varepsilon$. As a consequence, for $\varepsilon$ small enough, $$ - \varepsilon^{-1}\|{\mathbf P}_U\|^2_{L^2\left(\mathbb{R}^{d}\right)} + R({\mathbf P}_U,{\mathbf P}_\Omega) \leq 3K \varepsilon \|{\mathbf P}_\Omega\|_{L^2\left(\mathbb{R}^{d}\right)}^2.
$$ Note that from \eqnref{eq:FCnegdef} the term $\int_{\Omega} {\mathbf P}_\Omega:F_{\mathbb{C}}\left({\mathbf P}_\Omega\right)$ is nonpositive, and therefore $$ E_{{\mathbf A}}(\mathbb{C},{\mathbf P}_\Omega)\leq \tilde{K} \|{\mathbf P}_\Omega\|_{L^2\left(\mathbb{R}^{d}\right)} (- \|{\mathbf P}_\Omega\|_{L^2\left(\mathbb{R}^{d}\right)} + 1), $$ where $\tilde{K}$ is another constant independent of ${\mathbf P}_\Omega$ and $\varepsilon$. Thus, near the supremum, $\|{\mathbf P}_\Omega\|_{L^2\left(\mathbb{R}^{d}\right)}$ must stay bounded uniformly with respect to $\varepsilon$. Taking the limit as $\varepsilon$ tends to zero, we obtain \eqnref{eq:HS-3}. Replacing $\mathbb{C}$ by $\mathbb{C}^0$ in \eqnref{eq:sigmaopt} concludes the proof. \hfill \ensuremath{\square} \medskip \noindent{\sl Proof of Proposition~\ref{pro:trace-opt}}. Given $k\in\{1,\ldots,d_*\}$, choose ${\mathbf A}= \mathbb{M}^{-1} \mathbf{\Lambda}_l({\mathbf B}_k)$ and use the test function ${\mathbf P} = 1_\Omega \mathbf{\Lambda}_l({\mathbf B}_k)$ in \eqnref{eq:HS-3}. This gives \begin{align} & \mathbb{M}^{-1} \mathbf{\Lambda}_l({\mathbf B}_k): \mathbf{\Lambda}_l({\mathbf B}_k) \ge W_{\mathbf A}(\mathbb{C}^0, 1_\Omega \mathbf{\Lambda}_l({\mathbf B}_k)) \label{eq:mm-one} \\ &\quad = \int_{\Omega} \mathbf{\Lambda}_l({\mathbf B}_k) : F_{\mathbb{C}^0} \left( 1_\Omega \mathbf{\Lambda}_l({\mathbf B}_k) \right) +\int_{\Omega}\left(\mathbb{C}^0- \mathbb{C}^1 \right)^{-1} \mathbf{\Lambda}_l({\mathbf B}_k) : \mathbf{\Lambda}_l({\mathbf B}_k) \nonumber \\ &\quad \quad + 2\int_{\Omega}\mathbf{\Lambda}_l({\mathbf B}_k) : \mathbb{M}^{-1} \mathbf{\Lambda}_l({\mathbf B}_k) \, .
\nonumber \end{align} Summing these inequalities over $k$, we obtain \begin{align} \textrm{tr}\, (\mathbf{\Lambda}_l \mathbb{M}^{-1} \mathbf{\Lambda}_l) & \ge \sum_{k=1}^{d_*} \int_{\Omega} \mathbf{\Lambda}_l({\mathbf B}_k) : F_{\mathbb{C}^0} \left( 1_\Omega \mathbf{\Lambda}_l({\mathbf B}_k) \right) \label{eq:trlam} \\ & \quad + \sum_{k=1}^{d_*} \int_{\Omega}\left(\mathbb{C}^0- \mathbb{C}^1 \right)^{-1} \mathbf{\Lambda}_l({\mathbf B}_k) : \mathbf{\Lambda}_l({\mathbf B}_k) + 2\textrm{tr}\, (\mathbf{\Lambda}_l \mathbb{M}^{-1} \mathbf{\Lambda}_l) \, . \nonumber \end{align} It is proved in \cite[(4.27) \& (4.28)]{CK_AMO_08} that $$ \sum_{k=1}^{d_*} \int_{\Omega} \mathbf{\Lambda}_1({\mathbf B}_k) : F_{\mathbb{C}^0} \left( 1_\Omega \mathbf{\Lambda}_1({\mathbf B}_k) \right) = - \frac{1}{d\left(\lambda +2\mu \right)} \, , $$ and $$ \sum_{k=1}^{d_*} \int_{\Omega} \mathbf{\Lambda}_2({\mathbf B}_k) : F_{\mathbb{C}^0} \left( 1_\Omega \mathbf{\Lambda}_2({\mathbf B}_k) \right) = - \left(\frac{d-1}{d\left(\lambda +2\mu \right)}+\frac{d-1}{2\mu} \right) \, . $$ Since $$ \left(\mathbb{C}^0- \mathbb{C}^1 \right)^{-1} = \frac{1}{d(\kappa-\tilde\kappa)} \mathbf{\Lambda}_1 + \frac{1}{2(\mu-\tilde\mu)} \mathbf{\Lambda}_2 \, , $$ one can immediately see that $$ \sum_{k=1}^{d_*} \int_{\Omega}\left(\mathbb{C}^0- \mathbb{C}^1 \right)^{-1} \mathbf{\Lambda}_1({\mathbf B}_k) : \mathbf{\Lambda}_1({\mathbf B}_k) = \frac{1}{d(\kappa-\tilde\kappa)} $$ and $$ \sum_{k=1}^{d_*} \int_{\Omega}\left(\mathbb{C}^0- \mathbb{C}^1 \right)^{-1} \mathbf{\Lambda}_2({\mathbf B}_k) : \mathbf{\Lambda}_2({\mathbf B}_k) = \frac{d_*-1}{2(\mu-\tilde\mu)} \, . 
$$ Therefore, we get $$ \sum_{k=1}^{d_*} \left[ \int_{\Omega} \mathbf{\Lambda}_l({\mathbf B}_k) : F_{\mathbb{C}^0} \left( 1_\Omega \mathbf{\Lambda}_l({\mathbf B}_k) \right)+ \int_{\Omega}\left(\mathbb{C}^0- \mathbb{C}^1 \right)^{-1} \mathbf{\Lambda}_l({\mathbf B}_k) : \mathbf{\Lambda}_l({\mathbf B}_k) \right] = - K_l $$ for $l=1,2$, where $K_l$ is given in \eqnref{eq:kone} and \eqnref{eq:ktwo}. It then follows from \eqnref{eq:trlam} that \begin{equation}\label{eq:crucial} \textrm{tr}\, (\mathbf{\Lambda}_l \mathbb{M}^{-1} \mathbf{\Lambda}_l) \ge -K_l + 2\textrm{tr}\, (\mathbf{\Lambda}_l \mathbb{M}^{-1} \mathbf{\Lambda}_l). \end{equation} Suppose that equality in \eqnref{eq:bd-itrace-1} holds. Then, in view of \eqnref{eq:crucial}, the inequality in \eqnref{eq:trlam} becomes an equality, and so does the one in \eqnref{eq:mm-one}. Since $\mathbf{\Lambda}_1({\mathbf B}_1)={\mathbf B}_1$ and $\mathbf{\Lambda}_1({\mathbf B}_k)=0$ for $k=2, \ldots, d_*$, we have \begin{equation} E_{\mathbf A}(\mathbb{C}^0,1_\Omega {\mathbf B}_1) = \sup_{ {\mathbf P}\in L^{2}(\mathbb{R}^{d}:M_{d}^{s}) } E_{\mathbf A}(\mathbb{C}^0,{\mathbf P}) \, , \quad {\mathbf A}= \mathbb{M}^{-1} {\mathbf B}_1. \end{equation} Likewise, if equality in \eqnref{eq:bd-itrace-2} holds, then \begin{equation} E_{\mathbf A}(\mathbb{C}^0,1_\Omega {\mathbf B}_k) = \sup_{ {\mathbf P}\in L^{2}(\mathbb{R}^{d}:M_{d}^{s})} E_{\mathbf A}(\mathbb{C}^0,{\mathbf P}) \, , \quad {\mathbf A}= \mathbb{M}^{-1} {\mathbf B}_k, \end{equation} for $k=2, \ldots, d_*$, and the proof is complete. \hfill \ensuremath{\square}
\section{Introduction} Deep convolutional networks have recently been applied successfully to a large variety of computer vision tasks, such as image recognition \citep{He2016}, object segmentation \citep{He2017} and scene segmentation \citep{Chen2018}. These networks are large. For example, ResNet-152 has 60.2 million parameters \citep{Zagoruyko2016} and requires 11.3 billion FLOPs \citep{He2016}. A large number of parameters results in a large memory footprint: at 32-bit floating-point precision, 229.64 MB is needed to store the ResNet-152 parameter values. In low-latency or mobile applications, lower computation complexity, a lower memory footprint and better energy efficiency are desired. Many prior works address this need for lower computation complexity. In a survey paper \cite{Cheng2018}, efficient computation of neural networks is organized into four categories: network pruning, low-rank decomposition, teacher-student networks and network quantization. Network pruning removes redundant parameters which are not sensitive to performance. Low-rank decomposition uses matrix or tensor decomposition methods to reduce the number of parameters. In the teacher-student approach, knowledge transfer is exploited to train a smaller student network using a bigger teacher network. The common theme of these three categories is a reduction in the number of parameters. During forward propagation, one of the most computationally intensive operations in a neural network is the matrix multiplication of parameters with the input. With fewer parameters, FLOPs and memory footprint decrease; with fewer FLOPs, energy efficiency improves. In the aforementioned categories, network parameters typically remain in floating-point precision. In the last category, network quantization, the parameters and, in some works, all computations are quantized. For many low-latency or mobile applications, we typically train offline and deploy pre-trained models.
Thus, the main goal is efficiency in forward propagation; it remains desirable to compute backward propagation and parameter updates in floating-point precision. The seminal work \cite{Courbariaux2015} matches our scope. They quantize network weights to binary values, e.g., -1.0 and 1.0, while also keeping weight values in floating-point precision for backward propagation. During forward propagation, instead of matrix multiplication of weights with the input, the signs of these binary weights specify addition or subtraction of the inputs. The memory footprint is dramatically reduced to 1 bit per weight. Energy efficiency improves because addition is more energy efficient than multiplication \cite{Horowitz2014}. Prior works in network quantization \citep{Courbariaux2015,Li2016,Courbariaux2016,Zhou2016,Wu2018} typically start training by quantizing all weights in the network. Quantization creates an error, which is the difference between the original value and its quantized value. In other words, the actual weight value, $w_{q}$, is \begin{equation} w_{q} = w - w_{error}.\label{eqn:error} \end{equation} To reduce the impact of $w_{error}$, we hypothesize that if we quantize some weights while leaving others in floating-point precision, the latter can compensate for the error introduced by quantization. To reach a fully quantized network, we propose an iterative training regime, where we gradually quantize more and more weights. This raises two questions: first, how to choose the grouping of weights to quantize together at each iteration; second, how to choose the quantization order across groups. A feedforward deep neural network has many layers, so one natural grouping choice is one group per layer. For the quantization order of groups, we propose a sensitivity pre-training to choose the order. A random order and other obvious orders are chosen for comparison.
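As a concrete illustration of Equation \ref{eqn:error}, the following is a minimal, framework-free sketch of sign binarization and the resulting error term (the function names are ours, for illustration only; the actual implementation operates on PyTorch tensors):

```python
def binarize(w):
    """Binarize a list of weights to {-1.0, +1.0} by sign.

    Zero is mapped to +1.0 so that every weight stays strictly binary.
    """
    return [1.0 if x >= 0 else -1.0 for x in w]


def quantization_error(w):
    """Per-weight error term of Equation (1): w_error = w - w_q."""
    return [x - q for x, q in zip(w, binarize(w))]


w = [0.7, -0.2, 0.0, -1.3]
w_q = binarize(w)  # [1.0, -1.0, 1.0, -1.0]
w_err = quantization_error(w)
# The decomposition w_q = w - w_error of Equation (1) holds exactly:
assert all(abs(q - (x - e)) < 1e-12 for q, x, e in zip(w_q, w, w_err))
```

During training, the floating-point copy of $w$ is kept for the parameter update while the binarized copy is used in the forward pass.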
\subsection*{Contributions.} \begin{itemize} \item We propose an iterative training regime that gradually finds a full binary weight network starting from an initial partial binary weight network. \item We demonstrate empirically that starting from a partial binary weight network results in higher accuracy than starting from a full binary weight one. \item We demonstrate empirically that the forward order is best compared to other obvious orders. In addition, sensitivity pre-training can further improve on that. \item Code is available at \url{https://github.com/rakutentech/iterative_training}. \end{itemize} In the sections that follow, we describe the iterative training algorithm in detail. Next, we present the iterative training of fully connected networks using the MNIST dataset \citep{Lecun1998} and of convolutional neural networks using the CIFAR-10 \citep{Krizhevsky2009} and ImageNet \citep{ILSVRC15} datasets. Then we present the sensitivity pre-training for convolutional neural networks. Finally, we discuss related work and conclude. \section{Iterative Training}\label{training} \begin{algorithm}[tb] \caption{Iterative Training} \label{alg:layer_binary} \textbf{Input}: Input data and label\\ \textbf{Parameter}: Epochs per layer, $N$\\ \textbf{Parameter}: Number of layers, $L$\\ \textbf{Parameter}: Total number of epochs, $T$\\ \textbf{Parameter}: $BinarizationOrder$ array, length $L$\\ \textbf{Output}: A trained neural network with binary weights \begin{algorithmic}[1] \STATE $BinarizationState \leftarrow zeros(L)$. \FOR{$j \leftarrow 1$ \TO $L$} \STATE $layer \leftarrow BinarizationOrder[j]$. \STATE $BinarizationState[layer] \leftarrow 1$. \FOR{$i \leftarrow 1$ \TO $N$} \STATE BinarizeWeights($BinarizationState$). \STATE ForwardPropagation(). \STATE BackwardPropagation(). \STATE UpdateParameters(). \ENDFOR \ENDFOR \STATE $i \leftarrow L*N$ \WHILE{$i < T$} \STATE BinarizeWeights($BinarizationState$). \STATE ForwardPropagation(). \STATE BackwardPropagation().
\STATE UpdateParameters(). \STATE $i \leftarrow i + 1$. \ENDWHILE \end{algorithmic} \end{algorithm} A feedforward, deep neural network has many layers, say, $L$. We study iterative training by quantizing more and more weights layer-by-layer. Iterative training starts from one quantized layer while all other layers are in floating-point precision. Each iteration trains for a fixed number of epochs, $N$. Next, we quantize the next layer and train for another $N$ epochs. Iterative training stops when there are no more layers to quantize. In the case of ResNet architectures, as in the original paper, we reduce the learning rate by a factor of 10 twice and continue training. Algorithm \ref{alg:layer_binary} illustrates the iterative training regime. As the experiments will show, this regime consistently finds fully quantized networks with better accuracies than starting from an initial fully quantized network (our baseline). For the quantization scheme, we follow weight binarization in \cite{Courbariaux2015}, but, for simplicity, without ``tricks'': no weight clipping and no learning rate scaling. In addition, we use softmax instead of square hinge loss. The inner for-loop in Algorithm \ref{alg:layer_binary} is the same as the training regime in \cite{Courbariaux2015}, except that a state variable is introduced to control whether a layer needs binarization or not. We use the PyTorch framework \cite{Pytorch2019}. ImageNet results in the biggest GPU memory needs and longest training time, which are about 10 GB and about one day to train one model on an Nvidia V100, respectively.
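The schedule of Algorithm \ref{alg:layer_binary} can be summarized as a pure function from epoch number to the per-layer binarization state. The sketch below is a simplification of our PyTorch implementation, with illustrative names; it shows which layers are binarized at a given epoch:

```python
def binarization_state(epoch, order, epochs_per_layer):
    """Return a 0/1 flag per layer: 1 if that layer is binarized at `epoch`.

    Follows Algorithm 1: layer order[0] is binarized from the start and
    layer order[j] joins after j * epochs_per_layer epochs; once every
    layer is binarized, the fully binary state is kept until epoch T.
    """
    num_layers = len(order)
    # Number of layers already switched to binary at this epoch.
    k = min(epoch // epochs_per_layer + 1, num_layers)
    state = [0] * num_layers
    for layer in order[:k]:
        state[layer] = 1
    return state


# Forward order on a 3-layer network with N = 150 epochs per layer:
forward = [0, 1, 2]
assert binarization_state(0, forward, 150) == [1, 0, 0]
assert binarization_state(150, forward, 150) == [1, 1, 0]
assert binarization_state(449, forward, 150) == [1, 1, 1]
```

The reverse order is obtained by passing `order=[2, 1, 0]`, and a random order by shuffling the layer indices.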
\begin{table*}[tbp] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{@{}lllllll@{}} \toprule Network & 300-100-10 & 784-784-10 & Vgg-5 & Vgg-9 & ResNet-20 & ResNet-21 \\ \midrule \multirow{3}{*}{Convolutional layers} & \multirow{3}{*}{} & \multirow{3}{*}{} & \multirow{3}{*}{64, 64} & 64, 64 & 16, 3x[16, 16] & 64, 4x[64] \\ & & & & 128, 128 & 3x[32, 32] & 5x[128], 5x[256] \\ & & & & 256, 256 & 3x[64, 64] & 5x[512] \\ Fully connected layers & 300, 100, 10 & 784, 784, 10 & 256, 256, 10 & 256, 256, 10 & 10 & 1000 \\ \midrule Dataset & MNIST & MNIST & CIFAR-10 & CIFAR-10 & CIFAR-10 & ImageNet 2012 \\ Train / Validation / Test & 55K / 5K / 10K & 55K / 5K / 10K & 45K / 5K / 10K & 45K / 5K / 10K & 45K / 5K / 10K & 1.2M / 0 / 50K \\ Batch size & 100 & 100 & 100 & 100 & 128 & 256 \\ \multirow{3}{*}{Optimizer} & \multirow{3}{*}{Adam} & \multirow{3}{*}{Adam} & \multirow{3}{*}{Adam} & \multirow{3}{*}{Adam} & SGD & SGD \\ & & & & & Momentum 0.9 & Momentum 0.9 \\ & & & & & Weight decay 1e-4 & Weight decay 1e-4 \\ Pre-training epochs, $K$ & 150 & 150 & 200 & 450 & 300 & 20 \\ Epochs per layer & 150 & 150 & 150 & 150 & 50 & 2 \\ Layers, $L$ & 3 & 3 & 5 & 9 & 20 & 21 \\ Total epochs, $T$ & 450 & 450 & 750 & 1350 & 1200 & 67 \\ \midrule Order count & 6 & 6 & 120 & 362,880 & 2e+18 & 5e+19 \\ Weight count & 266,200 & 1,237,152 & 4,300,992 & 2,261,184 & 268,336 & About 11e6 \\ \bottomrule \end{tabular}% } \caption{Summary of network architectures and their hyper-parameters.} \label{tab:params} \end{table*} As shown by the order count in Table \ref{tab:params}, there is a large number of layer binarization orders for a deep neural network. In this work, we experiment with random and obvious orders, to show that starting from a partially quantized weight network is better than starting from a fully quantized one. In a later section, we introduce the proposed sensitivity pre-training to select a layer binarization order.
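The order counts in Table \ref{tab:params} are simply $L!$, the number of permutations of $L$ layers, which is why exhaustive search over binarization orders is only practical for the shallow fully connected networks:

```python
from math import factorial

# The number of possible layer binarization orders grows as L!.
for layers in (3, 5, 9, 20, 21):
    print(layers, factorial(layers))

assert factorial(3) == 6          # MNIST networks
assert factorial(5) == 120        # Vgg-5
assert factorial(9) == 362880     # Vgg-9
assert factorial(20) > 2e18       # ResNet-20, ~2.4e18 orders
assert factorial(21) > 5e19       # ResNet-21, ~5.1e19 orders
```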
For obvious orders, we experiment with the forward order, i.e., quantizing layer-by-layer from the input layer towards the output layer, and the reverse order, i.e., from the output layer towards the input layer. We then compare to training where (1) all weights are quantized from the start (baseline) and (2) all weights are in floating-point precision and stay so. As the experiments will show, for bigger and deeper networks, the forward order consistently finds fully quantized networks with better accuracies than other orders. In the following subsections, we discuss experimental results for fully connected and convolutional networks. \subsection{Iterative Training for Fully Connected Networks}\label{training_fcn} We investigate iterative training of fully connected networks with the MNIST dataset, which has 60,000 training and 10,000 test images. We use the last 5,000 images from the training set for validation and the remaining 55,000 as training images for all MNIST experiments. We use no data augmentation. We use batch normalization \citep{Ioffe2015}, no drop-out and weight initialization as in \cite{He2015}. We use softmax as the classifier. For iterative training, we train for 150 epochs per layer. For each network architecture, the total number of training epochs is the number of layers multiplied by 150. Because the chosen networks have three layers, all MNIST experiments are trained for 450 epochs. For all cases, we find the best learning rate from the best error on the validation set. For layer-by-layer binarization cases, the best error is selected from epochs when all layers are binarized. We then use each corresponding best learning rate for the error on the test set. We vary the seed for 5 training sessions and report the learning curves of the average test errors in the figures. Table \ref{tab:params} reports other hyper-parameters.
\iffalse \begin{figure}[t] \centering \includegraphics[width=0.5\columnwidth]{gfx_300_bigger_simpler_sns} \caption{Test errors for 300-100-10 network.} \label{fig:error_of_300} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.5\columnwidth]{gfx_784_bigger_simpler_sns} \caption{Test errors for 784-784-10 network.} \label{fig:error_of_784} \end{figure} \fi \begin{figure}[t] \centering \includegraphics[width=0.5\columnwidth]{gfx_300_784_bigger_simpler_sns} \caption{ Left: Test errors for 300-100-10 network. Right: Test errors for 784-784-10 network. } \label{fig:error_of_300_784} \end{figure} \begin{table}[tbp] \centering \begin{tabular}{@{}llll@{}} \toprule Case & 300-100-10 & 784-784-10 & Improvement \\ \midrule Binary & 0.023 & 0.021 & 0.002 \\ Float & 0.015 & 0.014 & 0.001 \\ Reverse & 0.024 & 0.018 & 0.006 \\ Forward & 0.026 & 0.016 & 0.010 \\ \bottomrule \end{tabular} \caption{Error improvement from using 784-784-10 over 300-100-10.} \label{tab:784_advantage} \end{table} For network architectures, we study the 300-100-10 network \citep{Lecun1998} and a bigger variant, 784-784-10. Figure \ref{fig:error_of_300_784} shows test errors for the 300-100-10 and 784-784-10 networks. The float case is training where all weights are in floating-point precision and stay so. The binary case (baseline) is training where all weights are binarized from the start. The forward case is training where layer binarization is in the forward order, the reverse in the reverse order. The solid lines are the mean across multiple runs and the matching shaded color is one standard deviation. For the smaller network, 300-100-10, the binary case reaches a lower error than the forward and reverse orders. Next best is the reverse order, then the forward one. This shows that the order of layer binarization matters for accuracy. On the contrary, for the bigger network, 784-784-10, the forward and reverse cases do better than the binary one. The binarization operation is not differentiable.
According to Equation \ref{eqn:error}, it injects a random error signal into the network. During iterative training, some of the weights are in floating-point precision. We hypothesize that they are compensating for the random error. At the same time, we think bigger networks are more robust due to having more parameters. The error improvement of upgrading to a bigger network is given in Table \ref{tab:784_advantage}. The forward and reverse orders have significantly higher improvement than float and binary, showing that iterative training is beneficial. In addition, the forward order has a higher improvement than reverse. We observe the same pattern for subsequent network architectures. Namely, for bigger and deeper networks, starting from a partial binary weight network instead of a full binary weight network, iterative training with the forward quantization order finds full binary weight networks with higher accuracies. \subsection{Iterative Training for Convolutional Networks}\label{training_cnn} We investigate iterative training of convolutional networks with the CIFAR-10 dataset, which has 50,000 training and 10,000 test images. We randomly choose 5,000 images from the training set as the validation set and the remaining 45,000 as training images for all CIFAR-10 experiments. We use the same data augmentation as \cite{He2016}: 4 pixels are padded on each side, and a 32x32 crop is randomly sampled from the padded image or its horizontal flip. We use batch normalization, no drop-out and weight initialization as in \cite{He2015}. We use softmax as the classifier. We experiment with VGG \citep{Simonyan2015} and ResNet \citep{He2016} architectures. For iterative training of VGG architectures, we train for 150 epochs per layer. For iterative training of the ResNet-20 architecture, we train for 50 epochs per layer. As in the original paper, we reduce the learning rate by a factor of 10 twice, once at 1000 epochs and again at 1100 epochs, and stop training at 1200 epochs.
Using the same methodology as in the MNIST experiments, for all cases we use the validation set to tune the learning rate and the test set to report errors. Table \ref{tab:params} reports other hyper-parameters. \iffalse \begin{figure}[t] \centering \includegraphics[width=0.5\columnwidth]{gfx_vggconv2final_sns} \caption{ Test errors for Vgg-5 network. } \label{fig:error_of_conv2} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.5\columnwidth]{gfx_vggconv6final_sns} \caption{ Test errors for Vgg-9 network. } \label{fig:error_of_conv6} \end{figure} \fi \begin{figure}[t] \centering \includegraphics[width=0.5\columnwidth]{gfx_vggconv2_conv6_final_sns} \caption{ Left: Test errors for Vgg-5 network. Right: Test errors for Vgg-9 network. } \label{fig:error_of_conv2_conv6} \end{figure} For the VGG architecture, we study a shallower network, Vgg-5, and a deeper network, Vgg-9. As their names suggest, Vgg-5 has 5 layers and Vgg-9, 9. Figure \ref{fig:error_of_conv2_conv6} shows test errors for the Vgg-5 and Vgg-9 networks. The float case is training where all weights are in floating-point precision and stay so. The binary case (baseline) is training where all weights are binarized from the start. The forward case is training where layer binarization is in the forward order, the reverse in the reverse order and the random case, a randomly selected order. For both network architectures, the binary case has the highest error and the float case the lowest error. Following the same pattern as the larger MNIST network, starting from partial binary weight networks, iterative training finds full binary weight networks that have lower error than the binary cases. For Vgg-5, a shallower network, the ascending error ranking is reverse, forward, then random. For Vgg-9, a deeper network, the ranking is forward, random, then reverse. This shows again that layer binarization order matters.
\begin{table}[tbp] \centering \begin{tabular}{@{}llll@{}} \toprule Case & Vgg-5 & Vgg-9 & Improvement \\ \midrule Binary & 0.30 & 0.28 & 0.02 \\ Float & 0.16 & 0.08 & 0.08 \\ Reverse & 0.22 & 0.1126 & 0.1074 \\ Forward & 0.22 & 0.1025 & 0.1175 \\ \bottomrule \end{tabular} \caption{Error improvement from using Vgg-9 over Vgg-5.} \label{tab:vgg9_advantage} \end{table} The error improvement of upgrading to Vgg-9 from Vgg-5 is summarized in Table \ref{tab:vgg9_advantage}. There is a small improvement for the binary case. The float case has a significantly higher improvement than binary. Next higher is the reverse case. Finally, the forward case has the highest improvement. This follows the same pattern as the MNIST experiments, favoring iterative training and the forward order. As shown in Table \ref{tab:params}, although Vgg-9 has a smaller number of weight parameters than Vgg-5, it has more layers. Iterative training continues to be beneficial. We hypothesize that this is due to a more gradual rate of total binarization. For Vgg-9, as each layer is binarized, relatively more weights stay in floating-point precision to compensate for the random noise injected by the binarization operation. \begin{figure}[t] \centering \includegraphics[width=0.5\columnwidth]{gfx_resnet20ep1200_sns} \caption{ Left: Test errors for ResNet-20 network. Right: Zoom to final epochs. Learning rate is reduced by 10x at 1000 and again at 1100 epochs. } \label{fig:error_of_resnet20} \end{figure} For an even deeper network, we study ResNet-20 from \cite{He2016}, which has 20 layers, as its name suggests. Figure \ref{fig:error_of_resnet20} shows test errors for the ResNet-20 network. The binary case has the highest error and the float case has the lowest error. Following the same pattern as the other network architectures, iterative training finds full binary weight networks that have lower error than the binary case. In order of increasing error: forward, random, then reverse.
Again, this shows that the order of binarization matters and that the forward order has an advantage. In the next section, we propose a sensitivity pre-training to select a binarization order. \section{Sensitivity Pre-training}\label{sensitivity_pretraining} In prior sections we demonstrated empirically that starting from a partial binary weight network results in higher accuracy than starting from a fully binary weight one for larger and deeper networks. In this section, we describe the proposed sensitivity pre-training to choose the binarization order. For shallower neural networks like the 3-layer fully connected networks for the MNIST dataset, exhaustive search for the best binarization order is possible. For deeper neural networks such as Vgg-5, Vgg-9 and ResNet-20, it is impractical to do so, as shown by the order count in Table \ref{tab:params}. However, we can obtain a measure of error sensitivity to layer quantization. We then let this sensitivity guide the binarization ordering. Sensitivity is computed as follows. We train $L$ models, where in each model only the weights of the $l$-th layer are binarized while the others are in floating-point precision. We train for $K$ epochs and, as before, use the validation set to tune the learning rate to get the best validation error for each sensitivity model. $K$ for Vgg-5 is 200 and for Vgg-9 is 450. $K$ for ResNet-20 is 300. For ResNet, as in the original paper, we reduce the learning rate by a factor of 10 twice, once at epoch 200 and again at epoch 250. Then we rank these $L$ best validation errors in ascending order. This becomes the ascending layer binarization order for iterative training. \iffalse \begin{figure}[t] \centering \includegraphics[width=0.5\columnwidth]{gfx_vggconv2final_sensitivity_sns} \caption{ Test errors for Vgg-5 network. Ascending and descending orders are chosen by sensitivity pre-training.
} \label{fig:error_of_conv2_with_sensitivity} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.5\columnwidth]{gfx_vggconv6final_sensitivity_sns} \caption{ Test errors for Vgg-9 network. Ascending and descending orders are chosen by sensitivity pre-training. } \label{fig:error_of_conv6_with_sensitivity} \end{figure} \fi \begin{figure}[t] \centering \includegraphics[width=0.5\columnwidth]{gfx_vggconv2_conv6_final_sensitivity_sns} \caption{ Left: Test errors for Vgg-5 network. Right: Test errors for Vgg-9 network. Ascending and descending orders are chosen by sensitivity pre-training. } \label{fig:error_of_conv2_conv6_with_sensitivity} \end{figure} During iterative training using the ascending order, the layer that had the lowest error is binarized first, while the layer that had the highest error is binarized last, meaning the latter stays in floating-point precision the longest during training. As shown in Figure \ref{fig:error_of_conv2_conv6_with_sensitivity} for Vgg-5 and Vgg-9, the ascending order results in a fully binary weight network with the lowest error, beating the forward ones. Also shown is the descending order, which is the reverse of the ascending one. For both networks, the descending order results in a higher error than the ascending one, showing again that binarization order matters. In the case of Vgg-5, the random order is worst while the descending one follows closely behind. In the case of Vgg-9, the descending one is the worst of all. In short, the lower the error for one order, the higher the error of its reverse. \begin{figure}[t] \centering \includegraphics[width=0.5\columnwidth]{gfx_resnet20ep1200_sensitivity_sns} \caption{ Left: Test errors for ResNet-20 network. Right: Zoom to final epochs. Ascending and descending orders are chosen by sensitivity pre-training.
} \label{fig:error_of_resnet20_with_sensitivity} \end{figure} For ResNet-20, Figure \ref{fig:error_of_resnet20_with_sensitivity} shows the test errors with ascending and descending orders. Unlike for Vgg-5 and Vgg-9, the forward order reaches better accuracy than both the ascending and descending orders. The proposed sensitivity pre-training considers binarization of layers independently. We hypothesize that there may be interactions between multiple layers. \begin{figure}[t] \centering \includegraphics[width=0.5\columnwidth]{gfx_imagenet_sensitivity_sns} \caption{ Left: Test errors for ResNet-21 network. Right: Zoom to final epochs. The ascending order is chosen by sensitivity pre-training. } \label{fig:error_of_resnet21_with_sensitivity} \end{figure} For ImageNet, we experiment with ResNet-18 \cite{He2016}. Since it has 21 layers, we will refer to it as ResNet-21. The optimizer is SGD with momentum 0.9 and weight decay 1e-4. For sensitivity pre-training, $K$ is 20 epochs. For each layer, we sweep 3 learning rates and use the last-epoch errors of the test set to choose the ascending order. In the full training, we choose 2 epochs per layer. The starting learning rate, 0.01, comes from the best learning rate in sensitivity pre-training. As in the original paper, we reduce the learning rate by a factor of 10 twice, after 42 epochs and again after 57 epochs. We stop training after 67 epochs. The floating-point training is just one run, while all other binarization trainings are from 5 random-seeded runs. Figure \ref{fig:error_of_resnet21_with_sensitivity} shows the test errors with forward and ascending orders. The ascending order has a lower mean error than the forward one; both are better than binary. Again, binarization order matters and the ascending order is better than the forward one. \subsection{Exhaustive Search} \iffalse \begin{figure}[t] \centering \includegraphics[width=0.5\columnwidth]{gfx_300_sensitivity_sns} \caption{ Test errors for 300-100-10 network.
Ascending order is same as reverse and descending same as forward. 132 means binarization order is layer 1, layer 3 then layer 2. } \label{fig:error_of_300_with_sensitivity} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.5\columnwidth]{gfx_784_sensitivity_sns} \caption{Test errors for 784-784-10 network.} \label{fig:error_of_784_with_sensitivity} \end{figure} \fi \begin{figure}[t] \centering \includegraphics[width=0.5\columnwidth]{gfx_300_784_sensitivity_sns} \caption{ Left: Test errors for 300-100-10 network. Right: Test errors for 784-784-10 network. 132 means binarization order is layer 1, layer 3 then layer 2. Forward order is 123. } \label{fig:error_of_300_784_with_sensitivity} \end{figure} For shallower neural networks like the 3-layer fully connected network for the MNIST dataset, exhaustive search for the best binarization order is possible. Figure \ref{fig:error_of_300_784_with_sensitivity} shows results for all combinations of layer binarization order for the 300-100-10 and 784-784-10 networks. For the former, a smaller network, the ascending order turns out to be the same as reverse. Errors for all combinations are very close. The best orders are 132 and 312, not the ascending one; both are better than binary by a small margin. 132 means the binarization order is layer 1, layer 3, then layer 2. Thus, also for 300-100-10, starting from partial weight binarization is better than from full weight binarization. For the bigger network, 784-784-10, the ascending order is better than the forward and reverse ones. The descending order is the worst of all. This is consistent with the results from convolutional networks. Here, the ascending order ties with one other order for the best accuracy. In summary, we proposed using sensitivity pre-training as a guide for layer binarization order. For 784-784-10, Vgg-5, Vgg-9 and ResNet-21, we have shown empirically that better accuracies are achieved.
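The ranking step of this sensitivity pre-training reduces to sorting layer indices by the best validation error of their one-layer-binarized models. A minimal sketch (the error values below are hypothetical):

```python
def ascending_order(per_layer_errors):
    """Rank layer indices by validation error, lowest first.

    The layer whose one-layer-binarized model had the lowest error is
    binarized first; the most sensitive layer stays in floating-point
    precision the longest.
    """
    return sorted(range(len(per_layer_errors)),
                  key=lambda layer: per_layer_errors[layer])


# Hypothetical best validation errors from L sensitivity models:
errors = [0.12, 0.08, 0.20, 0.09]
assert ascending_order(errors) == [1, 3, 0, 2]
# The descending order is simply the reverse:
assert ascending_order(errors)[::-1] == [2, 0, 3, 1]
```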
This improvement comes at the cost of pre-training $L$ additional models. \section{Related Work}\label{related_work} Our work introduces an iterative layer-by-layer quantization training regime. Although we demonstrated the results using weight binarization, the regime is independent of the quantization scheme. We think other schemes, e.g., \cite{Li2016} (where weights are ternary: -1.0, 0 and 1.0), may yield similar trends. \cite{Hu2018} transforms weight binarization into a hashing problem. Like ours, their iterative method also operates layer-by-layer, from the input layer towards the output layer. However, they start from a pre-trained network and, after weight quantization without fine-tuning, fine-tune the biases. Ours starts from an untrained network and gradually trains a full binary weight network, which we believe allows the network to adapt to the random noise created by the quantization operation. In addition, their final weights are not pure binary, but power-of-2 multiples. When constrained to pure binary, they report non-convergence. Our iterative training, in contrast, converges with pure binary weights. For future work, we can binarize using power-of-2 multiples. \cite{Zhou2017} iterates over both pruning and quantization techniques. First, weights are partitioned into two groups. Then, weights in the first group are quantized to power-of-2 multiples or zero. Next, weights in the second group are fine-tuned, while the first group receives no parameter updates. In the next iteration, some of the weights in the second group are assigned to the first group. The process is repeated until all weights are members of the first group. In this partitioning scheme, the first group contains weights from all layers. It is possible to merge both methods because their partitioning is orthogonal to ours. Once weights join the first group, their values stay unchanged for the rest of the fine-tuning.
Because our binarization is based on \citep{Courbariaux2015}, the floating-point weights prior to quantization are saved for parameter updates. Thus, during iterative training of later layers, the weights of earlier layers are allowed to adapt and flip signs. The disadvantage, however, is that more memory is required during training. In low-rank decompositions and teacher-student networks, weights remain in floating-point precision. For low-rank decomposition, the implementation requires a decomposition operation, which is computationally expensive, and the factorization requires extensive retraining to reach convergence comparable to the original model \citep{Cheng2018}. Similarly, due to the iterative nature of our proposed training regime, training time is also lengthened. \section{Conclusions and Further Work}\label{conclusion} In this work, we proposed a simple iterative training regime that gradually turns a partial binary weight network into a full binary weight network, layer by layer. We showed empirically that this regime results in higher accuracy than training a fully binarized weight network from the start. The order of layer binarization matters: we showed empirically that, for larger and deeper neural networks, the forward order achieves better accuracies than the other binarization orders. We also proposed sensitivity pre-training for selecting the binarization order; for 784-784-10, Vgg-5, Vgg-9 and ResNet-21, this guided order achieves better accuracies than the forward order. The cost of iterative training is a longer training time. This trade-off may be acceptable in many applications where pre-trained models are deployed, because efficiency is needed only in forward propagation. A binary weight neural network dramatically reduces computational complexity and memory footprint and thus increases energy efficiency. For future work, we would like to understand analytically why layer-by-layer quantization works and what the optimal quantization order is. \bibliographystyle{unsrtnat}
\section{Introduction} Tilting modules have become an important tool in different areas such as the representation theory of algebras \cite{hugel2007handbook}. In this paper, we use the foundational aspects developed in \cite{parte1} in order to set up and develop a theory of relative tilting objects in abelian categories. The idea of this theory is the following: if we have a category $\mathcal{X}$ immersed nicely in an abelian category $\mathcal{C},$ then we can use the structure of $\mathcal{C}$ to define a tilting theory on $\mathcal{X}.$ We will show that this work offers a unified framework for different previous notions of tilting, ranging from Auslander-Solberg relative tilting modules on Artin algebras to infinitely generated tilting modules on arbitrary rings. With this new approach, we will review Bazzoni's tilting characterization, relative homological dimensions on the induced tilting classes, and parametrise certain cotorsion-like pairs. As an example, we will show how the tilting theory in exact categories built this way coincides with the tilting objects in extriangulated categories introduced recently by Bin Zhu and Xiao Zhuang \cite{zhu2019tilting}. It is worth mentioning that a relative tilting theory was recently presented by Pooyan Moradifar and Siamak Yassemi in \cite{Pooyan-Siamak21} with the goal of studying infinitely generated Gorenstein tilting objects. We believe that our work will be a complementary tool for this research line. \ In the last 40 years, tilting theory has been generalized in many ways and contexts with different purposes. Its roots can be traced back to the seminal work of Peter Gabriel \cite{gabriel1972}, which showed a bijection between the indecomposable modules over a finite-dimensional algebra and the positive roots of the associated Lie algebra. After that, Joseph Bernstein, Isra\"il Moyseyovich Gelfand and Vladimir A.
Ponomarev deepened this study with the aim of constructing all of the indecomposable modules over a finite-dimensional algebra \cite{bernstein1973}. Some time later, Maurice Auslander, Mar\'ia In\'es Platzeck and Idun Reiten generalised these results, constructing for the first time what we now know as a tilting object in the context of finitely generated modules over Artin algebras \cite{auslander1979coxeter}. It was Sheila Brenner and Michael C.R. Butler who axiomatised and gave a name to these objects in \cite{brenner1980generalizations}. Subsequently, a more general definition was offered by Dieter Happel and Claus Michael Ringel in \cite{happel1982tilted} with the goal of achieving a better understanding of tilting objects. A few years later, this definition would be extended from tilting objects of projective dimension $\leq1$ to tilting objects of finite projective dimension by Yoichi Miyashita in \cite{miyashita}, but still in the context of finitely generated modules. Later on, the tilting theory context would be extended from finitely generated modules over Artin algebras to infinitely generated modules over arbitrary rings; this is the case of the work of Lidia Angeleri H\"ugel and Fl\'avio Ulhoa Coelho in \cite{Tiltinginfinitamentegenerado}. As noted in the above discussion, there is a diverse family of different tilting definitions with different properties and objectives. This family of tilting theories can be bluntly divided into two subfamilies: ``big'' tilting theories and ``small'' tilting ones. The small tilting theories can be described as the ones defined using only finite coproducts. Namely, all the classical tilting theories, which were developed for finitely generated modules over Artin algebras, are generalized by the small tilting theories.
Among them, we can mention the Brenner-Butler, the Happel-Reiten, and the Miyashita theories referred to above, but we can also find more recent research works such as the tilting functors developed by Roberto Mart\'inez Villa and Martin Ortiz Morales in \cite{martinez2014tilting}. The big tilting theories are those that allow arbitrary coproducts in their constructions of tilting objects. These kinds of theories started coming up when, inter alia, the works of Brenner-Butler, Happel-Ringel, Ibrahim Assem \cite{assem1984torsion}, and Sverre O. Smalo \cite{smalo1984torsion} were extended to the setting of infinitely generated modules over arbitrary rings in the works of Robert R. Colby and Kent R. Fuller \cite{colby1990tilting}, Riccardo Colpi, Gabriella D'Este, and Alberto Tonolo \cite{Quasitiltingcounterequivalences}, R. Colpi, A. Tonolo, and Jan Trlifaj \cite{partialcotilting}, R. Colpi and J. Trlifaj \cite{Colpi-Trlifaj}, A. Tonolo, J. Trlifaj, and L. Angeleri H\"ugel \cite{Tiltingpreenvelopes}, and L. Angeleri H\"ugel and F. Ulhoa Coelho \cite{Tiltinginfinitamentegenerado}. Recent works on big tilting theories are focused on abelian categories with coproducts, as can be seen in the works of Leonid Positselski and Jan {\v{S}}t'ov{\'\i}{\v{c}}ek \cite{positselskicorrespondence}, and Pedro Nicol\'as, Manuel Saor\'in, and Alexandra Zvonareva \cite{nicolas2019silting}. The goal of this paper is to develop new tools for understanding the tilting phenomenon. Namely, we will be interested in studying the relation of \emph{cotorsion-like pairs} in an abelian category, with a new tilting notion associated to a subcategory $\mathcal{X}\subseteq\mathcal{C}$, called $n$-$\mathcal{X}$-tilting. Relations of this kind were studied for the first time by Maurice Auslander and Idun Reiten in \cite{auslandereiten,auslander1992homologically}.
One of their results is \emph{the Auslander-Reiten Correspondence} \cite[Theorem 4.4]{auslander1992homologically}, which shows a correspondence between tilting modules over an Artin algebra and covariantly finite subcategories. It is worth mentioning that this theorem has been partially extended to different contexts by different authors. Some of them are M. Auslander and {\O}yvind Solberg \cite[Theorem 3.2, Theorem 3.24]{auslander1993relative2}, Soud Khalifa Mohamed \cite[Proposition 4.2]{mohamed2009relative}, L. Angeleri and Octavio Mendoza \cite[Theorem 3.2]{Hopel-Mendoza}, and Bin Zhu and Xiao Zhuang in \cite[Theorem 2]{zhu2019tilting}. The paper is organized as follows. The cotorsion-like pairs we referred to previously were presented in \cite{parte1}. They are linked with a generalization of the Auslander-Reiten theory, developed in \cite{auslandereiten}, and the Auslander-Buchweitz approximation theory, developed in \cite{Auslander-Buchweitz}. For the objectives of this paper, the reader can picture them as pairs $\p$ of classes of objects of an abelian category $\mathcal{C}$ such that $\Extx[i][][\mathcal{A}\cap\mathcal{X}][\mathcal{B}\cap\mathcal{X}]=0$ $\forall i>0$, for a given class $\mathcal{X}\subseteq\mathcal{C}$. In Section 2, we will recall the main definitions and results of \cite{parte1}. In particular, we will recall notions related to cotorsion pairs, relative homological dimensions, relative resolution dimensions, closure properties and the class $\operatorname{Fac}_n ^{\mathcal{X}}(\mathcal{T})$. In Section 3, we state and develop our $n$-$\mathcal{X}$-tilting theory. The idea is to present a tilting theory relative to a class of objects $\mathcal{X}$, together with a set of tools that provides us with information on the induced homological dimensions and approximations. In order that our results can be used in a wide variety of contexts, we sought to provide a definition that encompasses different prior definitions.
In particular, our definition can be specialized to big or small tilting objects according to our needs. For this, we define $n$-$\mathcal{X}$-tilting classes and say that an object $T$ is big (small) $n$-$\mathcal{X}$-tilting if $\mathrm{Add}(T)$ ($\mathrm{add}(T)$) is an $n$-$\mathcal{X}$-tilting class. Let us describe briefly the most relevant results in Section 3. Theorem \ref{thm:el par n-X-tilting} gives some essential properties of the $n$-$\mathcal{X}$-tilting classes, inspired by \cite[Theorem 4.3]{Wei}, \cite[Theorem 3.11]{Bazzonintilting} and \cite[Theorem 4.4]{Tiltinginfinitamentegenerado}. We will see in Theorem \ref{thm:n-X-tilting sii n-X-tilting peque=0000F1o} that, for a class of compact objects $\mathcal{X}$, an object $T$ is big $n$-$\mathcal{X}$-tilting if and only if it is small $n$-$\mathcal{X}$-tilting. One of our goals is to study the properties satisfied by the pair $({}^{\bot}(\mathcal{T}^{\bot }),\mathcal{T}^{\bot })$ for an $n$-$\mathcal{X}$-tilting class $\mathcal{T}$. The main result containing this information is Theorem \ref{thm:el par n-X-tilting}. Later on, such pairs will be characterized in Theorem \ref{thm:main1-2}. Lastly, as a consequence of the previous result, we will get two versions of the Auslander-Reiten Correspondence in Corollary \ref{cor: coro1 teo nuevo} and Corollary \ref{cor: coro2 teo nuevo}. Section 4 is devoted to examining $n$-$\mathcal{X}$-tilting theory in the special case where $\mathcal{X}$ is a thick class. It will be shown that the hypotheses can be made more flexible and a more relaxed definition is given. This new definition is called saturated $n$-$\mathcal{X}$-tilting. The main results of this section are Corollary \ref{prop:caracterizacion tilting saturado}, which is a characterization of the saturated $n$-$\mathcal{X}$-tilting classes, together with Theorem \ref{cor:biyeccion tiltilng } and Corollary \ref{cor:biyeccion tiltilng -2}, which are two versions of the Auslander-Reiten Correspondence.
In Section 5, we will discuss and explore a variety of examples and applications of the $n$-$\mathcal{X}$-tilting theory. In particular, we will show how our definition fits different tilting theories in the literature. We will also see how our results help us to find equivalences between different tilting notions. The first example of this section concerns the $\infty$-tilting objects and pairs which were defined by Leonid Positselski and Jan {\v{S}}t'ov{\'\i}{\v{c}}ek in \cite{positselski2019tilting}. We show that these structures are particular cases of the $n$-$\mathcal{X}$-tilting theory. The second example is related to the Miyashita $n$-tilting modules, which can be seen as $n$-$\modd[R]$-tilting modules. Moreover, by using $n$-$\mathcal{X}$-tilting theory, we will deduce a variety of classical results (and others that seem to be new) on Miyashita tilting modules. In the third example of this section, we will develop a theory of Miyashita $n$-tilting modules of type $FP_n,$ for left $n$-coherent rings. The fourth example of this section is devoted to studying the tilting phenomena in the context of small exact categories. Namely, for a small exact category $(\mathcal{A},\mathcal{E}),$ with enough $\mathcal{E}$-projectives and $\mathcal{E}$-injectives, we introduce the small $n$-tilting and the Auslander-Solberg $n$-tilting classes in $(\mathcal{A},\mathcal{E}).$ We show that both of them are equivalent to the Zhu-Zhuang tilting theory for exact categories developed in \cite{zhu2019tilting}.
Moreover, we will explore a nice embedding of $\mathcal{A}$ into the functor category $\mathrm{Mod}(\mathcal{P}^{op}),$ given by Yoneda's functor, where $\mathcal{P}$ is the set of all the $\mathcal{E}$-projective objects in $\mathcal{A}.$ We also show that the $n$-$\mathcal{X}$-tilting theory developed in the abelian category $\mathrm{Mod}(\mathcal{P}^{op})$ is strongly related to the small $n$-tilting classes in $(\mathcal{A},\mathcal{E}).$ The fifth example is devoted to S.K. Mohamed's relative tilting theory \cite{mohamed2009relative} and the Auslander-Solberg tilting objects \cite{auslander1993relative2}. In the sixth example, we will study the tilting classes of functors developed by Mart\'inez Villa and Ortiz Morales \cite{martinez2011tilting,martinez2013tilting,martinez2014tilting}, and characterize them in terms of $n$-$\mathcal{X}$-tilting theory. Finally, in the last example, we will study the relation between silting, quasitilting and $n$-$\mathcal{X}$-tilting modules. \section{Preliminaries } In this section, we introduce all the notions and results necessary for the development of the paper. For more details, we refer the reader to \cite{parte1}. \subsection{Notation } Throughout, we denote by $\mathcal{C}$ an abelian category. The symbol $\mathcal{M}\subseteq\mathcal{C}$ means that $\mathcal{M}$ is a class of objects of $\mathcal{C}$. In a similar way, the symbol $\p\subseteq\mathcal{C}^{2}$ will mean that $\mathcal{A}$ and $\mathcal{B}$ are classes of objects of $\mathcal{C}$. On the other hand, $C\in\mathcal{C}$ will mean that $C$ is an object of $\mathcal{C}$. \ We will use Grothendieck's notation \cite{Ab} to distinguish abelian categories with further structure: \begin{itemize} \item $\mathcal{C}$ is \textbf{Ab3 }if it has coproducts; \item $\mathcal{C}$ is \textbf{Ab4 }if it is Ab3 and the coproduct functor is exact; \item $\mathcal{C}$ is \textbf{Ab5 }if it is Ab3 and the direct limit functor is exact.
\end{itemize} Given $X,Y\in\mathcal{C}$ and $n\geq0$, we will consider the $n$-th Yoneda extension group $\Extx[n][][X][Y]$ \cite[Chapter VII]{mitchell} and the bifunctor $\Extx[n][][-][-]:\mathcal{C}\times\mathcal{C}\rightarrow\mbox{Ab}\mbox{.}$ We will be using the long exact sequence induced by a short exact sequence \cite[Chapter VI, Theorem 5.1]{mitchell} and the Shifting Lemma \cite[Lemma 2.2]{parte1}. In the case that $\mathcal{C}$ is an Ab4 category, for any family $\{A_{i}\}_{i\in I}$ of objects in $\mathcal{C},$ we will make use (without mentioning it) of the natural isomorphism \cite[Theorem 3.12]{argudin2019yoneda} \[ \Psi_{n}:\Extx[n][][\bigoplus_{i\in I} A_{i}][B]\rightarrow\prod_{i\in I}\Extx[n][][A_{i}][B]\;\forall B\in\mathcal{C}. \] Let $\mathcal{X}\subseteq\mathcal{C}.$ For any integer $i\geq 0,$ the right $i$-th orthogonal complement of $\mathcal{X}$ is defined by $\mathcal{X}^{\perp_i}:=\{C\in\mathcal{C}\;|\;\mathrm{Ext}^i_\mathcal{C}(-,C)|_\mathcal{X}=0\},$ and the total right orthogonal complement of $\mathcal{X}$ by $\mathcal{X}^{\perp}:=\cap_{i\geq 1}\mathcal{X}^{\perp_i}.$ Dually, we have the $i$-th and the total left orthogonal complements ${}^{\perp_i}\mathcal{X}$ and ${}^{\perp}\mathcal{X}$ of $\mathcal{X},$ respectively. In case we have some $\mathcal{Y}\subseteq\mathcal{C}$ such that $\mathcal{Y}\subseteq\mathcal{X}^{\bot}$ ($\mathcal{Y}\subseteq{}^{\bot}\mathcal{X}$), we say that $\mathcal{Y}$ is \textbf{$\mathcal{X}$-injective} (\textbf{$\mathcal{X}$-projective}). \ For a given $\mathcal{M}\subseteq\mathcal{C},$ we consider the class $\smdx[\mathcal{M}]$ of all the direct summands of objects in $\mathcal{M}.$ We denote by $\mathcal{M}^{\oplus}$ ($\mathcal{M}^{\oplus_{<\infty}}$) the class of (finite) coproducts of objects in $\mathcal{M}$. We define $\addx[\mathcal{M}]:=\smdx[\mathcal{M}^{\oplus_{<\infty}}]$ and $\Addx:=\smdx[\mathcal{M}^{\oplus}]$.
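To fix ideas, the following small classical computation (our illustration; it is not taken from the cited references) shows how these orthogonal complements look in a familiar setting:

```latex
% Orthogonal complements in $\mathcal{C}=\mathrm{Mod}(\mathbb{Z})$ for
% $\mathcal{X}=\{\mathbb{Z}/2\mathbb{Z}\}.$ Since
% $\mathrm{Ext}^{1}_{\mathbb{Z}}(\mathbb{Z}/2\mathbb{Z},A)\cong A/2A$ and
% $\mathbb{Z}$ is hereditary (so $\mathrm{Ext}^{i}_{\mathbb{Z}}=0$ for $i\geq2$),
\[
\mathcal{X}^{\perp_{1}}=\{A\in\mathrm{Mod}(\mathbb{Z})\;|\;2A=A\}=\mathcal{X}^{\perp}
\quad\text{and}\quad
\mathcal{X}^{\perp_{i}}=\mathrm{Mod}(\mathbb{Z})\;\forall i\geq2.
\]
% For instance, $\mathbb{Z}[1/2]\in\mathcal{X}^{\perp}$ although $\mathbb{Z}[1/2]$
% is not divisible; hence $\{\mathbb{Z}[1/2]\}$ is an $\mathcal{X}$-injective class
% whose object is not an injective object of $\mathrm{Mod}(\mathbb{Z})$.
```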
Furthermore, in case $\mathcal{M}$ consists of a single object $M,$ we set $M^{\oplus}:=\mathcal{M}^{\oplus}$, $M^{\oplus_{<\infty}}:=\mathcal{M}^{\oplus_{<\infty}}$, $\smdx[M]:=\smdx[\mathcal{M}]$, $\Addx[M]:=\Addx[\mathcal{M}]$, $\addx[M]:=\addx[\mathcal{M}]$, $M^{\bot}:=\mathcal{M^{\bot}},$ and $^{\bot}M:={}^{\bot}\mathcal{M}$. One important feature of this work is that we do not assume the existence of enough projectives or enough injectives in the abelian category $\mathcal{C}.$ Instead, we will be working with the following notions. For $(\mathcal{X},\omega)\subseteq\mathcal{C}^{2},$ it is said that $\omega$ is a \textbf{relative cogenerator in $\mathcal{X}$} if $\omega\subseteq\mathcal{X}$ and any $X\in\mathcal{X}$ admits an exact sequence $\suc[X][W][X']\mbox{,}$ with $W\in\omega$ and $X'\in\mathcal{X}$. The notion of \textbf{relative generator} is defined dually. \subsection{Cotorsion pairs, approximations, and related notions} Following \cite[Definition 3.1]{parte1}, we recall that for $\p\subseteq\mathcal{C}^{2}$ and $\mathcal{X}\subseteq\mathcal{C},$ it is said that $\p$ is a \textbf{left (right) cotorsion pair in $\mathcal{X}$} if $\mathcal{A}\cap\mathcal{X}={}^{\bot_{1}}\mathcal{B}\cap\mathcal{X}$ ($\mathcal{B}\cap\mathcal{X}=\mathcal{A}^{\bot_{1}}\cap\mathcal{X}$). Moreover, $\p$ is a \textbf{cotorsion pair in $\mathcal{X}$} if it is a left and right cotorsion pair in $\mathcal{X}.$ In case $\mathcal{X}=\mathcal{C},$ we simply say that $(\mathcal{A},\mathcal{B})$ is a left (right) cotorsion pair if it is a left (right) cotorsion pair in $\mathcal{C}.$ \ Cotorsion pairs are known for their relation with approximations. Namely, for a given $\mathcal{Z}\subseteq\mathcal{C}$, a morphism $f:Z\rightarrow M$ is called a \textbf{$\mathcal{Z}$-precover} if $Z\in\mathcal{Z}$ and $\Homx[][Z'][f]:\mathrm{Hom}_\mathcal{C}(Z',Z)\to \mathrm{Hom}_\mathcal{C}(Z',M)$ is an epimorphism $\forall Z'\in\mathcal{Z}$.
In case $f$ fits in an exact sequence $\suc[M'][Z][M][\,][\,]$, where $M'\in\mathcal{Z}^{\bot_{1}}$, $f$ is called a \textbf{special $\mathcal{Z}$-precover}. Dually, we have the notion of {\bf $\mathcal{Z}$-preenvelope} and {\bf special $\mathcal{Z}$-preenvelope}. \ Let $(\mathcal{X},\mathcal{Z})\subseteq\mathcal{C}^2.$ Following \cite[Definition 3.13]{parte1}, it is said that $\mathcal{Z}$ is \textbf{special precovering in} $\mathcal{X}$ if any $X\in\mathcal{X}$ admits an exact sequence $\suc[B][A][X]$ in $\mathcal{C}$ with $\ensuremath{A}\ensuremath{\in\mathcal{Z}\cap\mathcal{X}}$ and $\ensuremath{B}\ensuremath{\in\mathcal{Z}^{\bot_{1}}\cap\mathcal{X}} .$ The notion of \textbf{special preenveloping in $\mathcal{X}$} is defined dually. Recall that a cotorsion pair $\p$ is left complete if $\mathcal{A}$ is special precovering in $\mathcal{C}.$ As a generalization of that, and following \cite[Definition 3.14]{parte1}, it is said that a (not necessarily cotorsion) pair $\p\subseteq\mathcal{C}^{2}$ is \textbf{left $\mathcal{X}$-complete} if any $X\in\mathcal{X}$ admits an exact sequence $\suc[B][A][X]$, with $A\in\mathcal{A}\cap\mathcal{X}$ and $B\in\mathcal{B}\cap\mathcal{X}$. The notion of {\bf right $\mathcal{X}$-complete pair} is defined dually. Moreover, a pair is {\bf $\mathcal{X}$-complete} if it is right and left $\mathcal{X}$-complete. Finally, the pair $\p$ is called \textbf{$\mathcal{X}$-hereditary} if $\idr[\mathcal{A}\cap\mathcal{X}][\mathcal{B}\cap\mathcal{X}]=0$ \cite[Definition 3.7]{parte1}. \subsection[Relative dimensions]{Relative homological dimensions and relative resolution dimensions} In \cite{parte1}, we presented a generalization of the Auslander-Buchweitz-Reiten approximation theory \cite{Auslander-Buchweitz, auslandereiten}. The goal of such work was to study the relations between the relative homological dimensions and the existence of a particular class of relative resolutions and coresolutions. Let us recall such concepts.
\ Let $\mathcal{C}$ be an abelian category, $\mathcal{B},\mathcal{A}\subseteq\mathcal{C}$, and $C\in\mathcal{C}$. Following \cite{Auslander-Buchweitz}, the \textbf{$\mathcal{A}$-projective dimension} $\pdr[][C]$ of $C$ is \[ \pdr[][C]:=\min\left\{ n\in\mathbb{N}\,|\:\Extx[k][][C][\mathcal{A}]=0\,\forall k>n\right\} \mbox{,} \] where the minimum of the empty set is the symbol $\infty.$ The \textbf{$\mathcal{A}$-projective dimension} of $\mathcal{B}$ is $\pdr[][\mathcal{B}]:=\sup\left\{ \pdr[][B]\,|\:B\in\mathcal{B}\right\}.$ The \textbf{$\mathcal{A}$-injective dimension} $\mathrm{id}_\mathcal{A}(C)$ of $C$ and the\textbf{ $\mathcal{A}$-injective dimension} $\mathrm{id}_\mathcal{A}(\mathcal{B})$ of $\mathcal{B}$ are defined dually. We recall now, from \cite{parte1}, the notions of relative resolution and relative coresolution. \begin{defn} \cite[Definition 4.1]{parte1} Let $\mathcal{C}$ be an abelian category, $M\in\mathcal{C}$ and let $\mathcal{Y}$,$\mathcal{X}\subseteq\mathcal{C}.$ \begin{itemize} \item[$\mathrm{(a)}$] A \textbf{$\mathcal{Y}_{\mathcal{X}}$-coresolution} of $M$ is an exact sequence in $\mathcal{C}$ of the form $$0\rightarrow M\stackrel{f_{0}}{\rightarrow}Y_{0}\stackrel{f_{1}}{\rightarrow}Y_{1}\stackrel{}{\rightarrow}...\stackrel{}{\rightarrow}Y_{n-1}\stackrel{f_{n}}{\rightarrow}Y_{n}\stackrel{}{\rightarrow}\cdots\mbox{,}$$ with $Y_{k}\in\mathcal{Y}\cup\left\{ 0\right\} $ $\forall k\geq0$ and $\im[f_{i}]\in\mathcal{X}\cup\{0\}$ $\forall i\geq1.$ The class of all the objects in $\mathcal{C}$ having a $\mathcal{Y}_{\mathcal{X}}$-coresolution is denoted by $\mathcal{Y}_{\mathcal{X},\infty}^{\vee}.$ \item[$\mathrm{(b)}$] A \textbf{finite (of length $n$) $\mathcal{Y}_{\mathcal{X}}$-coresolution} of $M$ is an exact sequence in $\mathcal{C}$ of the form $$0\rightarrow 
M\stackrel{f_{0}}{\rightarrow}Y_{0}\stackrel{f_{1}}{\rightarrow}Y_{1}\stackrel{}{\rightarrow}...\stackrel{}{\rightarrow}Y_{n-1}\stackrel{f_{n}}{\rightarrow}Y_{n}\stackrel{}{\rightarrow}0\mbox{,}$$ with $Y_{n}\in\mathcal{X}\cap\mathcal{Y}$, $Y_{k}\in\mathcal{Y}$ $\forall k\in[0,n-1]$, and $\im[f_{i}]\in\mathcal{X}$ $\forall i\in[1,n-1].$ The class of all the objects in $\mathcal{C}$ having a finite $\mathcal{Y}_{\mathcal{X}}$-coresolution is denoted by $\mathcal{Y}^{\vee}_{\mathcal{X}}.$ Moreover, the class of all the objects in $\mathcal{C}$ having a $\mathcal{Y}_{\mathcal{X}}$-coresolution of length $\leq n$ is denoted by $\mathcal{Y}^{\vee}_{\mathcal{X},n}.$ Note that $\cup_{n\in\mathbb{N}}\mathcal{Y}^{\vee}_{\mathcal{X},n}=\mathcal{Y}^{\vee}_{\mathcal{X}}\subseteq \mathcal{Y}_{\mathcal{X},\infty}^{\vee}.$ \item[$\mathrm{(c)}$] The \textbf{$\mathcal{Y}_{\mathcal{X}}$-coresolution dimension} $\coresdimr{\mathcal{Y}}M{\mathcal{X}}$ of $M$ is defined by $$\coresdimr{\mathcal{Y}}M{\mathcal{X}}:=\min\{n\in\mathbb{N}\,|\, M\in \mathcal{Y}^{\vee}_{\mathcal{X},n}\}.$$ For $\mathcal{Z}\subseteq\mathcal{C},$ we set $\coresdimr{\mathcal{Y}}{\mathcal{Z}}{\mathcal{X}}:=\sup\left\{ \coresdimr{\mathcal{Y}}Z{\mathcal{X}}\,|\:Z\in\mathcal{Z}\right\} .$ \item[$\mathrm{(d)}$] We consider the classes: $\p[\mathcal{X}][\mathcal{Y}]_{\infty}^{\vee}:=\mathcal{X}\cap\mathcal{Y}_{\mathcal{X},\infty}^{\vee},$ $\p[\mathcal{X}][\mathcal{Y}]^{\vee}:=\mathcal{X}\cap\mathcal{Y}_{\mathcal{X}}^{\vee},$ and $\p[\mathcal{X}][\mathcal{Y}]_{n}^{\vee}:=\mathcal{X}\cap\mathcal{Y}_{\mathcal{X},n}^{\vee}.$ \item[$\mathrm{(e)}$] Dually, we define the $\mathcal{Y}_{\mathcal{X}}$-resolution (of length $n$) of $M,$ $\resdimr{\mathcal{Y}}M{\mathcal{X}}$ and the classes: $\mathcal{Y}_{\mathcal{X}}^{\wedge}$, $\mathcal{Y}_{\mathcal{X},\infty}^{\wedge}$ and $\mathcal{Y}_{\mathcal{X},n}^{\wedge}.$ We also have 
$\p[\mathcal{Y}][\mathcal{X}]_{\infty}^{\wedge}:=\mathcal{Y}_{\mathcal{X},\infty}^{\wedge}\cap\mathcal{X},$ $\p[\mathcal{Y}][\mathcal{X}]^{\wedge}:=\mathcal{Y}_{\mathcal{X}}^{\wedge}\cap\mathcal{X},$ and $\p[\mathcal{Y}][\mathcal{X}]_{n}^{\wedge}:=\mathcal{Y}_{\mathcal{X},n}^{\wedge}\cap\mathcal{X}.$ \end{itemize} If $\mathcal{X}=\mathcal{C},$ we omit the ``$\mathcal{X}$'' symbol in the above notations. Note that $M$ is isomorphic to some object in $\mathcal{X}\cap\mathcal{Y}$ if, and only if, $\coresdimr{\mathcal{Y}}M{\mathcal{X}}=0$ (respectively, $\resdimr{\mathcal{Y}}M{\mathcal{X}}=0$). \end{defn} \subsection{Closure properties} Let $\mathcal{C}$ be an abelian category, $\mathcal{Y}\subseteq\mathcal{X}\subseteq\mathcal{C}$ and $n\geq1.$ Following \cite[Definition 2.4]{parte1}, we recall that $\mathcal{Y}$ is \textbf{closed by $n$-quotients in $\mathcal{X}$} if for any exact sequence $0\rightarrow A\rightarrow Y_{n}\overset{\varphi_{n}}{\rightarrow}...\overset{}{\rightarrow}Y_{1}\overset{\varphi_{1}}{\rightarrow}B\rightarrow0$ in $\mathcal{C},$ with $Y_{i}\in\mathcal{Y}$, $\Kerx[\varphi_{i}]\in\mathcal{X}$ $\forall i\in[1,n]$ and $B\in\mathcal{X}$, we have that $B\in\mathcal{Y}$. The notion of \textbf{closed by $n$-subobjects in $\mathcal{X}$} is defined dually. These closure properties are useful to characterize classes $\mathcal{T}\subseteq\mathcal{C}$ such that $\pdr[\mathcal{X}][\mathcal{T}]\leq n$ and $\idr[\mathcal{X}][\mathcal{T}]\leq n,$ respectively; see \cite[Proposition 2.7]{parte1}. Other closure notions that we will be using in the development of the paper are the following ones \cite[Definition 3.3]{parte1}. Let $\mathcal{M},\mathcal{X}\subseteq\mathcal{C}$.
We say that $\mathcal{M}$ is \textbf{closed under mono-cokernels in $\mathcal{M}\cap\mathcal{X}$} if, for any exact sequence in $\mathcal{C}$ \[ \suc[M][M'][M'']\mbox{ with }M,M'\in\mathcal{M}\cap\mathcal{X}\mbox{,} \] we have that $M''\in\mathcal{M}.$ Dually, one defines the notion of being \textbf{closed under epi-kernels in $\mathcal{M}\cap\mathcal{X}$}. In case $\mathcal{M}\subseteq\mathcal{X}$, we will simply say that $\mathcal{M}$ is closed under mono-cokernels and epi-kernels, respectively. Furthermore, $\mathcal{M}$ is \textbf{$\mathcal{X}$-resolving} if $\mathcal{M}$ contains an $\mathcal{X}$-projective relative generator in $\mathcal{X}$, it is closed under epi-kernels in $\mathcal{M}\cap\mathcal{X}$ and under extensions; and the notion of \textbf{$\mathcal{X}$-coresolving} is defined dually. These notions are very useful to identify $\mathcal{X}$-hereditary pairs \cite[Lemma 3.5 and Lemma 3.6]{parte1}. Let $\mathcal{X}\subseteq\mathcal{C}.$ Following \cite[Definition 2.2]{ABsurvey}, $\mathcal{X}$ is \textbf{right thick} (\textbf{left thick}) if it is closed under extensions, direct summands and mono-cokernels (epi-kernels); and $\mathcal{X}$ is \textbf{thick} if it is left and right thick. \subsection{The class of relative $n$-quotients} Let $\mathcal{C}$ be an abelian category and $\mathcal{T},\mathcal{X}\subseteq\mathcal{C}.$ Following \cite[Section 5]{parte1}, we recall the notion of the relative $n$-$(\mathcal{X},\mathcal{T})$-quotients in $\mathcal{C},$ and the different variants related to small and big classes.
\ For any integer $n\geq1,$ $\Gennr$ denotes the class of the objects $C\in\mathcal{C}$ admitting an exact sequence $0\rightarrow K\rightarrow T_{n}\stackrel{f_{n}}{\rightarrow}...\stackrel{f_{2}}{\rightarrow}T_{1}\stackrel{f_{1}}{\rightarrow}C\rightarrow0$ in $\mathcal{C},$ with $\Kerx[f_{i}]\in\mathcal{X}$ and $T_{i}\in\mathcal{T}\cap\mathcal{X}$ $\forall i\in[1,n].$ We also define $\operatorname{Gen}_{n}^{\mathcal{X}}(\mathcal{T}):=\Gennr[\mathcal{T}^{\oplus}][n][\mathcal{X}]$ and $\operatorname{gen}_{n}^{\mathcal{X}}(\mathcal{T}):=\Gennr[\mathcal{T}^{\oplus_{<\infty}}][n][\mathcal{X}]$. For an object $T\in\mathcal{C}$, we define $\operatorname{Gen}_{n}^{\mathcal{X}}(T):=\operatorname{Gen}_{n}^{\mathcal{X}}(\operatorname{Add}(T))$ and $\operatorname{gen}_{n}^{\mathcal{X}}(T):=\operatorname{gen}_{n}^{\mathcal{X}}(\operatorname{add}(T))$. In case $\mathcal{X}=\mathcal{C}$, we set $\Genn[\mathcal{T}]:=\Gennr[\mathcal{T}][][\mathcal{C}]$, $\operatorname{Gen}_{n}(\mathcal{T}):=\operatorname{Gen}_{n}^{\mathcal{C}}(\mathcal{T})$ and $\operatorname{gen}_{n}(\mathcal{T}):=\operatorname{gen}_{n}^{\mathcal{C}}(\mathcal{T}).$ Some closure properties of this class can be found in \cite[Proposition 5.2]{parte1}. \section{Relative tilting classes} In this section, we introduce the notion of $n$-$\mathcal{X}$-tilting class in an abelian category $\mathcal{C}$ and develop a relative tilting theory on $\mathcal{X}\subseteq\mathcal{C}.$ We will show that this notion offers a unified framework for different previous notions of tilting found in the literature. Without further ado, let us define our main object of study. \begin{defn}\label{def:X-tilting}\label{def: tilting peque=0000F1o-1} Let $\mathcal{C}$ be an abelian category, $\mathcal{X}\subseteq\mathcal{C}$ and $n\in\mathbb{N}$. A class $\mathcal{T}\subseteq\mathcal{C}$ is \textbf{$n$-$\mathcal{X}$-tilting} if the following conditions hold true.
\begin{description} \item [(T0)] $\mathcal{T}=\smdx[\mathcal{T}].$ \item [(T1)] $\pdr[\mathcal{X}][\mathcal{T}]\leq n.$ \item [{(T2)}] $\mathcal{T}\cap\mathcal{X}\subseteq\mathcal{T}^{\bot}.$ \item [{(T3)}] There is a class $\omega\subseteq\mathcal{T}_{\mathcal{X}}^{\vee}$ which is a relative generator in $\mathcal{X}.$ \item [{(T4)}] There is a class $\alpha\subseteq\mathcal{X}^{\bot}\cap\mathcal{T}^{\bot}$ which is a relative cogenerator in $\mathcal{X}.$ \item [{(T5)}] Every $Z\in\mathcal{T}^{\bot}\cap\mathcal{X}$ admits a $\mathcal{T}$-precover $T'\rightarrow Z$, with $T'\in\mathcal{X}$. \end{description} An $n$-$\mathcal{X}$-tilting class $\mathcal{T}\subseteq\mathcal{C}$ is \textbf{big} (\textbf{small}) if $\mathcal{T}=\mathcal{T}^{\oplus}$ ($\mathcal{T}=\mathcal{T}^{\oplus_{<\infty}}$). An object $T\in\mathcal{C}$ is \textbf{big} (\textbf{small}) \textbf{$n$-$\mathcal{X}$-tilting} if $\Addx[T]$ ($\addx[T]$) is $n$-$\mathcal{X}$-tilting. \end{defn} In Section 5, we will show that the above definition generalizes a wide variety of previous tilting notions; in particular, tilting modules for Artin algebras, given by Dieter Happel and Claus Michael Ringel \cite{happel1982tilted}, and Yoichi Miyashita's tilting modules of finite projective dimension \cite{miyashita}. For now, the closest example is the tilting object in abelian categories developed by Leonid Positselski and Jan {\v{S}}t'ov{\'\i}{\v{c}}ek \cite{positselskicorrespondence}, which we call PS $n$-tilting. \begin{defn} \cite[Section 2, Theorem 3.4(3)]{positselskicorrespondence} Let $\mathcal{C}$ be an Ab3 and Ab3{*} category with an injective cogenerator. An object $T\in\mathcal{C}$ is \textbf{PS $n$-tilting} if the following conditions hold true. \begin{description} \item [{(PST1)}] $\pdx[T]\leq n.$ \item [{(PST2)}] $\Addx[T]\subseteq T^{\bot}.$ \item [{(PST3)}] There is a generating class $\mathcal{G}$ in $\mathcal{C}$ such that $\mathcal{G}\subseteq\left(\Addx[T]\right)^{\vee}$.
\end{description} \end{defn} \begin{rk} Let $\mathcal{C}$ be an Ab3 and Ab3{*} category with an injective cogenerator, and let $T\in\mathcal{C}.$ Then, $T$ is PS $n$-tilting if, and only if, $T$ is big $n$-$\mathcal{C}$-tilting. Indeed, it can be seen, by taking $\mathcal{X}=\mathcal{C}$ and $\mathcal{T}=\mathrm{Add}(T)$ in Definition \ref{def:X-tilting}, that the conditions (T4) and (T5) are satisfied trivially, and that the conditions (PST1), (PST2), and (PST3) coincide with (T1), (T2), and (T3), respectively. \end{rk} \subsection{Elementary properties of relative tilting classes} \begin{lem}\label{lem:chico}\label{lem:inf1}\label{lem:inf1-1}\label{lem:chico-1} For an abelian category $\mathcal{C}$ and $\mathcal{X}\subseteq\mathcal{C},$ the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] If $\mathcal{T}\subseteq\mathcal{C}$ satisfies $\mathrm{(T2)},$ then $\mathcal{T}\cap\mathcal{X}\subseteq\mathcal{T}^{\bot}\cap{}{}^{\bot}\left(\mathcal{T}^{\bot}\right).$ \item[$\mathrm{(b)}$] If $\mathcal{T}\subseteq\mathcal{C}$ satisfies $\mathrm{(T1)}$ and $\mathrm{(T4),}$ then $\mathcal{X}\subseteq(\mathcal{T}^{\bot}\cap\mathcal{X}){}_{\mathcal{X},n}^{\vee}$. \end{itemize} \end{lem} \begin{proof} (a) By (T2), $\mathcal{T}\cap\mathcal{X}\subseteq \mathcal{T}^{\bot}.$ Moreover, $\mathcal{M}\subseteq{}{}^{\bot}\left(\mathcal{M}^{\bot}\right),$ for any class $\mathcal{M}\subseteq\mathcal{C}.$ Therefore $\mathcal{T}\cap\mathcal{X}\subseteq\mathcal{T}^{\bot}\cap{}{}^{\bot}\left(\mathcal{T}^{\bot}\right)\mbox{.}$ \ (b) It follows from \cite[Proposition 4.5(a)]{parte1}. \end{proof} \begin{lem}\label{lem:inf2}\label{lem:inf2-1} For an abelian category $\mathcal{C}$ and $\mathcal{X},\mathcal{T}\subseteq\mathcal{C},$ the following statements hold true. 
\begin{itemize} \item[$\mathrm{(a)}$] $\mathcal{T}_{\mathcal{X}}^{\vee}\cap\mathcal{X}\subseteq\mathcal{T}{}^{\vee}\subseteq{}{}^{\bot}\left(\mathcal{T}^{\bot}\right)\subseteq{}{}^{\bot}\left(\mathcal{T}^{\bot}\cap\mathcal{X}\right).$ \item[$\mathrm{(b)}$] if $\mathcal{X}=\smdx[\mathcal{X}]$ and $\mathcal{T}$ satisfies $\mathrm{(T0)}$ and $\mathrm{(T2)},$ then $(\mathcal{T}\cap\mathcal{X})_{\mathcal{X}}^{\vee}\cap\mathcal{T}^{\bot}=\mathcal{T}\cap\mathcal{X}.$ \end{itemize} \end{lem} \begin{proof} (a) The inclusion $\mathcal{T}{}^{\vee}\subseteq{}{}^{\bot}\left(\mathcal{T}^{\bot}\right)$ follows from \cite[Lemma 4.3]{parte1}. \ (b) Let $A\in(\mathcal{T}\cap\mathcal{X})_{\mathcal{X}}^{\vee}\cap \mathcal{T}^{\bot}$. Hence, there is an exact sequence \[ \eta:\:\suc[A][T_{0}][A']\mbox{ with }T_{0}\in\mathcal{T}\cap\mathcal{X}\mbox{ and }A'\in\left(\mathcal{T}\cap\mathcal{X}\right)_{\mathcal{X}}^{\vee}, \] where $A'\in{}^{\bot}\left(\mathcal{T}^{\bot}\right)$ by (a). Note that $\eta$ splits since $A\in\mathcal{T}^{\bot},$ and thus, $A\in\mathcal{T}\cap\mathcal{X}.$ \ Since $\mathcal{T}\cap\mathcal{X}\subseteq (\mathcal{T}\cap\mathcal{X})_{\mathcal{X}}^{\vee},$ we get from (T2) that $\mathcal{T}\cap\mathcal{X}\subseteq (\mathcal{T}\cap\mathcal{X})_{\mathcal{X}}^{\vee}\cap\mathcal{T}^{\bot}.$ \end{proof} \begin{lem} \label{lem:inf3}\label{lem:inf3-1} Let $\mathcal{C}$ be an abelian category, $\mathcal{X}=\smdx[\mathcal{X}]\subseteq\mathcal{C}$ and $\mathcal{T}\subseteq\mathcal{C}$ satisfying $\mathrm{(T0)},$ $\mathrm{(T1)},$ $\mathrm{(T2)}$ and $\mathrm{(T4)}.$ Then \begin{itemize} \item[$\mathrm{(a)}$] $\coresdimr{\mathcal{T}\cap\mathcal{X}}{(\mathcal{T}\cap\mathcal{X})_{\mathcal{X}}^{\vee}\cap\mathcal{X}}{\mathcal{X}}\leq\pdr[\mathcal{X}][\mathcal{T}]$; \item[$\mathrm{(b)}$] $(\mathcal{T}\cap\mathcal{X})_{\mathcal{X}}^{\vee}=(\mathcal{T}\cap\mathcal{X})_{\mathcal{X},k}^{\vee}\;\forall k>\pdr[\mathcal{X}][\mathcal{T}].$ \end{itemize} \end{lem} \begin{proof} We consider 
$\mathcal{W}:=(\mathcal{T}\cap\mathcal{X})_{\mathcal{X}}^{\vee}\cap\mathcal{X}$ and $m:=\max\{1,\pdr[\mathcal{X}][\mathcal{T}]\}.$ \ (a) Let $X\in\mathcal{W}$. Then, there is an exact sequence \[ 0\rightarrow X\stackrel{f_{0}}{\rightarrow}Y_{0}\stackrel{f_{1}}{\rightarrow}Y_{1}\stackrel{}{\rightarrow}...\stackrel{}{\rightarrow}Y_{m-1}\stackrel{f_{m}}{\rightarrow}Y_{m}\stackrel{}{\rightarrow}0\mbox{,} \] with $Y_{m}\in\mathcal{W}$, $Y_0,Y_{i}\in\mathcal{T}\cap\mathcal{X}$ and $\im[f_{i}]\in\mathcal{X}$ $\forall i\in[1,m-1]$. Moreover, by (T1), (T4), and \cite[Proposition 2.7]{parte1}, it follows that $Y_{m}\in\mathcal{T}^{\bot}\cap\mathcal{X}$. Then, by Lemma \ref{lem:inf2}(b), $Y_{m}\in\mathcal{W}\cap\mathcal{T}^{\bot}=\mathcal{T}\cap\mathcal{X}$ and thus $\coresdimr{\mathcal{T}\cap\mathcal{X}}{\mathcal{W}}{\mathcal{X}}\leq m$.\\ Assume now that $\pdr[\mathcal{X}][\mathcal{T}]=0.$ Then $\mathcal{T}\subseteq{}^{\bot}\mathcal{X}$ and, for any $W\in\mathcal{W},$ there is an exact sequence $\eta_{W}:\:\suc[W][T_{W}][C_{W}][\,][\,]$ with $T_{W},C_{W}\in\mathcal{T}\cap\mathcal{X}$. Now, since $\mathcal{T}\subseteq{}^{\bot}\mathcal{X}$ and $\mathcal{W}\subseteq\mathcal{X}$, $\Extx[1][][\mathcal{T}\cap\mathcal{X}][\mathcal{W}]=0$. Hence $\eta_{W}$ splits $\forall W\in\mathcal{W}$. In particular, $\mathcal{W}\subseteq\mathcal{T}\cap\mathcal{X}$ and thus $\coresdimr{\mathcal{T}\cap\mathcal{X}}{\mathcal{W}}{\mathcal{X}}=0;$ proving (a). \ (b) Let $M\in(\mathcal{T}\cap\mathcal{X})_{\mathcal{X}}^{\vee}$. Then, there is an exact sequence \[ \suc[M][T_{0}][M']\mbox{ with }T_{0}\in\mathcal{T}\cap\mathcal{X}\mbox{ and }M'\in\mathcal{W}. \] It follows from (a) that $\coresdimr{\mathcal{T}\cap\mathcal{X}}{M'}{\mathcal{X}}\leq\pdr[\mathcal{X}][\mathcal{T}]=:n$. Hence $M'\in(\mathcal{T}\cap\mathcal{X})_{\mathcal{X},n}^{\vee}$ and thus $M\in(\mathcal{T}\cap\mathcal{X})_{\mathcal{X},n+1}^{\vee}$. 
Therefore, for any $k>n,$ we have \[ (\mathcal{T}\cap\mathcal{X})_{\mathcal{X}}^{\vee}=(\mathcal{T}\cap\mathcal{X})_{\mathcal{X},n+1}^{\vee}\subseteq(\mathcal{T}\cap\mathcal{X})_{\mathcal{X},k}^{\vee}\subseteq(\mathcal{T}\cap\mathcal{X})_{\mathcal{X}}^{\vee}; \] proving (b). \end{proof} \begin{cor}\label{cor:coronuevo pag 55} Let $\mathcal{C}$ be an abelian category, $\mathcal{X}=\smdx[\mathcal{X}]\subseteq\mathcal{C}$ be closed under extensions, $\mathcal{T}\subseteq\mathcal{C}$ be $n$-$\mathcal{X}$-tilting, and $\omega=\smdx[\omega]$ be an $\mathcal{X}$-projective relative generator in $\mathcal{X}$ such that $\omega\subseteq\mathcal{T}_{\mathcal{X}}^{\vee}$. Then, $\omega=\mathcal{X}\cap{}^{\bot}\mathcal{X}$ and $\coresdimr{\mathcal{T}\cap\mathcal{X}}{\omega}{\mathcal{X}}\leq\pdr[\mathcal{X}][\mathcal{T}]$. Furthermore, $\omega=\mathcal{T}\cap\mathcal{X}$ if $\pdr[\mathcal{X}][\mathcal{T}]=0$. \end{cor} \begin{proof} By the dual of \cite[Proposition 2.7]{ABsurvey}, we have that $\omega=\mathcal{X}\cap{}^{\bot}\mathcal{X}.$ On the other hand, $\omega\subseteq(\mathcal{T}\cap\mathcal{X})_{\mathcal{X}}^{\vee}\cap\mathcal{X}$ since $\omega\subseteq\mathcal{X}\cap\mathcal{T}_{\mathcal{X}}^{\vee}$ and $\mathcal{X}$ is closed under extensions. Therefore, by Lemma \ref{lem:inf3} (a), it follows that $\coresdimr{\mathcal{T}\cap\mathcal{X}}{\omega}{\mathcal{X}}\leq\pdr[\mathcal{X}][\mathcal{T}]$. \ Let us assume that $\pdr[\mathcal{X}][\mathcal{T}]=0$. Then $\mathcal{T}\subseteq{}^{\bot}\mathcal{X}$ and $\coresdimr{\mathcal{T}\cap\mathcal{X}}{\omega}{\mathcal{X}}=0.$ Hence $\omega\subseteq\mathcal{T}\cap\mathcal{X}\subseteq\mathcal{X}\cap{}^{\bot}\mathcal{X}=\omega$. \end{proof} The following result is a generalization of \cite[Lemma 2.3]{Tiltinginfinitamentegenerado}. 
\begin{lem}\label{lem:inf4}\label{lem:props T2 con T3' y C2 con C3'}\label{lem:inf4-1}\label{lem:props T2 con T3' y C2 con C3'-1} Let $\mathcal{C}$ be an abelian category and $\mathcal{X}\subseteq\mathcal{C}$ be closed under extensions. If $\mathcal{T}\subseteq\mathcal{C}$ satisfies $\mathrm{(T3)},$ then $\mathcal{T}^{\bot}\cap\mathcal{X}\subseteq\Gennr[\mathcal{T}][1][\mathcal{X}]$.\end{lem} \begin{proof} Let $A\in\mathcal{T}^{\bot}\cap\mathcal{X}$. By (T3), there is an exact sequence \\ \begin{minipage}[t]{0.55\columnwidth}% \[ \eta _1:\;\suc[K][W][A][][a]\mbox{,} \] with $W\in\omega$ and $K\in\mathcal{X}$. Moreover, by (T3) and Lemma \ref{lem:inf2} (a), there is an exact sequence \[ \eta _2:\;\suc[W][B][C][b] \] with $B\in\mathcal{T}$ and $C\in\mathcal{T}{}^{\vee}\cap\mathcal{X}\subseteq{}{}^{\bot}\left(\mathcal{T}^{\bot}\right)\cap\mathcal{X}$. Note that $B\in\mathcal{X}.$ Now, considering the push-out of $b$ and $a$, we get the exact sequences \begin{alignat*}{1} \eta _3:\:\suc[K][B][B'][][x] & \mbox{ and}\\ \eta _4:\:\suc[A][B'][C][t] & , \end{alignat*} % \end{minipage}\hfill{}% \fbox{\begin{minipage}[t]{0.4\columnwidth}% \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=1cm,main node/.style=,x=1.5cm,y=1.5cm] \node[main node] (C) at (0,0) {$A$}; \node[main node] (Z) [right of=C] {$B'$}; \node[main node] (X1) [right of=Z] {$C$}; \node[main node] (X) [above of=C] {$W$}; \node[main node] (W) [right of=X] {$B$}; \node[main node] (X2) [right of=W] {$C$}; \node[main node] (Y1) [above of=X] {$K$}; \node[main node] (Y2) [above of=W] {$K$}; \node[main node] (01) [below of=C] {$0$}; \node[main node] (02) [below of=Z] {$0$}; \node[main node] (03) [left of=C] {$0$}; \node[main node] (04) [right of=X1] {$0$}; \node[main node] (05) [left of=X] {$0$}; \node[main node] (06) [right of=X2] {$0$}; \node[main node] (07) [above of=Y1] {$0$}; \node[main node] (08) [above of=Y2] {$0$}; \draw[->, thin] (C) to node {$t$} (Z); \draw[->, thin] (Z) to node {$$} (X1); 
\draw[->, thin] (03) to node {$$} (C); \draw[->, thin] (X1) to node {$$} (04); \draw[->, thick] (Y1) to node {$$} (X); \draw[->, thin] (Y2) to node {$$} (W); \draw[->, thick] (X) to node {$a$} (C); \draw[-, double] (X2) to node {$$} (X1); \draw[->, thin] (W) to node {$x$} (Z); \draw[->, thick] (05) to node {$$} (X); \draw[->, thick] (X) to node {$b$} (W); \draw[->, thick] (W) to node {$$} (X2); \draw[->, thick] (X2) to node {$$} (06); \draw[-, double] (Y1) to node {$$} (Y2); \draw[->, thick] (07) to node {$$} (Y1); \draw[->, thin] (08) to node {$$} (Y2); \draw[->, thick] (C) to node {$$} (01); \draw[->, thin] (Z) to node {$$} (02); \end{tikzpicture} \]% \end{minipage}}\\ \fbox{\begin{minipage}[t]{0.4\columnwidth}% \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=1cm,main node/.style=,x=1.5cm,y=1.5cm] \node[main node] (K) at (0,0) {$K$}; \node[main node] (X) [right of=K] {$W$}; \node[main node] (C1) [right of=X] {$A$}; \node[main node] (YK) [below of=K] {$K'$}; \node[main node] (U) [right of=YK] {$B$}; \node[main node] (C2) [right of=U] {$A$}; \node[main node] (X1) [below of=YK] {$C$}; \node[main node] (X2) [below of=U] {$C$}; \node[main node] (01) [above of=K] {$0$}; \node[main node] (02) [above of=X] {$0$}; \node[main node] (03) [left of=K] {$0$}; \node[main node] (04) [right of=C1] {$0$}; \node[main node] (05) [left of=YK] {$0$}; \node[main node] (06) [right of=C2] {$0$}; \node[main node] (07) [below of=X1] {$0$}; \node[main node] (08) [below of=X2] {$0$}; \draw[->, thick] (K) to node {$$} (X); \draw[->, thick] (X) to node {$a$} (C1); \draw[->, thick] (03) to node {$$} (K); \draw[->, thick] (C1) to node {$$} (04); \draw[->, thick] (K) to node {$$} (YK); \draw[->, thin] (X) to node {$b$} (U); \draw[->, thick] (YK) to node {$$} (X1); \draw[->, thin] (U) to node {$$} (X2); \draw[->, thin] (05) to node {$$} (YK); \draw[->, thin] (YK) to node {$$} (U); \draw[->, thin] (U) to node {$yx$} (C2); \draw[->, thin] (C2) to node {$$} (06); \draw[-, double] (X1) 
to node {$$} (X2); \draw[->, thick] (X1) to node {$$} (07); \draw[->, thin] (X2) to node {$$} (08); \draw[->, thick] (01) to node {$$} (K); \draw[->, thin] (02) to node {$$} (X); \draw[-, double] (C1) to node {$$} (C2); \end{tikzpicture} \]% \end{minipage}}\hfill{}% \begin{minipage}[t]{0.55\columnwidth}% with $B'\in\Gennr[\mathcal{T}][1]$. Furthermore, $\eta_4$ splits since $A\in\mathcal{T}^\perp$ and $C\in{}^\perp(\mathcal{T}^\perp).$ Thus, there is some $y:B'\rightarrow A$ such that $yt=1_A.$ Consider the exact sequence \[ \eta_5:\:\suc[K'][B][A][][yx]\mbox{.} \] Since $B\in\mathcal{T}$, it remains to show that $K'\in\mathcal{X}$. For that purpose observe that, by using $\eta_5$ and $\eta_2$, we can build the exact sequence \[ \suc[K][K'][C]\mbox{,} \] where $K,C\in\mathcal{X}$. Therefore $K'\in\mathcal{X}$ since $\mathcal{X}$ is closed under extensions.% \end{minipage}\\ \end{proof} An important property of an infinitely generated tilting module of finite projective dimension $T\in\Modx$ is that $\Addx[T]$ is a relative generator in $T^{\bot}$. In our relative context, such property can be translated as the following one: $\mathcal{T}\cap\mathcal{X}$ is a relative generator in $\mathcal{T}^{\bot}\cap\mathcal{X}$. In that sense, the following lemma is a generalization of \cite[Lemma 2.4]{Tiltinginfinitamentegenerado}. \begin{lem}\label{lem:inf5}\label{lem:props C2 y T2}\label{lem:inf5-1}\label{lem:props C2 y T2-1} For an abelian category $\mathcal{C},$ $\mathcal{X}=\smdx[\mathcal{X}]\subseteq\mathcal{C}$ closed under extensions and $\mathcal{T}\subseteq\mathcal{C}$ satisfying $\mathrm{(T2), (T5)}$ and such that $\mathcal{T}^{\bot}\cap\mathcal{X}\subseteq\Gennr[\mathcal{T}][1],$ the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] $\mathcal{T}\cap\mathcal{X}$ is a relative generator in $\mathcal{T}^{\bot}\cap\mathcal{X}$. 
\item[$\mathrm{(b)}$] Every morphism $A\rightarrow X$, with $A\in{}{}^{\bot}\left(\mathcal{T}^{\bot}\cap\mathcal{X}\right)$ and $X\in\mathcal{T}^{\bot}\cap\mathcal{X}$, factors through $\mathcal{T}\cap\mathcal{X}$. Moreover, if $\mathcal{T}=\smdx[\mathcal{T}]$, then \[ \mathcal{T}\cap\mathcal{X}=\mathcal{T}^{\bot}\cap\mathcal{X}\cap{}{}^{\bot}\left(\mathcal{T}^{\bot}\cap\mathcal{X}\right)=\mathcal{T}^{\bot}\cap\mathcal{X}\cap{}{}^{\bot}\left(\mathcal{T}^{\bot}\right)\mbox{.} \] \end{itemize} \end{lem} \begin{proof} (a) By (T2), $\mathcal{T}\cap\mathcal{X}\subseteq\mathcal{T}^{\bot}$. Let $X\in\mathcal{T}^{\bot}\cap\mathcal{X}$. By (T5), there is a $\mathcal{T}$-precover $g:T'\rightarrow X$ with $T'\in\mathcal{X}$. Moreover, $g$ is an epimorphism since $X\in\mathcal{T}^{\bot}\cap\mathcal{X}\subseteq\Gennr[\mathcal{T}][1][\mathcal{X}].$ Let us prove that $K=\Kerx[g]\in\mathcal{T}^{\bot}\cap\mathcal{X}$. Consider the \makebox[\linewidth][s]{exact sequence $\suc[K][T'][X][][g]\mbox{.}$ By (T2) and the fact that $g$ is an}\\ \begin{minipage}[t]{0.45\columnwidth}% $\mathcal{T}$-precover, it follows that $K\in\mathcal{T}^{\bot}$. It remains to show that $K\in\mathcal{X}$. Since $X\in\Gennr[\mathcal{T}][1]$, there is an exact sequence $\suc[K'][B][X][][f]\mbox{,}$ with $B\in\mathcal{T}\cap\mathcal{X}$ and $K'\in\mathcal{X}$. Let $Z$ be the pullback of $f$ and $g$. We have the following exact sequences \[ \eta:\:\suc[K'][Z][T'] \] \[ \eta':\:\suc[K][Z][B]\mbox{.} \] Since $K',T'\in\mathcal{X}$, we have $Z\in\mathcal{X}.$ Furthermore, $\eta'$ splits since $K\in\mathcal{T}^{\bot}$ and $B\in\mathcal{T}$. Therefore $K\in\mathcal{X}$. 
% \end{minipage}\hfill{}% \fbox{\begin{minipage}[t]{0.4\columnwidth}% \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=1cm,main node/.style=,x=1.5cm,y=1.5cm] \node[main node] (C) at (0,0) {$X$}; \node[main node] (X0) [left of=C] {$T'$}; \node[main node] (X1) [left of=X0] {$K$}; \node[main node] (X) [above of=C] {$B$}; \node[main node] (E) [left of=X] {$Z$}; \node[main node] (X2) [left of=E] {$K$}; \node[main node] (Y1) [above of=X] {$K'$}; \node[main node] (Y2) [above of=E] {$K'$}; \node[main node] (01) [below of=C] {$0$}; \node[main node] (02) [below of=X0] {$0$}; \node[main node] (03) [right of=C] {$0$}; \node[main node] (04) [left of=X1] {$0$}; \node[main node] (05) [right of=X] {$0$}; \node[main node] (06) [left of=X2] {$0$}; \node[main node] (07) [above of=Y1] {$0$}; \node[main node] (08) [above of=Y2] {$0$}; \draw[->, thick] (X0) to node {$g$} (C); \draw[->, thick] (X1) to node {$$} (X0); \draw[->, thick] (C) to node {$$} (03); \draw[->, thick] (04) to node {$$} (X1); \draw[->, thick] (Y1) to node {$$} (X); \draw[->, thin] (Y2) to node {$$} (E); \draw[->, thick] (X) to node {$f$} (C); \draw[-, double] (X1) to node {$$} (X2); \draw[->, thin] (E) to node {$$} (X0); \draw[->, thin] (X) to node {$$} (05); \draw[->, thin] (E) to node {$$} (X); \draw[->, thin] (X2) to node {$$} (E); \draw[->, thin] (06) to node {$$} (X2); \draw[-, double] (Y2) to node {$$} (Y1); \draw[->, thick] (07) to node {$$} (Y1); \draw[->, thin] (08) to node {$$} (Y2); \draw[->, thick] (01) to node {$$} (C); \draw[->, thin] (02) to node {$$} (X0); \end{tikzpicture} \]% \end{minipage}} \ (b) Let $f:A\rightarrow X$ with $A\in{}{}^{\bot}\left(\mathcal{T}^{\bot} \cap \mathcal{X} \right)$ and $X\in\mathcal{T}^{\bot}\cap\mathcal{X}$. By (a), there is an exact sequence $\suc[L][T'][X][k][g]\mbox{,}$ with $T'\in\mathcal{T}\cap\mathcal{X}$ and $L\in\mathcal{T}^{\bot}\cap\mathcal{X}$. Observe that $\Extx[1][][A][L]=0$. Hence, $\Homx[][A][g]$ is surjective and thus $f$ factors through $g$. 
For the second claim, it is enough to observe that, for every $A'\in\mathcal{T}^{\bot}\cap{}\mathcal{X}\cap {}{}^{\bot}\left(\mathcal{T}^{\bot}\right)$ (or $A'\in\mathcal{T}^{\bot}\cap{}\mathcal{X}\cap {}{}^{\bot}\left(\mathcal{T}^{\bot}\cap \mathcal{X} \right)$), $1_{A'}$ factors through $\mathcal{T}\cap\mathcal{X}$. \end{proof} \begin{lem}\label{lem:inf6}\label{lem:inf6-1} Let $\mathcal{C}$ be an abelian category, $\mathcal{X}=\smdx[\mathcal{X}]\subseteq\mathcal{C}$ be closed under extensions, and let $\mathcal{T}=\smdx[\mathcal{T}]\subseteq\mathcal{C}$ be a class satisfying $\mathrm{(T1), (T2), (T4), (T5)}$ and such that $\mathcal{T}^{\bot}\cap\mathcal{X}\subseteq\Gennr[\mathcal{T}][1][\mathcal{X}]$. Then, $\mathcal{X}\subseteq(\mathcal{T}^{\bot}\cap\mathcal{X})_{\mathcal{X}}^{\vee}$ and $(\mathcal{T}\cap\mathcal{X})^{\vee}\subseteq{}{}^{\bot}(\mathcal{T}^{\bot}\cap\mathcal{X})$. Moreover, for each $X\in\mathcal{X},$ the following statements hold true: \begin{itemize} \item[$\mathrm{(a)}$] $m:=\coresdimr{\mathcal{T}^{\bot}\cap\mathcal{X}}X{\mathcal{X}}\leq\pdr[\mathcal{X}][\mathcal{T}]<\infty$; \item[$\mathrm{(b)}$] there are exact sequences \[ \suc[X][M_{X}][C_{X}]\qquad\mbox{ and }\qquad\suc[K_{X}][B_{X}][X] \] such that $M_{X},\:K_{X}\in\mathcal{T}^{\bot}\cap\mathcal{X}$; $C_{X},\:B_{X}\in\mathcal{X}$; $\coresdimr{\mathcal{T}\cap\mathcal{X}}{C_{X}}{\mathcal{X}}=m-1$ and $\coresdimr{\mathcal{T}\cap\mathcal{X}}{B_{X}}{\mathcal{X}}\leq m;$ \item[$\mathrm{(c)}$] $B_{X}\rightarrow X$ is a $(\mathcal{T}\cap\mathcal{X})^{\vee}$-precover; \item[$\mathrm{(d)}$] $X\rightarrow M_{X}$ is a $\mathcal{T}^{\bot}\cap\mathcal{X}$-preenvelope. \end{itemize} \end{lem} \begin{proof} By Lemma \ref{lem:inf5}(a,b), it follows that $\mathcal{T}\cap\mathcal{X}$ is a $\mathcal{T}^{\bot}\cap\mathcal{X}$-projective relative generator in $\mathcal{T}^{\bot}\cap\mathcal{X}$. Moreover, $\mathcal{X}\subseteq(\mathcal{T}^{\bot}\cap\mathcal{X})_{\mathcal{X},n}^{\vee}$ by Lemma \ref{lem:inf1}. 
Hence, by \cite[Theorem 4.4]{parte1}, the lemma is proved. \end{proof} In what follows, we will see that the condition $\mathcal{T}^{\bot}\cap\mathcal{X}\subseteq\Gennr[\mathcal{T}][1][\mathcal{X}]$, obtained in Lemma \ref{lem:inf4}, is equivalent to (T3) if it is assumed that $\mathcal{T}$ satisfies (T1), (T2), (T4), and (T5). The next proposition is a generalization of \cite[Theorem 3.4 (2,3)]{positselskicorrespondence}. \begin{prop}\label{prop:equiv a t3} Let $\mathcal{C}$ be an abelian category, $\mathcal{X}=\smdx[\mathcal{X}]\subseteq\mathcal{C}$ be closed under extensions, and let $\mathcal{T}=\smdx[\mathcal{T}]\subseteq\mathcal{C}$ be a class satisfying $\mathrm{(T1), (T2), (T4)},$ and $\mathrm{(T5)}.$ Then, $\mathcal{T}$ satisfies $\mathrm{(T3)}$ if and only if $\mathcal{T}^{\bot}\cap\mathcal{X}\subseteq\Gennr[\mathcal{T}][1][\mathcal{X}]$. Furthermore, in such a case, we can choose a relative generator $\omega$ in $\mathcal{X}$ such that $\omega\subseteq(\mathcal{T}\cap\mathcal{X})_{\mathcal{X}}^{\vee}$.\end{prop} \noindent \begin{minipage}[t]{0.55\columnwidth}% \begin{proof} By Lemma \ref{lem:inf4}, it is enough to prove that $\mathcal{T}^{\bot}\cap\mathcal{X}\subseteq\Gennr[\mathcal{T}][1][\mathcal{X}]$ implies (T3). By Lemma \ref{lem:inf6}, every $X\in\mathcal{X}$ admits an exact sequence $\suc[X][M_{X}][C_{X}][f]\mbox{,}$ with $M_{X}\in\mathcal{T}^{\bot}\cap\mathcal{X},C_{X}\in(\mathcal{X},\mathcal{T}\cap\mathcal{X})_{\mathcal{X}}^{\vee}.$ From the inclusion $\mathcal{T}^{\bot}\cap\mathcal{X}\subseteq\Gennr[\mathcal{T}][1][\mathcal{X}]$, we have that $M_{X}$ admits an exact sequence $\suc[M'_{X}][T_{0}][M_{X}][][g]$ with $T_{0}\in\mathcal{T}\cap\mathcal{X}$ and $M'_{X}\in\mathcal{X}$. Considering the pullback of $f$ and $g$, we get an exact sequence \[ \suc[M'_{X}][P_{X}][X]\mbox{,} \] where $P_{X}\in(\mathcal{X},\mathcal{T}\cap\mathcal{X})_{\mathcal{X}}^{\vee}$. 
Hence, $\left\{ P_{X}\right\} _{X\in\mathcal{X}}$ is a relative generator in $\mathcal{X}$ satisfying (T3). \end{proof}% \end{minipage}\hfill{}% \fbox{\begin{minipage}[t]{0.4\columnwidth}% \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=1cm,main node/.style=,x=1.5cm,y=1.5cm] \node[main node] (C) at (0,0) {$X$}; \node[main node] (Z) [right of=C] {$M_X$}; \node[main node] (X1) [right of=Z] {$C_X$}; \node[main node] (X) [above of=C] {$P_X$}; \node[main node] (W) [right of=X] {$T_0$}; \node[main node] (X2) [right of=W] {$C_X$}; \node[main node] (Y1) [above of=X] {$M'_X$}; \node[main node] (Y2) [above of=W] {$M'_X$}; \node[main node] (01) [below of=C] {$0$}; \node[main node] (02) [below of=Z] {$0$}; \node[main node] (03) [left of=C] {$0$}; \node[main node] (04) [right of=X1] {$0$}; \node[main node] (05) [left of=X] {$0$}; \node[main node] (06) [right of=X2] {$0$}; \node[main node] (07) [above of=Y1] {$0$}; \node[main node] (08) [above of=Y2] {$0$}; \draw[->, thick] (C) to node {$f$} (Z); \draw[->, thick] (Z) to node {$$} (X1); \draw[->, thick] (03) to node {$$} (C); \draw[->, thick] (X1) to node {$$} (04); \draw[->, thin] (Y1) to node {$$} (X); \draw[->, thick] (Y2) to node {$$} (W); \draw[->, thin] (X) to node {$ $} (C); \draw[-, double] (X2) to node {$$} (X1); \draw[->, thick] (W) to node {$g$} (Z); \draw[->, thin] (05) to node {$$} (X); \draw[->, thin] (X) to node {$ $} (W); \draw[->, thin] (W) to node {$$} (X2); \draw[->, thin] (X2) to node {$$} (06); \draw[-, double] (Y1) to node {$$} (Y2); \draw[->, thin] (07) to node {$$} (Y1); \draw[->, thick] (08) to node {$$} (Y2); \draw[->, thin] (C) to node {$$} (01); \draw[->, thick] (Z) to node {$$} (02); \end{tikzpicture} \]% \end{minipage}}\\ \begin{rem} Let $R$ be a ring. It can be proved that $T^{\bot}$ is preenveloping in $\Modx,$ for any $T\in\Modx$ \cite[Theorem 3.2.1]{Approximations}. This is a property that greatly enriches tilting theory. 
Below, we will prove a similar property in our relative context. The proof is a generalization of \cite[Proposition 3.3]{Tiltinginfinitamentegenerado}. \end{rem} \begin{thm}\label{thm:el par n-X-tilting}\label{thm:el par n-X-tilting-1} For an abelian category $\mathcal{C},$ $\mathcal{X}=\smdx[\mathcal{X}]\subseteq\mathcal{C}$ closed under extensions and $\mathcal{T}\subseteq\mathcal{C}$ satisfying $\mathrm{(T1),(T2),(T3),(T4)}$ and $\mathrm{(T5)},$ the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] ${}{}^{\bot}(\mathcal{T}^{\bot}\cap\mathcal{X})\cap\mathcal{X}={}^{\bot}(\mathcal{T}^{\bot})\cap\mathcal{X}=\mathcal{T}_{\mathcal{X}}^{\vee}\cap\mathcal{X}=\left(\mathcal{T}\cap\mathcal{X}\right)_{\mathcal{X}}^{\vee}\cap\mathcal{X}$ if $\mathcal{T}=\smdx[\mathcal{T}].$ \item[$\mathrm{(b)}$] $\mathcal{T}^{\bot}\cap\mathcal{X}=\Gennr[\mathcal{T}][k][\mathcal{X}]\cap\mathcal{X}$ $\forall k\geq\max\{1,\pdr[\mathcal{X}][\mathcal{T}]\}$. \item[$\mathrm{(c)}$] If $\mathcal{T}=\smdx[\mathcal{T}]$, then $({}{}^{\bot}(\mathcal{T}^{\bot}),\mathcal{T}^{\bot})$ is $\mathcal{X}$-complete and hereditary. \end{itemize} \end{thm} \begin{proof} (a) By Lemma \ref{lem:inf2}(a), we get the inclusions \begin{center} $\left(\mathcal{T}\cap\mathcal{X}\right)_{\mathcal{X}}^{\vee}\cap\mathcal{X}\subseteq\mathcal{T}_{\mathcal{X}}^{\vee}\cap\mathcal{X}\subseteq{}{}^{\bot}(\mathcal{T}^{\bot})\cap\mathcal{X}\subseteq{}{}^{\bot}(\mathcal{T}^{\bot}\cap\mathcal{X})\cap\mathcal{X}.$ \end{center} Consider $X\in{}{}^{\bot}(\mathcal{T}^{\bot}\cap\mathcal{X})\cap\mathcal{X}$. From Lemma \ref{lem:inf6} and Proposition \ref{prop:equiv a t3}, we get an exact sequence $\suc[X][M_{X}][C_{X}]$ with $M_{X}\in\mathcal{T}^{\bot}\cap\mathcal{X}$ and $C_{X}\in(\mathcal{X},\mathcal{T}\cap\mathcal{X})_{\mathcal{X}}^{\vee}$. Moreover, $C_{X}\in{}{}^{\bot}\left(\mathcal{T}^{\bot}\cap\mathcal{X}\right)$ by Lemma \ref{lem:inf2}(a). 
Note that $M_{X}\in{}{}^{\bot}\left(\mathcal{T}^{\bot}\cap\mathcal{X}\right)$ since $X\in{}{}^{\bot}\left(\mathcal{T}^{\bot}\cap\mathcal{X}\right)$. By Lemma \ref{lem:inf5}(b), $\mathcal{T}\cap\mathcal{X}=\mathcal{T}^{\bot}\cap{}{}^{\bot}\left(\mathcal{T}^{\bot}\cap\mathcal{X}\right)\cap\mathcal{X}\mbox{,}$ and thus, $M_{X}\in\mathcal{T}\cap\mathcal{X}$. Therefore, $X\in\left(\mathcal{T}\cap\mathcal{X}\right)_{\mathcal{X}}^{\vee}\cap\mathcal{X}$. \ (b) By Lemma \ref{lem:inf4} and Lemma \ref{lem:inf5} (a), $\mathcal{T}\cap\mathcal{X}$ is a relative generator in $\mathcal{T}^{\bot}\cap\mathcal{X}$. Hence, it follows that $\mathcal{T}^{\bot}\cap\mathcal{X}\subseteq\Gennr[\mathcal{T}][k][\mathcal{X}]\cap\mathcal{X}$ $\forall k\geq1$. Let $m:=\max\{1,\pdr[\mathcal{X}][\mathcal{T}]\}$. We will show that $\Gennr[\mathcal{T}][k][\mathcal{X}]\cap\mathcal{X}\subseteq\mathcal{T}^{\bot}\cap\mathcal{X}$ $\forall k\geq m$. Consider $C\in\Gennr[\mathcal{T}][k][\mathcal{X}]\cap\mathcal{X}$ with $k\geq m$. By definition, there is an exact sequence \[ 0\rightarrow K\rightarrow T_{k}\stackrel{f_{k}}{\rightarrow}...\stackrel{f_{2}}{\rightarrow}T_{1}\stackrel{f_{1}}{\rightarrow}C\rightarrow0, \] where $\Kerx[f_{i}]\in\mathcal{X}$ and $T_{i}\in\mathcal{T}\cap\mathcal{X}$ $\forall i\in[1,k]$. Then, by (T2), (T1), (T4), and \cite[Proposition 2.7(a)]{parte1}, it follows that $C\in\mathcal{T}^{\bot}\cap\mathcal{X}.$ \ (c) It is clear that the pair $({}{}^{\bot}(\mathcal{T}^{\bot}),\mathcal{T}^{\bot})$ is hereditary. Let us prove that it is $\mathcal{X}$-complete. By Lemma \ref{lem:inf2} (a), $(\mathcal{T}\cap\mathcal{X})_{\mathcal{X}}^{\vee}\cap\mathcal{X}\subseteq{}^{\bot}(\mathcal{T}^{\bot})\cap\mathcal{X}$. 
Then, by Lemma \ref{lem:inf6} and Proposition \ref{prop:equiv a t3}, for each $X\in\mathcal{X},$ there are exact sequences $\suc[X][M_{X}][C_{X}]\mbox{ and }\suc[K_{X}][B_{X}][X]\mbox{,}$ where $M_{X},K_{X}\in\mathcal{T}^{\bot}\cap\mathcal{X}$ and $C_{X},B_{X}\in(\mathcal{T}\cap\mathcal{X})_{\mathcal{X}}^{\vee}\cap\mathcal{X};$ proving (c). \end{proof} The following result is a generalization of \cite[Theorem 4.3]{Wei}, \cite[Theorem 3.11]{Bazzonintilting}, and \cite[Theorem 4.4]{Tiltinginfinitamentegenerado}. \begin{thm}\label{prop:primera generalizacion}\label{prop:primera generalizacion-1} For an abelian category $\mathcal{C},$ $n\geq1,$ $\mathcal{X}=\smdx[\mathcal{X}]\subseteq\mathcal{C}$ closed under extensions and $\mathcal{T}=\smdx[\mathcal{T}]\subseteq\mathcal{C}$ satisfying $\mathrm{(T4)}$ and $\mathrm{(T5)},$ the following statements are equivalent. \begin{itemize} \item[$\mathrm{(a)}$] $\mathcal{T}$ is $n$-$\mathcal{X}$-tilting. \item[$\mathrm{(b)}$] $\Gennr\cap\mathcal{X}=\mathcal{T}^{\bot}\cap\mathcal{X}$. \item[$\mathrm{(c)}$] $\mathcal{T}^{\bot}\cap\mathcal{X}=\Gennr[][k]\cap\mathcal{X}$ $\forall k\geq n$. \item[$\mathrm{(d)}$] $\mathcal{T}^{\bot}\cap\mathcal{X}$ is closed under $n$-quotients in $\mathcal{X}$ and $\mathcal{T}\cap\mathcal{X}\subseteq\mathcal{T}^{\bot}\cap\mathcal{X}\subseteq\Gennr[][1]$. \end{itemize} \end{thm} \begin{proof} (a) $\Rightarrow$ (b) It follows from Theorem \ref{thm:el par n-X-tilting} (b). \ (b) $\Rightarrow$ (c) It is enough to prove that $\Gennr[\mathcal{T}][n+1][\mathcal{X}]\cap\mathcal{X}\supseteq\Gennr[\mathcal{T}][n][\mathcal{X}]\cap\mathcal{X}$. 
Let $N\in\Gennr[\mathcal{T}][n][\mathcal{X}]\cap\mathcal{X}=\mathcal{T}^{\bot}\cap\mathcal{X}.$ Then, by (T5), there is a $\mathcal{T}$-precover $f:A\rightarrow N$ with $A\in\mathcal{X}.$ Note that $f$ is an epimorphism since $\Gennr[\mathcal{T}][n]\subseteq\Gennr[\mathcal{T}][1].$ Thus, we have the exact sequence $\eta:\; 0\to K\to A\xrightarrow{f} N\to 0,$ where $A\in\mathcal{T}\cap\mathcal{X}\subseteq \Gennr[\mathcal{T}][n][\mathcal{X}]\cap\mathcal{X}=\mathcal{T}^{\bot}\cap\mathcal{X}.$ Since $A,N\in\mathcal{T}^{\perp}$ and $f$ is a $\mathcal{T}$-precover, from the exact sequence $\eta,$ it can be shown that $K\in\mathcal{T}^{\bot}.$ We assert now that $K\in \mathcal{X}.$ Indeed, since $N\in\Gennr[][1],$ there is an exact sequence $\eta':\; 0\to K'\to M_0\xrightarrow{f'} N\to 0,$ where $M_0\in \mathcal{T}\cap\mathcal{X}$ and $K'\in\mathcal{X}.$ Then, from the pull-back construction of $f$ and $f',$ we can get an exact sequence $\eta'':\; 0\to K\to P\to M_0\to 0,$ where $P\in\mathcal{X}$ since $\mathcal{X}$ is closed under extensions. 
Note that $\eta''$ splits since $M_0\in\mathcal{T}$ and $K\in\mathcal{T}^\perp$ and thus $K\in\mathcal{X}.$ Therefore $K\in\mathcal{T}^{\bot}\cap\mathcal{X}=\Gennr\cap\mathcal{X}$ and from the exact sequence $\eta,$ it follows that $N\in \Gennr[\mathcal{T}][n+1][\mathcal{X}]\cap\mathcal{X}.$ \ (c) $\Rightarrow$ (d) By (c), we know that $\mathcal{T}\cap\mathcal{X}\subseteq\Gennr[\mathcal{T}][n]\cap\mathcal{X}=\mathcal{T}^{\bot}\cap\mathcal{X}\subseteq\Gennr[][1]\mbox{.}$ Since $\mathcal{T}^{\bot}\cap\mathcal{X}=\Gennr[\mathcal{T}][n]\cap\mathcal{X}=\Gennr[\mathcal{T}][n+1]\cap\mathcal{X}\mbox{,}$ from \cite[Lemma 5.2]{parte1}, we get that $\mathcal{T}^{\bot}\cap\mathcal{X}$ is closed under $n$-quotients in $\mathcal{X}.$ \ (d) $\Rightarrow$ (a) Since $\mathcal{T}^{\bot}\cap\mathcal{X}$ is closed under $n$-quotients in $\mathcal{X}$ and $\mathrm{(T4)}$ holds true, it follows from \cite[Proposition 2.7]{parte1} that $\pdr[\mathcal{X}][\mathcal{T}]\leq n$ and thus (T1) holds true. Furthermore, by (d), we have that (T2) holds true and $\mathcal{T}^{\bot}\cap\mathcal{X}\subseteq\Gennr[][1]\cap\mathcal{X}$. Therefore, by Proposition \ref{prop:equiv a t3}, we conclude that $\mathcal{T}$ is $n$-$\mathcal{X}$-tilting. \end{proof} As a consequence of Theorem \ref{prop:primera generalizacion}, we can give an equivalent condition for (T5) in the case $\mathcal{T}\subseteq\mathcal{X}.$ \begin{cor} \label{prop:primera generalizacion-1-1} Let $\mathcal{C}$ be an abelian category, $n\geq1$, $\mathcal{X}=\smdx[\mathcal{X}]\subseteq\mathcal{C}$ be closed under extensions, and let $\mathcal{T}=\smdx[\mathcal{T}]\subseteq\mathcal{X}$ be a class satisfying $\mathrm{(T1), (T2), (T3)}$ and $\mathrm{(T4)}.$ Then, the following statements are equivalent. \begin{itemize} \item[$\mathrm{(a)}$] $\mathcal{T}$ is $n$-$\mathcal{X}$-tilting. 
\item[$\mathrm{(b)}$] $\mathcal{T}^{\bot}\cap\mathcal{X}=\Gennr[\mathcal{T}]\cap\mathcal{X}=\Gennr[\mathcal{T}][n+1]\cap\mathcal{X}.$ \end{itemize} \end{cor} \begin{proof} (a) $\Rightarrow$ (b) It follows from Theorem \ref{prop:primera generalizacion-1} (c). \ (b) $\Rightarrow$ (a) Let $X\in\mathcal{T}^{\bot}\cap\mathcal{X}=\Gennr[\mathcal{T}]\cap\mathcal{X}=\Gennr[\mathcal{T}][n+1]\cap\mathcal{X}.$ Then, there is an exact sequence $\suc[X'][T'][X][][f],$ with $T'\in\mathcal{T}\cap\mathcal{X}$ and $X'\in\mathcal{T}^{\bot}\cap\mathcal{X}.$ Since $X'\in\mathcal{T}^{\bot},$ it is straightforward to show that $f$ is a $\mathcal{T}$-precover. Therefore, (T5) holds true and thus $\mathcal{T}$ is $n$-$\mathcal{X}$-tilting. \end{proof} \subsection{$n$-$\mathcal{X}$-tilting classes and relative dimensions} \begin{prop}\label{prop:(a)}\label{prop:(a)-1} Let $\mathcal{C}$ be an abelian category, $\mathcal{X}=\smdx[\mathcal{X}]\subseteq\mathcal{C}$ be closed under extensions and $\mathcal{T}\subseteq\mathcal{C}$ be an $n$-$\mathcal{X}$-tilting class. Then, the pair $\p:=({}{}^{\bot}(\mathcal{T}^{\bot}),\mathcal{T}^{\bot})=:(\mathcal{A},\mathcal{B})$ and the class $\nu:=\mathcal{A}\cap\mathcal{B}\cap\mathcal{X}$ satisfy that $\nu$ is a relative $\mathcal{B}\cap\mathcal{X}$-projective generator in $\mathcal{B}\cap\mathcal{X}$ and a relative $\mathcal{A}\cap\mathcal{X}$-injective cogenerator in $\mathcal{A}\cap\mathcal{X}$. Furthermore, the following statements hold true. 
\begin{itemize} \item[$\mathrm{(a)}$] $\nu$ $=\mathcal{A}\cap\mathcal{X}\cap\left(\mathcal{A}\cap\mathcal{X}\right)^{\bot}$ $=\mathcal{B}\cap\mathcal{X}\cap{}^{\bot}(\mathcal{B}\cap\mathcal{X})=(\nu,\mathcal{A}\cap\mathcal{X})^{\wedge}$ $=(\mathcal{B}\cap\mathcal{X},\nu)^{\vee}$ $=\mathcal{A}\cap\mathcal{X}\cap\nu^{\wedge}$ $=\mathcal{B}\cap\mathcal{X}\cap\nu^{\vee}.$ \item[$\mathrm{(b)}$] $\mathcal{X}\subseteq(\mathcal{B}\cap\mathcal{X})_{\mathcal{X}}^{\vee}\subseteq(\mathcal{B}\cap\mathcal{X})^{\vee}.$ \item[$\mathrm{(c)}$] $(\mathcal{T}\cap\mathcal{X})_{\mathcal{X}}^{\vee}\subseteq(\mathcal{T}\cap\mathcal{X})^{\vee}\subseteq{}^{\bot}(\mathcal{B}\cap\mathcal{X}).$ \item[$\mathrm{(d)}$] $\mathcal{T}\cap\mathcal{X}=(\mathcal{T}\cap\mathcal{X})_{\mathcal{X}}^{\vee}\cap\mathcal{B}\cap\mathcal{X}=(\mathcal{T}\cap\mathcal{X})^{\vee}\cap\mathcal{B}\cap\mathcal{X}=\nu=$ $(\mathcal{T}\cap\mathcal{X})_{\mathcal{X}}^{\vee}\cap\mathcal{B}.$ \item[$\mathrm{(e)}$] $\mathcal{A}\cap(\mathcal{X},\nu)^{\vee}=\mathcal{A}\cap\mathcal{X}.$ \item[$\mathrm{(f)}$] $\mathcal{B}\cap(\nu,\mathcal{X})^{\wedge}=\left\{ M\in\mathcal{B}\cap\mathcal{X}\,|\:\pdr[\mathcal{B}\cap\mathcal{X}][M]<\infty\right\} $. \end{itemize} \end{prop} \begin{proof} Note that $\mathcal{A}$ and $\mathcal{B}$ are closed under extensions and direct summands. By Theorem \ref{thm:el par n-X-tilting} (c), it follows that the pair $\p$ is $\mathcal{X}$-hereditary and $\mathcal{X}$-complete. Therefore, by \cite[Theorem 4.24 (a)]{parte1} and its dual, we get (a). Moreover, by \cite[Lemma 4.23 (a, b)]{parte1}, we get that $\nu$ is a relative $\mathcal{B}\cap\mathcal{X}$-projective generator in $\mathcal{B}\cap\mathcal{X}$ and a relative $\mathcal{A}\cap\mathcal{X}$-injective cogenerator in $\mathcal{A}\cap\mathcal{X}.$ \ The items (b) and (c) follow from Lemma \ref{lem:inf6}, and the item (d) follows from Lemma \ref{lem:inf2} (b), the item (c) and Lemma \ref{lem:inf5} (b). \ Let us prove (e). 
By \cite[Lemma 4.17(a)]{parte1}, we know that \begin{center} $\mathcal{A}\cap(\mathcal{X},\nu)^{\vee}=\left\{ M\in\mathcal{A}\cap\mathcal{X}\,|\:\idr[\mathcal{A}\cap\mathcal{X}][M]<\infty\right\} \mbox{.}$ \end{center} Now, from (T4) and \cite[Proposition 4.5 (a)]{parte1}, $\coresdimr{\mathcal{B}\cap\mathcal{X}}{\mathcal{X}}{\mathcal{X}}\leq\pdr[\mathcal{X}][\mathcal{T}]$; and by \cite[Theorem 4.24 (a)]{parte1}, $\pdr[\mathcal{A}\cap\mathcal{X}][\mathcal{A}\cap\mathcal{X}]=\coresdimr{\mathcal{B}\cap\mathcal{X}}{\mathcal{X}}{\mathcal{X}}$. Therefore \begin{center} $\idr[\mathcal{A}\cap\mathcal{X}][\mathcal{A}\cap\mathcal{X}]=\pdr[\mathcal{A}\cap\mathcal{X}][\mathcal{A}\cap\mathcal{X}]=\coresdimr{\mathcal{B}\cap\mathcal{X}}{\mathcal{X}}{\mathcal{X}}\leq\pdr[\mathcal{X}][\mathcal{T}]<\infty\mbox{.}$ \end{center} Hence $\mathcal{A}\cap(\mathcal{X},\nu)^{\vee}=\mathcal{A}\cap\mathcal{X}$. Finally, the item (f) follows from the dual result of \cite[Lemma 4.17 (a)]{parte1}. \end{proof} \begin{cor}\label{cor:M tilting resdim<pd}\label{cor:M tilting resdim<pd-1} Let $\mathcal{C}$ be an abelian category, $\mathcal{X}=\smdx[\mathcal{X}]\subseteq\mathcal{C}$ be closed under extensions and $\mathcal{T}\subseteq\mathcal{C}$ be $n$-$\mathcal{X}$-tilting. Then, for any $X\in\mathcal{T}^{\bot}\cap\mathcal{X},$ we have \textup{ \[ \resdimx{\mathcal{T}}X\leq\resdimr{{}\mathcal{T}\cap\mathcal{X}}X{\mathcal{T}^{\bot}\cap\mathcal{X}}\leq\pdr[\mathcal{T}^{\bot}\cap\mathcal{X}][X]+1. \] } \end{cor} \begin{proof} Let $X\in\mathcal{T}^{\bot}\cap\mathcal{X}$. We can assume that $m:=\pdr[\mathcal{T}^{\bot}\cap\mathcal{X}][X]<\infty$. By Proposition \ref{prop:(a)}, $\mathcal{T}\cap\mathcal{X}$ is a $\mathcal{T}^{\bot}\cap\mathcal{X}$-projective relative generator in $\mathcal{T}^{\bot}\cap\mathcal{X}$. 
Hence, we can build an exact sequence $0\rightarrow K_{m+1}\overset{}{\rightarrow}M_{m}\overset{f}{\rightarrow}...\rightarrow M_{0}\rightarrow X\rightarrow0\mbox{,}$ with $M_{i}\in\mathcal{T}\cap\mathcal{X}$ $\forall i\in[0,m]$ and $K_{m+1}\in\mathcal{T}^{\bot}\cap\mathcal{X}$. Then, by the shifting lemma, $\Extx[1][][\mbox{Im}f][K_{m+1}]=\Extx[m+1][][X][K_{m+1}]=0\mbox{.}$ Therefore $K_{m+1}$ is a direct summand of $M_{m}$ and thus $K_{m+1}\in\mathcal{T}\cap\mathcal{X}$.\end{proof} \begin{prop}\label{prop:(b)}\label{prop:(b)-1} Let $\mathcal{C}$ be an abelian category, $\mathcal{X}=\smdx[\mathcal{X}]\subseteq\mathcal{C}$ be closed under extensions, and let $\mathcal{T}\subseteq\mathcal{C}$ be $n$-$\mathcal{X}$-tilting. Then, for the pair $\p:=({}{}^{\bot}(\mathcal{T}^{\bot}),\mathcal{T}^{\bot}),$ it follows that $^{\bot}(\mathcal{B}\cap\mathcal{X})\cap\mathcal{X}=\mathcal{A}\cap\mathcal{X}$ and $(\mathcal{A}\cap\mathcal{X})^{\bot}\cap\mathcal{X}=\mathcal{B}\cap\mathcal{X}.$ Moreover, the following statements hold true. 
\begin{itemize} \item[$\mathrm{(a)}$] For any $X\in\mathcal{X},$ we have that \begin{itemize} \item[$\mathrm{(a1)}$] $\resdimr{\mathcal{A}}X{\mathcal{X}}$ $= \resdimr{\mathcal{A}\cap\mathcal{X}}X{\mathcal{X}}$ $= \resdimr{\mathcal{A}\cap\mathcal{X}}X{\,}$ $=\resdimr{\mathcal{A}}X{\,}$ $=\pdr[\mathcal{B}\cap\mathcal{X}][X]$ $\leq\resdimr{\mathcal{A}\cap\mathcal{X}}{\mathcal{B}\cap\mathcal{X}}{\,}+1\mbox{;}$ \item[$\mathrm{(a2)}$] $\resdimr{\mathcal{A}}X{\,}\leq\pdr[\mathcal{B}][X]\leq\resdimr{\mathcal{A}}{\mathcal{B}}{\,}+1;$ \item[$\mathrm{(a3)}$] $\idr[\mathcal{A}\cap\mathcal{X}][X]=$ $\coresdimr{\mathcal{B}}X{\mathcal{X}}=$ $\coresdimr{\mathcal{B}\cap\mathcal{X}}X{\mathcal{X}}=$ $\coresdimr{\mathcal{B}\cap\mathcal{X}}X{\,}$ $=\coresdimr{\mathcal{B}}X{\,}$ $\leq\coresdimr{\mathcal{B}\cap\mathcal{X}}{\mathcal{A}\cap\mathcal{X}}{\,}+1\mbox{;}$ \item[$\mathrm{(a4)}$] $\coresdimx{\mathcal{B}}X\leq\idr[\mathcal{A}][X]\leq\coresdimr{\mathcal{B}}{\mathcal{A}}{\,}+1.$ \end{itemize} \item[$\mathrm{(b)}$] $\coresdimr{\mathcal{B}\cap\mathcal{X}}{\mathcal{X}}{\mathcal{X}}\leq\pdr[\mathcal{X}][\mathcal{T}]=\pdr[\mathcal{X}][\mathcal{A}]=\pdr[\mathcal{X}][^{\bot}(\mathcal{B}\cap\mathcal{X})]<\infty$. \end{itemize} \end{prop} \begin{proof} Note that $\p$ is $\mathcal{X}$-hereditary and $\mathcal{X}$-complete by Theorem \ref{thm:el par n-X-tilting} (c). Moreover, by Proposition \ref{prop:(a)} (a), $\left(\mathcal{A}\cap\mathcal{X}\right)^{\bot}\cap\mathcal{A}\cap\mathcal{X}\subseteq\mathcal{B}\cap\mathcal{X}$ and ${}^\perp(\mathcal{B}\cap\mathcal{X})\cap\mathcal{B}\cap\mathcal{X}\subseteq\mathcal{A}\cap\mathcal{X}.$ Thereupon, by \cite[Proposition 4.11(e)]{parte1} and its dual, $(\mathcal{A}\cap\mathcal{X})^{\bot}\cap\mathcal{X}\subseteq\mathcal{B}\subseteq(\mathcal{A}\cap\mathcal{X})^{\bot}$ and ${}^{\bot}(\mathcal{B}\cap\mathcal{X})\cap\mathcal{X}\subseteq\mathcal{A}\subseteq{}^{\bot}(\mathcal{B}\cap\mathcal{X})$. 
Hence, $^{\bot}(\mathcal{B}\cap\mathcal{X})\cap\mathcal{X}=\mathcal{A}\cap\mathcal{X}$ and $(\mathcal{A}\cap\mathcal{X})^{\bot}\cap\mathcal{X}=\mathcal{B}\cap\mathcal{X}$. It remains to prove (a) and (b). Indeed, the item (a) follows from \cite[Proposition 4.11]{parte1}, and the item (b) follows from \cite[Proposition 4.5]{parte1}. \end{proof} \begin{prop}\label{prop:oct1}\label{prop:oct1-1} Let $\mathcal{C}$ be an abelian category, $\mathcal{X}=\smdx[\mathcal{X}]\subseteq\mathcal{C}$ be closed under extensions and let $\mathcal{T}\subseteq\mathcal{C}$ be $n$-$\mathcal{X}$-tilting. Then, for the pair $\p:=({}{}^{\bot}(\mathcal{T}^{\bot}),\mathcal{T}^{\bot})$ and the class $\nu:=\mathcal{A}\cap\mathcal{B}\cap\mathcal{X},$ the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] $\pdr[\mathcal{B}\cap\mathcal{X}][M]=\pdr[\nu][M]=\pdr[\nu^{\wedge}][M]=\resdimr{\mathcal{A}}M{\mathcal{X}}=\resdimx{\mathcal{X}\cap\mathcal{A}}M$ $\forall M\in\left(\mathcal{A},\mathcal{X}\right)^{\wedge}$. \item[$\mathrm{(b)}$] $\pdr[\mathcal{B}\cap\mathcal{X}][M]=\resdimr{\nu}M{\mathcal{X}}$ $\forall M\in(\nu,\mathcal{X})^{\wedge}$. \item[$\mathrm{(c)}$] $\idr[\mathcal{A}\cap\mathcal{X}][M]=\idr[\nu][M]=\idr[\nu^{\vee}][M]=\coresdimr{\mathcal{B}\cap\mathcal{X}}M{\,}=\coresdimr{\mathcal{B}}M{\mathcal{X}}$ $\forall M\in(\mathcal{X},\mathcal{B})^{\vee}$. \item[$\mathrm{(d)}$] $\idr[\mathcal{A}\cap\mathcal{X}][M]=\coresdimr{\nu}M{\mathcal{X}}$ $\forall M\in(\mathcal{X},\nu)^{\vee}.$ \end{itemize} \end{prop} \begin{proof} By Theorem \ref{thm:el par n-X-tilting} (c), we know that the pair $\p$ is $\mathcal{X}$-hereditary and $\mathcal{X}$-complete. Then, the result follows from \cite[Proposition 4.23]{parte1}. \end{proof} \begin{prop}\label{prop:oct2} Let $\mathcal{C}$ be an abelian category, $\mathcal{X}=\smdx[\mathcal{X}]\subseteq\mathcal{C}$ be closed under extensions, and let $\mathcal{T}\subseteq\mathcal{C}$ be $n$-$\mathcal{X}$-tilting. 
Then, for the pair $\p:=({}{}^{\bot}(\mathcal{T}^{\bot}),\mathcal{T}^{\bot})$ and the class $\nu:=\mathcal{A}\cap\mathcal{B}\cap\mathcal{X},$ we have that $\mathcal{A}\cap\mathcal{X}\subseteq\nu^{\vee}$. Moreover, the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] $\pdr[\mathcal{X}][\nu]$ $=\coresdimx{\mathcal{B}\cap\mathcal{X}}{\mathcal{A}\cap\mathcal{X}}$ $=\coresdimr{\mathcal{B}\cap\mathcal{X}}{\mathcal{A}\cap\mathcal{X}}{\mathcal{X}}$ $=\coresdimr{\mathcal{B}}{\mathcal{X}}{\mathcal{X}}$ $=\coresdimx{\mathcal{B}\cap\mathcal{X}}{\mathcal{X}}$ $=\coresdimr{\mathcal{B}\cap\mathcal{X}}{\mathcal{X}}{\mathcal{X}}$ $=\pdr[\mathcal{A}\cap\mathcal{X}][\mathcal{A}\cap\mathcal{X}]$ $=\pdr[\mathcal{X}][\mathcal{A}\cap\mathcal{X}]$ $=\coresdimx{\nu}{\mathcal{A}\cap\mathcal{X}}$ $=\coresdimr{\mathcal{B}}{\mathcal{A}\cap\mathcal{X}}{\mathcal{X}}$ $\leq\pdr[\mathcal{X}][\mathcal{T}]<\infty\mbox{.}$ \item[$\mathrm{(b)}$] $\idr[\mathcal{X}][\nu]\leq$ $\resdimx{\mathcal{A}\cap\mathcal{X}}{\mathcal{B}\cap\mathcal{X}}$ $=\resdimr{\mathcal{A}\cap\mathcal{X}}{\mathcal{B}\cap\mathcal{X}}{\mathcal{X}}$ $=\resdimx{\mathcal{A}\cap\mathcal{X}}{\mathcal{B}\cap\mathcal{X}}$ $=\resdimr{\mathcal{A}\cap\mathcal{X}}{\mathcal{X}}{\mathcal{X}}$ $=\resdimr{\mathcal{A}}{\mathcal{X}}{\mathcal{X}}$ $=\resdimx{\nu}{\mathcal{B}\cap\mathcal{X}}$ $=\idr[\mathcal{B}\cap\mathcal{X}][\mathcal{B}\cap\mathcal{X}]$ $=\idr[\mathcal{X}][\mathcal{B}\cap\mathcal{X}]$ $=\resdimr{\mathcal{A}}{\mathcal{B}\cap\mathcal{X}}{\mathcal{X}}$. \item[$\mathrm{(c)}$] $\idr[\mathcal{X}][\mathcal{B}\cap\mathcal{X}]<\infty$ if and only if $\mathcal{B}\cap\mathcal{X}\subseteq\nu^{\wedge}$ and $\idr[\mathcal{X}][\nu]<\infty$. 
Furthermore, if $\idr[\mathcal{X}][\mathcal{B}\cap\mathcal{X}]<\infty,$ then $\mathcal{B}\cap(\nu,\mathcal{X})^{\wedge}=\mathcal{B}\cap\mathcal{X}\mbox{, }\mathcal{X}\subseteq(\mathcal{A},\mathcal{X})^{\wedge}\subseteq\left(\mathcal{A}\cap\mathcal{X}\right)^{\wedge}\mbox{ and }\idr[\mathcal{X}][\mathcal{B}\cap\mathcal{X}]=\idr[\mathcal{X}][\nu]\mbox{.}$ \end{itemize} \end{prop} \begin{proof} By Theorem \ref{thm:el par n-X-tilting} (c), we know that the pair $\p$ is $\mathcal{X}$-hereditary and $\mathcal{X}$-complete. In order to prove (a), observe that, by Proposition \ref{prop:(b)} (b) and \cite[Theorem 4.24 (a)]{parte1}, it follows that \[ \pdr[\mathcal{A}\cap\mathcal{X}][\mathcal{A}\cap\mathcal{X}]=\coresdimr{\mathcal{B}\cap\mathcal{X}}{\mathcal{X}}{\mathcal{X}}\leq\pdr[\mathcal{X}][\mathcal{T}]<\infty\mbox{.} \] Then, by \cite[Theorem 4.24 (b)]{parte1}, $\mathcal{A}\cap\mathcal{X}\subseteq\nu^{\vee}$ and $\pdr[\mathcal{X}][\mathcal{A}\cap\mathcal{X}]=\pdr[\mathcal{X}][\nu]$. The rest of the equalities appearing in (a) follow from \cite[Theorem 4.24 (a)]{parte1}. \ Except for the equality $\mathcal{B}\cap(\nu,\mathcal{X})^{\wedge}=\mathcal{B}\cap\mathcal{X}$ in (c) (under the hypothesis that $\idr[\mathcal{X}][\mathcal{B}\cap\mathcal{X}]<\infty$), the items (b) and (c) follow from \cite[Theorem 4.24]{parte1}. Let us prove that equality. Indeed, assume that $\idr[\mathcal{X}][\mathcal{B}\cap\mathcal{X}]<\infty.$ Then, by (b), we have that $\mathrm{pd}_{\mathcal{B}\cap\mathcal{X}}(\mathcal{B}\cap\mathcal{X})=\mathrm{id}_{\mathcal{B}\cap\mathcal{X}}(\mathcal{B}\cap\mathcal{X})<\infty.$ Hence, from Proposition \ref{prop:(a)} (f), the required equality follows. \end{proof} \begin{prop}\label{prop:oct3}\label{prop:oct3-1} Let $\mathcal{C}$ be an abelian category, $\mathcal{X}=\smdx[\mathcal{X}]\subseteq\mathcal{C}$ be closed under extensions and let $\mathcal{T}\subseteq\mathcal{C}$ be $n$-$\mathcal{X}$-tilting. 
Then, for the pair $\p:=({}{}^{\bot}(\mathcal{T}^{\bot}),\mathcal{T}^{\bot})$ and the class $\nu:=\mathcal{A}\cap\mathcal{B}\cap\mathcal{X},$ the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] $\pdr[\mathcal{X}][\mathcal{X}]=\pdr[\mathcal{X}][(\mathcal{B}\cap\mathcal{X})_{\mathcal{X}}^{\vee}]=\pdr[\mathcal{X}][(\mathcal{B}\cap\mathcal{X})^{\vee}]=\pdr[\mathcal{X}][\mathcal{B}\cap\mathcal{X}]$. \item[$\mathrm{(b)}$] $\nu=\mathcal{T}\cap\mathcal{X}.$ Moreover, if $\mathcal{T}\subseteq\mathcal{X},$ then $$\pdr[\mathcal{X}][\mathcal{T}]=\pdr[\mathcal{X}][\nu]=\pdr[\mathcal{X}][\nu{}_{\mathcal{X}}^{\vee}]=\pdr[\mathcal{X}][\nu{}^{\vee}].$$ \end{itemize} \end{prop} \begin{proof} We point out that, by Theorem \ref{thm:el par n-X-tilting} (c), the pair $\p$ is $\mathcal{X}$-hereditary and $\mathcal{X}$-complete. Then, (a) follows from Proposition \ref{prop:(a)} (b) and \cite[Lemma 4.3]{parte1}. Finally, (b) can be obtained from Proposition \ref{prop:(a)} (c,d) and Proposition \ref{prop:(b)} (b). \end{proof} \begin{prop}\label{prop:M ortogonal es preenvolvente esp en X}\label{prop:M ortogonal es preenvolvente esp en X-1} Let $\mathcal{C}$ be an abelian category, $\mathcal{X}=\smdx[\mathcal{X}]\subseteq\mathcal{C}$ be closed under extensions and let $\mathcal{T}\subseteq\mathcal{C}$ be $n$-$\mathcal{X}$-tilting. Then, for the pair $\p:=({}{}^{\bot}(\mathcal{T}^{\bot}),\mathcal{T}^{\bot})$ and the class $\nu:=\mathcal{A}\cap\mathcal{B}\cap\mathcal{X},$ the following statements hold true. 
\begin{itemize} \item[$\mathrm{(a)}$] $\nu^{\vee}\subseteq{}^{\bot}(\mathcal{B}\cap\mathcal{X})$ and, for any $Z\in(\mathcal{B}\cap\mathcal{X})_{\mathcal{X}}^{\vee}$ and $m:=\coresdimr{\mathcal{B}\cap\mathcal{X}}Z{\mathcal{X}}$, there are short exact sequences \begin{alignat*}{1} \suc[Z][M_{Z}][C_{Z}][g_{Z}] & \qquad\mbox{with \ensuremath{C_{Z}\in(\mathcal{X},\nu){}^{\vee},\,M_{Z}\in\mathcal{B}\cap\mathcal{X}}}\mbox{,}\\ \suc[K_{Z}][N_{Z}][Z][][f_{Z}] & \qquad\mbox{with \ensuremath{N_{Z}\in\nu_{\mathcal{X}}^{\vee}}, \ensuremath{K_{Z}\in\mathcal{B}\cap\mathcal{X}}}\mbox{,} \end{alignat*} such that $g_{Z}$ is a $\mathcal{B}\cap\mathcal{X}$-preenvelope and $f_{Z}$ is a $\nu^{\vee}$-precover. Furthermore, $\coresdimr{\nu}{C_{Z}}{\mathcal{X}}=m-1$, $\coresdimr{\nu}{N_{Z}}{\mathcal{X}}\leq m$, and \begin{center} $\nu^{\vee}\cap\mathcal{X}=\mathcal{A}\cap\mathcal{X}=\mathcal{A}\cap(\mathcal{X},\nu)^{\vee}.$ \end{center} \item[$\mathrm{(b)}$] $\nu^{\wedge}\subseteq{}(\mathcal{A}\cap\mathcal{X})^{\bot}$ and, for any $Z\in(\mathcal{A}\cap\mathcal{X})_{\mathcal{X}}^{\wedge}$ and $m:=\resdimr{\mathcal{A}\cap\mathcal{X}}Z{\mathcal{X}}$, there are short exact sequences \begin{alignat*}{1} \suc[Z][N_{Z}][C_{Z}][f_{Z}] & \qquad\mbox{with \ensuremath{N_{Z}\in\nu_{\mathcal{X}}^{\wedge},\,C_{Z}\in\mathcal{A}\cap\mathcal{X}}}\mbox{,}\\ \suc[K_{Z}][M_{Z}][Z][][g_{Z}] & \qquad\mbox{with \ensuremath{K_{Z}\in(\nu,\mathcal{X}){}^{\wedge}}, \ensuremath{M_{Z}\in\mathcal{A}\cap\mathcal{X}}}\mbox{,} \end{alignat*} such that $g_{Z}$ is an $\mathcal{A}\cap\mathcal{X}$-precover and $f_{Z}$ is a $\nu^{\wedge}$-preenvelope. 
Furthermore, $\resdimr{\nu}{K_{Z}}{\mathcal{X}}=m-1$, $\resdimr{\nu}{N_{Z}}{\mathcal{X}}\leq m,$ and \begin{center} $\nu^{\wedge}\cap\mathcal{X}=\mathcal{B}\cap\mathcal{X}=\mathcal{B}\cap(\nu,\mathcal{X})^{\wedge}\;$ if $\idr[\mathcal{X}][\mathcal{B}\cap\mathcal{X}]<\infty.$ \end{center} \item[$\mathrm{(c)}$] For any $Z\in(\mathcal{B}\cap\mathcal{X})^{\vee}$ and $m:=\coresdimr{\mathcal{B}\cap\mathcal{X}}Z{\,}$, there are short exact sequences \begin{alignat*}{1} \suc[Z][M_{Z}][C_{Z}][g_{Z}] & \qquad\mbox{with \ensuremath{C_{Z}\in(\mathcal{X},\nu){}^{\vee},\,M_{Z}\in\mathcal{B}\cap\mathcal{X}}}\mbox{,}\\ \suc[K_{Z}][N_{Z}][Z][][f_{Z}] & \qquad\mbox{with \ensuremath{N_{Z}\in\nu{}^{\vee}}, \ensuremath{K_{Z}\in\mathcal{B}\cap\mathcal{X}}}\mbox{,} \end{alignat*} such that $g_{Z}$ is a $\mathcal{B}\cap\mathcal{X}$-preenvelope and $f_{Z}$ is a $\nu^{\vee}$-precover. Furthermore, $\coresdimr{\nu}{C_{Z}}{\mathcal{X}}=m-1$ and $\coresdimr{\nu}{N_{Z}}{\mathcal{X}}\leq m$. \item[$\mathrm{(d)}$] For any $Z\in(\mathcal{A}\cap\mathcal{X})^{\wedge}$ and $m:=\resdimr{\mathcal{A}\cap\mathcal{X}}Z{\,}$, there are short exact sequences \begin{alignat*}{1} \suc[Z][N_{Z}][C_{Z}][f_{Z}] & \qquad\mbox{with \ensuremath{N_{Z}\in\nu^{\wedge},\,C_{Z}\in\mathcal{A}\cap\mathcal{X}}}\mbox{,}\\ \suc[K_{Z}][M_{Z}][Z][][g_{Z}] & \qquad\mbox{with \ensuremath{K_{Z}\in\nu{}^{\wedge}}, \ensuremath{M_{Z}\in\mathcal{A}\cap\mathcal{X}}}\mbox{,} \end{alignat*} such that $g_{Z}$ is an $\mathcal{A}\cap\mathcal{X}$-precover and $f_{Z}$ is a $\nu^{\wedge}$-preenvelope. Furthermore, $\resdimr{\nu}{K_{Z}}{\,}=m-1$ and $\resdimr{\nu}{N_{Z}}{\,}\leq m.$ \item[$\mathrm{(e)}$] The pair $(\nu^{\vee},\mathcal{B})$ is right $(\mathcal{B}\cap\mathcal{X})_{\mathcal{X}}^{\vee}$-complete, $\mathcal{X}$-complete, and $\mathcal{X}$-hereditary. \item[$\mathrm{(f)}$] $\mathcal{B}\cap\mathcal{X}$ is special preenveloping in $(\mathcal{B}\cap\mathcal{X})_{\mathcal{X}}^{\vee}$ and in $\mathcal{X}$. 
Moreover, the pair $({}^{\bot}(\mathcal{B}\cap\mathcal{X}),\mathcal{B}\cap\mathcal{X})$ is right $\mathcal{X}$-complete. \item[$\mathrm{(g)}$] Any object of $(\mathcal{B}\cap\mathcal{X})^{\vee}$ admits a special $\mathcal{B}\cap\mathcal{X}$-preenvelope. \end{itemize} \end{prop} \begin{proof} (a) Consider the pair $(\mathcal{B}\cap\mathcal{X},\nu).$ Then, by Proposition \ref{prop:(a)}, we get that $\nu$ is a relative $\mathcal{B}\cap\mathcal{X}$-projective generator in $\mathcal{B}\cap\mathcal{X}.$ In particular, by \cite[Lemma 4.3]{parte1}, we have that $\nu^\vee\subseteq {}^\perp(\mathcal{B}\cap\mathcal{X}).$ Then, from \cite[Theorem 4.4]{parte1}, we get almost all of item (a); it only remains to prove the equalities $\nu^{\vee}\cap\mathcal{X}=\mathcal{A}\cap\mathcal{X}=\mathcal{A}\cap(\mathcal{X},\nu)^{\vee}.$ Indeed, since $\nu^\vee\subseteq {}^\perp(\mathcal{B}\cap\mathcal{X}),$ these equalities follow from Proposition \ref{prop:(b)} and Proposition \ref{prop:(a)} (e). \ (b) Consider the pair $(\mathcal{A}\cap\mathcal{X},\nu).$ Then, by Proposition \ref{prop:(a)}, we get that $\nu$ is a relative $\mathcal{A}\cap\mathcal{X}$-injective cogenerator in $\mathcal{A}\cap\mathcal{X}.$ In particular, by the dual of \cite[Lemma 4.3]{parte1}, we have that $\nu^\wedge\subseteq (\mathcal{A}\cap\mathcal{X})^\perp.$ Then, from the dual of \cite[Theorem 4.4]{parte1}, we get almost all of item (b); it only remains to prove, assuming that $\mathrm{id}_{\mathcal{X}}(\mathcal{B}\cap\mathcal{X})<\infty,$ the equalities $\nu^{\wedge}\cap\mathcal{X}=\mathcal{B}\cap\mathcal{X}=\mathcal{B}\cap(\nu,\mathcal{X})^{\wedge}.$ Indeed, since $\nu^\wedge\subseteq (\mathcal{A}\cap\mathcal{X})^\perp,$ these equalities follow from Proposition \ref{prop:oct2} (c) and Proposition \ref{prop:(b)}. \ (c) It can be proven by arguments similar to those used in (a). \ (d) It can be proven by arguments similar to those used in (b). 
\ (e) It follows from (a), Proposition \ref{prop:(a)} (b) and Lemma \ref{lem:inf6}. \ (f) Let $X\in(\mathcal{B}\cap\mathcal{X})_{\mathcal{X}}^{\vee}$. By (a), there is an exact sequence \[ \suc[X][M_{X}][C_{X}][g_{X}]\mbox{,} \] with $C_{X}\in(\mathcal{X},\nu)^{\vee}$, $M_{X}\in\mathcal{B}\cap\mathcal{X}$, and $g_{X}$ a $\mathcal{B}\cap\mathcal{X}$-preenvelope. Thereupon, the following statements are easy to prove. First, $\mathcal{B}\cap\mathcal{X}$ is special preenveloping in $(\mathcal{B}\cap\mathcal{X})_{\mathcal{X}}^{\vee}$ since $M_{X}\in\mathcal{B}\cap\mathcal{X}=\mathcal{B}\cap\mathcal{X}\cap(\mathcal{B}\cap\mathcal{X})_{\mathcal{X}}^{\vee}$ and $C_{X}\in{}^{\bot_{1}}(\mathcal{B}\cap\mathcal{X})\cap(\mathcal{B}\cap\mathcal{X})_{\mathcal{X}}^{\vee}$ by Proposition \ref{prop:(a)}; and second, $\mathcal{B}\cap\mathcal{X}$ is special preenveloping in $\mathcal{X}$ since $\mathcal{X}\subseteq(\mathcal{B}\cap\mathcal{X})_{\mathcal{X}}^{\vee}$ (see Proposition \ref{prop:(a)} (b)), $M_{X}\in\mathcal{B}\cap\mathcal{X}=\mathcal{X}\cap{}(\mathcal{B}\cap\mathcal{X})$ and $C_{X}\in\mathcal{X}\cap{}{}^{\bot_{1}}(\mathcal{B}\cap\mathcal{X})$. \ (g) Let $X\in(\mathcal{B}\cap\mathcal{X})^{\vee}$. Consider the exact sequence given by (c), \[ \suc[X][M_{X}][C_{X}][g_{X}] \] with $M_{X}\in\mathcal{B}\cap\mathcal{X}$ and $C_{X}\in\nu^{\vee}$. Then $g_{X}$ is a special $\mathcal{B}\cap\mathcal{X}$-preenvelope since $M_{X}\in\mathcal{B}\cap\mathcal{X}$ and $C_{X}\in(\mathcal{T}\cap\mathcal{X})^{\vee}\subseteq{}^{\bot}(\mathcal{B}\cap\mathcal{X})\subseteq{}^{\bot_{1}}(\mathcal{B}\cap\mathcal{X})$ by Proposition \ref{prop:(a)}(b). \end{proof} Next, in a similar way as Lemma \ref{lem:props C2 y T2} and Proposition \ref{prop:M ortogonal es preenvolvente esp en X}, we will show the behavior of the pairs $\p$ such that $\mathcal{B}\cap\mathcal{X}=\mathcal{T}^{\bot}\cap\mathcal{X}$, where $\mathcal{T}$ is $n$-$\mathcal{X}$-tilting. 
\begin{lem}\label{lem:partiltingescompleto} For an abelian category $\mathcal{C},$ $\mathcal{X}=\smdx[\mathcal{X}]\subseteq\mathcal{C}$ closed under extensions, an $n$-$\mathcal{X}$-tilting class $\mathcal{T}\subseteq\mathcal{C}$ and a pair $\p$ such that $\mathcal{B}\cap\mathcal{X}=\mathcal{T}^{\bot}\cap\mathcal{X},$ the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] Let $\Extx[1][\mathcal{C}][\mathcal{A}][\mathcal{B}\cap\mathcal{X}]=0.$ Then any morphism $A\rightarrow X,$ with $A\in\mathcal{A}$ and $X\in\mathcal{B}\cap\mathcal{X}$, factors through $\mathcal{T}\cap\mathcal{X}$. \item[$\mathrm{(b)}$] $\mathcal{A}\cap\mathcal{B}\cap\mathcal{X}\subseteq{}^{\bot}(\mathcal{B}\cap\mathcal{X})\cap\mathcal{X}\cap\mathcal{B}=\mathcal{T}\cap\mathcal{X}$ if $\Extx[1][\mathcal{C}][\mathcal{A}][\mathcal{B}\cap\mathcal{X}]=0.$ \item[$\mathrm{(c)}$] Let $^{\bot_{1}}\mathcal{B}\cap\mathcal{X}\subseteq\mathcal{A}$ and $\idr[\mathcal{A}][\mathcal{B}\cap\mathcal{X}]=0.$ Then, the following conditions are equivalent: \begin{itemize} \item[$\mathrm{(c1)}$] $\mathcal{T}\cap\mathcal{X}=\mathcal{A}\cap\mathcal{B}\cap\mathcal{X}$; \item[$\mathrm{(c2)}$] $\p$ is $\mathcal{X}$-complete; \item[$\mathrm{(c3)}$] $\p$ is left $\mathcal{X}$-complete. \end{itemize} \end{itemize} \end{lem} \begin{proof} (a) Let $f:A\rightarrow X$ be a morphism with $A\in\mathcal{A}$ and $X\in\mathcal{B}\cap\mathcal{X}$. By Proposition \ref{prop:equiv a t3} and Lemma \ref{lem:props C2 y T2} (a), there is an exact sequence $\suc[L][T'][X][k][g]\mbox{,}$ with $L\in\mathcal{T}^{\bot}\cap\mathcal{X}=\mathcal{B}\cap\mathcal{X}$ and $T'\in\mathcal{T}\cap\mathcal{X}$. Since $\Extx[1][][A][L]=0$, $\Homx[][A][g]$ is surjective. Hence, $f$ factors through $g$. 
\ (b) By Proposition \ref{prop:equiv a t3} and Lemma \ref{lem:props C2 y T2}(b), $\mathcal{T}\cap\mathcal{X}={}{}^{\bot}\left(\mathcal{T}^{\bot}\cap\mathcal{X}\right)\cap\mathcal{T}^{\bot}\cap\mathcal{X}={}{}^{\bot}\left(\mathcal{B}\cap\mathcal{X}\right)\cap\mathcal{B}\cap\mathcal{X}\mbox{.}$ Thus, by (a), we conclude that $\mathcal{A}\cap\mathcal{B}\cap\mathcal{X}$ $\subseteq\mathcal{T}\cap\mathcal{X}.$ \ (c) $(c1)\Rightarrow(c2):$ By (b), Proposition \ref{prop:equiv a t3} and Lemma \ref{lem:props C2 y T2}, $\mathcal{T}\cap\mathcal{X}=\mathcal{A}\cap\mathcal{B}\cap\mathcal{X}$ is a relative generator in $\mathcal{T}^{\bot}\cap\mathcal{X}=\mathcal{B}\cap\mathcal{X}$. Thus, by \cite[Theorem 4.4 (a)]{parte1}, for every $X\in(\mathcal{B}\cap\mathcal{X})_{\mathcal{X}}^{\vee}$ there are short exact sequences $\suc[X][M_{X}][C_{X}][g_{X}]$, with $\,M_{X}\in\mathcal{B}\cap\mathcal{X}$, $C_{X}\in\left(\mathcal{A}\cap\mathcal{B}\cap\mathcal{X}\right)^{\vee}\cap\mathcal{X}$, and $\suc[K_{X}][B_{X}][X][][f_{X}]$, with $B_{X}\in\left(\mathcal{A}\cap\mathcal{B}\cap\mathcal{X}\right)^{\vee}$, $K_{X}\in\mathcal{B}\cap\mathcal{X}.$ Furthermore, since $\mathcal{A}\cap\mathcal{B}\cap\mathcal{X}\subseteq\mathcal{A}\subseteq{}{}^{\bot}\left(\mathcal{B}\cap\mathcal{X}\right)$, \cite[Theorem 4.4(c)]{parte1} implies $(\mathcal{A}\cap\mathcal{B}\cap\mathcal{X})^{\vee}\subseteq{}^{\bot}(\mathcal{B}\cap\mathcal{X})\mbox{.}$ Also, by \cite[Proposition 4.5 (a)]{parte1}, $\mathcal{X}\subseteq(\mathcal{T}^{\bot}\cap\mathcal{X})_{\mathcal{X}}^{\vee}=(\mathcal{B}\cap\mathcal{X})_{\mathcal{X}}^{\vee}$. 
Lastly, by \cite[Lemma 4.3]{parte1}, $\pdr[\mathcal{B}][\left(\mathcal{A}\cap\mathcal{B}\cap\mathcal{X}\right)^{\vee}]=\pdr[\mathcal{B}][\mathcal{A}\cap\mathcal{B}\cap\mathcal{X}]=0\mbox{.}$ Therefore $\left(\mathcal{A}\cap\mathcal{B}\cap\mathcal{X}\right)^{\vee}\cap\mathcal{X}\subseteq{}{}^{\bot}\mathcal{B}\cap\mathcal{X}\subseteq{}{}^{\bot_{1}}\mathcal{B}\cap\mathcal{X}\subseteq\mathcal{A}$ and thus $(\mathcal{A},\mathcal{B})$ is $\mathcal{X}$-complete. \ $(c2)\Rightarrow(c3):$ It is trivial. \ $(c3)\Rightarrow(c1):$ Let $X\in\mathcal{T}\cap\mathcal{X}$. By (c3), there is an exact sequence \[ \eta:\;\suc[B][A][X]\mbox{,} \] with $A\in\mathcal{A}\cap\mathcal{X}$ and $B\in\mathcal{B}\cap\mathcal{X}.$ Now, by (b), we have that $X\in\mathcal{T}\cap\mathcal{X}\subseteq{}^{\perp}(\mathcal{B}\cap\mathcal{X})$ and thus $\eta$ splits. Therefore $X\in\mathcal{A}$ and then $\mathcal{T}\cap\mathcal{X}=\mathcal{A}\cap\mathcal{B}\cap\mathcal{X}.$ \end{proof} \subsection{\label{sub:-cerrada-por} Alternative conditions for the axiom (T3)} \begin{defn} \label{def:condiciones T3} For an abelian category $\mathcal{C}$ and $\mathcal{T},\mathcal{X}\subseteq\mathcal{C},$ we consider the following conditions. \begin{description} \item [{(T3')}] There exists $\omega\subseteq\mathcal{T}^\vee_\mathcal{X}$ which is an $\mathcal{X}$-projective relative generator in $\mathcal{X}.$ \item [{(T3'')}] There exists $\sigma\subseteq\mathcal{T}_{\mathcal{X}}^{\vee}$ such that $\Addx[\sigma]$ is an $\mathcal{X}$-projective relative generator in $\mathcal{X}$. \item [{(t3'')}] There exists $\sigma\subseteq\mathcal{T}_{\mathcal{X}}^{\vee}$ such that $\addx[\sigma]$ is an $\mathcal{X}$-projective relative generator in $\mathcal{X}$. \end{description} \end{defn} The following lemma is a generalization of \cite[Lemma 2.3]{Tiltinginfinitamentegenerado}. 
\begin{lem}\label{lem:lema previo a existencia de preenvolvente} Let $\mathcal{C}$ be an abelian category, $\mathcal{X}\subseteq\mathcal{C}$ be closed under extensions and $\mathcal{T}\subseteq\mathcal{C}$ be a class such that $\mathcal{T}\cap\mathcal{X}\subseteq \mathcal{T}^\perp$ and $\sigma\subseteq\mathcal{X}\cap\mathcal{T}_{\mathcal{X}}^{\vee}.$ Then, for any $W\in\sigma$ and any finite $(\mathcal{T}\cap\mathcal{X})_\mathcal{X}$-coresolution $\suc[W][M_{0}\rightarrow...][M_{n}][f_{0}][f_{n}],$ we have that $f_{0}$ is a special $\mathcal{T}^{\bot}\cap\mathcal{X}$-preenvelope, a special $\mathcal{T}\cap\mathcal{X}$-preenvelope and a special $\mathcal{T}^{\bot }$-preenvelope. \end{lem} \begin{proof} Let $W\in\sigma$ and $\suc[W][M_{0}\rightarrow...][M_{n}][f_{0}][f_{n}]$ be a finite $(\mathcal{T}\cap\mathcal{X})_\mathcal{X}$-coresolution. By Lemma \ref{lem:chico} (a), $\mathcal{T}\cap\mathcal{X}\subseteq\mathcal{T}^{\bot}\cap\mathcal{X}\cap{}{}^{\bot}\left(\mathcal{T}^{\bot}\right)$. Hence $M_{j}\in{}{}^{\bot}\left(\mathcal{T}^{\bot}\right)$ $\forall j\in[0,n]$. Moreover $K_{j}:=\Kerx[f_{j}]\in{}{}^{\bot}\left(\mathcal{T}^{\bot}\right)$ $\forall j\in[1,n]$ since $^{\bot}\left(\mathcal{T}^{\bot}\right)$ is closed by epi-kernels. 
In particular $\Extx[1][][K_{2}][X]=0$ $\forall X\in\mathcal{T}^{\bot}$ and thus $f_{0}:W\rightarrow M_{0}$ is a special $\mathcal{T}^{\bot }$-preenvelope, which is a $\mathcal{T}^{\bot}\cap\mathcal{X}$-preenvelope and a $\mathcal{T}\cap\mathcal{X}$-preenvelope since $M_{0}\in\mathcal{T}\cap\mathcal{X}\subseteq\mathcal{T}^{\bot}\cap\mathcal{X}\subseteq\mathcal{T}^{\bot}\mbox{.}$\end{proof} \begin{lem}\label{lem:exitencia de la preenvolvetnte}\label{lem:exitencia de la preenvolvetnte-1} Let $\mathcal{C}$ be an Ab4 (abelian) category, $\mathcal{X}=\Addx[\mathcal{X}]\subseteq\mathcal{C}$ ($\mathcal{X}=\addx[\mathcal{X}]\subseteq \mathcal{C}$) be closed under extensions, $\mathcal{T}\subseteq\mathcal{C}$ be a class satisfying $\mathrm{(T2)},$ $\sigma\subseteq\mathcal{X}\cap\mathcal{T}_{\mathcal{X}}^{\vee}$ and $\omega:=\Addx[\sigma]$ ($\omega:=\addx[\sigma]$). Then, the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] $\omega\subseteq{}{}^{\bot}\left(\mathcal{T}^{\bot}\right)\cap\mathcal{X}$. \item[$\mathrm{(b)}$] If $\mathcal{T}=\mathcal{T}^{\oplus}$ ($\mathcal{T}=\mathcal{T}^{\oplus_{<\infty}}$), then every $W\in\omega$ admits an exact sequence $\suc[W][M_{W}][C_{W}][f]\mbox{,}$ where $M_{W}\in\mathcal{T}\cap\mathcal{X}$, $C_{W}\in{}^{\bot}\left(\mathcal{T}^{\bot}\right)\cap\mathcal{X}$ and $f$ is a special $\mathcal{T}^{\bot}\cap\mathcal{X}$-preenvelope, a special $\mathcal{T}\cap\mathcal{X}$-preenvelope and a special $\mathcal{T}^{\bot}$-preenvelope. \end{itemize} \end{lem} \begin{proof} Let us prove the lemma by assuming that $\mathcal{C}$ is an Ab4 category. The case when $\mathcal{C}$ is just abelian can be done by applying similar arguments. \ (a) Let us show that $\sigma \subseteq {}^{\bot}\left(\mathcal{T}^{\bot}\right)\cap\mathcal{X}$. 
Indeed, by Lemma \ref{lem:lema previo a existencia de preenvolvente}, every $S\in\sigma$ admits an exact sequence $\suc[S][M_{S}][C_{S}]\mbox{,}$ with $M_{S}\in\mathcal{T}\cap\mathcal{X}\subseteq{}{}^{\bot}\left(\mathcal{T}^{\bot}\right)\cap\mathcal{X}\cap\mathcal{T}^{\bot}$, $C_{S}\in\mathcal{X}\cap{}{}^{\bot}\left(\mathcal{T}^{\bot}\right)$, and thus, $S\in{}^{\bot}(\mathcal{T}^{\bot})\cap\mathcal{X}$ since ${}^{\bot}(\mathcal{T}^{\bot})$ is closed by epi-kernels. Finally, $\Addx[\sigma]\subseteq{}{}^{\bot}\left(\mathcal{T}^{\bot}\right)\cap\mathcal{X}$ since $\mathcal{C}$ is Ab4 and $\sigma \subseteq {}^{\bot}\left(\mathcal{T}^{\bot}\right)\cap\mathcal{X}.$ \ (b) Let $\mathcal{T}=\mathcal{T}^{\oplus}$ and $W\in\omega:=\Addx[\sigma]$. Then there is $W'\in\omega$ and a set $\{S_{i}\}_{i\in I}\subseteq\sigma\subseteq\mathcal{X}$ such that $W\oplus W'\cong\bigoplus_{i\in I}S_{i}^{(\alpha_{i})}$. By Lemma \ref{lem:lema previo a existencia de preenvolvente}, for every $i\in I$, there is an exact sequence $\suc[S_{i}][M_{S_{i}}][C_{S_{i}}]$ with $M_{S_{i}}\in\mathcal{T}\cap\mathcal{X}$ and $C_{S_{i}}\in{}{}^{\bot}\left(\mathcal{T}^{\bot}\right)\cap\mathcal{X}$. Let $S:=\bigoplus_{i\in I}S_{i}^{(\alpha_{i})}$, $M':=\bigoplus_{i\in I}M_{S_{i}}^{(\alpha_{i})}$ and $C:=\bigoplus_{i\in I}C_{S_{i}}^{(\alpha_{i})}$. 
Since $\mathcal{C}$ is Ab4 and $\mathcal{T}=\mathcal{T}^{\oplus}$, we have the exact sequence \\% \fbox{\begin{minipage}[t]{0.35\columnwidth}% \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=1cm,main node/.style=,x=.5cm,y=.5cm] \node[main node] (1) at (0,0){$W'$}; \node[main node] (2) at (-2,0){$S$}; \node[main node] (3) at (-4,0){$W$}; \node[main node] (4) at (-4,-2){$W$}; \node[main node] (5) at (-2,-2){$M'$}; \node[main node] (6) at (0,-2){$Z$}; \node[main node] (7) at (-2,-4){$C$}; \node[main node] (8) at (0,-4){$C$}; \node[main node] (01) at (0,2){$0$}; \node[main node] (02) at (-2,2){$0$}; \node[main node] (03) at (-6,0){$0$}; \node[main node] (04) at (-6,-2){$0$}; \node[main node] (05) at (-2,-6){$0$}; \node[main node] (06) at (0,-6){$0$}; \node[main node] (07) at (2,0){$0$}; \node[main node] (08) at (2,-2){$0$}; \draw[->, thin] (01) to node {$$} (1); \draw[->, thin] (02) to node {$$} (2); \draw[->, thin] (03) to node {$$} (3); \draw[->, thin] (04) to node {$$} (4); \draw[->, thin] (3) to node {$$} (2); \draw[->, thin] (2) to node {$$} (1); \draw[->, thin] (1) to node {$$} (07); \draw[->, thin] (4) to node {$$} (5); \draw[->, thin] (5) to node {$$} (6); \draw[->, thin] (6) to node {$$} (08); \draw[->, thin] (2) to node {$$} (5); \draw[->, thin] (5) to node {$$} (7); \draw[->, thin] (7) to node {$$} (05); \draw[->, thin] (1) to node {$$} (6); \draw[->, thin] (6) to node {$$} (8); \draw[->, thin] (8) to node {$$} (06); \draw[-, double] (3) to node {$$} (4); \draw[-, double] (7) to node {$$} (8); \end{tikzpicture} \]% \end{minipage}}\hfill{}% \begin{minipage}[t]{0.5\columnwidth}% \[ \suc[S][M'][C][f][g]\mbox{,} \] where $S\in\omega$, $M'\in\mathcal{T}\cap\mathcal{X}$ and $C\in{}{}^{\bot}\left(\mathcal{T}^{\bot}\right)\cap\mathcal{X}$; and the splitting exact sequence \[ \suc[W][S][W'][][h]\mbox{.} \] Considering the push-out of $h$ with $f$, we get an exact sequence \[ \eta:\;\suc[W][M'][Z][\alpha]\mbox{.} \] Finally, observe from the exact 
sequence \[ \suc[W'][Z][C] \] \end{minipage}\\ that $Z\in{}^{\bot}\left(\mathcal{T}^{\bot}\right)\cap\mathcal{X}$. Therefore, using $\eta,$ we can conclude the desired result. \end{proof} \begin{lem}\label{lem:props T2 con T3' y C2 con C3'-2}\label{lem:props T2 con T3' y C2 con C3'-2-1} Let $\mathcal{C}$ be an Ab4 (abelian) category, $\mathcal{X}\subseteq\mathcal{C}$ be closed under extensions and such that $\mathcal{X}=\Addx[\mathcal{X}]$ ($\mathcal{X}=\addx[\mathcal{X}]$). If $\mathcal{T}\subseteq\mathcal{C}$ satisfies $\mathrm{(T2), (T3'')\, ((t3'')),}$ and $\mathcal{T}=\mathcal{T}^{\oplus}$ ($\mathcal{T}=\mathcal{T}^{\oplus_{<\infty}}$), then $\mathcal{T}^{\bot }\cap\mathcal{X}\subseteq\Gennr[\mathcal{T}][1]$.\end{lem} \begin{proof} It can be proved in a similar way to Lemma \ref{lem:props T2 con T3' y C2 con C3'}. \end{proof} We close this section with a generalization of \cite[Corollary 3.6]{positselskicorrespondence}. \begin{prop} \label{prop:en caso de tener un generador proyectivo}\label{cor:caract}\label{rem:T3 simple}\label{prop:en caso de tener un generador proyectivo-1}\label{cor:caract-1}\label{rem:T3 simple-1} Let $\mathcal{C}$ be an abelian category, $\mathcal{X}=\smdx[\mathcal{X}]\subseteq\mathcal{C}$ be closed under extensions and admit an $\mathcal{X}$-projective relative generator in $\mathcal{X}$, and let $\mathcal{T}\subseteq\mathcal{C}$ satisfy $\mathrm{(T0), (T1), (T2), (T4),}$ and $\mathrm{(T5)}.$ Then, the following conditions are equivalent: \begin{description} \item [{(T3)}] There exists $\omega\subseteq\mathcal{T}_{\mathcal{X}}^{\vee}$ which is a relative generator in $\mathcal{X}.$ \item [{(T3')}] There exists $\omega\subseteq\mathcal{T}_{\mathcal{X}}^{\vee}$ which is an $\mathcal{X}$-projective relative generator in $\mathcal{X}.$ \end{description} Furthermore, if $\mathcal{C}$ is Ab4 (abelian), $\mathcal{X}=\Addx[\mathcal{X}]$ ($\mathcal{X}=\addx[\mathcal{X}]$) and $\mathcal{T}=\mathcal{T}^{\oplus}$ 
($\mathcal{T}=\mathcal{T}^{\oplus_{<\infty}}$), then $\mathrm{(T3)}$ and $\mathrm{(T3')}$ are equivalent to the following one: \begin{description} \item [{(T3'')}] there exists $\sigma\subseteq\mathcal{T}_{\mathcal{X}}^{\vee}$ such that $\Addx[\sigma]$ is an $\mathcal{X}$-projective relative generator in $\mathcal{X}$ \end{description} ($\mathrm{\mathbf{(t3''):}}$ $\,$ there exists $\sigma\subseteq\mathcal{T}_{\mathcal{X}}^{\vee}$ such that $\addx[\sigma]$ is an $\mathcal{X}$-projective relative generator in $\mathcal{X}$). \end{prop} \begin{proof} Let $\rho$ be an $\mathcal{X}$-projective relative generator in $\mathcal{X}$. \ (T3) $\Rightarrow$ (T3'): By Theorem \ref{thm:el par n-X-tilting} (a), we have $\rho\subseteq\mathcal{T}_{\mathcal{X}}^{\vee}$. Hence, we can take $\omega:=\rho.$ (T3') $\Rightarrow$ (T3): It is trivial. Let $\mathcal{C}$ be Ab4, $\mathcal{X}=\Addx[\mathcal{X}]$ and $\mathcal{T}=\mathcal{T}^{\oplus}$ (the case where $\mathcal{C}$ is abelian, $\mathcal{X}=\addx[\mathcal{X}]$ and $\mathcal{T}=\mathcal{T}^{\oplus_{<\infty}}$ can be done by similar arguments). (T3') $\Rightarrow$ (T3''): Let $\omega$ be the relative generator in $\mathcal{X}$ satisfying (T3'). Since $\omega\subseteq\mathcal{X}=\Addx[\mathcal{X}]$, we can take $\sigma:=\omega.$ (T3'') $\Rightarrow$ (T3): It follows from Proposition \ref{prop:equiv a t3} and Lemma \ref{lem:props T2 con T3' y C2 con C3'-2}. \end{proof} \subsection{\label{sub:Big small}Tilting for classes of compact-like objects} In this section we will consider a class $\mathcal{X}$ consisting of compact-like objects in an abelian category $\mathcal{C}.$ We shall see that, in this case, a class $\mathcal{T}$ is big $n$-$\mathcal{X}$-tilting if and only if it is small $n$-$\mathcal{X}$-tilting. Let us begin by defining what kind of compact-like objects we will be considering. 
\begin{defn} \label{def: compact, fg , etc} Let $\mathcal{C}$ be an additive category, $\mathcal{T}\subseteq\mathcal{C}$ and $M\in\mathcal{C}$. \begin{itemize} \item[$\mathrm{(a)}$] $M$ is \textbf{finitely $\mathcal{T}$-generated} if, for every family $\left\{ U_{i}\right\} _{i\in I} \subseteq \mathcal{T}$ such that the coproduct $\bigoplus_{i\in I}U_{i}$ in $\mathcal{C}$ exists, every epimorphism $\varphi:\bigoplus_{i\in I}U_{i}\rightarrow M$ in $\mathcal{C}$ admits a finite set $F\subseteq I$ such that the composition \begin{center} $\bigoplus_{i\in F}U_{i}\xrightarrow{i_{F,I}}\bigoplus_{i\in I}U_{i}\xrightarrow{\varphi}M$ \end{center} is an epimorphism, where $i_{F,I}$ is the natural inclusion. We denote by $\operatorname{f.g.}(\mathcal{T})$ the class of all the finitely $\mathcal{T}$-generated objects in $\mathcal{C}.$ \item[$\mathrm{(b)}$] $M$ is \textbf{$\mathcal{T}$-compact} if, for every family $\left\{ U_{i}\right\} _{i\in I}\subseteq\mathcal{T}$ such that the coproduct $\bigoplus_{i\in I}U_{i}$ in $\mathcal{C}$ exists, every morphism $\psi:M\rightarrow\bigoplus_{i\in I}U_{i}$ in $\mathcal{C}$ admits a finite set $F\subseteq I$ such that $\psi$ factors through the inclusion $i_{F,I}:\bigoplus_{i\in F}U_{i}\rightarrow\bigoplus_{i\in I}U_{i}\mbox{.}$ We denote by $\mathcal{K}_{\mathcal{T}}$ the class of all the $\mathcal{T}$-compact objects in $\mathcal{C}.$ \item[$\mathrm{(c)}$] $M$ is \textbf{$\mathcal{T}$-compact for monomorphisms} if, for every family $\left\{ U_{i}\right\} _{i\in I}\subseteq\mathcal{T}$ such that the coproduct $\bigoplus_{i\in I}U_{i}$ in $\mathcal{C}$ exists, every monomorphism $\psi:M\rightarrow\bigoplus_{i\in I}U_{i}$ in $\mathcal{C}$ admits a finite set $F\subseteq I$ such that $\psi$ factors through the natural inclusion $i_{F,I}:\bigoplus_{i\in F}U_{i}\rightarrow\bigoplus_{i\in I}U_{i}\mbox{.}$ We denote by $\mathcal{K}_{\mathcal{T},\mathcal{M}}$ the class of all the $\mathcal{T}$-compact for monomorphisms objects in 
$\mathcal{C}$. \end{itemize} \end{defn} \begin{rem}\label{rem: sobre compactos y fg} Let $\mathcal{C}$ be an additive category. \ (1) The notion of $\mathcal{T}$-compact object in $\mathcal{C}$ is a generalization of the notion of compact object in $\mathcal{C}$, which can be found in \cite[Chapter II, Section 16]{mitchell} and \cite[Chapter 3, Section 5]{Popescu} under the name of ``\emph{small object}''. There are numerous generalizations of this notion that are particular cases of $\mathcal{T}$-compactness, such as the \emph{``self-small object''} \cite{arnold1975abelian}. \ (2) The notion of finitely $\mathcal{T}$-generated object is a generalization of the notion of finitely generated object commonly found in the context of Ab5 categories, see \cite[Chapter V, Section 3]{ringsofQuotients}. \end{rem} \begin{lem}\cite[Chapter II. Lemma 16.1]{mitchell}\label{lem:para compactos} Let $\mathcal{C}$ be an additive category and $\{A_i\}_{i\in I}\subseteq\mathcal{C}$ a family of objects such that the coproduct $\bigoplus_{i\in I}A_{i}$ in $\mathcal{C}$ exists. Then, a morphism $\alpha:A\rightarrow\bigoplus_{i\in I}A_{i}$ factors through the canonical inclusion $i_{F,I}:\bigoplus_{i\in F}A_{i}\rightarrow\bigoplus_{i\in I}A_{i}$, where $F$ is a finite subset of $I$, if and only if $\alpha=\sum_{i\in F}u_{i}p_{i}\alpha$, where $u_{i}$ and $p_{i}$ are, respectively, the $i$-th injection and the $i$-th projection for the coproduct $\bigoplus_{i\in I}A_{i}$. \end{lem} \begin{cor}\label{lem:compactos son cerrados por cocientes} Let $\mathcal{C}$ be an additive category and $\mathcal{T}\subseteq\mathcal{C}$. Then, the class $\mathcal{K}_{\mathcal{T}}$ is closed under quotients.\end{cor} \begin{proof} Let $\pi:M\rightarrow N$ be an epimorphism in $\mathcal{C}$, where $M\in\mathcal{K}_{\mathcal{T}}$. We will show that $N\in\mathcal{K}_{\mathcal{T}}$. 
Let $\left\{ U_{i}\right\} _{i\in I}\subseteq\mathcal{T}$ be a family such that the coproduct $\bigoplus_{i\in I}U_{i}$ in $\mathcal{C}$ exists, and let $\alpha:N\rightarrow\bigoplus_{i\in I}U_{i}$ be a morphism in $\mathcal{C}$. Since $M$ is $\mathcal{T}$-compact, by Lemma \ref{lem:para compactos} there is a finite set $J\subseteq I$ such that $\alpha\pi=\sum_{i\in J}u_{i}p_{i}\alpha\pi=\left(\sum_{i\in J}u_{i}p_{i}\alpha\right)\pi\mbox{,}$ where $u_{i}:U_{i}\rightarrow\bigoplus_{i\in I}U_{i}$ and $p_{i}:\bigoplus_{i\in I}U_{i}\rightarrow U_{i}$ are, respectively, the natural inclusion and projection in the coproduct. Then $\alpha=\sum_{i\in J}u_{i}p_{i}\alpha$ since $\pi$ is an epimorphism, and thus, by Lemma \ref{lem:para compactos}, we get $N\in\mathcal{K}_{\mathcal{T}}.$ \end{proof} The following result is inspired by \cite[Chapter II, Proposition 16.2]{mitchell}. \begin{lem}\label{lem:caracterizacion T-compactos}\label{lem:caracterizacion compactos} For an additive category $\mathcal{C},$ $\mathcal{T}\subseteq\mathcal{C}$ and $M\in\mathcal{C},$ the following statements are equivalent. \begin{itemize} \item[$\mathrm{(a)}$] $M$ is $\mathcal{T}$-compact. \item[$\mathrm{(b)}$] For every family $\left\{ U_{i}\right\} _{i\in X} \subseteq \mathcal{T}$ such that the coproduct $\bigoplus_{i\in X}U_{i}$ in $\mathcal{C}$ exists, the map $$\upsilon:\bigoplus_{i\in X}\Homx[][M][U_{i}] \rightarrow\Homx[][M][\bigoplus_{i\in X}U_{i}],\,(\alpha_{i})_{i\in X}\mapsto\sum_{i\in X}u_{i}\alpha_{i}\mbox{,}$$ is an isomorphism, where $u_{i}:U_{i}\rightarrow\bigoplus_{i\in X}U_{i}$ is the natural inclusion in the coproduct. \end{itemize} \end{lem} \begin{proof} Let $\left\{ U_{i}\right\} _{i\in X}$ be a family in $\mathcal{T}$ such that the coproduct $\bigoplus_{i\in X}U_{i}$ in $\mathcal{C}$ exists. For every $i\in X$, consider the natural projection $p_{i}:\bigoplus_{i\in X}U_{i}\rightarrow U_{i}$. 
Note that the sum $\sum_{i\in X}u_{i}\alpha_{i}$ is well-defined, for any $ (\alpha _i)_{i\in X}\in \bigoplus_{i\in X}\Homx[][M][U_{i}] .$ Moreover, $\upsilon$ is always a monomorphism since $p_{k}\left(\sum_{i\in X}u_{i}\alpha_{i}\right)=\alpha_{k}.$ \ (a) $\Rightarrow$ (b) It is enough to show that $\upsilon$ is surjective. Consider a morphism $\alpha:M\rightarrow\bigoplus_{i\in X}U_{i}$ in $\mathcal{C}$. Since $M\in\mathcal{K}_{\mathcal{T}}$, there is a finite set $J\subseteq X$ such that $\alpha=\sum_{i\in J}u_{i}p_{i}\alpha$. Therefore $\alpha=\upsilon(p_{i}\alpha)_{i\in X}$ and thus $\upsilon$ is surjective. \ (b) $\Rightarrow$ (a) Let $\upsilon$ be an isomorphism. In particular, every $\alpha\in\Homx[][M][\bigoplus_{i\in X}U_{i}]$ admits an element $(\alpha_{i})_{i\in X}\in\bigoplus_{i\in X}\Homx[][M][U_{i}]$ such that $\alpha=\sum_{i\in X}u_{i}\alpha_{i}\mbox{.}$ Now, since $p_{k}\upsilon(\alpha_{i})_{i\in X}=\alpha_{k}$ $\forall k\in X$, we have $\alpha=\sum_{j\in X}u_{j}\alpha_{j}=\sum_{j\in X}u_{j}(p_{j}\upsilon(\alpha_{i})_{i\in X})=\sum_{j\in X}u_{j}p_{j}\alpha\mbox{.}$ Hence, $M\in\mathcal{K}_{\mathcal{T}}$ by Lemma \ref{lem:para compactos}.\end{proof} \begin{cor}\label{cor:coproducto de3 compactos} Let $\mathcal{C}$ be an additive category, $\mathcal{T}\subseteq\mathcal{C}$ and $A=\oplus_{i=1}^nA_i$ in $\mathcal{C}.$ Then $A$ is $\mathcal{T}$-compact if, and only if, each $A_i$ is $\mathcal{T}$-compact. \end{cor} \begin{proof} It follows from Lemma \ref{lem:caracterizacion T-compactos}. \end{proof} \begin{cor}\label{cor:finitamente generado es compacto} For an additive category $\mathcal{C},$ $\mathcal{T}\subseteq\mathcal{C}$ and a relative generator $\omega^{\oplus}$ in $\mathcal{C},$ with $\omega\subseteq\mathcal{K}_{\mathcal{T}},$ the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] $\operatorname{f.g.}(\omega)\subseteq\operatorname{Fac}_{1}(\omega^{\oplus_{<\infty}})\subseteq\mathcal{K}_{\mathcal{T}}$. 
\item[$\mathrm{(b)}$] If $\mathcal{C}$ is abelian and $\Extx[1][][\omega][\operatorname{Fac}_{1}(\omega^{\oplus_{<\infty}})]=0$, then $\operatorname{Fac}_{1}(\omega^{\oplus_{<\infty}})$ is closed under extensions in $\mathcal{C}$. \end{itemize} \end{cor} \begin{proof} Item (a) follows from Remark \ref{rem: sobre compactos y fg}(2), Lemma \ref{lem:compactos son cerrados por cocientes} and Corollary \ref{cor:coproducto de3 compactos}. Finally, the proof of (b) can be done in a similar way to the proof of the Horseshoe Lemma \cite[Horseshoe Lemma 2.2.8]{weibel1995introduction}. \end{proof} \begin{lem}\label{lem:finitamente generados en Ab5}\cite[Chapter V. Lemma 3.1]{ringsofQuotients} Let $\mathcal{C}$ be an Ab5 category. Then $\operatorname{f.g.}(\mathcal{C})$ is closed under quotients and extensions. \end{lem} We have the following well-known facts. \begin{cor}\label{cor:finitamente generado es compacto-1} Let $R$ be a ring and $\modd[R]:=\operatorname{f.g.}(\Modx[R])$. Then, the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] $\modd\subseteq\mathcal{K}_{\Modx[R]}$ and $\modd$ is closed under extensions and quotients in $\Modx$. In particular, $\modd$ is right thick in $\Modx$. \item[$\mathrm{(b)}$] $R$ is left noetherian if, and only if, $\modd$ is a thick abelian subcategory of $\Modx$. \end{itemize} \end{cor} \begin{prop}\label{prop:add Add vs compactos} Let $\mathcal{C}$ be an abelian category, $\mathcal{T}\subseteq\mathcal{C}$ and $\mathcal{Z}'\in\left\{ \mathcal{K}_{\mathcal{T}},\mathcal{K}_{\mathcal{T},\mathcal{M}}\right\} $. Then $\Addx[\mathcal{T}]\cap\mathcal{Z}=\addx[\mathcal{T}]\cap\mathcal{Z}$ for every $\mathcal{Z}\subseteq\mathcal{Z}'$. \end{prop} \begin{proof} Let $\mathcal{Z}'\in\left\{ \mathcal{K}_{\mathcal{T}},\mathcal{K}_{\mathcal{T},\mathcal{M}}\right\} $ and $\mathcal{Z}\subseteq\mathcal{Z}'$. Observe that it is enough to show that $\Addx[\mathcal{T}]\cap\mathcal{Z}\subseteq\addx[\mathcal{T}]\cap\mathcal{Z}$. 
Consider $X\in\Addx[\mathcal{T}]\cap\mathcal{Z}$. Then, there is a splitting exact sequence $\suc[X'][\bigoplus_{i\in I}U_{i}][X][][f]\mbox{ with }\{U_{i}\}_{i\in I}\subseteq\mathcal{T}\mbox{.}$ Let $\mu:X\rightarrow\bigoplus_{i\in I}U_{i}$ be a monomorphism such that $f\mu=1_{X}$. Since $X\in\mathcal{Z}\subseteq\mathcal{Z}'$, it follows that there is a finite set $J\subseteq I$ and a morphism $\mu':X\rightarrow\bigoplus_{j\in J}U_{j}$ such that $\mu=i_{J,I}\circ\mu'$, where $i_{J,I}:\bigoplus_{j\in J}U_{j}\rightarrow\bigoplus_{i\in I}U_{i}$ is the natural inclusion. Consider the morphism $g:=f\circ i_{J,I}:\bigoplus_{j\in J}U_{j}\rightarrow X$. Since $g\mu'=f\circ i_{J,I}\circ\mu'=f\mu=1_{X}$, $g$ is a splitting epimorphism and thus $X\in\addx[\mathcal{T}]\cap\mathcal{Z}$ since $J$ is finite. \end{proof} \begin{lem}\label{lem: addT precub es AddT precub} Let $\mathcal{C}$ be an Ab3 category, $\mathcal{M}\subseteq\mathcal{C}$ and $\alpha:M\rightarrow X$ in $\mathcal{C}.$ If $\alpha$ is an $\addx[\mathcal{M}]$-precover of $X$, then $\alpha$ is an $\Addx[\mathcal{M}]$-precover of $X$.\end{lem} \begin{proof} Let $\alpha:M\rightarrow X$ be an $\addx[\mathcal{M}]$-precover. Since $\Addx[\mathcal{M}]=\smdx[\mathcal{M}^{\oplus}]$, it is easy to see that every morphism $M'\rightarrow X$, with $M'\in\Addx[\mathcal{M}]$, factors through $\mathcal{M}^{\oplus}$. Furthermore, every morphism $M''\rightarrow X$ with $M''\in\mathcal{M}^{\oplus}$ factors through $M^{\oplus}$. Indeed, consider a morphism $\alpha':\bigoplus_{i\in I}M_{i}\rightarrow X$, with $M_{i}\in\mathcal{M}\:\forall i\in I$, and the canonical inclusions $\left\{ v_{i}:M_{i}\rightarrow\bigoplus_{i\in I}M_{i}\right\} _{i\in I}$. 
Since $\alpha$ is an $\addx[\mathcal{M}]$-precover, $\forall i\in I$ there is $\lambda_{i}:M_{i}\rightarrow M$ such that $\alpha'v_{i}=\alpha\lambda_{i}.$ Therefore, there is $\lambda:\bigoplus_{i\in I}M_{i}\rightarrow M^{(I)}$ such that $\lambda v_{i}=v'_{i}\lambda_{i}$ $\forall\,i\in I,$ where $v'_{i}:M\rightarrow M^{(I)}$ is the natural inclusion in the coproduct. Observe that $\alpha'$ factors through $\lambda$ and the morphism $\alpha'':M^{(I)}\rightarrow X$ induced by the universal property of the coproduct, which satisfies $\alpha''v'_{i}=\alpha$ for all $i\in I.$ Now, for each $i\in I,$ we have $\alpha''\lambda v_{i}=\alpha''v'_{i}\lambda_{i}=\alpha\lambda_{i}=\alpha'v_{i}$ and thus $\alpha''\lambda=\alpha'$. Finally, we assert that $\alpha''$ factors through $\alpha$. To show it, consider the morphism $\sigma:M^{(I)}\rightarrow M$ such that $\sigma v'_{j}=1_{M}$ for all $j\in I.$ Then, for each $j\in I,$ we get $\alpha\sigma v'_{j}=\alpha 1_{M}=\alpha=\alpha''v'_{j}$ and so $\alpha\sigma=\alpha''$. \end{proof} \begin{thm}\label{thm:n-X-tilting sii n-X-tilting peque=0000F1o} Let $\mathcal{C}$ be an Ab4 category, $\mathcal{T}\subseteq\mathcal{C}$, and $\mathcal{Z}'\in\left\{ \mathcal{K}_{\mathcal{T}},\mathcal{K}_{\mathcal{T},\mathcal{M}}\right\} $. Then, for every $\mathcal{Z}\subseteq\mathcal{Z}'$ closed under extensions, $\Addx[\mathcal{T}]$ is $n$-$\mathcal{Z}$-tilting if and only if $\addx[\mathcal{T}]$ is $n$-$\mathcal{Z}$-tilting. \end{thm} \begin{proof} Let $\mathcal{Z}\subseteq\mathcal{Z}'$ be closed under extensions. Since $\mathcal{C}$ is Ab4, $\pdr[\mathcal{X}][\operatorname{Add}(\mathcal{T})]=\pdr[\mathcal{X}][\mathcal{T}]=\pdr[\mathcal{X}][\operatorname{add}(\mathcal{T})]$. 
Now, by Proposition \ref{prop:add Add vs compactos}, $\Addx[\mathcal{T}]\cap\mathcal{Z}=\addx[\mathcal{T}]\cap\mathcal{Z}\mbox{.}$ Hence, using that $\mathcal{Z}$ is closed under extensions, we have $\omega\subseteq(\Addx[\mathcal{T}])_{\mathcal{Z}}^{\vee}$ if and only if $\omega\subseteq(\addx[\mathcal{T}])_{\mathcal{Z}}^{\vee},$ for any $\omega\subseteq\mathcal{Z}$. Finally, (T5) follows from Lemma \ref{lem: addT precub es AddT precub} and Proposition \ref{prop:add Add vs compactos}. \end{proof} \begin{cor}\label{cor:coro1 p80} Let $\mathcal{C}$ be an Ab4 category, $\mathcal{T}\subseteq\mathcal{W}\subseteq\mathcal{C}$, and let $\omega^{\oplus}$ be a relative generator in $\mathcal{C}$ such that $\omega\subseteq\mathcal{K}_{\mathcal{W}}$. Then, for every $\mathcal{Z}\subseteq\operatorname{f.g.}(\omega)$ closed under extensions, we have that $\Addx[\mathcal{T}]$ is $n$-$\mathcal{Z}$-tilting if and only if $\addx[\mathcal{T}]$ is $n$-$\mathcal{Z}$-tilting. \end{cor} \begin{proof} It follows from Theorem \ref{thm:n-X-tilting sii n-X-tilting peque=0000F1o} and Corollary \ref{cor:finitamente generado es compacto}. \end{proof} \begin{cor}\label{cor:coro2 p80}Let $R$ be a ring and $\mathcal{T}\subseteq\Modx$. Then, for every $\mathcal{Z}\subseteq\modd$ closed under extensions, we have that $\Addx[\mathcal{T}]$ is $n$-$\mathcal{Z}$-tilting if and only if $\addx[\mathcal{T}]$ is $n$-$\mathcal{Z}$-tilting. \end{cor} \begin{proof} It follows from Corollary \ref{cor:finitamente generado es compacto-1} (a) and Corollary \ref{cor:coro1 p80}. \end{proof} \subsection{$n$-$\mathcal{X}$-tilting triples in abelian categories} \begin{defn} Let $\mathcal{C}$ be an abelian category. 
We say that $\left(\p;\mathcal{T}\right)$ is a\textbf{ big (small) $n$-$\mathcal{X}$-tilting triple} provided the following statements hold true: \begin{description} \item [(TT1)] $\p$ is a left cotorsion pair in $\mathcal{X}$ with $\idr[\mathcal{A}][\mathcal{B}\cap\mathcal{X}]=0.$ \item [(TT2)] $\mathcal{B}$ is closed under extensions and direct summands. \item [(TT3)] There is a big (small) $n$-$\mathcal{X}$-tilting class $\mathcal{T}$ such that $\mathcal{B}\cap\mathcal{X}=\mathcal{T}^{\bot}\cap\mathcal{X}$ and $\mathcal{T}\cap\mathcal{X}\subseteq\mathcal{A}\cap\mathcal{B}\cap\mathcal{X}$. \end{description} \end{defn} \begin{lem}\label{lem:lemita inyectivos vs par X-completo a izq} Let $\p$ be a right $\mathcal{X}$-complete pair in an abelian category $\mathcal{C}$ such that $\mathcal{B}\cap\mathcal{X}=\smdx[\mathcal{B}\cap\mathcal{X}].$ If $\alpha\subseteq\mathcal{X}\subseteq{}^{\bot_{1}}\alpha$, then $\alpha\subseteq\mathcal{B}\cap\mathcal{X}$.\end{lem} \begin{proof} It is straightforward. \end{proof} \begin{lem} \label{lem:lema previo a 2.80} For an abelian category $\mathcal{C},$ $\mathcal{X}\subseteq\mathcal{C}$, a right $\mathcal{X}$-complete and $\mathcal{X}$-hereditary pair $\p$ such that $\mathcal{B}\cap\mathcal{X}=\smdx[\mathcal{B}\cap\mathcal{X}]$ and $n:=\max\left\{ 1,\pdr[\mathcal{X}][\mathcal{A}]\right\} <\infty,$ the following conditions hold true. \begin{itemize} \item[$\mathrm{(a)}$] Every $X\in\mathcal{X}$ admits an exact sequence \[ 0\rightarrow X\overset{f_{0}}{\rightarrow}B_{X,0}\overset{f_{1}}{\rightarrow}B_{X,1}\rightarrow...\overset{f_{n}}{\rightarrow}B_{X,n}\rightarrow0\mbox{,} \] with $B_{X,n}\in\mathcal{A}\cap\mathcal{B}\cap\mathcal{X}$, $B_{X,i}\in\mathcal{B}\cap\mathcal{X}$, and $\Cok[f_{i}]\in\mathcal{A}\cap\mathcal{X}$ $\forall i\in[0,n-1]$. In particular, $\coresdimr{\mathcal{B}\cap\mathcal{X}}{\mathcal{X}}{\mathcal{A}\cap\mathcal{X}}\leq n$. 
\item[$\mathrm{(b)}$] Let $\mathcal{A}\cap\mathcal{X}$ and $\mathcal{X}$ be closed under extensions and $^{\bot}(\mathcal{B} \cap \mathcal{X})\cap \mathcal{B} \cap \mathcal{X} \subseteq \mathcal{A} \cap \mathcal{B} \cap \mathcal{X}$. Then, every $W\in\mathcal{X}\cap{}^{\bot}\mathcal{X}$ admits an exact sequence \[ 0\rightarrow W\stackrel{f_{0}}{\rightarrow}B_{W,0}\stackrel{f_{1}}{\rightarrow}B_{W,1}\rightarrow...\stackrel{f_{n}}{\rightarrow}B_{W,n}\rightarrow0 \] with $B_{W,i}\in\mathcal{A}\cap\mathcal{B}\cap\mathcal{X}$ $\forall i\in[0,n]$ and $\Cok[f_{j}]\in\mathcal{A}\cap\mathcal{X}$ $\forall j\in[0,n-1]$. In particular, $\coresdimr{\mathcal{A}\cap\mathcal{B}\cap\mathcal{X}}{\mathcal{X}\cap{}^{\bot}\mathcal{X}}{\mathcal{A}\cap\mathcal{X}}\leq n$. \end{itemize} \end{lem} \begin{proof} (a) Let $X\in\mathcal{X}$. Since $\p$ is right $\mathcal{X}$-complete, there is an exact sequence $\suc[X][B_{X,0}][C_{1}][\,][\,]\mbox{,}$ with $B_{X,0}\in\mathcal{B}\cap\mathcal{X}$ and $C_{1}\in\mathcal{A}\cap\mathcal{X}$. Repeating the same argument recursively, we can build an exact sequence \[ 0\rightarrow X\rightarrow B_{X,0}\rightarrow B_{X,1}\rightarrow...\rightarrow B_{X,n}\rightarrow C_{n+1}\rightarrow0 \] with $B_{X,i}\in\mathcal{B}\cap\mathcal{X}$ $\forall i\in[0,n]$ and $C_{n+1}\in\mathcal{A}\cap\mathcal{X}$. Moreover $B_{X,i}\in C_{n+1}^{\bot}$ $\forall i\in[0,n]$ since $\p$ is $\mathcal{X}$-hereditary. Hence $\Extx[1][][C_{n+1}][C_{n}]\cong\Extx[n+1][][C_{n+1}][X]=0$ since $C_{n+1}\in\mathcal{A}$ and $\pdr[\mathcal{X}][\mathcal{A}]\leq n$. Therefore $\suc[C_{n}][B_{X,n}][C_{n+1}][\,][\,]$ splits and thus $0\rightarrow X\rightarrow B_{X,0}\rightarrow B_{X,1}\rightarrow...\rightarrow B_{X,n-1}\rightarrow C_{n}\rightarrow0$ is the desired exact sequence. \ (b) Let $W\in\mathcal{X} \cap {}^{\bot}\mathcal{X}$. 
Consider the exact sequence \[ 0\rightarrow W\stackrel{f_{0}}{\rightarrow}B_{W,0}\stackrel{f_{1}}{\rightarrow}B_{W,1}\rightarrow...\stackrel{f_{n}}{\rightarrow}B_{W,n}\rightarrow0 \] obtained in (a). Now, by using that $\mathcal{A}\cap\mathcal{X}$ is closed under extensions, we can conclude that $B_{W,k}\in\mathcal{A}\cap\mathcal{B}\cap\mathcal{X}$ $\forall k\in[1,n]$. Finally, consider the exact sequence $\suc[W][B_{W,0}][C_{1}]\mbox{.}$ Since $W\in\mathcal{X}\cap{}^{\bot}\mathcal{X}\subseteq\mathcal{X}\cap{}^{\bot}\left(\mathcal{B}\cap\mathcal{X}\right)$ and $C_{1}\in\mathcal{A}\cap\mathcal{X}\subseteq\mathcal{X}\cap{}^{\bot}\left(\mathcal{B}\cap\mathcal{X}\right)$, we have $B_{W,0}\in\mathcal{X}\cap{}{}^{\bot}\left(\mathcal{B}\cap\mathcal{X}\right)\cap\mathcal{B}\subseteq \mathcal{X}\cap\mathcal{A}\cap\mathcal{B}\mbox{.}$ \end{proof} The following theorem is a generalization of \cite[Theorem 3.2]{Hopel-Mendoza}. Note that we are writing, at the same time, the big and the small versions. \begin{thm}\label{thm:dimensiones en un par de cotorsion tilting}\label{thm:dimensiones en un par de cotorsion tilting-1}\label{rem:obs 2.80'}\label{rem:obs 2.80''} Let $\mathcal{C}$ be an Ab4 (abelian) category, $\mathcal{X}\subseteq\mathcal{C}$ be closed under extensions such that $\mathcal{X}=\Addx[\mathcal{X}]$ ($\mathcal{X}=\addx[\mathcal{X}]$), admitting an $\mathcal{X}$-injective relative cogenerator in $\mathcal{X}$ and a class $\sigma$ such that $\Addx[\sigma]$ ($\addx[\sigma]$) is an $\mathcal{X}$-projective relative generator in $\mathcal{X}$. Consider a left cotorsion pair $\p$ in $\mathcal{X}$ such that $\idr[\mathcal{A}][\mathcal{B}\cap\mathcal{X}]=0$ and $\mathcal{B}=\smdx[\mathcal{B}]$ is closed under extensions, and let $\kappa := \mathcal{A} \cap \mathcal{B} \cap \mathcal{X}$. 
Then, the following statements are equivalent: \begin{itemize} \item[$\mathrm{(a)}$] There exists $\mathcal{T}\subseteq\mathcal{C}$ such that $(\p;\mathcal{T})$ is a big (small) $n$-$\mathcal{X}$-tilting triple. \item[$\mathrm{(b)}$] $\p$ is right $\mathcal{X}$-complete, $^{\bot}(\mathcal{B} \cap \mathcal{X})\cap \mathcal{B} \cap \mathcal{X} \subseteq \mathcal{A} \cap \mathcal{B} \cap \mathcal{X}$, $\pdr[\mathcal{X}][\mathcal{A}]\leq n$, $\kappa=\kappa^{\oplus}$ ($\kappa=\kappa^{\oplus_{<\infty}}$) and $\kappa$ is precovering in $\mathcal{B}\cap\mathcal{X}.$ \item[$\mathrm{(c)}$] $\kappa$ is a big (small) $n$-$\mathcal{X}$-tilting class such that $\mathcal{B}\cap\mathcal{X}=\kappa^{\bot}\cap\mathcal{X}$. \end{itemize} Furthermore, if any of the above conditions is satisfied, then $\mathcal{A}\cap\mathcal{X}\subseteq\kappa^{\vee}$, $\kappa=(\mathcal{A}\cap\mathcal{X})^{\bot}\cap\mathcal{A}\cap\mathcal{X}={}^{\bot}(\mathcal{B}\cap\mathcal{X})\cap\mathcal{B}\cap\mathcal{X}$, $\coresdimr{\mathcal{B}\cap\mathcal{X}}{\mathcal{X}}{\mathcal{A}\cap\mathcal{X}}\leq\max\left\{ 1,n\right\} $, and $\idr[\mathcal{A}\cap\mathcal{X}][\mathcal{X}]=\coresdimr{\mathcal{B}}{\mathcal{X}}{\mathcal{X}}\leq n$. Moreover, if $\sigma$ is a (finite) set (and $\addx[X]$ is precovering in $X^{\bot}\cap\mathcal{X}$ $\forall X\in\kappa$), then we can dismiss the hypothesis in $\mathrm{(b)}$ saying that $\kappa$ is precovering in $\mathcal{B}\cap\mathcal{X}$ and find $T\in\mathcal{C}$ such that $\Addx[T]=\kappa$ ($\addx[T]=\kappa$). \end{thm} \begin{proof} (a) $\Rightarrow$ (b) By Lemma \ref{lem:partiltingescompleto}, it follows that $\p$ is $\mathcal{X}$-complete, $\mathcal{X}$-hereditary, and $\mathcal{T}\cap\mathcal{X}=\kappa={}{}^{\bot}\left(\mathcal{B}\cap\mathcal{X}\right)\cap\mathcal{B}\cap\mathcal{X}.$ In particular $\kappa=\kappa^{\oplus}$ ($\kappa=\kappa^{\oplus_{<\infty}}$). 
Moreover, since $\mathcal{A}\subseteq{}{}{}^{\bot}\left(\mathcal{B}\cap\mathcal{X}\right)$, we can conclude that $\pdr[\mathcal{X}][\mathcal{A}]\leq n$ by \cite[Proposition 4.5(b)]{parte1}, (T1) and (T4). Finally, it follows from (T5) that $\kappa$ is precovering in $\mathcal{B}\cap\mathcal{X}$. (b) $\Rightarrow$ (c) Let $\omega:=\Addx[\sigma]$ ($\omega:=\addx[\sigma]$) and let $\alpha$ be an $\mathcal{X}$-injective relative cogenerator in $\mathcal{X}$. We claim that $\mathcal{B}\cap\mathcal{X}=\kappa^{\bot}\cap\mathcal{X}$. Indeed, since $\kappa\subseteq\mathcal{A}$ and $\idr[\mathcal{A}][\mathcal{B}\cap\mathcal{X}]=0$, we have $\mathcal{B}\cap\mathcal{X}\subseteq\kappa^{\bot}\cap\mathcal{X}.$ Let us show that $\kappa^{\bot}\cap\mathcal{X}\subseteq \mathcal{B}\cap\mathcal{X}.$ Consider $X\in\kappa^{\bot }\cap\mathcal{X}$. Then, by Lemma \ref{lem:lema previo a 2.80} (a), there is an exact sequence $0\rightarrow X\overset{f_{0}}{\rightarrow}B_{0}\overset{f_{1}}{\rightarrow}B_{1}\overset{f_{2}}{\rightarrow}...\overset{f_{k}}{\rightarrow}B_{k}\rightarrow0$ such that $B_{k}\in\kappa$, $B_{i}\in\mathcal{B}\cap\mathcal{X}$ and $\Cok[f_{i}]\in\mathcal{A}\cap\mathcal{X}$ $\forall i\in[0,k]$. Hence, using that $X\in\kappa^{\bot}\cap\mathcal{X}$ and $B_{i}\in\mathcal{B}\cap\mathcal{X}\subseteq\kappa^{\bot}\cap\mathcal{X}$, we have that $\im[f_{j}]\in\kappa^{\bot}\cap\mathcal{X}$ $\forall j\in[1,k-1]$.\\ Let us prove, by induction on $k$, that $X\in\mathcal{B}$. Indeed, if $k=0$ then $X\cong B_{0}\in\mathcal{B}$. For the case $k=1$, we have an exact sequence $\eta_{1}:\;\suc[X][B_{0}][B_{1}][\,][\,]\mbox{,}$ with $X\in\kappa^{\bot}\cap\mathcal{X}$ and $B_{1}\in\kappa$. Hence $\eta_{1}$ splits and thus $X\in\mathcal{B}$. \ Let $k>1$. Let us show that $B_{k-1}\in\kappa$. 
Indeed, using that $B_{k-1}\in\mathcal{B}\cap\mathcal{X},$ it is enough to show that $B_{k-1}\in\mathcal{A}.$ From the exact sequence $\eta_{k}:\;\suc[K_{k-1}][B_{k-1}][B_{k}][\,][\;]$, we get $B_{k-1}\in\mathcal{A}$ since $K_{k-1},B_{k}\in\mathcal{A}$ and $\mathcal{A}$ is closed under extensions; proving that $B_{k-1}\in\kappa.$ Now, by using that $B_{k}\in\kappa$ and $K_{k-1}\in\kappa^{\bot}\cap\mathcal{X}$, we get that $\eta_{k}$ splits and thus $K_{k-1}\in\kappa$. Furthermore, from the exact sequence \[ 0\rightarrow X\overset{f_{0}}{\rightarrow}B_{0}\overset{f_{1}}{\rightarrow}B_{1}\overset{f_{2}}{\rightarrow}...\overset{f_{k-2}}{\rightarrow}B_{k-2}\rightarrow K_{k-1}\rightarrow0, \] and using that $K_{k-1}\in\kappa$, we have, by the inductive hypothesis, that $X\in\mathcal{B}.$ \ Let us show that $\kappa$ is $n$-$\mathcal{X}$-tilting. In order to do that, we proceed to verify the axioms from (T0) to (T5). \begin{description} \item [{(T0)}] It is clear. \item [{(T1)}] Since $\kappa\subseteq\mathcal{A}$, we have $\pdr[\mathcal{X}][\kappa]\leq\pdr[\mathcal{X}][\mathcal{A}]\leq n\mbox{.}$ \item [{(T2)}] Since $\kappa\subseteq\mathcal{B}\cap\mathcal{X}\subseteq\kappa^{\bot}\cap\mathcal{X}$, we have $\kappa\cap\mathcal{X}\subseteq\kappa^{\bot}\cap\mathcal{X}$. \item [{(T4)}] By Lemma \ref{lem:lemita inyectivos vs par X-completo a izq}, $\alpha$ is an $\mathcal{X}$-injective relative cogenerator in $\mathcal{X}$ such that $\alpha\subseteq\mathcal{B}\cap\mathcal{X}$. Hence, $\alpha\subseteq\mathcal{X}^{\bot}\cap\kappa^{\bot}$ since $\mathcal{B}\cap\mathcal{X}\subseteq\kappa^{\bot}$. \item [{(T5)}] By hypothesis, $\kappa$ is precovering in $\mathcal{B}\cap\mathcal{X}$. Then, using that $\kappa^{\bot}\cap\mathcal{X}=\mathcal{B}\cap\mathcal{X}$, we have that every $Z\in\kappa^{\bot}\cap\mathcal{X}$ admits a $\kappa$-precover $T'\rightarrow Z$ with $T'\in\mathcal{X}$. \item [{(T3)}] It follows from Proposition \ref{rem:T3 simple} and Lemma \ref{lem:lema previo a 2.80}. 
\end{description} (c) $\Rightarrow$ (a) It is clear. Assume now that one of the above equivalent conditions holds true. Then we have the following facts. By Lemma \ref{lem:lema previo a 2.80} (a), $\coresdimr{\mathcal{B}\cap\mathcal{X}}{\mathcal{X}}{\mathcal{A}\cap\mathcal{X}}\leq\max\left\{ 1,n\right\} $ and thus $\mathcal{X}\subseteq(\mathcal{B}\cap\mathcal{X})_{\mathcal{A}\cap\mathcal{X}}^{\vee}\subseteq\mathcal{B}_{\mathcal{X}}^{\vee}$. Moreover $\idr[\mathcal{A}\cap\mathcal{X}][\mathcal{X}]=\coresdimr{\mathcal{B}}{\mathcal{X}}{\mathcal{X}}$ by \cite[Corollary 4.12(a)]{parte1}. On the other hand, $\kappa=(\mathcal{A}\cap\mathcal{X})^{\bot}\cap\mathcal{A}\cap\mathcal{X}$ by \cite[Theorem 4.15 (b)]{parte1}; and $\pdr[\mathcal{X}][\mathcal{A}]\leq n < \infty$ by (b). Then, by applying \cite[Theorem 4.24 (a,b)]{parte1} to $(\mathcal{A}\cap\mathcal{X},\mathcal{B})$, $\coresdimr{\mathcal{B}}{\mathcal{X}}{\mathcal{X}}=\pdr[\mathcal{A}\cap\mathcal{X}][\mathcal{A}\cap\mathcal{X}]=\pdr[\mathcal{X}][\mathcal{A}\cap\mathcal{X}]\leq\pdr[\mathcal{X}][\mathcal{A}]\leq n$ and $\mathcal{A}\cap\mathcal{X}\subseteq\kappa^{\vee}$. Finally, assume that $\sigma$ is a (finite) set ($\addx[X]$ is precovering in $X^{\bot}\cap\mathcal{X}$ for every $X\in\kappa$), $\p$ is right $\mathcal{X}$-complete, $^{\bot}(\mathcal{B} \cap \mathcal{X})\cap \mathcal{B} \cap \mathcal{X} \subseteq \mathcal{A} \cap \mathcal{B} \cap \mathcal{X}$, $\pdr[\mathcal{X}][\mathcal{A}]\leq n$, and $\kappa=\kappa^{\oplus}$ ($\kappa=\kappa^{\oplus_{<\infty}}$). Since $\sigma\subseteq\mathcal{X}\cap{}^{\bot}\mathcal{X}$, by Lemma \ref{lem:lema previo a 2.80} (b), every $W\in\sigma$ admits an exact sequence $0\rightarrow W\stackrel{f_{0}}{\rightarrow}B_{W,0}\stackrel{f_{1}}{\rightarrow}B_{W,1}\rightarrow...\stackrel{f_{n}}{\rightarrow}B_{W,n}\rightarrow0$ such that $B_{W,i}\in\mathcal{A}\cap\mathcal{B}\cap\mathcal{X}$ $\forall i\in[0,n]$ and $\Cok[f_{j}]\in\mathcal{A}\cap\mathcal{X}$ $\forall j\in[0,n-1]$. 
Consider $T_{W}:=\bigoplus_{i=0}^{n}B_{W,i}$ for every $W\in\sigma$. Since $\kappa=\kappa^{\oplus}$ ($\kappa=\kappa^{\oplus_{<\infty}}$), we have $T:=\bigoplus_{W\in\sigma}T_{W}\in\kappa$ and $\Addx[T]\subseteq\kappa$ ($\addx[T]\subseteq\kappa$). We claim that $\Addx[T]=\kappa$ ($\addx[T]=\kappa$). In order to prove it, we must show that $T$ is big (small) $n$-$\mathcal{X}$-tilting. This is done in the same manner as (b) $\Rightarrow$ (c) was proved. \ Let us show that $\kappa\subseteq\Addx[T]$ ($\kappa\subseteq\addx[T]$). Consider $X\in\kappa$. Observe that $X\in\mathcal{B}\cap\mathcal{X}\subseteq T^{\bot}\cap\mathcal{X}\mbox{.}$ Now, $T^{\bot }\cap\mathcal{X}\subseteq\Gennr[\operatorname{Add}(T)][1][\mathcal{X}]$ ($T^{\bot }\cap\mathcal{X}\subseteq\Gennr[\operatorname{add}(T)][1][\mathcal{X}]$) by Lemma \ref{lem:props T2 con T3' y C2 con C3'}. Hence, by Lemma \ref{lem:props C2 y T2} (a), $\Addx[T]=\Addx[T]\cap\mathcal{X}$ ($\addx[T]=\addx[T]\cap\mathcal{X}$) is a relative generator in $T^{\bot}\cap\mathcal{X}$. Then, using that $X\in T^{\bot}\cap\mathcal{X}$, we can build an exact sequence $0\rightarrow K_{n}\rightarrow T_{n}\overset{f_{n}}{\rightarrow}T_{n-1}\rightarrow...\overset{f_{1}}{\rightarrow}T_{0}\overset{f_{0}}{\rightarrow}X\rightarrow0\mbox{,}$ with $T_{i}\in\Addx[T]$ ($T_{i}\in\addx[T]$) and $K_{i}:=\Kerx[f_{i}]\in T^{\bot}\cap\mathcal{X}$ $\forall i\in[0,n]$. On the other hand, $\pdr[\mathcal{X}][X]\leq\pdr[\mathcal{X}][\kappa]\leq n$ and thus $\Extx[1][][K_{n-1}][K_{n}]\cong\Extx[n+1][][X][K_{n}]=0.$ Therefore, $K_{n}\in\mathcal{B}\cap\mathcal{X}\subseteq(\mathcal{A}\cap\mathcal{X})^{\bot}$ since the exact sequence $\suc[K_{n}][T_{n}][K_{n-1}][\,][\:]$ splits. 
Then, using that $T_{i}\in\kappa\subseteq\mathcal{B}\cap\mathcal{X}\subseteq\left(\mathcal{A}\cap\mathcal{X}\right)^{\bot}\,\forall i\in[0,n]$ and $(\mathcal{A}\cap\mathcal{X})^{\bot}$ is closed under mono-cokernels, we have $K_{i}\in\left(\mathcal{A}\cap\mathcal{X}\right)^{\bot}$ $\forall i\in[0,n]$. Finally, since $K_{0}\in\left(\mathcal{A}\cap\mathcal{X}\right)^{\bot}$ and $X\in\kappa\subseteq\mathcal{A}\cap\mathcal{X}$, the exact sequence $\suc[K_{0}][T_{0}][X][\,][\,]$ splits and thus $X\in\Addx[T]$ ($X\in\addx[T]$). \end{proof} \begin{rem} One of the hypotheses (in the small case) in Theorem \ref{rem:obs 2.80'} is that the class $\addx[X]$ is precovering in $X^{\bot}\cap\mathcal{X}$ $\forall X\in\kappa$. We can give two examples where such a condition is satisfied: \begin{enumerate} \item [(i)] $\mathcal{X}:=\mathcal{FP}_{n}$, $\mathcal{C}=\Modx[R]$, $\sigma=\{R\}$ with $R$ an $n$-coherent ring, see Lemma \ref{lem:anillo conmutativo coherente, entonces precubiertas}. \item [(ii)] $R$ an Artin algebra, $\mathcal{C}:=\modd[R]$, $\mathcal{X}\subseteq\modd[R]$ and $\sigma=\{R\}$, see \cite[Proposition 4.2]{auslander1980preprojective}. \end{enumerate} \end{rem} \begin{thm}\label{thm:teo nuevo p.94}\label{thm:main1-2} Let $\mathcal{C}$ be an Ab4 (abelian) category, $\mathcal{X}=\mathcal{X}^{\oplus}\subseteq\mathcal{C}$ ($\mathcal{X}\subseteq\mathcal{C}$) be a right thick class, $\sigma$ be a class such that $\Addx[\sigma]$ ($\addx[\sigma]$) is an $\mathcal{X}$-projective relative generator in $\mathcal{X}$, and let $\mathcal{B}\subseteq\mathcal{C}$ be right thick. Then, the following conditions are equivalent. \begin{itemize} \item[$\mathrm{(a)}$] There is a big (small) $n$-$\mathcal{X}$-tilting class $\mathcal{T}\subseteq\mathcal{X}$ such that $\mathcal{B}\cap\mathcal{X}=\mathcal{T}^{\bot}\cap\mathcal{X}$.
\item[$\mathrm{(b)}$] $\mathcal{B}$ satisfies the following conditions: \begin{itemize} \item [$\mathrm{(b0)}$] $\mathcal{B}\cap\mathcal{X}\cap{}^{\bot}(\mathcal{B}\cap\mathcal{X})$ is closed under coproducts (this condition is dismissed in the small case); \item[$\mathrm{(b1)}$] there is an $\mathcal{X}$-injective relative cogenerator in $\mathcal{X}$; \item [$\mathrm{(b2)}$] $\mathcal{B}\cap\mathcal{X}$ is special preenveloping in $\mathcal{X}$; \item [$\mathrm{(b3)}$] $\pdr[\mathcal{X}][^{\bot}(\mathcal{B}\cap\mathcal{X})]\leq n$; \item [$\mathrm{(b4)}$] $\mathcal{B}\cap\mathcal{X}\cap{}^{\bot}(\mathcal{B}\cap\mathcal{X})$ is precovering in $\mathcal{B}\cap\mathcal{X}$. \end{itemize} \end{itemize} Moreover, if $\mathrm{(a)}$ or $\mathrm{(b)}$ is satisfied, we have that $\mathcal{T}=\mathcal{B}\cap\mathcal{X}\cap{}^{\bot}(\mathcal{B}\cap\mathcal{X})$ and $\mathcal{B}$ is $\mathcal{X}$-coresolving. \end{thm} \begin{proof} (a) $\Rightarrow$ (b) Let $\mathcal{T}\subseteq\mathcal{X}$ be a big (small) $n$-$\mathcal{X}$-tilting class such that $\mathcal{B}\cap\mathcal{X}=\mathcal{T}^{\bot}\cap\mathcal{X}$. Let us verify the conditions of (b). \begin{enumerate} \item [(b0)] Since $\mathcal{T}\subseteq\mathcal{X}$, by Proposition \ref{prop:(a)} (a,d), we have $\mathcal{B}\cap\mathcal{X}\cap{}^{\bot}(\mathcal{B}\cap\mathcal{X})=\mathcal{T}$ and thus (b0) holds true. \item [(b1)] It follows by (T4). \item [(b2)] By Theorem \ref{thm:el par n-X-tilting} (c), the pair $({}^{\bot}(\mathcal{T}^{\bot}),\mathcal{T}^{\bot})$ is right $\mathcal{X}$-complete. Hence, (b2) follows from Theorem \ref{thm:el par n-X-tilting} (a). \item [(b3)] It follows from \cite[Proposition 4.5 (b)]{parte1}, (T4) and (T1). \item [(b4)] It follows from (T5) and Proposition \ref{prop:(a)} (a,d).
\end{enumerate} (b) $\Rightarrow$ (a) Assume the conditions of (b) hold true, and let $\alpha$ be an $\mathcal{X}$-injective relative cogenerator in $\mathcal{X}.$ We assert that $\alpha\subseteq\mathcal{B}\cap\mathcal{X}.$ Indeed, using (b2), we have that the pair $(\mathcal{X},\mathcal{B})$ is right $\mathcal{X}$-complete. Hence, by Lemma \ref{lem:lemita inyectivos vs par X-completo a izq}, our assertion follows. Also note that, since $\mathcal{B}$ and $\mathcal{X}$ are right thick, $\mathcal{B}$ is closed under extensions and mono-cokernels in $\mathcal{B}\cap\mathcal{X}$. Therefore, $\mathcal{B}$ is $\mathcal{X}$-coresolving, and thus, by \cite[Lemma 3.4]{parte1} and (b2), $({}{}^{\bot}(\mathcal{B}\cap\mathcal{X}),\mathcal{B}\cap\mathcal{X})$ is a right $\mathcal{X}$-complete left cotorsion pair in $\mathcal{X}$. Now, by (b0), (b3) and (b4), we have that $({}{}^{\bot}(\mathcal{B}\cap\mathcal{X})\cap\mathcal{X},\mathcal{B}\cap\mathcal{X})$ satisfies the conditions of Theorem \ref{thm:dimensiones en un par de cotorsion tilting} (b). Hence, by Theorem \ref{thm:dimensiones en un par de cotorsion tilting} (a), item (a) is satisfied. \end{proof} \begin{rem}\label{rem: obs 2.90'-1}\label{rem: obs1 teo nuevo} Assume the hypotheses of Theorem \ref{thm:main1-2} with $\mathcal{C}$ Ab4, $\mathcal{X}=\mathcal{X}^{\oplus}$ right thick and $\sigma$ a set. Then, we can dismiss condition (b4) from (b). Moreover, if Theorem \ref{thm:main1-2}(a) is satisfied, we can choose $\mathcal{T}\subseteq\mathcal{X}$ such that $\mathcal{T}=\Addx[T]$ with $T\in\mathcal{B}\cap\mathcal{X}\cap{}^{\bot}(\mathcal{B}\cap\mathcal{X})$. Indeed, this is a consequence of the last sentence in Theorem \ref{thm:dimensiones en un par de cotorsion tilting} and the fact that $\Addx[T]$ is precovering.
\end{rem} \begin{rem}\label{rem:obs 2.90''-1}\label{thm:main1-1-1}\label{rem: obs2 teo nuevo} Assume the hypotheses of Theorem \ref{thm:main1-2} with $\mathcal{C}$ abelian, $\mathcal{X}$ right thick and $\sigma$ a finite set. Then, we can replace condition (b4) in (b) by the following condition: \begin{description} \item [{({$*$})}] $\addx[T]$ is precovering in $T^{\bot}\cap\mathcal{X},$ for each $T\in\mathcal{B}\cap\mathcal{X}\cap{}^{\bot}(\mathcal{B}\cap\mathcal{X}).$ \end{description} Moreover, in such a case, we can choose $\mathcal{T}=\addx[T]$ with $T\in\mathcal{B}\cap\mathcal{X}\cap{}^{\bot}(\mathcal{B}\cap\mathcal{X})$. Indeed, this is a consequence of the last sentence in Theorem \ref{thm:dimensiones en un par de cotorsion tilting}.\end{rem} \begin{cor} \label{cor:biyeccion tiltilng -1}\label{cor: coro1 teo nuevo} Let $\mathcal{C}$ be an Ab4 category with enough injectives, $\mathcal{X}=\mathcal{X}^{\oplus}$ be a right thick class admitting an $\mathcal{X}$-injective relative cogenerator in $\mathcal{X}$, and let $\sigma$ be a set such that $\Addx[\sigma]$ is an $\mathcal{X}$-projective relative generator in $\mathcal{X}$. Consider the following classes: \begin{description} \item [{$\mathcal{T}_{n,\mathcal{X}}$}] consisting of all the objects $T\in\mathcal{X}$ that are big $n$-$\mathcal{X}$-tilting; \item [{$\mathcal{TP}_{n,\mathcal{X}}$}] consisting of all the right $\mathcal{X}$-complete and left cotorsion pairs $\p$ such that $\pdr[\mathcal{X}][^{\bot}(\mathcal{B}\cap\mathcal{X})]\leq n$, $^{\bot}(\mathcal{B}\cap\mathcal{X})\cap\mathcal{B}\cap\mathcal{X}$ is closed under coproducts and $\mathcal{B}$ is right thick.
\end{description} Consider the equivalence relation $\sim$ in $\mathcal{T}_{n,\mathcal{X}}$, where $T\sim S$ if $\mbox{ \ensuremath{T^{\bot}\cap\mathcal{X}=S^{\bot}\cap\mathcal{X}}}$; and the equivalence relation $\approx$ in $\mathcal{TP}_{n,\mathcal{X}}$, where $\p\approx\p[\mathcal{A}'][\mathcal{B}']$ if $\mathcal{B}\cap\mathcal{X}=\mathcal{B}'\cap\mathcal{X}\mbox{.}$ Then, there is a bijective map $$\phi:\mathcal{T}_{n,\mathcal{X}}/\!\!\sim\;\;\longrightarrow\;\mathcal{TP}_{n,\mathcal{X}}/\!\!\approx,\quad [T]\mapsto[({}^{\bot}(T^{\bot}),T^{\bot})].$$ \end{cor} \begin{proof} For each $T\in\mathcal{T}_{n,\mathcal{X}},$ we consider the pair $\mathcal{P}_{T}:=(^{\bot}(T^{\bot}),T^{\bot}).$ Let us show that $\mathcal{P}_{T}\in\mathcal{TP}_{n,\mathcal{X}}$. To begin with, it is clear that $T^{\bot}$ is right thick, and by Theorem \ref{thm:el par n-X-tilting} (c), $\mathcal{P}_{T}$ is right $\mathcal{X}$-complete. On the other hand, by \cite[Lemma 3.4]{parte1}, $\mathcal{P}_{T}$ is left cotorsion since $\mathcal{C}$ has enough injectives. Moreover, by Proposition \ref{prop:equiv a t3} and Lemma \ref{lem:props C2 y T2}, $T^{\bot}\cap{}^{\bot}(T^{\bot}\cap\mathcal{X})\cap\mathcal{X}=\Addx[T]\cap\mathcal{X}=\Addx[T]$ and thus $T^{\bot}\cap{}^{\bot}(T^{\bot}\cap\mathcal{X})\cap\mathcal{X}$ is closed under coproducts. Finally, by Proposition \ref{prop:(b)} (b), $\pdr[\mathcal{X}][^{\bot}(T^{\bot}\cap\mathcal{X})]\leq n$. Moreover, for $T,S\in\mathcal{T}_{n,\mathcal{X}}$, we note that $[T]=[S]\:\Leftrightarrow\:[\mathcal{P}_{T}]=[\mathcal{P}_{S}]\mbox{.}$ Therefore, the map $\phi$ is well-defined and injective. It remains to show that $\phi$ is surjective. Let $\p\in\mathcal{TP}_{n,\mathcal{X}}$. In particular, $\p$ satisfies conditions (b0), (b1), (b2) and (b3) of Theorem \ref{thm:main1-2} (b).
Then, by Remark \ref{rem: obs 2.90'-1}, Theorem \ref{thm:main1-2} (a) is satisfied and thus we can find $T\in\mathcal{T}_{n,\mathcal{X}}$ such that $\mathcal{B}\cap\mathcal{X}=T^{\bot}\cap\mathcal{X}$. Therefore, $\p\approx({}^{\bot}(T^{\bot}),T^{\bot})$ and then $\phi([T])=[(\mathcal{A},\mathcal{B})].$ \end{proof} A result similar to the above can be proved for small $n$-$\mathcal{X}$-tilting objects. \begin{cor}\label{cor: coro2 teo nuevo} Let $\mathcal{C}$ be an abelian category with enough injectives, $\mathcal{X}\subseteq\mathcal{C}$ be a right thick class admitting an $\mathcal{X}$-injective relative cogenerator in $\mathcal{X}$, and let $\sigma$ be a finite set such that $\addx[\sigma]$ is an $\mathcal{X}$-projective relative generator in $\mathcal{X}.$ Consider the following classes: \begin{description} \item [{$s\mathcal{T}_{n,\mathcal{X}}$}] consisting of all the objects $T\in\mathcal{X}$ that are small $n$-$\mathcal{X}$-tilting; \item [{$s\mathcal{TP}_{n,\mathcal{X}}$}] consisting of all the right $\mathcal{X}$-complete and left cotorsion pairs $\p$ such that $\pdr[\mathcal{X}][^{\bot}(\mathcal{B}\cap\mathcal{X})]\leq n$ and $\mathcal{B}$ is right thick. \end{description} Consider the equivalence relation $\sim$ in $s\mathcal{T}_{n,\mathcal{X}}$, where $T\sim S$ if $\mbox{ \ensuremath{T^{\bot}\cap\mathcal{X}=S^{\bot}\cap\mathcal{X}}}$; and the equivalence relation $\approx$ in $s\mathcal{TP}_{n,\mathcal{X}}$, where $\p\approx\p[\mathcal{A}'][\mathcal{B}']$ if $\mathcal{B}\cap\mathcal{X}=\mathcal{B}'\cap\mathcal{X}\mbox{.}$ Then, there is an injective map \[ \phi:s\mathcal{T}_{n,\mathcal{X}}/\!\!\sim\;\;\longrightarrow\; s\mathcal{TP}{}_{n,\mathcal{X}}/\!\!\approx,\quad[T]\mapsto[({}^{\bot}(T^{\bot}),T^{\bot})].
\] Furthermore, $\phi$ is bijective if every $T\in$ $\mathcal{B}\cap\mathcal{X}\cap{}^{\bot}(\mathcal{B}\cap\mathcal{X})$ satisfies that $\addx[T]$ is precovering in $T^{\bot}\cap\mathcal{X}.$ \end{cor} \begin{proof} Using Theorem \ref{thm:main1-2} and Remark \ref{rem:obs 2.90''-1}, the proof follows by similar arguments as in the proof of Corollary \ref{cor:biyeccion tiltilng -1}. \end{proof} In the case of a ring $R,$ we can get the following result. \begin{cor}\label{cor:coro3 teo nuevo} Let $R$ be a ring, $\mathcal{X}=\mathcal{X}^{\oplus}$ be a right thick class in $\Modx[R]$ admitting an $\mathcal{X}$-injective relative cogenerator in $\mathcal{X}$, and let $\sigma$ be a set such that $\Addx[\sigma]$ is an $\mathcal{X}$-projective relative generator in $\mathcal{X}$. Then, for every $\mathcal{B}\subseteq\Modx[R]$, the following conditions are equivalent. \begin{itemize} \item[$\mathrm{(a)}$] There is a big $n$-$\mathcal{X}$-tilting object $T\in\mathcal{X}$ such that $\mathcal{B}\cap\mathcal{X}=T^{\bot}\cap\mathcal{X}$. \item[$\mathrm{(b)}$] $\mathcal{B}$ satisfies the following conditions: \begin{itemize} \item[$\mathrm{(b0)}$] $\mathcal{B}\cap\mathcal{X}\cap{}^{\bot}(\mathcal{B}\cap\mathcal{X})$ is closed under coproducts; \item[$\mathrm{(b1)}$] $\mathcal{B}$ is special preenveloping in $\mathcal{X}$; \item[$\mathrm{(b2)}$] $\pdr[\mathcal{X}][^{\bot}(\mathcal{B}\cap\mathcal{X})]\leq n$; \item[$\mathrm{(b3)}$] $\mathcal{B}$ is right thick in $\Modx[R]$. \end{itemize} \end{itemize} \end{cor} \begin{proof} It follows from Corollary \ref{cor: coro1 teo nuevo}. \end{proof} In the case of an Artin algebra $\Lambda,$ we can get the following result. 
\begin{cor} \label{cor:coro4 teo nuevo} Let $\Lambda$ be an Artin algebra, $\mathcal{X}$ be a right thick class in $\modd[\Lambda]$ admitting an $\mathcal{X}$-injective relative cogenerator in $\mathcal{X}$, and let $\sigma$ be a set such that $\addx[\sigma]$ is an $\mathcal{X}$-projective relative generator in $\mathcal{X}$. Then, for any $\mathcal{B}\subseteq\modd[\Lambda]$, the following conditions are equivalent. \begin{itemize} \item[$\mathrm{(a)}$] There is a small $n$-$\mathcal{X}$-tilting object $T\in\mathcal{X}$ such that $\mathcal{B}\cap\mathcal{X}=T^{\bot}\cap\mathcal{X}$. \item[$\mathrm{(b)}$] $\mathcal{B}$ satisfies the following conditions: \begin{itemize} \item[$\mathrm{(b1)}$] $\mathcal{B}$ is special preenveloping in $\mathcal{X}$; \item[$\mathrm{(b2)}$] $\pdr[\mathcal{X}][\operatorname{mod}(\Lambda)\cap{}{}^{\bot}(\mathcal{B}\cap\mathcal{X})]\leq n$; \item[$\mathrm{(b3)}$] $\mathcal{B}$ is right thick in $\modd[\Lambda]$. \end{itemize} \end{itemize} \end{cor} \begin{proof} It follows from Corollary \ref{cor: coro2 teo nuevo} and \cite[Proposition 4.2]{auslander1980preprojective}. \end{proof} \section{\label{sec:Tilting sat}Tilting for thick classes} In the previous sections, we developed an $\mathcal{X}$-tilting theory imposing as few conditions on $\mathcal{X}$ as possible. The goal of this section is to deepen this theory in the special case where $\mathcal{X}$ is a thick class. To start with, we state the following three lemmas, whose proofs are straightforward and are left to the reader. \begin{lem}\label{rem:obssatur}\label{rem:obssatur2}\label{rem:obssatur-1}\label{rem:obssatur2-1} For an abelian category $\mathcal{C},$ $\mathcal{X}\subseteq\mathcal{C}$ closed under mono-cokernels, and $\mathcal{Y} \subseteq \mathcal{X},$ the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] If $\mathcal{X}$ is closed under epi-kernels, then $\Gennr[\mathcal{Y}]=\Genn[\mathcal{Y}]\cap\mathcal{X}$ $\forall n\geq1$.
\item[$\mathrm{(b)}$] $\coresdimx{\mathcal{Y}}X=\coresdimr{\mathcal{Y}}X{\mathcal{X}}$ $\forall X\in\mathcal{X}$. \end{itemize} \end{lem} \begin{lem}\label{rem:obssatur-proy}\label{rem:obssatur-proy-1} Let $\mathcal{C}$ be an Ab4 (abelian) category, $\mathcal{R}\subseteq\Projx[\mathcal{C}]$, and let $\mathcal{X}\subseteq\mathcal{C}$ be a thick class such that $\Addx[\mathcal{R}]\subseteq\mathcal{X}\subseteq\operatorname{Gen}_{1}(\mathcal{R})$ ($\addx[\mathcal{R}]\subseteq\mathcal{X}\subseteq\operatorname{gen}_{1}(\mathcal{R})$). Then, the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] $\Addx[\mathcal{R}]$ ($\addx[\mathcal{R}]$) is an $\mathcal{X}$-projective relative generator in $\mathcal{X}$. \item[$\mathrm{(b)}$] The condition $\mathrm{(T3'')\; ((t3''))}$ in Definition \ref{def:condiciones T3} is satisfied if $\mathcal{R}\subseteq\mathcal{T}^{\vee}$. \end{itemize} \end{lem} \begin{lem}\label{rem:obssatur-iny}\label{rem:obssatur-iny-1} Let $\mathcal{C}$ be an abelian category with enough injectives, and let $\mathcal{X}\subseteq\mathcal{C}$ be a class closed under mono-cokernels and such that $\Injx[\mathcal{C}]\subseteq\mathcal{X}.$ Then, the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] $\Injx[\mathcal{C}]$ is an $\mathcal{X}$-injective relative cogenerator in $\mathcal{X}$. \item[$\mathrm{(b)}$] The condition $\mathrm{(T4)},$ with respect to $\mathcal{X}$ and any $\mathcal{T}\subseteq\mathcal{C},$ given in Definition \ref{def:X-tilting} is satisfied. \end{itemize} \end{lem} Inspired by the above lemmas, we can give an alternative definition of $n$-$\mathcal{X}$-tilting object in the case where $\mathcal{X}$ is thick. \begin{defn}\label{Sat-Tilting} Let $\mathcal{C}$ be an Ab3 (abelian) category, $\mathcal{X}\subseteq\mathcal{C}$ and $\mathcal{R}\subseteq\Projx[\mathcal{C}]$.
A class $\mathcal{T}\subseteq\mathcal{C}$ is called\textbf{ big (small) $\mathcal{R}$-saturated $n$-$\mathcal{X}$-tilting} if the following conditions hold true. \begin{description} \item [{(ST0)}] $\mathcal{T}=\Addx[\mathcal{T}]\subseteq\mathcal{X}=\mathcal{X}^{\oplus}$ ($\mathcal{T}=\addx[\mathcal{T}]\subseteq\mathcal{X}$) and $\mathcal{X}$ is a thick class such that $\mathcal{R}\subseteq\mathcal{X}.$ \item [{(ST1)}] $\pdr[\mathcal{X}][\mathcal{T}]\leq n.$ \item [{(ST2)}] $\mathcal{T}\subseteq\mathcal{T}^{\bot}.$ \item [{(ST3)}] $\mathcal{X}\subseteq\operatorname{Gen}_{1}(\mathcal{R})$ ($\mathcal{X}\subseteq\operatorname{gen}_{1}(\mathcal{R})$) and $\mathcal{R}\subseteq\mathcal{T}^{\vee}.$ \item [{(ST4)}] There is an $\mathcal{X}$-injective relative cogenerator in $\mathcal{X}.$ \item [{(ST5)}] $\mathcal{T}$ is precovering in $\mathcal{T}^{\bot}\cap\mathcal{X}$. \end{description} An object $T\in\mathcal{C}$ is called \textbf{big (small) $\mathcal{R}$-saturated $n$-$\mathcal{X}$-tilting }if $\Addx[T]$ ($\addx[T]$) is big (small) $\mathcal{R}$-saturated $n$-$\mathcal{X}$-tilting.\end{defn} \begin{prop}\label{prop:satsat}\label{prop:satsat-1} For an Ab4 (abelian) category $\mathcal{C},$ $\mathcal{X}\subseteq\mathcal{C},$ $\mathcal{R}\subseteq\Projx[\mathcal{C}]$ and $\mathcal{T}\subseteq\mathcal{C},$ the following conditions are equivalent. \begin{itemize} \item[$\mathrm{(a)}$] $\mathcal{T}$ is a big (small) $\mathcal{R}$-saturated $n$-$\mathcal{X}$-tilting class. \item[$\mathrm{(b)}$] $\mathcal{T}$ is a big (small) $n$-$\mathcal{X}$-tilting class, $\mathcal{T}\subseteq\mathcal{X}=\mathcal{X}^{\oplus}$ ($\mathcal{T}\subseteq\mathcal{X}$) and $\mathcal{X}$ is a thick class such that $\Addx[\mathcal{R}]$ ($\addx[\mathcal{R}]$) is a relative generator in $\mathcal{X}$. 
\end{itemize} Moreover, if one of the above conditions holds true, then $\mathcal{T}=\mathcal{T}^\perp\cap{}^\perp(\mathcal{T}^\perp\cap\mathcal{X})\cap\mathcal{X}.$ \end{prop} \begin{proof} Note, firstly, that (ST0) holds true in (b); and secondly, (ST1) and (ST4) coincide with (T1) and (T4), respectively. Moreover, by assuming (ST0), it is clear that (ST2) and (ST5) coincide with (T2) and (T5), respectively, since $\mathcal{T}\subseteq \mathcal{X}.$ Therefore, in order to prove the equivalence between (a) and (b), it is enough (by assuming the other conditions) to prove the equivalence between (T3) and (ST3). \ (ST3) $\Rightarrow$ (T3): By Lemma \ref{rem:obssatur-proy}, we have that (ST3) implies that (T3'') ((t3'')) holds true, and then by Theorem \ref{rem:T3 simple}, we get (T3). \ (T3) $\Rightarrow$ (ST3): By Theorem \ref{thm:el par n-X-tilting} (a), we get $\mathcal{R}\subseteq {}^\perp(\mathcal{T}^\perp)\cap\mathcal{X}=\mathcal{T}^\vee_\mathcal{X}\cap\mathcal{X}\subseteq \mathcal{T}^\vee.$ Moreover $\mathcal{X}\subseteq\operatorname{Gen}_{1}(\mathcal{R})$ ($\mathcal{X}\subseteq\operatorname{gen}_{1}(\mathcal{R})$) since $\mathrm{Add}(\mathcal{R})$ ($\mathrm{add}(\mathcal{R})$) is a relative generator in $\mathcal{X}.$ \ Finally, if one of the above equivalent conditions holds true, the desired equality follows from Proposition \ref{prop:(a)} (d,a). \end{proof} As a consequence of Theorem \ref{prop:primera generalizacion}, we can have the following result. 
\begin{cor}\label{thm:peq4}\label{prop:caracterizacion tilting saturado}\label{prop:caract tilting peque=0000F1o saturado} Let $\mathcal{C}$ be an Ab4 (abelian) category, $\mathcal{R}\subseteq\Projx[\mathcal{C}]$, $n\geq1$ and $\Addx[\mathcal{T}]=\mathcal{T}\subseteq\mathcal{X}=\mathcal{X}^{\oplus}$ ($\addx[\mathcal{T}]=\mathcal{T}\subseteq\mathcal{X}$), where $\mathcal{X}$ is a thick class in $\mathcal{C}$ such that $\mathcal{R}\subseteq\mathcal{X}$ and $\mathcal{X}\subseteq\operatorname{Gen}_{1}(\mathcal{R})$ ($\mathcal{X}\subseteq\operatorname{gen}_{1}(\mathcal{R})$). If $\mathcal{T}$ satisfies $\mathrm{(ST4)}$ and $\mathrm{(ST5)}$, then the following statements are equivalent: \begin{itemize} \item[$\mathrm{(a)}$] $\mathcal{T}$ is big (small) $\mathcal{R}$-saturated $n$-$\mathcal{X}$-tilting. \item[$\mathrm{(b)}$] $\Genn[\mathcal{T}]\cap\mathcal{X}=\mathcal{T}^{\bot}\cap\mathcal{X}.$ \item[$\mathrm{(c)}$] $\Genn[\mathcal{T}][k]\cap\mathcal{X}=\mathcal{T}^{\bot}\cap\mathcal{X}$ $\forall k\geq n.$ \item[$\mathrm{(d)}$] $\mathcal{T}^{\bot}\cap\mathcal{X}$ is closed by $n$-quotients in $\mathcal{X}$ and $\mathcal{T}\subseteq\mathcal{T}^{\bot}\cap\mathcal{X}\subseteq\Gennr[\mathcal{T}][1][\,]$. \end{itemize} \end{cor} \begin{proof} It follows from Lemma \ref{rem:obssatur} (a), Proposition \ref{prop:satsat} and Theorem \ref{prop:primera generalizacion}. \end{proof} In the following lines we will give a generalization of the main results in \cite{Tiltinginfinitamentegenerado}. \begin{thm}\label{thm:main1} Let $\mathcal{C}$ be an Ab4 (abelian) category, $\mathcal{R}\subseteq\Projx[\mathcal{C}]$, $\mathcal{X}=\mathcal{X}^{\oplus}\subseteq\mathcal{C}$ ($\mathcal{X}\subseteq\mathcal{C}$) be a thick class such that $\mathcal{R}\subseteq\mathcal{X}\subseteq\operatorname{Gen}_{1}(\mathcal{R})$ ($\mathcal{R}\subseteq\mathcal{X}\subseteq\operatorname{gen}_{1}(\mathcal{R})$), and let $\mathcal{B}\subseteq\mathcal{C}$ be a right thick class. 
Then, the following statements are equivalent. \begin{itemize} \item[$\mathrm{(a)}$] There is a big (small) $\mathcal{R}$-saturated $n$-$\mathcal{X}$-tilting class $\mathcal{T}$ such that $\mathcal{B}\cap\mathcal{X}=\mathcal{T}^{\bot}\cap\mathcal{X}$. \item[$\mathrm{(b)}$] The class $\mathcal{B}$ satisfies the following conditions: \begin{itemize} \item [$\mathrm{(b0)}$] $\mathcal{B}\cap\mathcal{X}\cap{}^{\bot}(\mathcal{B}\cap\mathcal{X})$ is closed under coproducts (this condition is dismissed in the small case); \item [$\mathrm{(b1)}$] there is an $\mathcal{X}$-injective relative cogenerator in $\mathcal{X}$; \item [$\mathrm{(b2)}$] $\mathcal{B}\cap\mathcal{X}$ is special preenveloping in $\mathcal{X}$; \item [$\mathrm{(b3)}$] $\pdr[\mathcal{X}][{}{}^{\bot}(\mathcal{B}\cap\mathcal{X})]\leq n$; \item [$\mathrm{(b4)}$] $\mathcal{B}\cap\mathcal{X}\cap{}^{\bot}(\mathcal{B}\cap\mathcal{X})$ is precovering in $\mathcal{B}\cap\mathcal{X}$. \end{itemize} \end{itemize} \end{thm} \begin{proof} It follows from Proposition \ref{prop:satsat} and Theorem \ref{thm:main1-2}. \end{proof} Arguing as in Remark \ref{rem: obs 2.90'-1} and Remark \ref{rem:obs 2.90''-1}, and using Proposition \ref{prop:satsat}, we can make the following remarks. \begin{rem}\label{rem: obs 2.90'} Assume the hypotheses of Theorem \ref{thm:main1} with $\mathcal{C}$ Ab4, $\mathcal{X}=\mathcal{X}^{\oplus}$ a thick class in $\mathcal{C}$ such that $\mathcal{R}\subseteq\mathcal{X}\subseteq\operatorname{Gen}_{1}(\mathcal{R})$, and let $\sigma$ be a set such that $\Addx[\mathcal{R}]=\Addx[\sigma]$. Then, we can dismiss condition (b4) from (b). Moreover, in this case, we can choose $\mathcal{T}\subseteq\mathcal{C}$ in (a) such that $\mathcal{T}=\Addx[T]$ with $T\in\mathcal{B}\cap\mathcal{X}\cap{}^{\bot}(\mathcal{B}\cap\mathcal{X})$.
\end{rem} \begin{rem}\label{rem:obs 2.90''}\label{thm:main1-1} Assume the hypotheses of Theorem \ref{thm:main1} with $\mathcal{C}$ abelian, $\mathcal{X}$ a thick class in $\mathcal{C}$ such that $\mathcal{R}\subseteq\mathcal{X}\subseteq\operatorname{gen}_{1}(\mathcal{R})$, and let $\sigma$ be a finite set such that $\addx[\mathcal{R}]=\addx[\sigma]$. Then, we can replace (b4) by the following condition: \begin{description} \item [{({$*$})}] $\addx[T]$ is precovering in $T^{\bot}\cap\mathcal{X},$ for every $T\in\mathcal{B}\cap\mathcal{X}\cap{}^{\bot}(\mathcal{B}\cap\mathcal{X}).$ \end{description} Moreover, in this case, we can choose $\mathcal{T}\subseteq\mathcal{C}$ in (a) such that $\mathcal{T}=\addx[T]$ with $T\in\mathcal{B}\cap\mathcal{X}\cap{}^{\bot}(\mathcal{B}\cap\mathcal{X})$.\end{rem} \begin{thm}\label{thm:peq3}\label{cor:biyeccion tiltilng } Let $\mathcal{C}$ be an Ab4 category with enough injectives, $\mathcal{R}\subseteq\Projx[\mathcal{C}]$, $\sigma$ be a set such that $\Addx[\mathcal{R}]=\Addx[\sigma]$, and let $\mathcal{X}=\mathcal{X}^{\oplus}$ be a thick class in $\mathcal{C}$ admitting an $\mathcal{X}$-injective relative cogenerator in $\mathcal{X}$ such that $\mathcal{R}\subseteq\mathcal{X}\subseteq\operatorname{Gen}_{1}(\mathcal{R})$. Consider the classes: \begin{description} \item [{$\mathcal{ST}_{\! n,\mathcal{X}}$}] consisting of all the $T\in\mathcal{X}$ that are big $\mathcal{R}$-saturated $n$-$\mathcal{X}$-tilting objects; \item [{$\mathcal{TP}_{n,\mathcal{X}}$}] consisting of all the right $\mathcal{X}$-complete left cotorsion pairs $\p$ with $\mathcal{B}$ right thick, $\pdr[\mathcal{X}][^{\bot}(\mathcal{B}\cap\mathcal{X})]\leq n$ and $^{\bot}(\mathcal{B}\cap\mathcal{X})\cap\mathcal{B}\cap\mathcal{X}$ closed under coproducts. \end{description} Consider the equivalence relation $\sim$ in $\mathcal{ST}_{\!
n,\mathcal{X}},$ where $T\sim S$ if $\mbox{ \ensuremath{T^{\bot}\cap\mathcal{X}=S^{\bot}\cap\mathcal{X}}}$; and the equivalence relation $\approx$ in $\mathcal{TP}_{ n,\mathcal{X}},$ where $\p\approx\p[\mathcal{A}'][\mathcal{B}']$ if $\mathcal{B}\cap\mathcal{X}=\mathcal{B}'\cap\mathcal{X}\mbox{.}$ Then, there is a bijective map $$\phi:\mathcal{ST}_{\! n,\mathcal{X}}/\!\!\sim\;\longrightarrow \mathcal{TP}_{n,\mathcal{X}}/\!\!\approx,\quad [T]\mapsto[({}^{\bot}(T^{\bot}),T^{\bot})].$$ \end{thm} \begin{proof} It follows from Corollary \ref{cor: coro1 teo nuevo} and Proposition \ref{prop:satsat}.\end{proof} \begin{cor}\label{cor:biyeccion tiltilng -2} Let $\mathcal{C}$ be an abelian category with enough injectives, $\mathcal{R}\subseteq\Projx[\mathcal{C}]$, $\sigma$ be a finite set such that $\addx[\mathcal{R}]=\addx[\sigma]$, and let $\mathcal{X}\subseteq\mathcal{C}$ be a thick class admitting an $\mathcal{X}$-injective relative cogenerator in $\mathcal{X}$ such that $\mathcal{R}\subseteq\mathcal{X}\subseteq\operatorname{gen}_{1}(\mathcal{R})$. Consider the classes: \begin{description} \item [{$s\mathcal{ST}_{\! n,\mathcal{X}}$}] consisting of all the $T\in\mathcal{X}$ that are small $\mathcal{R}$-saturated $n$-$\mathcal{X}$-tilting objects; \item [{$s\mathcal{TP}_{n,\mathcal{X}}$}] consisting of all the right $\mathcal{X}$-complete left cotorsion pairs $\p$ with $\mathcal{B}$ right thick and $\pdr[\mathcal{X}][^{\bot}(\mathcal{B}\cap\mathcal{X})]\leq n$. \end{description} Consider the equivalence relation $\sim$ in $s\mathcal{ST}_{\! n,\mathcal{X}},$ where $T\sim S$ if $\mbox{ \ensuremath{T^{\bot}\cap\mathcal{X}=S^{\bot}\cap\mathcal{X}}}$; and the equivalence relation $\approx$ in $s\mathcal{TP}_{ n,\mathcal{X}},$ where $\p\approx\p[\mathcal{A}'][\mathcal{B}']$ if $\mathcal{B}\cap\mathcal{X}=\mathcal{B}'\cap\mathcal{X}.$ Then, there is an injective map $$\phi:s\mathcal{ST}_{\! 
n,\mathcal{X}}/\!\!\sim\;\longrightarrow s\mathcal{TP}_{n,\mathcal{X}}/\!\!\approx,\quad[T]\mapsto[({}^{\bot}(T^{\bot}),T^{\bot})].$$ Furthermore, $\phi$ is bijective if $\addx[T]$ is precovering in $T^{\bot}\cap\mathcal{X},$ for any $T$ in $\mathcal{B}\cap\mathcal{X}\cap{}^{\bot}(\mathcal{B}\cap\mathcal{X}).$ \end{cor} \begin{proof} It follows from Corollary \ref{cor: coro2 teo nuevo} and Proposition \ref{prop:satsat}. \end{proof} The following result is a generalization of \cite[Theorem 2.1]{Tiltingpreenvelopes} and \cite[Corollary 4.3]{Tiltinginfinitamentegenerado}. \begin{cor}\label{thm:peq2}\label{cor:pretorsion tilting} Let $\mathcal{C}$ be an Ab4 category, $\mathcal{R}\subseteq\Projx[\mathcal{C}]$, $\sigma$ be a set such that $\Addx[\mathcal{R}]=\Addx[\sigma]$, $\mathcal{X}=\mathcal{X}^{\oplus}\subseteq\mathcal{C}$ be a thick class that admits an $\mathcal{X}$-injective relative cogenerator in $\mathcal{X}$ and such that $\mathcal{R}\subseteq\mathcal{X}\subseteq\operatorname{Gen}_{1}(\mathcal{R})$, and let $\mathcal{B}\subseteq\mathcal{C}$ be a class closed under extensions, quotients and coproducts. Then, the following statements are equivalent: \begin{itemize} \item[$\mathrm{(a)}$] There exists a big $\mathcal{R}$-saturated $1$-$\mathcal{X}$-tilting object $T\in\mathcal{X}$ such that $\mathcal{B}\cap\mathcal{X}=T^{\bot}\cap\mathcal{X}.$ \item[$\mathrm{(b)}$] $\mathcal{B}\cap\mathcal{X}$ is a special preenveloping class in $\mathcal{X}$. \end{itemize} Furthermore, if $\mathrm{(a)}$ or $\mathrm{(b)}$ holds true, then $\pdr[\mathcal{X}][^{\bot}\left(\mathcal{B}\cap\mathcal{X}\right)]\leq1$. \end{cor} \begin{proof} (a) $\Rightarrow$ (b) It follows from Theorem \ref{thm:main1}. \ (b) $\Rightarrow$ (a) Note that, by Theorem \ref{thm:main1} and Remark \ref{rem: obs 2.90'}, it is enough to show that $\pdr[\mathcal{X}][^{\bot}\left(\mathcal{B}\cap\mathcal{X}\right)]\leq1\mbox{.}$ Let $Y\in{}^{\bot}\left(\mathcal{B}\cap\mathcal{X}\right)$.
Since $\mathcal{B}\cap\mathcal{X}$ is special preenveloping in $\mathcal{X}$, for every $A\in\mathcal{X}$, there is an exact sequence $\suc[A][B][C]$ where $B\in\mathcal{B}\cap\mathcal{X}$ and $C\in{}{}^{\bot_{1}}\left(\mathcal{B}\cap\mathcal{X}\right)\cap\mathcal{X}={}^{\bot}\left(\mathcal{B}\cap\mathcal{X}\right)\cap\mathcal{X}$ (see \cite[Lemma 3.4]{parte1}). Furthermore, $C\in\mathcal{B}\cap\mathcal{X}$ since $\mathcal{B}$ is closed under quotients. Therefore, for every $i>0$, we have the exact sequence $\Extx[i][][Y][C]\rightarrow\Extx[i+1][][Y][A]\rightarrow\Extx[i+1][][Y][B]\mbox{,}$ where $\Extx[i][][Y][C]=0=\Extx[i+1][][Y][B]$. Hence $\Extx[i+1][][Y][A]=0,$ for every $A\in\mathcal{X}$ and $i>0,$ and thus $\pdr[\mathcal{X}][Y]\leq1.$ \end{proof} \begin{cor} Let $\mathcal{C}$ be an abelian category, $\mathcal{R}\subseteq\Projx[\mathcal{C}]$, $\sigma$ be a finite set such that $\addx[\mathcal{R}]=\addx[\sigma]$, $\mathcal{X}\subseteq\mathcal{C}$ be a thick class admitting an $\mathcal{X}$-injective relative cogenerator in $\mathcal{X}$ such that $\mathcal{R}\subseteq\mathcal{X}\subseteq\operatorname{gen}_{1}(\mathcal{R})$, and let $\mathcal{B}\subseteq\mathcal{C}$ be closed under extensions and quotients. If every $T\in\mathcal{B}\cap\mathcal{X}\cap{}^{\bot}(\mathcal{B}\cap\mathcal{X})$ satisfies that $\addx[T]$ is precovering in $T^{\bot}\cap\mathcal{X}$, then the following statements are equivalent: \begin{itemize} \item[$\mathrm{(a)}$] There is a small $\mathcal{R}$-saturated $1$-$\mathcal{X}$-tilting object $T\in\mathcal{X}$ such that $\mathcal{B}\cap\mathcal{X}=T^{\bot}\cap\mathcal{X}.$ \item[$\mathrm{(b)}$] $\mathcal{B}\cap\mathcal{X}$ is special preenveloping in $\mathcal{X}$. \end{itemize} Furthermore, if $\mathrm{(a)}$ or $\mathrm{(b)}$ holds true, then $\pdr[\mathcal{X}][{}{}^{\bot}\left(\mathcal{B}\cap\mathcal{X}\right)]\leq1$. \end{cor} \begin{proof} By using Remark \ref{rem:obs 2.90''}, the proof follows as in Corollary \ref{cor:pretorsion tilting}.
\end{proof} \section{\label{sec:Examples}Examples and applications} \subsection{$\infty$-tilting objects and pairs} Leonid Positselski and Jan {\v{S}}t'ov{\'\i}{\v{c}}ek defined in \cite{positselski2019tilting} the notion of $\infty$-tilting object and $\infty$-tilting pair. In this section, we recall these notions and give an interpretation in terms of $n$-$\mathcal{X}$-tilting theory. We also recall that an Ab3{*} category, having an injective cogenerator, is Ab3 \cite[Exercise III.2]{mitchell}. \begin{defn}\cite[Section 2]{positselski2019tilting} Let $\mathcal{A}$ be an Ab3{*} category which has an injective cogenerator. An object $T\in\mathcal{A}$ is \textbf{$\infty$-tilting} if the following conditions hold true: \begin{description} \item [{($\infty$-T1)}] $\mathrm{Add}(T)\subseteq T^{\bot}.$ \item [{($\infty$-T2)}] $\mathrm{Inj}(\mathcal{A})\subseteq(\mathrm{Add}(T),T^{\bot})_{\infty}^{\wedge}$. \end{description} \end{defn} \begin{defn}\cite[Section 3]{positselski2019tilting} Let $\mathcal{A}$ be an Ab3{*} category having an injective cogenerator, $T\in\mathcal{A}$ and $\mathcal{E}\subseteq\mathcal{A}.$ The pair $(T,\mathcal{E})$ is \textbf{$\infty$-tilting} if the following conditions hold true: \begin{description} \item [{($\infty$-PT1)}] The class $\mathcal{E}$ is coresolving. \item [{($\infty$-PT2)}] $\mathrm{Add}(T)\subseteq\mathcal{E}\subseteq T^{\bot_{1}}.$ \item [{($\infty$-PT3)}] Any $\mathrm{Add}(T)$-precover $\alpha:T'\rightarrow E$ of $E\in\mathcal{E}$ is an epimorphism and $\mathrm{Ker}(\alpha)\in\mathcal{E}$.
\end{description} \end{defn} \begin{rem}\cite[Section 3]{positselski2019tilting} \label{rem:infty tilting} For an $\infty$-tilting pair $(T,\mathcal{E})$ in an Ab3{*} category $\mathcal{A}$, which has an injective cogenerator $J$, the following statements hold true: \begin{itemize} \item[$\mathrm{(a)}$] $\mathrm{Prod}(J)=\mathrm{Inj}(\mathcal{A})\subseteq\mathcal{E}\subseteq T^{\bot}.$ \item[$\mathrm{(b)}$] $\mathrm{Add}(T)\subseteq T^{\bot}.$ \item[$\mathrm{(c)}$] $\mathrm{Add}(T)$ is a relative generator in $\mathcal{E}.$ Moreover, if $\mathcal{A}$ is Ab4, then $\mathrm{Add}(T)$ is $\mathcal{E}$-projective. \item[$\mathrm{(d)}$] $\mathrm{Prod}(J)$ is an $\mathcal{E}$-injective relative cogenerator in $\mathcal{E}$. \end{itemize} \end{rem} The connection between $\infty$-tilting objects and pairs is as follows. \begin{lem}\cite[Lemma 3.1]{positselski2019tilting} For a bicomplete abelian category $\mathcal{A},$ which has an injective cogenerator, and $T\in\mathcal{A},$ the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] There exists a class $\mathcal{E}\subseteq\mathcal{A}$ such that $(T,\mathcal{E})$ is an $\infty$-tilting pair if, and only if, $T$ is an $\infty$-tilting object. \item[$\mathrm{(b)}$] If $T$ is an $\infty$-tilting object, then $(T,(\mathrm{Add}(T),T^{\bot})_{\infty}^{\wedge})$ is an $\infty$-tilting pair. \item[$\mathrm{(c)}$] If $(T,\mathcal{E})$ is an $\infty$-tilting pair, then $\mathcal{E}\subseteq(\mathrm{Add}(T),T^{\bot})_{\infty}^{\wedge}.$ \end{itemize} \end{lem} In what follows, we show that the $\infty$-tilting pairs are contained in the $n$-$\mathcal{X}$-tilting theory. \begin{prop} Let $\mathcal{A}$ be an Ab3{*} category having an injective cogenerator, and let $(T,\mathcal{E})$ be an $\infty$-tilting pair. Then $T$ is a big $0$-$\mathcal{E}$-tilting object such that $\mathrm{Add}(T)=\mathcal{E}\cap{}^{\bot}\mathcal{E}$ and $T^{\bot}\cap\mathcal{E}=\operatorname{Gen}_{1}^{\mathcal{E}}(T)$. 
\end{prop} \begin{proof} Let $J\in\mathcal{A}$ be an injective cogenerator. By \cite[Chapter 3, Corollary 2.9, p.73]{Popescu}, we get that $\mathcal{A}$ is Ab4. By taking into account Remark \ref{rem:infty tilting}, we can show the following: \begin{description} \item [{(T1)}] $\pdr[\operatorname{Add}(T)][\mathcal{E}]=\pdr[T][\mathcal{E}]=0$ since $\mathcal{E}\subseteq T^{\bot}.$ \item [{(T2)}] $\mathrm{Add}(T)\cap\mathcal{E}\subseteq T^{\bot}=\mathrm{Add}(T)^{\bot}$ since $\mathrm{Add}(T)\subseteq\mathcal{E}.$ \item [{(T3)}] We know that $\mathrm{Add}(T)$ is an $\mathcal{E}$-projective relative generator in $\mathcal{E}$ and thus $\mathrm{Add}(T)\subseteq(\mathrm{Add}(T))_{\mathcal{E}}^{\vee}$. \item [{(T4)}] $\mbox{Prod}(J)=\mathrm{Inj}(\mathcal{A})$ is an $\mathcal{E}$-injective relative cogenerator in $\mathcal{E}$ and $T^{\bot}=\left(\mathrm{Add}(T)\right)^{\bot}$. \item [{(T5)}] Note firstly that $\mathrm{Add}(T)$ is precovering since $\mathcal{A}$ is Ab3. Using now that $\mathrm{Add}(T)\subseteq\mathcal{E},$ any $X\in T^{\bot}\cap\mathcal{E}$ has an $\mathrm{Add}(T)$-precover $A\rightarrow X$, with $A\in\mathcal{E}$. \end{description} Finally, it is enough to use Corollary \ref{cor:coronuevo pag 55} and Theorem \ref{thm:el par n-X-tilting} to conclude that $\mathrm{Add}(T)=\mathcal{E}\cap{}^{\bot}\mathcal{E}$ and $T^{\bot}\cap\mathcal{E}=\operatorname{Gen}_{1}^{\mathcal{E}}(T)\cap\mathcal{E}=\operatorname{Gen}_{1}^{\mathcal{E}}(T)$ since $\mathcal{E}$ is closed under mono-cokernels. \end{proof} \subsection{Miyashita tilting modules} In this section we review the tilting theory developed by Yoichi Miyashita in \cite{miyashita}. Recall that, for a ring $R$, $\projx[R]$ (resp. $\mathrm{inj}(R)$) is the class of finitely generated projective (resp. injective) left $R$-modules. \begin{defn} \label{def:miyashita tilting} Let $R$ be a ring. A left $R$-module $T$ is \textbf{ Miyashita $n$-tilting} if the following conditions hold true. 
\begin{description} \item [{(MT1)}] $T\in\projx[R]_{n}^{\wedge}.$ \item [{(MT2)}] $T\in T^{\bot}.$ \item [{(MT3)}] $R\in\addx[T]_{n}^{\vee}$. \end{description} \end{defn} \begin{lem}\label{lem:lemita pdr en mod} Let $\mathcal{C}$ be an abelian category, $\mathcal{X}\subseteq\mathcal{C}$ and $\omega=\mathrm{add}(\omega)$ be a $\mathcal{C}$-projective relative generator in $\mathcal{X}.$ Then $\pdr[\mathcal{X}][X]=\pdx[X],$ for any $X\in \mathcal{X}.$ \end{lem} \begin{proof} Let $X\in\mathcal{X}$. It is enough to show that $\pdr[\mathcal{X}][X]\geq\pdx[X].$ We can assume that $\pdr[\mathcal{X}][X]=k<\infty$. Since $\omega$ is a $\mathcal{C}$-projective relative generator in $\mathcal{X},$ there exist the exact sequences \begin{gather*} \suc[X_{1}][P_{0}][X]\mbox{,}\\ \suc[X_{2}][P_{1}][X_{1}]\mbox{,}\\ \vdots\\ \suc[X_{k+1}][P_{k}][X_{k}]\mbox{,} \end{gather*} where $P_{i}\in\omega\subseteq{}{}^{\bot}\mathcal{C}\;\forall i\in[0,k]$ and $X_j\in\mathcal{X}\;\forall j\in[1,k+1].$ Then, by the shifting lemma, $ \mathrm{Ext}^1_\mathcal{C}(X_{k},X_{k+1})\simeq\mathrm{Ext}^{k+1}_\mathcal{C}(X,X_{k+1})=0 $ and thus $X_k\in\omega.$ Finally, by the shifting lemma, $$\mathrm{Ext}^{i+k}_\mathcal{C}(X,C)\simeq \mathrm{Ext}^i_\mathcal{C}(X_k,C)=0\quad\forall\,i\geq 1,\;\forall\,C\in\mathcal{C}.$$ Then $\mathrm{pd}(X)\leq k.$ \end{proof} \begin{lem}\label{lem: 2.52'} Let $R$ be a noetherian ring such that $\mathrm{inj}(R)$ is a relative cogenerator in $\modd[R]$. Then, $\projx[R]$ is a $\Modx[R]$-projective relative generator in $\modd$ and $\mathrm{inj}(R)$ is a $\Modx[R]$-injective relative cogenerator in $\modd$. Moreover, $\pdr[{}\modd{}][X]=\pdx[X]$ and $\idr[{}\modd{}][X]=\mathrm{id}(X)$ $\forall X\in\modd$.\end{lem} \begin{proof} It follows from Lemma \ref{lem:lemita pdr en mod} and its dual. \end{proof} \begin{lem}\label{lem:precubiertas en add} Let $R$ be a finitely generated $S$-algebra, where $S$ is a commutative noetherian ring, and let $T\in\modd$. 
Then $\addx[T]$ is precovering in $\modd$. Moreover, any $\addx[T]$-precover is an $\Addx[T]$-precover. \end{lem} \begin{proof} Let $X\in\modd$. It can be shown that $\Homx[R][T][X]\in\mathrm{mod}(S)$ and thus it is generated by a finite set $\{f_{1},\cdots,f_{n}\}.$ Then, it is straightforward to prove that $\alpha:=(f_{1},\cdots,f_{n}):T^{n}\rightarrow X$ is an $\addx[T]$-precover. The second claim follows from Lemma \ref{lem: addT precub es AddT precub}. \end{proof} \begin{prop}\label{prop:tilting miyashita} Let $R$ be a finitely generated $S$-algebra, where $S$ is a commutative noetherian ring, and let $T\in\Modx[R]$. Then, the following conditions are equivalent. \begin{itemize} \item[$\mathrm{(a)}$] $T$ is a Miyashita $n$-tilting module and $\mathrm{inj}(R)$ is a relative cogenerator in $\modd[R].$ \item[$\mathrm{(b)}$] $T$ is a big $n$-$\modd$-tilting object. \item[$\mathrm{(c)}$] $T$ is a small $n$-$\modd$-tilting object. \item[$\mathrm{(d)}$] $T$ is a small $\projx[R]$-saturated $n$-$\modd[R]$-tilting object. \end{itemize} Moreover, if $\mathrm{inj}(R)$ is a relative cogenerator in $\modd[R]$ and $T$ satisfies $\mathrm{(MT1)}$ and $\mathrm{(MT2)},$ then $T$ satisfies $\mathrm{(MT3)}$ if and only if $\mathrm{coresdim}_{\mathrm{add}(T)}({}_RR)<\infty.$ \end{prop} \begin{proof} Observe that $\modd$ is a thick abelian subcategory of $\mathrm{Mod}(R)$ by Corollary \ref{cor:finitamente generado es compacto-1}(b). \ (a) $\Rightarrow$ (b) Let $T$ be a Miyashita $n$-tilting module and $\mathrm{inj}(R)$ be a relative cogenerator in $\modd[R]$. By (MT1) and Lemma \ref{lem: 2.52'}, (T1) holds true. On the other hand, (T4) is satisfied by Lemma \ref{lem: 2.52'}, and (T5) is also true by Lemma \ref{lem:precubiertas en add}. Finally, (T3) follows from (MT3) and Remark \ref{rem:T3 simple}. \ (b) $\Rightarrow$ (a) Let $T$ be an $n$-${}\modd{}$-tilting object. By (T1) and Lemma \ref{lem: 2.52'}, (MT1) is satisfied since $R$ is noetherian. 
The proof of (MT2) is straightforward. Hence, it remains to show (MT3). With that goal, by Theorem \ref{thm:el par n-X-tilting} (c), there is an exact sequence $\suc[R][M_{0}][X_{0}]\mbox{,}$ where $M_{0}\in T^{\bot}\cap{}\modd[R]{}$ and $X_{0}\in{}{}^{\bot}(T^{\bot})\cap{}\modd[R]{}.$ Moreover, using $R\in{}{}^{\bot}(T^{\bot})$, by Lemma \ref{lem:inf5} (b) we get $M_{0}\in{}{}^{\bot}(T^{\bot})\cap T^{\bot}\cap{}\modd[R]{}=\Addx[T]\cap{}\modd[R]{}=\addx[T].$ By repeating the above argument, we can build (inductively) an exact sequence $\suc[R][M_{0}\rightarrow\cdots\rightarrow M_{k}][X_{k}]\mbox{,}$ with $X_{i}\in{}{}^{\bot}(T^{\bot})\cap{}\modd[R]$ and $M_{i}\in\addx[T]$ $\forall i\in[1,k]$. Finally, $X_{n}\in T^{\bot}\cap{}{}^{\bot}(T^{\bot})\cap{}\modd[R]{}=\addx[T]$ by \cite[Proposition 2.7]{parte1}. \ (b) $\Leftrightarrow$ (c) It follows from Corollary \ref{cor:coro2 p80}. \ (d) $\Leftrightarrow$ (c) It follows from Proposition \ref{prop:satsat-1}. \end{proof} \begin{lem}\label{lem: 2.54'} Let $R$ be an Artin algebra, $\mathcal{X}\subseteq\modd[R]$ and $\mathcal{Y}\subseteq\mathrm{Mod}(R)$. Then, the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] $\mathcal{X}_{\modd[R]}^{\vee}=\mathcal{X}^{\vee}=(\modd,\mathcal{X})^{\vee}.$ \item[$\mathrm{(b)}$] $(\mathcal{Y}\cap\modd)^{\vee}=(\modd,\mathcal{Y}\cap\modd)^{\vee}=(\modd,\mathcal{Y})^{\vee}.$ \item[$\mathrm{(c)}$] $\coresdimr{\mathcal{X}}M{{}\modd{}}=\coresdimx{\mathcal{X}}M$ $\forall M\in\Modx[R].$ \item[$\mathrm{(d)}$] $\resdimr{\mathcal{X}}M{{}\modd{}}=\resdimx{\mathcal{X}}M$ $\forall M\in\Modx[R].$ \end{itemize} \end{lem} \begin{proof} It is straightforward. \end{proof} By using Proposition \ref{prop:tilting miyashita} together with the main results in this paper, we can infer well-known properties (and others that seem to be new) of Miyashita tilting modules. The following corollary collects these consequences. 
In particular, we will get \cite[Theorem 4.4(a)]{auslander1992homologically} and other results similar to the ones in \cite{auslander1992homologically}. \begin{cor}\label{cor: propiedades miyashita} Let $R$ be an Artin algebra, $T\in\mathrm{Mod}(R)$ be a Miyashita $n$-tilting $R$-module, $\p:=({}^{\bot}(T^{\bot}),T^{\bot})$ in $\Modx[R]$, $\mathcal{A}':={}\mathcal{A}\cap\modd$, $\mathcal{B}':=\mathcal{B}\cap\modd$, and $\nu:=\mathcal{A}\cap\mathcal{B}\cap\modd$. Then, the following conditions hold true. \begin{itemize} \item[$\mathrm{(a)}$] The pair $(\mathcal{A}',\mathcal{B}')$ is complete and hereditary in $\modd$, $^{\bot}\left(\mathcal{B}'\right)\cap\modd=\mathcal{A}'$, and $(\mathcal{A}')^{\bot}\cap\modd=\mathcal{B}'$. Furthermore, \begin{itemize} \item[$\mathrm{(a1)}$] $\mathcal{A}'=\left(\addx[T]\right){}^{\vee}$; \item[$\mathrm{(a2)}$] $\mathcal{B}'=\operatorname{gen}_{k}(T)$ $\forall k\geq\max\left\{ 1,\pdr[\,][T]\right\} $. \end{itemize} \item[$\mathrm{(b)}$] $\nu$ is a $\mathcal{B}'$-projective relative generator in $\mathcal{B}'$ and an $\mathcal{A}'$-injective relative cogenerator in $\mathcal{A}'$. Furthermore, \begin{itemize} \item[$\mathrm{(b1)}$] $\nu$ $=(\nu,\mathcal{A}')^{\wedge}$ $=(\mathcal{B}',\nu)^{\vee}$ $=\mathcal{A}'\cap\nu^{\wedge}$ $=\mathcal{B}'\cap\nu^{\vee}$ $=\mathcal{A}'\cap\mathcal{B}'$ $=\addx[T].$ \item[$\mathrm{(b2)}$] $\modd=\left(\mathcal{B}'\right)^{\vee}$. 
\end{itemize} \item[$\mathrm{(c)}$] For any $X\in\modd,$ we have that \begin{itemize} \item[$\mathrm{(c1)}$] $\resdimr{\mathcal{A}}X{\,} =\resdimr{\mathcal{A}'}X{\,} =\pdr[\mathcal{B}'][X]$ $\leq\resdimr{\mathcal{A}'}{\mathcal{B}'}{\,}+1\mbox{;}$ \item[$\mathrm{(c2)}$] $\resdimr{\mathcal{A}}X{\,}\leq\pdr[\mathcal{B}][X]\leq\resdimr{\mathcal{A}}{\mathcal{B}}{\,}+1;$ \item[$\mathrm{(c3)}$] $\coresdimr{\mathcal{B}'}X{\,}$ $=\coresdimr{\mathcal{B}}X{\,}$ $=\idr[\mathcal{A}'][X]$ $\leq\coresdimr{\mathcal{B}'}{\mathcal{A}'}{\,}+1\mbox{;}$ \item[$\mathrm{(c4)}$] $\coresdimx{\mathcal{B}}X\leq\idr[\mathcal{A}][X]\leq\coresdimr{\mathcal{B}}{\mathcal{A}}{\,}+1$; \item[$\mathrm{(c5)}$] $\coresdimr{\mathcal{B}'}{{}\modd{}}{\,}\leq\pdr[\,][T]=\pdr[\,][\mathcal{A}]=\pdr[{}\,{}][{}{}^{\bot}(\mathcal{B}')]<\infty$; \end{itemize} \item[$\mathrm{(d)}$] The following equalities hold true: \begin{itemize} \item[$\mathrm{(d1)}$] $\pdr[\mathcal{B}'][M]=\pdr[\operatorname{add}(T)][M]=\resdimx{\mathcal{A}'}M\,\forall M\in(\mathcal{A}')^{\wedge}$; \item[$\mathrm{(d2)}$] $\pdr[\mathcal{B}'][M]=\resdimx{\operatorname{add}(T)}M\,\forall M\in(\addx[T])^{\wedge}$; \item[$\mathrm{(d3)}$] $\idr[\mathcal{A}'][M]=\idr[\operatorname{add}(T)][M]=\coresdimx{\mathcal{B}'}M\,\forall M\in\left(\mathcal{B}'\right)^{\vee}$; \item[$\mathrm{(d4)}$] $\idr[\mathcal{A}'][M]=\coresdimx{\operatorname{add}(T)}M\,\forall M\in\left(\addx[T]\right)^{\vee}$. 
\end{itemize} \item[$\mathrm{(e)}$] We have the following relations: \begin{itemize} \item[$\mathrm{(e1)}$] $\pdx[T]=\pdx[\mathcal{A}']=\pdr[\mathcal{A}'][\mathcal{A}']=\coresdimx{\mathcal{B}'}{\mathcal{A}'}=\coresdimx{{}\addx[T]{}}{\mathcal{A}'}$ $=\coresdimr{\mathcal{B}'}{{}\modd{}}{\,}<\infty$; \item[$\mathrm{(e2)}$] $\mathrm{id}(T)\leq\resdimr{\mathcal{A}'}{\mathcal{B}'}{\,}=\resdimx{\mathcal{A}'}{{}\modd{}}=\resdimr{{}\addx[T]{}}{\mathcal{B}'}{\,}=\idr[\mathcal{B}'][\mathcal{B}']=\idr[\,][\mathcal{B}']$; \item[$\mathrm{(e3)}$] $\idr[\,][\mathcal{B}']<\infty$ if and only if $\mathcal{B}'\subseteq(\addx[T])^{\wedge}$ and $\idr[\,][T]<\infty$. Moreover, if $\idr[\,][\mathcal{B}']<\infty,$ then $\modd=(\mathcal{A}')^{\wedge}$, $\mathcal{B}'=(\addx[T])^{\wedge}$ and $\idr[\,][\mathcal{B}']=\idr[\,][T]$. \item[$\mathrm{(e4)}$] $\operatorname{gldim}(R)=\pdr[\,][\mathcal{B}']$. \end{itemize} \item[$\mathrm{(f)}$] Let $Z\in\modd$, with $m:=\coresdimr{\mathcal{B}'}Z{\mbox{ }}<\infty.$ Then, there exist the exact sequences \begin{alignat*}{1} \suc[Z][M_{Z}][C_{Z}][g_{Z}] & \qquad\mbox{with \ensuremath{C_{Z}\in(\addx[T]){}^{\vee},\,M_{Z}\in\mathcal{B}'}}\mbox{,}\\ \suc[K_{Z}][N_{Z}][Z][][f_{Z}] & \qquad\mbox{with \ensuremath{N_{Z}\in(\addx[T]){}^{\vee}}, \ensuremath{K_{Z}\in\mathcal{B}'}}\mbox{;} \end{alignat*} where $\coresdimr{{}\addx[T]{}}{C_{Z}}{\,}=m-1$, $\coresdimr{{}\addx[T]{}}{N_{Z}}{\,}\leq m$, $g_{Z}$ is a $\mathcal{B}'$-preenvelope and $f_{Z}$ is an $(\addx[T])^{\vee}$-precover. 
\item[$\mathrm{(g)}$] Let $Z\in\modd$, with $m:=\resdimr{\mathcal{A}'}Z{\,}<\infty.$ Then, there exist the exact sequences \begin{alignat*}{1} \suc[Z][N_{Z}][C_{Z}][f_{Z}] & \qquad\mbox{with \ensuremath{N_{Z}\in(\addx[T])^{\wedge},\,C_{Z}\in\mathcal{A}'}}\mbox{,}\\ \suc[K_{Z}][M_{Z}][Z][][g_{Z}] & \qquad\mbox{with \ensuremath{K_{Z}\in(\addx[T]){}^{\wedge}}, \ensuremath{M_{Z}\in\mathcal{A}'}}\mbox{;} \end{alignat*} where $\resdimr{{}\addx[T]{}}{K_{Z}}{\,}=m-1$, $\resdimr{{}\addx[T]{}}{N_{Z}}{\,}\leq m$, $g_{Z}$ is an $\mathcal{A}'$-precover and $f_{Z}$ is an $(\addx[T])^{\wedge}$-preenvelope. \end{itemize} \end{cor} \begin{proof} By Proposition \ref{prop:tilting miyashita}, we get that $T$ is an $n$-$\modd$-tilting object. \begin{itemize} \item[$\mathrm{(a)}$] It follows from Theorem \ref{thm:el par n-X-tilting}, Lemma \ref{lem:lemita pdr en mod} and Proposition \ref{prop:(b)}. \item[$\mathrm{(b)}$] It follows from Proposition \ref{prop:(a)} (a,b,d). \item[$\mathrm{(c)}$] It follows from Proposition \ref{prop:(b)}, Lemma \ref{lem: 2.54'} and Lemma \ref{lem:lemita pdr en mod}. \item[$\mathrm{(d)}$] By (b1), $\nu=\addx[T]$. Hence, it is enough to consider Proposition \ref{prop:oct1}, Lemma \ref{lem: 2.54'} and its dual. \item[$\mathrm{(e)}$] Since $\nu=\addx[T]$ (see (b1)), it is enough to consider Proposition \ref{prop:oct2}, Proposition \ref{prop:oct3} (a), Lemma \ref{lem: 2.54'} and Lemma \ref{lem:lemita pdr en mod} and their duals. It remains to show in (e3) that $(\addx[T])^{\wedge}\subseteq\mathcal{B}'$. For this, by (a) and Proposition \ref{prop:M ortogonal es preenvolvente esp en X} (b), we have that $(\addx[T])^{\wedge}=\nu^{\wedge}\subseteq(\mathcal{A}')^{\bot}\cap\modd=\mathcal{B}'$. \item[$\mathrm{(f)}$] Let $Z\in\modd$. By (c5), $m\leq\pdr[\,][T]<\infty$. Then, using Proposition \ref{prop:M ortogonal es preenvolvente esp en X} (c), we get the desired exact sequences. \item[$\mathrm{(g)}$] It follows from (b1) and Proposition \ref{prop:M ortogonal es preenvolvente esp en X} (d). 
\end{itemize} \end{proof} \begin{rem} In \cite{anoteonrelative}, Jiaqun Wei works with a relative tilting notion developed by Maurice Auslander and {\O}yvind Solberg in \cite{auslander1993relative}. The Miyashita tilting is a particular case of the Auslander-Solberg tilting. It is worth mentioning that Corollary \ref{cor: propiedades miyashita} (a2) is \cite[Corollary 3.11]{anoteonrelative}. In Section \ref{sub:Tilting-en-categor=0000EDas}, we will study the Auslander-Solberg tilting through our $n$-$\mathcal{X}$-tilting theory. As a result, we will show that Wei's main theorem \cite[Theorem 3.10]{anoteonrelative} can be seen as a particular case of a theorem on $n$-$\mathcal{X}$-tilting objects. \end{rem} \subsection{Miyashita tilting for modules of type $FP_{n}$} In this section we study the left $n$-coherent rings and the left modules of type $FP_{n+1}.$ We characterize when a module $T\in\mathrm{Mod}(R)$ is a big $n$-$\mathcal{F}\mathcal{P}_{n+1}(R)$-tilting object. \ Let $R$ be a ring. Following \cite[Section 1]{bravo2017finiteness}, we recall that $M\in\mathrm{Mod}(R)$ is called \textbf{finitely $n$-presented} (or of type $FP_n$) if it admits an exact sequence $F_{n}\rightarrow F_{n-1}\rightarrow\cdots\rightarrow F_{0}\rightarrow M\rightarrow0$ with $F_{i}\in\projx[R]\,\forall i\in[0,n]$. The class of all the left $R$-modules of type $FP_n$ is denoted by $\mathcal{FP}_{n}(R).$ Note that $\mathcal{FP}_0(R)=\mathrm{mod}(R).$ An $M\in\mathrm{Mod}(R)$ is called \textbf{finitely $\infty$-presented} (or of type $FP_\infty$) if it admits an exact sequence $\cdots\rightarrow F_{n}\rightarrow F_{n-1}\rightarrow\cdots\rightarrow F_{0}\rightarrow M\rightarrow0\mbox{,}$ with $F_{i}\in\projx[R]\,\forall i\geq0$. The class of all the left $R$-modules of type $FP_\infty$ is denoted by $\mathcal{FP}_{\infty}(R).$ Note that $\mathcal{FP}_{\infty}(R)=\projx[R]_{\infty}^{\wedge}$. 
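Observe that, directly from the definitions, these classes fit into a descending chain, and it can be shown that $\mathcal{FP}_{\infty}(R)$ is the intersection of all of them (cf. \cite[Section 1]{bravo2017finiteness}):
\[
\mathrm{Mod}(R)\supseteq\mathrm{mod}(R)=\mathcal{FP}_{0}(R)\supseteq\mathcal{FP}_{1}(R)\supseteq\cdots\supseteq\mathcal{FP}_{\infty}(R)=\bigcap_{n\geq0}\mathcal{FP}_{n}(R)\mbox{.}
\]
For instance, if $R$ is left noetherian, then every finitely generated left $R$-module admits a resolution by finitely generated projectives, and thus all the classes in this chain, with the exception of $\mathrm{Mod}(R)$, coincide with $\mathrm{mod}(R)$.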
\begin{rem} Let $R$ be a ring and $M\in\mathrm{Mod}(R).$ Then $M\in\mathcal{FP}_{n}(R)$ if and only if there is an exact sequence $ R^{m_{n}}\rightarrow R^{m_{n-1}}\rightarrow\cdots\rightarrow R^{m_{0}}\rightarrow M\rightarrow0 $ with $m_{i}\in\mathbb{N}\;\forall i\in[0,n].$ \end{rem} \begin{lem}\cite[Proposition 1.7]{bravo2017finiteness}\label{lem:propiedades de cerradura FPn} Let $R$ be a ring. Then, $\mathcal{FP}_{n}(R)$ is right thick and $\mathcal{FP}_{\infty}(R)$ is thick.\end{lem} \begin{lem}\cite[Lemma 2.11]{bravo2019locally}\label{lem:FPn es cerrado por n-cocientes} Let $R$ be a ring and $C\in\mathrm{Mod}(R)$ be such that there is an exact sequence $ F_{n}\rightarrow\cdots\rightarrow F_{1}\rightarrow F_{0}\rightarrow C\rightarrow0\mbox{,} $ where $F_{i}\in\mathcal{FP}_{n}(R)$ $\forall i\in[0,n].$ Then $C\in\mathcal{FP}_{n}(R).$ \end{lem} \begin{defn}\cite[Definition 2.2]{bravo2017finiteness} A ring $R$ is called left \textbf{$n$-coherent} if $\mathcal{FP}_{n}(R)=\mathcal{FP}_{n+1}(R).$ \end{defn} \begin{lem}\cite[Corollary 2.6]{bravo2017finiteness}\label{lem:n-coherente, FPn cerrado por nucleos de epis} Let $R$ be a left $n$-coherent ring. Then $\mathcal{FP}_{n}(R)$ is closed under epi-kernels. \end{lem} \begin{lem}\label{lem:anillo conmutativo coherente, entonces precubiertas} For an $n$-coherent commutative ring $R$ and $T\in\mathcal{FP}_{n}(R),$ the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] $\mathrm{Hom}_R(T,X)\in\mathcal{FP}_{n}(R)$ $\forall\;X\in T^{\bot}\cap\mathcal{FP}_{n}(R).$ \item[$\mathrm{(b)}$] Every $X\in T^{\bot}\cap\mathcal{FP}_{n}(R)$ admits an $\addx[T]$-precover. Moreover, any such $\addx[T]$-precover is an $\Addx[T]$-precover. 
\end{itemize} \end{lem} \begin{proof} (a) There is a family $\left\{ \suc[K_{i+1}][R^{m_{i}}][K_{i}]\right\} _{i=0}^{n}$ of exact sequences in $\mathrm{Mod}(R),$ where $K_{0}=T$ and $m_{i}\in\mathbb{N}\;\forall i\in[0,n],$ since $T\in\mathcal{FP}_{n}(R).$ \ Let $X\in T^{\bot}\cap\mathcal{FP}_{n}(R).$ Then $\mathrm{Ext}_R^1(K_{i},X)\simeq\mathrm{Ext}^{1+i}_R(T,X)=0\;\forall i\in[0,n],$ and thus, by applying the functor $\mathrm{Hom}_R(-,X)$ to the above family of exact sequences, we get the family $\left\{ 0\to \mathrm{Hom}_R(K_i,X)\to X^{m_{i}}\to \mathrm{Hom}_R(K_{i+1},X)\to 0\right\} _{i=0}^{n}$ of exact sequences in $\mathrm{Mod}(R).$ From this family, we get the exact sequence \[ 0\rightarrow\mathrm{Hom}_R(T,X)\rightarrow X^{m_{0}}\rightarrow X^{m_{1}}\rightarrow\cdots\rightarrow X^{m_{n}}\rightarrow\mathrm{Hom}_R(K_{n+1},X)\rightarrow0. \] Then by Lemmas \ref{lem:propiedades de cerradura FPn} and \ref{lem:FPn es cerrado por n-cocientes}, it follows that $\mathrm{Hom}_R(K_{n+1},X)\in\mathcal{FP}_{n}(R).$ Thus, by recursively using Lemmas \ref{lem:propiedades de cerradura FPn} and \ref{lem:n-coherente, FPn cerrado por nucleos de epis}, we get that $\mathrm{Hom}_R(T,X)\in\mathcal{FP}_{n}(R)$. \ (b) Let $X\in T^{\bot}\cap\mathcal{FP}_{n}(R).$ Then, by (a), $\Homx[R][T][X]\in\mathrm{mod}(R)$; let $\{f_{1},\cdots,f_{k}\}$ be a finite generating set. It is straightforward to show that $\alpha:=(f_{1},\cdots,f_{k}):T^{k}\rightarrow X$ is an $\addx[T]$-precover. The second statement in (b) follows from Lemma \ref{lem: addT precub es AddT precub}. \end{proof} \begin{lem}\label{lem:lemita miyashita} Let $R$ be a ring and $T\in \modd[R]$ be such that $T\in T^\perp$ and $T\in\mathrm{proj}(R)^\wedge_n.$ Then, the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] $\mathcal{Q}:=({}^{\bot}T\cap\mathcal{FP}_{\infty}(R),\addx[T])_{\infty}^{\vee}$ is left thick. 
\item[$\mathrm{(b)}$] $\left(\addx[T]\right)_{\mathcal{FP}_{\infty}(R)}^{\vee}=\addx[T]^{\vee}=\left\{ M\in\mathcal{Q}\;|\:\;\idr[\mathcal{Q}][M]<\infty\right\} $ is left thick. \end{itemize} \end{lem} \begin{proof} By Lemma \ref{lem:propiedades de cerradura FPn}, we know that $\mathcal{FP}_{\infty}(R)$ is thick in $\Modx[R]$. Note that $T\in\mathcal{FP}_{\infty}(R)$ and thus $\addx[T]\subseteq\mathcal{FP}_{\infty}(R).$ Moreover, $\addx[T]\subseteq T^{\bot}\cap\mathcal{F}\mathcal{P}_{\infty}(R)$ since $T\in T^{\bot}.$ Hence, the result follows from \cite[Corollary 4.21]{parte1}. \end{proof} \begin{thm}\label{prop:n-FPn+1-tilting} Let $R$ be a left $n$-coherent ring and $T\in\mathcal{F}\mathcal{P}_{n+1}(R)$. Consider the following statements: \begin{itemize} \item[$\mathrm{(a)}$] $T$ is a Miyashita $n$-tilting $R$-module and there is an $\mathcal{FP}_{n+1}(R)$-injective relative cogenerator in $\mathcal{FP}_{n+1}(R).$ \item[$\mathrm{(b)}$] $T$ is a big $n$-$\mathcal{F}\mathcal{P}_{n+1}(R)$-tilting object. \item[$\mathrm{(c)}$] $T$ is a small $n$-$\mathcal{F}\mathcal{P}_{n+1}(R)$-tilting object. \item[$\mathrm{(d)}$] $T$ is a small $\projx[R]$-saturated $n$-$\mathcal{F}\mathcal{P}_{n+1}(R)$-tilting object. \end{itemize} Then $\mathrm{(b)}\Rightarrow\mathrm{(a)}$ and $\mathrm{(b)}\Leftrightarrow\mathrm{(c)}\Leftrightarrow\mathrm{(d)}$ hold true. Furthermore, $\mathrm{(a)}\Rightarrow\mathrm{(b)}$ holds true if $R$ is commutative. \end{thm} \begin{proof} Note first that, by \cite[Theorem 2.4(3)]{bravo2017finiteness}, $R$ is $k$-coherent $\forall k\geq n$. In particular, by Lemmas \ref{lem:propiedades de cerradura FPn} and \ref{lem:n-coherente, FPn cerrado por nucleos de epis}, $\mathcal{FP}_{n+1}(R)$ is thick in $\Modx[R]$. \ (a) $\Rightarrow$ (b) Let $R$ be commutative. The conditions (T1), (T2), (T4) and (T5) follow in a straightforward way by using \cite[Lemma 3.1.6]{Approximations} and Lemma \ref{lem:anillo conmutativo coherente, entonces precubiertas} (b). 
The condition (T3) follows from (MT3) and Lemma \ref{lem:lemita miyashita}. \ (b) $\Rightarrow$ (a) Let $T$ be an $n$-$\mathcal{F}\mathcal{P}_{n+1}(R)$-tilting object. By (T4), we know there is an $\mathcal{FP}_{n+1}(R)$-injective relative cogenerator in $\mathcal{FP}_{n+1}(R)$. Let us prove that $T$ is Miyashita tilting. Since $T\in\mathcal{F}\mathcal{P}_{n+1}(R)$ and $R$ is $\left(n+1\right)$-coherent, it can be shown that (MT1) is satisfied by using (T1) and the shifting lemma. Moreover, (MT2) follows from (T2). Finally, since $\mathcal{F}\mathcal{P}_{n+1}(R)$ is thick in $\mathrm{Mod}(R)$, by Theorem \ref{thm:el par n-X-tilting} (c), there is an exact sequence $\suc[{}{}_{R}R][M_{0}][X_{0}]$ where $M_{0}\in T^{\bot}\cap\mathcal{F}\mathcal{P}_{n+1}(R)$ and $X_{0}\in{}{}^{\bot}(T^{\bot})\cap\mathcal{F}\mathcal{P}_{n+1}(R).$ Then, using $_{R}R\in{}{}^{\bot}(T^{\bot})$ and Proposition \ref{prop:(a)} (d), we have $M_{0}\in{}{}^{\bot}(T^{\bot})\cap T^{\bot}\cap\mathcal{F}\mathcal{P}_{n+1}(R)=\Addx[T]\cap\mathcal{F}\mathcal{P}_{n+1}(R)=\addx[T].$ By repeating the same arguments (recursively), we can build a long exact sequence $\suc[R][M_{0}\rightarrow\cdots\rightarrow M_{k}][X_{k}]\,\forall k\geq1$ with $M_{i}\in\addx[T]$ and $X_{i}\in{}{}^{\bot}(T^{\bot})\cap\mathcal{F}\mathcal{P}_{n+1}(R)\;\forall i\in[1,k]$. Now, since $\pdr[\mathcal{F}\mathcal{P}_{n+1}(R)][T]\leq n$, we have that $T^{\bot}\cap\mathcal{F}\mathcal{P}_{n+1}(R)$ is closed under $n$-quotients in $\mathcal{F}\mathcal{P}_{n+1}(R)$ by \cite[Proposition 2.7]{parte1}. Thus, we have $X_{n}\in T^{\bot}\cap{}{}^{\bot}(T^{\bot})\cap\mathcal{F}\mathcal{P}_{n+1}(R)=\addx[T]$ and therefore $T$ satisfies (MT3). \ (b) $\Leftrightarrow$ (c) It follows from Corollary \ref{cor:coro2 p80} since $\mathcal{FP}_{n+1}(R)\subseteq\modd.$ \ (d) $\Leftrightarrow$ (c) It follows from Proposition \ref{prop:satsat-1}. 
\end{proof} \subsection{\label{sub:Tilting-en-categor=0000EDas} Tilting in exact categories} It is a known fact that a small exact category can be embedded into an abelian category. In this section, we will use this fact to introduce a tilting theory on small exact categories. Furthermore, we will see that the tilting objects obtained by this procedure coincide with the tilting objects defined by Bin Zhu and Xiao Zhuang in \cite{zhu2019tilting}. We start by recalling some definitions and important results on exact categories. Let $\mathcal{A}$ be an additive category. A \textbf{kernel-cokernel pair} $(i,p)$ in $\mathcal{A}$ is a sequence of morphisms $A'\stackrel{i}{\rightarrow}A\stackrel{p}{\rightarrow}A''$ in $\mathcal{A}$ such that $i$ is the kernel of $p$ and $p$ is the cokernel of $i$. Let $\mathcal{E}$ be a fixed class of kernel-cokernel pairs in $\mathcal{A}$. A morphism $i$ ($p$, respectively) is called \textbf{admissible mono} (\textbf{admissible epi,} respectively) if there is a pair $(i,p)\in\mathcal{E}$. We say that $\mathcal{E}$ is \textbf{closed under isomorphisms} if, for every commutative diagram in $\mathcal{A}$ \begin{minipage}[t]{1\columnwidth}% \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=2cm,main node/.style=,x=1.5cm,y=1.5cm] \node (1) at (0,0) {$A$}; \node (2) at (1,0) {$X$}; \node (3) at (2,0) {$B$}; \node (1') at (0,-1) {$A'$}; \node (2') at (1,-1) {$X'$}; \node (3') at (2,-1) {$B',$}; \draw[->, thin] (1) to node {$\alpha$} (2); \draw[->, thin] (2) to node {$\beta$} (3); \draw[->, thin] (1') to [below] node {$\alpha '$} (2'); \draw[->, thin] (2') to [below] node {$\beta '$} (3'); \draw[->, thin] (1) to node {$f$} (1'); \draw[->, thin] (2) to node {$g$} (2'); \draw[->, thin] (3) to node {$h$} (3'); \end{tikzpicture} \]% \end{minipage} \noindent with $f,g,h$ isomorphisms, we have that $(\alpha,\beta)\in\mathcal{E}$ if and only if $(\alpha',\beta')\in\mathcal{E}$. 
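As a guiding example, if $\mathcal{A}$ is an abelian category, then the kernel-cokernel pairs $(i,p)$ in $\mathcal{A}$ are precisely the short exact sequences
\[
0\rightarrow A'\stackrel{i}{\rightarrow}A\stackrel{p}{\rightarrow}A''\rightarrow0\mbox{,}
\]
and the class of all of them is clearly closed under isomorphisms.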
\begin{defn} \cite[Definition 2.1]{buhler2010exact} An \textbf{exact category} is a pair $\p[\mathcal{A}][\mathcal{E}]$, where $\mathcal{A}$ is an additive category and $\mathcal{E}$ is a class of kernel-cokernel pairs closed under isomorphisms, such that the following conditions are satisfied: \begin{description} \item [{(E0)}] $1_{A}$ is an admissible mono and an admissible epi $\forall A\in\mathcal{A}.$ \item [{(E1)}] The class of admissible monos and the class of admissible epis are closed under composition of morphisms. \end{description} \noindent \begin{minipage}[t]{0.8\columnwidth}% \begin{description} \item [{(E2)}] For any morphism $Y\rightarrow W$ and any admissible epi $Z\rightarrow W$, there is a pullback $(X,X\rightarrow Y,X\rightarrow Z),$ where $X\rightarrow Y$ is an admissible epi. \item [{(E2)$^{op}$}] For every morphism $X\rightarrow Y$ and every admissible mono $X\rightarrow Z$, there is a push-out $(W,Y\rightarrow W,Z\rightarrow W),$ where $Y\rightarrow W$ is an admissible mono.\end{description} \end{minipage}\hfill{}% \fbox{\begin{minipage}[t]{0.15\columnwidth}% \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=2cm,main node/.style=,x=1.5cm,y=1.5cm] \node (1) at (0,0) {$Z$}; \node (2) at (1,0) {$W$}; \node (3) at (0,1) {$X$}; \node (4) at (1,1) {$Y$}; \draw[->, thin] (3) to node {$$} (4); \draw[->, thin] (3) to node {$$} (1); \draw[->, thin] (4) to node {$$} (2); \draw[->, thin] (1) to node {$$} (2); \end{tikzpicture} \]% \end{minipage}} \end{defn} Given an exact category $\p[\mathcal{A}][\mathcal{E}]$, an element $(i,p)\in\mathcal{E}$ is called a {\bf short exact sequence} and it is also denoted as $0\to X\xrightarrow{i} Z\xrightarrow{p} Y\to 0.$ Moreover, for every $X,Y\in\mathcal{A}$, we denote by $\mathcal{E}(X,Y)$ the class of all the short exact sequences of the form $0\to Y\rightarrow Z\rightarrow X\to 0.$ \ Let $\mathcal{C}$ be an abelian category and $\mathcal{A}\subseteq\mathcal{C}$ be closed under extensions and such that
$0\in \mathcal{A}.$ Consider the class $\mathcal{E}$ of all the exact sequences $\suc$ in $\mathcal{C}$ such that $N,K\in\mathcal{A}.$ Then, it is straightforward to show that the pair $\p[\mathcal{A}][\mathcal{E}]$ is an exact category. \begin{defn} \cite[Definition 5.1]{buhler2010exact} Let $\p[\mathcal{A}][\mathcal{E}]$ and $\p[\mathcal{A}'][\mathcal{E}']$ be exact categories and $F:\mathcal{A}\rightarrow\mathcal{A}'$ be an additive functor. We say that $F$ is \textbf{exact} if $F(\mathcal{E})\subseteq\mathcal{E}'$. We say that $F$ \textbf{reflects exactness} in case $(F\alpha,F\beta)\in\mathcal{E}'$ implies $(\alpha,\beta)\in\mathcal{E}$. \end{defn} Let $\mathcal{A}$ be an additive category. We say that a morphism $f:X\to Y$ in $\mathcal{A}$ is a \textbf{split-epi} if there is a morphism $g:Y\to X$ such that $fg=1_Y$. The \textbf{split-monos} are defined dually. \begin{defn}\cite[Definition 6.1, Remark 6.2]{buhler2010exact} Let $\mathcal{A}$ be an additive category. We say that $\mathcal{A}$ is \textbf{idempotent complete} if any idempotent morphism $p:A\rightarrow A$ in $\mathcal{A}$ admits a kernel. If every split-epi admits a kernel, we say that $\mathcal{A}$ is \textbf{weakly idempotent complete}.\end{defn} \begin{rem}\label{rem:idempotentes en categorias exactas}\cite[Remarks 6.2, 2.3, and 2.8]{buhler2010exact} For any exact category $\p[\mathcal{A}][\mathcal{E}],$ the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] Every isomorphism is both an admissible mono and an admissible epi. \item[$\mathrm{(b)}$] Every splitting exact sequence belongs to $\mathcal{E}$, and hence, every splitting monomorphism (epimorphism) is an admissible mono (epi). 
\item[$\mathrm{(c)}$] If any idempotent morphism $p:A\rightarrow A$ in $\mathcal{A}$ admits a kernel, then we can build splitting exact sequences $0\to K\rightarrow A\rightarrow I\to 0$ and $0\to I\rightarrow A\rightarrow K\to 0$, where $K\rightarrow A$ is the kernel of $p$ and $I$ is the image of $p$. \end{itemize} \end{rem} \begin{defn}\cite[Definition 11.1]{buhler2010exact} Let $(\mathcal{A},\mathcal{E})$ be an exact category. An object $P\in\mathcal{A}$ is \textbf{$\mathcal{E}$-projective} if $\Homx[\mathcal{A}][P][-]:\mathcal{A}\rightarrow\mbox{Ab}$ is exact. Denote by $\mbox{Proj}_{\mathcal{E}}(\mathcal{A})$ the class of all the $\mathcal{E}$-projective objects. The \textbf{$\mathcal{E}$-injective} objects are defined dually, and the class of all the $\mathcal{E}$-injective objects is denoted by $\mbox{Inj}_{\mathcal{E}}(\mathcal{A}).$ \end{defn} \begin{defn}\cite[Definition 11.9]{buhler2010exact} We say that an exact category $\p[\mathcal{A}][\mathcal{E}]$ \textbf{has enough $\mathcal{E}$-projectives} if every $A\in\mathcal{A}$ admits an admissible epi $P\rightarrow A$ with $P\in\operatorname{Proj}_{\mathcal{E}}(\mathcal{A})$. Dually, $\p[\mathcal{A}][\mathcal{E}]$ \textbf{has enough $\mathcal{E}$-injectives} if every $A\in\mathcal{A}$ admits an admissible mono $A\rightarrow I$ with $I\in\operatorname{Inj}_{\mathcal{E}}(\mathcal{A})$. \end{defn} \begin{defn} Let $(\mathcal{A},\mathcal{E})$ be a skeletally small exact category and let $F:\mathcal{A}\rightarrow\operatorname{Ab}$ be an additive functor. We say that $F$ is \textbf{left exact} if it carries every short exact sequence $0\to N\rightarrow M\rightarrow K\to 0$ in $\mathcal{E}$ to an exact sequence $0\rightarrow F\left(N\right)\rightarrow F\left(M\right)\rightarrow F\left(K\right)$ in $\operatorname{Ab}.$ Denote by $\operatorname{Lex}(\mathcal{A},\mathcal{E})$ the category of all the contravariant additive left exact functors $\mathcal{A}\rightarrow\operatorname{Ab}$. 
\end{defn} \begin{thm}\cite[Theorem A.1]{buhler2010exact}\cite[Theorem A.7.1]{thomason1990higher}\cite[Chapter II, Section 2]{gabriel1962categories}\cite[Section 2]{quillen1973higher}\label{thm:exacta se sumerge en abeliana} For a small exact category $(\mathcal{A},\mathcal{E}),$ the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] There is an abelian category $\mathcal{B}$ and a fully faithful exact functor $i:\mathcal{A}\rightarrow\mathcal{B}$ that reflects exactness. Moreover, $i(\mathcal{A})$ is closed under extensions in $\mathcal{B}$. \item[$\mathrm{(b)}$] The category $\mathcal{B}$ may canonically be chosen to be the category $\operatorname{Lex}(\mathcal{A},\mathcal{E})$ and $i$ to be the Yoneda embedding $i(A):=\Homx[\mathcal{A}][-][A]$. \item[$\mathrm{(c)}$] Assume that $\mathcal{A}$ is weakly idempotent complete. If $f$ is a morphism in $\mathcal{A}$ such that $i(f)$ is epic (monic) in $\mathcal{B}$, then $f$ is an admissible epi (mono). \end{itemize} \end{thm} \begin{defn} Let $(\mathcal{A},\mathcal{E})$ be an exact category and $\mathcal{C}$ be an abelian category. If there is a fully faithful exact functor $i:\mathcal{A}\rightarrow\mathcal{C}$ that reflects exactness, we say that $(\mathcal{A},\mathcal{E})$ \textbf{is embedded} in $\mathcal{C}$, and that $i$ is \textbf{the embedding of $\mathcal{A}$ in }$\mathcal{C}$. \end{defn} Consider a category $\mathcal{C}$ and $\mathcal{X}\subseteq\mathcal{C}$. Let $[\mathcal{X}]$ denote the class of all the objects $Z\in\mathcal{C}$ such that $Z\cong X$ with $X\in\mathcal{X}$. \begin{rem}\label{rem:observacion p152} Let $\mathcal{X}$ and $\mathcal{Y}$ be classes of objects in a category $\mathcal{C}$. Note that $[\mathcal{X}\cap\mathcal{Y}]\subseteq[\mathcal{X}]\cap[\mathcal{Y}]$.
Furthermore, $[\mathcal{X}\cap\mathcal{Y}]=\mathcal{X}\cap[\mathcal{Y}]$ if $\mathcal{X}=[\mathcal{X}].$ \end{rem} For a functor $F:\mathcal{A}\to \mathcal{B},$ the class $[F(\mathcal{A})]$ is also known as the essential image of $F$ in $\mathcal{B}.$ \begin{prop}\label{prop:completa por idempotentes sii cerrada por sumandos} Let $\p[\mathcal{A}][\mathcal{E}]$ be an exact category embedded in an abelian category $\mathcal{C},$ via $i:\mathcal{A}\rightarrow\mathcal{C}$. Then $\left[i(\mathcal{A})\right]=\operatorname{smd}\left({}\left[i(\mathcal{A})\right]{}\right)$ if, and only if, $\mathcal{A}$ is idempotent complete. \end{prop} \begin{proof} $(\Rightarrow)$ Let $e:A\rightarrow A$ be an idempotent morphism in $\mathcal{A}$. It follows that $i(e)$ is an idempotent morphism in $\mathcal{C}$. Hence, there is a splitting exact sequence in $\mathcal{C}$, $\eta:\;\suc[X][i(A)][Y][k][c]\mbox{,}$ where $k:X\rightarrow i(A)$ is the kernel of $i(e)$ and $c':Y\rightarrow i(A)$ is a monomorphism such that $i(e)=c'c$. Now, since $\left[i(\mathcal{A})\right]=\operatorname{smd}\left(\left[i(\mathcal{A})\right]\right)$, there are $B,C\in\mathcal{A}$ such that $i(B)\cong X$ and $i(C)\cong Y$. Furthermore, since $i$ is a full functor, there are morphisms $\alpha$, $\beta$ and $\gamma$ in $\mathcal{A}$ such that $i(\alpha)=k$, $i(\beta)=c$ and $i(\gamma)=c'$. Hence, using that $i$ reflects exactness, $(\alpha,\beta)\in\mathcal{E}$ and $\gamma$ is an admissible mono. Moreover, $e=\gamma\beta$ since $i$ is faithful. Finally, it is straightforward to show that $\alpha$ is the kernel of $e$. $(\Leftarrow)$ Let $M\in\mathcal{A}$ be an object such that $i(M)=X\oplus Y$ in $\mathcal{C}$. Consider the idempotent morphism $e=\mu\pi$, where $\mu:X\rightarrow i(M)$ and $\pi:i(M)\rightarrow X$ are the natural injection and projection, respectively. Now, since $i$ is full and faithful, there is an idempotent morphism $e':M\rightarrow M$ such that $i(e')=e$.
Moreover, using that $i$ is exact and Remark \ref{rem:idempotentes en categorias exactas} (c), there is a splitting exact sequence $\suc[i(K)][i(M)][i(I)][i(k)][i(j)],$ where $i(k)$ is the kernel of $i(e')=e.$ Thus $X\cong i(K)$ and hence $\left[i(\mathcal{A})\right]=\operatorname{smd}\left(\left[i(\mathcal{A})\right]\right)$. \end{proof} \begin{rem} Observe that having an exact category $(\mathcal{A},\mathcal{E})$ such that $\mathcal{A}$ is a full subcategory of an abelian category $\mathcal{C}$ is not the same as having the exact category $(\mathcal{A},\mathcal{E})$ embedded in $\mathcal{C},$ via the inclusion $\mathcal{A}\subseteq\mathcal{C}$. For example, if $R$ is a ring and $\mathcal{E}$ is the class of all the splitting exact sequences in $\Modx[R]$, then $(\Modx[R],\mathcal{E})$ is an exact category. Note that $(\Modx[R],\mathcal{E})$ is not embedded in $\Modx[R]$ unless $R$ is a semisimple ring. For instance, for $R=\mathbb{Z}$, the non-split short exact sequence $0\to\mathbb{Z}/2\mathbb{Z}\to\mathbb{Z}/4\mathbb{Z}\to\mathbb{Z}/2\mathbb{Z}\to0$ is exact in $\Modx[\mathbb{Z}]$ but does not belong to $\mathcal{E}$, and thus the inclusion does not reflect exactness. We will see a non-trivial example of this in Remark \ref{rem:no sumerge}. \end{rem} In what follows, we introduce a tilting theory on exact categories. We can find precedents of this in different contexts. As examples, we can cite Maurice Auslander and {\O}yvind Solberg's relative tilting theory \cite{auslander1993relative2}, or the generalization of such theory developed by Soud Khalifa Mohamed in \cite{mohamed2009relative}. In this section, we will approach the tilting theory recently developed by Bin Zhu and Xiao Zhuang for extriangulated categories in \cite{zhu2019tilting}. The notion of extriangulated category, introduced by Hiroyuki Nakaoka and Yann Palu in \cite{nakaoka2019extriangulated26}, is a simultaneous generalization of exact and triangulated categories.
For the convenience of the reader, in the next lines we will recall some results and definitions of \cite{zhu2019tilting}, translated into the context of exact categories.\newcommandx\pdim[2][usedefault, addprefix=\global, 1=M, 2={(\mathcal{A},\mathcal{E})}]{\operatorname{pdim}_{#2}#1} Let $\p[\mathcal{A}][\mathcal{E}]$ be an exact category, $n\in\mathbb{N}$ and $\mathcal{X}\subseteq\mathcal{A}$. An \textbf{$\mathcal{X}$-resolution} (of length $\leq n$) in $\p[\mathcal{A}][\mathcal{E}]$ of $A\in\mathcal{A}$ is a sequence of morphisms in $\mathcal{A}$ $$X_{n}\overset{d_{n}}{\rightarrow}X_{n-1}\rightarrow\cdots\rightarrow X_{1}\overset{d_{1}}{\rightarrow}X_{0}\overset{d_{0}}{\rightarrow}A,$$ with $X_{i}\in\mathcal{X}$ $\forall i\in[0,n]$, such that there is a family $\{ K_{i+1}\overset{g_{i}}{\rightarrow}X_{i}\overset{f_{i}}{\rightarrow}K_{i}\} _{i=0}^{n}$ in $\mathcal{E}$ with $K_{n+1}:=0,$ $K_n=X_n,$ $K_{0}:=A=:X_{-1},$ $g_{n-1}:=d_{n},$ $f_n:=1_{X_n},$ $f_{0}:=d_{0}$ and $d_{i}=g_{i-1}f_{i}$ $\forall i\in[1,n-1]$. We denote by $\mathcal{X}_{n,\mathcal{E}}^{\wedge}$ the class of all the objects $A\in\mathcal{A}$ admitting an $\mathcal{X}$-resolution in $\p[\mathcal{A}][\mathcal{E}]$ of length $\leq n$. We define the class $\mathcal{X}_{\mathcal{E}}^{\wedge}:=\bigcup_{n=0}^{\infty}\mathcal{X}_{n,\mathcal{E}}^{\wedge}$ and, for any $A\in\mathcal{X}_{\mathcal{E}}^{\wedge}$, the $\mathcal{X}$-resolution dimension of $A$ is $\resdimx{\mathcal{X},\mathcal{E}}A:=\min\left\{ n\in\mathbb{N}\,|\:A\in\mathcal{X}_{n,\mathcal{E}}^{\wedge}\right\} $. The notion of $\mathcal{X}$-coresolution, the classes $\mathcal{X}_{\mathcal{E},n}^{\vee}$ and $\mathcal{X}_{\mathcal{E}}^{\vee}$, and the $\mathcal{X}$-coresolution dimension $\coresdimr{\mathcal{X},\mathcal{E}}A{\,}$ of $A$ are defined dually.
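To unpack the definition in the case $n=2$: a sequence $X_{2}\overset{d_{2}}{\rightarrow}X_{1}\overset{d_{1}}{\rightarrow}X_{0}\overset{d_{0}}{\rightarrow}A$ is an $\mathcal{X}$-resolution of $A$ precisely when it is obtained by splicing two short exact sequences in $\mathcal{E}$,
\[
0\to X_{2}\overset{d_{2}}{\rightarrow}X_{1}\overset{f_{1}}{\rightarrow}K_{1}\to 0\quad\mbox{and}\quad 0\to K_{1}\overset{g_{0}}{\rightarrow}X_{0}\overset{d_{0}}{\rightarrow}A\to 0,
\]
with $d_{1}=g_{0}f_{1}$ and $X_{0},X_{1},X_{2}\in\mathcal{X}$. This is the usual way in which long exact resolutions are encoded inside an exact category, where only short exact sequences are available.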
\renewcommandx\Projx[2][usedefault, addprefix=\global, 1=R, 2={\,}]{\operatorname{Proj}_{#2}\left(#1\right)} \renewcommandx\Injx[2][usedefault, addprefix=\global, 1=R, 2={\,}]{\operatorname{Inj}_{#2}\left(#1\right)} Given $A,B\in\mathcal{A}$, an equivalence relation on $\mathcal{E}(B,A)$ is defined via a commutative diagram of the form\\ \begin{minipage}[t]{1\columnwidth}% \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=2cm,main node/.style=,x=1cm,y=1cm] \node (1) at (0,0) {$A$}; \node (2) at (1,0) {$X$}; \node (3) at (2,0) {$B$}; \node (1') at (0,-1) {$A$}; \node (2') at (1,-1) {$X'$}; \node (3') at (2,-1) {$B$}; \draw[->, thin] (1) to node {$$} (2); \draw[->, thin] (2) to node {$$} (3); \draw[->, thin] (1') to node {$$} (2'); \draw[->, thin] (2') to node {$$} (3'); \draw[-, double] (1) to node {$$} (1'); \draw[->, thin] (2) to node {$$} (2'); \draw[-, double] (3) to node {$$} (3'); \end{tikzpicture} \]% \end{minipage}\\ The equivalence classes of this relation are called \textbf{extensions}, and the class of extensions is denoted by $\Extx[1][\mathcal{A}][B][A]$ or $\Extx[1][(\mathcal{A},\mathcal{E})][B][A]$. It can be shown that the class $\Extx[1][\mathcal{A}][B][A]$ is an abelian group with the Baer sum, where $0$ is the equivalence class of the short exact sequence $0\to A\stackrel{\left(\begin{smallmatrix}1\\ 0 \end{smallmatrix}\right)}{\rightarrow}A\oplus B\stackrel{\left(\begin{smallmatrix}0 & 1\end{smallmatrix}\right)}{\rightarrow}B\to 0\mbox{.}$ \ Let $\p[\mathcal{A}][\mathcal{E}]$ be an exact category with enough $\mathcal{E}$-projectives and $\mathcal{E}$-injectives. Then, any $A\in\mathcal{A}$ admits short exact sequences $0\to A\rightarrow I\rightarrow A^{1}\to 0$ and $0\to A_{1}\rightarrow P\rightarrow A\to 0$ in $\mathcal{E},$ with $I\in\Injx[\mathcal{A}][\mathcal{E}]$ and $P\in\Projx[\mathcal{A}][\mathcal{E}]$. In this case, $A^{1}$ is called a \textbf{first cosyzygy} of $A$ and $A_{1}$ is called a \textbf{first syzygy} of $A$.
Define an \textbf{$n$-th cosyzygy} (\textbf{$n$-th syzygy}, respectively) by recursion as the cosyzygy (syzygy) of the $(n-1)$-th cosyzygy ($(n-1)$-th syzygy). The class of all the $n$-th cosyzygies of $A$ is denoted by $\Sigma^{n}(A)$, and the class of all the $n$-th syzygies of $A$ is denoted by $\Omega^{n}(A)$. \begin{rem} For an exact category $\p[\mathcal{A}][\mathcal{E}]$ with enough $\mathcal{E}$-projectives and $\mathcal{E}$-injectives, we have the following: \begin{itemize} \item[$\mathrm{(a)}$] \cite[Lemma 5.1]{liu2019hearts} Let $k\geq2$. Then \[ \Extx[1][\mathcal{A}][X][Y^{k-1}]\cong\Extx[1][\mathcal{A}][X_{k-1}][Y]\quad\forall\, X_{k-1}\in\Omega^{k-1}(X),\,\forall\, Y^{k-1}\in\Sigma^{k-1}(Y)\mbox{.} \] This group is called the $k$-th Ext group of $X$ and $Y,$ and it is denoted by $\Extx[k][\mathcal{A}][X][Y]$ or by $\Extx[k][(\mathcal{A},\mathcal{E})][X][Y]$. \item[$\mathrm{(b)}$] For $\mathcal{Z}\subseteq\mathcal{A},$ we consider the right orthogonal class \[ \mathcal{Z}^{\bot_{\mathcal{E}}}: =\left\{ A\in\mathcal{A}\;|\;\Extx[i][\mathcal{A}][Z][A]=0\quad\forall Z\in\mathcal{Z},\,\forall i>0\right\}. \] The left orthogonal class ${}^{\bot_{\mathcal{E}}}\mathcal{Z}$ is defined dually. \item[$\mathrm{(c)}$] For $A\in\mathcal{A}$ and $\mathcal{X}\subseteq\mathcal{A},$ we consider the $\mathcal{X}$-projective dimension of $A$ \[ \pdr[\mathcal{E},\mathcal{X}][A]:=\min\left\{ n\in\mathbb{N}\;|\;\Extx[i][\mathcal{A}][A][-]|_{\mathcal{X}}=0\quad\forall i>n\right\} \mbox{.} \] Given a class $\mathcal{T}\subseteq\mathcal{A}$, we define $ \pdr[\mathcal{E},\mathcal{X}][\mathcal{T}]:=\sup\left\{ \pdr[\mathcal{E},\mathcal{X}][T]\;|\;T\in\mathcal{T}\right\} \mbox{.} $ In case $\mathcal{X}=\mathcal{A}$, we set $\pdr[\mathcal{E}][A]:=\pdr[\mathcal{E},\mathcal{A}][A]$ and $\pdr[\mathcal{E}][\mathcal{T}]:=\pdr[\mathcal{E},\mathcal{A}][\mathcal{T}]$. \item[$\mathrm{(d)}$] \cite[Lemma 3]{zhu2019tilting} Let $X\in\mathcal{A}$.
Then $\pdr[\mathcal{E}][X]\leq n$ if, and only if, $X\in(\Projx[\mathcal{A}][\mathcal{E}]){}_{n}^{\wedge}$. \end{itemize} \end{rem} \begin{lem}\label{lem: lema notas p120} Let $\p[\mathcal{A}][\mathcal{E}]$ be an exact category with enough $\mathcal{E}$-projectives and $\mathcal{E}$-injectives. Then, for $\mathcal{X},\mathcal{Y}\subseteq\mathcal{A}$, the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] $\pdr[\mathcal{E},\mathcal{Y}][\mathcal{X}_{\mathcal{E}}^{\vee}]=\pdr[\mathcal{E},\mathcal{Y}][\mathcal{X}].$ \item[$\mathrm{(b)}$] $\mathcal{X}_{\mathcal{E}}^{\vee}\subseteq{}^{\bot_{\mathcal{E}}}\left(\mathcal{X}^{\bot_{\mathcal{E}}}\right)$. \end{itemize} \end{lem} \begin{proof} The proof of (a) is similar to the proof of \cite[Lemma 4.3]{parte1}. On the other hand, the proof of (b) follows from (a), in a similar way as in the proof of Lemma \ref{lem:inf2} (a). \end{proof} \begin{prop}\label{prop:proyectivos en cat exacta}\cite[Proposition 3.24]{nakaoka2019extriangulated26} For an exact category $\p[\mathcal{A}][\mathcal{E}],$ with enough $\mathcal{E}$-projectives and $\mathcal{E}$-injectives, and $P\in\mathcal{A},$ the following statements are equivalent. \begin{enumerate} \item[$\mathrm{(a)}$] $P\in\Projx[\mathcal{A}][\mathcal{E}].$ \item[$\mathrm{(b)}$] $\Extx[1][\mathcal{A}][P][A]=0\quad\forall A\in\mathcal{A}.$ \item[$\mathrm{(c)}$] $P\in{}{}^{\bot_{\mathcal{E}}}\mathcal{A}.$ \end{enumerate} \end{prop} \begin{prop}\cite[Fact 1.18, Proposition 5.2]{liu2019hearts}\label{prop: 2.141 sucesi=0000F3n exacta larga del ext en exactas} Let $\p[\mathcal{A}][\mathcal{E}]$ be an exact category with enough $\mathcal{E}$-projectives and $\mathcal{E}$-injectives, and let $0\to A\stackrel{\mu}{\rightarrow}B\stackrel{\lambda}{\rightarrow}C\to 0$ be a short exact sequence in $\mathcal{E}$. 
Then, for every $X\in\mathcal{A}$, we have the long exact sequence in $\mathrm{Ext}^i_\mathcal{A}(X,-)$\\ \begin{minipage}[t]{1\columnwidth}% \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=1.5cm,main node/.style=,x=3cm,y=1.3cm] \node[main node] (A0) at (0,0) {$0$}; \node[main node] (A1) at (1,0) {$\operatorname{Hom}_{\mathcal{A}} (X,A)$}; \node[main node] (A2) at (2,0) {$\operatorname{Hom}_{\mathcal{A}} (X,B)$}; \node[main node] (A3) at (3,0) {$\operatorname{Hom}_{\mathcal{A}} (X,C)$}; \node[main node] (B1) at (1,-1) {$\operatorname{Ext}_{\mathcal{A}} ^{1}(X,A)$}; \node[main node] (B2) at (2,-1) {$\cdots$}; \node[main node] (B3) at (3,-1) {$\operatorname{Ext}_{\mathcal{A}} ^{n-1}(X,C)$}; \node[main node] (C1) at (1,-2) {$\operatorname{Ext}_{\mathcal{A}} ^{n}(X,A)$}; \node[main node] (C2) at (2,-2) {$\operatorname{Ext}_{\mathcal{A}} ^{n}(X,B)$}; \node[main node] (C3) at (3,-2) {$\operatorname{Ext}_{\mathcal{A}} ^{n}(X,C)$}; \node[main node] (D1) at (1,-3) {$\operatorname{Ext}_{\mathcal{A}} ^{n+1}(X,A)$}; \node[main node] (D2) at (2,-3) {$\cdots$}; \node[main node] (F0) at (2,-.5) {$\partial _0$}; \node[main node] (F1) at (2,-1.5) {$\partial _{n-1}$}; \node[main node] (F2) at (2,-2.5) {$\partial _{n}$}; \draw[->, thin] (A0) to node {$$} (A1); \draw[->, thin] (A1) to node {$\scriptstyle (X, \mu )$} (A2); \draw[->, thin] (A2) to node {$\scriptstyle (X, \lambda )$} (A3); \draw[->, thin] (B1) to node {$\scriptstyle $} (B2); \draw[->, thin] (B2) to node {$\scriptstyle $} (B3); \draw[->, thin] (C1) to node {$\scriptstyle $} (C2); \draw[->, thin] (C2) to node {$\scriptstyle $} (C3); \draw[->, thin] (D1) to node {$\scriptstyle $} (D2); \draw[-, thin] (A3) ..controls (4,-.4).. (F0); \draw[->, thin] (F0) ..controls (0,-.6).. (B1); \draw[-, thin] (B3) ..controls (4,-1.4).. (F1); \draw[->, thin] (F1) ..controls (0,-1.6).. (C1); \draw[-, thin] (C3) ..controls (4,-2.4).. (F2); \draw[->, thin] (F2) ..controls (0,-2.6).. 
(D1); \end{tikzpicture} \]% \end{minipage} Dually, we also have the long exact sequence in $\mathrm{Ext}^i_\mathcal{A}(-,X).$ \end{prop} Let $\p[\mathcal{A}][\mathcal{E}]$ be an exact category, with enough $\mathcal{E}$-projectives and $\mathcal{E}$-injectives, and let $\omega,\mathcal{X}\subseteq\mathcal{A}$. We say that $\omega$ is a \textbf{relative $\mathcal{E}$-cogenerator} in $\mathcal{X}$ if $\omega\subseteq\mathcal{X}$ and every $X\in\mathcal{X}$ admits a short exact sequence $0\to X\rightarrow W\rightarrow X'\to 0$ in $\mathcal{E}$ such that $W\in\omega$ and $X'\in\mathcal{X}$. Moreover, $\omega$ is \textbf{$\mathcal{X}$-injective} if $\idr[\mathcal{E},\mathcal{X}][\omega]=0$. The \textbf{relative $\mathcal{E}$-generators} in $\mathcal{X}$ and the \textbf{$\mathcal{X}$-projectives} are defined dually. \begin{defn} Let $\p[\mathcal{A}][\mathcal{E}]$ be an exact category with enough $\mathcal{E}$-projectives and $\mathcal{E}$-injectives. A class $\mathcal{T}\subseteq\mathcal{A}$ is called \textbf{small $n$-tilting} in $\p[\mathcal{A}][\mathcal{E}]$ if the following conditions hold true. \begin{description} \item [{(TEC0)}] $\mathcal{T}=\addx[\mathcal{T}].$ \item [{(TEC1)}] $\pdr[\mathcal{E}][\mathcal{T}]\leq n.$ \item [{(TEC2)}] $\mathcal{T}\subseteq\mathcal{T}^{\bot_{\mathcal{E}}}.$ \item [{(TEC3)}] There is a class $\omega\subseteq\mathcal{T}_{\mathcal{E}}^{\vee}$ which is a relative $\mathcal{E}$-generator in $\mathcal{A}.$ \item [{(TEC4)}] $\mathcal{T}$ is precovering in $\mathcal{T}^{\bot_{\mathcal{E}}}.$ \end{description} An object $T\in\mathcal{A}$ is \textbf{small $n$-tilting in $(\mathcal{A},\mathcal{E})$} if $\addx[T]$ is a small $n$-tilting class in $\p[\mathcal{A}][\mathcal{E}]$. \end{defn} \begin{defn}\cite[Definition 7]{zhu2019tilting} Let $\p[\mathcal{A}][\mathcal{E}]$ be an exact category with enough $\mathcal{E}$-projectives and $\mathcal{E}$-injectives.
A class $\mathcal{T}\subseteq\mathcal{A}$ is called \textbf{Zhu-Zhuang $n$-tilting } if the following conditions hold true. \begin{description} \item [{(ZZT0)}] $\mathcal{T}=\addx[\mathcal{T}].$ \item [{(ZZT1)}] $\pdr[\mathcal{E}][\mathcal{T}]\leq n.$ \item [{(ZZT2)}] $\mathcal{T}$ is a relative $\mathcal{E}$-generator in $\mathcal{T}^{\bot_{\mathcal{E}}}$. \end{description} An object $T\in\mathcal{A}$ is called \textbf{Zhu-Zhuang $n$-tilting} if $\addx[T]$ is a Zhu-Zhuang $n$-tilting class. \end{defn} Let $\p[\mathcal{A}][\mathcal{E}]$ be an exact category with enough $\mathcal{E}$-projectives and $\mathcal{E}$-injectives, $\mathcal{T}\subseteq\mathcal{A}$, and let $n\geq0$. Following \cite[Section 4]{zhu2019tilting}, we denote by $\mbox{Pres}_{\mathcal{E}}^{n}(\mathcal{T})$ the class of all the objects $X\in\mathcal{A}$ admitting a family $\left\{0\to X_{i+1}\rightarrow T_{i}\rightarrow X_{i}\to 0\right\} _{i=1}^{n}$ of short exact sequences in $\mathcal{E}$, where $T_{i}\in\mathcal{T}$ $\forall i\in[1,n]$ and $X_{1}=X$. We will also use the notation $\mbox{Pres}_{(\mathcal{A},\mathcal{E})}^{n}(\mathcal{T}).$ \begin{thm}\cite[Theorem 1]{zhu2019tilting}\label{thm:teo exactas} Let $\p[\mathcal{A}][\mathcal{E}]$ be an exact category with enough $\mathcal{E}$-projectives and $\mathcal{E}$-injectives. Consider a class $\mathcal{T}=\addx[\mathcal{T}]\subseteq\mathcal{A}$ such that every object in $\mathcal{T}^{\bot_{\mathcal{E}}}$ admits a $\mathcal{T}$-precover. Then, $\mathcal{T}$ is a Zhu-Zhuang $n$-tilting class if and only if $\operatorname{Pres}_{\mathcal{E}}^{m}(\mathcal{T})=\mathcal{T}^{\bot_{\mathcal{E}}}$, where $m:=\max\{1,n\}$. \end{thm} \begin{prop}\cite[Remark 4(1)]{zhu2019tilting}\label{prop:zhu} Let $\p[\mathcal{A}][\mathcal{E}]$ be an exact category with enough $\mathcal{E}$-projectives and $\mathcal{E}$-injectives, and let $\mathcal{T}\subseteq\mathcal{A}$ be a Zhu-Zhuang $n$-tilting class. Then, the following statements hold true. 
\begin{itemize} \item[$\mathrm{(a)}$] $\mathcal{T}$ is a $\mathcal{T}^{\bot_{\mathcal{E}}}$-projective relative $\mathcal{E}$-generator in $\mathcal{T}^{\bot_{\mathcal{E}}}.$ \item[$\mathrm{(b)}$] $\Projx[\mathcal{A}][\mathcal{E}]\subseteq\mathcal{T}_{n,\mathcal{E}}^{\vee}.$ \item[$\mathrm{(c)}$] $\left(\mathcal{T}^{\bot_{\mathcal{E}}}\right)_{\mathcal{E}}^{\vee}=\mathcal{A}$. \end{itemize} \end{prop} \begin{lem}\label{lem: lema1 notas} For an idempotent complete exact category $\p[\mathcal{A}][\mathcal{E}]$ with enough $\mathcal{E}$-projectives and $\mathcal{E}$-injectives, $\mathcal{T}\subseteq\mathcal{A}$ and $\omega\subseteq\mathcal{T}_{\mathcal{E}}^{\vee}$ a relative $\mathcal{E}$-generator in $\mathcal{A}$, the following conditions are satisfied. \begin{itemize} \item[$\mathrm{(a)}$] $\mathcal{T}^{\bot_{\mathcal{E}}}\subseteq\Pres[\mathcal{T}][\mathcal{E}][1]$. \item[$\mathrm{(b)}$] If $\mathcal{T}\subseteq\mathcal{T}^{\bot_{\mathcal{E}}}$ and $\mathcal{T}$ is precovering in $\mathcal{T}^{\bot_{\mathcal{E}}}$, then $\mathcal{T}$ is a relative $\mathcal{E}$-generator in $\mathcal{T}^{\bot_{\mathcal{E}}}$. \end{itemize} \end{lem} \begin{proof} $ $ \begin{enumerate} \item Let $A\in\mathcal{T}^{\bot_{\mathcal{E}}}$. Since $\omega$ is a relative $\mathcal{E}$-generator in $\mathcal{A}$, there is a short exact sequence $\eta:\:0\to K\rightarrow W\overset{a}{\rightarrow}A\to 0$ in $\mathcal{E}$, with $W\in\omega$. Since $\omega\subseteq\mathcal{T}_{\mathcal{E}}^{\vee}$, there is a short exact sequence $0\to W\overset{b}{\rightarrow}B\rightarrow C\to 0$ in $\mathcal{E}$, with $B\in\mathcal{T}$ and $C\in\mathcal{T}_{\mathcal{E}}^{\vee}\subseteq{}^{\bot_{\mathcal{E}}}(\mathcal{T}^{\bot_{\mathcal{E}}})$ (see Lemma \ref{lem: lema notas p120}).
\\ \fbox{\begin{minipage}[t]{0.25\columnwidth}% \[ \begin{tikzpicture}[-,>=to,shorten >=1pt,auto,node distance=2cm,main node/.style=,x=1cm,y=1cm] \node (1) at (0,0) {$W$}; \node (2) at (1,0) {$B$}; \node (3) at (2,0) {$C$}; \node (01) at (0,1) {$K$}; \node (02) at (1,1) {$K$}; \node (1') at (0,-1) {$A$}; \node (2') at (1,-1) {$B'$}; \node (3') at (2,-1) {$C$}; \draw[->, thin] (1) to node {$b$} (2); \draw[->, thin] (2) to node {$$} (3); \draw[->, thin] (1') to [above] node {$t$} (2'); \draw[->, thin] (2') to [below] node {$$} (3'); \draw[->, thin] (1) to node {$a$} (1'); \draw[->, thin] (2) to node {$x$} (2'); \draw[-, double] (3) to node {$$} (3'); \draw[-, double] (01) to node {$$} (02); \draw[->, thin] (01) to node {$$} (1); \draw[->, thin] (02) to node {$$} (2); \end{tikzpicture} \]% \end{minipage}}\hfill{}% \begin{minipage}[t]{0.6\columnwidth}% By the push-out diagram of $a$ and $b$, the exact sequence $0\to A\rightarrow B'\rightarrow C\to 0$ in $\mathcal{E}$ splits since $A\in\mathcal{T}^{\bot_{\mathcal{E}}}$ and $C\in{}^{\bot_{\mathcal{E}}}(\mathcal{T}^{\bot_{\mathcal{E}}})$. Hence, there is a morphism $y:B'\rightarrow A$ such that $yt=1_{A}$. Now, since $x,y$ are admissible epis, it follows from \cite[Proposition B.1(ii)]{buhler2010exact} that $yx:B\rightarrow A$ is an admissible epi with $B\in\mathcal{T}$. % \end{minipage} \item Let $\mathcal{T}\subseteq\mathcal{T}^{\bot_{\mathcal{E}}}$ be precovering in $\mathcal{T}^{\bot_{\mathcal{E}}}$. Consider $X\in\mathcal{T}^{\bot_{\mathcal{E}}}$ and a $\mathcal{T}$-precover $g:T'\rightarrow X$. By (a), there is an exact sequence $0\to R\rightarrow T\stackrel{\alpha}{\rightarrow}X\to 0$ in $\mathcal{E}$, with $T\in\mathcal{T}$. Now, since $g$ is a $\mathcal{T}$-precover, there is $g':T\rightarrow T'$ such that $\alpha=gg'$. Then, by \cite[Proposition B.1(iii)]{buhler2010exact}, $g$ is an admissible epi. 
In particular, there is an exact sequence $\eta:\:0\to K\rightarrow T'\stackrel{g}{\rightarrow}X\to 0$ in $\mathcal{E}.$ Finally, since $T'\in\mathcal{T}\cap\mathcal{T}^{\bot_{\mathcal{E}}}$ and $g$ is a $\mathcal{T}$-precover, it follows by Proposition \ref{prop: 2.141 sucesi=0000F3n exacta larga del ext en exactas} that $K\in\mathcal{T}^{\bot_{\mathcal{E}}}$ and thus $\mathcal{T}$ is a relative $\mathcal{E}$-generator in $\mathcal{T}^{\bot_{\mathcal{E}}}$. \end{enumerate} \end{proof} The following notion is inspired by \cite[Section 3]{auslander1993relative2}. \begin{defn} \label{def: auslander-solberg tilting} Let $\p[][\mathcal{E}]$ be an exact category with enough $\mathcal{E}$-projectives and $\mathcal{E}$-injectives. A class $\mathcal{T}\subseteq\mathcal{A}$ is \textbf{Auslander-Solberg $n$-tilting} in $\p[][\mathcal{E}]$ if the following conditions hold true. \begin{description} \item [{(AST0)}] $\mathcal{T}=\addx[\mathcal{T}].$ \item [{(AST1)}] $\pdr[\mathcal{E}][\mathcal{T}]\leq n.$ \item [{(AST2)}] $\mathcal{T}\subseteq\mathcal{T}^{\bot_{\mathcal{E}}}.$ \item [{(AST3)}] $\operatorname{Proj}_{\mathcal{E}}(\mathcal{A})\subseteq\mathcal{T}_{\mathcal{E}}^{\vee}$. \end{description} An object $T\in\mathcal{A}$ is Auslander-Solberg $n$-tilting in $\p[\mathcal{A}][\mathcal{E}]$ if the class $\addx[T]$ is Auslander-Solberg $n$-tilting in $\p[][\mathcal{E}]$. \end{defn} \begin{thm}\label{thm: teo2 notas} Let $\p[][\mathcal{E}]$ be an idempotent complete exact category with enough $\mathcal{E}$-projectives and $\mathcal{E}$-injectives, and let $\mathcal{T}\subseteq\mathcal{A}$. Then, the following statements are equivalent.
\begin{itemize} \item[$\mathrm{(a)}$] $\mathcal{T}$ is Zhu-Zhuang $n$-tilting in $\p[\mathcal{A}][\mathcal{E}]$, with $\mathcal{T}$ precovering in $\mathcal{T}^{\bot_{\mathcal{E}}}.$ \item[$\mathrm{(b)}$] $\mathcal{T}$ is small $n$-tilting in $\p[\mathcal{A}][\mathcal{E}].$ \item[$\mathrm{(c)}$] $\mathcal{T}$ is Auslander-Solberg $n$-tilting in $(\mathcal{A},\mathcal{E})$, with $\mathcal{T}$ precovering in $\mathcal{T}^{\bot_{\mathcal{E}}}$. \end{itemize} Furthermore, if one of the above conditions holds true, then $\operatorname{Proj}_{\mathcal{E}}(\mathcal{A})\subseteq\mathcal{T}_{n,\mathcal{E}}^{\vee}$. \end{thm} \begin{proof} It follows from Proposition \ref{prop:zhu} and Lemma \ref{lem: lema1 notas} (b).\end{proof} \begin{cor}\label{cor:coro p 131} Let $\p[\mathcal{A}][\mathcal{E}]$ be an idempotent complete exact category with enough $\mathcal{E}$-projectives and $\mathcal{E}$-injectives, and let $\mathcal{T}\subseteq\mathcal{A}$. Then, the following statements are equivalent. \begin{itemize} \item[$\mathrm{(a)}$] $\mathcal{T}$ is small $n$-tilting in $\p[\mathcal{A}][\mathcal{E}].$ \item[$\mathrm{(b)}$] $\mathcal{T}$ is precovering in $\mathcal{T}^{\bot_{\mathcal{E}}}$ and $\Pres[\mathcal{T}][\mathcal{E}][m]=\mathcal{T}^{\bot_{\mathcal{E}}}$, with $m:=\max\left\{ 1,n\right\} $. \end{itemize} \end{cor} \begin{proof} It follows from Theorems \ref{thm: teo2 notas} and \ref{thm:teo exactas}.\end{proof} \begin{lem} \label{rem:pres gen }Let $\p[\mathcal{A}][\mathcal{E}]$ be an idempotent complete exact category with enough $\mathcal{E}$-projectives and $\mathcal{E}$-injectives, embedded in an abelian category $\mathcal{C}$, via $i:\mathcal{A}\rightarrow\mathcal{C}$. 
Then, for $\mathcal{T}\subseteq\mathcal{A}$, we have $\addx[i(\mathcal{T})]=[i(\addx[\mathcal{T}])]\;\mbox{ and }\;\operatorname{gen}_{n}^{[i(\mathcal{A})]}(i(\mathcal{T}))\cap[i(\mathcal{A})]=[i\left(\operatorname{Pres}_{\mathcal{E}}^{n}(\addx[\mathcal{T}])\right)]\mbox{.}$\end{lem} \begin{proof} First, note that $\addx[i(\mathcal{T})]\supseteq i(\addx[\mathcal{T}])$, since $i$ is additive. Moreover, using that $\mathcal{A}$ is idempotent complete and Proposition \ref{prop:completa por idempotentes sii cerrada por sumandos}, we get $[i(\mathcal{A})]=\smdx[[i(\mathcal{A})]].$ Hence $\addx[i(\mathcal{T})]\subseteq\left[i(\addx[\mathcal{T}])\right]$ and thus $\addx[i(\mathcal{T})]=[i(\addx[\mathcal{T}])]$. In particular, $\addx[i(\mathcal{T})]\cap[i(\mathcal{A})]=[i(\addx[\mathcal{T}])]$. Let $X\in\operatorname{gen}_{n}^{[i(\mathcal{A})]}(i(\mathcal{T}))\cap[i(\mathcal{A})]$. Then, there is $A\in\mathcal{A}$ with $X\cong i(A)$ and an exact sequence $\eta:\;0\rightarrow K_{n}\rightarrow T_{n}\stackrel{f_{n}}{\rightarrow}...\stackrel{f_{2}}{\rightarrow}T_{1}\stackrel{f_{1}}{\rightarrow}i(A)\rightarrow0\mbox{,}$ with $K_{k}:=\Kerx[f_{k}]\in[i(\mathcal{A})]$ and $T_{k}\in[i(\addx[\mathcal{T}])]$ $\forall k\in[1,n]$. Now, since $i$ reflects exactness, $\eta$ induces a family of exact sequences $\left\{ 0\to i^{-1}(K_{i})\rightarrow i^{-1}(T_{i})\rightarrow i^{-1}(K_{i-1})\to 0\right\} _{i=1}^{n}$ in $\mathcal{E},$ where $i^{-1}\left(T_{k}\right)\in\addx[\mathcal{T}]$ $\forall k\in[1,n]$ and $i^{-1}(K_{0}):=A$. Therefore, $A\in\operatorname{Pres}_{\mathcal{E}}^{n}(\addx[\mathcal{T}])$.
\ Let $X\in\left[i\left(\operatorname{Pres}_{\mathcal{E}}^{n}(\addx[\mathcal{T}])\right)\right].$ Then, there is $A\in\operatorname{Pres}_{\mathcal{E}}^{n}(\addx[\mathcal{T}])$ such that $i(A)\cong X,$ and a family $\left\{ 0\to X_{k}\rightarrow T_{k}\rightarrow X_{k-1}\to 0\right\} _{k=1}^{n}$ of exact sequences in $\mathcal{E}$, where $T_{k}\in\addx[\mathcal{T}]$ $\forall k\in[1,n]$ and $X_{0}:=A$. By applying the functor $i$ to this family, we get the exact sequence $0\rightarrow i(X_{n})\rightarrow i(T_{n})\stackrel{f_{n}}{\rightarrow}...\stackrel{f_{2}}{\rightarrow}i(T_{1})\stackrel{f_{1}}{\rightarrow}i(A)\rightarrow0$ in $\mathcal{C}$, where $\Kerx[f_{k}]\in[i(\mathcal{A})]$ and $i(T_{k})\in i\left(\addx[\mathcal{T}]\right)\subseteq\addx[i(\mathcal{T})]$ $\forall k\in[1,n]$. Therefore $X\in\operatorname{gen}_{n}^{[i(\mathcal{A})]}(i(\mathcal{T}))$ since $X\cong i(A).$ \end{proof} \begin{lem}\label{lem:proy inj exact} Let $\p[\mathcal{A}][\mathcal{E}]$ be an exact category with enough $\mathcal{E}$-projectives and $\mathcal{E}$-injectives embedded in an abelian category $\mathcal{C}$, via $i:\mathcal{A}\rightarrow\mathcal{C}$, $A,B\in\mathcal{A}$, $X\in\mathcal{C}$, and let $n\geq1$. Then, the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] $\Extx[1][\mathcal{A}][X][Y]\cong\Extx[1][\mathcal{C}][i(X)][i(Y)]$, $\overline{\eta}\mapsto\overline{i(\eta)}$, $\forall X,Y\in\mathcal{A}$. \item[$\mathrm{(b)}$] Let $i(\Injx[\mathcal{A}][\mathcal{E}])\subseteq\mathcal{C}^{\bot}$. Then, any $\overline{\eta}\in\Extx[1][\mathcal{C}][X][i(A)]$ admits a morphism $\alpha:X\rightarrow i(A')$, with $A'\in\mathcal{A}$, and an extension $\overline{\eta'}\in\Extx[1][\mathcal{C}][i(A')][i(A)]$ such that $\overline{\eta}=\overline{\eta'}\alpha$.
Furthermore, any $\overline{\eta}\in\Extx[n][][i(B)][i(A)]$ admits a decomposition into $1$-extensions $\overline{\eta}=\overline{\eta'_{1}}\cdots\overline{\eta'_{n}}$ such that $\eta_{k}\in i(\mathcal{E})\;\forall k\in[1,n]$. \item[$\mathrm{(c)}$] Let $i(\Projx[\mathcal{A}][\mathcal{E}])\subseteq{}^{\bot}\mathcal{C}$. Then, any $\overline{\eta}\in\Extx[1][\mathcal{C}][i(A)][X]$ admits a morphism $\beta:i(A')\rightarrow X$, with $A'\in\mathcal{A}$, and an extension $\overline{\eta'}\in\Extx[1][\mathcal{C}][i(A)][i(A')]$ such that $\overline{\eta}=\beta\overline{\eta'}$. Furthermore, any $\overline{\eta}\in\Extx[n][][i(B)][i(A)]$ admits a decomposition into $1$-extensions $\overline{\eta}=\overline{\eta'_{1}}\cdots\overline{\eta'_{n}}$ such that $\eta_{k}\in i(\mathcal{E})\;\forall k\in[1,n]$. \end{itemize} \end{lem} \begin{proof} (a) Let $X,Y\in\mathcal{A}$. Since $i$ is an exact functor, we have the map \[ \psi:\Extx[1][\mathcal{A}][X][Y]\rightarrow\Extx[1][\mathcal{C}][i(X)][i(Y)],\:\overline{\eta}\mapsto\overline{i(\eta)}\mbox{.} \] Moreover, since $i(\mathcal{A})$ is closed under extensions and $i$ is a full functor that reflects exactness, it follows that $\psi$ is surjective. Now, if there is $\eta=(\alpha,\beta)\in\mathcal{E}$ such that $\psi(\overline{\eta})=0$, then $i(\alpha)$ is a splitting monomorphism. Thus, $\alpha$ is a split-mono and $\overline{\eta}=0$. Therefore, $\psi$ is bijective. \ (b) Let $\eta:\;\suc[i(A)][Y][X][f][g]$ be an exact sequence in $\mathcal{C}$. Since $(\mathcal{A},\mathcal{E})$ has enough injectives and $i$ is exact, there is a short exact sequence $\eta':\;\suc[i(A)][i(I)][i(A')][f'][g']$ with $I\in\Injx[\mathcal{A}][\mathcal{E}]$ and $A'\in\mathcal{A}$. Then, using that $i(I)\in\mathcal{C}^{\bot}$ together with the universal property of the cokernel, we obtain $\alpha:X\rightarrow i(A')$ such that $\overline{\eta}=\overline{\eta'}\alpha$. Finally, the second statement in (b) follows by repeating the same arguments recursively.
\ (c) It can be proved similarly to (b). \end{proof} In the above lemma, the conditions $i(\operatorname{Proj}_{\mathcal{E}}(\mathcal{A}))\subseteq{}^{\bot}\mathcal{C}$ and $i(\operatorname{Inj}_{\mathcal{E}}(\mathcal{A}))\subseteq\mathcal{C}^{\bot}$ appeared. It is worth mentioning that there are examples of categories satisfying such conditions. Indeed, in \cite[Chapter 2. Section 4. Lemma 4, p.357]{gabriel1962categories}, Pierre Gabriel proved the following fact. Let $\p[\mathcal{A}][\mathcal{E}]$ be the exact category given by an abelian noetherian category $\mathcal{A}$ and the class $\mathcal{E}$ of all the short exact sequences in $\mathcal{A}$. We know that $(\mathcal{A},\mathcal{E})$ is embedded in $\mathcal{C}:=\operatorname{Lex}(\mathcal{A},\mathcal{E})$, via the Yoneda functor $i$. In this context, a functor $F\in\mathcal{C}$ is injective if, and only if, it is an exact functor. Therefore, this is an example where $i(\operatorname{Inj}_{\mathcal{E}}(\mathcal{A}))\subseteq\mathcal{C}^{\bot}$. \begin{lem}\label{lem:rel exact} Let $\p[\mathcal{A}][\mathcal{E}]$ be an exact category with enough $\mathcal{E}$-projectives and $\mathcal{E}$-injectives embedded in an abelian category $\mathcal{C}$, via $i:\mathcal{A}\rightarrow\mathcal{C}$, and let $\mathcal{Z}\subseteq\mathcal{A}$. If $i(\Projx[\mathcal{A}][\mathcal{E}])\subseteq{}{}^{\bot}\mathcal{C}$, then the following statements hold true.
\begin{itemize} \item[$\mathrm{(a)}$] $i(\Projx[\mathcal{A}][\mathcal{E}])$ is an $i(\mathcal{A})$-projective relative generator in $i(\mathcal{A}).$ \item[$\mathrm{(b)}$] $i(\Injx[\mathcal{A}][\mathcal{E}])$ is an $i(\mathcal{A})$-injective relative cogenerator in $i(\mathcal{A}).$ \item[$\mathrm{(c)}$] $\Extx[k][\mathcal{C}][i(A)][i(A')]\cong\Extx[k][\mathcal{A}][A][A']\,\forall A,A'\in\mathcal{A},\,\forall k>0.$ \item[$\mathrm{(d)}$] $i(\mathcal{Z}^{\bot_{\mathcal{E}}})=i(\mathcal{Z})^{\bot}\cap i(\mathcal{A})$ and $i({}^{\bot_{\mathcal{E}}}\mathcal{Z})={}^{\bot}i(\mathcal{Z})\cap i(\mathcal{A})$. \end{itemize} \end{lem} \begin{proof} The proof of (a) is straightforward, and (b) and (d) follow from (c). \ Let us show (c). Let $A,A'\in\mathcal{A}$ and $k>0.$ We have already shown the bijection for $k=1$ in Lemma \ref{lem:proy inj exact} (a). \ Let $k\geq 2.$ Using (a), we can construct the exact sequence in $\mathcal{C}$ \[ \suc[i(A_{k-1})][i(P_{k-1})\xrightarrow{f_{k-2}}\cdots\xrightarrow{f_{1}}i(P_{1})][i(A)], \] where $i(P_{j})\in i(\operatorname{Proj}_{\mathcal{E}}(\mathcal{A}))\subseteq{}^{\bot}\mathcal{C}\:\forall j\in[1,k-1]$, $A_{k-1}\in\mathcal{A}$ and $i(A_{j}):=\im[f_{j}]\in i(\mathcal{A})\;\forall j\in[1,k-2].$ Thus, by the shifting lemma and Lemma \ref{lem:proy inj exact} (a), we get \[ \mathrm{Ext}^k_\mathcal{C}(i(A),i(A'))\simeq\mathrm{Ext}^1_\mathcal{C}(i(A_{k-1}),i(A'))\simeq\mathrm{Ext}^1_\mathcal{A}(A_{k-1},A')\simeq\mathrm{Ext}^k_\mathcal{A}(A,A'). \] \end{proof} \begin{lem}\label{lem:notas p17} Let $F:\mathcal{A}\rightarrow\mathcal{C}$ be a full and faithful functor, and let $\mathcal{X},\mathcal{Y}\subseteq\mathcal{A}$ be such that $\left[\mathcal{X}\right]=\mathcal{X}$ and $\left[\mathcal{Y}\right]=\mathcal{Y}$. If $[F(\mathcal{X})]=[F(\mathcal{Y})]$, then $\mathcal{X}=\mathcal{Y}$. \end{lem} \begin{proof} It is straightforward. 
\end{proof} \begin{thm}\label{thm:tilting exacto} Let $\p[\mathcal{A}][\mathcal{E}]$ be an idempotent complete exact category with enough $\mathcal{E}$-projectives and $\mathcal{E}$-injectives embedded in an abelian category $\mathcal{C}$, via $i:\mathcal{A}\rightarrow\mathcal{C}$, such that $i(\Projx[\mathcal{A}][\mathcal{E}])\subseteq{}^{\bot}\mathcal{C}$, and let $\mathcal{T}\subseteq\mathcal{A}$. Then, the following statements are equivalent. \begin{itemize} \item[$\mathrm{(a)}$] The class $\mathcal{T}$ is small $n$-tilting in $\p[\mathcal{A}][\mathcal{E}]$. \item[$\mathrm{(b)}$] The class $\left[i(\mathcal{T})\right]$ is small $n$-$i(\mathcal{A})$-tilting in $\mathcal{C}$. \end{itemize} \end{thm} \begin{proof} (a) $\Rightarrow$ (b) Let us verify the conditions for $[i(\mathcal{T})]$ to be $n$-$i(\mathcal{A})$-tilting in $\mathcal{C}$. Condition (T0) follows from Lemma \ref{rem:pres gen }; (T1) follows from Lemma \ref{lem:rel exact} (c) and (TEC1); (T2) follows from (TEC2) and Lemma \ref{lem:rel exact} (d); (T3) follows straightforwardly from (TEC3); (T4) follows from Lemma \ref{lem:rel exact} (b) and Lemma \ref{lem:rel exact} (d); and finally, (T5) follows from (TEC4) and Lemma \ref{lem:rel exact} (d). \ (b) $\Rightarrow$ (a) By Theorem \ref{thm: teo2 notas}, it is enough to show that $\mathcal{T}$ is Zhu-Zhuang $n$-tilting in $\p[\mathcal{A}][\mathcal{E}]$ and that $\mathcal{T}$ is precovering in $\mathcal{T}^{\bot_{\mathcal{E}}}$. Let us show that $\mathcal{T}$ is precovering in $\mathcal{T}^{\bot_{\mathcal{E}}}$. By (T5), every $X\in\left[i(\mathcal{T})\right]^{\bot}\cap\left[i(\mathcal{A})\right]$ admits an $\left[i(\mathcal{T})\right]$-precover. Then, since $i$ is full and $\left[i(\mathcal{T})\right]^{\bot}\cap\left[i(\mathcal{A})\right]=\left[i(\mathcal{T}^{\bot_{\mathcal{E}}})\right]$ by Lemma \ref{lem:rel exact}(d), any object of $\mathcal{T}^{\bot_{\mathcal{E}}}$ admits a $\mathcal{T}$-precover. Let $m:=\max\{1,n\}$. 
By Proposition \ref{prop:primera generalizacion-1}, Lemma \ref{rem:pres gen } and Lemma \ref{lem:rel exact}(d), \[ \left[i(\operatorname{Pres}_{\mathcal{E}}^{m}(\mathcal{T}))\right]=\operatorname{gen}_{m}^{\left[i(\mathcal{A})\right]}(i(\mathcal{T}))\cap\left[i(\mathcal{A})\right]=i(\mathcal{T}^{\bot})\cap\left[i(\mathcal{A})\right]=[i(\mathcal{T})^{\bot}\cap i(\mathcal{A})]=\left[i(\mathcal{T}^{\bot_{\mathcal{E}}})\right]. \] Hence, $\mathcal{T}$ is Zhu-Zhuang $n$-tilting in $(\mathcal{A},\mathcal{E})$ by Lemma \ref{lem:notas p17} and Theorem \ref{thm:teo exactas}.\end{proof} \begin{lem}\label{lem:2.177'}Let $(\mathcal{A},\mathcal{E})$ be a skeletally small exact category with enough $\mathcal{E}$-projectives, and let $\mathcal{P}:=\operatorname{Proj}_{\mathcal{E}}(\mathcal{A})$. Then, $i:\mathcal{A}\rightarrow\Modx[\mathcal{P}^{op}]\mbox{, }X\mapsto\Homx[\mathcal{A}][-][X]|_{\mathcal{P}}\mbox{,}$ is an additive, faithful, full and exact functor that reflects exactness and such that $i(\mathcal{P})\subseteq\Projx[\operatorname{Mod}(\mathcal{P}^{op})]$.\end{lem} \begin{proof} It follows from \cite[Proposition 2.1]{enomoto2017classifying}.\end{proof} \begin{cor} \label{cor:2.177''} Let $\p[\mathcal{A}][\mathcal{E}]$ be an idempotent complete, skeletally small exact category with enough $\mathcal{E}$-projectives and enough $\mathcal{E}$-injectives. Then, by using the functor $i:\mathcal{A}\rightarrow\Modx[\mathcal{P}^{op}]$ given in Lemma \ref{lem:2.177'}, the following conditions are equivalent for a class $\mathcal{T}\subseteq\mathcal{A}.$ \begin{itemize} \item[$\mathrm{(a)}$] $\mathcal{T}$ is small $n$-tilting in $\p[\mathcal{A}][\mathcal{E}].$ \item[$\mathrm{(b)}$] $\left[i(\mathcal{T})\right]$ is small $n$-$\left[i(\mathcal{A})\right]$-tilting in $\Modx[\mathcal{P}^{op}]$. \end{itemize} \end{cor} \begin{proof} It follows from Lemma \ref{lem:2.177'} and Theorem \ref{thm:tilting exacto}. \end{proof} \subsection{S. K. 
Mohamed's relative tilting theory} In \cite{mohamed2009relative}, Soud Khalifa Mohamed developed a relative tilting theory inspired by the work of Maurice Auslander and {\O}yvind Solberg in \cite{auslander1993relative2}. In this section, we will review the main aspects of his work and characterize his tilting objects in terms of $n$-$\mathcal{X}$-tilting theory. We recall that, in the context of Artin algebras, a class $\mathcal{C}\subseteq\modd[\Lambda]$ is functorially finite if it is a precovering and preenveloping class in $\modd[\Lambda]$. \begin{prop}\cite{auslander1993relative, mohamed2009relative} \label{prop:muhamed} Let $\Lambda$ be an Artin algebra, $\mathcal{C}\subseteq\modd[\Lambda]$ be functorially finite and closed under extensions, and let $\mathcal{X}=\addx[\mathcal{X}]$ be precovering and a relative generator in $\mathcal{C}$. Consider the class $\mathcal{E}_{\mathcal{X}}$ of all the short exact sequences $\suc[A][B][C][\,][f]$ in $\modd[\Lambda]$ such that $\Homx[\Lambda][X][f]$ is surjective $\forall X\in\mathcal{X}$. Then, the following statements hold true. 
\begin{itemize} \item[$\mathrm{(a)}$] $\overline{\mathcal{E}_{\mathcal{X}}}:=\left\{ \overline{\eta}\,|\:\eta\in\mathcal{E}_{\mathcal{X}}\right\} $ is an additive subfunctor of $\Extx[1][\Lambda][-][-].$ \item[$\mathrm{(b)}$] For any exact sequence $\suc[A][B][C][\,][f]$ in $\mathcal{E}_{\mathcal{X}}$ with $B,C\in\mathcal{C}$, we have that $A\in\mathcal{C}.$ \item[$\mathrm{(c)}$] The pair $(\mathcal{C},\mathcal{E}_{\mathcal{X}}^{\mathcal{C}})$ is an exact category, where $\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}:=\left\{ \eta\in\mathcal{E}_{\mathcal{X}}(X,Y)\,|\:X,Y\in\mathcal{C}\right\}.$ \item[$\mathrm{(d)}$] If $\mathcal{I}_{\mathcal{C}}(\mathcal{E}_{\mathcal{X}}):=\left\{ X\in\mathcal{C}\,|\:\Homx[\Lambda][\eta][X]\mbox{ is exact }\forall\eta\in\mathcal{E}_{\mathcal{X}}\right\} $ is preenveloping in $\mathcal{C}$, then $(\mathcal{C},\mathcal{E}_{\mathcal{X}}^{\mathcal{C}})$ has enough $\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}$-injectives. \item[$\mathrm{(e)}$] The exact category $(\mathcal{C},\mathcal{E}_{\mathcal{X}}^{\mathcal{C}})$ has enough $\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}$-projectives and $\mathcal{X}=\Projx[\mathcal{C}][\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}]$. \end{itemize} \end{prop} \begin{proof} Item (a) can be found in \cite[Proposition 1.7]{auslander1993relative}, (b) in \cite[Proposition 2.2]{mohamed2009relative}, (d) in \cite[Proposition 2.4]{mohamed2009relative}, and (e) can be shown by using that $\mathcal{X}$ is precovering and a relative generator in $\mathcal{C}$. \ Let us prove (c). We need to verify the conditions for the pair $(\mathcal{C},\mathcal{E}_{\mathcal{X}}^{\mathcal{C}})$ to be an exact category. Indeed, the condition (E0) is trivial. For (E1), it is straightforward to prove that the composition of admissible epis is an admissible epi, and the same can be shown for admissible monos \cite[Chapter VIII. Section 4. Lemma 1]{maclane}. Finally, (E2) follows from (a), see the details in \cite[p.2998]{auslander1993relative}. 
\end{proof} \begin{defn} \cite[Section 4]{mohamed2009relative}\label{def:tilting mohamed} Let $\Lambda$ be an Artin algebra. A {\bf Mohamed context} in $\modd[\Lambda]$ is a pair $(\mathcal{C},\mathcal{X})$ of classes of objects in $\modd[\Lambda]$ satisfying the following conditions. \begin{description} \item [{(MC1)}] $\mathcal{C}$ is functorially finite and closed under extensions. \item [{(MC2)}] $\mathcal{X}=\addx[\mathcal{X}]$ is precovering and a relative generator in $\mathcal{C}.$ \item [{(MC3)}] $\mathcal{I}_{\mathcal{C}}(\mathcal{E}_{\mathcal{X}})$ is preenveloping in $\mathcal{C}$. \end{description} \end{defn} The following result, due to Maurice Auslander and Sverre Olaf Smal{\o}, will be useful for our purpose. Recall that a class $\mathcal{A}\subseteq\modd[\Lambda]$, with $\Lambda$ an Artin $k$-algebra, is \textbf{of finite type} if $\addx[\mathcal{A}]$ has a finite number of iso-classes of indecomposable $\Lambda$-modules. \begin{prop}\cite[Proposition 4.2]{auslander1980preprojective}\label{prop:finito es preenvolvente y precubriente} Let $\Lambda$ be an Artin algebra and let $\mathcal{A}\subseteq\modd[\Lambda]$ be a class of finite type. Then, $\mathcal{A}$ is functorially finite in $\modd[\Lambda]$. In particular, $\addx[X]$ is precovering and preenveloping in $\modd[\Lambda]$ $\forall X\in\modd[\Lambda]$. \end{prop} \begin{lem}\cite[Proposition 2.1, Proposition 2.8]{enomoto2017classifying}\label{lem:notas p17 enomoto} Let $\p[\mathcal{A}][\mathcal{E}]$ be a skeletally small exact category with enough $\mathcal{E}$-projectives, $\mathcal{P}:=\operatorname{Proj}_{\mathcal{E}}(\mathcal{A})$, and let $\operatorname{proj}(\mathcal{P}^{op})$ be the class of all the finitely generated projective objects in $\Modx[\mathcal{P}^{op}]$. Then, the following statements hold true. 
\begin{itemize} \item[$\mathrm{(a)}$] The functor $i:\mathcal{A}\rightarrow\Modx[\mathcal{P}^{op}]$, $A\mapsto\Homx[\mathcal{A}][-][A]|_{\mathcal{P}}$, is faithful, full, exact and reflects exactness. \item[$\mathrm{(b)}$] $[i(\mathcal{P})]\subseteq\Projx[\operatorname{Mod}(\mathcal{P}^{op})]$. \item[$\mathrm{(c)}$] $\Extx[k][\mathcal{A}][X][Y]\simeq\Extx[k][\operatorname{Mod}(\mathcal{P}^{op})][i(X)][i(Y)]$ $\forall\, k\geq1,$ $\forall\,X,Y\in\mathcal{A}.$ \item[$\mathrm{(d)}$] $\projx[\mathcal{P}^{op}]=\addx[i(\mathcal{A})]$. \end{itemize} \end{lem} \begin{rem}\label{rem:no sumerge} Let $\Lambda$ be an Artin algebra and let $\p[\mathcal{C}][\mathcal{X}]$ be a Mohamed context in $\modd[\Lambda]$. \begin{enumerate} \item [(1)] By Proposition \ref{prop:muhamed}, $(\mathcal{C},\mathcal{E}_{\mathcal{X}}^{\mathcal{C}})$ is a skeletally small exact category with enough $\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}$-projectives and $\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}$-injectives, where $\operatorname{Proj}_{\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}}(\mathcal{C})=\mathcal{X}$. \item [(2)] In general, the inclusion functor $(\mathcal{C},\mathcal{E}_{\mathcal{X}}^{\mathcal{C}})\rightarrow\modd[\Lambda],$ induced by the inclusion $\mathcal{C}\subseteq\modd[\Lambda],$ does not reflect exactness. Indeed, consider $\Lambda:=\mathbb{R}[x]/\left\langle x^{4}\right\rangle $, $M={}_{\Lambda}\Lambda$, $N:=x^{2}M$ and $K:=N/x^{2}N$. Let $\mathcal{C}:=\modd[\Lambda]$ and let $\mathcal{X}:=\addx[M\oplus K]$. By Proposition \ref{prop:finito es preenvolvente y precubriente}, it follows that $(\mathcal{C},\mathcal{X})$ is a Mohamed context in $\modd[\Lambda]$. Note that the exact sequence $\suc[N][M][K][\mbox{ }][\,]$ does not belong to $\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}$ since it does not split (indeed, since $K\in\mathcal{X}$, any sequence in $\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}$ ending in $K$ splits). 
Therefore, $(\mathcal{C},\mathcal{X})$ is not embedded in $\modd[\Lambda],$ via the natural inclusion $\mathcal{C}\subseteq\modd[\Lambda].$ \end{enumerate} \end{rem} \begin{lem}\label{lem:cerrado por sumandos exact} Let $\Lambda$ be an Artin algebra, $(\mathcal{C},\mathcal{X})$ be a Mohamed context in $\mathrm{mod}(\Lambda),$ $\mathcal{P}:=\operatorname{Proj}_{\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}}(\mathcal{C})=\mathcal{X},$ and let $i:\mathcal{C}\rightarrow\Modx[\mathcal{P}^{op}]$, $C\mapsto\Homx[\Lambda][-][C]|_{\mathcal{P}}$. Then, the following statements are equivalent. \begin{itemize} \item[$\mathrm{(a)}$] $\mathcal{C}=\smdx[\mathcal{C}]$ in $\modd[\Lambda].$ \item[$\mathrm{(b)}$] $\mathcal{C}$ is idempotent complete. \item[$\mathrm{(c)}$] $i(\mathcal{C})=\smdx[i(\mathcal{C})]$ in $\Modx[\mathcal{P}^{op}]$. \end{itemize} \end{lem} \begin{proof} The equivalence between (b) and (c) follows from Proposition \ref{prop:completa por idempotentes sii cerrada por sumandos}. Moreover, since the idempotents in $\modd[\Lambda]$ split, we get that (a) $\Rightarrow$ (b) holds true. Finally, by using Remark \ref{rem:idempotentes en categorias exactas} (c), we can obtain (a) from (b). \end{proof} \begin{thm}\label{prop:mohamed vs n-X-tilting}\label{thm:mohammed vs n-X-tilting}\label{thm:mohammed vs n-X-tilting-1} Let $\Lambda$ be an Artin algebra and let $(\mathcal{C},\mathcal{X})$ be a Mohamed context in $\modd[\Lambda]$. Consider the exact category $(\mathcal{C},\mathcal{E}_{\mathcal{X}}^{\mathcal{C}})$, $\mathcal{P}:=\operatorname{Proj}_{\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}}(\mathcal{C})=\mathcal{X}$ and the embedding $i:\mathcal{C}\rightarrow\Modx[\mathcal{P}^{op}]$, $C\mapsto\Homx[\Lambda][-][C]|_{\mathcal{P}}$. Then, for a class $\mathcal{T}\subseteq\mathcal{C}$, the following statements are equivalent. 
\begin{itemize} \item[$\mathrm{(a)}$] $\mathcal{T}$ is Auslander-Solberg $n$-tilting in $\p[\mathcal{C}][\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}]$ and $\mathcal{T}$ is precovering in $\mathcal{T}^{\bot_{\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}}}$. \item[$\mathrm{(b)}$] $\left[i(\mathcal{T})\right]$ is small $n$-$\left[i(\mathcal{C})\right]$-tilting in $\operatorname{Mod}(\mathcal{P}^{op})$. \item[$\mathrm{(c)}$] $\mathcal{T}$ is Zhu-Zhuang $n$-tilting in $\p[\mathcal{C}][\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}]$ and $\mathcal{T}$ is precovering in $\mathcal{T}^{\bot_{\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}}}$. \item[$\mathrm{(d)}$] $\mathcal{T}$ is small $n$-tilting in $\p[\mathcal{C}][\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}]$. \item[$\mathrm{(e)}$] $\mathcal{T}$ is precovering in $\mathcal{T}^{\bot_{\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}}}$ and $\Pres[\mathcal{T}][\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}][m]=\mathcal{T}^{\bot_{\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}}},$ where $m:=\max\{1,n\}.$ \end{itemize} \end{thm} \begin{proof} It follows from Theorem \ref{thm: teo2 notas}, Corollary \ref{cor:coro p 131}, Lemma \ref{lem:notas p17 enomoto}, Lemma \ref{lem:cerrado por sumandos exact} and Theorem \ref{thm:tilting exacto}.\end{proof} \begin{cor} Let $\Lambda$ be an Artin algebra, $\p[\mathcal{C}][\mathcal{X}]$ be a Mohamed context in $\modd[\Lambda]$ and let $T\in\modd[\Lambda].$ Then, for the embedding $i:\mathcal{C}\rightarrow\Modx[\mathcal{X}^{op}]$, $C\mapsto\Homx[\Lambda][-][C]|_{\mathcal{X}},$ the following statements are equivalent. \begin{itemize} \item[$\mathrm{(a)}$] $T$ is Auslander-Solberg $n$-tilting in $\p[\mathcal{C}][\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}]$. \item[$\mathrm{(b)}$] $i(T)$ is small $n$-$[i(\mathcal{C})]$-tilting in $\Modx[\mathcal{X}^{op}]$. \item[$\mathrm{(c)}$] $T$ is Zhu-Zhuang $n$-tilting in $\p[\mathcal{C}][\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}]$. 
\item[$\mathrm{(d)}$] $T$ is small $n$-tilting in $\p[\mathcal{C}][\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}]$. \item[$\mathrm{(e)}$] $\Pres[\operatorname{add}(T)][\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}][m]=T^{\bot_{\mathcal{E}_{\mathcal{X}}^{\mathcal{C}}}},$ where $m:=\max\left\{ 1,n\right\}.$ \end{itemize} \end{cor} \begin{proof} It follows from Proposition \ref{prop:mohamed vs n-X-tilting} since $\mathrm{add}(T)$ is precovering. \end{proof} In what follows, we will briefly discuss the precursor of S. K. Mohamed's tilting theory. Namely, we shall introduce the relative homological algebra presented by Maurice Auslander and {\O}yvind Solberg in \cite{auslander1993relative1, auslander1993relative2}. \ Let $\Lambda$ be an Artin algebra and $F$ be an additive subfunctor of $\Extx[1][\Lambda][-][-]$. A short exact sequence $\eta:\:\suc$ in $\modd[\Lambda]$ is \textbf{$F$-exact} if $\overline{\eta}\in F(K,N),$ and the class of all the short $F$-exact sequences is denoted by $\mathcal{E}_F.$ \ The class $\mathcal{P}(F)$ of the {\bf $F$-projective} modules consists of all the $P\in\mathrm{mod}(\Lambda)$ such that for every $F$-exact sequence $\eta:\suc,$ the sequence $\mathrm{Hom}_\Lambda(P,\eta)$ is exact. It is said that $F$ has enough projectives if any $M\in\modd[\Lambda]$ admits an $F$-exact sequence $\suc[M'][P][M]$ with $P\in\mathcal{P}(F).$ Dually, one defines the class $\mathcal{I}(F)$ of all the {\bf $F$-injective} modules and the notion of $F$ having enough injectives. \ Each $\mathcal{X}\subseteq\modd[\Lambda]$ induces two subfunctors $F_{\mathcal{X}}$ and $F^{\mathcal{X}}$ of $\Extx[1][\Lambda][-][-],$ which are defined as follows. For every $A,C\in\modd[\Lambda],$ $F_{\mathcal{X}}(C,A)$ is formed by all the extensions $\overline{\eta}\in\Extx[1][\Lambda][C][A]$, with $\eta:\:\suc[A][B][C][\,][\,]$, such that $\Homx[\Lambda][-][B]|_{\mathcal{X}}\rightarrow\Homx[\Lambda][-][C]|_{\mathcal{X}}\rightarrow0$ is exact. 
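In other words, writing $g:B\rightarrow C$ for the epimorphism of such an $\eta$ (a notation we fix only for this observation), an extension lies in $F_{\mathcal{X}}(C,A)$ precisely when every morphism from an object of $\mathcal{X}$ to $C$ lifts along $g$; that is, \[ F_{\mathcal{X}}(C,A)=\left\{ \overline{\eta}\in\Extx[1][\Lambda][C][A]\,|\:\Homx[\Lambda][X][g]\mbox{ is surjective }\forall X\in\mathcal{X}\right\}. \]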
The functor $F^{\mathcal{X}}$ is defined dually. \begin{prop}\cite{auslander1993relative1, auslander1993relative2} \label{def: auslader-solberg 2}\label{fact: auslander-solberg} For an Artin algebra $\Lambda$ and a class $\mathcal{X}\subseteq\modd[\Lambda],$ the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] $F_{\mathcal{X}}$ and $F^{\mathcal{X}}$ are additive subfunctors of $\Extx[1][\Lambda][-][-].$ \item[$\mathrm{(b)}$] Let $F$ be an additive subfunctor of $\Extx[1][\Lambda][-][-]$. Then, the class $\mathcal{E}_F$ is closed under pullbacks, pushouts and finite coproducts. \item[$\mathrm{(c)}$] The map $F\mapsto\mathcal{P}(F)$ is a bijection between the class of all the additive subfunctors of $\Extx[1][\Lambda][-][-]$ with enough projectives and the class of all the precovering classes $\mathcal{X}$ in $\modd[\Lambda]$ such that $\projx[\Lambda]\subseteq\mathcal{X}=\addx[\mathcal{X}]$. \item[$\mathrm{(d)}$] Let $F$ be an additive subfunctor of $\Extx[1][\Lambda][-][-]$ with enough projectives. Then, $(\modd[\Lambda],\mathcal{E}_{F})$ is an exact category with $\operatorname{Proj}_{\mathcal{E}_{F}}(\modd[\Lambda])=\mathcal{P}(F)$ and $\operatorname{Inj}_{\mathcal{E}_{F}}(\modd[\Lambda])=\mathcal{I}(F)$. \item[$\mathrm{(e)}$] Let $\mathcal{X}\subseteq\mathrm{mod}(\Lambda)$ be such that $\mathrm{proj}(\Lambda)\subseteq\mathcal{X}=\mathrm{add}(\mathcal{X}).$ Then, $F_\mathcal{X}$ has enough projectives and injectives if, and only if, $\mathcal{X}$ is functorially finite. \end{itemize} \end{prop} \begin{prop}\label{prop: prop3 notas} Let $\Lambda$ be an Artin algebra and let $F$ be an additive subfunctor of $\Extx[1][\Lambda][-][-]$. Then, the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] $F$ has enough projectives and injectives if, and only if, $\mathcal{P}(F)$ is functorially finite and $F=F_{\mathcal{P}(F)}$. \item[$\mathrm{(b)}$] Assume that $F$ has enough projectives and injectives. 
Then, $(\modd[\Lambda],\mathcal{P}(F))$ is a Mohamed context in $\modd[\Lambda]$ and $\mathcal{E}_{\mathcal{P}(F)}^{\modd[\Lambda]}=\mathcal{E}_{F}$. \end{itemize} \end{prop} \begin{proof} Item (a) follows from \cite[Corollary 1.13]{auslander1993relative1}, and (b) follows from (a). \end{proof} \begin{cor}\label{thm:auslander-solberg} Let $\Lambda$ be an Artin algebra, $F$ an additive subfunctor of $\Extx[1][\Lambda][-][-]$ with enough projectives and enough injectives, $\mathcal{T}\subseteq\modd[\Lambda]$ and the functor $i:\modd[\Lambda]\rightarrow\Modx[\mathcal{P}^{op}]$, $M\mapsto\Homx[\Lambda][-][M]|_{\mathcal{P}}$, where $\mathcal{P}:=\mathcal{P}(F)$. Then, the pair $(\modd[\Lambda],\mathcal{E}_{F})$ is an exact category with enough $\mathcal{E}_{F}$-projectives and $\mathcal{E}_{F}$-injectives. Moreover, the following statements are equivalent: \begin{itemize} \item[$\mathrm{(a)}$] $\mathcal{T}$ is Auslander-Solberg $n$-tilting in $(\modd[\Lambda],\mathcal{E}_{F})$ and precovering in $\mathcal{T}^{\bot_{\mathcal{E}_{F}}}.$ \item[$\mathrm{(b)}$] $\left[i(\mathcal{T})\right]$ is small $n$-$\left[i(\modd[\Lambda])\right]$-tilting in $\Modx[\mathcal{P}^{op}].$ \item[$\mathrm{(c)}$] $\mathcal{T}$ is Zhu-Zhuang $n$-tilting in $(\modd[\Lambda],\mathcal{E}_{F})$ and precovering in $\mathcal{T}^{\bot_{\mathcal{E}_{F}}}.$ \item[$\mathrm{(d)}$] $\mathcal{T}$ is small $n$-tilting in $(\modd[\Lambda],\mathcal{E}_{F}).$ \item[$\mathrm{(e)}$] $\Pres[\mathcal{T}][\mathcal{E}_{F}][m]=\mathcal{T}^{\bot_{\mathcal{E}_{F}}}$ and $\mathcal{T}$ is precovering in $\mathcal{T}^{\bot_{\mathcal{E}_{F}}},$ where $m:=\max\{1,n\}.$ \end{itemize} \end{cor} \begin{proof} It follows from Proposition \ref{prop: prop3 notas} (b) and Theorem \ref{thm:mohammed vs n-X-tilting-1}. 
\end{proof} \begin{cor} \label{cor:resultado de jiaqun wei} Let $\Lambda$ be an Artin algebra, $T\in\modd[\Lambda]$, $n\in\mathbb{N}$, $m:=\max\{1,n\}$ and let $F$ be an additive subfunctor of $\Extx[1][\Lambda][-][-]$ with enough injectives and projectives. Then, $T$ is Auslander-Solberg $n$-tilting in $(\modd[\Lambda],\mathcal{E}_{F})$ if, and only if, $T^{\bot_{\mathcal{E}_{F}}}=\operatorname{Pres}_{\mathcal{E}_{F}}^{m}(\addx[T])$.\end{cor} \begin{proof} It follows from Corollary \ref{thm:auslander-solberg} since $\mathrm{add}(T)$ is precovering. \end{proof} \begin{rem} Let $\Lambda$ be an Artin algebra. \begin{enumerate} \item[(1)] \cite[Theorem 3.10]{anoteonrelative} is a particular case of Corollary \ref{cor:resultado de jiaqun wei}. Indeed, if $F$ is an additive subfunctor of $\Extx[1][\Lambda][-][-]$ with enough projectives and such that $\mathcal{P}(F)$ is of finite type, then $F$ has enough projectives and injectives by \cite[Theorem 1.12]{auslander1993relative1}, \cite[Corollary 1.13]{auslander1993relative1} and \cite[Proposition 4.2]{auslander1980preprojective}. \item[(2)] There are examples where we can apply Corollary \ref{cor:resultado de jiaqun wei} but not \cite[Theorem 3.10]{anoteonrelative}. In order to see this, we need to give an example of an additive subfunctor $F$ with enough projectives and injectives and such that $\mathcal{P}(F)$ is not of finite type. Let $\Lambda$ be a quasi-hereditary algebra. Consider the class $\mathcal{F}(\triangle)$ of all the objects in $\modd[\Lambda]$ filtered by the set of standard modules $\Delta.$ It is well known that $\mathcal{F}(\triangle)$ is resolving and functorially finite \cite[Theorem 1, Theorem 3]{ringel1991category}. Therefore, from Proposition \ref{fact: auslander-solberg} (e), we get that $F_{\mathcal{F}(\triangle)}$ has enough projectives and injectives. 
Finally, in \cite[Section 3.5]{erdmann2010auslander} we can find examples of quasi-hereditary algebras where $\mathcal{P}(F_{\mathcal{F}(\triangle)})=\mathcal{F}(\triangle)$ is not of finite type. \end{enumerate} \end{rem} \subsection{Tilting classes in categories of functors } \renewcommandx\modd[1][usedefault, addprefix=\global, 1=R]{\operatorname{f.p.}\left(#1\right)} In the early years of the last decade, Roberto Mart\'inez Villa and Mart\'in Ortiz Morales began a series of research works with the goal of extending tilting theory to arbitrary functor categories \cite{martinez2011tilting,martinez2013tilting,martinez2014tilting}. In the following lines, we will see how such a theory can be obtained through $n$-$\mathcal{X}$-tilting objects. We will start this description by following the steps of Maurice Auslander in \cite{doi:10.1080/00927877408548230}. Throughout this section, $\mathcal{C}$ will be a \textbf{skeletally small additive category} and $\Modx[\mathcal{C}^{op}]$ will denote the category of additive contravariant functors $\mathcal{C}\rightarrow\operatorname{Ab}$. \begin{lem}\cite[Proposition 2.1.(b), p.186]{doi:10.1080/00927877408548230}\label{lem: funtores finitamente generados} A functor $F\in\Modx[\mathcal{C}^{op}]$ is finitely generated if, and only if, there is an epimorphism $\bigoplus_{i\in I}\Homx[\mathcal{C}][-][C_{i}]\rightarrow F$ where $I$ is a finite set and $C_{i}\in\mathcal{C}\;\forall i\in I$. \end{lem} \begin{defn}\cite[Section 2, p.187]{doi:10.1080/00927877408548230} A functor $F\in\Modx[\mathcal{C}^{op}]$ is \textbf{finitely presented} if it admits an exact sequence $\Homx[\mathcal{C}][-][C']\rightarrow\Homx[\mathcal{C}][-][C]\rightarrow F\rightarrow 0,$ where $C,C'\in\mathcal{C}$. We denote by $\modd[\mathcal{C}^{op}]$ the category of all the finitely presented objects of $\Modx[\mathcal{C}^{op}]$. It is worth mentioning that another common notation (used elsewhere) for this class is $\operatorname{mod}(\mathcal{C}^{op})$. 
\end{defn} \begin{prop}\cite[Section 2, pp.184-185]{doi:10.1080/00927877408548230} The following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] A sequence $M_{1}\rightarrow M_{2}\rightarrow M_{3}$ is exact in $\Modx[\mathcal{\mathcal{C}}^{op}]$ if, and only if, the sequence $M_{1}(C)\rightarrow M_{2}(C)\rightarrow M_{3}(C)$ is exact in $\operatorname{Ab}$, $\forall C\in\mathcal{C}$. \item[$\mathrm{(b)}$] $\Modx[\mathcal{C}^{op}]$ is an Ab3, Ab3{*} category. Furthermore, $\Modx[\mathcal{C}^{op}]$ is an Ab5 category; that is, an Ab3 category in which filtered colimits are exact. It is worth mentioning that Ab5 implies Ab4, see \cite[Chapter 2. Corollary 8.9]{Popescu}. \item[$\mathrm{(c)}$] An object $P\in\Modx[\mathcal{C}^{op}]$ is projective if, and only if, $P$ is a direct summand of a coproduct $\bigoplus_{i\in I}\Homx[\mathcal{C}][-][C_{i}],$ where $C_{i}\in\mathcal{C}\;\forall i\in I$. \item[$\mathrm{(d)}$] For any $M\in\Modx[\mathcal{C}^{op}],$ there is an epimorphism $\bigoplus_{i\in I}\Homx[\mathcal{C}][-][C_{i}]\rightarrow M$. Hence, $\Modx[\mathcal{C}^{op}]$ has enough projectives. \end{itemize} \end{prop} \begin{lem}\cite[Lemma 2]{martinez2014tilting}\label{lem: ModC tiene suficientes inyectivos} $\Modx[\mathcal{C}^{op}]$ has enough injectives. \end{lem} Let $\mathcal{C}$ be a skeletally small additive category. Following \cite{doi:10.1080/00927877408548230}, we denote by $\projx[\mathcal{C}^{op}]$ the category of all the finitely generated projective objects in $\Modx[\mathcal{C}^{op}]$. We also say that $\mathcal{C}$ is an \textbf{annuli variety} if all the idempotents in $\mathcal{C}$ split. \begin{prop}\cite[Proposition 2.2, Proposition 2.5]{doi:10.1080/00927877408548230} For a skeletally small additive category $\mathcal{C},$ the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] $\projx[\mathcal{C}^{op}]$ is an annuli variety. 
\item[$\mathrm{(b)}$] $\Modx[\mathcal{C}^{op}]$ is equivalent to $\Modx[\operatorname{proj}(\mathcal{C}^{op})^{op}]$. \end{itemize} \end{prop} \begin{defn}\cite[Definition 8]{martinez2014tilting}\label{def: funtor tilting} Let $\mathcal{C}$ be an annuli variety. A class $\mathcal{T}\subseteq\Modx[\mathcal{C}^{op}]$ is a \textbf{tilting category} if the following conditions hold true. \begin{description} \item [{(FT0)}] $\mathcal{T}=\smdx[\mathcal{T}]\subseteq\modd[\mathcal{C}^{op}].$ \item [{(FT1)}] $\pdx[\mathcal{T}]\leq1.$ \item [{(FT2)}] $\mathcal{T}\subseteq\mathcal{T}^{\bot_{1}}.$ \item [{(FT3)}] $\coresdimr{\mathcal{T}}{\operatorname{Hom}_{\mathcal{C}}(-,C)}{\,}\leq1$ $\forall C\in\mathcal{C}$. \end{description} An object $T\in\Modx[\mathcal{C}^{op}]$ is a \textbf{big (small) tilting functor} if $\Addx[T]$ ($\addx[T]$) is a tilting category. \end{defn} \begin{prop}\label{prop:funtor tilting} Let $\mathcal{C}$ be an annuli variety and let $\mathcal{T}$ be a tilting category in $\Modx[\mathcal{C}^{op}]$. Then, $\Genn[\mathcal{T}][1]=\mathcal{T}^{\bot}$ and $\operatorname{gen}_{1}(\mathcal{T})=\mathcal{T}^{\bot}\cap\modd[\mathcal{C}^{op}]$. \end{prop} \begin{proof} Using that $\pdx[\mathcal{T}]\leq1,$ the first equality follows from \cite[Proposition 10]{martinez2014tilting}. The second equality follows from the first one, Definition \ref{def: compact, fg , etc} and Lemma \ref{lem: funtores finitamente generados}. \end{proof} \begin{thm}\label{thm:2.213 clase de funtores tilting} For an annuli variety $\mathcal{C}$ and $\mathcal{T}\subseteq\modd[\mathcal{C}^{op}],$ the following statements are equivalent. \begin{itemize} \item[$\mathrm{(a)}$] $\mathcal{T}$ is big $1$-$\Modx[\mathcal{C}^{op}]$-tilting. \item[$\mathrm{(b)}$] $\mathcal{T}=\Addx[\mathcal{T}]$ is a tilting category which is precovering in $\mathcal{T}^{\bot_{1}}$. \end{itemize} \end{thm} \begin{proof} (a) $\Rightarrow$ (b) Let $\mathcal{T}$ be big $1$-$\Modx[\mathcal{C}^{op}]$-tilting. 
In particular, $\mathcal{T}=\Addx[\mathcal{T}]$ and, by (T5), $\mathcal{T}$ is precovering in $\mathcal{T}^{\bot_{1}}=\mathcal{T}^{\bot}$. Now, (FT1) follows from (T1), and (FT2) follows from (T2). Let $\rho:=\{\Homx[\mathcal{C}][-][C]\}_{C\in\mathcal{C}}$ and $\omega:=\Addx[\rho]$. Then, by Theorem \ref{thm:el par n-X-tilting} (a), we get $\omega\subseteq{}^{\bot}(\mathcal{T}^{\bot})=\mathcal{T}^{\vee}$. Moreover, by Corollary \ref{cor:coronuevo pag 55}, $\coresdimr{\mathcal{T}}{\omega}{\,}\leq\pdr[\,][\mathcal{T}]\leq1$ and thus (FT3) holds true. \ (b) $\Rightarrow$ (a) Let $\mathcal{T}=\Addx[\mathcal{T}]$ be a tilting category which is precovering in $\mathcal{T}^{\bot}.$ By Lemma \ref{lem: ModC tiene suficientes inyectivos}, $\mathcal{T}$ satisfies (T4). Moreover, $\mathcal{T}$ satisfies (T5) since $\mathcal{T}$ is precovering in $\mathcal{T}^{\bot}=\mathcal{T}^{\bot_{1}}.$ Furthermore, using that $\mathcal{T}=\Addx[\mathcal{T}]$ and Proposition \ref{prop:funtor tilting}, we get $\mathcal{T}^{\bot}=\Genn[\mathcal{T}][1]=\operatorname{Gen}_{1}(\mathcal{T}).$ Thus, $\mathcal{T}$ is $1$-$\Modx[\mathcal{C}^{op}]$-tilting by Proposition \ref{prop:primera generalizacion}. \end{proof} Let $\mathcal{C}$ be an additive category. Following \cite[Section 6]{auslander2}, we say that a morphism $g:C\rightarrow A$ is a \textbf{pseudo-kernel} of a morphism $f:A\rightarrow B$ if the sequence of functors \[ \Homx[\mathcal{C}][-][C]\xrightarrow{\Homx[\mathcal{C}][-][g]}\Homx[\mathcal{C}][-][A]\xrightarrow{\Homx[\mathcal{C}][-][f]}\Homx[\mathcal{C}][-][B] \] is exact. If any morphism in $\mathcal{C}$ has a pseudo-kernel, we say that $\mathcal{C}$ has pseudo-kernels. The notion of \textbf{pseudo-cokernel} is introduced dually. \begin{thm}\label{thm:2.213'} Let $\mathcal{C}$ be an annuli variety with pseudo-kernels and such that $\modd[\mathcal{C}^{op}]$ has enough injectives. Then, for a class $\mathcal{T}\subseteq\modd[\mathcal{C}^{op}]$, the following statements are equivalent. 
\begin{itemize} \item[$\mathrm{(a)}$] $\mathcal{T}$ is small $\projx[\mathcal{C}^{op}]$-saturated $1$-$\modd[\mathcal{C}^{op}]$-tilting. \item[$\mathrm{(b)}$] $\mathcal{T}=\addx[\mathcal{T}]$ is a tilting category which is precovering in $\mathcal{T}^{\bot_{1}}\cap\modd[\mathcal{C}^{op}]$. \end{itemize} \end{thm} \begin{proof} Note that $\mathcal{T}^{\bot_{1}}=\mathcal{T}^{\bot}$ if $\pdr[\,][\mathcal{T}]\leq1$. Now, by \cite[Proposition 2.6]{enomoto2017classifying} and \cite[Proposition 2.7]{enomoto2017classifying}, it follows that $\modd[\mathcal{C}^{op}]$ is a thick class in $\Modx[\mathcal{C}^{op}]$ and $\projx[\mathcal{C}^{op}]$ is a relative generator in $\modd[\mathcal{C}^{op}]$. In the proof that follows, we will use the description of saturated tilting given in Proposition \ref{prop:satsat}. \ (a) $\Rightarrow$ (b) By (T5), $\mathcal{T}$ is precovering in $\mathcal{T}^{\bot_{1}}\cap\modd[\mathcal{C}^{op}]$. Now, (FT1) follows from (T1); and, using $\mathcal{T}^{\bot_{1}}=\mathcal{T}^{\bot}$ and (T2), we get (FT2). Let $\rho:=\{\Homx[\mathcal{C}][-][C]\}_{C\in\mathcal{C}}$ and $\omega:=\addx[\rho]$. Then, by Theorem \ref{thm:el par n-X-tilting} (a), we get that $\omega\subseteq{}^{\bot}(\mathcal{T}^{\bot})=\mathcal{T}^{\vee}$. Thus, from Corollary \ref{cor:coronuevo pag 55}, it follows that $\coresdimr{\mathcal{T}}{\omega}{\,}\leq\pdr[\,][\mathcal{T}]\leq1$ and hence (FT3) holds true. \ (b) $\Rightarrow$ (a) Since $\modd[\mathcal{C}^{op}]$ has enough injectives, we get that $\mathcal{T}$ satisfies (T4). Moreover, $\mathcal{T}$ satisfies (T5) since $\mathcal{T}$ is precovering in $\mathcal{T}^{\bot_{1}}\cap\modd[\mathcal{C}^{op}]$ and $\mathcal{T}^{\bot}=\mathcal{T}^{\bot_{1}}$. 
By $\mathcal{T}=\addx[\mathcal{T}]$ and Proposition \ref{prop:funtor tilting}, we have that $\Genn[\mathcal{T}][1]\cap\modd[\mathcal{C}^{op}]=\operatorname{gen}_{1}(\mathcal{T})=\mathcal{T}^{\bot}\cap\modd[\mathcal{C}^{op}].$ Thus, we get (a) from Proposition \ref{prop:primera generalizacion}. \end{proof} Recall that a \textbf{dualizing $R$-variety} is an $R$-category $\mathcal{C}$, where $R$ is an artinian ring, such that $\mathcal{C}$ is an annuli variety where the functor $D:\modd[\mathcal{C}]\rightarrow\modd[\mathcal{C}^{op}]$, $D(M)(X)=\Homx[R][M(X)][I_{0}(\operatorname{top}(R))]$, is a duality \cite[p.307]{auslander1974stable}. \begin{cor}\label{cor:modC es gruesa} Let $\mathcal{C}$ be a skeletally small additive category with pseudo-kernels. Then, $\modd[\mathcal{C}^{op}]=(\projx[\mathcal{C}^{op}])_{\infty}^{\wedge}$ and it is a thick class in $\Modx[\mathcal{C}^{op}]$.\end{cor} \begin{proof} It follows from \cite[Proposition 2.6, Proposition 2.7]{enomoto2017classifying}.\end{proof} \begin{lem}\label{rem: obs a 2.213'} Let $\mathcal{C}$ be a dualizing $R$-variety with pseudo-kernels and pseudo-cokernels. Then, $\modd[\mathcal{C}^{op}]=(\projx[\mathcal{C}^{op}])_{\infty}^{\wedge}$ and it is a thick abelian subcategory of $\Modx[\mathcal{C}^{op}]$ with enough injectives. \end{lem} \begin{proof} By Corollary \ref{cor:modC es gruesa}, $\modd[\mathcal{C}^{op}]=(\projx[\mathcal{C}^{op}])_{\infty}^{\wedge}$ and it is thick in $\Modx[\mathcal{C}^{op}]$. Similarly, by Corollary \ref{cor:modC es gruesa}, $\modd[\mathcal{C}]=\mathcal{P}(\mathcal{C})_{\infty}^{\wedge}$ and it has enough projectives. Then, by the duality $D:\modd[\mathcal{C}]\rightarrow\modd[\mathcal{C}^{op}]$, we get that $\modd[\mathcal{C}^{op}]$ has enough injectives. \end{proof} In \cite{martinez2013tilting}, R. Mart\'inez and M. Ortiz introduced the following definition. \begin{defn}\cite[Definition 6]{martinez2013tilting} Let $\mathcal{C}$ be an annuli variety. 
A class $\mathcal{T}\subseteq\Modx[\mathcal{C}^{op}]$ is a \textbf{generalized $n$-tilting subcategory} if the following conditions hold true. \begin{description} \item [{(FGT0)}] $\mathcal{T}=\smdx[\mathcal{T}].$ \item [{(FGT1)}] $\mathcal{T}\subseteq(\projx[\mathcal{C}^{op}])_{n}^{\wedge}.$ \item [{(FGT2)}] $\mathcal{T}\subseteq\mathcal{T}^{\bot}.$ \item [{(FGT3)}] $\Homx[\mathcal{C}][-][C]\in\mathcal{T}^{\vee}\;\forall C\in\mathcal{C}$. \end{description} An object $T\in\Modx[\mathcal{C}^{op}]$ is a \textbf{generalized small (big) tilting functor} if $\addx[T]$ ($\Addx[T]$) is a generalized tilting subcategory. \end{defn} \begin{prop}\cite[Proposition 6, Proposition 7]{martinez2013tilting}\label{prop:funct precubriente} Let $\mathcal{C}$ be an annuli variety with pseudo-kernels and let $\mathcal{T}\subseteq\Modx[\mathcal{C}^{op}]$ be a generalized tilting subcategory. Then, $\mathcal{T}$ has pseudo-kernels if, and only if, $\mathcal{T}$ is precovering in $\modd[\mathcal{C}^{op}]$. \end{prop} \begin{prop}\cite[Proposition C.1.(2)]{bravo2019locally}\label{prop:gen proy mod funct} Let $\mathcal{C}$ be a skeletally small additive category with pseudo-kernels. Then, any object $X\in\modd[\mathcal{C}^{op}]$ admits an exact sequence $\cdots\rightarrow\Homx[\mathcal{C}][-][C'_{n}]\rightarrow\Homx[\mathcal{C}][-][C'_{n-1}]\rightarrow\cdots\rightarrow\Homx[\mathcal{C}][-][C'_{0}]\rightarrow X\rightarrow0\mbox{,}$ with $C'_{i}\in\mathcal{C}\:\forall i\geq0$. In particular, $\left\{ \Homx[\mathcal{C}][-][C]\right\} _{C\in\mathcal{C}}$ is a $\modd[\mathcal{C}^{op}]$-projective relative generator in $\modd[\mathcal{C}^{op}]$. \end{prop} \begin{prop}\cite[Proposition 2.1(c)]{doi:10.1080/00927877408548230}\label{prop:-Sea-funtor fg es compacto} Let $\mathcal{C}$ be a skeletally small additive category. Then, finitely generated functors are compact objects in $\Modx[\mathcal{C}^{op}]$. 
\end{prop} \begin{prop} \cite[Corollary 2]{martinez2013tilting}\label{prop:2.188} Let $\mathcal{C}$ be a skeletally small additive category. Then, for any $n>0$, $\Extx[n][\operatorname{Mod}(\mathcal{C}^{op})][M][-]$ commutes with coproducts $\forall M\in(\projx[\mathcal{C}^{op}])_{\infty}^{\wedge}$. \end{prop} \begin{cor} Let $\mathcal{C}$ be a skeletally small additive category with pseudo-kernels. Then, the functor $\Extx[n][\operatorname{Mod}(\mathcal{C}^{op})][M][-]$ commutes with coproducts $\forall M\in\modd[\mathcal{C}^{op}]$ and $\forall n\geq1$.\end{cor} \begin{proof} It follows from Proposition \ref{prop:2.188} and Corollary \ref{cor:modC es gruesa}.\end{proof} \begin{thm} Let $\mathcal{C}$ be an annuli variety with pseudo-kernels and $\mathcal{T}\subseteq\modd[\mathcal{C}^{op}]$. Consider the following statements: \begin{itemize} \item[$\mathrm{(a)}$] $\mathcal{T}$ is small $n$-$\modd[\mathcal{C}^{op}]$-tilting. \item[$\mathrm{(b)}$] $\mathcal{T}=\mathcal{T}^{\oplus_{<\infty}}$ is a generalized $n$-tilting subcategory and there is a $\modd[\mathcal{C}^{op}]$-injective relative cogenerator in $\modd[\mathcal{C}^{op}]$. \end{itemize} Then, (a) implies (b). Furthermore, if $\mathcal{T}$ has pseudo-kernels, then (a) and (b) are equivalent. \end{thm} \begin{proof} (a) $\Rightarrow$ (b) Let $\mathcal{T}$ be small $n$-$\modd[\mathcal{C}^{op}]$-tilting. By Proposition \ref{prop:gen proy mod funct}, it follows that $\rho:=\left\{ \Homx[\mathcal{C}][-][C]\right\} _{C\in\mathcal{C}}$ is a $\modd[\mathcal{C}^{op}]$-projective relative generator in $\modd[\mathcal{C}^{op}]$. Then, there is a long exact sequence \[ 0\rightarrow F_{n+1}\stackrel{f}{\rightarrow}\Homx[\mathcal{C}][-][C_{n}]\rightarrow\cdots\rightarrow\Homx[\mathcal{C}][-][C_{0}]\rightarrow T\rightarrow 0, \] where $F_{n+1}\in\modd[\mathcal{C}^{op}]$. 
Consider the exact sequence \[ \eta:\;\suc[F_{n+1}][\mbox{Hom}_{\mathcal{C}}(-,C_{n})][F_{n}][f][\,]\mbox{.} \] Thus, by the shifting lemma and (T1), we have $$\Extx[1][\operatorname{Mod}\left(\mathcal{C}^{op}\right)][F_{n}][F_{n+1}]\simeq\Extx[n+1][\operatorname{Mod}\left(\mathcal{C}^{op}\right)][T][F_{n+1}]=0.$$ Hence $\eta$ splits and then $F_{n}\in\projx[\mathcal{C}^{op}].$ Therefore $T\in(\projx[\mathcal{C}^{op}])_{n}^{\wedge}$ and so (FGT1) holds true. Condition (FGT2) follows from (T2). By Proposition \ref{prop:(a)-1}(e), $\rho\subseteq{}^{\bot}(\mathcal{T}^{\bot})\cap\modd[\mathcal{C}^{op}]={}{}^{\bot}(\mathcal{T}^{\bot})\cap\left(\modd[\mathcal{C}^{op}],\mathcal{T}\right)^{\vee}\subseteq\mathcal{T}{}^{\vee}\mbox{,}$ which proves (FGT3). Finally, by (T4), $\modd[\mathcal{C}^{op}]$ has a $\modd[\mathcal{C}^{op}]$-injective relative cogenerator in $\modd[\mathcal{C}^{op}].$ \ (b) $\Rightarrow$ (a) By (FGT1), $\pdr[\operatorname{f.p.}(\mathcal{C}^{op})][\mathcal{T}]\leq\pdx[\mathcal{T}]\leq n$ and thus (T1) is satisfied. Since $\mathcal{T}\subseteq\modd[\mathcal{C}^{op}]$, using (FGT2), we have $\mathcal{T}\cap\modd[\mathcal{C}^{op}]=\mathcal{T}\subseteq\mathcal{T}^{\bot}$ and then (T2) holds true. Note that (FGT3) implies (t3''). Thus (T4) is satisfied since there is a $\modd[\mathcal{C}^{op}]$-injective relative cogenerator in $\modd[\mathcal{C}^{op}]$ and $\mathcal{T}\subseteq\modd[\mathcal{C}^{op}]$. Now, using that $\mathcal{T}$ has pseudo-kernels and Proposition \ref{prop:funct precubriente}, we get that $\mathcal{T}$ is precovering in $\modd[\mathcal{C}^{op}].$ In particular, every $Z\in\mathcal{T}^{\bot}\cap\modd[\mathcal{C}^{op}]$ admits a $\mathcal{T}$-precover, and hence (T5) holds true. Finally, by Lemma \ref{lem: addT precub es AddT precub}, condition (T3) follows since Definition \ref{def:condiciones T3} (t3'') is satisfied and $\modd[\mathcal{C}^{op}]$ is thick by Corollary \ref{cor:modC es gruesa}. 
\end{proof} \subsection{Silting modules, quasitilting modules and $n$-$\mathcal{X}$-tilting objects} \renewcommandx\modd[1][usedefault, addprefix=\global, 1=R]{\operatorname{mod}\left(#1\right)} Silting modules were introduced in \cite{siltingmodulessurvey}, by Lidia Angeleri H\"ugel, Frederik Marks and Jorge Vitoria, as a simultaneous generalization of tilting modules over an arbitrary ring and support $\tau$-tilting modules over a finite dimensional algebra. In this section we will focus on understanding silting theory through $n$-$\mathcal{X}$-tilting objects. Let us begin recalling some known results on silting theory. \renewcommandx\Homx[3][usedefault, addprefix=\global, 1=R, 2=M, 3=N]{\operatorname{Hom}{}_{#1}(#2,#3)} \renewcommandx\Gen[1][usedefault, addprefix=\global, 1=M]{\operatorname{Gen}_{1}\left(#1\right)} \renewcommandx\Genn[2][usedefault, addprefix=\global, 1=M, 2=n]{\operatorname{Gen}_{#2}(#1)} \begin{lem}\cite[Lemma 2.3]{siltingmodulessurvey}\label{lem:GenT con T0 es par de torsion}\label{rem:observaci=0000F3n 3.110'} Let $R$ be a ring and let $T\in\mathrm{Mod}(R)$ be such that $\Gen[T]\subseteq T^{\bot_{1}}$. Then $(\Gen[T],T^{\bot_{0}})$ is a torsion pair in $\mathrm{Mod}(R)$. \end{lem} \begin{lem}\label{lemdef quasitilting} For a ring $R$ and $T\in\mathrm{Mod}(R),$ the following statements are equivalent. \begin{itemize} \item[$\mathrm{(a)}$] $\Genn[T][1]=\Genn[T][2]$ is a torsion class in $\mathrm{Mod}(R)$ and $\Homx[R][T][-]$ is exact on $\Genn[T][1]$. \item[$\mathrm{(b)}$] $\Genn[T][1]=\Genn[T][2]$ and $T\in{}^{\bot_{1}}\Genn[T][1]$. \item[$\mathrm{(c)}$] $\Genn[T][1]=\overline{\Genn[T][1]}\cap T^{\bot_{1}}$, where $\overline{\Genn[T][1]}$ is the class of all the submodules of modules in $\Genn[T][1]$. \end{itemize} \end{lem} \begin{proof} The equivalence (b) $\Leftrightarrow$ (c) follows from \cite[Lemdef 3.1]{siltingmodulessurvey}. By Lemma \ref{lem:GenT con T0 es par de torsion}, we have that (b) $\Rightarrow$ (a). 
To prove that (a) $\Rightarrow$ (b) holds true, it is enough to show that $T\in{}^{\bot_{1}}\Genn[T][1].$ Indeed, let $\eta:\;\suc[M][N][T][][g]$ be an exact sequence with $M\in\Gen[T].$ Since $\Gen[T]$ is closed under extensions, we have that $N\in\Gen[T]$. Thus $\eta$ is an exact sequence in $\Gen[T],$ and using that $\Homx[R][T][-]$ is exact on $\Genn[T][1],$ it follows that $\eta$ splits and hence $T\in{}^{\bot_{1}}\Genn[T][1].$ \end{proof} Let $R$ be a ring and $T\in\mathrm{Mod}(R)$. We recall from \cite[Lemdef 3.1]{siltingmodulessurvey} that $T$ is \textbf{quasitilting} if $T$ satisfies one of the equivalent conditions in Lemma \ref{lemdef quasitilting}. Let $E:=\End.$ We recall from \cite{faith1972modules} that $T$ is \textbf{finendo} if $T$ is finitely generated as a left $E$-module. \begin{defn} For a ring $R,$ we denote by $\operatorname{qtilt}(R)$ the class of all the left $R$-modules which are quasitilting and finendo. \end{defn} Note that the relation $\sim$ on $\operatorname{qtilt}(R)$, where $T_{1}\sim T_{2}$ if $\Addx[T_{1}]=\Addx[T_{2}],$ is an equivalence relation. \begin{thm} \label{thm:quasitilting finendo vs torsion}\cite[Theorem 3.4]{siltingmodulessurvey} Let $R$ be a ring. Then, the map $T\mapsto\Gen[T]$ induces a bijection between the quotient class $\operatorname{qtilt}(R)/\!\!\sim$ and the class of all the torsion classes $\mathcal{T}\subseteq\Modx[R]$ satisfying the following condition: \begin{description} \item [{(QT)}] every $M\in\mathrm{Mod}(R)$ admits a $\mathcal{T}$-preenvelope $\phi:M\to T_0$ with $\mathrm{CoKer}(\phi)$ in ${}^{\bot_{1}}\mathcal{T}.$ \end{description} \end{thm} \begin{lem} \label{lem:Add de quasitilt}\cite[Lemma 3.3]{siltingmodulessurvey} Let $R$ be a ring and let $T\in\mathrm{Mod}(R)$ be quasitilting. 
Then $\Addx[T]={}^{\bot_{1}}\Gen[T]\cap\Gen[T]\mbox{.}$ \end{lem} Let $R$ be a ring and let $\sigma$ be a morphism in $\Projx[R].$ Following \cite[Section 3.2]{siltingmodulessurvey}, we consider the class $\mathcal{D}_{\sigma}:=\left\{ X\in\Modx[R]\,|\,\Homx[][\sigma][X]\mbox{ is an epimorphism}\right\}.$ The principal properties of the class $\mathcal{D}_{\sigma}$ are collected in the following lemma. The first three items are from \cite[Lemma 3.6]{siltingmodulessurvey} and the last two items appear in the proof of \cite[Theorem 3.12]{siltingmodulessurvey}. Recall that $\mathcal{D}(R)$ denotes the derived category of $\Modx[R]$ and that a\textbf{ projective presentation} of an $R$-module $T$ is a morphism $\sigma:P_{-1}\rightarrow P_{0}$ with $P_{-1},P_{0}\in\Projx[R]$ and such that $\mathrm{CoKer}(\sigma)\simeq T$. \begin{lem}\cite{siltingmodulessurvey} \label{lem:D_sigma-1}\label{lem:D_sigma} For a ring $R$ and a projective presentation $\sigma:P_{-1}\rightarrow P_{0}$ of $T\in \mathrm{Mod}(R),$ the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] $\mathcal{D}_{\sigma}$ is closed under extensions, products and quotients. \item[$\mathrm{(b)}$] $\mathcal{D}_{\sigma}\subseteq T^{\bot_{1}}$. \item[$\mathrm{(c)}$] $X\in\mathcal{D}_{\sigma}$ if, and only if, $\Homk[\mathcal{D}(R)]=0$ for every projective presentation $\omega$ of $X$. \item[$\mathrm{(d)}$] Let $\{\tau_{i}:P_{i}\rightarrow Q_{i}\}_{i\in I}$ be a set of morphisms in $\mathrm{Proj}(R)$. Then, $\mathcal{D}_{\tau}=\bigcap_{i\in I}\mathcal{D}_{\tau_{i}}$ where $\tau:=\bigoplus_{i\in I}\tau_{i}:\bigoplus_{i\in I}P_{i}\rightarrow\bigoplus_{i\in I}Q_{i}$. \item[$\mathrm{(e)}$] Let $\alpha:P\rightarrow Q$ and $\beta:P\rightarrow Q'$ be morphisms in $\mathrm{Proj}(R)$. Then, $\mathcal{D}_{\alpha}\subseteq\mathcal{D}_{\gamma}$, where $\gamma=\begin{pmatrix}\alpha\\ \beta \end{pmatrix}:P\rightarrow Q\oplus Q'$. \end{itemize} \end{lem} Let $R$ be a ring and $T\in\mathrm{Mod}(R)$. 
Following \cite[Definition 3.7]{siltingmodulessurvey}, we recall that $T$ is \textbf{partial silting} if there is a projective presentation $\sigma$ of $T$ such that the following two conditions hold true: {\bf (S1)} $\mathcal{D}_{\sigma}$ is a torsion class and {\bf (S2)} $T\in\mathcal{D}_{\sigma}.$ On the other hand, it is said that $T$ is \textbf{silting} if there is a projective presentation $\sigma$ of $T$ such that $\Gen[T]=\mathcal{D}_{\sigma}$. In such a case, we say that $T$ is silting with respect to $\sigma$. \begin{prop}\cite{siltingmodulessurvey} \label{prop:silting vs finendo quasitilting vs tilting} For a ring $R,$ the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] Every silting left $R$-module is finendo and quasitilting. \item[$\mathrm{(b)}$] Let $T\in\mathrm{Mod}(R).$ Then: $T$ is $1$-tilting $\Leftrightarrow$ $T$ is faithful and silting $\Leftrightarrow$ $T$ is faithful, finendo and quasitilting. \end{itemize} \end{prop} \begin{proof} It follows from \cite[Propositions 3.10 and 3.13(2)]{siltingmodulessurvey}. \end{proof} \begin{prop}\cite[Proposition 3.11]{siltingmodulessurvey}\label{prop:silting parcial +S3 es silting-1}\label{prop:silting parcial +S3 es silting} For a ring $R$ and a projective presentation $\sigma$ of $T\in\mathrm{Mod}(R),$ the following statements are equivalent. \begin{itemize} \item[$\mathrm{(a)}$] $T$ is a silting $R$-module with respect to $\sigma$. \item[$\mathrm{(b)}$] $T$ is a partial silting $R$-module with respect to $\sigma$ and the following condition holds true: \begin{description} \item [{(S3)}] there is an exact sequence $R\overset{\phi}{\rightarrow}T_{0}\rightarrow T_{1}\rightarrow0$, where $T_{0},T_{1}\in\Addx[T]$ and $\phi$ is a $\mathcal{D}_{\sigma}$-preenvelope of $R.$ \end{description} \end{itemize} \end{prop} Now, we are ready to start discussing silting theory through $n$-$\mathcal{X}$-tilting theory. 
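For the reader's convenience, we first record how the class $\mathcal{D}_{\sigma}$ specializes in the classical setting; the following computation is standard homological algebra and is included only as an illustration.

\begin{rem} Let $\sigma:P_{-1}\rightarrow P_{0}$ be a monomorphic projective presentation of $T$, that is, suppose that the sequence $0\rightarrow P_{-1}\overset{\sigma}{\rightarrow}P_{0}\rightarrow T\rightarrow0$ is exact. For $X\in\Modx[R]$, applying $\Homx[][-][X]$ to this sequence yields the exact sequence \[ \Homx[][P_{0}][X]\xrightarrow{\Homx[][\sigma][X]}\Homx[][P_{-1}][X]\rightarrow\Extx[1][R][T][X]\rightarrow\Extx[1][R][P_{0}][X]=0\mbox{.} \] Hence $X\in\mathcal{D}_{\sigma}$ if, and only if, $\Extx[1][R][T][X]=0$; that is, $\mathcal{D}_{\sigma}=T^{\bot_{1}}$. In particular, $T$ is silting with respect to a monomorphic $\sigma$ if, and only if, $\Gen[T]=T^{\bot_{1}}$, which is the classical characterization of $1$-tilting modules; compare with Proposition \ref{prop:silting vs finendo quasitilting vs tilting} (b). \end{rem}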
\begin{lem}\label{lem:siltingvstiltin} Let $R$ be a ring and let $T\in\mathrm{Mod}(R)$ be such that $\pdr[\mathcal{Y}][T]\leq1$, where $\Gen[T]\subseteq\mathcal{Y}\subseteq\Modx[R]$. Then, the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] $\left(\Gen[T]\right)^{\bot_{1}}\cap\mathcal{Y}\subseteq\left(\Addx[T]\right)^{\bot}\cap\mathcal{Y}$. \item[$\mathrm{(b)}$] $\left(\Gen[T]\right)^{\bot}\cap\mathcal{Y}=\left(\Gen[T]\right)^{\bot_{1}}\cap\mathcal{Y}$ if $\Gen[T]=\Genn[T][2].$ \end{itemize} \end{lem} \begin{proof} Item (a) follows from $\pdr[\mathcal{Y}][T]\leq1.$ To prove (b), it is enough to show that $\left(\Gen[T]\right)^{\bot_{1}}\cap\mathcal{Y}\subseteq$ $\left(\Gen[T]\right)^{\bot}\cap\mathcal{Y}$. Consider $M\in\left(\Gen[T]\right)^{\bot_{1}}\cap\mathcal{Y}$ and $N\in\Gen[T]$. In particular $\mathrm{Ext}^1_R(N,M)=0.$ Let $m\geq2.$ Since $\Gen[T]=\Genn[T][2],$ we can build an exact sequence $ 0\rightarrow K_{m-1}\rightarrow T_{m-1}\rightarrow\cdots\rightarrow T_{1}\rightarrow N\rightarrow0\mbox{,} $ where $T_{i}\in\Addx[T]$ $\forall i\in[1,m-1]$ and $K_{m-1}\in\Gen[T]$. Using that $M\in\left(\Gen[T]\right)^{\bot_{1}}\cap\mathcal{Y}$ $\subseteq\left(\Addx[T]\right)^{\bot},$ by the shifting lemma we have $ \Extx[m][][N][M]\cong\Extx[1][][K_{m-1}][M]=0 $ and thus $M\in\Gen[T]^{\bot}\cap\mathcal{Y}$. \end{proof} \begin{defn} Let $R$ be a ring and let $\left(\mathcal{X},\mathcal{Y}\right)$ be a pair of classes of objects in $\Modx[R].$ We denote by $\mathcal{X}_{(\mathcal{I},\mathcal{Y})}$ the class of all the $R$-modules $M\in\mathcal{X}$ admitting an exact sequence $ \suc[M][I_{0}(X)][Y]\mbox{,} $ where $X\in\mathcal{X}$, $Y\in\mathcal{Y}$ and $I_{0}(X)$ is the injective envelope of $X$ in $\Modx[R]$. 
\end{defn} \renewcommandx\Gennr[3][usedefault, addprefix=\global, 1=\mathcal{T}, 2=n, 3=\mathcal{X}]{\operatorname{Gen}_{#2}^{#3}(#1)} \begin{lem}\label{lem: para silting implica Gen-tilting } Let $R$ be a ring and let $T\in\mathrm{Mod}(R)$ be quasitilting and such that $\pdr[\mathcal{Y}][T]\leq1$, where $\Gen[T]\subseteq\mathcal{Y}\subseteq\Modx[R]$. Then, for $\omega(T):=\Genn[T][1]_{(\mathcal{I},T^{\bot_{0}})},$ the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] $T^{\bot}\cap\mathcal{Y}=T^{\bot_{1}}\cap\mathcal{Y}$. \item[$\mathrm{(b)}$] $\Gen[T]\subseteq T^{\bot}\cap\mathcal{Y}$. \item[$\mathrm{(c)}$] $\Addx[T]$ is a relative $\Gen[T]$-projective generator in $\Gen[T]$. \item[$\mathrm{(d)}$] $\left(\Gen[T]\right)^{\bot_{1}}\cap\mathcal{Y}=\left(\Gen[T]\right)^{\bot}\cap\mathcal{Y}$. \item[$\mathrm{(e)}$] $\omega(T)$ is a relative $\Gen[T]$-injective cogenerator in $\Gen[T]$. \item[$\mathrm{(f)}$] $\omega(T)\subseteq\Gen[T]^{\bot}\subseteq T^{\bot}$. \item[$\mathrm{(g)}$] $\Gennr[T][1][\operatorname{Gen}_{1}(T)]=\Gen[T]$. \end{itemize} \end{lem} \begin{proof} The item (a) follows from $\pdr[\mathcal{Y}][T]\leq1,$ (b) can be shown from (a) and Lemma \ref{lemdef quasitilting} (b), and (c) can be obtained from (b) and Lemma \ref{lemdef quasitilting} (b). On the other hand, (d) can be obtained from Lemmas \ref{lemdef quasitilting} (b) and \ref{lem:siltingvstiltin} (b). The first inclusion in (f) follows from (e) and the second one from $T\in\Genn[T][1]$. Moreover, (g) follows from Lemma \ref{lemdef quasitilting} (b). \ Let us show (e). By Lemma \ref{lem:GenT con T0 es par de torsion} and (b), $\mathfrak{C}:=\left(\Gen[T],T^{\bot_{0}}\right)$ is a torsion pair. Let us prove that $\omega(T)$ is a relative cogenerator in $\Genn[T][1]$. Let $X\in\Gen[T]$ and $i:X\rightarrow I_{0}(X)$ be its injective envelope. 
Consider the exact sequences $\eta:\;\suc[X][I_{0}(X)][Y][i]$ and $\mu:\;\suc[T'][I_{0}(X)][F][a][b]\mbox{,}$ where $\mu$ is the canonical exact sequence induced by $\mathfrak{C}$ with $T'\in\Gen[T]$ and $F\in T^{\bot_{0}}$. Note that $T'\in\omega(T)$ by definition. Now, since $X\in\Gen[T]$ and $F\in T^{\bot_{0}}$, we have $bi=0$. Then, $i$ factors through $a$. Consider the exact sequence $\suc[X][T'][D][f]$, where $af=i$. Note that $D\in\Gen[T]$ since $T'\in\Gen[T]$. Therefore, $\omega(T)$ is a relative cogenerator in $\Genn[T][1]$. \ It remains to show that $\omega(T)\subseteq\left(\Gen[T]\right)^{\bot}$. Let $W\in\omega(T)$. Then, there is an exact sequence $\nu:\quad\suc[W][I][F][\,][\,]$ with $I\in\Injx[R]$ and $F\in T^{\bot_{0}}$. Thus, for each $G\in\Gen[T],$ we have the exact sequence \[ \Homx[][G][F]\rightarrow\Extx[1][][G][W]\rightarrow\Extx[1][][G][I]. \] Hence $\Extx[1][][G][W]=0$ since $\mathfrak{C}$ is a torsion pair and $I\in\Injx[R]$. Finally, by (d), we have $W\in\Gen[T]^{\bot_{1}}\cap\Gen[T]\subseteq\Gen[T]^{\bot_{1}}\cap\mathcal{Y}\subseteq\Gen[T]^{\bot}$. \end{proof} \begin{thm}\label{thm: silting es Gen-tilting} Let $R$ be a ring and let $T\in\mathrm{Mod}(R)$ be quasitilting and such that $\pdr[\mathcal{Y}][T]\leq1$, where $\Gen[T]\subseteq\mathcal{Y}\subseteq\Modx[R]$. Then, $T$ is big $1$-$\Gen[T]$-tilting. \end{thm} \begin{proof} By Lemma \ref{lemdef quasitilting} (a), we get that $\Gen[T]$ is closed under extensions and direct summands. By Lemma \ref{lem: para silting implica Gen-tilting } (e,f), (T4) holds true; and (T5) is satisfied since $\Addx[T]$ is precovering. Then, by Lemma \ref{lem: para silting implica Gen-tilting } (g) and Proposition \ref{prop:primera generalizacion}, it remains to show the equality $\Gen[T]=T^{\bot}\cap\Gen[T],$ which follows from Lemma \ref{lem: para silting implica Gen-tilting } (b). \end{proof} \begin{rem} Let $R$ be a ring. 
\begin{enumerate} \item[(1)] We recall that $T\in\Modx[R]$ is \textbf{$\tau$-rigid} if there is a projective presentation $\varphi$ of $T$ such that $\Gen[T]\subseteq\mathcal{D}_{\varphi}$. It follows from Proposition \ref{prop:silting vs finendo quasitilting vs tilting} (a) that every silting module is quasitilting, finendo and $\tau$-rigid. A natural question, which remains open, is whether the converse of this implication holds true in general \cite[Section 5]{bazzoni2017pure}. In the study of the relation between these concepts, several examples have arisen. Namely, in \cite[Example 5.12]{bazzoni2017pure} an example was given of a quasitilting finendo module over a Dubrovin-Puninski ring $R$ that is not $\tau$-rigid. In \cite[Example 5.4]{silting2017}, it was shown that, for a commutative local ring $R$ with maximal ideal $\mathfrak{m}\neq0$, the module $R/\mathfrak{m}$ is quasitilting and finendo but not $\tau$-rigid. Lastly, in \cite[Example 5.11]{bazzoni2017pure}, an example was given of a faithful quasitilting module over a commutative von Neumann regular ring $R$ that is neither finendo nor $\tau$-rigid. \item[(2)] In \cite[Example 3.14]{siltingmodulessurvey}, an example was given of an $R$-module $T$, with $\pdx[T]\leq1$, that is silting but not tilting. Therefore, Theorem \ref{thm: silting es Gen-tilting} shows the existence of $1$-$\Gen[T]$-tilting $R$-modules of projective dimension $\leq1$ that are not tilting. \item[(3)] We recently had the opportunity to examine the results of \cite{parra2021tilting}. In this paper, the authors consider a quasitilting $R$-module $V$ and show that the full subcategory $\overline{\Genn[V][1]}$ is an abelian category and that $V$ is a $1$-tilting object of $\overline{\Genn[V][1]}$ \cite[Proposition 5.3]{parra2021tilting}. 
It is worth noting that the $1$-$\overline{\Genn[V][1]}$-tilting objects of $\mathrm{Mod}(R)$ do not seem to coincide with the $1$-tilting objects of the abelian category $\overline{\Genn[V][1]}$. Therefore, the two approaches can be regarded as complementary. \end{enumerate} \end{rem} In what follows, we will study $1$-$\Gen[T]$-tilting $R$-modules with the goal of finding the conditions needed for a $1$-$\Gen[T]$-tilting $R$-module to be silting. As a result of this pursuit, we will prove in Theorem \ref{thm:quasitilt sii 1-Gen-tiltin}, for an $R$-module $T$ with $\pdx[T]\leq1$, that $T$ is quasitilting if and only if $T$ is big $1$-$\Gen[T]$-tilting. \begin{prop}\label{prop:n-gen-tilting} Let $R$ be a ring and let $T\in\mathrm{Mod}(R)$ be such that $\Genn[T][1]$ is closed under extensions and $T$ is big $n$-$\Genn[T][1]$-tilting. Then, for $m:=\max\{1,n\}$, the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] $\Gennr[T][k][\operatorname{Gen}_{1}(T)]=\Genn[T][k+1]$ $\forall k\geq m.$ \item[$\mathrm{(b)}$] $\Genn[T][m+1]=T^{\bot}\cap\Gen[T].$ \item[$\mathrm{(c)}$] $\Genn[T][k+1]=T^{\bot}\cap\Gen[T]$ $\forall k\geq m.$ \item[$\mathrm{(d)}$] $\Addx[T]\subseteq T^{\bot}\cap\Gen[T]\subseteq\Genn[T][2]$ and $T^{\bot}\cap\Gen[T]$ is closed under $m$-quotients in $\Gen[T].$ \end{itemize} \end{prop} \begin{proof} Let us show (a). By Proposition \ref{prop:primera generalizacion} (c), $\Gennr[T][k][\operatorname{Gen}_{1}(T)]=\Gennr[T][k+1][\operatorname{Gen}_{1}(T)]\subseteq\Genn[T][k+1].$ Consider $X\in\Genn[T][k+1].$ Then, there is an exact sequence $0\rightarrow K\rightarrow T_{k}\overset{f_{k}}{\rightarrow}\cdots\rightarrow T_{1}\overset{f_{1}}{\rightarrow}X\rightarrow0\mbox{,}$ with $K\in\Gen[T]$ and $T_{i}\in\Addx[T]$ $\forall i\in[1,k]$. Thus $X\in\Gennr[T][k][\operatorname{Gen}_{1}(T)]$ since $\Kerx[f_{i}]\in\Gen[T]$ $\forall i\in[1,k].$ Therefore, (a) holds true. 
\ Finally, (b), (c) and (d) follow by (a) and Proposition \ref{prop:primera generalizacion} (b,c,d). \end{proof} \renewcommandx\Extx[4][usedefault, addprefix=\global, 1=i, 2=R, 3=M, 4=X]{\mathrm{Ext}{}_{#2}^{#1}\left(#3,#4\right)} \begin{lem}\label{cor: Gen-tilting es silting} Let $R$ be a ring and let $T\in\mathrm{Mod}(R)$ be big $1$-$\Gen[T]$-tilting with $\pdr[\mathcal{Y}][T]\le1$, where $\overline{\Genn[T][1]}\subseteq\mathcal{Y}\subseteq \mathrm{Mod}(R).$ Then, the following statements hold true. \begin{itemize} \item[$\mathrm{(a)}$] $\Gen[T]\subseteq T^{\bot}$. \item[$\mathrm{(b)}$] $\left(\Gen[T],T^{\bot_{0}}\right)$ is a torsion pair. \item[$\mathrm{(c)}$] $\Gen[T]=\Gen[T]\cap T^{\bot}=\Genn[T][k+1]$ $\forall k\geq1$. \item[$\mathrm{(d)}$] $T$ is quasitilting. \item[$\mathrm{(e)}$] $\left(\Gen[T]\right)^{\bot}\cap\mathcal{Y}=\left(\Gen[T]\right)^{\bot_{1}}\cap\mathcal{Y}\subseteq T^{\bot}\cap\mathcal{Y}$. \item[$\mathrm{(f)}$] $\Gen[T]\cap{}^{\bot_{1}}\left(\Gen[T]\right)={}^{\bot}(T^{\bot}\cap\Gen[T])\cap\Gen[T]=\Addx[T]={}^{\bot}(T^{\bot})\cap\Gen[T]=(\Addx[T])^{\vee}\cap\Gen[T]$. \end{itemize} \end{lem} \begin{proof} $ $ \begin{enumerate} \item Let $X\in\Gen[T]$, $H:=\Homx[R][T][X]$, and $u:T^{\left(H\right)}\rightarrow X$ be the morphism defined by $(m_{f})_{f\in H}\mapsto\sum_{f\in H}f(m_{f})$. Since $X\in\Gen[T]$, $u$ is an epimorphism. Thus, we have the exact sequence $\eta:\:\suc[K][T^{(H)}][X][][u]\mbox{.}$ Now, applying $\Homx[][T][-]$ to $\eta$, we get the long exact sequence \[ \Homx[][T][T^{(H)}]\xrightarrow{\Homx[][T][u]}\Homx[][T][X]\rightarrow\Extx[1][R][T][K]\rightarrow\Extx[1][R][T][T^{(H)}]\mbox{.} \] By (T2), $\Extx[1][R][T][T^{(H)}]=0$ and thus $\Extx[1][][T][K]=0$ since $\Homx[][T][u]$ is surjective. Therefore, since $K\leq T^{(H)}$ gives $K\in\overline{\Genn[T][1]}\subseteq\mathcal{Y}$ and $\pdr[\mathcal{Y}][T]\leq1$, we get $K\in T^{\bot_{1}}\cap\mathcal{Y}\subseteq T^{\bot}$. Then, using that $T^{(H)}, K\in T^{\bot}$ and $T^{\bot}$ is closed under mono-cokernels, we get $X\in T^{\bot}$. 
\item It follows from (a) and Lemma \ref{lem:GenT con T0 es par de torsion}. \item By (b), $\Genn[T][1]$ is closed under extensions. Then, (c) follows directly from (a) and Proposition \ref{prop:n-gen-tilting} (c). \item Observe that $\Gen[T]=\Genn[T][2]$ by (c), and that $\Homx[][T][-]$ is exact on $\Gen[T]$ by (a). Moreover, $\Gen[T]$ is a torsion class by (b). Therefore, $T$ is quasitilting by Lemma \ref{lemdef quasitilting} (a). \item It follows from (c) and Lemma \ref{lem:siltingvstiltin}. \item By (d) and Lemma \ref{lem:Add de quasitilt}, we have $\Addx[T]=$ $\Gen[T]\cap{}{}^{\bot_{1}}\left(\Gen[T]\right)$. Note that $\Addx[T]\subseteq\Genn[T][1]\cap(\Addx[T])^{\vee}$. In order to prove that the inclusion above is an equality, first observe that $\pdr[\operatorname{Gen}_{1}(T)][(\operatorname{Add}(T))^{\vee}]=\pdr[\operatorname{Gen}_{1}(T)][\operatorname{Add}(T)]=0$ by \cite[Lemma 4.3]{parte1} and (a). Let $X\in\Genn[T][1]\cap(\Addx[T])^{\vee}.$ Then, there is an exact sequence $\eta:\:\suc[X][T_{0}][X'][\,][\,]$ with $T_{0}\in\Addx[T]$ and $X'\in(\Addx[T])^{\vee}.$ Since $\pdr[\operatorname{Gen}_{1}(T)][(\operatorname{Add}(T))^{\vee}]=0,$ we have that $\eta$ splits and then $X\in\Addx[T]$. Therefore $\Addx[T]=$ $\Gen[T]\cap(\Addx[T])^{\vee}$. \\ Now, by (b), we know that $\Genn[T][1]$ is closed under extensions and direct summands. Hence, applying Theorem \ref{thm:el par n-X-tilting} (a), we get the equalities ${}^{\bot}(T^{\bot}\cap\Genn[T][1])\cap\Genn[T][1]=(\Addx[T])_{\Genn[T][1]}^{\vee}\cap\Genn[T][1]={ }^{\bot}\left(T^{\bot}\right)\cap\Gen[T]\mbox{.}$ Finally, observe that $(\Addx[T])_{\Genn[T][1]}^{\vee}=(\Addx[T])^{\vee}$. \end{enumerate} \end{proof} \begin{thm}\label{thm:quasitilt sii 1-Gen-tiltin}Let $R$ be a ring and $T\in\mathrm{Mod}(R)$ with $\pdr[\mathcal{Y}][T]\le1$, where $\overline{\Genn[T][1]}\subseteq\mathcal{Y}\subseteq \mathrm{Mod}(R)$. Then, $T$ is quasitilting if and only if $T$ is big $1$-$\Gen[T]$-tilting. 
\end{thm} \begin{proof} Note that $\Gen[T]\subseteq\overline{\Genn[T][1]}$. Then, the result follows from Lemma \ref{cor: Gen-tilting es silting} (d) and Theorem \ref{thm: silting es Gen-tilting}. \end{proof} The following lemma is contained in the proof of \cite[Proposition 5.6]{bazzoni2017pure}. \begin{lem}\cite{bazzoni2017pure}\label{lem:superfluo} Let $R$ be a ring and let $T\in\mathrm{Mod}(R)$ be such that $\Gen[T]\subseteq T^{\bot_{1}}$. If $P_{1}\overset{\sigma}{\rightarrow}P_{0}$ is a projective presentation of $T$ such that $\Kerx[\sigma]$ is a superfluous submodule of $P_{1}$, then $\Genn[T][1]\subseteq\mathcal{D}_{\sigma}$.\end{lem} \begin{thm}\label{thm:silting vs 1-Gen-tilting} Let $R$ be a ring, $T\in\mathrm{Mod}(R)$ with $\pdr[\mathcal{Y}][T]\leq1$ and $\overline{\Gen[T]}\subseteq\mathcal{Y}\subseteq\Modx[R],$ and let $\sigma:P_{1}\rightarrow P_{0}$ be a projective presentation of $T$ with $\Kerx[\sigma]$ a superfluous submodule of $P_{1}.$ Then, the following conditions are equivalent: \begin{itemize} \item[$\mathrm{(a)}$] $T$ is silting with respect to $\sigma.$ \item[$\mathrm{(b)}$] $T$ is big $1$-$\Gen[T]$-tilting and there is a $\mathcal{D}_{\sigma}$-preenvelope $\phi:R\rightarrow T_{0}$ such that $T_{0}\in\Addx[T]$. \end{itemize} Moreover, $\mathcal{D}_{\sigma}=\Gen[T]=T^{\bot}\cap\mathcal{Y}=T^{\bot_{1}}\cap\mathcal{Y}$ if one of the above conditions holds true. \end{thm} \begin{proof} (a) $\Rightarrow$ (b) By Proposition \ref{prop:silting parcial +S3 es silting-1}, there is an exact sequence $R\overset{\phi}{\rightarrow}T_{0}\rightarrow T_{1}\rightarrow0\mbox{,}$ where $T_{0},T_{1}\in\Addx[T]$ and $\phi$ is a $\mathcal{D}_{\sigma}$-preenvelope. Finally, note that $T$ is big $1$-$\Gen[T]$-tilting by Proposition \ref{prop:silting vs finendo quasitilting vs tilting} (a) and Theorem \ref{thm: silting es Gen-tilting}. 
\ (b) $\Rightarrow$ (a) Observe that $\Gen[T]\subseteq\mathcal{D}_{\sigma}$ by Lemma \ref{cor: Gen-tilting es silting} (a) and Lemma \ref{lem:superfluo}. Let us show that $\mathcal{D}_{\sigma}\subseteq\Gen[T]$. We know that there is a $\mathcal{D}_{\sigma}$-preenvelope $\phi:R\rightarrow T_{0}$. Then, for $X\in\mathcal{D}_{\sigma}$, every epimorphism $R^{(\alpha)}\rightarrow X$ factors through the preenvelope $\phi^{(\alpha)}$ via an epimorphism $T_{0}^{(\alpha)}\rightarrow X$. Therefore, $\mathcal{D}_{\sigma}\subseteq\Gen[T]$.\end{proof} \begin{rem} Let $R$ be a ring, $T\in\mathrm{Mod}(R)$ and let $\sigma:P_{1}\rightarrow P_{0}$ be a projective presentation of $T$. The condition of $\mathrm{Ker}({\sigma})$ being a superfluous submodule of $P_{1}$ means that the induced morphism $P_{1}\rightarrow\im[\sigma]$ is a projective cover. We can find different contexts where this kind of projective resolution can be built. For example, in \cite[Corollary 5.7]{bazzoni2017pure}, the following conditions on the ring and the module are mentioned: \begin{enumerate} \item[(i)] $R$ is left perfect; \item[(ii)] $R$ is semiperfect and $T$ is finitely presented; \item[(iii)] $\pdx[T]\leq1$. \end{enumerate} \end{rem} \begin{rem} In \cite{breaz2018torsion}, Simion Breaz and Jan {\v{Z}}emli{\v{c}}ka studied the torsion classes generated by silting modules. In particular, such torsion classes are characterized for perfect and hereditary rings. Namely, for a left perfect (or a left hereditary) ring $R$ and $S\in\Modx[R]$ such that $\Gen[S]$ is a torsion class, they proved in \cite[Theorem 2.4 \& Theorem 2.6]{breaz2018torsion} that $S$ is silting if and only if there is a $\Gen[S]$-preenvelope $\epsilon:R\rightarrow M$ such that $M\in{}{}^{\bot_{1}}\Gen[S]$. 
Note that, by Lemma \ref{lem:Add de quasitilt}, Proposition \ref{prop:silting vs finendo quasitilting vs tilting} and Theorem \ref{thm:quasitilt sii 1-Gen-tiltin}, the preenvelope $\epsilon:R\rightarrow M$ is the same preenvelope that appears in Theorem \ref{thm:silting vs 1-Gen-tilting} (b). Comparing these results, we observe the following. \begin{enumerate} \item [(1)] Let $R$ be a left perfect ring and $T\in\mathrm{Mod}(R)$ be big $1$-$\Gen[T]$-tilting. Since $R$ is left perfect, for every left $R$-module we can find a projective presentation $\tau:Q_{1}\rightarrow Q_{0}$, with $\Kerx[\tau]$ superfluous in $Q_{1}$. Then, by Theorem \ref{thm:silting vs 1-Gen-tilting} and \cite[Theorem 2.4]{breaz2018torsion}, $T$ is silting with respect to a projective presentation $\rho$ if and only if $T$ is silting with respect to every projective presentation $\sigma:P_{1}\rightarrow P_{0}$ of $T$, with $\Kerx[\sigma]$ superfluous in $P_{1}$. \item [(2)] Let $R$ be a left hereditary ring and $T\in\mathrm{Mod}(R)$ be big $1$-$\Gen[T]$-tilting. Since $R$ is left hereditary, for every left $R$-module we can find a monomorphic projective presentation $\tau:Q_{1}\rightarrow Q_{0}$, and consequently, with $\Kerx[\tau]$ superfluous in $Q_{1}$. Then, by Theorem \ref{thm:silting vs 1-Gen-tilting} and \cite[Theorem 2.6]{breaz2018torsion}, $T$ is silting with respect to a projective presentation $\rho$ if and only if $T$ is silting with respect to every projective presentation $\sigma:P_{1}\rightarrow P_{0}$ of $T$ with $\Kerx[\sigma]$ superfluous in $P_{1}$. \item [(3)] Let $R$ be a ring. Proposition \ref{prop:silting parcial +S3 es silting-1} states that, for every silting $S\in\mathrm{Mod}(R),$ there is a $\Gen[S]$-preenvelope $\epsilon:R\rightarrow M$ with $M\in\Addx[S]$. However, there are examples where the existence of this preenvelope does not imply that $S$ is silting (see \cite[Example 2.5]{breaz2018torsion} and \cite[Example 5.4]{silting2017}).
Therefore, it is worth noting that Theorem \ref{thm:silting vs 1-Gen-tilting} gives sufficient conditions under which the existence of such a preenvelope implies the silting property. \item [(4)] In \cite[Corollary 2.9]{breaz2018torsion}, it is proved for a left perfect (or a left hereditary) ring $R$ that, for every quasitilting finendo $Q\in \mathrm{Mod}(R),$ there is a silting $T\in\mathrm{Mod}(R)$ such that $\Addx[T]=\Addx[Q]$. It is worth mentioning that this is not true for every ring (see \cite[Example 2.10]{breaz2018torsion} and \cite[Example 5.4]{silting2017}). Therefore, Theorem \ref{thm:silting vs 1-Gen-tilting} gives sufficient conditions for a quasitilting finendo $R$-module to be silting. \\ Indeed, let $T$ be a quasitilting finendo $R$-module such that $\pdr[\operatorname{Gen}_{1}(T)][T]\leq1$. By Theorem \ref{thm:quasitilt sii 1-Gen-tiltin} and Theorem \ref{thm:quasitilting finendo vs torsion}, $T$ satisfies Theorem \ref{thm:silting vs 1-Gen-tilting}(b). Therefore, if $T$ admits a projective presentation $\sigma:P_{1}\rightarrow P_{0}$ with $\Kerx[\sigma]$ superfluous in $P_{1}$, then $T$ is silting with respect to $\sigma$ by Theorem \ref{thm:silting vs 1-Gen-tilting}. \end{enumerate} \end{rem} \bigskip \bibliographystyle{plain}
\section{Introduction} {SAX J1753.5$-$2349}\ is a neutron star Low Mass X-ray Binary (LMXB) discovered in 1996 by the {\it BeppoSAX}/Wide Field Camera (WFC) during a single type-I X-ray burst \cite{zand99}. However, no steady emission was detected from the source, leading to an upper limit of about 5 mCrab (2--8 keV) for a total exposure of 300 ks \cite{zand99}. Cornelisse et al. (2004) proposed that {SAX J1753.5$-$2349}\ is a member of a possibly non-homogeneous class of LMXBs, the so-called ``burst-only'' sources (see also Cocchi et al. 2001). These are a group of nine bursters discovered by {\it BeppoSAX}/WFC while exhibiting a type-I burst without any detectable persistent X-ray emission. Recently, {\it INTEGRAL}\ identified two new members of this class. In fact, photospheric radius expansion (PRE) bursts have been caught in two previously unclassified sources, namely XMMU~J174716.1--281048\ \cite{brandt06} and \ax\ \cite{chelo07}. Afterwards, both were classified as ``quasi-persistent'' Very Faint X-ray Transients (VFXTs), since they undergo prolonged accretion episodes of many years at low $\dot{M}$\ (Del Santo et al. 2007, Bassa et al. 2008). VFXTs are transients showing outbursts with low peak luminosity ($10^{34}$--10$^{36}$ erg s$^{-1}$\ in 2--10 keV), mainly discovered with the high-sensitivity instruments on-board {\it Chandra}\ and {\it XMM-Newton}\ during surveys of the Galactic Center region \cite{wij06}. They are believed to be the faintest known accretors, and are very likely a non-homogeneous class of sources. A significant fraction ($\sim 1/3$) of VFXTs are X-ray bursters (Degenaar \& Wijnands 2009, Del Santo et al. 2007, Del Santo et al. 2008, Cornelisse et al. 2004); thus they can be identified with neutron stars accreting matter from a low-mass companion (M $\lesssim$ 1M$_\odot$).
\begin{table} \begin{center} \caption{Log of the {\it INTEGRAL}\ observations of the {SAX J1753.5$-$2349}\ region: orbit number (Rev.), start and end time of the observations, exposure time for each orbit (taking into account the whole data-set), and number of pointings (SCW). Observations within a single orbit are not continuous. The first {\it INTEGRAL}\ detection of {SAX J1753.5$-$2349}\ occurred in rev. 732. A data sub-set from rev. 732 to 736 has been used to compute the averaged spectra. The last column reports the exposure of the spectrum in each orbit.} \scriptsize \begin{tabular}{lccccc} \hline Rev. & Start & End & Total Exp. & SCW & Spec. Exp.\\ & (MJD) & (MJD) & (ks) & & (ks)\\ \hline 724 & 54727.50 & 54728.23 & 58 & 17 & - \\ 725 & 54729.11 & 54731.52 & 198 & 56 & - \\ 726 & 54732.52 & 54734.46 & 160 & 45 & - \\ 729 & 54741.37 & 54741.86 & 42 & 12 & - \\ 731 & 54749.22 & 54749.55 & 20 & 8 & -\\ 732 & 54749.90 & 54750.85 & 83 & 32 & 26.2 \\ 733 & 54754.96 & 54755.46 & 38 & 11 & 10.8\\ 734 & 54756.87 & 54758.54 & 128 & 48 & 36.5\\ 735 & 54760.91 & 54761.53 & 43 & 13 & 23.2 \\ 736 & 54762.03 & 54763.63 & 38 & 49 & 30.0\\ \hline \\ \end{tabular}\\ \label{tab:log} \end{center} \end{table} In 2002, observations with {\it Chandra}\ and {\it XMM-Newton}\ revealed the nature of four {\it BeppoSAX}\ ``burst-only'' sources: one persistent very-faint source, two faint transient systems (with 2--10 keV peak luminosity in the range $10^{36}$--10$^{37}$ erg s$^{-1}$), and one VFXT (see Wijnands et al. 2006 and references therein). For the other five bursters, including {SAX J1753.5$-$2349}, only the quiescent emission could be derived ($\sim$10$^{32}$ erg s$^{-1}$; Cornelisse et al. 2004). Wijnands et al. (2006) proposed these systems as good candidates to be classified as VFXTs (see also Campana 2009).
On 2008 October 11, {\it RXTE}/PCA, {\it Swift}/BAT \cite{mark08} and {\it INTEGRAL}/IBIS \cite{cadol08} detected an outburst from {SAX J1753.5$-$2349}\ at a 10 mCrab flux level. Then, {\it Swift}/XRT observed {SAX J1753.5$-$2349}\ on October 23 \cite{degewij08}, during the decline phase of the outburst (Fig. \ref{fig:lc}). An improved source position, R.A.(J2000)=$17^{h} 53^{m} 31.90^{s}$, Dec(J2000)=$-23^{\circ} 48' 16.7''$, was provided \cite{starl08}. On 2009 March 13, it was re-pointed by {\it Swift}\ and a 3$\sigma$ upper limit was derived, which translates into a luminosity level $\lesssim$ $5 \times 10^{32}$ erg s$^{-1}$\ \cite{delsanto09}. In this paper we present the hard X-ray outburst of {SAX J1753.5$-$2349}\ observed by {\it INTEGRAL}/IBIS, as well as the first broad-band spectral analysis of the steady emission of a ``burst-only'' source. We estimate the long-term mass-accretion rate and discuss the nature of the transient system. \begin{figure} \centering \includegraphics[height=6cm]{delsanto10_fig1.ps} \caption{{SAX J1753.5$-$2349}\ BAT (top) and IBIS/ISGRI (bottom) count rate evolution in the 15--50 keV and 18--40 keV energy ranges, respectively. The XRT detection time is also shown on the bottom plot. The public BAT light curve starts from 54754 MJD; after MJD=54764 {SAX J1753.5$-$2349}\ was no longer pointed by {\it INTEGRAL}. \label{fig:lc}} \end{figure} \begin{figure*} \centering \includegraphics[height=8cm,angle=-90]{delsanto10_fig2.ps} \includegraphics[height=8cm,angle=-90]{delsanto10_fig3.ps} \caption{XRT and IBIS/ISGRI count rate spectra fitted with a simple power law ({\it{left}}); the total \texttt{bb+comptt} model (continuous line) and the two single components (dashed lines) ({\it{right}}). \label{fig:model}} \end{figure*} \section{Observation and data analysis} \subsection{{\it INTEGRAL}} This paper is based on {\it INTEGRAL}\ observations of the Galactic Centre region carried out in the framework of the AO6 Key-Programme.
Moreover, we used data from a public ToO on the source H 1743-322, at 8.6$^\circ$ from {SAX J1753.5$-$2349}, performed in 2008 October, for a total exposure time of 800 ks (see Tab. \ref{tab:log}). We reduced the data of the IBIS \cite{ube03} low-energy detector ISGRI \cite{lebrun03} and of JEM-X \cite{lund03} using the {\it INTEGRAL}\ Off-Line Scientific Analysis, release 8.0. Due to the source weakness, no signal was found in the JEM-X data. On October 10, the first IBIS detection of {SAX J1753.5$-$2349}\ was found (rev. 732). We extracted the IBIS/ISGRI light curves from each revolution as reported in Tab. \ref{tab:log} (bin size equal to the Total Exp. column) in the energy ranges 18--40 keV, 40--80 keV and 80--150 keV. For the spectral extraction, we used a sub-set of the data reported in Tab. \ref{tab:log}, selecting only pointings including {SAX J1753.5$-$2349}\ in the IBIS FOV up to 50\% coding (15$^\circ \times$15$^\circ$). We obtained four averaged spectra from revolutions 732, 733, 734 and 735-736 (the latter two were added together because of the poor statistics). Spectral fits were performed using the spectral X-ray analysis package XSPEC v. 11.3.1. \subsection{{\it Swift}} A {\it Swift}\ ToO was performed on October 23 (Degenaar \& Wijnands 2008). The {\it Swift}/XRT data of observation 00035713002 were collected in photon counting (PC) mode between 2008-10-23 17:48:53 and 21:08:57 UT, for a total on-source net exposure of 1 {\rm{ks}}. They were processed with standard procedures, filtering and screening criteria ({\tt xrtpipeline} v0.12.1) using the {\tt Heasoft} package (v.6.6.1). Moderate pile-up was present, so source events were extracted from an annular region (inner and outer radii of 3 and 20 pixels; 1 pixel $\sim 2\farcs36$), while background events were extracted from an annular region (inner and outer radii of 80 and 120 pixels) away from background sources.
An XRT spectrum was extracted and ancillary response files were generated with {\tt xrtmkarf}, to account for different extraction regions, vignetting and PSF corrections. We used the spectral redistribution matrices v011 in the Calibration Database maintained by HEASARC. All spectra were rebinned with a minimum of 20 counts per energy bin. We retrieved the BAT daily light curves (15--50 keV), available starting from MJD=54754, from the {\it Swift}/BAT transient monitor (Krimm et al. 2006, 2008; http://heasarc.gsfc.nasa.gov/docs/swift/results/transients/) page. \section{Results} The IBIS/ISGRI and BAT count rates of {SAX J1753.5$-$2349}\ are shown in Fig. \ref{fig:lc}. Based on the IBIS data, the hard X-ray outburst started on October 10 at a flux level of 10 mCrab (18--40 keV) and lasted at least 14 days (last pointing at 4 mCrab). This outburst is hence characterised by a fast increase of the flux and a linear decay with a slope of $-$0.13$\pm$0.01. An {\it INTEGRAL}\ pointing with no {SAX J1753.5$-$2349}\ detection was performed eight hours before the outburst started. We also averaged all our data (from rev. 724 to 731) collected before the first source detection, for a total of 500 ks, resulting in a 3$\sigma$ upper limit of 1 mCrab (Fig. \ref{fig:lc}). In order to look for any possible spectral variability, we fitted the four averaged IBIS spectra with a simple power law. We obtained a constant value (within the errors) of the photon index ($\Gamma$ $\sim$ 2), which indicates, in spite of the flux variation, a steady spectral state. The lack of spectral parameter variation led us to average the IBIS spectra of the different revolutions. The 18--100 keV averaged spectrum is well described by a simple power law model with a slope of $2.2 \pm 0.3$. A mean 18--100 keV flux of $1.5\times 10^{-10}$ erg cm$^{-2}$ s$^{-1}$ can be derived.
The XRT spectrum can be fitted by an absorbed power law model with a Hydrogen column density of N${\rm _H}=1.8 (\pm 0.6) \times 10^{22}$ cm$^{-2}$. The photon index is $\Gamma = 2.0 \pm 0.5$ and the resulting 2--10 keV absorbed and unabsorbed fluxes are $\sim$4.4 and $\sim$5.2 $\times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$ , respectively. We note that the derived N${\rm _H}$ is nominally higher than the absorption column of $0.83 \times 10^{22}$ cm$^{-2}$ \cite{corne02} found by interpolating the HI maps of Dickey \& Lockman (1990). However, the two values are consistent within the errors, given the large range of values (about $0.4-1.5 \times 10^{22}$ cm$^{-2}$) obtained in the box adopted to calculate the Weighted Average N${\rm _H}$ (with the nH Column Density Tool)\footnote{http://heasarc.gsfc.nasa.gov/docs/tools.html} from the HI maps. The joint IBIS and XRT spectrum (0.3--100 keV) was then fitted with different models. First we used an empirical model, the power law (Fig. \ref{fig:model}, {\it left}), then the more physical Comptonisation model. Indeed, the 1--200 keV spectrum of X-ray bursters in the low/hard state is most likely produced by the upscattering of soft seed photons by a hot, optically thin electron plasma (e.g. Barret et al. 2000 and references therein). Moreover, black-body emission from the neutron star surface is also expected to be observed in the low/hard states of bursters (e.g. Natalucci et al. 2000 and references therein). We tried to add a \texttt{BB} component to the two models. The best-fit parameters and mean fluxes are reported in Tab. \ref{tab:fit_sim}. Using the physical thermal Comptonisation model \texttt{COMPTT} \cite{tita94} in XSPEC, the electron temperature is not well constrained, but a lower limit of $\sim$24 keV (at the $90\%$ confidence level) can be inferred (see Tab. \ref{tab:fit_sim} and contour levels in Fig. \ref{fig:cont}).
This is consistent with the electron temperatures observed in burster systems, even in those brighter than {SAX J1753.5$-$2349}\ \cite{barret00}. With the addition of the \texttt{BB} component to the thermal Comptonisation, a typical value of the black-body temperature (kT$_{\rm BB}$ $\sim$0.3 keV) is obtained (Fig. \ref{fig:model}, {\it right}), even though this component is not required by the F-test probability ($7 \times 10^{-2}$). We may argue that the high absorption observed in {SAX J1753.5$-$2349}\ could be a strong obstacle to the firm detection of this component. As a first approximation, the accretion luminosity L$_{\rm acc}$ coincides with the bolometric luminosity of the source (0.1--100 keV). Using the mean 0.1--100 keV flux obtained with the \texttt{COMPTT} model fit and assuming a distance of 8 kpc (Galactic Centre), a value of L$_{\rm acc}=4.3\times10^{36}$ erg s$^{-1}$\ ($\sim$0.02 L$_{\rm Edd}$) is derived. The averaged mass-accretion rate ($\langle \dot{M}_{\mathrm{ob}} \rangle=R L_{\mathrm{acc}}/GM$, where $G$ is the gravitational constant, $M=1.4~\mathrm{M_{\odot}}$ and $R=10$~km for a neutron star accretor) during the outburst is $6.7 \times 10^{-10}$ M$_\odot$\ yr$^{-1}$.
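As a sanity check, the Eddington ratio and the outburst accretion rate quoted above can be reproduced with a few lines of Python. This is only a sketch: the cgs constants below are rounded values chosen for illustration, and the exact figures depend on the adopted values of $G$, $\mathrm{M_{\odot}}$ and $L_{\rm Edd}$, so the result should be read as an order-of-magnitude estimate.

```python
# Rounded cgs constants; slightly different adopted values shift the
# results at the ~10 per cent level.
G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33    # solar mass [g]
YEAR = 3.156e7      # seconds per year
L_EDD = 1.8e38      # Eddington luminosity of a 1.4 M_sun neutron star [erg/s]

def mdot_accretion(l_acc, mass_msun=1.4, radius_cm=1e6):
    """Mass-accretion rate [M_sun/yr] from Mdot = R * L_acc / (G * M)."""
    mdot_gs = radius_cm * l_acc / (G * mass_msun * M_SUN)  # g/s
    return mdot_gs * YEAR / M_SUN

L_ACC = 4.3e36                    # quoted 0.1--100 keV luminosity [erg/s]
print(L_ACC / L_EDD)              # ~0.02 L_Edd, as quoted in the text
print(mdot_accretion(L_ACC))      # a few 10^-10 M_sun/yr
```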
\begin{table*} \begin{center} \caption{Best-fit parameters of the joint XRT/IBIS spectrum fitted with four different models.} \vspace{1em} \renewcommand{\arraystretch}{1.5} \begin{tabular}{lrccccccc} \hline Model & N$_{H}$ & $kT_{BB}$ & $\Gamma$ & $E_{c}$ & $kT_{e}$ & $\tau$ & $\chi^2_{\nu}$(dof) & $F_{\rm bol}^{\mathrm a}$ \\ & $10^{22}$ ($\rm {cm}^{-2}$) & (keV) & & (keV) & (keV) & & & (erg cm$^{-2}$ s$^{-1}$ ) \\ \hline POW & 2.2$^{+0.5}_{-0.4}$ & - & $2.3 \pm 0.3$ & - & - & - & 0.91(19)& $1.3\times 10^{-9}$ \\ BB+POW & 2.8$^{+2.0}_{-1.0}$ & $0.4^{+0.3}_{-0.1}$ & $2.1 \pm 0.3$ & - & & & 0.82(17) & $5.6\times 10^{-10}$ \\ Comptt & 1.9$\pm 0.4$ & - & - & - & $> 24$ & $0.2^{+1.3}_{-0.1}$ & 1.07(18) & $1.1\times 10^{-9}$ \\ BB+Comptt & 2.7$^{+2.0}_{-1.0}$ & $0.4^{+0.3}_{-0.2}$ & - & - & $> 17$ & $0.8^{+2.2}_{-0.6}$ & 0.86(16) & $6.3\times 10^{-10}$ \\ \hline \\ \end{tabular} \label{tab:fit_sim} \end{center} \vspace{-0.6cm} \begin{list}{}{} \item[$^{\mathrm{a}}$] The bolometric flux of the unabsorbed best-fit model spectrum. \end{list} \end{table*} \section{Discussion} We report here for the first time the broad-band spectrum, from soft to hard X-rays, of the persistent emission from a so-called ``burst-only'' source. In particular, none of these sources had ever been studied above 10 keV during their persistent emission. The outburst from {SAX J1753.5$-$2349}\ observed with {\it INTEGRAL}/IBIS has a duration of at least 14 days, without any evidence for type-I X-ray bursts throughout the {\it INTEGRAL}\ observations of the Galactic Centre region performed since 2003. From the {\it RXTE}/PCA flux detection at 8 mCrab \cite{mark08} we can derive an absorbed 2--10 keV peak flux of about $1.7 \times 10^{-10}$ erg cm$^{-2}$ s$^{-1}$ \ which translates into an unabsorbed luminosity higher than $1.3 \times 10^{36}$ erg s$^{-1}$.
This value seems to indicate that {SAX J1753.5$-$2349}\ is a hybrid system (such as AX J1745.6--2901 and GRS 1741.9--2853, see Degenaar \& Wijnands 2009) which displays very-faint outbursts with 2--10 keV peak luminosity $L_{X} < 10^{36}$ erg s$^{-1}$\ (as resulting from the WFC observations in 1996), as well as outbursts with luminosities in the range $10^{36-37}$ erg s$^{-1}$, which are classified as faint (FXT; Wijnands et al. 2006). However, it is worth noting that the $L_{X}$ boundary at $10^{36}$ erg s$^{-1}$\ is somewhat arbitrary (as is the VFXT/FXT classification). Nevertheless, our result reinforces the hypothesis that the so-called ``burst-only'' sources belong to the class of the subluminous neutron star X-ray binaries. A rough estimate of the duty cycle (as the ratio $t_{\mathrm{ob}}/t_{\mathrm{rec}}$) can be obtained. The time interval between the two measurements of the quiescence (February 2008--March 2009) is about 13 months, while the outburst recurrence ($t_{\mathrm{rec}}$) is about 12 years (from the burst event in 1996). However, it is possible that we missed other outbursts of {SAX J1753.5$-$2349}\ that occurred between 1996 and 2008 within periods not covered by Galactic Centre monitoring. The outburst duration ($t_{\mathrm{ob}}$) ranges from a minimum of 14 days (as observed) to a maximum of 13 months, since there are no other X-ray observations apart from those in October. In fact, we cannot exclude that the hard X-ray outburst may be part of a longer outburst that occurred at a lower luminosity level, only detectable by high-sensitivity X-ray telescopes. \begin{figure} \centering \includegraphics[height=7cm,angle=-90]{delsanto10_fig4.ps} \caption{Confidence contour levels of the electron temperature and plasma optical depth for the {\texttt{comptt}} model fitting the broad-band spectrum.
\label{fig:cont}} \end{figure} This translates into a duty cycle ranging from a minimum of 0.3$\%$ to a maximum of 9$\%$, and into a long-term time-averaged accretion rate ($\langle \dot{M}_{\mathrm{long}} \rangle=\langle \dot{M}_{\mathrm{ob}} \rangle \times t_{\mathrm{ob}} / t_{\mathrm{rec}}$) ranging from 2.2$\times$$10^{-12}$ to 6.0$\times$$10^{-11}$ M$_\odot$ yr$^{-1}$. King \& Wijnands (2006) suggested that neutron stars in transient LMXBs with low time-averaged mass-accretion rates might be difficult to explain without invoking exotic scenarios such as accretion from a planetary donor. However, the regime of $\langle \dot{M}_{\mathrm{long}} \rangle$ estimated for {SAX J1753.5$-$2349}\ can be well explained within current LMXB evolution models. In spite of the flux variability along the outburst, the spectral state of {SAX J1753.5$-$2349}\ remains steady, in the low/hard state. This is in agreement with the fact that a really low X-ray luminosity, $L_{X}$ $\lesssim$ $0.01 L_{Edd}$ or so, produces a hard state in most sources \cite{klis06}. Following in't Zand et al. (2007), we have estimated the 40--100/20--40 keV hardness ratio within each {\it INTEGRAL}\ revolution. We find a value consistent with 1, which confirms the hard nature of the system. This is also consistent with the low mass-accretion rate inferred (see also Paizis et al. 2006), i.e. {SAX J1753.5$-$2349}\ is not a fake faint system and there is no reason to assume that the system is obscured to explain the low $\dot{M}$. Moreover, King (2000) argued that the faint low-mass X-ray transients are mainly neutron star X-ray binaries in very compact binaries with orbital periods shorter than 80 min. We suggest that {SAX J1753.5$-$2349}\ is a good candidate to harbour an accreting neutron star in a very compact system.
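The duty-cycle range and the long-term accretion rate above follow directly from the quoted numbers; a minimal check, assuming the 14-day/13-month outburst-duration bounds, the 12-yr recurrence time, and the outburst accretion rate adopted in the text:

```python
MDOT_OB = 6.7e-10     # outburst-averaged accretion rate [M_sun/yr] (quoted)
T_REC_YR = 12.0       # outburst recurrence time [yr]

def long_term(t_ob_yr):
    """Return (duty cycle, <Mdot_long> [M_sun/yr]) for an outburst of
    duration t_ob, with <Mdot_long> = <Mdot_ob> * t_ob / t_rec."""
    duty_cycle = t_ob_yr / T_REC_YR
    return duty_cycle, MDOT_OB * duty_cycle

dc_min, mdot_min = long_term(14.0 / 365.25)   # minimum: 14 days
dc_max, mdot_max = long_term(13.0 / 12.0)     # maximum: 13 months
# duty cycle ~0.3% to ~9%; <Mdot_long> ~2e-12 to ~6e-11 M_sun/yr
```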
In conclusion, {SAX J1753.5$-$2349}\ joins a sample of low-luminosity transient LMXBs \cite{degewij09}, which display different behaviour in terms of peak luminosity, outburst duration and recurrence time from year to year. Up to now, it is not understood whether these variations should be interpreted as being due to changes in the mass-transfer rate or as the result of instabilities in the accretion disc (Degenaar \& Wijnands 2009 and references therein). \section*{Acknowledgments} Data analysis is supported by the Italian Space Agency (ASI), via contracts ASI/INTEGRAL I/008/07/0 and ASI I/088/06/0. MDS thanks Memmo Federici for the {\it INTEGRAL}\ data archival support at IASF-Roma. We thank the anonymous referee for the quick response and useful suggestions.
\section{Introduction} The details of galaxy assembly and evolution processes remain relatively unknown. Much of our current knowledge of the high-redshift galaxy populations still relies on the integrated spectra and multi-wavelength photometry acquired through deep and wide surveys (e.g. VVDS and COSMOS). More detailed constraints are however needed to understand the main physical processes (angular momentum exchange, dissipation and cooling, feedback from star formation or AGN, etc.) involved in the formation and evolution of galaxies. Such constraints are indeed crucial inputs for theories and simulations of galaxy formation and evolution. Spatially-resolved investigations of individual galaxies at early stages of their evolution are thus crucial to disentangle the different processes of galaxy mass assembly. Thanks to the advent of sensitive near-infrared (NIR) integral field spectrographs (e.g. SINFONI, OSIRIS) mounted on 8--10m class telescopes, such studies have recently become possible (see Epinat et al. in these proceedings for a review), allowing us to probe the complex kinematics and morphologies of high-redshift galaxies, and enabling the mapping of the distribution of star formation and of physical properties such as chemical abundances. These powerful instruments indeed provide access, for $z\sim 1-4$ galaxies, to the well-calibrated spectral diagnostics of the physical properties from rest-frame optical emission lines such as \ifmmode {\rm H{\alpha}} \else $\rm H{\alpha}$\fi, \ifmmode {\rm H{\beta}} \else $\rm H{\beta}$\fi, [N\,{\sc ii}]$\lambda$6584, [S\,{\sc ii}]$\lambda\lambda$6717,6731, [O\,{\sc iii}]$\lambda$5007, and [O\,{\sc ii}]$\lambda$3727. Using the NIR integral field spectrograph SINFONI at ESO/VLT, we are conducting a major survey of spatially-resolved studies of high-redshift galaxy populations: the Mass Assembly Survey with SINFONI in VVDS, or ``MASSIV''.
With the detailed information provided by SINFONI on individual galaxies, the key science goals of the MASSIV survey are to investigate in detail: (1) the nature of the dynamical support (rotation vs. dispersion) of high-z galaxies, (2) the respective roles of mergers (minor and/or major) and gas accretion in galaxy mass assembly, and (3) the process of gas exchange (inflows/outflows) with the intergalactic medium through the derivation of metallicity gradients. The MASSIV sample includes 84 star-forming galaxies drawn from the VIMOS VLT Deep Survey (VVDS) in the redshift range $0.9 < z < 1.8$. So far, we have collected and reduced observations of 50 galaxies. In this paper, we present the MASSIV sample together with the first results coming out of the analysis of the ``first epoch'' 50 MASSIV galaxies. Throughout this paper, we assume a standard $\Lambda$-CDM cosmology, i.e. $h=0.7$, $\Omega_{\mathrm{m}}=0.3$ and $\Omega_{\Lambda}=0.7$. For this cosmology, 1\hbox{$^{\prime\prime}$}\ corresponds to $\sim 8$ kpc at $z\sim 1-2$. \section{The MASSIV sample: selection criteria and global properties} We have used the VVDS sample to select galaxies across the peak of star formation activity around $z\sim 1.5$. VVDS offers the advantage of combining a robust selection function and secure spectroscopic redshifts. For the MASSIV survey, we have defined a sample of 84 VVDS star-forming galaxies at $0.9 < z < 1.8$ suitable for SINFONI observations. Three selection criteria have been applied successively. First, the MASSIV targets were selected to be star-forming galaxies.
In most cases, the selection was based on the measured intensity of the [O\,{\sc ii}]$\lambda$3727\ emission line in the VIMOS spectrum (see Figure~\ref{zhist_complitt}) or, for the few cases where the [O\,{\sc ii}]$\lambda$3727\ emission line falls outside the VIMOS spectral range, on their observed photometric $UBVRIK$ spectral energy distribution (SED) and/or UV rest-frame spectrum, which is typical of star-forming galaxies. The star-formation criteria ensure that the brightest rest-frame optical emission lines, mainly \ifmmode {\rm H{\alpha}} \else $\rm H{\alpha}$\fi\ and [N\,{\sc ii}]$\lambda$6584, used to probe kinematics and chemical abundances, will be observed with SINFONI in the NIR $J$ and $H$ bands. Among these star-forming galaxy candidates, we have further restricted the sample taking into account one important observational constraint: the observed wavelength of the H$\alpha$ line had to fall at least 9\AA\ away from strong OH night-sky lines, to avoid heavy contamination of the galaxy spectrum by sky-subtraction residuals. Finally, a fraction of the MASSIV galaxies have been selected to be observed at higher spatial resolution with the adaptive optics (AO) system of SINFONI, assisted by the Laser Guide Star facility. In these cases, a bright star ($R < 18$ mag) close enough to the target ($d < 60\hbox{$^{\prime\prime}$}$) is needed for the zero-order tip-tilt corrections. Most of the MASSIV galaxies were observed in seeing-limited mode with a median seeing of $\sim 0.65$\hbox{$^{\prime\prime}$}. Only ten targets have been acquired with AO, achieving a FWHM resolution of $\sim 0.25$\hbox{$^{\prime\prime}$}. The total on-source integration times typically range from 80 to 120 minutes (see Contini et al., in prep. for details). MASSIV galaxies are distributed in the redshift range between $z\sim 0.94$ and $1.80$, with a median value of $z=1.33$. MASSIV is thus probing a lower redshift range than the SINS, OSIRIS and LSD/AMAZE surveys (see Figure~\ref{zhist_complitt}).
However, even if the LSD/AMAZE sample targets galaxies at the highest redshifts ($z \sim 2.6-3.8$), the MASSIV, OSIRIS and SINS surveys probe the common redshift range $z \sim 1.3-1.8$. Stellar masses and SED-based Star Formation Rates (SFR) for the MASSIV galaxies have been derived using standard SED fitting procedures (see Contini et al., in prep. for details). Stellar masses range between $\sim 3 \times 10^{9}$ and $6 \times 10^{11}$ M$_{\odot}$, with a median value of $1.4 \times 10^{10}$ M$_{\odot}$. The stellar mass range probed by MASSIV is rather similar to the one targeted by the SINS, OSIRIS, and LSD/AMAZE surveys, extending from $\sim 10^9$ to $5 \times 10^{11}$ M$_{\odot}$. The median value of the MASSIV sample is intermediate between the median values of the OSIRIS/LSD/AMAZE ($1.1 \times 10^{10}$ M$_{\odot}$) and the SINS ($2.5 \times 10^{10}$ M$_{\odot}$) samples. SED-based SFRs of MASSIV galaxies range between $\sim 5$ and $400$ M$_{\odot}$ yr$^{-1}$, with a median value of $\sim 31$ M$_{\odot}$ yr$^{-1}$. The SFR range probed by MASSIV is rather similar to the one targeted by the SINS and OSIRIS surveys, extending from $\sim 5$ to $400$ M$_{\odot}$ yr$^{-1}$. The median value of the MASSIV sample is however smaller than the median values of the OSIRIS (47 M$_{\odot}$ yr$^{-1}$), the SINS (72 M$_{\odot}$ yr$^{-1}$), and LSD/AMAZE (100 M$_{\odot}$ yr$^{-1}$) samples. \begin{figure}[bt] \begin{center} \includegraphics[width=8cm]{cosmic_SFR_hopkins06_Massiv.pdf} \includegraphics[width=5.4cm]{selection_oii_flux_vs_ew_ouaga.pdf} \caption{{\it Left}. Evolution of the cosmic star formation rate density as a function of look-back time and redshift (adapted from Hopkins 2006). The redshift range of MASSIV ($0.9 < z < 1.8$, magenta box) is compared with other major IFU surveys of distant galaxies: IMAGES ($z \sim 0.4-0.8$), SINS/OSIRIS ($z \sim 1.4-2.6$), and LSD/AMAZE ($z \sim 2.6-3.8$).
The relative height of each box is proportional to the sample size. {\it Right}. Selection of star-forming galaxies with secure redshift and measured [O\,{\sc ii}]$\lambda$3727\ emission line in the VVDS fields for SINFONI follow-up observations. The green dashed line indicates the selection box of MASSIV targets (filled symbols). The 64 galaxies selected for the MASSIV survey based on their [O\,{\sc ii}]$\lambda$3727\ emission-line strength are indicated as red squares. } \label{zhist_complitt} \end{center} \end{figure} \section{Evidence for positive metallicity gradients in high-redshift galaxies} A visual classification has been performed on the ``first epoch'' sample of 50 MASSIV galaxies using mainly CFHT $I$-band images and velocity field/dispersion maps (see Epinat et al., in prep. for details). For each galaxy, a velocity model has been fitted to its velocity map, and a derived velocity dispersion map (corrected for beam smearing, see Epinat et al. 2009) is produced. The classification has been performed by eight people independently and then reconciled to produce the final classification with corresponding confidence levels. This broad classification is based on two criteria. The first one concerns the kinematical state of the main component and the second one the close environment of the galaxy. We thus defined three types of kinematical classes: disk in regular rotation, perturbed rotation and quasi non-rotating object (perturbed and rotating), and two types of environment: isolated and interacting/merging objects. The result of this classification is shown in Figure~\ref{gradZ} (left). At high redshifts the gas-phase metallicity of individual galaxies can only be measured with limited accuracy. Here, we use the [N\,{\sc ii}]$\lambda$6584/\ifmmode {\rm H{\alpha}} \else $\rm H{\alpha}$\fi\ emission-line ratio as an indicator of oxygen abundance, with the calibration obtained by P\'erez-Montero \& Contini (2009).
For 29 galaxies of the ``first epoch'' MASSIV sample, we have been able to quantify the radial behavior of the gas-phase metallicity (see Fig.~\ref{gradZ}, right). The radial gradients are mostly very weak, some positive and others negative. Contrary to the global trend in the local universe, where the gas-phase metallicity of spiral galaxies decreases with galactocentric radius, a significant fraction of our galaxies have larger metallicities at larger distances from the center. This is not, however, the first time that positive metallicity gradients have been found. Recently, Werk et al. (2010) reported a positive gradient in a local galaxy and proposed several scenarios to explain their discovery: (i) radial redistribution of the metal-rich gas produced in the nucleus, (ii) supernovae blowing out metal-rich gas, enriching the IGM, which then falls onto the outer parts of the disk, or (iii) the result of a past interaction. Taken strictly at face value, 57\% of the gradients in our sample are positive, although some of them are very weak. Taking into account the $1 \sigma$ error on each slope (shown in yellow in Figure~\ref{gradZ}), we count 7 galaxies with a secure positive gradient. Among these galaxies, 5 are classified as interacting systems, while the 2 others are isolated. We thus conclude that the majority of the galaxies for which we detect a secure positive metallicity gradient are interacting. This result tempers the recent interpretation by Cresci et al. (2010) of positive metallicity gradients in three $z\sim 3$ galaxies as evidence for cold gas accretion as the main mechanism of galaxy mass assembly at high redshifts. \begin{figure}[bt] \begin{center} \includegraphics[width=6.7cm]{MASSIV_43gal.pdf} \includegraphics[width=6.7cm]{MASSIV_positivegradZ.pdf} \caption{{\it Left}. \ifmmode {\rm H{\alpha}} \else $\rm H{\alpha}$\fi-based velocity fields of the ``first epoch'' MASSIV sample.
Galaxies have been classified into four broad classes depending on their close environment (isolated vs. interacting/merger system) and kinematics (rotating disks vs. slow/no rotation). About one third of MASSIV galaxies show clear signatures of close interactions and/or ongoing merging (see Epinat et al., in prep. for details). {\it Right}. Metallicity gradients for 29 MASSIV galaxies with spatially-resolved metallicities. The red lines are the best fits to the data and the yellow regions represent the 1$\sigma$ errors associated with the gradients. Positive gradients are clearly observed in 7 MASSIV galaxies, among which 5 are interacting systems (see Queyrel et al., in prep.). } \label{gradZ} \end{center} \end{figure} \begin{acknowledgements} This work is supported by the French ANR grant ANR-07-JCJC-0009, the CNRS-INSU and its Programme National Cosmologie-Galaxies. \end{acknowledgements} \vspace{-0.2cm}
\section{Introduction} \label{sec:introduction} Reverberation mapping (RM) has become a powerful tool for determining the physical properties of active galactic nuclei (AGN; \citealt{Peterson1993}). As the continuum of an AGN varies, each broad emission line (BEL) in its spectrum usually responds with a mean lag, $\tau$, on the order of hours to months, depending on the luminosity of the system and the specific transition involved \citep{Onken2004, Kaspi1999, Kaspi2005}. By associating this lag with the light travel time from the central engine to the line-forming region, one obtains an estimate for the size of the broad line region (BLR), $R_{\rm BLR} \simeq c \tau$. Moreover, BELs in Type I AGN are substantially Doppler-broadened, so the width of a line is a measure of the velocity field in the BLR. If this is assumed to be roughly virialized -- $v_{\rm BLR} \simeq \sqrt{G M_{\rm BH}/R_{\rm BLR}}$ -- a black hole mass estimate follows immediately as $M_{\rm BH} \simeq c\tau_{\rm BLR} v_{\rm BLR}^2 / G$. \begin{figure} \includegraphics[width=\columnwidth]{isodelay} \caption{Isodelay contours, from \protect\citet{Peterson2004}.} \label{fig:intro:isodelay} \end{figure} In reality, line-emitting material at any given distance from the central engine can produce a wide range of observed lags. Equivalently, any given lag can in principle be produced by line-emitting material across a wide range of distances. More specifically, isodelay surfaces around a point source form a set of concentric paraboloids of revolution centred on the line connecting the observer and the source (Figure~\ref{fig:intro:isodelay}; \citealt{Peterson2004}). Which parts of an isodelay surface {\em actually} produce the observed line emission at a given lag -- and which isodelay surfaces actually contribute to a given line -- depends on the geometry, density and ionization state of the BLR, as well as on the inclination of the observer with respect to the system.
In addition, different {\em parts} of a given line -- e.g. red and blue line wings -- will exhibit different responses, depending on the kinematics of the BLR. For example, if the BLR is dominated by outflow kinematics, material moving towards the observer is also physically closest to the observer. The response of the blue wing is then expected to lead that of the red wing (see Figure~\ref{fig:background:tf_example}). The true response of a given line to continuum variations is therefore a distribution function over (at least\footnote{In principle, the line response also depends on the (possibly variable) source spectral energy distribution (SED), on the amplitude and shape of the continuum variations and on any physical changes in the BLR structure over the course of the observing campaign.}) time delay and radial velocity. Moreover, the shape of this function encodes information about the structure of the BLR -- its geometry, kinematics and ionization state -- on otherwise unresolvably small physical scales \citep{Welsh1991}. Thus, provided we can decode them, time- and velocity-resolved response functions (aka ``velocity-delay maps") provide us with a powerful tool for determining the physical nature of the BLR. Several recent observational campaigns have obtained time-resolved spectroscopic data sets designed to allow the recovery of such 2-D response functions \citep{DeRosa2015a, Du2014}. These efforts have produced a wide range of apparent kinematic signatures, from simple rotation to both inflows \citep{Ulrich1996, Grier2013, Bentz2008, Bentz2010, Gaskell1988, Koratkar1989} and outflows \citep{Denney2009, Du2016}. This variety of recovered response functions suggests that either the BLR is a much more heterogeneous entity than envisaged by current unification models, or that we are not correctly recovering and/or interpreting response functions from observational data. The aim of this paper is to test the latter possibility. 
It is reasonable to worry about our ability to infer the physical structure of the BLR from the available observational data. Fundamentally, the issue is that this type of inference represents an ill-posed inverse problem. In particular, even though any time-resolved spectroscopic data set of finite quality contains {\em some} information about the BLR geometry and kinematics, this provides no guarantee of uniqueness. Thus, in principle, many physically different BLR structures might all produce very similar observational signatures. In practice, external constraints (aka ``regularization") are usually used to select a unique solution from the set of models that are consistent with the data. The challenge is establishing that the selected solution is the physically correct one. An excellent way to validate the inversion methods used in RM is to test them in a controlled manner against a \emph{known} response function. We can do this by calculating the response function for a physically-motivated BLR model and using this to generate a simulated spectroscopic time series that is comparable in quality to those provided by current observing campaigns \cite[e.g.][]{DeRosa2015a}. We can then apply the RM methods to this mock data set and check the results against ``ground truth" (which is known in this case). Here, we implement this idea by building on the results of \cite{Mangham2017}. There, we used a radiative transfer and ionisation code \citep{Long2002} to produce physically-motivated response functions for one candidate BLR geometry, a rotating biconical disc wind \citep{Shlosman1993}. We now use these response functions, together with a driving continuum derived from an actual UV data set \citep{Fausnaugh2016}, to generate mock spectroscopic observing campaigns. The two specific model response functions we use in our tests are based on a high-luminosity QSO and low-luminosity AGN disc wind model, respectively. 
One of these satisfies standard assumptions about response function behaviour (i.e. a linear, positive response with increasing luminosity); the other does not. Taken as a pair, they provide a sensitive test of RM methods in very different regimes. The remainder of this paper is organised as follows. In Section~\ref{sec:background}, we outline the theory behind RM. In Section~\ref{sec:method}, we describe the radiative transfer and ionisation code we use to generate our simulated response functions and explain how these response functions are then used to generate the synthetic spectroscopic time series for our mock observing campaigns. The authors of the two specific RM methods we test -- {\sc MEMEcho}\ and CARAMEL -- then describe their respective analysis techniques. In Section~\ref{sec:results}, we present, assess and discuss the results obtained by each method. Finally, in Section~\ref{sec:conclusions}, we summarise our main conclusions. \section{Background} \label{sec:background} \subsection{Reverberation Mapping: Basic Principles} \label{sec:background:basics} Reverberation mapping is based on the premise that the BLR reprocesses the ionising continuum from the central source into line emission. Fluctuations in the continuum propagate outwards at the speed of light, causing changes in the line emission when they interact with material in the BLR at later times. The time-scale on which this material responds -- i.e. the time-scale on which ionization equilibrium is established in the BLR -- is thought to be short compared to both the continuum variability and light-travel time-scales. The contribution of BLR material at position ${\bf \underline{r}}$ to the total line luminosity observed at time $t$ then depends only on the continuum luminosity at the earlier time $t-\tau$. Here, $\tau$ is the additional time required for light emitted by the central engine to reach the observer via a path that includes ${\bf \underline{r}}$.
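To make the geometry concrete: material at distance $r$ from the continuum source, at angle $\theta$ between the direction to that material and the direction to the observer, re-emits with an extra delay $\tau = (r/c)(1 - \cos\theta)$, ranging from zero (directly in front of the source) to $2r/c$ (directly behind it). The following minimal sketch, with illustrative units of light-days and names of our own choosing, is not part of any RM code:

```python
import math

def delay_days(r_lightdays, theta):
    """Geometric reprocessing delay tau = (r / c) * (1 - cos(theta)).

    r_lightdays : distance from the source, in light-days
    theta       : angle between the material and the observer's line
                  of sight through the source (0 = in front, pi = behind)
    """
    return r_lightdays * (1.0 - math.cos(theta))

# Material in front of the source adds no delay; material directly
# behind it lags by the maximum 2 r / c.
front = delay_days(10.0, 0.0)
behind = delay_days(10.0, math.pi)
```

Holding $\tau$ fixed in this relation and solving for $r(\theta)$ recovers the paraboloidal isodelay surfaces discussed in the introduction.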
\subsection{1-D Response Functions: Delay Distributions} \label{sec:backgrounsd:1drf} If the line response is perfectly linear -- i.e. if each location in the BLR always produces a fixed number of line photons per continuum photon -- the total line luminosity $L$ at time $t$ is a function of the continuum $C$ as \begin{equation} L(t) = \int_{0}^{\infty} C(t - \tau) \, \Psi(\tau) \, d\tau, \label{eqn:background:1d_rf} \end{equation} where $\Psi(\tau)$ is the so-called {\em 1-D transfer function} or {\em delay distribution}. More specifically, $\Psi(\tau)$ is the weighted average reprocessing efficiency of all parts of the BLR that contribute to the response observed at delay $\tau$. In reality, line responses are {\em not} perfectly linear. A better approximation is to linearise the response around reference (e.g. long-term average) continuum and line luminosities $C_0$ and $L_0$ \citep[e.g.][]{Horne1994}: \begin{equation} C(t) \simeq C_0 + \Delta C(t), \label{eqn:background:cont_lin} \end{equation} and \begin{equation} L(t) \simeq L_0 + \Delta L(t). \label{eq:linearize} \end{equation} We can then define the \emph{response function} $\Psi_R$ to refer to only the variable parts of the luminosities, \begin{equation} \Delta L(t) = \int_{0}^{\infty} \Delta C(t - \tau) \, \Psi_R(\tau) \, d\tau. \label{resp0} \end{equation} This model can deal with {\em globally} non-linear responses, so long as the continuum variations are small enough to ensure that the {\em local} response remains approximately linear. Conceptually, the 1-D response function, $\Psi_R(\tau)$ describes how the line emission produced by a sharp continuum pulse is spread across a range of time delays, $\tau$. Mathematically, the response function is just the partial derivative \begin{equation} \Psi_R(\tau) = \frac{\partial L(\tau)}{\partial C(t-\tau)}.
\label{resp1} \end{equation} We can make the dependence of the response function on the local conditions within the BLR more explicit by writing it in terms of the line luminosity {\em per unit volume}, \begin{equation} \Psi_R(\tau) = \int_{V_{\rm BLR}} \frac{\partial l({\bf\underline{r}})} {\partial C(t-\tau)} \,\, \delta(\tau({\bf\underline{r}})) \,\,dV. \label{resp2} \end{equation} Here, the integral on the right-hand-side is over the entire volume of the BLR, $\partial l({\bf\underline{r}}) / \partial C(t-\tau)$ is the line response {\em per unit volume} at location {\bf\underline{r}}, and the delta function ensures that only locations that produce the correct delay contribute to the total response at $\tau$. In some previous works on reverberation mapping, there has been ambiguity over the use of the terms \emph{transfer} and \emph{response} functions as they refer to either the \emph{linear} or \emph{linearized} forms of the reverberation mapping equation. \citet{Mangham2017} show that the two assumptions produce markedly different results, and clarity is important. Thus, in this work, we use the term \emph{transfer} function to refer to the parameter $\Psi$ from the linear form of the equation (Equation \ref{eqn:background:1d_rf}, also equivalent to the \emph{emissivity-weighted response function} $\Psi_{\rm EWRF}$ of \citet{Goad1993, Mangham2017}). We use the term \emph{response} function to refer to the parameter $\Psi_R$ from the linearized form of the equation (Equation \ref{resp0}). \subsection{2-D Response Functions: Velocity-Delay Maps} \label{sec:background:2drf} The 1-D response function can be calculated straightforwardly from Equation~\ref{resp2} for any given BLR geometry and emissivity distribution. However, many different physical models of the BLR could, in principle, produce the same delay distribution. One way to partially break this degeneracy is to add kinematic, i.e. velocity, information.
We can do this by splitting our emission line light curve into distinct radial velocity bins and constructing the delay distribution separately for each bin. This immediately leads to velocity-resolved versions of Equations~\ref{resp0}-\ref{resp2}: \begin{equation} \Delta L(v,t) = \int_{0}^{\infty} \Delta C(t - \tau) \, \Psi_R(v,\tau) \, d\tau, \label{resp10} \end{equation} \begin{equation} \Psi_R(v,\tau) = \frac{\partial L(v,\tau)}{\partial C(t-\tau)}, \label{resp11} \end{equation} \begin{equation} \Psi_R(v,\tau) = \int_{V_{\rm BLR}} \frac{\partial l({\bf\underline{r}})} {\partial C(t-\tau)} \, \delta(\tau({\bf\underline{r}})) \, \delta (v({\bf\underline{r}})) \, dV. \label{resp12} \end{equation} Here, $v({\bf\underline{r}})$ is the radial velocity at position ${\bf\underline{r}}$ in the BLR. The 2-D response function, $\Psi_R(v,\tau)$, is also often referred to as the {\em velocity-delay map} of the BLR. If we assume that, in the limit of small changes around $C_0$, the dependence of the line luminosity on the driving continuum is a power law, i.e. \begin{equation} L(v,\tau) = L_0(v,\tau) \bigg( \frac{C}{C_0} \bigg) ^\eta, \label{eqn:background:2drf:eta} \end{equation} then we can re-express Equation~\ref{resp11} in terms of the dimensionless \emph{responsivity} $\eta$ as \begin{equation} \Psi_R(v, \tau) = \eta \frac{L_0(v, \tau)}{C_0}. \end{equation} Broadly, then, responsivity is a measure of whether a change in the driving continuum results in an increase or \emph{decrease} in line emission; it can be defined either for a region in the response function, $\eta(v, \tau)$, or for a point within the BLR, $\eta({\bf\underline{r}})$. Note that for a system where globally $\eta = 1$, the response function $\Psi_R$ is equivalent to the transfer function $\Psi$.
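This equivalence is easy to verify numerically: differentiating $L = L_0\,(C/C_0)^{\eta}$ at $C = C_0$ gives $\partial L/\partial C = \eta L_0/C_0$, which reduces to the transfer function $L_0/C_0$ when $\eta = 1$. A minimal sketch (all values illustrative):

```python
# Numerical check that the local response of L = L0 * (C / C0)**eta,
# evaluated at the reference continuum C0, equals eta * L0 / C0.
def line_luminosity(C, L0, C0, eta):
    return L0 * (C / C0) ** eta

def numeric_response(L0, C0, eta, dC=1e-6):
    # Central finite difference around C0
    return (line_luminosity(C0 + dC, L0, C0, eta)
            - line_luminosity(C0 - dC, L0, C0, eta)) / (2 * dC)

L0, C0 = 5.0, 2.0  # illustrative luminosities (arbitrary units)
for eta in (-0.5, 1.0, 2.0):
    analytic = eta * L0 / C0
    assert abs(numeric_response(L0, C0, eta) - analytic) < 1e-6
# For eta = 1 the response equals L0 / C0, i.e. the transfer function.
```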
\subsection{Reverberation Mapping in Practice: Constructing a Data-Driven BLR Model} \label{sec:background:inversion} The addition of kinematic information lifts some of the degeneracy associated with $\Psi_R(\tau)$. For example, inflows and outflows that differ only in the sign of their velocity vector will produce identical 1-D response functions, but very different velocity-delay maps. As illustrated in Figure~\ref{fig:background:tf_example} \citep[cf.][]{Horne1994}, inflow [outflow] kinematics are generally expected to produce a ``red-leads-blue" [``blue-leads-red"] signature in $\Psi_R(v, \tau)$. Similarly, pure rotation tends to produce symmetric velocity-delay maps, whose envelope is defined by the dependence of the rotation speed on distance from the central object. However, in physically motivated models, such as the rotating disc wind model of \citet{Matthews2016}, we may expect to see a mix of these signatures. Moreover, the geometry of the line-emitting region will also depend strongly on the ionisation structure of the BLR. In fact, the line response may even be {\em negative} in parts of the velocity-delay space, as the associated sections of the wind become over-ionised and stop emitting. These effects can significantly complicate the interpretation of $\Psi_R(v, \tau)$ \citep{Mangham2017}. Fundamentally, the issue is that even 2-D reverberation mapping is non-unique: a given data set, $C(t)$ and $L(v,t)$, may be consistent with many different physical BLR models. This is easy to understand. A full physical picture of the BLR requires us to specify at least the density and velocity at each position. Suppose we were to discretize the BLR spatially and kinematically into just 10 bins in each of these 7 parameters (position vector, velocity vector, density). The resulting model space then contains $10^7$ distinct parameter combinations, which vastly exceeds the number of data points in any realistic set of observations.
In order to find a unique solution for this type of ill-posed inverse problem, additional ``regularization" constraints have to be provided (e.g. based on physics, geometry, kinematics, smoothness, etc.). \begin{figure} \includegraphics[width=\columnwidth]{fig-background-tf_example} \begin{flushright} \includegraphics[width=.9\columnwidth]{fig-background-tf_geometry} \end{flushright} \caption{Outline response functions and schematics for Hubble-type spherical outflow \textcolor{blue}{\textbf{(left)}}, a rotating Keplerian disc viewed at a $20^{\circ}$ angle \textcolor{magenta}{\textbf{(centre)}}, and Hubble-type spherical inflow \textcolor{red}{\textbf{(right)}}. Winds extend from $r_{\rm min}=20r_{g}$ to $r_{\rm max}=200r_{g}$ for an AGN of mass $10^{7} M_{\odot}$. Hubble out/inflows have $V(r_{\rm min})=\pm 3\times 10^{3}$ km s$^{-1}$. Solid lines denote the response from the inner and outer edges of the winds, dotted lines from evenly-spaced shells within the wind. Pale lines describe the edge of the velocity-delay shape of the response function.} \label{fig:background:tf_example} \end{figure} Two different approaches have so far been used to develop a data-driven physical picture of the BLR from time- and velocity-resolved observational reverberation mapping campaigns. In the first approach, the primary aim is the construction of $\Psi_R(v, \tau)$ from the data. Even this is an inverse problem, but determining a 2-D response function is a much more constrained problem than inferring an at least 7-D physical description of the BLR. The interpretation of the velocity-delay map in terms of an underlying physical picture is up to the user in this approach. Typically, this is based on a qualitative comparison of the recovered $\Psi_R(v, \tau)$ to the response functions produced by simple toy models, such as those in Figure~\ref{fig:background:tf_example}. We will refer to this as the ``inverse approach".
In our tests, this approach is represented by {\sc MEMEcho}\ \citep{Horne1994}, which is described more fully in Section~\ref{sec:method:memecho}. The SOLA method \citep{Pijpers1994}, regularized linear inversion \citep{Krolik1995a, Skielboe2015} and the Sum-of-Gaussians method \citep{Li2016} all belong to this class as well. The second approach is to build a highly flexible physical model of the BLR, whose parameters can be adjusted to fit the observed $L(v,t)$, using the observed $C(t)$. In principle, this approach does not require $\Psi_R(v, \tau)$ to be calculated explicitly for either the model or the data. In practice, the response function of the best-fitting model still provides a useful visualization tool and is easy to calculate from Equation~\ref{resp12}. The advantage of this approach is that the optimal fit parameters immediately provide a basic physical picture of the BLR. A key risk is that the parametrization of the model may be incapable of describing the true BLR. This method also requires the implementation of all the physics needed to predict $L(v,t)$ from $C(t)$ for any given set of model parameters. We will refer to this as the ``forward-modelling approach". It is represented in our tests by {\sc CARAMEL} \citep{Pancoast2011, Pancoast2014, Pancoast2014a}, which models the BLR as a population of reflecting, non-interacting clouds. CARAMEL is described more fully in Section~\ref{sec:method:caramel}. \section{Methods} \label{sec:method} \subsection{Simulating an Observational Campaign} \label{sec:method:fundamentals} \subsubsection{Line Formation in a Rotating Disc Wind} \label{sec:method:fundamentals:line} We use the radiative transfer and ionisation code \textsc{Python} to simulate the formation of the H$\alpha$ emission line in a rotating, biconical accretion disc wind model of the BLR.
\textsc{Python} has already been described several times in the literature \citep{Long2002, Sim2005, Noebauer2010, Higginbottom2013, Higginbottom2014, Matthews2015, Matthews2016}, so we provide only a brief description of it here. Given a specification of the radiation sources, as well as of the disc wind geometry and kinematics, the code performs an iterative Monte Carlo ionization and radiative transfer simulation. It follows the paths of photons generated by a central X-ray source, an accretion disc and the wind itself through the system, records their interactions with the wind, and calculates their effect on the local temperature and ionization state. Once all photons in a given iteration have traversed the grid, the temperature and ionisation state of each wind cell is updated. This changes the emission profile and opacity of the wind, so the radiation transfer process is repeated. The ionisation state is then recalculated, and this process is iterated until the temperature and ionisation state of the wind have converged. The converged wind model is then used to generate detailed spectra for a range of user-specified observation angles. 
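Structurally, the iteration described above can be summarised as follows. This is a schematic sketch only: the transport step is replaced here by a toy relaxation toward an assumed per-cell equilibrium temperature, and none of the names correspond to the actual \textsc{Python} implementation:

```python
import numpy as np

def converge_wind(T_initial, transport_update, tol=1e-3, max_iter=50):
    """Alternate 'transport + state update' steps until the fractional
    change in every cell temperature drops below `tol`."""
    T = np.asarray(T_initial, dtype=float)
    for iteration in range(1, max_iter + 1):
        T_new = transport_update(T)
        if np.max(np.abs(T_new - T) / T) < tol:  # converged in all cells
            return T_new, iteration
        T = T_new
    return T, max_iter

# Toy stand-in for one Monte Carlo transport + ionisation update: relax
# each cell halfway toward an assumed equilibrium temperature (in K).
T_eq = np.array([1.0e4, 2.0e4, 3.0e4])
toy_update = lambda T: 0.5 * (T + T_eq)

T_converged, n_iter = converge_wind([5.0e3, 5.0e4, 1.0e4], toy_update)
```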
\begin{figure} \includegraphics[width=\columnwidth]{fig-background-geometry} \caption{Sketch of the biconical disc wind geometry from \protect\cite{Matthews2016}.} \label{fig:background:geometry} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{spectra} \caption{Mean line profiles for the H$\alpha$ line in our QSO and Seyfert models.} \label{fig:method:profile} \end{figure} \begin{table*} \small \caption{Model parameters} \label{table:params} \begin{tabular}{l l | c c c c } \hline \hline Parameter & Symbol, units & Seyfert & Seyfert, rescaled & QSO & QSO, rescaled\\ \hline Original SMBH mass & $M_{\rm BH}$, ${\rm M}_{\odot}$ & $10^{7}$ & $1.33\times 10^{8}$ & $10^{9}$ & $1.94\times 10^{8}$ \\ Accretion rate & $\dot{M}_{\rm acc}$, M$_{\odot}$ yr$^{-1}$ & 0.02 & & 5 &\\ & $\dot{m}_{\rm acc}$, $\dot{\rm M}_{\rm Edd}$ & $\approx 0.1$ & & $\approx 0.2 $ &\\ X-ray power-law index & $\alpha_{X}$ & $-0.9$ & & $-0.9$ &\\ X-ray luminosity & $L_{X}$, erg s$^{-1}$ & $10^{43}$ & & $10^{45}$ & \\ X-ray source radius & $r_{X}$, r$_{g}$ & 6 & & 6 &\\ & $r_{X}$, cm & $8.8 \times 10^{12}$ & $1.17 \times 10^{14}$ & $8.8 \times 10^{14}$ & $1.71\times 10^{14}$\\ Accretion disc radii & $r_{\rm disc}$(min) & $r_{X}$ & & $r_{X}$ &\\ & $r_{\rm disc}$(max), r$_{g}$ & 3400 & & 3400 &\\ & $r_{\rm disc}$(max), cm & $5 \times 10^{15}$ & $6.63\times 10^{16}$ & $5 \times 10^{17}$ & $9.69\times 10^{16}$\\ & $r_{\rm disc}$(max), ld & $1.93$ & $25.6$ & $193$ & $37.41$\\ Wind outflow rate & $\dot{M}_{w}$, M$_{\odot}$ yr$^{-1}$ & 0.02 & & 5 &\\ Wind launch radii & $r_{\rm min}$, r$_{g}$ & 300 & & 300 &\\ & $r_{\rm min}$, cm & $4.4 \times 10^{14}$ & $5.83\times 10^{15}$ & $4.4 \times 10^{16}$ & $8.53\times 10^{15}$\\ & $r_{\rm min}$, ld & $0.1699$ & $2.251$ & $16.99$ & $3.293$\\ & $r_{\rm max}$, r$_{g}$ & 600 & & 600 &\\ & $r_{\rm max}$, cm & $8.8 \times 10^{14}$ & $1.17 \times 10^{16}$ & $8.8 \times 10^{16}$ & $1.71\times 10^{16}$\\ & $r_{\rm max}$, ld & $0.3397$ & $4.517$ &
$33.97$ & $6.602$\\ Wind launch angles & $\theta_{\rm min}$ & $70^{\circ}$ & & $70^{\circ}$ &\\ & $\theta_{\rm max}$ & $82^{\circ}$ & & $82^{\circ}$ & \\ Terminal velocity & $v_{\infty}(r_0)$ & $v_{\rm esc}(r_0)$ & & $v_{\rm esc}(r_0)$ &\\ Acceleration length & $R_{\rm v}$, cm & $10^{16}$ & $1.33\times 10^{17}$ & $10^{19}$ & $1.94\times 10^{18}$\\ Acceleration index & $\alpha$ & 1.0 & & 0.5 &\\ Filling factor & $f_V$ & 0.01 & & 0.01 &\\ Viewing angle & $i$ & 40$^{\circ}$ & & 40$^{\circ}$ & \\ Number of photons & & $6\times 10^{9}$ & & $1\times 10^{10}$ & \\ \hline \hline \end{tabular} \end{table*} The basic disc wind geometry we adopt for the BLR is illustrated in Figure~\ref{fig:background:geometry}. For the purpose of testing RM inversion methods, we consider two specific parameter sets: the Seyfert and QSO models from \citet{Mangham2017}. These were chosen because they represent plausible, physically-motivated response function signatures and display a range of behaviours that allow us to investigate the capabilities of inversion tools to cope with both straightforward and unusual response functions (see Section~\ref{sec:method:fundamentals:response}). All relevant parameters of these models -- along with brief explanations of what they encode -- are provided in Table~\ref{table:params}. The corresponding mean line profiles are shown in Figure~\ref{fig:method:profile}. \subsubsection{Creating Response Functions} \label{sec:method:fundamentals:response} We use \textsc{Python} to generate 2-D response functions using the methodology described in \citet{Mangham2017}. Briefly, the response function, $\Psi_R(v,\tau)$, describes how a change in line emission at time $t$ depends upon changes in the continuum across a range of previous times $t-\tau$. This requires making the assumption that the response function itself is \emph{not} dependent on the incident continuum luminosity. 
If this is the case, then the response function can also be mirrored in time: it describes not only how the line emission observed at time $t$ depends on the continuum at time $t-\tau$, but \emph{also} how the continuum at time $t$ propagates out to change the line emission at time $t+\tau$. As a result of this, we can derive the response function $\Psi_R$ by forward-modelling. An instantaneous change in continuum luminosity at time $t$ drives changes in line emission at a range of times $t+\tau$. These changes can be calculated by tracking the photons in our Monte Carlo simulation and calculating their arrival time delay, $\tau$, from their path. These photons are then binned by velocity and delay to produce a response function $\Psi_{R}(v, \tau)$. In order to match the average $\approx 3$ day delays seen in H$\beta$ for NGC~5548 \citep{Bentz2013}, we re-scale the time axis in our response functions so that the peak delay occurs at $\approx 3$ days (the exact peak delays in our rescaled models are 2.25 days for the Seyfert model and 4.05 days for the QSO model). The resulting rescaled response functions no longer correspond directly to a fully self-consistent BLR model, but this is not a concern for the purpose of testing RM inversion methods. Here, our main requirement is simply to have known response functions that (a) are characterised by an empirically-motivated mean delay, and (b) account for the complexities associated with physically motivated BLR models (e.g. mixed kinematics, complex emissivity distributions, negative responses). 
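Operationally, the rescaling simply stretches the delay axis of the binned response function so that its peak lands on the target delay. A minimal sketch with a toy delay distribution (the functional form and names are illustrative, not taken from our models):

```python
import numpy as np

def rescale_peak_delay(tau_bins, psi, target_peak_days):
    """Rescale the delay axis so the binned response peaks at the target.

    tau_bins : delay-bin centres (days); psi : response in those bins.
    Returns the rescaled delay axis; the bin contents are unchanged.
    """
    peak = tau_bins[np.argmax(psi)]
    return tau_bins * (target_peak_days / peak)

# Toy delay distribution peaking at 12 days, rescaled to peak at 3 days.
tau = np.linspace(0.0, 30.0, 301)
psi = tau * np.exp(-tau / 12.0)  # peaks at tau = 12 days
tau_rescaled = rescale_peak_delay(tau, psi, target_peak_days=3.0)
```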
\begin{figure*} \begin{minipage}{\columnwidth} \includegraphics[width=\columnwidth]{sey_line_resp} \caption{The `true' velocity-resolved response function (lower) for H$\alpha$ in the Seyfert model, rescaled to a peak delay of $\approx 3$ days.} \label{fig:method:response_sey} \end{minipage}\hfill\begin{minipage}{\columnwidth} \includegraphics[width=\columnwidth]{qso_line_resp} \caption{The `true' velocity-resolved response function (lower) for H$\alpha$ in the QSO model, rescaled to a peak delay of $\approx 3$ days.} \label{fig:method:response_qso} \end{minipage} \end{figure*} The response functions calculated for our two models in this way are shown in Figures~\ref{fig:method:response_sey} and \ref{fig:method:response_qso}. A full discussion of these velocity-delay maps has already been provided by \citet{Mangham2017}, so here we only highlight some salient features. In both the Seyfert and QSO models, the line emission and response are dominated by the dense, rotation-dominated base of the disc wind (c.f. Figures~\ref{fig:geometry-emissivity} and~\ref{fig:geometry-responsivity} below). Correspondingly, the mean wavelength-dependent response is double-peaked in both cases. Both response functions also clearly show the virial envelope associated with this part of the outflow. The emissivity distributions appear double-lobed, rather than quadruple-lobed as in \citet{Murray1996}. A key difference between the models is that the H$\alpha$ response is always positive in the QSO model, but not in the Seyfert model. In fact, at low Doppler shifts and long delays, the velocity-delay map for the Seyfert model is so dominated by {\em negative} responsivities that the \emph{net} line response is also negative. As discussed in \citet{Mangham2017}, the Seyfert model includes substantial emission from the extended, low-density parts of the wind.
As the ionising continuum luminosity increases, these regions become over-ionised, which reduces the H$\alpha$ emission produced at large radii. By contrast, the denser wind base is not over-ionised and responds positively to the increased continuum luminosity. The combination of a net negative response and a change in the characteristic emission radius with ionising continuum is physically plausible. It has been observed in NGC~5548, for example \citep{Cackett2006}, although there the BLR radius appears to \emph{increase} with increasing ionising luminosity. A possible negative correlation of the H$\alpha$ response with ionising continuum may also be present in the AGN STORM dataset of \citet{Pei2017} (see epoch T2 in their Figure~7). \subsubsection{Creating Time-Series of Spectra} \label{sec:method:tss} Actual RM campaigns do not observe response functions; they observe spectroscopic time series. In order to test RM methods, we therefore have to generate such time series from our models. Equation \ref{resp10} shows how the response function can be used to translate changes in the ionising continuum into corresponding emission line changes. In the limit of small variations in the ionising continuum luminosity, the response function $\Psi_{R}$ is constant. In this case, the time-dependent emission line, $L(v, t)$, can be expressed straightforwardly as \begin{equation} L(v,t) = L_0(v)+\int^{\infty}_0 \Psi_R(v, \tau) \, \Delta C(t-\tau) \, d\tau, \label{eqn:method:basic} \end{equation} where $L_0(v)$ is the baseline reference spectrum (c.f. Equation~\ref{eq:linearize}). This is sufficient to generate spectra for CARAMEL, which only requires continuum-subtracted line spectra. One key difference, however, is that this equation makes the assumption that the response is linearized, whereas CARAMEL itself makes no such assumption. {\sc MEMEcho}\ requires a full spectrum including large regions of continuum either side of the line.
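Discretised over delay bins, Equation~\ref{eqn:method:basic} amounts to one convolution per velocity bin. The following sketch illustrates this; the array shapes, names and toy inputs are ours and do not correspond to the actual pipeline:

```python
import numpy as np

def line_timeseries(L0, psi, dC, dtau):
    """L(v, t) = L0(v) + sum_k psi(v, tau_k) * dC(t - tau_k) * dtau.

    L0  : (n_v,)        baseline line profile
    psi : (n_v, n_tau)  velocity-delay response function
    dC  : (n_t,)        continuum variations, sampled every dtau
    Returns an (n_v, n_t) array; the first n_tau epochs depend on
    unobserved continuum values and would be discarded in practice.
    """
    n_v, n_tau = psi.shape
    n_t = dC.size
    L = np.tile(L0[:, None], (1, n_t)).astype(float)
    for k in range(n_tau):            # loop over delay bins
        shifted = np.roll(dC, k)      # dC(t - tau_k)
        shifted[:k] = 0.0             # no continuum before t = 0
        L += psi[:, k][:, None] * shifted[None, :] * dtau
    return L
```

With a delta-function response at a single delay bin, the output line light curve is simply the continuum variation shifted by that delay, as expected.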
Adding continuum variations to Equation \ref{eqn:method:basic} results in \begin{equation} L(v,t) = L_0(v)+C_0(v)+\Delta C(v, t)+\int^{\infty}_0 \Psi_R(v, \tau) \, \Delta C(t-\tau) \, d\tau. \label{eqn:method:memecho} \end{equation} We generate our time series from these equations using the response functions discussed in Section~\ref{sec:method:fundamentals:response} and a variable driving continuum $C(t)$ based on the empirical 1158~\AA\ light-curve of NGC~5548 from \citet{DeRosa2015a}. The light-curve is rescaled to match the mean luminosity for each model, and the range of variation is reduced to $\pm50\%$ about this mean value. The data set contains 171 observations spread over $\simeq 175$~days. Given a BLR extending out to $R_{\rm max}$, the line profile at time $t$ will depend on continuum levels as far back as $t - 2R_{\rm max}/c$. It is therefore not possible to calculate self-consistent line profiles for times earlier than $2R_{\rm max}/c$ in the time series, since these depend on unknown continuum values. This affects not only our ability to simulate spectra, but any attempt to invert observational spectroscopic time-series. Thus, we discard $L(v,t)$ calculated at early times, when the line profiles still depend on unobserved continuum fluxes. We do, however, retain the continuum values observed during these early times. We thus provide both {\sc MEMEcho}\ and CARAMEL with 171 continuum measurements across 175 days, but only 101 simultaneous line plus continuum spectra over 100 days. This is comparable to the best existing observation campaigns \citep{Du2014, DeRosa2015}. The spectra are given a constant error such that the error on the integrated line flux measured from a single spectrum is, on average, 2\% of the peak-to-peak variation in the integrated line flux, as requested by the CARAMEL team. These errors are also used to apply noise to our simulated time series. 
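The discretised core of this procedure, the convolution in Equation~\ref{eqn:method:basic} together with the discarding of epochs that depend on unobserved continuum values, can be sketched as follows. This is an illustrative sketch on a uniform daily grid; the function name is ours and not part of the actual generation code.

```python
import numpy as np

def line_light_curve(psi, delta_c, dt=1.0):
    """Discretised form of L(t) - L_0 = sum_tau Psi_R(tau) DeltaC(t - tau) dtau.

    psi     : response function sampled at delays 0, dt, ..., (m-1)*dt
    delta_c : continuum variations DeltaC(t) on the same uniform grid
    Returns the line variations, keeping only epochs whose full delay
    history is observed -- the analogue of discarding early spectra.
    """
    # 'valid' mode drops the first len(psi) - 1 epochs, which would
    # otherwise depend on unobserved continuum values.
    return np.convolve(delta_c, psi, mode="valid") * dt

# A response concentrated at a 1-day delay simply shifts the continuum:
lc = line_light_curve(np.array([0.0, 1.0, 0.0]),
                      np.array([1.0, 2.0, 3.0, 4.0, 5.0]))
# lc == [2.0, 3.0, 4.0]
```

In this toy case the three retained epochs reproduce the continuum one day earlier, exactly as expected for a delta-function response at a 1-day lag.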
Representative MCMC samples of the line profiles from one of our simulated data sets are shown in Figure~\ref{fig:results:lightcurve_spectra}. The full time-series are shown in the form of trailed spectrograms in Figure~\ref{fig:method:tss}. The method used in the generation of time series is described in full in the appendix to this work. \begin{figure} \includegraphics[width=\columnwidth]{lightcurve_spectra} \caption{Driving continuum and line light-curves for our model and the CARAMEL fit (upper). Spectra associated with three `observational' times (indicated by grey vertical lines on the light-curves), and their CARAMEL fit (lower).} \label{fig:results:lightcurve_spectra} \end{figure} \begin{figure*} \begin{minipage}{.48\textwidth} \includegraphics[width=\columnwidth]{out_tspec_qso_line} \end{minipage}\hfill \begin{minipage}{.48\textwidth} \includegraphics[width=\columnwidth]{out_tspec_sey_line} \end{minipage} \caption{Trailed spectrograms generated for the continuum-subtracted H$\alpha$ lines of our QSO (left) and Seyfert (right) models over a simulated observing campaign of 98.9 days.} \label{fig:method:tss} \end{figure*} \subsection{Benchmarks: Defining Success} \label{sec:method:benchmarks} To assess the results produced by the RM techniques we are testing, we need to define what constitutes success. At the most basic level, any successful RM analysis must be consistent with the input data. Thus quantities like the mean line profiles, response functions and full spectroscopic time-series should be well reproduced by whatever inversion method is being used. Unless this minimal requirement is met, it is impossible to have confidence in the results obtained or their interpretation. 
The relevant benchmarks for our input models have already been shown and discussed in Sections~\ref{sec:method:fundamentals:line} (mean line profiles: Figure~\ref{fig:method:profile}), \ref{sec:method:fundamentals:response} (response functions: Figures~\ref{fig:method:response_sey} and \ref{fig:method:response_qso}), and \ref{sec:method:tss} (spectroscopic time series: Figure~\ref{fig:method:tss}). \begin{figure*} \begin{minipage}{.48\textwidth} \includegraphics[width=\columnwidth]{lum-xyz} \caption{The emissivity distribution in the QSO model. Distances have been rescaled to correspond to the rescaled delays (see Section~\ref{sec:method:fundamentals:response}). X and Y axes are along the disk plane, Z is normal to the disk plane. The red lines indicate the projection of the direction vector towards the observer in each plot. Note the different (smaller) dynamic range used for the z-axis.} \label{fig:geometry-emissivity} \end{minipage}\hfill\begin{minipage}{.48\textwidth} \includegraphics[width=\columnwidth]{lum-xyz_resp} \caption{The responsivity-weighted emissivity distribution in the QSO model. X and Y axes are along the disk plane, Z is normal to the disk plane. Distances have been rescaled to correspond to the rescaled delays (see Section~\ref{sec:method:fundamentals:response}). The red lines indicate the projection of the direction vector towards the observer in each plot. Note the different (smaller) dynamic range used for the z-axis.} \label{fig:geometry-responsivity} \end{minipage} \end{figure*} Of course, the real goal of RM is to gain insight into the physical nature of the BLR. Success in this context means correctly inferring physical properties such as the characteristic size of the line-forming region, its geometry, and the dominant kinematics. 
As additional benchmarks for assessing performance in this area, we provide in Figures~\ref{fig:geometry-emissivity} and ~\ref{fig:geometry-responsivity} the spatially resolved ``raw'' and responsivity-weighted emissivity distributions for our QSO model.\footnote{The corresponding emissivity distributions for the Seyfert model have been omitted here, since neither of the RM methods we tested was able to reproduce the negative response of this model (see Section~\ref{sec:results:seyfert}).} Both of these matter. The former controls the shape of the {\em mean} line profile, while the latter controls the velocity-dependent line response and the RMS line profile. The raw and responsivity-weighted emissivity distributions illustrate where in our biconical disc wind model the simulated H$\alpha$ line is primarily formed, and which parts of the line-emitting region are most sensitive to changes in the continuum. Note that there is no significant line emission from $z < 0$ (i.e. below the disk plane), since the optically thick accretion disc blocks the observer's view of this region. In addition, even though the line-forming region is vertically extended, its aspect ratio is small, $H/R \sim 0.1$ (note the different scales on the axes of Figures~\ref{fig:geometry-emissivity} and \ref{fig:geometry-responsivity}). Thus, geometrically, the H$\alpha$ line in this model could be reasonably described as being formed in a moderately thin disc or annulus, extending over 3 to 6 light days. The emission distribution in Figure \ref{fig:geometry-emissivity} differs from that shown in \citet{Murray1996}. This is an orientation effect; when observed at a $40^\circ$ angle instead of $87^\circ$, the observer-projected component of the outflow is consistently positive, and acts to suppress the quadrupolar $dv/ds$ distribution. \subsection{Blinding} \label{sec:method:blinding} In order to ensure that our RM tests are realistic, we carried them out as blinded trials. Thus neither P.W. 
nor A.P. (the CARAMEL team), nor K.H. (the one-man {\sc MEMEcho}\ team) were given prior access to the response functions used to generate the time-series. They were also not given the disc wind model parameters we adopted. Instead, both were provided only with the time-series for the QSO and Seyfert models in their preferred input format, as well as the rescaled continuum light curves used to generate them. Neither team was informed that the Seyfert model would exhibit a negative response. The following methods sections for each technique were written by their respective teams, after they had access to the time-series, but before they were shown the actual \textsc{Python}-generated response functions that were used to produce the data. One crucial difference to note between the methods is that, like our method, {\sc MEMEcho}\ assumes a linearized response around a mean line profile (Equation~\ref{eq:linearize}), and the velocity-delay maps it generates represent the \emph{response} function of the system. CARAMEL, however, does \emph{not} assume linearization (cf. Equation~\ref{eqn:background:1d_rf}), and the velocity-delay maps it generates are \emph{transfer} functions. This difference is important, as the time-series of spectra were generated under the assumption of a \emph{linearized} response around a mean line profile. \begin{figure} \includegraphics[width=\columnwidth]{lightcurve} \caption{Original and rescaled driving light-curves used in generating time series of spectra, taken from NGC~5548 \citep{Fausnaugh2016}.} \label{fig:method:lightcurve} \end{figure} \subsection{Inversion Methods: {\sc MEMEcho}\ } \label{sec:method:memecho} \label{sec:1d} \typeout{1d} We interpret the observed spectral variations as time-delayed responses to a driving light-curve. By fitting a model to the reverberating spectrum $F(\lambda,t)$, we reconstruct a 2-dimensional wavelength-delay map $\Psi_R(\lambda,\tau)$. 
This effectively ``slices up'' the accretion flow on isodelay surfaces, which are paraboloids co-axial with the line of sight with a focus at the compact source. Each delay slice gives the spectrum of the response, revealing the fluxes and Doppler profiles of emission lines from gas located on the corresponding isodelay paraboloid. The resulting velocity-delay maps $\Psi_R(v,\tau)$ provide 2-dimensional images of the accretion flow, one for each emission line, resolved on isodelay and iso-velocity surfaces. \subsubsection{ {\sc MEMEcho}\ fits to the synthetic data } In our blind analysis, we treat the synthetic spectra in exactly the same way as in the analysis of time-resolved spectroscopy of real AGN. Thus a linear continuum model fit (using the linearized model as summarised in Equations~\ref{eqn:background:cont_lin}, \ref{eq:linearize} \& \ref{resp10}) to each of the synthetic spectra provides the continuum light-curve data, and the continuum-subtracted spectra isolate H$\alpha$ light-curve data in many wavelength channels. Note that {\sc MEMEcho}\ did not use the full 171 continuum measurements including the preceding times; only those corresponding to the times for which spectra were available. Using the {\sc MEMEcho}\ code \citep{Horne1994} we then perform regularised fits of the linearised echo model, with parameters $p_k$, to the synthetic data $D_i\pm\sigma_i$. The data $D$ comprise measurements at specific times of the continuum light-curve and of the emission-line flux in each wavelength channel. For 1-D echo mapping, the parameters $p$ include 3 parts: the continuum light-curve $C(t)$, the echo map $\Psi_R(\tau)$, and the line reference level $L_0$. $C(t)$ and $\Psi_R(\tau)$ are evaluated on suitable time and delay grids, with equal spacing $\Delta t=1$ day, interpolating as needed to match the observation times. We set the continuum reference level $C_0$ to the median of the continuum data, and the fit adjusts $L_0$ accordingly. 
For 2-D velocity-delay mapping, the data $D$ comprise the continuum light-curve plus H$\alpha$ light-curves in many wavelength channels, and the model parameters include $\Psi_R(\lambda,\tau)$ and $L_0(\lambda)$ in the same wavelength channels. Our {\sc MEMEcho}\ fit is achieved by varying the model parameters $p$ to minimise \begin{equation} Q(p,D) = \chi^2(p,D) - 2\, \alpha\, S(p) \ . \end{equation} Here the ``badness-of-fit'' statistic \begin{equation} \chi^2 = \sum_{i=1}^N \left( \frac{ D_i - \mu_i(p) } { \sigma_i } \right)^2 \end{equation} quantifies consistency between the linearised echo model predictions $\mu_i(p)$ and the data values $D_i\pm\sigma_i$. The fit employs a Bayesian prior $\propto \exp{\left\{\alpha\,S(p)\right\}}$, where the entropy is \begin{equation} S(p) = \sum_k w_k\, \left\{\, p_k - q_k - p_k\, \ln{\left( p_k / q_k \right)} \,\right\} \ , \end{equation} where $w_k$ is the weight and $q_k$ is the default value of parameter $p_k$. Note that $S(p)$ requires $p_k>0$ and that \begin{equation} \frac{\partial Q}{\partial p_k} = -2 \sum_{i=1}^N \frac{ D_i - \mu_i(p) }{ \sigma_i^2 } \frac{\partial \mu_i}{\partial p_k } + 2\, \alpha\, w_k\, \ln{ \left( p_k / q_k \right) } \ , \end{equation} so that as $\chi^2$ pulls the model prediction $\mu_i(p)$ toward the data $D_i$, $\alpha\, S$ pulls each parameter $p_k$ toward its default value $q_k$. The default values are set to weighted averages of ``nearby'' parameters, e.g. $q(t)=\sqrt{ p(t-\Delta t)\,p(t+\Delta t) }$, so that the entropy favours smoothly-varying functions $C(t)$ and $\Psi_R(\tau)$. We also ``pull down'' on $\Psi_R(\tau)$ at the maximum delay $\tau_{\rm max}=30$~d. The weights $w_k$ depend on control parameters. Increasing the control parameter $W$ ``stiffens'' structure in $\Psi_R(\tau)$ relative to that in $C(t)$. 
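As an illustration only (not the actual {\sc MEMEcho}\ implementation; all names here are ours), the regularised objective and entropy above can be written compactly:

```python
import numpy as np

def entropy(p, q, w):
    """S(p) = sum_k w_k { p_k - q_k - p_k ln(p_k / q_k) }, for p, q > 0.

    S is maximised (S = 0) when every parameter sits at its default q_k,
    and decreases as p departs from q -- the prior favours smooth maps.
    """
    return np.sum(w * (p - q - p * np.log(p / q)))

def objective(p, q, w, data, sigma, predict, alpha):
    """Q = chi^2 - 2 alpha S, minimised along a trajectory of decreasing alpha."""
    mu = predict(p)  # linearised echo-model predictions mu_i(p)
    chi2 = np.sum(((data - mu) / sigma) ** 2)
    return chi2 - 2.0 * alpha * entropy(p, q, w)
```

The trade-off is visible directly: a large `alpha` makes the entropy term dominate and drags the parameters toward their smooth defaults, while decreasing `alpha` lets $\chi^2$ pull the model toward the data.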
Similarly, in 2-D velocity-delay mapping, a second parameter $A$ controls the trade-off in $\Psi_R(\lambda,\tau)$ between structure in the delay vs wavelength direction, and parameter $B$ controls ``stiffness'' of $L_0(\lambda)$. In practice we use {\sc MEMEcho}\ to follow a maximum entropy trajectory from an initial large value of $\alpha$ and corresponding large $\chi^2$, decreasing $\alpha$ and thus $\chi^2$ gradually until an ``acceptable'' $\chi^2$ is reached. A measure of the angle between the gradients of $S$ and $\chi^2$ provides evidence that the fit at each chosen $\chi^2$ level is converged to machine precision. A value near $\chi^2/N=1$ is expected for $N$ data values with reliable error bars when the linearised echo model can achieve an acceptable fit. Attempts to lower $\chi^2$ too far result in a dramatic increase in $S$ as the parameters become noisy due to over-fitting noise in the data. From the resulting series of {\sc MEMEcho}\ fits we choose the control parameters $W$ and $A$ and the $\chi^2$ level so as to achieve plausible fits that reproduce the data well while keeping the continuum light-curve $C(t)$ and response maps $\Psi_R(\lambda,\tau)$ relatively smooth. \subsection{Inversion Methods: CARAMEL} \label{sec:method:caramel} CARAMEL produces transfer functions from data by directly modelling the broad emission line region as a distribution of many massless point particles surrounding an ionizing continuum source. The particles instantaneously and linearly reprocess the AGN continuum emission and re-emit the light towards the observer in the form of emission lines. Each particle re-emits at a wavelength determined by its line-of-sight velocity and with a time-lag determined by its position. Using the particles' positions and velocities, CARAMEL can then calculate the transfer function resulting from a given BLR model. 
The model consists of a geometric component describing the positions of the point particles and a dynamical component describing the particle velocities. In addition, CARAMEL models the continuum light curve using Gaussian processes as a flexible interpolator to produce simulated spectra at arbitrary times. By feeding the continuum light curve through the BLR model, the code produces a time-series of spectra that can be directly compared to data. We use a Gaussian likelihood function to compare the model to the data, and use the diffusive nested sampling code {\sc DNest3} \citep{Brewer2011} to explore the parameter space of the BLR and continuum models. The full details of CARAMEL and the BLR model are discussed by \citet{Pancoast2014}, but the main components are described in the rest of this section. The particles in the BLR model are first assigned radial positions drawn from a Gamma distribution, \begin{align} p(x | \alpha, \theta) \propto x^{\alpha - 1} \exp\left(-\frac{x}{\theta}\right) \end{align} which allows for Gaussian-like, exponential, and heavy-tailed distributions. The distribution is then offset from the origin by the Schwarzschild radius plus a minimum BLR radius, $r_{\rm min}$, and a change of variables between ($\alpha$, $\theta$, $r_{\rm min}$) and ($\mu$, $\beta$, $F$) is applied: \begin{align} \mu &= r_{\rm min} + \alpha \, \theta \\ \beta &= \frac{1}{\sqrt{\alpha}} \\ F &= \frac{r_{\rm min}}{r_{\rm min} + \alpha \, \theta}, \end{align} where $\mu$ is the mean radius, $\beta$ is the shape parameter, and $F$ is the minimum radius in units of $\mu$. In this formalism, the standard deviation of the Gamma distribution is $\sigma_r = \mu\beta(1 - F)$. The particles are then rotated out of the plane of the disc by a random angle uniformly distributed over the range $\pm\theta_o$, and the distribution is inclined by an angle $\theta_i$ relative to the observer, where $\theta_{i} \rightarrow 0^\circ$ is face-on. 
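As an illustration, the change of variables above can be inverted to draw particle radii directly from the $(\mu, \beta, F)$ parameterisation. This sketch is our own (the small Schwarzschild-radius offset is omitted, and the function name is hypothetical):

```python
import numpy as np

def draw_radii(mu, beta, F, n, seed=None):
    """Sample n radii from the shifted Gamma distribution of the BLR model.

    Inverting the change of variables:
        alpha = 1 / beta^2,  r_min = F * mu,  theta = mu * (1 - F) * beta^2,
    so that the mean radius is r_min + alpha * theta = mu and the standard
    deviation is sigma_r = mu * beta * (1 - F).
    """
    rng = np.random.default_rng(seed)
    alpha = 1.0 / beta**2
    r_min = F * mu
    theta = mu * (1.0 - F) * beta**2
    return r_min + rng.gamma(shape=alpha, scale=theta, size=n)
```

Drawing a large sample and checking that its mean is $\mu$, its standard deviation is $\mu\beta(1-F)$, and its minimum exceeds $F\mu$ confirms the mapping.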
The opening angle prior $\theta_o$ is uniform between $0^\circ$ and $90^\circ$, and the inclination angle prior is uniform in $\cos\theta_i$ between $0^\circ$ and $90^\circ$. The emission from each particle is assigned a weight between $0$ and $1$, determined by \begin{align} W(\phi) = \frac{1}{2} + \kappa \, \cos\phi, \label{eq:caramel_weight} \end{align} where $\phi$ is the angle between the observer's line of sight and the particle position, measured at the origin, and $\kappa$ is a free parameter with uniform prior between $-0.5$ and $0.5$. In this set-up, $\kappa = 0$, $-0.5$, and $0.5$ correspond to particles that emit isotropically, back towards the ionizing source, and away from the ionizing source, respectively. A parameter $\gamma$, with uniform prior between 1 and 5, allows the particles to be distributed uniformly throughout the BLR ($\gamma\rightarrow 1$) or clustered near the faces of the disc ($\gamma\rightarrow 5$). This is achieved by setting the angle between a point particle and the disc to be \begin{align} \theta = \arccos \left[\cos \theta_o + (1 - \cos\theta_o) \, U^\gamma\right], \end{align} where $U$ is drawn randomly from a uniform distribution between 0 and 1. Finally, an additional free parameter, $\xi$ (uniform prior between $0$ and $1$), allows the disc mid-plane to be opaque ($\xi\rightarrow 0$) or transparent ($\xi\rightarrow 1$). The wavelength of light emitted by each particle is determined by its velocity, which is in turn determined by the black hole mass, a free parameter with uniform prior in the log of $M_{\rm BH}$ between $2.78\times 10^4$ and $1.67\times 10^9$ $M_\odot$, and the parameters $f_{\rm ellip}$, $f_{\rm flow}$, and $\theta_e$. First, the particles are assigned to be on near-circular elliptical orbits or on either inflowing or outflowing orbits. The fraction of particles on near-circular orbits is determined by the free parameter $f_{\rm ellip}$, which has a uniform prior between 0 and 1. 
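The angular clustering and emission anisotropy described above can be sketched as follows (an illustrative reimplementation, not CARAMEL's own code; the function names are ours):

```python
import numpy as np

def particle_elevation(theta_o, gamma, n, seed=None):
    """theta = arccos[cos(theta_o) + (1 - cos(theta_o)) U^gamma], U ~ Uniform(0, 1).

    gamma -> 1 spreads particles uniformly in cos(theta) within the wedge,
    while larger gamma pushes U^gamma toward zero, clustering particles
    near theta = theta_o, i.e. the faces of the disc.
    """
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, 1.0, n)
    return np.arccos(np.cos(theta_o) + (1.0 - np.cos(theta_o)) * u**gamma)

def emission_weight(phi, kappa):
    """W(phi) = 1/2 + kappa cos(phi): kappa = -0.5, 0, +0.5 emit back toward
    the ionising source, isotropically, or away from it, respectively."""
    return 0.5 + kappa * np.cos(phi)
```

With $\gamma = 5$ the sampled elevations crowd toward $\theta_o$, while $\gamma = 1$ fills the wedge uniformly in $\cos\theta$; the weights run from 1 on the observer's side to 0 on the far side when $\kappa = 0.5$.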
Those with near-circular elliptical orbits have their radial and tangential velocities drawn from a Gaussian distribution centred on the circular velocity in the $v_\phi - v_r$ plane. The remaining particles are drawn from Gaussian distributions centred on the radial inflowing or outflowing escape velocities, where the direction of flow is determined by the parameter $f_{\rm flow}$. $f_{\rm flow}$ has a uniform prior between 0 and 1, where $f_{\rm flow} < 0.5$ ($>0.5$) indicates inflow (outflow). We also allow the centres of the inflowing and outflowing distributions to be rotated by an angle $\theta_e$ towards the circular velocity in the $v_\phi - v_r$ plane, where $\theta_e$ has a uniform prior between $0^\circ$ and $90^\circ$. \section{Results and Discussion} \label{sec:results} In the following sections, we will present the blinded analysis and interpretation of the simulated QSO data, as obtained by the two methods and written by their respective teams ({\sc MEMEcho}: Section~\ref{sec:results:memecho}; CARAMEL: Section~\ref{sec:results:caramel}). We will then unblind the analysis and compare the blind results to ``ground-truth'', i.e. to the known response function and the underlying QSO BLR model (Section~\ref{sec:results:models}). First, however, in Section~\ref{sec:results:seyfert}, we briefly consider the results obtained for the Seyfert data set. Perhaps unsurprisingly, both methods struggled to deal with the negative line response exhibited by this model. Since neither method obtained acceptable fits to this data set, it does not make sense to force a detailed analysis and interpretation of the ``best'' models. If similarly poor fits were obtained for actual data, we would expect this to be interpreted (correctly) as evidence of a mis-specified model. We will therefore simply summarize the difficulties encountered by both deconvolution methods when faced with this model. 
\subsection{{\sc MEMEcho}\ and CARAMEL Results for the Seyfert model} \label{sec:results:seyfert} Neither {\sc MEMEcho}\ nor CARAMEL was able to successfully fit our simulated Seyfert data set. In both cases, the underlying problem is the negative line response presented by this model. On the one hand, it is reassuring that neither method was ``fooled'' by this data set -- i.e. both methods failed, rather than producing misleading results. On the other hand, the inability of both methods to deal with negative emission line responses is a significant limitation. In particular, it is unclear whether this could cause serious systematics in cases where the {\em net} response is clearly positive, but where specific {\em parts} of the BLR exhibit negative responsivity. Answering this question is beyond the scope of the present paper, but must be the focus of future work. It is also worth noting that there {\em are} deconvolution methods that explicitly allow for negative responsivities, such as regularized linear inversion \citep{Krolik1995a,Skielboe2015} and even an extension to {\sc MEMEcho}\ discussed by \citet{Horne1994}. {\sc MEMEcho}\ was able to produce a response function for the Seyfert model (Figure \ref{fig:twodseymap}). As the {\sc MEMEcho}\ code is not designed to model negative responses, the reconstruction can only display the region of negative response as zero response; surprisingly, however, the regions of \emph{positive} response are captured reasonably well. The comparatively fine features corresponding to the positive response from the far side of the inner disk are even recovered, albeit smoothed out by regularization. The response is also shifted to lower delays, with the peak moved from $\approx 2$ days to $\approx 0$ days. Despite this, the recovered response function still matches well the Keplerian envelope of a $10^{8}\,\mbox{M$_\odot$}$ central mass, very close to the $1.33\times10^{8}\,\mbox{M$_\odot$}$ rescaled mass for this model. 
\begin{figure} \includegraphics[width=0.9\columnwidth]{display_caramelsey} \caption{Model fits to the Seyfert model H$\alpha$ line profile, integrated H$\alpha$ flux, and AGN continuum flux. Panel $1$: The provided H$\alpha$ emission-line profile for each epoch. Panel $2$: The H$\alpha$ emission-line profile for each epoch produced by one sample of the BLR and continuum model. Panel $3$: The provided H$\alpha$ line profile for one randomly chosen epoch (black), and the corresponding profile (red) produced by the model in Panel 2. Cyan lines show the H$\alpha$ profile produced by other randomly chosen models. Panels $4$ and $5$: Time series of the provided integrated H$\alpha$ and continuum flux (black), the time series produced by the model in Panel 2 (red), and time series produced by other sample BLR and continuum models (cyan). } \label{fig:display_caramelsey} \end{figure} \begin{figure*} \begin{center} \includegraphics[angle=0,width=90mm]{2d_sey_fit.pdf} \caption{ \label{fig:display_memechosey} {\sc MEMEcho}\ fit to the synthetic Seyfert data. Bottom panel is the driving light-curve, above which are the 1-D echo maps (left) and echo light-curves (right) at selected wavelengths. Note the high target $\chi^2/101 = 3$ and the poor fits achieved near line center. } \end{center} \end{figure*} \begin{figure*} \begin{center} \includegraphics[angle=270,width=170mm]{2d_sey_map.pdf} \caption{ \label{fig:twodseymap} Two-dimensional wavelength-delay map $\Psi_R(\lambda,\tau)$ reconstructed from the {\sc MEMEcho}\ fit to the synthetic Seyfert data. Given below the greyscale map are projections of $\Psi_R(\lambda,\tau)$ giving delay-integrated responses $\Psi_R(\lambda)$ for the full delay range (black), and for restricted delay slices 0-5~d (purple), 5-10~d (green), 10-15~d (orange), and 15-20~d (red). 
To the right of the greyscale map are wavelength-integrated responses $\Psi_R(\tau)$ for the full line (black), and for 4000~\mbox{\rm km~s$^{-1}$}\ wide velocity bins centred at $V=0$ (green), at $\pm4000$~\mbox{\rm km~s$^{-1}$}\ (red and blue), and at $\pm8000$~\mbox{\rm km~s$^{-1}$}\ (orange and purple). For a black hole mass of $\mbox{M$_{\rm BH}$}=10^8\,\mbox{M$_\odot$}$, the orange dotted curves show the virial envelope for edge-on circular Keplerian orbits. } \end{center} \end{figure*} In order to provide some insight into how and why our benchmark methods struggle with the negative-response Seyfert model, we show in Figures~\ref{fig:display_caramelsey} and~\ref{fig:display_memechosey} a summary of the fits to this data set achieved by CARAMEL and {\sc MEMEcho}, respectively. Taking the CARAMEL results first (Figure~\ref{fig:display_caramelsey}), we see that the overall {\em shape} of the H$\alpha$ line profile is reproduced very well, but that none of the models drawn from the posterior parameter distribution succeed in reproducing the integrated emission line light curve. In their interpretation of these results, the CARAMEL team correctly highlighted that this failure may be due to a non-linear response. Turning to {\sc MEMEcho}\ (Figures~\ref{fig:display_memechosey} and \ref{fig:twodseymap}), we first note that a high target $\chi^2/101 = 3$ had to be adopted in order to fit this data set. The resulting model reproduces well the light curve features in the wings of the emission line, but fails to reproduce the variations in the core of the H$\alpha$ line. Here, the model light curve is too low at the start and too high after the main peak at $t\approx6820$. This difficulty in matching the line center behaviour makes sense, since this is where the true response is most strongly negative. 
Both the CARAMEL and {\sc MEMEcho}\ teams correctly interpreted the difficulties their methods encountered in fitting the Seyfert data set as pointing towards the presence of a significant negative response in the Seyfert model. \subsection{Blind Analysis and Interpretation: {\sc MEMEcho}\ Results for the QSO model} \label{sec:results:memecho} \subsubsection{1-D delay maps $\Psi_R(\tau)$ for the QSO simulation} \begin{figure} \begin{center} \includegraphics[angle=0,width=90mm]{1d_qso_fit} \caption{ \label{fig:onedqsofit} {\sc MEMEcho}\ fit to the continuum and integrated H$\alpha$ light-curve from the synthetic QSO dataset. The fit achieves $\chi^2/N=1$ both for the continuum variations (lower panel) and the line variations (upper right panel). Blue curves show the fitted model, including the continuum light-curve $C(t)$ (bottom panel), the delay map $\Psi_R(\tau)$ (upper left panel) and the line light-curve $L(t)$ (upper right panel). Horizontal red lines indicate the reference levels $C_0$ for the continuum and $L_0$ for the line. } \end{center} \end{figure} In Figure~\ref{fig:onedqsofit} we show the results of a successful {\sc MEMEcho}\ fit to the synthetic QSO data. The lower panel shows the continuum light-curve data, with error bars too small to see, and in blue the fitted model driving light-curve $C(t)$. The upper right panel shows the integrated line profile data points with error bars (green) and the fitted line light-curve $L(t)$ (blue). The reference levels, $C_0$ for the continuum and $L_0$ for the line, are indicated by horizontal red lines. The line variations $L(t)-L_0$ are obtained by convolving the continuum variations $C(t)-C_0$ with the delay distribution $\Psi_R(\tau)$ shown in the upper left panel. This fit was successful, achieving $\chi^2/N=1$ for the $N=101$ data points in the continuum light-curve and the same for the line light-curve. 
The continuum light-curve has a large peak near $t=6820$ MJD, and shows numerous smaller peaks and troughs that are well detected at a high signal-to-noise ratio. The strongest peak in the line light-curve crests at $t=6828$ MJD, $\sim8$ days later than the corresponding peak in the continuum light-curve. The line light-curve data is low near the start, then rises to a plateau that has 3 or perhaps 4 local maxima before the main peak, and drops to a low level again near the end. Given the quality of the {\sc MEMEcho}\ fit, these features can evidently be interpreted in terms of the linear echo model. The 1-D delay map $\Psi_R(\tau)$ is well determined from the synthetic QSO data. The prompt response at $\tau=0$ is small. The response rises rapidly to a ledge near 4 days and a peak near 8 days, then declines to a ledge at 17 days before falling to near zero at 30 days. The data quality is high enough to warrant interpretation of these features. \subsubsection{ 2-D Velocity-Delay Maps $\Psi_R(v,\tau)$ } \label{sec:2d} \typeout{2d} \begin{figure} \begin{center} \includegraphics[angle=0,width=90mm]{2d_qso_fit} \caption{ \label{fig:twodqsofit} {\sc MEMEcho}\ fit to the synthetic QSO data. Bottom panel is the driving light-curve, above which are the 1-D echo maps (left) and echo light-curves (right) at selected wavelengths. } \end{center} \end{figure} \begin{figure*} \begin{center} \includegraphics[angle=270,width=170mm]{2d_qso_map} \caption{ \label{fig:twodqsomap} Two-dimensional wavelength-delay map $\Psi_R(\lambda,\tau)$ reconstructed from the {\sc MEMEcho}\ fit to the synthetic QSO data. Given below the grey-scale map are projections of $\Psi_R(\lambda,\tau)$ giving delay-integrated responses $\Psi_R(\lambda)$ for the full delay range (black), and for restricted delay slices 0-5~d (purple), 5-10~d (green), 10-15~d (orange), and 15-20~d (red). 
To the right of the grey-scale map are wavelength-integrated responses $\Psi_R(\tau)$ for the full line (black), and for 4000~\mbox{\rm km~s$^{-1}$}\ wide velocity bins centred at $V=0$ (green), at $\pm4000$~\mbox{\rm km~s$^{-1}$}\ (red and blue), and at $\pm8000$~\mbox{\rm km~s$^{-1}$}\ (orange and purple). For a black hole mass of $\mbox{M$_{\rm BH}$}=10^8\,\mbox{M$_\odot$}$, the orange dotted curves show the virial envelope for edge-on circular Keplerian orbits, and for a Keplerian disc inclined by $i=45^\circ$ and extending from $5$ to $15$ light days. } \end{center} \end{figure*} We now present the more detailed results of a 2-D {\sc MEMEcho}\ fit to the synthetic QSO data including variations not just in the continuum and integrated line flux, but also in the emission-line velocity profile. The quality of this {\sc MEMEcho}\ fit may be judged from Figure~\ref{fig:twodqsofit}. This shows the continuum light-curve and the integrated H$\alpha$ line light-curve in the lower two panels. Above these are delay maps on the left and echo light-curves on the right for selected wavelengths as indicated. The echo light-curve data, with green error bars, is shown along with the fitted model $L(\lambda,t)$ (blue curve) and background level $L_0(\lambda)$ (red line). The 2-D velocity-delay map $\Psi_R(\lambda,\tau)$ resulting from the {\sc MEMEcho}\ fit is shown as a grey-scale image in Figure~\ref{fig:twodqsomap}. To the right are projections of $\Psi_R(\lambda,\tau)$ to form the velocity-integrated delay map $\Psi_R(\tau)$ (black) and $\Psi_R(\tau|\lambda)$ for selected 4000~\mbox{\rm km~s$^{-1}$}\ wide velocity bins centred at $V=0$ (green), $\pm4000$~\mbox{\rm km~s$^{-1}$}\ (red and blue), and $\pm8000$~\mbox{\rm km~s$^{-1}$}\ (orange and purple). 
Below the grey-scale map are projections of $\Psi_R(\lambda,\tau)$ to form the velocity profiles $\Psi_R(V)$ for the delay-integrated response (black), and $\Psi_R(V|\tau)$, in colour, for 4 delay bins, 0-5~d (purple), 5-10~d (green), 10-15~d (orange) and 15-20~d (red). This {\sc MEMEcho}\ fit achieved an overall $\chi^2/N=1.2$ for $N=10302$ data values. A lower $\chi^2$ could also be achieved but not without introducing small-scale structure in the map, suggestive of over-fitting to noise in the data. Maps with higher $\chi^2$ were also constructed, and give poorer fits to the data while smearing out the structure seen in the map for $\chi^2/N=1.2$. The map shown is thus a good compromise between noise and resolution. The {\sc MEMEcho}\ map exhibits interesting velocity-delay structure. To first order, the H$\alpha$ response has a double-peaked velocity structure with a delay structure that is symmetric on the red and blue sides of the line profile. In more detail, the response is stronger on the red side than on the blue side. In the lower panel of Figure~\ref{fig:twodqsomap}, the delay-integrated response $\Psi_R(V)$ (black) extends to $\pm10000$~\mbox{\rm km~s$^{-1}$}\, with two roughly triangular peaks cresting at $V\approx\pm4000$~\mbox{\rm km~s$^{-1}$}. The red peak is stronger and sharper than the blue one, and the central minimum is at $-1500$~\mbox{\rm km~s$^{-1}$}. The velocity profiles in different delay bins are also double-peaked. The sharpness, velocity separation and red/blue asymmetry in strength of the peaks, and the velocity at the minimum between the peaks, all change with the time delay. In the 0-5 day bin (purple), the response extends to $\pm10000$~\mbox{\rm km~s$^{-1}$}\, with two smooth dome-shaped peaks, stronger and broader on the red than the blue side, and with a minimum near $-2000$~\mbox{\rm km~s$^{-1}$}. 
In the $5-10$~d bin (green), the response still covers $\pm10000$~\mbox{\rm km~s$^{-1}$}\, but the triangular peaks have now appeared near $\pm4000$~\mbox{\rm km~s$^{-1}$}, and the central minimum moves redward to perhaps $-1600$~\mbox{\rm km~s$^{-1}$}. In the $10-15$~d bin (orange), the response declines in the far wings, the sharp peaks remain but move inward somewhat, and the central minimum moves redward to $+400$~\mbox{\rm km~s$^{-1}$}. In the $15-20$~d bin (red), the wings decline further, the two peaks move together to $\pm3000$~\mbox{\rm km~s$^{-1}$}\, and the central minimum is near $+1200$~\mbox{\rm km~s$^{-1}$}. The two velocity peaks are separated by $\pm4000$~\mbox{\rm km~s$^{-1}$}\ at $\tau\sim15$~d delays, moving closer together at longer delays, perhaps toward a merger at a maximum delay of $25$~d. This structure suggests the top half of an elliptical ring feature, such as might arise from an annulus of orbiting gas with $R\approx15$ light days and $V\,\sin{i}\approx4000$~\mbox{\rm km~s$^{-1}$}. For an inclined circular Keplerian orbit, with $\tau = (R/c)\, \left( 1 + \sin{i} \cos{\theta} \right)$ and $V = V_{\rm Kep}\, \sin{i}\,\sin{\theta}$, the implied black hole mass is $M_{\rm BH} \sin^2{i} \sim 5\times10^7\,\mbox{M$_\odot$}$. For an ellipse with maximum delay 25~d and mean delay 15~d, $1+\sin{i}\approx1.67$ and thus $i\approx45^\circ$. For $i=45^\circ$, this gives $M_{\rm BH}\approx10^8\,\mbox{M$_\odot$}$. For a $10^8\,\mbox{M$_\odot$}$ black hole, the orange dotted curves on Figure~\ref{fig:twodqsomap} show the virial envelope for edge-on circular Keplerian orbits, and for $45^\circ$ inclined orbits extending from 5 to 15 light days. This framework captures much of the structure evident in the velocity-delay map, and provides a reference against which to consider evidence for departures from that simple model. Note that the outer disc ring becomes indistinct at the maximum delay of $\sim25$~d, and its lower edge is also missing or obscured.
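The virial arithmetic above can be checked directly: the sketch below (illustrative only, not part of the {\sc MEMEcho}\ analysis) converts the projected speed and radius read off the map into $M_{\rm BH}\sin^2{i}$, and the max/mean delay ratio into an inclination.

```python
import math

# Constants (SI)
G = 6.674e-11          # gravitational constant
C = 2.998e8            # speed of light
M_SUN = 1.989e30       # solar mass [kg]
LIGHT_DAY = C * 86400  # one light day [m]

# Numbers read off the velocity-delay map (see text):
v_proj = 4000e3        # projected orbital speed V*sin(i) [m/s]
R = 15 * LIGHT_DAY     # annulus radius [m]

# Circular Keplerian orbit: V^2 = G*M/R, so the projected quantities
# give M_BH * sin^2(i) = (V sin i)^2 * R / G.
m_sin2_i = v_proj**2 * R / G / M_SUN
print(f"M_BH sin^2(i) ~ {m_sin2_i:.1e} M_sun")

# Max delay 25 d and mean delay 15 d imply 1 + sin(i) ~ 25/15.
sin_i = 25.0 / 15.0 - 1.0
i_deg = math.degrees(math.asin(sin_i))
m_bh = m_sin2_i / sin_i**2
print(f"i ~ {i_deg:.0f} deg, M_BH ~ {m_bh:.1e} M_sun")
```

The result reproduces the numbers quoted in the text: $M_{\rm BH}\sin^2{i}\sim5\times10^7$~M$_\odot$, $i$ close to $45^\circ$, and $M_{\rm BH}\approx10^8$~M$_\odot$.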
There appears to be low or negative response at $-3000<V<-1000$~\mbox{\rm km~s$^{-1}$}\ and $\tau<10$~d. Such response gaps may arise from azimuthal structure on the ring, suppressing the line response at the corresponding azimuth. \subsection{Blind Analysis and Interpretation: CARAMEL Results for the QSO model} \label{sec:results:caramel} \begin{figure} \includegraphics[width=0.9\columnwidth]{display_caramelqso} \caption{Model fits to the QSO model H$\alpha$ line profile, integrated H$\alpha$ flux, and AGN continuum flux. Panel $1$: The provided H$\alpha$ emission-line profile for each epoch. Panel $2$: The H$\alpha$ emission-line profile for each epoch produced by one sample of the BLR and continuum model. Panel $3$: The provided H$\alpha$ line profile for one randomly chosen epoch (black), and the corresponding profile (red) produced by the model in Panel 2. Cyan lines show the H$\alpha$ profile produced by other randomly chosen models. Panels $4$ and $5$: Time series of the provided integrated H$\alpha$ and continuum flux (black), the time series produced by the model in Panel 2 (red), and time series produced by other sample BLR and continuum models (cyan). } \label{fig:display_caramelqso} \end{figure} \begin{figure*} \includegraphics[width=\textwidth]{posterior_caramelqso} \caption{Posterior distributions of select model parameters for the QSO model.} \label{fig:posterior_caramelqso} \end{figure*} The CARAMEL BLR model is able to reproduce both the H$\alpha$ line profile shape and the integrated H$\alpha$ flux light curve for the QSO model (Figure~\ref{fig:display_caramelqso}). Below, we discuss the modelling results, giving median values and 68\% confidence intervals for the key model parameters. In cases where the posterior PDF is one-sided, upper and lower 68\% confidence limits are given instead. The full posterior PDFs are provided in Figure \ref{fig:posterior_caramelqso}.
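As a rough consistency check on the radial-profile parameters reported in the next paragraph, one can sample a shifted Gamma distribution numerically. The parameterization below (shape $\alpha=1/\beta^2$, scale fixed by the mean radius) is an assumption in the style of Pancoast-type BLR models, not something stated in the text:

```python
import random
import statistics

random.seed(1)

# Reported posterior medians for the QSO model (see text):
beta, r_min, r_mean = 1.28, 2.1, 11.7   # [light days]

# Assumed parameterization (hypothetical, Pancoast-style): radii follow a
# shifted Gamma distribution with shape alpha = 1/beta^2 and scale chosen
# so that the mean matches r_mean.
alpha = 1.0 / beta**2
scale = (r_mean - r_min) / alpha

# Note: random.gammavariate(shape, scale)
radii = [r_min + random.gammavariate(alpha, scale) for _ in range(200_000)]

sim_mean = statistics.fmean(radii)
sim_median = statistics.median(radii)
print(f"mean ~ {sim_mean:.1f} ld, median ~ {sim_median:.1f} ld")
```

Under this assumption the simulated median radius comes out near 7 light days, consistent with the reported $r_{\rm median}=7.5^{+2.4}_{-1.7}$ light days, and $\beta>1$ indeed corresponds to a profile steeper than exponential ($\alpha<1$).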
The CARAMEL modelling results for the QSO model show an H$\alpha$-emitting BLR whose radial distribution is steeper than exponential, with a shape parameter $\beta = $~$1.28_{-0.16}^{+0.20}$\ for the Gamma distribution. The distribution is shifted from the origin by $r_{\rm min} = $~$2.1_{-1.1}^{+1.2}$\ light days. The mean and median radii are $r_{\rm mean} = $~$11.7_{-2.5}^{+4.0}$\ and $r_{\rm median} = $~$7.5_{-1.7}^{+2.4}$\ light days, and the radial thickness of the distribution is $\sigma_r = $~$12.3_{-2.7}^{+6.9}$\ light days. The mean and median lags are $\tau_{\rm mean} = $~$11.6_{-2.3}^{+3.1}$\ and $\tau_{\rm median} = $~$6.6_{-1.5}^{+1.9}$\ days. The full posterior PDFs show multiple solutions for the opening angle and inclination angle, so a spherical BLR is not completely ruled out. Taking the median value and $68\%$ confidence intervals, we find an H$\alpha$-emitting BLR that is a thick disc, with $\theta_o = $~$32_{-12}^{+36}$\ degrees. The disc is inclined relative to the observer's line of sight by an inclination angle $\theta_i = $~$38_{-11}^{+18}$\ degrees. The 2-D posterior PDF for $\theta_i$ vs. $\theta_o$ shows that the models in which $\theta_{i} \approx 75$ degrees are the same models for which $\theta_{o} \rightarrow 90$ degrees. The emission comes preferentially from the far side of the BLR ($\kappa = $~$-0.35_{-0.08}^{+0.24}$), which is what one would expect if the BLR clouds emit preferentially back towards the ionizing source. Models with an opaque disc mid-plane are slightly preferred over those with a transparent mid-plane ($\xi = $~$0.29_{-0.19}^{+0.21}$). Finally, the results show that the emission comes preferentially from the faces of the disc ($\gamma \geq 3.4$), making the geometry closer to a cone than a uniformly distributed thick disc. Dynamically, models are preferred in which there is a mixture of gas on near-circular elliptical orbits and gas in inflowing or outflowing trajectories ($f_{\rm ellip} = $~$0.33_{-0.21}^{+0.21}$).
There is a slight preference for outflowing gas over inflowing gas ($f_{\rm flow} = $~$0.64_{-0.34}^{+0.24}$) for the gas that is not on near-circular elliptical orbits. We note that since $\theta_e = $~$36_{-23}^{+28}$\ degrees, the radial and tangential velocities are drawn from a distribution that may be rotated a non-negligible angle towards the circular velocity, resulting in fewer BLR particles on truly unbound trajectories. To summarize the total amount of inflowing or outflowing gas, we calculate an additional parameter: \begin{align} {\rm In. - Out.} = {\rm sgn}(f_{\rm flow} - 0.5) \times (1 - f_{\rm ellip}) \times \cos\theta_e, \end{align} where ${\rm sgn}$ is the sign function. This parameter is constructed such that a BLR with pure radial outflow (inflow) will have ${\rm In. - Out.} = 1~(-1)$, and a BLR with no preference for either solution will have ${\rm In. - Out.} = 0$. The posterior distribution for this parameter shows solutions for both inflow and outflow, although those with outflowing gas are preferred. Finally, we find that macro-turbulence may be important to the dynamics, with $\sigma_{\rm turb} = $~$0.052_{-0.031}^{+0.025}$\,$v_{\rm circ}$. The black hole mass inferred for this model is $\log_{10}(M_{\rm BH}/M_\odot) = $~$8.20_{-0.17}^{+0.23}$. \begin{figure} \includegraphics[width=\columnwidth]{transfer_caramelqso_int_1132-1} \caption{Velocity-resolved transfer function for the QSO model, chosen to be representative of the full posterior sample. The right-hand panel shows the velocity-integrated transfer function and the bottom panel shows the time-averaged line profile.} \label{fig:transfer_caramelqso} \end{figure} \begin{figure} \includegraphics[width=0.9\columnwidth]{geo_caramelqso} \caption{Geometric model of the broad line region that was used to create the transfer function in Figure~\ref{fig:transfer_caramelqso}.
Each circle represents one of the particles in the model, and the size of the circle is proportional to the relative strength of emission from the particle, as determined by Equation \ref{eq:caramel_weight}. The observer is situated along the positive $x$-axis.} \label{fig:geo_caramelqso} \end{figure} \subsection{Unblinding: Comparison to Ground Truth} \label{sec:results:models} \subsubsection{{\sc MEMEcho}\ vs ground truth} \label{sec:results:models:memecho} The primary output of the {\sc MEMEcho}\ analysis is the recovered 2-D response function, shown in Figure~\ref{fig:twodqsomap}. This can be directly compared to the true (input) response function in Figure~\ref{fig:method:response_qso}. In our view, the performance of {\sc MEMEcho}\ in recovering the input velocity-delay map is quite impressive. To a good approximation, the recovered map is a smoothed version of the input map, exactly as one might hope and expect. However, the true peak in the overall delay distribution lies at $\simeq 4$~days, whereas the peak in the recovered distribution is estimated to be $\simeq 6$~days. This is almost certainly due to the inevitable smoothing associated with regularization, exacerbated by the skewness of the delay distribution. The {\sc MEMEcho}\ velocity-delay map correctly reproduces the shape of the virial envelope in the input map, as well as the weakness of the response near line centre. The difference in the responses of the two line wings -- with the red wing exhibiting a stronger response than the blue wing -- is also captured correctly in the {\sc MEMEcho}\ map. Turning to the physical interpretation of the {\sc MEMEcho}\ results provided by K.H., he notes that the dominant structure in the recovered velocity-delay map can be explained by a BLR that consists mainly of gas extending from 5 to 15~light days in Keplerian rotation around a $M_{\rm BH} \simeq 10^8$~M$_{\odot}$ black hole, viewed from an inclination of $i \simeq 45^{\circ}$.
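This interpretation can be checked numerically: for a $10^8$~M$_\odot$ black hole, circular Keplerian orbits at 5 and 15 light days viewed at $45^\circ$ give projected speeds and delay ranges close to the structure seen in the map. A minimal sketch (illustrative values only):

```python
import math

# Constants (SI)
G = 6.674e-11          # gravitational constant
C = 2.998e8            # speed of light
M_SUN = 1.989e30       # solar mass [kg]
LIGHT_DAY = C * 86400  # one light day [m]

M_BH = 1e8 * M_SUN     # black hole mass from the MEMEcho interpretation
sin_i = math.sin(math.radians(45))

results = {}
for r_ld in (5, 15):   # inner and outer radii of the inferred annulus
    r = r_ld * LIGHT_DAY
    v_los = math.sqrt(G * M_BH / r) * sin_i   # max line-of-sight speed
    tau_min = (r / C) * (1 - sin_i) / 86400   # shortest delay [d]
    tau_max = (r / C) * (1 + sin_i) / 86400   # longest delay [d]
    results[r_ld] = (v_los / 1e3, tau_min, tau_max)
    print(f"R={r_ld:2d} ld: V sin i ~ {v_los/1e3:5.0f} km/s, "
          f"delays {tau_min:.1f}-{tau_max:.1f} d")
```

The outer ring reaches projected speeds near $\pm4000$~\mbox{\rm km~s$^{-1}$}\ with delays out to $\sim25$~d, while the inner ring supplies the high-velocity ($\sim\pm7000$~\mbox{\rm km~s$^{-1}$}), short-delay wings, consistent with the features described above.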
Comparing this to Figures~\ref{fig:geometry-emissivity} and ~\ref{fig:geometry-responsivity}, as well as to the numbers in Table~\ref{table:params}, we see that this interpretation does capture the basic shape and kinematics of the line-forming region. More specifically, in our model, the emission line (and its response) are formed primarily in the dense, rotation-dominated base of the outflow. The geometry of and kinematics in this region are indeed similar to those of an annulus in Keplerian rotation between $r_{\rm min}$ and $r_{\rm max}$, and our adopted viewing angle is $i = 40^{\circ}$, similar to that inferred from the {\sc MEMEcho} results. The main discrepancy between the physical interpretation of the {\sc MEMEcho}\ results and the input model concerns the physical scale of the line-forming region. Figures~\ref{fig:geometry-emissivity} and ~\ref{fig:geometry-responsivity} show that -- in line with $r_{\rm min}$ and $r_{\rm max}$ in Table~\ref{table:params} (3.29-6.602 light days) -- the actual radius of the line-forming ``annulus'' in our model is roughly 2/3 of that inferred from the {\sc MEMEcho}\ reconstruction (i.e. $\simeq 7$~light-days~$=1.8\times 10^{16}$~cm). This is consistent with the $\simeq 50\%$ difference between the true and inferred mean delays, and is therefore also presumably caused by the effective smoothing of the response function during the maximum entropy inversion. Given that the BLR radius is overestimated, we might have expected the black hole mass to be overestimated also (since the virial estimator scales as $M_{\rm BH} \propto v^2 R$). However, the estimate obtained from the {\sc MEMEcho}\ reconstruction is $M_{\rm BH} \simeq 10^{8}$~M$_{\odot}$, whereas the black hole mass in our (rescaled) model is $M_{\rm BH} \simeq 2\times 10^8$~M$_{\odot}$.
Based on Figure~\ref{fig:twodqsofit}, the main reason for this difference appears to be that the {\sc MEMEcho}\ estimate is derived from the {\em outer envelope} of the brightest parts of the 2-D response function. This outer envelope lies at velocities that are higher than typical for the bulk of the line-forming region. This, coupled with the slightly overestimated inclination, biases the {\sc MEMEcho}\ black hole mass estimate towards higher values and more than compensates for the effect of the overestimated BLR radius. In any case, agreement to within a factor of $\simeq 2$ is in line with the accuracy expected for this qualitative assessment. We finally note that neither the {\sc MEMEcho}\ velocity-delay map itself, nor its interpretation by an expert, point towards a rotating {\em outflow} as the source of the variable emission line. As noted above, given that the line formation in our disc wind model takes place primarily within the dense, rotation-dominated base of the outflow, this should not come as a surprise. It is nevertheless important to keep this in mind when interpreting observational data: physically motivated BLR models can have complex geometries and kinematics that may not be easy to discern even from 2-D response functions. Comparisons with toy models -- e.g. Hubble inflows/outflows, pure Keplerian discs -- may still provide useful insights in these cases. However, it is crucial to remember that we are only studying those parts of the BLR that dominate the responsivity-weighted line emission. Even if the inferred geometry and kinematics for these regions are broadly correct, they may not reflect the overall geometry and kinematics of the flow that constitutes the BLR. \subsubsection{CARAMEL vs ground truth} \label{sec:results:models:caramel} The primary output of the CARAMEL analysis is the set of parameter distributions shown in Figure~\ref{fig:posterior_caramelqso} and discussed in Section~\ref{sec:results:caramel}. 
These parameters define the properties of the cloud population used by CARAMEL to fit the simulated data. The overall geometry of this population is shown in Figure~\ref{fig:geo_caramelqso}, which can be compared to our raw and responsivity-weighted emissivity maps (Figures~\ref{fig:geometry-emissivity} and \ref{fig:geometry-responsivity}). Even though a spherical BLR was not completely ruled out by the CARAMEL modelling, the preferred geometry was a strongly flared disc (opening angle $\theta_{o} = 32^{+36}_{-12}$~degrees) viewed at an inclination of $i \simeq 40^{\circ}$. The inferred inclination is in excellent agreement with the true value, and a flared disc is a reasonable description of the line-forming region in our biconical disc wind model. Indeed, line emission in the CARAMEL models is produced preferentially near the face of the disc, in line with a conical geometry. In the model, the inner part of the wind cone lies at an angle of $90^{\circ}-\theta_{\rm min} = 20^{\circ}$ from the disc surface, which is smaller than, but still consistent with, the inferred opening angle. The velocity-integrated median delay obtained by the CARAMEL analysis is $\tau_{\rm median} = 6.6^{+1.9}_{-1.5}$~days. This agrees well with the actual median delay $\tau_{\rm median} \simeq 6$~days. In line with this, the characteristic scale of the line-forming region is also correctly recovered, $r_{\rm median} \simeq 7$~light days, in good agreement with the approximate radius of the line-emitting annulus in our model (see Figures~\ref{fig:geometry-emissivity} and ~\ref{fig:geometry-responsivity}). CARAMEL also correctly finds that the line emission comes preferentially from the far side of the BLR, and that the mid-plane of the disc is opaque. Turning to the kinematics of the BLR, the picture is less clear. CARAMEL correctly finds that a significant part of the BLR material is on near-circular Keplerian orbits and also that an additional velocity field is required.
However, it cannot decisively distinguish between inflow and outflow kinematics, even though it does (correctly) favour a net outflow of material. CARAMEL also finds marginal evidence for a significant macro-turbulent velocity, which we suspect is an artefact of its kinematic parameterization being unable to faithfully describe the ``true'' BLR kinematics. The black hole mass of $M_{\rm BH} \simeq 2 \times 10^8$~M$_{\odot}$ is correctly recovered by CARAMEL. Perhaps the most surprising and concerning aspect of the CARAMEL analysis is the velocity-delay map constructed from a model drawn at random from the posterior distribution (Figure~\ref{fig:transfer_caramelqso}). Some difference would be expected, as CARAMEL assumes that emission is \emph{not} linearised around a mean line luminosity; however, this velocity-delay map looks completely different from both our input response function (Figure~\ref{fig:method:response_qso}) and the response function recovered by {\sc MEMEcho}\ (Figure~\ref{fig:twodqsomap}). For example, it does not recover the double-peaked nature of the response, i.e. the suppressed response near line centre. It also shows no bright emission from the virial envelope associated with any particular annulus, just a smooth distribution across the entire width of the envelope at long delays, and a bright, diagonal ``line'' at short delays (with a blue-leads-red signature). We have checked whether the particular model shown in Figure~\ref{fig:transfer_caramelqso} was just an ``unlucky'' draw from the posterior distribution, i.e. that it is not representative. We find that other models drawn from the posterior distribution can exhibit (weakly) double-peaked mean line profiles, but the velocity-delay maps always tend to be quite similar to Figure~\ref{fig:transfer_caramelqso} (and hence dissimilar to the input response function).
\begin{figure} \includegraphics[width=\columnwidth]{spectra_rms} \caption{RMS residuals for the noisy output time series of spectra and the CARAMEL fit to it.} \label{fig:discussion:rms} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{caramel_model_fit.png} \caption{CARAMEL model fits to the QSO model H$\alpha$ line profile, compared to the model line profiles. Red lines indicate line centres. Panel $1$: The mean model and CARAMEL line profiles across all epochs. Panel $2$: The model emission-line profile for each epoch. Panel $3$: The CARAMEL fit emission-line profile for each epoch. Panel $4$: The standardised model$-$fit residuals for the emission-line profile for each epoch. } \label{fig:discussion:caramel_residuals} \end{figure} Given that the velocity-delay map is just a by-product of the CARAMEL analysis, it is important to test whether this discrepancy is associated purely with the construction of the response function from the CARAMEL models (rather than the models themselves). What is actually being fit by CARAMEL is the spectroscopic time series itself. We have therefore also constructed the RMS line profile directly from the input time series and also from the CARAMEL model fit to this time-series. These RMS line profiles are shown in Figure~\ref{fig:discussion:rms}. It is immediately apparent that there does seem to be a fundamental problem with the CARAMEL model fits: the RMS profile constructed from the CARAMEL model has a completely different shape from that constructed from the input data. Most importantly, the CARAMEL RMS profile is single-peaked, whereas the input disc wind model produces a clearly double-peaked RMS profile (as we would expect, given its rotation-dominated kinematics).
The standardised model$-$fit residuals for the time series of spectra go some way towards further illustrating this deviation (Figure~\ref{fig:discussion:caramel_residuals}); it is apparent that whilst CARAMEL can capture the variation around the line peak well, variations of the regions on either side of the line peak are poorly captured. CARAMEL simultaneously under- and over-predicts, unable to capture the asymmetry of the line response. We currently have no simple explanation for this behaviour. It is perplexing that a model that produces a response function and RMS profile that appear to be inconsistent with ground truth should nevertheless be able to match the individual spectra in our time series (as suggested by Figure~\ref{fig:display_caramelqso}). The discrepancy is potentially due to CARAMEL's use of \emph{linear} line response to the continuum, rather than \emph{linearised} response around the mean flux as assumed by {\sc MEMEcho}. This would explain why CARAMEL can fit the \emph{mean} line profile relatively well, but fail to capture variations around that mean. \section{Conclusions} \label{sec:conclusions} We have tested the ability of the two main inversion techniques used in AGN reverberation mapping to recover two physically-motivated response functions from a simulated time-series of spectra. The two reverberation mapping codes we tested were {\sc MEMEcho}\ and CARAMEL, which represent two different classes of inversion techniques. {\sc MEMEcho}\ simply aims to recover the 2-D response function, with the physical interpretation of the results being left to the expert user. CARAMEL carries out forward modelling of the spectroscopic time-series, using a simple, but flexible, description of the BLR as a population of orbiting clouds with known geometric and kinematic properties. All tests were carried out as blind trials, i.e. the {\sc MEMEcho}\ and CARAMEL modelling teams were only provided the simulated time series.
The benchmark BLR models used in our test describe a rotating, biconical accretion disc wind. The simulated spectroscopic time series were generated from a self-consistent ionization and radiative transfer simulation that follows the H$\alpha$ line formation process within the outflow. Two different sets of model parameters were used as input, which were roughly designed to represent Seyfert galaxies and QSOs. In both models, the H$\alpha$ line-forming region lies primarily in the dense base of the wind, where the kinematics are rotation-dominated. Neither the maximum-entropy technique of {\sc MEMEcho}\ nor the Markov-Chain Monte-Carlo forward-modelling technique of CARAMEL was able to successfully recover the Seyfert response function, due to the significant negative responsivity in large parts of the velocity-delay space of this model. However, both methods fail ``gracefully'', in the sense of not generating spurious results. In the case of the QSO model, the velocity-delay map recovered by {\sc MEMEcho}\ was a good match to the input 2-D response function, after accounting for the inevitable smoothing associated with the inversion process. The expert interpretation of the map also correctly captured the annular geometry and rotation-dominated kinematics of the line-forming region. In addition, the estimated observer orientation and black hole mass were in reasonable agreement with those in the input model. The characteristic size of the BLR was overestimated by roughly 50\%, however. CARAMEL also captured the overall geometry of the line-forming region, describing it as a flared, inclined disc with the correct size and orientation. The importance of rotation to the kinematics was also recovered, but while an additional kinematic component was required by the modelling, CARAMEL was unable to reliably distinguish between inflow and outflow velocity fields for this component. Nevertheless, the black hole mass was correctly estimated by CARAMEL.
The most surprising and concerning result of the CARAMEL analysis is that the velocity-delay map it recovers is strongly inconsistent with the true 2-D response function. In line with this, the RMS profile of the CARAMEL fits to the spectroscopic time series is also inconsistent with that of the input time series. We currently have no explanation for these discrepancies. They are difficult to understand in light of the apparently successful fits CARAMEL achieves to the individual spectra. Overall, we consider the results of these tests to be quite positive. Even though neither method was able to deal with the Seyfert model, with its net negative response, neither generated misleading results in this case. In the case of the QSO model, both methods broadly recovered the correct geometry of the line-forming region, as well as its dominant kinematics. However, neither method was able to capture the fact that the input model described a disc wind. This should not come as a surprise, given that rotation dominates the kinematics in the line-forming region. It is nevertheless critical to keep this lesson in mind when interpreting observational data sets: even correctly recovered and interpreted response functions can only tell us about the conditions in the (responsive parts of the) line-forming region. This region can be dominated by rotation, for example, even if this part of the BLR is just the inner part of a larger-scale outflow. The results do raise the concern that real time-series that exhibit negative responses (as suggested by \citet{Pei2017}) cannot be reliably analysed by current techniques. This suggests that the method of regularized linear inversion \citep{Krolik1995}, which has the capability to handle negative responses, warrants revisiting. Promisingly, the {\sc MEMEcho}\ team are also exploring modifications to the code that would enable it to handle such time series.
Similarly, the CARAMEL team are currently implementing photoionization physics into their model, which has the potential to introduce a spatially-dependent responsivity into the model, allowing it to handle negative responses. This work only covers the analysis of two possible models with a single continuum, and whilst the simulated observing campaigns satisfy the criteria in \citet{Horne2004}, we believe simulating campaigns with a more diverse range of models and (in particular) continuum variation profiles would allow us to more closely define the limits of existing deconvolution techniques. This could potentially give such simulations a role in the planning of observational campaigns, allowing bounds to be placed on their capabilities. \section*{Acknowledgements} Most figures in this paper were prepared using \emph{PGPLOT} \citep{Pearson2017}, \emph{MatPlotLib} \citep{Hunter2007} and \emph{Dia Diagram Editor} \citep{Oualline2018}. SWM acknowledges the University of Southampton's Institute for Complex Systems Simulation and the Engineering and Physical Sciences Research Council for the PhD studentship that funded his research. CK and NSH acknowledge support by the Science and Technology Facilities Council grant ST/M001326/1. JHM is supported by the Science and Technology Facilities Council under grant ST/N000919/1. KH acknowledges support from STFC grant ST/R000824/1. \section{Appendix} \input{section_appendix.tex} \bibliographystyle{mnras}
\section{Introduction} Supersymmetry (SUSY) is a promising candidate for physics beyond the Standard Model (SM). The supersymmetric extension predicts superpartners of the SM particles, whose masses are expected to be at the TeV scale in order to explain the origin of the electroweak (EW) scale.\footnote{See e.g.~\cite{Martin:1997ns,Chung:2003fi} for reviews.} In the Minimal Supersymmetric Standard Model (MSSM), there is a supersymmetric mass parameter for the higgsino, the superpartner of the Higgs bosons, called the $\mu$-parameter. In order to realize the EW scale without fine-tuning, the $\mu$-parameter should be of EW-scale size. Besides, the lightest SUSY particle in the MSSM is stable because of R-parity, so that the higgsino becomes a good dark matter (DM) candidate if there is no lighter SUSY particle. So far, much effort has been devoted to SUSY searches in collider experiments and dark matter observations~\cite{Jungman:1995df}. There are no decisive signals of SUSY particles yet, but the higgsino is still one of the possible and attractive DM candidates that could reveal the origin of the EW scale. The MSSM has many parameters, so that many possible mass spectra for the SUSY particles can be considered. The direct searches for SUSY particles, as well as the 125~GeV Higgs boson mass measurement at the LHC~\cite{Aad:2015zhl}, however, severely constrain the parameter space. It is becoming very difficult to construct SUSY models, as long as the explanation of the EW scale is not discarded. One possible setup that achieves both the 125~GeV Higgs boson mass and the explanation of the EW scale is known as the Non-Universal Gaugino Masses (NUGM) scenario~\cite{Abe:2007kf,Abe:2012xm}. In this scenario, a suitable ratio of the wino mass to the gluino mass achieves the EW scale and the 125~GeV Higgs boson mass. Then, the $\mu$-parameter is predicted to be close to the EW scale.
The current status and future prospects of the discovery of SUSY particles at the LHC have been investigated in this scenario~\cite{Abe:2015xva,Kawamura:2016drh,JK_PhDthesis}. We find that the superpartners of the top quark and the gluon, called the top squark and the gluino, are promising particles with which to test this scenario. Expected reaches for these SUSY particles decaying to higgsinos are studied in Refs.~\cite{Baer:2016bwh,Baer:2016wkz}. Note that there are some models that lead to such a ratio of gaugino masses. One possibility is the mirage mediation~\cite{Choi:2004sx,Choi:2005ge,Choi:2005uz}, which is a mixture of the moduli mediation~\cite{Brignole:1995fb,Brignole:1997dp} and the anomaly mediation~\cite{Randall:1998uk,Giudice:1998xp}. The phenomenology of the mirage mediation was discussed before the Higgs boson discovery in Refs.~\cite{Choi:2006xb,Kitano:2005wc,Choi:2006im,Cho:2007fg,Nagai:2007ud,Nakamura:2008ey,Choi:2009jn,Holmes:2009mx,Altunkaynak:2010xe,Everett:2008ey,Everett:2008qy} and after it in Refs.~\cite{Asano:2012sv,Kobayashi:2012ee,Abe:2014kla,Hagimoto:2015tua,Everett:2015dqa,Barger:2015xqk,Baer:2016hfa}. There are also works realizing this gaugino mass ratio in GUT models~\cite{Younkin:2012ui} and superstring models~\cite{Blumenhagen:2006ci}. In this kind of SUSY model, the higgsino is light as a consequence of explaining the origin of the EW scale, and it is expected to be discovered in experiments. The higgsino has neutral and charged components; the neutral component mixes with the bino and the wino, while the charged component mixes with the wino.\footnote{The wino and the bino are the superpartners of the $SU(2)_L$ and $U(1)_Y$ gauge bosons, respectively. } In our scenario, the gauginos are relatively heavy, so that all components of the higgsino are light and almost degenerate; in fact, the mass difference is a few GeV~\cite{Abe:2015xva,Kawamura:2016drh}. The higgsino is therefore hard to detect at the LHC, due to these small mass differences.
On the other hand, dark matter direct detection experiments can efficiently probe higgsinos, if the neutral component of the higgsino mixes slightly with the gauginos and constitutes the dark matter of our universe. It is also interesting that the higgsino should be lighter than about 1 TeV if it is thermally produced. Then, the DM mass, which mainly comes from the neutral component of the higgsino, is predicted to lie between the EW scale and 1 TeV. In this paper, we study dark matter physics in the NUGM scenario. Direct detection experiments are sensitive not only to the higgsino mass itself, but also to the gaugino masses, because the higgsino-gaugino mixing gives the most significant contribution to the detection rate. We also discuss the constraints from the LHC experiments, based on the results in Refs.~\cite{Abe:2015xva,Kawamura:2016drh,JK_PhDthesis}. We explicitly show the exclusion limits and the future prospects on the plane of the higgsino and gaugino masses. In the end, we find that this scenario can be fully covered by future experiments, as long as the gluino mass is below 2.5 TeV for a certain parameter set. This paper is organized as follows. The NUGM scenario is reviewed in Section 2, and we discuss dark matter physics in Section 3. The results of numerical calculations are shown in Section 4. Section 5 is devoted to the conclusion. \section{NUGM scenario} \subsection{Review of NUGM} The NUGM scenario is known as one of the attractive SUSY scenarios that realize a $\mu$-parameter near the EW scale and the 125~GeV Higgs boson mass simultaneously. The $\mu$-parameter is related to the EW symmetry breaking scale through the minimization condition for the Higgs potential as \begin{eqnarray} \label{eq-EWSB} m_Z^2 \simeq 2 |m_{H_u}^2| - 2|\mu|^2, \end{eqnarray} where $m_Z$ is the Z-boson mass and $m_{H_u}^2$ is the soft scalar mass squared for the up-type Higgs boson.
This relation shows that $|\mu|^2$ and $|m_{H_u}^2|$ should be around the EW scale to avoid fine-tuning between these parameters. The $\mu$-parameter is the unique SUSY-preserving dimensionful parameter in the MSSM. On the other hand, all the other dimensionful parameters softly break SUSY and would originate from some mediation mechanism of SUSY breaking; i.e., the soft SUSY breaking terms would have the same origin. Let us assume that all the ratios of the soft SUSY breaking parameters are fixed by some mediation mechanism and that the overall scale is given by $M_0$. Under this assumption, Eq.~(\ref{eq-EWSB}) becomes a relation between $\mu$ and $M_0$. In Ref.~\cite{Barbieri:1987fn}, the parameter $\Delta_x$, which measures the sensitivity of the EW scale to the parameter $x$, was introduced: \begin{eqnarray} \Delta_x = \left|\frac{\partial \ln{m_Z^2}}{\partial \ln{x^2}}\right|\ (x = \mu, M_0). \end{eqnarray} Since $m_{H_u}^2(m_{\rm SUSY})$ is expressed as a quadratic polynomial in the boundary conditions, we can derive $\Delta_\mu + \Delta_{M_0}=1$ at tree level, and $\Delta_{\mu} \simeq \Delta_{M_0}$ is satisfied. Thus the tuning of the $\mu$-parameter represents the degree of tuning required to realize the EW symmetry breaking in the model. From the relation Eq.~(\ref{eq-EWSB}), the tuning measure of the $\mu$-parameter can be written as $\Delta_\mu = 2|\mu|^2/m_Z^2$ up to radiative corrections to the condition, so a small $|\mu|$ is simply required to avoid fine-tuning under this assumption. The details of this kind of discussion in the NUGM scenario are given in Refs.~\cite{JK_PhDthesis,preparation}. We proceed to study the collider and dark matter phenomenology of the NUGM under this assumption. In this paper, we assume a universal soft scalar mass $m_0$ and A-term $A_0$, while the gaugino masses $M_{1,2,3}$ are non-universal at the gauge coupling unification scale ($\simeq 10^{16}$ GeV). 
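As a quick numerical illustration (ours, not from the original analysis), the tree-level tuning measure $\Delta_\mu = 2|\mu|^2/m_Z^2$ can be evaluated for a few representative values of $\mu$; the snippet below is a minimal sketch.

```python
# Minimal sketch: tree-level fine-tuning measure Delta_mu = 2|mu|^2 / m_Z^2.
# Illustrative values of mu only; all masses in GeV.
m_Z = 91.19  # Z-boson mass [GeV]

def delta_mu(mu):
    """Barbieri-Giudice-type sensitivity of m_Z^2 to the mu-parameter."""
    return 2.0 * mu**2 / m_Z**2

for mu in (250.0, 500.0, 1000.0):
    print(f"mu = {mu:6.0f} GeV  ->  Delta_mu = {delta_mu(mu):8.1f}")
```

For $\mu \simeq 250$ GeV this gives $\Delta_\mu \approx 15$, i.e. a tuning at the several-percent level, while $\mu \simeq 1$ TeV already corresponds to a per-mille tuning.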
We assume that the ratio of the two Higgs vacuum expectation values (VEVs), $\tan\beta \equiv \langle H_u\rangle/\langle H_d\rangle$, is $10$ throughout this paper. The soft mass squared $m_{H_u}^2$ at $m_{\rm SUSY}=1$ TeV is related to the boundary conditions at the unification scale as \begin{eqnarray} m_{H_u}^2(m_{\rm SUSY})&\simeq&0.005M_1^2 - 0.005 M_1M_2 + 0.201 M_2^2 -0.021 M_1M_3 - 0.135 M_2M_3 \\ \nonumber && - 1.57 M_3^2+ A_0 (0.011 M_1 + 0.065 M_2 + 0.243 M_3 - 0.099 A_0) - 0.075 m_0^2. \end{eqnarray} This relation shows that the contribution from the gluino mass is dominant among the renormalization group (RG) effects, but the gluino mass contribution can be canceled by the RG effects from the other gaugino masses $M_{1,2}$. In particular, the $M_2^2$ term cancels the $M^2_3$ term if the ratio $M_2/M_3$ satisfies $M_2/M_3\simeq$ 3-4. Similarly, the top squark mass parameters $m^2_{\tilde{t}_L}$, $m^2_{\tilde{t}_R}$ and $A_t$ at $m_{\rm SUSY}=1$ TeV are related to the boundary conditions as \begin{eqnarray} m_{\tilde{t}_L}^2(m_{\rm SUSY}) &\simeq& -0.007 M_1^2 -0.002 M_1M_2 + 0.354 M_2^2 -0.007 M_1 M_3 -0.051 M_2 M_3 + 3.25 M_3^2 \nonumber \\ \label{eq-mQ} && +(0.004 M_1+0.025 M_2+0.094 M_3-0.039 A_0) A_0+ 0.622 m_0^2, \\ m_{\tilde{t}_R}^2(m_{\rm SUSY}) &\simeq& 0.044 M_1^2 -0.003 M_1 M_2 - 0.158 M_2^2 -0.014 M_1 M_3 -0.090 M_2 M_3 +2.76 M_3^2 \nonumber \\ \label{eq-mu} && +(0.008 M_1+0.044 M_2+0.162 M_3-0.066 A_0) A_0+ 0.283 m_0^2, \\ A_t(m_{\rm SUSY}) &\simeq& -0.032 M_1 - 0.237 M_2 - 1.42 M_3 + 0.277 A_0. \label{eq-At} \end{eqnarray} We see that $A_t(m_{\rm SUSY})$ increases and $m_{\tilde{t}_R}^2(m_{\rm SUSY})$ decreases as the wino mass $M_2$ increases; the latter effect is induced by the top Yukawa coupling. As a result, the ratio $A_t^2/\sqrt{m_{\tilde{t}_L}^2m_{\tilde{t}_R}^2}$ increases, and the SM-like Higgs boson mass around 125 GeV can be achieved thanks to the relatively large wino mass. 
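The cancellation described above can be checked numerically. The snippet below (a sketch that simply plugs numbers into the quoted RG coefficients, with the simplifying assumptions $M_1 = m_0 = A_0 = 0$) evaluates $m_{H_u}^2(m_{\rm SUSY})$ as a function of $M_2/M_3$ and shows that the sign flips between $M_2/M_3 = 3$ and $3.5$, i.e. the gluino contribution is compensated in the quoted $3$-$4$ range.

```python
# Sketch: evaluate the quoted RG expression for m_Hu^2(m_SUSY) and locate
# the wino-to-gluino ratio at which the gluino contribution is canceled.
# Coefficients are those given in the text; M1 = m0 = A0 = 0 for simplicity.
def m_Hu_sq(M1, M2, M3, A0=0.0, m0=0.0):
    return (0.005*M1**2 - 0.005*M1*M2 + 0.201*M2**2 - 0.021*M1*M3
            - 0.135*M2*M3 - 1.57*M3**2
            + A0*(0.011*M1 + 0.065*M2 + 0.243*M3 - 0.099*A0)
            - 0.075*m0**2)

M3 = 1000.0  # GeV
for ratio in (2.0, 3.0, 3.5, 4.0, 5.0):
    val = m_Hu_sq(0.0, ratio*M3, M3)
    print(f"M2/M3 = {ratio:3.1f}  ->  m_Hu^2(m_SUSY) = {val:12.3e} GeV^2")
```

With these assumptions the zero of $0.201\,r^2 - 0.135\,r - 1.57$ (in units of $M_3^2$, $r=M_2/M_3$) sits near $r \simeq 3.2$, consistent with the statement in the text.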
\subsection{Mass spectrum of NUGM} We have seen that a suitable wino-to-gluino mass ratio reduces the $\mu$-parameter and also enhances the Higgs boson mass. Besides, some of the sparticle masses are within the reach of the LHC experiments thanks to the sizable left-right mixing of the top squarks \cite{Abe:2015xva,Kawamura:2016drh}. When the wino mass is large, the left-handed sparticles become heavy due to the RG evolution. The right-handed slepton masses are determined by the bino mass, while the right-handed squark masses depend mainly on the gluino and bino masses. The bino mass also plays a crucial role in shifting the top squark mass. This means that the bino mass has to be heavy enough that the top squark mass is consistent with the LHC results. Another important consequence of the relatively heavy bino and wino is that the mass differences among the components of the higgsino become small. These mass differences are induced by the mixing between the higgsino and the gauginos, so they are suppressed by the bino and wino masses, as explicitly shown in the next section. The mass differences among the higgsino components are typically 2 GeV, as shown in Ref.~\cite{Abe:2015xva}. This small mass difference makes it difficult to detect the higgsino directly at the LHC, because the daughter particles are too soft to be distinguished from backgrounds and the lifetimes are too short for charged tracks to be recognized, unlike the case where the wino is the lightest SUSY particle (LSP)~\cite{Ibe:2006de}~\footnote{There are recent works on searches for charged higgsinos that exploit their relatively long lifetime~\cite{Mahbubani:2017gjh,Fukuda:2017jmk}. }. This feature also indicates that we can treat all of the higgsino-like particles as invisible at the LHC. Let us summarize the important features of the mass spectrum discussed above: \begin{itemize} \item All gauginos are ${\cal O}(1)$~TeV. 
\item The higgsino mass is between the EW scale and 1~TeV, and the mass differences among its components are ${\cal O}(1)$~GeV. \item The right-handed top squark is relatively light. \end{itemize} \subsection{LHC bounds} In our scenario, the top squark and the gluino are good candidates for detection at the LHC. The current exclusion limits and future prospects have been studied in Refs.~\cite{Abe:2015xva,Kawamura:2016drh,JK_PhDthesis}. In the NUGM scenario, a top squark decays as $\tilde{t}_1\rightarrow t\tilde{\chi}_{1,2}^0/b\tilde{\chi}_1^\pm$, where each branching fraction is $50\%$ as long as the mass difference between the top squark and each of the higgsino-like particles is significantly larger than the top quark mass. Note that the light neutralinos consist of higgsino that mixes slightly with the wino and bino in our scenario. The relevant top squark searches at the LHC are discussed in Refs.~\cite{LHC13:sb} and \cite{LHC1313_hadMET}. The former analysis targets a pair of bottom squarks decaying as $\tilde{b}_1\tilde{b}_1\rightarrow b\tilde{\chi}^0 b\tilde{\chi}^0$, which gives the same signal as $\tilde{t}_1\tilde{t}_1\rightarrow b\tilde{\chi}^\pm b\tilde{\chi}^\pm$ in the NUGM scenario. The latter analysis targets hadronically decaying top squarks, $\tilde{t}_1\tilde{t}_1\rightarrow t \tilde{\chi}^0 t \tilde{\chi}^0 \rightarrow bjj\tilde{\chi}^0 bjj\tilde{\chi}^0$. In Ref.~\cite{LHC1313_hadMET}, the signal regions require more than 4 jets, of which 2 should be b-tagged. Such signal regions will be sensitive to $\tilde{t}_1 \tilde{t}_1 \rightarrow t (\rightarrow bjj)\ \tilde{\chi}^0 b \tilde{\chi}^\pm$ events in the NUGM scenario, although this analysis is not fully optimized for them. This decay pattern is realized in almost half of the events with pair-produced top squarks if the mass difference between the top squark and the higgsino is large enough. 
Thus the channel targeting hadronically decaying top squarks is sensitive to the large mass difference region, while the channel targeting bottom squarks decaying to a bottom quark and a neutralino is sensitive to the mass-degenerate region. According to the analysis in Ref.~\cite{JK_PhDthesis}, a top squark lighter than 800 GeV is excluded if $\mu\lesssim200$ GeV, and a top squark lighter than 600 GeV is excluded in the range $200$ GeV $\lesssim \mu \lesssim 270$ GeV. There is no exclusion limit for top squarks if $\mu$ is greater than 270 GeV. In the present scenario, a gluino decays as $\tilde{g}\rightarrow t\tilde{t}_1\rightarrow t+t\tilde{\chi}^0/b\tilde{\chi}^\pm$. Hence, the signal from gluino pair production is expected to have 4 b-tagged jets, jets/leptons coming from 2-4 W-bosons and large missing energy in the final state. The analysis in Ref.~\cite{LHC13:glu} targets this type of signal, and we refer to the exclusion limit obtained in Ref.~\cite{JK_PhDthesis}. A gluino lighter than 1.8 TeV is excluded if the $\mu$-parameter is less than 800 GeV. The bound is relaxed if the mass difference is smaller than about 300 GeV. Note that there is another channel, $\tilde{g} \rightarrow g\tilde{\chi}^0$, which is induced by the top squark loop. If the mass difference between the gluino and the higgsino is near or below the top quark mass, this decay channel becomes important. We would then need to consider the limits based on data such as Ref.~\cite{LHC:2jMET}, but this is beyond the scope of this paper. Let us comment on the case with a light bino. If the gluino is heavy enough, the bino can be as light as the higgsino, and the top squark can also decay to the bino. This decay is, however, usually suppressed unless the bino is significantly lighter than the higgsino, because the bino coupling to the top squark is much weaker than that of the higgsino, which is enhanced by the top Yukawa coupling. Such a light bino is less attractive from the experimental point of view. 
If the bino mass is light, the gluino has to be much heavier than the experimental reach in order to lift the top squark mass. The light bino case would then be unfavorable from the naturalness point of view. Furthermore, it is known that a bino LSP tends to overclose the universe, so that some dilution mechanism is necessary. \section{Dark matter physics} \subsection{Neutralino sector} In our study, we assume that the signs of all the gaugino masses are positive, while the sign of the $\mu$-parameter is either negative or positive. After the EW symmetry breaking, the gauginos and the higgsino mix with each other. The neutralino mass matrix in the basis $\psi =(\tilde{B},\tilde{W},\tilde{H}_d^0,\tilde{H}_u^0)$ is given by \begin{eqnarray} M_{\tilde{\chi}} = \begin{pmatrix} M_1 & 0 & -c_\beta s_W m_Z & s_\beta s_W m_Z \\ 0 & M_2 & c_\beta c_W m_Z & -s_\beta c_W m_Z \\ -c_\beta s_W m_Z & c_\beta c_W m_Z & 0 & -\mu \\ s_\beta s_W m_Z & -s_\beta c_W m_Z & -\mu & 0 \end{pmatrix}, \end{eqnarray} where $c_\beta=\cos\beta$, $s_\beta=\sin\beta$, $c_W=\cos\theta_W$ and $s_W = \sin\theta_W$, with $\theta_W$ the Weinberg angle. This matrix is diagonalized by a unitary matrix $N$ as \begin{equation} \psi_i = N_{ij} \tilde{\chi}_j ~~{\rm and}~~ N^\dagger M_{\tilde{\chi}} N = {\rm diag} (m_{\tilde{\chi}_1},m_{\tilde{\chi}_2},m_{\tilde{\chi}_3},m_{\tilde{\chi}_4}). \end{equation} The masses $m_{\tilde{\chi}_1}$, $m_{\tilde{\chi}_2}$, $m_{\tilde{\chi}_3}$ and $m_{\tilde{\chi}_4}$ approach $M_1$, $M_2$, $\mu$ and $-\mu$, respectively, in the limit of vanishing $m_Z$. The mass eigenstate $\tilde{\chi}_3$ ($\tilde{\chi}_4$) becomes the lightest one if the $\mu$-parameter is positive (negative) and $|\mu| < M_1, M_2$. 
The neutralino-neutralino-Higgs coupling, $\mathcal{L} \ni (1/2) \lambda_{hnn} h \overline{\tilde{\chi}}_n \tilde{\chi}_n$, is given by \begin{eqnarray} \label{eq-LLH} \lambda_{hnn} = g (s_\alpha N_{3n}+c_\alpha N_{4n})(N_{2n}-t_W N_{1n}), \end{eqnarray} where $t_W$, $s_\alpha$ and $c_\alpha$ are short for $\tan\theta_W$, $\sin\alpha$ and $\cos\alpha$, respectively, and $\alpha$ is the mixing angle of the Higgs bosons. The mixing matrix is given by \begin{eqnarray} \label{eq-N1} (N_{11},N_{21},N_{31},N_{41}) &=& \left(1, \, 0, \, -\frac{m_Zs_W(c_\beta M_1+s_\beta\mu)}{M_1^2-\mu^2} ,\, \frac{m_Zs_W(c_\beta \mu+s_\beta M_1)}{M_1^2-\mu^2}\right), \\ \label{eq-N2} (N_{12},N_{22},N_{32},N_{42}) &=& \left(0,\, 1,\, \frac{m_Zc_W(c_\beta M_2+s_\beta\mu)}{M_2^2-\mu^2},\, -\frac{m_Zc_W(c_\beta \mu+s_\beta M_2)}{M_2^2-\mu^2}\right), \\ \label{eq-N3} (N_{13},N_{23},N_{33},N_{43}) &=& \frac{1}{\sqrt{2}}\left( \frac{m_Zs_W(c_\beta+s_\beta)}{M_1-\mu}, \, - \frac{m_Zc_W(c_\beta+s_\beta)}{M_2-\mu}, \,1, \, -1\right), \\ \label{eq-N4} (N_{14},N_{24},N_{34},N_{44}) &=& \frac{1}{\sqrt{2}}\left( \frac{m_Zs_W(c_\beta-s_\beta)}{M_1+\mu}, \, - \frac{m_Zc_W(c_\beta-s_\beta)}{M_2+\mu}, \,1,\, 1 \right), \end{eqnarray} where $m_Z \ll |M_{1,2}\pm\mu|$ is assumed. \subsection{Thermal relic abundance} It is known that the thermal relic density of a purely higgsino LSP saturates the observed dark matter abundance when the higgsino mass is about 1 TeV~\cite{Cirelli:2005uq,Cirelli:2007xd}. If we assume that there is no dilution effect after the thermal production of the LSP, a higgsino-like LSP heavier than 1 TeV overcloses the universe and is cosmologically excluded, unless the higgsino and another sparticle, such as a top squark, are so degenerate that co-annihilation processes between them reduce the relic density. Let us comment on the possibility that the gauginos contribute considerably to the dark matter. 
In our scenario, the wino mass should be as large as the gluino mass at the TeV scale, so it hardly contributes to the dark matter. The bino mass can be as light as the higgsino mass if the gluino mass is large enough to keep the top squark heavy. The well-tempered bino-higgsino LSP could explain the observed abundance in the thermal scenario~\cite{ArkaniHamed:2006mb}, but most of its parameter space has already been excluded by the direct detections, as will be discussed later~\footnote{ There are narrow regions where the thermal bino-higgsino LSP explains the abundance through the Higgs- or Z-boson resonances without tension with the DM direct detection experiments~\cite{Hamaguchi:2015rxa}.}. In our scenario, the thermally produced relic DM abundance may not be sufficient to account for the observed DM abundance in our universe. Denoting the relic abundance of the LSP as $\Omega_\chi h^2$, we consider two possibilities to saturate the observed value, $\Omega_{\rm obs} h^2= 0.1188\pm0.0010$ \cite{Planck}: \begin{enumerate} \renewcommand{\labelenumi}{(\Alph{enumi})} \item $\Omega_\chi h^2$ is given only by the thermal production, and $\Omega_\chi h^2 \leq \Omega_{\rm obs} h^2$ is satisfied. \item $\Omega_\chi h^2 = \Omega_{\rm obs} h^2$ is always satisfied, assuming some non-thermal production of the LSP works. \end{enumerate} In case (A), called the \textit{thermal scenario}, the LSP may not saturate our universe, depending on the parameter region. We then need other dark matter candidates, such as the axion, to achieve the observed relic abundance of DM. In case (B), called the \textit{non-thermal scenario}, we simply assume that the LSP dominates our universe and satisfies $\Omega_\chi h^2 = \Omega_{\rm obs} h^2$. We do not explicitly calculate the relic abundance, but several mechanisms for non-thermal production have been proposed so far. 
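As a rough numerical guide (an approximate rule of thumb, not a calculation performed in this paper), the thermal higgsino relic density scales roughly quadratically with the higgsino mass, $\Omega_\chi h^2 \approx 0.12\,(\mu/1.1~{\rm TeV})^2$; the snippet below evaluates this estimate, which agrees within ${\cal O}(20\%)$ with the micrOMEGA values quoted later in Table~\ref{tab-samp}.

```python
# Rough scaling estimate for the thermal higgsino relic density,
# Omega h^2 ~ 0.12 * (mu / 1.1 TeV)^2  (approximate rule of thumb only).
def omega_h2_higgsino(mu_GeV):
    return 0.12 * (mu_GeV / 1100.0)**2

for mu in (250.0, 500.0, 1000.0):
    print(f"mu = {mu:6.0f} GeV  ->  Omega h^2 ~ {omega_h2_higgsino(mu):.4f}")
```

For $\mu = 250$ GeV this gives $\Omega_\chi h^2 \sim 0.006$ and for $\mu = 1$ TeV $\sim 0.1$, matching the qualitative statement that only $|\mu| \simeq 1$ TeV saturates the observed abundance thermally.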
For instance, it is known that the decays of long-lived heavy particles, such as the gravitino, saxion and moduli fields, can copiously produce the LSP after it freezes out from the thermal bath~\cite{Kohri:2005ru,Baer:2011uz,Baer:2014eja,Allahverdi:2013noa}. Note that the important difference between the two scenarios is whether $\Omega_\chi h^2 < \Omega_{\rm obs} h^2$ is allowed or not. In our study, we estimate the thermal relic density of the LSP and exclude the region with $\Omega_\chi h^2 > \Omega_{\rm obs} h^2$~\footnote{ Note that this region is not truly excluded in the non-thermal scenario, but it satisfies $|\mu|\gtrsim 1.0$ TeV, which is less attractive from both the testability and the naturalness points of view.}. When we estimate the direct detection rate of DM, the abundance of the LSP is important, so we draw the exclusion limits for both cases. \subsection{Direct detection} Direct detection of dark matter is a promising way to probe the neutralino sector of the MSSM. The current limits on the spin-independent and spin-dependent cross sections are given by XENON100~\cite{Garny:2012it,Aprile:2012nq,Aprile:2013doa}, LUX \cite{LUX2015,LUX2016}, PandaX-II \cite{Panda,Fu:2016ega} and PICO~\cite{Amole:2015pla,Amole:2016pye}. XENON1T \cite{XENON1T} and LZ \cite{Akerib:2015cja} will cover a wider range in the near future. Let us discuss the spin-independent cross section for neutralino scattering off nucleons. Note that the limits on the gaugino masses from the spin-independent cross section are stronger than those from the spin-dependent one in most cases. At tree level, spin-independent scattering is induced by t-channel Higgs boson exchange and s-channel squark exchange. Since only one top squark is light in the NUGM scenario, the latter contribution is negligibly small. 
The mixing between the gauginos and the higgsino is important in the Higgs boson exchange, because the LSP-LSP-Higgs coupling in the mass eigenstate basis originates from the gaugino-higgsino-Higgs couplings in the gauge eigenstate basis. In the limit of $m_Z \ll |M_{1,2}\pm \mu|$, the mixing effects are suppressed by $m_Z/|M_{1,2}\pm\mu|$, as shown in Eqs.~(\ref{eq-N3}) and (\ref{eq-N4}). It has been shown that there are parameter sets that lead to vanishing gaugino-higgsino mixing, called blind spots \cite{Cheung:2012qy}. As seen from Eqs.~(\ref{eq-LLH}), (\ref{eq-N1}) and (\ref{eq-N2}), the mixing is proportional to $M_{1,2} + \mu \sin 2\beta$, so that it vanishes when the relative signs of $M_{1,2}$ and $\mu$ are opposite and $|M_{1,2}| \lesssim |\mu|$, $\tan\beta \gtrsim 1$ are satisfied. Thus the blind spot appears only in the gaugino-like LSP scenario. Note that the mixing is suppressed when the LSP is higgsino-like and the signs of $\mu$ and $M_{1,2}$ are opposite, as can be seen from Eqs.~(\ref{eq-N3}) and (\ref{eq-N4}). Since the mixing is proportional to $1\pm\sin2\beta$, a smaller $\tan\beta$ induces a larger enhancement (suppression) for the same (opposite) sign. We need $\tan\beta \gtrsim 10$ in order to realize the SM-like Higgs boson mass unless the sparticle masses are much heavier than 1 TeV, so such an effect is at most at the $20\%$ level. Thus we conclude that the gaugino-higgsino mixing is sizable and that the factor $1\pm\sin2\beta$ leads to a significant difference between the positive and negative $\mu$-parameter cases in the DM scattering cross section. 
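To make the $\tan\beta$ dependence explicit (a back-of-the-envelope check of the statement above, not a result from the text): since the spin-independent cross section scales as $(1\pm\sin 2\beta)^2$, the ratio between the positive- and negative-$\mu$ cases is fixed by $\tan\beta$ alone.

```python
# Sketch: ratio of spin-independent cross sections for sign(mu) = +/-,
# using sigma_SI proportional to (1 +- sin 2beta)^2,
# with sin 2beta = 2 tanb / (1 + tanb^2).
def sign_mu_ratio(tanb):
    s2b = 2.0 * tanb / (1.0 + tanb**2)
    return ((1.0 + s2b) / (1.0 - s2b))**2

for tanb in (5.0, 10.0, 30.0):
    print(f"tan(beta) = {tanb:4.0f}  ->  sigma(+mu)/sigma(-mu) ~ {sign_mu_ratio(tanb):.2f}")
```

For $\tan\beta = 10$ the ratio is about $2.2$, which is roughly the factor seen between the positive- and negative-$\mu$ sample points in Table~\ref{tab-samp}; for small $\tan\beta$ the asymmetry grows rapidly.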
The spin-independent cross section per nucleon at the tree-level can be written as \begin{eqnarray} \sigma_N^{\rm SI} = \frac{g^2}{4\pi} \frac{m_N^4}{m_h^4m_W^2} \left(1+\frac{m_N}{m_\chi} \right)^{-2} \left[\frac{2}{9}+\frac{7}{9}\sum_{q=u,d,s}f^N_{T_q} \right]^2 \lambda_{h\chi\chi}^2, \label{eq-SIsimple} \end{eqnarray} where $m_N$ is the nucleon mass and $m_N f_{T_q}^N = \langle N|m_q \bar{q}q |N\rangle$. In the decoupling limit $m_A \gg m_Z$ that is a good approximation for our case, using Eqs. (\ref{eq-N3}) and (\ref{eq-N4}), the LSP-LSP-Higgs coupling $\lambda_{h\chi\chi}$ is derived from Eq. (\ref{eq-LLH}): \begin{eqnarray} \label{eq-chxx} \lambda_{h\chi\chi} = \frac{g}{2}(1\pm s_{2\beta}) c_W \left(\frac{m_Z}{M_2-|\mu|} + t_W^2 \frac{m_Z}{M_1-|\mu|}\right), \end{eqnarray} where $\pm$ corresponds to a sign of the $\mu$-parameter. \newcommand{\tilde{\chi}^0_1}{\tilde{\chi}^0_1} \newcommand{\tilde{\chi}^0_2}{\tilde{\chi}^0_2} \newcommand{\tilde{\chi}^0_3}{\tilde{\chi}^0_3} \newcommand{\tilde{\chi}^0_4}{\tilde{\chi}^0_4} \newcommand{\tilde{\chi}^\pm_1}{\tilde{\chi}^\pm_1} \newcommand{\tilde{\chi}^\pm_2}{\tilde{\chi}^\pm_2} \newcommand{\times 10^{-3}}{\times 10^{-3}} \newcommand{\times 10^{-1}}{\times 10^{-1}} \newcommand{\langle\sigma v\rangle_0 \times10^{25} [\rm cm^3/s]}{\langle\sigma v\rangle_0 \times10^{25} [\rm cm^3/s]} \newcommand{{\rm Br}(\chi\chi \rightarrow W^+W^-)}{{\rm Br}(\chi\chi \rightarrow W^+W^-)} \newcommand{{\rm Br}(\chi\chi \rightarrow ZZ)}{{\rm Br}(\chi\chi \rightarrow ZZ)} \newcommand{\sigma_{\rm SD}\times10^{-6 } [{\rm pb}]}{\sigma_{\rm SD}\times10^{-6 } [{\rm pb}]} \newcommand{\sigma_{\rm SI}\times10^{-11} [{\rm pb}]}{\sigma_{\rm SI}\times10^{-11} [{\rm pb}]} \newcommand{\sigma^h_{\rm SI}\times10^{-11}[{\rm pb}]}{\sigma^h_{\rm SI}\times10^{-11}[{\rm pb}]} \begin{table}[!t] \centering \caption{Values of boundary conditions at the unification scale $M_U$, Higgs boson masses, sparticle masses and dark matter observables at several sample 
points.} \vspace{0.5cm} \begin{tabular}{|c|cc|cc|} \hline input [GeV]& (a) & (b) & (c) & (d) \\ \hline $\mu$ & -250 & 250 & -1000 & 1000 \\ $M_1(M_U)$ & 10000& 10000& 5000& 5000 \\ $M_3(M_U)$ & 1000 & 1000 & 1500& 1500 \\ $m_0(M_U)$ & 1000 & 1000 & 1000 & 1000 \\ \hline output [GeV] & & & & \\ \hline $M_2(M_U)$ &4223 & 4175 & 4698 & 4504 \\ $A_0(M_U)$ &-2378 & -2325& -1916 & -1657\\ \hline mass [GeV] & & & & \\ \hline $m_h$ & 125.0& 125.0& 125.0 & 125.0\\ $m_A$ & 3349& 3326 & 3351 & 3248 \\ $m_{\tilde{t}_1}$& 1606& 1636 & 1431 & 1581 \\ $m_{\tilde{t}_2}$& 2780& 2762 & 3582 & 3520 \\ $m_{\tilde{g}}$ & 2250& 2250 & 3225 & 3223 \\ $m_{\tilde{\chi}^0_1}$ & 258.8& 255.7& 1016 & 1013 \\ $m_{\tilde{\chi}^0_2}$ & 260.5& 258.3& 1019 & 1017 \\ $m_{\tilde{\chi}^0_3}$ & 3438 & 3400 & 2239 & 2237 \\ $m_{\tilde{\chi}^0_4}$ & 4455 & 4454 & 3839 & 3682 \\ $m_{\tilde{\chi}^\pm_1}$ & 260.5& 257.1& 1018 & 1015 \\ $m_{\tilde{\chi}^\pm_2}$ & 3439 & 3400 & 3840 & 3682 \\ \hline observables & & & & \\ \hline $\Omega_\chi h^2$&7.82$\times 10^{-3}$&7.58$\times 10^{-3}$ &1.14$\times 10^{-1}$&1.16 $\times 10^{-1}$ \\ $\langle\sigma v\rangle_0 \times10^{25} [\rm cm^3/s]$ & 1.39 & 1.42 & 0.104 & 0.105 \\ ${\rm Br}(\chi\chi \rightarrow W^+W^-)$ & 0.533& 0.535& 0.488 & 0.489 \\ ${\rm Br}(\chi\chi \rightarrow ZZ)$ & 0.436& 0.435& 0.408 & 0.407 \\ \hline $\sigma_{\rm SD}\times10^{-6 } [{\rm pb}]$ &1.096 & 1.138& 0.1677&0.1757 \\ $\sigma_{\rm SI}\times10^{-11} [{\rm pb}]$ &3.499 & 8.505& 8.918 & 22.37 \\ $\sigma^h_{\rm SI}\times10^{-11}[{\rm pb}]$ &3.302 & 7.793& 7.853 & 19.50 \\ \hline \end{tabular} \label{tab-samp} \end{table} We list the explicit values of the masses and observables at the sample points in Table~\ref{tab-samp}. We can see that the A-term is of the same order as the other input parameters, but the Higgs boson mass is about 125 GeV owing to the suitable wino-to-gluino mass ratio. 
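The tree-level formulas can be evaluated directly. The sketch below plugs the low-scale parameters of sample point (d) into Eqs.~(\ref{eq-SIsimple}) and (\ref{eq-chxx}); the low-scale gaugino masses are approximated here by the bino- and wino-like neutralino masses from Table~\ref{tab-samp}, and $\sum_q f^N_{T_q} \simeq 0.08$ is an assumed value for the nucleon matrix elements. With these assumptions it reproduces $\sigma^h_{\rm SI} \sim 2\times 10^{-10}$ pb, close to the tabulated value.

```python
import math

# Sketch: tree-level Higgs-exchange sigma_SI, evaluated at sample point (d).
# Low-scale gaugino masses approximated by the bino-/wino-like neutralino
# masses of Table 1; sum_q f_Tq ~ 0.08 is an assumed nucleon matrix element.
g, mW, mZ, mh, mN = 0.652, 80.4, 91.19, 125.0, 0.939   # couplings/masses [GeV]
sW2 = 1.0 - (mW/mZ)**2                                 # sin^2(theta_W)
cW, tW2 = math.sqrt(1.0 - sW2), sW2/(1.0 - sW2)
GeV2_to_pb = 3.894e8                                   # 1 GeV^-2 in pb

M1, M2, mu, tanb, sign_mu = 2237.0, 3682.0, 1000.0, 10.0, +1
s2b = 2.0*tanb/(1.0 + tanb**2)
f_sum = 0.08                                           # assumed value

lam = 0.5*g*(1.0 + sign_mu*s2b)*cW*(mZ/(M2 - mu) + tW2*mZ/(M1 - mu))
sigma = (g**2/(4.0*math.pi)) * mN**4/(mh**4 * mW**2) \
        * (1.0 + mN/mu)**-2 * (2.0/9.0 + 7.0/9.0*f_sum)**2 * lam**2
print(f"sigma_SI ~ {sigma*GeV2_to_pb:.2e} pb")   # ~ 2e-10 pb, cf. Table 1
```

The result is within a few percent of the $\sigma^h_{\rm SI}$ entry for point (d), illustrating that the Higgs-exchange approximation captures the bulk of the rate.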
The top squark mass is about 1.5 TeV and the gluino mass is 2-3 TeV, so they could be within the reach of the HL-LHC. The bino and wino masses are between 2 TeV and 5 TeV, far beyond the experimental reach of the LHC. For $|\mu|=250$ GeV in samples (a) and (b) ($|\mu|=1000$ GeV in samples (c) and (d)), the thermal relic abundance is $\sim 0.01$ ($0.1$). The self-annihilation rate of the neutralinos in the zero-velocity limit, denoted by $\langle \sigma v \rangle_0$, is $\mathcal{O}(0.1-1.0)\times 10^{-25} [{\rm cm}^3/s]$, and the neutralinos dominantly annihilate into weak gauge bosons. These processes are induced by t-channel neutralino or chargino exchange, so the rate is determined by the higgsino mass itself. They are important for the indirect detections discussed below. We also show the spin-dependent and spin-independent LSP-proton cross sections, $\sigma_{\rm SD}$ and $\sigma_{\rm SI}$, calculated using micrOMEGA-4.2.5~\cite{Belanger:2013oya}. $\sigma_{\rm SI}^h$ is obtained from Eqs.~(\ref{eq-SIsimple}) and (\ref{eq-chxx}), where $f_{T_q}^p$ are taken to be the same as the values adopted in micrOMEGA~\cite{Belanger:2014vza}. We can see that the SI cross section is well described by the tree-level Higgs-exchange process, but there are small deviations from the results of micrOMEGA. A dominant source of the deviation comes from the QCD corrections to the heavy quark matrix elements~\cite{Belanger:2008sj}, which enhance the cross section by about 10$\%$ relative to the tree-level contribution. Besides, the top squarks could contribute to the cross section when the mass difference $m^2_{\tilde{t}_1}-m^2_{\tilde{\chi}}$ is small. However, it is known that the leading contribution, which is suppressed by $m^2_{\tilde{t}_1}-m^2_{\tilde{\chi}}$ and proportional to $m_t^2$, is also proportional to the left-right mixing of the top squarks~\cite{Drees:1993bu}. The top squark is almost purely right-handed in our scenario, and thus such a contribution cannot be sizable. 
We take the top squark corrections derived in Ref.~\cite{Drees:1993bu} into account, and confirm that they are about 1$\%$ of the tree-level contribution at sample point~(d) and smaller at the other sample points. We have checked that our results, after including these effects, agree with the micrOMEGA results shown in Table~\ref{tab-samp} at the level of a few percent. There are potentially sizable corrections from neutralino/Z-boson and chargino/W-boson mediated loop diagrams, where the neutralino and chargino are higgsino-like, but these almost cancel among themselves, as shown in Ref.~\cite{Hisano:2011cs}. \subsection{Indirect detection} Let us comment on indirect detection of the dark matter. A pair of neutralinos annihilates into $W^+ W^-$ or $ZZ$ with a zero-velocity cross section of $\mathcal{O}(10^{-25})[{\rm cm}^3/s]$, as shown in Table~\ref{tab-samp}. One of the most promising observables may be the neutrino flux from the sun. The capture rate of neutralinos by the sun is determined by the interaction between neutralinos and nucleons. Since the spin-dependent cross section is much larger than the spin-independent one, such observations give significant bounds on the spin-dependent cross section. The weak bosons produced by the annihilation of dark matter decay to neutrinos. The limit obtained from the neutrino observations by IceCube is $3.76 \times 10^{-5}$ pb when the dark matter mass is 500 GeV and the annihilation is exclusively into W-bosons~\cite{Aartsen:2016zhm}. This limit is comparable to the expected limit at XENON1T~\cite{Garny:2012it}. We will see that the exclusion limits on the parameter space from XENON1T are much weaker than the limits from the spin-independent cross section, so the current IceCube limit is not the most important one. 
\begin{figure}[!t] \centering \includegraphics[width=0.65\linewidth]{fig_higgsinoDM_indirect_loglog.eps} \caption{Exclusion limits on the dark matter annihilation cross section, together with the values expected in the NUGM scenario. The blue (red) dots correspond to the non-thermal (thermal) scenario. } \label{fig-indirect} \end{figure} Cosmic ray observations, such as those of photons, positrons and anti-protons, can be powerful tools to detect dark matter. The corresponding limits on the annihilation cross section of DM reach $\mathcal{O}(10^{-25})[{\rm cm}^3/s]$, and the parameter region discussed in the present paper is being probed by these bounds. We consider the recent experimental results obtained by Fermi-LAT~\cite{Ackermann:2015zua} and AMS-02~\cite{Aguilar:2016kjl}. The former observes gamma rays coming from the dwarf spheroidal satellite galaxies (dSphs) of the Milky Way, and the latter observes anti-protons coming from dark matter annihilations in the Milky Way. We refer to the exclusion limit from the AMS-02 experiment obtained in the analysis of Ref.~\cite{Cuoco:2016eej}~\footnote{ A similar analysis is done in Ref.~\cite{Cui:2016ppb}.}. The Fermi-LAT experiment also observes gamma rays coming from the galactic center, which potentially gives significant constraints on the dark matter annihilation rate. However, the results depend strongly on the dark matter density profile~\cite{Gomez-Vargas:2013bea}, so we do not discuss them in the present paper. Figure~\ref{fig-indirect} shows the upper limits on the annihilation cross section from the recent results of Fermi-LAT (black line) and AMS-02 (green line). The dots are predictions of the NUGM scenario, obtained from the parameter scan used to draw the figures in the next section. We plot the points with $M_1 \ge 2.5$ TeV at the unification scale. The blue dots show the annihilation rate itself as a function of the lightest neutralino mass, while for the red dots the rate is multiplied by $(\Omega_{\rm \chi}/\Omega_{\rm obs})^2$. 
Since the higgsino-like dark matter dominantly annihilates into W- or Z-bosons through the t-channel exchange of the higgsino-like chargino or neutralino, the annihilation rate is mostly determined by the higgsino mass itself and is almost independent of the other parameters. We see that the Fermi-LAT result excludes neutralinos lighter than about 300 GeV and AMS-02 excludes neutralinos lighter than about 800 GeV in the non-thermal scenario. On the other hand, the indirect detections do not constrain the thermal scenario, because the annihilation rate is suppressed by the factor $(\Omega_{\rm \chi}/\Omega_{\rm obs})^2$. Exclusion limits on higgsino dark matter produced by non-thermal processes at Fermi-LAT and the future planned CTA experiment~\cite{Carr:2015hta} have been discussed in Ref.~\cite{Aparicio:2016qqb}. \section{Numerical results} Based on the above discussion, we summarize the experimental bounds and show the allowed region. As mentioned in Section 3.2, our analysis of the relic density covers two possibilities: the thermal scenario and the non-thermal scenario. We calculate only the thermal relic density and exclude the region with $\Omega_\chi h^2 > \Omega_{\rm obs} h^2$. The difference between the two scenarios appears only in the bound from the direct detection of DM; since $\Omega_\chi h^2 < \Omega_{\rm obs} h^2$ is possible in the thermal scenario, the bound is relaxed there. 
\begin{figure}[!t] \centering \includegraphics[width=0.98\linewidth]{fig_higgsinoDM_M1mu_M31500_v3.eps} \caption{Values of the dark matter observables with $M_3=1.5$ TeV.} \label{fig-M1mu_M31500} \centering \includegraphics[width=0.98\linewidth]{fig_higgsinoDM_M1mu_M31000_v3.eps} \caption{Values of the dark matter observables with $M_3=1.0$ TeV.} \label{fig-M1mu_M31000} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.98\linewidth]{fig_higgsinoDM_M3mu_M110000_v3.eps} \caption{Values of the dark matter observables with $M_1=10$ TeV.} \label{fig-M3mu_M110000} \centering \includegraphics[width=0.98\linewidth]{fig_higgsinoDM_M3mu_M15000_v3.eps} \caption{Values of the dark matter observables with $M_1=5.0$ TeV.} \label{fig-M3mu_M15000} \end{figure} Figure~\ref{fig-M1mu_M31500} shows the allowed region for the dark matter observables, the top squark mass and the exclusion limits from the collider experiments. We assume $m_0= 1$ TeV and $M_3=1.5$ TeV at the unification scale, while $A_0$ and $M_2$ are chosen to realize the SM-like Higgs boson mass and the $\mu$-parameter at each point. We take the ratio of the Higgs VEVs as $\tan\beta=10$. We use softsusy-3.5.1~\cite{Allanach:2001kg} to calculate the RG effects and the mass spectrum of the sparticles and Higgs bosons. The widths and branching ratios are calculated by SDECAY and HDECAY~\cite{Djouadi:2006bz}. The dark matter observables are calculated by micrOMEGA-4.2.5~\cite{Belanger:2013oya}. The red lines represent the thermal relic density of the neutralino, where the solid (dashed) lines correspond to $\Omega_\chi/\Omega_{\rm obs}=0.5\ (0.1)$. $\Omega_\chi h^2 = \Omega_{\rm obs} h^2 = 0.1188\pm0.0010$ \cite{Planck} is achieved in the red band around $|\mu| \simeq 1$ TeV. The thermal relic density of the dark matter exceeds the observed value, $\Omega_\chi > \Omega_{\rm obs}$, in the light gray region, so this region is excluded if there are no dilution effects after the freeze-out of the neutralino. 
The gray region at $|\mu| \le 90$ GeV is excluded by the LEP experiment~\cite{Jakobs:2001yg}. Although the charged and neutral components of the higgsino are highly degenerate, they can be probed through the mono-photon channel. The background color represents the mass of the lightest top squark. The purple line around $M_1 \lesssim 2.0$ TeV and $\mu\simeq -100$ GeV is the expected exclusion limit for the spin-dependent cross section from the XENON1T experiment~\cite{Garny:2012it}. The spin-independent cross section exceeds the current limit given by the LUX experiment~\cite{LUX2016} in the blue band. This should be understood as the limit for the non-thermal scenario; the limit would be relaxed as the $\mu$-parameter decreases in the thermal scenario. Such a suppression is, however, not so significant in this region, because the thermal relic density is enhanced by the sizable bino fraction in the lightest neutralino. The blue shaded region bounded by the solid blue lines (XENON1T-N) is the expected limit from the XENON1T experiment in the non-thermal case, $\Omega_\chi = \Omega_{\rm obs}$, while the dashed blue line (XENON1T-T) corresponds to the same limit in the thermal case, where the detection rate is suppressed by the smaller neutralino relic density. Note that the spin-independent cross section is always larger than $0.25\times 10^{-10}$ pb in all the figures in this paper. We thus expect that the future experiments XENON1T \cite{XENON1T} and LZ~\cite{Akerib:2015cja} could cover our parameter region in the non-thermal scenario. On the other hand, the current limit from the spin-dependent cross section is fully covered by the spin-independent one. The exclusion limit from the spin-independent cross section becomes stronger as the $\mu$-parameter decreases in the non-thermal scenario. 
The reason is that the experimental limits on the cross section become tighter for lighter dark matter masses, as long as the dark matter mass is heavier than about 40 GeV. On the other hand, this effect is erased by the smaller LSP density $\Omega_\chi$ in the thermal scenario. The light bino mass region is more easily excluded due to the large bino-higgsino mixing; in particular, the well-tempered region has already been excluded by the current LUX limit, as is well known. The spin-independent cross section is significantly larger for positive $\mu$-parameter than for negative $\mu$-parameter. This is because the cross section is proportional to $(1+{\rm sign}(\mu)\sin 2\beta)^2$, as can be read from Eq.~(\ref{eq-chxx}). Note that the exclusion limits on the $\mu$-$M_1$ plane are more severe than the ones derived in Ref.~\cite{Cheung:2012qy}. The difference comes from the fact that wino does not decouple completely in the NUGM scenario. In order to keep the $\mu$-parameter smaller than 1 TeV, the wino mass at the unification scale has to be 3-4 times larger than the gluino mass. A higher wino-to-gluino ratio is required for a lower typical sparticle scale, which is defined as the geometric mean of the top squark masses. In this case, ($M_2$, $M_3$) are about (4 TeV, 1.5 TeV) at the unification scale and this enhances the spin-independent cross section. Figure~\ref{fig-M1mu_M31000} shows the allowed region for $\mu$ and $M_1$ at $M_3=1.0$ TeV. The different value of $M_3$ influences the direct detection rate and the top squark mass. The top squark becomes the lightest SUSY particle in the dark gray region, and the top squark search at the LHC excludes the brown region. The LHC bounds are projected from the analysis in Ref.~\cite{JK_PhDthesis}. The bino mass has to be so large that the top squark mass is larger than the higgsino mass. 
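The ${\rm sign}(\mu)$ dependence quoted above can be checked with a one-line estimate: for $\tan\beta=10$, the factor $(1+{\rm sign}(\mu)\sin 2\beta)^2$ makes the spin-independent cross section for positive $\mu$ larger than for negative $\mu$ by roughly a factor of two (a back-of-the-envelope sketch, independent of the numerical scan):

```python
def sin_two_beta(tan_beta):
    """sin(2 beta) written in terms of tan(beta)."""
    return 2.0 * tan_beta / (1.0 + tan_beta**2)

def si_factor(sign_mu, tan_beta):
    """Relative spin-independent factor (1 + sign(mu) sin 2beta)^2."""
    return (1.0 + sign_mu * sin_two_beta(tan_beta))**2

tb = 10.0
ratio = si_factor(+1, tb) / si_factor(-1, tb)
print(f"sigma_SI(mu>0)/sigma_SI(mu<0) at tan(beta)=10: {ratio:.2f}")  # ~2.23
```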
The lighter gluino mass leads to a lighter wino mass, and the spin-independent cross section is enhanced by the wino-higgsino mixing. We see that the XENON1T experiment covers the whole region with $\mu > 0$ in the non-thermal scenario. Figures~\ref{fig-M3mu_M110000} and \ref{fig-M3mu_M15000} show the allowed region for $\mu$ and $M_3$, where $M_1$ is $10.0$ TeV and $5.0$ TeV at the unification scale, respectively. Other parameters are set to be the same as in Figures~\ref{fig-M1mu_M31500} and \ref{fig-M1mu_M31000}. The constraint from the gluino search at the LHC is also applied to these figures and excludes the dark brown region. The gluino mass lower bound is around 800 GeV, so there were no exclusion bounds in Figs.~\ref{fig-M1mu_M31500} and \ref{fig-M1mu_M31000}. We see that the experimental reach of direct detection for the gluino mass can be much more severe than that of the LHC experiment in the non-thermal scenario. The wino-higgsino mixing is reduced as the gluino becomes heavy. The mixing, however, does not vanish in our model-dependent analysis. We see that the gaugino-higgsino mixing predicts a spin-independent cross section larger than $2.5\times 10^{-11}$ pb everywhere in all of the four figures. Thus the parameter region lies above the neutrino floor~\cite{Billard:2013qya} and the region in our analysis would be fully covered by future planned experiments such as XENON-nT, LZD, PandaX-4T and so on. \section{Conclusion} In this paper, we study the dark matter physics in the Non-Universal Gaugino Mass scenario. The NUGM scenario is one of the possible setups of the MSSM that achieve the 125 GeV Higgs boson mass and a $\mu$-parameter below 1 TeV, which naturally explains the origin of the EW scale. Since one top squark is relatively light in our scenario, the authors of Refs.~\cite{Abe:2015xva,Kawamura:2016drh} study the current status and the future prospects of the direct searches for the top squark and the gluino at the LHC. 
Although the higgsino mass is the most important parameter from the naturalness point of view, the higgsino cannot be probed at the LHC due to the small mass difference $\sim 2$ GeV among its components. On the other hand, the higgsino mass is critically important for dark matter physics and can be tested by dark matter observations. The higgsino mass cannot be larger than 1 TeV in order not to overclose the universe, if we assume that there is no dilution effect after the LSP freezes out. Direct detection of dark matter is a powerful tool to probe the neutralino sector of the MSSM. Even if the bino and the wino masses are 3-4 TeV, the spin-independent cross section between the higgsino and a nucleon is within observational reach. Therefore, a wider parameter space can be covered by direct detection than by the gluino search at the LHC, when the wino-to-gluino mass ratio is fixed to realize the small $\mu$-parameter and the higgsino-like LSP dominates the relic density of dark matter. If the neutralino density is determined by the standard thermal process, direct detection is sensitive to the parameter region where the higgsino mass is around 1 TeV, while the top squark and gluino searches at the LHC are generally sensitive to a lighter higgsino. Thus direct detection complements the direct searches at the LHC. The universal gaugino masses are clearly disfavored by the recent dark matter observations. The LSP is either bino or higgsino in this case, but the bino LSP easily overcloses the universe. Even if the higgsino LSP is realized in some way, such as those considered in Refs.~\cite{Feng:2012jfa,Baer:2012up}, light bino and wino are severely constrained by the direct detections. The direct detection constraints push the gluino mass far above the experimental reach, and such a heavy gluino indicates that all other sparticles are also beyond discovery reach, except in some special cases. 
Thus the non-universal gaugino masses with relatively heavy bino and wino masses seem to be more interesting than the universal gaugino masses. \subsection*{Acknowledgments} The work of J. K. was supported by Grant-in-Aid for Research Fellow of Japan Society for the Promotion of Science No. 16J04215.
\section{Introduction} \label{sec.1} In a paper published in 2014, Chen \cite{2014GReGr..46.1833C} showed that if a Lorentz manifold $(M,\,g)$ possesses a timelike vector field $X$ for which there exists a real-valued function $f:M\rightarrow\mathbf{R}$ such that the following condition is fulfilled, \begin{equation} \label{eqn:1-1} X_{\alpha ; \,\beta} = f g_{\alpha\beta}, \end{equation} \noindent then $M$ can be expressed as the (warped) product $I\times\Sigma$ of an interval $I$ and a spacelike hypersurface $(\Sigma,\,\sigma)$, with the metric $g$ having the form \begin{equation} \label{eqn:1-2} g= -\,dt\otimes dt + a^2(t) \sigma. \end{equation} \noindent Here $a$ is a real-valued function on $M$ which is constant on the hypersurfaces $t=\textrm{constant}$. Spaces of this form are usually referred to as \textit{Generalized Robertson-Walker spaces} (GRW). If the curvature of $(\Sigma,\,\sigma)$ is constant, then it is a \textit{Robertson-Walker space} (RW). \vspace{0.5em} Chen's result has been used in subsequent works to derive necessary and sufficient conditions for a space-time to be a GRW or RW space. In particular, the research of \cite{2016JMP....57j2502M,2019JMP....60e2506M,2019IJGMM..1650016D} focuses on the characterisation of GRW and RW spaces via conditions on the curvature tensor, more specifically the Ricci and Weyl ones. \vspace{0.5em} In this work we follow a different approach. Instead of assuming the existence of a timelike concircular vector field, i.e. a field that satisfies Eq.~\eqref{eqn:1-1}, we require the existence of a unit vector field $u$ such that the Riemann tensor is \begin{align} \label{eqn:1-3} & R(X,\,u)\,u = fX, \\[5pt] \label{eqn:1-4} & R(X,\,Y)Z = h\left(g(Y,\,Z)X - g(X,\,Z)Y\right), \end{align} \noindent where $f,\,h$ are real-valued functions and $X,\,Y,\,Z$ are \textit{arbitrary} vector fields perpendicular to $u$. 
\textit{The main result of our work is proving that this is a necessary and sufficient criterion for the space-time to be a RW space.} \vspace{0.5em} The condition above is equivalent to requiring that the vector field $u$ distinguishes at each point $p$ two sectional curvatures in the following way: Let $\Pi_1,\,\Pi_2$ be arbitrary planes on the tangent space $T_p\,M$. If both planes are either perpendicular to $u$ or contain $u$, then their sectional curvatures must be equal. \vspace{0.5em} We note that the curvature of the space is constant if the sectional curvatures of the two arbitrary planes $\Pi_1$ and $\Pi_2$ (with $\Pi_1$ being perpendicular to $u$ and $\Pi_2$ containing $u$) are equal. This scenario corresponds to the well-known result that if two spaces have the same dimension, metric signature and constant curvature, then they are locally isometric (see e.g. \cite{o1983semi,wolf2011curvature}) and therefore we will not consider this situation in our work. This condition is expressed using the functions $f$ and $h$, as defined by Eqs.~\eqref{eqn:1-3} and \eqref{eqn:1-4}, as follows \begin{equation} \label{eqn:1-5} h(p)-\varepsilon f(p) \neq 0,\textrm{ for every }p\in M \end{equation} \noindent where $\varepsilon\equiv g(u,u) = \pm 1$. \vspace{0.5em} In Section~(\ref{sec.Statement}) we introduce the theorem and sketch its proof. The full proof is presented in Section~(\ref{sec.Proof}), with most of the calculations being done in Sections~(\ref{Sec.Riem}) and~(\ref{Sec.Bianchi}), where we determine the Riemann tensor of the spaces under study and subsequently derive some important formulae as a consequence of the Bianchi identities. Lastly, in Sec.~(\ref{sec.Impl}) we discuss the implications of our findings. \section{Statement of the theorem} \label{sec.Statement} \noindent Before stating the theorem, we will briefly state some definitions. 
\begin{definition} A semi-Riemannian space $(M,\,g)$ is said to be \begin{enumerate} \item \textit{locally isotropic} with respect to a unit vector field $u$ if and only if the following condition is satisfied: Let $p$ be an arbitrary point of $M$ and $\Lambda$ be a linear isometry of $T_p\, M$ that leaves $u\rvert_p$ invariant. There exists a local isometry $\phi:\mathscr{U}\rightarrow M$ defined on an open neighborhood $\mathscr{U}$ of $p$ such that $\phi(p)=p$ and $\phi_*\rvert_p = \Lambda$. \item a \textit{generalized Robertson-Walker space} (GRW) with sign $\varepsilon = \pm 1$ if and only if it is the warped product of an open interval $I$ and a semi-Riemannian space $(\Sigma,\sigma)$ such that \begin{equation} \label{eqn:2-1} g = \varepsilon dt\otimes dt + a^2(t)\sigma, \end{equation} \noindent where $a$ is a real-valued function on $M$ which is constant on the hypersurfaces $t=\textrm{constant}$. \item a \textit{Robertson-Walker space} (RW) with sign $\varepsilon = \pm 1$ if and only if it is a GRW space, with $(\Sigma,\sigma)$ having constant curvature. \end{enumerate} \end{definition} GRW and RW spaces are usually defined as Lorentz spaces, with the vector field $u=\partial_t$ being time-like. However, as we will prove, our theorem is satisfied for general semi-Riemannian spaces. \begin{theorem} \label{thm-main} Let $(M,g)$ be a semi-Riemannian space of dimension $n \geq 4$ and of non-constant curvature, and $u$ be a unit vector field. The following propositions are equivalent: \begin{enumerate} \item $(M,g)$ is locally isotropic with respect to $u$. \item The Riemann tensor of $(M,g)$ is defined by \eqref{eqn:1-3} and \eqref{eqn:1-4}. \item $(M,g)$ is locally isometric to a RW space such that $u=\partial_t$. \end{enumerate} In addition, if $M$ is simply-connected, then $M$ is isometric to a RW space. 
\end{theorem} The proof follows a derivation of the RW metric given by \cite{kriele1999spacetime}, in which the author uses the Einstein field equations with matter described as a perfect fluid. The proof that $(1)$ implies $(2)$ is presented in the following section. The proof that $(3)$ implies $(1)$ relies on the local isometry of a space of constant curvature to one of the simply-connected space forms $\mathbf{R}_{\,\nu}^{n-1},\,\mathbf{S}_\nu^{n-1}(R),\,\mathbf{H}_\nu^{n-1}(R)$ (see \cite{o1983semi} for the definition of these spaces). It therefore remains to prove that $(2)$ implies $(3)$. \vspace{0.5em} The first (and most demanding) step is to show that $u$ is locally orthogonal to a one-parameter family of hypersurfaces of $M$. This follows from an integrability condition via Poincar\'e's lemma, from which it also follows that the hypersurfaces can be extended globally if $M$ is simply-connected. This will be proved by a set of conditions (involving $f$ and $h$) derived from the second Bianchi identity. The second step is to show that every hypersurface has a constant curvature, which can be derived from the Gauss formula for curvatures of hypersurfaces. The third and last step is to prove that for every vector field $X$ orthogonal to $u$, $df(X)=0$ and $dh(X)=0$. Once we have proved these three steps, the proof of the theorem is straightforward, as we will see. \section{The Riemann tensor of an isotropic space} \label{Sec.Riem} Throughout this section, we will denote by $R$ the Riemann tensor of a semi-Riemannian manifold $(M,g)$ that is locally isotropic with respect to a unit vector field $u$, and whose dimension is $\geq 4$. \begin{lemma} \label{lem.RXUY} Let $x,y\in u\rvert^\perp_p$ be two unit vectors. 
Then \begin{equation} R(x,u)y = -R(y,u)x.\label{eqn:A-1} \end{equation} \noindent Additionally, if $x,y$ are of the same causal character then \begin{equation} R(x,u)x = R(y,u)y, \end{equation} \noindent while if $x,y$ are of different causal character then \begin{equation} R(x,u)x = -R(y,u)y.\label{eqn:A-3} \end{equation} \end{lemma} \begin{proof} First, suppose that the vectors $x,y$ are of the same causal character. Consider the isometry $\ell$ on the plane spanned by $x,y$ defined by the relation \begin{align} & \ell x = \cos\theta x - \sin\theta y,\label{eqn:A-4}\\ & \ell y = \sin\theta x + \cos\theta y,\label{eqn:A-5} \end{align} \noindent leaving invariant any vector $z$ which is orthogonal to that plane: \begin{equation} \ell z = z, \;\; z\perp x,y.\label{eqn:A-6} \end{equation} \noindent Because we are working with an isotropic space, we have \begin{align} g\big(R(x,u)y,z\big) &= g\big(R(\ell x,u)\ell y,z\big) \label{eqn:A-7}\\ &= \sin\theta g\big(R(\ell x,u)x,z\big) + \cos\theta g\big(R(\ell x,u)y,z\big)\nonumber\\ &= \begin{aligned}[t] & \sin\theta\cos\theta g\big(R(x,u)x,z\big) - \sin^2\theta g\big(R(y,u)x,z\big)\\ & + \cos^2\theta g\big(R(x,u)y,z\big)-\sin\theta\cos\theta g\big(R(y,u)y,z\big) \end{aligned}\nonumber\\ &= \begin{aligned}[t] & - \sin^2\theta \Big[g\big(R(x,u)y,z\big) + g\big(R(y,u)x,z\big)\Big]\\ & + \sin\theta\cos\theta \Big[g\big(R(x,u)x,z\big) - g\big(R(y,u)y,z\big)\Big]\\ & + g\big(R(x,u)\,y,z\big). \end{aligned}\nonumber \end{align} \noindent Because this equation holds for every real number $\theta$, we have that \begin{align} & g\big(R(x,u)y + R(y,u)x,z\big) = 0,\label{eqn:A-8}\\ & g\big(R(x,u)x - R(y,u)y,z\big) = 0.\label{eqn:A-9} \end{align} \noindent We now derive the components parallel to the vectors $x,y$. 
Since \begin{equation} R(\ell x,\ell y) = R(\cos\theta x - \sin\theta y, \sin\theta x + \cos\theta y) = R(x,y), \label{eqn:A-10} \end{equation} \noindent it follows that \begin{align} g\big(R(x,u)y,x\big) &= g\big(R(\ell y,\ell x)\ell x,u\big) \label{eqn:A-11}\\ &= g\big(R(y,x)\ell x,u\big)\nonumber\\ &= \cos\theta g\big(R(y,x)x,u\big) - \sin\theta g\big(R(y,x)y,u\big)\nonumber\\ &= \cos\theta g\big(R(x,u)y,x\big) - \sin\theta g\big(R(y,u)y,x\big).\nonumber \end{align} \noindent For $\theta=\pi$, the last equation \eqref{eqn:A-11} yields \begin{equation} g\big(R(x,u)y,x\big) = 0 = -g\big(R(y,u)x,x\big) \label{eqn:A-12} \end{equation} \noindent and therefore \begin{equation} g\big(R(y,u)y,x\big) = 0.\label{eqn:A-13} \end{equation} \noindent Interchanging the vectors $x$ and $y$ in \eqref{eqn:A-13} gives us \begin{equation} g\big(R(x,u)x,y\big) = 0, \label{eqn:A-14} \end{equation} \noindent proving the first half of the lemma. \vspace{0.5em} Suppose now that $x,y$ are of different causal character. We define the isometry $\Lambda$ on the plane spanned by $x,y$ which leaves any vector $z$ orthogonal to that plane invariant as \begin{align} & \Lambda x = \cosh\theta x + \sinh\theta y,\label{eqn:A-15}\\ & \Lambda y = \sinh\theta x + \cosh\theta y,\label{eqn:A-16}\\ & \Lambda z = z, \;\; z\perp x,y.\label{eqn:A-17} \end{align} \noindent Again, since we are dealing with isotropic spaces, \begin{align} g\big(R(x,u)y,z\big) &= g\big(R(\Lambda x,u)\Lambda y,z\big)\label{eqn:A-18}\\ &= \begin{aligned}[t] & g\big(R(x,u)y,z\big)\\ & + \sinh^2\theta \Big[g\big(R(x,u)y,z\big) + g\big(R(y,u)x,z\big)\Big]\\ & + \sinh\theta\cosh\theta\Big[g\big(R(x,u)x,z\big) + g\big(R(y,u)y,z\big)\Big]. 
\end{aligned}\nonumber \end{align} \noindent This relation holds for every real number $\theta$ and thus \begin{align} g\big(R(x,u)y + R(y,u)x,z\big) = 0,\label{eqn:A-19}\\ g\big(R(x,u)x + R(y,u)y,z\big) = 0.\label{eqn:A-20} \end{align} \noindent The isometry $\Lambda$ also preserves the Riemann operator \begin{equation} R(\Lambda x,\Lambda y) = R(\cosh\theta x + \sinh\theta y, \sinh\theta x + \cosh\theta y) = R(x,y), \label{eqn:A-21} \end{equation} \noindent from which it follows that \begin{align} g\big(R(x,u)y,x\big) &= g\big(R(\Lambda y,\Lambda x)\Lambda x,u\big)\nonumber\\ &= g\big(R(y,x)\Lambda x,u\big)\nonumber\\ &= \cosh\theta g\big(R(y,x)x,u\big) + \sinh\theta g\big(R(y,x)y,u\big)\nonumber\\ &= \cosh\theta g\big(R(x,u)y,x\big) + \sinh\theta g\big(R(y,u)y,x\big).\label{eqn:A-22} \end{align} \noindent As such \begin{equation} g\big(R(x,u)y,x\big) = -g\big(R(y,u)x,x\big) = 0, \label{eqn:A-23} \end{equation} \noindent and as a consequence \begin{equation} g\big(R(x,u)x,y\big) = g\big(R(y,u)y,x\big) = 0, \label{eqn:A-24} \end{equation} \noindent which completes the proof. \end{proof} \begin{lemma} \label{lem.XYZOrth} \noindent If $X,Y,Z$ are vector fields orthogonal to $u$, then the vector field $R(X,Y)Z$ is also orthogonal to $u$. \end{lemma} \begin{proof} The proof is basically a combinatorial argument involving the first Bianchi identity. Without loss of generality, we can suppose that $X,Y,Z$ are orthogonal to each other. For some point $p\in M$ let $x=X\rvert_p$, $y=Y\rvert_p$ and $z=Z\rvert_p$. We will first show that the mapping \begin{equation} R_u(x,y,z)\equiv g\big(R(x,y)z,u\big)\label{eqn:A-25} \end{equation} \noindent is antisymmetric. Because of the antisymmetry of the Riemann tensor in its first two arguments, we have that $R_u(x,y,z) = -R_u(y,x,z)$. 
It also holds that \begin{align} R_u(x,y,z) &= g\big(R(z,u)x,y\big) \label{eqn:A-26}\\ &= -g\big(R(x,u)z,y\big) \nonumber\\ &= -g\big(R(z,y)x,u\big) \nonumber\\ &= -R_u(z,y,x)\nonumber \end{align} \noindent and \begin{align} R_u(x,y,y) &= g\big(R(x,y)y,u\big) \label{eqn:A-27}\\ &= g\big(R(y,u)x,y\big) \nonumber\\ &= -g\big(R(x,u)y,y\big) \nonumber\\ &=0,\nonumber \end{align} \noindent which means that \begin{equation} 0 = R_u(x,y+z,y+z) = R_u(x,y,z) + R_u(x,z,y), \label{eqn:A-28} \end{equation} \noindent showing that $R_u$ is antisymmetric. Finally, from the first Bianchi identity it follows that \begin{equation} 0=R_u(x,y,z) + R_u(y,z,x) + R_u(z,x,y) = 3R_u(x,y,z). \label{eqn:A-29} \end{equation} \end{proof} \begin{lemma} \label{lem.PiXY} Let $\Pi_{x,y}$ be the non-degenerate plane of $T_pM$ spanned by two vectors $x,y$. \begin{enumerate} \item For every pair of unit vectors $x,y$ orthogonal to $u\rvert_p$ and to each other, the sectional curvatures $K_p(\Pi_{x,u}),K_p(\Pi_{y,u})$ of the planes $\Pi_{x,u},\Pi_{y,u}$ are equal: \begin{equation} K_p(\Pi_{x,u}) = K_p(\Pi_{y,u}).\label{eqn:A-30} \end{equation} \item For every three vectors $x,y,z$ orthogonal to $u\rvert_p$ and to each other, the sectional curvatures $K_p(\Pi_{x,y}),K_p(\Pi_{y,z})$ of the planes $\Pi_{x,y},\Pi_{y,z}$ are equal: \begin{equation} K_p(\Pi_{x,y}) = K_p(\Pi_{y,z}).\label{eqn:A-31} \end{equation} \end{enumerate} \end{lemma} \begin{proof} The first part is a direct consequence of Lemma \ref{lem.RXUY}, so we need to prove only the second part. In the case in which all three vectors have the same causal character this follows by considering a rotation on the plane spanned by $x$ and $z$. We will therefore prove only the case in which one of the vectors -- say $z$ -- is of different causal character than $x,y$. 
Let us consider the isometry $\Lambda$ defined by \begin{align} & \Lambda x = \cosh\theta x + \sinh\theta z,\label{eqn:A-32}\\ & \Lambda z = \sinh\theta x + \cosh\theta z,\label{eqn:A-33}\\ & \Lambda w = w, \;\; w\perp x,z. \label{eqn:A-34} \end{align} \noindent Then \begin{align} g\big(R(x,y)y,x\big) &= g\big(R(\Lambda x,y)y,\Lambda x\big) \label{eqn:A-35}\\ &= \cosh\theta g\big(R(x,y)y,\Lambda x\big) + \sinh\theta g\big(R(z,y)y,\Lambda x\big)\nonumber\\ &= \begin{aligned}[t] & \cosh^2\theta g\big(R(x,y)y,x\big) + \cosh\theta \sinh\theta g\big(R(x,y)y,z\big)\\ & + \cosh\theta \sinh\theta g\big(R(z,y)y,x\big) + \sinh^2\theta g\big(R(z,y)y,z\big) \end{aligned}\nonumber\\ &= \begin{aligned}[t] & \sinh^2\theta \Big[g\big(R(x,y)y,x\big) + g\big(R(z,y)y,z\big)\Big]\\ & + \cosh\theta \sinh\theta \Big[g\big(R(z,y)y,x\big) + g\big(R(x,y)y,z\big)\Big]\\ & + g\big(R(x,y)y,x\big). \end{aligned}\nonumber \end{align} \noindent Since this holds for every real number $\theta$, \begin{equation} g\big(R(x,y)y,x\big) = -g\big(R(z,y)y,z\big), \label{eqn:A-36} \end{equation} \noindent and since $g(z,z)=-g(x,x)$, the sectional curvatures of the planes $\Pi_{x,y}$ and $\Pi_{y,z}$ coincide, which proves the lemma. \end{proof} \noindent We are now in a position to prove that the condition of local isotropy completely specifies the Riemann tensor. \begin{theorem} If $(M,g)$ is a semi-Riemannian manifold with dimension $\geq 4$ that is locally isotropic with respect to a unit vector $u$, its Riemann tensor satisfies Eqs.~\eqref{eqn:1-3}, \eqref{eqn:1-4}. \end{theorem} \begin{proof} We will first prove \eqref{eqn:1-3}. At any point $p$ of $M$ the sectional curvature $\Psi(p)$ of a non-degenerate plane containing $u\vert_p$ is independent of the choice of the plane, and therefore it is a pointwise function on $M$. Consider now two vectors $x,y$ orthogonal to $u\rvert_p$ and a real number $\lambda$ such that the vector $x+\lambda y$ is not null. 
Then Lemma \ref{lem.PiXY} implies that: \begin{equation} g\big(R(x+\lambda y,u)u,x+\lambda y\big) = \varepsilon g(x+\lambda y,x+\lambda y)\Psi(p).\label{eqn:A-37} \end{equation} \noindent Since both sides of \eqref{eqn:A-37} are polynomials in $\lambda$, by continuity the relation also holds when $x+\lambda y$ is null, and in particular for $\lambda=1$; cancelling the diagonal terms and using the symmetry $g\big(R(x,u)u,y\big)=g\big(R(y,u)u,x\big)$ leads us to \begin{equation} g\big(R(x,u)u - \varepsilon\Psi(p)x,y\big) = 0 \label{eqn:A-38} \end{equation} \noindent for every vector $y$ orthogonal to $u$, which proves \eqref{eqn:1-3} with $f=\varepsilon\Psi$. \vspace{0.5em} We will now prove \eqref{eqn:1-4}. Using Lemma \ref{lem.PiXY} along with the algebraic properties of the Riemann tensor (cf. \cite{kobayashi1963geom1}) it follows that at each point $p$ there exists a real number $h(p)$ such that \begin{equation} g\Big(R(x,y)z -h(p)\big(g(y,z)x - g(x,z)y\big),w\Big) = 0 \label{eqn:A-39} \end{equation} \noindent for every $x,y,z,w\in T_pM$ orthogonal to $u\vert_p$. Additionally, using Lemma \ref{lem.XYZOrth} we get \begin{equation} g\Big(R(x,y)z -h(p)\big(g(y,z)x - g(x,z)y\big),u\Big) = 0. \label{eqn:A-40} \end{equation} \end{proof} \begin{theorem} If the Riemann tensor of a semi-Riemannian manifold $(M,g)$ satisfies the relations \begin{align} & R(X,u)u = fX, \label{eqn:A-41}\\[5pt] & R(X,Y)Z = h\big(g(Y,Z)X - g(X,Z)Y\big),\label{eqn:A-42} \end{align} \noindent with $u$ a unit vector field and $X,Y,Z$ vector fields orthogonal to $u$, then it must also satisfy the following relations: \begin{align} & R(X,Y)u = 0, \label{eqn:A-43}\\ & R(X,u)Y = -\varepsilon f g(X,Y)u. \label{eqn:A-44} \end{align} \end{theorem} \begin{proof} In order to prove the first relation, we will prove that $R(X,Y)u$ is orthogonal to every vector field. The antisymmetry of the Riemann tensor in its last two arguments implies that \begin{equation} g\big(R(X,Y)u,u\big) = -g\big(R(X,Y)u,u\big) = 0. \end{equation} Now, if $Z$ is an arbitrary vector field orthogonal to $u$ then it is also orthogonal to $R(X,Y)u$ due to \eqref{eqn:A-42}: \begin{equation} g\big(R(X,Y)u,Z\big) = - g\big(R(X,Y)Z,u\big) = 0. \end{equation} Eq. 
\eqref{eqn:A-44} is proved similarly: \begin{equation} g\big(R(X,u)Y,u\big) = -g\big(R(X,u)u,Y\big) = -fg(X,Y). \end{equation} \end{proof} \noindent Since every other component of the Riemann tensor vanishes identically, it follows that the conditions \eqref{eqn:1-3}, \eqref{eqn:1-4} completely specify the Riemann tensor. \section{The Bianchi identities} \label{Sec.Bianchi} We will now use the second Bianchi identities to derive some important relations involving the differentials $df,dh$ of the sectional curvatures $f,h$ respectively. The main result proved in this section is the following. \begin{theorem} \label{lem.OrthCond} Let $(M,g)$ be a semi-Riemannian manifold with dimension $\geq 4$ whose Riemann tensor $R$ satisfies \eqref{eqn:1-3} and \eqref{eqn:1-4}. If $X,Y$ are vector fields on $M$ that are orthogonal to $u$, then \begin{align} & df(X) = -(h-\varepsilon f) g(X,\nabla_u u), \label{eqn:3-1} \\ & g(\nabla_X u, Y) = g(\nabla_Y u, X), \label{eqn:3-2} \\ & dh(X) = 0. \label{eqn:3-3} \end{align} \end{theorem} \begin{proof} Instead of working with arbitrary vector fields, we shall use vector fields $X,Y,Z$ orthogonal to $u$ that are Fermi-parallel along an integral curve of $u$ and therefore satisfy the following equations: \begin{eqnarray} && \nabla_u X = -\varepsilon g\left(X,\nabla_u u\right) u, \label{eqn:C-1}\\ && \nabla_u Y = -\varepsilon g\left(Y,\nabla_u u\right) u, \label{eqn:C-2}\\ && \nabla_u Z = -\varepsilon g\left(Z,\nabla_u u\right) u. \label{eqn:C-3} \end{eqnarray} Given that \begin{enumerate} \item the relations to be proved are tensor equations, \item the dimension of the space-time is at least $4$, \item a Fermi-parallel basis of vector fields always exists, \end{enumerate} \noindent if Theorem \ref{lem.OrthCond} holds for every pair of vector fields orthogonal to $u$ and Fermi-parallel along an arbitrary integral curve of $u$, then it holds for every pair of vector fields orthogonal to $u$. 
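\vspace{0.5em} \noindent Note, as a quick consistency check, that the transport law \eqref{eqn:C-1} indeed preserves both the orthogonality of $X$ to $u$ and the norm of $X$ along the flow of $u$: using $g(u,u)=\varepsilon$ and $\varepsilon^2=1$,
\begin{equation*}
u\big(g(X,u)\big) = g(\nabla_u X,u) + g(X,\nabla_u u) = -\varepsilon g(X,\nabla_u u)\,g(u,u) + g(X,\nabla_u u) = 0,
\end{equation*}
while $u\big(g(X,X)\big) = 2g(\nabla_u X,X) = -2\varepsilon g(X,\nabla_u u)\,g(u,X) = 0$.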
\vspace{0.5em} \noindent To see this we begin by considering the second Bianchi identity: \begin{equation} (\nabla_u R)(X,Y)Z + (\nabla_X R)(Y,u)Z + (\nabla_Y R)(u,X)Z = 0. \label{eqn:C-4} \end{equation} \noindent We have \begin{align} (\nabla_u R)(X,Y)Z &= \begin{aligned}[t] & \nabla_u\big(R(X,Y)Z\big) - R(\nabla_u X,Y)Z \\ & - R(X,\nabla_u Y)Z - R(X,Y)\nabla_u Z \end{aligned}\nonumber\\ &= \nabla_u\big(R(X,Y)Z\big) - R(\nabla_u X,Y)Z - R(X,\nabla_u Y)Z, \label{eqn:C-5}\\ \nonumber\\ \nabla_u\big(R(X,Y)Z\big) &= \begin{aligned}[t] & dh(u)\big(g(Y,Z)X - g(X,Z)Y\big) \\ & + h\big(g(Y,Z)\nabla_u X - g(X,Z)\nabla_u Y \big), \end{aligned} \label{eqn:C-6}\\ \nonumber\\ R(X,\nabla_u Y)Z &= -\varepsilon g(Y,\nabla_u u)R(X,u)Z = -\varepsilon f g(X,Z)\nabla_u Y, \label{eqn:C-7}\\ \nonumber\\ R(\nabla_u X,Y)Z &= \varepsilon fg(Y,Z)\nabla_u X. \label{eqn:C-8}\\ \nonumber \end{align} \noindent Substituting eqs.~\eqref{eqn:C-6}--\eqref{eqn:C-8} into \eqref{eqn:C-5} gives the first term in the second Bianchi identity \eqref{eqn:C-4}: \begin{equation} (\nabla_u R)(X,Y)Z = \begin{aligned}[t] & (h-\varepsilon f)\big( g(Y,Z)\nabla_u X - g(X,Z)\nabla_u Y \big) \\ & + dh(u)\big( g(Y,Z)X - g(X,Z)Y \big). 
\end{aligned}\label{eqn:C-9} \end{equation} \noindent We now address the second term, \begin{align} (\nabla_X R)(Y,u)Z &= \begin{aligned}[t] &\nabla_X\big(R(Y,u)Z\big) - R(\nabla_X Y,u)Z \\ &-R(Y,\nabla_X u)Z - R(Y,u)\nabla_X Z, \end{aligned} \label{eqn:C-10}\\ \nonumber\\ \nabla_X\big(R(Y,u)Z\big) &= \nabla_X\big(-\varepsilon fg(Y,Z)u\big)\label{eqn:C-11}\\ &= -\varepsilon df(X)g(Y,Z)u - \varepsilon fg(Y,Z)\nabla_X u,\nonumber\\ \nonumber\\ R(\nabla_X Y,u)Z &= R\big((\nabla_X Y)_\perp,u\big)Z = -\varepsilon f g(\nabla_X Y,Z)u.\label{eqn:C-12} \end{align} \noindent Since $u$ is of unit norm, the vector field $\nabla_X u$ is orthogonal to $u$, and then \begin{equation} R(Y,\nabla_X u)Z = h\big(g(\nabla_X u,Z)Y - g(Y,Z)\nabla_X u\big).\label{eqn:C-13} \end{equation} \noindent Lastly, we have \begin{align} \nabla_X Z &= (\nabla_X Z)_\perp + \varepsilon g(\nabla_X Z,u)u \label{eqn:C-14}\\ &= (\nabla_X Z)_\perp - \varepsilon g(Z, \nabla_X u)u \nonumber \end{align} \noindent which leads to \begin{align} R(Y,u)(\nabla_X Z)_{||} &= -\varepsilon g(\nabla_X u, Z)R(Y,u)u = -\varepsilon fg(\nabla_X u,Z)Y, \label{eqn:C-15}\\ R(Y,u)(\nabla_X Z)_\perp &= -\varepsilon f g(\nabla_X Z,Y)u = \varepsilon f g(Z,\nabla_X Y)u. \label{eqn:C-16} \end{align} \noindent The last relation follows from the fact that $g(Z,Y)$ is a constant along the curve on which these vector fields are defined. 
Summing \eqref{eqn:C-15} and \eqref{eqn:C-16} yields \begin{equation} R(Y,u)\nabla_X Z = -\varepsilon f \big( g(\nabla_X u,Z)Y - g(Z,\nabla_X Y)u \big)\label{eqn:C-17} \end{equation} \noindent and finally, substituting \eqref{eqn:C-11}, \eqref{eqn:C-12}, \eqref{eqn:C-13} and \eqref{eqn:C-17} gives us \begin{align} (\nabla_X R)(Y,u)Z &= \begin{aligned}[t] & -\varepsilon df(X)g(Y,Z)u - \varepsilon fg(Y,Z)\nabla_X u\\ & + \varepsilon f g(\nabla_X Y,Z)u\\ & - h\big(g(\nabla_X u,Z)Y - g(Y,Z)\nabla_X u\big)\\ & + \varepsilon f \big( g(\nabla_X u,Z)Y - g(Z,\nabla_X Y)u \big) \end{aligned}\nonumber\\ &= \begin{aligned}[t] & (h-\varepsilon f)\big(g(Y,Z)\nabla_X u - g(\nabla_X u,Z)Y \big)\\ & -\varepsilon df(X)g(Y,Z)u \end{aligned}\label{eqn:C-18} \end{align} \noindent and \begin{equation} (\nabla_Y R)(u,X)Z = \begin{aligned}[t] & (h-\varepsilon f)\big(g(\nabla_Y u,Z)X - g(X,Z)\nabla_Y u\big) \\ & + \varepsilon df(Y)g(X,Z)u. \end{aligned}\label{eqn:C-19} \end{equation} \noindent Substituting \eqref{eqn:C-9}, \eqref{eqn:C-18} and \eqref{eqn:C-19} leads to the result \begin{equation} 0 = \begin{aligned}[t] & (h-\varepsilon f)\Big[\big( g(Y,Z)\nabla_u X - g(X,Z)\nabla_u Y\big)\\ & + \big(g(\nabla_Y u,Z)X - g(\nabla_X u,Z)Y \big)\\ & + \big(g(Y,Z)\nabla_X u - g(X,Z) \nabla_Y u \big)\Big]\\ & + dh(u)\big( g(Y,Z)X - g(X,Z)Y \big)\\ & + \varepsilon\big( df(Y)g(X,Z) - df(X)g(Y,Z) \big)u. \end{aligned}\label{eqn:C-20} \end{equation} For $X\perp Y$ and $Z=Y$, the component of \eqref{eqn:C-20} which is parallel to $u$ gives us \eqref{eqn:3-1}. Eq. \eqref{eqn:3-2} follows by taking the normal component of \eqref{eqn:C-20} with respect to $u$, with the vector fields $X,Y,Z$ being orthogonal to $u$ and to each other. To prove \eqref{eqn:3-3} we will again use the second Bianchi identity, \begin{equation} (\nabla_X R)(Y,Z)Z + (\nabla_Y R)(Z,X)Z + (\nabla_Z R)(X,Y)Z = 0,\label{eqn:C-21} \end{equation} \noindent with the vector fields $X,Y,Z$ being orthogonal to $u$ and to each other. 
Then: \begin{align} (\nabla_X R)(Y,Z)Z &= \begin{aligned}[t] & \nabla_X\big(R(Y,Z)Z\big) - R(\nabla_X Y,Z)Z\\ & - R(Y,\nabla_X Z)Z - R(Y,Z)\nabla_X Z, \end{aligned}\label{eqn:C-22}\\ \nonumber\\ \nabla_X\big(R(Y,Z)Z\big) &= \begin{aligned}[t] & dh(X)\big(g(Z,Z)Y - g(Y,Z)Z\big) \\ & + h\big(g(Z,Z)\nabla_X Y - g(Y,Z)\nabla_X Z\big) \end{aligned}\label{eqn:C-23}\\ &= g(Z,Z)\big(dh(X)Y + h\nabla_X Y\big),\nonumber\\ \nonumber\\ R\big((\nabla_X Y)_\perp,Z\big)Z &= h\big(g(Z,Z)(\nabla_X Y)_\perp - g(\nabla_X Y,Z)Z\big),\label{eqn:C-24}\\ \nonumber\\ R\big(Y,(\nabla_X Z)_\perp\big)Z &= h\big(g(\nabla_X Z,Z)Y - g(Y,Z)(\nabla_X Z)_\perp\big)=0,\label{eqn:C-25}\\ \nonumber\\ R(Y,Z)(\nabla_X Z)_\perp &= h\big( g(Z,\nabla_X Z)Y - g(Y,\nabla_X Z)Z\big)\label{eqn:C-26}\\ &= -hg(Y,\nabla_X Z)Z.\nonumber \end{align} \noindent Summing \eqref{eqn:C-24} and \eqref{eqn:C-26} we get \begin{align} R\big((\nabla_X Y)_\perp, Z\big)Z + R(Y,Z)(\nabla_X Z)_\perp &= \begin{aligned}[t] & hg(Z,Z)(\nabla_X Y)_\perp\\ & - \big(g(\nabla_X Y, Z) + g(Y,\nabla_X Z)\big)hZ \end{aligned}\label{eqn:C-27}\\ &= hg(Z,Z)(\nabla_X Y)_\perp,\nonumber \end{align} \noindent while from \eqref{eqn:C-23} and \eqref{eqn:C-27} we get \begin{equation} \big[(\nabla_X R)(Y,Z)Z\big]_\perp = g(Z,Z)dh(X)Y,\label{eqn:C-28} \end{equation} \noindent and since \begin{equation} (\nabla_Y R)(Z,X)Z = -(\nabla_Y R)(X,Z)Z,\label{eqn:C-29} \end{equation} \noindent it follows that \begin{equation} \big[(\nabla_X R)(Y,Z)Z + (\nabla_Y R)(Z,X)Z \big]_\perp = g(Z,Z)\big(dh(X)Y - dh(Y)X\big).\label{eqn:C-30} \end{equation} As for the last term, \begin{equation} (\nabla_Z R)(X,Y)Z = \begin{aligned}[t] & \nabla_Z \big(R(X,Y)Z\big) - R(\nabla_Z X,Y)Z\\ & - R(X,\nabla_Z Y)Z - R(X,Y)\nabla_Z Z. 
\end{aligned}\label{eqn:C-31} \end{equation} \noindent We have \begin{align} R\big((\nabla_Z X)_\perp,Y\big)Z &= h\big(g(Y,Z)(\nabla_Z X)_\perp - g(\nabla_Z X,Z)Y\big)\label{eqn:C-32}\\ &= -hg(\nabla_Z X,Z)Y,\nonumber\\ R\big(X,(\nabla_Z Y)_\perp\big)Z &= h\big(g(\nabla_Z Y,Z)X - g(X,Z)(\nabla_Z Y)_\perp\big)\label{eqn:C-33}\\ &= hg(\nabla_Z Y,Z)X.\nonumber \end{align} \noindent Summing these expressions, \begin{equation} R\big((\nabla_Z X)_\perp,Y\big)Z + R\big(X,(\nabla_Z Y)_\perp\big)Z + R(X,Y)(\nabla_Z Z)_\perp = 0,\label{eqn:C-34} \end{equation} \noindent and as such the second Bianchi identity \eqref{eqn:C-21} is equivalent to \begin{equation} g(Z,Z)\big(dh(X)Y - dh(Y)X\big) = 0, \label{eqn:C-35} \end{equation} \noindent which concludes the proof. \end{proof} Eq. \eqref{eqn:C-20} can also be used to derive a relation involving $dh(u)$. As we will see in the next section, this relation will give us the second fundamental form of the hypersurfaces of constant time. \begin{theorem} If $X,Y$ are vector fields that are orthogonal to $u$, then \begin{equation} g(X,\nabla_Y u) = -\frac{dh(u)}{2(h-\varepsilon f)}\, g(X,Y). \end{equation} \end{theorem} \begin{proof} As before, because this is a tensor equation, it suffices to prove it in an orthonormal basis of vector fields that are Fermi-parallel along the flow of $u$. Let $X,Y$ be non-null vector fields of this basis, orthogonal to each other and to $u$. Then \eqref{eqn:C-20} with $Z=Y$ gives us \begin{equation} dh(u) = -(h-\varepsilon f)\left(\frac{g(X,\nabla_X u)}{g(X,X)} + \frac{g(Y,\nabla_Y u)}{g(Y,Y)}\right). 
\label{eqn:3-12} \end{equation} \noindent Since the dimension of $M$ is $\geq 4$ the aforementioned basis contains a vector field $Z$ orthogonal to $X,Y,u$ such that \begin{equation} dh(u) = -(h-\varepsilon f)\left(\frac{g(Z,\nabla_Z u)}{g(Z,Z)} + \frac{g(Y,\nabla_Y u)}{g(Y,Y)}\right) \label{eqn:3-13} \end{equation} \noindent and as such \begin{equation} \frac{g(X,\nabla_X u)}{g(X,X)} = \frac{g(Z,\nabla_Z u)}{g(Z,Z)}.\label{eqn:3-14} \end{equation} \noindent Thus \begin{equation} \frac{g(X,\nabla_X u)}{g(X,X)} = \frac{g(Y,\nabla_Y u)}{g(Y,Y)} \end{equation} for every pair of non-null vector fields $X,Y$ orthogonal to each other and to $u$, which in turn leads to \begin{equation} dh(u) = -2(h-\varepsilon f)\frac{g(X,\nabla_X u)}{g(X,X)}.\label{eqn:3-15} \end{equation} \noindent The proof then follows from the polarization identity and \eqref{eqn:3-2}. \end{proof} \section{Proof of Theorem~\ref{thm-main}} \label{sec.Proof} We can now prove the main result of this paper as stated in Section \ref{sec.Statement}. We start by showing that $u$ is orthogonal to a 1-parameter family of hypersurfaces. \begin{lemma} Let $u^\flat = g(u,\cdot)$ be the metrically equivalent 1-form of the vector field $u$. Then \begin{equation} d\left[(h-\varepsilon f)u^\flat \right] = 0.\label{eqn:3-4} \end{equation} \end{lemma} \begin{proof} Let $\omega = (h-\varepsilon f) u^\flat$. 
Then for every pair of vector fields $X,Y$ on $M$ \begin{align} d\omega(X,Y) &= X\left(\omega(Y)\right) - Y\left(\omega(X)\right) - \omega\left([X,Y]\right) \label{eqn:3-5} \\ &= \begin{aligned}[t] & g\left(\nabla_X\left[\left(h-\varepsilon f\right)u\right],Y\right) + g\left(\left(h-\varepsilon f\right)u,\nabla_X Y\right) \\ & - g\left(\nabla_Y\left[\left(h-\varepsilon f\right)u\right],X\right) - g\left(\left(h-\varepsilon f\right)u,\nabla_Y X\right) \\ & - g\left(\left(h-\varepsilon f\right)u,\nabla_X Y - \nabla_Y X \right) \end{aligned}\nonumber\\ &= g\left(\nabla_X\left[\left(h-\varepsilon f\right)u\right],Y\right) - g\left(\nabla_Y\left[\left(h-\varepsilon f\right)u\right],X\right).\nonumber \end{align} \noindent It thus suffices to show that \begin{equation} g\left(\nabla_X\left[\left(h-\varepsilon f\right)u\right],Y\right) = g\left(\nabla_Y\left[\left(h-\varepsilon f\right)u\right],X\right). \label{eqn:3-6} \end{equation} If $X,Y$ are orthogonal to $u$ then due to \eqref{eqn:3-2} \begin{align} g\left(\nabla_X\left[\left(h-\varepsilon f\right)u\right],Y\right) &= (h-\varepsilon f) g(\nabla_X u, Y) \label{eqn:3-7} \\ &= (h-\varepsilon f) g(\nabla_Y u, X) \nonumber \\ &= g\left(\nabla_Y\left[\left(h-\varepsilon f\right)u\right],X\right). \nonumber \end{align} \noindent To finish the proof of this lemma, it remains to treat the case where $Y=u$ and $X$ is orthogonal to $u$. 
Using \eqref{eqn:3-1} we get \begin{align} g\left(\nabla_X\left[\left(h-\varepsilon f\right)u\right],u\right) &= -df(X) \nonumber \\ &= \left(h-\varepsilon f\right) g\left(X,\nabla_u u\right) \nonumber \\ &= g\left(X,\nabla_u\left[\left(h-\varepsilon f\right)u\right] \right) \label{eqn:3-8} \end{align} \end{proof} \noindent From Poincar\'e's lemma, for a sufficiently small neighborhood $\mathscr{U}$ in $M$ there exists a function $t$ on $\mathscr{U}$ such that \begin{equation} dt = \left(h-\varepsilon f\right) u^\flat.\label{eqn:3-9} \end{equation} \noindent The domain $\mathscr{U}$ can be extended to all of $M$ if it is simply-connected. \vspace{0.5em} Since $dt\neq 0$ everywhere, the set $\Sigma_\tau$ of points $p\in\mathscr{U}$ that satisfy the equation $t(p)=\tau$ is a hypersurface of $\mathscr{U}$, while the one-parameter group of diffeomorphisms of $\partial_t$ maps $\Sigma_\tau$ to $\Sigma_s$ for every pair $\tau,s$. In order to determine whether the sectional curvature of the hypersurfaces $\Sigma_\tau$ is constant or not, we need to calculate their second fundamental form \begin{equation} \text{II}_\tau(X,Y) = \varepsilon \, g_\tau(\nabla_X Y,u)u = -\varepsilon \, g_\tau (X,\nabla_Y u)u = \frac{\varepsilon dh(u)}{2(h-\varepsilon f)} \, g_\tau(X,Y)u, \label{eqn:3-10} \end{equation} \noindent where $g_\tau$ is the induced metric on $\Sigma_\tau$. Substituting the second fundamental form into the Gauss formula for the Riemann tensor $R_\tau$ of the hypersurface $\Sigma_\tau$, \begin{align} g_\tau\big(R(X,Y)Z,W\big) = g_\tau\big(R_\tau(X,Y)Z,W\big) &+ g_\tau\big(\text{II}_\tau(X,Z),\text{II}_\tau(Y,W)\big)\label{eqn:3-16}\\ & - g_\tau\big(\text{II}_\tau(Y,Z),\text{II}_\tau(X,W)\big)\nonumber \end{align} \noindent gives us \begin{equation} R_\tau(X,Y)Z = \left(h + \varepsilon \left[\frac{dh(u)}{2(h-\varepsilon f)}\right]^2 \right) \left(g_\tau(Y,Z)X - g_\tau(X,Z)Y\right). 
\label{eqn:3-17} \end{equation} From Schur's lemma, which holds for semi-Riemannian manifolds \cite{o1983semi}, we conclude that $\Sigma_\tau$ is a space of constant curvature for every $\tau$. Therefore, the function \begin{equation} K_\tau = h + \varepsilon \left[\frac{dh(u)}{2(h-\varepsilon f)}\right]^2, \label{eqn:3-18} \end{equation} depends only on $\tau$. \begin{proof}[Proof of Theorem~\ref{thm-main}] \noindent For an arbitrary $\tau$ and $p\in\Sigma_\tau$ we choose the vectors $e_1,...,e_{n-1}\in T_p\Sigma_\tau$ and extend them to vector fields $E_1,...,E_{n-1}$ along the flow of $\partial_t$ using Lie transport: \begin{equation} [\partial_t,E_i] = 0, \;\; i=1,...,n-1.\label{eqn:3-19} \end{equation} \noindent The vectors $e_1,...,e_{n-1}$ are by definition orthogonal to $u$, implying that $\partial_t\perp E_i$ for every $i=1,...,n-1$. \vspace{0.5em} Let us denote by $\pi$ the projection tensor to the orthogonal complement of $u$. Using eq. \eqref{eqn:3-9}, $\pi$ can be expressed as \begin{equation} \pi = g -\varepsilon u^\flat \otimes u^\flat = g - \frac{\varepsilon}{(h-\varepsilon f)^2}dt \otimes dt\label{eqn:3-20}, \end{equation} \noindent and its components with respect to the basis $\{E_i\}$ are \begin{equation} \pi_{ij} = g(E_i,E_j)\equiv g_{ij}.\label{eqn:3-21} \end{equation} We have \begin{align} g(E_i,\nabla_{E_j}u) &= g\left(E_i,\nabla_{E_j}\left(\frac{1}{h-\varepsilon f}\partial_t\right)\right) \label{eqn:3-22}\\ &= \frac{1}{h-\varepsilon f}\,g(E_i,\nabla_{E_j}\partial_t) \nonumber\\ &= \frac{1}{h-\varepsilon f}\,g(E_i,\Gamma_{jt}^t\partial_t + \Gamma_{jt}^k E_k)\nonumber\\ &= \frac{1}{h-\varepsilon f}\,g_{ik}\Gamma_{jt}^k\nonumber\\ &= \frac{1}{h-\varepsilon f}\,g_{ik}\cdot\frac{1}{2}g^{k\mu}(g_{\mu j,t} + g_{\mu t,j} - g_{jt,\mu})\nonumber\\ &= \frac{1}{2(h-\varepsilon f)}\,\delta_i^\mu(g_{\mu j,t} + g_{\mu t,j})\nonumber\\ &= \frac{1}{2(h-\varepsilon f)}\,g_{ij,t},\nonumber \end{align} \noindent On the other hand, \begin{equation} dh(u) = 
\frac{1}{h-\varepsilon f}\,\partial_t h,\label{eqn:3-23} \end{equation} \noindent and therefore \begin{equation} \frac{dh(u)}{h-\varepsilon f} = \frac{1}{(h-\varepsilon f)^2}\,\partial_t h.\label{eqn:3-24} \end{equation} \noindent Since $dh(X)=0$ for every vector field $X$ orthogonal to $u$, $h$ is a function of $\tau$ and the left-hand side of \eqref{eqn:3-24} is also a function of $\tau$ alone. Furthermore, the right-hand side of \eqref{eqn:3-24} is a product of a function of $\tau$ with $K_\tau$. Thus, it follows that $f$ only depends on $\tau$: \begin{equation} df(X) = 0, ~\textrm{for~every}~ X\perp u,\label{eqn:3-25} \end{equation} \noindent which also shows that the flow of $u$ is geodesic (cf. \eqref{eqn:3-1}). Lastly, using \eqref{eqn:3-25} we get \begin{equation} g_{ij,\,t} = -\frac{\partial_t h}{h-\varepsilon f}\,g_{ij},\label{eqn:3-26} \end{equation} \noindent or, equivalently, \begin{equation} \mathcal{L}_{\partial_t}\pi = -\frac{\partial_t h}{h-\varepsilon f}\pi.\label{eqn:3-27} \end{equation} \noindent Eq. \eqref{eqn:3-26} can be integrated in a coordinate-independent way. Let us choose an arbitrary point $p\in\Sigma_0$ and arbitrary vectors $x,y\in T_pM$. The function \begin{equation} \psi = -\frac{\partial_t h}{h-\varepsilon f}\label{eqn:3-28} \end{equation} \noindent is constant on the hypersurfaces $\Sigma_\tau$, and the function $\tau\longmapsto F(\tau)$ defined by \begin{equation} F(\tau) = (\phi_\tau^*\pi)(x,y),\label{eqn:3-29} \end{equation} \noindent where $\{\phi_\tau\}$ is the one-parameter family of diffeomorphisms of $\partial_t$, is smooth and its derivative is equal to \begin{align} F'(\tau) &= \frac{d}{ds}F(\tau + s)\bigg\rvert_{s=0} \nonumber \\ &= \frac{d}{ds}\bigg\rvert_{s=0}\left(\phi_\tau^* \phi_s^* \pi\right)(x,y)\nonumber\\ &= (\phi_\tau^* \mathcal{L}_{\partial_t}\pi)(x,y)\nonumber\\ &= (\phi_\tau^* \psi\pi)(x,y)\nonumber\\ &= \psi\circ\phi_\tau(p) F(\tau). 
\label{eqn:3-30} \end{align} \noindent The proof is concluded by solving \eqref{eqn:3-30} and then changing variables \begin{equation} \bigg\rvert\frac{1}{h-\varepsilon f}\bigg\rvert dt \longrightarrow dt.\label{eqn:3-31} \end{equation} \end{proof} \section{Conclusions} \label{sec.Impl} In this work we derive a geometrical necessary and sufficient condition for a semi-Riemannian space to be locally isotropic with respect to a unit vector field $u$. We prove that this is fulfilled when the vector field pointwise distinguishes two sectional curvatures. Our result might be envisaged as an extension of the theorem concerning local isometry between spaces of equal constant curvature to a class of spaces of non-constant curvature. Due to the properties of the Riemann tensor, this condition can be expressed in terms of it without any further assumption. Since we do not need to make any kind of assumption about the field equations or matter fields, this represents an advantage from the point of view of theoretical cosmology and the study of alternative theories of gravity. \vspace{0.5em} In addition, the local isometry is extended to a global one if the space is simply-connected. Since the equation of geodesic deviation for a specific class of geodesics parallel or orthogonal to $u$ is independent of direction, our condition is an \textit{isotropy of geodesic deviation}. \section{Acknowledgements} This work was supported by the National Key R\&D Program of China (2016YFA0400702) and the National Science Foundation of China (11721303, 11873022 and 11991053). We thank H. Ringstr\"om and L. Andersson for their input.
\section{Introduction} Computer Vision research aims to attain human-like abilities to interpret and extract useful information regarding behavioural patterns and anomalies from a descriptive set of visual data. However, human abilities have glaring limitations when it comes to analyzing simultaneously changing signals\cite{sulman2008effective}. A crowd presents itself as a considerably large collection of simultaneously changing parameters, characterized by usual dominant patterns and some observable abnormalities. Safety is the primary reason to understand crowd dynamics and isolate anomalous patterns. With crowd-related violent incidents on the rise, it is paramount that we expand our studies to analyze the intricate and complex nature of crowds. Understanding anomalies in a crowded scene enables better public space design and also allows better surveillance systems to be built. Earlier works like those of Kim et al.\cite{kim2009observe} used a Mixture of Probabilistic Principal Component Analyzers to learn patterns of local optical flow and then validate the consistency by a Markov Random Field. Cong et al.\cite{cong2013abnormal} used a multi-scale histogram of Optical Flow as the feature descriptor and used it as the basis for a sparse reconstruction. Ali et al.\cite{ali2007lagrangian} used Lagrangian Particle Dynamics to model coherent crowd flow as fluid flow. \begin{figure}[htbp] \subfigure[Snapshot from the Pilgrim sequence of the UCF Database of crowded scenes] {\label{fig:edge-a} \includegraphics[scale=0.11]{typicalcrowd4.PNG}} \subfigure[Snapshot from the Crowded Intersection sequence of the UCF Database of crowded scenes] {\label{fig:edge-b} \includegraphics[scale=0.11]{typicalcrowd1.PNG}} \caption{Typical Crowded Scenes} \label{fig:edge} \end{figure} In general, supervised methods require a considerable amount of labeled data, which is directly utilized to build the connection between video features and video labels. 
Therefore, developing unsupervised anomaly detection systems proves to be more challenging than developing supervised ones. An anomaly in a crowded scene can be determined from the motion patterns of its constituent pedestrians and objects. Analyzing trajectory data enables one to predict and identify anomalies with an excellent degree of accuracy. Early works on trajectory analysis include that of Fu et al.\cite{fu2005similarity}, which proposed a hierarchical clustering framework to classify vehicle motion trajectories based on pairwise similarities, but with the limitation of using only a single feature for clustering. Progressing further, Anjum and Cavallaro\cite{anjum2008multifeature} proposed the use of multiple features in a Mean Shift clustering based framework. They could identify outliers using a basic measure based on the mean trajectory location. Antonini et al.\cite{antonini2006counting} transformed the input trajectories using Independent Component Analysis and then used the Euclidean distance to find similarities between various trajectories. The Shannon entropy measure has presented itself as an excellent tool for many applications, including video key selection\cite{xu2014browsing}, network anomaly detection\cite{santiago2015entropy} and worm detection\cite{ranjan2007dowitcher}. The principal contributions of this paper include the incorporation of a multi-feature object tracker that works remarkably well for crowded scenes\cite{Sharma2016ATC} and the use of multiple features for independent clustering. Furthermore, an information-theoretic Shannon entropy measure is proposed to detect anomalies for each cluster and then identify overall anomalous trajectories for the entire scene using a voting mechanism. The paper is organized as follows: Section II discusses the trajectory estimation and feature extraction procedure. 
Section III discusses the clustering task, Section IV focuses on the anomaly detection mechanism, and Section V presents the results obtained using the algorithm. \section{Trajectory and Feature Extraction} The first task is to evaluate trajectory paths for all moving objects. \subsection{Trajectory Extraction} The estimation of trajectories in crowded scenes is a challenging task due to various factors like a high degree of occlusion, difficulty in tracking individual objects and arbitrary changes in the nature of the motion. To tackle this problem, we incorporate a multi-object tracker that works exceedingly well in crowded scenes, as demonstrated by Sharma et al.\cite{Sharma2016ATC}. Using this approach, each frame is divided into non-overlapping boxes and low-level features are detected inside each box. Following this, the centroids of all the detected feature points in each box are tracked using the standard Kanade-Lucas tracking algorithm. Fresh boxes are introduced periodically to track newly introduced objects. \subsection{Feature Extraction} Most trajectory clustering and anomaly classifiers use a single feature descriptor for the task. We propose the use of multiple features, namely: \subsubsection{Density} A trajectory can have varying densities around it, depending on the size of its neighbourhood. The density feature is thus computed using varying sizes of the neighbourhood $\epsilon$. We have considered three varying sizes, as proposed by Sharma et al.\cite{Sharma2016ATC}. \begin{equation} n_{T,j, \epsilon} = |\{ T^{i} | \forall i \neq j, d(f^{j},f^{i})<\epsilon\} | \end{equation} \begin{equation*} F^{j} = [ n_{j,\epsilon1}, n_{j,\epsilon2}, n_{j,\epsilon3} ] \end{equation*} In this work, we are also interested in distances that describe the similarity of objects along time; these are computed by analysing the way the distance between the objects varies over time. 
This gives us a measure of the \emph{spatio-temporal density} in the most natural way possible: \begin{equation} D(\tau_{1}, \tau_{2})|_{T} = \frac{\int d(\tau_{1}(t), \tau_{2}(t))dt }{|T| } \end{equation} where $ d(\tau_{1}(t), \tau_{2}(t))$ represents the pairwise distance between two trajectories at the instant $t$. \subsubsection{Shape} Every trajectory sketches a particular shape across the spatio-temporal scene, and this is represented as a polynomial function. The coefficients are calculated separately for the $x$ and $y$ coordinates, yielding the $f_s$ feature vector. \begin{eqnarray} x(t) = a_0 +a_1t+a_2t^2+a_3t^3\\ y(t) = b_0 +b_1t+b_2t^2+b_3t^3 \end{eqnarray} \begin{equation*} f_s = [a_0,\dots,a_3,b_0,\ldots,b_3] \end{equation*} \begin{figure}[htbp] \subfigure[Snapshot from the Crowded Subway exit sequence] {\label{fig:edge-a} \includegraphics[scale=0.12]{typicalcrowd1.PNG}} \subfigure[Extracted trajectories. Green dots are starting points] {\label{fig:edge-b} \includegraphics[width=4cm]{trajectorystation.PNG}} \caption{Crowded scene and extracted trajectories} \label{fig:2} \end{figure} \subsubsection{Mean Position} Trajectories separated by large distances may have similar velocities, directions and density features and, consequently, get clustered in the same group. To avoid this, a location measure is needed: $\mathnormal{f_l = [mean_x, mean_y] } $. \subsubsection{Standard Deviation} The standard deviation is an extremely popular measure that quantifies the amount of variation or dispersion in time-series data. \begin{equation} \sigma = \sqrt{(E[(X-\mu)^2])} \end{equation} The trajectories extracted from each surveillance video give rise to a distinct feature space for each of the features mentioned above. These distinct feature spaces will be used for identifying anomalies for each particular feature and, thereafter, for the detection of overall anomalies. 
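The descriptors above are straightforward to compute once the trajectories are available. The sketch below is an illustrative reconstruction (not the authors' code); it assumes each trajectory is stored as a NumPy array of shape (T, 2) holding (x, y) positions sampled at common time instants, and recovers the cubic shape coefficients of Eqs. (3)-(4) by least squares.

```python
import numpy as np

def trajectory_features(traj):
    """Compute shape, mean-position and dispersion descriptors
    for one trajectory given as a (T, 2) array of (x, y) samples."""
    t = np.arange(len(traj))
    # Cubic fits x(t), y(t); np.polyfit returns highest power first,
    # so reverse to get [a0, a1, a2, a3] as in Eqs. (3)-(4).
    a = np.polyfit(t, traj[:, 0], 3)[::-1]
    b = np.polyfit(t, traj[:, 1], 3)[::-1]
    f_s = np.concatenate([a, b])        # shape feature
    f_l = traj.mean(axis=0)             # mean position [mean_x, mean_y]
    sigma = traj.std(axis=0)            # standard deviation per coordinate
    return f_s, f_l, sigma

def spatio_temporal_density(traj1, traj2):
    """Eq. (2): time-averaged pairwise distance between two
    trajectories observed over the same instants."""
    d = np.linalg.norm(traj1 - traj2, axis=1)
    return d.mean()
```

For a perfectly linear trajectory, the fitted cubic collapses to its linear part, so the shape vector directly exposes the trajectory's velocity along each axis.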
\section{Clustering} Clustering methods have gained immense popularity as a data analysis tool ever since Clements\cite{clements1954use} introduced them in 1954. It is observed that the significantly dominant and usual features correspond to the denser regions of the probability density function of the data points. Using a kernel density estimate, the modes of the probability density function can be found using either the Mean Shift\cite{fukunaga1975estimation,cheng1995mean} or the Mountain method\cite{yager1994approximate}. We use the Mean Shift method here, as proposed by Fukunaga and Hostetler\cite{fukunaga1975estimation}. Moreover, since the anomaly detection algorithm proposed here revolves around clustering similar data points, the clustering algorithm has to be highly effective, a requirement that Mean Shift clustering satisfies. \subsection{Mean Shift Clustering} Mean Shift is a versatile, non-parametric, iterative algorithm with applications in varied fields like object tracking, texture segmentation and data mining. After learning an estimate of the probability density of the data points using a kernel density estimate, a gradient ascent procedure associates each data point with the nearby peak of the data set's density function. It defines a window around each point, computes the mean of all the data points within the window and shifts the centre of the window to the new mean until the process converges. When the process converges, we obtain the modes of the density estimate, which serve as the centre points of the clusters in the data. Suppose there are $n$ data points in the $d$-dimensional space $\mathcal{R}^{d}$; then the density estimate with kernel $K(x)$ and bandwidth $h$ can be denoted as $ f_{h,k}(x)$. 
If we define $g(x) = -K'(x)$ as a shadow function\cite{wu2007mean} of $K(x)$, with the assumption that the derivative of the kernel $K$ exists for all $x \in [0,\infty)$, then the gradient of the density estimate can be written as: \begin{equation} \nabla f_{h,k}(x) = \frac{2c_{k,d}}{nh^{d+2}} \sum_{i=1}^{n} (x_{i} - x)\,g( || \dfrac{x-x_{i}}{h} ||^{2}) \end{equation} \begin{equation} \nabla f_{h,k}(x) = \dfrac{2c_{k,d}}{nh^{d+2}}[ \sum_{i=1}^{n} g(|| \dfrac{x-x_i}{h} ||^2) ]\textbf{m}_{h,G}(x) \end{equation} The modes of the density function are found among the zeros of its gradient. The first term in the product is proportional to the density estimate at $x$ computed with kernel $G$, while the second term, the \emph{mean shift}, is defined as the difference between the weighted mean and the centre of the kernel window. \begin{equation} \textbf{m}_{h,G}(x) = [ \dfrac{\sum_{i=1}^{n} x_{i}g(|| \frac{x-x_{i}}{h} ||^{2})}{\sum_{i=1}^{n} g( || \frac{x-x_{i}}{h} ||^{2}) } - x] \end{equation} It can be observed that the mean shift vector always points towards the direction of maximum increase in the density\cite{comaniciu2002mean}. These modes, or cluster centres, are found for each feature independently, giving us a non-overlapping set of trajectories that are characteristic of the cluster they belong to. 
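As a concrete illustration of the update in Eq. (8), the sketch below (not the authors' implementation) iterates the mean shift with a Gaussian kernel: every point is repeatedly moved to the weighted mean of all points, so it climbs to a mode of the density estimate. The bandwidth $h$ and iteration count are assumed given.

```python
import numpy as np

def mean_shift_modes(points, h=1.0, n_iter=100):
    """Move each point along the mean-shift vector m_{h,G}(x)
    (Gaussian kernel weights) until it settles at a density mode."""
    modes = points.astype(float).copy()
    for _ in range(n_iter):
        for i in range(len(modes)):
            # Gaussian kernel weights g(||x - x_i||^2 / h^2)
            w = np.exp(-np.sum((points - modes[i]) ** 2, axis=1) / (2 * h ** 2))
            # new position = weighted mean = old position + mean-shift vector
            modes[i] = w @ points / w.sum()
    return modes
```

Points whose final modes coincide (up to a tolerance) belong to the same cluster, and the distinct modes serve as the cluster centres used in the anomaly detection stage.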
\begin{algorithm} \SetAlgoLined \KwIn{Crowded Video Sequence} \KwOut{Anomalous Trajectories in the Sequence} Extract Trajectories from Video\\ Compute Features $F_1, F_2,\ldots,F_n$\\ \For{$i = 1$ to $n$}{ Compute Cluster Centres for $F_i$\\ $C_i = [c_{i,1}, c_{i,2},\dots, c_{i,k}]$\\ \For{$l = 1$ to $numTrajec$}{ \For{$j=1$ to $k$}{ $distvec(l,j) = dist(trajec_l, c_{i,j})$\\ } } \For{$l = 1$ to $numTrajec$}{ \For{$j=1$ to $k$}{ $P(l,j) = distvec(l,j)/ \sum\limits_{m=1}^{k}distvec(l,m)$\\ } } \For{$l = 1$ to $numTrajec$ }{ $H(l) = - \sum_{j=1}^{k} P_{l,j} \log P_{l,j}$\\ \uIf{$H(l) > thresh$}{ Vote(l)++\\ } } } \For{$l = 1$ to $numTrajec$}{ \uIf{$Vote(l) > n/2$}{ $Anom(l) = 1$\\ } } \Return $Anom$ \caption{Overall Algorithm} \label{alg1} \end{algorithm} \section{Anomaly Detection} The entire crowd is often characterized by some dominant patterns, based on which the entire set of trajectories is clustered. The anomalous trajectories, present throughout the crowded scene, may belong to any one of these clusters but, as a general property, will not have a substantial degree of belongingness to any of them. The mechanism depends on two major tasks: detecting anomalies in each independent feature space, followed by the selection, via a voting mechanism, of those trajectories that exhibit anomalous behaviour in most of the feature spaces. \\ Shannon entropy has found widespread applications in numerous domains, anomaly detection being one of them. The greatest advantage of this technique is that it allows the summarization of the feature distributions in the form of a single number. Our approach is based on the simple idea that an anomalous trajectory would exhibit higher levels of entropy when compared to normal trajectories. Instead of comparing the distances between the means of the cluster centres and the trajectories as in previous work\cite{anjum2008multifeature}, we build a probability distribution using the distances between a trajectory and all of the cluster centres. 
The entropy of this probability distribution is evaluated and, if it exceeds a threshold, the trajectory is classified as an anomaly. The threshold should be data-adaptive, adjusting itself to the changing properties of the data. \begin{center} $\mathnormal{C_i = [c_{1,i},c_{2,i},\ldots,c_{n,i}]}$\\[0.2cm] $\mathnormal{distvec_j = [distance(c_{1,i},f_j),\ldots,distance(c_{n,i},f_j)]}$\\[0.2cm] \end{center} $C_i$ represents the cluster centres for a specific feature $i$ and the $distvec$ vector contains the distance measures between each cluster centre and the trajectory $f_j$. We further build the probability distribution $P_j = [p_{j,1}, p_{j,2}, \dots, p_{j,n}]$ where \begin{eqnarray} p_{j,k} = \dfrac{distance(c_{k,i},f_j)}{\sum_{m=1}^{n}distance(c_{m,i}, f_j)} \end{eqnarray} An entropy measure is computed for each trajectory: \begin{equation} H_i = -\sum_{k=1}^{n} p_{i,k} \log_{a}p_{i,k} \end{equation} \begin{equation} H = [H_1, H_2,\ldots, H_{numTraj}] \end{equation} Trajectories with an entropy value exceeding the threshold are marked anomalous for that feature. In a crowded scene, changes in the scene's attributes occur randomly and inevitably. A particular section of the crowd can exhibit spatio-temporal changes in density and may also suddenly slow down or speed up, thereby affecting individual feature parameters of the trajectories. Moreover, new trajectories that are introduced after a fixed interval of time may have features similar to those of a particular cluster but may exhibit one or more abnormalities due to their late introduction. Therefore, we cannot take all trajectories marked as anomalous by the above procedure as our desired set of abnormalities. A simple voting mechanism sieves out those trajectories that are marked anomalous in the majority of the cases. 
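The entropy scoring and voting described above reduce to a few lines. The following sketch is an illustrative reconstruction under the assumption that trajectory features and cluster centres are given as NumPy arrays; a small clipping constant (our addition, not in the paper) guards against a trajectory coinciding exactly with a centre.

```python
import numpy as np

def entropy_scores(feats, centres, eps=1e-12):
    """Eqs. (9)-(10): normalise each trajectory's distances to all
    cluster centres into a distribution and take its Shannon entropy."""
    d = np.linalg.norm(feats[:, None, :] - centres[None, :, :], axis=2)
    p = np.clip(d / d.sum(axis=1, keepdims=True), eps, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def vote_anomalies(score_lists, thresholds):
    """A trajectory is overall-anomalous if its entropy exceeds the
    threshold in more than half of the feature spaces."""
    votes = sum(h > t for h, t in zip(score_lists, thresholds))
    return votes > len(score_lists) / 2
```

With two centres, a trajectory equidistant from both attains the maximum entropy $\log 2$, while one lying close to a single centre scores far lower, which is exactly the separation the voting step exploits.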
\section{Results} The method is tested on videos from two datasets, namely the crowded scenes dataset used by Cheriyadat et al.\cite{cheriyadat2008detecting} to detect dominant motions in crowds and the UCF crowd dataset, first used by Ali et al.\cite{ali2007lagrangian}. To measure the efficiency of the method, we first identify all possible anomalous trajectories in the video and then compare them with the classification test results. Since the method involves the use of videos directly, we had to mark the anomalous trajectories in the actual video for the evaluation procedure. The results for three standard crowded videos from the mentioned datasets are tabulated as follows: \begin{table}[htb] \begin{center} \begin{tabular}[h]{|c|c|c|c|c|}\hline Video&Precision&Recall&$f$-Score&Accuracy\\\hline Crowded Subway Exit&0.8258&0.9944&0.9023&96.31\%\\\hline Pilgrim Sequence&0.8221&0.9965&0.9009&98.15\%\\\hline Intersection Sequence&0.7287&0.9971&0.842&98.68\%\\\hline \end{tabular} \end{center} \caption{Results on several Crowded Scene Videos} \label{tab1} \end{table} The results indicate that this method exhibits excellent specificity, i.e.~the probability of classifying a normal trajectory as anomalous is extremely low. However, the sensitivity of the approach can be improved by raising the true-positive rate. It is to be noted that the method indicates almost all anomalous trajectories in the expected regions of interest with commendable accuracy. The graphical plots reveal the effective nature of the results produced. The plots depicted in Figure 3 are from the Crowded Subway Exit sequence. The trajectories have been detected from the entire video sequence and, thereafter, clustering has been done on the several feature spaces, as shown in Figures 3(a), 3(c), 3(e) and 3(g). The anomalous trajectories detected in each such feature space have been plotted in Figures 3(b), 3(d), 3(f) and 3(h). 
Following the voting mechanism, the final anomalous trajectories are displayed as red curves with their origin points shown as blue dots in Figure 3(i). Figure 3(j) shows the overall crowded scene as being composed of the anomalous trajectories shown in red and the normal trajectories shown in blue. If the video is analyzed carefully, one can find that the trajectories responsible for slowing down the crowd exiting the subway are closely represented by the ones detected as anomalous by the algorithm. These are, in essence, the peripheral trajectories present alongside the principal crowd flow, which is closely represented by the blue section in Figure 3(j).\\ \begin{table}[htb] \begin{center} \begin{tabular}[h]{|c|c|}\hline Method&Accuracy\\\hline Guo et al.&96\%\\\hline Xu et al.&87\%\\\hline Biswas et al.&96.7\%\\\hline Proposed&98.68\%\\\hline \end{tabular} \end{center} \caption{Comparison of Results} \label{tab2} \end{table} The method performs well when compared with different state-of-the-art methods. The overall accuracy has been used as the metric for comparison here. The Information Bottleneck based approach\cite{guo2014anomaly} extracts only a speed-based feature to improve the shape analysis of trajectory data. This method shows an accuracy of about 96\% on its task-specific datasets. Other unsupervised methods, like the one based on hierarchical pattern discovery\cite{xu2014video}, although using a completely different approach, exhibit an accuracy of around 87\%. Other abnormality methods that use the property of sparsity in abnormal events\cite{biswas2017abnormality} exhibit an accuracy in the range of 88.71\% to 96.7\%. \section{Conclusion} This paper stresses the need for understanding crowd dynamics better and presents an unsupervised mechanism to detect anomalous trajectories. 
The method is application-ready: it generates trajectories from a video using a multi-object tracker and then clusters them based on multiple independent features. The use of multiple features for determining the clusters and the anomalies is based on the fact that an anomalous trajectory may possess similarity with a dominant pattern in one aspect, but differs significantly in a majority of aspects. A trajectory that may be similar to most trajectories in terms of mean location and position may cause disturbance in the scene due to its unnatural speed. This has been taken care of by using multiple features to detect the overall anomalies. The use of Shannon entropy provides a novel approach to determine the anomalies, considering the fact that a probability distribution is developed using the distances from all cluster centres and not only the specific cluster with which the trajectory is associated. An anomalous trajectory is unlikely to belong to any specific cluster to a significant degree, thereby maximizing the entropy of the probability distribution. The proposed approach yields excellent results on the chosen crowd videos. This work can be strengthened by developing a substantially large dataset that demarcates abnormal trajectories, where the trajectories are represented as a time series as used here. Trajectory representations such as this have been used to evaluate crowd flow segmentation, but here the representation has been put to use for abnormality detection. This lends the approach the added advantage of detecting the specific areas in the scene that contribute most to the disturbance. Finally, this work may find extensive use in improving surveillance methods, better public space design, efficient event organization and, possibly, even in tracking rogue naval and air routes. The work can be further improved by making it real-time and by generalizing the entropy measure so that it classifies the anomalies optimally. 
\begin{figure}[htbp] \subfigure[Clustering based on the density feature] {\label{fig:edge-a} \includegraphics[scale=0.35]{ClusterDensity.PNG}} \subfigure[Anomalous trajectories based on Density feature] {\label{fig:edge-b} \includegraphics[scale=0.35]{AnomalousDensity.PNG}}\\ \subfigure[Clustering based on the Shape feature] {\label{fig:edge-c} \includegraphics[scale=0.35]{ClusterShape.PNG}} \subfigure[Anomalous trajectories based on Shape feature] {\label{fig:edge-d} \includegraphics[scale=0.35]{AnomalousShape.PNG}}\\ \subfigure[Clustering based on the Mean Position] {\label{fig:edge-e} \includegraphics[scale=0.35]{ClusterPosition.PNG}} \subfigure[Anomalous trajectories based on Mean Position] {\label{fig:edge-f} \includegraphics[scale=0.35]{AnomalousPosition.PNG}}\\ \subfigure[Clustering based on the Standard Deviation] {\label{fig:edge-g} \includegraphics[scale=0.35]{ClusterSTD}} \subfigure[Anomalous trajectories based on Standard Deviation] {\label{fig:edge-h} \includegraphics[scale=0.35]{AnomalousSTD}}\\ \subfigure[Overall Anomalies] {\label{fig:edge-i} \includegraphics[scale=0.35]{OverallAnomalous.PNG}} \subfigure[Overall Scene] {\label{fig:edge-j} \includegraphics[scale=0.35]{scenario.PNG}} \caption{Different Clusterings and Anomalous Trajectory Classification} \label{fig:edge} \end{figure} \bibliographystyle{IEEEtran}
\section{Stage 2: Descent} \label{descent} We now adapt the neural sparse coding approach of \citet{arora15-neural} to obtain an improved estimate of $A^*$. As mentioned earlier, at a high level the algorithm is akin to performing approximate gradient descent. The insight is that within a small enough neighborhood (in the sense of $\delta$-closeness) of the true $A^*$, an estimate of the ground-truth code vectors, $X^*$, can be constructed using a neurally plausible algorithm. The innovation, in our case, is the double-sparsity model, since we know \emph{a priori} that $A^*$ is itself sparse. Given sufficiently many samples, the support of $A^*$ can be deduced from the initialization stage; therefore, we perform an extra \emph{projection} step in each iteration of gradient descent. In this sense, our method is non-trivially different from \cite{arora15-neural}. The full algorithm is presented as Algorithm \ref{alg_neural_doubly_sdl}. As discussed in Section \ref{setup}, convergence of noisy approximate gradient descent can be achieved as long as $\widehat{g}^s$ is correlated-w.h.p. with the true solution. However, an analogous convergence result for \emph{projected} gradient descent does not exist in the literature. We fill this gap via a careful analysis. Due to the projection, we only require the correlated-w.h.p.~property of \emph{part} of $\widehat{g}^s$ (i.e.,~its restriction to some support set) with $A^*$. The descent property is still achieved via Theorem \ref{main_thm_columnwise_descent_in_expectation}. Due to various perturbation terms, $\widehat{g}$ is only a biased estimate of $\nabla_A\mathcal{L}(A, X)$; therefore, we can only refine the estimate of $A^*$ until the column-wise error is of order $O(\sqrt{k/n})$. The performance of Algorithm~\ref{alg_neural_doubly_sdl} can be characterized via the following theorem. \begin{algorithm}[t] \begin{algorithmic} \State \textbf{Initialize} $A^0$ that is $(\delta, 2)$-near to $A^*$, and 
$H = (h_{ij})_{n\times m}$ where $h_{ij} = 1$ if $i \in \textnormal{supp}(\cA{j}^0)$ and 0 otherwise. \State \textbf{Repeat} for $s = 0, 1, \dots, T$ \State \indent Encode: $x^{(i)} = \textnormal{threshold}_{C/2}((A^{s})^Ty^{(i)})$ \quad for $i = 1, 2, \dots, p$ \State \indent Update: $A^{s+1} = \mathcal{P}_{H}(A^s - \eta\widehat{g}^s) = A^s - \eta\mathcal{P}_{H}(\widehat{g}^s)$ \State \indent where $\widehat{g}^s = \frac{1}{p}\sum_{i=1}^p (A^{s}x^{(i)} - y^{(i)})\textnormal{sgn}(x^{(i)})^T$ and $\mathcal{P}_{H}(G) = H \circ G$ \end{algorithmic} \caption{Double-Sparse Coding Descent Algorithm} \label{alg_neural_doubly_sdl} \end{algorithm} \begin{Theorem} \label{main_thm_columnwise_descent_in_expectation} Suppose that the initial estimate $A^0$ has the correct column supports and is $(\delta, 2)$-near to $A^*$ with $\delta = O^*(1/\log n)$. If Algorithm \ref{alg_neural_doubly_sdl} is provided with $p = \widetilde{\Omega}(mr)$ fresh samples at each step and $\eta = \Theta(m/k)$, then $$\mathbb{E}[\norm{\cA{i}^s - \cA{i}^*}^2] \leq (1-\rho)^s \norm{\cA{i}^0 - \cA{i}^*}^2 + O(\sqrt{k/n}) $$ for some $0 < \rho < 1/2$ and for $s = 1, 2, \dots, T$. Consequently, $A^s$ converges to $A^*$ geometrically until the column-wise error is $O(\sqrt{k/n})$. \end{Theorem} We defer the full proof of Theorem \ref{main_thm_columnwise_descent_in_expectation} to Section \ref{sample_complexity}. In this section, we take a step towards understanding the algorithm by analyzing $\widehat{g}^s$ in the infinite-sample case, where it equals its expectation $g^s \triangleq \mathbb{E}[(A^sx - y)\textnormal{sgn}(x)^T]$. We establish the $(\alpha, \beta, \gamma_s)$-correlation of a truncated version of $\cg{i}^s$ with $\cA{i}^*$ to obtain the descent result of Theorem \ref{thm_columnwise_descent_infinite_samples} for the infinite-sample case. 
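For intuition, one pass of the encode-and-update loop of Algorithm \ref{alg_neural_doubly_sdl} can be sketched in NumPy as follows (an illustrative sketch under simplifying assumptions of our choosing, e.g., $x^*$ with $\pm 1$ nonzeros so that $C = 1$, and a noiseless generative model):

```python
import numpy as np

def descent_step(A, Y, H, C=1.0, eta=2.0):
    """One iteration of the projected descent in Algorithm 1.

    A : current estimate A^s (n x m);  Y : p samples as columns (n x p);
    H : 0/1 support mask deduced at the initialization stage (n x m).
    """
    p = Y.shape[1]
    Z = A.T @ Y
    X = np.where(np.abs(Z) > C / 2, Z, 0.0)  # encode: threshold_{C/2}((A^s)^T y)
    G = (A @ X - Y) @ np.sign(X).T / p       # approximate gradient g^s
    return A - eta * (H * G)                 # projected update A - eta P_H(g^s)

# Synthetic double-sparse instance: A* has r-sparse columns, x* is k-sparse.
rng = np.random.default_rng(0)
n, m, k, r, p = 128, 32, 3, 32, 2000
A_star = np.zeros((n, m))
for j in range(m):
    S = rng.choice(n, size=r, replace=False)
    A_star[S, j] = rng.choice([-1.0, 1.0], size=r) / np.sqrt(r)
H = (A_star != 0).astype(float)
X_star = np.zeros((m, p))
for i in range(p):
    S = rng.choice(m, size=k, replace=False)
    X_star[S, i] = rng.choice([-1.0, 1.0], size=k)
Y = A_star @ X_star

# A delta-close start with the correct column supports.
A = A_star + 0.2 * H * rng.standard_normal((n, m)) / np.sqrt(r)
A /= np.linalg.norm(A, axis=0)
err0 = np.linalg.norm(A - A_star)
for _ in range(10):
    A = descent_step(A, Y, H)
```

On this toy instance the estimation error decreases across iterations while the column supports are preserved exactly by the projection, consistent with the theorem.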
\begin{Theorem} \label{thm_columnwise_descent_infinite_samples} Suppose that the initial estimate $A^0$ has the correct column supports and is $(\delta, 2)$-near to $A^*$. If Algorithm \ref{alg_neural_doubly_sdl} is provided with an infinite number of samples at each step and $\eta = \Theta(m/k)$, then $$\norm{\cA{i}^{s+1} - \cA{i}^*}^2 \leq (1-\rho) \norm{\cA{i}^s - \cA{i}^*}^2 + O\bigl(k^2/n^2 \bigr) $$ for some $0 < \rho < 1/2$ and for $s = 1, 2, \dots, T$. Consequently, it converges to $A^*$ geometrically until the column-wise error is $O(k/n)$. \end{Theorem} Note that the better error $O(k^2/n^2)$ is due to the fact that infinitely many samples are given. The term $O(\sqrt{k/n})$ in Theorem \ref{main_thm_columnwise_descent_in_expectation} is a trade-off between the accuracy and the sample complexity of the algorithm. The proof of this theorem consists of two steps with two main results: 1) an explicit form of $g^s$ (Lemma \ref{lm_expected_columwise_update}); 2) the $(\alpha, \beta, \gamma_s)$-correlation of the column-wise $g^s$ with $A^*$ (Lemma \ref{lm_correlation_gs}). The proofs of these lemmas are deferred to Appendix \ref{appdx_main_algorithm}. Since the correlation primarily relies on the $(\delta, 2)$-nearness of $A^s$ to $A^*$, which is provided at initialization, we also need to argue that nearness is preserved after each step. \begin{Lemma} \label{lm_expected_columwise_update} Suppose that the initial estimate $A^0$ has the correct column supports and is $(\delta, 2)$-near to $A^*$. The column-wise update has the form $g_{R, i}^s = p_iq_i(\lambda^s_i\AR{i}^s - \AR{i}^* + \xi_i^s \pm \zeta)$ where $R = \textnormal{supp}(\cA{i}^s)$, $\lambda_i^s = \inprod{\cA{i}^s}{\cA{i}^*}$ and $$\xi_i^s = \AR{-i}^s\textnormal{diag}(q_{ij})(\cA{-i}^s)^T\cA{i}^*/q_i.$$ Moreover, $\xi_i^s$ has norm bounded by $O(k/n)$ for $\delta = O^*(1/\log n)$ and $\zeta$ is negligible. 
\end{Lemma} We underline that the correct support of $A^s$ allows us to obtain the closed-form expression of $g_{R, i}^s$ in terms of $\cA{i}^s$ and $\cA{i}^*$. Likewise, the expression (\ref{eq_expected_columwise_update}) suggests that $\cg{i}^s$ is almost equal to $p_iq_i(\cA{i}^s - \cA{i}^*)$ (since $\lambda_i^s \approx 1$), which points toward the desired solution $\cA{i}^*$. With Lemma \ref{lm_expected_columwise_update}, we will prove the $(\alpha, \beta, \gamma_s)$-correlation of the approximate gradient with each column $\cA{i}^*$ and the nearness of each new update to the true solution $A^*$. \subsection{$(\alpha, \beta, \gamma_s)$-Correlation} \begin{Lemma} \label{lm_correlation_gs} Suppose that $A^s$ is $(\delta, 2)$-near to $A^*$ and $R = \textnormal{supp}(\cA{i}^*)$; then $2g_{R, i}^s$ is $(\alpha, 1/2\alpha, \epsilon^2/\alpha)$-correlated with $\AR{i}^*$; that is, \begin{equation*} \inprod{2g_{R, i}^s} {\AR{i}^s - \AR{i}^*} \geq \alpha \Vert \AR{i}^s - \AR{i}^*\Vert^2 + {1}/{(2\alpha)} \Vert g_{R, i}^s\Vert^2 - {\epsilon^2}/{\alpha} \end{equation*} Furthermore, the descent is achieved by $$\norm{\cA{i}^{s+1} - \cA{i}^*}^2 \leq (1-2\alpha\eta)^s\norm{\cA{i}^0 - \cA{i}^*}^2 + \eta\epsilon_s^2/\alpha$$ where $\delta = O^*(1/\log n)$ and $\epsilon = O\bigl( \frac{k^2}{mn} \bigr)$. \end{Lemma} \proof Throughout the proof, we omit the superscript $s$ for simplicity and denote $2\alpha = p_iq_i$. First, we rewrite $g_{R, i}$ as a combination of the true direction $\AR{i} - \AR{i}^*$ and a term with small norm: \begin{equation} \label{eq:3} g_{R, i} = 2\alpha(\AR{i} - \AR{i}^*) + v, \end{equation} where $v = 2\alpha[(\lambda_i - 1)\cA{i} + \xi_i]$ has bounded norm. Indeed, since $\cA{i}$ is $\delta$-close to $\cA{i}^*$ and both have unit norm, $\norm{2\alpha(\lambda_i - 1)\cA{i}} = \alpha \norm{ \cA{i} - \cA{i}^*}^2 \leq \alpha \norm{\cA{i} - \cA{i}^*}$ and $\norm {\xi_i} \leq O(k/n)$ by the inequality (\ref{eqn:eps_norm}). 
Therefore, \begin{equation*} \norm{v} = \norm{2\alpha(\lambda_i - 1)\AR{i} + 2\alpha \xi_i} \leq \alpha \norm{\AR{i} - \AR{i}^*} + \epsilon \end{equation*} where $\epsilon = O(k^2/mn)$. Now, we make use of (\ref{eq:3}) to show the first part of Lemma \ref{lm_correlation_gs}: \begin{align} \inprod{2g_{R, i}}{\AR{i} - \AR{i}^*} = 4\alpha \norm{\AR{i} - \AR{i}^*}^2 + \inprod{2v}{\AR{i} - \AR{i}^*}. \label{eq:4} \end{align} We want to lower bound the inner product term in terms of $\norm{g_{R, i}}^2$ and $\norm{\AR{i} - \AR{i}^*}^2$. Indeed, from \eqref{eq:3}, \begin{align} 4\alpha \inprod{v}{ \AR{i} - \AR{i}^*} &= \norm{g_{R, i}}^2 - 4\alpha^2\norm{\AR{i} - \AR{i}^*}^2 - \norm{v}^2 \nonumber \\ &\geq \norm{g_{R, i}}^2 - 6\alpha^2\norm{\AR{i} - \AR{i}^*}^2 - 2\epsilon^2, \label{eq:5} \end{align} where the last step uses $\norm{v}^2 \leq 2(\alpha^2 \norm{\AR{i} - \AR{i}^*}^2 + \epsilon^2)$, which follows from the elementary inequality $(a+b)^2 \leq 2(a^2 + b^2)$. Substituting the lower bound \eqref{eq:5} for $4\alpha\inprod{v}{\AR{i} - \AR{i}^*}$ into \eqref{eq:4}, we get the first result: $$ \inprod{2g_{R, i}}{\AR{i} - \AR{i}^*} \geq \alpha \norm{\AR{i} - \AR{i}^*}^2 + \frac{1}{2\alpha}\norm{g_{R, i}}^2 - \frac{\epsilon^2}{\alpha}.$$ The second part follows directly from Theorem \ref{thm_descent_from_correlation_z}. Moreover, we have $p_i = \Theta(1)$ and $q_i = \Theta(k/m)$, so $\alpha = \Theta(k/m)$, $\beta = \Theta(m/k)$ and $\gamma_s = O(k^3/mn^2)$. Hence $g_{R, i}^s$ is $(\Omega(k/m), \Omega(m/k), O(k^3/mn^2))$-correlated with the true solution $\AR{i}^*$. \qedhere \proof[Proof of Theorem \ref{thm_columnwise_descent_infinite_samples}] The descent claimed in Theorem \ref{thm_columnwise_descent_infinite_samples} follows directly from the above lemma. 
Next, we establish nearness for the update at step $s$. \subsection{Nearness} \begin{Lemma} \label{lm_nearness_infinite_sample} Suppose that $A^s$ is $(\delta, 2)$-near to $A^*$; then $\norm{A^{s+1} - A^*} \leq 2\norm{A^*} $ \end{Lemma} \proof[Proof] From Lemma \ref{lm_expected_columwise_update} we have $\cg{i}^s = p_iq_i(\lambda_i\cA{i}^s - \cA{i}^*) + p_i\cB{-i}\textnormal{diag}(q_{ij})\cB{-i}^T\cA{i}^* \pm \zeta$. Denote $\bar{R} = [n] \backslash R$; then $g_{\bar{R}, i}^s = p_iA_{\bar{R}, -i}\textnormal{diag}(q_{ij})\cB{-i}^T\cA{i}^* \pm \zeta$ clearly has norm bounded by $O(k^2/m^2)$. We then follow the proof of Lemma 24 in~\citep{arora15-neural}, applied to the full column update $\cg{i}^s = g_{R,i}^s + g_{\bar{R}, i}^s$, to establish nearness and finish the proof of this lemma. \qed In sum, we have shown the descent property of Algorithm~\ref{alg_neural_doubly_sdl} in the infinite-sample case. The concentration of $\widehat{g}^s$ around its mean, which determines the sample complexity, is studied in Section~\ref{sample_complexity}. In the next section, we corroborate our theory with numerical results on synthetic data. \section{Analysis of Main Algorithm} \label{appdx_main_algorithm} \subsection{Simple Encoding} Note that $(A^sx - y)\textnormal{sgn}(x)^T$ is random over $y$ and over $x$, which is obtained from the encoding step. We follow~\citep{arora15-neural} to derive the closed form of $g^s = \mathbb{E}[(A^sx - y)\textnormal{sgn}(x)^T]$ by proving that the encoding recovers the sign of $x^*$ with high probability as long as $A^s$ is close enough to $A^*$. 
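The sign-recovery property of the thresholded encoding (Lemma \ref{lm_sign_recovery_x} below) is easy to observe numerically. The following is an illustrative sketch with parameters of our choosing, using a random (hence incoherent w.h.p.) dictionary:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k, C = 256, 64, 3, 1.0
A_star = rng.standard_normal((n, m))
A_star /= np.linalg.norm(A_star, axis=0)         # unit-norm columns
A = A_star + 0.01 * rng.standard_normal((n, m))  # a delta-close estimate A^s
A /= np.linalg.norm(A, axis=0)

trials, hits = 200, 0
for _ in range(trials):
    x_star = np.zeros(m)
    S = rng.choice(m, size=k, replace=False)
    x_star[S] = rng.choice([-1.0, 1.0], size=k)  # |x*_i| >= C on the support
    y = A_star @ x_star + 0.01 * rng.standard_normal(n)  # y = A* x* + noise
    z = A.T @ y
    x = np.where(np.abs(z) > C / 2, z, 0.0)      # threshold_{C/2}((A^s)^T y)
    hits += np.array_equal(np.sign(x), np.sign(x_star))
recovery_rate = hits / trials
```

With these mild parameters the cross-terms and the noise stay well below the threshold $C/2$, so the sign pattern of $x^*$ is recovered in essentially every trial.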
\begin{Lemma} \label{lm_sign_recovery_x} Assume that $A^s$ is $\delta$-close to $A^*$ for $\delta = O(r/n\log n)$, $\mu \leq \frac{\sqrt{n}}{2k}$, and $k \geq \Omega(\log m)$. Then, with high probability over the random sample $y = A^*x^* + \varepsilon $, \begin{equation} \label{eq_sign_recovery_x} \textnormal{sgn}\bigl(\textnormal{threshold}_{C/2}\bigl( (A^{s})^Ty \bigr)\bigr) = \textnormal{sgn}(x^*) \end{equation} \end{Lemma} \proof[Proof of Lemma \ref{lm_sign_recovery_x}] We follow the same proof strategy as~\citep{arora15-neural} (Lemmas 16 and 17) to prove a more general version in which the noise $\varepsilon$ is taken into account. Write $S = \textnormal{supp}(x^*)$ and skip the superscript $s$ on $A^s$ for readability. We need to show that $S = \{i \in [m] : \abs{\inprod{\cA{i}}{y}} \geq C/2\}$ and that $\textnormal{sgn}(\inprod{\cA{i}}{y}) = \textnormal{sgn}(x_i^*)$ for each $i \in S$ with high probability. Following the same argument as~\citep{arora15-neural}, we prove below a stronger statement: even conditioned on the support $S$, $S = \{i \in [m] : \abs{\inprod{\cA{i}}{y}} \geq C/2\}$ with high probability. Rewrite $$\inprod{\cA{i}}{y} = \inprod{\cA{i}}{ A^*x^* + \varepsilon} = \inprod{\cA{i}}{\cA{i}^*}x_i^* + \sum_{j \neq i}\inprod{\cA{i}}{\cA{j}^*}x_j^* + \inprod{\cA{i}}{\varepsilon},$$ and observe that, due to the closeness of $\cA{i}$ and $\cA{i}^*$, the first term is either close to $x_i^*$ or equal to 0, depending on whether or not $i \in S$. Meanwhile, the remaining terms are small due to the incoherence and the concentration of the weighted average of the noise. We will show that both $Z_i = \sum_{j \in S\backslash \{i\}}\inprod{\cA{i}}{\cA{j}^*}x_j^*$ and $\inprod{\cA{i}}{\varepsilon}$ are bounded by $C/8$ with high probability. 
The cross-term $Z_i = \sum_{j \in S\backslash \{i\}}\inprod{\cA{i}}{\cA{j}^*}x_j^*$ is a sum of zero-mean independent sub-Gaussian random variables, and hence is itself sub-Gaussian with variance $\sigma^2_{Z_i} = \sum_{j \in S\backslash \{i\}}\inprod{\cA{i}}{\cA{j}^*}^2$. Note that $$\inprod{\cA{i}}{\cA{j}^*}^2 \leq 2\bigl(\inprod{\cA{i}^*}{\cA{j}^*}^2 + \inprod{\cA{i} - \cA{i}^*}{\cA{j}^*}^2\bigr) \leq 2\mu^2/n + 2\inprod{\cA{i} - \cA{i}^*}{\cA{j}^*}^2,$$ where we use the Cauchy-Schwarz inequality and the $\mu$-incoherence of $A^*$. Therefore, $$\sigma^2_{Z_i} \leq 2\mu^2k/n + 2\norm{\cA{S}^{*T}(\cA{i} - \cA{i}^*)}_F^2 \leq 2\mu^2k/n + 2\norm{\cA{S}^*}^2\norm{\cA{i} - \cA{i}^*}^2 \leq O(1/\log n),$$ under $\mu \leq \frac{\sqrt{n}}{2k}$; to conclude that $2\mu^2k/n\le O(1/\log n)$ we need $1/k=O(1/\log n)$, i.e.,~$k=\Omega(\log n)$. Applying Bernstein's inequality, we get $\abs{Z_i} \leq C/8$ with high probability. What remains is to bound the noise term $\inprod{\cA{i}}{\varepsilon}$. Indeed, $\inprod{\cA{i}}{\varepsilon}$ is a weighted sum of $n$ Gaussian random variables, and hence is sub-Gaussian with variance $\sigma_\varepsilon^2$. It is easy to see that $|\inprod{\cA{i}}{\varepsilon}| \leq \sigma_\varepsilon\log n$ with high probability; notice that $\sigma_\varepsilon=O(1/\sqrt{n})$. Finally, we combine these bounds to obtain $\abs{Z_i + \inprod{\cA{i}}{\varepsilon}} \leq C/4$. Therefore, $\abs{\inprod{\cA{i}}{y}} \geq C/2$ for $i \in S$, and $\abs{\inprod{\cA{i}}{y}} \leq C/4 < C/2$ otherwise. Taking a union bound over $i = 1, 2, \dots, m$, we finish the proof of the lemma. \qedhere Lemma \ref{lm_sign_recovery_x} enables us to derive the expected update direction $g^s = \mathbb{E}[(A^sx - y)\textnormal{sgn}(x)^T]$ explicitly. \subsection{Approximate Gradient in Expectation} \proof[Proof of Lemma \ref{lm_expected_columwise_update}] With the result of Lemma \ref{lm_sign_recovery_x} in hand, we can now study the expected update direction $g^s = \mathbb{E}[(A^sx - y)\textnormal{sgn}(x)^T]$. 
Recall that $A^s$ is the update at the $s$-th iteration and $x \triangleq \textnormal{threshold}_{C/2}((A^{s})^Ty)$. Based on the generative model, denote $p_i = \mathbb{E}[x_i^*\textnormal{sgn}(x_i^*) | i \in S]$, $q_i = \mathbb{P}[i \in S]$ and $q_{ij} = \mathbb{P}[i, j \in S]$. Throughout this section, we use $\zeta$ to denote any vector whose norm is negligible, although it may differ from one appearance to the next. $A_{-i}$ denotes the sub-matrix of $A$ with its $i$-th column removed. To avoid cluttering the notation, we drop the superscript $s$ from $A^s$. Denote by $\mathcal{F}_{x^*}$ the event under which the support of $x$ is the same as that of $x^*$, and by $\bar{\mathcal{F}}_{x^*}$ its complement. In other words, $\textbf{1}_{\mathcal{F}_{x^*}} = \bm{1}[\textnormal{sgn}(x)=\textnormal{sgn}(x^*)]$ and $\textbf{1}_{\mathcal{F}_{x^*}} + \textbf{1}_{\bar{\mathcal{F}}_{x^*}} = 1$. \begin{align*} \cg{i}^s = \mathbb{E}[(Ax - y)\textnormal{sgn}(x_i)] &= \mathbb{E}[(Ax-y)\textnormal{sgn}(x_i)\textbf{1}_{\mathcal{F}_{x^*}}] \pm \zeta \end{align*} Note that $y = A^*x^* + \varepsilon$ and that, under $\mathcal{F}_{x^*}$, $Ax = \cA{S} x_S = \cA{S}\cAS^Ty = \cA{S}\cAS^TA^*x^* + \cB{S}\cBS^T\varepsilon$. 
Using the independence of $\varepsilon$ and $x^*$ to get rid of the noise term, we get \begin{align*} \cg{i}^s &= \mathbb{E}[(\cB{S}\cBS^T - I_n)A^*x^*\textnormal{sgn}(x_i)\textbf{1}_{\mathcal{F}_{x^*}}] + \mathbb{E}[(\cB{S}\cBS^T - I_n)\varepsilon \textnormal{sgn}(x_i)\textbf{1}_{\mathcal{F}_{x^*}}] \pm \zeta \\ &= \mathbb{E}[(\cB{S}\cBS^T - I_n)A^*x^* \textnormal{sgn}(x_i)\textbf{1}_{\mathcal{F}_{x^*}}] \pm \zeta \quad \text{(independence of $\varepsilon$ and the $x$'s)} \\ &= \mathbb{E}[(\cB{S}\cBS^T - I_n)A^*x^* \textnormal{sgn}(x^*_i)(1 - \textbf{1}_{\bar{\mathcal{F}}_{x^*}})] \pm \zeta \quad \text{(under the event $\mathcal{F}_{x^*}$)} \\ &= \mathbb{E}[(\cB{S}\cBS^T - I_n)A^*x^* \textnormal{sgn}(x^*_i)] \pm \zeta \end{align*} Recall from the generative model assumptions that $S = \textnormal{supp}(x^*)$ is random and the entries of $x^*$ are pairwise independent given the support, so \begin{align*} \cg{i}^s &= \mathbb{E}_S\mathbb{E}_{x^*|S}[(\cB{S}\cBS^T - I_n)A^*x^* \textnormal{sgn}(x^*_i)] \pm \zeta \\ &= p_i\mathbb{E}_{S, i \in S}[(\cB{S}\cBS^T - I_n)\cA{i}^*] \pm \zeta \\ &= p_i\mathbb{E}_{S, i \in S}[(\cB{i}\cBi^T - I_n)\cA{i}^*] + p_i\mathbb{E}_{S, i\in S}[\sum_{l\in S, l \neq i}\cB{l}\cB{l}^T\cA{i}^*] \pm \zeta \\ &= p_iq_i(\cB{i}\cBi^T - I_n)\cA{i}^* + p_i\sum_{l \in [m], l \neq i}q_{il}\cB{l}\cB{l}^T\cA{i}^* \pm \zeta \\ &= p_iq_i(\lambda_i\cB{i} - \cA{i}^*) + p_i\cB{-i}\textnormal{diag}(q_{ij})\cB{-i}^T\cA{i}^* \pm \zeta \end{align*} where $\lambda_i^s = \langle \cA{i}^s, \cA{i}^*\rangle$. Let $\xi_i^s = A_{R, -i}\textnormal{diag}(q_{ij})\cB{-i}^T\cA{i}^*/q_i$, where $\textnormal{diag}(q_{ij})$ runs over $j \neq i$. Restricting to the rows in $R = \textnormal{supp}(\cA{i}^s)$, we obtain the expression of the expected approximate gradient at iteration $s$: \begin{equation} \label{eq_expected_columwise_update} g_{R, i}^s = p_iq_i(\lambda_i\AR{i}^s - \AR{i}^* + \xi_i^s) \pm \zeta_{R}. \end{equation} What remains is to bound the norms of $\xi_i^s$ and $\zeta$. 
We have $\norm{A_{R, -i}^s} \leq \norm{A_{-i}^s} \leq O(\sqrt{m/n})$ \text{w.h.p.}\ Then, along with the fact that $\norm{\cA{i}^*} = 1$, we can bound $\norm{\xi_i^s}$: \begin{equation} \norm{\xi_i^s} \leq \norm{A_{R, -i}^s} \max_{j \neq i} \frac{q_{ij}}{q_i} \norm{A_{-i}^s} \leq O(k/n).\label{eqn:eps_norm} \end{equation} Next, we show that the norm of $\zeta$ is negligible. Indeed, $\mathcal{F}_{x^*}$ happens with very high probability, so it suffices to bound the norm of $(Ax - y)\textnormal{sgn}(x_i)$, which is done using Lemma \ref{cl_norm_bound_ghi} and Lemma \ref{aux_lm_pull_out_prob_bound} in Section \ref{sample_complexity}. This concludes the proof of Lemma \ref{lm_expected_columwise_update}. \qed \section{Auxiliary Lemma} \label{appdx_assumption} \begin{Claim}[Maximal row $\ell_2$-norm] \label{cl_bound_operator_norm_true_A} Given that $\norm{A^*}^2_F = m$ and $\norm{A^*} = O(\sqrt{m/n})$, then $\norm{A^{*T}}_{1, 2} = \Theta(\sqrt{m/n})$. \end{Claim} \proof Recall the definition of the operator norm: $$\norm{A^{*T}}_{1, 2} = \sup_{x \neq 0} \frac{\norm{A^{*T}x}}{\norm{x}_1} \leq \sup_{x \neq 0} \frac{\norm{A^{*T}x}}{\norm{x}} = \norm{A^{*T}} = O(\sqrt{m/n}). $$ On the other hand, $\norm{A^{*T}}_{1, 2}$ equals the maximal $\ell_2$-norm among the rows of $A^*$, which is at least the root-mean-square row norm $\norm{A^*}_F/\sqrt{n}$. Since $\norm{A^*}^2_F = m$, this gives $\norm{A^{*T}}_{1, 2} \geq \sqrt{m/n}$. Combining the two bounds, we have $\|A^{*T}\|_{1,2}=\Theta(\sqrt{m/n})$. \qed Along with Assumptions \textbf{A1} and \textbf{A3}, the above claim implies that the number of nonzero entries in each row is $O(r)$. This claim is an important ingredient in the analysis of our initialization algorithm in Section~\ref{init}. 
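The $\ell_1{\to}\ell_2$ operator norm in the claim above is attained at a standard basis vector, so it equals the largest $\ell_2$-norm among the rows of $A^*$; both inequalities used in the proof can be checked numerically (an illustrative sketch with arbitrary dimensions of our choosing):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 64, 96
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)    # unit-norm columns, so ||A||_F^2 = m

# ||A^T||_{1,2} = sup_{x != 0} ||A^T x||_2 / ||x||_1 is attained at some e_l,
# hence equals the maximal row l2-norm of A.
op_12 = np.linalg.norm(A, axis=1).max()

upper = np.linalg.norm(A, 2)                   # spectral norm ||A^T|| = ||A||
lower = np.linalg.norm(A, 'fro') / np.sqrt(n)  # ||A||_F / sqrt(n) = sqrt(m/n)
```

Here `lower <= op_12 <= upper` holds, matching the sandwich $\norm{A}_F/\sqrt{n} \leq \norm{A^T}_{1,2} \leq \norm{A}$ from the proof.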
\section{Analysis of Initialization Algorithm} \label{appdx_initialization_analysis} \subsection{Proof of Lemma \ref{lm_diagonal_entries_in_expectation}} \label{prf_lm_diagonal_entries} The proof of Lemma \ref{lm_diagonal_entries_in_expectation} can be divided into three steps: 1) we first establish useful properties of $\beta$ with respect to $\alpha$; 2) we then explicitly derive $e_l$ in terms of the generative model parameters and $\beta$; and 3) we finally bound the error terms in $E$ based on the first result and appropriate assumptions. \begin{Claim} \label{cl_bound_subgaussian_rv} In the generative model, $\norm{x^*} \leq \widetilde{O}(\sqrt{k})$ and $\norm{\varepsilon} \leq \widetilde{O}(\sigma_\varepsilon\sqrt{n})$ with high probability. \end{Claim} \proof The claim follows directly from the fact that $x^*$ is a $k$-sparse random vector whose nonzero entries are independent sub-Gaussian random variables with variance $1$, while $\varepsilon$ has $n$ independent Gaussian entries of variance $\sigma_\varepsilon^2$. \qedhere Despite its simplicity, this claim will be used in many proofs throughout the paper. Note also that in this section we calculate expectations over $y$ and often refer to probabilistic bounds (\text{w.h.p.}) that hold under the randomness of $u$ and $v$. \begin{Claim} Suppose that $u = A^*\alpha + \varepsilon_u$ is a random sample and $U = \textnormal{supp}(\alpha)$. Let $\beta = A^{*T}u$; then, \text{w.h.p.}, we have (a) $\abs{\beta_i - \alpha_i} \leq \frac{\mu k\log n}{\sqrt{n}} + \sigma_\varepsilon\log n$ for each $i$ and (b) $\norm{\beta} \leq \widetilde{O}(\sqrt{k} + \sigma_\varepsilon\sqrt{n})$. \label{cl_bounds_of_beta} \end{Claim} \proof The proof mostly follows Claim 36 of \cite{arora15-neural}, with additional consideration of the noise $\varepsilon_u$. 
Write $W = U \backslash \{i\}$ and observe that $$\abs{\beta_i - \alpha_i} = \abs{\cA{i}^{*T}\cA{W}^*\alpha_W + \cA{i}^{*T}\varepsilon_u} \leq \abs{\inprod{\cA{W}^{*T}\cA{i}^*}{\alpha_W}} + \abs{\inprod{\cA{i}^*}{\varepsilon_u}}$$ Since $A^*$ is $\mu$-incoherent, $\norm{\cA{i}^{*T}\cA{W}^*} \leq \mu\sqrt{k/n}$. Moreover, $\alpha_W$ has $k-1$ independent sub-Gaussian entries of variance $1$; therefore $\abs{\inprod{\cA{W}^{*T}\cA{i}^*}{\alpha_W} } \leq \frac{\mu k\log n}{\sqrt{n}}$ with high probability. Also recall that $\varepsilon_u$ has independent Gaussian entries of variance $\sigma_\varepsilon^2$, so $\cA{i}^{*T}\varepsilon_u$ is Gaussian with the same variance (since $\norm{\cA{i}^*} = 1$). Hence $\abs{\cA{i}^{*T}\varepsilon_u} \leq \sigma_\varepsilon\log n$ with high probability. Consequently, $\abs{\beta_i - \alpha_i} \leq \frac{\mu k\log n}{\sqrt{n}} + \sigma_\varepsilon\log n$, which is the first part of the claim. Next, in order to bound $\norm{\beta}$, we write $$\norm{\beta} = \norm{A^{*T}\cA{U}^*\alpha_U + A^{*T}\varepsilon_u} \leq \norm{A^*}\norm{\cA{U}^*}\norm{\alpha_U} + \norm{A^*}\norm{\varepsilon_u}$$ Using Claim \ref{cl_bound_subgaussian_rv} to get $\norm{\alpha_U} \leq \widetilde{O}(\sqrt{k})$ and $\norm{\varepsilon_u} \leq \widetilde{O}(\sigma_\varepsilon\sqrt{n})$ \text{w.h.p.}, and further noticing that $\norm{\cA{U}^*} \leq \norm{A^*} \leq O(1)$, we complete the proof of the second part. \qed Claim \ref{cl_bounds_of_beta} shows that the difference between $\beta_i$ and $\alpha_i$ is bounded above by $O^*(1/\log^2 n)$ \text{w.h.p.}\ if $\mu = O^*(\frac{\sqrt{n}}{k\log^3n})$. Therefore, \text{w.h.p.}, $C - o(1) \leq \abs{\beta_i} \leq \abs{\alpha_i} + o(1) \leq O(\log m)$ for $i \in U$ and $\abs{\beta_i} \leq O^*(1/\log^2n)$ otherwise. On the other hand, under Assumption \textbf{B4}, $\norm{\beta} \leq \widetilde{O}(\sqrt{k})$ \text{w.h.p.}\ We will use these results multiple times in the next few proofs. 
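The entrywise closeness of $\beta = A^{*T}u$ to $\alpha$ asserted in Claim \ref{cl_bounds_of_beta} is easy to observe numerically (an illustrative sketch; dimensions and noise level are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 400, 100, 5
A_star = rng.standard_normal((n, m))
A_star /= np.linalg.norm(A_star, axis=0)   # unit-norm, incoherent w.h.p.

alpha = np.zeros(m)
U = rng.choice(m, size=k, replace=False)
alpha[U] = rng.choice([-1.0, 1.0], size=k)
eps_u = 0.01 * rng.standard_normal(n)      # sigma_eps = O(1/sqrt(n))
u = A_star @ alpha + eps_u                 # the random sample u
beta = A_star.T @ u                        # beta = A*^T u

dev = np.abs(beta - alpha).max()           # worst-case entrywise deviation
```

Each deviation $\beta_i - \alpha_i$ is a small combination of cross-column inner products and noise, so `dev` stays far below the magnitude $C$ of the nonzero entries of $\alpha$.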
\proof[Proof of Lemma \ref{lm_diagonal_entries_in_expectation}] We decompose $e_l$ into smaller parts so that the generative model can be exploited. \begin{align*} \label{eq_full_decomposition_dl} e_l &= \mathbb{E}[\inprod{y}{u}\inprod{y}{v} y_l^2] = \mathbb{E}[\inprod{A^*x^* + \varepsilon}{u} \inprod{A^*x^* + \varepsilon}{v} (\inprod{\rA{l}^*}{x^*} + \varepsilon_l)^2] \nonumber \\ &= \mathbb{E}\bigl[\bigl\{\inprod{x^*}{\beta} \inprod{x^*}{\beta'} + x^{*T}(\beta v^T + \beta'u^T)\varepsilon + u^T\varepsilon\varepsilon^Tv\bigr\}\bigl\{\inprod{\rA{l}^*}{x^*}^2 + 2\inprod{\rA{l}^*}{x^*}\varepsilon_l + \varepsilon_l^2\bigr\}\bigr] \nonumber \\ &= E_1 + E_2 + \dots + E_9 \end{align*} where the terms are \begin{align} \begin{split} &E_1 = \mathbb{E}[\inprod{x^*}{\beta} \inprod{x^*}{\beta'} \inprod{\rA{l}^*}{x^*}^2] \\ &E_2 = 2\mathbb{E}[\inprod{x^*}{\beta} \inprod{x^*}{\beta'} \inprod{\rA{l}^*}{x^*}\varepsilon_l] \\ &E_3 = \mathbb{E}[\inprod{x^*}{\beta} \inprod{x^*}{\beta'}\varepsilon_l^2] \\ &E_4 = \mathbb{E}\bigl[\inprod{\rA{l}^*}{x^*}^2 x^{*T}(\beta v^T + \beta'u^T)\varepsilon \bigr] \\ &E_5 = \mathbb{E}\bigl[\inprod{\rA{l}^*}{x^*} x^{*T}(\beta v^T + \beta'u^T)\varepsilon \varepsilon_l \bigr] \\ &E_6 = \mathbb{E}\bigl[x^{*T}(\beta v^T + \beta'u^T)\varepsilon \varepsilon_l^2 \bigr] \\ &E_7 = \mathbb{E}[u^T\varepsilon\varepsilon^Tv\inprod{\rA{l}^*}{x^*}^2] \\ &E_8 = 2\mathbb{E}[u^T\varepsilon\varepsilon^Tv\inprod{\rA{l}^*}{x^*}\varepsilon_l] \\ &E_9 = \mathbb{E}[u^T\varepsilon\varepsilon^Tv\varepsilon_l^2] \end{split} \end{align} Because $x^*$ and $\varepsilon$ are independent and zero-mean, $E_2$ and $E_4$ are clearly zero. Moreover, $$E_6 = \mathbb{E}[x^{*T}](\beta v^T + \beta'u^T) \mathbb{E}[\varepsilon \varepsilon_l^2] = 0$$ due to the fact that $\mathbb{E}[x^*] = 0$, and also $\mathbb{E}[\varepsilon_j\varepsilon_l^2] = 0$ for $j\neq l$ and $\mathbb{E}[\varepsilon_l^3] = 0$. 
Also, $$E_8 = 2\rA{l}^{*T}\mathbb{E}[x^*]\,\mathbb{E}\bigl[ u^T\varepsilon\varepsilon^Tv \varepsilon_l\bigr] = 0.$$ We bound the remaining terms separately in the following claims. \begin{Claim} \label{cl_bound_E1} In the decomposition \eqref{eq_full_decomposition_dl}, $E_1$ is of the form \begin{align*} E_1 &= \sum_{i \in U \cap V}q_ic_i\beta_i\beta'_iA^{*2}_{li} + \sum_{i \notin U \cap V}q_ic_i\beta_i\beta'_iA^{*2}_{li} + \sum_{j \neq i} q_{ij}(\beta_i\beta'_iA^{*2}_{lj} + 2\beta_i\beta'_jA^*_{li}A^*_{lj}) \end{align*} where all the terms except $\sum_{i \in U \cap V}q_ic_i\beta_i\beta'_iA^{*2}_{li}$ have magnitude at most $O^*(k/m\log^2n)$ \text{w.h.p.} \end{Claim} \proof Using the generative model in Assumptions \textbf{B1}-\textbf{B4}, we have \begin{align*} E_1 &= \mathbb{E}[\inprod{x^*}{\beta} \inprod{x^*}{\beta'} \inprod{\rA{l}^*}{x^*}^2] \\ &= \mathbb{E}_S\big[\mathbb{E}_{x^*|S}[\sum_{i \in S}\beta_ix_i^*\sum_{i \in S}\beta'_ix_i^*\big(\sum_{i\in S}A^*_{li}x_i^*\big)^2]\big] \\ &= \sum_{i\in [m]}q_ic_i\beta_i\beta'_iA^{*2}_{li} + \sum_{i,j \in [m], j \neq i} q_{ij}(\beta_i\beta'_iA^{*2}_{lj} + 2\beta_i\beta'_jA^*_{li}A^*_{lj}) \\ &= \sum_{i \in U \cap V}q_ic_i\beta_i\beta'_iA^{*2}_{li} + \sum_{i \notin U \cap V}q_ic_i\beta_i\beta'_iA^{*2}_{li} + \sum_{j \neq i} q_{ij}(\beta_i\beta'_iA^{*2}_{lj} + 2\beta_i\beta'_jA^*_{li}A^*_{lj}), \end{align*} where we have used $q_i = \mathbb{P}[i \in S]$, $q_{ij} = \mathbb{P}[i, j \in S]$, $c_i = \mathbb{E}[x_i^4|i \in S]$ and Assumptions \textbf{B1}-\textbf{B4}. We now prove that the last three terms are upper bounded as claimed. The key observation is that all these terms involve a quadratic form in the $l$-th row $\rA{l}^*$, whose norm is bounded by $O(1)$ (by Claim \ref{cl_bound_operator_norm_true_A} and Assumption \textbf{A4}). Moreover, $\abs{\beta_i\beta_i'}$ is relatively small for $i \notin U \cap V$, while $q_{ij} = \Theta(k^2/m^2)$. 
For the second term, we apply Claim \ref{cl_bounds_of_beta} for $i \in [m]\backslash (U \cap V)$ to bound $\abs{\beta_i\beta_i'}$. Assume, without loss of generality, that $\alpha_i = 0$ and $\alpha_i' \neq 0$; then with high probability \begin{align*} \abs{\beta_i\beta_i'} &\leq \abs{(\beta_i - \alpha_i)(\beta'_i - \alpha'_i)} + \abs{\beta_i\alpha'_i} \leq O^*(1/\log n) \end{align*} Using the bound $q_ic_i = \Theta(k/m)$, we have, \text{w.h.p.}, \begin{equation*} \abs[\Big]{\sum_{i \notin U \cap V}q_ic_i\beta_i\beta'_iA^{*2}_{li}} \leq \max_{i}\abs{q_ic_i\beta_i\beta'_i} \sum_{i \notin U \cap V}A^{*2}_{li} \leq \max_{i}\abs{q_ic_i\beta_i\beta'_i}\norm{A^{*T}}^2_{1,2} \leq O^*(k/m\log n). \end{equation*} For the third term, we make use of the bounds on $\norm{\beta}$ and $\norm{\beta'}$ from the previous claim, namely $\norm{\beta}\norm{\beta'} \leq \widetilde{O}(k)$ \text{w.h.p.}, and of $q_{ij} = \Theta(k^2/m^2)$. More precisely, \text{w.h.p.}, \begin{align*} \abs[\Big]{\sum_{j \neq i} q_{ij}\beta_i\beta'_iA^{*2}_{lj}} &= \abs[\Big]{\sum_i\beta_i\beta'_i\sum_{j\neq i}q_{ij}A^{*2}_{lj}} \leq \sum_i\abs{\beta_i\beta'_i}\bigl(\sum_{j\neq i}q_{ij}A^{*2}_{lj}\bigr) \\ &\leq (\max_{i\neq j}q_{ij}) \sum_i\abs{\beta_i\beta_i'} \Bigl( \sum_j A^{*2}_{lj} \Bigr) \leq (\max_{i\neq j}q_{ij}) \norm{\beta}\norm{\beta'}\norm{A^{*T}}^2_{1,2} \leq \widetilde{O}(k^3/m^2), \end{align*} where the second-to-last inequality follows from the Cauchy-Schwarz inequality. For the last term, we write it in matrix form as $\sum_{j \neq i} q_{ij} \beta_i\beta'_jA^*_{li}A^*_{lj} = \rA{l}^{*T}Q_\beta\rA{l}^*$, where $(Q_\beta)_{ij} = q_{ij} \beta_i\beta'_j$ for $i \neq j$ and $(Q_\beta)_{ii} = 0$. 
Then \begin{align*} \abs{\rA{l}^{*T}Q_\beta\rA{l}^*} \leq \norm{Q_\beta}\norm{\rA{l}^*}^2 \leq \norm{Q_\beta}_F \norm{A^{*T}}^2_{1,2}, \end{align*} where $ \norm{Q_\beta}^2_F = \sum_{i\neq j} q_{ij}^2 \beta_i^2(\beta'_j)^2 \leq (\max_{i\neq j}q_{ij}^2)\sum_i \beta_i^2\sum_j(\beta'_j)^2 \leq (\max_{i\neq j}q_{ij}^2) \norm{\beta}^2 \norm{\beta'}^2$. Ultimately, $$\abs[\Big]{\sum_{j \neq i} q_{ij} \beta_i\beta'_jA^*_{li}A^*_{lj}} \leq (\max_{i\neq j}q_{ij}) \norm{\beta} \norm{\beta'}\norm{A^{*T}}^2_{1,2} \leq \widetilde{O}(k^3/m^2).$$ Under the assumption $k = O^*(\frac{\sqrt{n}}{\log n})$, we have $\widetilde{O}(k^3/m^2) \leq O^*(k/m\log^2 n)$. As a result, the last two terms are bounded by the same amount $O^*(k/m\log n)$ \text{w.h.p.}, which completes the proof of the claim. \qed \begin{Claim} \label{cl_bound_Es} In the decomposition (\ref{eq_full_decomposition_dl}), $\abs{E_3}$, $\abs{E_5}$, $\abs{E_7}$ and $\abs{E_9}$ are at most $O^*(k/m\log^2n)$. \end{Claim} \proof Recall that $\mathbb{E}[x_i^2|S] = 1$ and $q_i = \mathbb{P}[i \in S] = \Theta(k/m)$ for $S = \textnormal{supp}(x^*)$; then \begin{align*} E_3 &= \mathbb{E}[\inprod{x^*}{\beta} \inprod{x^*}{\beta'}\varepsilon_l^2] = \sigma_\varepsilon^2\mathbb{E}_S\bigl[\mathbb{E}_{x^*|S}[\sum_{i, j \in S}\beta_{i}\beta'_{j}x_{i}^*x_{j}^*]\bigr] \\ &= \sigma_\varepsilon^2\mathbb{E}_S[\sum_{i \in S}\beta_i\beta'_i] = \sum_{i}\sigma_\varepsilon^2q_i\beta_i\beta'_i \end{align*} Denote $Q = \textnormal{diag}(q_1, q_2, \dots, q_m)$; then $\abs{E_3} = \abs{\sigma_\varepsilon^2\inprod{Q\beta}{\beta'}} \leq \sigma_\varepsilon^2\norm{Q}\norm{\beta}\norm{\beta'} \leq \widetilde{O}(\sigma_\varepsilon^2 k^2/m) = \widetilde{O}(k^3/mn)$, where we have used $\norm{\beta} \leq \widetilde{O}(\sqrt{k})$ \text{w.h.p.}\ and $\sigma_\varepsilon \leq O(1/\sqrt{n})$. 
For convenience, we handle the seventh term before $E_5$: \begin{align*} E_7 = \mathbb{E}[u^T\varepsilon\varepsilon^Tv\inprod{\rA{l}^*}{x^*}^2] = \mathbb{E}[\inprod{\rA{l}^*}{x^*}^2]u^T\mathbb{E}[\varepsilon\varepsilon^T]v = \sum_{i}\sigma_\varepsilon^2\inprod{u}{v}q_iA^{*2}_{li} = \sigma_\varepsilon^2\inprod{u}{v}\rA{l}^{*T}Q\rA{l}^* \end{align*} To bound this term, we use Claim \ref{cl_bound_y} in Appendix \ref{sample_complexity}, which yields $\norm{u} = \norm{A^*\alpha + \varepsilon_u} \leq \widetilde{O}(\sqrt{k})$ \text{w.h.p.}\ and $\inprod{u}{v} \leq \widetilde{O}(\sqrt{k})$ \text{w.h.p.}\ Consequently, $\abs{E_7} \leq \sigma_\varepsilon^2 \norm{Q}\norm{\rA{l}^*}^2\abs{\inprod{u}{v}} \leq \widetilde{O}(k^2/mn)$ because $\norm{\rA{l}^*}^2 \leq O(m/n)$ and $\sigma_\varepsilon \leq O(1/\sqrt{n})$. Now, the fifth term $E_5$ is expressed as follows: \begin{align*} E_5 &= \mathbb{E}\bigl[\inprod{A^*_{l\cdot}}{x^*} x^{*T}(\beta v^T + \beta'u^T)\varepsilon \varepsilon_l \bigr] \\ &= \rA{l}^{*T}\mathbb{E}\bigl[x^*x^{*T}\bigr] (\beta v^T + \beta'u^T) \mathbb{E}[\varepsilon \varepsilon_l] \\ &= \sigma_\varepsilon^2\rA{l}^{*T}Q(v_l\beta + u_l\beta') \end{align*} Observe that $\abs{E_5} \leq \sigma_\varepsilon^2\norm{\rA{l}^{*T}}\norm{Q(v_l\beta + u_l\beta')} \leq \sigma_\varepsilon^2\norm{\rA{l}^{*T}}\norm{Q}\norm{v_l\beta + u_l\beta'}$ and that $\norm{v_l\beta + u_l\beta'} \leq 2\norm{u}\norm{\beta} \leq \widetilde{O}(k)$ \text{w.h.p.}, using the bounds $\norm{u} \leq \widetilde{O}(\sqrt{k})$ and $\norm{\beta} \leq \widetilde{O}(\sqrt{k})$ from Claim \ref{cl_bounds_of_beta}; hence $E_5$ is bounded by $\widetilde{O}(k^2/mn)$. The last term is \begin{align*} E_9 &= \mathbb{E}[u^T\varepsilon\varepsilon^Tv\varepsilon_l^2] = u^T\mathbb{E}\bigl[\varepsilon\varepsilon^T \varepsilon_l^2\bigr]v = \sigma_\varepsilon^4\inprod{u}{v} + (\mathbb{E}[\varepsilon_l^4] - \sigma_\varepsilon^4)u_lv_l \end{align*} because the entries of $\varepsilon$ are independent; moreover, $\mathbb{E}[\varepsilon_l^4] \leq 9\sigma_\varepsilon^4$.
Therefore, $\abs{E_9} \leq 9\sigma_\varepsilon^4\norm{u}\norm{v} \leq \widetilde{O}(k^2/n^2)$. Since $m=O(n)$ and $k \leq O^*(\frac{\sqrt{n}}{\log n})$, we obtain the same bound $O^*(k/m\log^2n)$ for $\abs{E_3}$, $\abs{E_5}$, $\abs{E_7}$ and $\abs{E_9}$, and conclude the proof of the claim. \qed Combining the bounds from Claims \ref{cl_bound_E1} and \ref{cl_bound_Es} for every term in \eqref{eq_full_decomposition_dl}, we finish the proof of Lemma \ref{lm_diagonal_entries_in_expectation}. \qedhere \subsection{Proof of Lemma \ref{lm_reweighted_cov_matrix_in_expectation}} \label{sec:prf_lm_reweighted_cov_matrix_in_expectation} We prove this lemma using the same strategy as in the proof of Lemma \ref{lm_diagonal_entries_in_expectation}. \begin{align*} \label{eq_full_decomposition_Muv} M_{u,v} &\triangleq \mathbb{E}[\inprod{y}{u}\inprod{y}{v} y_Ry_R^T] \\ & = \mathbb{E}[\inprod{A^*x^* + \varepsilon}{u} \inprod{A^*x^* + \varepsilon}{v} (\rA{R}^*x^* + \varepsilon_R)(\rA{R}^*x^* + \varepsilon_R)^T] \nonumber \\ &= \mathbb{E}\bigl[\bigl\{\inprod{x^*}{\beta} \inprod{x^*}{\beta'} + x^{*T}(\beta v^T + \beta'u^T)\varepsilon + u^T\varepsilon\varepsilon^Tv\bigr\}\bigl\{\rA{R}^*x^*x^{*T}\rA{R}^{*T} + \rA{R}^*x^*\varepsilon_R^T + \varepsilon_Rx^{*T}\rA{R}^{*T} + \varepsilon_R\varepsilon_R^T \bigr\}\bigr] \nonumber \\ &= M_1 + \dots + M_8, \end{align*} in which only the nontrivial terms are kept, namely \begin{align} \begin{split} &M_1 = \mathbb{E}[\inprod{x^*}{\beta} \inprod{x^*}{\beta'}\rA{R}^*x^*x^{*T}\rA{R}^{*T}] \\ &M_2 = \mathbb{E}[\inprod{x^*}{\beta}\inprod{x^*}{\beta'}\varepsilon_R\varepsilon_R^T] \\ &M_3 = \mathbb{E}[x^{*T}(\beta v^T + \beta'u^T)\varepsilon\rA{R}^*x^*\varepsilon_R^T] \\ &M_4 = \mathbb{E}[x^{*T}(\beta v^T + \beta'u^T)\varepsilon\varepsilon_Rx^{*T}\rA{R}^{*T}] \\ &M_5 = \mathbb{E}[u^T\varepsilon\varepsilon^Tv\rA{R}^*x^*x^{*T}\rA{R}^{*T}] \\ &M_6 = \mathbb{E}[u^T\varepsilon\varepsilon^Tv\rA{R}^*x^*\varepsilon_R^T] \\ &M_7 =
\mathbb{E}[u^T\varepsilon\varepsilon^Tv\varepsilon_R^Tx^{*T}\rA{R}^{*T}] \\ &M_8 = \mathbb{E}[u^T\varepsilon\varepsilon^Tv\varepsilon_R\varepsilon_R^T] \end{split} \end{align} By swapping inner product terms and taking advantage of independence, we can show that $M_6 = \mathbb{E}[\rA{R}^*x^*u^T\varepsilon\varepsilon^Tv\varepsilon_R^T] = 0$ and $M_7 = \mathbb{E}[u^T\varepsilon\varepsilon^Tv\varepsilon_R^Tx^{*T}\rA{R}^{*T}] = 0$. The remaining terms are bounded in the next claims. \begin{Claim} In the decomposition \eqref{eq_full_decomposition_Muv}, \label{cl_bound_M1} \begin{align*} M_1 = \sum_{i \in U \cap V}q_ic_i\beta_i\beta'_i\AR{i}^*\AR{i}^{*T} + E'_1 + E'_2 + E'_3 \end{align*} where $E'_1 = \sum_{i \notin U \cap V}q_ic_i\beta_i\beta'_i\AR{i}^*\AR{i}^{*T}$, $E'_2 = \sum_{i \neq j} q_{ij}\beta_i\beta'_i\AR{j}^*\AR{j}^{*T}$ and $E'_3 = \sum_{i \neq j} q_{ij}(\beta_i\AR{i}^*\beta'_j\AR{j}^{*T} + \beta'_i\AR{i}^*\beta_j\AR{j}^{*T})$ have norms bounded by $O^*(k/m\log n)$. \end{Claim} \proof The expression for $M_1$ is obtained in the same way as that for $E_1$ in the proof of Lemma \ref{lm_diagonal_entries_in_expectation}. To prove the claim, we bound all the terms with respect to the spectral norm of $\rA{R}^*$ and make use of Assumption \textbf{A4} to find the exact upper bound. For the first term $E'_1$, rewrite $E'_1 = \AR{S}^*D_1\AR{S}^{*T}$, where $S = [m] \backslash (U \cap V)$ and $D_1$ is a diagonal matrix whose entries are $q_ic_i\beta_i\beta_i'$. Clearly, $\norm{D_1} \leq \max_{i \in S}\abs{q_ic_i\beta_i\beta_i'} \leq O^*(k/m\log n)$ as shown in Claim \ref{cl_bound_E1}; then $$\norm{E'_1} \leq \max_{i \in S}\abs{q_ic_i\beta_i\beta_i'}\norm{\AR{S}^*}^2 \leq \max_{i \in S}\abs{q_ic_i\beta_i\beta_i'}\norm{\rA{R}^*}^2 \leq O^*(k/m\log n)$$ where $\norm{\AR{S}^*} \leq \norm{\rA{R}^*} \leq O(1)$.
The second term $E'_2$ involves a sum of the positive semidefinite matrices $\AR{j}^*\AR{j}^{*T}$, and $\norm{\beta}, \norm{\beta'} \leq O(\sqrt{k}\log n)$ \text{w.h.p.}; then \begin{align*} E'_2 = \sum_{i \neq j} q_{ij}\beta_i\beta'_i\AR{j}^*\AR{j}^{*T} &\preceq \max_{i\neq j}q_{ij} \Bigl( \sum_i\beta_i\beta_i' \Bigr) \Bigl( \sum_j \AR{j}^*\AR{j}^{*T} \Bigr) \preceq (\max_{i\neq j}q_{ij}) \norm{\beta} \norm{\beta'} \rA{R}^*\rA{R}^{*T} \end{align*} which implies that $\norm{E'_2} \leq (\max_{i\neq j}q_{ij}) \norm{\beta} \norm{\beta'} \norm{\rA{R}^*}^2 \leq \widetilde{O}(k^3/m^2)$. Observe that $E_3'$ has the same form as the last term in Claim \ref{cl_bound_E1}; indeed, $E_3' = \rA{R}^*(Q_{\beta} + Q_{\beta}^T)\rA{R}^{*T}$. Then $$\norm{E'_3} \leq 2\norm{Q_{\beta}}\norm{\rA{R}^*}^2 \leq 2(\max_{i\neq j}q_{ij}) \norm{\beta} \norm{\beta'}\norm{\rA{R}^*}^2 \leq \widetilde{O}(k^3/m^2)$$ By Claim \ref{cl_bounds_of_beta}, both $\norm{\beta}$ and $\norm{\beta'}$ are bounded by $O(\sqrt{k}\log n)$, and $k \leq O^*(\sqrt{n}/\log n)$; this completes the proof of the claim. \qed \begin{Claim} \label{cl_bound_Ms} In the decomposition \eqref{eq_full_decomposition_Muv}, $M_2$, $M_3$, $M_4$, $M_5$ and $M_8$ have norms bounded by $O^*(k/m\log n)$. \end{Claim} \proof Recalling the definition of $Q$ from Claim \ref{cl_bound_Es} and using the fact that $\mathbb{E}[x^*x^{*T}] = Q$, we get $M_2 = \mathbb{E}[\inprod{x^*}{\beta}\inprod{x^*}{\beta'}\varepsilon_R\varepsilon_R^T] = \sum_{i}\sigma_\varepsilon^2q_i\beta_i\beta'_iI_r$. Then, $\norm{M_2} \leq \sigma_\varepsilon^2\max_{i}q_i\norm{\beta}\norm{\beta'} \leq O(\sigma_\varepsilon^2k^2\log^2n/m)$. The next three terms all involve $\rA{R}^*$, whose norm is bounded according to Assumption \textbf{A4}.
Specifically, \begin{align*} M_3 &= \mathbb{E}[x^{*T}(\beta v^T + \beta'u^T)\varepsilon\rA{R}^*x^*\varepsilon_R^T] = \mathbb{E}[\rA{R}^*x^*x^{*T}(\beta v^T + \beta'u^T)\varepsilon\varepsilon_R^T] \\ &= \rA{R}^*\mathbb{E}[x^*x^{*T}](\beta v^T + \beta'u^T)\mathbb{E}[\varepsilon\varepsilon_R^T] \\ &= \rA{R}^*Q(\beta v^T + \beta'u^T)\mathbb{E}[\varepsilon\varepsilon_R^T], \end{align*} and \begin{align*} M_4 &= \mathbb{E}[x^{*T}(\beta v^T + \beta'u^T)\varepsilon\varepsilon_Rx^{*T}\rA{R}^{*T}] = \mathbb{E}[\varepsilon_R\varepsilon^T(v\beta^T + u\beta'^T)x^*x^{*T}\rA{R}^{*T}] \\ &= \mathbb{E}[\varepsilon_R\varepsilon^T](v\beta^T + u\beta'^T)\mathbb{E}[x^*x^{*T}]\rA{R}^{*T} \\ &= \mathbb{E}[\varepsilon_R\varepsilon^T](v\beta^T + u\beta'^T)Q\rA{R}^{*T}, \end{align*} and the fifth term $M_5 = \mathbb{E}[u^T\varepsilon\varepsilon^Tv\rA{R}^*x^*x^{*T}\rA{R}^{*T}] = \sigma_\varepsilon^2u^Tv\rA{R}^*\mathbb{E}[x^*x^{*T}]\rA{R}^{*T} = \sigma_\varepsilon^2u^Tv\rA{R}^*Q\rA{R}^{*T}$. We already have $\norm{\mathbb{E}[\varepsilon\varepsilon_R^T]} = \sigma_\varepsilon^2$, $\norm{Q} \leq O(k/m)$ and $\abs{u^Tv} \leq \widetilde{O}(k)$ (proof of Claim \ref{cl_bound_y}), so the remaining work is to bound $\norm{\beta v^T + \beta'u^T}$; the bound on $v\beta^T + u\beta'^T$ then follows directly. We have $\norm{\beta v^T}= \norm{A^{*T}uv^T} \leq \norm{A^*}\norm{u}\norm{v} \leq \widetilde{O}(k)$. Therefore, all three terms $M_3$, $M_4$ and $M_5$ are bounded in norm by $\widetilde{O}(\sigma_\varepsilon^2k^2/m) \leq \widetilde{O}(k^3/mn)$.
The remaining term is \begin{align*} M_8 &= \mathbb{E}[u^T\varepsilon\varepsilon^Tv\varepsilon_R\varepsilon_R^T] = \mathbb{E}[\bigl(\sum_{i, j}u_iv_j\varepsilon_i\varepsilon_j\bigr)\varepsilon_R\varepsilon_R^T] \\ &= \mathbb{E}[\bigl(\sum_{i\in R}u_iv_i\varepsilon_i^2\varepsilon_R\varepsilon_R^T\bigr)] + \mathbb{E}[\bigl(\sum_{i \neq j}u_iv_j\varepsilon_i\varepsilon_j\bigr)\varepsilon_R\varepsilon_R^T] \\ &= \sigma_\varepsilon^4u_Rv_R^T \end{align*} where $u_R = \rA{R}^*\alpha + (\varepsilon_u)_R$ and $v_R = \rA{R}^*\alpha' + (\varepsilon_v)_R$. We can see that $\norm{u_R} \leq \norm{\rA{R}^*}\norm{\alpha} + \norm{(\varepsilon_u)_R} \leq \widetilde{O}(\sqrt{k})$. Therefore, $\norm{M_8} \leq \widetilde{O}(\sigma_\varepsilon^4k) = \widetilde{O}(k^3/n^2)$. Since $m = O(n)$ and $k \leq O^*(\frac{\sqrt{n}}{\log n})$, we can bound all the above terms by $O^*(k/m\log n)$ and finish the proof of Claim \ref{cl_bound_Ms}. \qed Combining the results of Claims \ref{cl_bound_M1} and \ref{cl_bound_Ms}, we complete the proof of Lemma \ref{lm_reweighted_cov_matrix_in_expectation}. \section{Extensions of \citet{arora15-neural}} \label{appdx_improve_arora} \subsection{Sample complexity in noisy case} In this section, we study the sample complexity of the algorithms in~\citet{arora15-neural} in the presence of noise. While noise of order $\sigma_\varepsilon = O(1/\sqrt{n})$ does not change the sample complexity of the initialization algorithm, it affects that of the descent stage. The analysis involves producing a sharp bound for $\norm{\widehat{g}^s_{\bullet, i} - \cg{i}^s}$. \begin{Lemma} For a regular dictionary $A^*$, suppose $A^s$ is $(\delta_s, 2)$-near to $A^*$ with $\delta_s = O^*(1/\log n)$; then with high probability $\norm{\widehat{g}^s_{\bullet, i} - \cg{i}^s} \leq O(k/m)\cdot( o(\delta) + O(\sqrt{k/n}))$ when $p = \widetilde{\Omega}(m + \sigma_\varepsilon^2\frac{mn^2}{k})$.
\label{lm_concentration_of_gradient_regular_case} \end{Lemma} \proof This follows directly from Lemma~\ref{lm_concentration_of_gradient_sparse_case} with $r = n$.~\qedhere We tighten the original analysis to obtain the complexity $\widetilde{\Omega}(m)$ instead of $\widetilde{\Omega}(mk)$ for the noiseless case. Putting this together with the $p = \widetilde{\Omega}(mk)$ required by the initialization, we then have the overall sample complexity $\widetilde{O}(mk + \sigma_\varepsilon^2\frac{mn^2}{k})$ for the algorithms in~\citet{arora15-neural} in the noisy regime. \subsection{Extension of~\citet{arora15-neural}'s initialization algorithm for sparse case} We study a simple and straightforward extension of the initialization algorithm of~\citet{arora15-neural} for the sparse case. This extension is produced by adding an extra projection, and is described in Algorithm~\ref{alg_arora_modified}. The recovery of the support of $A^*$ is guaranteed by the following lemma: \begin{Lemma} Suppose that $z^* \in \mathbb{R}^n$ is $r$-sparse and its nonzero entries are at least $\tau$ in magnitude. Provided $z$ is $\delta$-close to $z^*$ and $z^0 = \mathcal{H}_r(z)$ with $\delta = O^*(1/\log n)$ and $r = O^*(\log^2 n)$, then $z^0$ and $z^*$ have the same support. \end{Lemma} \proof Since $z$ is $\delta$-close to $z^*$, we have $\Vert z - z^*\Vert \leq \delta$ and $|z_i - z_i^*| \leq \delta$ for every $i$. For $i \in \textnormal{supp}(z^*)$, $$|z_i| \geq |z_i^*| - |z_i - z_i^*| \geq \tau - \delta$$ and for $i \notin \textnormal{supp}(z^*)$, $|z_i| \leq \delta$. Since $\tau \geq \Omega(1/\sqrt{r}) \gg \delta$, the $r$ largest entries of $z$ lie in the support of $z^*$, and hence $z^0$ and $z^*$ have the same support. \qedhere \begin{algorithm}[!h] \begin{algorithmic} \State \textbf{Initialize} $L = \emptyset$ \State Randomly divide $p$ samples into two disjoint sets $\mathcal{P}_1$ and $\mathcal{P}_2$ of sizes $p_1$ and $p_2$ respectively \State \textbf{While} $|L| < m$.
Pick $u$ and $v$ from $\mathcal{P}_1$ at random \State \indent Reconstruct the re-weighted covariance matrix $\widehat{M}_{u,v}$: $$\widehat{M}_{u,v} = \frac{1}{p_2}\sum_{i=1}^{p_2}\langle y^{(i)}, u\rangle \langle y^{(i)}, v\rangle y^{(i)}(y^{(i)})^T$$ \State \indent Compute the top singular values $\delta_1, \delta_2$ and top singular vector $z$ of $\widehat{M}_{u,v}$ \State \indent \textbf{If} $\delta_1 \geq \Omega(k/m)$ and $\delta_2 < O^*(k/m\log n)$ \State \indent \indent $z = \mathcal{H}_r(z)$, where $\mathcal{H}_r$ keeps the $r$ largest entries of $z$ \State \indent \indent \textbf{If} $z$ is not within distance $1/\log n$ of any vector in $L$, even with sign flip \State \indent \indent \indent $L = L \cup \{z\}$ \State \textbf{Return} $A^0 = (L_1, \dots, L_m)$ \end{algorithmic} \caption{Pairwise Reweighting with Hard-Thresholding} \label{alg_arora_modified} \end{algorithm} \begin{Theorem} \label{main_initialization_ext} Suppose that Assumptions \textnormal{\textbf{B1-B4}} hold and Assumptions \textnormal{\textbf{A1-A3}} are satisfied with $\mu = O^*\bigl(\frac{\sqrt{n}}{k\log^3n}\bigr)$ and $r = O^*(\log^2 n)$. When $p_1 = \widetilde{\Omega}(m)$ and $p_2 = \widetilde{\Omega}(mk)$, with high probability Algorithm \ref{alg_arora_modified} returns an initial estimate $A^0$ whose columns share the same support as $A^*$ and with $(\delta, 2)$-nearness to $A^*$ with $\delta = O^*(1/\log n)$. \end{Theorem} This algorithm allows $r = O^*(\log^2n)$, which is somewhat better than ours. However, its sample complexity and running time are inferior compared with those of our novel algorithm. \section{Neural Implementation of Our Approach} \label{neural_implementation} \begin{figure}[h] \centering \input{neural_diag} \caption{Neural network implementation of Algorithm~\ref{alg_neural_doubly_sdl}. The network takes the image $y$ as input and produces the sparse representation $x$ as output. The hidden layer represents the residual between the image and its reconstruction $Ax$.
The weights $A_{ij}$ are stored on synapses; most of them are zero, shown by the dotted lines.} \label{neural_diag} \end{figure} We now briefly describe why our algorithm is ``neurally plausible''. Basically, similar to the argument in~\cite{arora15-neural}, we describe at a very high level how our algorithm can be implemented via a neural network architecture. One should note that although both our initialization and descent stages are non-trivial modifications of those in~\cite{arora15-neural}, both still inherit the nice neural plausibility property. \subsection{Neural implementation of Stage 1: Initialization} Recall that the initialization stage includes two main steps: (i) estimate the support of each column of the synthesis matrix, and (ii) compute the top principal component(s) of a certain truncated weighted covariance matrix. Both steps involve simple vector and matrix-vector manipulations that can be implemented plausibly using basic neuronal manipulations. For the support estimation step, we compute the product $\inprod{y}{u}\inprod{y}{v}\, y \circ y$, followed by a thresholding. The inner products $\inprod{y}{u}$ and $\inprod{y}{v}$ can be computed using neurons in an online manner as the samples arrive in sequence; the thresholding can be implemented via a ReLU-type non-linearity. For the second step, it is well known that the top principal components of a matrix can be computed in a neural (Hebbian) fashion using Oja's Rule~\cite{oja}. \subsection{Neural implementation of Stage 2: Descent} Our neural implementation of the descent stage (Algorithm~\ref{alg_neural_doubly_sdl}), shown in Figure~\ref{neural_diag}, mimics the architecture of \cite{arora15-neural}, which describes a simple two-layer network architecture for computing a single gradient update of $A$. The only difference in our case is that most of the values in $A$ are set to zero; in other words, our network is sparse.
The network takes the values $y$ at the input layer and produces $x$ as the output; an intermediate layer sits in between, connected to the output via synapses. The synaptic weights are stored in $A$. The weights are updated by Hebbian learning. In our case, since $A$ is sparse (with support given by $R$, as estimated in the first stage), we enforce that the corresponding synapses are inactive. In the output layer, as in the initialization stage, the neurons can use a ReLU-type non-linear activation function to enforce the sparsity of $x$. \section{A Special Case: Orthonormal $A^*$} \label{orthonormal_case} We extend our results to the special case where the dictionary is orthonormal. As such, the dictionary is perfectly incoherent and bounded (i.e., $\mu = 0$ and $\norm{A^*} = 1$). \begin{Theorem} \label{thm_initialization_orthonormal} Suppose that $A^*$ is orthonormal. When $p_1 = \widetilde{\Omega}(n)$ and $p_2 = \widetilde{\Omega}(nr)$, with high probability Algorithm \ref{alg_neural_initialization} returns an initial estimate $A^0$ whose columns share the same support as $A^*$ and with $(\delta, 2)$-nearness to $A^*$ with $\delta = O^*(1/\log n)$. The sparsity of $A^*$ can be accommodated up to $r = O^*\bigl( \min(\frac{\sqrt{n}}{\log^2 n}, \frac{n}{k^2\log^2n}) \bigr)$. \end{Theorem} We use the same initialization procedure for this special case and achieve a better order of $r$. The proof of Theorem~\ref{thm_initialization_orthonormal} follows the analysis for the general case with the following two results: \begin{Claim}[Special case of Claim \ref{cl_bounds_of_beta}] \label{cl_bounds_of_beta_spec} Suppose that $u = A^*\alpha + \varepsilon_u$ is a random sample and $U = \textnormal{supp}(\alpha)$. Let $\beta = A^{*T}u$; then \text{w.h.p.}, we have (a) $\abs{\beta_i - \alpha_i} \leq \sigma_\varepsilon\log n$ for each $i$ and (b) $\norm{\beta} \leq O(\sqrt{k}\log n + \sigma_\varepsilon\sqrt{n}\log n)$.
\end{Claim} \proof We have $\beta = A^{*T}u = \alpha + A^{*T}\varepsilon_u$, so $\beta_i - \alpha_i = \inprod{\cA{i}^*}{\varepsilon_u}$ and $\norm{\beta - \alpha} = \norm{\varepsilon_u}$. Using the probability bounds on $\inprod{\cA{i}^*}{\varepsilon_u}$, $\norm{\varepsilon_u}$ and $\norm{\alpha}$ in Claim~\ref{cl_bound_subgaussian_rv}, the claim follows.~\qedhere It follows from the claim that for any $i \notin U \cap V$, $\abs{\beta_i\beta'_i} \leq O(\sigma_\varepsilon\log^2n)$, and we have the following result: \begin{Lemma} Fix samples $u$ and $v$ and suppose that $y = A^*x^* + \varepsilon$ is a random sample independent of $u,v$. The expected value of the score for the $l^{\textrm{th}}$ component of $y$ is given by: \begin{align*} e_l \triangleq \mathbb{E}[\inprod{y}{u}\inprod{y}{v} y_l^2] = \sum_{i \in U \cap V}q_ic_i\beta_i\beta'_iA^{*2}_{li} + ~\text{perturbation terms} \end{align*} where $q_i = \mathbb{P}[i \in S]$, $q_{ij} = \mathbb{P}[i, j \in S]$ and $c_i = \mathbb{E}[x_i^4|i \in S]$. Moreover, the perturbation terms have absolute value at most $O^*\bigl(k/n\log^2n \max (1/\sqrt{n}, k^2/n) \bigr) $. \label{lm_diagonal_entries_in_expectation_spec} \end{Lemma} \proof The lemma follows from Lemma \ref{lm_diagonal_entries_in_expectation} via Claim \ref{cl_bounds_of_beta_spec}, except that the second term of $E_1$ is now bounded by $O(k\log^2n/n^{3/2})$. \qed \section{Conclusion} \label{conc} In this paper, we have addressed an open theoretical question on learning sparse dictionaries under a special type of generative model. Our proposed algorithm consists of a novel initialization step followed by a descent-style step, both of which take advantage of the sparse structure. We rigorously demonstrate its advantages in both sample complexity and computational complexity over existing heuristics as well as provable approaches for double-sparse and regular sparse coding. This results in \emph{the first known provable approach for the double-sparse coding problem} with statistical and algorithmic guarantees.
In addition, we show three benefits of our approach: neural plausibility, robustness to noise, and practical usefulness via numerical experiments. Nevertheless, several fundamental questions regarding our approach remain. First, our initialization method (in the overcomplete case) achieves its theoretical guarantees under fairly stringent limitations on the sparsity level $r$. This arises due to our reweighted spectral initialization strategy, and it is an open question whether a better initialization strategy exists (or whether these types of initialization are required at all). Second, our analysis holds for complete (fixed) bases $\Phi$, and it remains open to study the setting where $\Phi$ is over-complete. Finally, understanding the reasons behind the very promising practical performance of methods based on heuristics, such as {\text{Trainlets}}, on real-world data remains a very challenging open problem. \section{Empirical Study} \label{empirical} We compare our method with three different methods for both standard sparse and double-sparse coding. For the standard approach, we implement the algorithm proposed in~\citet{arora15-neural}, which is currently the best theoretically sound method for provable sparse coding. However, since their method does not explicitly leverage the double-sparsity model, we also implement a heuristic modification that performs a hard thresholding (HT)-based post-processing step in the initialization and learning procedures (which we dub \emph{Arora + HT}). The final comparison is with the \emph{Trainlets} approach of~\citet{sulam16-trainlets}. We generate a synthetic training dataset according to the model described in Section \ref{setup}. The base dictionary $\Phi$ is the identity matrix of size $n =64$ and the square synthesis matrix $A^*$ is a block diagonal matrix with 32 blocks. Each $2 \times 2$ block is of the form $[1~1; 1~-1]$ (i.e., the column sparsity is $r = 2$).
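For concreteness, this synthetic setup can be sketched as follows (a minimal sketch; \texttt{make\_synthesis\_matrix} is our own helper name, and we keep the blocks unnormalized exactly as stated):

```python
import numpy as np

def make_synthesis_matrix(n=64):
    """Block-diagonal A*: n/2 diagonal blocks of [1 1; 1 -1],
    so every column has exactly r = 2 nonzero entries."""
    block = np.array([[1.0, 1.0], [1.0, -1.0]])
    A = np.zeros((n, n))
    for b in range(n // 2):
        A[2 * b:2 * b + 2, 2 * b:2 * b + 2] = block
    return A

Phi = np.eye(64)                  # base dictionary (identity)
A_star = make_synthesis_matrix()  # square synthesis matrix, column sparsity 2
D_star = Phi @ A_star             # effective dictionary Phi A*
```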
The support of $x^*$ is drawn uniformly over all $6$-dimensional subsets of $[m]$, and the nonzero coefficients are randomly set to $\pm 1$ with equal probability. In our simulations with noise, we add Gaussian noise $\varepsilon$ with entrywise variance $\sigma_\varepsilon^2 = 0.01$ to each of the samples above. For all the approaches except {\text{Trainlets}}, we use $T = 2000$ iterations for the initialization procedure, and set the number of steps in the descent stage to 25. Since {\text{Trainlets}} does not have a specified initialization procedure, we initialize it with a random Gaussian matrix upon which column-wise sparse thresholding is then performed. The learning step of {\text{Trainlets}}\footnote{We utilize {\text{Trainlets}}'s implementation provided at {http://jsulam.cswp.cs.technion.ac.il/home/software/.}} is executed for $50$ iterations, which compensates for its initialization deficiency. For each Monte Carlo trial, we uniformly draw $p$ samples, feed these samples to the four different algorithms, and observe their ability to reconstruct $A^*$. A Matlab implementation of our algorithms is available online\footnote{https://github.com/thanh-isu/double-sparse-coding}. We evaluate these approaches on three metrics as a function of the number of available samples: (i) fraction of trials in which each algorithm successfully recovers the ground truth $A^*$; (ii) reconstruction error; and (iii) running time. The synthesis matrix is said to be ``successfully recovered'' if the Frobenius norm of the difference between the estimate $\widehat{A}$ and the ground truth $A^*$ is smaller than a threshold, which is set to $10^{-4}$ in the noiseless case and to $0.5$ in the noisy case. All three metrics are averaged over 100 Monte Carlo simulations. As discussed above, the Frobenius norm is only meaningful under a suitable permutation and sign flip transformation linking $\widehat{A}$ and $A^*$.
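The alignment up to permutation and sign flips can be sketched as follows; we use SciPy's \texttt{linear\_sum\_assignment} as a stand-in for a maximum-weight bipartite matching, and the helper name \texttt{aligned\_error} is our own:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def aligned_error(A_star, A_hat):
    """Frobenius error between A_star and A_hat after aligning A_hat's
    columns to A_star up to a permutation and per-column sign flips.
    The matching is computed on the weight matrix G = |A_star^T A_hat|."""
    G = np.abs(A_star.T @ A_hat)
    rows, cols = linear_sum_assignment(-G)     # maximize total matched weight
    A_perm = A_hat[:, cols]
    # Sign of the inner product between matched columns fixes the flips.
    signs = np.sign(np.sum(A_star * A_perm, axis=0))
    signs[signs == 0] = 1.0
    return np.linalg.norm(A_star - A_perm * signs, ord='fro')
```

A trial then counts as a success when `aligned_error` falls below the threshold ($10^{-4}$ noiseless, $0.5$ noisy).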
We estimate this transformation using a simple maximum weight matching algorithm. Specifically, we construct a weighted bipartite graph with nodes representing columns of $A^*$ and $\widehat{A}$ and adjacency matrix defined as $G = \abs{A^{*T}\widehat{A}}$, where $\abs{\cdot}$ is taken element-wise. We compute the optimal matching using the Hungarian algorithm, and then estimate the sign flips by looking at the sign of the inner products between the matched columns. The results of our experiments are shown in Figure \ref{fig_simulation_results}, with the top and bottom rows respectively for the noiseless and noisy cases. The two leftmost figures suggest that all algorithms exhibit a ``phase transition'' in sample complexity that occurs in the range of 500-2000 samples. In the noiseless case, our method achieves the phase transition with the fewest samples. In the noisy case, our method nearly matches the best sample complexity performance (next to \text{Trainlets}, which is a heuristic and computationally expensive). Our method achieves the best performance in terms of (wall-clock) running time in all cases. \section{Stage 1: Initialization} \label{init} In this section, we present a neurally plausible algorithm that can produce a coarse initial estimate of the ground truth $A^*$. We give a neural implementation of the algorithm in Appendix~\ref{neural_implementation}. Our algorithm is an adaptation of the algorithm in~\citet{arora15-neural}. The idea is to estimate dictionary atoms in a greedy fashion by iteratively re-weighting the given samples. The samples are re-scaled in a way that the weighted (sample) covariance matrix has a dominant first singular value, whose corresponding singular vector is close to one particular atom with high probability. However, while this algorithm is conceptually very appealing, it incurs severe computational costs in practice.
More precisely, the overall running time is $\widetilde{O}(mn^2p)$ in expectation, which is unrealistic for large-scale problems. To overcome this burden, we leverage the double-sparsity assumption in our generative model to obtain a more efficient approach. The high-level idea is to first estimate the support of each column in the synthesis matrix $A^*$, and then obtain a coarse estimate of the nonzero coefficients of each column based on knowledge of its support. The key ingredient of our method is a novel spectral procedure that gives us an estimate of the column supports purely from the observed samples. The full algorithm, which we call \emph{Truncated Pairwise Reweighting}, is listed in pseudocode form below as Algorithm \ref{alg_neural_initialization}. \begin{algorithm}[!t] \begin{algorithmic} \State \textbf{Initialize} $L = \emptyset$ \State Randomly divide $p$ samples into two disjoint sets $\mathcal{P}_1$ and $\mathcal{P}_2$ of sizes $p_1$ and $p_2$ respectively \State \textbf{While} $|L| < m$.
Pick $u$ and $v$ from $\mathcal{P}_1$ at random \State \indent For every $l = 1, 2, \dots, n$, compute $$\widehat{e}_{l} = \frac{1}{p_2}\sum_{i=1}^{p_2}\inprod{y^{(i)}}{u} \inprod{y^{(i)}}{v}(y_l^{(i)})^2$$ \State \indent Sort $(\widehat{e}_1, \widehat{e}_2, \dots, \widehat{e}_n)$ in descending order \State \indent \textbf{If} $\exists\, r' \leq r$ s.t.\ $\widehat{e}_{(r')} \geq \Omega(k/mr)$ and $\widehat{e}_{(r'+1)}/\widehat{e}_{(r')} < O^*(r/\log^2n)$ \State \indent \indent Let $\widehat{R}$ be the set of the $r$ largest entries of $\widehat{e}$ \State \indent \indent $\widehat{M}_{u,v} = \frac{1}{p_2}\sum_{i=1}^{p_2}\langle y^{(i)}, u\rangle \langle y^{(i)}, v\rangle y^{(i)}_{\widehat{R}}(y^{(i)}_{\widehat{R}})^T$ \State \indent \indent $\delta_1, \delta_2 \leftarrow$ top singular values of $\widehat{M}_{u,v}$ \State \indent \indent $z_{\widehat{R}} \leftarrow$ top singular vector of $\widehat{M}_{u,v}$ \medskip \State \indent \indent \textbf{If} $\delta_1 \geq \Omega(k/m)$ and $\delta_2 < O^*(k/m\log n)$ \medskip \State \indent \indent \indent \textbf{If} $\text{dist}(\pm z,l) > 1/\log n $ for any $l \in L$ \medskip \State \indent \indent \indent \indent Update $L = L \cup \{z\}$ \State \textbf{Return} $A^0 = (L_1, \dots, L_m)$ \end{algorithmic} \caption{Truncated Pairwise Reweighting} \label{alg_neural_initialization} \end{algorithm} Let us provide some intuition for our algorithm.
Fix a sample $y = A^* x^* + \varepsilon$ from the available training set, and consider samples $$u = A^*\alpha + \varepsilon_u, v = A^*\alpha' + \varepsilon_v.$$ Now, consider the (very coarse) estimate for the sparse code of $u$ with respect to $A^*$: $$\beta = A^{*T}u = A^{*T}A^*\alpha + A^{*T}\varepsilon_u.$$ As long as $A^*$ is incoherent enough and $\varepsilon_u$ is small, the estimate $\beta$ behaves just like $\alpha$, in the sense that for each sample $y$: $$\inprod{y}{u} \approx \inprod{x^*}{\beta} \approx \inprod{x^*}{\alpha}.$$ Moreover, the above inner products are large only if $\alpha$ and $x^*$ share some elements in their supports; otherwise, they are likely to be small. Likewise, the weight $\inprod{y}{u}\inprod{y}{v}$ depends on whether or not $x^*$ shares its support with both $\alpha$ and $\alpha'$. Now, suppose that we have a mechanism to isolate pairs $u$ and $v$ that share exactly one atom among their sparse representations. Then, by scaling each sample $y$ with an increasing function of $\inprod{y}{u}\inprod{y}{v}$ and linearly adding the samples, we magnify the importance of the samples that are aligned with that atom, and diminish the rest. The final direction can be obtained via the top \emph{principal component} of the reweighted samples and hence can be used as a coarse estimate of the atom. This is exactly the approach adopted in~\citet{arora15-neural}. However, in our double-sparse coding setting, we know that the estimated atom should be sparse as well. Therefore, we can naturally perform an extra ``sparsification'' step on the output. An extended algorithm and its correctness are provided in Appendix~\ref{appdx_improve_arora}. However, as we discussed above, the computational burden of the re-weighting step still remains. We overcome this obstacle by first identifying the locations of the nonzero entries in each atom.
Specifically, define the score vector: $$\widehat{e} = \frac{1}{p_2}\sum_{i=1}^{p_2}\inprod{y^{(i)}}{u} \inprod{y^{(i)}}{v}\,y^{(i)}\circ y^{(i)}.$$ Then, the entries of $\widehat{e}$ reveal the support of the atom of $A^*$ shared among $u$ and $v$: the $r$ largest entries of $\widehat{e}$ will correspond to the support we seek. Since the desired direction remains unchanged in the $r$-dimensional subspace of its nonzero elements, we can restrict our attention to this subspace, construct a reduced covariance matrix $\widehat{M}_{u,v}$, and proceed as before. This truncation step alleviates the computational burden by a significant amount; the running time is now $\widetilde{O}(mnp)$, which improves the original by a factor of $n$. The success of the above procedure relies upon whether or not we can isolate pairs $u$ and $v$ that share one dictionary atom. Fortunately, this can be done by checking the decay of the singular values of the (reduced) covariance matrix. Here too, we show via our analysis that the truncation step plays an important role. Overall, our proposed algorithm not only accelerates the initialization in terms of running time, but also improves the sample complexity over~\cite{arora15-neural}. The performance of Algorithm~\ref{alg_neural_initialization} is described in the following theorem, whose formal proof is deferred to Appendix~\ref{appdx_initialization_analysis}. \begin{Theorem} \label{main_thm_initialization} Suppose that Assumptions \textnormal{\textbf{B1-B4}} hold and Assumptions \textnormal{\textbf{A1-A3}} are satisfied with $\mu = O^*\bigl(\frac{\sqrt{n}}{k\log^3n}\bigr)$ and $r = O^*(\log n)$. When $p_1 = \widetilde{\Omega}(m)$ and $p_2 = \widetilde{\Omega}(mr)$, with high probability Algorithm \ref{alg_neural_initialization} returns an initial estimate $A^0$ whose columns share the same support as $A^*$ and with $(\delta, 2)$-nearness to $A^*$ with $\delta = O^*(1/\log n)$.
\end{Theorem} The limit on $r$ arises from the minimum non-zero coefficient $\tau$ of $A^*$. Since the columns of $A^*$ are standardized, $\tau$ must shrink as $r$ grows. In other words, it becomes harder to distinguish the ``signal'' coefficients from zero as $r$ grows with $n$. However, this limitation can be relaxed when a better incoherence bound is available, for example in the orthonormal case; we study this in Appendix~\ref{orthonormal_case}. To provide some intuition about the working of the algorithm (and its proof), let us analyze it in the case where we have access to an infinite number of samples. This setting is, of course, unrealistic. However, the analysis is much simpler and more transparent, since we can focus on expected values rather than empirical averages. Moreover, the analysis reveals several key lemmas, which we will reuse extensively when proving Theorem \ref{main_thm_initialization}. First, we give some intuition behind the definition of the ``scores'' $\widehat{e}_l$. \begin{Lemma} Fix samples $u$ and $v$, and suppose that $y = A^*x^* + \varepsilon$ is a random sample independent of $u,v$. The expected value of the score for the $l^{\textrm{th}}$ component of $y$ is given by: \begin{align*} e_l \triangleq \mathbb{E}[\inprod{y}{u}\inprod{y}{v} y_l^2] = \sum_{i \in U \cap V}q_ic_i\beta_i\beta'_iA^{*2}_{li} + ~\text{perturbation terms} \end{align*} where $q_i = \mathbb{P}[i \in S]$, $q_{ij} = \mathbb{P}[i, j \in S]$ and $c_i = \mathbb{E}[x_i^4|i \in S]$. Moreover, the perturbation terms have absolute value at most $O^*(k/m \log n)$. \label{lm_diagonal_entries_in_expectation} \end{Lemma} From \AsmpB{1}, we know that $q_i= \Theta(k/m)$, $q_{ij} = \Theta(k^2/m^2)$ and $c_i = \Theta(1)$. In addition, we will show later that $\abs{\beta_i} \approx \abs{\alpha_i} = \Omega(1)$ for $i \in U$, and $\abs{\beta_i} = o(1)$ for $i \notin U$. Consider the first term $E_0 = \sum_{i \in U \cap V}q_ic_i\beta_i\beta'_iA^{*2}_{li}$.
Clearly, $E_0 = 0$ if $U \cap V = \emptyset$ or if $l$ does not belong to the support of any atom in $U \cap V$. Conversely, if $E_0 \neq 0$ and $U\cap V = \{i\}$, then $E_0 = \abs{q_ic_i\beta_i\beta'_iA^{*2}_{li}} \geq \Omega(\tau^2k/m) = \Omega(k/mr)$, since $\abs{q_ic_i\beta_i\beta'_i} \geq \Omega(k/m)$ and $\abs{A_{li}^*} \geq \tau$. Therefore, Lemma \ref{lm_diagonal_entries_in_expectation} suggests that if $u$ and $v$ share a unique atom among their sparse representations, and $r$ is not too large, then we can indeed recover the correct support of the shared atom. When this is the case, the expected scores corresponding to the nonzero elements of the shared atom will dominate the remaining scores. Now, given that we can isolate the support $R$ of the corresponding atom, the remaining questions are how best to estimate its non-zero coefficients, and how to determine when $u$ and $v$ share a unique element in their supports. These issues are handled in the following lemmas. \begin{Lemma} \label{lm_reweighted_cov_matrix_in_expectation} Suppose that $u = A^*\alpha + \varepsilon_u$ and $v = A^*\alpha' + \varepsilon_v$ are two random samples. Let $U$ and $V$ denote the supports of $\alpha$ and $\alpha'$ respectively, and let $R$ be the support of some atom of interest. The truncated re-weighted covariance matrix is given by \begin{align*} M_{u,v} &\triangleq \mathbb{E}[\inprod{y}{u}\inprod{y}{v} y_Ry_R^T] = \sum_{i \in U \cap V}q_ic_i\beta_i\beta'_i\AR{i}^*\AR{i}^{*T} + ~\text{perturbation terms} \end{align*} where the perturbation terms have norms at most $O^*(k/m\log n)$. \end{Lemma} Using the same argument as for bounding $E_0$ in Lemma \ref{lm_diagonal_entries_in_expectation}, we can see that $M_0 \triangleq q_ic_i\beta_i\beta'_i\AR{i}^*\AR{i}^{*T}$ has norm at least $\Omega(k/m)$ when $u$ and $v$ share a unique element $i$ (recall that $\norm{\AR{i}^*} = 1$). According to this lemma, the spectral norm of $M_0$ dominates those of the other perturbation terms.
Thus, given $R$, we can use the first singular vector of $M_{u,v}$ as an estimate of $\cA{i}^*$. \begin{Lemma} \label{lm_support_consists_and_closeness} Under the setup of Theorem \ref{main_thm_initialization}, suppose that $u = A^*\alpha + \varepsilon_u$ and $v = A^*\alpha' + \varepsilon_v$ are two random samples with supports $U$ and $V$ respectively, and let $R = \textnormal{supp}(A^*_i)$. If $u$ and $v$ share the unique atom $i$, then the $r$ largest entries of $e$ are at least $\Omega(k/mr)$ and their indices belong to $R$. Moreover, the top singular vector of $M_{u,v}$ is $\delta$-close to $\AR{i}^*$ for $\delta = O^*(1/\log n)$. \end{Lemma} \proof The recovery of the support of $\cA{i}^*$ follows directly from Lemma \ref{lm_diagonal_entries_in_expectation}. For the latter part, recall from Lemma \ref{lm_reweighted_cov_matrix_in_expectation} that $$M_{u,v} = q_ic_i\beta_i\beta'_i\AR{i}^*\AR{i}^{*T} + ~\text{perturbation terms}.$$ The perturbation terms have norms bounded by $O^*(k/m\log n)$. On the other hand, the first term has norm at least $\Omega(k/m)$, since $\norm{\AR{i}^*} = 1$ for the correct support $R$ and $\abs{q_ic_i\beta_i\beta'_i} \geq \Omega(k/m)$. Then, applying Wedin's theorem to $M_{u,v}$, we conclude that the top singular vector must be $O^*(k/m\log n)/\Omega(k/m) = O^*(1/\log n)$-close to $\AR{i}^*$. \qed \begin{Lemma} \label{lm_condition_uv_share_unique_supp} Under the setup of Theorem \ref{main_thm_initialization}, suppose that $u = A^*\alpha + \varepsilon_u$ and $v = A^*\alpha' + \varepsilon_v$ are two random samples with supports $U$ and $V$ respectively. If the top singular value of $M_{u,v}$ is at least $\Omega(k/m)$ and the second largest one is at most $O^*(k/m\log n)$, then $u$ and $v$ share a unique dictionary element with high probability. \end{Lemma} \proof The proof follows from that of Lemma 37 in~\citet{arora15-neural}.
The main idea is to separate the possible cases of how the supports of $u$ and $v$ overlap, and to use Lemma \ref{lm_reweighted_cov_matrix_in_expectation} with the bounded perturbation terms to conclude when $u$ and $v$ share exactly one atom. We note that under the condition that $\widehat{e}_{(s)} \geq \Omega(k/mr)$ and $\widehat{e}_{(s+1)}/\widehat{e}_{(s)} \leq O^*(r/\log n)$, it must be the case that $u$ and $v$ share only one atom, or share more than one atom with the same support. When their supports overlap in more than one element, the first singular value cannot dominate the second one, and hence this case is ruled out. \qed \iffalse \textbf{Termination of Algorithm~\ref{alg_neural_initialization}.} The remaining question is how many samples in $\mathcal{P}_1$, defined in Algorithm~\ref{alg_neural_initialization}, and how many steps in expectation are required in order for the algorithm to terminate. As shown in~\citet{arora15-neural}, the sharing probability that a pair of $u, v$ share a unique atom is bounded below in the following manner: \begin{align*} \mathbb{P}[U \cap V = \{i\}] &\geq \mathbb{P}[i \in V](1 - \sum_{i \neq j \in [m]}\mathbb{P}[j \in U \cap V |i \in U, j \in V]) \\ &= \Omega(k^2/m^2) \end{align*} where we have used the fact $\mathbb{P}[i \in U] = \Theta(k/m)$ and $\mathbb{P}[i, j \in U] \leq O(k^2/m^2)$. Therefore, given $\Omega(m^2\log^2n/k^2)$ trials, we can sample a pair of $u, v$ whose supports intersect uniquely at one $i \in [m]$ with high probability. According to the ``coupon collection'' phenomenon, we require $\widetilde{O}(m)$ samples in $\mathcal{P}_1$ and at most $\widetilde{O}(m)$ trials to collect $m$ pairs each of which shares one distinct dictionary atom.
By Lemma~\ref{lm_support_consists_and_closeness}, we only add a vector close to $A^*_i$ to the candidate set $L$, which is guaranteed to be $1/\log n$ far away from existing ones \text{w.h.p.}\ As such, Algorithm~\ref{alg_neural_initialization} terminates in $\widetilde{O}(m)$ iterations in expectation. All the proofs of Lemma \ref{lm_diagonal_entries_in_expectation} and \ref{lm_reweighted_cov_matrix_in_expectation} are deferred to Appendix \ref{appdx_initialization_analysis}. \ray{check} \fi As in~\citet{arora15-neural}, our initialization algorithm requires $\widetilde{O}(m)$ iterations in expectation to estimate all the atoms; hence the expected running time is $\widetilde{O}(mnp)$. All the proofs of Lemmas \ref{lm_diagonal_entries_in_expectation} and \ref{lm_reweighted_cov_matrix_in_expectation} are deferred to Appendix \ref{appdx_initialization_analysis}. \section{Introduction} \label{intro} \subsection{Motivation} Representing signals as sparse linear combinations of atoms from a dictionary is a popular approach in many domains. In this paper, we study the problem of \textit{dictionary learning} (also known as sparse coding), where the goal is to learn an efficient basis (dictionary) that represents the underlying class of signals well. In the typical sparse coding setup, the dictionary is \emph{overcomplete} (\text{i.e.}, the cardinality of the dictionary exceeds the ambient signal dimension) while the representation is \emph{sparse} (\text{i.e.}, each signal is encoded by a combination of only very few dictionary atoms). Sparse coding has a rich history in diverse fields such as signal processing, machine learning, and computational neuroscience.
Discovering optimal basis representations of data is a central focus of image analysis~\citep{donoho,elad06-denoising,elad2}, and dictionary learning has proven widely successful in imaging problems such as denoising, deconvolution, inpainting, and compressive sensing~\citep{elad06-denoising, candes05-decoding, elad2}. Sparse coding approaches have also been used as a core building block of deep learning systems for prediction~\citep{lista,boureau2010learning} and associative memory~\citep{mazumdar2017}. Interestingly, the seminal work of \citet{olshausen97-sc} has shown intimate connections between sparse coding and neuroscience: the dictionaries learned from image patches of natural scenes bear a striking resemblance to the spatial receptive fields observed in mammalian primary visual cortex. From a mathematical standpoint, the sparse coding problem is formulated as follows. Given $p$ data samples $Y = [y^{(1)}, y^{(2)}, \dots, y^{(p)}] \in\mathbb{R}^{n\times p}$, the goal is to find a dictionary $D \in \mathbb{R}^{n\times m}$ ($m > n$) and corresponding sparse code vectors $X = [x^{(1)}, x^{(2)}, \dots, x^{(p)}]\in\mathbb{R}^{m\times p}$ such that the representation $DX$ fits the data samples as well as possible. Typically, one obtains the dictionary and the code vectors as the solution to the following optimization problem: \begin{align} \begin{split} \min_{D, X}\mathcal{L}(D, X) &= \frac{1}{2}\sum^p_{j=1}\Vert y^{(j)} - Dx^{(j)}\Vert^2_2,\label{eq_objective_func} \\ \text{s.t.}~&\sum^p_{j=1}\mathcal{S}(x^{(j)}) \leq S \end{split} \end{align} where $\mathcal{S}(\cdot)$ is some sparsity-inducing penalty function on the code vectors, such as the $\ell_1$-norm. The objective function $\mathcal{L}$ controls the reconstruction error, while the constraint enforces the sparsity of the representation.
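For concreteness, the objective and constraint above can be evaluated directly. The sketch below is purely illustrative: it uses random data, arbitrary sizes, and the $\ell_0$ count as the sparsity measure $\mathcal{S}$.

```python
import numpy as np

rng = np.random.default_rng(1)

n, m, p, k = 8, 12, 5, 2
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms

# k-sparse codes and the (noiseless) observations they generate.
X = np.zeros((m, p))
for j in range(p):
    X[rng.choice(m, size=k, replace=False), j] = rng.standard_normal(k)
Y = D @ X

def loss(D, X, Y):
    """Reconstruction objective: 0.5 * sum_j ||y_j - D x_j||_2^2."""
    return 0.5 * np.sum((Y - D @ X) ** 2)

def sparsity(X):
    """ell_0 sparsity measure summed over all code vectors."""
    return int(np.count_nonzero(X))
```

Since `Y` is generated exactly as `D @ X` here, the loss at the planted solution is zero, and the total sparsity is at most $kp$.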
However, even a cursory attempt at solving the optimization problem \eqref{eq_objective_func} reveals the following obstacles: \begin{enumerate} \item {\itshape{\textbf{Theoretical challenges}}}. The constrained optimization problem \eqref{eq_objective_func} involves a non-convex (in fact, bilinear) objective function, as well as potentially non-convex constraints depending on the choice of the sparsity-promoting function $\mathcal{S}$ (for example, the $\ell_0$ function). Hence, obtaining \emph{provably} correct algorithms for this problem can be challenging. Indeed, the vast majority of practical approaches for sparse coding have been heuristics~\citep{engan99-mod, aharon06-ksvd, mairal09-online}. Recent works in the theoretical machine learning community have bucked this trend, providing provably accurate algorithms under certain assumptions~\citep{spielman12-exact,agarwal2014learning, arora15-neural, sun2015complete, blasiok16-erspud-improved, law16_note_erspud, bartlett17}. However, relatively few of these newer methods have been shown to provide good empirical performance on actual sparse coding problems. \item {\itshape{\textbf{Practical challenges}}}. Even if theoretical correctness issues were set aside, and we were somehow able to efficiently learn sparse codes of the input data, we often find that applications using such learned sparse codes encounter \emph{memory} and \emph{running-time} issues. Indeed, in the overcomplete case, merely storing the learned dictionary $D$ incurs $mn = \Omega(n^2)$ memory cost, which is prohibitive when $n$ is large. Therefore, in practical applications (such as image analysis) one typically resorts to chopping the data into smaller blocks (\text{e.g.}, partitioning image data into patches) to make the problem manageable. \end{enumerate} A related line of research has been devoted to learning dictionaries that obey some type of \emph{structure}.
Such structural information can be leveraged to incorporate prior knowledge of the underlying signals, as well as to resolve computational challenges arising from the data dimension. For instance, the dictionary may be assumed to be separable, or to obey a convolutional structure. One such variant is the \emph{double-sparse} coding problem~\citep{rubinstein10-sparsedl, sulam16-trainlets}, where the dictionary $D$ \emph{itself} exhibits a sparse structure. To be specific, the dictionary is expressed as: $$D = \Phi A,$$ \text{i.e.}, it is composed of a known ``base dictionary'' $\Phi \in \mathbb{R}^{n \times n}$, and a learned ``synthesis'' matrix $A \in \mathbb{R}^{n \times m}$ whose columns are sparse. The base dictionary $\Phi$ is typically any fixed basis chosen according to domain knowledge, while the synthesis matrix $A$ is column-wise sparse and is to be learned from the data. The basis $\Phi$ is typically orthonormal (such as the canonical or wavelet basis); however, there are cases where the base dictionary $\Phi$ is overcomplete~\citep{rubinstein10-sparsedl, sulam16-trainlets}. There are several reasons why such a double-sparsity model can be useful. First, the double-sparsity assumption is rather appealing from a conceptual standpoint, since it lets us combine the knowledge of decades of modeling efforts in harmonic analysis with the flexibility of learning new representations tailored to specific data families. Moreover, such a double-sparsity model has computational benefits. If the columns of $A$ are (say) $r$-sparse (i.e., each column contains no more than $r \ll n$ non-zeroes), then the overall burden of storing, transmitting, and computing with $A$ is much lower than that for general unstructured dictionaries. Finally, such a model lends itself well to \emph{interpretable} learned features if the atoms of the base dictionary are semantically meaningful.
All the above reasons have spurred researchers to develop a series of algorithms to learn doubly-sparse codes~\citep{rubinstein10-sparsedl,sulam16-trainlets}. However, despite their empirical promise, no theoretical analysis of their performance has been reported in the literature, and to date we are unaware of a provably accurate, polynomial-time algorithm for the double-sparse coding problem. Our goal in this paper is precisely to fill this gap. \subsection{Our Contributions} In this paper, we provide a new framework for double-sparse coding. To the best of our knowledge, our approach is the first method that enjoys \emph{provable} statistical and algorithmic guarantees for this problem. In addition, our approach enjoys three benefits: we demonstrate that the method is \emph{neurally plausible} (\text{i.e.}, its execution can plausibly be achieved using a neural network architecture), \emph{robust} to noise, and \emph{practically useful}. Inspired by the aforementioned recent theoretical advances in sparse coding, we assume a learning-theoretic setup where the data samples arise from a ground-truth {generative model}. Informally, suppose there exists a true (but unknown) synthesis matrix $A^*$ that is column-wise $r$-sparse, and the $i^{\textrm{th}}$ data sample is generated as: $$ y^{(i)} = \Phi A^* x^{*(i)} +~\text{noise},~~~i=1,2,\ldots,p $$ where the code vector $x^{*(i)}$ is independently drawn from a distribution supported on the set of $k$-sparse vectors. We wish to learn the underlying matrix $A^*$. Informally, suppose that the synthesis matrix $A^*$ is \emph{incoherent} (\text{i.e.}, the columns of $A^*$ are sufficiently close to orthogonal) and has bounded spectral norm. Finally, suppose that the number of dictionary elements, $m$, is at most a constant multiple of $n$. All of these assumptions are standard\footnote{We clarify both the data and the noise model more concretely in Section~\ref{setup} below.}.
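The generative model just described can be simulated directly. In the sketch below, $\Phi$ is taken to be the identity, and all dimensions and the noise level are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

n, m, r, k, p, sigma = 32, 48, 5, 3, 10, 0.01
Phi = np.eye(n)                      # known base dictionary (identity here)

# Ground-truth synthesis matrix: r-sparse, unit-norm columns.
A_star = np.zeros((n, m))
for j in range(m):
    A_star[rng.choice(n, size=r, replace=False), j] = rng.standard_normal(r)
A_star /= np.linalg.norm(A_star, axis=0)

# k-sparse code vectors, one per sample.
X_star = np.zeros((m, p))
for j in range(p):
    X_star[rng.choice(m, size=k, replace=False), j] = rng.standard_normal(k)

# Observations y_i = Phi A* x*_i + noise.
Y = Phi @ A_star @ X_star + sigma * rng.standard_normal((n, p))
```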
We will demonstrate that the true synthesis matrix $A^*$ can be recovered (with small error) in a tractable manner as sufficiently many samples are provided. Specifically, we make the following novel contributions: \begin{enumerate} \item We propose a new algorithm that produces a coarse estimate of the synthesis matrix that is sufficiently close to the ground truth $A^*$. In contrast with previous double-sparse coding methods (such as~\citet{sulam16-trainlets}), our algorithm is \emph{not} based on alternating minimization. Rather, it builds upon spectral initialization-based ideas that have recently gained popularity in non-convex machine learning~\citep{zhang2016spectral,wang2016unified}. \item Given the above coarse estimate of the synthesis matrix $A^*$, we propose a descent-style algorithm to refine the above estimate of $A^*$. This algorithm is simpler than previously studied double-sparse coding algorithms (such as the Trainlets approach of~\citet{sulam16-trainlets}), while still giving good statistical performance. Moreover, this algorithm can be realized in a manner amenable to neural implementations. \item We provide a rigorous analysis of both algorithms. Put together, our analysis produces the first provably polynomial-time algorithm for double-sparse coding. We show that the algorithm provably returns a good estimate of the ground-truth; in particular, in the absence of noise we prove that $\Omega(mr~\textnormal{polylog}~n)$ samples are sufficient for a good enough initialization in the first algorithm, as well as guaranteed linear convergence of the descent phase up to a precise error parameter that can be interpreted as the radius of convergence. Indeed, our analysis shows that employing the double-sparsity model helps in this context, and leads to a strict improvement in sample complexity, as well as running time over previous rigorous methods for (regular) sparse coding such as~\citet{arora15-neural}. 
\item We also analyze our approach in the more realistic setting with additive noise and demonstrate its stability. We prove that $\Omega(mr~\textnormal{polylog}~n)$ samples are sufficient to obtain a good enough estimate in the initialization, and also to obtain guaranteed linear convergence during descent to provably recover $A^*$. \item We underline the benefit of the double-sparse structure over the regular model by analyzing the algorithms of~\citet{arora15-neural} in the noisy setting. As a result, we obtain a sample complexity of $O\bigl( (mk + \sigma_\varepsilon^2\frac{mn^2}{k})\textnormal{polylog}~n\bigr)$, which demonstrates the negative effect of noise on that approach. \item We rigorously develop a hard thresholding initialization that extends the spectral scheme of~\citet{arora15-neural}. Additionally, we provide further results for the case where $A$ is an orthonormal, sparse dictionary in order to relax the condition on $r$; these may be of independent interest. \item While our analysis mainly consists of sufficiency results and involves several (absolute) unspecified constants, in practice we have found that these constants are reasonable. We justify our observations by reporting a suite of numerical experiments on synthetic test datasets.
\end{enumerate} \begin{table}[] \scriptsize \centering \begin{tabular}{|c|c|c|c|c|c|} \hline {Setting} & Reference & \makecell{Sample complexity \\ (w/o noise)} & \makecell{Sample complexity \\ (w/ noise)} & \makecell{Upper bound on \\ running time} & {Expt} \\ \hline \hline \multirow{6}{*}{Regular} & MOD \citep{engan99-mod} & \ding{55} & \ding{55} & \ding{55} & \ding{51} \\ \hhline{~-----} & K-SVD \citep{aharon06-ksvd} & \ding{55} & \ding{55} & \ding{55} & \ding{51} \\ \hhline{~-----} & \citet{spielman12-exact} & $O(n^2 \log n)$ & \ding{55} & $\widetilde{\Omega}(n^4)$ & \ding{51} \\ \hhline{~-----} & \citet{arora14-new-algorithms} & $\widetilde{O}(m^2/k^2)$ & \ding{55} & $\widetilde{O}(np^2)$ & \ding{55} \\ \hhline{~-----} & \citet{gribonval15-sparsespurious} & $O(nm^3)$ & $O(nm^3)$ & \ding{55} & \ding{55} \\ \hhline{~-----} & \citet{arora15-neural} & $\widetilde{O}(mk)$ & \ding{55} & $\widetilde{O}(mn^2p)$ & \ding{55} \\ \hline \hline \multirow{4}{*}{\makecell{Double \\ Sparse}} & Double Sparsity \citep{rubinstein10-sparsedl} & \ding{55} & \ding{55} & \ding{55} & \ding{51} \\ \hhline{~-----} & \citet{gribonval15-sample} & $\widetilde{O}(mr)$ & $\widetilde{O}(mr)$ & \ding{55} & \ding{55} \\ \hhline{~-----} & Trainlets \citep{sulam16-trainlets} & \ding{55} & \ding{55} & \ding{55} & \ding{51} \\ \hhline{~-----} & This paper & $\widetilde{O}(mr)$ & $\widetilde{O}(mr + \sigma_\varepsilon^2\frac{mnr}{k})$ & $\widetilde{O}(mnp)$ & \ding{51} \\ \hline \end{tabular} \caption{\small \sl Comparison of various sparse coding techniques. Expt: whether numerical experiments have been conducted. \ding{55}\, in all other columns indicates no provable guarantees. Here, $n$ is the signal dimension, and $m$ is the number of atoms.
The sparsity levels for $A$ and $x$ are $r$ and $k$ respectively, and $p$ is the sample size.\label{tbl_overview}} \end{table} Overall, our approach results in a strict improvement in sample complexity, as well as running time, over previous rigorously analyzed methods for (regular) sparse coding, such as~\citet{arora15-neural}. See Table~\ref{tbl_overview} for a detailed comparison. \subsection{Techniques} At a high level, our method is an adaptation of the seminal approach of~\citet{arora15-neural}. As is common in the statistical learning literature, we assume a ``ground-truth'' generative model for the observed data samples, and attempt to estimate the parameters of the generative model given a sufficient number of samples. In our case, the parameters correspond to the synthesis matrix $A^*$, which is column-wise $r$-sparse. The natural approach is to formulate a loss function in terms of $A$, such as Equation~\eqref{eq_objective_func}, and perform gradient descent on the loss surface to learn $A^*$. The key challenge in sparse coding is that the gradient is inherently coupled with the \emph{codes} of the training samples (\text{i.e.}, the columns of $X^*$), which are unknown \emph{a priori}. However, the main insight of~\citet{arora15-neural} is that within a small enough neighborhood of $A^*$, a noisy version of $X^*$ can be estimated, and therefore the overall method is similar to performing \emph{approximate gradient descent}. Formulating the actual algorithm as a noisy variant of approximate gradient descent allows us to overcome the finite-sample variability of the loss, and to obtain a descent property directly related to (the population parameter) $A^*$. The second stage of our approach (\text{i.e.}, our descent-style algorithm) leverages this intuition.
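Schematically, one iteration of the descent stage can be sketched as below. The crude top-$k$ decoding rule and the plain least-squares gradient are illustrative stand-ins, not the precise update analyzed in this paper; the final column-wise hard threshold plays the role of a projection enforcing $r$-sparsity.

```python
import numpy as np

def hard_threshold_cols(A, r):
    """Projection: keep the r largest-magnitude entries of each column."""
    out = np.zeros_like(A)
    for j in range(A.shape[1]):
        keep = np.argsort(np.abs(A[:, j]))[-r:]
        out[keep, j] = A[keep, j]
    return out

def descent_step(A, Y, r, k, eta=0.1):
    """One approximate projected-gradient iteration (illustrative only)."""
    # Crude sparse decoding: keep the k largest entries of A^T y per sample.
    C = A.T @ Y
    X_hat = np.zeros_like(C)
    for j in range(C.shape[1]):
        keep = np.argsort(np.abs(C[:, j]))[-k:]
        X_hat[keep, j] = C[keep, j]
    # Gradient of the squared loss with the codes held fixed.
    grad = (A @ X_hat - Y) @ X_hat.T / Y.shape[1]
    # Gradient step followed by projection onto column-wise r-sparsity.
    return hard_threshold_cols(A - eta * grad, r)

rng = np.random.default_rng(3)
A0 = rng.standard_normal((20, 10))
Y = rng.standard_normal((20, 50))
A1 = descent_step(A0, Y, r=4, k=3)
```

By construction, every iterate produced this way has at most $r$ nonzeros per column, which is the structural property the analysis exploits.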
However, instead of standard gradient descent, we perform approximate \emph{projected} gradient descent, such that the column-wise $r$-sparsity property is enforced in each new estimate of $A^*$. Indeed, such an extra projection step is critical in showing a sample-complexity improvement over the existing approach of~\citet{arora15-neural}. The key novelty is in figuring out how to perform the projection in each gradient iteration. For this purpose, we develop a novel initialization algorithm that identifies the locations of the non-zeroes in $A^*$ even before commencing the descent phase. This is nontrivially different from initialization schemes used in previous rigorous methods for sparse coding, and the analysis is somewhat more involved. In~\citet{arora15-neural}, (the principal eigenvector of) a weighted covariance matrix of $y$ (estimated by the weighted average of outer products $y_iy_i^T$) is shown to provide a coarse estimate of a dictionary atom. We extend this idea and rigorously show that the diagonal of the weighted covariance matrix serves as a good indicator of the support of a column of $A^*$. The success relies on the concentration of the $n$-dimensional diagonal vector, instead of the $n\times n$ covariance matrix. With the support selected, our scheme only utilizes a reduced weighted covariance matrix of dimensions at most $r\times r$. This initialization scheme enables us to effectively reduce the dimension of the problem, and therefore leads to significant improvements in sample complexity and running time over previous (provable) sparse coding methods when the data representation sparsity $k$ is much smaller than $m$. Further, we rigorously analyze the proposed algorithms in the presence of noise with a bounded expected norm. Our analysis shows that our method is stable, and in the case of i.i.d.
Gaussian noise with bounded expected $\ell_2$-norms, is at least a polynomial factor better than previous polynomial-time algorithms for sparse coding. The empirical performance of our proposed method is demonstrated by a suite of numerical experiments on synthetic datasets. In particular, we show that our proposed methods are simple and practical, and improve upon previous provable algorithms for sparse coding. \subsection{Paper Organization} The remainder of this paper is organized as follows. Section~\ref{setup} introduces notation, key model assumptions, and informal statements of our main theoretical results. Section~\ref{init} outlines our initialization algorithm (along with supporting theoretical results), while Section~\ref{descent} presents our descent algorithm (along with supporting theoretical results). Section~\ref{empirical} provides a numerical study of the efficiency of our proposed algorithms, and compares it with previously proposed methods. Finally, Section~\ref{conc} concludes with a short discussion. All technical proofs are relegated to the appendix. \section{Sample Complexity} \label{sample_complexity} In previous sections, we rigorously analyzed both the initialization and learning algorithms as if the expectations $g^s$, $e$ and $M_{u,v}$ were given. Here we show that the corresponding estimates based on empirical means are sufficient for the algorithms to succeed, and identify how many samples are required. Technically, this requires studying how these quantities concentrate around their expectations. With these concentration results in hand, we are ready to prove Theorems \ref{main_thm_initialization} and \ref{main_thm_columnwise_descent_in_expectation}. The entire section involves a variety of concentration bounds. Here we make heavy use of Bernstein's inequality for different types of random variables (scalar, vector, and matrix valued). Bernstein's inequality is stated as follows.
\begin{Lemma} [Bernstein's Inequality] \label{aux_lm_bernstein} Suppose that $Z^{(1)}, Z^{(2)}, \dots, Z^{(p)}$ are $p$ \text{i.i.d.}\ samples from some distribution $\mathcal{D}$. If $\mathbb{E}[Z] = 0$, $\norm{Z^{(j)}} \leq \mathcal{R}$ almost surely and $\norm{\mathbb{E}[Z^{(j)}(Z^{(j)})^T]} \leq \sigma^2$ for each $j$, then \begin{equation} \frac{1}{p}\norm[\Big]{\sum_{j=1}^p Z^{(j)}} \leq \widetilde{O}\biggl(\frac{\mathcal{R}}{p} + \sqrt{\frac{\sigma^2}{p}}\biggr) \label{ineq_bernstein} \end{equation} holds with probability $1-n^{-\omega(1)}$. \end{Lemma} Since the random variables of interest (or their norms) are not almost surely bounded in our model setting, we make use of a technical lemma from \cite{arora15-neural} to handle this issue. \begin{Lemma} [\cite{arora15-neural}] \label{aux_lm_bernstein_truncate_condition} Suppose a random variable $Z$ satisfies $\mathbb{P}[\norm{Z} \geq \mathcal{R}(\log(1/\rho))^C] \leq \rho$ for some constant $C > 0$. Then: (a) if $p = n^{O(1)}$, it holds that $\norm{Z^{(j)}} \leq \widetilde{O}(\mathcal{R})$ for each $j$ with probability $1 - n^{-\omega(1)}$; (b) $\norm{\mathbb{E}[Z\bm{1}_{\norm{Z} \geq \widetilde{\Omega}(\mathcal{R})}]} = n^{-\omega(1)}$. \end{Lemma} This lemma suggests that if $\frac{1}{p} \sum_{j=1}^p Z^{(j)}(1 - \bm{1}_{\norm{Z^{(j)}} \geq \widetilde{\Omega}(\mathcal{R})})$ concentrates around its mean with high probability, then so does $\frac{1}{p} \sum_{j=1}^p Z^{(j)}$, because the part beyond the truncation level can be ignored. Since all random variables of interest are sub-Gaussian, or products of sub-Gaussians, and hence satisfy this lemma, we can apply Lemma \ref{aux_lm_bernstein} to the corresponding truncated random variables with carefully chosen truncation levels; the original random variables then concentrate likewise. In the proofs that follow, we define suitable random variables and identify good bounds on $\mathcal{R}$ and $\sigma^2$ for them.
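As a quick numerical sanity check of the variance term in the bound, the sketch below averages bounded, zero-mean random vectors (uniform on a sphere; all constants are arbitrary choices) and confirms that the error of the empirical mean shrinks at roughly the $1/\sqrt{p}$ rate.

```python
import numpy as np

rng = np.random.default_rng(4)
n, R = 50, 2.0   # dimension and almost-sure norm bound ||Z|| <= R

def empirical_mean_norm(p):
    """Norm of the empirical mean of p i.i.d. vectors uniform on R * sphere."""
    Z = rng.standard_normal((p, n))
    Z = R * Z / np.linalg.norm(Z, axis=1, keepdims=True)  # E[Z] = 0 by symmetry
    return np.linalg.norm(Z.mean(axis=0))

# Increasing p by 100x should shrink the error by roughly 10x,
# in line with the sqrt(sigma^2 / p) term of the inequality.
err_small, err_large = empirical_mean_norm(100), empirical_mean_norm(10000)
```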
Note that in this section, the expectations are taken over $y$, conditioning on $u$ and $v$. This aligns with the construction that the estimators of $e$ and $M_{u,v}$ are empirical averages over i.i.d. samples of $y$, while $u$ and $v$ are kept fixed. Due to the dependency on $u$ and $v$, these (conditional) expectations inherit randomness from $u$ and $v$, and we will formulate probabilistic bounds for them. The application of Bernstein's inequality requires a bound on $\norm{\mathbb{E}[ZZ^T(1 - \bm{1}_{\norm{Z} \geq \widetilde{\Omega}(\mathcal{R})})]}$. We achieve this via the following technical lemma, where $\tilde{Z}$ is a standardized version of $Z$. \begin{Lemma} \label{aux_lm_pull_out_prob_bound} Suppose that $\tilde{Z}\tilde{Z}^T = aT$, where $a \geq 0$ is a scalar and $T$ is a positive semi-definite matrix, both random. Suppose further that $\mathbb{P}[a \geq \mathcal{A}]=n^{-\omega(1)}$ and that $\mathcal{B}>0$ is a constant. Then, $$\norm{\mathbb{E}[\tilde{Z}\tilde{Z}^T(1 - \bm{1}_{\norm{\tilde{Z}} \geq \mathcal{B}})]} \leq \mathcal{A}\norm{\mathbb{E}[T]} + O( n^{-\omega(1)}).$$ \end{Lemma} \proof To show this, we make use of the decomposition $\tilde{Z}\tilde{Z}^T = aT$ and a truncation of $a$.
Specifically, \begin{align*} \norm{\mathbb{E}[\tilde{Z}\tilde{Z}^T(1 - \bm{1}_{\norm{\tilde{Z}} \geq \mathcal{B}})]} &= \norm{\mathbb{E}[aT(1 - \bm{1}_{\norm{\tilde{Z}} \geq \mathcal{B}})]} \\ &\leq \norm{\mathbb{E}[a(1 - \bm{1}_{a \geq \mathcal{A}})T(1 - \bm{1}_{\norm{\tilde{Z}} \geq \mathcal{B}})]} + \norm{\mathbb{E}[a\bm{1}_{a \geq\mathcal{A}}T(1-\bm{1}_{\norm{\tilde{Z}} \geq \mathcal{B}})]} \\ &\leq \norm{\mathbb{E}[a(1 - \bm{1}_{a \geq \mathcal{A}})T]} + \mathbb{E}[a\bm{1}_{a \geq \mathcal{A}}\norm{T}(1-\bm{1}_{\norm{\tilde{Z}} \geq \mathcal{B}})] \\ &\leq \mathcal{A}\norm{\mathbb{E}[T]} + \bigl(\mathbb{E}[\norm{aT}^2(1-\bm{1}_{\norm{\tilde{Z}} \geq \mathcal{B}})] \mathbb{E}[\bm{1}_{a \geq \mathcal{A}}]\bigr)^{1/2} \\ &\leq \mathcal{A}\norm{\mathbb{E}[T]} + \bigl(\mathbb{E}[\norm{\tilde{Z}}^4(1-\bm{1}_{\norm{\tilde{Z}} \geq \mathcal{B}})]\mathbb{P}[a \geq \mathcal{A}]\bigr)^{1/2} \\ &\leq \mathcal{A}\norm{\mathbb{E}[T]} + \mathcal{B}^2\bigl(\mathbb{P}[a \geq \mathcal{A}]\bigr)^{1/2}\\ &\leq \mathcal{A}\norm{\mathbb{E}[T]} + O( n^{-\omega(1)}), \end{align*} where in the third step we used $T(1 - \bm{1}_{\norm{\tilde{Z}} \geq \mathcal{B}}) \preceq T$, which holds because $T$ is positive semi-definite and $1 - \bm{1}_{\norm{\tilde{Z}} \geq \mathcal{B}} \in \{0, 1\}$. This completes the proof of the lemma. \qed \subsection{Sample Complexity of Algorithm \ref{alg_neural_initialization}} In Algorithm \ref{alg_neural_initialization}, we empirically compute the ``scores'' $\widehat{e}$ and the reduced weighted covariance matrix $\widehat{M}_{u,v}$ to produce an estimate of each column of $A^*$. Since the construction of $\widehat{M}_{u,v}$ depends upon the support estimate $\widehat{R}$ given by ranking $\widehat{e}$, we denote it by $\widehat{M}_{u,v}^{\widehat{R}}$. We will show that $p = \widetilde{O}(m)$ samples suffice to recover the support of one particular atom, and $p = \widetilde{O}(mr)$ samples to estimate it up to some specified level of column-wise error, with high probability.
\begin{Lemma} \label{lm_concentration_ehat_Mhat} Consider Algorithm \ref{alg_neural_initialization} in which $p$ is the given number of samples. For any pair $u$ and $v$, with high probability: a) $\norm{\widehat{e} - e}_\infty \leq O^*(k/m\log^2n)$ when $p = \widetilde{\Omega}(m)$, and b) $\norm{\widehat{M}_{u,v}^{\widehat{R}} - M_{u,v}^R} \leq O^*(k/m\log n)$ when $p = \widetilde{\Omega}(mr)$, where $\widehat{R}$ and $R$ are respectively the estimated and correct support sets of one particular atom. \end{Lemma} \subsubsection{Proof of Theorem \ref{main_thm_initialization}} \label{sec:proof-thm-initialization} Using Lemma \ref{lm_concentration_ehat_Mhat}, we are ready to prove Theorem \ref{main_thm_initialization}. According to Lemma \ref{lm_diagonal_entries_in_expectation}, when $U \cap V = \{i\}$ we can write $\widehat{e}$ as $$\widehat{e} = q_ic_i\beta_i\beta'_i\AR{i}^* \circ \AR{i}^* + ~\text{perturbation terms} + (\widehat{e} - e),$$ and treat $\widehat{e} - e$ as an additional perturbation of magnitude $O^*(k/m\log^2n)$ in the sense of $\|\cdot\|_\infty$ \text{w.h.p.}\ The first part of Lemma \ref{lm_support_consists_and_closeness} shows that when $u$ and $v$ share exactly one atom $i$, the set $\widehat{R}$ of the $r$ largest elements of $\widehat{e}$ coincides with $\textnormal{supp}(A_i^*)$ with high probability. 
Once we have $\widehat{R}$, we again write $\widehat{M}_{u,v}^{\widehat{R}}$ using Lemma \ref{lm_reweighted_cov_matrix_in_expectation} as $$\widehat{M}_{u,v}^{\widehat{R}} = q_ic_i\beta_i\beta'_i\AR{i}^*\AR{i}^{*T} + ~\text{perturbation terms} + (\widehat{M}_{u,v}^{\widehat{R}} - M_{u,v}^{R}),$$ and treat $\widehat{M}_{u,v}^{\widehat{R}} - M_{u,v}^{R}$ as an additional perturbation of magnitude $O^*(k/m\log n)$ in the sense of the spectral norm $\|\cdot\|$ \text{w.h.p.}\ Using the second part of Lemma \ref{lm_support_consists_and_closeness}, the top singular vector of $\widehat{M}_{u,v}^{\widehat{R}}$ is $O^*(1/\log n)$-close to $\AR{i}^*$ with high probability. Since every vector added to the list $L$ in Algorithm~\ref{alg_neural_initialization} is close to one of the dictionary atoms, $A^0$ must be $\delta$-close to $A^*$. In addition, the nearness of $A^0$ to $A^*$ is guaranteed via an appropriate projection onto the convex set $\mathcal{B} = \{A | A ~\text{close to } A^0 ~\text{and}~ \norm{A} \leq 2\norm{A^*} \}$. This finishes the proof of Theorem \ref{main_thm_initialization}. \qedhere \subsubsection{Proof of Lemma \ref{lm_concentration_ehat_Mhat}, Part a} \label{sec:proof-lm_concentration_e} For some fixed $l \in [n]$, consider $p$ \text{i.i.d.}\ realizations $Z^{(1)}, Z^{(2)}, \dots, Z^{(p)}$ of the random variable $Z \triangleq \inprod{y}{u} \inprod{y}{v} y_l^2$, so that $\widehat{e}_l = \frac{1}{p}\sum_{i=1}^pZ^{(i)}$ and $e_l = \mathbb{E}[Z]$. To show that $\norm{\widehat{e} - e}_\infty \leq O^*(k/m\log^2n)$ holds with high probability, we first study the concentration of the $l$-th entry of $\widehat{e} - e$ and then take the union bound over all $l = 1, 2, \dots, n$. We derive upper bounds for $\abs{Z}$ and its variance $\mathbb{E}[Z^2]$ in order to apply Bernstein's inequality (Lemma \ref{aux_lm_bernstein}) to the truncated version of $Z$. 
\begin{Claim} \label{cl_bound_z} $\abs{Z} \leq \widetilde{O}(k)$ and $\mathbb{E}[Z^2] \leq \widetilde{O}(k^2/m)$ with high probability. \end{Claim} Again, the expectation is taken over $y$ by conditioning on $u$ and $v$, and is therefore still random due to the randomness of $u$ and $v$. To prove Claim \ref{cl_bound_z}, we begin with the following auxiliary claim. \begin{Claim} \label{cl_bound_y} $\norm{y} \leq \widetilde{O}(\sqrt{k})$ and $\abs{\inprod{y}{u}} \leq \widetilde{O}(\sqrt{k})$ with high probability. \end{Claim} \proof From the generative model, we have $$\norm{y} = \norm{\cA{S}^*x_S^* + \varepsilon} \leq \norm{\cA{S}^*x_S^*} + \norm{\varepsilon} \leq \norm{\cA{S}^*}\norm{x_S^*} + \norm{\varepsilon}, $$ where $S = \textnormal{supp}(x^*)$. From Claim \ref{cl_bound_subgaussian_rv}, $\norm{x_S^*} \leq \widetilde{O}(\sqrt{k})$ and $\norm{\varepsilon} \leq \widetilde{O}(\sigma_\varepsilon \sqrt{n})$ \text{w.h.p.}\ In addition, $A^*$ is overcomplete and has bounded spectral norm, so $\norm{\cA{S}^*} \leq \norm{A^*} \leq O(1)$. Therefore, $\norm{y} \leq \widetilde{O}(\sqrt{k})$ \text{w.h.p.}, which proves the first part. To bound the second term, we write it as $$\abs{\inprod{y}{u}} = \abs{\inprod{\cA{S}^*x_S^* + \varepsilon}{u}} \leq \abs{\inprod{x_S^*}{\cA{S}^{*T}u}} + \abs{\inprod{\varepsilon}{u}}.$$ Similarly to $y$, we have $\norm{u} \leq \widetilde{O}(\sqrt{k})$ \text{w.h.p.}\ and hence $\norm{\cA{S}^{*T}u} \leq \norm{\cA{S}^{*T}}\norm{u} \leq \widetilde{O}(\sqrt{k})$ with high probability. Since $u$ and $x^*$ are independent sub-Gaussian vectors, $\inprod{x_S^*}{\cA{S}^{*T}u}$ is sub-exponential with variance at most $\widetilde{O}(k)$, so $\abs{\inprod{x_S^*}{\cA{S}^{*T}u}} \leq \widetilde{O}(\sqrt{k})$ \text{w.h.p.}\ Similarly, $\abs{\inprod{\varepsilon}{u}} \leq \widetilde{O}(\sqrt{k})$ \text{w.h.p.}\ Consequently, $\abs{\inprod{y}{u}} \leq \widetilde{O}(\sqrt{k})$ \text{w.h.p.}, and we conclude the proof of the claim. 
\qed \proof[Proof of Claim \ref{cl_bound_z}] We have $Z = \inprod{y}{u}\inprod{y}{v} y_l^2 = \inprod{y}{u} \inprod{y}{v} (\inprod{\rA{l}^*}{x^*} + \varepsilon_l)^2$ with $\abs{\inprod{y}{u}\inprod{y}{v}} \leq \widetilde{O}(k)$ \text{w.h.p.}\ according to Claim \ref{cl_bound_y}. What remains is to bound $y_l^2 = (\inprod{\rA{l}^*}{x^*} + \varepsilon_l)^2$. Because $\inprod{\rA{l}^*}{x^*}$ is sub-Gaussian with variance $\mathbb{E}_S(\sum_{i \in S}A_{li}^{*2}) \leq \norm{A^{*T}}^2_{1, 2} = O(1)$, we have $\abs{\inprod{\rA{l}^*}{x^*}} \leq O(\log n)$ \text{w.h.p.}\ Similarly for $\varepsilon_l$, $\abs{\varepsilon_l} \leq O(\sigma_\varepsilon\log n)$ \text{w.h.p.}\ Ultimately, $\abs{\inprod{\rA{l}^*}{x^*} + \varepsilon_l} \leq O(\log n)$, and hence we obtain the bound $\abs{Z} \leq \widetilde{O}(k)$ with high probability. To bound the variance term, we write $Z^2 = \inprod{y}{v}^2 y_l^2\inprod{y}{u}^2 y_l^2$. Note that, from the first part, we get $\inprod{y}{v}^2 y_l^2 \leq \widetilde{O}(k)$ and $\abs{Z} \leq \widetilde{O}(k)$ \text{w.h.p.}\ We apply Lemma \ref{aux_lm_pull_out_prob_bound} with appropriate scaling to both terms, so that $$\mathbb{E}[Z^2(1 - \bm{1}_{\abs{Z} \geq \widetilde{\Omega}(k)})] \leq \widetilde{O}(k)\mathbb{E}[\inprod{y}{u}^2y_l^2] + O(n^{-\omega(1)}),$$ where $\mathbb{E}[\inprod{y}{u}^2y_l^2]$ equals $e_l$ for the pair $u, v$ with $v = u$. From Lemma \ref{lm_diagonal_entries_in_expectation} and its proof in the Appendix section ``Analysis of Initialization Algorithm'', \begin{align*} \mathbb{E}[\inprod{y}{u}^2y_l^2] &= \sum_{i=1}^m q_ic_i\beta_i^2A^{*2}_{li} + ~\text{perturbation terms}, \end{align*} in which the perturbation terms are bounded by $O^*(k/m\log^2n)$ \text{w.h.p.}\ (following Claims \ref{cl_bound_E1} and \ref{cl_bound_Es}). The dominant term satisfies $\sum_{i}q_ic_i\beta_i^2A^{*2}_{li} \leq (\max_i q_ic_i\beta_i^2)\norm{\rA{l}^*}^2 \leq \widetilde{O}(k/m)$ \text{w.h.p.}\ because $\abs{\beta_i} \leq O(\log m)$ (Claim \ref{cl_bounds_of_beta}). 
This completes the proof of the second part. \qed \proof[Proof of Lemma \ref{lm_concentration_ehat_Mhat}, Part a] We are now ready to prove Part a of Lemma \ref{lm_concentration_ehat_Mhat}. We apply Bernstein's inequality in Lemma \ref{aux_lm_bernstein} to the truncated random variable $Z^{(i)}(1-\bm{1}_{\abs{Z^{(i)}} \geq \widetilde{\Omega}(\mathcal{R})})$ with $\mathcal{R} = \widetilde{O}(k)$ and variance $\sigma^2 = \widetilde{O}(k^2/m)$ from Claim \ref{cl_bound_z}, so that \begin{equation} \norm[\bigg]{\frac{1}{p} \sum_{i=1}^p Z^{(i)}(1 - \bm{1}_{\abs{Z^{(i)}} \geq \widetilde{\Omega}(\mathcal{R})}) - \mathbb{E}[Z(1 - \bm{1}_{\abs{Z} \geq \widetilde{\Omega}(\mathcal{R})})]} \leq \frac{\widetilde{O}(k)}{p} + \sqrt{\frac{\widetilde{O}(k^2/m)}{p}} \leq O^*(k/m\log n), \end{equation} \text{w.h.p.}\ for $p = \widetilde{\Omega}(m)$. Then $\widehat{e}_l = \frac{1}{p}\sum_{i=1}^pZ^{(i)}$ also concentrates with high probability. Taking the union bound over $l = 1, 2, \dots, n$, we get $\norm{\widehat{e} - e}_\infty \leq O^*(k/m\log n)$ with high probability and complete the proof of Lemma \ref{lm_concentration_ehat_Mhat}, Part a. \qedhere \subsubsection{Proof of Lemma \ref{lm_concentration_ehat_Mhat}, Part b} \label{sec:proof-lm_concentration_M} Next, we prove that $\norm{\widehat{M}_{u,v}^{\widehat{R}} - M_{u,v}^R} \leq O^*(k/m\log n)$ with high probability. It suffices to prove the concentration inequalities conditioned on the event $\widehat{R} = R$, which holds \text{w.h.p.}\ Again, what we need are an upper bound $\mathcal{R}$ on the norm of the matrix random variable $Z \triangleq \inprod{y}{u} \inprod{y}{v} y_Ry_R^T$ and a bound on its variance. \begin{Claim} \label{cl_bound_Mh_A41} $\norm{Z} \leq \widetilde{O}(kr)$ and $\norm{\mathbb{E}[ZZ^T]} \leq \widetilde{O}(k^2r/m)$ hold with high probability. 
\end{Claim} \proof We have $\norm{Z} \leq \abs{\inprod{y}{u} \inprod{y}{v}} \norm{y_R}^2$ with $\abs{\inprod{y}{u} \inprod{y}{v}} \leq \widetilde{O}(k)$ \text{w.h.p.}\ (according to Claim \ref{cl_bound_y}), whereas $\norm{y_R}^2 = \sum_{l \in R}y_l^2 \leq O(r\log^2n)$ \text{w.h.p.}\ because $y_l \leq O(\log n)$ \text{w.h.p.}\ (proof of Claim \ref{cl_bound_z}). This implies $\norm{Z} \leq \widetilde{O}(kr)$ \text{w.h.p.}\ The second part is handled similarly to the proof of Claim \ref{cl_bound_z}. We take advantage of the bounds of $\widehat{M}_{u,v}$ in Lemma \ref{lm_reweighted_cov_matrix_in_expectation}. Specifically, using the first part $\norm{Z} \leq \widetilde{O}(kr)$ and $\inprod{y}{v}^2\norm{y_R}^2 \leq \widetilde{O}(kr)$, and applying Lemma \ref{aux_lm_pull_out_prob_bound}, we get $$\norm{\mathbb{E}[ZZ^T(1 - \bm{1}_{\norm{Z} \geq \widetilde{\Omega}(kr)})]} \leq \widetilde{O}(kr)\norm{\mathbb{E}[\inprod{y}{u}^2 y_Ry_R^T]} + \widetilde{O}(kr)O(n^{-\omega(1)}) \leq \widetilde{O}(kr)\norm{M_{u, u}}, $$ where $M_{u,u}$ arises from the application of Lemma \ref{lm_reweighted_cov_matrix_in_expectation}. Recall that $$M_{u,u} = \sum_{i}q_ic_i\beta_i^2\AR{i}^*\AR{i}^{*T} + ~\text{perturbation terms},$$ where the perturbation terms are all bounded by $O^*(k/m\log n)$ \text{w.h.p.}\ by Claims \ref{cl_bound_M1} and \ref{cl_bound_Ms}. 
In addition, $$\norm{\sum_{i}q_ic_i\beta_i^2\AR{i}^*\AR{i}^{*T}} \leq (\max_{i}q_ic_i\beta_i^2)\norm{\rA{R}^*}^2 \leq \widetilde{O}(k/m)\norm{A^*}^2 \leq \widetilde{O}(k/m)$$ \text{w.h.p.}\ Finally, the variance bound is $\widetilde{O}(k^2r/m)$ \text{w.h.p.}\ \qed Then, we apply Bernstein's inequality in Lemma \ref{aux_lm_bernstein} to the truncated version of $Z$ with $\mathcal{R} = \widetilde{O}(kr)$ and variance $\sigma^2 = \widetilde{O}(k^2r/m)$, and obtain the concentration for the full $Z$: $$\norm{\widehat{M}_{u,v}^R - M_{u,v}^R} \leq \frac{\widetilde{O}(kr)}{p} + \sqrt{\frac{\widetilde{O}(k^2r/m)}{p}} \leq O^*(k/m\log n)$$ \text{w.h.p.}\ when the number of samples is $p = \widetilde{\Omega}(mr)$ under \Asmp{4.1}. We have thus proved that $\norm{\widehat{M}_{u,v}^R - M_{u,v}^R} \leq O^*(k/m\log n)$ conditioned on the support consistency event, which holds \text{w.h.p.}\ The unconditional bound $\norm{\widehat{M}_{u,v}^{\widehat{R}} - M_{u,v}^R} \leq O^*(k/m\log n)$ then follows easily from the law of total probability, through the tail bounds on the conditional and marginal probabilities (i.e., $\mathbb{P}[\norm{\widehat{M}_{u,v}^R - M_{u,v}^R} \leq O^*(k/m\log n) \,|\, \widehat{R} = R]$ and $\mathbb{P}[\widehat{R} \neq R]$). This finishes the proof of Lemma \ref{lm_concentration_ehat_Mhat}, Part b. \qed \subsection{Proof of Theorem \ref{main_thm_columnwise_descent_in_expectation} and Sample Complexity of Algorithm \ref{alg_neural_doubly_sdl}} In this section, we prove Theorem \ref{main_thm_columnwise_descent_in_expectation} and identify the sample complexity per iteration of Algorithm \ref{alg_neural_doubly_sdl}. We divide the proof into two steps: 1) show that when $A^s$ is $(\delta_s, 2)$-near to $A^*$ for $\delta_s= O^*(1/\log n)$, the approximate gradient estimate $\widehat{g}^s$ is $(\alpha, \beta, \gamma_s)$-correlated-whp with $A^*$ with $\gamma_s \leq O(k^2/mn) + \alpha o(\delta_s^2)$, and 2) show that the nearness is preserved at each iteration. 
These correspond to showing the following lemmas: \begin{Lemma} At iteration $s$ of Algorithm \ref{alg_neural_doubly_sdl}, suppose that $A^s$ has each column correctly supported and is $(\delta_s, 2)$-near to $A^*$, and that $\eta = O(m/k)$. Denote $R = \textnormal{supp}(\cA{i}^s)$; then the update $\widehat{g}_{R, i}^s$ is $(\alpha, \beta, \gamma_s)$-correlated-whp with $\AR{i}^*$, where $\alpha = \Omega(k/m)$, $\beta = \Omega(m/k)$ and $\gamma_s \leq O(k^2/mn) + \alpha o(\delta_s^2)$ for $\delta_s= O^*(1/\log n)$. \label{lm_correlation_g_hat} \end{Lemma} Note that this is a finite-sample version of Lemma \ref{lm_correlation_gs}. \begin{Lemma} \label{lm_nearness_finite_sample} If $A^s$ is $(\delta_s, 2)$-near to $A^*$ and the number of samples used at step $s$ is $p=\widetilde{\Omega}(m)$, then with high probability $\norm{A^{s+1} - A^*} \leq 2\norm{A^*}$. \end{Lemma} \proof[Proof of Theorem \ref{main_thm_columnwise_descent_in_expectation}] The correlation of $\widehat{g}_i$ with $A_i^*$, described in Lemma \ref{lm_correlation_g_hat}, implies the descent of the column-wise error according to Theorem \ref{thm_descent_from_correlation_z}. Along with Lemma \ref{lm_nearness_finite_sample}, the theorem follows directly. \qed \subsubsection{Proof of Lemma \ref{lm_correlation_g_hat}} We prove Lemma \ref{lm_correlation_g_hat} by obtaining a tail bound on the difference between $\widehat{g}^s_{R, i}$ and $\gR{i}^s$ using Bernstein's inequality in Lemma \ref{aux_lm_bernstein}. \begin{Lemma} At iteration $s$ of Algorithm \ref{alg_neural_doubly_sdl}, suppose that $A^s$ has each column correctly supported and is $(\delta_s, 2)$-near to $A^*$. For $R = \textnormal{supp}(A_i^s) = \textnormal{supp}(A_i^*)$, we have $\norm{\widehat{g}^s_{R, i} - \gR{i}^s} \leq O(k/m)\cdot( o(\delta_s) + O(\epsilon_s))$ with high probability for $\delta_s = O^*(1/\log n)$ and $\epsilon_s = O(\sqrt{k/n})$ when $p = \widetilde{\Omega}(m + \sigma_\varepsilon^2\frac{mnr}{k})$. 
\label{lm_concentration_of_gradient_sparse_case} \end{Lemma} To prove this lemma, we study the concentration of $\widehat{g}^s_{R, i}$, which is a sum of random vectors of the form $(y - Ax)_R\textnormal{sgn}(x_i)$. We consider the random variable $Z \triangleq (y - Ax)_R\textnormal{sgn}(x_i) \,|\, i \in S$, with $S = \textnormal{supp}(x^*)$ and $x = \textnormal{threshold}_{C/2}(A^Ty)$, and then use the following technical claim to bridge the gap between the concentration of the two variables. We adopt this strategy from \cite{arora15-neural} for our purpose. \begin{Claim} \label{cl_concentration_sparse_Zr} Suppose that $Z^{(1)}, Z^{(2)}, \dots, Z^{(N)}$ are \text{i.i.d.}\ samples of the random variable $Z = (y - Ax)_R\textnormal{sgn}(x_i) \,|\, i \in S$. Then, \begin{equation} \norm[\Big]{\frac{1}{N}\sum_{j=1}^N Z^{(j)} - \mathbb{E}[Z]} \leq o(\delta_s) + O(\epsilon_s) \label{ineq_bernstein} \end{equation} holds with high probability when $N = \widetilde{\Omega}(k + \sigma_\varepsilon^2nr)$, $\delta_s = O^*(1/\log n)$ and $\epsilon_s = O(\sqrt{k/n})$. \end{Claim} \proof[Proof of Lemma \ref{lm_concentration_of_gradient_sparse_case}] Given Claim \ref{cl_concentration_sparse_Zr}, Lemma \ref{lm_concentration_of_gradient_sparse_case} follows easily. We recycle the proof of Lemma 43 in \cite{arora15-neural}. Write $W = \{j: i \in \textnormal{supp}(x^{*(j)})\}$ and $N = |W|$, and express $\widehat{g}_{R,i}$ as $$\widehat{g}_{R,i} = \frac{N}{p}\cdot\frac{1}{N} \sum_{j \in W}(y^{(j)} - Ax^{(j)})_R\textnormal{sgn}(x_i^{(j)}),$$ where $\frac{1}{|W|} \sum_{j \in W}(y^{(j)} - Ax^{(j)})_R\textnormal{sgn}(x_i^{(j)})$ is distributed as $\frac{1}{N}\sum_{j=1}^N Z^{(j)}$ with $N = |W|$. Note that $\mathbb{E}[(y - Ax)_R\textnormal{sgn}(x_i)] = \mathbb{E}[(y - Ax)_R\textnormal{sgn}(x_i)\bm{1}_{i\in S}] = \mathbb{E}[Z]\mathbb{P}[i \in S] = q_i\mathbb{E}[Z]$ with $q_i = \Theta(k/m)$. 
Following Claim \ref{cl_concentration_sparse_Zr}, we have $$\norm{\widehat{g}^s_{R, i} - \gR{i}^s} \leq O(k/m)\norm[\Big]{\frac{1}{N}\sum_{j=1}^N Z^{(j)} - \mathbb{E}[Z]} \leq O(k/m)\cdot( o(\delta_s) + O(\epsilon_s)),$$ which holds with high probability as $p = \Omega(mN/k)$. Substituting the value of $N$ from Claim \ref{cl_concentration_sparse_Zr}, we obtain the result of Lemma \ref{lm_concentration_of_gradient_sparse_case}. \qedhere \proof[Proof of Claim \ref{cl_concentration_sparse_Zr}] We are now ready to prove the claim. What we need are good bounds on $\norm{Z}$ and its variance; we can then apply Bernstein's inequality in Lemma \ref{aux_lm_bernstein} to the truncated version of $Z$, and $Z$ itself concentrates likewise. \begin{Claim} \label{cl_norm_bound_ghi} $\norm{Z} \leq \mathcal{R}$ holds with high probability for $\mathcal{R} = \widetilde{O}(\delta_s\sqrt{k} + \mu k/\sqrt{n} + \sigma_\varepsilon \sqrt{r})$ with $\delta_s = O^*(1/\log n)$. \end{Claim} \proof From the generative model and the support consistency of the encoding step, we have $y = A^*x^* + \varepsilon = \cA{S}^*x^*_S + \varepsilon$ and $x_S = \cA{S}^Ty = \cA{S}^T\cA{S}^*x^*_S + \cA{S}^T\varepsilon$. Then, \begin{align*} (y - Ax)_R &= (\AR{S}^*x^*_S + \varepsilon_R) - \AR{S}\cA{S}^T\cA{S}^*x^*_S - \AR{S}\cA{S}^T\varepsilon \\ &= (\AR{S}^* - \AR{S})x^*_S + \AR{S}(I_k - \cA{S}^T\cA{S}^*)x^*_S + (I_n - \cA{S}\cAS^T)_{R\bullet}\varepsilon. \end{align*} Using the facts that $x^*_S$ and $\varepsilon$ are sub-Gaussian and that $\norm{Mw} \leq \widetilde{O}(\sigma_w\norm{M}_F)$ holds with high probability for a fixed $M$ and a sub-Gaussian $w$ of variance $\sigma_w^2$, we have $$\norm{(y - Ax)_R\textnormal{sgn}(x_i)} \leq \widetilde{O}(\norm{\AR{S}^* - \AR{S}}_F + \norm{\AR{S}(I_k - \cA{S}^T\cA{S}^*)}_F + \sigma_\varepsilon\norm{(I_n - \cA{S}\cAS^T)_{R\bullet}}_F).$$ Now, we need to bound these Frobenius norms. 
The first quantity is easily bounded as \begin{equation} \label{ineq_C2.1} \norm{\AR{S}^* - \AR{S}}_F \leq \norm{\cA{S}^* - \cA{S}}_F \leq \delta_s\sqrt{k}, \end{equation} since $A$ is $\delta_s$-close to $A^*$. To handle the other two, we use the fact that $\norm{UV}_F \leq \norm{U} \norm{V}_F$. Applying this fact to the second term, we have $$\norm{\AR{S}(I_k - \cA{S}^T\cA{S}^*)}_F \leq \norm{\AR{S}}\norm{I_k - \cA{S}^T\cA{S}^*}_F, $$ where $\norm{\AR{S}} \leq \norm{\rA{R}} \leq O(1)$ due to the nearness. The second factor is rearranged to take advantage of the closeness and incoherence properties: \begin{align*} \norm{I_k - \cA{S}^T\cA{S}^*}_F &= \norm{I_k - \cA{S}^{*T}\cA{S}^* - (\cA{S} - \cA{S}^*)^T\cA{S}^*}_F \\ &\leq \norm{I_k - \cA{S}^{*T}\cA{S}^*}_F + \norm{(\cA{S} - \cA{S}^*)^T\cA{S}^*}_F \\ &\leq \norm{I_k - \cA{S}^{*T}\cA{S}^*}_F + \norm{\cA{S}^*}\norm{\cA{S} - \cA{S}^*}_F \\ &\leq \mu k/\sqrt{n} + O(\delta_s\sqrt{k}), \end{align*} where we have used $\norm{I_k - \cA{S}^{*T}\cA{S}^*}_F \leq \mu k/\sqrt{n}$ because of the $\mu$-incoherence of $A^*$, $\norm{\cA{S} - \cA{S}^*}_F \leq \delta_s\sqrt{k}$ from \eqref{ineq_C2.1} and $\norm{\cA{S}^*} \leq \norm{A^*} \leq O(1)$. Accordingly, the second Frobenius norm is bounded by \begin{equation} \label{ineq_C2.31} \norm{\AR{S}(I_k - \cA{S}^T\cA{S}^*)}_F \leq O\bigl(\mu k/\sqrt{n} + \delta_s\sqrt{k}\bigr). \end{equation} The noise term is handled using the eigen-decomposition $U\Lambda U^T$ of $\cA{S}\cAS^T$; then with high probability \begin{equation} \label{ineq_C2.4} \norm{(I_n - \cA{S}\cAS^T)_{R\bullet}}_F = \norm{(UU^T - U\Lambda U^T)_{R\bullet}}_F = \norm{U_{R\bullet}(I_n - \Lambda)}_F \leq \norm{I_n - \Lambda}\norm{U_{R\bullet}}_F \leq O(\sqrt{r}), \end{equation} where the last inequality uses $\norm{U_{R\bullet}}_F = \sqrt{r}$ and $\norm{I_n - \Lambda} \leq O(1)$, which follows from $\norm{\cA{S}} \leq \norm{A} \leq \norm{A - A^*} + \norm{A^*} \leq 3\norm{A^*} \leq O(1)$ due to the nearness. 
Putting \eqref{ineq_C2.1}, \eqref{ineq_C2.31} and \eqref{ineq_C2.4} together, we obtain the bounds in Claim \ref{cl_norm_bound_ghi}. \qed Next, we determine a bound for the variance of $Z$. \begin{Claim} \label{cl_bound_variance_ghi} $\mathbb{E}[\norm{Z}^2] = \mathbb{E}[\norm{(y-Ax)_R\textnormal{sgn}(x_i)}^2|i\in S] \leq \sigma^2$ holds with high probability for $\sigma^2 = O(\delta_s^2k + k^2/n + \sigma_\varepsilon^2r)$ with $\delta_s = O^*(1/\log n)$. \end{Claim} \proof We calculate the variance explicitly, using the fact that the entries of $x_S^*$ are conditionally independent given $S$, and so are those of $\varepsilon$. Moreover, $x_S^*$ and $\varepsilon$ are independent of each other and have zero mean. We can therefore expand the squared norm, in which the cross term vanishes in expectation and the remaining terms can be simplified using $\mathbb{E}[x^*_Sx^{*T}_S] = I_k$ and $\mathbb{E}[\varepsilon\varepsilon^T] = \sigma_\varepsilon^2 I_n$: \begin{align*} \mathbb{E}[\norm{(y-Ax)_R\textnormal{sgn}(x_i)}^2|i\in S] &= \mathbb{E}[\norm{(\AR{S}^* - \AR{S}\cA{S}^T\cA{S}^*)x^*_S + (I_n - \cA{S}\cAS^T)_{R\bullet}\varepsilon}^2|i\in S] \\ &= \mathbb{E}[\norm{\AR{S}^* - \AR{S}\cA{S}^T\cA{S}^*}_F^2| i\in S] + \sigma_\varepsilon^2\mathbb{E}[\norm{(I_n - \cA{S}\cAS^T)_{R\bullet}}_F^2|i \in S]. \end{align*} Then, by re-writing $\AR{S}^* - \AR{S}\cA{S}^T\cA{S}^*$ as before, we get the form $(\AR{S}^* - \AR{S}) + \AR{S}(I_k - \cA{S}^T\cA{S}^*)$, in which the first term has Frobenius norm bounded by $\delta_s\sqrt{k}$. The second is further decomposed as \begin{align} \label{ineq_C2.5} \mathbb{E}[\norm{\AR{S}(I_k - \cA{S}^T\cA{S}^*)}_F^2|i \in S] &\leq \sup_S\norm{\AR{S}}^2\mathbb{E}[\norm{I_k - \cA{S}^T\cA{S}^*}_F^2|i \in S], \end{align} where $\sup_S\norm{\AR{S}} \leq \norm{\rA{R}} \leq O(1)$. 
We will bound $\mathbb{E}[\norm{I_k - \cA{S}^T\cA{S}^*}_F^2|i \in S] \leq O(k\delta_s^2) + O(k^2/n)$ using the proof from \cite{arora15-neural}: \begin{align*} &\mathbb{E}[\norm{I_k - \cA{S}^T\cA{S}^*}_F^2|i \in S] = \mathbb{E}[\sum_{j \in S}(1 - \cA{j}^T\cA{j}^*)^2 + \sum_{j \in S}\norm{\cA{j}^TA_{\bullet, -j}^*}^2|i \in S] \\ &= \mathbb{E}[\sum_{j \in S}\frac{1}{4}\norm{\cA{j} - \cA{j}^*}^4] + q_{ij}\sum_{j \neq i}\norm{\cA{j}^TA_{\bullet, -j}^*}^2 + q_i \norm{\cA{i}^TA_{\bullet, -i}^*}^2 + q_i \norm{A_{\bullet, -i}^T\cA{i}^*}^2, \end{align*} where $A_{\bullet, -i}$ is the matrix $A$ with the $i$-th column removed, $q_{ij} \leq O(k^2/m^2)$ and $q_i \leq O(k/m)$. For any $j = 1, 2, \dots, m$, \begin{align*} \norm{\cA{j}^TA_{\bullet, -j}^*}^2 &= \norm{\cA{j}^{*^T}A_{\bullet, -j}^* + (\cA{j} - \cA{j}^*)^TA_{\bullet, -j}^*}^2 \\ &\leq 2\sum_{l \neq j}\inprod{\cA{j}^*}{\cA{l}^*}^2 + 2\norm{(\cA{j} - \cA{j}^*)^TA_{\bullet, -j}^*}^2 \\ &\leq 2\sum_{l \neq j}\inprod{\cA{j}^*}{\cA{l}^*}^2 + 2\norm{\cA{j} - \cA{j}^*}^2\norm{A_{\bullet, -j}^*}^2 \leq O(\mu^2 + \delta_s^2). \end{align*} The last inequality invokes the $\mu$-incoherence, the $\delta$-closeness and the bounded spectral norm of $A^*$. Similarly, we obtain the same bound for $\norm{\cA{i}^TA_{\bullet, -i}^*}^2$ and $\norm{A_{\bullet, -i}^T\cA{i}^*}^2$. Consequently, \begin{align} \label{ineq_C2.6} \mathbb{E}[\norm{I_k - \cA{S}^T\cA{S}^*}_F^2|i \in S] \leq O(k\delta_s^2) + O(k^2/n). \end{align} For the last term, we invoke the inequality \eqref{ineq_C2.4} (Claim \ref{cl_norm_bound_ghi}) to get \begin{equation} \label{ineq_C2.7} \mathbb{E}[\norm{(I_n - \cA{S}\cAS^T)_{R\bullet}}_F^2|i \in S] \leq O(r). \end{equation} Putting \eqref{ineq_C2.5}, \eqref{ineq_C2.6} and \eqref{ineq_C2.7} together and using $\norm{\rA{R}} \leq O(1)$, we obtain the variance bound of $Z$: $\sigma^2 = O(\delta_s^2k + k^2/n + \sigma_\varepsilon^2r)$ with $\delta_s = O^*(1/\log n)$. Finally, we complete the proof. 
\qedhere We now apply the truncated Bernstein's inequality to the random variable $Z^{(j)}(1-\bm{1}_{\norm{Z^{(j)}} \geq \Omega(\mathcal{R})})$ with $\mathcal{R}$ and $\sigma^2$ given in Claims \ref{cl_norm_bound_ghi} and \ref{cl_bound_variance_ghi}, namely $\mathcal{R} = \widetilde{O}(\delta_s\sqrt{k} + \mu k/\sqrt{n} + \sigma_\varepsilon \sqrt{r})$ and $\sigma^2 = O(\delta_s^2k + k^2/n + \sigma_\varepsilon^2r)$. Then, $\frac{1}{N}\sum_{j=1}^N Z^{(j)}$ also concentrates: \begin{equation*} \norm[\Big]{\frac{1}{N} \sum_{j=1}^N Z^{(j)} - \mathbb{E}[Z]} \leq \widetilde{O}\Bigl(\frac{\mathcal{R}}{N}\Bigr) + \widetilde{O}\biggl(\sqrt{\frac{\sigma^2}{N}}\biggr) = o(\delta_s) + O(\sqrt{k/n}) \end{equation*} holds with high probability when $N = \widetilde{\Omega}(k + \sigma_\varepsilon^2nr)$. This finishes the proof of Claim \ref{cl_concentration_sparse_Zr}. \qed \proof[Proof of Lemma \ref{lm_correlation_g_hat}] With Claim \ref{cl_concentration_sparse_Zr}, we have the concentration of $\widehat{g}^s_{R, i}$ around its mean $g^s_{R, i}$. We now treat this difference as an error term added to the expectation $g^s_{R, i}$ and use Lemma \ref{lm_correlation_gs} to establish the correlation of $\widehat{g}^s_{R, i}$. Using the expression in Lemma \ref{lm_expected_columwise_update}, with high probability we can write $$\widehat{g}^s_{R, i} = g^s_{R, i} + (\widehat{g}^s_{R, i} - g^s_{R, i}) = 2\alpha(\AR{i} - \AR{i}^*) + v,$$ where $\norm{v} \leq \alpha \norm{\AR{i} - \AR{i}^*} + O(k/m)\cdot( o(\delta_s) + O(\epsilon_s))$. By Lemma \ref{lm_correlation_gs}, $\widehat{g}^s_{R, i}$ is $(\alpha, \beta, \gamma_s)$-correlated-whp with $\AR{i}^*$, where $\alpha = \Omega(k/m)$, $\beta = \Omega(m/k)$ and $\gamma_s \leq O(k/m)\cdot( o(\delta_s) + O(\sqrt{k/n}))$, which completes the proof of Lemma \ref{lm_correlation_g_hat}. 
\qedhere \subsubsection{Proof of Lemma \ref{lm_nearness_finite_sample}} \label{sec:proof_lm_15} We have shown the correlation of $\widehat{g}^s$ with $A^*$ \text{w.h.p.}\ and established the descent property of Algorithm \ref{alg_neural_doubly_sdl}. The next step is to show that the nearness is preserved at each iteration. To prove that $\norm{A^{s+1} - A^*} \leq 2\norm{A^*}$ holds with high probability, we recall the update rule $$A^{s+1} = A^s - \eta \mathcal{P}_H(\widehat{g}^s),$$ where $\mathcal{P}_H(\widehat{g}^s) = H \circ \widehat{g}^s$. Here $H = (h_{ij})$ with $h_{ij} = 1$ if $i \in \textnormal{supp}(\cA{j})$ and $h_{ij} = 0$ otherwise. Also, note that $A^s$ is $(\delta_s, 2)$-near to $A^*$ for $\delta_s = O^*(1/\log n)$. We already proved that the nearness holds for the exact expectation $g^s$ in Lemma \ref{lm_nearness_infinite_sample}. To prove it for $\widehat{g}^s$, we again apply the matrix Bernstein inequality to bound $\norm{\mathcal{P}_H(g^s) - \mathcal{P}_H(\widehat{g}^s)}$ by $O(k/m)$, which suffices because $\eta = \Theta(m/k)$ and $\norm{A^*} = O(1)$. Consider the matrix random variable $Z \triangleq \mathcal{P}_H((y - Ax)\textnormal{sgn}(x)^T)$. Our goal is to bound the spectral norm $\norm{Z}$ and both $\norm{\mathbb{E}[ZZ^T]}$ and $\norm{\mathbb{E}[Z^TZ]}$, since $Z$ is asymmetric. To simplify notation, we denote by $x_R$ the vector obtained from $x$ by zeroing out the elements not in $R$. Also, denote $R_i = \textnormal{supp}(h_i)$ and $S = \textnormal{supp}(x)$. Then $Z$ can be written explicitly as $$Z = [(y - Ax)_{R_1}\textnormal{sgn}(x_1), \dots, (y - Ax)_{R_m}\textnormal{sgn}(x_m)],$$ where many columns are zero since $x$ is $k$-sparse. The following claims follow from the proof of Claim 42 in \cite{arora15-neural}; here we state and detail some important steps. \begin{Claim} \label{cl_norm_bound_gh} $\norm{Z} \leq \widetilde{O}(k)$ holds with high probability. 
\end{Claim} \proof With high probability, $$\norm{Z} \leq \sqrt{\sum_{i \in S}\norm{(y - Ax)_{R_i}\textnormal{sgn}(x_i)}^2} \leq \sqrt{k}\max_{i \in S}\norm{(y - Ax)_{R_i}},$$ where, by Claim \ref{cl_norm_bound_ghi}, $\norm{(y - Ax)_{R_i}} \leq \widetilde{O}(\delta_s\sqrt{k} + \mu k/\sqrt{n} + \sigma_\varepsilon\sqrt{r}) \leq \widetilde{O}(\sqrt{k})$ \text{w.h.p.}; hence $\norm{Z} \leq \widetilde{O}(k)$ holds \text{w.h.p.} \qed \begin{Claim} \label{cl_variance_bound_gh} $\norm{\mathbb{E}[ZZ^T]} \leq O(k^2/n)$ and $\norm{\mathbb{E}[Z^TZ]} \leq \widetilde{O}(k^2/m)$ with high probability. \end{Claim} \proof The first term is easily handled. Specifically, with high probability $$\norm{\mathbb{E}[ZZ^T]} \leq \norm{\mathbb{E}[\sum_{i \in S} (y - Ax)_{R_i}\textnormal{sgn}(x_i)^2(y - Ax)^T_{R_i}]} = \norm{\mathbb{E}[\sum_{i \in S}(y - Ax)_{R_i}(y - Ax)_{R_i}^T]} \leq O(k^2/n),$$ where the last inequality follows from the proof of Claim 42 in \cite{arora15-neural}, which we do not repeat here. To bound $\norm{\mathbb{E}[Z^TZ]}$, we use a bound on the full matrix $(y-Ax)\textnormal{sgn}(x)^T$. Note that $\norm{y - Ax} \leq \widetilde{O}(\sqrt{k})$ \text{w.h.p.}\ follows similarly to the derivation in Claim \ref{cl_norm_bound_ghi}. Then with high probability, $$\norm{\mathbb{E}[Z^TZ]} \leq \norm{\mathbb{E}[\textnormal{sgn}(x)(y - Ax)^T(y - Ax)\textnormal{sgn}(x)^T]} \leq \widetilde{O}(k)\norm{\mathbb{E}[\textnormal{sgn}(x)\textnormal{sgn}(x)^T]} \leq \widetilde{O}(k^2/m),$$ where $\mathbb{E}[\textnormal{sgn}(x)\textnormal{sgn}(x)^T] = \textnormal{diag}(q_1, q_2, \dots, q_m)$ has norm bounded by $O(k/m)$. We can now apply Bernstein's inequality to the truncated version of $Z$ with $\mathcal{R}= \widetilde{O}(k)$ and $\sigma^2 = \widetilde{O}(k^2/m)$; then with $p = \widetilde{\Omega}(m)$, $$\norm{\mathcal{P}_H(g^s) - \mathcal{P}_H(\widehat{g}^s)} \leq \frac{\widetilde{O}(k)}{p} + \sqrt{\frac{\widetilde{O}(k^2/m)}{p}} \leq O^*(k/m)$$ holds with high probability. Finally, we invoke the bound $\eta = O(m/k)$ and complete the proof. 
\qed \section{Setup and Main Results} \label{setup} \subsection{Notation} \label{notation} We define $[m] \triangleq \{1, \ldots, m\}$ for any integer $m > 1$. For any vector $x = [x_1, x_2, \ldots, x_m]^T \in\mathbb{R}^{m}$, we write $\textnormal{supp}(x)\triangleq \{i \in [m]: x_i \neq 0 \}$ for the support set of $x$. Given any subset $S \subseteq [m]$, $x_S$ denotes the sub-vector of $x$ indexed by the elements of $S$. For any matrix $A \in \mathbb{R}^{n\times m}$, we use $\cA{i}$ and $\rA{j}^T$ to represent the $i$-th column and the $j$-th row respectively. For appropriate sets $R$ and $S$, let $\rA{R}$ (respectively, $\cA{S}$) be the submatrix of $A$ with rows (respectively, columns) indexed by the elements of $R$ (respectively, $S$). In addition, for the $i$-th column $\cA{i}$, we use $A_{R, i}$ to denote the sub-vector indexed by the elements of $R$. For notational simplicity, we use $\rA{R}^T$ to indicate $(\rA{R})^T$, the transpose of $A$ after a row selection. Besides, we use $\circ$ and $\textnormal{sgn}(\cdot)$ to represent the element-wise Hadamard product and the element-wise sign function respectively. Further, $\mathrm{threshold}_{K}(x)$ is a thresholding operator that replaces any element of $x$ with magnitude less than $K$ by zero. The $\ell_2$-norm $\norm{x}$ for a vector $x$ and the spectral norm $\norm{A}$ for a matrix $A$ appear frequently. In some cases, we also utilize the Frobenius norm $\norm{A}_F$ and the operator norm $\norm{A}_{1,2}\triangleq\max_{\norm{x}_1 \leq 1} \norm{Ax}$. The norm $\norm{A}_{1, 2}$ is essentially the maximal Euclidean norm of any column of $A$. For clarity, we adopt asymptotic notations extensively. We write $f(n) = O(g(n))$ (respectively, $f(n) = \Omega(g(n))$) if $f(n)$ is upper bounded (respectively, lower bounded) by $g(n)$ up to some positive constant. Next, $f(n) = \Theta(g(n))$ if and only if $f(n) = O(g(n))$ and $f(n) = \Omega(g(n))$. 
Also, $\widetilde{\Omega}$ and $\widetilde{O}$ represent $\Omega$ and $O$ up to a multiplicative poly-logarithmic factor respectively. Finally, $f(n) = o(g(n))$ (respectively, $f(n) = \omega(g(n))$) if $\lim_{n\rightarrow\infty} |f(n)/g(n)|=0$ (respectively, $\lim_{n\rightarrow\infty} |f(n)/g(n)|=\infty$). Throughout the paper, we use the phrase ``with high probability'' (abbreviated to \text{w.h.p.}) to describe an event with failure probability of order at most $n^{-\omega(1)}$. In addition, $g(n)=O^*(f(n))$ means $g(n)\le Kf(n)$ for some small enough constant $K$. \subsection{Model} \label{model} Suppose that the observed samples are given by $$y^{(i)} = D x^{*(i)} + \varepsilon^{(i)},~~i = 1, \ldots, p,$$ \text{i.e.}, we are given $p$ samples of $y$ generated from a fixed (but unknown) dictionary $D$, where the sparse code $x^*$ and the noise $\varepsilon$ are drawn from a joint distribution $\mathcal{D}$ specified below. In the double-sparse setting, the dictionary is assumed to follow a decomposition $D = \Phi A^*$, where $\Phi\in\mathbb{R}^{n\times n}$ is a known \emph{orthonormal} basis matrix and $A^*$ is an unknown, ground-truth synthesis matrix. An alternative (and interesting) setting is an overcomplete $\Phi$ with a square $A^*$, which our analysis below does not cover; we defer this to future work. Our approach relies upon the following assumptions on the synthesis dictionary $A^*$: \begin{enumerate} \item[\textbf{A1}] $A^*$ is overcomplete (i.e.,~$m \geq n$) with $m = O(n)$. \item[\textbf{A2}] $A^*$ is $\mu$-incoherent, \text{i.e.}, for all $i \neq j$, $\abs{\inprod{\cA{i}^*}{\cA{j}^*}} \leq \mu/\sqrt{n}$. \item[\textbf{A3}] $\cA{i}^*$ has at most $r$ non-zero elements and is normalized such that $\norm{\cA{i}^*} = 1$ for all $i$. Moreover, $\abs{A^*_{ij}} \geq \tau$ for $A^*_{ij} \neq 0$ and $\tau = \Omega(1/\sqrt{r})$. \item[\textbf{A4}] $A^*$ has bounded spectral norm such that $\norm{A^*} \leq O(\sqrt{m/n})$. \end{enumerate} All these assumptions are standard. 
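For convenience, we record a one-line consequence of \textbf{A2} and \textbf{A3} in the form in which they are typically invoked in the analysis: for any subset $S \subseteq [m]$ with $|S| \leq k$, the Gram matrix of the corresponding columns of $A^*$ is close to the identity.

```latex
% Diagonal entries equal 1 by the normalization in A3; off-diagonal entries
% are bounded by \mu/\sqrt{n} by A2. Hence
\begin{equation*}
\norm{I_k - \cA{S}^{*T}\cA{S}^*}_F^2
  = \sum_{\substack{i, j \in S \\ i \neq j}} \inprod{\cA{i}^*}{\cA{j}^*}^2
  \leq k(k-1)\cdot\frac{\mu^2}{n},
\qquad\text{so}\qquad
\norm{I_k - \cA{S}^{*T}\cA{S}^*}_F \leq \frac{\mu k}{\sqrt{n}}.
\end{equation*}
```

This is exactly the bound on $\norm{I_k - \cA{S}^{*T}\cA{S}^*}_F$ invoked in the proof of Claim \ref{cl_norm_bound_ghi}.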
In \Asmp{2}, the incoherence $\mu$ is typically of order $O(\log n)$ with high probability for a normal random matrix~\citep{arora14-new-algorithms}. Assumption \textbf{A3} is standard in sparse signal recovery. The bounded spectral norm assumption is also standard~\citep{arora15-neural}. In addition to Assumptions \textbf{A1-A4}, we make the following distributional assumptions on $\mathcal{D}$: \begin{enumerate} \item[\textbf{B1}] Support $S=\textnormal{supp}(x^*)$ is of size at most $k$ and uniformly drawn without replacement from $[m]$ such that $\mathbb{P}[i \in S] = \Theta(k/m)$ and $\mathbb{P}[i, j \in S] = \Theta(k^2/m^2)$ for some $i, j \in [m]$ and $i \neq j$. \item[\textbf{B2}] The nonzero entries $x^*_S$ are pairwise independent and sub-Gaussian given the support $S$ with $\mathbb{E}[x^*_i|i \in S] = 0$ and $\mathbb{E}[x^{*2}_i|i \in S] = 1$. \item[\textbf{B3}] For $i \in S$, $|{x^*_i}| \geq C$ where $0 < C \leq 1$. \item[\textbf{B4}] The additive noise $\varepsilon$ has i.i.d.\ Gaussian entries with variance $\sigma_\varepsilon^2$ with $\sigma_\varepsilon = O(1/\sqrt{n})$. \end{enumerate} For the rest of the paper, we set $\Phi = I_n$, the identity matrix of size $n$. This only simplifies the arguments but does not change the problem because one can study an equivalent model: $$y' = A^* x^* + \varepsilon',$$ where $y'=\Phi^T y$ and $\varepsilon' = \Phi^T \varepsilon$, as $\Phi^T\Phi=I_n$. Due to the Gaussianity of $\varepsilon$, $\varepsilon'$ also has independent entries. Although this property is specific to Gaussian noise, all the analysis carried out below can be extended to sub-Gaussian noise with minor (but rather tedious) changes in concentration arguments. Our goal is to devise an algorithm that produces a provably ``good'' estimate of $A^*$. For this, we need to define a suitable measure of ``goodness''.
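For intuition, the generative process specified by the model and Assumptions \textbf{B1}-\textbf{B4} (with $\Phi = I_n$) can be sampled as in the sketch below. The sizes are illustrative choices of ours; Rademacher nonzero entries satisfy \textbf{B2}-\textbf{B3} with $C = 1$, and the incoherence/spectral conditions \textbf{A2}/\textbf{A4} are not explicitly enforced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r, k, p = 64, 80, 5, 4, 500
sigma_eps = 1.0 / np.sqrt(n)                 # B4: noise level O(1/sqrt(n))

# A*: r-sparse (A3), unit-norm columns.
A_star = np.zeros((n, m))
for j in range(m):
    rows = rng.choice(n, size=r, replace=False)
    v = rng.standard_normal(r)
    A_star[rows, j] = v / np.linalg.norm(v)

def sample():
    S = rng.choice(m, size=k, replace=False)  # B1: uniform k-sparse support
    x = np.zeros(m)
    x[S] = rng.choice([-1.0, 1.0], size=k)    # B2/B3: mean 0, variance 1, |x_i| >= 1
    eps = sigma_eps * rng.standard_normal(n)  # B4: Gaussian noise
    return A_star @ x + eps

Y = np.column_stack([sample() for _ in range(p)])
print(Y.shape)  # (64, 500)
```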
We use the following notion of distance that measures the maximal column-wise difference in $\ell_2$-norm under some suitable transformation. \begin{Definition}[$(\delta, \kappa)$-nearness] $A$ is said to be $\delta$-close to $A^*$ if there is a permutation $\pi : [m] \rightarrow [m]$ and a sign flip $\sigma : [m] \rightarrow \{\pm 1\}$ such that $\norm{\sigma(i) \cA{\pi(i)} - \cA{i}^*} \leq \delta$ for every $i$. In addition, $A$ is said to be $(\delta, \kappa)$-near to $A^*$ if $\norm{\cA{\pi} - A^*} \leq \kappa \norm{A^*}$ also holds. \label{def_closeness} \end{Definition} For notational simplicity, in our theorems we simply replace $\pi$ and $\sigma$ in Definition \ref{def_closeness} with the identity permutation $\pi(i) = i$ and the positive sign $\sigma(\cdot) = +1$ while keeping in mind that in reality we are referring to one element of the equivalence class of all permutations and sign flip transforms of $A^*$. We will also need some technical tools from~\citet{arora15-neural} to analyze our gradient descent-style method. Consider any iterative algorithm that looks for a desired solution $z^* \in \mathbb{R}^n$ to optimize some function $f(z)$. Suppose that the algorithm produces a sequence of estimates $z^1, \dots, z^s$ via the update rule: $$z^{s+1} = z^s - \eta g^s,$$ for some vector $g^s$ and scalar step size $\eta$. The goal is to characterize ``good'' directions $g^s$ such that the sequence converges to $z^*$ under the Euclidean distance. The following gives one such sufficient condition for $g^s$.
\begin{Definition} \label{def_correlated_direction} A vector $g^s$ at the $s^{\textrm{th}}$ iteration is $(\alpha, \beta, \gamma_s)$-correlated with a desired solution $z^*$ if $$\inprod{g^s} {z^s - z^*} \geq \alpha \norm{z^s - z^*}^2 + \beta \norm{g^s}^2 - \gamma_s.$$ \end{Definition} We know from convex optimization that if $f$ is $2\alpha$-strongly convex and $1/(2\beta)$-smooth, and $g^s$ is chosen as the gradient $\nabla_z f(z)$, then $g^s$ is $(\alpha, \beta, 0)$-correlated with $z^*$. In our setting, the desired solution corresponds to $A^*$, the ground-truth synthesis matrix. In \cite{arora15-neural}, it is shown that $g^s= \mathbb{E}_y[(A^s x-y)\textnormal{sgn}(x)^T]$, where $x=\textnormal{threshold}_{C/2}((A^{s})^Ty)$, indeed satisfies Definition~\ref{def_correlated_direction}. This $g^s$ is a population quantity and not explicitly available, but one can estimate such $g^s$ using an empirical average. The corresponding estimator $\widehat{g}^s$ is a random variable, so we also need a related \emph{correlated-with-high-probability} condition: \begin{Definition} \label{def_correlated_direction_whp} A direction $\widehat{g}^s$ at the $s^{\textrm{th}}$ iteration is $(\alpha, \beta, \gamma_s)$-correlated-w.h.p. with a desired solution $z^*$ if, w.h.p., $$\inprod{\widehat{g}^s} {z^s - z^*} \geq \alpha \norm{z^s - z^*}^2 + \beta \norm{\widehat{g}^s}^2 - \gamma_s.$$ \end{Definition} From Definition \ref{def_correlated_direction}, one can establish a form of descent property in each update step, as shown in Theorem \ref{thm_descent_from_correlation_z}. \begin{Theorem} \label{thm_descent_from_correlation_z} Suppose that $g^s$ satisfies the condition described in Definition \ref{def_correlated_direction} for $s = 1, 2, \dots, T$. Moreover, $0 < \eta \leq 2\beta$ and $\gamma = \max_{s=1}^T\gamma_s$.
Then, the following holds for all $s$: $$\norm{z^{s+1} - z^*}^2 \leq (1-2\alpha \eta) \norm{z^s - z^*}^2 + 2\eta\gamma_s.$$ In particular, the above update converges geometrically to $z^*$ with an error $\gamma/\alpha$. That is, $$\norm{z^{s+1} - z^*}^2 \leq (1-2\alpha \eta)^s \norm{z^0 - z^*}^2 + 2\gamma/\alpha.$$ \end{Theorem} We can obtain a similar result for Definition \ref{def_correlated_direction_whp} except that $\norm{z^{s+1} - z^*}^2$ is replaced with its expectation. Armed with the above tools, we now state some informal versions of our main results: \begin{Theorem}[Provably correct initialization, informal] \label{result_thm_neural_initialization} There exists a neurally plausible algorithm to produce an initial estimate $A^0$ that has the correct support and is $(\delta, 2)$-near to $A^*$ with high probability. Its running time and sample complexity are $\widetilde{O}(mnp)$ and $\widetilde{O}(mr)$ respectively. This algorithm works when the sparsity level satisfies $r = O^*(\log n)$. \end{Theorem} Our algorithm can be regarded as an extension of~\citet{arora15-neural} to the double-sparse setting. It reconstructs the support of a single column and then estimates its direction in the subspace defined by the support. Our proposed algorithm enjoys neural plausibility by implementing a thresholding non-linearity and Oja's update rule. We provide a neural implementation of our algorithm in Appendix~\ref{neural_implementation}. The adaptation to the sparse structure results in a strict improvement upon the original algorithm both in running time and sample complexity. However, our algorithm is limited to the sparsity level $r = O^*(\log n)$, which is rather small but plausible from the modeling standpoint. For comparison, we analyze a natural extension of the algorithm of~\citet{arora15-neural} with an extra hard-thresholding step for every learned atom.
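As a brief aside, the guarantee of Theorem \ref{thm_descent_from_correlation_z} is easy to check numerically in the simplest setting: a quadratic $f(z) = \tfrac{1}{2}z^THz$ with minimizer $z^* = 0$, where $g^s = Hz^s$ is $(\alpha, \beta, 0)$-correlated with $\alpha = \lambda_{\min}/2$ and $\beta = 1/(2\lambda_{\max})$. This toy check is ours, not part of the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))
lams = np.array([0.5, 1.0, 2.0, 3.0, 4.0])
H = Q @ np.diag(lams) @ Q.T                    # symmetric positive definite
alpha, beta = lams.min() / 2, 1.0 / (2.0 * lams.max())
eta = 2 * beta                                 # largest admissible step size

z = rng.standard_normal(5)
for s in range(50):
    g = H @ z
    # correlation condition of the definition above (gamma_s = 0):
    assert g @ z >= alpha * (z @ z) + beta * (g @ g) - 1e-12
    z_next = z - eta * g
    # per-step descent bound of the theorem:
    assert z_next @ z_next <= (1 - 2 * alpha * eta) * (z @ z) + 1e-12
    z = z_next
print(np.linalg.norm(z))  # geometric decay toward z* = 0
```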
We obtain the same order restriction on $r$, but somewhat worse bounds on sample complexity and running time. The details are found in Appendix~\ref{appdx_improve_arora}. We hypothesize that a stronger incoherence assumption can lead to provably correct initialization for a much wider range of $r$. For purposes of theoretical analysis, we consider the special case of a \emph{perfectly incoherent} synthesis matrix $A^*$ such that $\mu = 0$ and $m = n$. In this case, we can indeed improve the sparsity parameter to $r = O^*\bigl( \min(\frac{\sqrt{n}}{\log^2 n}, \frac{n}{k^2\log^2n}) \bigr)$, which is an exponential improvement. This analysis is given in Appendix \ref{orthonormal_case}. \begin{Theorem}[Provably correct descent, informal] \label{result_thm_neural_algorithm} There exists a neurally plausible algorithm for double-sparse coding that converges to $A^*$ with geometric rate when the initial estimate $A^0$ has the correct support and is $(\delta, 2)$-near to $A^*$. The running time per iteration is $O(mkp+mrp)$ and the sample complexity is $\widetilde{O}(m + \sigma_\varepsilon^2\frac{mnr}{k})$. \end{Theorem} Similar to~\citet{arora15-neural}, our proposed algorithm enjoys neural plausibility. Moreover, we can achieve a better running time and sample complexity per iteration than previous methods, particularly in the noisy case. We show in Appendix~\ref{appdx_improve_arora} that in this regime the sample complexity of~\citet{arora15-neural} is $\widetilde{O}(m + \sigma_\varepsilon^2\frac{mn^2}{k})$. For instance, when $\sigma_\varepsilon\asymp n^{-1/2}$, the sample complexity bound is significantly worse than $\widetilde{O}(m)$ in the noiseless case. In contrast, our proposed method leverages the sparse structure to overcome this problem and obtain improved results. We are now ready to introduce our methods in detail.
As discussed above, our approach consists of two stages: an initialization algorithm that produces a coarse estimate of $A^*$, and a descent-style algorithm that refines this estimate to accurately recover $A^*$.
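To make the descent stage concrete, here is one iteration of the decode-then-update rule quoted earlier, $x = \textnormal{threshold}_{C/2}(A^Ty)$ followed by $\widehat{g} = \frac{1}{p}\sum_i (Ax^{(i)} - y^{(i)})\,\textnormal{sgn}(x^{(i)})^T$, in the perfectly incoherent regime ($\mu = 0$, $m = n$) mentioned above. This is a sketch of ours: all sizes, the step size, and the initialization are illustrative, and it omits the initialization stage entirely.

```python
import numpy as np

rng = np.random.default_rng(0)
n = m = 32
k, p, C, eta = 3, 2000, 1.0, 0.5
A_star = np.linalg.qr(rng.standard_normal((n, n)))[0]  # orthonormal ground truth

X_true = np.zeros((m, p))
for t in range(p):
    S = rng.choice(m, size=k, replace=False)           # B1: random k-sparse support
    X_true[S, t] = rng.choice([-1.0, 1.0], size=k)     # B2/B3 with C = 1
Y = A_star @ X_true                                    # noiseless samples

A = A_star + 0.05 * rng.standard_normal((n, m))        # a delta-close initial guess
X = A.T @ Y                                            # decode all samples at once
X[np.abs(X) < C / 2] = 0.0                             # threshold_{C/2}
ghat = (A @ X - Y) @ np.sign(X).T / p                  # empirical update direction
A_next = A - eta * ghat                                # one descent step
print(np.linalg.norm(A - A_star), np.linalg.norm(A_next - A_star))
```

With these settings the thresholding step typically recovers the true supports exactly, and the single step shrinks the distance to $A^*$, in line with the descent theorem.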
\section{Introduction} \begin{definition}\label{dip} (M. Gromov) A complex space $X$ is said to {\em satisfy the $h$-principle} (a property abbreviated by: `hP(X)') if: for every Stein manifold $S$ and every continuous map $f:S\to X$ there exists a holomorphic map $F:S\to X$ which is homotopic to $f$. \end{definition} The origin of this notion lies in the works of Grauert and Oka. Grauert indeed showed that any holomorphic principal bundle with fibre a complex Lie group $G$ over a Stein manifold $S$ has, for any given continuous section $s$, a holomorphic section homotopic to $s$. The classifications of continuous complex and holomorphic vector bundles on $S$ thus coincide. This was established by Oka for complex line bundles. Considering products $G\times S$, Grauert's result also shows that complex Lie groups satisfy the $h$-principle. This has been extended by M. Gromov to `elliptic' (and later by Forstneric to `subelliptic') manifolds. These include homogeneous complex manifolds (for example $\P_n$, Grassmannians and tori) and complements ${\mathbb C}^n\setminus A$ where $A$ is an algebraic subvariety of codimension at least two. (Sub-)elliptic manifolds contain as many `entire' curves as possible, and are therefore opposite to Brody hyperbolic complex manifolds. Since `generic' hyperbolicity is conjectured (and sometimes known) to coincide with ``general type'' in algebraic geometry, it is thus natural to assume that for projective varieties ``fulfilling the $h$-principle'' is related to being ``special'' as introduced in \cite{C01}, since `specialness' is conjectured there to be equivalent to $\Bbb C$-connectedness. In this article we investigate these relationships with particular emphasis on projective manifolds. The main result is: \begin{maintheorem} Let $X$ be a complex projective manifold fulfilling the $h$-principle. Then: \begin{enumerate} \item $X$ is {\em special}. \item Every holomorphic map from $X$ to a Brody hyperbolic K\"ahler manifold is constant.
\end{enumerate} \end{maintheorem} For an arbitrary complex manifold we prove the statements below. \begin{theorem} Let $X$ be a complex manifold fulfilling the $h$-principle. \begin{enumerate} \item Then $X$ is {\em weakly ${\mathbb C}$-connected} (see definition~\ref{dcc} below). \item For every holomorphic map $f$ from $X$ to a complex semi-torus $T$, the Zariski closure of $f(X)$ in $T$ is the translate of a complex sub semi-torus of $T$. \item If $X$ is an algebraic variety, its Quasi-Albanese map is dominant. \end{enumerate} \end{theorem} Let us now recall, respectively introduce, some notation. \begin{definition}\label{dcc} We say that a complex space $X$ is: \begin{enumerate} \item $\Bbb C$-connected if any two points of $X$ can be connected by a chain of `entire curves' (i.e.,~holomorphic maps from ${\mathbb C}$ to $X$). This property is preserved by passing to unramified coverings and images by holomorphic maps. If $X$ is smooth this property is also preserved under proper modifications. \item `Brody-hyperbolic' if any holomorphic map $h:\Bbb C\to X$ is constant. \item $X$ is said to be `weakly $\Bbb C$-connected' if every holomorphic map $f:X'\to Y$ from any unramified covering $X'$ of $X$ to a Brody-hyperbolic complex space $Y$ induces maps $\pi_k(f):\pi_k(X')\to \pi_k(Y)$ between the respective homotopy groups which are zero for any $k>0$. Observe that any holomorphic map $f:X\to Y$ between complex spaces is constant if $X$ is $\Bbb C$-connected and $Y$ Brody-hyperbolic. Thus $\Bbb C$-connectedness implies `weak $\Bbb C$-connectedness'. Also, any contractible $X$ is `weakly $\Bbb C$-connected'. There exist smooth projective threefolds which are `weakly $\Bbb C$-connected', but not $\Bbb C$-connected. An example can be found in \cite{CW}. \end{enumerate} \end{definition} It is easy to verify that every `subelliptic' manifold $X$ is $\Bbb C$-connected.
Conversely, all known examples of connected complex manifolds satisfying the $h$-principle admit a holomorphic homotopy equivalence to a `subelliptic' complex space. This suggests the following question: \begin{question}\label{qhpcc} Let $X$ be a complex connected manifold. If $X$ satisfies the $h$-principle, does this imply that there exists a holomorphic homotopy equivalence between $X$ and a $\Bbb C$-connected complex space $Z$? \end{question} Since a compact manifold cannot be homotopy equivalent to a proper analytic subset, for compact manifolds this question may be reformulated as follows: \begin{question}\label{qhpccK} Let $X$ be a compact complex connected manifold. If $X$ satisfies the $h$-principle, does this imply that $X$ is $\Bbb C$-connected? \end{question} Combining Theorem \ref{hps} with the `Abelianity conjecture' of \cite{C01}, we obtain the following purely topological conjectural obstruction to the $h$-principle: \begin{conjecture}\label{cab} Every projective manifold satisfying the $h$-principle should have an almost abelian fundamental group. \end{conjecture} Our proof of the implication ``$hp(X)\ \Longrightarrow$ {\em special}'' depends on `Joua\-nou\-lou's trick', which is not available for non-algebraic manifolds. Still we believe that the statement should also hold in the K\"ahler case (for which specialness is defined as in definition 2.1 below): \begin{conjecture} Every compact K\"ahler manifold satisfying the $h$-principle should be special. \end{conjecture} This implication might also hold for quasi-projective manifolds, provided their topology is sufficiently rich (non-contractibility being obviously a minimal requirement). Particular cases involving the quasi-Albanese map (dominance and connectedness) are established, using \cite{NWY}. See theorems \ref{ndqa} and \ref{niqa} in \S \ref{QAm}. The converse direction (``does specialness imply the $h$-principle?'') is almost completely open.
Based on classification and known results, the implication does hold for curves as well as surfaces which are either rational, or ruled over an elliptic curve, or blow-ups of either Abelian or bielliptic surfaces. The question remains open for all other special surfaces, and thus in particular for K3 surfaces, even Kummer ones. In higher dimensions even less is known, e.g.~the case of $\P^3$ blown up along a smooth curve of degree $3$ or more is far from being understood. Still, with a sufficient amount of optimism one might hope for a positive answer to the question below. \begin{question} Let $X$ be a smooth (or at least normal) quasi-projective variety. Assume that $X$ is either `special', or ${\mathbb C}$-connected. Does it follow that $X$ satisfies the $h$-principle? In this case, is it Oka (see \S\ref{EO})? \end{question} We present some examples showing that there is no positive answer for arbitrary (ie: non-normal, non-K\"ahler, or non-algebraic varieties): There are examples of the following types which do {\em not} fulfill the $h$-principle despite being ${\mathbb C}$-connected, or satisfying definition 2.1 (recall that we reserve the term `special' for the K\"ahler or quasi-K\"ahler case only): \begin{enumerate} \item A non-normal projective curve which satisfies definition 2.1 and is ${\mathbb C}$-connected. \item A non-compact and non-algebraic complex manifold which is ${\mathbb C}$-connected. \item A compact non-K\"ahler surface which satisfies definition 2.1. \end{enumerate} See \S \ref{EO} for more details. \begin{remark} \begin{enumerate} \item Any contractible complex space trivially satisfies the $h$-principle. The notion ``$h$-principle'' is thus of interest only for non-contractible $X$'s. Since positive-dimensional compact manifolds are never contractible, this is not relevant for projective manifolds.
However, there do exist examples of contractible affine varieties of log general type (\cite{R},\cite{M}) indicating that for non-compact varieties an equivalence ``hP $\iff$ {\em special}'' can hold only if the topology of the variety is sufficiently non-trivial. \item Let $u:X'\to X$ be an unramified covering, with $X$ and $X'$ smooth and connected. Then $hP(X)$ implies $hP(X')$ (see Lemma \ref{et}), but the converse is not true. To see this, consider a compact Brody-hyperbolic manifold $X$ which is an Eilenberg-MacLane $K(\pi,1)$-space, but not contractible (for example: a projective curve of genus $g\geq 2$ or a compact ball quotient). Then its universal cover $\tilde X$ is contractible and therefore satisfies the $h$-principle. On the other hand, being Brody-hyperbolic and non-contractible, $X$ itself cannot satisfy the $h$-principle. \item For any given $X$ and $f$, possibly replacing the initial complex structure $J_0$ of $S$ by another one $J_1=J_1(f)$, homotopic to $J_0$, the existence of $F$ as in definition \ref{dip} above is always true (if $dim_{\Bbb C}(S)\geq 3$ at least. If $dim_{\Bbb C}(S)=2$, one must first compose with an orientation preserving homeomorphism of $S$). See \cite{F}, \S 9.10. \end{enumerate} \end{remark} We thank Finnur L\'arusson for useful comments on an initial version of the present text. \section{`Specialness'}\label{Sp} \subsection{`Specialness' and the `core map'} \ \ We refer to \cite{C01} for more details on this notion, to which the present section is an extremely sketchy introduction. Roughly speaking, special manifolds are higher-dimensional generalisations of rational and elliptic curves, thus `opposite' to manifolds of `general type' in the sense that they, and their finite \'etale covers, do not admit non-degenerate meromorphic maps to `orbifolds' of general type.
Many qualitative properties of rational or elliptic curves extend or are expected to extend to `special' manifolds, although they are much more general (see remark \ref{rspec}.(7) below). Let $X$ be a connected compact K\"ahler manifold. \begin{definition} Let $p>0$, and $L\subset \Omega_X^p$ be a saturated rank $1$ coherent subsheaf. We define: $$\kappa^{sat}(X,L):=\limsup_{m>0} \left\{\frac{\log(h^0(X,\overline{mL}))}{\log(m)} \right\},$$ where $H^0(X,\overline{mL})\subset H^0(X,(\Omega_X^p)^{\otimes m})$ is the subspace of sections taking values in $L_x^{\otimes m}\subset (\Omega_X^p)_x^{\otimes m}$ at the generic point $x$ of $X$. By a generalisation of Castelnuovo-De Franchis due to F. Bogomolov, $\kappa^{sat}(X,L)\leq p$, with equality if and only if $L=f^*(K_Y)$ at the generic point of $X$, for some meromorphic dominant map $f:X\dasharrow Y$, with $Y$ a compact $p$-dimensional manifold. We say that $L$ is a `Bogomolov sheaf' on $X$ if $\kappa^{sat}(X,L)=p>0$, and that $X$ is `special' if it has no Bogomolov sheaf. \end{definition} \begin{remark}\label{rspec} 1. A `special' manifold is `very weakly special' (ie: has no dominant meromorphic map $f:X\dasharrow Y$ onto a positive-dimensional manifold $Y$ of `general type'), since $L:=f^*(K_Y)^{sat}$ would provide a Bogomolov sheaf on $X$. In particular, $X$ is not of general type (ie: $\kappa(X):=\kappa(X,K_X)<dim(X))$. 2. `Specialness' is a bimeromorphic property. If $X$ is special, so is any $Y$ `dominated' by $X$ (ie: such that a dominant rational map $f:X\dasharrow Y$ exists). 3. If $X$ is special, and if $f: X'\to X$ is unramified finite, then $X'$ is special, too. The proof (see \cite{C01}) is surprisingly difficult. It shows that `specialness' implies `weak specialness', defined as follows: $X$ is weakly special if any of its unramified covers is `very weakly special', as defined in (1) above. 4. The notion of `weak specialness' looks natural, and is easy to define.
Unfortunately, it does not lead to any meaningful structure result, such as the one given by the core map, stated below. On the other hand, it is also too weak to characterise the vanishing of the Kobayashi pseudometric (see (10) below). 5. Geometrically speaking, a manifold $X$ is `special' if and only if it has no dominant rational map onto an `orbifold pair' $(Y,\Delta)$ of general type. We do not define these concepts here. See \cite{C01} and \cite{C11} for details. 6. Compact K\"ahler manifolds which are either rationally connected, or with $\kappa=0$ are special (see \cite{C01}). 7. For any $n>0$ and any $\kappa\in \{-\infty, 0, 1,\dots, (n-1)\}$, there exist special manifolds with $dim(X)=n$ and $\kappa(X)=\kappa$. See, more precisely, \cite{C01}, \S 6.5. 8. For curves, `special' is equivalent to `very weakly special', and also to: non-hyperbolic. For surfaces, `special' is equivalent to `weakly special', and also to: $\kappa<2$, jointly with $\pi_1(X)$ almost abelian. Thus `special' surfaces are exactly the ones with either: a. $\kappa=-\infty$ and $q\leq 1$, or: b. $\kappa=0$, or: c. $\kappa=1$, and $q(X')\leq 1$ for any finite \'etale cover $X'$ of $X$. 9. Another quite different characterisation of compact K\"ahler special surfaces $X$ is: $X$ is special if and only if it is $\Bbb C^2$-dominable (with the possible exception of non-elliptic K3 surfaces, which are special, but not known to be $\Bbb C^2$-dominable). One direction is essentially due to \cite{BL}. 10. When $n:=dim(X)\geq 3$, there exist $X$ which are `special', but not `weakly special' (see \cite{BT}), and no simple characterisation of specialness depending only on $\kappa$ and $\pi_1$ exists. Moreover, there are examples of weakly special varieties for which the Kobayashi pseudometric does not vanish identically (see \cite{CP}, \cite{CW}).
\end{remark} The central results concerning `specialness', which motivated its introduction, are the following two structure theorems (see \cite{C01} and \cite{C11} for definitions and details): \begin{theorem} For any compact K\"ahler manifold $X$, there exists a unique almost holomorphic meromorphic map with connected fibres $c_X:X\dasharrow C(X)$ such that: 1. Its general fibre is special, and: 2. Its orbifold base $(C(X), \Delta_{c_X})$ is of general type (and a point exactly when $X$ is special). The map $c_X$ is called the `core map' of $X$. It functorially `splits' $X$ into its parts of `opposite' geometries (special vs general type). \end{theorem} \begin{conjecture} For any $X$ as above, $c_X=(J\circ r)^n$, where $n:=dim(X)$. Here $J$ (resp. $r$) are orbifold versions of the Moishezon fibration and of the `rational quotient' respectively. In particular, special manifolds are then towers of fibrations with general fibres having either $\kappa=0$, or $\kappa_+=-\infty$. \end{conjecture} \begin{theorem} The preceding conjecture holds if the orbifold version of Iitaka's $C_{n,m}$-conjecture is true. \end{theorem} \begin{remark} The above two theorems extend naturally to the full orbifold category. \end{remark} The last (conditional) decomposition naturally leads (see \cite{C11}) to the following conjectures: \begin{conjecture}\label{cj} 1. If $X$ is special, $\pi_1(X)$ is almost abelian. 2. $X$ is special if and only if its Kobayashi pseudometric vanishes identically. 3. $X$ is special if and only if any two of its points can be connected by an entire curve (ie: the image of a holomorphic map from $\Bbb C$ to $X$). \end{conjecture} \subsection{Orbifold Kobayashi-Ochiai and Factorisation through the core map}\label{ss-core} The following orbifold version of the Kobayashi-Ochiai extension theorem will be crucial in the proof of our main result.
\begin{theorem}\label{koo} (\cite{C01}, Theorem 8.2) Let $X$ be a compact K\"ahler manifold, $c_X: X\dasharrow C(X)$ be its core map\footnote{Or, more generally, any map $f:X\to Y$ of general type in the sense of \cite{C01}.}, $M\subset \overline{M}$ be a non-empty Zariski open subset of the connected complex manifold $\overline{M}$, and $\varphi:M\to X$ be a meromorphic map such that $g:=c_X\circ \varphi: M\to C(X)$ is non-degenerate (ie: submersive at some point of $M$). Then $g$ extends meromorphically to $\overline{M}$. \end{theorem} Applying this result to $M:=\Bbb C^n\subset \overline{M}:=\Bbb P^n$, we obtain that a non-degenerate meromorphic map $\varphi: \Bbb C^n\to X$ can exist only if $X$ is special. This is an indication in the direction of conjecture \ref{cj} (2) above. \begin{theorem}\label{ftcm} Let $X, Z$ be complex projective manifolds and let $M$ be a smooth algebraic variety admitting a surjective algebraic map $\tau: M\to Z$ with all fibers affine spaces (isomorphic to $\Bbb C^k$). Let $G: M\dasharrow X$ be a meromorphic map, such that $g:=c_X\circ G:M\to C(X)$ is non-degenerate. Then $g$ also factorises through $\tau$ and the core map $c_Z:Z\dasharrow C(Z)$ (ie: $g=\varphi\circ c_Z\circ \tau$, for some $\varphi: C(Z)\dasharrow C(X)$). \end{theorem} \begin{proof} $M$ can be compactified to a compact smooth projective variety $\overline{M}$ by adding a hypersurface $D$ with normal crossings. By theorem \ref{koo} above, $g$ extends algebraically to $\bar g:\overline{M}\to C(X)$. Denote also by $\bar\tau: \overline{M}\to Z$ the extension of $\tau$ to $\overline{M}$. The orbifold base of the map $\bar g: \overline{M}\to C(X)$ is still $(C(X),\Delta_{c_X})$, and hence of general type in the sense of \cite{C01}, since it factorises through $X$ over $M$, and all the components of $D$ are mapped surjectively onto $C(X)$ (the fibres of $\tau$ being $\Bbb C^k$).
The fact that the core map $c_{\overline{M}}$ dominates every general type fibration on $\overline{M}$ now yields a map $c_{\bar g}: C(\overline{M})\to C(X)$ such that $\bar g=c_{\bar g}\circ c_{\overline{M}}$. The map $\bar\tau$ also induces a map $c_{\bar\tau}: C(\overline{M})\to C(Z)$ such that $c_{\bar\tau}\circ c_{\overline{M}}=c_Z\circ \bar\tau$. Because the fibres of $\bar\tau$ are rationally connected, the map $c_{\bar\tau}$ is an isomorphism, by \cite{C01}, Theorem 3.26. The composed map $\varphi:=c_{\bar g}\circ c_{\bar\tau}^{-1}:C(Z)\to C(X)$ provides the sought-after factorisation, since $\bar g=c_{\bar g}\circ c_{\overline{M}}=c_{\bar g}\circ c_{\bar\tau}^{-1}\circ c_Z\circ \bar\tau= \varphi\circ c_Z\circ \bar\tau$. \end{proof} \begin{remark} The conclusion still holds if we replace $c_X$ by any fibration with general type orbifold base, and only assume that the fibres of $\bar\tau$ are rationally connected manifolds, and that all components of $D$ are mapped surjectively onto $Z$ by $\bar\tau$. This follows from \cite{GHS}, and \cite{C01}, theorem 3.26. \end{remark} \section{Jouanoulou's trick} \subsection{Jouanoulou's trick} \begin{proposition}\label{jtrick} Let $X$ be a projective manifold. Then there exists a smooth affine complex variety $M$ and a surjective morphism $\tau:M\to X$ such that \begin{enumerate} \item $\tau:M\to X$ is a homotopy equivalence. \item Every fiber of $\tau$ is isomorphic to some ${\mathbb C}^n$. In particular, every fiber has vanishing Kobayashi pseudodistance. \item $\tau$ is a locally holomorphically trivial fiber bundle. \item $\tau$ admits a real-analytic section. \end{enumerate} \end{proposition} \begin{remark} This is known as `Jouanoulou's trick' (see \cite{J}).
This construction was brought into Oka theory in \cite{lar}, where the class $G$ of `good manifolds' is introduced, these being defined as the ones admitting a Stein affine bundle with fibre $\Bbb C^n$, for some $n$; it is observed there that this class contains Stein manifolds and quasi-projective manifolds, and is stable under various usual geometric operations. \end{remark} \begin{proof} We first treat the case of $X:=\Bbb P^N$, denoting by $\Bbb P^{N*}$ its dual projective space. Let $D\subset P:=\Bbb P^N\times \Bbb P^{N*}$ be the divisor consisting of pairs $(x,H)$ such that $x\in H$ (ie: the incidence graph of the universal family of hyperplanes of $\Bbb P^N$). This divisor $D$ is ample, since it intersects positively the two families of lines contained in the fibres of the two projections of $P$. Let $V$ be its complement in $P$. The projection $\tau_P$ on the first factor of $P$, restricted to $V$, satisfies the requirements for $X:=\Bbb P^N$. A real-analytic section is obtained by choosing a hermitian metric on $\Bbb C^{N+1}$, and sending a complex line to its orthogonal hyperplane. In the general case, first embed $X$ in some $\P_N$. Then let $M=\tau_P^{-1}(X)$ and let $\tau$ denote the restriction of $\tau_P$ to $M$. Now $M$ is a closed algebraic subset of $V$ and therefore likewise affine. Everything then restricts from $\Bbb P^N$ to $X$. \end{proof} Remark that, when $X=\Bbb P^1$, we recover the two-dimensional affine quadric as $M$ (and indeed, $\Bbb P^1$ is diffeomorphic to $S^2$). If $X$ is a projective curve, we may obtain a bundle $M\to X$ with the desired properties also in a different way: Let $Q_2=\Bbb P^1\times \Bbb P^1-D$, where $D$ is the diagonal. Taking the first projection, we get an affine bundle $Q_2\to \Bbb P^1$ with fibre $\Bbb C$; the total space $Q_2$ is an affine variety. Now we choose a finite morphism $f$ from $X$ to $\P_1$ and define $M\to X$ via base change.
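For concreteness, the fibre of $\tau_P$ over a point can be identified explicitly; the following routine verification (not in the original text) spells out why it is an affine space.

```latex
\[
\tau_P^{-1}(x)\;=\;\{H\in \Bbb P^{N*}\;:\; x\notin H\}.
\]
After a linear change of coordinates we may assume $x=[1:0:\dots:0]$. A hyperplane
$H=\{a_0z_0+\dots+a_Nz_N=0\}$ avoids $x$ exactly when $a_0\neq 0$, so
\[
\tau_P^{-1}(x)\;\cong\;\{[1:a_1:\dots:a_N]\}\;\cong\;\Bbb C^{N},
\]
the standard affine chart of $\Bbb P^{N*}$ centered at the hyperplane $\{z_0=0\}$.
```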
\begin{question} Given a complex manifold $Z$, does there exist a Stein manifold $S$ and a holomorphic map $f:S\to Z$ whose fibers are isomorphic to ${\mathbb C}^n$? Is this true at least when $Z$ is compact K\"ahler? \end{question} \section{Opposite complex structures and associated cohomological integrals} \subsection{Inverse images of forms under meromorphic maps} \begin{lemma}\label{pull-back-mero} Let $f:X\to Y$ be a dominant meromorphic map between compact complex manifolds, $\dim X=n$, with $I(f)\subset X$ being the indeterminacy set. For every $c\in H^{k,k}(Y)$ there exists a unique cohomology class $c'\in H^{k,k}(X)$ such that: \[ [\alpha].c'=\int_{X\setminus I(f)}\alpha\wedge f^*\beta \] for every closed smooth $(n-k,n-k)$-form $\alpha$ on $X$ and every closed smooth $(k,k)$-form $\beta$ with $[\beta]=c$. We define the inverse image of the De Rham cohomology class $c$ with respect to the meromorphic map $f$ by: $f^*(c):= c'$. \end{lemma} \begin{proof} Let $\tau:X'\to X$ be a blow up such that $f$ lifts to a holomorphic map $F:X'\to Y$. Using Poincar\'e duality, $F^*\beta$ may be identified with a linear form on $H^{n-k,n-k}(X')$. Restricting this linear form to $\tau^*H^{n-k,n-k}(X)$ and again using Poincar\'e duality, there is a unique cohomology class $c'$ such that: \[ [\alpha].c'=\int_{X'}\alpha\wedge F^*\beta. \] Furthermore \[ \int_{X'}\alpha\wedge F^*\beta=\int_{X\setminus I(f)}\alpha\wedge f^*\beta \] since $\alpha\wedge f^*\beta$ is a top degree form and both the exceptional divisor of the blow up and the indeterminacy set $I(f)$ of the meromorphic map $f$ are sets of measure zero. \end{proof} From the characterization of this inverse image, it is clear that it is compatible with composition of dominant meromorphic maps. It is also clear that it specializes to the usual pull-back if the meromorphic map under discussion happens to be holomorphic.
(Caveat: This inverse image gives linear maps between the cohomology groups, but (as can be seen by easy examples) it does not define a ring homomorphism between the cohomology rings.) \subsection{Opposite complex structures} Given a complex manifold $X$ we define the {\em opposite}, or {\em conjugate}, complex manifold (also called the {\em opposite complex structure} on $X_0$) as follows: If $X_0$ is the underlying real manifold and $J$ is the almost complex structure tensor of $X$, we define the opposite complex manifold as $X_0$ equipped with $-J$ as complex structure tensor. Recall that an almost complex structure is integrable if and only if the Nijenhuis tensor vanishes. This implies immediately that $(X_0,-J)$ is also a complex manifold (i.e.~$-J$ is an {\em integrable} almost complex structure). One can also argue directly, without the Newlander-Nirenberg theorem. Now consider the complex projective space $\P_n({\mathbb C})$. The map \[ [z_0:\ldots:z_n]\mapsto [\bar z_0:\ldots:\bar z_n] \] defines a biholomorphic map between $\P_n({\mathbb C})$ and its opposite. As a consequence, we deduce: If a complex manifold $X$ is projective, so is its opposite $\overline{X}$. Now assume $X$ admits a K\"ahler form $\omega$. Then the opposite complex manifold $\bar X$ is again a K\"ahler manifold. Indeed, since $\omega(v,w)=g(Jv,w)$ defines the K\"ahler form on a complex manifold admitting a Riemannian metric $g$ for which $J$ is an isometry, we see that $\bar X$ admits a K\"ahler metric with $-\omega$ as K\"ahler form. The same property applies if $g$ is, more generally, a hermitian metric on $X$, and $\omega$ its associated `K\"ahler' form, defined from $J, g$ by the formula above. {\em Orientation}. On a K\"ahler manifold $X$ with K\"ahler form $\omega$ the orientation is defined by imposing that $\omega^n$ is positively oriented where $n=\dim_{{\mathbb C}} X$. 
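The effect of replacing $J$ by $-J$ on the induced orientation can be illustrated numerically. The following sketch (an added illustration, not part of the text) builds oriented bases of the form $(x_1,Jx_1,\ldots,x_n,Jx_n)$ on $\mathbb R^{2n}$, whose orientation class is independent of the chosen $x_i$, and checks that the orientations induced by $J$ and $-J$ agree exactly when $n$ is even:

```python
import random

random.seed(1)

def det(m):
    # Laplace expansion along the first row; fine for the small matrices here
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def orientation_sign(n, flip):
    # standard complex structure J on R^{2n}: J e_{2k} = e_{2k+1},
    # J e_{2k+1} = -e_{2k}; 'flip' replaces J by -J
    s = -1 if flip else 1
    def J(v):
        w = [0.0] * (2 * n)
        for k in range(n):
            w[2 * k], w[2 * k + 1] = -s * v[2 * k + 1], s * v[2 * k]
        return w
    # an oriented basis (x_1, J x_1, ..., x_n, J x_n) from random vectors
    xs = [[random.gauss(0, 1) for _ in range(2 * n)] for _ in range(n)]
    cols = []
    for x in xs:
        cols.append(x)
        cols.append(J(x))
    mat = [[cols[j][i] for j in range(2 * n)] for i in range(2 * n)]
    return 1 if det(mat) > 0 else -1

# the two orientations differ by the factor (-1)^n
for n in (1, 2, 3):
    assert orientation_sign(n, False) * orientation_sign(n, True) == (-1) ** n
```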
This implies: If $X$ is a K\"ahler manifold and $\bar X$ is its opposite, the identity map of the underlying real manifold defines an orientation-preserving diffeomorphism if $n=\dim_{{\mathbb C}}(X)$ is even and an orientation-reversing one if $n$ is odd. \subsection{Inverse image of forms and opposite complex structures} \begin{lemma}\label{int} Let $X$ be an $n$-dimensional compact complex manifold, $\overline{X}$ its conjugate, and $\zeta: \overline{X}\to X$ a smooth map homotopic to the identity map $id_X$ of $X$. Let $c:X\dasharrow Y$ be a meromorphic map to a complex manifold $Y$. Let $c\circ \zeta=:\varphi: \overline{X}\to Y$. Let $\alpha$ be a $d$-closed smooth differential form of degree $2d$ on $Y$, and $\omega_X$ a smooth closed $(1,1)$-form on $X$; write $\omega_{\overline{X}}:=-\omega_X$, a smooth closed $(1,1)$-form on $\overline{X}$. Then: $I':=\int_{\overline X} \omega_{\overline X}^{n-d}\wedge \varphi^*(\alpha)=(-1)^{d}.\int_X \omega_X^{n-d}\wedge c^*(\alpha)=:(-1)^d.I$ \end{lemma} \begin{proof} From the above remarks on the orientations of $X$ and $\overline{X}$, and the fact that $id_X^*(\omega_X)=-\omega_{\overline{X}}$, we get: $I=(-1)^n\int_{\bar X} \omega_X^{n-d} \wedge c^*\alpha$. Since $\zeta$ is homotopic to $id_X$, and $c\circ \zeta=\varphi$, we get: $I=(-1)^n\int_{\bar X} \zeta^*(\omega_X^{n-d} \wedge c^*(\alpha))$ $= (-1)^n\int_{\bar X} \zeta^*(\omega_X^{n-d}) \wedge\varphi^*(\alpha)$ $=(-1)^n\int_{\bar X} (-1)^{n-d}\omega_{\bar X}^{n-d} \wedge \varphi^*(\alpha) $ $=(-1)^d\int_{\bar X} \omega_{\bar X}^{n-d} \wedge \varphi^*(\alpha)=(-1)^d. I'$ \end{proof} \begin{corollary} \label{cint} In the situation of the preceding Lemma \ref{int}, assume that $X$ is compact K\"ahler, that $dim(Y)>0$, and that $c: X\dasharrow Y$ is non-degenerate (i.e., dominant). Then $\varphi:=c\circ \zeta: \overline{X}\dasharrow Y$ is not meromorphic. \end{corollary} \begin{proof} Assume $\varphi$ is meromorphic. After suitable modifications, we may assume that $Y$ is K\"ahler. Let $\alpha:=\omega_Y$ be a K\"ahler form on $Y$. 
Choose $d=1$ in Lemma \ref{int}. Then $I:=\int_{ X} \omega_X^{n-1} \wedge c^*(\omega_Y)>0$. On the other hand, $I':=\int_{ \overline{X} } \omega_{\overline{X} }^{n-1} \wedge \varphi^*(\omega_Y)>0$. From Lemma \ref{int} we deduce $I'=-I$, a contradiction. \end{proof} \section{$h$-principle and Brody-hyperbolicity} \subsection{$h$-principle and weak $\Bbb C$-connectedness} \begin{proposition}\label{stein-sphere} For any $n>0$, the $n$-dimensional sphere $S^n$ is homotopic to the (complex) $n$-dimensional affine quadric $Q_n(\Bbb C)$ defined by the equation \[ Q_n=\left\{z=(z_0,\ldots,z_n)\in \Bbb C^{n+1}:\sum_k z_k^2=1\right\}. \] Any two points of $Q_n$ are connected by an algebraic $\Bbb C^*$, and so its Kobayashi pseudometric vanishes identically. \end{proposition} \begin{proof} Let $q$ be the standard non-degenerate quadratic form in $\Bbb R^{n+1}$. The set $Q_n(\Bbb R)$ of real points of $Q_n$ obviously coincides with $S^n$. An explicit real-analytic isomorphism $\rho:Q_n\to N_n$ with the real normal (i.e., orthogonal) bundle $N_n:=\{(x,y)\in S^n\times \Bbb R^{n+1}: q(x,y)=0\}$ of $S^n$ in $\Bbb R^{n+1}$ is given by: $\rho(z)=\rho(x+i.y):=(\lambda(z).x,\lambda(z).y)$, where $\lambda(z)^{-1}:=\sqrt{1+q(y,y)}$. The map $\rho$ is in particular a homotopy equivalence. The last assertion is obvious, since any complex affine plane in $\Bbb C^{n+1}$ intersects $Q_n$ either in a conic with one or two points deleted, or in a two-dimensional complex affine space. \end{proof} \begin{question} Let $Z$ be a connected differentiable manifold or a finite-dimensional $CW$-complex. Do there exist topological obstructions to the existence of a Stein manifold $S$ homotopic to $Z$ with vanishing Kobayashi pseudodistance? In particular, does there exist a Stein manifold with vanishing Kobayashi pseudodistance (e.g.\ ${\mathbb C}$-connected) and homotopic to a smooth connected projective curve of genus $g\geq 2$? 
\end{question} The main difficulty here is the condition on the Kobayashi pseudodistance. In fact, it is not too hard to give a positive answer in general if one drops the condition on the Kobayashi pseudodistance: \begin{proposition} Let $Z$ be a connected differentiable manifold or a finite-dimensional $CW$-complex (as always with countable base of topology). Then there exists a Stein manifold $M$ homotopic to $Z$. \end{proposition} \begin{proof} This is a known consequence of the classical characterisation of Stein spaces by H. Grauert (see \cite{F}, corollary 3.5.3, and the references there, for example). We give here a short proof, using a deep theorem of Eliashberg. If $Z$ is a $CW$-complex, we embed it into some ${\mathbb R}^n$. Then $Z$ is homotopic to some open neighborhood of $Z$ in ${\mathbb R}^n$. Since open subsets of ${\mathbb R}^n$ are manifolds, it thus suffices to deal with the case where $Z$ is a differentiable manifold. By taking a direct product with some ${\mathbb R}^k$, we may furthermore assume that $\dim_{{\mathbb R}}(Z)>2$. Let $\tau:M=T^*Z\to Z$ denote the cotangent bundle. Then $M$ carries a symplectic structure in a natural way and therefore admits an almost complex structure. Fixing a bundle metric $h$ on $M=T^*Z$ and choosing an exhausting Morse function $\rho$ on $Z$, we can use $p(v)=\rho(\tau(v))+h(v,v)$ as an exhausting Morse function on $M$. By construction the critical points of $p$ are all in the zero-section of the cotangent bundle of $Z$ and coincide with the critical points of $\rho$. Therefore there is no critical point of index larger than $\dim(Z)=\frac12\dim M$. By a result of Eliashberg (\cite{E}) it follows from the existence of such a Morse function and the existence of an almost complex structure that $M$ can be endowed with the structure of a Stein complex manifold. This completes the proof since $M$ is obviously homotopy equivalent to $Z$. 
\end{proof} \begin{theorem}\label{thpcc} Let $X$ be a complex space which fulfills the $h$-principle. Then $X$ is `weakly $\Bbb C$-connected'. \end{theorem} \begin{proof} Assume not. Since $hP(X)$ is preserved by passing to unramified coverings (see lemma~\ref{et}), we may assume that $X'=X$ in definition \ref{dcc}(3). Then there exists a holomorphic map $g:X\to Y$, with $Y$ Brody-hyperbolic, and such that the induced map on homotopy groups $\pi_k(g):\pi_k(X)\to \pi_k(Y)$ is non-zero for some $k>0$. Let $S^k$ denote the $k$-dimensional sphere, and let $f:S^k\to X$ be a continuous map such that $g\circ f:S^k\to Y$ defines a non-trivial element of $\pi_k(Y)$. Let $Q_k$ be the $k$-dimensional affine quadric, and let $\varphi:Q_k\to S^k$ be a continuous map which is a homotopy equivalence (its existence is due to proposition~\ref{stein-sphere}). Then $g\circ f\circ\varphi:Q_k\to Y$ is a continuous map which is not homotopic to a constant map. Since $Q_k$ is Stein and $X$ fulfills the $h$-principle, $f\circ\varphi$ is homotopic to a holomorphic map $h:Q_k\to X$, so that $g\circ f\circ\varphi$ is homotopic to the holomorphic map $g\circ h:Q_k\to Y$. But due to the Brody-hyperbolicity of $Y$, every holomorphic map from $Q_k$ to $Y$ must be constant, contradicting our initial assumption. \end{proof} Applying the preceding result to $Y:=X$, we get: \begin{corollary}\label{bhnhp} Let $X$ be a Brody-hyperbolic complex manifold. Then $X$ fulfills the $h$-principle if and only if it is contractible. \end{corollary} \begin{corollary} Let $X$ be a positive-dimensional compact complex Brody-hyperbolic manifold. Then $X$ does not fulfill the $h$-principle. \end{corollary} \begin{proof} Positive-dimensional compact manifolds are not contractible. \end{proof} In particular, compact Riemann surfaces of genus $g\ge 2$ do not fulfill the $h$-principle. \begin{remark} There exist holomorphic maps $f:X\to Y$ with $X$ and $Y$ both smooth and projective which are not homotopic to a constant map, although $\pi_k(f)=0$ for all $k>0$. For example, take a compact Riemann surface $X$ of genus $g\ge 2$ and let $f$ be any non-constant map to $\Bbb P^1$ (example suggested by F. Bogomolov). 
Therefore it is not clear whether the property ``weakly ${\mathbb C}$-connected'' implies that every holomorphic map to a Brody-hyperbolic complex space must be homotopic to a constant map. The following theorem \ref{tphpwcc} solves this issue in the projective case, assuming the $h$-principle. \end{remark} \subsection{Projective Brody-hyperbolic quotients} \begin{theorem}\label{tphpwcc} Let $X$ be an irreducible projective complex space satisfying the $h$-principle. Let $f:X\dasharrow Y$ be a meromorphic map to a Brody-hyperbolic K\"ahler manifold $Y$. Assume that $f$ is holomorphic or that $X$ is smooth. Then $f$ is constant. \end{theorem} \begin{proof} For every meromorphic map $f:X\dasharrow Y$ there exists a proper modification $\hat X\to X$ such that $f$ can be lifted to a holomorphic map defined on $\hat X$. If $X$ is smooth, this modification can be obtained by blowing up smooth centers, implying that the fibers of $\hat X\to X$ are rational. Since $Y$ is Brody-hyperbolic, holomorphic maps from rational varieties to $Y$ are constant. Hence $X$ being smooth implies that $f$ is already holomorphic. Thus in any case, we may assume that $f$ is holomorphic. Because $X$ is projective, we may find a compact complex curve $C$ on $X$ such that $f|_C$ is non-constant. Let $\bar C$ be $C$ equipped with its conjugate (i.e., opposite) complex structure, and $j:\bar C\to C$ be the set-theoretic identity map. Let $\tau:E\to \bar C$ be a holomorphic affine ${\mathbb C}$-bundle as given by proposition~\ref{jtrick}. Since $X$ is assumed to fulfill the $h$-principle, the continuous map $j\circ \tau: E\to X$ is homotopic to a holomorphic map $h:E\to X$. Because $Y$ is Brody-hyperbolic, the map $f\circ h:E\to Y$ is constant along the fibres of $\tau$. Hence $f\circ h$ is equal to $\varphi\circ \tau$ for a holomorphic map $\varphi: \bar C\to Y$. 
Observe that $\varphi,f\circ j : \bar C\to Y$ are homotopic to each other, but the first map is holomorphic while the latter is antiholomorphic. This is a contradiction, because now \[ 0 < \int_{\bar C}\varphi^*\omega = \int_{\bar C}(f\circ j)^*\omega < 0 \] for any K\"ahler form $\omega$ on $Y$. \end{proof} \section{$h$-principle implies specialness for projective manifolds} \begin{theorem}\label{hps} Let $X$ be a complex projective manifold. If $X$ fulfills the $h$-principle, then $X$ is special in the sense of \cite{C01}. \end{theorem} \begin{proof} Let $\bar X$ denote the underlying real manifold equipped with the opposite complex structure and let $id_X: \bar X\to X$ denote the antiholomorphic diffeomorphism induced by the identity map of this underlying real manifold. Recall that $\bar X$ is projective, too. Hence we can find a Stein manifold $M$ together with a holomorphic fiber bundle $\tau:M\to\bar X$ with some ${\mathbb C}^k$ as fiber (proposition~\ref{jtrick}). Let $\sigma:\bar X\to M$ denote a smooth (real-analytic, for example) section (whose existence is guaranteed by proposition~\ref{jtrick}). Since we assumed that $X$ fulfills the $h$-principle, there must exist a holomorphic map $h:M\to X$ homotopic to $id_X\circ\tau$. Define $\zeta:=h\circ \sigma: \bar X\to X$. Thus: $\zeta$ is homotopic to $id_X$. Let $c:X\dasharrow C$ be the core map of $X$. We assume that $X$ is not special, i.e., that $d:=\dim(C)>0$. Let also: $n=\dim X$. We claim that $c\circ \zeta:\bar X\dasharrow C$ is non-degenerate, and thus, that so is $g:=c\circ h:M\dasharrow C$. Indeed, let $\omega_C$ (resp.\ $\omega_X$) be a K\"ahler form on $C$ (resp.\ on $X$). Then $I:=\int_X\omega_X^{n-d}\wedge c^*(\omega_C^d)>0$. By lemma \ref{int}, applied with $\alpha:=\omega_C^d$, we have: $I':=\int_{\bar X} \omega_{\bar X}^{n-d}\wedge (c\circ\zeta)^*(\omega_C^d)=(-1)^d.I\neq 0$. This implies that $(c\circ\zeta)^*(\omega_C^d)\neq 0$, and so that the image of $c\circ \zeta$ is not of measure zero. 
By Sard's theorem, this implies that $c\circ \zeta$ is non-degenerate, and thus so is $c\circ h$. We consider the meromorphic map $g:=c\circ h:M\dasharrow C$. By theorem \ref{ftcm}, it follows that we obtain an induced meromorphic map $\varphi:\bar X\dasharrow C$ such that $\varphi\circ\tau= g$, and thus such that: $\varphi=\varphi\circ \tau\circ \sigma=c\circ h\circ \sigma=c\circ \zeta$. We now consider the integral $J:=\int_X \omega_X^{n-1}\wedge c^*(\omega_C)$. Thus $J>0$. From corollary \ref{cint} we get a contradiction. Hence $X$ cannot fulfill the $h$-principle, unless $\dim(C)=0$, i.e. unless $X$ is special. \end{proof} A consequence of theorem \ref{hps} and conjecture \ref{cj} is the following homotopy restriction for the $h$-principle to hold: \begin{conjecture}\label{ab} If $X$ is a complex projective manifold satisfying the $h$-principle, then $\pi_1(X)$ is almost abelian. \end{conjecture} Notice that this conjecture is true if $\pi_1(X)$ has a faithful linear representation in some $Gl(N,\Bbb C)$, or is solvable, by \cite{C11}, and \cite{C10} respectively. The above result on projective manifolds raises the following questions. \begin{question} \begin{enumerate} \item Are compact K\"ahler manifolds satisfying the $h$-principle special? This is true, at least, for compact K\"ahler surfaces (see theorem \ref{hpws} and its corollary below). \item Let $X$ be a quasi-projective manifold satisfying the $h$-principle. Assume that $X$ is not homotopy-equivalent to any proper subvariety $Z\subset X$. Does it follow that $X$ is special? \end{enumerate} \end{question} We have some partial results towards answering these questions. \begin{theorem}\label{hpws} Let $X$ be a compact K\"ahler manifold satisfying the $h$-principle. Then the Albanese map of $X$ is surjective. \end{theorem} \begin{proof} The proof of theorem \ref{ndqa} applies. \end{proof} \begin{corollary} Let $X$ be a compact K\"ahler surface satisfying the $h$-principle. Then $X$ is special. 
\end{corollary} \begin{proof} Assume not. Then $X$ is in particular not weakly special. Since $X$ is not of general type, by theorem \ref{hps}, there exists a finite \'etale cover $\pi:X'\to X$ and a surjective holomorphic map $f:X'\to C$ onto a curve $C$ of general type. Because $X'$ also satisfies the $h$-principle, by Lemma \ref{et} below, this contradicts theorem \ref{hpws}. \end{proof} \begin{lemma}\label{et} Let $\pi:X'\to X$ be an unramified covering between complex spaces. If $X$ fulfills the $h$-principle, so does $X'$. \end{lemma} \begin{proof} Let $f:S\to X'$ be a continuous map from a Stein space $S$. By assumption, there is a holomorphic map $g:S\to X$ homotopic to $\pi\circ f$. The homotopy lifting property for coverings implies that $g$ can be lifted to a holomorphic map $G:S\to X'$ which is homotopic to $f$. \end{proof} \section{Necessary conditions on the Quasi-Albanese map}\label{QAm} We give two necessary conditions, bearing on the quasi-Albanese map, for a quasi-projective manifold $X$ to satisfy the $h$-principle. These conditions are necessary for $X$ to be special. \begin{theorem}\label{ndqa} Let $X$ be a complex quasi-projective manifold for which the quasi-Albanese map is not dominant. Then $X$ does not satisfy the $h$-principle. \end{theorem} \begin{proof} Let $A$ be the quasi-Albanese variety of $X$ and let $Z$ denote the closure of the image of $X$ under the quasi-Albanese map $a:X\to A$. We may assume $e_A\in Z$. By the theorem of Kawamata (\cite{K}), there are finitely many subtori $T_i\subset A$ and $T_i$-orbits $S_i\subset A$ such that $S_i\subset Z$ and such that every translated subtorus of $A$ which is contained in $Z$ must already be contained in one of the $S_i$. Due to lemma~\ref{lemx} below, there is an element $\gamma_0\in\pi_1(A)$ which is not contained in any of the $\pi_1(S_i)$. By the functoriality properties of the Albanese map, the group homomorphism $\pi_1(X)\to\pi_1(A)$ is surjective. 
Thus we can lift $\gamma_0$ to an element $\gamma\in\pi_1(X)$. Let us now assume that the $h$-principle holds. In this case there must exist a holomorphic map $f$ from ${\mathbb C}^*$ to $X$ inducing $\gamma$. By composition we obtain a holomorphic map \[ F=a\circ f \circ\exp:{\mathbb C}\to Z\subset A \] Now Noguchi's logarithmic version of the theorem of Bloch-Ochiai implies that the analytic Zariski closure of $F({\mathbb C})$ in $Z$ is a translated sub-semitorus of $A$. Therefore $F({\mathbb C})$ must be contained in one of the $S_i$. But this implies \[ (a\circ f)_*\left(\pi_1({\mathbb C}^*)\right)\subset\pi_1(S_i) \] which contradicts our choice of $\gamma$. \end{proof} \begin{lemma}\label{lemx} Let $\Gamma_1,\ldots,\Gamma_k$ be a family of subgroups of $G={\mathbb Z}^n$ with $rank_{{\mathbb Z}}\Gamma_i<n$. Then $\bigcup_i\Gamma_i\ne G$. \end{lemma} \begin{proof} For a subgroup $H\subset G\subset{\mathbb R}^n$ let $N(H,r)$ denote the number of elements $x\in H$ with $||x||\le r$. Then $N(H,r)=O(r^d)$ if $d$ is the rank of the ${\mathbb Z}$-module $H$. Now $N(\Gamma_i,r)=O(r^{n-1})$, while $N(G,r)$ grows like $r^n$. This implies the statement. \end{proof} We find again: \begin{corollary} Let $X$ be an algebraic variety which admits a surjective morphism onto an algebraic curve $C$. If $C$ is hyperbolic, then $X$ does not fulfill the $h$-principle. \end{corollary} \begin{proof} Let $A$ resp.~$J$ denote the quasi-Albanese variety of $X$ resp.~$C$. By functoriality of the quasi-Albanese, we have a commutative diagram relating $X\to C$, the two quasi-Albanese maps, and the induced morphism $A\to J$. Since $\dim(J)>\dim(C)$ due to hyperbolicity of $C$, the quasi-Albanese map $X\to A$ cannot be dominant. \end{proof} By similar reasoning, using \cite{NWY}: \begin{proposition}\label{niqa} Let $X$ be a quasi-projective manifold which admits a finite map onto a semi-abelian variety. Then $X$ fulfills the $h$-principle only if $X$ is a semi-abelian variety. 
\end{proposition} \section{(Counter-)examples}\label{CE} We now present some examples showing that the desired implications ``special $\implies$ $h$-principle'' and ``${\mathbb C}$-connected $\implies$ special'' certainly do not hold without imposing some normality and algebraicity/K\"ahler condition on the manifold in question. \begin{example} There is a non-normal projective curve $X$ which is rational and ${\mathbb C}$-connected, but does not fulfill the $h$-principle. We start with $\hat X=\P_1$ and define $X$ by identifying $0$ and $\infty$ in $\hat X={\mathbb C}\cup\{\infty\}$. Via the map $[x_0:x_1]\mapsto[ x_0^3+x_1^3:x_0^2x_1:x_0x_1^2]$ the quotient space $X$ can be realized as \[ X\simeq\{[z_0:z_1:z_2]: z_0z_1z_2=z_1^3 + z_2^3\}. \] Let $\tilde X$ denote the universal covering of $X$. Then $\tilde X$ consists of countably infinitely many $2$-spheres glued together. By Hurewicz, $\pi_2(\tilde X)\simeq H_2(\tilde X,{\mathbb Z})\simeq{\mathbb Z}^\infty$. The long homotopy sequence associated to the covering map implies $\pi_2(X)\simeq{\mathbb Z}^\infty$. As a consequence the group homomorphism \[ {\mathbb Z}\simeq \pi_2(\hat X)\ \longrightarrow\ \pi_2(X)\simeq{\mathbb Z}^\infty \] induced by the natural projection $\pi:\hat X\to X$ is not surjective. Now let $Q$ denote the two-dimensional affine quadric. Note that $Q$ is a Stein manifold which is homotopic to the $2$-sphere. Because $\pi_2(\hat X) \to \pi_2(X)$ is not surjective, there exists a continuous map $f:Q\to X$ which cannot be lifted to a continuous map from $Q$ to $\hat X$. On the other hand, every holomorphic map from the complex manifold $Q$ to $X$ can be lifted to $\hat X$, because $\hat X$ is the normalization of $X$. Therefore there exists a continuous map from $Q$ to $X$ which is not homotopic to any holomorphic map. Thus $X$ does not fulfill the $h$-principle. 
\end{example} \begin{example} There are non-K\"ahler compact surfaces, namely Inoue surfaces, which do not fulfill the $h$-principle, although they are special. These Inoue surfaces are compact complex surfaces of algebraic dimension zero with $\Delta\times{\mathbb C}$ as universal covering and which are foliated by complex lines. They are special (meaning that they satisfy definition 2.1, but the term `special' is, strictly speaking, reserved for the compact K\"ahler case), because due to algebraic dimension zero there are no Bogomolov sheaves. On the other hand, every holomorphic map from ${\mathbb C}^*$ to such a surface has its image contained in one of those leaves. This implies that there are many group homomorphisms from ${\mathbb Z}$ to the fundamental group of the surface which are not induced by holomorphic maps from ${\mathbb C}^*$. For this reason Inoue surfaces do not fulfill the $h$-principle. \end{example} \begin{example} There is a non-compact complex manifold which is ${\mathbb C}$-connected, but does not satisfy the $h$-principle. Due to Rosay and Rudin (\cite{RR}) there exists a discrete subset $S\subset{\mathbb C}^2$ such that $F({\mathbb C}^2)\cap S\ne\emptyset$ for any non-degenerate holomorphic map $F:{\mathbb C}^2\to{\mathbb C}^2$. (Here $F$ is called non-degenerate iff there is a point $p$ with $rank(DF)_p=2$.) Let $X={\mathbb C}^2\setminus S$. Due to the discreteness of $S$ it is easy to show that $X$ is ${\mathbb C}$-connected. Now let $G=SL_2({\mathbb C})$. Then $G$ is a Stein manifold which is homotopic to $S^3$. Let $p\in SL_2({\mathbb C})$ and $v,w\in T_pG$. Using the exponential map, there is a holomorphic map from ${\mathbb C}^2$ to $G$ through $p$ whose differential contains $v$ and $w$ in its image. From this it follows easily that for every holomorphic map $F:G\to X$ and every $p\in G$ we have $rank(DF)_p\le 1$. Hence $F^*\omega\equiv 0$ for every $3$-form $\omega$ on $X$ and every holomorphic map $F:G\to X$. 
This implies that for every holomorphic map $F:G\to X$ the induced map $F^*:H^3(X,{\mathbb R})\to H^3(G,{\mathbb R})$ is trivial. On the other hand there are continuous maps $f:S^3\to X$ for which $f^*:H^3(X,{\mathbb C})\to H^3(S^3,{\mathbb C})$ is non-zero: Choose $p\in S$. Since $S$ is countable, there is a number $r>0$ such that $||p-q||\ne r\ \forall q\in S$. Then $f:v\mapsto p+rv$ defines a continuous map from $S^3=\{v\in{\mathbb C}^2:||v||=1\}$ to $X$ which induces a non-zero homomorphism $f^*:H^3(X,{\mathbb C})\to H^3(S^3,{\mathbb C})$. As a consequence, $X$ does not fulfill the $h$-principle. \end{example} \section{``special'' $\implies$ $h$-principle ?}\label{EO} We consider the question: if $X$ is projective, smooth and special, does it satisfy the $h$-principle? The question is very much open, even in dimension $2$. For projective curves, we have the equivalence: the $h$-principle is satisfied if and only if the curve is special. The projective surfaces known to satisfy the $h$-principle are the following ones: the rational surfaces, the minimal surfaces ruled over an elliptic curve, the blown-up Abelian surfaces and their \'etale quotients, termed `bielliptic'. This means that the special projective surfaces not known to satisfy the $h$-principle are, on the one hand, the blown-up K3 and Enriques surfaces, and on the other hand the blow-ups of surfaces with $\kappa=1$, which are elliptic fibrations over either: \begin{enumerate} \item an elliptic base, and without multiple fibre, or: \item a rational base, and with at most $4$ multiple fibres, the sum of the inverses of the multiplicities being at least $2$ (resp. $1$) if there are $4$ (resp. $3$) multiple fibres. \end{enumerate} In higher dimension (even $3$), essentially nothing is known. In particular, the cases of Fano, rationally connected, and even rational manifolds (for example: $\Bbb P^3$ blown-up along a smooth curve of degree $3$ or more) remain open. 
For $n$-dimensional Fano or rationally connected manifolds, $n\geq 3$, even the existence of a non-degenerate meromorphic map from $\Bbb C^n$ to $X$ is open. The non-existence of such a map would contradict the Oka property (see the definition below). In case such a map exists, nothing is known about the unirationality of $X$ (see \cite{U}, and \cite{C01}, for example). Let us first remark that satisfaction of the $h$-principle is not known to be preserved by many standard geometric operations preserving specialness. In particular, this concerns: \begin{enumerate} \item Smooth blow-ups and blow-downs. \item For (finite) \'etale coverings only one direction is known (cf.~X). \end{enumerate} Except for trivial cases it is very hard to verify the $h$-principle directly. The most important method for verifying the $h$-principle is Gromov's theorem that the $h$-principle is satisfied by `elliptic manifolds'. In the terminology of M. Gromov, ``ellipticity'' means the existence of a holomorphic vector bundle $p:E\to X$ with zero section $z:X\to E$, and a holomorphic map $s: E\to X$ (a `spray') such that $s\circ z:X\to X$ is the identity map, and such that the derivative $ds:VE\to TX$ is surjective along the zero section $z(X)\subset E$, where $VE\subset TE$ is the kernel of the derivative $dp: TE\to TX$ (the vertical tangent bundle of $E$). Homogeneous complex manifolds (e.g.~$\P_n$, Grassmannians, tori) are examples of elliptic manifolds. Complements ${\mathbb C}^n\setminus A$ of algebraic subvarieties $A$ of codimension at least two are also known to be elliptic. For a complex manifold $X$, being elliptic also implies that $X$ is `Oka', i.e.: every holomorphic map $h:K\to X$ on a compact convex subset $K$ of $\Bbb C^n$ can be uniformly approximated to any precision by holomorphic maps $H:\Bbb C^n\to X$. Forstneric's theorems (\cite{F}) show that Oka manifolds satisfy stronger approximation properties. All known examples of Oka manifolds are subelliptic, a slight weakening of ellipticity. 
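As a concrete instance of Gromov's definition, recall the classical dominating spray on the homogeneous manifold $X=\mathbb C^*$: take the trivial bundle $E=X\times\mathbb C$ and $s(x,t)=x\,e^t$. The following numerical sketch (an added illustration, not from the text) checks the two defining properties, $s(x,0)=x$ and surjectivity of the vertical derivative $\partial s/\partial t|_{t=0}=x\neq 0$, at sample points:

```python
import cmath
import random

random.seed(2)

def spray(x, t):
    # classical dominating spray on C^*: E = C^* x C, s(x, t) = x * exp(t)
    return x * cmath.exp(t)

for _ in range(100):
    x = complex(random.gauss(0, 1), random.gauss(0, 1))
    if abs(x) < 1e-6:
        continue
    # restricted to the zero section, the spray is the identity: s(x, 0) = x
    assert abs(spray(x, 0) - x) < 1e-12
    # the vertical derivative d/dt s(x, t) at t = 0 equals x != 0,
    # so ds maps the vertical direction onto T_x C^*  (dominance)
    h = 1e-6
    dv = (spray(x, h) - spray(x, -h)) / (2 * h)
    assert abs(dv - x) < 1e-4
```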
We refer to \cite{G}, \cite{F}, and \cite{FL} for more details and generalisations of these statements. See also \cite{L} for an interpretation of the Oka property in terms of `Model structures'. We have thus the following sequence of implications (the first two being always valid, the last for projective manifolds): \[ \text{elliptic} \Rightarrow \text{Oka} \Rightarrow \text{$h$-principle} \Rightarrow \text{special} \] Although the notions `Oka' and `$h$-principle satisfied' differ in general (for example the unit disc is evidently not Oka, but satisfies the $h$-principle, because it is contractible), one may ask: \begin{question} Is any projective manifold satisfying the $h$-principle Oka? \end{question}
\section{Introduction} Cooperative relaying is traditionally seen as a physical layer scheme for analyzing and designing wireless link layer protocols \cite{Sendonaris2003}, with limited network-layer insights originating from such schemes. Indeed, the not-so-uncommon perception is: whatever be the physical layer transmission/coding scheme, the network can abstract it into a ``rate region'' and then determine algorithms to stabilize queues, perform rate control and other tasks at the higher layers. From this perspective, it seems unimportant for researchers at either layer to learn much about the intricacies of the other. There is a significant and growing body of work suggesting that such abstractions may not be accurate \cite{Dong2008} and that physical layer parameters must be included into the analysis. A large class of this work is based on signal-to-noise ratio (SNR) or signal-to-interference-and-noise ratio (SINR) models for the physical medium. While S(I)NR is a worthwhile abstraction for physical-layer schemes that ``treat interference as noise'', it is often overused and does not capture more involved physical layer transmission schemes \cite{Ephremides2010}. From information theory, it is well known that ``treating interference as noise'' represents a very limited class of transmission schemes, and a much larger class of schemes exist that achieve significantly higher throughput. Therefore, a framework that brings the information-theoretic coding scheme together with network-stability analysis is needed, to bridge the gap caused by the ``unconsummated union'' \cite{Ephremides1998}. In this paper, we explore building this bridge in the context of cooperative relay networks. 
We emphasize that a natural separation between network stability and physical layer coding exists only for specific classes of networks (such as capacitated networks \cite{Bodas2007}) and not in general, and a joint framework is needed that can capture notions such as physical layer cooperation. In this paper, we focus on cooperative relay networks, where multiple reasons exist for combining network and physical layer aspects. \begin{itemize} \item First, the rate-maximizing physical-layer coding strategy automatically imposes scheduling restrictions on the relays/transmitters in the network. For coherent combination at the receivers to be at all possible, all nodes involved must transmit simultaneously in that block. \item Second, it is codebooks and functions of codebooks that are received, stored and transmitted by nodes, not traditional data packets. \item Finally, the codebook chosen by the source(s) determines the rate of transmission, which may or may not be alterable at intermediate nodes (this is a key distinction between general information-theoretic coding theorems and say, packetized or linear network coded systems where rate can always be varied at every node). For example, if a relay were to use amplify-and-forward or compress-and-forward as its physical-layer strategy, it has no control over rate and has a real vector as its ``packet''. \end{itemize} Given the need for a joint physical and network layer framework for cooperative networks, the rest of the paper is organized as follows: in the next section, we present a brief summary of cooperative relay networking from a physical layer perspective. In Section \ref{sec:mainresults}, we present our main results in this paper. In Section \ref{sec:sysmodel}, we describe our system model in the context of heterogeneous cellular networks. 
In Section \ref{sec:rate}, we describe cooperative schemes for such networks in detail and present a queue-architecture that enables both efficient and optimal operation of the network. In Section \ref{sec:main}, we present the main algorithm for operating such networks, and establish that this algorithm is throughput-optimal. We conclude with Section \ref{sec:conclude}. \section{Background: Cooperative Relay Networks} \label{sec:cooprelay} Cooperative relay networks have been researched extensively since the ``MIMO effect'' was established. Until recently, it was considered hard if not impractical for nodes to coordinate transmissions to enable cooperative relaying. However, emerging heterogeneous cellular networks are increasingly moving in the direction of standardizing and evaluating schemes with node cooperation \cite{3GPP,Sawahashi2010}. As cell sizes decrease, an increase in cell edges and interference requires node cooperation to increase throughput, and cooperative relaying is an important step in making this happen. \begin{figure} \centering \scalebox{0.6}{\input{./two_hop_network.pstex_t}} \caption{Two-hop Cooperative Network} \label{fig:two_hop} \end{figure} Figure \ref{fig:two_hop} shows the most basic configuration that incorporates cooperative relaying in heterogeneous cellular networks. To motivate this setting, we take the example of a macrocellular network. Here, the source node $s$ corresponds to the macro-cell base-station, the relay nodes $r_1$ and $r_2$ correspond to pico-cell base-stations and the destination nodes $d_1$, $d_2$ and $d_3$ correspond to mobiles. We focus on the downlink scenario where the source $s$ has independent messages/bits for the mobiles. The relays' role is to help the source in transmitting these messages. Further, we assume a half-duplex cooperative constraint so that either the first-hop or the second-hop links can be activated at any given time, with no direct-links from the source to the destinations. 
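For intuition on the half-duplex constraint in the two-hop configuration above, here is an illustrative back-of-envelope computation (not the cooperative scheme analyzed in this paper, and with hypothetical link rates): if the first and second hops support rates $C_1$ and $C_2$ when active, and the network time-shares between them with fractions $\theta$ and $1-\theta$, the end-to-end throughput is $\min(\theta C_1,(1-\theta)C_2)$, maximized at $\theta^*=C_2/(C_1+C_2)$, giving $C_1C_2/(C_1+C_2)$.

```python
def two_hop_throughput(c1, c2, theta):
    # half-duplex time sharing: hop 1 active a fraction theta of the time,
    # hop 2 the remaining fraction; throughput is limited by the slower hop
    return min(theta * c1, (1 - theta) * c2)

def best_split(c1, c2):
    # equalizing theta*c1 = (1 - theta)*c2 gives the optimal split
    return c2 / (c1 + c2)

c1, c2 = 3.0, 1.0
theta_star = best_split(c1, c2)                   # 0.25
opt = two_hop_throughput(c1, c2, theta_star)      # 0.75 = c1*c2/(c1+c2)
# a grid search confirms no other split does better
grid_best = max(two_hop_throughput(c1, c2, k / 1000) for k in range(1001))
assert grid_best <= opt + 1e-9
```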
A more general and detailed system model for such cooperative relay networks is provided in Section \ref{sec:sysmodel}. Even for such simple networks with two relays and one destination and fixed channels, information-theoretic capacity is not yet known. However, there has been significant progress in developing cooperative communication schemes for such systems by using coherence and physical-layer coordination among nodes. There are multiple strategies studied in the literature that enable this coordination, referred to as {\em forwarding} schemes. One such scheme of interest is the so-called decode-and-forward scheme that requires relays to decode messages. In contrast to traditional networks, the relays decode common messages, which are then transmitted cooperatively. However, the relays still have decoded messages or packets as in traditional networks. In \cite{J:YB07}, the authors develop a throughput-optimal network algorithm that can handle common messages. In \cite{C:YB07}, the authors consider more general network configurations, but the applicability is still limited to decode-and-forward schemes with fixed channels. In essence, all of these apply only in packet-in-packet-out networks. Complementary to this is the work on optimal resource allocation for non-cooperative wireless networks \cite{J:TE92,Lin_Shroff_Srikant_06,B:GNT06} (and references therein). In our effort, we do not want to constrain the system to a packet-in-packet-out framework. We desire that the relays use {\em any} information-theoretic cooperative coding strategy of their choice, be it amplify/compress/quantize or any alter-and-forward scheme. This couples coding, resource allocation and stability into one joint problem, and the analyses in \cite{J:YB07,J:TE92,B:GNT06} and the vast literature on non-cooperative networks do not apply. Even the analyses in \cite{J:YB07,C:YB07} for decode-and-forward cooperative networks do not apply. 
This motivates the need for a new framework and stability analysis. Before proceeding to describe our results, a note to state the obvious: if the channel state is fixed and thus its capacity is precomputed, a simple static split scheme will ensure stable operation while maximizing the information theoretic rate (region) for the network. The challenge, of course, is when the fading state distribution and input arrival rates are unknown, and the fading state can only be observed causally. For example, consider a fading channel with block fading of $T$ symbols each. When $T$ is much smaller compared to the packet duration (or equivalently the channel-coding duration), queueing/buffering of packets at relays is not required as the first-hop and second-hop can be operated sequentially without reducing data-rates. When $T$ is comparable to (or larger than) the packet duration, queueing of packets at relays can provide significant gains in terms of data-rates. Furthermore, when $T$ is roughly the same as the packet duration, queueing at relays is inevitable as the source does not know the fading state of the second-hop while encoding the packet. In this paper, we focus on the second scenario when $T$ is larger than the packet/codeword duration. Given that the channel distribution is unknown and the fading state is only known causally, we ask the question: Is it possible to stabilize the network while operating it close to the boundary of its information-theoretic rate region? \section{Main Results} \label{sec:mainresults} The answer to the preceding question in Section \ref{sec:cooprelay} is ``yes", which is proved for a simpler network with two relays and one destination in \cite{Jose-Ying-Vishwanath-09}. In this setting, for cooperative schemes such as amplify/quantize-and-forward and partial-decode-and-forward, the relays receive and transmit real-valued ``packets''. 
In order to accomplish this in \cite{Jose-Ying-Vishwanath-09}, we introduce a new ``state-based'' virtual-queue-architecture for these real-valued ``packets'', and develop a throughput-optimal network algorithm that does not require the knowledge of the fading distribution. Each ``state'' corresponds to a vector comprising the {\em entire} channel-state of each link in the network. This approach, although analytically very helpful, suffers from a major issue that makes it practically uninteresting: requiring that a virtual-queue be maintained for each channel-state at each node in the network leads to an explosion of queues, even for simple network configurations. Moreover, the approach in \cite{Jose-Ying-Vishwanath-09} is particular to the single-destination setting. In this paper, we develop a simpler queue-architecture to enable stable operation of cooperative relay networks. Further, we generalize it to any forwarding scheme with multiple destinations. The virtual-queue-architecture we introduce in this paper is primarily {\em encoding-based}. This architecture is motivated by the manner in which adaptive modulation and coding is currently implemented in practice. In systems today, the source node implements a limited number of encoding schemes (encoding functions and rate-vectors). Each encoding scheme is designed so that it can be successfully employed for a particular subset of states. Even though encoding schemes belong to a finite (and usually small) set, the mapping functions at the relays and the decoding functions at the destinations are usually state-dependent. A queue-architecture that keeps virtual queues at the relays for each state corresponding to the first-hop and each encoding scheme is sufficient. 
This considerably reduces the number of virtual queues that must be maintained while still remaining a ``sufficient statistic", i.e., these encoding-based queues are a sufficiently rich representation for us to develop throughput optimal algorithms using them. Using this new and somewhat intuitive virtual-queue-architecture, we develop a network algorithm that has the following properties. \begin{enumerate} \item It does not require the knowledge of the fading distribution. \item It does not require the knowledge of the arrival rates. \item It keeps all the queues stable for any arrival rate-vector within the throughput region, i.e., it is throughput-optimal. \end{enumerate} Note that limiting ourselves to a small set of possible encoding schemes and rates inherently reduces the network's information-theoretic rate region. The more fine-grained the encoding schemes and resulting queue-architecture, the smaller the loss in rate region. However, note that the encoding-based queue-architecture itself does not introduce any sub-optimality. In summary, we introduce and study a new encoding-based queue-architecture, which is inspired by an adaptive coded modulation system analyzed and implemented at the physical layer in systems today. However, in today's systems, there is limited interaction, if any, between network-layer algorithms and adaptive coding/modulation, and we argue that coupling them together can be very useful in both the analysis and design of cooperative relay networks. Indeed, we show that such a queuing architecture can result in throughput optimal algorithms, and the network can achieve its information-theoretic rate region corresponding to its choice of encoding/decoding strategies while maintaining stability. \section{System Model} \label{sec:sysmodel} We consider discrete-time two-hop cooperative networks that include the network shown in Figure \ref{fig:two_hop}. 
We allow for an arbitrary number of relays and destinations, i.e., the network consists of a source node denoted by $s$, $N$ relay nodes denoted by $r_1, r_2, \ldots, r_N$, and $K$ destination nodes denoted by $d_1, d_2, \ldots, d_K$. The source has independent messages for all the destinations. The relays aid in transmitting these messages to their respective destinations. Throughout this paper, ``first-hop'' refers to the links from the source to the relays, and ``second-hop'' refers to the links from the relays to the destinations. At any given time, half-duplex and cooperative-communication constraints require that either the first-hop or the second-hop can be activated and not both. The presence of direct links from source to destinations will not invalidate the analysis presented in this paper, but would render it considerably harder. For simplicity, we assume that they are absent and thus concentrate on equal-path length networks. The channel model does not directly impact the queue-architecture, and thus the network algorithm and stability analysis presented in this paper. The channel is state dependent, and the joint-state distribution is assumed to be unknown. A particular channel model of interest is a linear interaction model with additive white Gaussian noise (AWGN). In the context of an AWGN channel, an example of state is a multiplicative fading parameter. We focus on an i.i.d. block-fading model with a block-length of $T$ symbols in the remainder of this paper. The channels remain constant for the duration of one block, and then change to a new (independent) realization from an underlying distribution from block to block. Let $t \in \mathbb{Z}_+$ denote the channel fading blocks, and let $\mathcal{F}$ denote the fading state-space, which is assumed to be discrete. In block $t$, $\mathbf{f}_1[t] \in \mathcal{F}^N$ denotes the fading realization for the first-hop and $\mathbf{f}_2[t] \in \mathcal{F}^{NK}$ denotes the fading realization for the second-hop. 
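The i.i.d. block-fading model can be illustrated with a short sketch. The two-letter fading alphabet and the uniform draws below are toy assumptions of ours; the text only requires a discrete alphabet and an underlying distribution that is unknown to the controller.

```python
import random

# Illustrative sketch of the i.i.d. block-fading model: in each block t,
# the first-hop fading f1[t] takes values in F^N and the second-hop fading
# f2[t] in F^(N*K), drawn independently from block to block.
F = ("good", "bad")      # toy discrete fading alphabet
N, K = 2, 3              # number of relays and destinations

def draw_fading_state(rng):
    f1 = tuple(rng.choice(F) for _ in range(N))      # f1[t] in F^N
    f2 = tuple(rng.choice(F) for _ in range(N * K))  # f2[t] in F^(N*K)
    return f1, f2

rng = random.Random(0)
f1, f2 = draw_fading_state(rng)      # one block's combined state f[t]
print(len(f1), len(f2))              # 2 6
```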
The combined fading-state is denoted by $\mathbf{f}[t]=(\mathbf{f}_1[t],\mathbf{f}_2[t])$. The corresponding random vectors are denoted by $\mathbf{F}_1[t]$, $\mathbf{F}_2[t]$ and $\mathbf{F}[t]$. Note that $\mathbf{F}[t]$ is i.i.d. over time, but can be spatially correlated. Let the probability that $\mathbf{F}[t]$ takes value $\mathbf{f}$ be $\pi_{\mathbf{f}}$. This is the underlying probability distribution that is unknown to the central controller. Next, we explain the time-scales in which network and channel parameters evolve in our system. The coherence time $T$ is assumed to be comparable to the channel-coding length in symbols. For the ease of presentation, the ``packet" (which is either the channel codeword or any real-vector representing the actual data packet) length is assumed to be equal to the coherence time $T$. It is straightforward to extend the analysis when the ``packet'' length is a sub-multiple of the coherence time $T$. Each ``packet" is transmitted on the first-hop and the second-hop exactly once. These transmissions need not happen in consecutive time-blocks, i.e., these ``packets" can be buffered at the relays. The coding performed at the source, the mappings performed at the relays, and the decoding at the destinations can be arbitrary, i.e., this includes any and all schemes that are information-theoretically capacity-optimal or, if capacity is unknown, then the best known coding scheme. Further, we assume that the instantaneous fading-state is causally known globally to the central controller. In other words, prior to transmission, the central controller is aware of the entire network channel state for that particular time-block. \subsection{Notation} Vectors are denoted by bold letters. For vectors, equality and inequality operators are defined component-wise. $\mathbf{a} \cdot \mathbf{b}$ denotes the dot product of $\mathbf{a}$ and $\mathbf{b}$. $|\cdot|$ denotes the cardinality of a set. 
$\mathbf{1}_{\{E\}}$ denotes the indicator function of event $E$. $(a)^+$ denotes $\max(a,0)$. ${\mathbb E}[\cdot]$ denotes the expectation operator. \section{Achievable Rates \& Queue-Architecture} \label{sec:rate} The notion of a ``packet'' here is different from traditional networks where a packet is decoded at all intermediate relays, and is usually meant for one destination. In this paper, the term ``packet'' refers to the set of coded symbols transmitted/received in the network. Note that each of the relays receives a different noisy version of the transmitted vector (transmitted ``packet"), which is subsequently mapped to a transmit vector (``packet") at each relay. Again, the destinations receive a noisy version of a linear combination of relays' transmit ``packets". In this paper, we refer to the physical-layer signalling vectors as {\em packets} at each node in the network. We choose to use this language as the entire network layer analysis is based on understanding the dynamics of these transmit vectors as they traverse the system. Consider a packet that is transmitted from the source to the $K$ destinations. Let this packet be transmitted on the first-hop during block $t_1$, and be transmitted on the second-hop during block $t_2$. Then, $\mathbf{g} = (\mathbf{f}_1[t_1],\mathbf{f}_2[t_2])$ is said to be the ``state'' seen by this packet. Note that this notion of state is different from the physical channel fading state, but it is of equal importance in our analysis. A packet transmitted by the source is received by all the destinations in two hops, but the amount of information each destination receives varies depending on the encoding rates. Given a state seen by the packet, the set of encoding rates that can be supported is known as the rate region for the given state. An extremely challenging problem even in the single-destination setting is to find the set of all achievable rates, or the capacity region for the given state. 
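The notion of the ``state seen by a packet'' can be made concrete with a toy example of our own construction: when a packet is buffered at the relays between its two hops, the state it sees need not coincide with the combined fading state of any single block.

```python
# A packet sent on the first hop in block t1 and on the second hop in block
# t2 sees g = (f1[t1], f2[t2]); once packets are buffered, this can differ
# from the combined fading state (f1[t], f2[t]) of every single block t.
fading = {          # toy per-block fading: t -> (first-hop, second-hop)
    0: ("good", "bad"),
    1: ("bad", "bad"),
    2: ("bad", "good"),
}

t1, t2 = 0, 2                        # packet buffered at the relays meanwhile
g = (fading[t1][0], fading[t2][1])   # state seen by this packet
print(g)                             # ('good', 'good')
print(g in fading.values())          # False: matches no single block's state
```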
Even though the capacity region is unknown in most cases, there are many efficient cooperative communication schemes that have been developed. Therefore, the main aims of this paper are: (\emph{i}) to develop a queue-architecture that can support existing (and future) cooperative schemes, and (\emph{ii}) to develop a throughput-optimal network algorithm using this queue-architecture. The queue-architecture developed in \cite{Jose-Ying-Vishwanath-09} for the single-destination setting keeps ``virtual'' queues at relays for every state. Suppose that each rate-region can be quantized such that the convex-hull of the set of quantized rate-vectors is ``nearly'' the same as the rate-region itself. Further, let us assume that the rate corresponding to each destination has to be quantized to $L$ levels. Now, a direct extension of the state-based virtual-queue-architecture would require ``virtual'' queues at relays for each state and each quantized rate-vector, which results in $L^K|\mathcal{F}|^{K(N+1)}$ ``virtual'' queues. This scales exponentially in the number of destinations $K$. Clearly, such a queue-architecture is not scalable in practice, and will face implementation issues. In order to design a low-complexity queue-architecture, we exploit the fact that practical systems implement a limited number of encoding schemes, as in the case of adaptive modulation and coding. For example, the source might choose to encode only two destinations at a time using superposition encoding. In this case, the total number of encoding schemes would be $K(K-1)L^2$. In another example, the source might choose to encode at a limited set of boundary rate-vectors again with superposition encoding. Let ${\mathcal M}$ denote the set of encoding schemes, and $\mathbf{r}_m$ denote the rate-vector corresponding to each encoding scheme $m \in {\mathcal M}$. Given that $|{\mathcal M}| \ll L^K|\mathcal{F}|^{KN},$ a queue-architecture needs to support these limited choices. 
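Plugging toy numbers (our own choices of $L$, $|\mathcal{F}|$, $N$ and $K$) into the counting formulas from the text shows the scale of the savings: the state-based architecture needs $L^K|\mathcal{F}|^{K(N+1)}$ virtual queues, versus $|{\mathcal M}||\mathcal{F}|^N$ per relay for the encoding-based one, with $|{\mathcal M}|=K(K-1)L^2$ in the two-destination superposition-encoding example.

```python
# Quick numeric comparison of the two queue counts from the text.
L = 4              # quantization levels per destination rate (toy value)
F_size = 4         # fading alphabet size |F| (toy value)
N, K = 2, 3        # relays and destinations (toy values)
M_size = K * (K - 1) * L ** 2        # |M| for the superposition example

state_based = L ** K * F_size ** (K * (N + 1))   # L^K * |F|^(K(N+1))
encoding_based = M_size * F_size ** N            # |M| * |F|^N per relay

print(state_based)      # 16777216
print(encoding_based)   # 1536
```

Even for these small parameters, the encoding-based count is four orders of magnitude smaller, and the gap widens exponentially in $K$.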
While a queue-architecture can take advantage of this, it needs to allow for arbitrary mapping at the relays and decoding at the destinations. These are usually state-dependent; for example, an amplify-and-forward mapping is state-dependent. Before describing our queue-architecture, we characterize the throughput region of the two-hop cooperative network. For this, we assume the knowledge of the fading distribution. Define $\mathcal{I}=\{(m,\mathbf{g})|m\in {\mathcal M} \text{ can be supported by state }\mathbf{g} \in \mathcal{F}^{(N+1)K}\}$, which represents whether an encoding scheme is supported by a state or not\footnote{We do not explicitly deal with packet error rate, as it is assumed that the achievable rate-vector is defined appropriately with required packet error rate.}. Now, let $\mathbf{f}=(\mathbf{f}_1,\mathbf{f}_2)$ be any fading-state where $\mathbf{f}_1$ is the fading-state of the first-hop and $\mathbf{f}_2$ is the fading-state of the second-hop. Similarly, let $\mathbf{g}=(\mathbf{g}_1,\mathbf{g}_2)$ be any state. We define $\hat{\mathcal{F}}=\mathcal{F}^{(N+1)K}$, $\mathcal{I}_1=\{(\mathbf{f},\mathbf{g})|\mathbf{g}_{1}=\mathbf{f}_1\}$, and $\mathcal{I}_2=\{(\mathbf{f},\mathbf{g})|\mathbf{g}_{2}=\mathbf{f}_2\}$. With the above definitions, the throughput region of the network is characterized in the following lemma. 
\begin{lemma} \label{lem:thruput} A rate-vector $\hat{\mathbf{r}}$ is in the throughput region denoted by $\mathcal{T}$ only if there exists $a_{\mathbf{f}}^{m,\mathbf{g}}\geq 0$ and $b_{\mathbf{f}}^{m,\mathbf{g}}\geq 0$ for all $m \in {\mathcal M}$, $\mathbf{g} \in \hat{\mathcal{F}}$ and $\mathbf{f} \in \hat{\mathcal{F}}$ such that \begin{eqnarray} &&{\hat{\mathbf{r}}=\sum_{m, \mathbf{g}, \mathbf{f}} \left( \pi_{\mathbf{f}}a_{\mathbf{f}}^{m,\mathbf{g}} \mathbf{r}_{m} \mathbf{1}_{\{(\mathbf{f},\mathbf{g})\in \mathcal{I}_1\}}\mathbf{1}_{\{(m,\mathbf{g})\in \mathcal{I} \}} \right),}\label{eq: fc1}\\ &&{\sum_{\mathbf{f} \in \hat{\mathcal{F}}} \pi_{\mathbf{f}}a_{\mathbf{f}}^{m,\mathbf{g}} \mathbf{1}_{\{(\mathbf{f},\mathbf{g})\in \mathcal{I}_1\}}=} \sum_{\mathbf{f} \in \hat{\mathcal{F}}} \pi_{\mathbf{f}}b_{\mathbf{f}}^{m,\mathbf{g}} \mathbf{1}_{\{(\mathbf{f},\mathbf{g})\in \mathcal{I}_2\}}, \forall (m,\mathbf{g})\in \mathcal{I}\label{eq: fc2},\\ &&{\sum_{m, \mathbf{g}}a_{\mathbf{f}}^{m,\mathbf{g}} +b_{\mathbf{f}}^{m,\mathbf{g}} \leq 1, \forall \mathbf{f}.} \label{eq: overall} \end{eqnarray} \end{lemma} \begin{IEEEproof} Let $a_{\mathbf{f}}^{m,\mathbf{g}}$ be the fraction of time for which packets corresponding to encoding scheme $m$ and state $\mathbf{g}$ are transmitted from the source to the relays when the system is in fading state $\mathbf{f}.$ Similarly, let $b_{\mathbf{f}}^{m,\mathbf{g}}$ be the fraction of time for which these packets are transmitted from the relays to the destinations. (\ref{eq: fc1}) is the flow conservation constraint for the source, and (\ref{eq: fc2}) is the flow conservation constraint for each encoding scheme and state. (\ref{eq: overall}) is the time conservation constraint for each fading-state. A central controller with the knowledge of the fading distribution can achieve these rates using static time-division. \end{IEEEproof} An immediate corollary of this lemma is the following. \begin{cor} The throughput region $\mathcal{T}$ is convex. 
\end{cor} {\bf Encoding-based Queue-architecture:} At the source node $s$, there are $K$ queues consisting of bits (or data) corresponding to the $K$ destinations. We denote the queue at the source corresponding to the $k$-th destination by $Q_s^k$ with queue-length $Q_s^k[t]$ during block $t$. There is an exogenous i.i.d. arrival process $A^k[t]$ of data-bits into $Q_s^k$ with mean rate $\lambda_k T$ bits/block and bounded variance. The vector of arrival rates $\lambda_k$ is denoted by $\boldsymbol\lambda$. At each relay (say $n$), we keep virtual queues corresponding to each encoding scheme $m$ and each fading state for the first-hop $\mathbf{g}_1$ denoted $Q_n^{m,\mathbf{g}_1}$ with queue-length $Q_n^{m,\mathbf{g}_1}[t]$ during block $t$. This queue consists of real-valued packets encoded at rate $\mathbf{r}_m.$ Since we keep virtual queues for each fading state corresponding to the first-hop, the mapping function performed at the relays can be a function of the fading state. Similarly, the decoding function can be a function of the fading state. With this queue-architecture, the number of virtual queues at each relay is $|{\mathcal M}||\mathcal{F}|^N.$ This is considerably smaller than the number of virtual queues required in the state-based approach, and thus provides a low-complexity queue-architecture. Note that the gain is high in the setting when the number of destinations is large and the number of relays is small, which is the case in cellular systems. The queue dynamics are as follows: During block $t$, if the fading state for the first-hop is $\mathbf{g}_1$ and if the central controller decides that the source should transmit a packet using encoding scheme $m$, then the following queues get updated: \begin{eqnarray} \label{eq:Qs1} Q_s^k[t+1] &=& (Q_s^k[t] + A^k[t] - r_m^kT)^+, \forall k,\\ \label{eq:Qn1} Q_n^{m,\mathbf{g}_1}[t+1] &=& Q_n^{m,\mathbf{g}_1}[t] + T, \forall n. 
\end{eqnarray} During block $t$, if the fading state for the second-hop is $\mathbf{g}_2$, then the central controller can decide to transmit packets from queues $Q_n^{m,\mathbf{g}_1},\forall n$ for some given $m$ and $\mathbf{g}_1$ only if $(m,\mathbf{g}_1,\mathbf{g}_2)\in \mathcal{I} $. This ensures that the packet is received successfully at all the destinations. In this case, the following queues get updated: \begin{eqnarray} \label{eq:Qs2} Q_s^k[t+1] &=& Q_s^k[t] + A^k[t], \forall k,\\ \label{eq:Qn2} Q_n^{m,\mathbf{g}_1}[t+1] &=& (Q_n^{m,\mathbf{g}_1}[t] - T)^+, \forall n. \end{eqnarray} Next, we address the question of designing a central controller that does not have the knowledge of the arrival rates or the fading state distribution. \section{Throughput-Optimal Network Algorithm} \label{sec:main} In this section, we show that a throughput-optimal central controller can be designed without the knowledge of the arrival rates or the fading state distribution. Since cooperative schemes require strong node coordination, the centralized nature of the algorithm does not create additional system requirements. The following algorithm is motivated by back-pressure-based Max-Weight algorithms for non-cooperative networks. {\bf Back-pressure-based Algorithm:} In every block, the central controller makes decisions based on the current fading state of the system and the current queue-lengths. Let the fading-state during block $t$ be $\mathbf{f}[t]=(\mathbf{f}_1,\mathbf{f}_2)$. The network algorithm run by the controller is as follows: \begin{enumerate} \item It computes \begin{eqnarray} A = \max \limits_{m} \sum_{k}\left(Q_s^k[t] - r_m^k \sum_{n=1}^NQ_{n}^{m,\mathbf{f}_1}[t]\right)r_{m}^k \nonumber \end{eqnarray} and an optimal parameter $m^*$ for this problem. 
\item It computes \begin{eqnarray} B = &\max \limits_{m,\mathbf{g}_1} &(\mathbf{r}_m \cdot \mathbf{1})^2 \sum_{n=1}^NQ_{n}^{m,\mathbf{g}_1}[t],\nonumber \\ &\text{s.t.}& (m,(\mathbf{g}_1,\mathbf{f}_2))\in \mathcal{I}, \nonumber \end{eqnarray} and a set of optimal parameters $\hat{m}$ and $\hat{\mathbf{g}}_1$ for this problem. \item If $A\ge B$, then the central controller decides to transmit a packet from the source to the relays using encoding scheme $m^*$. \item Otherwise, the central controller decides to transmit a packet from queues $Q_n^{\hat{m},\hat{\mathbf{g}}_1},\forall n$, i.e., from the relays to the destinations. \end{enumerate} The controller repeats steps $1-4$ in every block. The following theorem provides a strong theoretical guarantee on the throughput performance of this algorithm. \begin{theorem} \label{thm:main} The above algorithm stochastically stabilizes all the queues for any $\boldsymbol\lambda$ if there exists $\epsilon>0$ such that $\boldsymbol\lambda+\epsilon\mathbf{1}$ is within the throughput region given in Lemma \ref{lem:thruput}, i.e., the underlying network Markov chain is positive recurrent. In simple terms, the algorithm is throughput-optimal. \end{theorem} Before proceeding to the proof of this theorem, we state the following lemma that is used in the proof. \begin{lemma} \label{lem:for_main} Suppose that there exists $\epsilon>0$ such that $\boldsymbol\lambda+\epsilon\mathbf{1}$ is within the throughput region. 
Then, there exists $a_{\mathbf{f}}^{m,\mathbf{g}} \ge 0$, $b_{\mathbf{f}}^{m,\mathbf{g}} \ge 0$ and $\delta > 0$ such that the following set of conditions is satisfied: \begin{eqnarray} &&\lambda_k - \sum_{m,\mathbf{g},\mathbf{f}} (\pi_{\mathbf{f}} r_{m}^{k} a_{\mathbf{f}}^{m,\mathbf{g}}) \le -\delta, \forall k, \nonumber \\ &&\sum_{\mathbf{f}} \pi_{\mathbf{f}} (a_{\mathbf{f}}^{m,\mathbf{g}}-b_{\mathbf{f}}^{m,\mathbf{g}}) \le -\delta, \forall m,\mathbf{g}, \nonumber \\ &&\sum_{m,\mathbf{g}}a_{\mathbf{f}}^{m,\mathbf{g}} +b_{\mathbf{f}}^{m,\mathbf{g}} \leq 1, \forall \mathbf{f}, \nonumber \\ &&a_{\mathbf{f}}^{m,\mathbf{g}} = 0, \forall (\mathbf{f},\mathbf{g}) \notin \mathcal{I}_1,\forall (m,\mathbf{g}) \notin \mathcal{I},\nonumber \\ &&b_{\mathbf{f}}^{m,\mathbf{g}} = 0, \forall (\mathbf{f},\mathbf{g}) \notin \mathcal{I}_2,\forall (m,\mathbf{g}) \notin \mathcal{I}. \nonumber \end{eqnarray} \end{lemma} \begin{IEEEproof} The proof of this lemma is fairly straightforward, and is omitted for brevity. \end{IEEEproof} \subsection{Proof of Theorem \ref{thm:main}} Since the queues form a Markov chain, we use the Foster-Lyapunov theorem to prove stability \cite{Meyn-Tweedie-93,B:A03}. Without loss of generality, we assume that $\mathbf{r}_{m} \ne \mathbf{0},\forall m$. Otherwise, those queues at the relays can be removed without affecting the throughput region and the stability of the system. Now, consider the Lyapunov function $$V(\mathbf{Q}[t]) = \sum_{k}\left(Q^k_s[t]\right)^2 + \sum_{n=1}^{N}\sum_{m,\mathbf{g}_1} \left(\mathbf{r}_m\cdot \mathbf{1} Q_{n}^{m,\mathbf{g}_1}[t]\right)^2,$$ where $\mathbf{Q}[t]$ denotes the vector of all queue lengths. Next, we consider an optimization problem that captures the algorithm given in this section. 
Consider a fading-state $\mathbf{f}$ and the following discrete optimization problem: \begin{eqnarray} &\max \limits_{\alpha_{\mathbf{f}}^{m,\mathbf{g}} , \beta_{\mathbf{f}}^{m,\mathbf{g}}} & \sum_{m,\mathbf{g},k} \left[\left(Q_s^k[t] - r_m^k \sum_{n=1}^NQ_{n}^{m,\mathbf{g}_1}[t]\right)r_{m}^k \alpha_{\mathbf{f}}^{m,\mathbf{g}}\right] \nonumber \\ \label{back_pressure} & & +\sum_{m,\mathbf{g}} \left[(\mathbf{r}_m \cdot \mathbf{1})^2 \left(\sum_{n=1}^NQ_{n}^{m,\mathbf{g}_1}[t]\right)\beta_{\mathbf{f}}^{m,\mathbf{g}}\right], \\ &\text{s.t.} & \sum_{m,\mathbf{g}} (\alpha_{\mathbf{f}}^{m,\mathbf{g}} + \beta_{\mathbf{f}}^{m,\mathbf{g}}) \le 1, \nonumber \\ & & \alpha_{\mathbf{f}}^{m,\mathbf{g}} = 0, \forall (\mathbf{f},\mathbf{g}) \notin \mathcal{I}_1, \nonumber \\ & & \beta_{\mathbf{f}}^{m,\mathbf{g}} = 0, \forall (\mathbf{f},\mathbf{g}) \notin \mathcal{I}_2,\forall (m,\mathbf{g}) \notin \mathcal{I}, \nonumber \\ & & \alpha_{\mathbf{f}}^{m,\mathbf{g}}, \beta_{\mathbf{f}}^{m,\mathbf{g}} \in \{0,1\}, \forall m,\mathbf{g}. \nonumber \end{eqnarray} It is fairly straightforward to check that the algorithm given in this section results from this optimization problem. We remark that this optimization has many redundant variables that are introduced for the purpose of the proof. Let an optimal assignment to the optimization problem in (\ref{back_pressure}) be ${\hat{\alpha}}_{\mathbf{f}}^{m,\mathbf{g}},{\hat{\beta}}_{\mathbf{f}}^{m,\mathbf{g}}$. 
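As an aside, the back-pressure rule in steps 1--4, together with the queue updates (\ref{eq:Qs1})--(\ref{eq:Qn2}), can be sketched in simulation. All concrete values below (rate vectors, support set, fading alphabets) are toy assumptions of ours, not from the paper; since all relay copies of $Q_n^{m,\mathbf{g}_1}$ are updated in lockstep, a single per-$(m,\mathbf{g}_1)$ queue scaled by $N$ stands in for the sum over relays.

```python
import random

# Toy simulation of the back-pressure controller (steps 1-4).
N, K, T = 2, 2, 10                          # relays, destinations, block length
R = {0: (1.0, 0.0), 1: (0.0, 1.0)}          # scheme m -> rate vector r_m (toy)
G1, G2 = ("g", "b"), ("g", "b")             # toy per-hop fading summaries
I = {(m, g1, "g") for m in R for g1 in G1}  # support set: needs good 2nd hop

Qs = [0.0] * K                               # source queues, one per destination
Qr = {(m, g1): 0.0 for m in R for g1 in G1}  # per-relay virtual queues

def step(f1, f2, arrivals):
    # Step 1: weight A and the best encoding scheme m* for the first hop.
    A, m_star = max(
        (sum((Qs[k] - R[m][k] * N * Qr[(m, f1)]) * R[m][k] for k in range(K)), m)
        for m in R)
    # Step 2: weight B and the best servable (m, g1) pair on the second hop.
    cand = [(sum(R[m]) ** 2 * N * Qr[(m, g1)], m, g1)
            for m in R for g1 in G1 if (m, g1, f2) in I]
    B, m_hat, g1_hat = max(cand) if cand else (float("-inf"), None, None)
    # Steps 3-4: serve the hop with the larger weight, then update queues.
    if A >= B:
        for k in range(K):
            Qs[k] = max(Qs[k] + arrivals[k] - R[m_star][k] * T, 0.0)
        Qr[(m_star, f1)] += T
    else:
        for k in range(K):
            Qs[k] += arrivals[k]
        Qr[(m_hat, g1_hat)] = max(Qr[(m_hat, g1_hat)] - T, 0.0)

rng = random.Random(1)
for _ in range(300):
    step(rng.choice(G1), rng.choice(G2), [2.0] * K)
print(all(q >= 0.0 for q in list(Qs) + list(Qr.values())))  # True
```

The sketch uses only causally observed fading and the current queue lengths, mirroring the fact that the controller needs neither the arrival rates nor the fading distribution.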
Now, from (\ref{eq:Qs1}), (\ref{eq:Qs2}), (\ref{eq:Qn1}) and (\ref{eq:Qn2}), we can bound queue-lengths during block $t+1$ as follows: \begin{eqnarray} (Q_s^k[t+1])^2 &= &\left(Q_s^k[t] + A^k[t] - \left(\sum_{m,\mathbf{g}}r_{m}^kT{\hat{\alpha}}_{\mathbf{f}}^{m,\mathbf{g}}\right)\right)^2 \nonumber \\ &\le &(Q^k_s[t])^2 + ({A}^k[t])^2 + \left(\sum_{m,\mathbf{g}}r_{m}^kT{\hat{\alpha}}_{\mathbf{f}}^{m,\mathbf{g}}\right)^2 \nonumber \\&&- 2Q^k_s[t]\left(\sum_{m,\mathbf{g}}r_{m}^kT{\hat{\alpha}}_{\mathbf{f}}^{m,\mathbf{g}}-{A}^k[t]\right), \forall k,\nonumber \end{eqnarray} \begin{eqnarray} {(\mathbf{r}_m\cdot \mathbf{1} Q_{n}^{m,\mathbf{g}_1}[t+1])^2} &\le&\left(\mathbf{r}_m\cdot \mathbf{1} Q_{n}^{m,\mathbf{g}_1}[t] + \mathbf{r}_m\cdot \mathbf{1} T \sum_{\mathbf{g}_2}\left({\hat{\alpha}}_{\mathbf{f}}^{m,\mathbf{g}} - {\hat{\beta}}_{\mathbf{f}}^{m,\mathbf{g}}\right)\right)^2 \nonumber \\ &= &\left(\mathbf{r}_m\cdot \mathbf{1} Q_{n}^{m,\mathbf{g}_1}[t]\right)^2 + \left(\mathbf{r}_m\cdot \mathbf{1} T \sum_{\mathbf{g}_2}\left({\hat{\alpha}}_{\mathbf{f}}^{m,\mathbf{g}} - {\hat{\beta}}_{\mathbf{f}}^{m,\mathbf{g}}\right)\right)^2 \nonumber \\ &&+2(\mathbf{r}_m\cdot \mathbf{1})^2 Q_{n}^{m,\mathbf{g}_1}[t] T \sum_{\mathbf{g}_2}\left({\hat{\alpha}}_{\mathbf{f}}^{m,\mathbf{g}} - {\hat{\beta}}_{\mathbf{f}}^{m,\mathbf{g}}\right),\forall m, \mathbf{g}_1. \nonumber \end{eqnarray} Applying the law of iterated expectations, we obtain \begin{eqnarray} {\mathbf{E}\left[V\left(\mathbf{Q}\left[t+1\right]\right)-V\left(\mathbf{Q}\left[t\right]\right)|\mathbf{Q}\left[t\right]\right]-M}& \le& \sum_{\mathbf{f}} \pi_{\mathbf{f}} \left[- \sum_{k}2Q^k_s[t]\left(\sum_{m,\mathbf{g}}r_{m}^kT{\hat{\alpha}}_{\mathbf{f}}^{m,\mathbf{g}}-\lambda_kT\right)+ \right.\nonumber \\ & &\left. 
\sum_{m,\mathbf{g}_1,n} \left(2(\mathbf{r}_m\cdot \mathbf{1})^2 Q_{n}^{m,\mathbf{g}_1}[t] T \sum_{\mathbf{g}_2}\left({\hat{\alpha}}_{\mathbf{f}}^{m,\mathbf{g}} - {\hat{\beta}}_{\mathbf{f}}^{m,\mathbf{g}}\right)\right)\right] \nonumber \\ \label{eq:before_lp_relax} & =&2T\left[\sum_{k}Q_s^k[t]\left(\lambda_k - \sum_{m,\mathbf{g},\mathbf{f}} \left(\pi_{\mathbf{f}} r_{m}^{k} {\hat{\alpha}}_{\mathbf{f}}^{m,\mathbf{g}}\right)\right)+ \right.\nonumber \\ &&\left.\sum_{m,\mathbf{g},n} (\mathbf{r}_m\cdot \mathbf{1})^2 Q_n^{m,\mathbf{g}_1}[t] \left(\sum_{\mathbf{f}} \pi_{\mathbf{f}} \left({\hat{\alpha}}_{\mathbf{f}}^{m,\mathbf{g}} - {\hat{\beta}}_{\mathbf{f}}^{m,\mathbf{g}}\right)\right)\right]. \end{eqnarray} where $M$ is a finite positive value, as the variances associated with the arrival processes are bounded and the throughput region is compact. Let $a_{\mathbf{f}}^{m,\mathbf{g}},b_{\mathbf{f}}^{m,\mathbf{g}}$ be the values given by Lemma \ref{lem:for_main}. Now, substituting values $a_{\mathbf{f}}^{m,\mathbf{g}}$ instead of $\hat{\alpha}_{\mathbf{f}}^{m,\mathbf{g}}$ and $b_{\mathbf{f}}^{m,\mathbf{g}}$ instead of $\hat{\beta}_{\mathbf{f}}^{m,\mathbf{g}}$ on the right-hand side of (\ref{eq:before_lp_relax}) increases its value. This is due to the following reason. First, consider the linear program (LP) obtained by relaxing the integer constraints of the optimization problem (\ref{back_pressure}) and introducing non-negativity constraints. This relaxation is tight as LPs have at least one optimal solution at an extreme point of the feasible set. Next, the possible values for $a_{\mathbf{f}}^{m,\mathbf{g}},b_{\mathbf{f}}^{m,\mathbf{g}}$ are a subset of the feasible set for the LP. 
Therefore, by substituting results from Lemma \ref{lem:for_main} in (\ref{eq:before_lp_relax}), we have \begin{eqnarray} {\mathbf{E}\left[V(\mathbf{Q}[t+1])-V(\mathbf{Q}[t])|\mathbf{Q}[t]\right]-M} &\le &2T\left[\sum_{k}Q_s^k[t]\left(\lambda_k - \sum_{m,\mathbf{g},\mathbf{f}} (\pi_{\mathbf{f}} r_{m}^{k} a_{\mathbf{f}}^{m,\mathbf{g}})\right)+ \right.\nonumber \\ &&\left.\sum_{m,\mathbf{g},n} (\mathbf{r}_m\cdot \mathbf{1})^2 Q_n^{m,\mathbf{g}_1}[t] \left(\sum_{\mathbf{f}} \pi_{\mathbf{f}} (a_{\mathbf{f}}^{m,\mathbf{g}}-b_{\mathbf{f}}^{m,\mathbf{g}})\right)\right]\nonumber \\ \label{eq:negdrift} &\le &-2T\delta\left[\sum_{k}Q_s^k[t]+ \sum_{m,\mathbf{g},n} (\mathbf{r}_m\cdot \mathbf{1})^2 Q_n^{m,\mathbf{g}_1}[t] \right]. \end{eqnarray} Now, from (\ref{eq:negdrift}), it is fairly straightforward to see that there is strict negative drift except on a compact subset of the set of queue-lengths. This completes the proof. \QED \section{Conclusion} \label{sec:conclude} In this paper, we develop an encoding-based queue architecture for cooperative relay networks. Cooperative relay networks are fundamentally different from traditional capacitated and non-cooperative wireless networks as they require physical layer coordination. This physical layer coordination cannot be abstracted out at the network layer in terms of bits-in-bits-out models, and thus a stability analysis that incorporates both the physical layer encoding and the network layer dynamics is needed, as performed in this paper. The encoding-based queue architecture is a succinct representation needed for generating network stabilizing algorithms. Using this queue-architecture, we show that throughput-optimal network algorithms can be developed even when the fading distribution and the arrival distributions are unknown.
\section{Introduction} Despite twelve years of intensive experimental and theoretical studies of copper-oxide based superconducting compounds, \cite{conferences} no consensus about the fundamental physics or even about the minimum necessary Hamiltonian to describe the phenomena has emerged. One of the few theoretical ideas which has clearly survived experimental tests is that at density $x \approx x_c$, near that for the maximum superconducting transition temperature, the normal state is a marginal Fermi-liquid (MFL)\cite{mfl}. The MFL is characterized by a scale-invariant particle-hole fluctuation spectrum which is only very weakly momentum dependent. One of the predictions of the MFL hypothesis is that the single-particle spectral function $G ( {\bf k} , \omega )$ has a nearly momentum-independent self-energy proportional to $\max ( | \omega |, T )$. The frequency and temperature dependence as well as the momentum independence have been tested in angle-resolved photoemission experiments.\cite{arpes} The observed non-Fermi-liquid behavior near $x \approx x_c$ in resistivity, thermal conductivity, optical conductivity, Raman scattering, tunneling spectra, and the Cu nuclear relaxation rate follows from the MFL hypothesis. The scale-invariance of the MFL fluctuations implies that a quantum-critical-point (QCP) exists at $x = x_c$, near the optimum composition. One expects that, in two or three dimensions, the QCP at $T = 0$ is the end-point of a phase of reduced symmetry as $x$ is varied. Similarly, a line of transitions, or at least a cross-over, is expected at a finite temperature $T_p (x)$ terminating at $(x=x_c, T=0)$. Indeed the generic phase-diagram, Fig. (1), of the copper-oxide compounds around $x \approx x_c$ displays such a topology. Region I has MFL properties dominated by quantum-fluctuations, Region III displays properties characteristic of a Fermi-liquid, while Region II - the pseudo-gap region - displays a loss of low-energy excitations compared to Region I.
Below the line $T_p (x)$ between regions I and II, the single-particle spectrum displays lowered rotational symmetry, while no translational symmetry appears broken. The superconductivity region sits spanning the three distinct normal-state regions. Fig. (1) may be compared to the topologically similar phase-diagram of some heavy-Fermion compounds, in which the line $T_p (x)$ corresponds to an antiferromagnetic transition.\cite{mathur} From this point of view the crucial question in Cu-O compounds is the symmetry in Region II, the so-called pseudo-gap phase. A systematic theory\cite{varma1,varma2} starting with a general model for Cu-O compounds provides an answer to this question. Region II in Fig. (1) is derived to be a phase in which a four-fold pattern of current flows in the ground state in each unit cell, as shown in Fig. (2). Time-reversal symmetry as well as rotational symmetry is broken, but the product of the two is conserved. This phase has been called the circulating-current (CC) phase. Quantum fluctuations about this phase are shown to have MFL fluctuations, characteristic of Region I. The same fluctuations promote ``$d$''- or generalized ``$s$''-state pairing depending on the Fermi-surface at a given doping. While a microscopic theory in agreement with most of the principal experimental results has been presented, one can be confident of the applicability of the theory only if the CC phase is directly observed. The CC phase has a very elusive order parameter. The four-fold pattern of microscopic magnetic moments in each unit cell changes the Bragg intensity for polarized neutrons at certain pre-existing Bragg spots. But the intensity for nuclear scattering at these Bragg spots is $O (10^4 )$ times the predicted magnetic intensity. Muon spin-resonance ($\mu$-SR) would be a possible probe, but the magnetic field from the current pattern in Fig.
(2) is zero at most symmetry points and along the principal symmetry lines, where muons are known to sit preferentially. Perhaps an additional perturbation such as an external magnetic field can be used to lower the symmetry at the sites preferred by muons. In that case $\mu$-SR could be used to search for the predicted phase. I propose here a new kind of experiment, which is a microscopic analog of circular dichroism. The idea is that ARPES at a specific point near the Fermi-surface should have an electron yield which is different for right circularly polarized and left circularly polarized photons if the ground state has T-breaking of the form shown in Fig. (2). Further, the relative intensity should change in a systematic fashion with the momentum around the Fermi-surface. I present below the results of the calculation based on this idea and then discuss the feasibility of the experiment. The idea itself is more general than the specific application to copper-oxides. Any time-reversal breaking phase will in general yield a different current density for right and left circularly polarized photons. (But the characteristic signature of the state, revealed by the momentum dependence of the asymmetry in the current for the left and right circular polarizations, must be calculated anew for each possibility.) The experiment may, for example, be tried to see if the superconducting state of the compound $Sr_2RuO_4$ \cite{maeno} breaks time-reversal symmetry. \section{ARPES With Polarized Photons} My object is to deduce the polarization and symmetry dependence of the ARPES current and a rough estimate of its magnitude. For this purpose, a simple calculation using tight-binding wave-functions in the solid is sufficient.
Assume that a photon of energy $\omega$ shone on the crystal produces a free-electron with momentum ${\bf p}$ and energy $E_{\bf p}$ at the detector due to absorption of the photon by an electronic state $| {\bf k} \rangle$ inside the crystal with crystal momentum ${\bf k}$. The momentum of the photon is assumed very small compared to ${\bf k}$ and ${\bf p}$. The current $J_{\bf p,\bf k}$ collected at the detector for uniform illumination over a given area is \cite{ashcroft} \begin{equation} J_{\bf p,\bf k} = 2 \pi \, e \, f \left( \epsilon_{\bf k} \right) \left | \left< \: {\bf p} \, | H^{\prime} | \, {\bf k} \right> \: \right|^2 \delta \left( E_{\bf p} - \epsilon_{\bf k} - \hbar \omega \right) \end{equation} where $f ( \epsilon_{\bf k} )$ is the Fermi-function. The primary contribution to the current comes from the matrix element \begin{equation} \left< {\bf p} \, | H^{\prime} | \, {\bf k} \right> = \frac{-ie}{2mc} \int d \; {\bf r} \: e^{i {\bf p} \cdot {\bf r}} {\bf A} \cdot {\bf \nabla} \Psi_{\bf k} (r) \end{equation} where {\bf A} is the vector potential of the incident photons and $\Psi_{\bf k} ({\bf r})$ is the wave function of the state $| {\bf k} \rangle$. There is a smaller contribution due to the gradient of the potential at the surface, which is briefly discussed at the end. \subsection{Wavefunctions} The tight-binding conduction-band state of the Cu-O metals (assumed to form a two-dimensional metal), for the case that the difference in energy between the Cu-$d_{x^2 - y^2}$ level $\epsilon_d$ and the O-$p_{x,y}$ levels $\epsilon_{\bf p}$ is much less than their hybridization energy, and when the direct oxygen-oxygen hopping parameter $t_{pp}$ is set to zero, is \begin{equation} | {\bf k}_o \rangle = \frac{d_k^+}{\sqrt{2}} \: + i\left(\: \frac{s_x \, p_{kx}^+ + s_y \, p_{ky}^+}{\sqrt{2} s_{xy}}\right) \end{equation} where $s_{x,y} = \sin k_{x,y} a/2$ and $s_{xy}^2 = \sin^2 \frac{k_x a}{2} + \sin^2 \frac{k_y a}{2}$.
Spin labels have been suppressed. $d_k^+$, $p_{kx,y}^+$ are respectively the creation operators for the basis wave-functions \begin{eqnarray} \phi_d({\bf k}) & = & \frac{1}{\sqrt{N}} \sum_i e^{-i {\bf k \cdot R_i}} \: \phi_d ({\bf r - R_i}) , \nonumber \\ \phi_{p_{x,y}}({\bf k}) & = & \frac{1}{\sqrt{N}} \sum_i \: e^{-i \bf k \cdot R_i} \: e^{-i k_{x,y}\frac{a}{2}} \phi_{p_{x,y}}({\bf r - R_i} - \frac{a_{x,y}}{2} ) \end{eqnarray} where $\phi_d ({\bf r - R_i})$ is the $d_{x^2-y^2}$ atomic orbital at the Cu-site $R_i$ and $\phi_{px} \left( {\bf r} - {\bf R}_i - \frac{a_x}{2} \right)$ is the $p_x$ wavefunction at the oxygen site at ${\bf R}_i + \frac{a_x}{2}$, etc. In the circulating current phase, the conduction band wave-function is modified to \cite{footnote} \begin{equation} |{\bf k} \rangle = \left( | {\bf k}_o \rangle + \theta_0 | {\bf k}_1 \rangle \right) / \sqrt{1 + \theta_o^2 \, s_x^2 \, s_y^2} \end{equation} where \begin{equation} | {\bf k}_1 \rangle \simeq s_x \, s_y \left( s_y \: p_{kx}^+ - s_x \: p_{ky}^+ \right) / s_{xy}. \end{equation} In Eq. (5) $\theta_0$ characterises the strength of the symmetry-breaking. \subsection{Matrix-elements and Current} In order to evaluate the matrix element, Eq. (2), I write \begin{equation} \begin{array}{rl} \phi_d ( {\bf r}) & = \psi_d (| {\bf r} | ) \: \frac{(x^2 - y^2 )}{r^2} , \\ \phi_{p_\mu} ( {\bf r} ) & = \psi_p ( | {\bf r} |) \: \frac{\mu}{|r|}; \: \mu = x,y \end{array} \end{equation} $\psi_d (| {\bf r}|)$ and $\psi_{p_\mu} (| r |)$ are characterized by a fall-off distance {\it a} of the order of the atomic size. 
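As a quick numerical sanity check (a sketch, not part of the original derivation, with arbitrary test values of ${\bf k}$ and $\theta_0$), the normalization factor in Eq. (5) can be verified from the amplitudes of the states in Eqs. (3) and (6) on the basis $(d, p_x, p_y)$:

```python
import numpy as np

def qutrit_amplitudes(kx, ky, theta0, a=1.0):
    """Amplitudes of |k> = (|k_0> + theta0 |k_1>) / sqrt(1 + theta0^2 sx^2 sy^2)
    on the basis (d, p_x, p_y), following Eqs. (3), (5) and (6)."""
    sx, sy = np.sin(kx * a / 2), np.sin(ky * a / 2)
    sxy = np.sqrt(sx**2 + sy**2)
    # |k_0> of Eq. (3)
    k0 = np.array([1 / np.sqrt(2),
                   1j * sx / (np.sqrt(2) * sxy),
                   1j * sy / (np.sqrt(2) * sxy)])
    # |k_1> of Eq. (6): s_x s_y (s_y p_x - s_x p_y) / s_xy
    k1 = np.array([0.0, sx * sy * sy / sxy, -sx * sy * sx / sxy])
    return (k0 + theta0 * k1) / np.sqrt(1 + theta0**2 * sx**2 * sy**2)

# |k> is normalized for any k and theta0:
v = qutrit_amplitudes(1.1, 2.3, 0.25)
assert np.isclose(np.vdot(v, v).real, 1.0)
```

The check works because $\langle {\bf k}_o | {\bf k}_1 \rangle = 0$ and $\langle {\bf k}_1 | {\bf k}_1 \rangle = s_x^2 s_y^2$, which is exactly why the factor $\sqrt{1 + \theta_o^2 \, s_x^2 \, s_y^2}$ in Eq. (5) normalizes the state.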
Then \begin{equation} \begin{array}{rl} {\bf \nabla} \phi_{p_\mu} ( {\bf r} ) & \approx \left[ \frac{1}{{\it a}} \left( \frac{\hat{x}x + \hat{y}y + \hat{z}z}{| {\bf r} |} \right) \frac{\mu}{| r |} + \frac{\hat{\mu}}{| r |} \right] \psi_p ( | r | ), \\ {\bf \nabla} \phi_d ({\bf r}) & \approx \left[ \frac{1}{{\it a}} \left( \frac{\hat{x}x + \hat{y}y + \hat{z}z}{| {\bf r}|} \right) \frac{( x^2 - y^2 )}{r^2} + \frac{2 (x \hat{x} - y \hat{y} )}{r^2} \right] \psi_d ( | r | ) . \end{array} \end{equation} We need the (two-dimensional) momentum distribution of the wavefunctions in Eq. (2). For this purpose define \begin{equation} \int d \, {\bf r} \: e^{i(p_x x + p_y y)} \nabla_{\nu} \phi_d ( {\bf r}) \; \equiv i \; f_d^{\nu} ( p_x , p_y ); \; \nu = x,y \: \:. \end{equation} Note that $f_d^x (p_x , p_y )$ can be written as the product of an odd function of $p_x$ and an even function of $p_y$, etc. Similarly, \begin{equation} \int d \, {\bf r} \: e^{i ( p_x x + p_y y )} \nabla_{\nu} \phi_{p_\mu} (r) \; \equiv \; f_{p_\mu}^\nu (p_x , p_y ) \end{equation} $f_{p_\mu}^{\mu} (p_x , p_y )$ is the product of an even function of $p_x$ and an even function of $p_y$, whereas $f_{p_\mu}^{\nu} (p_x , p_y )$ is the product of an odd function of $p_x$ and an odd function of $p_y$. The definitions in Eqs. (9,10) ensure that all the $f$'s are real. The $f(p)$'s fall off for $p$ of order the inverse atomic size. Therefore for $p$'s in the first or second Brillouin zone they are approximately constant. In terms of these quantities, the matrix element in Eq. (2) is calculated. Consider the case of left and right circularly polarized photons with vector potentials ${\bf A}_\ell$ and ${\bf A}_r$ respectively \begin{equation} {\bf A}_{\ell , r} \: = \: A \left( \hat{x} \pm i \, \hat{y} \right) \:.
\end{equation} Then a straightforward calculation leads, to leading order in $\theta_0$, to \begin{eqnarray} \langle {\bf p} \, | H^{\prime} | \, {\bf k} \rangle_{l,r} & = (\frac{e}{2\sqrt{2}mc})A \sum_{G_x, G_y} \delta \left( {\bf p} - {\bf k} - {\bf G} \right) \left\{ \left( R_o ({\bf G}, {\bf p},{\bf k}) \pm iI_o ({\bf G}, {\bf p},{\bf k}) \right) \right. \\ & \left. + \theta_0 \left( \pm R_1 ({\bf G}, {\bf p},{\bf k}) + i \, I_1 ({\bf G},{\bf p},{\bf k}) \right) \right\} \end{eqnarray} where ${\bf G} = (G_x , G_y )$ are the reciprocal lattice vectors, and \begin{equation} R_o ({\bf G}, {\bf p},{\bf k}) = f_d^x ( {\bf p} ) + \left( g(G_x,k_y) f_{p_x}^y ( {\bf p}) + g(G_y,k_x) f_{p_y}^y ( {\bf p})\right) \end{equation} \begin{equation} I_o ( {\bf G}, {\bf p},{\bf k} ) = f_d^y ( {\bf p} ) - \left( g(G_x,k_y) f_{p_x}^x ( {\bf p} ) + g(G_y,k_x) f_{p_y}^x ({\bf p}) \right) \end{equation} \begin{equation} R_1 ({\bf G}, {\bf p},{\bf k} ) = \left( \sin^2 \left( \frac{k_y a}{2}\right) g(G_x,k_y) f_{p_x}^x ({\bf p}) - \sin^2 \left( \frac{k_x a}{2}\right) g(G_y,k_x) f_{p_y}^x ({\bf p})\right) \end{equation} \begin{equation} I_1 ({\bf G}, {\bf p},{\bf k}) = \left(\sin^2 \left( \frac{k_y a}{2} \right) g(G_x,k_y) f_{p_x}^y ({\bf p}) - \sin^2 \left(\frac{k_x a}{2} \right) g(G_y,k_x) f_{p_y}^y ({\bf p})\right) \end{equation} In the above \begin{equation} g(r,s)= \frac{\sin(ra/2)}{\sqrt{\sin^2\left(ra/2\right)+\sin^2\left(sa/2\right)}}. \end{equation} In the usual experimental geometry, the contribution from each {\bf G} is selected separately.
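The left-right asymmetry implicit in the matrix elements $\langle {\bf p} \, | H^{\prime} | \, {\bf k} \rangle_{l,r}$ above can be checked numerically. In this sketch $R_o, I_o, R_1, I_1$ are arbitrary placeholder numbers standing in for the form factors of Eqs. (14)-(17), which would require the atomic orbital integrals to evaluate:

```python
import numpy as np

# <p|H'|k>_{l,r} ~ (R0 ± i I0) + theta0 (± R1 + i I1), up to a common prefactor.
def currents(R0, I0, R1, I1, theta0):
    """Photo-currents for left/right circular polarization, up to a prefactor."""
    Ml = (R0 + 1j * I0) + theta0 * ( R1 + 1j * I1)
    Mr = (R0 - 1j * I0) + theta0 * (-R1 + 1j * I1)
    return abs(Ml)**2, abs(Mr)**2

R0, I0, R1, I1, theta0 = 0.8, -0.5, 0.3, 0.6, 0.05   # placeholder values
Jl, Jr = currents(R0, I0, R1, I1, theta0)
# the left-right difference is exactly linear in theta0:
assert np.isclose(Jl - Jr, 4 * theta0 * (R0 * R1 + I0 * I1))
# and the two currents coincide when time-reversal is unbroken (theta0 = 0):
assert np.isclose(*currents(R0, I0, R1, I1, 0.0))
```

The difference of the two currents being proportional to $\theta_0$ is the basis of the dichroism signal: switching the circular polarization isolates the symmetry-breaking term.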
For a particular {\bf G}, the current with polarization $\ell$ or $r$ to first order in $\theta_0$ is \begin{equation} J_{\ell , r} ({\bf G, p}) \, = \frac{e^2A^2}{8m^2c^2}\, \left[ \left( R_o^2 + I_o^2 \right) \pm 2 \theta_0 \left( R_o \, R_1 \, + \, I_o I_1 \right) \right] \end{equation} so the relative asymmetry of the current is \begin{equation} \Xi ({\bf G}, {\bf p}) \equiv ( J_{\ell} - J_r ) / \frac{1}{2} ( J_{\ell} + J_r ) \; \approx \; 4 \, \theta_0 \; \left( R_o R_1 + I_o I_1 \right) / \left( R_o^2 + I_o^2 \right) \end{equation} \section{Discussion of ARPES - Asymmetry} Eqs. (19) and (20) are the principal result of the calculation. It is worthwhile noting several aspects of the predictions. For $G_x = G_y = 0$, when only the d-orbitals contribute to the photo-current, $\Xi = 0$ for all ${\bf k}$. For $G_x = G_y$, the asymmetry vanishes along the zone-diagonal $k_x = k_y$ and is maximum at the zone-boundaries $( k_x a = \pi , k_y a = 0 )$; $(k_x a = 0, k_y a = \pi )$, with a smooth variation in between. Asymmetry patterns for other $G$'s may be obtained from Eqs. (14)-(17). In Ref. (6), $\theta_0$ is estimated to be $O (10^{-1} ) (x_c - x)^{1/2}$ for $x \leq x_c$. So at $x_c - x \approx 5 \times 10^{-2}$, the asymmetry is predicted to be $O (10^{-1})$, at $T \approx T_p (x)$. The proposed experiment is to measure the ARPES current in under-doped samples for a fixed relative geometry of the incident photon-beam, crystalline surface, and detector, so as to select a {\bf p} and {\bf G}, and then simply switch the polarization of the incident photons and measure the current again. The experiment should then be repeated for different {\bf p} and {\bf G}. The effect should set in for temperatures $T\lesssim T_p(x)$ and have a momentum dependence predicted by Eq. (20) and Eqs. (14)-(17). The principal difficulty of the experiment is the possible presence of domains of the CC phase.
The domains consist of regions in which $( \theta_0 )$ in the wave-function (5) is replaced by $( -\theta_0 )$. This leads to a mutual switching of the pattern of the current within the unit cells (and a current flow along the domain boundary). The effect calculated in Eq. (20) to linear order in $\theta_0$ then averages to zero for equal numbers of the two kinds of domains in the surface area $S$ from which the current is collected. If the characteristic domain size is $D$, an effect proportional to $(D/S)^{1/2} \; \theta_0$ is still to be expected. Also, Eq. (1) yields asymmetry terms proportional to $\theta_0^2$, which are not affected by the domains. However, these may be too small to be observable. In the above, circularly polarized photons with the plane of polarization along the surface of the crystal have been considered. There is also an effect linear in $\theta_0$ for photons linearly polarized along the normal to the surface, due to the potential gradient at the surface $(\nabla V)_s$. This effect, proportional to $(\nabla V)^2_s$, changes sign for a given $G_x = G_y$ as $p_x$ and $p_y$ are interchanged, in a d-wave like fashion. Observation of this effect requires rotating the sample. It is also affected by domains. \newpage
\section{Introduction} Quantum communications and quantum computation apply quantum states to store and transmit information. The capacity of a state for this purpose depends on its dimension: a higher-dimensional state can carry more information. In addition, the use of higher-dimensional quantum states, e.g., qudits and entangled qudits, enjoys many advantages, such as enhanced security in quantum cryptography \cite{Langford}, more efficient quantum logic gates \cite{Ralph}, and others. Qudits and entangled qudits have therefore attracted much research recently. The proposals for generating qudits and entangled qudits include orbital angular momentum entangled qutrits \cite{Mair}, pixel entangled qudits \cite{Neves}, energy-time entangled and time-bin entangled qudits \cite{Thew}, and bi-photonic qutrits encoded with the polarization degree of freedom \cite{Howell, Mikami, Lanyon}. In this paper we will focus on bi-photonic qutrits, which are represented with the polarizations of two photons in the same spatial-temporal mode \cite{note}---$\left\vert 0\right\rangle _{3}\equiv\left\vert HH\right\rangle $, $\left\vert 1\right\rangle _{3}\equiv\left\vert HV\right\rangle $, and $\left\vert 2\right\rangle _{3}\equiv\left\vert VV\right\rangle $, where $H$ and $V$ denote the horizontal and vertical polarization, respectively. The generation of such qutrits, including the entangled ones, has been demonstrated \cite{Howell, Lanyon}. In a recent work by Lanyon \textit{et al.} \cite{Lanyon}, with an ancilla qubit and a Fock state filter associated with some wave plates, a bi-photonic state in the form of a linear combination of $\{\left\vert 0\right\rangle _{3},\left\vert 1\right\rangle _{3},\left\vert 2\right\rangle _{3}\}$ is generated from the logic state $\left\vert 0\right\rangle _{3}$. To manipulate a bi-photonic qutrit in this form, one should know how to implement a unitary operation on such qutrits.
However, due to the indistinguishability of two photons in the same spatial-temporal mode, it is very difficult to realize even a simple unitary operation on such a bi-photonic qutrit \cite{Bogdanov}. Here we present two schemes realizing the transformation from a bi-photonic qutrit to any other bi-photonic qutrit, i.e., arbitrary unitary operations $U(3)$ on bi-photonic qutrits. The schemes work by transforming the input bi-photonic qutrits to the corresponding single-photon qutrits in spatial modes, and then mapping the single-photon qutrits back to the original polarization modes of two photons. The rest of the paper is organized as follows. In Sec. II, we present a purely linear optical scheme for the transformation and inverse transformation from a bi-photonic qutrit in the same spatial-temporal mode to the corresponding single-photon qutrit. In Sec. III, we improve on the linear optical scheme with weak cross-Kerr nonlinearity, making the realization of the bi-directional mapping much more efficient. Sec. IV concludes the work with a brief discussion. \section{\bigskip Bi-directional mapping with linear optical elements} Any unitary operation on a single-photon qudit in spatial modes can be performed by a linear optical multi-port interferometer (LOMI) \cite{Reck}. It is therefore possible to manipulate bi-photonic qutrits following such a strategy---first transform a bi-photonic qutrit to a single-photon qutrit, then perform the desired operations on this single-photon qutrit, and finally transform the single-photon qutrit back to a bi-photonic qutrit. In what follows, we present the details of the procedure, which is realized only with linear optical elements. \subsection{\bigskip Transforming a bi-photonic qutrit to a single-photon qutrit} \begin{figure}[tbh] \begin{center} \epsfig{file=transformation.eps,width=6cm} \end{center} \caption{Schematic setup for the transformation from a bi-photonic qutrit to the corresponding single-photon qutrit.
At first, the input qutrit is transmitted through a VBS, and then the two output modes are transmitted through a PBS, respectively. Two single photons are used as the ancillas, which will interfere with the output modes of PBS$_{1}$. The part in the dashed line is used to erase the path information of the modes $P_{1},P_{2},P_{3}$, which are all in the state $\left\vert V\right\rangle $. The detection results are used as control signals for the conditional phase shifts summarized in Tab. I through classical feed-forward. By proper post-selection, the bi-photonic qutrit can be transformed to the corresponding single-photon qutrit as in Eq. (\ref{2}). For details, see the text.}% \end{figure} Suppose a bi-photonic qutrit is initially prepared as \begin{align} \left\vert \psi\right\rangle _{in}=\alpha\left\vert 0\right\rangle _{3}% +\beta\left\vert 1\right\rangle _{3}+\gamma\left\vert 2\right\rangle _{3}, \label{1}% \end{align} where $\left\vert \alpha\right\vert ^{2}+\left\vert \beta\right\vert ^{2}+\left\vert \gamma\right\vert ^{2}=1$. The operations shown in Fig. 1 implement the map \begin{align} \left\vert \psi\right\rangle _{in}\rightarrow\alpha\left\vert 0\right\rangle _{S}+\beta\left\vert 1\right\rangle _{S}+\gamma\left\vert 2\right\rangle _{S}=\left\vert \psi\right\rangle _{S}, \label{2}% \end{align} where $\left\vert \psi\right\rangle _{S}$ is a single-photon qutrit encoded in the spatial modes $|i\rangle_{S}$ ($i=0,1,2$) of the single photon. Here we first apply a variable beam splitter (VBS) to the input bi-photonic qutrit, realizing the following transformation% \begin{align} & \left( \frac{\alpha}{\sqrt{2}}a_{H}^{\dagger2}+\beta a_{H}^{\dagger}% a_{V}^{\dagger}+\frac{\gamma}{\sqrt{2}}a_{V}^{\dagger2}\right) \left\vert vac\right\rangle \nonumber\\ & \rightarrow\left[ \frac{\alpha}{\sqrt{2}}\left( ra_{H1}^{\dagger}% +ta_{H2}^{\dagger}\right) ^{2}+\beta\left( ra_{H1}^{\dagger}+ta_{H2}% ^{\dagger}\right) \right. \nonumber\\ & \times\left.
\left( ra_{V1}^{\dagger}+ta_{V2}^{\dagger}\right) +\frac{\gamma}{\sqrt{2}}\left( ra_{V1}^{\dagger}+ta_{V2}^{\dagger}\right) ^{2}\right] \left\vert vac\right\rangle , \end{align} where $1$, $2$ denote the different paths, and $t$ $(r)$ is the transmissivity (reflectivity) of the VBS. Next, in order to project out the proper components, we introduce two single photons $\left\vert H\right\rangle $ and $\left\vert V\right\rangle $ as ancillas, and make them interfere with the output modes of the polarizing beam splitter (PBS$_{1}$) on two 50:50 beam splitters (BS), respectively. Due to the Hong-Ou-Mandel interference effect \cite{Hong}, two indistinguishable photons will bunch into the same output mode of a BS, and then we can use proper post-selection to get the desired components. To see the details, we show the evolution of each input mode of the two photons as follows:% \begin{align} a_{H1}^{\dagger} & \rightarrow a_{H3}^{\dagger},a_{H2}^{\dagger}% \rightarrow\frac{1}{\sqrt{2}}\left( a_{H4}^{\dagger}+a_{HD_{1}}^{\dagger }\right) ,\nonumber\\ a_{V1}^{\dagger} & \rightarrow a_{VP_{1}}^{\dagger},a_{V2}^{\dagger }\rightarrow\frac{1}{\sqrt{2}}\left( a_{V5}^{\dagger}+a_{VD_{2}}^{\dagger }\right) , \end{align} where the subscripts $D_{1},D_{2}$ denote the modes going to photon-number non-resolving detectors. Meanwhile, for the ancilla photons, the evolutions are \begin{align} a_{H}^{\dagger} & \rightarrow\frac{1}{\sqrt{2}}\left( a_{H4}^{\dagger }-a_{HD_{1}}^{\dagger}\right) ,\ \nonumber\\ a_{V}^{\dagger} & \rightarrow\frac{1}{\sqrt{2}}\left( a_{V5}^{\dagger }-a_{VD_{2}}^{\dagger}\right) .
\end{align} The 50:50 BS placed on path 4 (5) splits mode 4 (5) into two output modes 6, P$_{2}$ (7, P$_{3}$), making the transformations, \begin{align} a_{H4}^{\dagger} & \rightarrow\frac{1}{\sqrt{2}}\left( a_{H6}^{\dagger }+a_{HP_{2}}^{\dagger}\right) \rightarrow\frac{1}{\sqrt{2}}\left( a_{H6}^{\dagger}+a_{VP_{2}}^{\dagger}\right) ,\nonumber\\ a_{V5}^{\dagger} & \rightarrow\frac{1}{\sqrt{2}}\left( a_{V7}^{\dagger }+a_{VP_{3}}^{\dagger}\right) \rightarrow\frac{1}{\sqrt{2}}\left( a_{H7}^{\dagger}+a_{VP_{3}}^{\dagger}\right) . \end{align} After that, one obtains the following state:% \begin{align} & \left( -\frac{\alpha}{4\sqrt{2}}t^{2}a_{H6}^{\dagger}a_{VP_{2}}^{\dagger }+\frac{\beta}{2}r^{2}a_{H3}^{\dagger}a_{VP_{1}}^{\dagger}-\frac{\gamma }{4\sqrt{2}}t^{2}a_{V7}^{\dagger}a_{VP_{3}}^{\dagger}\right) \nonumber\\ & \times a_{HD_{1}}^{\dagger}a_{VD_{2}}^{\dagger}\left\vert vac\right\rangle +rest. \label{sq}% \end{align} where $rest.$ denotes the components with two photons appearing in the same spatial mode. If we discard the modes $P_{1},P_{2},P_{3}$ without changing anything else, i.e., erase the path information of $P_{1},P_{2},P_{3}$, the first three terms in Eq. (\ref{sq}) will be just the desired single-photon qutrit, which carries the same coefficients as the input bi-photonic qutrit. Since there is only one photon in the modes $P_{1},P_{2},P_{3}$, we will use a quantum Fourier transform (QFT) ($j,k^{\prime}$ denote the spatial modes) \cite{Nielsen}, \begin{equation} a_{Vj}^{\dagger}\left\vert vac\right\rangle =\frac{1}{\sqrt{3}}\overset {2}{\underset{k^{\prime}=0}{{\sum}}}e^{2\pi ijk^{\prime}/3}a_{Vk^{\prime}}^{\dagger }\left\vert vac\right\rangle , \label{qft}% \end{equation} to do so. The QFT is a unitary operation for a single photon in three spatial modes, so we can use the LOMI shown in the dashed line of Fig. 1 to implement it.
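The unitarity of this three-mode QFT, and the fact that it spreads a photon entering any one path uniformly over all three output paths (which is what erases the which-path information), can be checked directly (a sketch; the matrix indices label the three spatial modes $P_{1},P_{2},P_{3}$):

```python
import numpy as np

# Three-mode QFT matrix: F[j, k] = exp(2*pi*i*j*k/3) / sqrt(3)
F = np.array([[np.exp(2j * np.pi * j * k / 3) for k in range(3)]
              for j in range(3)]) / np.sqrt(3)

# F is unitary:
assert np.allclose(F @ F.conj().T, np.eye(3))

# A photon in a single input path exits each output path with probability 1/3,
# erasing the path information:
probs = np.abs(F @ np.array([1.0, 0.0, 0.0]))**2
assert np.allclose(probs, 1 / 3)
```

This is why the classical feed-forward of Tab. I is needed: each detector outcome corresponds to one column of $F$, and the conditional phase shifts undo the mode-dependent phases $e^{2\pi ijk^{\prime}/3}$.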
Just like the setups in the dash-dotted line, three photon-number non-resolving detectors are used, and the detection results control the conditional phase shifts (PS) through classical feedforward. The relations between the detection results and the corresponding PS operations are summarized in Tab. \ref{tb1}. \begin{table}[ptb] $% \begin{array} [c]{|c|c|c|c|}\hline & D_{3} & D_{4} & D_{5}\\\hline 3 & 0 & 0 & 0\\\hline 6 & 0 & \frac{2\pi}{3} & \frac{4\pi}{3}\\\hline 7 & 0 & \frac{4\pi}{3} & \frac{8\pi}{3}\\\hline \end{array} $\caption{The relations between the detections and the corresponding phase shifters on paths 3, 6, 7.}% \label{tb1}% \end{table}After that, with the coincident measurements of the detectors $D_{1}$, $D_{2}$, and one of the detectors $D_{3},D_{4},D_{5}$, the state% \begin{equation} \left( \frac{\alpha}{4\sqrt{2}}t^{2}a_{H6}^{\dagger}+\frac{\beta}{2}% r^{2}a_{H3}^{\dagger}+\frac{\gamma}{4\sqrt{2}}t^{2}a_{V7}^{\dagger}\right) \left\vert vac\right\rangle , \end{equation} will be projected out by the post-selection. We can rewrite it as \begin{equation} \frac{\alpha}{4\sqrt{2}}t^{2}\left\vert H\right\rangle _{6}+\frac{\beta}% {2}r^{2}\left\vert H\right\rangle _{3}+\frac{\gamma}{4\sqrt{2}}t^{2}\left\vert V\right\rangle _{7}. \end{equation} It is straightforward to check that the state will be $\left\vert \psi\right\rangle _{S}$, given that $t^{2}=2\sqrt{2}r^{2}$, i.e., $t^{2}=\frac{2\sqrt{2}}% {1+2\sqrt{2}}$. The corresponding success probability of the process is $\left( \frac{t^{2}}{4\sqrt{2}}\right) ^{2}=1.71\times10^{-2}.$ \subsection{Transformation back to a bi-photonic qutrit} \begin{figure}[tbh] \begin{center} \epsfig{file=interference.eps,width=6cm} \end{center} \caption{Schematic setup for the inverse transformation from a single-photon qutrit back to a bi-photonic qutrit. Three variable beam splitters (VBSs) are applied, and two extra single photons work as ancillas.
The single-photon qutrit can be transformed back to a bi-photonic qutrit by post-selection. For details, see the text.}% \end{figure} After the desired operations have been performed on the single-photon qutrit, we should transform the single-photon qutrit $\left\vert \psi^{^{\prime}}\right\rangle _{S}=\alpha^{^{\prime}}\left\vert 0^{^{\prime}}\right\rangle _{S}% +\beta^{^{\prime}}\left\vert 1^{^{\prime}}\right\rangle _{S}+\gamma^{^{\prime }}\left\vert 2^{\prime}\right\rangle _{S}$ back to a bi-photonic qutrit. The inverse transformation is shown in Fig. 2. Three VBSs (VBS$_{1}$, VBS$_{2}$, VBS$_{3}$) with the transmissivities (reflectivities) $t_{1}$, $t_{2}$, $t_{3}$ ($r_{1}$, $r_{2}$, $r_{3}$), two PSs, and a $\sigma_{x}$ operation are applied to perform the transformation% \begin{align} & \left( \alpha^{^{\prime}}a_{H0^{^{\prime}}}^{\dagger}+\beta^{^{\prime}% }a_{H1^{^{\prime}}}^{\dagger}+\gamma^{^{\prime}}a_{H2^{^{\prime}}}^{\dagger }\right) \left\vert vac\right\rangle \nonumber\\ & \rightarrow\left[ \left( \alpha^{^{\prime}}r_{2}-\beta^{^{\prime}}% t_{1}t_{2}\right) a_{H1}^{\dagger}+\beta^{^{\prime}}r_{1}a_{H2}^{\dagger }+\gamma^{^{\prime}}t_{3}a_{V3}^{\dagger}\right] \left\vert vac\right\rangle . \end{align} In order to select out the desired components, we introduce a single photon $\left\vert H\right\rangle $\ as an ancilla, which will interfere with mode 1 through a 50:50 BS; meanwhile, modes 2 and 3 are combined by a PBS into mode 5, which will then interfere with another ancilla single photon $\left\vert V\right\rangle $ through a 50:50 BS. The total state will then be transformed to% \begin{align} & \left\{ \frac{1}{2\sqrt{2}}\left( \alpha^{^{\prime}}r_{2}-\beta ^{^{\prime}}t_{1}t_{2}\right) \left( a_{H4}^{\dagger2}-a_{HP_{1}}^{\dagger 2}\right) \left( a_{V7}^{\dagger}-a_{V6}^{\dagger}\right) \right.
\nonumber\\ & +\frac{1}{2\sqrt{2}}\beta^{^{\prime}}r_{1}\left( a_{H4}^{\dagger }-a_{HP_{1}}^{\dagger}\right) \left( a_{H7}^{\dagger}+a_{H6}^{\dagger }\right) \left( a_{V7}^{\dagger}-a_{V6}^{\dagger}\right) \nonumber\\ & \left. +\frac{1}{2\sqrt{2}}\gamma^{^{\prime}}t_{3}\left( a_{H4}^{\dagger }-a_{HP_{1}}^{\dagger}\right) \left( a_{V7}^{\dagger2}-a_{V6}^{\dagger 2}\right) \right\} \left\vert vac\right\rangle . \end{align} The $\left\vert V\right\rangle $ mode on path 6 will be reflected to mode $P_{2}$. Now, the following state can be achieved: \begin{align} & -\frac{1}{2\sqrt{2}}\left[ \left( \alpha^{^{\prime}}r_{2}-\beta ^{^{\prime}}t_{1}t_{2}\right) a_{H4}^{\dagger2}a_{VP_{2}}^{\dagger}\right. \nonumber\\ & +\beta^{^{\prime}}r_{1}\left( a_{H7}^{\dagger}a_{V7}^{\dagger}a_{HP_{1}% }^{\dagger}+a_{H4}^{\dagger}a_{H7}^{\dagger}a_{VP_{2}}^{\dagger}\right) \nonumber\\ & \left. +\gamma^{^{\prime}}t_{3}a_{V7}^{\dagger2}a_{HP_{1}}^{\dagger }\right] \left\vert vac\right\rangle +rest. \end{align} The remaining work is the erasure of the path information of the modes $P_{1}, P_{2}$ by a detection similar to that in Sec. II.A. Because there are only two spatial modes, the realization of the QFT is simplified to just one 50:50 BS, as shown in the dashed line. Now, the state% \begin{align} & -\frac{1}{4}\left[ \left( \alpha^{^{\prime}}r_{2}-\beta^{^{\prime}}% t_{1}t_{2}\right) a_{H4}^{\dagger2}\right. \nonumber\\ & +\beta^{^{\prime}}r_{1}\left( a_{H7}^{\dagger}a_{V7}^{\dagger}% +a_{H4}^{\dagger}a_{H7}^{\dagger}\right) \nonumber\\ & \left. +\gamma^{^{\prime}}t_{3}a_{V7}^{\dagger2}\right] a_{VD_{1}% }^{\dagger}\left\vert vac\right\rangle +rest., \end{align} can be achieved, where we keep only the terms in which the photonic mode on $P_{1},P_{2}$ is detected by the detector $D_{1}$.
In the other case, when the photon is detected by the detector $D_{2}$, there will be an additional phase shift $\pi$ on the components including $a_{VP_{2}}^{\dagger}$, and it seems difficult to remove it by a simple operation. After the erasure of the $P_{1}$ and $P_{2}$ modes, the modes 4 and 7 will interfere with each other through a 50:50 BS. If there are two photons in the final output (which can be verified by common bi-photonic qutrit tomography \cite{Lanyon}) and a click on one of the two detectors $D_{1}$, $D_{2}$, we will project out the state \begin{align} & -\frac{1}{8}\left[ \left( \alpha^{^{\prime}}r_{2}-\beta^{^{\prime}}% t_{1}t_{2}\right) a_{Hout}^{\dagger2}\right. \nonumber\\ & \left. +\beta^{^{\prime}}r_{1}\left( a_{Hout}^{\dagger}a_{Vout}^{\dagger }+a_{Hout}^{\dagger2}\right) +\gamma^{^{\prime}}t_{3}a_{Vout}^{\dagger 2}\right] \left\vert vac\right\rangle \nonumber\\ & =-\frac{1}{8}\left[ \left( \alpha^{^{\prime}}r_{2}-\beta^{^{\prime}}% t_{1}t_{2}+\beta^{^{\prime}}r_{1}\right) a_{Hout}^{\dagger2}\right. \nonumber\\ & \left. +\beta^{^{\prime}}r_{1}a_{Hout}^{\dagger}a_{Vout}^{\dagger}% +\gamma^{^{\prime}}t_{3}a_{Vout}^{\dagger2}\right] \left\vert vac\right\rangle \end{align} by post-selection. Choosing $t_{1}t_{2}=r_{1}$ and $\sqrt{2}r_{2}=$\ $r_{1}$, i.e., $t_{1}^{2}=\frac{\sqrt{17}-3}{2}$ ($r_{1}^{2}=\frac{5-\sqrt{17}}{2}$), together with $t_{3}^{2}=\frac{5-\sqrt{17}}{4},$ we can achieve the final state \begin{equation} \frac{r_{1}}{8}\left( \frac{\alpha^{^{\prime}}}{\sqrt{2}}a_{Hout}^{\dagger 2}+\beta^{^{\prime}}a_{Hout}^{\dagger}a_{Vout}^{\dagger}+\frac{\gamma ^{^{\prime}}}{\sqrt{2}}a_{Vout}^{\dagger2}\right) \left\vert vac\right\rangle , \end{equation} which is the target bi-photonic qutrit $\alpha^{^{\prime}}\left\vert 0\right\rangle _{3}+\beta^{^{\prime}}\left\vert 1\right\rangle _{3}% +\gamma^{^{\prime}}\left\vert 2\right\rangle _{3}$. The corresponding success probability is $\left( \frac{r_{1}}{8}\right) ^{2}=6.85\times10^{-3}$.
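The beam-splitter parameters and success probabilities quoted in this subsection and in Sec. II.A can be checked numerically (a sketch, not part of the derivation; it only re-derives the algebra stated in the text):

```python
import numpy as np

# Forward map (Sec. II.A): t^2 = 2*sqrt(2)*r^2 together with r^2 = 1 - t^2
t_sq = 2 * np.sqrt(2) / (1 + 2 * np.sqrt(2))
assert np.isclose(t_sq, 2 * np.sqrt(2) * (1 - t_sq))
p_forward = (t_sq / (4 * np.sqrt(2)))**2            # (t^2 / (4*sqrt(2)))^2
assert np.isclose(p_forward, 1.71e-2, atol=5e-4)

# Inverse map (Sec. II.B): t1*t2 = r1 and sqrt(2)*r2 = r1, with
# t2^2 = 1 - r2^2 and r1^2 = 1 - t1^2, give t1^4 + 3*t1^2 - 2 = 0,
# i.e. t1^2 = (sqrt(17) - 3)/2
t1_sq = (np.sqrt(17) - 3) / 2
assert np.isclose(t1_sq**2 + 3 * t1_sq - 2, 0.0)
r1_sq = 1 - t1_sq
assert np.isclose(r1_sq, (5 - np.sqrt(17)) / 2)
p_inverse = r1_sq / 64                               # (r1/8)^2
assert np.isclose(p_inverse, 6.85e-3, atol=5e-5)
```

The two probabilities multiply to the overall round-trip success probability used in the next paragraph.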
Associated with the above transformation, we can manipulate bi-photonic qutrits, e.g., perform an arbitrary unitary operation $U(3)$ on them, with a success probability $1.71\times6.85\times10^{-5}=1.17\times10^{-4}$. The scheme succeeds with a very small probability, but in principle it can realize any unitary operation on a bi-photon qutrit. In summary, with four ancilla single photons, we can realize arbitrary manipulations with linear optical elements and coincidence measurements. Since only two cases---no photon or any number of photons---need to be discriminated, common photon-number non-resolving detectors, e.g., silicon avalanche photodiodes (APDs), will be sufficient for the scheme. \section{Bi-directional mapping with weak cross-Kerr nonlinearity} \label{sec3} The success probability of the above scheme with only linear optical elements could be too small for practical application. This success probability, however, can be greatly increased if we apply some weak nonlinearity in the circuit. The application of weak cross-Kerr nonlinearity has been proposed in various fields of quantum information science. It was first applied to realize a parity projector \cite{Barrett} and a deterministic CNOT gate \cite{Nemoto}, and then used in various quantum computation and communication schemes (see, e.g., \cite{Spiller, Lin, He}). The effective Hamiltonian for cross-Kerr nonlinearity is $\mathcal{H}=-\hbar\chi\hat{n}_{i}\hat{n}_{j}$ ($\chi$ is the nonlinear intensity and $\hat{n}_{i/j}$ the number operator of the interacting modes). The cross phase modulation (XPM) process caused by such interaction between a Fock state $|n\rangle$ and a coherent state $|\alpha\rangle$ gives rise to the transformation $|n\rangle|\alpha\rangle\rightarrow|n\rangle|\alpha e^{in\theta}\rangle$, where the phase $\theta=\chi t$ induced during the interaction time $t$ can be small for weak nonlinearity. 
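As a rough numerical illustration (an assumption-laden sketch, not from the original text), the following computes the overlap of a qubus coherent state with its XPM-shifted copy, $|\langle\alpha|\alpha e^{i\theta}\rangle|=e^{-|\alpha|^{2}(1-\cos\theta)}$, showing that even a weak phase shift $\theta\sim10^{-2}$ yields nearly orthogonal qubus components once $\alpha\theta\gg1$:

```python
# Overlap of phase-shifted coherent states (a sketch): for alpha*theta >> 1,
# |<alpha | alpha e^{i theta}>| ~ exp(-(alpha*theta)^2 / 2) is negligible,
# so a quadrature measurement can distinguish the components with high fidelity.
import cmath, math

def coherent_overlap(a, b):
    """<a|b> for coherent states |a>, |b>."""
    return cmath.exp(-abs(a)**2 / 2 - abs(b)**2 / 2 + a.conjugate() * b)

theta = 1e-2      # weak XPM phase shift, theta ~ 10^-2
alpha = 1000.0    # large qubus amplitude, so alpha*theta = 10 >> 1

ov = abs(coherent_overlap(alpha, alpha * cmath.exp(1j * theta)))
exact = math.exp(-abs(alpha)**2 * (1 - math.cos(theta)))
assert abs(ov - exact) < 1e-12
assert ov < 1e-20   # the two qubus components are essentially orthogonal
```

This is the same trade-off invoked later in the Discussion: a small $\theta$ can be compensated by a large qubus amplitude $|\alpha|$.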
Another technique useful for our scheme is homodyne-heterodyne measurement of the quadratures of a coherent state. A state like $\sum_{k}|k\rangle|\alpha e^{ik\theta}\rangle$ can be projected to a definite Fock state or a superposition of some Fock states by such measurement, which can be performed with high fidelity. \subsection{Transformation with XPM process} \begin{figure}[tbh] \begin{center} \epsfig{file=transformation-n.eps,width=6cm} \end{center} \caption{Schematic setup for the transformation with XPM process. Two QND modules working with XPM process are used here. The transformation is realized under the condition that the two qubus beams pick up no phase shift, with the corresponding success probability 1/6. For the details, see text.}% \end{figure} With weak cross-Kerr nonlinearity, we implement the transformation from a bi-photonic qutrit to a single photon qutrit as shown in Fig. 3. An initial bi-photonic qutrit in the state $\left\vert \psi\right\rangle _{in}$ of Eq. (\ref{1}) is first sent to a 50:50 BS, making the following transformation \begin{align} & \left( \frac{\alpha}{\sqrt{2}}a_{H}^{\dagger2}+\beta a_{H}^{\dagger}% a_{V}^{\dagger}+\frac{\gamma}{\sqrt{2}}a_{V}^{\dagger2}\right) \left\vert vac\right\rangle \nonumber\\ & \rightarrow\left[ \frac{\alpha}{2\sqrt{2}}\left( a_{H1}^{\dagger}% +a_{H2}^{\dagger}\right) ^{2}+\frac{\beta}{2}\left( a_{H1}^{\dagger}% +a_{H2}^{\dagger}\right) \right. \nonumber\\ & \left. \times\left( a_{V1}^{\dagger}+a_{V2}^{\dagger}\right) +\frac{\gamma}{2\sqrt{2}}\left( a_{V1}^{\dagger}+a_{V2}^{\dagger}\right) ^{2}\right] \left\vert vac\right\rangle . \label{die}% \end{align} Next a VBS is placed on path 2 such that \begin{align} a_{H2}^{\dagger} & \rightarrow ra_{H3}^{\dagger}+ta_{H4}^{\dagger },\nonumber\\ a_{V2}^{\dagger} & \rightarrow ra_{V3}^{\dagger}+ta_{V4}^{\dagger}. 
\end{align} Then, after three PBSs change the spatial modes as $1\rightarrow1,1^{\prime}$, $3\rightarrow5,6$, $4\rightarrow7$, two qubus beams (i.e., coherent states) $\left\vert \alpha_{1}\right\rangle \left\vert \alpha_{2}\right\rangle $ will be coupled to the corresponding photonic modes through the XPM processes in two quantum nondemolition detection (QND) modules, which are shown in the dashed lines of Fig. 3. The result will be the following transformation of the total system: \begin{align} & \left( \frac{\alpha}{\sqrt{2}}ra_{H1}^{\dagger}a_{H5}^{\dagger}% +\frac{\beta}{2}ra_{H1}^{\dagger}a_{V6}^{\dagger}+\frac{\gamma}{\sqrt{2}% }ta_{V1^{\prime}}^{\dagger}a_{V7}^{\dagger}\right) \left\vert vac\right\rangle \nonumber\\ & \times\left\vert \alpha_{1}\right\rangle \left\vert \alpha_{2}\right\rangle +rest., \end{align} where we give only the terms in which the two qubus beams pick up no phase shift. These terms can be separated from the others by the quadrature measurement $\left\vert X\right\rangle \left\langle X\right\vert $, which is implementable with homodyne-heterodyne measurement \cite{Nemoto,Spiller}, to obtain the following state \begin{align} & \left( \frac{\alpha}{\sqrt{2}}ra_{H1}^{\dagger}a_{H5}^{\dagger}% +\frac{\beta}{2}ra_{H1}^{\dagger}a_{V6}^{\dagger}+\frac{\gamma}{\sqrt{2}% }ta_{V1^{\prime}}^{\dagger}a_{V7}^{\dagger}\right) \left\vert vac\right\rangle \nonumber\\ & =\frac{\alpha}{\sqrt{2}}r\left\vert H\right\rangle _{1}\left\vert H\right\rangle _{5}+\frac{\beta}{2}r\left\vert H\right\rangle _{1}\left\vert V\right\rangle _{6}+\frac{\gamma}{\sqrt{2}}t\left\vert V\right\rangle _{1}\left\vert V\right\rangle _{7}. 
\label{kuie}% \end{align} This state can be expressed as \begin{align} & \frac{\alpha}{2}r\left( \left\vert +\right\rangle _{1}+\left\vert -\right\rangle _{1}\right) \left\vert H\right\rangle _{5}+\frac{\beta}% {2\sqrt{2}}r\left( \left\vert +\right\rangle _{1}+\left\vert -\right\rangle _{1}\right) \left\vert V\right\rangle _{6}\nonumber\\ & +\frac{\gamma}{2}t\left( \left\vert +\right\rangle _{1}-\left\vert -\right\rangle _{1}\right) \left\vert V\right\rangle _{7}, \end{align} where $\left\vert \pm\right\rangle =\frac{1}{\sqrt{2}}\left( \left\vert H\right\rangle \pm\left\vert V\right\rangle \right) $. Now, we use a PBS$_{\pm }$ which transmits $\left\vert +\right\rangle $ and reflects $\left\vert -\right\rangle $, followed by two photon-number non-resolving detectors. If the detection is $\left\vert +\right\rangle $, the state% \begin{equation} \frac{\alpha}{\sqrt{2}}r\left\vert H\right\rangle _{5}+\frac{\beta}% {2}r\left\vert V\right\rangle _{6}+\frac{\gamma}{\sqrt{2}}t\left\vert V\right\rangle _{7} \label{22}% \end{equation} will be projected out; on the other hand, if the detection is $\left\vert -\right\rangle $, what is realized is \begin{equation} \frac{\alpha}{\sqrt{2}}r\left\vert H\right\rangle _{5}+\frac{\beta}% {2}r\left\vert V\right\rangle _{6}-\frac{\gamma}{\sqrt{2}}t\left\vert V\right\rangle _{7}, \end{equation} which can be transformed to the state in Eq. (\ref{22}) by a conditional $\pi$ phase shift on path 7. By selecting $\frac{r}{2}=\frac{t}{\sqrt{2}}$, i.e., $t=\frac{1}{\sqrt{3}}$, and using a 50:50 BS for mode 5 and two $\sigma_{x}$ operations for modes 6, 7, we can achieve the following state, \begin{equation} \frac{1}{\sqrt{6}}\left( \alpha\left\vert H\right\rangle _{5}+\beta\left\vert H\right\rangle _{6}+\gamma\left\vert H\right\rangle _{7}\right) , \end{equation} which is the single photon qutrit $\left\vert \psi\right\rangle _{S}$ in Eq. (\ref{2}). 
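The balancing condition $t=1/\sqrt{3}$ can be sanity-checked numerically. The sketch below (an assumption-laden check, not from the original text) uses real lossless BS amplitudes with $r^{2}+t^{2}=1$; the extra factor $1/\sqrt{2}$ on the first amplitude models keeping one output of the 50:50 BS on path 5:

```python
# Check (a sketch) that t = 1/sqrt(3) makes all three amplitudes equal 1/sqrt(6),
# starting from the coefficients (alpha*r/sqrt(2), beta*r/2, gamma*t/sqrt(2)).
import math

t = 1 / math.sqrt(3)
r = math.sqrt(1 - t**2)                               # r^2 = 2/3

amp_alpha = (r / math.sqrt(2)) * (1 / math.sqrt(2))   # extra 1/sqrt(2) from the BS on path 5
amp_beta  = r / 2
amp_gamma = t / math.sqrt(2)

for amp in (amp_alpha, amp_beta, amp_gamma):
    assert abs(amp - 1 / math.sqrt(6)) < 1e-12
print(amp_beta**2)   # the common weight 1/6
```

With all three coefficients equal to $1/\sqrt{6}$, the heralded probability of the balanced output is $1/6$ for a normalized input.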
The success probability is $\frac{1}{6}$, which is much higher than that of the linear optical scheme. Moreover, no ancilla single photon is necessary here. The scheme is based on quadrature projection after the XPM process, so it does not require any post-selection by coincidence measurement. But it needs an XPM phase shift of $-\theta$, which is only possible with the equivalent phase shift $2\pi-\theta$ and could be impractical \cite{Kok}. The XPM phase shift $\theta\sim10^{-2}$ is possible with, for example, electromagnetically induced transparencies (EIT) \cite{EIT}, whispering-gallery microresonators \cite{WGM}, optical fibers \cite{OF}, or cavity QED systems \cite{QED}, but the corresponding $2\pi-\theta$ will be too large to realize with the available techniques. To avoid the XPM phase shift of $-\theta$, we propose a different design of the transformation shown in Fig. 4. Here we use the double XPM method in \cite{He} to replace the two XPM processes without changing anything else. \begin{figure}[tbh] \begin{center} \epsfig{file=transformation-n-2.eps,width=6cm} \end{center} \caption{Schematic setup for the transformation from bi-photon qutrits to the corresponding single photon qutrits with the double XPM method. The only difference from Fig. 3 is that the two separate XPM processes are replaced by a double XPM process of two identical qubus beams. In this design, no XPM phase shift of $-\theta$ is necessary, which makes the scheme more feasible.}% \end{figure} We describe it only briefly, as the process is similar. In the double XPM process, two qubus beams $\left\vert \alpha\right\rangle \left\vert \alpha\right\rangle $ are coupled to the corresponding photonic modes as shown in Fig. 4. The XPM pattern in Fig. 4 is that the first beam is coupled to the $\left\vert H\right\rangle $ mode on path 1 and the $\left\vert V\right\rangle $ mode on path 4, while the second beam is coupled to the $\left\vert V\right\rangle $ mode on path $1^{\prime}$ and the modes on path 3. 
Suppose the XPM phase shifts induced by the couplings are all $\theta$. After that, the total system will be transformed to \begin{align} & \left( \frac{\alpha}{\sqrt{2}}ra_{H1}^{\dagger}a_{H3}^{\dagger}% +\frac{\beta}{2}a_{H1}^{\dagger}a_{V1^{\prime}}^{\dagger}+\frac{\beta}% {2}ra_{H1}^{\dagger}a_{V3}^{\dagger}\right. \nonumber\\ & \left. +\frac{\beta}{2}rta_{H3}^{\dagger}a_{V4}^{\dagger}+\frac{\gamma }{2\sqrt{2}}ta_{V1}^{\dagger}a_{V4}^{\dagger}+\frac{\gamma}{2\sqrt{2}}% rta_{V3}^{\dagger}a_{V4}^{\dagger}\right) \nonumber\\ & \times\left\vert vac\right\rangle \left\vert \alpha e^{i\theta }\right\rangle \left\vert \alpha e^{i\theta}\right\rangle +\frac{\alpha }{2\sqrt{2}}t^{2}a_{H4}^{\dagger2}\left\vert vac\right\rangle \left\vert \alpha\right\rangle \left\vert \alpha\right\rangle +rest., \end{align} where $rest.$ denotes the terms in which the two qubus beams pick up different phase shifts. A phase shifter of $-\theta$ is applied to each of the two qubus beams, and then one more 50:50 BS implements the transformation $\left\vert \alpha_{1}\right\rangle \left\vert \alpha_{2}\right\rangle \rightarrow\left\vert \frac{\alpha_{1}-\alpha_{2}}{\sqrt{2}}\right\rangle \left\vert \frac{\alpha_{1}+\alpha_{2}}{\sqrt{2}}\right\rangle $ of the coherent-state components. The above state will therefore be transformed to \begin{align} & \left( \frac{\alpha}{\sqrt{2}}ra_{H1}^{\dagger}a_{H3}^{\dagger}% +\frac{\beta}{2}a_{H1}^{\dagger}a_{V1^{\prime}}^{\dagger}+\frac{\beta}% {2}ra_{H1}^{\dagger}a_{V3}^{\dagger}\right. \nonumber\\ & \left. +\frac{\beta}{2}rta_{H3}^{\dagger}a_{V4}^{\dagger}+\frac{\gamma }{2\sqrt{2}}ta_{V1}^{\dagger}a_{V4}^{\dagger}+\frac{\gamma}{2\sqrt{2}}% rta_{V3}^{\dagger}a_{V4}^{\dagger}\right) \nonumber\\ & \times\left\vert vac\right\rangle \left\vert 0\right\rangle \left\vert \sqrt{2}\alpha\right\rangle +\frac{\alpha}{2\sqrt{2}}t^{2}a_{H4}^{\dagger 2}\left\vert vac\right\rangle \left\vert 0\right\rangle \left\vert \sqrt {2}\alpha\right\rangle +rest. 
\end{align} Then, we could use the projections $\left\vert n\right\rangle \left\langle n\right\vert $ on the first qubus beam to get the proper output. If $n=0$, then by the post-selection that one photon appears on the outputs (5, 6, 7) together with a click on one of the two detectors after the PBS$_{\pm}$, the state in Eq. (\ref{kuie}) will be projected out. Similar to the process in Fig. 3, we can achieve the final single photon qutrit $\left\vert \psi\right\rangle _{S}$ with the success probability $\frac{1}{6}.$ Though this design requires post-selection, it dispenses with the XPM phase shift of $-\theta$, so it could be more experimentally feasible. \subsection{Inverse transformation with XPM process} \begin{figure}[tbh] \begin{center} \epsfig{file=inverse-n.eps,width=8.5cm} \end{center} \caption{Schematic setup for the inverse transformation with XPM process. The part called the Entangler, shown in the dashed line, entangles the single photon qutrit and the ancilla single photon. Out of the Entangler, the polarization of the ancilla photon will be the same as that of the single photon qutrit. This inverse transformation can be implemented with a success probability 1/2.}% \end{figure} Now we should transform the output single photon qutrit back to a bi-photonic qutrit. We apply the inverse transformation procedure shown in Fig. 5, and will show that it can be realized with a success probability as high as $\frac{1}{2}$. First, we apply a setup called the Entangler, shown in the dashed line, to the transformed single photon qutrit $\left\vert \psi^{^{\prime}}\right\rangle _{S}=\alpha^{^{\prime}}\left\vert H\right\rangle _{0}+\beta^{^{\prime}% }\left\vert H\right\rangle _{1}+\gamma^{^{\prime}}\left\vert H\right\rangle _{2}$ and an ancilla single photon in the state $\left\vert \pm\right\rangle _{a}$, after a $\sigma_{x}$ operation is performed on each of the spatial modes $1$, $2$. 
The Entangler implements the transformation \begin{equation} \left\vert \psi^{^{\prime}}\right\rangle _{S}\left\vert +\right\rangle _{a}\rightarrow\alpha^{^{\prime}}\left\vert HH\right\rangle _{0,a}% +\beta^{^{\prime}}\left\vert VV\right\rangle _{1,a}+\gamma^{^{\prime}% }\left\vert VV\right\rangle _{2,a}, \end{equation} where the polarization of the ancilla single photon will be the same as that of the single photon qutrit. In the Entangler, two qubus beams $\left\vert \alpha\right\rangle \left\vert \alpha\right\rangle $ are introduced, and then coupled to the corresponding photonic modes through the XPM processes. The XPM pattern in Fig. 5 is that the first beam is coupled to the $\left\vert V\right\rangle $ modes on paths 1, 2 and the $\left\vert H\right\rangle $ mode of the ancilla photon, while the second beam is coupled to the $\left\vert H\right\rangle $ mode on path 0 and the $\left\vert V\right\rangle $ mode of the ancilla photon. Suppose the XPM phase shifts induced by the couplings are all $\theta $. As a result, the total system is transformed to \begin{align} & \frac{1}{\sqrt{2}}\left( \alpha^{^{\prime}}\left\vert HH\right\rangle _{0,a}+\beta^{^{\prime}}\left\vert VV\right\rangle _{1,a}+\gamma^{^{\prime}% }\left\vert VV\right\rangle _{2,a}\right) \left\vert \alpha e^{i\theta }\right\rangle \left\vert \alpha e^{i\theta}\right\rangle \nonumber\\ & +\frac{1}{\sqrt{2}}\alpha^{^{\prime}}\left\vert HV\right\rangle _{0,a}\left\vert \alpha\right\rangle \left\vert \alpha e^{i2\theta }\right\rangle \nonumber\\ & +\frac{1}{\sqrt{2}}\left( \beta^{^{\prime}}\left\vert VH\right\rangle _{1,a}+\gamma^{^{\prime}}\left\vert VH\right\rangle _{2,a}\right) \left\vert \alpha e^{i2\theta}\right\rangle \left\vert \alpha\right\rangle . 
\end{align} After that, a phase shifter of $-\theta$ is applied to each of the two qubus beams, and then one more 50:50 BS implements the transformation $\left\vert \alpha_{1}\right\rangle \left\vert \alpha_{2}\right\rangle \rightarrow \left\vert \frac{\alpha_{1}-\alpha_{2}}{\sqrt{2}}\right\rangle \left\vert \frac{\alpha_{1}+\alpha_{2}}{\sqrt{2}}\right\rangle $ of the coherent-state components. The state of the total system will therefore be transformed to \begin{align} & \frac{1}{\sqrt{2}}\left( \alpha^{^{\prime}}\left\vert HH\right\rangle _{0,a}+\beta^{^{\prime}}\left\vert VV\right\rangle _{1,a}+\gamma^{^{\prime}% }\left\vert VV\right\rangle _{2,a}\right) \left\vert 0\right\rangle \left\vert \sqrt{2}\alpha\right\rangle \nonumber\\ & +\frac{1}{\sqrt{2}}\alpha^{^{\prime}}\left\vert HV\right\rangle _{0,a}\left\vert -i\sqrt{2}\alpha\sin\theta\right\rangle \left\vert \sqrt {2}\alpha\cos\theta\right\rangle \nonumber\\ & +\frac{1}{\sqrt{2}}\left( \beta^{^{\prime}}\left\vert VH\right\rangle _{1,a}+\gamma^{^{\prime}}\left\vert VH\right\rangle _{2,a}\right) \left\vert i\sqrt{2}\alpha\sin\theta\right\rangle \left\vert \sqrt{2}\alpha\cos \theta\right\rangle . \label{ki}% \end{align} The first coherent-state component in Eq. (\ref{ki}) is either vacuum or a cat state (the superposition of $\left\vert \pm i\sqrt{2}\alpha\sin\theta \right\rangle $ from the last two pieces). The target output can therefore be obtained by the projection $\left\vert n\right\rangle \left\langle n\right\vert $ on the first qubus beam. If $n=0$, we will obtain \begin{equation} \alpha^{^{\prime}}\left\vert HH\right\rangle _{0,a}+\beta^{^{\prime}% }\left\vert VV\right\rangle _{1,a}+\gamma^{^{\prime}}\left\vert VV\right\rangle _{2,a}, \label{io}% \end{equation} with the polarization of the ancilla photon the same as that of the single photon qutrit. 
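The coherent-state bookkeeping in Eq. (\ref{ki}) can be checked directly: apply the $-\theta$ phase shifter to each beam amplitude and then the 50:50 BS map $(\alpha_{1},\alpha_{2})\rightarrow((\alpha_{1}-\alpha_{2})/\sqrt{2},(\alpha_{1}+\alpha_{2})/\sqrt{2})$. The numbers below are arbitrary test values, not parameters from the text:

```python
# Numerical sketch of the qubus amplitudes in Eq. (ki).
import cmath, math

def bs(a1, a2):
    """50:50 BS acting on coherent amplitudes."""
    return (a1 - a2) / math.sqrt(2), (a1 + a2) / math.sqrt(2)

alpha, theta = 2.0, 0.01                 # arbitrary test values
ps = cmath.exp(-1j * theta)              # the -theta phase shifter

# Component |alpha e^{i theta}>|alpha e^{i theta}>  ->  |0>|sqrt(2) alpha>
a1, a2 = bs(alpha * cmath.exp(1j * theta) * ps,
            alpha * cmath.exp(1j * theta) * ps)
assert abs(a1) < 1e-12 and abs(a2 - math.sqrt(2) * alpha) < 1e-12

# Component |alpha>|alpha e^{i 2 theta}>
#   ->  |-i sqrt(2) alpha sin(theta)>|sqrt(2) alpha cos(theta)>
b1, b2 = bs(alpha * ps,
            alpha * cmath.exp(2j * theta) * ps)
assert abs(b1 - (-1j * math.sqrt(2) * alpha * math.sin(theta))) < 1e-12
assert abs(b2 - math.sqrt(2) * alpha * math.cos(theta)) < 1e-12
```

The third piece of Eq. (\ref{ki}) follows from the second by swapping the beams, which flips the sign of the small amplitude to $+i\sqrt{2}\alpha\sin\theta$.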
If $n\neq0$, on the other hand, the output will be \begin{equation} e^{-in\frac{\pi}{2}}\alpha^{^{\prime}}\left\vert HV\right\rangle _{0,a}+e^{in\frac{\pi}{2}}\left( \beta^{^{\prime}}\left\vert VH\right\rangle _{1,a}+\gamma^{^{\prime}}\left\vert VH\right\rangle _{2,a}\right) , \end{equation} which can be transformed to the form in Eq. (\ref{io}) by a phase shift $\pi$, conditioned on the classically fed-forward measurement result $n$, and a $\sigma_{x}$ operation on the ancilla photon. Next, the second Entangler will be applied to the above output and another ancilla single photon $\left\vert \pm\right\rangle _{b}$, after a $\sigma_{x}$ operation is performed on spatial mode $1$. In this Entangler, the transmitted path for mode 1 is active, whereas only the reflected port was active in the first Entangler. Similar to the first Entangler, the second implements the transformation \begin{align} & \left( \alpha^{^{\prime}}\left\vert HH\right\rangle _{0,a}+\beta ^{^{\prime}}\left\vert HV\right\rangle _{1,a}+\gamma^{^{\prime}}\left\vert VV\right\rangle _{2,a}\right) \left\vert +\right\rangle _{b}\nonumber\\ & \rightarrow\alpha^{^{\prime}}\left\vert HHH\right\rangle _{0,a,b}% +\beta^{^{\prime}}\left\vert HVH\right\rangle _{1,a,b}+\gamma^{^{\prime}% }\left\vert VVV\right\rangle _{2,a,b}. \end{align} We also need to erase the path information of the first photon. We first combine the modes 1 and 2 by a PBS, and then make them interfere with mode 0 through a 50:50 BS to achieve the following state \begin{align} & \frac{\alpha^{^{\prime}}}{\sqrt{2}}\left( \left\vert H\right\rangle _{3}+\left\vert H\right\rangle _{4}\right) \left\vert HH\right\rangle _{a,b}+\frac{\beta^{^{\prime}}}{\sqrt{2}}\left( \left\vert H\right\rangle _{3}-\left\vert H\right\rangle _{4}\right) \left\vert VH\right\rangle _{a,b}\nonumber\\ & +\frac{\gamma^{^{\prime}}}{\sqrt{2}}\left( \left\vert V\right\rangle _{3}-\left\vert V\right\rangle _{4}\right) \left\vert VV\right\rangle _{a,b}. 
\end{align} By two PBS$_{\pm}$, the state \begin{align} & \frac{1}{2}\left( \alpha^{^{\prime}}\left\vert HH\right\rangle _{a,b}+\beta^{^{\prime}}\left\vert VH\right\rangle _{a,b}+\gamma^{^{\prime}% }\left\vert VV\right\rangle _{a,b}\right) \left\vert +\right\rangle _{5}\nonumber\\ & +\frac{1}{2}\left( \alpha^{^{\prime}}\left\vert HH\right\rangle _{a,b}+\beta^{^{\prime}}\left\vert VH\right\rangle _{a,b}-\gamma^{^{\prime}% }\left\vert VV\right\rangle _{a,b}\right) \left\vert -\right\rangle _{6}\nonumber\\ & +\frac{1}{2}\left( \alpha^{^{\prime}}\left\vert HH\right\rangle _{a,b}-\beta^{^{\prime}}\left\vert VH\right\rangle _{a,b}-\gamma^{^{\prime}% }\left\vert VV\right\rangle _{a,b}\right) \left\vert +\right\rangle _{7}\nonumber\\ & +\frac{1}{2}\left( \alpha^{^{\prime}}\left\vert HH\right\rangle _{a,b}-\beta^{^{\prime}}\left\vert VH\right\rangle _{a,b}+\gamma^{^{\prime}% }\left\vert VV\right\rangle _{a,b}\right) \left\vert -\right\rangle _{8}% \end{align} will then be obtained. With four detectors on paths 5, 6, 7 and 8, as well as the classical feedforward, the state \begin{equation} \alpha^{^{\prime}}\left\vert HH\right\rangle _{a,b}+\beta^{^{\prime}% }\left\vert VH\right\rangle _{a,b}+\gamma^{^{\prime}}\left\vert VV\right\rangle _{a,b}, \end{equation} will finally be realized. The above processes can be made deterministic. The final step is to merge the two photons into the same spatial mode, which can be simply realized by a BS followed by a QND module. In this QND module, a qubus beam $\left\vert \alpha\right\rangle $ will be coupled to one of the output modes of BS$_{1}$. After that, the state in Eq. 
(35) plus the qubus beam will evolve to% \begin{align} & \frac{1}{\sqrt{2}}\left( \alpha^{^{\prime}}\left\vert HH\right\rangle _{1,1}+\frac{1}{\sqrt{2}}\beta^{^{\prime}}\left\vert VH\right\rangle _{1,1}+\gamma^{^{\prime}}\left\vert VV\right\rangle _{1,1}\right) \left\vert \alpha e^{i2\theta}\right\rangle \nonumber\\ & +\frac{1}{\sqrt{2}}\beta^{^{\prime}}\left( \left\vert VH\right\rangle _{1,2}+\left\vert VH\right\rangle _{2,1}\right) \left\vert \alpha e^{i\theta }\right\rangle \nonumber\\ & +\frac{1}{\sqrt{2}}\left( \alpha^{^{\prime}}\left\vert HH\right\rangle _{2,2}+\frac{1}{\sqrt{2}}\beta^{^{\prime}}\left\vert VH\right\rangle _{2,2}+\gamma^{^{\prime}}\left\vert VV\right\rangle _{2,2}\right) \left\vert \alpha\right\rangle . \end{align} Through the quadrature measurement $\left\vert X\right\rangle \left\langle X\right\vert $, the following state \begin{equation} \alpha^{^{\prime}}\left\vert HH\right\rangle _{1,1}+\frac{1}{\sqrt{2}}% \beta^{^{\prime}}\left\vert VH\right\rangle _{1,1}+\gamma^{^{\prime}% }\left\vert VV\right\rangle _{1,1}\label{out-1}% \end{equation} or \begin{equation} \alpha^{^{\prime}}\left\vert HH\right\rangle _{2,2}+\frac{1}{\sqrt{2}}% \beta^{^{\prime}}\left\vert VH\right\rangle _{2,2}+\gamma^{^{\prime}% }\left\vert VV\right\rangle _{2,2}\label{out-3}% \end{equation} can be selected out, and the output with only one photon at each output port, which picks up the phase shift $\theta$ in the XPM process, will be discarded. The different coefficient of the middle term relative to the other two terms in Eqs. (\ref{out-1}) and (\ref{out-3}) is caused by the Hong-Ou-Mandel (HOM) interference effect on BS$_{1}$. In order to balance the coefficients, we should use a 50:50 BS on paths 0 and 2, respectively (see Fig. 5). 
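The HOM origin of the extra $\frac{1}{\sqrt{2}}$ on the middle term can be reproduced by expanding the creation operators through BS$_{1}$. The sketch below assumes the BS convention $a^{\dagger}\rightarrow(c^{\dagger}+d^{\dagger})/\sqrt{2}$, $b^{\dagger}\rightarrow(c^{\dagger}-d^{\dagger})/\sqrt{2}$ (an assumption, not stated in the text) and computes the amplitude for both photons to exit in port $c$:

```python
# Expand two bosonic creation operators through a 50:50 BS and read off the
# amplitude that both photons land in the same output port c.  Bosonic
# creation operators commute, so a monomial is a sorted tuple of labels.
from collections import defaultdict
import math

def bs_expand(photons):
    """photons: list of (input_port, polarization); returns {monomial: coeff}."""
    terms = {(): 1.0}
    for port, pol in photons:
        new = defaultdict(float)
        for mono, coeff in terms.items():
            for out in ('c', 'd'):
                sign = -1.0 if (port == 'b' and out == 'd') else 1.0
                key = tuple(sorted(mono + ((out, pol),)))
                new[key] += coeff * sign / math.sqrt(2)
        terms = dict(new)
    return terms

def amp_both_in_c(photons):
    """State amplitude for both photons in port c; identical polarizations
    carry the Fock factor a^dag^2 |vac> = sqrt(2)|2>."""
    terms = bs_expand(photons)
    mono = tuple(sorted(('c', pol) for _, pol in photons))
    coeff = terms.get(mono, 0.0)
    pols = [pol for _, pol in photons]
    fock = math.sqrt(2) if pols[0] == pols[1] else 1.0
    return coeff * fock

a_HH = amp_both_in_c([('a', 'H'), ('b', 'H')])   # identical photons: HOM enhanced
a_VH = amp_both_in_c([('a', 'V'), ('b', 'H')])   # distinguishable polarizations
a_VV = amp_both_in_c([('a', 'V'), ('b', 'V')])

assert abs(a_HH - 1 / math.sqrt(2)) < 1e-12
assert abs(a_VH - 0.5) < 1e-12
assert abs(a_VV - 1 / math.sqrt(2)) < 1e-12
assert abs(a_VH / a_HH - 1 / math.sqrt(2)) < 1e-12   # the 1/sqrt(2) on the middle term
```

The $\left\vert VH\right\rangle $ component is thus suppressed by $1/\sqrt{2}$ relative to $\left\vert HH\right\rangle $ and $\left\vert VV\right\rangle $, matching Eqs. (\ref{out-1}) and (\ref{out-3}).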
After that, we achieve the bi-photonic qutrit $\alpha^{^{\prime}}\left\vert 0\right\rangle _{3}+\beta^{^{\prime}}\left\vert 1\right\rangle _{3}% +\gamma^{^{\prime}}\left\vert 2\right\rangle _{3}$ with the success probability $\frac{1}{2}$, and then the total success probability for an arbitrary unitary operation on biphoton qutrits will be $\frac{1}{6}% \times\frac{1}{2}=\frac{1}{12}$. \section{Discussion} \label{sec4} We have presented two schemes for unitary operations on biphoton qutrits, which are realized through bi-directional mapping between polarization and spatially encoded photonic qutrits. Through the bi-directional mapping, any unitary operation $U(3)$ on bi-photonic qutrits can be reduced to one on single photon qutrits. The linear optical scheme succeeds with a small probability $1.17\times10^{-4}$, but this can be increased to $1/12$ with weak cross-Kerr nonlinearity. The probabilistic nature of the schemes is due to the two indistinguishable photons in the same spatial-temporal modes. For example, at the last merging step in Fig. 5, the probability to get the proper output state is lowered by a factor of $1/2$ because of the HOM interference. Finally, we look at the feasibility of the schemes. The first scheme applies common experimental tools such as linear optical circuits, coincidence measurements, and detection with APDs. The difficulty in the implementation is the accuracy required for the numerous interferences between the photonic modes. The additional requirement in the second scheme is the good performance of weak cross-Kerr nonlinearity. The error in each XPM process can be effectively eliminated under the condition $\alpha\theta\gg1$ \cite{Spiller}, which means that the small XPM phase $\theta$ can be compensated by the large amplitude $|\alpha|$ of the qubus or communication beams. Another advantage of the scheme based on weak nonlinearity is that fewer ancilla photons are needed---ancilla photons are required only in the inverse transformation. 
This could further simplify the experimental implementation. \begin{acknowledgments} Q. L. thanks Dr. Jian Li and Ru-Bing Yang for helpful discussions. \end{acknowledgments}
\section{Introduction} The classification of associative algebras was instituted by Benjamin Peirce in the 1870's \cite{pie}, who gave a partial classification of the complex associative algebras of dimension up to 6, although in some sense, one can deduce the complete classification from his results, with some additional work. The classification method relied on the following remarkable fact: \begin{thm} Every finite dimensional algebra which is not nilpotent contains a nontrivial idempotent element. \end{thm} A nilpotent algebra $A$ is one which satisfies $A^n=0$ for some $n$, while an idempotent element $a$ satisfies $a^2=a$. This observation of Peirce eventually leads to two important theorems in the classification of finite dimensional associative algebras. Recall that an algebra is said to be simple if it has no nontrivial proper ideals, and it is not the 1-dimensional nilpotent algebra over \mbox{$\mathbb K$}, given by the trivial product. \begin{thm}[Fundamental Theorem of Finite Dimensional Associative Algebras] Suppose that $A$ is a finite dimensional algebra over a field \mbox{$\mathbb K$}. Then $A$ has a maximal nilpotent ideal $N$, called its radical. If $A$ is not nilpotent, then $A/N$ is a semisimple algebra, that is, a direct sum of simple algebras. \end{thm} Moreover, when $A/N$ satisfies a property called separability over \mbox{$\mathbb K$}, then $A$ is a semidirect product of its radical and a semisimple algebra. Over the complex numbers, every semisimple algebra is separable. To apply this theorem to construct algebras by extension, one uses the following characterization of simple algebras. \begin{thm} [Wedderburn] If $A$ is a finite dimensional algebra over \mbox{$\mathbb K$}, then $A$ is simple iff $A$ is isomorphic to a tensor product $M\otimes D$, where $M=\mathfrak{gl}(n,\mbox{$\mathbb K$})$ and $D$ is a division algebra over \mbox{$\mathbb K$}. 
\end{thm} In the nongraded case, a division algebra is a unital algebra where every nonzero element has a multiplicative inverse. For \mbox{$\Z_2$}-graded associative algebras, the situation is a bit more complicated. First, we need to consider graded ideals, so that a \mbox{$\Z_2$}-graded algebra is simple when it has no proper nontrivial graded ideals. Secondly, the definition of a division algebra needs to be changed as well, in order to generalize Wedderburn's theorem to the \mbox{$\Z_2$}-graded case. A \mbox{$\Z_2$}-graded division algebra is a graded algebra in which every nonzero \emph{homogeneous} element is invertible. With these changes, the Fundamental Theorem remains the same, except that the radical is the maximal graded nilpotent ideal, and Wedderburn's theorem is also true, if we understand that by a matrix algebra, we mean the general linear algebra of a \mbox{$\Z_2$}-graded vector space. In this paper, we shall be concerned with the moduli space of associative algebra structures on a \mbox{$\Z_2$}-graded vector space $A$ of dimension $2|1$, so that $A_0$ has dimension 2 and $A_1$ has dimension 1. However, we recall that this space coincides with the moduli space of equivalence classes of odd codifferentials of degree 2 on the parity reversion $W=\Pi A$, which has dimension $1|2$, so in this paper, we shall study codifferentials on a space of dimension $1|2$, but the reader should keep in mind that this corresponds to associative algebras on a $2|1$-dimensional space. The main goal of this paper is to give a complete description of the moduli space of $2|1$-dimensional associative algebras, including a computation of the miniversal deformation of every element. \section{Construction of algebras by extensions} In \cite{fp11}, the theory of extensions of an algebra $W$ by an algebra $M$ is described in the language of codifferentials. 
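As an illustration of the simplicity statement in Wedderburn's theorem (for the ungraded case, with numerical matrices standing in for \mbox{$\mathbb K$}; this sketch is not part of the original text), the identity $E_{ik}AE_{lj}=a_{kl}E_{ij}$ shows that the two-sided ideal generated by any nonzero matrix contains every matrix unit, hence is the whole matrix algebra:

```python
# Why the matrix algebra gl(n, K) is simple: from any nonzero A with entry
# a_kl != 0, sandwiching by matrix units recovers every E_ij, so the two-sided
# ideal generated by A is everything.
import numpy as np

def E(i, j, n):
    """Matrix unit E_ij."""
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

rng = np.random.default_rng(0)
n = 3
A = rng.normal(size=(n, n))      # a generic nonzero element of gl(3)
k, l = 1, 2                      # an index pair with A[k, l] != 0
assert A[k, l] != 0

for i in range(n):
    for j in range(n):
        B = E(i, k, n) @ A @ E(l, j, n)   # this product lies in the ideal (A)
        assert np.allclose(B, A[k, l] * E(i, j, n))
```

The graded version of the argument is the same, applied to the general linear algebra of a \mbox{$\Z_2$}-graded vector space.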
Consider the diagram $$ 0\rightarrow M\rightarrow V\rightarrow W\rightarrow 0 $$ of associative \mbox{$\mathbb K$}-algebras, so that $V=M\oplus W$ as a \mbox{$\mathbb K$}-vector space, $M$ is an ideal in the algebra $V$, and $W=V/M$ is the quotient algebra. Suppose that $\delta\in C^2(W)$ and $\mu\in C^2(M)$ represent the algebra structures on $W$ and $M$ respectively. We can view $\mu$ and $\delta$ as elements of $C^2(V)$. Let $T^{k,l}$ be the subspace of $T^{k+l}(V)$ given recursively by $T^{0,0}=\mbox{$\mathbb K$}$, \begin{align*} T^{k,l}&=M\otimes T^{k-1,l}\oplus V\otimes T^{k,l-1}. \end{align*} Let $C^{k,l}=\mbox{\rm Hom}(T^{k,l},M)\subseteq C^{k+l}(V)$. If we denote the algebra structure on $V$ by $d$, we have $$ d=\delta+\mu+\lambda+\psi, $$ where $\lambda\in C^{1,1}$ and $\psi\in C^{0,2}$. Note that in this notation, $\mu\in C^{2,0}$. Then the condition that $d$ is associative, $[d,d]=0$, gives the following relations: \begin{align*} [\delta,\lambda]+\tfrac 12[\lambda,\lambda]+[\mu,\psi]&=0, \quad\text{The Maurer-Cartan equation}\\ [\mu,\lambda]&=0,\quad\text{The compatibility condition}\\ [\delta+\lambda,\psi]&=0,\quad\text{The cocycle condition} \end{align*} Since $\mu$ is an algebra structure, $[\mu,\mu]=0$, so if we define $D_\mu$ by $D_\mu(\varphi)=[\mu,\varphi]$, then $D^2_\mu=0$. Thus $D_\mu$ is a differential on $C(V)$. Moreover $D_\mu:C^{k,l}\rightarrow C^{k+1,l}$. Let \begin{align*} Z_\mu^{k,l}&=\ker(D_\mu:C^{k,l}\rightarrow C^{k+1,l}),\quad\text{the $(k,l)$-cocycles}\\ B_\mu^{k,l}&=\operatorname{Im}(D_\mu:C^{k-1,l}\rightarrow C^{k,l}),\quad\text{the $(k,l)$-coboundaries}\\ H_\mu^{k,l}&=Z_\mu^{k,l}/B_\mu^{k,l},\quad\text{the $D_\mu$ $(k,l)$-cohomology} \end{align*} Then the compatibility condition means that $\lambda\in Z_\mu^{1,1}$. 
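The fact that $D_\mu^{2}=0$ can be checked concretely. On a 1-cochain $\varphi$, $D_\mu$ agrees (up to sign conventions, an assumption of this sketch) with the Hochschild differential; the code below verifies $D_\mu^{2}\varphi=0$ numerically for the 2-dimensional algebra $\mbox{$\mathbb K$}[x]/(x^{2})$:

```python
# Verify the Hochschild differential squares to zero for K[x]/(x^2),
# basis (1, x); this is the D_mu^2 = 0 statement in degree 1 -> 3.
import numpy as np

# structure constants: mul[i, j] is the vector of e_i * e_j in the basis (1, x)
mul = np.zeros((2, 2, 2))
mul[0, 0] = [1, 0]   # 1 * 1 = 1
mul[0, 1] = [0, 1]   # 1 * x = x
mul[1, 0] = [0, 1]   # x * 1 = x
mul[1, 1] = [0, 0]   # x * x = 0

def m(a, b):
    """Multiply two elements given as coordinate vectors."""
    return np.einsum('i,j,ijk->k', a, b, mul)

rng = np.random.default_rng(1)
phi = rng.normal(size=(2, 2))         # a random 1-cochain phi: A -> A

def Dphi(a, b):                       # (D phi)(a,b) = a phi(b) - phi(ab) + phi(a) b
    return m(a, phi @ b) - phi @ m(a, b) + m(phi @ a, b)

def DDphi(a, b, c):                   # (D^2 phi)(a,b,c), alternating sum
    return m(a, Dphi(b, c)) - Dphi(m(a, b), c) + Dphi(a, m(b, c)) - m(Dphi(a, b), c)

basis = np.eye(2)
for a in basis:
    for b in basis:
        for c in basis:
            assert np.allclose(DDphi(a, b, c), 0)
```

The cancellation uses only associativity of the product, which is exactly why $[\mu,\mu]=0$ makes $D_\mu$ a differential.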
If we define $D_{\delta+\lambda}(\varphi)=[\delta+\lambda,\varphi]$, then it is not true that $D^2_{\delta+\lambda}=0$, but $D_{\delta+\lambda}D_\mu=-D_{\mu}D_{\delta+\lambda}$, so that $D_{\delta+\lambda}$ descends to a map $D_{\delta+\lambda}:H^{k,l}_\mu\rightarrow H^{k,l+1}_\mu$, whose square is zero, giving rise to the $D_{\delta+\lambda}$-cohomology $H^{k,l}_{\mu,\delta+\lambda}$. If the pair $(\lambda,\psi)$ gives rise to a codifferential $d$, and $(\lambda,\psi')$ gives rise to another codifferential $d'$, then if we express $\psi'=\psi+\tau$, it is easy to see that $[\mu,\tau]=0$, and $[\delta+\lambda,\tau]=0$, so that the image $\bar\tau$ of $\tau$ in $H^{0,2}_\mu$ is a $D_{\delta+\lambda}$-cocycle, and thus $\tau$ determines an element $\{\bar\tau\}\in H^{0,2}_{\mu,\delta+\lambda}$. If $\beta\in C^{0,1}$, then $g=\exp(\beta):\mbox{$\T(V)$}\rightarrow\mbox{$\T(V)$}$ is given by $g(m,w)=(m+\beta(w),w)$. Furthermore $g^*=\exp(-\operatorname{ad}_{\beta}):C(V)\rightarrow C(V)$ satisfies $g^*(d)=d'$, where $d'=\delta+\mu+\lambda'+\psi'$ with \begin{align*} \lambda'&=\lambda+[\mu,\beta]\\ \psi'&=\psi+[\delta+\lambda+\tfrac12[\mu,\beta],\beta]. \end{align*} In this case, we say that $d$ and $d'$ are equivalent extensions in the restricted sense. Such equivalent extensions are also equivalent as codifferentials on $\mbox{$\T(V)$}$. Note that $\lambda$ and $\lambda'$ differ by a $D_\mu$-coboundary, so $\bar\lambda=\bar\lambda'$ in $H^{1,1}_\mu$. If $\lambda$ satisfies the MC equation for some $\psi$, then any element $\lambda'$ in $\bar\lambda$ also gives a solution of the MC equation, for the $\psi'$ given above. The cohomology classes of those $\lambda$ for which a solution of the MC equation exists determine distinct restricted equivalence classes of extensions. Let $G_{M,W}=\mbox{\bf GL}(M)\times\mbox{\bf GL}(W)\subseteq\mbox{\bf GL}(V)$. 
If $g\in G_{M,W}$ then $g^*:C^{k,l}\rightarrow C^{k,l}$, and $g^*:C^k(W)\rightarrow C^k(W)$, so $\delta'=g^*(\delta)$ and $\mu'=g^*(\mu)$ are codifferentials on $\mbox{$\T(W)$}$ and $\mathcal T(M)$ respectively. The group $G_{\delta,\mu}$ is the subgroup of $G_{M,W}$ consisting of those elements $g$ such that $g^*(\delta)=\delta$ and $g^*(\mu)=\mu$. Then $G_{\delta,\mu}$ acts on the restricted equivalence classes of extensions, giving the equivalence classes of general extensions. Also, $G_{\delta,\mu}$ acts on $H^{k,l}_\mu$, and induces an action on the classes $\bar\lambda$ of $\lambda$ giving a solution $(\lambda,\psi)$ to the MC equation. Next, consider the group $G_{\delta,\mu,\lambda}$ consisting of the automorphisms $h$ of $V$ of the form $h=g\exp(\beta)$, where $g\in G_{\delta,\mu}$, $\beta\in C^{0,1}$ and $\lambda=g^*(\lambda)+[\mu,\beta]$. If $d=\delta+\mu+\lambda+\psi+\tau$, then $h^*(d)=\delta+\mu+\lambda+\psi+\tau'$ where \begin{equation*} \tau'=g^*(\psi)-\psi+[\delta+\lambda-\tfrac12[\mu,\beta],\beta]+g^*(\tau). \end{equation*} Moreover, the group $G_{\delta,\mu,\lambda}$ induces an action on $H^{0,2}_{\mu,\delta+\lambda}$ given by $\{\bar\tau\}\rightarrow\{\overline{\tau'}\}$. In fact, $\{\overline{g^*(\tau)}\}$ is well defined as well, and depends only on $\{\bar\tau\}$. The general group of equivalences of extensions of the algebra structure $\delta$ on $W$ by the algebra structure $\mu$ on $M$ is given by the group of automorphisms of $V$ of the form $h=\exp(\beta)g$, where $\beta\in C^{0,1}$ and $g\in G_{\delta,\mu}$. We have the following classification of such extensions up to equivalence.
\begin{thm} The equivalence classes of extensions of $\delta$ on $W$ by $\mu$ on $M$ are classified by the following: \begin{enumerate} \item Equivalence classes of $\bar\lambda\in H^{1,1}_\mu$ which satisfy the MC equation \begin{equation*} [\delta,\lambda]+\tfrac12[\lambda,\lambda]+[\mu,\psi]=0 \end{equation*} for some $\psi\in C^{0,2}$, under the action of the group $G_{\delta,\mu}$. \item Equivalence classes of $\{\bar\tau\}\in H^{0,2}_{\mu,\delta+\lambda}$ under the action of the group $G_{\delta,\mu,\lambda}$. \end{enumerate} \end{thm} Equivalent extensions will give rise to equivalent codifferentials on $V$, but it may happen that two codifferentials arising from nonequivalent extensions are equivalent. This is because the group of equivalences of extensions is the group of invertible block upper triangular matrices on the space $V=M\oplus W$, whereas the equivalence classes of codifferentials on $V$ are given by the group of all invertible matrices, which is larger. The fundamental theorem of finite dimensional algebras allows us to restrict our consideration of extensions to two cases. First, we can consider those extensions where $\delta$ is a semisimple algebra structure on $W$, and $\mu$ is a nilpotent algebra structure on $M$. In this case, because we are working over $\mbox{$\mathbb C$}$, we can also assume that $\psi=\tau=0$. Thus the classification of the extension reduces to considering equivalence classes of $\lambda$. Secondly, we can consider extensions of the trivial algebra structure $\delta=0$ on a 1-dimensional space $W$ by a nilpotent algebra $\mu$. This is because a nilpotent algebra has a codimension 1 ideal $M$, and the restriction of the algebra structure to $M$ is nilpotent. However, in this case, we cannot assume that $\psi$ or $\tau$ vanish, so we need to use the classification theorem above to determine the equivalence classes of extensions.
In many cases, in solving the MC equation for a particular $\lambda$, if there is any $\psi$ yielding a solution, then $\psi=0$ also gives a solution, so the action of $G_{\delta,\mu,\lambda}$ on $H^{0,2}_{\mu,\delta+\lambda}$ takes on a simpler form than the general action we described above. In fact, if in addition to $\psi=0$ providing a solution to the MC equation, any element $h=g\exp(\beta)$ satisfies $[\mu,\beta]=0$, then the action of $h$ on $H^{0,2}_{\mu,\delta+\lambda}$ is just the action $g^*(\{\bar\tau\})=\{\overline{g^*(\tau)}\}$, which is easy to calculate in practice. \section{Associative algebra structures on a $1|2$ vector space} Let $A$ be a $1|2$-dimensional vector space, and $V=\Pi A$ be the parity reversion of $A$, so that $V$ is $2|1$-dimensional. Let $\{v_1,v_2,v_3\}$ be a basis of $V$ with $v_1, v_2$ even elements and $v_3$ an odd element, and let $d$ be a codifferential on $V$ representing an associative algebra structure on $A$. By results in \cite{bppw1}, there are only two \mbox{$\Z_2$}-graded division algebras, the complex numbers, and a certain $1|1$-dimensional algebra. As a consequence, there are no $1|2$-dimensional simple algebras, so we can express $V$ as an extension of an algebra structure $\delta$ on $W$ by an algebra structure $\mu$ on $M$, where $V=M\oplus W$, and $M$ is an ideal in $V$. By the fundamental theorem of finite dimensional associative algebras, we can assume that $\mu$ is a nilpotent algebra structure on $M$. Moreover, $\delta$ is a semisimple algebra structure on $W$, unless $d$ is a nilpotent algebra structure. Since every nilpotent algebra has a codimension 1 ideal, if $d$ is nilpotent, we can assume that $W$ is 1-dimensional (either even or odd), and that $\delta=0$. The only semisimple algebras we need to consider are the simple $1|1$-dimensional algebra, and the simple $0|1$-dimensional algebra.
Moreover, when considering extensions of a semisimple algebra, the ``cocycle'' $\psi$ can be taken to be zero, because we are considering extensions over $\mbox{$\mathbb C$}$, for which every semisimple algebra is separable. Now, suppose that $W=\langle v_{w(1)},\cdots, v_{w(p)}\rangle$ where the first $s$ vectors are even and the other $p-s$ elements are odd, and that $M=\langle v_{m(1)},\cdots, v_{m(q)}\rangle$, where the first $t$ elements are even and the other $q-t$ elements are odd. (This conforms to the principle that in a \mbox{$\Z_2$}-graded space, a basis should be listed with the even elements first.) Then a formula for an arbitrary $\lambda\in C^{1,1}$ is of the form \begin{align*} \lambda&=\sum_{|v_{w(k)}|=1}\psa{w(k)m(j)}{m(i)}(LE_k)^i_j+\psa{m(j)w(k)}{m(i)}(RE_k)^i_j \\&+\sum_{|v_{w(k)}|=0} \psa{w(k)m(j)}{m(i)}(LO_k)^i_j+\psa{m(j)w(k)}{m(i)}(RO_k)^i_j, \end{align*} where $LE_k$ and $RE_k$ are matrices of even maps $M\rightarrow M$, and $LO_k$ and $RO_k$ are matrices of odd maps $M\rightarrow M$. For simplicity, we shall denote $(LE_k)^i_j$ as $LE_{kj}^i$ and similarly for the components of the other matrices. Let $L_k=LE_k+LO_k$ and $R_k=RE_k+RO_k$. Then \begin{align*} \tfrac12[\lambda,\lambda]&=\,\psa{w(k)w(l)m(j)}{m(i)}((LO_k-LE_k)L_l)^i_j+ \psa{m(j)w(k)w(l)}{m(i)}(R_lR_k)^i_j \\&\,+\psa{w(k)m(j)w(l)}{m(i)}(R_lL_k+(LO_k-LE_k)R_l)^i_j. \end{align*} It is important to note that the formula above is given in terms of matrix multiplication. This is significant from a computational point of view, as we shall illustrate below. It is interesting to note that matrices in $G_{M,W}$, which are block diagonal maps $\operatorname {diag}(G_1,G_2)$, where $G_1\in\mbox{\bf GL}(M)$ and $G_2\in\mbox{\bf GL}(W)$, act on $\lambda$ in a manner which can be described in terms of the matrices above. First, $G_1$ acts by conjugating all the matrices $L_k$ and $R_k$ simultaneously. Secondly, the matrix $G_2$ acts on the $k$ indices.
We shall say more about these actions later. Let us give one concrete application of the remarks above. Suppose that $W$ is completely odd, and $M$ is $r|s$-dimensional. Then we can express $M=\langle v_1,\cdots, v_{r+s}\rangle$ and $W=\langle v_{r+s+1},\cdots, v_{r+s+n}\rangle$. In this case $L_k=LE_k$ and $R_k=RE_k$. We can express $R_k=\operatorname {diag}(T_k,B_k)$, where $T_k:M_0\rightarrow M_0$ and $B_k:M_1\rightarrow M_1$, and $M=M_0\oplus M_1$ represents the decomposition of $M$ into its even and odd parts. Then we have \begin{equation*} \tfrac12[\lambda,\lambda]=-\psa{klj}i(L_kL_l)^i_j+\psa{kjl}i(R_lL_k-L_kR_l)^i_j +\psa{jkl}i(R_lR_k)^i_j. \end{equation*} Let $\delta=\sum_{k=r+s+1}^{r+s+n}\psa{kk}k$ be the codifferential representing the semisimple algebra $\mbox{$\mathbb C$}^n$. Then \begin{equation*} [\delta,\lambda]=\psa{kkj}i(L_{k})^i_j+\s{j}\psa{jkk}i(R_{k})^i_j. \end{equation*} When considering an extension of the algebra $\mbox{$\mathbb C$}^n$ by an algebra structure on $M$, we can assume that the cocycle ``$\psi$'' vanishes, so the MC equation is just $[\delta,\lambda]+\tfrac12[\lambda,\lambda]=0$, which is equivalent to the following: \begin{align*} L_k&=L_k^2,\quad L_kL_l=0,\text{ if $k\ne l$},\\ L_kR_l&=R_lL_k,\\ T_k&=-T_k^2,\quad B_k=B_k^2,\quad R_kR_l=0,\text{ if $k\ne l$}. \end{align*} As a consequence, the matrices above give a commuting set of diagonalizable matrices, so they can be simultaneously diagonalized. Moreover, $L_k$ and $B_k$ have only 0 and 1 as possible eigenvalues and $T_k$ has only 0 and $-1$ as possible eigenvalues. If we consider $\mu=0$ as the algebra structure on $M$, then the elements $\operatorname {diag}(G_1,G_2)\in G_{\delta,\mu}$ are given by an arbitrary invertible matrix $G_1$ and a permutation matrix $G_2$. Thus we can apply an element $G\in G_{\delta,\mu}$ to $\lambda$ to put it in the form where all the matrices are diagonal, and are ordered in such a manner that the nonzero $L_k$ matrices appear first.
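These relations pin down the possible spectra completely. As a quick numerical sanity check (an illustrative sketch only, for a single index $k$ and $M$ purely even, so that $R_k=T_k$ and the relations read $L=L^2$, $R^2=-R$, $LR=RL$), one can conjugate diagonal models by a random change of basis and confirm both the relations and the eigenvalue claims:

```python
import numpy as np

# Single pair (L, R) with M purely even, so R = T.  Conjugating diagonal
# models by a random basis change gives a non-diagonal solution with the
# same spectra, illustrating the simultaneous-diagonalization claim.
rng = np.random.default_rng(0)
S = rng.standard_normal((2, 2)) + 2 * np.eye(2)   # generically invertible
Sinv = np.linalg.inv(S)
L = S @ np.diag([1.0, 0.0]) @ Sinv
R = S @ np.diag([-1.0, 0.0]) @ Sinv

assert np.allclose(L @ L, L)       # L = L^2
assert np.allclose(R @ R, -R)      # R^2 = -R  (T = -T^2)
assert np.allclose(L @ R, R @ L)   # LR = RL
# eigenvalues only 0, 1 for L and 0, -1 for R, as claimed
assert np.allclose(sorted(np.linalg.eigvals(L).real), [0.0, 1.0])
assert np.allclose(sorted(np.linalg.eigvals(R).real), [-1.0, 0.0])
```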
Moreover, since the $L_k$ matrices are orthogonal to each other, there are at most $m=r+s$ nonzero $L$ matrices. Similar considerations apply to the $R$ matrices, so that in total, there can be no more than $2m$ pairs $(L_k,R_k)$, such that at least one matrix does not vanish. Therefore, when $n\ge 2m$, the number of distinct equivalence classes of extensions of $\mbox{$\mathbb C$}^n$ by a trivial algebra structure on $M$ is exactly equal to $2m$. We say that this is the stable situation. The number of extensions is independent of $n$ as long as it is at least $2m$. Moreover, the cohomology and deformation theory also becomes stable, in a natural way. When $\mu\ne0$, the situation is a bit more complicated, but there is also an $n$ beyond which the situation becomes stable. We now give a construction of the elements in the moduli space of $2|1$-dimensional codifferentials. Table \ref{coho21 table} below gives the cohomology of the 21 nonequivalent codifferentials. \begin{table}[h!] \begin{center} \begin{tabular}{lccccc} Codifferential&$H^0$&$H^1$&$H^2$&$H^3$&$H^4$\\ \hline \\ $d_1=\psa{23}2-\psa{32}2+\psa{22}3-\psa{33}3$&$1|1$&$1|0$&$1|0$&$1|0$&$1|0$\\ $d_2=\psa{33}3+\psa{31}1+\psa{32}2$&$0|0$&$3|0$&$0|0$&$0|0$&$0|0$\\ $d_3=\psa{33}3-\psa{13}1-\psa{23}2$&$0|0$&$3|0$&$0|0$&$0|0$&$0|0$\\ $d_4=\psa{33}3+\psa{31}1-\psa{23}2$&$0|0$&$1|0$&$0|0$&$1|0$&$0|0$\\ $d_5=\psa{33}3-\psa{13}1+\psa{32}2-\psa{23}2$&$1|0$&$1|0$&$1|0$&$1|0$&$1|0$\\ $d_6=\psa{33}3+\psa{31}1+\psa{32}2-\psa{23}2$&$1|0$&$1|0$&$1|0$&$1|0$&$1|0$\\\hline $d_7=\psa{33}3+\psa{32}2$&$1|0$&$1|0$&$2|0$&$2|0$&$2|0$\\ $d_8=\psa{33}3-\psa{23}2$&$1|0$&$1|0$&$2|0$&$2|0$&$2|0$\\ $d_9=\psa{33}3+\psa{31}1-\psa{13}1+\psa{32}2-\psa{23}2$&$3|0$&$4|0$&$6|0$&$12|0$&$24|0$\\ $d_{10}=\psa{33}3$&$3|0$&$4|0$&$8|0$&$16|0$&$32|0$\\ $d_{11}=\psa{33}3+\psa{32}2-\psa{23}2$&$2|1$&$2|1$&$2|1$&$2|1$&$2|1$\\ $d_{12}=\psa{22}3+\psa{23}1-\psa{32}1$&$1|1$&$2|0$&$1|1$&$2|0$&$1|1$\\\hline 
$d_{13}(p:q)=\psa{22}3+\psa{21}3p+\psa{12}3q$&$0|1$&$2|0$&$2|1$&$3|0$&$4|0$\\ $d_{13}(1:1)=\psa{22}3+\psa{21}3+\psa{12}3$&$0|1$&$2|0$&$2|1$&$5|0$&$4|2$\\ $d_{13}(1:-1)=\psa{22}3+\psa{21}3-\psa{12}3$&$1|1$&$2|1$&$3|1$&$4|1$&$5|1$\\ $d_{13}(1:0)=\psa{22}3+\psa{21}3$&$1|0$&$2|0$&$4|1$&$6|2$&$8|3$\\ $d_{13}(0:0)=\psa{22}3$&$1|1$&$3|1$&$5|4$&$10|7$&$18|14$\\\hline $d_{14}=\psa{21}3-\psa{12}3$&$2|1$&$4|2$&$5|4$&$8|4$&$10|5$\\ $d_{15}(p:q)=\psa{23}1p+\psa{32}1q$&$1|0$&$2|0$&$1|2$&$2|1$&$2|2$\\ $d_{15}(1:1)=\psa{23}1+\psa{32}1$&$1|0$&$2|1$&$2|2$&$4|2$&$3|4$\\ $d_{15}(1:0)=\psa{23}1$&$1|0$&$2|0$&$2|3$&$5|3$&$5|6$\\ $d_{15}(0:1)=\psa{32}1$&$1|0$&$2|0$&$2|3$&$5|3$&$5|6$\\ $d_{15}(1:-1)=\psa{23}1-\psa{32}1$&$2|1$&$3|2$&$4|3$&$5|4$&$6|5$\\ \\ \hline \end{tabular} \end{center} \caption{Cohomology of the 15 families of codifferentials on a $2|1$-dimensional space} \label{coho21 table} \end{table} \section{Extensions where $W$ is $1|1$-dimensional and $M$ is $1|0$-dimensional} Let $W=\langle v_2,v_3\rangle$ and $M=\langle v_1\rangle$. The unique $1|1$-dimensional simple algebra is given by the codifferential $\delta=\psa{23}2-\psa{32}2+\psa{22}3-\psa{33}3$. The only algebra structure on $M$ is the trivial algebra $\mu=0$. The generic $\lambda$ is of the form $\lambda=\psa{31}1LE_{21}^1+\psa{13}1RE_{21}^1$. However, \begin{align*} [\delta,\lambda]&=\pha{221}1LE_{21}^1-\pha{122}1RE_{21}^1 -\pha{331}1LE_{21}^1-\pha{133}1RE_{21}^1\\ \tfrac12[\lambda,\lambda]&=-\pha{331}1(LE_{21}^1)^2+\pha{133}1(RE_{21}^1)^2 \end{align*} so the MC equation forces $\lambda=0$. Therefore, the unique extension of $\delta$ is the direct sum of $\delta$ and the trivial 1-dimensional algebra, which is the codifferential $d_1$. \section{Extensions where $M$ is $2|0$-dimensional and $W$ is $0|1$-dimensional} Let $M=\langle v_1,v_2\rangle$ be $2|0$-dimensional and $W=\langle v_3\rangle$ be $0|1$-dimensional. The group $G_{M,W}$ consists of diagonal matrices $G=\operatorname {diag}(r,s,t)$, with $rst\ne0$.
A generic element of $C^{1,1}$ is of the form \begin{align*} \lambda&= \psa{31}1L_{1}^1+\psa{31}2L_{1}^2+\psa{32}1L_2^1+\psa{32}2L_2^2 +\psa{13}1R_{1}^1+\psa{13}2R_{1}^2+\psa{23}1R_2^1+\psa{23}2R_2^2 \end{align*} corresponding to the matrices \begin{equation*} L= \left[ \begin {matrix} L_1^1&L_2^1\\\noalign{\medskip} L_1^2&L_2^2 \end{matrix} \right] ,\qquad R= \left[ \begin {matrix} R_1^1&R_2^1\\\noalign{\medskip} R_1^2&R_2^2 \end {matrix} \right]. \end{equation*} \subsubsection{Extensions of the simple $0|1$-dimensional algebra} In this case $\delta=\psa{33}3$. From the MC equation, we obtain that \begin{equation*} L^2=L,\quad RL=LR,\quad R^2=-R, \end{equation*} so that $L$ is a diagonalizable matrix with eigenvalues 1 and 0, and $R$ is diagonalizable with eigenvalues $-1$ and 0. Thus $R$ and $L$ can be simultaneously diagonalized, which means that we have the following solutions. If $L=I$ or $L=0$, then $R$ can be taken to be either $-I$, $\operatorname {diag}(-1,0)$ or $0$. Otherwise $L=\operatorname {diag}(1,0)$ and $R$ is either $-I$, $\operatorname {diag}(-1,0)$, $\operatorname {diag}(0,-1)$, or 0. These 10 possibilities correspond to the codifferentials $d_2,\dots, d_{11}$. \subsubsection{Extensions of the trivial $0|1$-dimensional algebra} In this case, $\delta=0$ and $\mu=0$. Then $G_{\delta,\mu}=G_{M,W}$. The MC equation yields $L^2=0$, $LR=RL$ and $R^2=0$. The eigenvalues of the matrices $R$ and $L$ are only 0, and since these matrices commute, they can be put in simultaneous upper triangular form. Thus we can express $\lambda=\psa{23}1p+\psa{32}1q$, which gives a family of nonequivalent codifferentials, $d_{15}(p:q)$. This family is parameterized projectively by elements $(p:q)\in\mbox{$\mathbb C$}\mathbb P^1$. \section{Extensions where $M$ is $1|1$-dimensional and $W$ is $1|0$-dimensional} We have $M=\langle v_1,v_3\rangle$ and $W=\langle v_2\rangle$.
The group $G_{M,W}$ consists of matrices of the form $G=\operatorname {diag}(r,s,t)$, where $rst\ne0$. The only codifferential on $W$ is $\delta=0$. A generic element in $C^{1,1}$ is \begin{equation*} \lambda=\psa{23}1LO^1_2+\psa{21}3LO^2_1 +\psa{32}1RO^1_2+\psa{12}3RO^2_1, \end{equation*} corresponding to the matrices \begin{equation*} L= \left[ \begin {array}{cc} 0&LO^1_2 \\\noalign{\medskip}LO^2_1&0\end {array} \right] ,\qquad R= \left[ \begin {array}{cc} 0&RO^1_2\\\noalign{\medskip}RO^2_1&0 \end {array} \right]. \end{equation*} Moreover a generic element in $C^{0,2}$ is of the form $\tau=\psa{22}3c$ and a generic element of $C^{0,1}$ is of the form $\beta=\pha21b$. There are two possibilities for $\mu$, the trivial codifferential and $\mu=\psa{11}3$. In this case, it is more convenient to consider the trivial case $\mu=0$ first. Then the MC equation yields that either $LO^2_1=RO^2_1=0$ or $LO^1_2=RO^1_2=0$. Let us consider the first case. Let $LO^1_2=p$ and $RO^1_2=q$. Then $\lambda=\psa{23}1p+\psa{32}1q$. Now $[\delta+\lambda,\tau]=\pha{222}1(p+q)c$, so unless $p=-q$, we can assume that $\tau=0$. Therefore, unless $p=-q$, we have $d=\psa{23}1p+\psa{32}1q$, which is just the codifferential $d_{15}(p:q)$ we already encountered. We also note that the action of $G_{\delta,\mu}$ on $\lambda$ is to multiply it by a nonzero constant, so that $\lambda$ can be parameterized projectively. Thus if $p=-q$, we can assume that either $p=1$ and $q=-1$ or $p=q=0$. In either case, the action of $G_{\delta,\mu,\lambda}$ on $\tau$ multiplies it by a constant, so we can assume that $\tau=\psa{22}3$, since the case $\tau=0$ gives either $d_{15}(1:-1)$ or the zero codifferential, which we have already accounted for. This gives the codifferential $d_{12}$ when $\lambda=\psa{23}1-\psa{32}1$, and the codifferential $d_{13}(0:0)$ when $\lambda=0$. In the second case, we can take $\lambda=\psa{21}3p+\psa{12}3q$.
Then $G_{\delta,\mu,\lambda}$ consists of those transformations of the form $h=g\exp(\beta)$ where $\beta=\pha{2}1b$ and $g$ is given by a matrix $G=\operatorname {diag}(r,s,rs)$. Note that $[\delta+\lambda,\beta]=\psa{22}3(p+q)b$, which means that all values of $c$ produce an equivalent codifferential unless $p=-q$. It might seem that this would dictate choosing $c=0$ in the generic case, but this actually is not what deformation theory requires. The reason is that in the special cases where $p=-q$, for all values of $c$ except $c=0$ we obtain an equivalent codifferential, so the codifferential arising from taking $c=0$ jumps to the one arising from taking $c=1$. Therefore, the case $c=1$ is generic. In the generic case, we take $c=1$ and we obtain the codifferential $d_{13}(p:q)$, which is parameterized projectively by $(p:q)\in\mbox{$\mathbb C$}\mathbb P^1$. In the case where $p=1$ and $q=-1$ and $\tau=0$, we obtain the codifferential $d_{14}$, and when $p=q=0$ and $\tau=0$, we obtain the zero codifferential. Now, consider the nontrivial case $\mu=\psa{11}3$. Then $G_{\delta,\mu}$ is given by matrices of the form $\operatorname {diag}(r,s,r^2)$ such that $rs\ne0$. The compatibility condition $[\mu,\lambda]=0$ yields $LO^1_2=0$ and $RO^1_2=0$, and $[\mu,\beta]=\psa{21}3b+\psa{12}3b$, so we can assume that $RO^2_1=0$ as well. Taking into account the action of the group $G_{\delta,\mu}$, we can reduce to the cases $\lambda=\psa{21}3$ or $\lambda=0$. If $\lambda=\psa{21}3$, then the action of $G_{\delta,\mu,\lambda}$ on $\tau$ leaves it unchanged, so we have to consider all values of $c$. It turns out that the codifferentials we obtain are equivalent to $d_{13}(-1+\sqrt{1-4c}:1+\sqrt{1-4c})$. This complicated formula is why it was more convenient to study the case $\mu=0$ first.
If $\lambda=0$, then the action of $G_{\delta,\mu,\lambda}$ on $\tau$ multiplies it by a nonzero constant, so we can consider the case $c=0$, which gives a codifferential equivalent to $d_{13}(0:0)$, or $c=1$, which gives the codifferential $d_{13}(1:1)$. Thus we have completed the classification of the elements in the moduli space of associative algebra structures on a space $V$ of dimension $1|2$, or, in other words, the codifferentials on a $2|1$-dimensional space. \section{Hochschild Cohomology and Deformations} \emph{Hochschild cohomology} was introduced in \cite{hoch}, and used to classify infinitesimal deformations of associative algebras. Suppose that \begin{equation*} m_t=m+t\varphi \end{equation*} is an infinitesimal deformation of $m$. By this we mean that the structure $m_t$ is associative up to first order. From an algebraic point of view, this means that we assume that $t^2=0$, and then check whether associativity holds. It is not difficult to show that this is equivalent to the following. \begin{equation*} a\varphi(b,c)-\varphi(ab,c)+\varphi(a,bc)-\varphi(a,b)c=0, \end{equation*} where, for simplicity, we denote $m(a,b)=ab$. Moreover, if we let \begin{equation*} g_t=I+t\lambda \end{equation*} be an infinitesimal automorphism of $A$, where $\lambda\in\mbox{\rm Hom}(A,A)$, then it is easily checked that \begin{equation*} g_t^*(m)(a,b)=ab+t(a\lambda(b)-\lambda(ab)+\lambda(a)b). \end{equation*} This naturally leads to a definition of the Hochschild coboundary operator $D$ on $\mbox{\rm Hom}(\mbox{$\T(A)$},A)$ by \begin{align*} D(\varphi)(a_0,\cdots, a_n)=&a_0\varphi(a_1,\cdots, a_n)+\s{n+1}\varphi(a_0,\cdots, a_{n-1})a_n\\ &+\sum_{i=0}^{n-1}\s{i+1}\varphi(a_0,\cdots, a_{i-1},a_ia_{i+1},a_{i+2},\cdots, a_n). \end{align*} If we set $C^n(A)=\mbox{\rm Hom}(A^n,A)$, then $D:C^n(A)\rightarrow C^{n+1}(A)$. One obtains the following classification theorem for infinitesimal deformations.
\begin{thm} The equivalence classes of infinitesimal deformations $m_t$ of an associative algebra structure $m$ under the action of the group of infinitesimal automorphisms on the set of infinitesimal deformations are classified by the Hochschild cohomology group \begin{equation*} H^2(m)=\ker(D:C^2(A)\rightarrow C^3(A))/\operatorname{Im}(D:C^1(A)\rightarrow C^2(A)). \end{equation*} \end{thm} When $A$ is \mbox{$\Z_2$}-graded, the only modifications that are necessary are that $\varphi$ and $\lambda$ are required to be even maps, so we obtain that the classification is given by $H^2_e(A)$, the even part of the Hochschild cohomology. We wish to transform this classical viewpoint into the more modern viewpoint of associative algebras as being given by codifferentials on a certain coalgebra. To do this, we first introduce the \emph{parity reversion} $\Pi A$ of a \mbox{$\Z_2$}-graded vector space $A$. If $A=A_e\oplus A_o$ is the decomposition of $A$ into its even and odd parts, then $W=\Pi A$ is the \mbox{$\Z_2$}-graded vector space given by $W_e=A_o$ and $W_o=A_e$. In other words, $W$ is just the space $A$ with the parity of elements reversed. Denote the tensor (co)-algebra of $W$ by $\mbox{$\T(W)$}=\bigoplus_{k=0}^\infty W^k$, where $W^k$ is the $k$-th tensor power of $W$ and $W^0=\mbox{$\mathbb K$}$. For brevity, the element in $W^k$ given by the tensor product of the elements $v_i$ in $W$ will be denoted by $v_1\cdots v_k$. The coalgebra structure on $\mbox{$\T(W)$}$ is given by \begin{equation*} \Delta(v_1\cdots v_n)=\sum_{i=0}^n v_1\cdots v_i\otimes v_{i+1}\cdots v_n. \end{equation*} Define $d:W^2\rightarrow W$ by $d=\pi\circ m\circ (\pi^{-1}\otimes\pi^{-1})$, where $\pi:A\rightarrow W$ is the identity map, which is odd, because it reverses the parity of elements. Note that $d$ is an odd map. The space $C(W)=\mbox{\rm Hom}(\mbox{$\T(W)$},W)$ is naturally identifiable with the space of coderivations of $\mbox{$\T(W)$}$. 
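The coalgebra structure $\Delta$ just defined is simply deconcatenation of tensor words, and its coassociativity $(\Delta\otimes 1)\Delta=(1\otimes\Delta)\Delta$ is easy to confirm mechanically. The following small sketch (illustrative only; tuples of basis labels stand in for tensor words) checks it on a word of length three:

```python
def Delta(word):
    """Deconcatenation coproduct on T(W): all two-part splittings of a word."""
    return [(word[:i], word[i:]) for i in range(len(word) + 1)]

w = ('v1', 'v2', 'v3')
# (Delta x id) o Delta and (id x Delta) o Delta yield the same triples,
# namely all three-part splittings of the word, each exactly once
left  = sorted((a, b, c) for x, c in Delta(w) for a, b in Delta(x))
right = sorted((a, b, c) for a, y in Delta(w) for b, c in Delta(y))
assert left == right == sorted(
    (w[:i], w[i:j], w[j:]) for i in range(4) for j in range(i, 4)
)
```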
In fact, if $\varphi\in C^k(W)=\mbox{\rm Hom}(W^k,W)$, then $\varphi$ is extended to a coderivation of $\mbox{$\T(W)$}$ by \begin{equation*} \varphi(v_1\cdots v_n)= \sum_{i=0}^{n-k}\s{(v_1+\cdots+ v_i)\varphi}v_1\cdots v_i\varphi(v_{i+1}\cdots v_{i+k})v_{i+k+1}\cdots v_n. \end{equation*} The space of coderivations of $\mbox{$\T(W)$}$ is equipped with a \mbox{$\Z_2$}-graded Lie algebra structure given by \begin{equation*} [\varphi,\psi]=\varphi\circ\psi-\s{\varphi\psi}\psi\circ\varphi. \end{equation*} The reason that it is more convenient to work with the structure $d$ on $W$ rather than $m$ on $A$ is that the condition of associativity for $m$ translates into the codifferential property $[d,d]=0$. Moreover, the Hochschild coboundary operation translates into the coboundary operator $D$ on $C(W)$, given by \begin{equation*} D(\varphi)=[d,\varphi]. \end{equation*} This point of view on Hochschild cohomology first appeared in \cite{sta4}. The fact that the space of Hochschild cochains is equipped with a graded Lie algebra structure was noticed much earlier \cite{gers,gers1,gers2,gers3,gers4}. For notational purposes, we introduce a basis of $C^n(W)$ as follows. Suppose that $W=\langle v_1,\cdots, v_m\rangle$. Then if $I=(i_1,\cdots, i_n)$ is a \emph{multi-index}, where $1\le i_k\le m$, denote $v_I=v_{i_1}\cdots v_{i_n}$. Define $\varphi^{I}_i\in C^n(W)$ by \begin{equation*} \varphi^I_i(v_J)=\delta^I_Jv_i, \end{equation*} where $\delta^I_J$ is the Kronecker delta symbol. In order to emphasize the parity of the element, we will denote $\varphi^I_i$ by $\psi^I_i$ when it is an odd coderivation. For a multi-index $I=(i_1,\cdots, i_k)$, denote its \emph{length} by $\ell(I)=k$. If $K$ and $L$ are multi-indices, then denote $KL=(k_1,\cdots, k_{\ell(K)},l_1,\cdots, l_{\ell(L)})$.
Then \begin{align*} (\varphi^I_i\circ\varphi^J_j)(v_K)&= \sum_{K_1K_2K_3=K}\s{v_{K_1}\varphi^J_j} \varphi^I_i(v_{K_1}\varphi^J_j(v_{K_2})v_{K_3}) \\&= \sum_{K_1K_2K_3=K}\s{v_{K_1}\varphi^J_j}\delta^I_{K_1jK_3}\delta^J_{K_2}v_i, \end{align*} from which it follows that \begin{equation}\label{braform} \varphi^I_i\circ\varphi^J_j=\sum_{k=1}^{\ell(I)}\s{(v_{i_1}+\cdots+ v_{i_{k-1}})\varphi^J_j} \delta^{i_k}_j \varphi^{(I,J,k)}_i, \end{equation} where $(I,J,k)$ is given by inserting $J$ into $I$ in place of the $k$-th element of $I$; \hbox{\it i.e.}, $(I,J,k)=(i_1,\cdots, i_{k-1},j_1,\cdots, j_{\ell(J)},i_{k+1},\cdots, i_{\ell(I)})$. Let us recast the notion of an infinitesimal deformation in terms of the language of coderivations. We say that \begin{equation*} d_t=d+t\psi \end{equation*} is an infinitesimal deformation of the codifferential $d$ precisely when $[d_t,d_t]=0 \mod t^2$. This condition immediately reduces to the cocycle condition $D(\psi)=0$. Note that we require $d_t$ to be odd, so that $\psi$ must be an odd coderivation. One can introduce a more general idea of parameters, allowing both even and odd parameters, in which case even coderivations play an equal role, but we will not adopt that point of view in this paper. For associative algebras, we require that $d$ and $\psi$ lie in $\mbox{\rm Hom}(W^2,W)$. This notion naturally generalizes to considering $d$ simply to be an arbitrary odd codifferential, in which case we would obtain an \mbox{$A_\infty$}\ algebra, a natural generalization of an associative algebra. More generally, we need the notion of a versal deformation, in order to understand how the moduli space is glued together. To explain versal deformations we introduce the notion of a deformation with a local base.
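The composition formula (\ref{braform}) can itself be checked mechanically. The sketch below (illustrative only; the parity signs are ignored, so all $v_i$ are treated as even and only the unsigned part of the formula is verified) compares a direct insertion-by-insertion composition of two basis cochains with the right-hand side of (\ref{braform}), using $0$-based basis labels:

```python
import itertools
import numpy as np

m = 2  # dim W; basis indices 0, 1

def phi(I, i):
    """Basis cochain: phi^I_i(v_J) = delta^I_J v_i, stored as an array."""
    arr = np.zeros((m,) * len(I) + (m,))
    arr[tuple(I) + (i,)] = 1.0
    return arr

def circ(a, b):
    """Composition as a sum over insertions of b into a (signs omitted)."""
    p, q = a.ndim - 1, b.ndim - 1
    out = np.zeros((m,) * (p + q - 1) + (m,))
    for idx in itertools.product(range(m), repeat=p + q - 1):
        for s in range(p):
            inner = b[idx[s:s + q]]
            for k in range(m):
                if inner[k]:
                    out[idx] += inner[k] * a[idx[:s] + (k,) + idx[s + q:]]
    return out

def insertion_formula(I, i, J, j):
    """Unsigned right-hand side: sum over k with i_k = j of phi^{(I,J,k)}_i."""
    out = np.zeros((m,) * (len(I) + len(J) - 1) + (m,))
    for k in range(len(I)):
        if I[k] == j:                       # the delta^{i_k}_j factor
            out += phi(I[:k] + J + I[k + 1:], i)   # the multi-index (I,J,k)
    return out

I, i, J, j = (0, 1, 0), 1, (1, 0), 0
assert np.allclose(circ(phi(I, i), phi(J, j)), insertion_formula(I, i, J, j))
```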
A local base $A$ is a \mbox{$\Z_2$}-graded commutative, unital $\mbox{$\mathbb K$}$-algebra with an augmentation $\epsilon:A\rightarrow\mbox{$\mathbb K$}$, whose kernel $\mbox{$\mathfrak m$}$ is the unique maximal ideal in $A$, so that $A$ is a local ring. It follows that $A$ has a unique decomposition $A=\mbox{$\mathbb K$}\oplus\mbox{$\mathfrak m$}$ and $\epsilon$ is just the projection onto the first factor. Let $W_A=W\otimes A$ equipped with the usual structure of a right $A$-module. Let $T_A(W_A)$ be the tensor algebra of $W_A$ over $A$, that is $T_A(W_A)=\bigoplus_{k=0}^\infty T^k_A(W_A)$ where $T^0_A(W_A)=A$ and $T^{k+1}_A(W_A)=T^k_A(W_A)\otimes_A W_A$. It is a standard fact that $T^k_A(W_A)=T^k(W)\otimes A$ in a natural manner, and thus $T_A(W_A)=T(W)\otimes A$. Any $A$-linear map $f:T_A(W_A)\rightarrow T_A(W_A)$ is induced by its restriction to $T(W)\otimes \mbox{$\mathbb K$}=T(W)$, so we can view an $A$-linear coderivation $\delta_A$ on $T_A(W_A)$ as a map $\delta_A:T(W)\rightarrow T(W)\otimes A$. A morphism $f:A\rightarrow B$ induces a map $$f_*:\operatorname{Coder}_A(T_A(W_A))\rightarrow \operatorname{Coder}_B(T_B(W_B))$$ given by $f_*(\delta_A)=(1\otimes f)\delta_A$; moreover, if $\delta_A$ is a codifferential then so is $f_*(\delta_A)$. A codifferential $d_A$ on $T_A(W_A)$ is said to be a deformation of the codifferential $d$ on $T(W)$ if $\epsilon_*(d_A)=d$. If $d_A$ is a deformation of $d$ with base $A$ then we can express \begin{equation*} d_A=d+\varphi \end{equation*} where $\varphi:T(W)\rightarrow T(W)\otimes\mbox{$\mathfrak m$}$. The condition for $d_A$ to be a codifferential is the Maurer-Cartan equation, \begin{equation*} D(\varphi)+\frac12[\varphi,\varphi]=0. \end{equation*} If $\mbox{$\mathfrak m$}^2=0$ we say that $A$ is an infinitesimal algebra and a deformation with base $A$ is called infinitesimal.
A typical example of an infinitesimal base is $\mbox{$\mathbb K$}[t]/(t^2)$; moreover, the classical notion of an infinitesimal deformation $$d_t=d+t\varphi$$ is precisely an infinitesimal deformation with base $\mbox{$\mathbb K$}[t]/(t^2)$. A local algebra $A$ is complete if \begin{equation*} A=\displaystyle{\invlm}_kA/\mbox{$\mathfrak m$}^k. \end{equation*} A complete, local augmented $\mbox{$\mathbb K$}$-algebra will be called formal and a deformation with a formal base is called a formal deformation. An infinitesimal base is automatically formal, so every infinitesimal deformation is a formal deformation. An example of a formal base is $A=\mbox{$\mathbb K$}[[t]]$ and a deformation of $d$ with base $A$ can be expressed in the form $$d_t=d+t\psi_1+t^2\psi_2+\dots$$ This is the classical notion of a formal deformation. It is easy to see that the condition for $d_t$ to be a formal deformation reduces to \begin{align*} D(\psi_{n+1})=-\frac12\sum_{k=1}^{n}[\psi_k,\psi_{n+1-k}]. \end{align*} An automorphism of $W_A$ over $A$ is an $A$-linear isomorphism $g_A:W_A\rightarrow W_A$ making the diagram below commute. \begin{figure}[h!] $$\xymatrix{ W_A \ar[r]^{g_A} \ar[d]^{\epsilon_*} & W_A \ar[d]^{\epsilon_*} \\ W \ar[r]^I & W}$$ \end{figure} The map $g_A$ is induced by its restriction to $T(W)\otimes\mbox{$\mathbb K$}$, so we can view $g_A$ as a map $$g_A:T(W)\rightarrow T(W)\otimes A,$$ and express $g_A$ in the form $$g_A=I+\lambda,$$ where $\lambda:T(W)\rightarrow T(W)\otimes\mbox{$\mathfrak m$}$. If $A$ is infinitesimal then $g_A^{-1}=I-\lambda$. Two deformations $d_A$ and $d_A'$ are said to be equivalent over $A$ if there is an automorphism $g_A$ of $W_A$ over $A$ such that $g_A^*(d_A)=d_A'$. In this case we write $d'_A\sim d_A$. An infinitesimal deformation $d_A$ with base $A$ is called universal if whenever $d_B$ is an infinitesimal deformation with base $B$, there is a unique morphism $f:A\rightarrow B$ such that $f_*(d_A)\sim d_B$.
\begin{thm} If $\dim H^2_{odd}(d)<\infty$ then there is a universal infinitesimal deformation $\mbox{$d^\infty$}$ of $d$, given by $$\mbox{$d^\infty$}=d+\delta^it_i$$ where $H^2_{odd}(d)=\langle\bar{\delta^i}\rangle$ and $A=\mbox{$\mathbb K$}[t_i]/(t_it_j)$ is the base of the deformation. \end{thm} A formal deformation $d_A$ with base $A$ is called versal if, given any formal deformation $d_B$ with base $B$, there is a morphism $f:A\rightarrow B$ such that $f_*(d_A)\sim d_B$. Notice that the difference between the versal and the universal property of infinitesimal deformations is that $f$ need not be unique. A versal deformation is called \emph{miniversal} if $f$ is unique whenever $B$ is infinitesimal. The basic result about versal deformations is: \begin{thm} If $\dim H^2_{odd}(d)<\infty$ then a miniversal deformation of $d$ exists. \end{thm} In this paper we will only need the following result to compute the versal deformations. \begin{thm} Suppose that $H^2_{odd}(d)=\langle\bar{\delta^i}\rangle$ and $[\delta^i,\delta^j]=0$ for all $i,j$. Then the infinitesimal deformation $$\mbox{$d^\infty$}=d+\delta^it_i$$ is miniversal, with base $A=\mbox{$\mathbb K$}[[t_i]].$ \end{thm} The construction of the moduli space as a geometric object is based on the idea that codifferentials which can be obtained by deformations with small parameters are ``close'' to each other. From the small deformations, we can construct 1-parameter families or even multi-parameter families, which are defined for small values of the parameters, except possibly when the parameters vanish. If $d_t$ is a one parameter family of deformations, then two things can occur. First, it may happen that $d_t$ is equivalent to a certain codifferential $d'$ for every small value of $t$ except zero. Then we say that $d_t$ is a jump deformation from $d$ to $d'$. It will never occur that $d'$ is equivalent to $d$, so there are no jump deformations from a codifferential to itself.
Otherwise, the codifferentials $d_t$ will all be nonequivalent if $t$ is small enough. In this case, we say that $d_t$ is a smooth deformation. In \cite{fp10}, it was proved for Lie algebras that given three codifferentials $d$, $d'$ and $d''$, if there are jump deformations from $d$ to $d'$ and from $d'$ to $d''$, then there is a jump deformation from $d$ to $d''$. The proof of the corresponding statement for associative algebras is essentially the same. Similarly, if there is a jump deformation from $d$ to $d'$, and a family of smooth deformations $d'_t$, then there is a family $d_t$ of smooth deformations of $d$, such that every deformation in the image of $d'_t$ lies in the image of $d_t$, for sufficiently small values of $t$. In this case, we say that the smooth deformation of $d$ factors through the jump deformation to $d'$. In the examples of complex moduli spaces of Lie and associative algebras which we have studied, it turns out that there is a natural stratification of the moduli space of $n$-dimensional algebras by orbifolds, where the codifferentials on a given stratum are connected by smooth deformations, which don't factor through jump deformations. These smooth deformations determine the local neighborhood structure. The strata are connected by jump deformations, in the sense that any smooth deformation from a codifferential on one stratum to another stratum factors through a jump deformation. Moreover, all of the strata are given by projective orbifolds. In fact, in all the complex examples we have studied, the orbifolds are either single points, or $\mbox{$\mathbb C$}\mathbb P^n$ quotiented out by either $\Sigma_{n+1}$ or a subgroup, acting on $\mbox{$\mathbb C$}\mathbb P^n$ by permuting the coordinates. We don't have a concrete proof at this time, but we conjecture that this pattern holds in general. In other words, we believe the following conjecture. 
\begin{con}[Fialowski-Penkava] The moduli space of Lie or associative algebras of a fixed finite dimension $n$ is stratified by projective orbifolds, with jump deformations and smooth deformations factoring through jump deformations providing the only deformations between the strata. \end{con} \section{Deformations of the elements in the moduli space} \subsection{$d_1=\psa{23}2-\psa{32}2+\psa{22}3-\psa{33}3$} The matrix of this codifferential is $$\left[ \begin {array}{ccccccccc} 0&0&0&0&0&0&0&0&0\\\noalign{\medskip} 0&0&0&0&0&1&0&-1&0\\\noalign{\medskip} 0&0&0&1&0&0&0&0&-1 \end {array} \right],$$ This is the direct sum of the $1|1$-dimensional complex simple algebra and the trivial $1|0$-dimensional algebra. This algebra is not unital. Its center is spanned by $\{v_1, v_3\}$. We have $H^n=\langle\pha{1^n}1\rangle$ for $n>0$. Since $H^2$ has no odd elements, the algebra is rigid. \subsection{$d_2=\psa{33}3+\psa{31}1+\psa{32}2$} The matrix of this codifferential is $$\left[ \begin {array}{ccccccccc} 0&0&0&0&0&0&1&0&0\\\noalign{\medskip} 0&0&0&0&0&0&0&1&0\\\noalign{\medskip} 0&0&0&0&0&0&0&0&1 \end {array} \right],$$ This algebra is the first of a family of rigid extensions of the $0|1$-dimensional simple algebras. We have $H^1=\langle\pha22,\pha21,\pha12\rangle$, and $H^n=0$ otherwise. Its opposite algebra is $d_3$. \subsection{$d_3=\psa{33}3-\psa{13}1-\psa{23}2$} The matrix of this codifferential is $$\left[ \begin {array}{ccccccccc} 0&0&0&0&-1&0&0&0&0\\\noalign{\medskip} 0&0&0&0&0&-1&0&0&0\\\noalign{\medskip} 0&0&0&0&0&0&0&0&1 \end {array} \right],$$ This algebra is the second of a family of rigid extensions of the $0|1$-dimensional simple algebras. We have $H^1=\langle\pha22,\pha21,\pha12\rangle$, and $H^n=0$ otherwise. Its opposite algebra is $d_2$. 
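The $3\times 9$ matrices printed in this section encode the coefficients of each codifferential. They can be regenerated mechanically, as in the following sketch; the column ordering $(1,1),(1,2),(2,1),(2,2),(1,3),(2,3),(3,1),(3,2),(3,3)$ is inferred from the printed matrices and is our assumption, as are all names:

```python
# Build the 3x9 matrix of a codifferential d = sum c_{ijk} psi^{ij}_k
# on a 2|1-dimensional space.  COLS is the inferred column ordering
# (an assumption cross-checked against the matrices printed above).
COLS = [(1, 1), (1, 2), (2, 1), (2, 2), (1, 3), (2, 3), (3, 1), (3, 2), (3, 3)]

def codifferential_matrix(coeffs):
    """coeffs maps (i, j, k) to the coefficient of psi^{ij}_k."""
    m = [[0] * 9 for _ in range(3)]
    for (i, j, k), c in coeffs.items():
        m[k - 1][COLS.index((i, j))] = c
    return m

# d_1 = psi^{23}_2 - psi^{32}_2 + psi^{22}_3 - psi^{33}_3
d1 = codifferential_matrix({(2, 3, 2): 1, (3, 2, 2): -1,
                            (2, 2, 3): 1, (3, 3, 3): -1})
```

Running this reproduces the matrix printed for $d_1$, which is how the assumed column ordering was checked.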
\subsection{$d_4=\psa{33}3+\psa{31}1-\psa{23}2$} The matrix of this codifferential is $$\left[ \begin {array}{ccccccccc} 0&0&0&0&0&0&1&0&0\\\noalign{\medskip} 0&0&0&0&0&-1&0&0&0\\\noalign{\medskip} 0&0&0&0&0&0&0&0&1 \end {array} \right],$$ This algebra is the third of a family of rigid extensions of the $0|1$-dimensional simple algebras. We have $h^n=1|0$ if $n$ is odd and $h^n=0|0$ otherwise. The algebra is isomorphic to its opposite algebra. \subsection{$d_5=\psa{33}3-\psa{13}1+\psa{32}2-\psa{23}2$} The matrix of this codifferential is $$\left[ \begin {array}{ccccccccc} 0&0&0&0&-1&0&0&0&0\\\noalign{\medskip} 0&0&0&0&0&-1&0&1&0\\\noalign{\medskip} 0&0&0&0&0&0&0&0&1 \end {array} \right],$$ This algebra is the fourth of a family of rigid extensions of the $0|1$-dimensional simple algebras. We have $H^n=\langle\pha{2^n}2\rangle$ for all $n$. Its opposite algebra is $d_6$. \subsection{$d_6=\psa{33}3+\psa{31}1+\psa{32}2-\psa{23}2$} The matrix of this codifferential is $$\left[ \begin {array}{ccccccccc} 0&0&0&0&0&0&1&0&0\\\noalign{\medskip} 0&0&0&0&0&-1&0&1&0\\\noalign{\medskip} 0&0&0&0&0&0&0&0&1 \end {array} \right],$$ This algebra is the fifth of a family of rigid extensions of the $0|1$-dimensional simple algebras. We have $H^n=\langle\pha{2^n}2\rangle$ for all $n$. Its opposite algebra is $d_5$. \subsection{$d_7=\psa{33}3+\psa{32}2$} The matrix of this codifferential is $$\left[ \begin {array}{ccccccccc} 0&0&0&0&0&0&0&0&0\\\noalign{\medskip} 0&0&0&0&0&0&0&1&0\\\noalign{\medskip} 0&0&0&0&0&0&0&0&1 \end {array} \right],$$ This algebra is the fifth of a family of rigid extensions of the $0|1$-dimensional simple algebras. We have \begin{align*} H^0&=\langle\pha{}1\rangle\\ H^1&=\langle\pha{1}1\rangle\\ H^{n}&=\langle\pha{1^n}1,\pha{21^{n-1}}2\rangle, \text{ if $n>1$}. \end{align*} Since $h^2=2|0$, there are no odd elements in $H^2$, so this algebra is rigid. Its opposite algebra is $d_8$. 
\subsection{$d_8=\psa{33}3-\psa{23}2$} The matrix of this codifferential is $$\left[ \begin {array}{ccccccccc} 0&0&0&0&0&0&0&0&0\\\noalign{\medskip} 0&0&0&0&0&-1&0&0&0\\\noalign{\medskip} 0&0&0&0&0&0&0&0&1 \end {array} \right].$$ This algebra is the sixth of a family of rigid extensions of the $0|1$-dimensional simple algebras. We have \begin{align*} H^0&=\langle\pha{}1\rangle\\ H^1&=\langle\pha{1}1\rangle\\ H^{n}&=\langle\pha{1^n}1,\pha{1^{n-1}2}2\rangle, \text{ if $n>1$}. \end{align*} Since $h^2=2|0$, there are no odd elements in $H^2$, so this algebra is rigid. Its opposite algebra is $d_7$. \subsection{$d_9=\psa{33}3+\psa{31}1-\psa{13}1+\psa{32}2-\psa{23}2$} The matrix of this codifferential is $$\left[ \begin {array}{ccccccccc} 0&0&0&0&-1&0&1&0&0\\\noalign{\medskip} 0&0&0&0&0&-1&0&1&0\\\noalign{\medskip} 0&0&0&0&0&0&0&0&1 \end {array} \right].$$ This algebra is the seventh of a family of rigid extensions of the $0|1$-dimensional simple algebras. It is both unital and commutative. Since $h^2=6|0$, there are no odd elements in $H^2$, so this algebra is rigid. \subsection{$d_{10}=\psa{33}3$} The matrix of this codifferential is $$\left[ \begin {array}{ccccccccc} 0&0&0&0&0&0&0&0&0\\\noalign{\medskip} 0&0&0&0&0&0&0&0&0\\\noalign{\medskip} 0&0&0&0&0&0&0&0&1 \end {array} \right].$$ This algebra is the direct sum of the $0|1$-dimensional simple algebra $\mbox{$\mathbb C$}$ and the trivial nilpotent $2|0$-dimensional algebra. The algebra is not unital but is commutative. Since $h^2=8|0$, the algebra is rigid. In fact, except for $H^0$, the cohomology consists of all cochains in $C(M)$, so $h^n=2^{n+1}|0$ for $n>0$. 
\subsection{$d_{11}=\psa{33}3+\psa{32}2-\psa{23}2$} The matrix of this codifferential is $$\left[ \begin {array}{ccccccccc} 0&0&0&0&0&0&0&0&0\\\noalign{\medskip} 0&0&0&0&0&-1&0&1&0\\\noalign{\medskip} 0&0&0&0&0&0&0&0&1 \end {array} \right].$$ This algebra is an extension of the $0|1$-dimensional simple algebra $\mbox{$\mathbb C$}$ by the trivial $2|0$-dimensional algebra. The algebra is not unital but is commutative. Its opposite algebra is $d_{10}$. We have $H^n=\langle\pha{1^n}1,\pha{2^n}2,\psa{2^n}3\rangle$ for all $n$. The versal deformation is given by $\mbox{$d^\infty$}=d+\psa{22}3t$, which is a jump deformation to $d_1$. \subsection{$d_{12}=\psa{22}3+\psa{23}1-\psa{32}1$} The matrix of this codifferential is $$\left[ \begin {array}{ccccccccc} 0&0&0&0&0&1&0&-1&0\\\noalign{\medskip} 0&0&0&0&0&0&0&0&0\\\noalign{\medskip} 0&0&0&1&0&0&0&0&0 \end {array} \right].$$ This algebra is an extension of the $1|1$-dimensional nilpotent algebra $\delta=\psa{22}3$ by the trivial $1|0$-dimensional algebra $\mu=0$. As is true for all nilpotent algebras, it is not unital. Its center is spanned by $\{v_1,v_3\}$. We have $h^n=2|0$ when $n$ is odd and $h^n=1|1$ when $n$ is even. The versal deformation is given by \begin{equation*} \mbox{$d^\infty$}=d+\psa{31}1t-\psa{13}1t+\psa{11}3t^2-\psa{12}3t+\psa{21}3t+\psa{33}3t, \end{equation*} which is a jump deformation to $d_1$. \subsection{$d_{13}=\psa{22}3+\psa{21}3p+\psa{12}3q$} The matrix of this codifferential is $$\left[ \begin {array}{ccccccccc} 0&0&0&0&0&0&0&0&0\\\noalign{\medskip} 0&0&0&0&0&0&0&0&0\\\noalign{\medskip} 0&q&p&1&0&0&0&0&0 \end {array} \right].$$ This is a family of nilpotent extensions of the $1|1$-dimensional nilpotent algebra $\delta=\psa{22}3$ by the trivial $1|0$-dimensional algebra. The family is projectively parameterized by $(p:q)\in\mbox{$\mathbb C$}\mathbb P^1/\Sigma_2$, where the action of $\Sigma_2$ on $\mbox{$\mathbb C$}\mathbb P^1$ is given by permuting the coordinates. 
Thus $d_{13}(p:q)\sim d_{13}(q:p)$. The center of this algebra is spanned by $\{v_3\}$. The special points $(1:0)$, $(1:1)$, $(1:-1)$ and $(0:0)$ have different cohomology from the generic pattern. Generically $h^2=2|1$ and the versal deformation is given by $\mbox{$d^\infty$}=d_{13}(p+t:q)$, which is a smooth deformation along the family. For the special point $(1:1)$, $h^2=2|1$, and the generic formula for the versal deformation holds, so it is not special in terms of its deformations. For the special point $(1:0)$, $h^2=4|1$ and the versal deformation is given by $\mbox{$d^\infty$}=d+\psa{11}3t$, which is equivalent to $d_{13}(1+\sqrt{1-4t}:2t)$. For the special point $(1:-1)$, $h^2=3|1$, and the versal deformation is given by the same formula as for $(1:0)$, but this time it is equivalent to $d_{13}(1+t:-1+t+\sqrt{-t})$. Finally, the generic point $d_{13}(0:0)$ has a more interesting deformation theory. We have $h^2=5|4$, and $h^3=10|7$, so it is not surprising that there are relations on the base of the versal deformation. Its versal deformation has matrix \begin{equation*} \left[ \begin {array}{ccccccccc} 0&0&0&0&0&-t_{{1}}&t_{{1}}t_{{3}}&t_{{1}}&0\\\noalign{\medskip} 0&0&0&0&-t_{{2}}t_{{3}}&-t_{{2}}+t_{{1}}t_{{3}} &t_{{1}}t_{{4}}+t_{{2}}t_{{3}}-2\,{t_{{3}}}^{2}t_{{1}}&t_{{2}}-2\, t_{{1}}t_{{3}}&0\\\noalign{\medskip}t_{{4}}&0&t_{{3}}&1&0&0&0&0&t_{{2} }\end {array} \right]. \end{equation*} The third order deformation is versal, and there are 9 nontrivial relations \begin{align*} t_1t_4-t_2t_3=0,\quad t_1(2t_3^2-t_4)=0,\quad t_1t_3=0\\ (2t_1t_3-t_2)(t_4-t_3^2)=0,\quad t_1(t_4-t_3)=0,\quad t_1t_4=0\\ t_1t_3=0,\quad t_1t_3t_4=0,\quad t_1t_4+t_2t_3-2t_1t_3^2=0. \end{align*} The solutions to these relations are \begin{equation*} t_3=t_4=0,\quad t_1=t_2=0, \end{equation*} which means the base of the versal deformation is given by two planes that intersect transversally at the origin. 
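The two solution planes can be verified by direct substitution into the nine relations. A minimal sketch in plain Python (function and variable names are ours, not the authors'):

```python
# The nine relations on the base of the versal deformation of d_13(0:0),
# as polynomials in (t1, t2, t3, t4), copied from the display above.
def relations(t1, t2, t3, t4):
    return [t1*t4 - t2*t3,
            t1*(2*t3**2 - t4),
            t1*t3,
            (2*t1*t3 - t2)*(t4 - t3**2),
            t1*(t4 - t3),
            t1*t4,
            t1*t3,
            t1*t3*t4,
            t1*t4 + t2*t3 - 2*t1*t3**2]
```

On either plane $\{t_3=t_4=0\}$ or $\{t_1=t_2=0\}$, every relation vanishes for arbitrary values of the remaining parameters, while a generic point off both planes violates them.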
The first solution gives the codifferential \begin{equation*} \mbox{$d^\infty$}=d+\psa{32}1t_1-\psa{23}1t_1+\psa{32}2t_2-\psa{23}2t_2+\psa{33}3t_2, \end{equation*} which is equivalent to $d_1$ except on the line $t_2=0$, where it jumps to $d_{12}$. The second solution gives the codifferential \begin{equation*} \mbox{$d^\infty$}=d+\psa{21}3t_3+\psa{11}3t_4, \end{equation*} which gives jump deformations to $d_{13}(p:q)$ for all $(p:q)$ except $(0:0)$. This behaviour is consistent with the behaviour of the generic point $(0:0)$ in $\mbox{$\mathbb C$}\mathbb P^1$, which is dense in that space. \subsection{$d_{14}=\psa{21}3-\psa{12}3$} The matrix of this codifferential is $$\left[ \begin {array}{ccccccccc} 0&0&0&0&0&0&0&0&0\\\noalign{\medskip} 0&0&0&0&0&0&0&0&0\\\noalign{\medskip} 0&-1&1&0&0&0&0&0&0 \end {array} \right].$$ This algebra is an extension of the $0|1$-dimensional trivial algebra $\mbox{$\mathbb C$}$ by the trivial $2|0$-dimensional algebra. The algebra is commutative. We have $h^2=6|3$ and $h^3=8|4$, so one might expect there to be relations on the base of the versal deformation, but in this case there aren't any relations; moreover, the infinitesimal deformation is versal. We have \begin{equation*} \mbox{$d^\infty$}=d+\psa{21}3t_1+\psa{22}3t_2+\psa{11}3t_3. \end{equation*} This gives jump deformations to $d_{13}(p:q)$ for all values of $(p:q)$ except $(1:1)$ and $(0:0)$. \subsection{$d_{15}=\psa{23}1p+\psa{32}1q$} The matrix of this codifferential is $$\left[ \begin {array}{ccccccccc} 0&0&0&0&0&p&0&q&0\\\noalign{\medskip} 0&0&0&0&0&0&0&0&0\\\noalign{\medskip} 0&0&0&0&0&0&0&0&0 \end {array} \right].$$ The algebras in this family are extensions of the trivial $0|1$-dimensional algebra $\mbox{$\mathbb C$}$ by the trivial $2|0$-dimensional algebra. They can also be considered as extensions of the trivial $1|0$-dimensional algebra by the trivial $1|1$-dimensional algebra. The family is parameterized projectively by $(p:q)\in\mbox{$\mathbb C$}\mathbb P^1$. 
However, in this case, unlike the $d_{13}(p:q)$ case, there is no action of the group $\Sigma_2$. In other words, $d_{15}(p:q)$ is not equivalent to $d_{15}(q:p)$ in general. The algebra is not commutative unless $p=-q$; generically its center is spanned by $\{v_1\}$. For the special points $(0:1)$, $(1:0)$, $(1:1)$ and $(1:-1)$, the cohomology and deformation theory are not generic. In the generic case, we have $h^2=1|2$ and $h^3=2|1$. The matrix of the versal deformation is \begin{equation*} \left[ \begin {smallmatrix} 0&0&0&0&{\frac {t_{{1}} \left( p-q-t_{{2}} \right) }{2(q+t_{{2}})}}&p&t_{{1}}&q+t_{{2}}&0 \\\noalign{\medskip} 0&0&0&0& -{\frac {{t_{{1}}}^{2} \left( p+q+t_{{2}} \right) \left( p-q-t_{{2}} \right) }{4p \left( q+t_{{2}} \right) ^{2}}} &-{\frac {t_{{1}} \left( p+q+t_{{2}} \right) }{2(q+t_{{2}})}} & -{\frac {{t_{{1}}}^{2} \left( p+q+t_{{2}} \right) \left( p-q-t_{{2}} \right) }{ 4\left( q+t_{{2}} \right) {p}^{2}}}&0&0 \\\noalign{\medskip}0&0&0&0&0&0&0&0&t_{{1}}\end {smallmatrix} \right]. \end{equation*} However, there is one relation: \begin{equation*} t_1^2(p+q+t_2)(p-q-t_2)/p^2=0, \end{equation*} which in the generic case has only the solution $t_1=0$, which simplifies the formula for the versal deformation to $\mbox{$d^\infty$}=d_{15}(p+t_2:q)$. This means that in the generic case, the deformations are only along the family. In the $(0:1)$ case, we have $h^2=2|3$ and $h^3=5|3$. The versal deformation is given by \begin{equation*} \mbox{$d^\infty$}=d+\psa{31}1t_1+\psa{33}3t_1+\psa{23}1t_2+\psa{32}2t_3+\psa{31}1(t_3+t_1t_2). \end{equation*} There are two nontrivial relations on the base of the versal deformation, \begin{equation*} t_3(t_1+t_3)=0,\quad t_2(t_1+2t_3+t_1t_2)=0, \end{equation*} which have solutions \begin{align*} t_1=t_3=0,\quad t_2=t_3=0,\quad t_2=0,t_3=-t_1,\quad t_3=0,t_2=-1,\quad t_3=-t_1, t_2=1. 
\end{align*} The last two solutions are not local, in the sense that the lines they parameterize do not pass through the origin, and thus are not relevant to the deformation picture. Thus the base of the versal deformation consists of three lines through the origin. The first line is just $d_{15}(t_2:1)$, which is a deformation along the family, the second is a jump deformation to $d_7$ and the third is a jump deformation to $d_5$. In the $(1:0)$ case, we have $h^2=2|3$ and $h^3=5|3$. Note that this algebra is the opposite algebra to $d_{15}(0:1)$, so its versal deformation could be given by the opposite algebra to the versal deformation of $d_{15}(0:1)$. In particular, we have a similar description of the solutions to the versal deformation, and we obtain deformations along the family, as well as jump deformations to $d_8$ and $d_6$, the opposite algebras to $d_7$ and $d_5$. In the $(1:1)$ case, we have $h^2=2|2$ and $h^3=4|2$. The versal deformation is given by \begin{equation*} \mbox{$d^\infty$}=d+\psa{31}1t_1+\psa{33}3t_1+\psa{32}1t_2-\psa{23}2\frac{t_1(t_2+2)}{2(t_2+1)} -\psa{31}1\frac{t_1t_2}{2(t_2+1)}+\psa{13}2\frac{t_1^2t_2(t_2+1)}{4(t_2+1)^2}. \end{equation*} Note that $\mbox{$d^\infty$}$ is expressed as a rational function of the parameters rather than a polynomial. This means that the versal deformation is not given by a finite order deformation (in terms of this basis of $H^2$), but rather by a power series. It is very interesting to note that in the examples which we have studied, it has always been possible to find a basis in which the expression of the versal deformation is given by a rational expression, although we do not know if this is true in general. There is one nontrivial relation on the base, $t_1^2t_2(t_2+2)/(1+t_2)=0$, which has two local solutions, $t_1=0$ or $t_2=0$. Thus the base of the versal deformation is given by two lines through the origin. 
The first line gives $\mbox{$d^\infty$}=d_{15}(1:1+t_2)$, which is a deformation along the family, while the second line is a jump deformation to $d_4$. In the $(1:-1)$ case, we have $h^2=4|3$ and $h^3=5|4$. We omit the expression for the versal deformation, because it is a bit nasty. However, there are two local solutions to the relations on the base of the versal deformation, $t_2=0$ or $t_1=t_3=0$, which means the base is given by a plane and a line intersecting transversally at the origin. The first solution gives \begin{equation*} \mbox{$d^\infty$}=d-\psa{13}1t_1+\psa{31}1t_2+\psa{33}3t_1+\psa{22}3t_3+\psa{11}3t_1^2t_3 -\psa{12}3t_1t_3-\psa{21}3t_1t_3. \end{equation*} When neither $t_1$ nor $t_3$ vanishes, the deformation is equivalent to $d_1$, while on the line $t_3=0$, it jumps to $d_{11}$, and on the line $t_1=0$, it jumps to $d_{12}$. The second solution is $\mbox{$d^\infty$}=d_{15}(1:-1+t_2)$, which is just a deformation along the family. Note that for this family, the generic point $d_{15}(0:0)$ is just the zero codifferential, which obviously has jump deformations to every codifferential in the moduli space. \bibliographystyle{amsplain} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR } \providecommand{\MRhref}[2]{% \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\section{Introduction} Humans are able to perform a wide variety of tasks with great flexibility; learning new motions is relatively easy, and adapting to new situations (e.g. change in the environment or body growth) is usually handled with no particular effort. The strategies adopted by the central nervous system (CNS) to master the complexity of the musculoskeletal apparatus and provide such performance are still not clear. However, it has been speculated that an underlying modular organization of the CNS may simplify the control and provide the observed adaptability. There is evidence that the muscle activity necessary to perform various tasks (e.g. running, walking, keeping balance, reaching and other combined movements) may emerge from the combination of predefined muscle patterns, the so-called \textit{muscle synergies} \citep{d'Avella2003}. This organization seems to explain muscle activity across a wide range of combined movements \citep{Ivanenko2005, Cappellini2006, D'Avella2008}. The scheme of muscle synergies is inherently flexible and adaptable. Different actions are encoded by specific combinations of a small number of predefined synergies; this reduces the computational effort and the time required to learn new useful behaviors. The learning scheme can be regarded as developmental since information previously acquired (i.e. synergies) can be reused to generate new behaviors \citep{Dominici2011}. Finally, improved performance can be easily achieved by introducing additional synergies. Thus, the hypothetical scheme of muscle synergies would contribute to the autonomy and the flexibility observed in biological systems, and it could inspire new methods to endow artificial agents with such desirable features. In this paper we propose a method to control a dynamical system (i.e. the agent) in point-to-point reaching tasks by linear combinations of a small set of predefined actuations (i.e. synergies). 
Our method initially solves the task in state variables by interpolation; then, it identifies the combination of synergies (i.e. actuation) that generates the kinematic trajectory closest to the computed interpolant. Additionally, we propose a strategy to synthesize a small set of synergies that is tailored to the task and the agent. The overall method can be interpreted in a developmental fashion; i.e. it allows the agent to autonomously synthesize and update its own synergies to increase the performance in new reaching tasks. Other researchers in robotics and control engineering have recently proposed architectures inspired by the concept of muscle synergies. In \citep{Nori2005} the authors derive an analytical form of a set of primitives that can drive a feedback linearized system (known analytically) to any point of its configuration space. In \citep{Alessandro2012} the authors present a numerical method to identify synergies that optimally drive the system over a set of desired trajectories. This method does not require an analytical description of the system, and it has the advantage of assessing the quality of the synergies in task space. However, it is computationally expensive as it involves heavy optimizations. In \citep{Todorov2003} muscle synergies are identified by applying an unsupervised learning procedure to a collection of sensory-motor data obtained by actuating a robot with random signals. In \citep{Schaal05} the architecture of the dynamic movement primitives (DMP) is proposed as a novel tool to formalize control policies in terms of predefined differential equations. Linear combinations of Gaussian functions are used as inputs to modify the attractor landscapes of these equations, and to obtain the desired control policy. In contrast to these works, our method to synthesize synergies does not rely on feedback linearization, nor on repeated integrations of the dynamical system. 
The method is grounded in the input-output relation of the dynamical system (as in \citep{Todorov2003}), and it provides a computationally fast method to obtain the synergy combinators that solve a given task. Furthermore, our method is inherently adaptable as it allows the on-line modification of the set of synergies to accommodate new reaching tasks. \section{Definitions and Methods\label{sec:methods}} In this section we introduce the mathematical details of the method we propose. After some definitions, we present the core element of our method: a general procedure to compute actuations that solve point-to-point reaching tasks (see Sec. \protect\ref{sec:solTask}). Subsequently, in Section \protect\ref{sec:synthesis}, we propose a framework for the synthesis and the development of a set of synergies. Let us consider a differential equation modeling a physical system\\* $\mathcal{D}\left(\boldsymbol{q}(t)\right) = \boldsymbol{u}(t)$, where $\boldsymbol{q}(t)$ represents the time-evolution of its configuration variables (their derivatives with respect to time are $\boldsymbol{\dot{q}}(t)$), and $\boldsymbol{u}(t)$ is the actuation applied. Inspired by the hypothesis of muscle synergies\footnote{With respect to the model of time-varying synergies, in this paper we neglect the synergy onset times.} \citep{d'Avella2003}, we formulate the actuation as a linear combination of predefined motor co-activation patterns: \begin{equation}\label{exp:synergies} \boldsymbol{u}(t) = \sum_{i=1}^{{N_\phi}} \boldsymbol{\phi}{}_i(t)b_i := \boldsymbol{\Phi}(t)\vps{b}, \end{equation} \noindent where the functions $\boldsymbol{\phi}{}_i(t) \in \boldsymbol{\Phi}$ are called \textit{motor synergies}. The notation $\boldsymbol{\Phi}(t)$ describes a formal matrix where each column is a different synergy. 
If we consider a time discretization, $\boldsymbol{\Phi}(t)$ becomes a $N\dim(\boldsymbol{q})$-by-${N_\phi}$ matrix, where $N$ is the number of time steps, $\dim(\boldsymbol{q})$ the dimension of the configuration space and ${N_\phi}$ the number of synergies. We define \textit{dynamic responses} (DR) of the set of synergies as the responses $\vbq{}_i(t)\in \boldsymbol{\Theta}$ of the system to each synergy (i.e. forward dynamics): \begin{equation} \mathcal{D}(\vbq{}_i(t))=\boldsymbol{\phi}{}_i(t) \hspace{5mm} i=1,\dots,{N_\phi}.\label{exp:syngDR} \end{equation} \noindent with initial conditions chosen arbitrarily. \subsection{Solution to point-to-point reaching tasks\label{sec:solTask}} A general point-to-point reaching task consists in reaching a final state $\left(\boldsymbol{q}_T,\boldsymbol{\dot{q}}_T\right)$ from an initial state $\left(\boldsymbol{q}_0,\boldsymbol{\dot{q}}_0\right)$ in a given amount of time $T$: \begin{align}\label{exp:task} \begin{gathered} \boldsymbol{q}(0) = \boldsymbol{q}_0 , \quad \boldsymbol{\dot{q}}(0) = \boldsymbol{\dot{q}}_0, \\ \boldsymbol{q}(T) = \boldsymbol{q}_T , \quad \boldsymbol{\dot{q}}(T) = \boldsymbol{\dot{q}}_T. \end{gathered} \end{align} \noindent Controlling a system to perform such tasks amounts to finding the actuation $\boldsymbol{u}(t)$ that fulfills the point constraints\footnote{In this paper we assume that the initial conditions of the system are equal to $\left(\boldsymbol{q}_0,\boldsymbol{\dot{q}}_0\right)$.} \eqref{exp:task}. Specifically, assuming that the synergies are known, the goal is to identify the appropriate synergy combinators $\vps{b}$. In this paper we consider only the subclass of reaching tasks that impose motionless initial and final postures, i.e. $\boldsymbol{\dot{q}}_T = \boldsymbol{\dot{q}}_0 = 0$. The procedure consists of, first, solving the problem in kinematic space (i.e. finding the appropriate $\boldsymbol{q}(t)$), and then computing the corresponding actuations. 
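As a toy illustration of the dynamic responses of Eq. \eqref{exp:syngDR} (a sketch under our own assumptions: the system is a unit point mass, $\mathcal{D}(q)=\ddot q$, integrated with semi-implicit Euler; this is not the paper's arm model, and all names are ours):

```python
# Dynamic response of a synergy: the trajectory obtained by forward
# integration of the system under that synergy, as in Eq. (2).
# Toy system D(q) = q'' (unit point mass), semi-implicit Euler.
def dynamic_response(phi, dt, q0=0.0, v0=0.0):
    q, v, traj = q0, v0, []
    for u in phi:          # synergy sampled at each time step
        traj.append(q)
        v += u * dt        # acceleration q'' = u
        q += v * dt
    return traj

phi = [1.0] * 100                       # constant unit synergy
theta = dynamic_response(phi, dt=0.01)  # its DR over one second
```

For the constant synergy the DR approaches the analytic response $q(t)=t^2/2$, which is a quick sanity check on the discretization.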
From the kinematic point of view, the task can be seen as an interpolation problem; i.e. $\boldsymbol{q}(t)$ is a function that interpolates the data in \eqref{exp:task}. Therefore, a set of functions is used to build the interpolant trajectory that satisfies the constraints imposed by the task; these functions are herein the dynamic responses of the synergies: \begin{equation} \boldsymbol{q}(t) = \sum_{i=1}^{{N_\theta}} \vbq{}_i(t)a_i := \boldsymbol{\Theta}(t)\vps{a},\label{exp:solutionDR} \end{equation} \noindent where the vector of combinators $\vps{a}$ is chosen such that the task is solved. As mentioned earlier, if time is discretized, $\boldsymbol{\Theta}(t)$ becomes a $N\dim(\boldsymbol{q})$-by-${N_\theta}$ matrix, where ${N_\theta}$ is the number of dynamic responses. The quality of the DRs as interpolants is evaluated in Section \protect\ref{sec:results}. Once a kinematic solution has been found (as a linear combination of DRs), the corresponding actuation can be obtained by applying the differential operator; i.e. $\mathcal{D}\left(\boldsymbol{\Theta}(t)\vps{a}\right) = \tilde{\boldsymbol{u}}(t)$. Finally, the vector $\vps{b}$ can be computed by projecting $\tilde{\boldsymbol{u}}(t)$ onto the synergy set $\boldsymbol{\Phi}$. If $\tilde{\boldsymbol{u}}(t)$ does not belong to the linear span of $\boldsymbol{\Phi}$, the solution can only be approximated in terms of a defined norm (e.g. Euclidean): \begin{equation} \vps{b} = \argmin_{\vps{b}} ||\tilde{\boldsymbol{u}}(t) - \boldsymbol{\Phi}(t)\vps{b}||.\label{eq:minimization} \end{equation} \noindent When time is discretized, all functions of time become vectors, and this equation can be solved explicitly using the pseudoinverse of the matrix $\boldsymbol{\Phi}$, \begin{equation} \boldsymbol{\Phi}^{+}\tilde{\boldsymbol{u}} = \boldsymbol{\Phi}^{+}\mathcal{D}\left(\boldsymbol{\Theta}\vps{a}\right) = \vps{b}. 
\end{equation} \noindent This equation highlights the operator $\boldsymbol{\Phi}^{+}\circ\mathcal{D}\circ\boldsymbol{\Theta}$ ($\circ$ denotes operator composition) as the mapping between the kinematic combinators $\vps{a}$ (kinematic solution) and the synergy combinators $\vps{b}$ (dynamic solution). Generically, this operator represents a nonlinear mapping $\mathcal{M}:\mathbb{R}^{N_\theta}\rightarrow \mathbb{R}^{N_\phi}$, and it will be discussed in Section \protect\ref{sec:discu}.\\* To assess the quality of the solution we define the following measures:\\* \emph{Interpolation error}: Measures the quality of the interpolant $\boldsymbol{\Theta}(t)\vps{a}$ with respect to the task. Strictly speaking, only the case of negligible errors corresponds to interpolation. A non-zero error indicates that the trajectory $\boldsymbol{\Theta}(t)\vps{a}$ only approximates the task \begin{equation} \operatorname{err}_I = \sqrt{||\boldsymbol{q}_T - \boldsymbol{\Theta}(T)\vps{a}||^2 + ||\dot{\boldsymbol{\Theta}}(T)\vps{a}||^2}, \end{equation} \noindent where $||\cdot||$ denotes the Euclidean norm, and differences between angles are mapped to the interval $(-\pi,\pi]$.\\* \emph{Projection error}: Measures the distance between the actuation that solves the task $\tilde{\vps{u}}(t)$, and the linear span of the synergy set $\boldsymbol{\Phi}$ \begin{equation} \operatorname{err}_P = \sqrt{\int_0^T ||\tilde{\boldsymbol{u}}(t) - \boldsymbol{\Phi}(t)\vps{b}||^2 \mathrm{d} t}. \end{equation} \noindent \emph{Forward dynamics error}: Measures the error of a trajectory $\tilde{\vps{q}}(t,\vps{\lambda})$ generated by an actuation $\boldsymbol{\Phi}(t)\vps{\lambda}$, in relation to the task. \begin{equation} \operatorname{err}_F = \sqrt{||\tilde{\boldsymbol{q}}(T,\vps{\lambda}) - \boldsymbol{q}_T||^2 + ||\vps{\dot{\tilde{q}}}(T,\vps{\lambda}) - \boldsymbol{\dot{q}}_T||^2}. 
\end{equation} \noindent Replacing $\tilde{\boldsymbol{q}}(t,\vps{\lambda})$, $\boldsymbol{q}_T$ and $\boldsymbol{\dot{q}}_T$ with their corresponding end-effector values provides the \underline{forward dynamics error of the end-effector}. \subsection{Synthesis and Development of Synergies\label{sec:synthesis}} The synthesis of synergies is carried out in two phases: exploration and reduction. The exploration phase consists in actuating the system with an extensive set of motor signals $\boldsymbol{\Phi}_0$ in order to obtain the corresponding DRs $\boldsymbol{\Theta}_0$. The reduction phase consists in solving a small number of point-to-point reaching tasks in kinematic space (that we call \textit{proto-tasks}) by creating the interpolants using the elements of set $\boldsymbol{\Theta}_0$, as described in Eq. \eqref{exp:solutionDR}. These solutions are then taken as the elements of the reduced set $\boldsymbol{\Theta}$. Finally, the synergy set $\boldsymbol{\Phi}$ is computed using relation \eqref{exp:syngDR}, i.e. inverse dynamics. As a result, there will be as many synergies as proto-tasks (i.e. ${N_\phi} = {N_\theta}$). The intuition behind this reduction is that the synergies that solve the proto-tasks may capture essential features both of the task and of the dynamics of the system. Despite the non-linearities of $\mathcal{D}$, linear combinations of these synergies might be useful to solve point-to-point reaching tasks that are similar (in terms of Eq. \eqref{exp:task}) to the proto-tasks (see Sec. \protect\ref{sec:results}). The number of proto-tasks as well as their specific instances determine the quality of the synergy-based controller. To obtain good performance in a wide variety of point-to-point reaching tasks, the proto-tasks should cover relevant regions of the state space (see Sec. \protect\ref{sec:results}). Clearly, the higher the number of different proto-tasks, the more regions can be reached with good performance. 
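After time discretization, the two steps of Sec. \protect\ref{sec:solTask} (choosing $\vps{a}$ so that $\boldsymbol{\Theta}(t)\vps{a}$ meets the boundary conditions, then projecting the required actuation onto the synergy set) reduce to a least-squares solve and a pseudoinverse. A minimal sketch with NumPy; all names and the toy data below are ours, not taken from the paper's arm model:

```python
import numpy as np

def kinematic_combinators(Theta0, dTheta0, ThetaT, dThetaT, q0, qT):
    """Least-squares solve of the rest-to-rest boundary constraints, Eq. (3)."""
    C = np.vstack([Theta0, dTheta0, ThetaT, dThetaT])
    rhs = np.concatenate([q0, np.zeros_like(q0), qT, np.zeros_like(qT)])
    a, *_ = np.linalg.lstsq(C, rhs, rcond=None)
    return a

def synergy_combinators(Phi, u_tilde):
    """Projection of the required actuation onto span(Phi), Eqs. (5)-(6)."""
    return np.linalg.pinv(Phi) @ u_tilde

# 1-DoF toy: four DRs spanning {1, t, t^2, t^3} on [0, 1]; reaching q = 1
# from rest at q = 0 recovers the cubic 3 t^2 - 2 t^3.
a = kinematic_combinators(
    Theta0=np.array([[1.0, 0.0, 0.0, 0.0]]),   # DR values at t = 0
    dTheta0=np.array([[0.0, 1.0, 0.0, 0.0]]),  # DR velocities at t = 0
    ThetaT=np.array([[1.0, 1.0, 1.0, 1.0]]),   # DR values at t = T = 1
    dThetaT=np.array([[0.0, 1.0, 2.0, 3.0]]),  # DR velocities at t = T
    q0=np.array([0.0]), qT=np.array([1.0]))

# If the required actuation lies in span(Phi), the projection is exact.
Phi = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = synergy_combinators(Phi, Phi @ np.array([2.0, -1.0]))
```

When $\tilde{\vps{u}}$ lies outside the span of the synergies, the same pseudoinverse returns the minimizer of Eq. \eqref{eq:minimization}, and the residual is the projection error.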
However, a large number of proto-tasks (and the corresponding synergies) increases the dimensionality of the controller. In order to tackle this trade-off, we propose a procedure that parsimoniously adds a new proto-task only when and where it is needed: if the performance in a new reaching task is not satisfactory, we add a new proto-task in one of the regions with the highest projection error, or we modify existing ones. \section{Results\label{sec:results}} We apply the methodology described in Section \protect\ref{sec:methods} to a simulated planar kinematic chain (see \citep{Hollerbach1982} for model details) modeling a human arm \citep{Muceli2010}. In the exploration phase, we employ an extensive set of motor signals $\boldsymbol{\Phi}_0$ to actuate the arm model and generate the corresponding dynamic responses $\boldsymbol{\Theta}_0$. The panels in the first row of Fig. \protect\ref{fig:2DoF_Exploration} show the end-effector trajectories resulting from the exploration phase. We test two different classes of motor signals: actuations that generate minimum jerk end-effector trajectories ($100$ signals), and low-passed uniformly random signals ($90$ signals). In order to evaluate the validity of the general method described in Sec. \protect\ref{sec:solTask}, we use the sets $\boldsymbol{\Phi}_0$ and $\boldsymbol{\Theta}_0$ to solve $13$ different reaching tasks without performing the reduction phase. The second row of Fig. \protect\ref{fig:2DoF_Exploration} depicts the trajectories drawn by the end-effector when the computed mixtures of synergies are applied as actuations (i.e. forward dynamics of the solution). Note that the nature of the solutions (as well as that of the responses) depends on the class of actuations used. The maximum errors are reported in Table \protect\ref{table:exploration_error}. The results are highly satisfactory for both classes of actuation, and show the validity of the method proposed.
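For concreteness, the three error measures defined above can be evaluated numerically along the following lines (a minimal sketch in Python; the sampled-trajectory representation and all names are our own, not the paper's):

```python
import numpy as np

def wrap_angle(dq):
    """Map angle differences to the interval (-pi, pi]."""
    return -((-np.asarray(dq) + np.pi) % (2 * np.pi) - np.pi)

def interpolation_error(q_T, q_interp_T, qdot_interp_T):
    """err_I: terminal error of the interpolant Theta(T) a against the task."""
    dq = wrap_angle(np.asarray(q_interp_T) - np.asarray(q_T))
    return float(np.sqrt(np.sum(dq ** 2) + np.sum(np.asarray(qdot_interp_T) ** 2)))

def projection_error(u_task, u_span, dt):
    """err_P: L2 distance between the task actuation u~(t) and its
    reconstruction Phi(t) b, both sampled on a time grid with step dt."""
    diff = np.asarray(u_task) - np.asarray(u_span)
    return float(np.sqrt(np.sum(diff ** 2) * dt))

def forward_dynamics_error(q_sim_T, qdot_sim_T, q_T, qdot_T):
    """err_F: terminal state error of the forward-simulated trajectory."""
    dq = wrap_angle(np.asarray(q_sim_T) - np.asarray(q_T))
    dqdot = np.asarray(qdot_sim_T) - np.asarray(qdot_T)
    return float(np.sqrt(np.sum(dq ** 2) + np.sum(dqdot ** 2)))
```

A rectangle rule is used for the time integral in $\operatorname{err}_P$; any quadrature would do.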
Since the reduction phase has not been performed, the dimension of the combinator vectors $\vps{a}$ and $\vps{b}$ equals the number of actuations used in the exploration. \begin{table}[hptb] \centering \begin{tabular}{|c|cc|} \hline & Min. Jerk & Random \\\hline $\operatorname{err}_I$ & $10^{-15}$ & $10^{-15}$ \\\hline $\operatorname{err}_P$ & $10^{-5}$ & $10^{-3}$ \\\hline $\operatorname{err}_F$ & $10^{-4}$ & $10^{-3}$ \\\hline \end{tabular} \protect\caption{Order of the maximum errors obtained by using $\boldsymbol{\Phi}_0$ and $\boldsymbol{\Theta}_0$ (no reduction phase).} \label{table:exploration_error} \end{table} The objective of the reduction phase is to generate a small set of synergies and DRs that can solve desired reaching tasks effectively. As described in Section \protect\ref{sec:synthesis}, this is done by solving a handful of proto-tasks. The number (and the instances) of these proto-tasks determines the quality of the controller. Figure \protect\ref{fig:2DoF_Reduction} shows the projection error as a function of the number of proto-tasks. The reduction is applied to the low-passed random signal set. Initially, two targets are chosen randomly (top left panel); subsequent targets are then added in the regions characterized by higher projection error. As can be seen, the introduction of new proto-tasks leads to better performance on wider regions of the end-effector space, and eventually the whole space can be reached with reasonable errors. In fact, the figure shows that this procedure decreases the average projection error to $10^{-3}$ (comparable to the performance of the whole set $\boldsymbol{\Phi}_0$, see Tab. \protect\ref{table:exploration_error}) and reduces the dimension of the combinator vector to $6$, a fifteen-fold reduction. This result shows that a set of ``good'' synergies can drastically reduce the dimensionality of the controller, while maintaining similar performance.
The bottom right panel of the figure shows the forward dynamics error of the end-effector obtained with the $6$ proto-tasks. Comparing this panel with the bottom left one, it can be seen that the forward dynamics error of the end-effector reproduces the distribution of the projection error, rendering the latter a good estimate for task performance. To further demonstrate that the reduction phase we propose is not trivial, we compare the errors resulting from the set of $6$ synthesized synergies with the errors corresponding to $100$ random subsets of size $6$ drawn from the set of low-passed random motor signals. Figure \protect\ref{fig:2DoF_compareReduction} shows this comparison. The task consists of reaching the $13$ targets in Fig. \protect\ref{fig:2DoF_Exploration}. The boxplots correspond to the errors of the random subsets, and the filled circles to the errors of the synergies resulting from the reduction phase. Observe that the order of the error of the reduced set is, in the worst case, equal to that of the best random subset. However, the mean error of the reduced set is about $2$ orders of magnitude lower. Therefore, the reduction by proto-tasks can produce a parsimonious set of synergies out of an extensive set of actuations. Evaluating the performance with different classes of proto-tasks (e.g. catching, hitting, via-points) is left for future work. \begin{figure}[htbp] \centering \includegraphics[width=0.8\textwidth]{Fig1.pdf} \protect\caption{Comparison of explorations with two different classes of actuation: minimum jerk and low-passed random signal. Each panel shows the kinematic chain in its initial posture (straight segments). The limits of the end-effector are shown as the solid boundary line.} \label{fig:2DoF_Exploration} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.85\textwidth]{Fig2.pdf} \protect\caption{Selection of targets based on projection error.
Each panel shows the kinematic chain in its initial posture (straight segments). The limits of the end-effector are the boundary of the colored regions. The color of each point indicates the projection error produced to reach a target in that position. The bottom right diagram shows the forward dynamics error of the end-effector using 6 proto-tasks (6 synergies).} \label{fig:2DoF_Reduction} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=\textwidth]{Fig3.pdf} \protect\caption{Evaluation of the reduction phase. Errors produced by subsets randomly selected from the exploration-actuations (boxplots) are compared with the errors obtained after the reduction phase (filled circles).} \label{fig:2DoF_compareReduction} \end{figure} \section{Discussion\label{sec:discu}} The results shown in the previous section justify the interpretation of the methodology as a developmental framework. Initially, the agent explores its sensory-motor system employing a variety of actuations. Later, it attempts to solve the first reaching tasks (proto-tasks), perhaps obtaining weak performance as the exploration phase may not have produced enough responses yet (see the box-plots in Fig. \protect\ref{fig:2DoF_compareReduction}). If the agent finds an acceptable solution to a proto-task, it is used to generate a new synergy (populating the set $\boldsymbol{\Phi}$), otherwise it continues with the exploration. The failure to solve tasks of importance for its survival could motivate the agent to include additional proto-tasks; Figure \protect\ref{fig:2DoF_Reduction} illustrates this mechanism. As can be seen, the development of the synergy set incrementally improves the ability of the agent to perform point-to-point reaching. Alternatively, existing proto-tasks could be modified by means of gradient descent or other learning algorithms.
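The developmental loop just described (add a proto-task where coverage is worst, otherwise keep the current synergy set) can be sketched abstractly as follows; the callables and the tolerance are hypothetical stand-ins for the paper's procedure, supplied by the caller:

```python
import numpy as np

def develop_proto_tasks(candidate_targets, evaluate_error, add_proto_task,
                        tol, max_protos):
    """Greedy development loop (a sketch): keep adding a proto-task at the
    worst-covered candidate target until every target is reached within
    tol, or the budget of proto-tasks is exhausted.

    evaluate_error(target) -> current projection error at that target;
    add_proto_task(target) -> solves a proto-task there and extends the
    synergy set. Returns the number of proto-tasks added.
    """
    n_protos = 0
    while n_protos < max_protos:
        errors = np.array([evaluate_error(t) for t in candidate_targets])
        worst = int(np.argmax(errors))
        if errors[worst] <= tol:
            break  # every candidate target is already covered well enough
        add_proto_task(candidate_targets[worst])
        n_protos += 1
    return n_protos
```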
In a nutshell, the methodology we propose endows the agent with the ability to autonomously generate and update a set of synergies (and dynamic responses) that solve reaching tasks effectively. Despite the difficulty of the mathematical problem (i.e. a nonlinear differential operator), our method seems to generate a small set of synergies that span the space of actuations required to solve reaching tasks. This is not a trivial result, since these synergies outperform many other sets of synergies randomly taken from the set $\boldsymbol{\Phi}_0$ (see Fig. \protect\ref{fig:2DoF_compareReduction}). It appears as if the reduction phase builds features upon the exploration phase that are necessary to solve new reaching tasks. To verify whether solving proto-tasks plays a fundamental role, our synergies could be compared with the principal components extracted from the exploration set. This verification goes beyond the scope of this paper. An important aspect of our method is the relation between $\boldsymbol{\Theta}$ and $\boldsymbol{\Phi}$ (see Eq. \eqref{exp:syngDR}). This mapping makes explicit use of the body parameters (embedded in the differential operator $\mathcal{D}$), hence the synergies obtained can always be realized as actuations. The same cannot be said, in general, for synergies identified from numerical analyses of biomechanical data. Though some studies have verified the feasibility of extracted synergies as actuations \citep{Neptune2009}, biomechanical constraints are not explicitly included in the extraction algorithms. Additionally, Eq. \eqref{exp:syngDR} provides an automatic way to cope with smooth variations of the morphology of the agent. That is, both the synergies and their dynamic responses evolve together with the body. In line with \cite{Nori2005, Alessandro2012}, these observations highlight the importance of the body in the hypothetical modularization of the CNS.
Once the task is solved in kinematic space, the corresponding actuation can be computed using the explicit inverse dynamical model of the system (i.e. the differential operator $\mathcal{D}$). It might appear that there is no particular advantage in projecting this solution onto the synergy set. However, the differential operator might be unknown. In this case, a synergy-based controller would make it possible to compute the appropriate actuation by evaluating the mapping $\mathcal{M}$ on the vector $\vps{a}$, hence obtaining the synergy combinators $\vps{b}$. Since $\mathcal{M}$ is a mapping between two finite low-dimensional vector spaces, estimating this map may turn out to be easier than estimating the differential operator $\mathcal{D}$. Furthermore, we believe that the explicit use of $\mathcal{D}$ may harm the biological plausibility of our method. In order to estimate the map $\mathcal{M}$, the input-output data generated during the exploration phase (i.e. $\boldsymbol{\Phi}_0$ and $\boldsymbol{\Theta}_0$) could be used as a learning data-set. Further work is required to test these ideas. Additionally, preliminary theoretical considerations (not reported here) indicate that the synthesis of synergies without the explicit knowledge of $\mathcal{D}$ is also feasible. Finally, the current formulation of the method does not include joint limits explicitly. The interpolated trajectories are valid, i.e. they do not go beyond the limits, due to the simplicity of the boundaries. In higher dimensions, especially when configuration space and end-effector are not mapped one-to-one, this may not be the case anymore. Nevertheless, joint limits can be included by reformulating the interpolation as a constrained minimization problem. Another solution might be the creation of proto-tasks with a tree-topology, relating our method to tree-based path planning algorithms \citep{Shkolnik2011}.
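As a concrete illustration of estimating $\mathcal{M}$ from the exploration data, one could start from a linear least-squares fit of combinator pairs; linearity is our simplifying assumption and is not claimed above:

```python
import numpy as np

def fit_linear_map(A, B):
    """Least-squares estimate of a linear stand-in for the map M in
    b = M(a), from exploration pairs.

    A: (n_samples, N_theta) kinematic combinators (one per row),
    B: (n_samples, N_phi)   corresponding synergy combinators.
    Returns M of shape (N_phi, N_theta) such that b ~ M @ a.
    """
    X, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(B), rcond=None)
    return X.T
```

A nonlinear regressor trained on the same $(\vps{a}, \vps{b})$ pairs would be the natural next step.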
\section{Conclusion and Future Work} \label{sec:conclusions} The current work introduces a simple framework for the generation of open-loop controllers based on synergies. The framework is applied to a planar kinematic chain to solve point-to-point reaching tasks. Synergies synthesized during the reduction phase outperform hundreds of arbitrary choices of basic controllers taken from the exploration motor signals. Furthermore, our results confirm that the introduction of new synergies improves performance in reaching tasks. Overall, this shows that our method is able to generate effective synergies, greatly reducing the dimensionality of the problem, while keeping a good performance level. Additionally, the methodology offers a developmental interpretation of the emergence of task-related synergies that could be validated experimentally. Due to the nonlinear nature of the operator $\mathcal{D}$, the theoretical grounding of the method poses a difficult challenge, and it is the focus of our current research. Another interesting line of investigation is the validation of our method against biological data, paving the way towards a predictive model for the hypothesis of muscle synergies. Similarly, the development of an automatic estimation process for the mapping $\mathcal{M}$ would further increase the biological plausibility of the model. The inclusion of joint limits into the current formulation must be prioritized. Solving this problem will make it possible to test the method on higher-dimensional redundant systems. Tree-based path planning algorithms may offer a computationally effective way to approach the issue.
\vspace{2mm} \small{ \noindent{\bf Funds}: The research leading to these results has received funding from the European Community's Seventh Framework Programme FP7/2007-2013-Challenge 2 - Cognitive Systems, Interaction, Robotics- under grant agreement No 248311-AMARSi, and from the EU project RobotDoC under 235065 from the 7th Framework Programme (Marie Curie Action ITN). \noindent{\bf Authors' Contributions}: {\bf CA} and {\bf JPC} worked on the implementation of the algorithm and the generation of the results reported here. The method was born during {\bf JPC}'s visit to {\bf AD}'s laboratory. {\bf AD} provided material support for this development and countless conceptual inputs. All three authors have contributed to the creation of the manuscript. The author list follows alphabetical order. } \bibliographystyle{unsrt}
\section{Introduction} Even after the discovery of the Higgs-like particle at the Large Hadron Collider (LHC), the phenomenology of the Higgs sector has not been fully revealed. The ATLAS and CMS experiments reported their recent results on the signal strengths of the Higgs-like boson (defined as the ratio of the measured Higgs signal cross section to the SM prediction) for its decay into diphoton ($\gamma\gamma$) and diboson ($ZZ$ and $WW$)~\cite{ATLAS:2013gamma,ATLAS:2013Z,ATLAS:2013W,CMS:2013gamma,CMS:2013Z,CMS:2013W}. According to these results, the signal strengths of $H \to \gamma\gamma$, $ZZ$ and $WW$ turn out to be $1.65\pm0.24^{+0.25}_{-0.18}$, $1.7^{+0.5}_{-0.4}$ and $1.01\pm0.31$ at the ATLAS experiment, while $0.78\pm0.27$ (MVA based), $0.91^{+0.30}_{-0.24}$ and $0.71\pm0.37$ (cut based) at the CMS experiment. These results are consistent with the SM, but there is still room for new physics effects in these processes. In this work, we put bounds on universal extra dimension (UED) models. UED is a candidate for new physics, in which all the SM particles propagate in extra compactified spatial dimensions. The five-dimensional minimal UED (mUED) model, a minimal extension of the SM without tree-level brane-localized terms, which is constructed on $S^1/Z_2$~\cite{Appelquist:2000nn}, has been well studied. Six-dimensional UED models with various two-dimensional compactified spaces have also been considered.
We investigate the 6D UED models based on the two-torus, $T^2/Z_2$~\cite{Appelquist:2000nn}, $T^2/Z_4$~\cite{Dobrescu:2004zi,Burdman:2005sr}, $T^2/(Z_2 \times Z'_2)$~\cite{Mohapatra:2002ug}, on the two-sphere, $S^2/Z_2$~\cite{Maru:2009wu} and $S^2$, and on the non-orientable manifolds, namely the real projective plane $RP^2$~\cite{Cacciapaglia:2009pa} and the projective sphere (PS)~\cite{Dohi:2010vc}, by putting bounds on the Kaluza-Klein (KK) scale from the results of the Higgs signal search and the electroweak precision measurements~\footnote{See also Ref.~\cite{Antoniadis:1990ew} for an earlier proposal of a TeV scale extra dimension.}. For details of these models, see for example Refs.~\cite{Nishiwaki:2011gm,Nishiwaki:2011gk}. For bounds on the UED models from the electroweak precision measurements, we use the $S$ and $T$ parameters defined as in Refs.~\cite{Peskin:1990zt,Peskin:1991sw}, which are quantities based on the two-point functions of the gauge bosons in the electroweak sector. The recent constraints on the $S$ and $T$ parameters are given in Ref.~\cite{Baak:2012kk}. For the bounds from the Higgs signal search, we use the recent results obtained in Refs.~\cite{ATLAS:2013gamma,ATLAS:2013Z,ATLAS:2013W,CMS:2013gamma,CMS:2013Z,CMS:2013W} for each decay process. In order to calculate these quantities in the UED models, we need to know the ultraviolet (UV) cutoff scale from the viewpoint of the four-dimensional effective theory. To search for the highest possible UV cutoff scale, we have evaluated the vacuum stability bound on the Higgs potential by solving the renormalization group equations, whose details will be shown in Ref.~\cite{KNOOW:UED2013}. \section{Higgs signal at the LHC in UED models} \subsection{Prediction on Higgs signal in UED models} The Higgs signal at the LHC can be divided into two parts, production and decay processes. Higgs production at the LHC mainly comes from gluon fusion through the top loop.
Its production cross section is about ten times larger than those of the other channels in the SM. On the other hand, the Higgs decays into diphoton and digluon are also induced as loop processes, mediated mainly by top and $W$ boson loops. The branching ratio of the diphoton decay is ten times smaller than those of the other decay channels, but the diphoton signal can be seen clearly at the collider. The KK towers in the UED models affect such loop processes. The experimental data give constraints on the signal strength, which is defined by the ratio of the cross sections, \al{ \frac{\sigma ^\text{exp}_{pp \rightarrow H \rightarrow X}}{\sigma ^\text{SM}_{pp \rightarrow H \rightarrow X}} , } where $X=\gamma \gamma,ZZ,WW,$ etc. We compute it for the gluon fusion production channel in the UED models: \al{ \frac{\sigma ^\text{UED}_{gg\rightarrow H \rightarrow X}}{\sigma ^\text{SM}_{gg\rightarrow H \rightarrow X}} &\simeq \frac{\Gamma ^\text{UED}_{H \rightarrow gg} \Gamma ^\text{UED}_{H\rightarrow X} /\Gamma ^\text{UED}_{H}} {\Gamma ^\text{SM}_{H \rightarrow gg} \Gamma ^\text{SM}_{H\rightarrow X} / \Gamma ^\text{SM}_{H}} \label{signal-st-2gamma}, } where $\Gamma ^{UED/SM}_{H}$ is the total decay width of the Higgs in the UED/SM case and \al{ \label{Eq:GF} \hat\sigma^\text{UED}_{gg\to H} &= \frac{\pi^2}{8 M_H} \Gamma ^\text{UED}_{H\rightarrow gg} \delta (\hat{s} - M_H^2), \\ \label{Eq:Hgg} \Gamma ^\text{UED}_{H\rightarrow gg} &= K \frac{\alpha^2_s}{8\pi^2}\frac{M_H^3}{v^2_{EW}} |J^\text{SM}_t + J^\text{KK}_t|^2. } $K$ is the K-factor, which accounts for the higher order QCD corrections, $\alpha_s = \frac{g_s^2}{4\pi}$ is the fine structure constant for QCD, $v_{EW} \simeq$ 246 GeV is the vacuum expectation value of the Higgs at the weak scale, and $J^\text{SM/KK}_t$ denotes the SM/KK top loop function, defined as in Ref.~\cite{Nishiwaki:2011gm, Nishiwaki:2011vi}.
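Numerically, Eq.~\eqref{signal-st-2gamma} is simple arithmetic on partial widths. A sketch with toy numbers (the channel labels and width values below are illustrative placeholders, not model output):

```python
def signal_strength(widths_ued, widths_sm, channel):
    """Gluon-fusion signal strength for H -> channel:
    mu = [Gamma_gg^UED * Br^UED(channel)] / [Gamma_gg^SM * Br^SM(channel)],
    with Br = Gamma_channel / Gamma_total (narrow-width approximation).
    widths_* map channel names to partial widths in arbitrary units."""
    tot_ued = sum(widths_ued.values())
    tot_sm = sum(widths_sm.values())
    num = widths_ued["gg"] * widths_ued[channel] / tot_ued
    den = widths_sm["gg"] * widths_sm[channel] / tot_sm
    return num / den
```

With identical width tables the signal strength is $1$ in every channel, as it must be.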
The diphoton decay width takes the following form: \al{ \label{Eq:H2gamma} \Gamma ^\text{UED}_{H\rightarrow \gamma \gamma} &= \frac{\alpha^2 G^2_F M_H^3}{8\sqrt{2}\pi^3} \left| J^\text{SM}_W + J^\text{KK}_W + \frac43 (J^\text{SM}_t +J^\text{KK}_t) \right|^2, } where $\alpha = \frac{e^2}{4\pi}$ and $G_F$ are the fine structure constant for QED and the Fermi constant, respectively. The SM/KK $W$ boson loop functions $J^\text{SM/KK}_W$ are also defined in Ref.~\cite{Nishiwaki:2011gm, Nishiwaki:2011vi}. For the final states $X=ZZ/WW$, we can approximate $\Gamma ^\text{UED}_{H\rightarrow ZZ/WW} \sim \Gamma ^\text{SM}_{H\rightarrow ZZ/ WW}$, because the Higgs decays into $ZZ/WW$ boson pairs at tree level, and hence the KK loop contributions are negligible. As an illustration, we show in Fig.~\ref{Fig:UEDillustration} the KK loop effect on the branching ratios and on the UED/SM ratio of the diphoton Higgs decay as well as of the digluon one, in the $ T^2/Z_2$ model. The UED/SM ratio of $H \rightarrow gg$ (cyan line in the left panel) is always enhanced in this range, while that of $H \rightarrow \gamma \gamma$ (green line in the left panel) is suppressed, as already seen in Refs.~\cite{Nishiwaki:2011gm, Nishiwaki:2011vi}. For a KK compactification scale of 1 TeV, the digluon decay rate is enhanced by a factor of 1.5 with respect to the SM, while the diphoton decay rate is suppressed by 10\%. The signal strength $pp \to H \to \gamma\gamma$ then becomes about 1.35 times larger than in the SM. \begin{figure}[t] \begin{center} \includegraphics[clip,width=170mm]{Fig1.pdf} \caption{UED/SM ratio of the Higgs decay rate (left) and the branching ratios (right) as functions of the KK scale $M_\text{KK}$ for the final states $bb$ (red), $cc$ (red dashed), $\tau\tau$ (magenta), $ZZ$ (blue), $WW$ (blue dashed), $gg$ (cyan), and $\gamma\gamma$ (green). Both left and right figures are for the $T^2/Z_2$ model.
} \label{Fig:UEDillustration} \end{center} \end{figure} The branching ratios for the diphoton, diboson, and fermionic final states are suppressed compared with those in the SM, because of the enhancement of the digluon decay rate, as shown in the right panel of Fig.~\ref{Fig:UEDillustration}. The enhancement of the $H \to gg$ rate is due to the KK top contributions in the loop diagrams. The reason for the suppression of $H \rightarrow \gamma \gamma$ is as follows. Each KK fermion mode is vectorlike, and hence has twice the degrees of freedom of its zero mode. Therefore the negative contributions of the KK fermions to the decay rate become larger than the positive ones coming from the KK $W$ loops. The sum of all the KK loops gives a negative contribution, while that of the SM ones is positive. \subsection{Bound on KK scale from current data} As shown above, the UED models give a different production cross section in gluon fusion (GF). On the other hand, the other production channels, namely vector boson fusion (VBF), Higgs-strahlung (VH), and associated production with a $t \bar{t}$ pair (ttH), are the same as in the SM. ATLAS and CMS have reported the proportions of these production channels for each event category of $H \rightarrow \gamma \gamma, ZZ$ and $WW$~\cite{ATLAS:2013gamma,ATLAS:2013Z,ATLAS:2013W,CMS:2013gamma,CMS:2013Z,CMS:2013W}. We take these contributions into account. The details are given in~\cite{KNOOW:UED2013}. Figure \ref{Fig:Higgs} shows the bounds on the KK scale from all the ATLAS and CMS results in the $H \to \gamma \gamma, WW, ZZ$ channels. The blue solid, dashed, and dotted lines show the results in the $T^2$-based models on $T^2/Z_2$, $T^2/(Z_2 \times Z_2')$, and $T^2/Z_4$, respectively. Similarly, the red solid and dashed lines show those in the $S^2$-based models on $S^2$ and $S^2/Z_2$, and the green solid and dashed lines those on the non-orientable manifolds $RP^2$ and $PS$. The black line corresponds to the mUED model.
We can see from Fig.~\ref{Fig:Higgs} that the lower bound on the KK scale in the mUED model is around 650 GeV at the 95$\%$ confidence level (CL). The six-dimensional models are subject to more stringent bounds than the mUED model. This is because the number of states at each KK excited level in the six-dimensional models is larger than in the mUED one. The bounds on the KK scale in the six-dimensional models $T^2/Z_2$, $T^2/(Z_2 \times Z_2')$ and $ T^2/Z_4$ are 1080 GeV, 950 GeV, and 830 GeV, respectively. Those on $S^2$ and $S^2/Z_2$ are 1330 GeV and 920 GeV, while those on $RP^2$ and PS are 880 GeV and 1230 GeV, respectively. \begin{figure}[t] \begin{center} \includegraphics[clip,width=100mm]{UEDboundHiggs.pdf} \caption{Exclusion CLs of all the UED models as functions of the KK scale $M_\text{KK}$, obtained using all the ATLAS and CMS results for $H \rightarrow \gamma \gamma, WW, ZZ$. The blue solid, dashed and dotted lines show the results in the $T^2/Z_2$, $T^2/(Z_2 \times Z_2')$ and $T^2/Z_4$ models; the red solid and dashed lines show those in the $S^2$ and $S^2/Z_2$ models; the green solid and dashed lines show those in the $RP^2$ and PS models; and the black line indicates that in the mUED model.} \label{Fig:Higgs} \end{center} \end{figure} \section{Electroweak precision constraint in UED models} Measurements related to the electroweak sector can be used to obtain indirect bounds on phenomenological models. The $S$ and $T$ parameters proposed by Peskin and Takeuchi~\cite{Peskin:1990zt,Peskin:1991sw} are very useful quantities for such a purpose. These parameters are represented as~\cite{Denner:1991kt} \al{ \frac{\alpha S}{4 s_W^2 c_W^2} &= {\Pi^{\text{T}}_{ZZ}}'(0) + \frac{c_W^2 - s_W^2}{c_W s_W} {\Pi_{Z\gamma}^{\text{T}}}'(0) - {\Pi_{\gamma \gamma}^{\text{T}}}'(0), \\ \alpha T &= \frac{\Pi_{WW}^{\text{T}}(0)}{m_W^2} - \frac{\Pi_{ZZ}^{\text{T}}(0)}{m_Z^2} + {2 c_W s_W} \frac{\Pi_{Z\gamma}^{\text{T}}(0)}{m_W^2}, } where $s_W = \sin\theta_W$ and $c_W = \cos\theta_W$ are the sine and cosine of the weak mixing angle.
The function ${\Pi^{\text{T}}_{ab}}^{(\prime)} (0)$ is the transverse component of the two-point functions of the SM gauge bosons ($ab=WW, ZZ, Z\gamma, \gamma\gamma$), which is defined as \al{ \Pi^{\mu\nu}_{ab} (k) = i\,\Pi^{\text{T}}_{ab} (k^2) \left( g^{\mu\nu} - \frac{k^{\mu}k^{\nu}}{k^2} \right) + i\,\Pi^{\text{L}}_{ab} (k^2) \frac{k^{\mu}k^{\nu}}{k^2} \label{gauge_twopointfunction}, } where $k$ is the external momentum. ${\Pi^{\text{T}}_{ab}}' (k^2)$ is defined as $\frac{d}{dk^2} \Pi^{\text{T}}_{ab}(k^2)$. Several measurable quantities are represented as functions of the $S$ and $T$ parameters, and from the global fit to the experimental results, the values of $S$ and $T$ are estimated as~\cite{Baak:2012kk} \al{ S|^\text{exp}_{U=0} = 0.05 \pm 0.09,\quad T|^\text{exp}_{U=0} = 0.08 \pm 0.07, \label{ST_experimental} } with their correlation being $+0.91$, assuming that the $U$ parameter is zero. From an operator-analysis point of view, the $U$ parameter corresponds to a higher-dimensional operator involving the Higgs doublet than those for $S$ and $T$ in the UED models, and hence we ignore its effect in our analysis. \begin{figure}[t] \begin{center} \includegraphics[clip,width=100mm]{UEDboundST.pdf} \caption{Exclusion CLs of all the UED models as functions of the KK scale $M_\text{KK}$ from the fit to the experimental results for the $S$ and $T$ parameters. Colors denote the same as in Figure~\ref{Fig:Higgs}.
} \label{Fig:STresult} \end{center} \end{figure} In the UED models, the forms of $S$ and $T$ are written as \al{ S = \sum_{s \atop \text{with }M_s < \Lambda} \left( S^{(\text{KK})}_{s,\text{boson}} + S^{(\text{KK})}_{s,\text{fermion}} \right) + S_{\text{Higgs calibration}} + S_{\text{threshold}}, \label{Sform} \\ T = \sum_{s \atop \text{with }M_s < \Lambda} \left( T^{(\text{KK})}_{s,\text{boson}} + T^{(\text{KK})}_{s,\text{fermion}} \right) + T_{\text{Higgs calibration}} + T_{\text{threshold}}, \label{Tform} } where the first two terms in Eqs.~(\ref{Sform}) and (\ref{Tform}) indicate the contributions of the KK particles via bosonic and fermionic loops, the middle terms represent the effects from the Higgs mass calibration, and the last terms show the threshold corrections via possible operators around the UV cutoff scale. In our analysis, in addition to the effects of the KK Higgs boson and the KK top quark~\cite{Appelquist:2002wb}, the effect of the KK gauge bosons is newly taken into account. The detailed forms of each term can be found in Ref.~\cite{KNOOW:UED2013}. In Fig.~\ref{Fig:STresult}, we show the bounds on the KK scales from the fit to the results in Eq.~(\ref{ST_experimental}). Colored lines correspond to each UED model in the same manner as in Fig.~\ref{Fig:Higgs}. We find that the lower bound on the KK scale in the mUED is around 700 GeV at the 95$\%$ CL. The bounds on the KK scale in $T^2/Z_2$, $T^2/(Z_2 \times Z_2')$, $ T^2/Z_4$, $S^2$, $S^2/Z_2$, $RP^2$ and PS are 1190 GeV, 1100 GeV, 900 GeV, 1500 GeV, 1050 GeV, 1020 GeV and 1410 GeV, respectively, at the 95$\%$ CL. \section{Summary} We have estimated two types of bounds on the KK scales in 5D and 6D UED models, from the Higgs search at the LHC and from the electroweak precision data via the $S$ and $T$ parameters. In the UED models, the contributions via loop diagrams including the KK top quarks and the KK gauge bosons modify the Higgs decay rate and production cross section.
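For reference, the exclusion CL of a model point against the fit of Eq.~(\ref{ST_experimental}) can be reproduced with a correlated two-parameter $\chi^2$; whether this matches the exact construction behind Fig.~\ref{Fig:STresult} is our assumption:

```python
import numpy as np

def chi2_ST(S, T, S0=0.05, T0=0.08, sig_S=0.09, sig_T=0.07, rho=0.91):
    """Correlated chi^2 of a model point (S, T) against the experimental
    central values, errors and correlation of Eq. (ST_experimental)."""
    d = np.array([S - S0, T - T0])
    cov = np.array([[sig_S ** 2, rho * sig_S * sig_T],
                    [rho * sig_S * sig_T, sig_T ** 2]])
    return float(d @ np.linalg.solve(cov, d))

def exclusion_cl(chi2):
    """Confidence level at which the point is excluded, for 2 d.o.f.
    (for two degrees of freedom, the p-value is exp(-chi2/2))."""
    return 1.0 - np.exp(-chi2 / 2.0)
```

The 95$\%$ CL contour then corresponds to $\chi^2 \simeq 5.99$.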
These contributions affect the Higgs signal strengths at the LHC. From the analysis of the results on the Higgs signal strengths in the decay modes $H \to \gamma\gamma$, $H \to ZZ$ and $H \to WW$, we find that the lower bound on the KK scale in the mUED model is 650 GeV at the 95$\%$ CL, while those in the 6D models on $T^2/Z_2$, $T^2/(Z_2 \times Z_2')$, $ T^2/Z_4$, $S^2$, $S^2/Z_2$, $RP^2$ and PS are 1080 GeV, 950 GeV, 830 GeV, 1330 GeV, 920 GeV, 880 GeV and 1230 GeV, respectively. The KK excited states of the massive SM particles (top quark, Higgs boson and gauge bosons) alter the $S$ and $T$ parameters. After evaluating these effects, the lower bound in the mUED turns out to be 700 GeV at the 95$\%$ CL, and the counterparts in the $T^2/Z_2$, $T^2/(Z_2 \times Z_2')$, $T^2/Z_4$, $S^2$, $S^2/Z_2$, $RP^2$ and PS models are 1190 GeV, 1100 GeV, 900 GeV, 1500 GeV, 1050 GeV, 1020 GeV and 1410 GeV. Comparing the bounds from the Higgs signal search with those from the electroweak measurements, we find that the latter are slightly more stringent than the former in the UED models for now. However, in the future the Higgs signal searches at the LHC will put stronger constraints on the KK scales in the UED models. \section{Acknowledgment} We are grateful to Swarup Kumar Majee for discussions at the early stages of this work. We thank Tomohiro Abe for useful comments on oblique corrections. K.N. is grateful for valuable discussions with Joydeep Chakrabortty and Daisuke Harada, and for fruitful conversations with Anindya Datta and Sreerup Raychaudhuri. K.N. is partially supported by funding available from the Department of Atomic Energy, Government of India for the Regional Centre for Accelerator-based Particle Physics (RECAPP), Harish-Chandra Research Institute. \bibliographystyle{TitleAndArxiv}
\subsection{Elementary derivations} \begin{lemma} \label{lem:hadamard_product} Products of Hadamards are antipodes: \begin{equation*} \tikzfig{figures/equations/hadamard_product} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/hadamard_product_proof} \end{equation*} \end{lproof} \begin{lemma} \label{lem:hadamard_antipode} Hadamards and antipodes commute: \begin{equation*} \tikzfig{figures/equations/hadamard_antipode} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/hadamard_antipode_proof} \end{equation*} \end{lproof} \begin{lemma} \label{lem:hadamard_inverse} The inverse Hadamard is a product of Hadamards: \begin{equation*} \tikzfig{figures/equations/hadamard_inverse} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/hadamard_inverse_proof} \end{equation*} \end{lproof} \begin{lemma} \label{lem:antipode_unit} Units absorb antipodes: \begin{equation*} \tikzfig{figures/equations/antipode_unit} \end{equation*} \end{lemma} \begin{lproof} The red equation is basically a subcase of \textsc{(M-Elim)}: \begin{equation*} \tikzfig{figures/equations/antipode_unit_proof_red} \end{equation*} and the green rule is obtained using \textsc{(Colour)}: \begin{equation*} \tikzfig{figures/equations/antipode_unit_proof_green} \end{equation*} \end{lproof} \begin{lemma} \label{lem:hadamard_euler} The Hadamards admit simple Euler decompositions: \begin{equation*} \tikzfig{figures/equations/hadamard_euler} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/hadamard_euler_proof} \end{equation*} \begin{equation*} \tikzfig{figures/equations/hadamard_euler_proof_inverse} \end{equation*} \end{lproof} \begin{lemma} \label{lem:hopf} The Hopf identity is derivable in \(\text{\textsc{\textnormal{zx}}}_p\): \begin{equation*} \tikzfig{figures/equations/hopf} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} 
\tikzfig{figures/equations/hopf_proof} \end{equation*} \end{lproof} \begin{lemma} \label{lem:scalar_elementary} The following rules hold between the ``elementary'' scalars: for any \(a,b \in \mathbb{Z}_p\), \begin{equation*} \tikzfig{figures/equations/scalar_elementary} \end{equation*} \end{lemma} \begin{lproof} \begin{equation} \tikzfig{figures/equations/scalar_elementary_proof_1} \end{equation} The rule \(\tikzfig{figures/equations/red_green_dot}\) is immediate using \textsc{(Colour)}. Then: \begin{equation} \tikzfig{figures/equations/scalar_elementary_proof_2} \end{equation} \end{lproof} \begin{lemma} \label{lem:unit_rotation_elim} Green units absorb red rotations and vice-versa: \begin{equation*} \tikzfig{figures/equations/unit_rotation_elim} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/unit_rotation_elim_proof_green} \end{equation*} \end{lproof} \begin{lemma} \label{lem:bigebra_simplified} The scalars in the bigebra law can be simplified to: \begin{equation} \tikzfig{figures/equations/bigebra_simplified} \end{equation} \end{lemma} \begin{lproof} \begin{equation} \tikzfig{figures/equations/bigebra_simplified_proof} \end{equation} \end{lproof} \begin{lemma} \label{lem:antipode_copy} The green co-multiplication copies antipodes: \begin{equation*} \tikzfig{figures/equations/antipode_copy} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/antipode_copy_proof} \end{equation*} \end{lproof} \begin{lemma} \label{lem:bigebra_mn} The bigebra law holds for arbitrary arities: for any \(m,n \in \mathbb{N}\), \begin{equation*} \tikzfig{figures/equations/bigebra_arbitrary}\quad, \end{equation*} where in the diagram on the LHS, there are \(m\) green and \(n\) red spiders, and each green spider is connected to each red spider by a single wire. \end{lemma} \begin{lproof} The cases \(m=0\) or \(n=0\) correspond to the copy rules. 
The case \(n=1\) is the antipode copy rule (lemma~\ref{lem:antipode_copy}) and the case \(m=1\) is trivial by the green identity rule. The case \(n=2,m=2\) is lemma~\ref{lem:bigebra_simplified}. The general case follows from a straightforward induction (which furthermore is analogous to the qubit case). \end{lproof} \begin{lemma} \label{lem:antipode_multiplier} The antipode can be rewritten as a multiplication: \begin{equation*} \tikzfig{figures/equations/antipode_multiplier} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/antipode_multiplier_proof} \end{equation*} \end{lproof} \begin{lemma} \label{lem:antipode_phase} For any \(x,y \in \mathbb{Z}_p\), \begin{equation*} \tikzfig{figures/equations/antipode_phase} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/antipode_phase_proof_green} \end{equation*} The rule for red spiders is obtained from the green rule: \begin{equation*} \tikzfig{figures/equations/antipode_phase_proof_red}\quad. \end{equation*} \end{lproof} \begin{lemma} \label{lem:antipode_spider} For any \(x,y \in \mathbb{Z}_p\) and \(m,n \in \mathbb{N}\), \begin{equation*} \tikzfig{figures/equations/antipode_spider} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/antipode_spider_proof} \end{equation*} The rule for red spiders is obtained using \textsc{(Colour)} like in the proof of lemma~\ref{lem:antipode_phase}. \end{lproof} \begin{lemma} \label{lem:pauli_copy_phase} Green spiders copy red Pauli phases, and vice-versa: for any \(x \in \mathbb{Z}_p\), \begin{equation*} \tikzfig{figures/equations/pauli_copy_phase} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/pauli_copy_phase_proof} \end{equation*} The other rule is obtained using \textsc{(Colour)}. 
\end{lproof} \subsection{Multipliers} \begin{lemma} \label{lem:multiplier_sum} Parallel multipliers sum: for any \(x,y \in \mathbb{Z}_p\): \begin{equation*} \tikzfig{figures/equations/multiplier_sum} \end{equation*} \end{lemma} \begin{lproof} This is a straightforward consequence of \textsc{(Spider)}. \end{lproof} \begin{lemma} \label{lem:multiplier_elim} For any \(z \in \mathbb{Z}_p^*\), \begin{equation*} \tikzfig{figures/equations/multiplier_elim} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/multiplier_elim_proof_green} \end{equation*} The right rule is obtained using \textsc{(Colour)} as previously. \end{lproof} \begin{lemma} \label{lem:multiplier_product} For any \(x,y \in \mathbb{N}\), \begin{equation*} \tikzfig{figures/equations/multiplier_product} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/multiplier_product_proof} \end{equation*} \end{lproof} \begin{lemma} \label{lem:multiplier_inverse} For any \(x \in \mathbb{Z}_p^*\), \begin{equation*} \tikzfig{figures/equations/multiplier_inverse} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/multiplier_inverse_proof} \end{equation*} The second equality follows from the \textsc{(Colour)} meta-rule. \end{lproof} Lemmas \ref{lem:multiplier_sum}--\ref{lem:multiplier_inverse} suffice to prove propositions~\ref{prop:multiplier} and~\ref{prop:weighted_hadamard}, so we consider those rules proved from this point on, and adopt the multiplier notation. 
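Under the standard interpretation, the multiplier \(x\) denotes the linear map \(\sum_{a \in \mathbb{Z}_p} \ket{xa}\bra{a}\), so the product and inverse rules above mirror ordinary arithmetic in \(\mathbb{Z}_p\). As an informal numerical sanity check (entirely outside the diagrammatic calculus; the function names below are our own), this can be verified directly:

```python
# Informal sanity check: interpret the multiplier x as the linear map
# M_x = sum_a |x*a mod p><a| and verify that composition and inverses
# follow arithmetic in Z_p, mirroring the diagrammatic rules above.
def multiplier(x, p):
    """Return M_x as a p x p 0/1 matrix (list of rows)."""
    return [[1 if r == (x * c) % p else 0 for c in range(p)] for r in range(p)]

def matmul(A, B, n):
    """Multiply two n x n integer matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

p = 7  # any odd prime works here
identity = [[1 if i == j else 0 for j in range(p)] for i in range(p)]
for x in range(1, p):
    for y in range(1, p):
        # sequential multipliers compose multiplicatively: M_x M_y = M_{xy mod p}
        assert matmul(multiplier(x, p), multiplier(y, p), p) == multiplier((x * y) % p, p)
    # invertible multipliers cancel: M_x M_{x^{-1}} = id
    assert matmul(multiplier(x, p), multiplier(pow(x, p - 2, p), p), p) == identity
print("multiplier product and inverse laws hold for p =", p)
```

Here `pow(x, p - 2, p)` computes \(x^{-1}\) via Fermat's little theorem, which is valid since \(p\) is prime.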
\begin{lemma} \label{lem:multiplier_copy} Spiders copy invertible multipliers: for any \(x \in \mathbb{Z}_p^*\), \begin{equation*} \tikzfig{figures/equations/multiplier_copy} \end{equation*} \end{lemma} \begin{lproof} (a) follows from direct calculation using the definition of multipliers: \begin{equation*} \tikzfig{figures/equations/multiplier_copy_proof_green} \end{equation*} Then (b) is obtained from (a) using already established properties of the multipliers: \begin{equation*} \tikzfig{figures/equations/multiplier_copy_proof_red} \end{equation*} \end{lproof} \begin{lproof}[of proposition~\ref{prop:local_scaling}] This proposition now follows straightforwardly using lemma~\ref{lem:multiplier_copy}, the definition of H-boxes (equation~\eqref{eq:H_box_definition}) and proposition~\ref{prop:multiplier}. \end{lproof} \begin{lemma} \label{lem:multiplier_spider} The action of multipliers on spiders is given by: \begin{equation*} \tikzfig{figures/equations/multiplier_spider} \end{equation*} \end{lemma} \begin{lproof} This follows straightforwardly using lemma~\ref{lem:multiplier_copy} and \textsc{(M-Elim)}. \end{lproof} \begin{lemma} \label{lem:clifford_states} Every pure-Clifford state can be represented in both the red and green fragments: for any \(x \in \mathbb{Z}_p^*\), \begin{equation*} \tikzfig{figures/equations/clifford_states} \end{equation*} Note that these rules span every pure-Clifford state, since there are exactly \(p-1\) rules and \(p-1\) pure-Clifford states in either colour (the case \(x=0\) is not in the pure-Clifford fragment since it is also Pauli). \end{lemma} \begin{lproof} Firstly, we prove the subcase \(x=1\) of (a): \begin{equation*} \tikzfig{figures/equations/clifford_states_proof_green} \end{equation*} Then the general case for any invertible \(x\) follows using lemma~\ref{lem:multiplier_spider}. (b) follows once again using \textsc{(Colour)}. 
\end{lproof} \begin{lemma} \label{lem:scalar_i_elim} \begin{equation*} \tikzfig{figures/equations/scalar_i_elim} \end{equation*} \end{lemma} \begin{lproof} Since \(1\) is always a square: \begin{equation*} \tikzfig{figures/equations/scalar_i_elim_proof} \end{equation*} \end{lproof} \begin{lemma} \label{lem:scalar_gauss_elim} For any \(z\in\mathbb{Z}_p^*\), \begin{equation*} \tikzfig{figures/equations/scalar_gauss_elim} \end{equation*} \end{lemma} \begin{lproof} If \(z\) is a square, then there is some \(\alpha \in \mathbb{Z}_p\) such that \begin{equation*} \tikzfig{figures/equations/scalar_gauss_elim_proof} \end{equation*} If \(-z\) is a square, then there is again some \(\alpha \in \mathbb{Z}_p\) such that \begin{equation*} \tikzfig{figures/equations/scalar_gauss_elim_proof_2} \end{equation*} Now, if neither \(z\) nor \(-z\) is a square, then by corollary~\ref{cor:at_least_one_square}, \(-1\) must be a square. We then have \begin{equation*} \tikzfig{figures/equations/scalar_gauss_elim_proof_3} \end{equation*} \end{lproof} \begin{lemma} \label{lem:scalar_omega_elim} For any \(z\in\mathbb{Z}_p\), \begin{equation*} \tikzfig{figures/equations/scalar_omega_elim} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/scalar_omega_elim_proof} \end{equation*} where we have freely used lemma~\ref{lem:antipode_phase} to commute antipodes and spiders throughout. 
\end{lproof} \begin{lemma} \label{lem:scalar_imaginary_elim} If \(p \equiv 3 \mod 4\), \begin{equation*} \tikzfig{figures/equations/scalar_imaginary_elim} \end{equation*} \end{lemma} \begin{lproof} If \(p \equiv 3 \mod 4\), \(-1\) is \emph{not} a square, so that \begin{equation*} \tikzfig{figures/equations/scalar_imaginary_elim_proof} \end{equation*} \end{lproof} \begin{lemma} \label{lem:zero_elementary} All the elementary ``zero'' diagrams are equal: \begin{equation*} \tikzfig{figures/equations/zero_elementary} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/zero_elementary_proof} \end{equation*} \end{lproof} \begin{lemma} \label{lem:zero_amplitudes} The ``zero'' diagram absorbs unlabelled elementary scalars: \begin{equation*} \tikzfig{figures/equations/zero_amplitudes} \end{equation*} \end{lemma} \begin{lproof} First note that \begin{equation*} \tikzfig{figures/equations/zero_amplitudes_proof_1} \end{equation*} so that \begin{equation*} \tikzfig{figures/equations/zero_amplitudes_proof_2} \end{equation*} and \begin{equation*} \tikzfig{figures/equations/zero_amplitudes_proof_3} \end{equation*} \end{lproof} \begin{lemma} \label{lem:zero_phases} The zero diagram absorbs the phase scalars: for any \(x \in \mathbb{Z}_p\) and \(z \in \mathbb{Z}_p^*\), \begin{equation*} \tikzfig{figures/equations/zero_phases} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/zero_phases_proof_1} \end{equation*} \begin{equation*} \tikzfig{figures/equations/zero_phases_proof_2} \end{equation*} \end{lproof} \begin{lemma} \label{lem:scalar_gauss_multiplication} Quadratic phases verify the following multiplication law: for any \(x,y \in \mathbb{Z}_p^*\), \begin{equation*} \tikzfig{figures/equations/scalar_gauss_multiplication} \end{equation*} \end{lemma} \begin{lproof} This follows from applications of \textsc{(Gauss)} and lemma~\ref{lem:scalar_i_elim}. 
\end{lproof} \begin{lemma} \label{lem:scalar_omega_multiplication} For any \(x,y\in\mathbb{Z}_p\), \begin{equation*} \tikzfig{figures/equations/scalar_omega_multiplication} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/scalar_omega_multiplication_proof} \end{equation*} where we have freely used lemma~\ref{lem:antipode_phase} to commute antipodes and spiders throughout. \end{lproof} \begin{lemma} \label{lem:hadamard_euler_better} The Euler decomposition of the Hadamards can be ``improved'' to: \begin{equation*} \tikzfig{figures/equations/hadamard_euler_better} \end{equation*} \end{lemma} \begin{lproof} This follows from applying lemma~\ref{lem:clifford_states} to the decomposition of lemma~\ref{lem:hadamard_euler}. \end{lproof} \begin{lemma} \label{lem:H_loop} Hadamard loops correspond to pure-Clifford operations: for any \(x \in \mathbb{Z}_p\) and \(z \in \mathbb{Z}_p^*\), \begin{equation*} \tikzfig{figures/equations/H_loop} \end{equation*} \end{lemma} \begin{lproof} The case \(x=0\) is clear by decomposing the H-box. We begin by proving the case \(x=1\): \begin{equation*} \tikzfig{figures/equations/H_loop_proof_green} \end{equation*} The general case can be obtained by decomposing the weighted H-box into \(x\) H-loops using the sum rule from proposition~\ref{prop:weighted_hadamard}. Then, under the assumption that the weight is invertible, the red version once again follows using \textsc{(Colour)} and the equations of proposition~\ref{prop:weighted_hadamard}: \begin{equation*} \tikzfig{figures/equations/H_loop_proof_red} \end{equation*} \end{lproof} \begin{lproof}[of proposition~\ref{prop:pauli_stabiliser}] \begin{equation*} \tikzfig{figures/equations/pauli_stabiliser_proof} \end{equation*} \end{lproof} \begin{lemma} \label{lem:local_complementation_triangle} For any \(x,y \in \mathbb{Z}_p^*\), \begin{equation*} \tikzfig{figures/equations/local_complementation_triangle} \end{equation*} \end{lemma} \begin{lproof} \begin{equation*} \tikzfig{figures/equations/local_complementation_triangle_proof} \end{equation*} and we can eliminate the \(\tikzfig{figures/equations/scalar_i}\) scalar using lemma~\ref{lem:scalar_i_elim}. \end{lproof} \begin{lemma} \label{lem:local_complementation_tree} Let \(\Sigma\) be a \(\mathbb{Z}_p\)-weighted star graph on \(N \in \mathbb{N}\) vertices, i.e.\, it is a tree with \(N-1\) leaves. Order the vertices so that the first vertex is the only internal vertex; it follows that the edges have weights \(\Sigma_{1w}\) for \(w\) ranging from \(2\) to \(N\). Then, \begin{equation*} \tikzfig{figures/equations/local_complementation_tree} \end{equation*} where \(\Sigma \overset{\gamma}{\star} 1\) is the graph obtained by adding an edge weighted by \(\gamma \Sigma_{1m}\Sigma_{1n}\) between the wires connected to each pair of vertices \(m,n \neq 1\). \end{lemma} \begin{lproof} The proof is by induction on the size \(N\) of the tree. Assume the lemma is true for any tree of size \(N-1\). 
First, bend all the wires with green phases into inputs: \begin{equation*} \tikzfig{figures/equations/local_complementation_tree_proof_bent} \end{equation*} Then, \begin{equation*} \tikzfig{figures/equations/local_complementation_tree_proof_1} \end{equation*} Then, we recognise the subtree with head \(1\) and leaves \(3\) to \(N\), which is thus a tree of size \(N-1\), to which we can apply the inductive hypothesis: \begin{equation*} \tikzfig{figures/equations/local_complementation_tree_proof_2} \end{equation*} Then, copy each \(\Sigma_{1w}\)-weighted Hadamard through the corresponding red spider on the right (which changes its colour and multiplies the weight by \(-1\)), and copy the multiplier through the green spider below it. Fusing the resulting green spiders, we get: \begin{equation*} \tikzfig{figures/equations/local_complementation_tree_proof_3} \end{equation*} Bending the inputs back to outputs completes the proof. \end{lproof} \begin{lproof}[of proposition~\ref{prop:local_complementation}] The proof of this proposition is rather cumbersome to write in standard \(\mathsf{ZX}_p^{\mathrm{Stab}}\). We give a sketch of the proof, and leave writing it out formally to future work, since it has a much clearer form in the scalable \(\mathsf{ZX}_p^{\mathrm{Stab}}\)-calculus. The idea is to split out the subtree of \(G\) with head \(w\) and leaves each of the neighbours of \(w\). Then, one applies lemma~\ref{lem:local_complementation_tree}, and adds the resulting edges into \(G\) using the additivity of parallel H-boxes (proposition~\ref{prop:weighted_hadamard}). \end{lproof} \begin{lproof}[of proposition~\ref{prop:GS+LC}] Assume w.l.o.g. that the diagram \(D\) has type \(0 \to n\) (i.e.\, we consider that the diagram has already been turned into a state via equation~\eqref{eq:choi_isomorphism}), and furthermore that it contains no multipliers (equivalently, they have been unpacked into the sugarless calculus using equation~\eqref{eq:multiplier_explicit}). 
Then, transform \(D\) as follows: \begin{enumerate} \item Use \textsc{(Colour)} to change every red spider into a green one, surrounded by Hadamards. The resulting diagram consists of only green spiders and Hadamards. \item Use \textsc{(G-Elim)} to add a green spider between any two subsequent Hadamards. The diagram now consists of only green spiders, connected either by plain edges or H-edges. \item Use \textsc{(Spider)} to fuse any two spiders connected by plain edges, to eliminate any loops, and use lemma~\ref{lem:H_loop} to eliminate any H-edge loops. The resulting diagram contains no loops, and furthermore spiders are connected only by H-edges. \item Use the following rule from proposition~\ref{prop:weighted_hadamard}: \begin{equation} \tikzfig{figures/equations/hadamard_sum} \end{equation} to fuse all H-edges between two spiders into a single weighted H-edge or no edge. \item Use \textsc{(G-Elim)} and proposition~\ref{prop:weighted_hadamard} to obtain \begin{equation} \tikzfig{figures/equations/spider_graph_split}\quad, \end{equation} which allows one to split any spider connected to more than one output into several green spiders, each connected to exactly one output. \end{enumerate} The resulting diagram is nearly in GS+LC form, except it contains ``internal'' vertices which are not connected to an output. Diagrams in this form have been called \emph{graph-like} in the literature. However, we can eliminate these as follows. Denote by \(u\) the vertex we are trying to remove, and let \((a,b)\) be its phase. If \(u\) has no neighbours, then it is a scalar diagram which we can ignore. Otherwise, it has a neighbour \(v\). Then, \begin{enumerate} \item Commute the Pauli part of the phase \((a,0)\) through the Hadamard edge from \(u\) to \(v\). This changes the colour of the spider and its phase according to the weight of the edge. 
\item Copy the resulting red spider through \(v\), resulting in a red Pauli phase on each edge from \(v\) to its neighbours, excluding \(u\). This red spider is also copied into a vertex operator at \(v\). \item Commute these red spiders through the Hadamards, changing their colour and their phase once again. They can then be absorbed into the Pauli part of the neighbours of \(v\). \end{enumerate} Thus, we can absorb the Pauli part of the phase at \(u\) into the Pauli phases of the neighbours of \(v\) (excluding \(u\)), and a Pauli vertex operator at \(v\). Then, we can eliminate vertex \(u\) using a sequence of local complementations as follows: \begin{equation*} \tikzfig{figures/equations/eliminate_internal} \end{equation*} where at each local complementation we change the connectivity of the remainder of the graph, and add a vertex operator on each neighbour. The resulting green units are connected via a normal wire to a single other green spider, to which they can be fused using \textsc{(Spider)}. Since at no point do we reintroduce internal vertices, we can repeat this process to eliminate each internal vertex. The resulting diagram is in GS+LC form. \end{lproof} \subsection{Main completeness proofs} \begin{lproof}[of proposition~\ref{prop:c1_completeness}] The single-qupit Clifford group is generated by the invertible generators under sequential composition. It therefore suffices to show that the composition of either of the above diagrams with such a generator can be rewritten to the claimed form. Now, every rewrite rule except \textsc{(Zero)} can be interpreted as an equality up to an invertible scalar, if we simply ignore the parts of the equations disconnected from both the inputs and the outputs. Thus, we can freely use any of these rules in our proof of normalisation up to invertible scalars, as well as any equation derivable without \textsc{(Zero)}. 
\emph{(Multipliers)} For the multipliers this is very straightforward: for any \(x \in \mathbb{Z}_p^*\), \begin{equation*} \tikzfig{figures/equations/c1_multiplier_forms} \end{equation*} \emph{(Hadamard)} We can ignore the weight, since it can be extracted as a multiplier, to which the previous proof applies. Then, for the first normal form, \begin{equation*} \tikzfig{figures/equations/c1_hadamard_forms_10} \end{equation*} we need to split into subcases based on the value of \(v\). If \(v=0\), then we have \begin{equation*} \tikzfig{figures/equations/c1_hadamard_forms_11} \end{equation*} where in the last step we have used \textsc{(Shear)} and \textsc{(Spider)} several times to commute the Paulis on the right back into the leftmost red and green spiders. If \(v \neq 0\), then, \begin{equation*} \tikzfig{figures/equations/c1_hadamard_forms_12} \end{equation*} The second normal form can be done in one go: \begin{equation*} \tikzfig{figures/equations/c1_hadamard_forms_2} \end{equation*} \emph{(Rotations)} The red phase is once again very straightforward: let \(x,y \in \mathbb{Z}_p\), then, \begin{equation*} \tikzfig{figures/equations/c1_phase_forms_red} \end{equation*} The green phases are more involved. Firstly, note that we can ignore the multiplier, since \(x,y\) are arbitrary. Then, the Pauli part can be straightforwardly taken care of using \textsc{(Shear)} to commute it through to the green spider on the right. Finally, we can view \(\tikzfig{figures/equations/green_phase_0y}\) as the \(y\)-fold composition of \(\tikzfig{figures/equations/green_phase_01}\), so that we have reduced the proof to the normalisation of: \begin{equation*} \tikzfig{figures/equations/c1_phase_forms_green} \end{equation*} On the right (in dashed lines) we recognise the composition of a Hadamard and a normal form, which we have already shown can be normalised. 
This done, we obtain the composition of a red spider and a normal form, which we have also already shown to be normalisable. Thus, we are done. As for the second normal form, we have: \begin{equation*} \tikzfig{figures/equations/c1_phase_forms_green_2} \end{equation*} and we have again reduced to previously solved cases. Uniqueness follows from the fact that no two of these forms are equivalent, so that there are \(p^3(p^2 - 1)\) distinct forms, which matches the cardinality of the single-qupit Clifford group. \end{lproof} \begin{lproof}[of proposition~\ref{prop:rGS+LC}] By proposition~\ref{prop:GS+LC}, every stabiliser diagram is equivalent to some GS+LC diagram, and furthermore, the vertex operators can be brought into the normal form of proposition~\ref{prop:c1_completeness}. Using proposition~\ref{prop:local_scaling}, the multiplier part of this normal form can immediately be absorbed into a local scaling of the graph. We now need to eliminate the leftmost red spider of the normal form. Consider the vertex operator acting on vertex \(u\) of the graph. If \(u\) has no neighbours, we can use lemma~\ref{lem:unit_rotation_elim} to eliminate the red spider (up to a scalar in \(\mathbb{G}\)). Otherwise, \(u\) has at least one neighbour. In this case, we can use corollary~\ref{cor:GS_red_rotation} to ``copy'' the red spider into a green spider on each neighbour of \(u\). As in the qubit case, the set \(R\) is not stable under pre-composition by a green spider. However, by the proof of proposition~\ref{prop:c1_completeness}, we see that, after normalisation, this operation applied to any element of \(R\) does not increase the number of red spiders. Thus, applying this procedure at most \(2\abs{V}\) times maps every vertex operator to one of the forms in \(R\). In order to prove the second part of the simplification, assume that the preceding procedure has been applied to the diagram, so that all vertex operators are in \(R\). 
Let \(u,v\) be neighbouring vertices such that the vertex operators of both \(u\) and \(v\) contain a red spider. We are going to use a sequence of local complementations at \(u\) and \(v\) to eliminate the red spiders on both \(u\) and \(v\). Since we have just shown that the green spiders these operations enact on the neighbours can be corrected and the vertex operators brought back to \(R\), and that local complementations and scalings preserve the GS+LC form, we ignore this action and consider only the part of the diagram connected to \(u\) and \(v\): \begin{equation*} \tikzfig{figures/equations/rGS+LC_paired_reds} \end{equation*} Then, normalising the vertex operators, we obtain \begin{equation*} \tikzfig{figures/equations/rGS+LC_paired_reds_2} \end{equation*} The red spiders can be eliminated using the previous strategy for mapping the vertex operators to \(R\), and so we have removed the pair of neighbouring red spiders. Repeating this process for each such pair renders a diagram in rGS+LC form. \end{lproof} \begin{lproof}[of proposition~\ref{prop:rGS+LC_simple}] Suppose this is not the case, i.e.\, there are vertices \(u,v\) such that \(u\) has a red vertex operator in \(A\) but not \(B\), \(v\) has a red vertex operator in \(B\) but not \(A\), and \(u,v\) are neighbours in \(A\). Then we perform a manipulation entirely analogous to the proof of proposition~\ref{prop:rGS+LC} in the diagram \(A\): \begin{equation*} \tikzfig{figures/equations/rGS+LC_simple} \end{equation*} Then, after normalisation (proposition~\ref{prop:c1_completeness}) and using proposition~\ref{prop:rGS+LC} to bring the diagram back to rGS+LC form, the vertex operator for \(u\) no longer contains a red vertex in \(A\), and the vertex operator for \(v\) contains a red vertex. Since we can do this for any pair of such red vertices in \(A\) and \(B\), it is clear that we can simplify the pair of diagrams. 
\end{lproof} \begin{lproof}[of theorem~\ref{thm:scalar_completeness}] Let \(A \in \mathsf{ZX}_p^{\mathrm{Stab}}[0,0]\); then \(A\) consists of a tensor product of connected scalar diagrams. We first show that \(A\) can be rewritten to a tensor product of elementary scalars. If this is not already the case, pick some connected scalar diagram \(A'\) contained in \(A\). This diagram \(A'\) contains at least one red or green spider, from which we can extract a unit: \begin{equation} \tikzfig{figures/equations/scalar_normalisation_units} \end{equation} where \(B \in \mathsf{ZX}_p^{\mathrm{Stab}}[0,1]\). By theorem~\ref{thm:comp} and the normal form for the single-qupit Clifford group (proposition~\ref{prop:c1_completeness}), we can rewrite \(B\) to one of the forms \begin{equation} \tikzfig{figures/equations/scalar_normalisation_unit_forms} \end{equation} since we know that these forms cover all single-qupit Clifford states. Simplifying a bit, we therefore only need to consider the following forms: \begin{equation} \label{eq:scalar_normalisation_all_scalars} \tikzfig{figures/equations/scalar_normalisation_all_scalars} \end{equation} The first of these is already an elementary scalar, and if \(s\) or \(t\) is \(0\), the second is also. If \(s=0\), the third diagram is also elementary. Thus, we can assume that \(t\neq 0 \neq s\) and treat both middle diagrams in one go: \begin{equation} \tikzfig{figures/equations/scalar_normalisation_green} \end{equation} and both of these diagrams can be normalised using \textsc{(M-Elim)} and \textsc{(Shear)}. Finally, the last scalar from equation~\eqref{eq:scalar_normalisation_all_scalars} can be rewritten to: \begin{equation} \tikzfig{figures/equations/scalar_normalisation_last}. \end{equation} We have shown that \(A\) can be rewritten to a tensor product of elementary scalars. If this tensor product contains a zero diagram, then the whole diagram is equal to the zero diagram by proposition~\ref{prop:zero_normal_forms}. 
Otherwise, \(A\) can further be rewritten to a scalar in normal form using the multiplication rules for elementary scalars, which are given by \textsc{(M-One)}, lemmas~\ref{lem:scalar_omega_multiplication} and \ref{lem:scalar_gauss_multiplication}, up to applying \textsc{(Gauss)} to decompose quadratic scalars. \end{lproof} \begin{lproof}[of lemma~\ref{lem:bipartition}] First, $D$ can be put in GS+LC form. This allows us to decompose $D$ in the following form: \begin{center} \tikzfig{figures/bipartite1} \end{center} where $X$ and $Y$ are compositions of $E$-gates constituting the edges within the same partition, together with local Clifford gates, and $G$ is a bipartite graph-state gathering all vertices having a neighbour in a different partition, together with all corresponding edges. Thus, we see that without loss of generality we can restrict to the case where $D$ is a bipartite graph-state. Assuming that $D$ is a bipartite graph-state with partitions of size $n$ and $m$ with $n\leq m$, we show how to find $A$ and $B$ such that: \begin{center} \tikzfig{figures/bipartite2} \end{center} Let us start with $n$ Bell pairs. Plugging in generalised Hadamard boxes, we get the following graph state: \begin{center} \tikzfig{figures/bipartite3} \end{center} Now, suppose we want to add an edge with weight $w$ between the vertices $x$ and $y$ of this graph. To do so, we first add an edge of weight $1$ between $x'$ and $y$, and then apply the appropriate local complementation to $x'$ to obtain the desired edge. Finally, we clean up all the unwanted edges inside the partition by using the convenient $E$-gates. 
The process is sketched in the following diagrams: \begin{center} \tikzfig{figures/bipartite4} \end{center} At the end, when all desired edges have been obtained, we still have to take care of the edges between $x$ and $x'$. Since we only kept vertices which are linked to the other partition in the final $D$, we know there is a neighbour $t$ that can be used to take care of this edge in the same way we used $x'$ before: first we add an edge between $t$ and $x'$, then we apply the needed local complementation on $t$, and finally we clean up the edges internal to the partitions. This gives the desired bipartite graph-state $D$, which concludes the proof. \end{lproof} \begin{lproof}[of proposition~\ref{prop:marked}] Assume that there is a marked vertex $v$ in $C$ which is not marked in $D$. If $v$ is isolated in $C$ but not in $D$, then the two interpretations are obviously different. The same goes if $v$ is isolated in both diagrams, since then we can clearly distinguish the two corresponding states. So the only remaining case is when $v$ has neighbours in both diagrams. We then apply the same unitary to both diagrams: first a green Clifford map on the chosen vertex, and then appropriate gates for each edge in the neighbourhood of $v$ in $C$. In $C$ we have: \begin{center} \tikzfig{figures/testproof} \end{center} And in $D$: \begin{center} \tikzfig{figures/testproof1} \end{center} Here we compute up to local Cliffords on the wires. Thus, we see that $v$ is now disconnected in $C$, while it is not in $D$. Hence $\interp{D}\neq \interp{C}$. \end{lproof} \begin{lproof}[of theorem~\ref{thm:comp}] By map-state duality, it suffices to show the result for states. First, we rewrite $A$ and $B$ into a simplified pair of rGS+LC diagrams $A'$ and $B'$, so that the marked vertices are the same. It only remains to show that the graph-state parts agree. If we apply the gates corresponding to the edges in $A'$ to both diagrams, then we obtain a completely disconnected graph in $A'$. 
The two interpretations can then only be equal if $B'$ is also completely disconnected. It then follows that the edges in both diagrams were the same. \begin{center} \tikzfig{completenessproof} \end{center} Finally, the only way for the interpretations to be the same is that all phases in the green vertices are equal, and then $A'$ can be rewritten into $B'$. So \(\mathsf{Stab}_p \vdash A = B\). \end{lproof} \subsection{Proof of soundness} \label{app:proofs:soundness} \begin{proof}[Proof of theorem~\ref{thm:soundness}] We prove that each of the equations in figure~\ref{fig:axioms} is sound, and this extends to the entire equational theory by functoriality of \(\interp{-}\). Firstly, straightforward calculation shows that, for any \(x,y,z \in \mathbb{Z}_p\): \begin{equation} \label{eq:scalars} \interp{\tikzfig{figures/equations/soundness/sqroot_d_inv}} = \frac{1}{\sqrt{p}} \qc \interp{\tikzfig{figures/equations/soundness/sqroot_d}} = \sqrt{p} \omega^{-2^{-1}xz}, \qand \interp{\tikzfig{figures/equations/soundness/d_phase}} = \sum_{m\in\mathbb{Z}_p} \omega^{2^{-1}(xm + ym^2)}. \end{equation} \textsc{(Scalar)} follows immediately from equation~\eqref{eq:scalars}. 
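These scalar values admit a quick numerical cross-check (purely illustrative and not part of the formal proof; the code and its names are our own): the quadratic phase sums above have modulus \(\sqrt{p}\) whenever the quadratic coefficient is non-zero.

```python
# Informal numerical check: |sum_m omega^{2^{-1}(x m + y m^2)}| = sqrt(p)
# whenever y != 0, matching the sqrt(p) scalars appearing above.
import cmath
import math

def phase_sum(x, y, p):
    """Evaluate sum over m in Z_p of omega^{2^{-1}(x m + y m^2)} numerically."""
    omega = cmath.exp(2j * cmath.pi / p)
    inv2 = pow(2, p - 2, p)  # 2^{-1} mod p, valid since p is an odd prime
    return sum(omega ** ((inv2 * (x * m + y * m * m)) % p) for m in range(p))

for p in (3, 5, 7, 11):
    for x in range(p):
        for y in range(1, p):
            assert math.isclose(abs(phase_sum(x, y, p)), math.sqrt(p), rel_tol=1e-9)
print("all quadratic phase sums have modulus sqrt(p)")
```

This is the standard fact that a quadratic Gauss sum over \(\mathbb{Z}_p\) with non-degenerate quadratic part has absolute value \(\sqrt{p}\); the linear term merely shifts its phase.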
\textsc{(H)} We compute: \begin{align} \interp{\tikzfig{figures/equations/soundness/H_RHS}} &= \frac{1}{\sqrt p} \sum_{x,y,z\in\mathbb{Z}_p} \omega^{2^{-1}x^2} \omega^{2^{-1}y^2} \omega^{2^{-1}z^2} \omega^{-xy} \omega^{-yz} \dyad{-x:X}{z:X} \\ &= \frac{1}{\sqrt p} \sum_{x,y,z\in\mathbb{Z}_p} \omega^{2^{-1}x^2} \omega^{2^{-1}y^2} \omega^{2^{-1}z^2} \omega^{-xy} \omega^{-yz} \omega^{xz} \omega^{-xz} \dyad{-x:X}{z:X} \\ &= \frac{1}{\sqrt p} \sum_{x,y,z\in\mathbb{Z}_p} \omega^{2^{-1}(x^2 + y^2 + z^2 - 2xy - 2yz + 2xz)} \omega^{-xz} \dyad{-x:X}{z:X} \\ &= \frac{1}{\sqrt p} \sum_{x,z\in\mathbb{Z}_p} \left( \sum_{y\in\mathbb{Z}_p} \omega^{2^{-1}(x-y+z)^2} \right) \omega^{-xz} \dyad{-x:X}{z:X} \\ &= \left( \sum_{y\in\mathbb{Z}_p} \omega^{2^{-1}y^2} \right) \frac{1}{\sqrt{p}} \sum_{x,z\in\mathbb{Z}_p} \omega^{xz} \dyad{x:X}{z:X} \\ &= \left( \sum_{y\in\mathbb{Z}_p} \omega^{2^{-1}y^2} \right) \cdot \sum_{x\in\mathbb{Z}_p} \dyad{x:X}{x:Z} \\ &= \interp{\tikzfig{figures/equations/soundness/H_LHS}}, \end{align} where we have used the change of variables \(y \mapsto y + x + z\), which preserves the sum over \(y\in\mathbb{Z}_p\) for any \(x,z \in \mathbb{Z}_p\), followed by the change of variables \(x \mapsto -x\). 
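The change of variables in this derivation can likewise be checked numerically (again purely as an informal sanity check, with notation of our own): for every \(x,z\), the shifted sum \(\sum_y \omega^{2^{-1}(x-y+z)^2}\) coincides with the plain Gauss sum \(\sum_y \omega^{2^{-1}y^2}\).

```python
# Informal check that sum_y omega^{2^{-1}(x-y+z)^2} is independent of x and z:
# the substitution y -> y + x + z merely permutes the summands modulo p.
import cmath

for p in (3, 5, 7):
    omega = cmath.exp(2j * cmath.pi / p)
    inv2 = pow(2, p - 2, p)  # 2^{-1} mod p
    gauss = sum(omega ** ((inv2 * y * y) % p) for y in range(p))
    for x in range(p):
        for z in range(p):
            shifted = sum(omega ** ((inv2 * ((x - y + z) ** 2)) % p) for y in range(p))
            assert abs(shifted - gauss) < 1e-9
print("shifted quadratic sums agree with the plain Gauss sum")
```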
\textsc{(Mult)} For any \(z \in \mathbb{Z}_p^*\), \begin{align} \interp{\tikzfig{figures/equations/soundness/Mult_RHS}} &= \sum_{a,b,c,d\in\mathbb{Z}_p} \omega^{2^{-1}z^{-1}a^2} \omega^{2^{-1}zb^2} \omega^{2^{-1}z^{-1}c^2} \omega^{-ab} \omega^{-bc} \delta_{c,d} \dyad{a:Z}{d:X} \\ &= \sum_{a,b,c\in\mathbb{Z}_p} \omega^{2^{-1}z^{-1}a^2} \omega^{2^{-1}zb^2} \omega^{2^{-1}z^{-1}c^2} \omega^{-ab} \omega^{-bc} \dyad{a:Z}{c:X} \\ &= \sum_{a,b,c\in\mathbb{Z}_p} \omega^{2^{-1}z^{-1}(a-zb+c)^2} \omega^{-z^{-1}ac} \dyad{a:Z}{c:X} \\ &= \left( \sum_{b\in\mathbb{Z}_p} \omega^{2^{-1}z^{-1}b^2} \right) \sum_{a,c\in\mathbb{Z}_p} \omega^{-z^{-1}ac} \dyad{a:Z}{c:X} \\ &= \left( \sum_{b\in\mathbb{Z}_p} \omega^{2^{-1}z^{-1}b^2} \right) \sqrt{p} \sum_{a\in\mathbb{Z}_p} \dyad{a:Z}{z^{-1}a:Z} \\ &= \left( \sum_{b\in\mathbb{Z}_p} \omega^{2^{-1}z^{-1}b^2} \right) \sqrt{p} \sum_{a\in\mathbb{Z}_p} \dyad{za:Z}{a:Z} \\ &= \left( \sum_{b\in\mathbb{Z}_p} \omega^{2^{-1}z^{-1}b^2} \right) \sum_{a,c\in\mathbb{Z}_p} \omega^{-zac} \dyad{c:X}{a:Z} \\ &= \left( \sum_{b\in\mathbb{Z}_p} \omega^{2^{-1}z^{-1}b^2} \right) \sqrt{p^z} \sum_{a,c\in\mathbb{Z}_p} \dyad{c:X}{c:X}^{\otimes z} \ket{a:Z}^{\otimes z}\bra{a:Z} \\ &= \interp{\tikzfig{figures/equations/soundness/Mult_LHS}}. \end{align} \end{proof} \subsection{Proof of soundness} \label{app:proofs:soundness} \begin{lproof}[of theorem~\ref{thm:soundness}] We prove that each of the equations in figure~\ref{fig:axioms} is sound, and this extends to the entire equational theory by functoriality of \(\interp{-}\). Firstly, straightforward calculation shows that, for any \(x,y \in \mathbb{Z}_p\), \begin{equation} \label{eq:scalars} \interp{\tikzfig{figures/equations/soundness/sqroot_d_inv}} = \frac{1}{\sqrt{p}} \qc \interp{\tikzfig{figures/equations/soundness/sqroot_d}} = \sqrt{p} \omega^{-2^{-2}xz}, \qand \interp{\tikzfig{figures/equations/soundness/sqroot_d_phaseless}} = \sqrt{p}. 
\end{equation} We also have \begin{equation} \interp{\tikzfig{figures/equations/soundness/d_phase}} = \sum_{k\in\mathbb{Z}_p} \omega^{2^{-1}(xk+yk^2)}, \end{equation} and, by the character-sum identity \(\sum_{k\in\mathbb{Z}_p}\omega^{xk} = p\delta_{x,0}\), \(\interp{\tikzfig{figures/equations/soundness/d_pauli_phase}} = p\delta_{x,0}\). \textsc{(Zero)} and \textsc{(One)} then follow immediately from equation~\eqref{eq:scalars}. \textsc{(Fusion)} For any \(a,b,c,d \in \mathbb{Z}_p\), \begin{align} \interp{\,\tikzfig{figures/equations/soundness/Fusion_LHS}\,} &=\left( \sum_{k\in\mathbb{Z}_p} \omega^{2^{-1}(ak+bk^2)} \ket{k}^{\otimes n} \bra{k}^{\otimes m} \otimes \mathrm{id}_{m'-1} \right) \\ &\quad\quad\quad \circ \left( \mathrm{id}_{n-1} \otimes \sum_{\ell\in\mathbb{Z}_p} \omega^{2^{-1}(c\ell+d\ell^2)} \ket{\ell}^{\otimes n'} \bra{\ell}^{\otimes m'} \right) \nonumber \\ &= \sum_{k,\ell\in\mathbb{Z}_p} \omega^{2^{-1}(ak+bk^2)} \omega^{2^{-1}(c\ell+d\ell^2)} \ket{k}^{\otimes n} \bra{k}^{\otimes m-1} \ip{k}{\ell} \ket{\ell}^{\otimes n'-1} \bra{\ell}^{\otimes m'} \\ &= \sum_{k,\ell\in\mathbb{Z}_p} \delta_{k,\ell} \omega^{2^{-1}(ak+bk^2)} \omega^{2^{-1}(c\ell+d\ell^2)} \ket{k}^{\otimes n} \bra{k}^{\otimes m-1} \ket{\ell}^{\otimes n'-1} \bra{\ell}^{\otimes m'} \\ &= \sum_{k\in\mathbb{Z}_p} \omega^{2^{-1}((a+c)k+(b+d)k^2)} \ket{k}^{\otimes n+n'-1} \bra{k}^{\otimes m+m'-1} \\ &= \interp{\,\tikzfig{figures/equations/soundness/Fusion_RHS}} \end{align} \textsc{(Colour)} For any \(a,b\in\mathbb{Z}_p\), \begin{align} \interp{\tikzfig{figures/equations/soundness/Colour_LHS}} &=\left( \sum_{j\in\mathbb{Z}_p} \dyad{j:X}{j:Z} \otimes \cdots \otimes \sum_{j\in\mathbb{Z}_p} \dyad{j:X}{j:Z} \right) \\ &\quad\quad\quad \circ \sum_{k\in\mathbb{Z}_p} \omega^{2^{-1}(ak+bk^2)} \dyad{k:Z}{k:Z} \nonumber \\ &\quad\quad\quad \circ \left( \sum_{\ell\in\mathbb{Z}_p} \dyad{\ell:Z}{-\ell:X} \otimes \cdots \otimes \sum_{\ell\in\mathbb{Z}_p} \dyad{\ell:Z}{-\ell:X} \right) \nonumber \\ &= \sum_{k\in\mathbb{Z}_p}
\omega^{2^{-1}(ak+bk^2)} \dyad{k:X}{-k:X} \\ &= \interp{\tikzfig{figures/equations/soundness/Colour_RHS}} \end{align} \textsc{(Shear)} For any \(a,c,d\in\mathbb{Z}_p\), \begin{align} \interp{\tikzfig{figures/equations/soundness/Shear_LHS}} &= \sqrt{p}\omega^{-2^{-2}ac} \sum_{j,k,\ell\in\mathbb{Z}_p} \omega^{2^{-1}aj} \omega^{2^{-1}(ck+dk^2)} \omega^{2^{-1}a\ell} \dyad{j}{j} \dyad{-k:X}{k:X} \dyad{\ell}{\ell} \\ &= \sqrt{p}\omega^{-2^{-2}ac} \sum_{j,k,\ell\in\mathbb{Z}_p} \omega^{2^{-1}aj} \omega^{2^{-1}(ck+dk^2)} \omega^{2^{-1}a\ell} \omega^{-jk} \omega^{-k\ell} \dyad{j}{\ell} \\ &= \sqrt{p}\omega^{-2^{-2}ac} \sum_{j,k,\ell\in\mathbb{Z}_p} \omega^{2^{-1}aj} \omega^{2^{-1}a\ell} \omega^{2^{-1}k(dk+c-2j-2\ell)} \dyad{j}{\ell} \\ &= \sqrt{p}\omega^{-2^{-2}ac} \sum_{j,k,\ell\in\mathbb{Z}_p} \omega^{2^{-1}aj} \omega^{2^{-1}a\ell} \omega^{2^{-1}(k+2^{-1}a)(d(k+2^{-1}a)+c-2j-2\ell)} \dyad{j}{\ell} \\ &= \sqrt{p}\omega^{2^{-3}da^2} \sum_{j,k,\ell\in\mathbb{Z}_p} \omega^{2^{-1}k(dk+ad+c-2j-2\ell)} \dyad{j}{\ell} \\ &= \sqrt{p}\omega^{2^{-3}da^2} \sum_{j,k,\ell\in\mathbb{Z}_p} \omega^{-2^{-1}(ck+adk+dk^2)} \omega^{-2^{-1}jk} \omega^{-2^{-1}k\ell}\dyad{j}{\ell} \\ &= \sqrt{p}\omega^{2^{-3}da^2} \sum_{k\in\mathbb{Z}_p} \omega^{-2^{-1}(ck+adk+dk^2)} \dyad{-k:X}{k:X} \\ &= \interp{\tikzfig{figures/equations/soundness/Shear_RHS}} \end{align} \textsc{(Char)} \begin{align} \interp{\tikzfig{figures/equations/soundness/Char_LHS}} &= \sqrt{p^p} \sum_{j,k\in\mathbb{Z}_p} \dyad{-k:X}{k:X}^{\otimes p} \ket{j}^{\otimes p} \bra{j} \\ &= \sum_{j,k\in\mathbb{Z}_p} \omega^{-pjk} \ket{-k:X} \bra{j} \\ &= \sum_{j,k\in\mathbb{Z}_p} \ket{-k:X} \bra{j} \\ &= \interp{\tikzfig{figures/equations/soundness/Char_RHS}} \end{align} \textsc{(Bigebra)} \begin{align} \interp{\tikzfig{figures/equations/soundness/Bigebra_LHS}} &= p \sum_{i,j,k,\ell\in\mathbb{Z}_p} \ket{-k:X} \ket{-\ell:X} \ip{k:X}{i} \ip{\ell:X}{i} \ip{k:X}{j} \ip{\ell:X}{j} \bra{i} \bra{j} \\ &= \frac{1}{p} \sum_{i,j,k,\ell\in\mathbb{Z}_p} 
\omega^{-ik} \omega^{-i\ell} \omega^{-jk} \omega^{-j\ell} \ket{-k:X} \dyad{-\ell:X}{i} \bra{j} \\ &= \frac{1}{p^3} \sum_{\substack{ i,j,k,\ell \\ m,n,s,t }\in\mathbb{Z}_p} \omega^{-ik} \omega^{-i\ell} \omega^{-jk} \omega^{-j\ell} \omega^{-km}\omega^{-n\ell}\omega^{is}\omega^{tj} \ket{m} \dyad{n}{s:X} \bra{t:X} \\ &= \frac{1}{p^3} \sum_{\substack{ i,j,k,\ell \\ m,n,s,t }\in\mathbb{Z}_p} \omega^{i(s-k-\ell)} \omega^{j(t-k-\ell)} \omega^{-km}\omega^{-n\ell} \ket{m} \dyad{n}{s:X} \bra{t:X} \\ &= \frac{1}{p} \sum_{\substack{ k,\ell \\ m,n,s,t }\in\mathbb{Z}_p} \delta_{s,k+\ell} \delta_{t,k+\ell} \omega^{-km}\omega^{-n\ell} \ket{m} \dyad{n}{s:X} \bra{t:X} \\ &= \frac{1}{p} \sum_{k,\ell,m,n\in\mathbb{Z}_p} \omega^{-km}\omega^{-n\ell} \ket{m} \dyad{n}{k+\ell:X} \bra{k+\ell:X} \\ &= \frac{1}{\sqrt{p}} \sum_{k,\ell,m,n\in\mathbb{Z}_p} \omega^{k(n-m)} \ket{m} \ket{n} \ip{n}{-k-\ell:X} \bra{k+\ell:X} \bra{k+\ell:X} \\ &= \sqrt{p} \sum_{\ell,m,n\in\mathbb{Z}_p} \delta_{n,m} \ket{m} \ket{n} \ip{n}{-\ell:X} \bra{\ell:X} \bra{\ell:X} \\ &= \sqrt{p} \sum_{m,\ell\in\mathbb{Z}_p} \ket{m} \dyad{m}{m} \dyad{-\ell:X}{\ell:X} \bra{\ell:X} \\ &= \interp{\tikzfig{figures/equations/soundness/Bigebra_RHS}} \end{align} \textsc{(Copy)} \begin{align} \interp{\tikzfig{figures/equations/soundness/Copy_LHS}} &= \sqrt{p} \sum_{j,k\in\mathbb{Z}_p} \omega^{2^{-1}xj} \ip{k}{-j:X} \ket{k}\ket{k} \\ &= \sum_{j,k\in\mathbb{Z}_p} \omega^{2^{-1}xj} \omega^{-jk} \ket{k}\ket{k} \\ &= p \sum_{k\in\mathbb{Z}_p} \delta_{k,2^{-1}x} \ket{k}\ket{k} \\ &= p \ket{2^{-1}x}\ket{2^{-1}x} \\ &= \sum_{j\in\mathbb{Z}_p} \omega^{-2^{-1}xj} \ket{j:X} \sum_{k\in\mathbb{Z}_p} \omega^{-2^{-1}xk} \ket{k:X} \\ &= \sum_{j\in\mathbb{Z}_p} \omega^{2^{-1}xj} \ket{-j:X} \sum_{k\in\mathbb{Z}_p} \omega^{2^{-1}xk} \ket{-k:X} \\ &= \interp{\tikzfig{figures/equations/soundness/Copy_RHS}} \end{align} \textsc{(G-Elim)} \begin{equation} \interp{\tikzfig{figures/equations/soundness/G-Elim_LHS}} = \sum_{k\in\mathbb{Z}_p} \dyad{k}{k} =
\interp{\tikzfig{figures/equations/soundness/G-Elim_RHS}} \end{equation} \textsc{(R-Elim)} \begin{align} \interp{\tikzfig{figures/equations/soundness/R-Elim_LHS}} &= \sum_{j,k\in\mathbb{Z}_p} \dyad{-k:X}{k:X} \dyad{-j:X}{j:X} = \sum_{j,k\in\mathbb{Z}_p} \delta_{k,-j} \dyad{-k:X}{j:X} \\ &= \sum_{j\in\mathbb{Z}_p} \dyad{j:X}{j:X} = \interp{\tikzfig{figures/equations/soundness/R-Elim_RHS}} \end{align} \textsc{(Mult)} For any \(z \in \mathbb{Z}_p^*\), \begin{align} \interp{\tikzfig{figures/equations/soundness/Mult_RHS}} &= \sum_{a,b,c,d\in\mathbb{Z}_p} \omega^{2^{-1}z^{-1}a^2} \omega^{2^{-1}zb^2} \omega^{2^{-1}z^{-1}c^2} \omega^{-ab} \omega^{-bc} \delta_{c,d} \dyad{a:Z}{d:X} \\ &= \sum_{a,b,c\in\mathbb{Z}_p} \omega^{2^{-1}z^{-1}a^2} \omega^{2^{-1}zb^2} \omega^{2^{-1}z^{-1}c^2} \omega^{-ab} \omega^{-bc} \dyad{a:Z}{c:X} \\ &= \sum_{a,b,c\in\mathbb{Z}_p} \omega^{2^{-1}z^{-1}(a-zb+c)^2} \omega^{-z^{-1}ac} \dyad{a:Z}{c:X} \\ &= \left( \sum_{b\in\mathbb{Z}_p} \omega^{2^{-1}z^{-1}b^2} \right) \sum_{a,c\in\mathbb{Z}_p} \omega^{-z^{-1}ac} \dyad{a:Z}{c:X} \\ &= \left( \sum_{b\in\mathbb{Z}_p} \omega^{2^{-1}z^{-1}b^2} \right) \sqrt{p} \sum_{a\in\mathbb{Z}_p} \dyad{a:Z}{z^{-1}a:Z} \\ &= \left( \sum_{b\in\mathbb{Z}_p} \omega^{2^{-1}z^{-1}b^2} \right) \sqrt{p} \sum_{a\in\mathbb{Z}_p} \dyad{za:Z}{a:Z} \\ &= \left( \sum_{b\in\mathbb{Z}_p} \omega^{2^{-1}z^{-1}b^2} \right) \sum_{a,c\in\mathbb{Z}_p} \omega^{-zac} \dyad{c:X}{a:Z} \\ &= \left( \sum_{b\in\mathbb{Z}_p} \omega^{2^{-1}z^{-1}b^2} \right) \sqrt{p^z} \sum_{a,c\in\mathbb{Z}_p} \dyad{c:X}{c:X}^{\otimes z} \ket{a:Z}^{\otimes z}\bra{a:Z} \\ &= \interp{\tikzfig{figures/equations/soundness/Mult_LHS}}. \end{align} \textsc{(Gauss)} follows using equation~(73) of \cite{appleby_properties_2009}: \begin{equation} \interp{\tikzfig{figures/equations/soundness/Gauss_LHS}} = \sum_{j\in\mathbb{Z}_p} \omega^{2^{-1}zj^2} = -i^{\frac{p+3}{2}} \ell_p(z) = -i^{\frac{p+3}{2}} (-1)^{\chi_p(z)} = \interp{\tikzfig{figures/equations/soundness/Gauss_RHS}}.
\end{equation} where we have used the fact that \(\ell_p(1) = 1\) (the Legendre symbol of \(1\) mod \(p\)), and also that \(\ell_p(z) = (-1)^{\chi_p(z)}\) whenever \(z \neq 0\). \textsc{(M-One)} \begin{equation} \interp{\tikzfig{figures/equations/soundness/M-One_LHS}} = (-1) (-1) = 1 = \interp{\tikzfig{figures/equations/soundness/M-One_RHS}}. \end{equation} \textsc{(M-Elim)} \begin{align} \interp{\tikzfig{figures/equations/soundness/M-Elim_LHS}} &= \sum_{j,\ell\in\mathbb{Z}_p} \omega^{2^{-1}(a\ell+b\ell^2)} \bra{\ell:X}\ket{-zj}\bra{j} \\ &= \sum_{j,\ell\in\mathbb{Z}_p} \omega^{2^{-1}(a\ell+b\ell^2)} \frac{\omega^{2^{-1}\ell zj}}{\sqrt{p}} \bra{j} \\ &= \frac{1}{\sqrt{p}} \sum_{j,\ell\in\mathbb{Z}_p} \omega^{2^{-1}(a\ell+b\ell^2)} \omega^{2^{-1}\ell zj} \bra{j} \\ &= \sum_{\ell\in\mathbb{Z}_p} \omega^{2^{-1}(a\ell+b\ell^2)} \bra{-z\ell:X} \\ &= \sum_{k\in\mathbb{Z}_p} \omega^{2^{-1}(-az^{-1}k+bz^{-2}k^2)} \bra{k:X} \\ &= \interp{\tikzfig{figures/equations/soundness/M-Elim_RHS}}. \end{align} \end{lproof} \section{Stabiliser quantum mechanics in odd prime dimensions} \label{sec:preliminaries} \input{section_preliminaries} \section{A ZX-calculus for odd prime dimensions} \label{sec:calculus} \input{section_calculus} \section{Representing Clifford states as graphs} \label{sec:states} \input{section_states} \section{Completeness} \label{sec:completeness} \input{section_completeness} \section{Wigner function and Lagrangian relations} \label{sec:wigner} \input{section_wigner} \section*{Conclusion} We have constructed a ZX-calculus which captures the stabiliser fragment in odd prime dimensions, whilst retaining many of the ``nice'' features of the qubit ZX-calculus. Of course, there are a few obvious questions that we leave for future work. First amongst these is the question of whether a fully universal calculus can be obtained from the ideas we used here.
The spiders we have used here are labelled by pairs \((a,b) \in \mathbb{Z}_p \times \mathbb{Z}_p\), which can be interpreted as polynomials \(x \mapsto ax+bx^2\) parametrising the phases of the spider. Adding one additional term of degree \(3\) is already sufficient to obtain a universal calculus in prime dimensions (strictly) greater than \(3\) \cite{howard_qudit_2012}. In fact, one might as well add polynomial terms of all higher orders (mod \(p\)), since access to such higher degrees will hopefully prove useful in finding commutation relations for the resulting spiders. Secondly, it remains to be seen how to formulate a universal ZX-calculus for non-prime dimensions, even for just the stabiliser fragment. For this, the methods in this article are clearly inadequate: for example, local scaling is no longer always an invertible operation, and thus certainly not in the Clifford group. Finally, the set of axioms we provide here is probably not minimal. It would be nice to see whether a simplified version can be obtained, as was done in \cite{backens_simplified_2017} for the qubit case. \paragraph{Acknowledgements} The authors would like to thank Cole Comfort and Simon Perdrix for enlightening discussions. RIB was funded by the ANR VanQuTe project (ANR-17-CE24-0035), as well as the Cisco University Research Program Fund.
{\raggedright\printbibliography} \subsection{Generators} For any odd prime \(p\), consider the symmetric monoidal category \(\mathsf{ZX}_p^{\mathrm{Stab}}\) with objects \(\mathbb{N}\) and morphisms generated by: \begin{align*} \tikzfig{figures/generators/id} &: 1 \to 1 & \quad\quad\quad \tikzfig{figures/generators/g-id} &: 1 \to 1 & \quad\quad\quad \tikzfig{figures/generators/r-inv} &: 1 \to 1 & \\ \tikzfig{figures/generators/hadamard} &: 1 \to 1 & \quad \tikzfig{figures/generators/g-unit} &: 0 \to 1 & \quad \tikzfig{figures/generators/r-unit} &: 0 \to 1 & \\ \tikzfig{figures/generators/cup} &: 0 \to 2 & \quad \tikzfig{figures/generators/g-counit} &: 1 \to 0 & \quad \tikzfig{figures/generators/r-counit} &: 1 \to 0 & \\ \tikzfig{figures/generators/cap} &: 2 \to 0 & \quad \tikzfig{figures/generators/g-mult} &: 2 \to 1 & \quad \tikzfig{figures/generators/r-mult} &: 2 \to 1 & \\ \tikzfig{figures/generators/braid} &: 2 \to 2 & \quad \tikzfig{figures/generators/g-comult} &: 1 \to 2 & \quad \tikzfig{figures/generators/r-comult} &: 1 \to 2 & \end{align*} where \(x,y \in \mathbb{Z}_p\). We also use \(\star : 0 \to 0\) to simplify the presentation of the calculus: it represents a scalar whose expression in terms of the other generators depends on \(p\). Morphisms are composed by connecting output wires to input wires, and the monoidal product is given on objects by \(n \otimes m = n+m\) and on morphisms by vertical juxtaposition of diagrams. Green spiders are defined inductively in the usual way: for any \(m,n \in \mathbb{N}\), \begin{equation} \tikzfig{figures/generators/g-spider-def}\quad, \end{equation} and it is clear that these have types \(m+1 \to 1\), \(1 \to n+1\) and \(m \to n\) respectively. Red spiders are defined analogously to ZH-calculus beetles: \begin{equation} \tikzfig{figures/generators/r-spider-def}\quad.
\end{equation} Finally, labelled spiders are also given in the usual way: \begin{equation} \tikzfig{figures/generators/spiders_labelled}\quad. \end{equation} In order to avoid clutter, we also use the following shorthand: \begin{equation} \tikzfig{figures/equations/hadamard_inverse}\quad, \end{equation} which we shall see represents the compositional inverse of \(\tikzfig{figures/generators/hadamard}\). \subsection{Standard interpretation and universality} The standard interpretation of a \(\mathsf{ZX}_p^{\mathrm{Stab}}\)-diagram is a symmetric monoidal functor \(\interp{-} : \mathsf{ZX}_p^{\mathrm{Stab}} \to \mathsf{FHilb}\) (the category of finite-dimensional complex Hilbert spaces). It is defined on objects as \(\interp{m} \coloneqq (\mathbb{C}^{p})^{\otimes m}\), and on the generators of the morphisms as: \begin{equation*} \begin{aligned} \interp{\tikzfig{figures/generators/g-unit}} & = \sum_{k\in\mathbb{Z}_p} \omega^{2^{-1} (xk + yk^2)} \ket{k:Z} & \quad \interp{\tikzfig{figures/generators/g-counit}} & = \sum_{k\in\mathbb{Z}_p} \omega^{2^{-1} (xk + yk^2)} \bra{k:Z} \\ \interp{\tikzfig{figures/generators/g-comult}} & = \sum_{k\in\mathbb{Z}_p} \dyad{k:Z}{k,k:Z}& \interp{\tikzfig{figures/generators/g-mult}} &= \sum_{k\in\mathbb{Z}_p} \dyad{k,k:Z}{k:Z} & \\ \interp{\tikzfig{figures/generators/g-id}} &= \sum_{k\in\mathbb{Z}_p} \dyad{k:Z}{k:Z} & \quad \interp{\tikzfig{figures/generators/r-inv}} &= \sum_{k\in\mathbb{Z}_p} \dyad{-k:X}{k:X} & \quad \\ \interp{\tikzfig{figures/generators/r-unit}} &= \sum_{k\in\mathbb{Z}_p} \ket{-k:X} & \quad \interp{\tikzfig{figures/generators/r-counit}} &= \sum_{k\in\mathbb{Z}_p} \bra{k:X} & \\ \interp{\tikzfig{figures/generators/r-comult}} &= \sum_{k\in\mathbb{Z}_p} \dyad{-k,-k:X}{k:X} & \interp{\tikzfig{figures/generators/r-mult}} &= \sum_{k\in\mathbb{Z}_p} \dyad{-k:X}{k,k:X} & \\ \interp{\tikzfig{figures/generators/id}} &= \sum_{k\in\mathbb{Z}_p} \dyad{k:Z}{k:Z} & \quad \interp{\tikzfig{figures/generators/hadamard}} &=
\sum_{k\in\mathbb{Z}_p} \dyad{k:X}{k:Z} & \\ \interp{\tikzfig{figures/generators/cup}} &= \sum_{k\in\mathbb{Z}_p} \ket{kk:Z} & \quad \interp{\tikzfig{figures/generators/cap}} &= \sum_{k\in\mathbb{Z}_p} \bra{kk:Z} & \\ \interp{\tikzfig{figures/generators/braid}} &= \sum_{k,\ell\in\mathbb{Z}_p} \dyad{k,\ell:Z}{\ell,k:Z} & \quad \interp{\scalebox{1.4}{$\star$}} &= -1 & \end{aligned} \end{equation*} Then, by the functoriality of the standard interpretation, we deduce that \begin{equation} \interp{\tikzfig{figures/generators/g-spider}} = \sum_{k\in\mathbb{Z}_p} \omega^{2^{-1} (xk + yk^2)} \ket{k:Z}^{\otimes n} \bra{k:Z}^{\otimes m}, \end{equation} and \begin{equation} \interp{\tikzfig{figures/generators/r-spider}} = \sum_{k\in\mathbb{Z}_p} \omega^{2^{-1} (xk + yk^2)} \ket{-k:X}^{\otimes n} \bra{k:X}^{\otimes m}. \end{equation} \begin{theorem}[Universality] \label{thm:universality} The standard interpretation \(\interp{-}\) is universal for the qupit stabiliser fragment, i.e. for any stabiliser operation \(C : \mathbb{C}^{p^m} \to \mathbb{C}^{p^n}\) there is a diagram \(D \in \mathsf{ZX}_p^{\mathrm{Stab}}\) such that \(\interp{D} = C\). Put formally, the co-restriction of \(\interp{-}\) to \(\mathsf{Stab}_p\) is full. \end{theorem} \subsection{Axiomatisation} We now introduce rewrite rules with which to perform purely diagrammatic reasoning. By doing so we are in fact describing a PROP by generators and relations \cite{baez_props_2018}; thus the swap is required to satisfy the following properties: \begin{equation} \tikzfig{figures/universality/swaprules} \end{equation} Remark that the last equation is required to hold for any diagram $D:n\to m$. This property states that our diagrams form a symmetric monoidal category.
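The standard interpretation above is concrete enough to check directly on a computer. The following sketch (ours, not part of the calculus; Python with NumPy, writing \(2^{-1}\) for the inverse of \(2\) mod \(p\) and \(\omega = e^{2\pi i/p}\)) verifies the simplest instance of spider fusion for \(1 \to 1\) green spiders:

```python
import numpy as np

p = 5                        # any odd prime
w = np.exp(2j * np.pi / p)   # primitive p-th root of unity
inv2 = pow(2, -1, p)         # 2^{-1} mod p

def green(a, b):
    """Interpretation of the 1 -> 1 green spider with label (a, b):
    the diagonal matrix sum_k w^{2^{-1}(a k + b k^2)} |k:Z><k:Z|."""
    return np.diag([w ** ((inv2 * (a * k + b * k * k)) % p) for k in range(p)])

# Simplest instance of spider fusion: composing two 1 -> 1 green spiders
# adds their labels componentwise (mod p).
a, b, c, d = 1, 2, 3, 4
assert np.allclose(green(a, b) @ green(c, d), green((a + c) % p, (b + d) % p))
```

The same style of check applies to each of the equations that follow, by building both sides as explicit \(p^m \times p^n\) matrices.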
Furthermore, we want this category to be self-dual compact-closed, hence the cup and cap must satisfy: \begin{equation} \tikzfig{figures/universality/cupcaprule} \end{equation} Moreover, as long as the connectivity of the diagram remains the same, vertices can be freely moved around without changing the standard interpretation of the diagram. This is a consequence of the fact that we require our generators to be \emph{flexsymmetric}, as shown in \cite{carette_when_2021}. This amounts to imposing that all generators except the swap satisfy: \begin{equation} \tikzfig{figures/universality/flexsymmetry} \end{equation} where $\sigma:n+m\to n+m$ stands for any permutation of the wires involving swap maps. Thus we can formally treat any \(\mathsf{ZX}_p^{\mathrm{Stab}}\)-diagram as a graph whose vertices are the spiders, and whose edges are labelled by the \(1 \to 1\) generators of the language.\footnote{There is a small ambiguity: \(1 \to 1\) spiders can be treated as either edges or vertices. When considering diagrams, it matters little which, since any given graph is always to be understood as one of the many equivalent \(\mathsf{ZX}_p^{\mathrm{Stab}}\)-diagrams constructed formally out of the generators. Any computer implementation of the calculus will have to carefully resolve this ambiguity in its internal representation.} We will consider all the previous rules as purely structural and will never state their use explicitly. \begin{figure} \centering \tikzfig{figures/axioms_simplified} \caption{A presentation of the equational theory \(\text{\textsc{\textnormal{zx}}}_p\), which is sound and complete for the stabiliser fragment. The equations hold for any \(a,b,c,d \in \mathbb{Z}_p\) and \(z \in \mathbb{Z}_p^*\).
\(\chi_p\) is the characteristic function of the complement of the set of squares in \(\mathbb{Z}_p\), defined in equation~\eqref{eq:legendre_characteristic}.} \label{fig:axioms} \end{figure} Figure~\ref{fig:axioms} presents the equational theory \(\text{\textsc{\textnormal{zx}}}_p\), which as we shall see axiomatises the stabiliser fragment of quantum mechanics in the qupit ZX-calculus. Firstly though, we must be sure that all of these rules are sound for the standard interpretation, i.e. it should not be possible to derive an equality of diagrams whose quantum mechanical interpretations are different. \begin{theorem}[Soundness] \label{thm:soundness} The equational theory \(\text{\textsc{\textnormal{zx}}}_p\) is sound for \(\interp{-}\), i.e., for any \(A,B \in \mathsf{ZX}_p^{\mathrm{Stab}}\), \(\text{\textsc{\textnormal{zx}}}_p \vdash A = B\) implies \(\interp{A} = \interp{B}\). Put formally, \(\interp{-}\) factors through the projection \(\mathsf{ZX}_p^{\mathrm{Stab}} \to \mathsf{ZX}_p^{\mathrm{Stab}} / \text{\textsc{\textnormal{zx}}}_p\). \end{theorem} This set of rewriting rules also turns out to be complete: \begin{theorem}[Completeness] \label{thm:completeness} The equational theory \(\text{\textsc{\textnormal{zx}}}_p\) is complete for \(\interp{-}\), i.e., for any \(A,B \in \mathsf{ZX}_p^{\mathrm{Stab}}\), \(\interp{A} = \interp{B}\) implies \(\text{\textsc{\textnormal{zx}}}_p \vdash A = B\). \end{theorem} The proof of Theorem \ref{thm:completeness} will be the object of the following sections. \section{Some useful structures in \(\mathsf{ZX}_p^{\mathrm{Stab}}\)} The set of axioms in figure~\ref{fig:axioms} is somewhat minimalistic. In this section we show how it is manipulated in practice.
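Among the equations of figure~\ref{fig:axioms}, \textsc{(Gauss)} rests on the classical evaluation of quadratic Gauss sums. The underlying number-theoretic identity is easy to check numerically; the following sketch (ours, plain Python, independent of any diagrammatic convention) verifies it for a few small primes:

```python
import cmath

def gauss_sum(p, z):
    """Quadratic Gauss sum sum_j w^{z j^2} over Z_p, with w = exp(2*pi*i/p)."""
    w = cmath.exp(2j * cmath.pi / p)
    return sum(w ** ((z * j * j) % p) for j in range(p))

def legendre(z, p):
    """Legendre symbol (z/p): +1 if z is a nonzero square mod p, else -1."""
    return 1 if pow(z, (p - 1) // 2, p) == 1 else -1

for p in (3, 5, 7, 11):
    g1 = gauss_sum(p, 1)
    # Classical facts: |g(z)| = sqrt(p) and g(z) = (z/p) * g(1) for z in Z_p^*,
    # with g(1)/sqrt(p) equal to 1 if p = 1 (mod 4) and i if p = 3 (mod 4).
    for z in range(1, p):
        assert abs(abs(gauss_sum(p, z)) - p ** 0.5) < 1e-9
        assert abs(gauss_sum(p, z) - legendre(z, p) * g1) < 1e-9
    assert abs(g1 / p ** 0.5 - (1 if p % 4 == 1 else 1j)) < 1e-9
```

The sign \((-1)^{\chi_p(z)}\) appearing in \textsc{(Gauss)} is exactly the Legendre symbol \(\ell_p(z)\) computed above.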
\subsection{Elementary derivations} While we will see that the equations in figure~\ref{fig:axioms} are enough to derive any equality of quantum operations in the stabiliser fragment, we describe here some rules which are derivable in \(\text{\textsc{\textnormal{zx}}}_p\) and particularly useful. We start with the interaction between some $1\to 1$ maps: \begin{equation} \tikzfig{hadanteul} \end{equation} The following rules, very similar to those of \cite{bonchi_interacting_2017}, will be central: \begin{equation} \tikzfig{glaeq} \end{equation} Here we see that the $1\to 1$ red spider plays the role of the antipode of a Hopf algebra; we will therefore refer to it as the antipode. Finally, Pauli phases can easily be moved around diagrams: \begin{equation} \tikzfig{paulimove} \end{equation} \subsection{Meta-rules} From the equational theory follow more general patterns that we will often use as meta-rules. \paragraph{Changing colours} The qubit ZX-calculus admits a particularly elegant meta-rule: take any valid equation, and swap the colour of every spider while keeping everything else identical. Then the resulting equation is also derivable in the calculus. In fact, this colour-swap transformation is equivalent to the functor mapping a diagram $D$ to $H^{\otimes m} \circ D \circ H^{\otimes n}$. One might hope that such a rule would also hold in \(\text{\textsc{\textnormal{zx}}}_p\). Unfortunately, the picture is a little more intricate and less pretty, since the analogue of the Hadamard gate is no longer an involution.
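This failure is easy to see concretely: assuming the convention \(\ip{j:Z}{k:X} = \omega^{jk}/\sqrt{p}\), the interpretation of the Hadamard generator is the discrete Fourier matrix over \(\mathbb{Z}_p\), whose square is the antipode and whose fourth power is the identity. A quick numerical sketch (ours):

```python
import numpy as np

p = 5
w = np.exp(2j * np.pi / p)

# Interpretation of the Hadamard generator, F|k:Z> = |k:X>:
# the discrete Fourier matrix over Z_p.
F = np.array([[w ** ((j * k) % p) for k in range(p)] for j in range(p)]) / np.sqrt(p)

# The antipode is the permutation k -> -k of the computational basis.
antipode = np.array([[1.0 if j == (-k) % p else 0.0 for k in range(p)]
                     for j in range(p)])

assert not np.allclose(F @ F, np.eye(p))                      # not an involution...
assert np.allclose(F @ F, antipode)                           # ...F^2 is the antipode
assert np.allclose(np.linalg.matrix_power(F, 4), np.eye(p))   # so F has order 4
```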
Using \textsc{(Colour)}, one can see what happens when Hadamard gates pass through a diagram: \begin{equation} \tikzfig{figures/equations/colour_commutation} \end{equation} This allows us to formulate a variation of the colour-swap rule that holds for qupits as well: \begin{proposition}[Colour change] If \(A \in \mathsf{ZX}_p^{\mathrm{Stab}}[n,m]\) is a diagram, we can consider any spider which is not the antipode as a vertex, linked to others by the following four kinds of edges: \begin{center} \tikzfig{figures/equations/edges} \end{center} Let \(S(A)\) be the \(\mathsf{ZX}_p^{\mathrm{Stab}}\)-diagram obtained from \(A\) by: \begin{enumerate} \item swapping the colour of every vertex; \item applying the following transformation on the edges: \begin{center} \tikzfig{figures/equations/swapedges}; \end{center} \item adding a minus sign to the Pauli part of every green spider. \end{enumerate} Then $\text{\textsc{\textnormal{zx}}}_p \vdash S(A) = (\tikzfig{figures/generators/hadamard})^{\otimes m} \circ A \circ (\tikzfig{figures/generators/hadamard})^{\otimes n}$. \end{proposition} Application of this rule will simply be referred to as \textsc{(Colour)}, since the colour change rule presented in figure \ref{fig:axioms} is a subcase. Note that, although \(S\) is not functorial, it can easily be made into a functor, simply by post-composing with antipodes. Note that the original colour change rule cannot hold for \(\mathsf{ZX}_p^{\mathrm{Stab}}\), since it holds if and only if the antipode is trivial. To show this, it suffices to colour change the \textsc{(G-Elim)} rule. \paragraph{Spider wars} Just like in the qubit case, there are rules for fusing spiders of the same colour, and splitting spiders of opposite colours. The fusion rules are pretty much the same, up to the use of antipodes for red spiders: \begin{equation} \tikzfig{figures/equations/spider_fusion} \end{equation} They are a straightforward consequence of \textsc{(Fusion)} and \textsc{(Loop)}.
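The role of the antipode in red fusion can also be seen directly in the standard interpretation. In the sketch below (ours; same convention \(\ip{j:Z}{k:X} = \omega^{jk}/\sqrt{p}\) as before, and the helper names are not part of the calculus), composing two \(1 \to 1\) red spiders adds the quadratic parts of the labels, while the built-in antipode negates the Pauli part of the spider applied first:

```python
import numpy as np

p = 5
w = np.exp(2j * np.pi / p)
inv2 = pow(2, -1, p)  # 2^{-1} mod p

# X-basis vectors, assuming <j:Z|k:X> = w^{jk} / sqrt(p).
Xb = [np.array([w ** ((j * k) % p) for j in range(p)]) / np.sqrt(p)
      for k in range(p)]

def phase(a, b, k):
    return w ** ((inv2 * (a * k + b * k * k)) % p)

def red(a, b):
    """1 -> 1 red spider: sum_k w^{2^{-1}(a k + b k^2)} |-k:X><k:X|."""
    return sum(phase(a, b, k) * np.outer(Xb[(-k) % p], Xb[k].conj())
               for k in range(p))

def xdiag(a, b):
    """Antipode-free variant: sum_k w^{2^{-1}(a k + b k^2)} |k:X><k:X|."""
    return sum(phase(a, b, k) * np.outer(Xb[k], Xb[k].conj())
               for k in range(p))

# Quadratic labels add; the antipode negates the Pauli label of the
# spider applied first.
a, b, c, d = 1, 2, 3, 4
assert np.allclose(red(a, b) @ red(c, d), xdiag((c - a) % p, (b + d) % p))
```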
The splitting rules are more complicated, since unlike the qubit case, we now have two elementary rules for splitting: the Hopf identity (lemma \ref{lem:hopf}) and \textsc{(Char)}. Using a sequential application of these rules, it is straightforward to see that, for any \(x,y \in \mathbb{N}\) such that \(x \geqslant y\), we must have: \begin{equation} \label{eq:spider_wars} \tikzfig{figures/equations/spider_wars_1} \quad\quad\quad \tikzfig{figures/equations/spider_wars_2} \end{equation} We dub this collection of meta-rules \textsc{(Spider)}, along with deformations resulting from flexsymmetry. \subsection{Syntactic sugar for multi-edges} Before we move into the proof of completeness, we introduce some syntactic sugar to the language. Given equation~\eqref{eq:spider_wars}, in \(\mathsf{ZX}_p^{\mathrm{Stab}}\), unlike the qubit case, spiders of opposite colours can be connected by more than one edge, and these multi-edges cannot be simplified. We therefore add some syntactic sugar to represent such multi-edges. These constructions add no expressiveness to the language, and are simply used to reduce the size of some recurring diagrams. They are shamelessly stolen from previous work \cite{bonchi_interacting_2017, zanasi_interacting_2018, carette_szx-calculus:_2019, carette_colored_2020}, and we use them to obtain a particularly nice representation of qupit graph states \cite{zhou_quantum_2003}. Graph states play a central role in our proof of completeness, as they have in previous completeness results for the stabiliser fragment in dimensions \(2\) and \(3\) \cite{backens_zx-calculus_2014, duncan_graph-theoretic_2020, wang_qutrit_2018}. In particular, these constructions permit a nice presentation of how graph states evolve under local Clifford operations. Firstly, we extend \(\mathsf{ZX}_p^{\mathrm{Stab}}\) by \emph{multipliers}, which are defined inductively by: \begin{equation} \tikzfig{figures/generators/multiplier}\quad.
\end{equation} We also define inverted multipliers, using the standard notation for graphical languages based on symmetric monoidal categories: \begin{equation} \tikzfig{figures/generators/multiplier_inverted}\quad. \end{equation} Explicitly, then, for \(x \in \mathbb{Z}_p^*\), \begin{equation} \label{eq:multiplier_explicit} \tikzfig{figures/equations/multiplier_explicit}\quad. \end{equation} \begin{proposition} \label{prop:multiplier} Multipliers verify the following equations under \(\text{\textsc{\textnormal{zx}}}_p\): for any \(x,y\in\mathbb{Z}_p\) and \(z\in\mathbb{Z}_p^*\), \begin{equation} \tikzfig{figures/equations/multiplier} \end{equation} which amounts to saying that the multipliers form a presentation of the field \(\mathbb{Z}_p\). They also verify the following useful copy and elimination identities: \begin{equation} \tikzfig{figures/equations/multiplier2} \end{equation} \end{proposition} Multipliers also verify the following equation: \begin{equation} \label{eq:hadamard_multiplier} \tikzfig{figures/equations/hadamard_multiplier}\quad, \end{equation} so we can also unambiguously extend the calculus with \emph{H-boxes}: \begin{equation} \label{eq:H_box_definition} \tikzfig{figures/generators/hadamard_weighted}\quad. \end{equation} H-boxes exactly match the labelled Hadamard boxes introduced for the qutrit case \cite{townsend-teague_classifying_2021} when \(p = 3\). More explicitly, H-boxes amount to repeated Hadamard wires: \begin{equation} \tikzfig{figures/equations/weighted_hadamard_multiedge} \end{equation} Owing to the flexsymmetry of the language, we have that for any \(x\in\mathbb{Z}_p\), \begin{equation} \label{eq:hadamard_OCM} \tikzfig{figures/equations/hadamard_inverted} \qand \tikzfig{figures/equations/hadamard_edge}\quad. 
\end{equation} \begin{proposition} \label{prop:weighted_hadamard} \(\text{\textsc{\textnormal{zx}}}_p\) proves the following equations: \begin{equation} \tikzfig{figures/equations/weighted_hadamard} \end{equation} \end{proposition} \subsection{Elementary scalars} The following is standard in categorical quantum mechanics: \begin{lemma} If \(A,B \in \mathsf{ZX}_p^{\mathrm{Stab}}[0,0]\), then \(\interp{A \otimes B} = \interp{A} \cdot \interp{B} = \interp{A \circ B}\), where \(\cdot\) is the usual multiplication on \(\mathbb{C}\) restricted to the monoid \(\mathbb{G}\). \end{lemma} Now, as when we defined the group of phases \(\mathbb{P}_p\), the set of normal forms for phases must depend on the prime \(p\) in question: \begin{definition} An \emph{elementary scalar} is a diagram \(A \in \mathsf{ZX}_p^{\mathrm{Stab}}[0,0]\) which is a tensor product of diagrams from the collection \(O_p \cup P \cup Q\), where \begin{itemize} \item if \(p \equiv 1 \mod 4\), \begin{equation} O_p = \left\{ \tikzfig{figures/universality/empty}, \scalebox{1.4}{$\star$} \right\} \quad, \end{equation} \item if \(p \equiv 3 \mod 4\), \begin{equation} O_p = \left\{ \tikzfig{figures/universality/empty}, \tikzfig{figures/equations/imaginary_scalar_normal_form}, \scalebox{1.4}{$\star$}, \tikzfig{figures/equations/imaginary_scalar_minus_normal_form} \right\} \quad, \end{equation} \end{itemize} \begin{equation} P = \left\{ \tikzfig{figures/universality/empty}, \tikzfig{figures/equations/omega_scalar_normal_form} \mid s \in \mathbb{Z}_p^* \right\} \quad, \end{equation} and \begin{equation} Q = \left\{ \tikzfig{figures/universality/sqrt_d}, \tikzfig{figures/universality/empty}, \tikzfig{figures/universality/sqrt_d_inv} \mid r \in \mathbb{Z} \right\} \quad. \end{equation} If \(A,B \in \mathsf{ZX}_p^{\mathrm{Stab}}\), we say that \(A\) and \(B\) are equal \emph{up to an elementary scalar} if there is an elementary scalar \(C\) such that \(A = B \otimes C\). In that case, we write \(A \simeq B\).
\end{definition} As written, equality up to an elementary scalar might seem to be a non-symmetric relation, and therefore not an equivalence relation. The following proposition shows that this is not the case. \begin{proposition} Every elementary scalar \(C \in \mathsf{ZX}_p^{\mathrm{Stab}}[0,0]\) has a multiplicative inverse, i.e. an elementary scalar \(C^{-1} \in \mathsf{ZX}_p^{\mathrm{Stab}}[0,0]\) such that \begin{equation} C \otimes C^{-1} = C \circ C^{-1} = \tikzfig{figures/universality/empty}\quad. \end{equation} \end{proposition} \begin{lproof} This is the content of lemmas~\ref{lem:scalar_elementary}, \ref{lem:scalar_i_elim}, \ref{lem:scalar_omega_elim} and axioms \textsc{(Gauss)} and \textsc{(M-One)}. \end{lproof} In light of this fact, if \(A \simeq B\), there is an elementary scalar \(C\) such that \(A = B \otimes C\), and then \(B = B \otimes C \otimes C^{-1} = A \otimes C^{-1}\), so that \(B \simeq A\). \begin{proposition} Every equation in \(\text{\textsc{\textnormal{zx}}}_p\) can be loosened to equality up to an elementary scalar by erasing every part of the LHS and RHS diagrams which are disconnected from the inputs and outputs. \end{proposition} \begin{lproof} This comes from a straightforward structural induction, and noting that every scalar diagram which appears in the axiomatisation (figure~\ref{fig:axioms}) can be straightforwardly rewritten under \(\text{\textsc{\textnormal{zx}}}_p\) to an elementary scalar, and thus can be ignored. \end{lproof} Probably the most important case of equality up to elementary scalars is the completeness of the single-qupit Clifford group, on which the entire proof of completeness of the calculus rests.
The fragment of \(\mathsf{ZX}_p^{\mathrm{Stab}}\) which corresponds to \(\mathscr{C}\) is that generated by the \(1 \to 1\) diagrams: \begin{align*} \tikzfig{figures/generators/r-phase} &\quad\quad\quad& \tikzfig{figures/generators/g-phase} \\ \tikzfig{figures/generators/multiplier_generator} &\quad\quad\quad& \tikzfig{figures/generators/weighted_hadamard} \end{align*} We call any such diagram a \emph{single-qupit Clifford diagram}. \begin{proposition} \label{prop:c1_completeness} If \(A \in \mathsf{ZX}_p^{\mathrm{Stab}}[1,1]\) is a single-qupit Clifford diagram, then \(\text{\textsc{\textnormal{zx}}}_p\) proves that \begin{equation} \tikzfig{figures/equations/c1_normal_form}\quad, \end{equation} for some \(s,t,u,v \in \mathbb{Z}_p\) and \(w \in \mathbb{Z}_p^*\). Furthermore, this form is unique. \end{proposition} \subsection{Relating stabiliser diagrams to graphs} In order to use the preceding tools in our completeness proof for the whole stabiliser fragment, we need to understand how an arbitrary stabiliser diagram can be related to a graph state diagram. Firstly, we rewrite any stabiliser diagram \(D\) to a state using the Choi-Jamiolkowski isomorphism: \begin{equation} \label{eq:choi_isomorphism} \tikzfig{figures/equations/choi_stabiliser_graph_LHS} \quad{\Large\rightsquigarrow} \tikzfig{figures/equations/choi_stabiliser_graph_RHS} \end{equation} Any diagram \(0 \to n\) obtained from the Choi-Jamiolkowski isomorphism can be rewritten to a graph state diagram, with single-qupit Clifford operations acting on its output wires: \begin{definition}[\cite{backens_zx-calculus_2014, wang_qutrit_2018}] A \emph{GS+LC diagram} is a \(\mathsf{ZX}_p^{\mathrm{Stab}}\)-diagram which consists of a graph state diagram with arbitrary single-qupit Clifford operations applied to each output. These associated Clifford operations are called \emph{vertex operators}. 
\end{definition} \begin{proposition} \label{prop:GS+LC} Every stabiliser state \(\mathsf{ZX}_p^{\mathrm{Stab}}\)-diagram can be rewritten, up to elementary scalars, to GS+LC form under \(\text{\textsc{\textnormal{zx}}}_p\). \end{proposition} In other words, \(\text{\textsc{\textnormal{zx}}}_p\) proves that, for any stabiliser \(\mathsf{ZX}_p^{\mathrm{Stab}}\)-diagram \(D : m \to n\), there is a graph \(G\) on \(m+n\) vertices and a set of vertex operators \((v_k)_{k=1}^{m+n}\) such that \begin{equation} \tikzfig{figures/equations/stabiliser_graph_LHS} \quad\simeq\quad \tikzfig{figures/equations/stabiliser_graph_RHS} \end{equation} and we need only consider the question of whether \(\text{\textsc{\textnormal{zx}}}_p\) can prove the equality of two GS+LC diagrams. \subsection{Completeness modulo elementary scalars} Now, as a first step, we show that \(\text{\textsc{\textnormal{zx}}}_p\) can normalise any pair of diagrams with equal interpretations, up to elementary scalars. In particular, as was shown in the previous section, we can relax \(\text{\textsc{\textnormal{zx}}}_p\) to reason about equality up to elementary scalars by simply ignoring the scalar part of each rule, and make free use of the ``scalarless'' equational theory. We will take care of the resulting scalars in the next section. This part of the completeness proof follows the general ideas of \cite{backens_zx-calculus_2014}. The first step on the way to completeness is to note that, considering a diagram in GS+LC form, where the vertex operators have been normalised, we can obtain a yet more reduced diagram by absorbing as much as possible of the vertex operators into local scalings and local complementations. 
We then obtain the following form for the vertex operators: \begin{definition}[\cite{backens_zx-calculus_2014}] \label{def:rGS+LC} A \(\mathsf{ZX}_p^{\mathrm{Stab}}\)-diagram is in \emph{reduced} GS+LC (or rGS+LC) form if it is in GS+LC form, and furthermore: \begin{enumerate} \item All vertex operators belong to the following set: \begin{equation} R = \left\{ \tikzfig{figures/equations/c1_reduced} \,\middle|\, s,t \in \mathbb{Z}_p \right\}. \end{equation} \item Two adjacent vertices do not have vertex operators that both include red spiders. \end{enumerate} \end{definition} \begin{proposition} \label{prop:rGS+LC} If \(D \in \mathsf{ZX}_p^{\mathrm{Stab}}\) is a Clifford diagram, then there is a diagram \(G \in \mathsf{ZX}_p^{\mathrm{Stab}}\) in rGS+LC form such that \(\text{\textsc{\textnormal{zx}}}_p \vdash D \simeq G\). \end{proposition} Then, given two diagrams with equal interpretations, taking them both to rGS+LC form makes the task of comparing the diagrams considerably easier. In particular, we can guarantee that the corresponding vertex operators in each diagram always have matching forms: \begin{definition}[\cite{backens_zx-calculus_2014}] A pair of rGS+LC diagrams of the same type (i.e. whose graphs have the same vertex set \(V\)) is said to be \emph{simplified} if there is no pair of vertices \(q,p \in V\) such that \(q\) has a red vertex operator in the first diagram but not the second, \(p\) has a red vertex operator in the second diagram but not the first, and \(q\) and \(p\) are adjacent in at least one of the diagrams. \end{definition} \begin{proposition} \label{prop:rGS+LC_simple} Any pair \(A,B\) of rGS+LC diagrams of the same type (i.e. on the same vertex set) can be simplified. \end{proposition} For the sake of clarity, we shall say that a vertex operator (or equivalently, the vertex itself) is \emph{marked} if it contains a red spider (i.e. it belongs to the right-hand form of definition~\ref{def:rGS+LC}). 
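In terms of marked vertices, the simplified condition is purely combinatorial and easy to check mechanically. The following Python sketch is our own illustration (not part of the calculus), taking each graph as an adjacency matrix over \(\mathbb{Z}_p\) and the marked vertices of each diagram as a set:

```python
def is_simplified(G1, G2, marked1, marked2):
    """Check the 'simplified' condition for a pair of rGS+LC diagrams on a
    common vertex set: no vertex marked only in the first diagram may be
    adjacent, in either graph, to a vertex marked only in the second."""
    only_first = marked1 - marked2
    only_second = marked2 - marked1
    for q in only_first:
        for r in only_second:
            if q != r and (G1[q][r] != 0 or G2[q][r] != 0):
                return False
    return True

# vertex 0 is marked only in the first diagram, vertex 1 only in the
# second, and they are adjacent in the first graph: not simplified.
G1 = [[0, 1], [1, 0]]
G2 = [[0, 0], [0, 0]]
assert not is_simplified(G1, G2, {0}, {1})
assert is_simplified(G1, G2, {0}, {0})
```
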
Then, two diagrams with the same interpretation can always be rewritten so that the marked vertices match: \begin{proposition}\label{prop:marked} Let \(C,D \in \mathsf{ZX}_p^{\mathrm{Stab}}\) be a simplified pair in rGS+LC form. Then \(\interp{C} = \interp{D}\) only if the marked vertices in $C$ and $D$ are the same. \end{proposition} Finally, we have enough control over the pair of diagrams to finish the completeness proof: \begin{theorem}\label{thm:comp} \(\text{\textsc{\textnormal{zx}}}_p\) is complete for \(\interp{-}\) up to elementary scalars, i.e. for any pair of diagrams \(A,B \in \mathsf{ZX}_p^{\mathrm{Stab}} [0,n]\) with $n\neq 0$, if \(\interp{A} = \interp{B}\), then \(\text{\textsc{\textnormal{zx}}}_p \vdash A \simeq B\). \end{theorem} \subsection{Completeness of the scalar fragment} Finally, we are ready to leap-frog off the previous section into full completeness (including scalars). First, we need to find a normal form for diagrams which evaluate to \(0\). In fact, we need to pick one normal form for each type \(m \to n\): \begin{proposition} \label{prop:zero_normal_forms} The ``zero'' diagram ``destroys'' diagrams: for any \(m,n \in \mathbb{N}\) and \(D \in \mathsf{ZX}_p^{\mathrm{Stab}}[m,n]\), \begin{equation*} \tikzfig{figures/equations/zero_normal_forms} \end{equation*} We take the RHS diagram to be the ``zero'' diagram of type \(m \to n\). 
\end{proposition} Now, we say that a scalar \(C \in \mathsf{ZX}_p^{\mathrm{Stab}}[0,0]\) is in \emph{normal form} if it is either the zero diagram, or it belongs to the set \begin{equation} \label{eq:scalar_normal_form_p1} \left\{ \tikzfig{figures/universality/empty}, \scalebox{1.4}{$\star$} \right\} \otimes \left\{ \tikzfig{figures/universality/empty}, \tikzfig{figures/equations/omega_scalar_normal_form} \mid s \in \mathbb{Z}_p^* \right\} \otimes \left\{ \tikzfig{figures/universality/sqrt_d}, \tikzfig{figures/universality/empty}, \tikzfig{figures/universality/sqrt_d_inv} \mid r \in \mathbb{Z} \right\}, \end{equation} if \(p \equiv 1 \mod 4\), or to the set \begin{equation} \label{eq:scalar_normal_form_p3} \left\{ \tikzfig{figures/universality/empty}, \tikzfig{figures/equations/imaginary_scalar_normal_form}, \scalebox{1.4}{$\star$}, \tikzfig{figures/equations/imaginary_scalar_minus_normal_form} \right\} \otimes \left\{ \tikzfig{figures/universality/empty}, \tikzfig{figures/equations/omega_scalar_normal_form} \mid s \in \mathbb{Z}_p^* \right\} \otimes \left\{ \tikzfig{figures/universality/sqrt_d}, \tikzfig{figures/universality/empty}, \tikzfig{figures/universality/sqrt_d_inv} \mid r \in \mathbb{Z} \right\}, \end{equation} if \(p \equiv 3 \mod 4\). It is straightforward to see, by evaluating \(\interp{-}\) on each element, that the sets in equations~\eqref{eq:scalar_normal_form_p1} and \eqref{eq:scalar_normal_form_p3} contain exactly one diagram for each scalar in \(\mathbb{G}_p \setminus \{0\}\) (and the zero diagram \(\tikzfig{figures/equations/zero_diagram}\) corresponds to \(0 \in \mathbb{G}_p\)). \begin{theorem} \label{thm:scalar_completeness} \(\text{\textsc{\textnormal{zx}}}_p\) proves any scalar diagram equal to a scalar from either equation~\eqref{eq:scalar_normal_form_p1} or \eqref{eq:scalar_normal_form_p3} (depending on the congruence of \(p\) modulo \(4\)), or to the zero diagram \(\tikzfig{figures/equations/zero_diagram}\). 
\end{theorem} The full completeness follows immediately: \begin{theorem} The equational theory \(\text{\textsc{\textnormal{zx}}}_p\) is complete for \(\mathsf{Stab}_p\), i.e. for any \(\mathsf{ZX}_p^{\mathrm{Stab}}\)-diagrams \(A\) and \(B\), if \(\interp{A} = \interp{B}\), then \(\text{\textsc{\textnormal{zx}}}_p \vdash A = B\). Put more formally, there is a commutative diagram \begin{equation} \begin{tikzcd} \mathsf{ZX}_p^{\mathrm{Stab}} & {\mathsf{ZX}_p^{\mathrm{Stab}} / \text{\textsc{\textnormal{zx}}}_p} \\ & \mathsf{Stab}_p \arrow["{\interp{-}}"', from=1-1, to=2-2] \arrow[from=1-2, to=2-2] \arrow["\pi", twoheadrightarrow, from=1-1, to=1-2] \end{tikzcd} \end{equation} where \(\pi\) is the projection \(\mathsf{ZX}_p^{\mathrm{Stab}} \to \mathsf{ZX}_p^{\mathrm{Stab}} / \text{\textsc{\textnormal{zx}}}_p\) and the vertical arrow is an isomorphism of categories. \end{theorem} \subsection{The symplectic presentation of the Clifford group} \subsection{Graph states in \(\mathsf{ZX}_p^{\mathrm{Stab}}\)} The associated graph state is represented in \(\mathsf{ZX}_p^{\mathrm{Stab}}\) as: \begin{equation} \label{eq:simple_graph_state} \tikzfig{figures/simple_graph_state} \end{equation} where the equality is obtained solely by fusing the green spiders. One immediately recognises the graph from equation~\eqref{eq:simple_graph} in the RHS of equation~\eqref{eq:simple_graph_state}. Here is a slightly more involved example: \begin{equation} \tikzfig{figures/graph} \quad \qq{is associated to the state} \quad \tikzfig{figures/graph_state} \end{equation} where each spider is connected to an output wire. It should be clear from these examples that for any given graph \(G \in \mathbb{Z}_p^{V \times V}\), one can obtain the \(\mathsf{ZX}_p^{\mathrm{Stab}}\)-diagram for the corresponding graph state by identifying each vertex with a green spider, and each edge with a correspondingly weighted H-edge. 
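To make this correspondence concrete, here is a small numerical sketch (our own illustration, outside the calculus): the qupit graph state has amplitude \(\omega^{\sum_{u < v} G_{uv} z_u z_v}\) on the computational basis state \(\ket{z}\), where \(\omega = e^{2\pi i/p}\), and one can verify directly the Pauli stabiliser relations \(X_v \prod_w Z_w^{G_{vw}} \ket{G} = \ket{G}\) that are derived diagrammatically below. The graph and \(p = 3\) are arbitrary choices:

```python
import cmath
from itertools import product

p = 3
omega = cmath.exp(2j * cmath.pi / p)

def graph_state(G):
    # |G> has amplitude omega^{sum_{u<v} G_uv z_u z_v} on the basis state |z>
    n = len(G)
    return {z: omega ** sum(G[u][v] * z[u] * z[v]
                            for u in range(n) for v in range(u + 1, n))
            for z in product(range(p), repeat=n)}

def apply_stabiliser(psi, G, v):
    # apply X_v prod_w Z_w^{G_vw}, where Z_w|z> = omega^{z_w}|z> and
    # X_v|z> = |z + e_v> (addition mod p)
    n = len(G)
    out = {}
    for z, amp in psi.items():
        phase = omega ** sum(G[v][w] * z[w] for w in range(n))
        shifted = tuple((z[k] + 1) % p if k == v else z[k] for k in range(n))
        out[shifted] = amp * phase
    return out

G = [[0, 1, 2], [1, 0, 0], [2, 0, 0]]  # a weighted graph over Z_3
psi = graph_state(G)
for v in range(len(G)):
    phi = apply_stabiliser(psi, G, v)
    assert all(abs(phi[z] - psi[z]) < 1e-9 for z in psi)
```
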
More formally: \begin{definition} A \(\mathsf{ZX}_p^{\mathrm{Stab}}\)-diagram is a graph state diagram if \begin{enumerate} \item it contains only green spiders; \item every spider is connected to a single output by a plain wire; \item the spiders are connected only by H-edges. \end{enumerate} \end{definition} Every graph state diagram is uniquely associated to a graph, and vice-versa. Since we are going to reason about such diagrams in generality and without referring to a specific graph, we use the following informal notation to represent the \(\mathsf{ZX}_p^{\mathrm{Stab}}\)-diagram associated to \(G\): \begin{equation} \tikzfig{figures/graph_state_notation}\quad. \end{equation} In order to avoid having to track uninteresting scalars, we assume that the interpretation of this diagram is normalised. This amounts to including, along with the purely graph-theoretical part, a pair of scalars \begin{equation} \left( \tikzfig{figures/equations/soundness/sqroot_d_inv} \right)^{\otimes\abs{V}} \qand \left( \tikzfig{figures/equations/soundness/sqroot_d_unlabelled} \right)^{\otimes\abs{E}}, \end{equation} where \(V\) is the set of vertices in \(G\) and \(E\) the set of edges. These are simply the normalisation factors for the states and entangling gates in \(G\) (see equations~\eqref{eq:C1_generators} and \eqref{eq:stabiliser_states}). It should now be clear that \begin{equation} \ket{G} = \interp{\tikzfig{figures/graph_state_notation}}\quad. \end{equation} Note that this notation for graph states can be formalised using the ``scalable'' construction \cite{carette_colored_2020}, but this would involve introducing rather more machinery than we really need. \subsection{Manipulating graph states} Having shown how to represent graph states within \(\mathsf{ZX}_p^{\mathrm{Stab}}\), we now give some natural rules for manipulating diagrams involving them under \(\text{\textsc{\textnormal{zx}}}_p\). 
Two graph states \(\ket{G}\) and \(\ket{H}\) are said to be \emph{local Clifford equivalent} if there is a sequence of local Clifford operations that maps \(\ket{G}\) to \(\ket{H}\). It was shown in \cite{bahramgiri_efficient_2007} that any local Clifford equivalence can be decomposed as a sequence of elementary local Clifford operations called \emph{local scaling} and \emph{local complementation}. These are particularly nice operations since their actions on a graph state \(\ket{G}\) are straightforward to understand at the level of the graph \(G\) defining the graph state. \subsubsection*{Pauli stabilisers} It is well-known that graph states admit simple Pauli stabilisers, namely, for each \(v \in V\), \begin{equation} X_v^\gamma \prod_{w \in N(v)} Z_w^{\gamma G_{vw}} \ket{G} = \ket{G}. \end{equation} We can give a simple formulation of these rules within \(\mathsf{ZX}_p^{\mathrm{Stab}}\), and they can be derived in \(\text{\textsc{\textnormal{zx}}}_p\): \begin{proposition} \label{prop:pauli_stabiliser} \(\text{\textsc{\textnormal{zx}}}_p\) proves the Pauli stabiliser rules for graph states: \begin{equation*} \tikzfig{figures/equations/pauli_stabiliser}, \end{equation*} where the red spider is connected to vertex \(w\) in the graph \(G\), and each neighbour \(k\) of \(w\) is connected to a green vertex with phase \((0,\gamma G_{kw}^2)\). \end{proposition} \subsubsection*{Local scaling} For any \(\gamma \in \mathbb{Z}_p^*\), the \emph{\(\gamma\)-scaling} about a vertex \(w\) in a graph \(G\) is given by: \begin{equation} (G \overset{\gamma}{\circ} w)_{uv} \coloneqq \begin{cases} \gamma G_{uv} \quad \text{if} \quad u = w \quad \text{or} \quad v = w;\\ G_{uv} \quad \text{otherwise}. \end{cases} \end{equation} In other words, we apply a multiplicative scaling to all of the edges in the neighbourhood of \(w\). 
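On the adjacency matrix, this update is a one-liner. The following Python sketch is illustrative only, with the weighted graph stored as a list of lists over \(\mathbb{Z}_p\):

```python
def local_scaling(G, w, gamma, p):
    """Return the gamma-scaling of the weighted graph G about vertex w:
    every entry in row or column w is multiplied by gamma (mod p)."""
    n = len(G)
    H = [row[:] for row in G]
    for u in range(n):
        H[w][u] = (gamma * G[w][u]) % p
        H[u][w] = (gamma * G[u][w]) % p
    return H

# example over Z_5: scale about vertex 1 by gamma = 3
G = [[0, 2, 0],
     [2, 0, 1],
     [0, 1, 0]]
assert local_scaling(G, 1, 3, 5) == [[0, 1, 0],
                                     [1, 0, 3],
                                     [0, 3, 0]]
```
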
For example: \begin{equation} \tikzfig{figures/local_scaling_example_lhs} \quad \overset{\overset{\gamma}{\circ} w}{\longmapsto} \quad \tikzfig{figures/local_scaling_example_rhs} \quad. \end{equation} \begin{proposition} \label{prop:local_scaling} Local scaling is derivable in \(\text{\textsc{\textnormal{zx}}}_p\): for any graph \(G \in \mathbb{Z}_p^{N \times N}\), \(\gamma \in \mathbb{Z}_p^*\) and \(w \in \{1,2,\cdots,N\}\), \(\text{\textsc{\textnormal{zx}}}_p\) proves that \begin{equation} \tikzfig{figures/equations/local_scaling} \quad, \end{equation} where in the LHS, the multiplier is connected to vertex \(w\). \end{proposition} \subsubsection*{Local complementation} For any \(\gamma \in \mathbb{Z}_p^*\), the \emph{\(\gamma\)-weighted local \(\mathbb{Z}_p\)-complementation} or \(\gamma\)-complementation about a vertex \(w\) in a graph \(G \in \mathbb{Z}_p^{V \times V}\) is defined as: \begin{equation} (G \overset{\gamma}{\star} w)_{uv} \coloneqq \begin{cases} G_{uv} + \gamma G_{uw} G_{wv} \qif u \neq v; \\ G_{uv} \qq{otherwise.} \end{cases} \end{equation} This operation is somewhat harder than local scaling to get a good intuition for. It essentially operates on ``cones'' in \(G\) with summit \(w\). The simplest example is the following: \begin{equation} \tikzfig{figures/local_complementation_triangle_lhs} \quad \overset{\overset{\gamma}{\star} w}{\longmapsto} \quad \tikzfig{figures/local_complementation_triangle_rhs} \quad. \end{equation} In a more complicated graph, local complementation about \(w\) performs this simple operation for every such ``cone'' with summit \(w\). For example, \begin{equation} \tikzfig{figures/local_complementation_example_lhs} \quad \overset{\overset{\gamma}{\star} w}{\longmapsto} \quad \tikzfig{figures/local_complementation_example_rhs} \quad. 
\end{equation} \begin{proposition} \label{prop:local_complementation} Local complementation is derivable in \(\text{\textsc{\textnormal{zx}}}_p\): for any graph \(G \in \mathbb{Z}_p^{N \times N}\), \(\gamma \in \mathbb{Z}_p\) and \(w \in \{1,2,\cdots,N\}\), \(\text{\textsc{\textnormal{zx}}}_p\) proves that \begin{equation} \tikzfig{figures/equations/local_complementation} \quad, \end{equation} where in the LHS, the red phase is connected to vertex \(w\), and each neighbour \(v\) of \(w\) is connected to a green phase \(-\gamma G_{vw}^2\). \end{proposition} Combining this with propositions~\ref{prop:local_scaling} and \ref{prop:pauli_stabiliser}, we get \begin{corollary} \label{cor:GS_red_rotation} For any graph \(G \in \mathbb{Z}_p^{N \times N}\), \(\gamma \in \mathbb{Z}_p\) and \(w \in \{1,2,\cdots,N\}\), \(\text{\textsc{\textnormal{zx}}}_p\) proves that \begin{equation} \tikzfig{figures/equations/GS_red_rotation} \quad, \end{equation} where in the LHS, the red phase is connected to vertex \(w\), and each neighbour \(v\) of \(w\) is connected to a green phase with labels proportional to the edge weight \(G_{vw}\). \end{corollary} \subsection{A complete graphical language for $\textup{CPM}( \mathsf{Stab}_p )$} We now extend our completeness result from $\mathsf{Stab}_p$ to $\textup{CPM}( \mathsf{Stab}_p )$, the category of completely positive maps corresponding to mixed-state stabiliser quantum mechanics; see \cite{DBLP:journals/entcs/Selinger07a,coecke_picturing_2017} for a formal definition. We will rely on the discard construction of \cite{DBLP:conf/icalp/CaretteJPV19} to define a graphical language \({(\mathsf{ZX}_p^{\mathrm{Stab}})}^{\ground}\). It consists in adding to the equational theory one generator, the discard $\ground : 1 \to 0$, together with equations stating that this generator erases all isometries. 
In \(\mathsf{Stab}_p\), the isometries are generated by the following diagrams: \begin{center} \tikzfig{figures/isometries} \end{center} The equations to add are then: \begin{center} \tikzfig{figures/isometrieserase} \end{center} A new interpretation $\interp{\_}: {\mathsf{ZX}_p^{\mathrm{Stab}}}^{\ground} \to \textup{CPM}( \mathsf{Stab}_p )$ is defined as $\interp{\tikzfig{ground}}: \rho \mapsto \Tr(\rho)$ for the ground and, for every \(\mathsf{ZX}_p^{\mathrm{Stab}}\)-diagram $D:n\to m$: \begin{center} $\interp{\tikzfig{figures/D} }^{\ground} : \rho \mapsto \interp{\tikzfig{figures/D} }^{\dagger} \rho \interp{\tikzfig{figures/D} }$. \end{center} Corollary 22 of \cite{DBLP:conf/icalp/CaretteJPV19} provides a sufficient condition for the previous construction to extend a universal and complete graphical language for $\mathsf{Stab}_p$ into a universal and complete graphical language for $\textup{CPM}( \mathsf{Stab}_p )$. This condition is for $\mathsf{Stab}_p$ to have \emph{enough isometries}, meaning (we use here a slightly stronger condition than in \cite{DBLP:conf/icalp/CaretteJPV19}) that for all maps $f:A \to B\otimes X$ and $g:A \to B\otimes Y$ such that: \begin{center} \tikzfig{relcpm} \end{center} there is an isometry $V: X\to Y$ in $\mathsf{Stab}_p$ such that: \begin{center} \tikzfig{reliso} \end{center} To prove this, we will use arguments similar to the proof that $\mathsf{Stab}_2$ has enough isometries. Everything relies on the following lemma: \begin{lemma}\label{lem:bipartition} Given any bipartition of the outputs of a $\mathsf{ZX}_p^{\mathrm{Stab}}$-diagram $D:0\to n+m$, we can find unitaries $A$ and $B$ in $\mathsf{Stab}_p$ such that: \begin{center} \tikzfig{figures/bipartiteform} \end{center} \end{lemma} Using this, we can prove: \begin{lemma} $\mathsf{Stab}_p$ has enough isometries. \end{lemma} The proof is exactly the same as in the qubit case; see the proof of Proposition 18 in \cite{DBLP:conf/icalp/CaretteJPV19}. 
It then follows directly from \cite{DBLP:conf/icalp/CaretteJPV19} that: \begin{theorem} \({(\mathsf{ZX}_p^{\mathrm{Stab}})}^{\ground}\) is universal and complete for $\textup{CPM}( \mathsf{Stab}_p )$. \end{theorem} \subsection{Co-isotropic relations} It has been shown in \cite{comfort_graphical_2021, comfort_symplectic_2021, comfort_relational_nodate} that \(\textrm{CPM}(\mathsf{Stab}_p)\) is equivalent to the category of affine co-isotropic relations up to scalars. More formally, we endow \(\mathbb{Z}_p^2\) with the symplectic form \begin{equation} \omega\left( \begin{bmatrix} a \\ b \end{bmatrix}, \begin{bmatrix} c \\ d \end{bmatrix} \right) = ad - bc, \end{equation} and \(\mathbb{Z}_p^{2m} = \bigoplus_m \mathbb{Z}_p^2\) with the direct sum symplectic form. \begin{definition} The symmetric monoidal category \(\mathsf{AffCoIsoRel}_{\Z_p}\) has as objects \(\mathbb{N}\), and as morphisms, relations \(R : \mathbb{Z}_p^{2m} \to \mathbb{Z}_p^{2n}\) such that \(R\) viewed as a subset of \(\mathbb{Z}_p^{2m} \times \mathbb{Z}_p^{2n}\) is an affine co-isotropic subspace thereof. \end{definition} Since \cite{comfort_graphical_2021} works in the scalarless ZX-calculus, we need to add one extra axiom, which suffices to eliminate all remaining (non-zero) scalars in \(\mathsf{Stab}_p\): we impose the rule \textsc{(Mod)} that \(\frac{1}{p} = 1\). 
Diagrammatically, this amounts to quotienting \((\mathsf{ZX}_p^{\mathrm{Stab}})^{\ground}\) by the following rule: \begin{equation} \tikzfig{figures/scalarless_axiom} \end{equation} Then we can give an interpretation \(\left[ - \right]\) of \({(\mathsf{ZX}_p^{\mathrm{Stab}})}^{\ground}\) making it universal and complete for \(\mathsf{AffCoIsoRel}_{\Z_p}\), and which is defined uniquely by the commutative diagram \begin{equation} \begin{tikzcd} {(\mathsf{ZX}_p^{\mathrm{Stab}})^{\ground}} & \mathsf{AffCoIsoRel}_{\Z_p} \\ {\mathrm{CPM}(\mathsf{Stab}_p)} & {\mathrm{CPM}(\mathsf{Stab}_p) / \textsc{(Mod)}} \arrow[dashed,"{\left[ - \right]}", from=1-1, to=1-2] \arrow["{\interp{-}}"', from=1-1, to=2-1] \arrow[twoheadrightarrow, from=2-1, to=2-2] \arrow[leftrightarrow, from=1-2, to=2-2] \end{tikzcd} \end{equation} Explicitly, it is given by the identity on objects, \([m] = m\), and is defined on morphisms by: \begin{align*} \left[ \tikzfig{figures/generators/r-spider_labelless} \right] &= \left\{ \left( \bigoplus_{k=1}^m \begin{bmatrix} a \\ b_k \end{bmatrix} , \bigoplus_{k=1}^n \begin{bmatrix} -a \\ -c_k \end{bmatrix} \right) \,\middle|\, a,b_k,c_k \in \mathbb{Z}_p \qand \sum_k b_k = \sum_k c_k \right\} \\ \left[ \tikzfig{figures/generators/g-spider_labelless} \right] &= \left\{ \left( \bigoplus_{k=1}^m \begin{bmatrix} a_k \\ c \end{bmatrix} , \bigoplus_{k=1}^n \begin{bmatrix} b_k \\ c \end{bmatrix} \right) \,\middle|\, a_k,b_k,c \in \mathbb{Z}_p \qand \sum_k a_k = \sum_k b_k \right\} \\ \left[ \tikzfig{figures/generators/r-unit} \right] &= \left\{ \left( \bullet, \begin{bmatrix} -1 & 0 \\ -y & -1 \end{bmatrix} \begin{bmatrix} v \\ 0 \end{bmatrix} + \begin{bmatrix} -x \\ 0 \end{bmatrix} \right) \,\middle|\, v \in \mathbb{Z}_p \right\} \\ \left[ \tikzfig{figures/generators/g-unit} \right] &= \left\{ \left( \bullet, \begin{bmatrix} 1 & -y \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 0 \\ v \end{bmatrix} + \begin{bmatrix} 0 \\ x \end{bmatrix} \right) \,\middle|\, v \in \mathbb{Z}_p 
\right\} \\ \left[ \tikzfig{figures/generators/hadamard} \right] &= \left\{ \left( \vb{v}, \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \vb{v} \right) \,\middle|\, \vb{v} \in \mathbb{Z}_p^2 \right\} \\ \left[ \tikzfig{figures/generators/multiplier_generator} \right] &= \left\{ \left( \vb{v}, \begin{bmatrix} x & 0 \\ 0 & x^{-1} \end{bmatrix} \vb{v} \right) \,\middle|\, \vb{v} \in \mathbb{Z}_p^2 \right\} \end{align*} Note that all of these are actually affine \emph{Lagrangian} relations. The only generator which has a co-isotropic but not Lagrangian semantics is the discard map: \begin{equation} \left[ \tikzfig{ground} \right] = \left\{ \left( \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \bullet \right) \right\} \end{equation} As pointed out in \cite{baez_props_2018, baez_compositional_2018, comfort_graphical_2021}, the related category of affine Lagrangian relations over the ring \(\mathbb{R}[x,y]/(xy-1)\) can be used to represent a fragment of electrical circuits. We expect that the axiomatisation of figure~\ref{fig:axioms} can be adapted to that setting, but leave this for a future article. \subsection{Geometrical interpretation} The previous interpretation corresponds to a geometric intuition. All Clifford gates can be interpreted as affine transformations of the torus $\mathbb{Z}_p \times \mathbb{Z}_p$. This torus can be identified with a phase space, the position and momentum coordinates $q$ and $p$ corresponding respectively to the first and second components of each wire in the semantics of the previous section. In this section we take as an example the case $p=5$. \begin{center} \tikzfig{figures/lattice0} \end{center} The red triangle here will allow us to illustrate the geometrical action of the Clifford gates. \begin{center} \tikzfig{figures/lattice} \end{center} The Pauli phase gates act as translations along the vertical or horizontal axis. The multiplication gates act as scalings. The Hadamard gate corresponds to a $\frac{\pi}{2}$-rotation and the antipode to a $\pi$-rotation. 
Finally, the pure Clifford gates correspond to shears.
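These statements can be checked numerically: on the torus, each single-qupit Clifford generator acts by a matrix preserving the symplectic form \(\omega\) of the previous subsection, i.e. a matrix in \(SL(2,\mathbb{Z}_p)\). The following Python sketch, for \(p = 5\), is our own illustration; the particular scaling and shear matrices are arbitrary examples:

```python
# Illustrative check (p = 5, as in the text): the Clifford generators act on
# the phase-space torus Z_p x Z_p by matrices preserving the symplectic form.
p = 5

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % p
             for j in range(2)] for i in range(2)]

def act(A, u):
    return ((A[0][0] * u[0] + A[0][1] * u[1]) % p,
            (A[1][0] * u[0] + A[1][1] * u[1]) % p)

def omega(u, v):
    # the symplectic form omega((a, b), (c, d)) = ad - bc  (mod p)
    return (u[0] * v[1] - u[1] * v[0]) % p

H = [[0, p - 1], [1, 0]]              # Hadamard: quarter turn of the torus
assert mat_mul(H, H) == [[p - 1, 0], [0, p - 1]]   # H^2 = -I: the antipode
assert mat_mul(mat_mul(H, H), mat_mul(H, H)) == [[1, 0], [0, 1]]

x = 3
M = [[x % p, 0], [0, pow(x, -1, p)]]  # multiplication gate: a scaling
S = [[1, 0], [2, 1]]                  # a pure Clifford phase gate: a shear

# each generator preserves omega
for A in (H, M, S):
    for u in [(1, 0), (0, 1), (2, 3)]:
        for v in [(1, 1), (4, 2)]:
            assert omega(act(A, u), act(A, v)) == omega(u, v)
```
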
\section{Introduction: Holographic molecular binding assays} \label{sec:HPC} \begin{figure*} \centering \includegraphics[width=0.75\textwidth]{schematic02.png} \caption{(a) Schematic representation of a molecular binding assay based on holographic particle characterization. Probe beads consist of spherical polystyrene substrates coated with functional groups (protein A) that can bind target antibodies from solution. The probe beads have an effective diameter that increases from $d_0$ to $d_p$ when antibodies bind. (b) A molecular-scale coating of antibodies influences the recorded hologram of a bead. (c) This change can be quantified by fitting to predictions of the Lorenz-Mie theory of light scattering, yielding an estimate for the fractional surface coverage and from this the concentration of antibodies.} \label{fig:schematic} \end{figure*} Holographic molecular binding assays use holographic particle characterization \cite{lee2007characterizing} to directly measure changes in the diameters of micrometer-scale colloidal spheres caused by molecules binding to their surfaces \cite{cheong2009flow,zagzag2020holographic}. This rapid measurement technique eliminates the need for fluorescent labeling to detect binding and thus reduces the cost, complexity, time to completion and expertise required for standard binding assays such as ELISA. Being based on batch-synthesized beads, holographic molecular binding assays do not require microfabricated sensors and can be performed with comparatively little sample preparation. Holographic molecular binding assays therefore have great promise as medical diagnostic tests, particularly the serological tests required to assess patients' immune responses to pathogens such as SARS-CoV-2, the coronavirus responsible for COVID-19. 
The ability to measure nanometer-scale changes in the diameters of micrometer-scale spheres is provided by quantitative analysis of single-particle holograms obtained with in-line holographic video microscopy \cite{sheng2006digital,lee2007characterizing}. The hologram of an individual colloidal sphere is fit to a generative model based on the Lorenz-Mie theory of light scattering \cite{mishchenko2002scattering,bohren2008absorption,gouesbet2011generalized} to extract the particle's diameter, $d_p$, refractive index, $n_p$ and three-dimensional position, $\vec{r}_p$ \cite{lee2007characterizing}. This measurement scheme is depicted schematically in Fig.~\ref{fig:schematic}. One such measurement can be completed in a few milliseconds and yields a bead's diameter with a precision of \SI{5}{\nm} and its refractive index to within \SI{1}{ppt}. A set of such measurements can be used to measure the mean diameter of a population of particles to within a fraction of a nanometer \cite{zagzag2020holographic}, which is sufficient to detect the growth of molecular-scale coatings. Previous demonstrations of holographic molecular binding assays \cite{cheong2009flow,zagzag2020holographic} have reported changes in probe beads' properties when the concentration of target molecules is large enough to saturate the beads' binding sites. Here, we report concentration-dependent trends that cover the range from zero analyte to binding-site saturation. Interpreting these results through the statistical mechanics of molecular binding then achieves three goals: (1) to use holographic binding assays to probe the kinetics of molecular binding; (2) to validate the effective-sphere model used to interpret holographic particle characterization measurements on coated spheres; and (3) to establish the effective range of analyte concentrations over which holographic binding assays can quantitate target molecules in solution, a key capability for clinical testing. 
\section{Experimental} We demonstrate quantitative holographic binding assays through measurements on antibodies binding to beads coated with protein A, specifically immunoglobulin G (IgG) and immunoglobulin M (IgM). These are well-studied model systems \cite{lund2011exploring} with which to validate holographic binding assays and to establish their detection limits. Given the central role of IgG and IgM in the immune response to viral pathogens, these experimental demonstrations furthermore serve as models for fast, inexpensive and quantitative serological tests. \subsection{Probe beads and buffer solution} The probe beads used for this study (Bangs Laboratories, catalog no.~CP02000, lot no.~14540) have a polystyrene core with a nominal diameter of $d_0 = \SI{1}{\um}$ and a surface layer of immobilized protein A molecules, each of which has five binding sites for the Fc region of immunoglobulins \cite{deisenhofer1981crystallographic,moks1986staphylococcal}. These functionalized beads are dispersed at a concentration of \SI{2e6}{particles\per\milli\liter} in an antibody binding buffer. The same buffer is used to dissolve antibodies for testing. Equal volumes of the probe-bead dispersion and the antibody solution are mixed to initiate incubation. The antibody binding buffer consists of \SI{50}{\milli M} sodium borate buffer prepared with boric acid (\SI{99.5}{\percent}, Sigma-Aldrich, catalog no.~B0394, lot no.~SLBM4465V) and NaOH (\SI{98}{\percent}, Sigma-Aldrich, catalog no.~S8045, lot no.~091M01421V) in deionized water (\SI{18.2}{\mega\ohm\centi\meter}, Barnstead Millipure). The pH of the buffer is adjusted to \num{8.2} with the addition of dilute HCl (\SI{38}{\percent}, Sigma-Aldrich, catalog no.~H1758) to optimize the binding of antibodies to protein A \cite{fishman2019protein}. The dispersion of functionalized colloidal spheres constitutes a bead-based assay kit for immunoglobulins that bind to protein A. 
The same approach can be used to create specific immunoassays for particular antibodies by functionalizing the beads' surfaces with suitable antigens instead of protein A. Multiplexed assays can be produced by separately functionalizing substrate beads that can be distinguished holographically by size or by refractive index and then mixing their dispersions to make a test kit. \subsection{Assay protocol} An assay is performed by dissolving target antibodies in the buffer at concentrations from \SI{200}{\nano\gram\per\milli\liter} up to \SI{200}{\micro\gram\per\milli\liter}. Antibody solution is then mixed with an equal volume of the stock dispersion of probe beads to obtain a bead concentration of \SI{e6}{particles\per\milli\liter} and antibody concentrations in the range from \SI{100}{\nano\gram\per\milli\liter} to \SI{100}{\micro\gram\per\milli\liter}. This easily allows for detection in a physiologically relevant range following suitable dilution, as the typical concentration of immunoglobulins in human serum is \SI{10}{\milli\gram\per\milli\liter} \cite{cassidy1975human}. The sample is allowed to equilibrate for $\tau = \SI{45}{\minute}$ at room temperature before being analyzed. To model immunoassays that would be relevant for serological testing, we performed assays on rabbit IgG (EMD Millipore; catalog no.~PP64, lot no.~3053798) and human IgM (Sigma-Aldrich; catalog no.~I8260, lot no.~069M4838V). Aggregation of IgM is suppressed by increasing the ionic strength of the buffer through the addition of \SI{150}{\milli M} of NaCl (\SI{99.5}{\percent}, Sigma-Aldrich, catalog no.~S7653) \cite{fishman2019antibody}. Control measurements are performed by replacing the antibodies with alcohol dehydrogenase (ADH, Sigma-Aldrich; catalog no.~A3263-7.5KU, lot no.~SLBW31382). 
Non-specific binding due to incomplete coverage of the bead surfaces by protein A is blocked for these experiments by incubating the probe beads with bovine serum albumin (BSA, Sigma-Aldrich, catalog no.~A2153). BSA adsorbs non-specifically to exposed polystyrene and does not interfere with antibody binding to protein A. ADH does not bind to either protein A or BSA and thus should not attach to the probe beads. With a molecular weight greater than \SI{140}{\kilo\dalton}, ADH is comparable in size to IgG and thus should have a similar holographic signature, were it to bind. \subsection{Holographic particle characterization} Holographic particle characterization measurements are performed with a commercial holographic particle characterization instrument (Spheryx xSight) set to record holograms at a wavelength of \SI{447}{\nm}. Each measurement involves pipetting a \SI{30}{\micro\liter} aliquot of the dispersion into the sample reservoir of one channel in an eight-channel microfluidic chip (Spheryx xCell). The sample chip is then loaded into xSight, which is set to draw \SI{1}{\micro\liter} of the sample through the observation volume in a pressure-driven flow with a peak speed around \SI{3}{\mm\per\second}. Data for a thousand beads is collected in measurement time $\Delta\tau = \SI{2}{\minute}$ and is fully analyzed in about \SI{15}{\minute}. The Lorenz-Mie theory used to analyze holograms treats each particle as a homogeneous sphere. When applied to inhomogeneous particles, such as the coated spheres in the present study, the extracted parameters must be interpreted as representing the properties of an effective sphere \cite{cheong2011holographic,odete2020role,altman2020interpreting}. These effective-sphere properties will differ from the physical properties of the coated sphere unless the coating has the same refractive index as the substrate bead. 
The refractive index of the coating, moreover, depends on the fraction, $f$, of binding sites occupied by molecules, which means that the effective diameter of the coated sphere also depends on $f$. Numerical studies show that the holographically measured diameter increases linearly with surface coverage \cite{altman2020interpreting}, \begin{equation} \label{eq:diameter} d_p = d_0 + 2 \delta \, f, \end{equation} where $d_0$ is the bare sphere's diameter and $\delta$ is the effective optical thickness of a complete layer of bound molecules. The value of $\delta$ depends on the size of the target molecule, the density of binding sites, and the refractive index of the target molecule relative to those of the medium and the substrate bead \cite{altman2020interpreting}. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{example06.png} \caption{Typical holographic molecular binding assay for a sample of probe beads incubated with \SI{10}{\micro\gram\per\milli\liter} IgG. (a) Holographically measured velocity profile. Each point represents the speed, $v_p$, of a single bead as a function of its axial position, $z_p$, relative to the instrument's focal plane. The solid curve is a fit to the parabolic Poiseuille flow profile. Horizontal dashed lines indicate the axial positions of the channel's walls inferred from this fit. Points are colored by each particle's measured diameter, $d_p$. Evenly mixed colors demonstrate that the results are not biased by the particles' positions in the channel. (b) Holographic characterization data for the same sample of beads showing the distribution of single-particle diameter, $d_p$, and refractive index, $n_p$. Points are colored by the density of measurements, $\rho(d_p, n_p)$. The central point shows the population mean for this sample and is sized to represent the uncertainty in the mean. 
} \label{fig:example} \end{figure*} Each dispersed particle is recorded and analyzed up to \num{10} times as it traverses the observation volume and the resulting three-dimensional position measurements are linked into a trajectory \cite{crocker1996methods}. The linked measurements are combined to improve the precision of the estimated values for the particle's diameter and refractive index \cite{cheong2009flow}. Typical results for a sample of beads incubated with \SI{10}{\micro\gram\per\milli\liter} of IgG are presented in Fig.~\ref{fig:example}. Each point in these scatter plots represents the holographically measured trajectory, Fig.~\ref{fig:example}(a), and properties, Fig.~\ref{fig:example}(b), of a single particle. The size of the dots is comparable to the estimated measurement precision. Single-particle trajectories are useful for mapping the fluid flow in the microfluidic channel~\cite{cheong2009flow}. Figure~\ref{fig:example}(a) shows the beads' speed, $v_p(z_p)$, as a function of axial position, $z_p$, relative to the instrument's focal plane. Data points in Fig.~\ref{fig:example}(a) are colored by the spheres' measured diameters and show that particles are distributed uniformly throughout the channel and that particle size is not correlated with height in the channel. Fitting these data to the anticipated parabolic Poiseuille flow profile yields estimates for the positions of the upper and lower walls of the channel, which are indicated by the horizontal dashed lines in Fig.~\ref{fig:example}(a). Mapping the flow profile provides an important internal reliability check, ensuring that the sample has flowed smoothly through the channel, that the microfluidic channel is properly seated in the instrument, and that trajectory linking has proceeded correctly.
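The wall positions in Fig.~\ref{fig:example}(a) follow from a parabolic fit of exactly this kind. As an illustration, a minimal sketch on synthetic bead speeds; all numerical values here are made up for the example and are not the measured values:

```python
import numpy as np

# Synthetic single-bead speeds in a pressure-driven (Poiseuille) channel flow.
# Wall positions, peak speed and noise level are illustrative only.
rng = np.random.default_rng(0)
z_lo, z_hi, v_max = 0.0, 50.0, 3.0                 # walls [um], peak speed [mm/s]
z = rng.uniform(z_lo, z_hi, 1000)                  # axial bead positions
v = 4 * v_max * (z - z_lo) * (z_hi - z) / (z_hi - z_lo)**2
v += rng.normal(0.0, 0.05, z.size)                 # measurement noise

# Fit the parabolic profile v(z) = a z^2 + b z + c ...
a, b, c = np.polyfit(z, v, 2)

# ... and recover the walls as the roots of the fitted parabola.
walls = np.sort(np.roots([a, b, c]).real)
print(walls)   # close to the true wall positions, 0 and 50
```

In practice the same fit is performed on the holographically measured $(z_p, v_p)$ pairs, and the roots of the parabola locate the two no-slip walls.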
Figure~\ref{fig:example}(b) shows the single-particle characterization data obtained from these trajectories, with each point representing the effective diameter, $d_p$, and refractive index, $n_p$, of a single bead. Plot symbols are colored by the density of observations, $\rho(d_p, n_p)$. The \num{890} particles in this data set enable us to compute the population-average diameter, $d_p = \SI{0.974(2)}{\um}$, and the mean refractive index, $n_p = \num{1.5692(8)}$. The value for the refractive index is significantly smaller than the value of \num{1.60} expected for polystyrene at the imaging wavelength, and is consistent with expectations for a coated sphere in the effective-sphere interpretation \cite{altman2020interpreting}. The mean diameter is significantly larger than the baseline value of $d_0 = \SI{0.964(2)}{\um}$ obtained for the probe beads alone. The difference, $\Delta_p = d_p - d_0 = \SI{10(3)}{\nm}$, is consistent with a statistically significant detection of antibody binding \cite{zagzag2020holographic} at concentrations orders of magnitude lower than physiological levels \cite{cassidy1975human,goldstein2006selective,long2020antibody}. A principal aim of the present study is to combine the effective-sphere analysis of probe beads' holograms \cite{cheong2009flow,zagzag2020holographic,altman2020interpreting} with the statistical physics of molecular binding to obtain quantitative information on the kinetics of antibody binding from measurements of $d_p(c,t)$. Conversely, this analysis establishes that a holographically observed shift in bead diameter can be used to measure the concentration of antibodies in solution and furthermore establishes the trade-off between concentration sensitivity and measurement time for such holographic immunoassays.
\subsection{Kinetics of molecular binding} Antibodies bind rapidly to protein A in the antibody binding buffer and the rate of dissociation is small enough for the process to be considered irreversible \cite{norde1992energy}. Antibodies therefore continue to bind to the probe beads until all of the surface sites are saturated or the solution is depleted. Assuming that depletion may be ignored and the solution remains well mixed, the fraction of occupied sites, $f(c, t)$, increases at a rate that depends on the concentration of antibodies, $c$, and the availability of unoccupied sites \cite{privman1991continuum,buijs1996adsorption,adamczyk2000kinetics} \begin{equation} \label{eq:ritsi} \frac{df}{dt} = \gamma(c) [1 - f(c, t)]. \end{equation} This model differs from those in previous studies \cite{dancil1999porous,ogi2007concentration,nelson2015mechanism} by not having to account for detachment of antibodies from binding sites. Minimizing unbinding optimizes the sensitivity of the assay to small concentrations of analyte and reduces the time required to perform measurements. The rate constant, $\gamma(c)$, accounts for the microscopic kinetics of molecular binding. Further assuming that the concentration of antibodies is low enough that binding events are independent, we model $\gamma(c) = k c$, where $k$ is the binding rate for the antibodies in the antibody binding buffer. The solution to Eq.~\eqref{eq:ritsi}, \begin{equation} \label{eq:f} f(c, t) = 1 - e^{- k c t}, \end{equation} satisfies the initial condition $f(c, 0) = 0$ and shows that binding assays can be performed either as a function of time for fixed antibody concentration, $c$, or as a function of concentration at fixed incubation time, $t$. 
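As a consistency check, the closed form of Eq.~\eqref{eq:f} can be compared with a direct numerical integration of Eq.~\eqref{eq:ritsi}. A short sketch with illustrative values of $k$ and $c$, chosen for the example rather than taken from the fits:

```python
import numpy as np

# Illustrative parameters (not the fitted constants reported in the text):
k = 1.2e-5          # binding rate [mL / (ug s)]
c = 10.0            # antibody concentration [ug/mL]

# Forward-Euler integration of df/dt = k c (1 - f) over a 45 min incubation ...
dt, t_end = 1.0, 45 * 60
t = np.arange(0.0, t_end + dt, dt)
f = np.zeros_like(t)
for i in range(1, t.size):
    f[i] = f[i - 1] + dt * k * c * (1.0 - f[i - 1])

# ... agrees with the closed form f(c, t) = 1 - exp(-k c t).
f_exact = 1.0 - np.exp(-k * c * t)
print(np.max(np.abs(f - f_exact)))   # small discretization error
```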
If, furthermore, the measurement is performed over a time interval, $\Delta\tau$, starting after incubation time $\tau$, the average coverage is \begin{subequations} \label{eq:fbar} \begin{align} \bar{f}(c, \tau) & = \frac{1}{\Delta\tau} \int_\tau^{\tau+\Delta\tau} f(c, t) \, dt \\ & = 1 - \frac{1 - e^{-k c \Delta \tau}}{k c \Delta\tau} \, e^{- k c \tau}. \end{align} \end{subequations} \subsection{Monitoring binding holographically} \label{sec:diameterincrease} Combining Eq.~\eqref{eq:diameter} with Eq.~\eqref{eq:fbar} yields an expression for the dependence of the measured bead diameter on the target molecules' concentration in solution: \begin{equation} \label{eq:diametershift} \Delta_d(c, \tau) \equiv d_p - d_0 = 2 \delta \, \left( 1 - \frac{1 - e^{-k c \Delta \tau}}{k c \Delta\tau} \, e^{- k c \tau} \right). \end{equation} Holographic measurements of $\Delta_d(c,\tau)$ at fixed incubation time $\tau$ can be interpreted with Eq.~\eqref{eq:diametershift} to estimate the effective layer thickness, $\delta$, and the rate constant, $k$. These values, in turn, can be used to anticipate how the sensitivity of the assay for antibody concentration depends on incubation time, $\tau$. This sensitivity can be further improved by reducing uncertainties in $\Delta_d(c, \tau)$, either by extending the measurement time to analyze more beads or by optimizing the optical properties of the beads to increase $\delta$ \cite{altman2020interpreting}. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{titration12.png} \caption{Holographic molecular binding assays for (a) IgG (red circles) and (b) IgM (green squares) to colloidal beads coated with protein A dispersed in antibody binding buffer. IgM assay is performed with \SI{150}{\milli M} added NaCl to suppress aggregation. 
Discrete points show the increase, $\Delta_d(c, \tau) = d_p(c, \tau) - d_0$, of the population-average effective-sphere diameter, $d_p(c, \tau)$, relative to the probe beads' reference diameter, $d_0$, as a function of antibody concentration, $c$, after fixed incubation time $\tau = \SI{45}{\minute}$. Solid curves are best-fits to Eq.~\eqref{eq:diametershift} for measurement time $\Delta\tau = \SI{2}{\minute}$. (c) 45~minute incubation with alcohol dehydrogenase (ADH) has no measurable effect on probe bead diameters. (d) Binding data collapsed according to Eq.~\eqref{eq:diametershift}. Concentrations are scaled by $k \tau$ and diameter shifts are scaled by the layer thickness, $\delta$.} \label{fig:experiment} \end{figure*} \section{Results} The discrete points in Fig.~\ref{fig:experiment} show measured shifts, $\Delta_d(c, \tau)$, in the population-average bead diameter after $\tau = \SI{45}{\minute}$ incubation with (a) IgG, (b) IgM and (c) ADH. These shifts are measured in nanometers and illustrate the precision with which holographic particle characterization can resolve the diameters of probe beads. Error bars indicate uncertainties in the mean diameter given the particle-counting statistics for each measurement. A single point represents results from roughly \num{1000} beads observed in \SI{1}{\micro\liter} of the sample over $\Delta \tau = \SI{2}{\minute}$. As anticipated, bead diameters increase upon incubation with antibodies by an amount that depends on antibody concentration. Incubation with ADH has no such effect, presumably because ADH does not bind to protein A. Results for IgG and ADH are presented for concentrations up to \SI{100}{\micro\gram\per\milli\liter}. IgM is plotted only up to \SI{20}{\micro\gram\per\milli\liter} because $\Delta_d(c,t)$ reaches a plateau beyond $c = \SI{5}{\micro\gram\per\milli\liter}$, which we interpret to represent saturation of the available surface sites by IgM.
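Fits of Eq.~\eqref{eq:diametershift} to titration data of this kind can be set up in a few lines. A minimal sketch on synthetic data, with illustrative values of $\delta$ and $k$ and an assumed noise level:

```python
import numpy as np
from scipy.optimize import curve_fit

tau, dtau = 45 * 60.0, 2 * 60.0    # incubation and measurement times [s]

def shift(c, delta, k):
    # Delta_d(c, tau) = 2 delta [1 - (1 - e^{-k c dtau})/(k c dtau) e^{-k c tau}]
    x = k * c
    return 2 * delta * (1 - (1 - np.exp(-x * dtau)) / (x * dtau) * np.exp(-x * tau))

# Synthetic titration with illustrative parameters and 0.5 nm noise.
rng = np.random.default_rng(1)
c = np.logspace(-1, 2, 12)                               # concentrations [ug/mL]
data = shift(c, 8.0, 1.2e-5) + rng.normal(0, 0.5, c.size)

(delta_fit, k_fit), _ = curve_fit(shift, c, data, p0=[5.0, 1e-5])
print(delta_fit, k_fit)   # recovers delta near 8 nm and k near 1.2e-5 mL/(ug s)
```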
The solid curves in Fig.~\ref{fig:experiment}(a) and Fig.~\ref{fig:experiment}(b) are fits of the measured bead diameters to Eq.~\eqref{eq:diametershift} for the apparent layer thickness, $\delta$, and the rate constant, $k$. Interestingly, fits to the data for both IgG and IgM are consistent with an effective layer thickness of $\delta = \SI{8.0(5)}{\nm}$ even though IgM has five times the molecular weight of IgG. This agreement could be a coincidence arising from the effective-sphere interpretation of holographic imaging data \cite{altman2020interpreting}. It also is consistent with a model in which multi-site binding of the predominantly pentameric IgM assembly results in a flattened orientation of the IgM on the probe beads' surfaces, thus contributing no more to $\delta$ than the single domain of IgG. The fit value for the rate constant of IgG is $k_\text{G} = \SI{1.2(3)e-5}{\milli\liter\per\micro\gram\per\second}$, which corresponds to a rate per binding site of $m_\text{G} k_\text{G} = \SI{3.0(8)e-18}{\milli\liter\per\second}$, given the $m_\text{G} = \SI{150}{\kilo\dalton}$ molecular weight of IgG. We express this figure as a rate of binding events per surface site rather than as a rate per molecule to emphasize that the molecules are filling available binding sites on the probe beads. The corresponding rate constant for IgM, $k_\text{M} = \SI{2.5(8)e-4}{\milli\liter\per\micro\gram\per\second}$, is an order of magnitude greater than $k_\text{G}$. The difference becomes even greater when account is taken of the $m_\text{M} = \SI{970}{\kilo\dalton}$ molecular mass of pentameric IgM: $m_\text{M} k_\text{M} = \SI{4.1(12)e-16}{\milli\liter\per\second}$. Naively assuming that each IgG molecule occupies $\nu_\text{G} = 1$ binding site and each IgM occupies $\nu_\text{M} = \num{5}$ reduces the difference proportionately, \begin{equation} \frac{m_\text{M} k_\text{M}}{\nu_\text{M}} \frac{\nu_\text{G}}{m_\text{G} k_\text{G}} = \num{27(1)}.
\end{equation} The remaining large difference in binding rates cannot be ascribed to differences in bulk transport properties because the molecules' diffusion constants scale inversely with their sizes, which suggests that IgG, being smaller, should attach more rapidly. It may instead reflect differences in the two antibodies' microscopic binding mechanisms \cite{law2019igm}. Possible explanations include differences in binding probabilities as molecules approach the surface due to the multivalent presentation of binding sites for the pentameric IgM. In addition, different barriers to attachment may arise due to variations in the nature of electrostatic interactions for immunoglobulins. A more thorough evaluation of the influence of multivalency on attachment kinetics for IgGs, IgMs and other biomacromolecules will provide an intriguing challenge for our future studies. Nevertheless, even a simplified model, such as a one-to-one binding mode between protein A and IgG, suffices for chemical analysis of immunoglobulin concentration in solution. Given our primary goal of developing immunoassays for serological testing, the experimental results in Fig.~\ref{fig:experiment} confirm that holographic particle characterization provides a basis for quantitative measurements of antibody concentrations under physiological conditions. The success of these fits to a kinetic model for attachment is demonstrated by the data collapse in Fig.~\ref{fig:experiment}(d), with results from IgG and IgM both falling on the same master curve despite the substantial difference in the two antibodies' rate constants. \section{Conclusion} This study has demonstrated that holographic particle characterization can perform quantitative molecular binding assays, including measuring the rate constants that characterize molecular binding.
Our results demonstrate that a single \SI{15}{\minute} measurement can quantify the concentration of IgG in solution down to concentrations as low as \SI{10}{\micro\gram\per\milli\liter} and concentrations of IgM as low as \SI{1}{\micro\gram\per\milli\liter}. Longer measurements and larger statistical samples can improve this sensitivity, both by increasing occupancy of binding sites and also by reducing uncertainty in the diameter shift. Whereas the IgG-protein A system has been studied extensively, less has been written about binding of IgM to substrates coated with protein A. The holographic assays reported here provide insights into the binding mechanism that may inform future studies. We find, for example, that IgM tends to bind significantly more rapidly to protein A than IgG. Our observations also suggest that IgM may tend to bind flat to the surface of a functionalized bead. How these trends depend on such factors as electrolyte composition and concentration falls outside the intended scope of the present study and will be addressed elsewhere. Using protein A to provide binding functionalization yields a general-purpose assay for antibody concentration, rather than an immunoassay for specific antibodies. This general-purpose assay should already be useful as a rapid screening test for Antibody Deficiency Disorders \cite{ballow2002primary,patel2019expanding}. Holographic immunoassays can be targeted for specific diseases by replacing protein A as a surface binding group with appropriate specific antigens, including peptides, proteins, or other biomolecules. Such functionalized colloidal spheres are standard components of conventional bead-based assays, which typically rely on fluorescent labels for readout. Holographic analysis yields results faster and at lower cost by eliminating reagents, processing steps and expertise needed to apply fluorescent labels.
Holographic analysis furthermore yields quantitative results for antibody concentration without requiring extensive calibration. The speed and sensitivity of holographic immunoassays can be improved further by optimizing the sizes and optical properties of the substrate beads. Such efforts currently are under way. \section*{Acknowledgments} This work was supported by the RAPID program of the National Science Foundation under Award No.\ DMR-2027013. Partial support was provided by NASA through Grant No.\ NNX13AR67G. The Spheryx xSight holographic characterization instrument used in this study was acquired by New York University's Materials Research Science and Engineering Center as shared instrumentation with support from the NSF under Award No.\ DMR-1420073.
\section{Thin film synthesis} Thin films of CaRuO$_3$ (CRO) have been prepared by utilizing a metalorganic aerosol deposition technique~\cite{MAD}. Here, a solution of commercial acetylacetonates of Ca$^{2+}$ and Ru$^{3+}$, dissolved in dimethylformamide, is sprayed through a pneumatic nozzle with dried air onto a heated substrate~\cite{Melanie_MAD}. In the hot region close to the substrate, the acetylacetonates decompose (above 250$^\circ$C), the metal ions oxidize and the oxide film grows on the substrate. In our previous work on the system Sr$_{1-x}$Ca$_x$RuO$_3$ we have used (100) oriented SrTiO$_3$ (STO) as substrate and obtained, after optimization of the growth conditions, (110) oriented thin films of high quality with RRR=29 ($x=0$) and 16 ($x=1$)~\cite{Melanie_MAD,Schneider}. Careful x-ray diffraction (XRD) analysis has revealed that these epitaxial CRO films are fully strained. While STO is well suited for films with composition between $x=0$ and $0.5$, the larger lattice mismatch of $-2\%$ for $x=1$ prevents higher RRR values for pure CRO. Thus, we have chosen NdGaO$_3$ (NGO) as substrate for the growth of high-quality CRO thin films for which the lattice mismatch is considerably smaller ($-0.5\%$). Growth parameters very similar to those reported earlier~\cite{Melanie_MAD}, with a substrate temperature slightly exceeding $1000^\circ$C, have enhanced the RRR values up to 35. In order to further improve the thin film quality and RRR value, we have chosen vicinal NGO substrates with a 3$^\circ$ miscut angle, which optimizes regular step-bunching growth. The surface morphology studied by scanning tunneling microscopy (see Fig. 1b of main paper) has confirmed step-bunching growth corresponding to the steps of the NGO substrate. \section{Structural properties} \begin{figure}[t!]
\includegraphics[width=1\columnwidth]{FigS1} \caption{$\Theta-2\Theta$ XRD scans around the (110) reflection of plain vicinal NdGaO$_3$ (NGO) substrate (upper panel), CaRuO$_3$ (CRO) thin film (thickness 78 nm) on vicinal NGO (middle panel) and CRO (9 nm) on vicinal NGO (lower panel). The black arrow indicates the CRO (110) reflection while the grey arrows mark Laue fringes in the very thin film.} \end{figure} Fig.\ 1 displays XRD scans of CRO on a vicinal NGO substrate. Here, the top panel shows the intensity spectrum of the plain NGO substrate while the two lower panels display CRO thin films grown on NGO. The black arrow marks the (110) reflection from CRO in the orthorhombic Pbnm perovskite structure. No secondary phases are visible. For the thinnest film of thickness 9~nm, Laue fringes are visible, which reflect a very low surface roughness. \begin{figure}[t!] \includegraphics[width=1\columnwidth]{FigS2} \caption{Dependence of the (110) lattice constant for CRO thin films on vicinal NGO as a function of film thickness, determined by small angle XRD.} \end{figure} Fig.\ 2 displays the (110) lattice constant as a function of thin film thickness. NGO exhibits a slightly larger lattice constant than CRO, which leads to a small in-plane tensile strain acting on the CRO thin films. This results in a decrease of the out-of-plane lattice constant, observed particularly for CRO films of low thickness (cf.\ Fig.\ 2). A relaxation of the in-plane tensile strain caused by the negative lattice mismatch of $\sim-0.5\%$ is reached for film thicknesses larger than 20 nm. \begin{figure} \includegraphics[width=0.95\linewidth]{FigS3.png} \caption{(color online) Upper part: transmission electron microscopy (TEM) image on $\mu$m scale of the cross-section of a lamella with NdGaO$_3$ (NGO) substrate, CaRuO$_3$ (CRO) thin film and Pt protection layer (from bottom to top). The $3^\circ$ sawtooth surface of the thin film is marked by red lines.
The red arrows indicate crystallographic orientations as determined by Fourier transforms of atomic-resolution TEM images of the substrate (lower left) and thin film (lower right).} \label{Fig2} \end{figure} High-resolution transmission electron microscopy (TEM) measurements, displayed in Fig.\ 3, indicate homogeneous growth of CRO without any indication of secondary phases or grain boundaries. The same also holds true for studies over larger lateral dimensions. The calculated fast Fourier transforms of the cross-sectional TEM images indicate well-ordered patterns and epitaxial growth with equal orientation of the film and substrate. \begin{figure}[t!] \includegraphics[width=1\columnwidth]{FigS4.pdf} \caption{Residual resistivity of CRO thin films on NGO substrates vs. film thickness. The room temperature resistivity amounts to $230\ \mu\Omega$cm.} \end{figure} As shown in Fig. 4, the residual resistivity increases with reduced film thickness. This may be attributed to surface scattering and/or the effect of in-plane tensile strain, which may act slightly inhomogeneously. \section{Shubnikov-de Haas experiments} For studying the Shubnikov-de Haas oscillations in the electrical resistance of CaRuO$_3$, two different thin films were micro-structured~\cite{Schneider}. The bar width of sample D12 is $50\,\mu$m, that of sample D41 is $100\,\mu$m. Dilution refrigerators were utilized for the four-probe resistivity measurements. The first set of data has been obtained on sample D10 in a superconducting laboratory magnet with a maximum field of 18 Tesla, applied perpendicular to the thin film plane along the [110] direction. Subsequently, we have performed high-field measurements at the LNCMI Grenoble in an electromagnet with a maximum field of 35 Tesla. There, we have measured both thin films simultaneously. Sample D41 has been mounted such that we could study the variation of the quantum oscillations upon rotating the field from the [110] towards the [001] direction.
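Shubnikov-de Haas oscillations are periodic in $1/B$, so the frequencies are conveniently read off from a Fourier transform of the resistance with respect to inverse field. A minimal sketch on a synthetic signal; the frequency and field window are illustrative, not the measured spectrum:

```python
import numpy as np

# Synthetic SdH signal: an oscillation periodic in 1/B with F = 300 T.
F = 300.0
inv_B = np.linspace(1 / 35.0, 1 / 10.0, 4000)    # uniform grid in 1/B [1/T]
signal = np.cos(2 * np.pi * F * inv_B)

# FFT with respect to 1/B: the peak position is the SdH frequency in tesla.
spec = np.abs(np.fft.rfft(signal * np.hanning(signal.size)))
freqs = np.fft.rfftfreq(signal.size, d=inv_B[1] - inv_B[0])
F_peak = freqs[np.argmax(spec[1:]) + 1]          # skip the DC bin
print(F_peak)   # near 300 T, within the ~14 T resolution of this field window
```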
However, as the field is rotated towards the thin film plane by an angle larger than $40^\circ$, the SdH oscillations are suppressed because the cyclotron orbits interfere with the sample edges. \section{Fermi surfaces in band-theory} We have calculated the electronic structure of CaRuO$_3$ within density-functional theory using the local-density approximation (LDA) as implemented in the Wien2k package~\cite{wien2k}. The structural information was taken from Table I in Ref.~\cite{zayak}. The electronic structure of CaRuO$_3$ was previously calculated and discussed in detail~\cite{Mazin}; however, detailed information on its Fermi surfaces was not given. Using the SKEAF program~\cite{skeaf} we have also calculated the extremal orbits and the corresponding frequencies of quantum oscillations and their effective masses. We visualized the surfaces and the extremal orbits using Python scripts based on the Mayavi libraries~\cite{mayavi}. We checked our visualizations against XCrysden~\cite{xcrysden} for consistency. To verify that the residual resistivity is compatible with the band theory we calculated the conductivity using the Boltzmann transport approach described in Ref.~\cite{boltztrap} and obtained $\sigma_\mathrm{dc}/\tau_\mathrm{tr}=0.15\times 10^{21}\,(\Omega\,\mathrm{m\,s})^{-1}$. \begin{figure} \includegraphics[width=0.95\linewidth]{Fig_dos} \caption{Calculated density of states (DOS) for CaRuO$_3$. The energy is measured with respect to the Fermi energy.} \label{fig:dos} \end{figure} The CaRuO$_3$ DOS is shown in Fig.~\ref{fig:dos}. The DOS at the Fermi energy is 4.41 states/(eV f.u.), which corresponds to a linear specific-heat coefficient $\gamma=10.4$~mJ/(mol\,K$^2$). The low energy (``t$_{2g}$'') manifold is gapped from the higher energy (``e$_g$'') manifold due to orthorhombic splittings. CaRuO$_3$ electronic dispersions are shown in Fig.~\ref{fig:bands}.
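The quoted Sommerfeld coefficient follows from the calculated DOS through the standard relation $\gamma = (\pi^2/3) k_B^2 N(E_F)$ per mole of formula units; a quick numerical check:

```python
import numpy as np

k_B = 1.380649e-23        # Boltzmann constant [J/K]
eV  = 1.602176634e-19     # electron volt [J]
N_A = 6.02214076e23       # Avogadro constant [1/mol]

N_EF = 4.41               # calculated DOS at E_F [states / (eV f.u.)]

# Sommerfeld formula gamma = (pi^2 / 3) k_B^2 N(E_F), per mole of formula units
gamma = (np.pi**2 / 3) * k_B**2 * (N_EF / eV) * N_A
print(gamma * 1e3)        # -> about 10.4 mJ / (mol K^2)
```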
As there are 4 formula units in the orthorhombic Pbnm unit cell, there are 12 ``t$_{2g}$'' bands that also have some e$_g$ character due to orthorhombic distortions. In the algorithm, the bands are numbered according to their energy at each k-point separately; thus, the band character is not necessarily followed properly along the k-path. All the crossings between bands are mistaken for anticrossings, as can be seen by following the orbital character that is indicated by the symbol size on the plot. This has no consequences for the identification of Fermi surfaces. \begin{figure} \includegraphics[width=0.95\linewidth]{Fig_bands} \caption{LDA band structure close to Fermi energy in CaRuO$_3$. The d$_{xz}$ weight is indicated by the thickness of the symbols. The points along the path are (in units of reciprocal unit wave-vectors) $\Gamma=(0,0,0)$, X=$(0,\frac{1}{2},0)$, S=$(\frac{1}{2},\frac{1}{2},0)$, R=$(\frac{1}{2},\frac{1}{2},\frac{1}{2})$, U=$(0,\frac{1}{2},\frac{1}{2})$, Z=$(0,0,\frac{1}{2})$, T=$(\frac{1}{2},0,\frac{1}{2})$. Fermi level (dashed) and its possible spin-dependent shifts in the magnetic field of 10~T, taking Stoner enhancements into account (dotted) are indicated. \label{fig:bands}} \end{figure} Several bands cross the Fermi energy and several Fermi surface sheets are found. The Fermi surfaces of CaRuO$_3$ are presented in Fig.~\ref{fig:fermi}. As a naming convention, we use Greek letters with Latin indices to identify different Fermi surface sheets, where $\alpha$ is used for the sheet that encloses the smallest and $\zeta$ the largest volume. Several sheets of the Fermi surface fully enclose another sheet of the Fermi surface of a smaller volume. This is the case, for instance, for the $\beta$ sheets. In such cases we label all the corresponding sheets with the same Greek letter, but use numerical subscripts to distinguish the sheets among themselves, e.g.
$\beta_1$, $\beta_2$, $\beta_3$, $\beta_4$ from the innermost to the outermost sheet, respectively. In Fig.~\ref{fig:fermi}(a) one can see a small hole pocket that one encounters in the direction $\Gamma \rightarrow$U, and in Fig.~\ref{fig:fermi}(e,f) two small electron pockets, $\beta_1$ and $\beta_2$, that sit at the edge of the Brillouin zone, touching the surface perpendicular to the 100 direction. Figs.~\ref{fig:fermi}(c,d) show more complex surfaces, where each sheet displayed in (d) has a larger-volume companion displayed in (c). The sheets $\epsilon_1$ and $\epsilon'_1$, which appear to be separate in (d), merge in their larger-volume realization $\epsilon_2$ shown in (c), which is also why we used $\epsilon_1'$ to label one of the two $\epsilon$-sheets presented in (c). Sheet $\gamma_1$ has a peculiar pea-pod shape (containing three 'peas') and is enclosed by a more irregular $\gamma_2$ sheet. The center of the 010 face is enclosed by two flat-pebble-like sheets, $\delta_{1,2}$. Fig.~\ref{fig:fermi}(b) displays the $\zeta$ sheet with a complicated shape and topology; see also Fig.~\ref{fig:fermi2} for a view from the top. It can vaguely be described as consisting of a central Greek-cross shape with additional wings close to the Brillouin zone boundaries perpendicular to the 010 direction. The central cross furthermore contains a spacious cave around the $\Gamma$ point in its interior, with small ``windows'' that are visible when viewed from the 001 direction. Some of the extremal orbits for a few orientations of magnetic field are visualized with tubes in Fig.~\ref{fig:fermi}. Two orbits whose oscillation frequencies are consistent with the frequencies that we were able to extract from the experimental signal with confidence are indicated in color: red orbits for the field in the (degenerate) 110 and 1-10 directions (c) and cyan for the magnetic field tilted by 35 degrees towards the 001 direction (d).
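Each quantum-oscillation frequency corresponds to an extremal cross-sectional area of a Fermi-surface sheet through the Onsager relation $F = (\hbar/2\pi e) A_k$. As an example, converting the $\zeta_4$ frequency of 396~T from Table~\ref{table_1} into an area and the radius of an equivalent circular orbit:

```python
import numpy as np

hbar = 1.054571817e-34    # reduced Planck constant [J s]
e = 1.602176634e-19       # elementary charge [C]

def orbit_area(F_tesla):
    """Extremal cross section A_k [m^-2] from the Onsager relation F = (hbar / 2 pi e) A_k."""
    return 2 * np.pi * e * F_tesla / hbar

A = orbit_area(396.0)             # zeta_4 orbit from the table, F = 396 T
k_F = np.sqrt(A / np.pi)          # radius of an equivalent circular orbit
print(A * 1e-20, k_F * 1e-10)     # about 0.038 A^-2 and 0.11 A^-1
```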
The quantum-oscillation frequencies, the effective masses, the hole/electron character of the orbit and the corresponding sheet of the Fermi surface are presented in Table~\ref{table_1} for magnetic field pointing in the 110 direction. We refer to different extremal orbits using the name of the sheet, with an additional numerical superscript (with small numbers for short orbits). \begin{comment}As some of the extremal orbits appear/disappear with the turning magnetic field, we assign the names at our reference field direction (pointing perpendicular to the film, that is, along 110 direction), and denote the new orbits that appear with numbers, starting from one again. \end{comment} \begin{figure} \begin{overpic}[trim=0 0 0 0,clip=true,width=0.49\linewidth]{Fig_fs_1} \put(65,30){$\alpha$} \put(87,6){(a)} \end{overpic} \begin{overpic}[trim=60 60 130 130,clip=true, width=0.49\linewidth]{Fig_fs_2} \put(70,25){$\zeta$} \put(87,6){(b)} \end{overpic} \\ \begin{overpic}[trim=90 70 130 140,clip=true,width=0.49\linewidth]{Fig_fs_3}\put(70,16){$\beta_4$} \put(36,31){$\gamma_2$} \put(54,32){$\delta_2$} \put(23,18){$\epsilon_2$} \put(87,6){(c)} \end{overpic} \begin{overpic}[trim=50 30 50 60,clip=true,width=0.49\linewidth]{Fig_fs_4}\put(73,18){$\beta_3$} \put(36,33){$\gamma_1$} \put(57,32){$\delta_1$} \put(52,15){$\epsilon'_1$} \put(24,18){$\epsilon_1$} \put(87,6){(d)} \end{overpic} \\ \begin{overpic}[trim=30 0 30 40,clip=true,width=0.49\linewidth]{Fig_fs_5}\put(73,15){$\beta_2$} \put(87,6){(e)} \end{overpic} \begin{overpic}[trim=30 0 30 40,clip=true,width=0.49\linewidth]{Fig_fs_6}\put(74,14){$\beta_1$} \put(87,6){(f)} \put(48.5,92){$\scriptstyle \bullet \, \,(-\frac{1}{2},-\frac{1}{2},\frac{1}{2})$} \put(4,31.3){$\scriptstyle \bullet \, \,(\frac{1}{2},-\frac{1}{2},-\frac{1}{2})$} \put(47,6.5){$\scriptstyle \bullet \, \,(\frac{1}{2},\frac{1}{2},-\frac{1}{2})$} \end{overpic} \caption{Fermi surfaces in CaRuO$_3$.
The extremal orbits for the magnetic field in the 110, 1-10, and 001 directions are also shown. In (c) the experimentally observed orbit is indicated in red. In (d), the experimentally observed orbit in the tilted magnetic field is indicated in cyan.} \label{fig:fermi} \end{figure} \begin{figure} \begin{overpic}[trim=100 50 100 50,clip=true,width=0.99\linewidth]{Fig_fs_7} \end{overpic} \caption{The $\zeta$ Fermi-surface sheet viewed from the 001 direction. Note the hole in the center.} \label{fig:fermi2} \end{figure} \begin{table*} \begin{center} \begin{tabular}{c c c c c c} name & sheet & ``band'' index & frequency [T] & mass [$m_e$] & comment \\ \hline $\alpha^1$ & $\alpha$ & 87 & 86 & 0.32 & hole pocket \\ $\alpha^2$ & $\alpha$ & 87 & 96 & 0.36 & same hole pocket, from different side \\ \hline $\zeta_1$ & $\zeta$ & 88 & 300 & 1.44 & hole-like (wing, second-farthest from body) \\ $\zeta_2$ & $\zeta$ & 88 & 335 & 1.18 & hole-like (wing, farthest from body) \\ $\zeta_3$ & $\zeta$ & 88 & 338 & 2.41 & hole-like (wing, closest to body) \\ $\zeta_4$ & $\zeta$ & 88 & 396 & 1.48 & hole-like (goes through ``stomach'') \\ $\zeta_5$ & $\zeta$ & 88 & 434 & 1.17 & hole-like (wing, second-closest to body) \\ $\zeta_6$ & $\zeta$ & 88 & 1089 & 1.45 & electron-like (follows necks that penetrate out of the Brillouin zone) \\ $\zeta_7$ & $\zeta$ & 88 & 1512 & 4.72 & hole-like (goes around the wing in the long direction)\\ \hline $\delta_2^2$ & $\delta_2$ & 89 & 483 & 1.03 & electron pocket (flat-pebble) \\ $\delta_2^1$ & $\delta_2$ & 89 & 461 & 0.83 & electron pocket (flat-pebble) \\ $\gamma_2^3$ & $\gamma_2$ & 89 & 438 & 0.85 & electron pocket (pea-pod: side) \\ $\gamma_2^2$ & $\gamma_2$ & 89 & 389 & 1.10 & electron pocket (pea-pod: center) \\ $\gamma_2^1$ & $\gamma_2$ & 89 & 354 & 0.97 & electron pocket (pea-pod: side) \\ $\epsilon_2^4$ & $\epsilon_2$ & 89 & 1019 & 1.20 & electron pocket (neck part, different side) \\ $\epsilon_2^3$ & $\epsilon_2$ & 89 & 799 & 1.40 & electron pocket (side 
part, different side) \\ $\epsilon_2^2$ & $\epsilon_2$ & 89 & 785 & 1.06 & electron pocket (side part) \\ $\epsilon_2^1$ & $\epsilon_2$ & 89 & 133 & 1.03 & electron-like triangular-shaped neck \\ $\beta_4$ & $\beta_4$ & 89 & 773 & 1.40 & electron pocket \\ \hline $\epsilon_1$ & $\epsilon_1$ & 90 & 780 & 0.75 & electron pocket \\ $\epsilon'_1$ & $\epsilon'_1$ & 90 & 254 & 0.43 & electron pocket \\ $\epsilon'^2_1$ & $\epsilon'_1$ & 90 & 362 & 0.76 & electron pocket \\ $\delta_1$ & $\delta_1$ & 90 & 263 & 0.63 & electron pocket (flat-pebble) \\ $\gamma_1^2$ & $\gamma_1$ & 90 & 124 & 0.83 & electron pocket (pea-pod: center) \\ $\gamma_1^1$ & $\gamma_1$ & 90 & 104 & 0.76 & electron pocket (pea-pod: side) \\ $\beta_3$ & $\beta_3$ & 90 & 409 & 0.86 & electron pocket (inside $\beta_4$) \\ \hline $\beta_2$ & $\beta_2$ & 91 & 210 & 0.55 & electron pocket (inside $\beta_3$) \\ $\beta_1$ & $\beta_1$ & 92 & 163 & 0.38 & electron pocket (inside $\beta_2$) \\ \end{tabular} \end{center} \caption{Data on the SdH extremal orbits for the magnetic field in the 110 direction. \label{table_1}} \end{table*} Fig.~\ref{fig:angular} shows the angular dependence of the SdH frequencies. The window from about 0.1 to about 1~kT is quite densely covered; the smallest pockets have frequencies of about 80~T. \begin{figure} \includegraphics[width=\linewidth]{Fig_plot_broad} \\ \includegraphics[width=\linewidth]{Fig_plot_medium} \caption{The angular dependence of the SdH frequencies as the magnetic field is turned from the 110 to the 001 direction. A broad (top) and a narrower (bottom) frequency range are shown. 
Different symbols are used for the extremal orbits of the Fermi-surface sheets presented in Fig.~\ref{fig:fermi}(a-f), for (i-vi), respectively.} \label{fig:angular} \end{figure} \subsection{Extremal orbits at around 35 degrees inclination} The Fourier-transformed SdH data at 35 degrees inclination from 110 to 001 reveal two well-identifiable peaks: one with frequency 541$\pm$20~T and mass $(4.9 \pm 0.3)\,m_e$, and one with frequency 472$\pm$20~T and mass $(4.3 \pm 0.4)\,m_e$. We attribute this signal to the 460~T extremal orbit on sheet $\beta_3$ with mass $1\,m_e$. Other extremal orbits have frequencies within the experimental error bars, but their angular dependence is incompatible with our observations and their masses are too high ($1.3\,m_e$ for a 460~T orbit associated with the $\gamma_2$ sheet, which is the lightest among them). \begin{comment} \subsection{Discussion of scattering rates from different points of view} The quasiparticle lifetime (as measured by SdH) is given by $1/\tau_{\mathrm{qp}}=-Z \mathrm{Im} \Sigma(0)$, where $Z=m_\mathrm{band}/m^*=(1-\partial \mathrm{Re} \Sigma/\partial \omega)^{-1}_{\omega\rightarrow 0}$. The transport lifetime (at $T=0$) is $1/\tau_\mathrm{tr}=(1/Z)(1/\tau_\mathrm{qp}) = -\mathrm{Im} \Sigma(0)$. The optical lifetime (at low $T$) is $1/\tau_\mathrm{opt} = -2 Z \mathrm{Im} \Sigma $. \\ Impurity lifetime from SdH: 1.3~ps. $Z \Gamma=0.27$~meV (from FL fits). This corresponds to 2.3~ps and may be explained by the increase of $Z$ in the presence of the field. Namely, assuming that at zero field $Z=1/7$ and at the SdH field it is $1/4$, and that $\Gamma$ from impurities does not change with field, then the zero-field quasiparticle lifetime, as estimated from SdH, is $\tau^*=1.3\times 7/4=2.3$~ps. From the Drude fit: at 5~K, Marc had 1~ps. From the $T$ dependence, a factor of 30\% is expected, pushing this to 1.3~ps at low $T$, which is roughly half of $\tau^*$. Starting from the 15~K data, as in the main text, one has $\tau_D$=320~fs. 
On looking at the temperature dependence, a factor of 3.5 is expected as one cools down. This yields an optical lifetime of 1.1~ps, again roughly half of $\tau^*$. Finally, from Boltzmann transport I get $\sigma/\tau_\mathrm{tr}=0.15\times 10^{21}$~1/$\Omega$m\,s. Note that at the time I am not sure if one needs to put As $\tau_\mathrm{tr} =Z\tau^*=150$~fs, this yields 4~$\mu \Omega$cm, compatible with the residual resistivity found in the measurements. \end{comment} \begin{comment} The colored bands on the plot are those that cross the Fermi energy. The Fermi energy is denoted with a dashed line. In a field of 17~T=0.001~eV, in the LDA+DMFT I see that the self-energies are split by 0.01~eV. Assuming the bands narrow by a factor of 5, this means that any band whose extent below/above the Fermi energy is only of the order of 0.05~eV has a chance to be filled completely or emptied out completely by the magnetic field (Lifshitz transition). The energy $E_F \pm 0.05$~eV is marked by a dotted line. \end{comment} \section{TH\MakeLowercase{z} study} \subsection{THz experiment and determination of optical conductivity} We have studied the THz properties of a 40~nm thick CRO film grown on a non-vicinal (110) NGO substrate. The residual resistivity ratio of this film is 31. At this stage, we cannot employ thicker films with even higher RRR for the THz experiments because the transmitted signal would be too small to obtain a sufficient signal-to-noise ratio; the transmission of the present sample is of order 0.001 at low temperatures and frequencies. This small transmission also sets the low-frequency limit of the spectral range that we can employ reliably for this study: at low frequencies, a small fraction of the THz radiation reaches the detector without interacting with the sample. This \lq parasitic radiation\rq, which is not well understood, is probably caused by diffraction, e.g.\ at apertures of the cryostat windows. 
Its strength increases with decreasing frequency, and in our case of a low-transmission sample we can rule out significant errors due to the parasitic radiation only for frequencies above 0.2~THz. We performed transmission and phase measurements using a set of backward-wave oscillators (BWOs) as tunable monochromatic and coherent THz sources. Phase measurements were performed with a Mach-Zehnder interferometer. To reach cryogenic temperatures, we employed two different home-built cryostats: one of them for temperatures down to 2~K, the other one, with tilted outer windows to reduce undesired standing waves, for temperatures down to 5~K \cite{Pracht2013}. Both transmission and phase spectra display pronounced Fabry-Perot oscillations due to standing waves in the dielectric substrate, and these can be described by formulas based on the Fresnel equations \cite{Dressel2002}. By individually modeling each of these oscillations with two complex optical constants (e.g.\ $\sigma_1$ and $\sigma_2$), using values for the dielectric properties of the substrate obtained independently on a plain NGO reference, we determine very precise values for the optical properties of the CRO film. The NGO substrate, with a thickness of 0.5~mm, a dielectric constant around 22, and negligible dielectric loss at low temperature, is well suited for these THz transmission measurements. This is in contrast to STO substrates, whose response often dominates over that of a metallic film and which, furthermore, are very difficult to model \cite{Geiger2012}. (STO has a very high and strongly temperature- and frequency-dependent dielectric constant combined with large dielectric losses.) 
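For orientation, the order of magnitude of the quoted film transmission can be estimated with the simple Glover--Tinkham expression for a thin conducting film on a thick dielectric substrate, $t = (1+n_s)/(1+n_s+Z_0\sigma d)$. This simplified formula ignores the Fabry--Perot oscillations that the full Fresnel analysis models, and the conductivity value below is an illustrative assumption, not a fit result:

```python
import math

# Glover-Tinkham estimate of the power transmission of a thin
# conducting film (thickness d << wavelength) on a thick substrate.
Z0 = 376.73          # impedance of free space [ohm]
N_S = math.sqrt(22)  # NGO substrate index from eps ~ 22

def film_transmission(sigma_dc, d):
    """Power transmission |t|^2 of the film-substrate system."""
    t = (1.0 + N_S) / (1.0 + N_S + Z0 * sigma_dc * d)
    return t * t

# Assumed low-temperature dc resistivity of ~10 uOhm cm for the film:
sigma = 1.0 / (10e-8)                    # [S/m]
print(film_transmission(sigma, 40e-9))   # of order 1e-3
```

With these assumed numbers the estimate lands at the $10^{-3}$ level, consistent with the transmission of order 0.001 stated above.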
\subsection{Optical spectra at temperatures up to 300~K} \begin{figure}[tb] \includegraphics[width=0.95\linewidth]{Fig_THzHighT.pdf} \caption{\label{Fig_THzHighT} (a) Real part $\sigma_1$ and (b) imaginary part $\sigma_2$ of the optical conductivity for our sample \#D5 at temperatures between 5~K and 300~K.} \end{figure} Fig.\ \ref{Fig_THzHighT} shows conductivity data for a set of temperatures from 5~K up to 300~K. The evolution of the Drude-type roll-off in the real part $\sigma_1$ can be tracked smoothly, see Fig.\ \ref{Fig_THzHighT}(a): at the highest temperatures, the scattering rate lies above our spectral range, and as a result the $\sigma_1$ spectra are basically flat and coincide with the dc conductivity. Upon cooling, the scattering rate decreases and a Drude roll-off in $\sigma_1$ develops. For temperatures lower than 40~K, the scattering rate is clearly visible in our frequency range. In the case of a simple temperature-dependent Drude response, the relaxation rate as a function of temperature can be evaluated by determining the temperature with the highest $\sigma_1$ at a given frequency. In this respect, too, we find a smooth evolution (0.6~THz $\approx$ 30~K, 0.45~THz $\approx$ 20~K, 0.35~THz $\approx$ 15~K, 0.2~THz $\approx$ 10~K) within our data. For temperatures lower than 10~K, the scattering rate has decreased to a frequency lower than what we can access with the present THz experiment. In the imaginary part $\sigma_2$, shown in Fig.\ \ref{Fig_THzHighT}(b), we see the corresponding behavior: with decreasing temperature, $\sigma_2$ continuously increases at all frequencies. Starting around 40~K, we can identify a maximum in the $\sigma_2$ spectra which moves toward lower frequencies upon further cooling. In the simple Drude picture, this maximum indicates the relaxation rate, and we find frequencies consistent with the behavior in $\sigma_1$. 
For temperatures below 15~K, we cannot identify the position of this maximum anymore, because it has moved below the frequencies that we can reliably address in this study. \subsection{Optical spectra at temperatures below 5~K} \begin{figure}[tb] \includegraphics[width=0.95\linewidth]{Fig_THzLowT.pdf} \caption{\label{Fig_THzLowT}Real part $\sigma_1$ of the optical conductivity for our sample \#D5 at temperatures between 2~K and 5~K.} \end{figure} The THz measurements for temperatures between 2~K and 5~K were performed in a separate experimental run using a home-built low-temperature optical cryostat \cite{Pracht2013}. The conductivity spectra are shown in Fig.\ \ref{Fig_THzLowT}. The trend observed at higher temperatures smoothly continues: the scattering rate reduces further, but since it is lower than our lowest accessible frequency, we observe this only through the further narrowing (toward zero frequency) of the roll-off in the $\sigma_1$ spectrum. \subsection{Comparison with theory for a Fermi liquid} \begin{figure}[tb] \includegraphics[width=0.95\linewidth]{Fig_THzFLscalingPlusDrude.pdf} \caption{\label{Fig_THzFLscaling}THz data for a few temperatures compared to theoretical predictions. The full lines follow an FL description \cite{Berthod}, where a non-Drude plateau is predicted around 4~THz for the 5~K curve. The dashed lines are simple Drude curves for a frequency-independent scattering rate $1/\tau_{\textrm{D}}$ with NFL temperature dependence $1/\tau_{\textrm{D}} = a T^{3/2} + 1/\tau_0$.} \end{figure} We have compared our THz data to the theoretically expected curves for a Fermi liquid, as recently discussed by Berthod \textit{et al.} \cite{Berthod}. This theory describes the optical response of a local Fermi liquid. It assumes a Fermi-liquid self-energy $\Sigma= (1-1/Z)\hbar \omega - i (\hbar^2\omega^2+\pi^2 k_B^2 T^2)/(Z \pi k_B T_0)$ that does not depend on the wave vector (i.e.\ is local). The vertex corrections to the optical conductivity are assumed to vanish. 
$Z$ is the quasiparticle residue and $T_0$ is the coherence scale. Based on these assumptions, a scaling form for the optical conductivity has been derived. The optical conductivity reads \begin{equation} \sigma/\sigma_\mathrm{dc} =\mathscr{S}\left(\frac{\hbar \omega}{2 \pi k_B T}, \omega \tau_{\mathrm{qp}}\right) \end{equation} and is thus described in terms of the single parameter $\hbar/\tau_\mathrm{qp}= 2 \pi (k_B T)^2/k_B T_0$. The scaling function $\mathscr{S}$ can be evaluated analytically. On a log-log plot, the optical response of a local Fermi liquid shows a characteristic non-Drude plateau (see the lines in Fig.~\ref{Fig_THzFLscaling}). The onset of the plateau on the low-frequency side occurs at the thermal frequency $\hbar \omega = 2 \pi k_B T$. If impurity scattering is also introduced, via a frequency-independent imaginary part $\Sigma \rightarrow \Sigma - i \Gamma_{\mathrm{imp}}$, the optical conductivity normalized to the dc value becomes a two-parameter function. On the low-temperature side, the onset of the plateau ceases to scale with $T$ and instead saturates at the temperature at which the zero-frequency scattering from the interactions coincides with the impurity scattering. As signs of a plateau were also found in our measurements, we attempted to fit the theory of Berthod \textit{et al.} to our $\sigma_1$ data. We found that the low-frequency part can be fitted well with $Z \Gamma_{\mathrm{imp}}=3$~meV and $T_0=150$~K (see Fig.\ 4(b) of the main paper). The onset of the plateau, on the other hand, appears at too high a frequency to account for our data, see Fig.\ \ref{Fig_THzFLscaling}. The plateau that we observe is thus not a consequence of an FL frequency dependence of the self-energy. Furthermore, its onset does not scale with temperature. We also note that, for a doped Mott insulator described within dynamical mean-field theory, $T_0$ was found to be about $10\,T_\mathrm{FL}$. 
Insisting on the Berthod \textit{et al.} description, despite the obvious deviations, for CaRuO$_3$ we find that $T_0/10 \gg T_\mathrm{FL}$. $T_\mathrm{FL}$ thus appears anomalously low from this perspective as well. In Fig.\ \ref{Fig_THzFLscaling} we also plot the frequency dependence for the NFL regime if one assumes a bare Drude response, where the frequency-independent scattering rate $1/\tau_{\textrm{D}} \sim a T^{3/2} + 1/\tau_0$ reproduces the temperature dependence of the dc resistivity ($\tau_0 =$ 1.3~ps). In the NFL temperature range, the data below 0.6~THz are described well by this simple model. \subsection{Comparison with previous optical studies} \begin{figure}[tb] \includegraphics[width=0.95\linewidth]{Fig_THzLiterature.pdf} \caption{\label{Fig_THzLiterature}Real part $\sigma_1$ of the optical conductivity for our sample \#D5 at temperatures of 10~K and 15~K. For comparison, THz data by Kamal \textit{et al.}\ \cite{Kamal2006} and infrared data by Lee \textit{et al.}\ \cite{Lee2002} are displayed.} \end{figure} Our data obtained on sample \#D5 roughly compare with the results obtained previously by Kamal \textit{et al.} using THz time-domain spectroscopy \cite{Kamal2006}. In Fig.\ \ref{Fig_THzLiterature} we show our data at temperatures of 10~K and 15~K, and data by Kamal \textit{et al.} for 10~K and 16~K. The main differences are that the low-frequency conductivity of our sample is much higher due to the improved sample growth, and that the order of the curves is exchanged: for our sample and above 0.3~THz, $\sigma_1$($T$=15~K) exceeds $\sigma_1$($T$=10~K), because here the sample is already in the relaxation regime, with the excitation frequency larger than the optical scattering rate. Our data are also consistent with the previous infrared results by Lee \textit{et al.} \cite{Lee2002}, as shown by the dashed curve for 10~K in Fig.\ \ref{Fig_THzLiterature}. We again attribute the slight mismatch in absolute conductivity to the different sample quality.
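The simple Drude curves used throughout this comparison follow $\sigma(\omega)=\sigma_{\mathrm{dc}}/(1-i\omega\tau_{\mathrm{D}})$ with $1/\tau_{\mathrm{D}} = a T^{3/2} + 1/\tau_0$; in this form $\sigma_2$ peaks exactly at $\omega\tau_{\mathrm{D}}=1$, which is how the relaxation rate can be read off the spectra. A minimal sketch ($\tau_0 = 1.3$~ps is taken from the text; the value of the coefficient $a$ is an illustrative assumption, not the fitted one):

```python
# Drude conductivity with an NFL scattering rate
# 1/tau_D = a*T^(3/2) + 1/tau_0.
TAU0 = 1.3e-12   # impurity-limited relaxation time [s] (from the text)
A_NFL = 1.0e9    # coefficient a [1/(s K^{3/2})], assumed for illustration

def tau_drude(T):
    return 1.0 / (A_NFL * T ** 1.5 + 1.0 / TAU0)

def sigma(omega, T, sigma_dc=1.0):
    """Complex Drude conductivity at angular frequency omega."""
    return sigma_dc / (1.0 - 1j * omega * tau_drude(T))

# The imaginary part sigma_2 is maximal at omega = 1/tau_D, so the
# relaxation rate can be identified from the position of this maximum.
T = 15.0
omegas = [i * 1e9 for i in range(1, 20000)]
peak = max(omegas, key=lambda w: sigma(w, T).imag)
print(peak * tau_drude(T))  # ~1.0
```

The same construction underlies the statement above that the maximum in $\sigma_2$ marks the relaxation rate in the simple Drude picture.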
\section{Introduction} The study of fractional (noninteger order) calculus on time scales is a subject of strong current interest \cite{MyID:152,MyID:201,MyID:296,MyID:320}. Recently, Benkhettou, Hassani and Torres introduced a (local) fractional calculus on arbitrary time scales $\mathbb{T}$ (called here the BHT fractional calculus) based on the $T_\alpha$ differentiation operator and the $\alpha$-fractional integral \cite{MyID:324}. The Hilger time-scale calculus \cite{BookTS:2001} is then obtained as a particular case, by choosing $\alpha=1$. In this paper we develop the BHT time-scale fractional calculus initiated in \cite{MyID:324}. Precisely, we prove two different chain rules for the fractional derivative $T_\alpha$ (Theorems~\ref{CR} and \ref{thm:CR2}) and several inequalities for the $\alpha$-fractional integral: H\"{o}lder's inequality (Theorem~\ref{thm:Hineq}), Cauchy--Schwarz's inequality (Theorem~\ref{thm:CSineq}), Minkowski's inequality (Theorem~\ref{thm:Mink_ineq}), a generalized Jensen fractional inequality (Theorem~\ref{thm:Jen}) and a weighted fractional Hermite--Hadamard inequality on time scales (Theorem~\ref{thm:HHineq}). The paper is organized as follows. In Section~\ref{sec:Prelim} we recall the basics of the BHT fractional calculus. Our results are then formulated and proved in Section~\ref{sec:MR}. \section{Preliminaries} \label{sec:Prelim} We briefly recall the necessary notions from the BHT fractional calculus \cite{MyID:324}: fractional differentiation and fractional integration on time scales. For an introduction to the time-scale theory we refer the reader to the book \cite{BookTS:2001}. \begin{dfn}[See \cite{MyID:324}] \label{def:fd:ts} Let $\mathbb{T}$ be a time scale, $f:\mathbb{T}\rightarrow \mathbb{R}$, $t\in \mathbb{T}^{\kappa}$, and $\alpha \in (0,1]$. 
For $t>0$, we define $T_\alpha(f)(t)$ to be the number (provided it exists) with the property that, given any $\epsilon >0$, there is a $\delta$-neighbourhood $\mathcal{V}_t = \left(t-\delta ,t+\delta\right) \cap \mathbb{T}$ of $t$, $\delta > 0$, such that $\left \vert \left[ f(\sigma (t))-f(s)\right]t^{1-\alpha} -T_\alpha(f)(t)\left[ \sigma(t)-s\right]\right \vert \leq \epsilon \left \vert \sigma (t)-s\right \vert$ for all $s\in \mathcal{V}_t$. We call $T_\alpha(f)(t)$ the $\alpha$-fractional derivative of $f$ of order $\alpha $ at $t$, and we define the $\alpha$-fractional derivative at 0 as $T_\alpha(f)(0):=\displaystyle\lim_{t\rightarrow 0^+} T_\alpha(f)(t)$. \end{dfn} If $\alpha = 1$, then we obtain from Definition~\ref{def:fd:ts} the Hilger delta derivative of time scales \cite{BookTS:2001}. The $\alpha$-fractional derivative of order zero is defined by the identity operator: $T_0(f) := f$. The basic properties of the $\alpha$-fractional derivative on time scales are given in \cite{MyID:324}, together with several illustrative examples. Here we just recall the item (iv) of Theorem 4 in \cite{MyID:324}, which is needed in the proof of our Theorem~\ref{CR}. \begin{thm}[See \cite{MyID:324}] Let $\alpha\in (0, 1]$ and $\mathbb{T}$ be a time scale. Assume $f: \mathbb{T}\rightarrow \mathbb{R}$ and let $t\in {\mathbb{T}}^\kappa$. If $f$ is $\alpha$-fractional differentiable of order $\alpha$ at $t$, then $$ f(\sigma(t))=f(t)+\mu(t)t^{\alpha-1}T_{\alpha}(f)(t). $$ \end{thm} The other main operator of \cite{MyID:324} is the $\alpha$-fractional integral of $f : \mathbb{T} \rightarrow \mathbb{R}$, defined by $$ \int f(t) \Delta^\alpha t := \int f(t) t^{\alpha-1} \Delta t, $$ where the integral on the right-hand side is the usual Hilger delta-integral of time scales \cite[Def.~26]{MyID:324}. 
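On the concrete time scale $\mathbb{T}=\mathbb{Z}$ one has $\sigma(t)=t+1$ and $\mu(t)=1$, so both operators reduce to weighted forward differences and sums and can be computed directly. A minimal numerical sketch (the function names are ours), which also checks the identity $f(\sigma(t))=f(t)+\mu(t)t^{\alpha-1}T_{\alpha}(f)(t)$ recalled above:

```python
# BHT fractional calculus on T = Z: sigma(t) = t + 1, mu(t) = 1, so
#   T_alpha(f)(t) = [f(t+1) - f(t)] * t**(1 - alpha),
# and the alpha-fractional integral weights the delta integral
# (a plain sum on Z) by t**(alpha - 1).

def t_alpha(f, t, alpha):
    """alpha-fractional derivative of f at t > 0 on T = Z."""
    return (f(t + 1) - f(t)) * t ** (1 - alpha)

def int_alpha(f, a, b, alpha):
    """alpha-fractional integral of f from a to b on T = Z."""
    return sum(f(t) * t ** (alpha - 1) for t in range(a, b))

g = lambda t: t ** 2

# For alpha = 1 both operators reduce to the usual forward
# difference and sum; for alpha = 0.5 a weight t**(+-1/2) enters.
print(t_alpha(g, 3, 1.0))       # 7 = 2*3 + 1
print(int_alpha(g, 1, 4, 1.0))  # 14 = 1 + 4 + 9
print(g(4) - (g(3) + 3 ** (0.5 - 1) * t_alpha(g, 3, 0.5)))  # ~0.0
```

The last line confirms, up to floating-point rounding, the relation between $f(\sigma(t))$ and $T_{\alpha}(f)(t)$ stated in the theorem above.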
If $F_{\alpha}(t):=\int f(t)\Delta^{\alpha} t$, then one defines the Cauchy $\alpha$-fractional integral by $\int_{a}^{b}f(t)\Delta^{\alpha} t :=F_{\alpha}(b)-F_{\alpha}(a)$, where $a, b\in \mathbb{T}$ \cite[Def.~28]{MyID:324}. The interested reader can find the basic properties of the Cauchy $\alpha$-fractional integral in \cite{MyID:324}. Here we are interested in proving some fractional integral inequalities on time scales. For that, we use some of the properties from \cite[Theorem~31]{MyID:324}. \begin{thm}[Cf. Theorem~31 of \cite{MyID:324}] \label{Int-Proprty} Let $\alpha\in (0,\ 1]$, $a, b, c \in \mathbb{T}$, $\gamma\in\mathbb{R}$, and $f, g$ be two rd-continuous functions. Then, \begin{enumerate} \item[(i)] $\int_{a}^{b}[f(t)+g(t)]\Delta^{\alpha} t = \int_{a}^{b}f(t)\Delta^{\alpha} t + \int_{a}^{b}g(t)\Delta^{\alpha} t$; \item[(ii)] $\int_{a}^{b}(\gamma f)(t)\Delta^{\alpha} t = \gamma \int_{a}^{b}f(t)\Delta^{\alpha} t$; \item[(iii)] $\int_{a}^{b}f(t)\Delta^{\alpha} t = - \int_{b}^{a}f(t)\Delta^{\alpha} t$; \item[(iv)] $\int_{a}^{b}f(t)\Delta^{\alpha} t = \int_{a}^{c}f(t)\Delta^{\alpha} t +\int_{c}^{b}f(t)\Delta^{\alpha} t$; \item[(v)] if there exists $g: \mathbb{T}\rightarrow \mathbb{R}$ with $|f(t)|\leq g(t)$ for all $t\in [a,\ b]$, then $\left| \int_{a}^{b} f(t)\Delta^{\alpha} t\right| \leq\int_{a}^{b} g(t)\Delta^{\alpha} t$. \end{enumerate} \end{thm} \section{Main Results} \label{sec:MR} The chain rule, as we know it from the classical differential calculus, does not hold for the BHT fractional calculus. A simple example of this fact has been given in \cite[Example~20]{MyID:324}. 
Moreover, it has been shown in \cite[Theorem~21]{MyID:324}, using the mean value theorem, that if $g:\mathbb{T}\rightarrow\mathbb{R}$ is continuous and fractional differentiable of order $\alpha \in (0, 1]$ at $t \in \mathbb{T}^\kappa$ and $f:\mathbb{R}\rightarrow\mathbb{R}$ is continuously differentiable, then there exists $c \in [t, \sigma(t)]$ such that $T_\alpha(f\circ g)(t) = f'(g(c)) T_\alpha(g)(t)$. In Section~\ref{sec:3.1}, we provide two other chain rules. Then, in Section~\ref{sec:3.2}, we prove some fractional integral inequalities on time scales. \subsection{Fractional chain rules on time scales} \label{sec:3.1} \begin{thm}[Chain Rule I] \label{CR} Let $f:\mathbb{R}\rightarrow\mathbb{R}$ be continuously differentiable, $\mathbb{T}$ be a given time scale and $g:\mathbb{T}\rightarrow\mathbb{R}$ be $\alpha$-fractional differentiable. Then, $f\circ g: \mathbb{T}\rightarrow\mathbb{R}$ is also $\alpha$-fractional differentiable with \begin{equation} \label{eq:CR} T_\alpha(f\circ g)(t)=\Bigg[\int_0^1 f'\left(g(t) +h\mu(t)t^{\alpha-1}T_\alpha(g)(t)\right)dh\Bigg] T_\alpha(g)(t). \end{equation} \end{thm} \begin{proof} We begin by applying the ordinary substitution rule from calculus: \begin{align*} f(g(\sigma(t)))-f(g(s)) &= \int_{g(s)}^{g(\sigma(t))}f'(\tau)d\tau\\ &=[g(\sigma(t))-g(s)]\int_0^1 f'(hg(\sigma(t))+(1-h)g(s))dh. \end{align*} Let $t\in {\mathbb{T}}^\kappa$ and $\epsilon>0$. Since $g$ is $\alpha$-fractional differentiable at $t$, we know from Definition~\ref{def:fd:ts} that there exists a neighbourhood $U_1$ of $t$ such that $$ \left|[g(\sigma(t)) - g(s)]t^{1-\alpha} - T_\alpha(g)(t) (\sigma(t)- s)\right| \leq \epsilon^{*}|\sigma(t)-s| \quad \text{ for all } s \in U_1, $$ where $$ \epsilon^{*} = \dfrac{\epsilon}{\displaystyle 1+2\int_0^1 \left|f'(hg(\sigma(t))+(1-h)g(t))\right|dh}. $$ Moreover, $f'$ is continuous on $\mathbb{R}$ and, therefore, it is uniformly continuous on closed subsets of $\mathbb{R}$. 
Observing that $g$ is also continuous, because it is $\alpha$-fractional differentiable (see item (i) of Theorem~4 in \cite{MyID:324}), there exists a neighbourhood $U_2$ of $t$ such that $$ |f'(hg(\sigma(t))+(1-h)g(s)) - f'(hg(\sigma(t))+(1-h)g(t))| \leq \frac{\epsilon}{2(\epsilon^{*}+|T_{\alpha}(g)(t)|)} $$ for all $s\in U_2$. To see this, note that \begin{align*} |hg(\sigma(t))+(1-h)g(s)- (hg(\sigma(t))+(1-h)g(t))|&= (1-h)|g(s)-g(t)|\\ &\leq |g(s)-g(t)| \end{align*} holds for all $0\leq h\leq 1$. We then define $U:=U_1\cap U_2$ and let $s\in U$. For convenience, we put $$ \gamma = hg(\sigma(t))+(1-h)g(s) \quad \text{ and } \quad \beta = hg(\sigma(t))+(1-h)g(t). $$ Then we have \begin{align*} &\Bigg|[(f\circ g)(\sigma(t)) - (f\circ g)(s)]t^{1-\alpha} - T_\alpha(g)(t) (\sigma(t)- s)\int_0^1 f'(\beta)dh\Bigg|\\ &=\Bigg|t^{1-\alpha}[g(\sigma(t))-g(s)]\int_0^1 f'(\gamma)dh - T_\alpha(g)(t) (\sigma(t)- s)\int_0^1 f'(\beta)dh \Bigg|\\ &=\Bigg|\Big(t^{1-\alpha}[g(\sigma(t))-g(s)] - (\sigma(t)-s)T_{\alpha}(g)(t) \Big)\\ & \quad \times \int_0^1 f'(\gamma)dh + T_\alpha(g)(t) (\sigma(t)- s)\int_0^1 (f'(\gamma)-f'(\beta))dh\Bigg|\\ &\leq \Big|t^{1-\alpha}[g(\sigma(t))-g(s)] - (\sigma(t)-s)T_{\alpha}(g)(t) \Big|\int_0^1 |f'(\gamma)|dh \\ &\quad + \big|T_\alpha(g)(t)\big|| \sigma(t)- s|\int_0^1 |f'(\gamma)-f'(\beta)|dh\\ &\leq \epsilon^{*}|\sigma(t)-s| \int_0^1 |f'(\gamma)|dh + \big[\epsilon^{*}+ \big|T_\alpha(g)(t)\big|\big]| \sigma(t) - s|\int_0^1 |f'(\gamma)-f'(\beta)|dh\\ &\leq \frac{\epsilon}{2}|\sigma(t)-s| + \frac{\epsilon}{2}|\sigma(t)-s|\\ &=\epsilon|\sigma(t)-s|. \end{align*} Therefore, $f\circ g$ is $\alpha$-fractional differentiable at $t$ and \eqref{eq:CR} holds. \end{proof} Let us illustrate Theorem~\ref{CR} with an example. \begin{ex} Let $g:\mathbb{Z}\rightarrow\mathbb{R}$ and $f:\mathbb{R}\rightarrow\mathbb{R}$ be defined by $$ g(t)=t^2 \text{ and } f(t)=e^{t}. $$ Then, $T_{\alpha}(g)(t)=(2t+1)t^{1-\alpha} \text{ and } f'(t)=e^t$. 
Hence, we have by Theorem~\ref{CR} that \begin{align*} T_\alpha(f\circ g)(t) &=\Bigg[\int_0^1 f'(g(t)+h\mu(t)t^{\alpha-1}T_\alpha(g)(t))dh\Bigg]T_\alpha(g)(t)\\ &= (2t+1)t^{1-\alpha}\int_0^1 e^{t^2+h(2t+1)}dh\\ &=(2t+1)t^{1-\alpha}e^{t^2}\int_0^1 e^{h(2t+1)}dh\\ &=(2t+1)t^{1-\alpha}e^{t^2}\frac{1}{2t+1}\big[ e^{2t+1}-1\big]\\ &=t^{1-\alpha}e^{t^2}\big[ e^{2t+1} - 1\big]. \end{align*} \end{ex} \begin{thm}[Chain Rule II] \label{thm:CR2} Let $\mathbb{T}$ be a time scale. Assume $\nu:\mathbb{T}\rightarrow \mathbb{R}$ is strictly increasing and $\tilde{\mathbb{T}}:=\nu(\mathbb{T})$ is also a time scale. Let $w:\tilde{\mathbb{T}}\rightarrow \mathbb{R}$, $\alpha\in (0, 1]$, and $\tilde{T_{\alpha}}$ denote the $\alpha$-fractional derivative on $\tilde{\mathbb{T}}$. If for each $t\in {\mathbb{T}}^\kappa$, $\tilde{T_{\alpha}}(w)(\nu(t))$ exists and for every $\epsilon>0$, there is a neighbourhood $V$ of $t$ such that $$ |\tilde{\sigma}(\nu(t))-\nu(s) - T_\alpha(\nu)(t) (\sigma(t)- s)| \leq \epsilon|\sigma(t)-s| \quad \text{ for all } s\in V, $$ where $\tilde{\sigma}$ denotes the forward jump operator on $\tilde{\mathbb{T}}$, then $$ T_{\alpha}(w\circ \nu)(t) =\big[\tilde{T_{\alpha}}(w)\circ \nu\big](t) T_{\alpha}(\nu)(t). $$ \end{thm} \begin{proof} Let $0<\epsilon<1$ be given and define $\epsilon^{*}:=\epsilon\Big[1+ \big|T_{\alpha}(\nu)(t)\big| + \big| \tilde{T_{\alpha}}(w)(\nu(t))\big| \Big]^{-1}$. Note that $0<\epsilon^{*}<1$. According to the assumptions, there exist neighbourhoods $U_1$ of $t$ and $U_2$ of $\nu(t)$ such that $$ |\tilde{\sigma}(\nu(t))-\nu(s) - T_\alpha(\nu)(t) (\sigma(t)- s)| \leq \epsilon^{*}|\sigma(t)-s| $$ for all $s\in U_1$ and $$ \big|[w(\tilde{\sigma}(\nu(t))) - w(r)]t^{1-\alpha} - \tilde{T_{\alpha}}(w)(\nu(t)) (\tilde{\sigma}(\nu(t))- r)\big| \leq \epsilon^{*}|\tilde{\sigma}(\nu(t))-r| $$ for all $r\in U_2$. Let $U := U_1\cap \nu^{-1}(U_2)$. 
For any $s\in U$, we have that $s\in U_1$ and $\nu(s)\in U_2$ with \begin{align*} \Big|&[w(\nu(\sigma(t))) - w(\nu(s))]t^{1-\alpha} - (\sigma(t)- s) \big[\tilde{T_{\alpha}}(w)(\nu(t))\big] T_{\alpha}(\nu)(t)\Big|\\ &=\Big|[w(\nu(\sigma(t))) - w(\nu(s))]t^{1-\alpha} -[\tilde{\sigma}(\nu(t))-\nu(s)]\tilde{T_{\alpha}}(w)(\nu(t))\\ &\quad + [\tilde{\sigma}(\nu(t))-\nu(s) - T_\alpha(\nu)(t) (\sigma(t)- s)]\tilde{T_{\alpha}}(w)(\nu(t))\Big|\\ &\leq \epsilon^{*}|\tilde{\sigma}(\nu(t))-\nu(s)| + \epsilon^{*}|\sigma(t)-s||\tilde{T_{\alpha}}(w)(\nu(t))|\\ &\leq \epsilon^{*}\Big[|\tilde{\sigma}(\nu(t))-\nu(s) -(\sigma(t)-s)T_{\alpha}(\nu)(t)|\\ &\quad + |\sigma(t)-s||T_{\alpha}(\nu)(t)| + |\sigma(t)-s||\tilde{T_{\alpha}}(w)(\nu(t))|\Big]\\ \end{align*} \begin{align*} &\leq \epsilon^{*}\Big[\epsilon^{*}|\sigma(t)-s| +|\sigma(t)-s||T_{\alpha}(\nu)(t)| + |\sigma(t)-s||\tilde{T_{\alpha}}(w)(\nu(t))| \Big]\\ &= \epsilon^{*}|\sigma(t)-s|\Big[\epsilon^{*}+|T_{\alpha}(\nu)(t)| + |\tilde{T_{\alpha}}(w)(\nu(t))| \Big]\\ &\leq \epsilon^{*}\Big[1+|T_{\alpha}(\nu)(t)| + |\tilde{T_{\alpha}}(w)(\nu(t))| \Big]|\sigma(t)-s|\\ &=\epsilon |\sigma(t)-s|. \end{align*} This proves the claim. \end{proof} \subsection{Fractional integral inequalities on time scales} \label{sec:3.2} The $\alpha$-fractional integral on time scales was introduced in \cite[Section~3]{MyID:324}, where some basic properties were proved. Here we show that the $\alpha$-fractional integral satisfies appropriate fractional versions of the fundamental inequalities of H\"{o}lder, Cauchy--Schwarz, Minkowski, Jensen and Hermite--Hadamard. \begin{thm}[H\"{o}lder's fractional inequality on time scales] \label{thm:Hineq} Let $\alpha\in (0, 1]$ and $a, b\in\mathbb{T}$. 
If $f, g, h :[a, b]\rightarrow\mathbb{R}$ are $rd$-continuous, then \begin{equation} \label{eq:Hineq} \int_a^b |f(t)g(t)| |h(t)| \Delta^{\alpha}t \leq \Bigg[\int_a^b |f(t)|^p |h(t)| \Delta^{\alpha}t\Bigg]^{\frac{1}{p}}\Bigg[ \int_a^b |g(t)|^q |h(t)| \Delta^{\alpha}t\Bigg]^{\frac{1}{q}}, \end{equation} where $p>1$ and $\frac{1}{p} + \frac{1}{q} = 1$. \end{thm} \begin{proof} For nonnegative real numbers $A$ and $B$, the basic Young inequality $$ A^{1/p}B^{1/q} \leq \frac{A}{p}+\frac{B}{q} $$ holds. Now, suppose, without loss of generality, that $$ \Bigg[\int_a^b |f(t)|^p |h(t)|\Delta^{\alpha}t\Bigg] \Bigg[\int_a^b |g(t)|^q |h(t)|\Delta^{\alpha}t\Bigg] \neq 0. $$ Applying Theorem~\ref{Int-Proprty} and the above inequality to $$ A(t)=\frac{|f(t)|^p |h(t)|}{\int_a^b|f(\tau)|^p |h(\tau)|\Delta^{\alpha}\tau} $$ and $$ B(t)=\frac{|g(t)|^q |h(t)|}{\int_a^b|g(\tau)|^q |h(\tau)| \Delta^{\alpha}\tau}, $$ and integrating the obtained inequality between $a$ and $b$, which is possible since all occurring functions are $rd$-continuous, we find that \begin{align*} \int_a^b & [A(t)]^{1/p}[B(t)]^{1/q}\Delta^{\alpha}t\\ &= \int_a^b \frac{|f(t)| |h(t)|^{1/p}}{\Big[\int_a^b|f(\tau)|^p|h(\tau)|\Delta^{\alpha}\tau\Big]^{1/p}} \frac{|g(t)| |h(t)|^{1/q}}{\Big[\int_a^b|g(\tau)|^q |h(\tau)|\Delta^{\alpha}\tau\Big]^{1/q}}\Delta^{\alpha}t\\ &\leq \int_a^b\Bigg[\frac{A(t)}{p}+\frac{B(t)}{q}\Bigg]\Delta^{\alpha}t\\ &=\int_a^b\Bigg[\frac{1}{p}\frac{|f(t)|^p |h(t)|}{\int_a^b|f(\tau)|^p |h(\tau)|\Delta^{\alpha}\tau} +\frac{1}{q}\frac{|g(t)|^q |h(t)|}{\int_a^b|g(\tau)|^q |h(\tau)|\Delta^{\alpha}\tau}\Bigg]\Delta^{\alpha}t\\ &=\frac{1}{p}\int_a^b\Bigg[\frac{|f(t)|^p |h(t)|}{\int_a^b |f(\tau)|^p |h(\tau)|\Delta^{\alpha}\tau}\Bigg]\Delta^{\alpha}t +\frac{1}{q}\int_a^b\Bigg[\frac{|g(t)|^q |h(t)|}{\int_a^b |g(\tau)|^q |h(\tau)|\Delta^{\alpha}\tau}\Bigg]\Delta^{\alpha}t\\ &\leq \frac{1}{p}+\frac{1}{q}\\ &=1. \end{align*} This directly yields the H\"{o}lder inequality \eqref{eq:Hineq}. 
\end{proof} As a particular case of Theorem~\ref{thm:Hineq}, we obtain the following inequality. \begin{thm}[Cauchy--Schwarz's fractional inequality on time scales] \label{thm:CSineq} Let $\alpha\in (0, 1]$ and $a, b\in\mathbb{T}$. If $f, g, h :[a, b]\rightarrow\mathbb{R}$ are $rd$-continuous, then $$ \int_a^b |f(t)g(t)| |h(t)| \Delta^{\alpha}t \leq \sqrt{\Bigg[\int_a^b |f(t)|^2 |h(t)| \Delta^{\alpha}t\Bigg] \Bigg[\int_a^b |g(t)|^2 |h(t)| \Delta^{\alpha}t\Bigg]}. $$ \end{thm} \begin{proof} Choose $p=q=2$ in H\"{o}lder's inequality \eqref{eq:Hineq}. \end{proof} Using H\"{o}lder's inequality \eqref{eq:Hineq}, we can also prove the following result. \begin{cor} Let $\alpha\in (0, 1]$ and $a, b\in\mathbb{T}$. If $f, g, h :[a, b]\rightarrow\mathbb{R}$ are $rd$-continuous, then $$ \int_a^b |f(t)g(t)| |h(t)| \Delta^{\alpha}t \geq \Bigg[\int_a^b |f(t)|^p |h(t)| \Delta^{\alpha}t\Bigg]^{\frac{1}{p}} \Bigg[\int_a^b |g(t)|^q |h(t)| \Delta^{\alpha}t\Bigg]^{\frac{1}{q}}, $$ where $\frac{1}{p}+\frac{1}{q}=1$ and $p < 0$ or $q < 0$. \end{cor} \begin{proof} Without loss of generality, we may assume that $p < 0$ and $q > 0$. Set $P = - \frac{p}{q}$ and $Q = \frac{1}{q}$. Then, $\frac{1}{P}+\frac{1}{Q}=1$ with $P > 1$ and $Q > 0$. From \eqref{eq:Hineq} we can write that \begin{multline} \label{eq:Hineq:FG} \int_a^b |F(t)G(t)| |h(t)| \Delta^{\alpha}t\\ \leq \Bigg[\int_a^b |F(t)|^P |h(t)| \Delta^{\alpha}t\Bigg]^{\frac{1}{P}}\Bigg[ \int_a^b |G(t)|^Q |h(t)| \Delta^{\alpha}t\Bigg]^{\frac{1}{Q}} \end{multline} for any $rd$-continuous functions $F, G:[a, b]\rightarrow\mathbb{R}$. The desired result is obtained by taking $F(t) = [f(t)]^{-q}$ and $G(t) = [f(t)]^{q} [g(t)]^{q}$ in inequality \eqref{eq:Hineq:FG}. \end{proof} Next, we use H\"{o}lder's inequality \eqref{eq:Hineq} to deduce a fractional Minkowski's inequality on time scales. \begin{thm}[Minkowski's fractional inequality on time scales] \label{thm:Mink_ineq} Let $\alpha\in (0, 1]$, $a, b\in\mathbb{T}$ and $p>1$. 
If $f, g, h:[a, b]\rightarrow\mathbb{R}$ are $rd$-continuous, then \begin{multline} \label{eq:Mink_ineq} \Bigg[\int_a^b |(f+g)(t)|^p |h(t)| \Delta^{\alpha}t\Bigg]^{1/p}\\ \leq \Bigg[\int_a^b |f(t)|^p |h(t)| \Delta^{\alpha}t\Bigg]^{\frac{1}{p}} +\Bigg[\int_a^b |g(t)|^p |h(t)| \Delta^{\alpha}t\Bigg]^{\frac{1}{p}}. \end{multline} \end{thm} \begin{proof} We apply H\"{o}lder's inequality \eqref{eq:Hineq} with $q=p/(p-1)$ and items (i) and (v) of Theorem~\ref{Int-Proprty} to obtain \begin{align*} \int_a^b &|(f+g)(t)|^p |h(t)| \Delta^{\alpha}t\\ &=\int_a^b |(f+g)(t)|^{p-1}|(f+g)(t)| |h(t)| \Delta^{\alpha}t\\ &\leq \int_a^b |f(t)||(f+g)(t)|^{p-1} |h(t)| \Delta^{\alpha}t + \int_a^b |g(t)||(f+g)(t)|^{p-1} |h(t)| \Delta^{\alpha}t\\ &\leq \Bigg[\int_a^b |f(t)|^p |h(t)| \Delta^{\alpha}t\Bigg]^{\frac{1}{p}} \Bigg[\int_a^b |(f+g)(t)|^{(p-1)q} |h(t)| \Delta^{\alpha}t\Bigg]^{\frac{1}{q}}\\ &\quad + \Bigg[\int_a^b |g(t)|^p |h(t)| \Delta^{\alpha}t\Bigg]^{\frac{1}{p}} \Bigg[\int_a^b |(f+g)(t)|^{(p-1)q} |h(t)| \Delta^{\alpha}t\Bigg]^{\frac{1}{q}}\\ &=\Bigg[\int_a^b |(f+g)(t)|^p |h(t)| \Delta^{\alpha}t\Bigg]^{\frac{1}{q}}\\ &\qquad \times \Bigg(\Bigg[\int_a^b |f(t)|^p |h(t)| \Delta^{\alpha}t\Bigg]^{\frac{1}{p}} + \Bigg[\int_a^b |g(t)|^p |h(t)| \Delta^{\alpha}t\Bigg]^{\frac{1}{p}}\Bigg). \end{align*} Dividing both sides of the obtained inequality by $\Bigg[\int_a^b |(f+g)(t)|^p |h(t)|\Delta^{\alpha}t\Bigg]^{\frac{1}{q}}$, we arrive at the Minkowski inequality \eqref{eq:Mink_ineq}. \end{proof} Jensen's classical inequality relates the value of a convex/concave function of an integral to the integral of the convex/concave function. We prove a generalization of such relation for the BHT fractional calculus on time scales. 
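Before proceeding, note that the inequalities above can be checked numerically: on an isolated time scale every $\Delta$-integral reduces to a finite sum. The sketch below assumes the BHT identity $\int_a^b f(t)\,\Delta^{\alpha}t=\int_a^b t^{\alpha-1}f(t)\,\Delta t$ of \cite{MyID:324}, so that on isolated points $\int_a^b f(t)\,\Delta^{\alpha}t=\sum_i f(t_i)\,t_i^{\alpha-1}\,\mu(t_i)$ with graininess $\mu(t_i)=t_{i+1}-t_i$; the random data and left-endpoint sampling are illustrative only.

```python
# Numerical sanity check of the fractional Hölder and Minkowski inequalities
# on an isolated time scale, where every Delta-integral is a finite sum
# (assumption: int f(t) Delta^alpha t = sum_i f(t_i) * t_i**(alpha-1) * mu(t_i)).
import random

def frac_integral(vals, ts, alpha):
    """Delta^alpha integral of left-endpoint samples over an isolated time scale."""
    return sum(v * t ** (alpha - 1) * (t_next - t)
               for v, t, t_next in zip(vals, ts, ts[1:]))

random.seed(1)
ts = sorted(random.uniform(1.0, 5.0) for _ in range(30))  # points of the time scale
alpha, p = 0.5, 3.0
q = p / (p - 1.0)                                         # conjugate exponent

f = [random.uniform(-2, 2) for _ in ts[:-1]]
g = [random.uniform(-2, 2) for _ in ts[:-1]]
h = [random.uniform(-1, 1) for _ in ts[:-1]]

# Hölder: int |f g||h| <= (int |f|^p |h|)^(1/p) * (int |g|^q |h|)^(1/q)
hold_lhs = frac_integral([abs(a * b) * abs(c) for a, b, c in zip(f, g, h)], ts, alpha)
hold_rhs = (frac_integral([abs(a) ** p * abs(c) for a, c in zip(f, h)], ts, alpha) ** (1 / p)
            * frac_integral([abs(b) ** q * abs(c) for b, c in zip(g, h)], ts, alpha) ** (1 / q))

# Minkowski: (int |f+g|^p |h|)^(1/p) <= (int |f|^p |h|)^(1/p) + (int |g|^p |h|)^(1/p)
mink_lhs = frac_integral([abs(a + b) ** p * abs(c) for a, b, c in zip(f, g, h)], ts, alpha) ** (1 / p)
mink_rhs = (frac_integral([abs(a) ** p * abs(c) for a, c in zip(f, h)], ts, alpha) ** (1 / p)
            + frac_integral([abs(b) ** p * abs(c) for b, c in zip(g, h)], ts, alpha) ** (1 / p))

print(hold_lhs <= hold_rhs, mink_lhs <= mink_rhs)  # prints: True True
```

Both checks succeed for any choice of points and samples, since the weights $t_i^{\alpha-1}\mu(t_i)|h_i|$ are nonnegative and the classical discrete inequalities apply.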
\begin{thm}[Generalized Jensen's fractional inequality on time scales] \label{thm:Jen} Let $\mathbb{T}$ be a time scale, $a, b\in\mathbb{T}$ with $a<b$, $c, d\in\mathbb{R}$, $\alpha \in (0, 1]$, $g \in C\left([a,b]\cap\mathbb{T}; (c,d)\right)$ and $h \in C\left([a,b]\cap\mathbb{T}; \mathbb{R}\right)$ with $$ \int_{a}^{b} |h(s)| \Delta^\alpha s > 0. $$ \begin{itemize} \item If $f \in C\left((c,d); \mathbb{R}\right)$ is convex, then \begin{equation} \label{eq:Jen:conv} f\left(\frac{\int_{a}^{b} g(s) |h(s)| \Delta^\alpha s}{\int_{a}^{b} |h(s)| \Delta^\alpha s}\right) \leq \frac{\int_{a}^{b} f(g(s)) |h(s)| \Delta^\alpha s}{\int_{a}^{b} |h(s)| \Delta^\alpha s}. \end{equation} \item If $f \in C\left((c,d); \mathbb{R}\right)$ is concave, then \begin{equation} \label{eq:Jen:conc} f\left(\frac{\int_{a}^{b} g(s) |h(s)| \Delta^\alpha s}{\int_{a}^{b} |h(s)| \Delta^\alpha s}\right) \geq \frac{\int_{a}^{b} f(g(s)) |h(s)| \Delta^\alpha s}{\int_{a}^{b} |h(s)| \Delta^\alpha s}. \end{equation} \end{itemize} \end{thm} \begin{proof} We start by proving \eqref{eq:Jen:conv}. Since $f$ is convex, for any $t \in (c, d)$ there exists $a_t \in \mathbb{R}$ such that \begin{equation} \label{eq:ineq:conv} a_t (x-t) \leq f(x) - f(t) \quad \text{ for all } x \in (c,d). \end{equation} Let $$ t = \frac{\int_{a}^{b} g(s) |h(s)| \Delta^\alpha s}{\int_{a}^{b} |h(s)| \Delta^\alpha s}. 
$$ It follows from \eqref{eq:ineq:conv} and item (v) of Theorem~\ref{Int-Proprty} that \begin{equation*} \begin{split} \int_{a}^{b} f(g(s)) & |h(s)| \Delta^\alpha s -\left(\int_{a}^{b} |h(s)| \Delta^\alpha s\right) f\left(\frac{\int_{a}^{b} g(s) |h(s)| \Delta^\alpha s}{\int_{a}^{b} |h(s)| \Delta^\alpha s}\right)\\ &= \int_{a}^{b} f(g(s)) |h(s)| \Delta^\alpha s -\left(\int_{a}^{b} |h(s)| \Delta^\alpha s\right) f\left(t\right)\\ &= \int_{a}^{b} \left(f(g(s))-f(t)\right) |h(s)| \Delta^\alpha s\\ \end{split} \end{equation*} \begin{equation*} \begin{split} &\geq a_t \int_{a}^{b} \left(g(s)-t\right) |h(s)| \Delta^\alpha s\\ &= a_t \left(\int_{a}^{b} g(s) |h(s)| \Delta^\alpha s - t \int_{a}^{b} |h(s)| \Delta^\alpha s\right)\\ &= a_t \left(\int_{a}^{b} g(s) |h(s)| \Delta^\alpha s - \int_{a}^{b} g(s) |h(s)| \Delta^\alpha s\right)\\ &= 0. \end{split} \end{equation*} This proves \eqref{eq:Jen:conv}. To prove \eqref{eq:Jen:conc}, we simply observe that $F(x) = -f(x)$ is convex (because we are now assuming $f$ to be concave) and then we apply inequality \eqref{eq:Jen:conv} to function $F$. \end{proof} We end with an application of Theorem~\ref{thm:Jen}. \begin{thm}[A weighted fractional Hermite--Hadamard inequality on time scales] \label{thm:HHineq} Let $\mathbb{T}$ be a time scale, $a, b \in \mathbb{T}$ and $\alpha \in (0,1]$. Let $f : [a,b] \rightarrow \mathbb{R}$ be a continuous convex function and let $w : \mathbb{T} \rightarrow \mathbb{R}$ be a continuous function such that $w(t) \geq 0$ for all $t \in \mathbb{T}$ and $\int_a^b w(t) \Delta^\alpha t > 0$. Then, \begin{equation} \label{eq:HHineq} f(x_{w,\alpha}) \leq \frac{1}{\int_a^b w(t) \Delta^\alpha t} \int_{a}^{b} f(t) w(t) \Delta^\alpha t \leq \frac{b - x_{w,\alpha}}{b-a} f(a) + \frac{x_{w,\alpha} - a}{b-a} f(b), \end{equation} where $x_{w,\alpha} = \frac{\int_{a}^{b} t w(t) \Delta^\alpha t}{\int_{a}^{b} w(t) \Delta^\alpha t}$. 
\end{thm} \begin{proof} For every convex function one has $$ f(t) \leq f(a) + \frac{f(b)-f(a)}{b-a} (t-a). $$ Multiplying this inequality with $w(t)$, which is nonnegative, we get $$ w(t) f(t) \leq f(a) w(t) + \frac{f(b)-f(a)}{b-a} (t-a) w(t). $$ Taking the $\alpha$-fractional integral on both sides, we can write that $$ \int_{a}^{b} w(t) f(t) \Delta^\alpha t \leq \int_{a}^{b} f(a) w(t) \Delta^\alpha t + \int_{a}^{b} \frac{f(b)-f(a)}{b-a} (t-a) w(t) \Delta^\alpha t, $$ which implies \begin{multline*} \int_{a}^{b} w(t) f(t) \Delta^\alpha t \\ \leq f(a) \int_{a}^{b} w(t) \Delta^\alpha t + \frac{f(b)-f(a)}{b-a} \left(\int_{a}^{b} t w(t) \Delta^\alpha t - a \int_{a}^{b} w(t) \Delta^\alpha t\right), \end{multline*} that is, $$ \frac{1}{\int_a^b w(t) \Delta^\alpha t} \int_{a}^{b} f(t) w(t) \Delta^\alpha t \leq \frac{b - x_{w,\alpha}}{b-a} f(a) + \frac{x_{w,\alpha} - a}{b-a} f(b). $$ We have just proved the second inequality of \eqref{eq:HHineq}. For the first inequality of \eqref{eq:HHineq}, we use \eqref{eq:Jen:conv} of Theorem~\ref{thm:Jen} by taking $g : \mathbb{T} \rightarrow \mathbb{T}$ defined by $g(s) = s$ for all $s \in \mathbb{T}$ and $h : \mathbb{T} \rightarrow \mathbb{R}$ given by $h = w$. \end{proof} Note that if in Theorem~\ref{thm:HHineq} we consider a concave function $f$ instead of a convex one, then the inequalities of \eqref{eq:HHineq} are reversed. \section*{Acknowledgements} Torres was partially supported by the Portuguese Foundation for Science and Technology (FCT), through the Center for Research and Development in Mathematics and Applications (CIDMA), within project UID/MAT/04106/2013. The authors are greatly indebted to two referees for their several useful suggestions and valuable comments.
\section{Introduction}\label{sec:1} Blazars are the most extreme class of radio-loud active galactic nuclei (AGNs) in the AGN unification scheme. They emit electromagnetic radiation ranging from the radio band to the High and Very High Energy $\gamma$-ray{} bands (HE; $\geq100$ MeV and VHE; $\geq100$ GeV), characterized by rapid and high-amplitude variability, which can be explained by assuming that the jets are oriented close to the line of sight of the observer (within a few degrees) and that the nonthermal plasma moves with relativistic velocities along the jet \citep{urry}. Blazars are grouped into two large sub-classes, Flat Spectrum Radio Quasars (FSRQs) and BL Lacertae objects (BL Lacs), on the basis of their different emission line properties, which are strong and quasar-like in FSRQs and weak or absent in BL Lacs. An alternative classification method is based on the luminosity of the broad emission lines (or accretion disc) measured in Eddington units: when $L_{\rm BLR}/L_{\rm Edd}\geq5\times10^{-4}$ the objects are FSRQs, otherwise they are BL Lacs \citep{2011MNRAS,2012MNRAS}.\\ Multi-wavelength studies have shown that the Spectral Energy Distributions (SEDs) of both types of blazars consist of two broad humps, peaking in the IR-X-ray (low-energy component) and in the MeV-TeV bands (HE component). The low-energy component is well explained by synchrotron emission from relativistic electrons in the jet, whereas the nature of the HE component is less well understood, as several different emission mechanisms can be responsible for that emission (e.g., see \cite{sikora09}). The simplest explanation is the synchrotron self-Compton (SSC) scenario, where the soft synchrotron photons are inverse-Compton up-scattered by the same electrons that have produced the synchrotron emission \citep{ghisellini, bloom, maraschi}. 
As FSRQ jets reside in an environment with a strong external radiation field, which can be beamed and enhanced in the frame of the jet, inverse Compton scattering of external photons can also contribute to the observed HE emission \citep{blazejowski,ghiselini09, sikora}. Alternatively, if protons are efficiently accelerated in the jet (beyond the threshold for pion production), the HE emission can also be explained by the interaction of energetic protons \citep{mucke2,mucke1}.\\ After the launch of the Fermi Large Area Telescope ({\it Fermi}-LAT{}), several thousand blazars were detected in the $\gamma$-ray{} band \citep{ackermancatalog}, which opens new perspectives for the investigation of their broadband emission. The observations indirectly show that the $\gamma$-rays{} can be produced either close to or far from the central black hole. Since the $\gamma$-ray{} emission regions are very compact, as inferred from variability on extremely short (down to minute) time scales \citep{2016ApJ,foschini11,foschini13, nalewajko, brown13, rani13,saito,hayashida15}, and since there is a sharp break in the GeV $\gamma$-ray{} spectra of some blazars \citep{poutanen}, the emission is most likely produced within the broad-line regions (BLRs). On the other hand, the recent detection of $\geq100$ GeV photons from several FSRQs \citep{ahnen1441,aleksic1510, aleksic1222, 2017MNRAS.470.2861S} implies that the $\gamma$-ray{} emission region should be located beyond the BLR in order to bypass the strong absorption of VHE photons \citep{poutanen, liu}. 
Unfortunately, the angular resolution of $\gamma$-ray{} instruments is not high enough (and will not be in the near future) to resolve and localize the $\gamma$-ray{} emission regions, which makes it difficult to determine the exact origin of the $\gamma$-ray{} emission from blazars, as the jet dissipation can occur at any distance from the central black hole.\\ Among the FSRQs detected by {\it Fermi}-LAT{}, the powerful GeV $\gamma$-ray{} emitter CTA 102, $z=1.037$ \citep{schmidt}, flares frequently, its $\gamma$-ray{} flux sometimes exceeding $10^{-5}\:{\rm photon\:s^{-1}\:cm^{-2}}$. CTA 102 is a luminous, well-studied, highly polarized quasar \citep{stockman} with variable optical emission \citep{pica}. It was initially identified as a $\gamma$-ray{} emitter by the Compton Gamma Ray Observatory mission (with a flux above 100 MeV of $(2.4\pm0.5)\times10^{-7}\:{\rm photon\:s^{-1}\:cm^{-2}}$), and it has since been included in all the {\it Fermi}-LAT{} point source catalogs \citep{acero}. Since 2016, CTA 102 has been in an enhanced emission state in the UV/optical, X-ray and HE $\gamma$-ray{} bands \citep{casadio, Balonek, chapman, popov, ciprini, bulgarelli, ciprini1, becerra, minervini, carrasco}, with several prominent $\gamma$-ray{} bright periods. Considering the large amount of available multi-wavelength data, which allows one to constrain the emitting region size and location, the magnetic field, the electron energy distribution, etc., CTA 102 is an ideal object for exploring the physics of FSRQ jets.\\ In this paper, we analyze the Swift UVOT/XRT, NuSTAR and {\it Fermi}-LAT{} data collected from 2016 to 2018 to study the broadband emission from CTA 102. The data used in the analysis and the reduction methods are described in Section \ref{sec:2}. The spectral changes in different bands during the flaring and low states are discussed in Section \ref{sec:3}. The broadband SED modeling is presented in Section \ref{sec:4}, and the Results and Discussion in Section \ref{sec:5}. 
The conclusion is summarized in Section \ref{sec:6}. \section{Observations and Data Reduction}\label{sec:2} \subsection{Gamma-ray observations: Fermi LAT} \begin{figure*} \centering \includegraphics[width=0.99 \textwidth]{f1.eps} \caption{Multifrequency light curve of CTA 102 obtained for the period from 2008 August to 2018 January. {\it a)} $\gamma$-ray{} light curves with adaptive (red; $\geq156.1$ MeV) and 2-day (blue; $100$ MeV) bins, {\it b) and c)} the flux and photon index with 2- and 7-day binning, {\it d)} Swift XRT light curve in the 0.3-10 keV range, {\it e)} UV/optical fluxes in $V$, $B$, $U$, $W1$, $M2$ and $W2$ bands and {\it f)} the energy and arrival times of the highest-energy photons. The vertical blue dashed line shows the period when a large flare in the $R-$ band was observed (28 December 2016). }% \label{var_mult} \end{figure*} In the present paper we use the publicly available {\it Fermi}-LAT{} data acquired in the period from 01 January 2016 to 09 January 2018, when large-amplitude flares of CTA 102 were observed. Fermi Science Tools v10r0p5 was used to analyze the data with the {\it P8R2\_SOURCE\_V6} instrument response functions. Only the 100 MeV - 300 GeV events extracted from a $12^\circ$ region of interest (ROI) centered on the location of CTA 102 [(RA,dec)= (338.139, 11.720)] have been analyzed. However, the results were checked by repeating the same analyses for ROI radii of $10^\circ$ and $15^\circ$. To eliminate the Earth limb events, the recommended quality cuts, (DATA\_QUAL==1)$\&\&$(LAT\_CONFIG==1), and a zenith angle cut at $90^\circ$ were applied. After binning the data into pixels of $0.1^\circ\times0.1^\circ$ and into 34 equal logarithmically-spaced energy bins, a binned likelihood analysis was performed with the {\it gtlike} tool. 
The model file describing the ROI was created using the {\it Fermi}-LAT{} third source catalog \citep{acero} (3FGL), which contains sources within ROI+$5^\circ$ of the target, as well as the Galactic {\it gll\_iem\_v06} and {\it iso\_P8R2\_SOURCE\_V6\_v06} diffuse components. All point-source spectra were modeled with those given in the catalog, allowing the photon index and normalization of the sources within $12^\circ$ to be free in the analysis. The normalizations of the diffuse background components were also left free. To check whether there are new $\gamma$-ray{} sources in the ROI, a Test Statistic (TS) map (where TS $= 2(\log {\rm L}-\log {\rm L_0})$, ${\rm L}$ and ${\rm L_0}$ being the maximum likelihoods with and without the source included) was created with the {\it gttsmap} tool, which places a point source at each pixel and evaluates its TS. In the TS map there are new hotspots (pixels) with TS $>$ 25 ($5\:\sigma$), which hints at the presence of new sources. For each new hotspot we sequentially added a new point source with a power-law spectral definition. For the further analysis, the model file containing these additional point sources is used.\\ In the whole-time analysis, the $\gamma$-ray{} spectrum of CTA 102 was first modeled using a log-parabola \citep{Massaro}, as in 3FGL, and then assuming a power-law shape. The latter is used in the light curve calculations, as shorter periods are considered and a power law is a good approximation of the spectrum there. During the analysis of each individual flare, a different model file, obtained from the analyses of the data accumulated during one- or two-month periods covering the flares, was also used. An unbinned maximum likelihood analysis was performed using $(0.1-300)$ GeV photons, with the appropriate quality cuts mentioned above, to obtain the $\gamma$-ray{} light curves. 
Since no variability is expected from the underlying background diffuse emission, we fix the normalization of both background components to the best-fit values obtained for the whole time period.\\ Initially, the light curve was calculated with the help of an adaptive binning method. With regular (fixed) time binning, long bins smooth out fast variations, while short bins might result in many upper limits during the low-activity periods. In the adaptive binning method, the time bin widths are adjusted to produce bins with a constant flux uncertainty above the optimal energy \citep{lott}, which makes it possible to trace rapid changes in the $\gamma$-ray{} flux. The adaptively binned light curve with 15\% uncertainty above $E_0=156.1$ MeV in Fig. \ref{var_mult} shows several bright $\gamma$-ray{} states: from MJD 57420 to MJD 57445 and from MJD 57700 to MJD 57900. The peak flux of $(2.52\pm0.42)\times10^{-5}\:{\rm photon\:cm^{-2}\:s^{-1}}$ with a photon index of $\Gamma=1.99\pm0.15$ was observed on MJD 57738.47 within 4.31 minutes, with a convincingly high detection significance of $\sim20.0\sigma$. It corresponds to a flux of $(3.55\pm0.55)\times10^{-5}\:{\rm photon\:cm^{-2}\:s^{-1}}$ above 100 MeV, which exceeds by a factor of $\sim221$ the average $\gamma$-ray{} flux given in 3FGL ($\simeq1.60\times10^{-7}\:{\rm photon\:cm^{-2}\:s^{-1}}$; note, however, that the source is variable, with a variability index of 1602.3 in 3FGL). In addition, we used the {\it gtfindsrc} tool to determine the best coordinates of the $\gamma$-ray{} emission in this period, yielding (RA,dec)= (338.115, 11.746) with a 95\% confidence error circle radius of $r_{95} = 0.06^\circ$. These coordinates are offset by only $0.03^\circ$ from the $\gamma$-ray{} position of CTA 102, indicating that it is the most likely source of the emission. 
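The effect of the adaptive binning can be caricatured with pure Poisson statistics: a fixed relative flux uncertainty of 15\% roughly corresponds to a fixed number of detected photons per bin, $N\simeq(1/0.15)^2\approx44$, so the bins automatically narrow during flares. The sketch below is a deliberate simplification of the method of \citet{lott}, which in reality weights photons by energy and exposure:

```python
def adaptive_bins(event_times, frac_unc=0.15):
    """Close a bin once enough events accumulate for the target relative
    Poisson uncertainty (delta F / F ~ 1/sqrt(N))."""
    n_per_bin = int(round(1.0 / frac_unc ** 2))  # ~44 events for 15% uncertainty
    edges, count = [event_times[0]], 0
    for t in event_times:
        count += 1
        if count == n_per_bin:
            edges.append(t)
            count = 0
    return edges

# synthetic event list: a quiet stretch followed by a 'flare' with a 10x rate
quiet = [i * 1.0 for i in range(100)]         # ~1 event per day
flare = [100 + i * 0.1 for i in range(200)]   # ~10 events per day
edges = adaptive_bins(quiet + flare)
widths = [b - a for a, b in zip(edges, edges[1:])]
print(widths[0], widths[-1])  # ~43 d in the quiet state vs. ~4.4 d in the flare
```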
The hardest photon index of $1.61\pm 0.10$ ($22.56\sigma$) was observed on MJD 57752.45 within $9.46$ minutes, significantly harder than the mean photon index observed during the considered period, $\Gamma_{\rm mean}=2.22$. \\ In the adaptively binned light curve there is a hint of flux changes on minute time scales. For example, the interval MJD 57737.88--57739.00 ($\sim1.13$ days) contains 67 adaptive bins, each having a width of the order of a few minutes and a detection significance of $> 14.3\sigma$. Another such active period was observed on MJD 57752.0, though the time bin widths were a few tens of minutes. Many times during the considered period the source flux exceeded $10^{-5}\:{\rm photon\:cm^{-2}\:s^{-1}}$, mostly during the extremely active period from MJD $57736.4$ to MJD $57798.46$, as well as a few times on MJD 57439.0 and MJD 57862.0. During these periods, the photon flux and index vary within $(1.01-2.52)\times10^{-5}\:{\rm photon\:cm^{-2}\:s^{-1}}$ and $1.61-2.56$, respectively, the minimum and maximum bin widths being $4.31$ and $194.54$ minutes and the detection significance varying from $13.18\sigma$ to $22.61\sigma$.\\ \begin{figure} \centering \includegraphics[width=\hsize,clip]{f2.eps} \caption{CTA 102 $\gamma$-ray{} photon index vs. flux in adaptive (orange) and two-day bins (blue). A similar plot for the X-ray band is shown in the inset.} \label{phot_index} \end{figure} Fig. \ref{var_mult} b) shows the $\gamma$-ray{} light curves $>1$ GeV (2 days; red color) and $>10$ GeV (7 days; blue color) with a noticeable increase in the flux, the peaks being $(2.32\pm0.10)\times10^{-6}\:{\rm photon\:cm^{-2}\:s^{-1}}$ and $(6.43\pm0.94)\times10^{-8}\:{\rm photon\:cm^{-2}\:s^{-1}}$ at 2-day and 7-day binning, respectively. Above 10 GeV, among the 105 total bins only 36 have a detection significance of at least $4\sigma$, but, e.g., on MJD 57741.0 and MJD 57748.0 it is as large as $\simeq29\sigma$, with $N_{\rm pred}$ varying within $46-55$. 
The $\gamma$-ray{} photon index variation above 0.1 and 10 GeV is shown in Fig. \ref{var_mult} c) with red and blue colors, respectively. There is an obvious hardening above 0.1 GeV during the brightest periods of the source, when the photon index changed to $\Gamma\simeq2.0$. The mean $\gamma$-ray{} photon index above 10 GeV is $\Gamma_{\rm mean}=3.41$, but on MJD 57776.0 $\Gamma=1.79\pm0.55$ with $7.85\sigma$.\\ The $\gamma$-ray{} photon index versus flux is presented in Fig. \ref{phot_index} for adaptive (orange) and 2-day binning (blue; $> 0.1$ GeV). When 2-day intervals are considered, there is a hint of spectral hardening as the source gets brighter. In the $\gamma$-ray{} band such behaviour has already been observed in several blazars (e.g., PKS 1502+106 \citep{2010ApJ...710..810A}, PKS 1510-089 \citep{2010ApJ...721.1425A}, sometimes 3C 454.3 \citep{2010ApJ...721.1383A}, etc.) and radio galaxies (e.g., NGC 1275 \citep{2017ApJ...848..111B}). Such an evolution of the spectral index with flux is expected when freshly accelerated HE electrons cool down (e.g., \citep{kirk}). A similar relation is harder to see in the case of the adaptive bins, as the bright periods are shorter, leading to larger uncertainties. The linear-Pearson correlation test applied to the 2-day and adaptively binned intervals yielded $r_{p}=-0.569$ and $r_{p}=-0.533$, respectively, the p-value being $\ll10^{-5}$. This suggests a negative correlation between the flux and photon index, i.e., as the flux increases, the photon index decreases (hardens).\\ The distribution of the highest-energy events ($>10$ GeV) detected from CTA 102, calculated using the {\it gtsrcprob} tool, is presented in Fig. \ref{var_mult} f). Most of the HE photons were observed during MJD 57700-57800, with the maximum energy of $97.93$ GeV detected on MJD 57773.34. 
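The linear-Pearson coefficient quoted above is straightforward to reproduce. The sketch below applies it to synthetic flux/index pairs with a built-in harder-when-brighter trend; the numbers are illustrative only, not the actual 2-day light curve:

```python
import math, random

def pearson_r(x, y):
    """Linear Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# synthetic 2-day points: the photon index hardens (decreases) as the flux rises
random.seed(7)
flux  = [random.uniform(0.1, 3.0) for _ in range(100)]           # in 1e-6 ph/cm2/s
index = [2.4 - 0.15 * f + random.gauss(0.0, 0.1) for f in flux]  # noisy trend
r = pearson_r(flux, index)
print(round(r, 2))  # negative r: harder-when-brighter
```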
\subsection{Swift UVOT/XRT observations} The data from seventy Swift (Neil Gehrels Swift Observatory) observations of CTA 102 carried out from 01 January 2016 to 09 January 2018 have been analyzed. The exposures range from 0.3 ks (ObsID:33509083) to 3.14 ks (ObsID:33509095); most of the observations were made in photon counting mode and only two in windowed timing mode. The XRT data were analyzed with {\it XRTDAS} (v.3.3.0) using the standard procedure and the most recent calibration databases. Events for the spectral analysis were selected within a 20 pixel ($47''$) circle with the source at the center, while the background was extracted from an annulus with the same center, having inner and outer radii of 51 ($120''$) and 85 pixels ($200''$), respectively. The count rate in some observations was above 0.5 count ${\rm s^{-1}}$, implying pile-up in the inner part of the PSF. This effect was removed by excluding the events within a 3 pixel radius circle centered on the source position. The Cash statistic \citep{1979ApJ...228..939C} was applied to ungrouped data, as the number of counts was low for some observations. However, for the observations with a high count rate, the results were also cross-checked by rebinning the spectra to have at least 20 counts per bin and fitting using the $\chi^2$ minimization technique. The individual spectra were fitted with {\it XSPEC} v12.9.1a adopting an absorbed power-law model with an $N_{H}=5.35\times 10^{20}\:{\rm cm^{-2}}$ column density, ignoring the channels with energies below 0.3 keV and above 10 keV. Fig. \ref{var_mult} d) shows the X-ray flux evolution in time (the corresponding parameters are presented in Table \ref{tabeswift}), where its gradual increase, contemporaneous with the $\gamma$-ray{} flux around MJD 57750, can be seen. 
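For orientation, the Cash-type statistic used for the ungrouped spectra is the Poisson analogue of $\chi^2$; in its likelihood-ratio form (the convention adopted, e.g., by XSPEC's C-stat) it reads $C=2\sum_i\left[m_i-n_i+n_i\ln(n_i/m_i)\right]$ for observed counts $n_i$ and model-predicted counts $m_i$. A minimal sketch with toy counts (the spectral model itself is not implemented here):

```python
import math

def cash_stat(observed, model):
    """Cash-type fit statistic in likelihood-ratio form:
    C = 2 * sum[ m - n + n*ln(n/m) ], with n = 0 bins contributing 2*m.
    Valid for ungrouped Poisson counts; smaller C means a better fit."""
    c = 0.0
    for n, m in zip(observed, model):
        c += 2.0 * (m - n + (n * math.log(n / m) if n > 0 else 0.0))
    return c

# toy power-law-like model counts vs. observed Poisson counts per channel
model    = [12.0, 9.5, 7.1, 5.3, 3.9, 2.8, 1.9, 1.2, 0.7, 0.3]
observed = [14,   8,   7,   6,   3,   3,   2,   1,   0,   1  ]
print(round(cash_stat(observed, model), 2))  # nonnegative; 0 for a perfect fit
```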
The highest flux of $F_{0.3-10\:{\rm keV}}\simeq(6.71\pm 0.21)\times10^{-11}\:{\rm erg\:cm^{-2}\:s^{-1}}$, observed on MJD 57759.69, exceeds the average flux ($\simeq1.2\times10^{-11}\:{\rm erg\:cm^{-2}\:s^{-1}}$) $\sim5.6$ times. The relation between the unabsorbed X-ray flux and photon index is shown in the inset of Fig. \ref{phot_index}. A trend of a harder spectrum when the source is brighter can be seen. Such a harder-when-brighter trend in the X-ray band has already been observed in several FSRQs (e.g., PKS 1510-089 \citep{2008ApJ...672..787K, 2011A&A...529A.145D}, 3C 454.3 \citep{2010ApJ...712..405V}, etc.), and can be explained by assuming that the electrons lose energy mainly through interaction with external photon fields (e.g., \citep{2011ApJ...736L..38V}).\\ The data from the second instrument on board the Swift satellite, UVOT, were used to measure the flux of the source in the UV/optical bands. Photometry was computed using a five-arcsecond source region centered on CTA 102 and, for the background, a source-free annulus centered on the source position with $27''$ inner and $35''$ outer radii. The magnitudes were computed using the {\it UVOTSOURCE} task, corrected for extinction using the reddening coefficient E(B-V) from \citet{schlafly} and the ratios of extinction to reddening A/E(B-V) for each filter from \citet{fitzpatrick}, and then converted to fluxes following \citet{breeveld}. The fluxes measured in the $V,\:B, \: U, \: W1, \:M2$ and $W2$ filters are shown in Fig. \ref{var_mult} e). Even if the available data are not sufficient for detailed studies, it is clear that up to $\sim$ MJD 57720 the source was in a relatively faint state in the optical/UV band, while its flux significantly increased during the bright flaring period around $\sim$ MJD 57750. 
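The chain ``correct for extinction, then convert magnitudes to fluxes'' reduces to subtracting $A_\lambda=[A/E(B-V)]\,E(B-V)$ from the observed magnitude and applying the filter zero point. The sketch below is generic: the zero-point flux, the $A/E(B-V)$ ratio and the $E(B-V)$ value are placeholders, not the actual coefficients of \citet{breeveld}, \citet{fitzpatrick} and \citet{schlafly}:

```python
def dereddened_flux(mag, ratio_A_to_EBV, ebv, zero_point_flux_jy):
    """Extinction-correct an observed magnitude and convert it to a flux density.

    mag                -- observed magnitude in the given filter
    ratio_A_to_EBV     -- A_lambda / E(B-V) for that filter (Fitzpatrick-type)
    ebv                -- reddening E(B-V) along the line of sight
    zero_point_flux_jy -- filter zero-point flux density (Jy) at mag = 0
    """
    a_lambda = ratio_A_to_EBV * ebv   # extinction in magnitudes
    mag_corr = mag - a_lambda         # dereddened magnitude
    return zero_point_flux_jy * 10.0 ** (-0.4 * mag_corr)

# illustrative placeholder numbers only (hypothetical zero point and coefficients)
flux = dereddened_flux(mag=15.5, ratio_A_to_EBV=5.0, ebv=0.07,
                       zero_point_flux_jy=1500.0)
print(round(flux, 6))  # flux density in Jy; larger than the uncorrected value
```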
This is in agreement with the recent results of \citet{2017Natur.552..374R}, which show that the source emission in the optical band increased in late 2016 with a 6-7 magnitude jump as compared with the minimal state. The maximum flux in the $R-$ band was observed on 28 December 2016 (MJD 57750), with a peak luminosity of $1.32\times10^{48}\:{\rm erg\:s^{-1}}$. In addition, the radio monitoring (at 37 GHz) showed that the peak in this band occurred much earlier than the one in the $R-$ band, implying that these emissions were produced at different locations in the jet. \subsection{NuSTAR observation} In the hard X-ray band (3-79 keV), CTA 102 was observed once, on 30 December 2016, by NuSTAR, with a net exposure of $\sim26.21$ ks, when it was bright in the X-ray and $\gamma$-ray{} bands. The raw data (from both Focal Plane Modules [FPMA and FPMB; \citep{2013ApJ...770..103H}]) were processed with the NuSTAR Data Analysis Software ({\it NuSTARDAS}) package v.1.4.1 (via the script {\it nupipeline}), producing calibrated and cleaned event files. The event data were extracted from a region of $75''$ centered on the source position, while the background was extracted from a nearby source-free circular region with the same radius. The spectra were binned so as to have at least 30 counts per bin and fitted assuming an absorbed power-law model. The best fit resulted in $\Gamma_{\rm X}=1.32\pm0.005$ and $F_{3-79\:{\rm keV}}\simeq(2.94\pm 0.02)\times10^{-10}\:{\rm erg\:cm^{-2}\:s^{-1}}$ with $\chi^2=0.97$ for 1131 degrees of freedom. The corresponding spectra for FPMA and FPMB are shown in Fig. \ref{Nustarspec}. \startlongtable \begin{deluxetable*}{cccccc} \tablecaption{Summary of Swift XRT observations of CTA 102. 
\label{tabeswift}} \tablehead{ \colhead{Sequence No.} & \colhead{Date (MJD)} & \colhead{Exp(sec)} & \colhead{Log(Flux)\tablenotemark{a}} & $\Gamma$ & \colhead{C-stat./dof} } \tabletypesize{\footnotesize} \startdata 00033509016 & 2016-01-02(57389.33) & 834.1 & $-10.94\pm0.06$ & $1.23\pm0.14$ & 91.23(103) \\ 00033509017 & 2016-01-03(57390.6) & 1119.0 & $-10.88\pm0.04$ & $1.18\pm0.10$ & 166.99(168) \\ 00033509018 & 2016-06-21(57560.67) & 991.4 & $-10.71\pm0.03$ & $1.43\pm0.08$ & 218.22(218) \\ 00033509021 & 2016-06-30(57569.66) & 844.1 & $-10.77\pm0.04$ & $1.42\pm0.11$ & 195.86(151) \\ 00033509022 & 2016-08-24(57624.88) & 1633.0 & $-10.68\pm0.02$ & $1.38\pm0.06$ & 422.02(320) \\ 00033509023 & 2016-08-25(57625.35) & 1691.0 & $-10.73\pm0.02$ & $1.46\pm0.06$ & 393.49(306) \\ 00033509024 & 2016-08-26(57626.94) & 1868.0 & $-10.71\pm0.02$ & $1.41\pm0.06$ & 410.73(323) \\ 00033509025 & 2016-08-27(57627.94) & 1466.0 & $-10.65\pm0.03$ & $1.43\pm0.07$ & 233.39(291) \\ 00033509026 & 2016-08-28(57628.01) & 2148.0 & $-10.75\pm0.02$ & $1.45\pm0.06$ & 271.35(309) \\ 00033509027 & 2016-08-28(57628.94) & 2797.0 & $-10.71\pm0.02$ & $1.36\pm0.05$ & 450.67(403) \\ 00033509028 & 2016-08-30(57630.93) & 1576.0 & $-10.81\pm0.03$ & $1.48\pm0.07$ & 226.96(270) \\ 00033509030 & 2016-08-31(57631.93) & 2133.0 & $-10.76\pm0.03$ & $1.35\pm0.07$ & 289.41(290) \\ 00033509031 & 2016-09-02(57633.06) & 1978.0 & $-10.78\pm0.02$ & $1.42\pm0.06$ & 316.53(301) \\ 00033509034 & 2016-09-03(57634.79) & 966.5 & $-10.72\pm0.03$ & $1.57\pm0.09$ & 220.83(194) \\ 00033509035 & 2016-09-04(57635.64) & 869.1 & $-10.79\pm0.03$ & $1.64\pm0.09$ & 207.90(193) \\ 00033509076 & 2016-09-02(57633.92) & 1965.0 & $-10.75\pm0.02$ & $1.47\pm0.06$ & 373.93(324) \\ 00033509077 & 2016-09-08(57639.9) & 991.4 & $-10.78\pm0.04$ & $1.33\pm0.10$ & 202.35(178) \\ 00033509078 & 2016-09-12(57643.43) & 914.0 & $-10.85\pm0.04$ & $1.47\pm0.10$ & 160.64(171) \\ 00033509079 & 2016-09-14(57645.36) & 1091.0 & $-10.81\pm0.03$ & $1.4\pm0.09$ & 
192.03(199) \\ 00033509080 & 2016-09-17(57648.47) & 894.0 & $-10.66\pm0.03$ & $1.34\pm0.08$ & 262.16(217) \\ 00033509081 & 2016-09-20(57651.32) & 996.4 & $-10.64\pm0.03$ & $1.33\pm0.07$ & 311.96(242) \\ 00033509082 & 2016-09-26(57657.11) & 789.1 & $-10.72\pm0.04$ & $1.43\pm0.09$ & 198.62(189) \\ 00033509083 & 2016-10-02(57663.43) & 344.6 & $-10.85\pm0.07$ & $1.37\pm0.20$ & 47.08(67) \\ 00033509084 & 2016-10-08(57669.34) & 609.3 & $-10.79\pm0.05$ & $1.45\pm0.12$ & 130.49(103) \\ 00033509085 & 2016-10-14(57675.33) & 966.5 & $-10.8\pm0.04$ & $1.38\pm0.10$ & 221.53(186) \\ 00033509086 & 2016-10-20(57681.43) & 971.4 & $-10.79\pm0.04$ & $1.32\pm0.09$ & 248.18(190) \\ 00033509087 & 2016-10-27(57688.54) & 1965.0 & $-10.71\pm0.02$ & $1.29\pm0.06$ & 449.38(329) \\ 00033509088 & 2016-10-28(57689.21) & 1711.0 & $-10.79\pm0.03$ & $1.36\pm0.07$ & 301.62(271) \\ 00033509090 & 2016-10-29(57690.59) & 1723.0 & $-10.85\pm0.03$ & $1.46\pm0.08$ & 182.50(224) \\ 00033509091 & 2016-10-30(57691.92) & 1656.0 & $-10.84\pm0.03$ & $1.4\pm0.08$ & 231.46(241) \\ 00033509092 & 2016-10-31(57692.79) & 2108.0 & $-10.82\pm0.02$ & $1.57\pm0.06$ & 287.31(299) \\ 00033509093 & 2016-11-14(57706.68) & 2974.0 & $-10.59\pm0.02$ & $1.22\pm0.04$ & 597.15(447) \\ 00033509094 & 2016-11-16(57708.53) & 2762.0 & $-10.66\pm0.02$ & $1.25\pm0.05$ & 460.61(428) \\ 00033509095 & 2016-11-18(57710.26) & 3137.0 & $-10.73\pm0.02$ & $1.33\pm0.05$ & 432.80(415) \\ 00033509096 & 2016-11-20(57712.58) & 2435.0 & $-10.63\pm0.02$ & $1.32\pm0.05$ & 556.82(417) \\ 00033509097 & 2016-11-22(57714.11) & 1693.0 & $-10.49\pm0.02$ & $1.55\pm0.06$ & 265.95(322) \\ 00033509098 & 2016-11-23(57715.9) & 2934.0 & $-10.43\pm0.02$ & $1.19\pm0.04$ & 717.13(517) \\ 00033509099 & 2016-11-27(57719.78) & 1963.0 & $-10.63\pm0.02$ & $1.36\pm0.05$ & 505.78(364) \\ 00033509100 & 2016-11-30(57722.02) & 382.1 & $-10.68\pm0.05$ & $1.42\pm0.12$ & 108.37(118) \\ 00033509101 & 2016-12-01(57723.08) & 1341.0 & $-10.74\pm0.02$ & $1.78\pm0.07$ & 278.11(275) \\ 
00033509103 & 2016-12-06(57728.07) & 1958.0 & $-10.47\pm0.02$ & $1.69\pm0.05$ & 449.08(354) \\ 00033509105 & 2016-12-13(57735.06) & 2655.0 & $-10.40\pm0.02$ & $1.32\pm0.04$ & 457.45(437) \\ 00033509106 & 2016-12-16(57738.05) & 2440.0 & $-10.30\pm0.02$ & $1.23\pm0.04$ & 653.22(469) \\ 00033509107 & 2016-12-18(57740.49) & 2402.0 & $-10.32\pm0.02$ & $1.27\pm0.05$ & 569.51(541) \\ 00033509108 & 2016-12-20(57742.95) & 818.4 & $-10.39\pm0.03$ & $1.47\pm0.08$ & 271.95(359) \\ 00033509109 & 2016-12-23(57745.07) & 1993.0 & $-10.35\pm0.02$ & $1.58\pm0.05$ & 399.45(388) \\ 00033509110 & 2016-12-26(57748.33) & 1686.0 & $-10.39\pm0.02$ & $1.39\pm0.06$ & 347.1(329) \\ 00033509111 & 2016-12-29(57751.8) & 1823.0 & $-10.27\pm0.02$ & $1.62\pm0.04$ & 455.60(397) \\ 00033509112 & 2016-12-30(57752.54) & 1468.0 & $-10.19\pm0.02$ & $1.29\pm0.05$ & 482.04(410) \\ 00088026001 & 2016-12-31(57753.06) & 2048.0 & $-10.20\pm0.02$ & $1.26\pm0.04$ & 619.50(486) \\ 00033509113 & 2017-01-02(57755.05) & 1566.0 & $-10.30\pm0.02$ & $1.24\pm0.05$ & 383.85(386) \\ 00033509114 & 2017-01-01(57754.37) & 1488.0 & $-10.19\pm0.02$ & $1.18\pm0.05$ & 405.54(421) \\ 00033509115 & 2017-01-06(57759.69) & 2472.0 & $-10.17\pm0.01$ & $1.33\pm0.03$ & 748.83(539) \\ 00033509116 & 2017-01-08(57761.68) & 2480.0 & $-10.29\pm0.02$ & $1.31\pm0.04$ & 507.41(465) \\ 00033509117 & 2017-01-10(57763.14) & 2502.0 & $-10.30\pm0.02$ & $1.17\pm0.04$ & 607.92(463) \\ 00033509118 & 2017-01-12(57765.07) & 521.9 & $-10.45\pm0.04$ & $1.19\pm0.09$ & 200.60(200) \\ 00033509119 & 2017-01-15(57768.86) & 1009.0 & $-10.45\pm0.03$ & $1.33\pm0.08$ & 254.30(243) \\ 00033509120 & 2017-01-18(57771.38) & 1768.0 & $-10.50\pm0.02$ & $1.41\pm0.05$ & 399.09(391) \\ 00033509121 & 2017-04-20(57863.68) & 1975.0 & $-10.48\pm0.02$ & $1.56\pm0.06$ & 342.39(331) \\ 00033509122 & 2017-04-23(57866.86) & 2273.0 & $-10.54\pm0.02$ & $1.38\pm0.05$ & 467.86(419) \\ 00033509123 & 2017-04-26(57869.13) & 2018.0 & $-10.34\pm0.02$ & $1.33\pm0.05$ & 494.03(383) \\ 
00033509124 & 2017-04-30(57873.83) & 991.4 & $-10.36\pm0.03$ & $1.34\pm0.07$ & 298.45(263) \\ 00033509125 & 2017-05-01(57874.31) & 891.5 & $-10.31\pm0.03$ & $1.16\pm0.08$ & 207.36(245) \\ 00033509126 & 2017-05-05(57878.23) & 681.8 & $-10.59\pm0.04$ & $1.41\pm0.09$ & 203.60(192) \\ 00033509127 & 2017-05-06(57879.75) & 529.4 & $-10.56\pm0.04$ & $1.33\pm0.09$ & 205.28(182) \\ 00033509128 & 2017-08-01(57966.04) & 1975.0 & $-10.67\pm0.02$ & $1.45\pm0.05$ & 427.29(342) \\ 00033509129 & 2017-08-03(57968.65) & 2298.0 & $-10.62\pm0.02$ & $1.42\pm0.05$ & 457.95(394) \\ 00033509131 & 2018-01-05(58123.69) & 1970.0 & $-10.4\pm0.02$ & $1.27\pm0.06$ & 397.65(354) \\ 00033509130 & 2017-08-05(57970.96) & 876.5 & $-10.62\pm0.03$ & $1.41\pm0.08$ & 270.04(241) \\ 00033509061 & 2017-12-08(58095.17) & 2477.0 & $-10.59\pm0.02$ & $1.25\pm0.05$ & 444.7(416) \\ \hline \enddata \tablenotetext{a}{Flux in 0.3--10 keV in units of erg cm$^{-2}$ s$^{-1}$.} \end{deluxetable*} \begin{figure} \centering \includegraphics[width=\hsize,clip]{f3.eps} \caption{{\it Top:} NuSTAR FPMA (black) and FPMB (red) spectra and best-fit models. {\it Bottom:} Residuals with respect to the power-law model.} \label{Nustarspec} \end{figure} \subsection{Light curve variability} \begin{figure} \centering \includegraphics[width= 0.475 \textwidth]{f4A.eps} \includegraphics[width= 0.475 \textwidth]{f4B.eps} \caption{Light curves of CTA 102 above 100 MeV with time binning of 6 h (upper panel) and 12 h (lower panel). The lines show the flare fits with Eq. \ref{func1} (Table \ref{fit_par}). }% \label{flares} \end{figure} \begin{deluxetable}{cccccc} \tablecaption{Parameter values best explaining the flares. 
\label{fit_par}} \tablehead{ \colhead{Flare period $t_{0}$} & \colhead{$t_r \pm err$} & \colhead{$t_d \pm err$} & \colhead{$F_{\rm 0}/10^{-6}$} \\ \colhead{MJD} & \colhead{(day)} & \colhead{(day)} & \colhead{${\rm photon\:cm^{-2}\:s^{-1}}$} } \startdata $57736.53\pm 0.11$\tablenotemark{a} & $0.46 \pm 0.13$ & $0.17 \pm 0.08$ & $18.68 \pm 3.33$ \\ $57738.50\pm 0.06$\tablenotemark{a} & $0.60 \pm 0.09$ & $0.21\pm 0.03$ & $29.04\pm2.39$ \\ $57845.78\pm 0.36$\tablenotemark{b} & $1.49\pm0.33$ & $0.70\pm0.23$ & $9.72\pm 1.26$ \\ $57862.02\pm 0.11$\tablenotemark{b} & $0.17\pm0.06$ & $0.73\pm0.11$ & $25.20\pm2.63$ \\ \enddata \tablenotetext{a}{$F_{\rm c}=(4.18\pm 0.34)\times 10^{-6}\:{\rm photon\:cm^{-2}\:s^{-1}}$.} \tablenotetext{b}{$F_{\rm c}=(1.07\pm 0.08)\times 10^{-6}\:{\rm photon\:cm^{-2}\:s^{-1}}$.} \end{deluxetable} The variations in time of the $\gamma$-ray{} (2-day ($>0.1$ and $>1.0$ GeV), 7-day ($>10.0$ GeV) and adaptively binned ($>156.1$ MeV)), X-ray (0.3-10 keV) and UV/optical fluxes are shown in panels a), b), c), d) and e) of Fig. \ref{var_mult}. There is an evident major $\gamma$-ray{} flux increase accompanied by moderate brightening in the X-ray and UV/optical bands. The variability in the different bands is quantified using the fractional rms variability amplitude ($F_{\rm var}$) \citep{vaughan}, resulting in $F_{\rm var}=0.511\pm 0.008$ for the X-ray band and, correspondingly, $0.920\pm 0.006$ and $0.984\pm 0.004$ for the $\gamma$-ray{} light curves with adaptive and 2-day ($>0.1$ GeV) binning, implying much stronger variability in the $\gamma$-ray{} band. 
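The fractional rms variability amplitude quoted here can be computed following \citet{vaughan}; a minimal numerical sketch (the function name and the written-out error propagation are ours):

```python
import numpy as np

def fractional_variability(flux, flux_err):
    """Fractional rms variability amplitude F_var (Vaughan et al. 2003)."""
    flux = np.asarray(flux, dtype=float)
    flux_err = np.asarray(flux_err, dtype=float)
    mean = np.mean(flux)
    s2 = np.var(flux, ddof=1)          # sample variance of the light curve
    err2 = np.mean(flux_err ** 2)      # mean square measurement error
    excess = s2 - err2                 # excess variance
    fvar = np.sqrt(excess) / mean
    # uncertainty on F_var (Vaughan et al. 2003, Appendix B)
    n = len(flux)
    fvar_err = np.sqrt(
        (np.sqrt(1.0 / (2.0 * n)) * err2 / (mean ** 2 * fvar)) ** 2
        + (np.sqrt(err2 / n) / mean) ** 2
    )
    return fvar, fvar_err
```

The same estimator is applied to each band's light curve; bins with upper limits are excluded beforehand, as done for the 2- and 7-day binned curves.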
This variability is even stronger when the light curves with 2-day ($>1.0$ GeV) and 7-day ($>10.0$ GeV) bins are used (excluding, correspondingly, 20 and 69 periods with upper limits), since $F_{\rm var}=1.61\pm 0.01$ and $1.18\pm 0.06$, respectively.\\ The rapid variability in the $\gamma$-ray{} band can be further investigated by fitting the data with a double-exponential function to obtain the time profiles of the flux variations. However, we note that this functional form is not unique, and the flare time profiles can also be reproduced by other functions (e.g., see \citet{2010ApJ...714L..73A}). As the main purpose of the current fit is only to estimate the rise and decay times, we fit the light curves with the following function \citep{abdoflares}: \begin{equation} F(t)= F_{\rm c}+F_0\times\left(e^{\frac{t-t_{\rm 0}}{t_{\rm r}}}+e^{\frac{t_{\rm 0}-t}{t_{\rm d}}}\right)^{-1} \label{func1} \end{equation} where $t_0$ is the time of the flare peak ($F_0$) and $t_r$ and $t_d$ are the rise and decay times, respectively. Each light curve was fitted with the non-linear optimization python package {\it lmfit}\footnote{https://lmfit.github.io/lmfit-py/} using a function that contains two inverses of sums of exponentials (corresponding to the number of flares).\\ The active (bright) periods identified in the adaptively binned light curve are analyzed with normal time sampling, and only the periods when the rise and decay times can be well constrained are considered. Accordingly, the periods from MJD 57734 to MJD 57740 and from MJD 57840 to MJD 57870 (Fig. \ref{flares}), divided into 6- and 12-hour bins respectively, are selected; the detection significance in each bin is $>5\sigma$ and the plot of Npred/$\sqrt{\rm Npred}$ vs Flux/$\Delta {\rm Flux}$ shows a linear correlation, so the likelihood fit converged for each time bin. The four identified peaks are sequentially numbered from 1 to 4 (F1--F4).\\ The fit is shown in Fig. 
\ref{flares} and the corresponding parameters are given in Table \ref{fit_par}. The average flux level ($F_{\rm c}$) was left free during the fitting, and the resulting values are also presented in Table \ref{fit_par}. Flares F1--F3 have rise times longer than their decay times; only F4 shows the opposite tendency. The asymmetry of the flares can be quantitatively estimated by calculating the parameter $\xi=(t_{\rm d}-t_{\rm r})/(t_{\rm d}+t_{\rm r})$, as defined in \citet{abdoflares}, which ranges from $-0.64$ to $-0.46$ for F1--F3 and is $0.62$ for F4, implying these are moderately asymmetric flares. The shortest e-folding times for rise and decay are $t_{\rm r}=0.17\pm0.06$ and $t_{\rm d}=0.21\pm 0.03$ day\footnote{In Table \ref{fit_par} e-folding times are given; the doubling or halving timescales can be computed as $t_{\rm r,d} \times \ln 2$.}, observed during F4 and F2, respectively. During F4, when the highest flux was observed, the flux increased within $4.08\pm1.44$ hours up to $(2.52\pm0.26)\times10^{-5}\:{\rm photon\:cm^{-2}\:s^{-1}}$ and dropped to its average level within $17.52\pm2.64$ hours. \begin{figure} \centering \includegraphics[width= 0.49 \textwidth]{f5.eps} \caption{The broadband SEDs of CTA 102 in the selected periods. The archival data are shown in light gray.}% \label{Seds} \end{figure} \section{Spectral evolution}\label{sec:3} A ``light curve/SED movie'' was made for a better understanding of the spectral evolution in the different bands. For each adaptively binned interval, using the estimated photon index and flux, the $\gamma$-ray{} spectra are calculated by dividing the (0.16-300) GeV interval into five logarithmically equal bins. These $\gamma$-ray{} spectra are combined with the UV/optical/X-ray (if available) data to make SEDs. 
Moving from bin to bin, the spectra in all bands can be compared and their evolution in time followed.\\ The movie is uploaded at \href{https://youtu.be/K9WWWSy6W8U}{\nolinkurl{youtu.be/K9WWWSy6W8U}}; it presents the time period from MJD 57620 to MJD 57950, coinciding with the most active $\gamma$-ray{} emitting state. Up to $\simeq$ MJD 57730, the emission from the source had a soft photon index ($\Gamma\geq2.0$) and a maximum flux around $\simeq10^{-10}\:{\rm erg\:cm^{-2}\:s^{-1}}$, which afterwards exceeded $10^{-9}\:{\rm erg\:cm^{-2}\:s^{-1}}$ with hard $\gamma$-ray{} photon indices. Starting from MJD 57765, the flux dropped to its original level and the $\gamma$-ray{} photon index softened. Around MJD 57800, when the flux increased again, the photon indices were $\Gamma\simeq2.0$, implying a flat spectrum of the source in the ($\nu-\nu F_{\rm \nu}$) representation. This spectral evolution once more confirms the harder-when-brighter trend. \begin{table}[t!] \scriptsize \begin{center} \caption{Parameters of spectral analysis}\label{tab:results} \begin{tabular}{c c c c } \hline \hline \multicolumn{4}{c}{{\it Fermi}-LAT{}} \\ \hline Period & Photon Index \tablenotemark{a} & Flux\tablenotemark{b} & $\sigma$\tablenotemark{c} \\ \hline low & $2.39\pm0.03$ & $1.13\pm0.04$ & 61.4\\ P1 & $2.01\pm0.09$ & $6.34\pm0.72$ & 25.0 \\ P2 & $1.93\pm0.08$ & $24.17\pm2.43$ & 33.4\\ P3 &$1.96\pm0.04$ & $24.74\pm1.31$ & 56.5 \\ P4 &$1.93\pm0.05$ & $21.72\pm1.40$ & 48.7 \\ P5 &$1.81\pm0.08$ & $25.14\pm2.65$ & 31.0 \\ \hline \multicolumn{4}{c}{Swift-XRT} \\ \hline Period & Photon Index \tablenotemark{d} & Unabsorbed Flux \tablenotemark{e} & $\chi^2_{\rm red}$ (d.o.f.) 
\\ \hline low & $1.44\pm0.05$ & $1.45\pm0.07$ & 1.10(39) \\ P1 & $1.41\pm0.05$ & $1.91\pm0.09$ & 0.77(52) \\ P2 & $1.23\pm0.05$ & $4.79\pm0.22$ & 0.97(53) \\ P3 & $1.25\pm0.04$ & $5.75\pm0.13$ & 1.26(84) \\ P4 & $1.32\pm0.04$ & $6.46\pm0.15$ & 1.20(75) \\ P5 & $1.56\pm0.06$ & $3.31\pm0.15$ & 0.91(31) \\ \hline \multicolumn{4}{c}{NuSTAR} \\ \hline P4\tablenotemark{f} & $1.32\pm0.005$ & $29.36\pm0.20$ & 0.97(1131) \\ \hline \multicolumn{4}{l}{% \begin{minipage}{0.45 \textwidth} Notes: \\ \tablenotetext{a}{$\gamma$-ray{} photon index from likelihood analysis.} \tablenotetext{b}{$\gamma$-ray{} flux in the $0.1-300$ GeV energy range in units of $10^{-7}\:{\rm photon\:cm^{-2}\:s^{-1}}$.} \tablenotetext{c}{Detection significance.} \tablenotetext{d}{X-ray photon index.} \tablenotetext{e}{0.3--10 keV X-ray flux corrected for the Galactic absorption in units of $\times$10$^{-11}$ erg cm$^{-2}$ s$^{-1}$.} \tablenotetext{f}{X-ray flux and photon index are measured in the energy range 3--79 keV.} \end{minipage}% }\\ \end{tabular} \end{center} \end{table} \subsection{Spectral analysis} The data from the following periods are considered for the spectral analyses: \begin{itemize} \item[] {\it low state (when the source was not flaring in the $\gamma$-ray{} band)}: when the X-ray and $\gamma$-ray{} fluxes were at their average levels. From the Swift observations, Obsids 33509078, 33509079, 33509085, 33509086 and 33509091 were merged during the analysis to increase the exposure and statistics, as they have similar X-ray fluxes and photon indices, while a few intervals when the source flux exceeded $9\times10^{-7} \:{\rm photon\:cm^{-2}\:s^{-1}}$ were excluded from the contemporaneously obtained $\gamma$-ray{} data. This period corresponds to the pre-flaring state, allowing us to investigate the source emission spectrum before the major flare. 
\item[] {\it Period 1 (P1):} MJD 57625.06-57625.39, when the source was in the bright $\gamma$-ray{} state coinciding with XRT observations (Obsid: 33509022 and 33509023, merged during the analyses). \item[] {\it Period 2 (P2):} MJD 57738.02-57738.08, a bright $\gamma$-ray{} period coinciding with the Swift Obsid: 33509106. \item[] {\it Period 3 (P3):} $\simeq3.11$ hour period centered on MJD 57752.52, corresponding to a bright $\gamma$-ray{} state coinciding with Swift (Obsid: 33509112 and 88026001, merged) and NuSTAR observations. \item[] {\it Period 4 (P4):} $\simeq8.06$ hour period centered on MJD 57759.62, corresponding to the period when the highest X-ray flux was observed (Obsid: 33509115). \item[] {\it Period 5 (P5):} $\simeq14.66$ min period centered on MJD 57862.15, corresponding to another peak of $\gamma$-ray{} emission and an available quasi-simultaneous Swift observation on the next day (Obsid: 33509121). \end{itemize} During the unbinned likelihood analyses of {\it Fermi}-LAT{} data, the spectrum of CTA 102 has been modeled using a power-law function with the normalization and index as free parameters. Then, the SEDs are calculated by fixing the power-law index of CTA 102 and running {\it gtlike} separately for smaller energy bins of equal width in log scale. For the spectral analyses the Swift data were binned to have at least 20 counts per bin and then fitted using the $\chi^2$ minimization technique. Then, in order to increase the significance of individual points in the SED calculations, a denser rebinning was applied, restricting the energy range to $>\;0.5$ keV. The results of the analyses (both X-ray and $\gamma$-ray) are given in Table \ref{tab:results} and the corresponding spectra are shown in Fig. \ref{Seds}.\\ The $\gamma$-ray{} emission spectrum in the low state extended up to $\sim10$ GeV with a soft photon index of $\Gamma=2.39\pm0.03$, while it hardened during the flares, e.g., to $\Gamma=1.81\pm0.08$ during P5. 
There is an indication of a deviation of the model from the data above several GeV during P3 (cyan data in Fig. \ref{Seds}). Alternative fits with functions of the form $dN/dE\sim E_{\gamma}^{-\alpha}\:\exp(-E_{\gamma}/E_{\rm cut})$ and $dN/dE\sim (E_{\gamma}/E_{\rm br})^{-(\alpha+\beta \log(E_{\gamma}/E_{\rm br}))}$ were applied to check whether the curvature in the spectrum is statistically significant. The first fit resulted in $\alpha=1.64\pm0.09$ and $E_{\rm cut}=3.84\pm1.21$ GeV and is preferred over the simple power-law model (comparing log-likelihood ratio tests) with a significance of $4.81\:\sigma$. The second fit, with $\alpha=1.58\pm0.10$ and $\beta=0.21\pm0.05$, is preferred with a significance of $5.2\:\sigma$. Breaks in the emission spectra can be expected from pair production in the BLR \citep{2010ApJ...717L.118P} or can be related to breaks in the emitting electron spectra \citep{2010ApJ...710.1271A}. The possible origin of the curvature in the GeV spectra should be investigated further, with more detailed spectral analyses of single as well as several flaring periods, which is beyond the scope of the current paper. \section{Broadband SEDs}\label{sec:4} Fig. \ref{Seds} shows the broadband SEDs of CTA 102 in its low and active periods together with the archival radio-X-ray data (light gray) from the ASI science data center. The WISE IR data, highlighted by red asterisks, are most probably due to torus emission, as recent studies show a high detection rate of almost all $\gamma$-ray{} blazars in the WISE all-sky survey \citep{2016ApJ...827...67M}. The comparison shows that during the considered periods the fluxes in the optical/X-ray and $\gamma$-ray{} bands exceed the averaged archival data; the increase is more significant in the optical/UV band. 
This increase in all bands is expected, as the selected periods correspond to the pre-flaring, flaring and post-flaring states, and the source shows different emission properties as compared with the averaged spectrum.\\ Comparing our selected periods: {\it i)} the low-energy component increased while its peak frequency remained relatively constant ($\leq\:10^{15}$ Hz), {\it ii)} the second component increased and shifted to HEs with a strong Compton-peak dominance, and {\it iii)} the UV/optical, X-ray and $\gamma$-ray{} fluxes increased contemporaneously in P2, P3 and P4, while the emission in the UV/optical and X-ray bands was relatively constant in P1 and P5.\\ Blazar flares can be explained by changes in the magnetic field, in the emitting-region size and its distance from the black hole, in the bulk Lorentz factor, in the particle energy distribution, etc. \citep{paggi}. For example, both emission components will be shifted to HEs when the particles are effectively re-accelerated. Only the HE component will increase when the contribution of the external photon fields starts to dominate, for example, due to changes in the location of the emitting region \citep{paggi}. However, these are not the only models for explaining the flaring events. Another possibility is a geometrical interpretation of the origin of the flares, in which different jet regions may have different viewing angles. Such a model, with a twisted inhomogeneous jet, was already applied to explain the emission from the CTA 102 jet in the optical, infrared and radio bands \citep{2017Natur.552..374R}. The photons of different energies come from jet regions which have different orientations (hence, different Doppler boosting factors) because of the curvature of the jet. \\ The SEDs obtained in the low state, in P1 and P5 (which show different features), and in the bright P2 have been modeled. 
In order to account for the Compton dominance, we assume the Doppler factor $\delta$ of the emitting region (which equals the bulk Lorentz factor for small viewing angles, $\delta\simeq \Gamma$) increased from 10 in the low state to 20 in the active states (typical values estimated for FSRQs \citep{ghistav}). When the SEDs in the low state and in P2 are modeled, the emission from a compact region inside and outside the BLR is discussed. Instead, when modeling the periods lacking correlation between the $\gamma$-ray and UV/optical/X-ray bands, we assume the emission from radio to X-rays is produced in an extended, slowly moving region unrelated to the flaring component, while the HE $\gamma$-rays{} come from a compact, fast-moving region outside the BLR \citep{tavecchio11}. \subsection{Modeling the SEDs} The SEDs are fitted within a leptonic scenario that includes synchrotron/Synchrotron Self-Compton (SSC) \citep{ghisellini, bloom, maraschi} and External Inverse-Compton (EIC) \citep{sikora} models. A spherical emission region (``blob'') with radius $R$ and magnetic field $B$ carries relativistic electrons with a $N^{\prime}_{\rm e}(E^{\prime}_{\rm e})= N^{\prime}_{0}\:\left( E^{\prime}_{e}/m_{e}\:c^2\right)^{-\alpha}\:\exp[-E^{\prime}_{\rm e}/E^{\prime}_{\rm cut}]$ distribution for $E^{\prime}_{\rm e}\geq E^{\prime}_{\rm min}$, where $E^{\prime}_{\rm min}$ is the minimum electron energy. The size of the emitting region can be inferred from the observed e-folding timescale of $4.08$ hours via the relation $R\leq\delta\:c\:t/(1+z)\approx\delta\times2.16\times10^{14}$ cm. For the extended emission component, a region with a ten times larger radius ($\simeq4\times10^{16}$ cm) will be used. \\ The low-energy component is modeled by synchrotron emission, while for the Inverse Compton (IC) scattering the photons from synchrotron emission, from the BLR and from the dusty torus are taken into account. 
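The size constraint above can be checked numerically; a minimal sketch, assuming $z\simeq1.037$ for CTA 102 (consistent with the luminosity distance adopted later in the text):

```python
# Size of the emitting region from the observed e-folding timescale,
# R <= delta * c * t / (1 + z); values quoted in the text.
C_CM_S = 2.998e10       # speed of light [cm/s]
Z = 1.037               # redshift of CTA 102 (assumed here)
T_VAR = 4.08 * 3600.0   # observed e-folding timescale [s]

def max_blob_radius(delta):
    """Upper limit on the blob radius in cm for a given Doppler factor."""
    return delta * C_CM_S * T_VAR / (1.0 + Z)

# For delta = 1 this reproduces the coefficient ~2.16e14 cm quoted in the text;
# for the compact blob with delta = 20, R <= ~4.3e15 cm.
print(f"{max_blob_radius(20):.2e} cm")
```

The extended component is then simply assigned a radius ten times larger, as stated above.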
The energy densities of the BLR ($u_{\rm BLR}$) and of the dusty torus ($u_{\rm dust}$) are calculated as functions of the distance $r$ from the black hole by the formulae (e.g., \citet{sikora09}) \begin{equation} u_{\rm BLR} (r)=\frac{ L_{\rm BLR}}{4\pi r^2_{\rm BLR}c[1+(r/r_{\rm BLR})^3]},\ \label{u1} \end{equation} \begin{equation} u_{\rm dust} (r)=\frac{L_{\rm dust}}{4\pi r^2_{\rm dust}c[1+(r/r_{\rm dust})^4]}.\ \label{u2} \end{equation} The estimated size and luminosity of the BLR are $r_{\rm BLR}=6.73\times10^{17}$ cm and $L_{\rm BLR}=4.14\times10^{45}\:{\rm erg\:s^{-1}}$, respectively \citep{pian}. The disk luminosity is $L_{\rm disk}=10\times L_{\rm BLR}\simeq 4.14\times10^{46}{\rm erg\:s^{-1}}$ (assuming 10\% of it is reprocessed into BLR radiation); then the size and luminosity of the torus are $R_{\rm dust}=10^{18}\:(L_{\rm disc}/10^{45})^{0.5}= 6.43\times10^{18}$ cm \citep{nenkova} and $L_{\rm dust}=\eta\:L_{\rm disc}=1.24\times10^{46}{\rm erg\:s^{-1}}$ ($\eta=0.6$; \citealt{ghisellini2009}), slightly larger than the value from the tentative detection of dust emission in CTA 102 \citep{malmrose}. Moreover, reproducing the near-IR data presented in Fig. \ref{Seds} with a blackbody component requires a luminosity of a few times $10^{46}{\rm erg\:s^{-1}}$, in agreement with the value used. We adopt an effective temperature of $T_{\rm BLR}=10^4$ K for the BLR radiation and $T=10^{3}$ K for the dusty torus.\\ The model free parameters and their uncertainties are estimated using a Markov Chain Monte Carlo (MCMC) method. We have modified the {\it naima} package \citep{zabalza}, and the spectral model parameters have been derived through MCMC sampling of their likelihood distributions. For the model free parameters the following ranges are considered: $1.5\leq\alpha\leq10$ and $0.511\:{\rm MeV}\leq E^\prime_{\rm cut,\:min}\leq10\:{\rm TeV}$, while $N_0$ and $B$ are defined as positive parameters. 
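The two energy-density profiles above can be sketched numerically with the values quoted in the text (constants and function names below are ours):

```python
import math

C = 2.998e10  # speed of light [cm/s]

# External photon-field energy densities vs distance r from the black hole
# (after Sikora et al. 2009); sizes/luminosities as quoted in the text.
L_BLR, R_BLR = 4.14e45, 6.73e17     # erg/s, cm
L_DUST, R_DUST = 1.24e46, 6.43e18   # erg/s, cm

def u_blr(r):
    """BLR energy density [erg/cm^3]; flat inside r_BLR, falls as r^-3 outside."""
    return L_BLR / (4.0 * math.pi * R_BLR**2 * C * (1.0 + (r / R_BLR) ** 3))

def u_dust(r):
    """Torus energy density [erg/cm^3]; flat inside R_dust, falls as r^-4 outside."""
    return L_DUST / (4.0 * math.pi * R_DUST**2 * C * (1.0 + (r / R_DUST) ** 4))
```

Well inside the BLR this gives $u_{\rm BLR}\approx L_{\rm BLR}/(4\pi r_{\rm BLR}^2 c)\sim 2\times10^{-2}\:{\rm erg\:cm^{-3}}$, while far outside the BLR the torus field dominates, which is why the torus photons become essential for a blob located beyond $r_{\rm BLR}$.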
\section{Results and Discussion}\label{sec:5} The broadband emission from CTA 102 during its bright period in 2016-2018 was investigated. In the $\gamma$-ray{} band, during several periods the flux exceeded $10^{-5}\:{\rm photon\:cm^{-2}\:s^{-1}}$, with the maximum of $(3.55\pm0.55)\times10^{-5}\:{\rm photon\:cm^{-2}\:s^{-1}}$ (above 100 MeV) observed on MJD 57738.47, which corresponds to an apparent isotropic $\gamma$-ray{} luminosity of $L_{\gamma}=3.25\times10^{50}\:{\rm erg\:s^{-1}}$ (for a distance of $d_{\rm L}=6.91\:{\rm Gpc}$). This is one of the highest $\gamma$-ray{} luminosities observed from blazars so far (e.g., see \citet{nalewajko}). In the proper frame of the jet, the power emitted in the $\gamma$-ray{} band is $\sim L_{\gamma}/2\delta^2=4.06\times10^{47}\:{\rm erg\:s^{-1}}$ for $\delta=20$, which is higher than $L_{\rm disk}$, in agreement with the results of \citet{ghisellini14}. During this bright period, on a 6-h timescale, the apparent luminosity was $\simeq2.0\times10^{50}\:{\rm erg\:s^{-1}}$ with a rate of change $L/\Delta t\simeq1.89\times10^{46}\:{\rm erg\:s^{-2}}$ (using $\Delta t=6\:{\rm h}/(1+z)\simeq1.06\times10^{4}$ s), slightly higher than that observed from 3C 454.3 \citep{2011ApJ...733L..26A} and well above the Elliot-Shapiro relation \citep{1974ApJ...192L...3E}.\\ The photon index varies as well: the hardest, $1.61\pm 0.10$, was observed on MJD 57752.45, which is unusual for FSRQs (which have an average photon index of $2.4$; \citealt{ackermancatalog}), while on MJD 57528.63 the index was as soft as $3.08\pm0.23$. The hardest and softest photon indices were observed during the active and low states, respectively, confirming the harder-when-brighter trend. The HE photons ($>10$ GeV) were mostly emitted during the active period of MJD 57700-57800, with the highest-energy photon at 97.93 GeV. 
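The luminosity figures quoted above follow from simple geometry; a sketch, where the energy flux is obtained by inverting the quoted $L_{\gamma}$ (i.e., it is our back-of-the-envelope input, not a separately measured value):

```python
import math

MPC_CM = 3.086e24  # cm per Mpc

def isotropic_luminosity(energy_flux, d_l_gpc):
    """Apparent isotropic luminosity L = 4 pi d_L^2 F [erg/s]."""
    d_cm = d_l_gpc * 1e3 * MPC_CM
    return 4.0 * math.pi * d_cm**2 * energy_flux

def jet_frame_power(l_iso, delta):
    """Power in the jet proper frame, ~ L / (2 delta^2) [erg/s]."""
    return l_iso / (2.0 * delta**2)

# An energy flux of ~5.7e-8 erg cm^-2 s^-1 at d_L = 6.91 Gpc reproduces
# the quoted L_gamma ~ 3.25e50 erg/s; for delta = 20 the jet-frame power
# is ~4e47 erg/s, above the disk luminosity of ~4.1e46 erg/s.
l_gamma = isotropic_luminosity(5.7e-8, 6.91)
print(f"{jet_frame_power(l_gamma, 20):.2e} erg/s")
```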
The fractional variability parameter $F_{\rm var}$ shows that the variability is stronger in the $\gamma$-ray{} band ($F_{\rm var}>0.9$), increasing at higher energies.\\ The observed flares are asymmetric, which might be due to different relations between the particle acceleration and emission timescales. For example, a flare decays much faster (F1--F3) when the accelerated particles start to escape from the emitting region or the cooling time gradually increases, whereas a flare will show a fast rise and a slow decay (F4) when the rapidly injected energetic particles lose energy or escape from the region over a longer time. The observed shortest e-folding time is $\simeq4.1$ hours, implying that the emitting region is compact. However, during the brightest periods of $\sim$MJD 57738.0 and $\sim$MJD 57752.0, several minutes of observations were already enough to reach a $> 14.3\sigma$ detection significance, implying that shorter-timescale variability cannot be excluded (see \citet{shukla} for a detailed analysis of shorter periods).\\ Contemporaneous increases in the UV/optical and X-ray bands were also observed during some bright $\gamma$-ray{} periods. In the X-ray band (0.3-10 keV), the maximum flux is $(6.71\pm 0.21)\times10^{-11}\:{\rm erg\:cm^{-2} s^{-1}}$ and the photon index hardens in the bright periods. Comparing the Swift UVOT data obtained in different periods (see Fig. \ref{Seds} and the SED/light curve movie), one can see a clear indication of a flux increase in the UV/optical bands as well. \begin{figure*} \centering \includegraphics[width= 0.49 \textwidth]{f6A.eps} \includegraphics[width= 0.49 \textwidth]{f6B.eps} \caption{Modeling of the broadband SEDs of CTA 102 during the low state and P2 (left panel, gray and orange, respectively) and P1 and P5 (right panel, blue and red, respectively). The model parameters are given in Table \ref{parres}. 
For the models applied see the text.}% \label{S} \end{figure*} \subsection{The origin of the emission} \begin{deluxetable*}{l|cr|c|cc|cc} \tabletypesize{\footnotesize} \tablecaption{Parameters best describing the multiwavelength emission in different periods. \label{parres}} \tablehead{\colhead{} & \multicolumn{2}{|c|} {low }&\multicolumn{1}{c|}{P1} & \multicolumn{2}{c|}{P2} & \multicolumn{2}{c}{P5} \\\hline & SSC+BLR & BLR & compact & SSC+BLR & Torus & compact & Torus } \startdata $\delta$ & 10 & 10 & 20 & 20 & 20 & 20 & 20\\ $\alpha$ & $2.51\pm0.11$ & $2.19\pm0.02$ & $2.12\pm0.54$ & $2.79\pm0.44$ & $1.91\pm0.03$ & $1.78\pm0.52$ & $1.95\pm0.03$ \\ $E_{\rm min}$[MeV] & $68.25\pm5.27$ & $0.54\pm0.03$ & $155.59\pm109.18$ & $227.25\pm26.43$ & $1.38\pm0.15$ & $121.33\pm67.33$ & $0.63\pm0.09$ \\ $E_{\rm c}$[GeV] & $0.67\pm0.1$ & $0.49\pm0.04$ & $1.42\pm0.81$ & $1.32\pm0.43$ & $0.98\pm0.05$ & $2.36\pm1.54$ & $3.85\pm1.57$ \\ $E_{\rm max}$ [TeV] & $0.57\pm0.31$ & $0.49\pm 0.31$ & $0.48 \pm 0.34$ & $0.50\pm0.30$ & $0.41\pm0.18$ & $0.58\pm0.25 $ & $0.54\pm 0.31$\\ $B$[G] & $5.40\pm0.13$ & $5.37\pm0.14$ & $0.23\pm0.29$ & $6.10\pm0.50$ & $1.01\pm0.003$ & $0.004\pm0.042$ & $0.015\pm0.049$ \\ $L_{\rm B}[{\rm erg\:s^{-1}}]$ & $1.75\times10^{46}$ & $1.73\times10^{46}$ & $1.47\times10^{42}$ & $1.04\times10^{45}$ & $2.86\times10^{43}$ & $3.86\times10^{38}$ & $6.44\times10^{39}$\\ $L_{\rm e}[{\rm erg\:s^{-1}}]$ & $4.66\times10^{44}$ & $2.90\times10^{45}$ & $1.73\times10^{46}$ & $2.84\times10^{45}$ & $2.74\times10^{47}$ & $7.33\times10^{46}$ & $1.97\times10^{47}$ \enddata \end{deluxetable*} Initially, we modeled the SED observed in the low state (Fig. \ref{S}; left panel). The radio data are treated as upper limits during the modeling, as the emission in this band is produced by low-energy electrons, which likely come from much more extended regions. 
We note that the IR flux predicted by the models exceeds the archival IR data by a factor of $\sim200$ in the flaring (P2) and $28.7$ in the selected low state (see Fig. \ref{S}; left panel), implying that the non-thermal synchrotron emission from the jet dominates over the other emission components. When the IC scattering of both synchrotron and BLR photons is considered, the X-ray data allow us to measure $E_{\rm min}^{\prime}=68.25\pm5.27$ MeV and $\alpha=2.51\pm0.11$. In order to explain the observed UV/optical data, a cut-off at $E_{\rm c}^{\prime}=0.67\pm0.1$ GeV is required, which makes the SSC component decay in the sub-MeV band, so the HE data are described only by the IC of BLR photons. Alternatively, both the X-ray and $\gamma$-ray{} data can be described by IC scattering of BLR photons (dot-dashed gray line in Fig. \ref{S}), but the low-energy tail of the IC spectrum can reproduce the X-ray data only if $\gamma_{\rm min}=E_{\rm e}/m_{e}c^2$ is close to unity \citep{cellotti}. In this case, however, the synchrotron emission of these low-energy electrons with $E_{\rm min}=0.54\pm0.03$ MeV would exceed the observed radio flux, making this scenario unlikely.\\ \paragraph{P2} Fig. \ref{S} (left panel) shows the modeling of the SED observed in P2, considering the synchrotron and BLR photons (SSC+BLR, solid line), then only BLR (dashed line) and only torus (dot-dashed line) photons. When the emitting region is within the BLR (SSC+BLR), the hard X-ray spectrum ($1.23\pm0.05$) can be explained only when $E_{\rm min}^{\prime}=227.25\pm26.43$ MeV and $\alpha=2.79\pm0.44$, while $E_{\rm c}^{\prime}=1.32\pm0.43$ GeV and $B=6.10\pm0.50$ G are estimated from the low-energy component. Also, the external photon fields can dominate the IC scattering, as their density increases by a factor of $\Gamma^2$ in the jet frame. For example, the required parameters (especially $B$) can be somewhat relaxed when only the IC of torus photons is considered (see Table \ref{parres}). 
In the case of only BLR photons, the low-energy tail of the IC spectrum declines at $\sim\gamma^{2}\:\epsilon_{\rm BLR}\simeq0.52$ keV (dashed line in Fig. \ref{S}, left panel), contradicting the Swift XRT data (unless a lower $\delta$ is used). This modeling shows that during the bright $\gamma$-ray{} periods the emission can also be produced outside the BLR. At low energies, the model flux overpredicts the noncontemporaneous radio data, but when the synchrotron self-absorption is taken into account, which dominates below frequencies of $\sim10^{13}$ Hz (calculated following \citet{1979rpa..book.....R}), the synchrotron flux falls below the radio data. We note that simultaneous observations at low energies, which are missing in this case, are crucial for better constraining the model free parameters and for deriving limits/constraints on the source emission properties. As the models presented in Fig. \ref{S} (left panel) predict different spectra and fluxes in the GHz and mid-IR ranges, observations in these bands can also be used to distinguish between the two models. \paragraph{P1 and P5} Fig. \ref{S} (right panel) shows the results of two-zone SED modeling. For the emission from the extended blob we fixed all the parameters, except $B$ and $N_{0}$, to the values obtained from the fitting of the SED in the low state, since the fluxes and photon indices in the UV/optical and X-ray bands did not change significantly (Fig. \ref{Seds}). All the parameters of the compact blob are left free, but it is required that its synchrotron emission does not contribute at lower energies.\\ As compared with the low state, the magnetic field in the extended blobs is estimated to be lower, $5.05\pm0.08$ G and $3.43\pm0.05$ G for P1 and P5, respectively, implying the modest X-ray flux changes are related to the increase in electron density. 
The $\gamma$-ray{} emission is produced in the interaction of fresh electrons (with a hard power-law index $\leq2.1$) with the torus photons in the compact, fast-moving and particle-dominated blob ($U_{\rm e}/U_{\rm B}\geq10^{4}$; Fig. \ref{S}, right panel). The cut-off energies (defined by the last point in the {\it Fermi}-LAT{} data) should be considered lower limits, since there is no indication of a break in the $\gamma$-ray{} spectra. In Fig. \ref{S} (right panel) the red dot-dashed line shows an alternative modeling, in which both the X-ray and $\gamma$-ray{} data are modeled by the IC scattering of torus photons. Within such a scenario, the flare is mainly due to the injection/cooling of $>10$ GeV electrons, which affect only the HE spectrum and make a small contribution to the X-ray band (e.g., the density at lower energies increases due to the cooling of HE electrons). Again, the low-energy component must necessarily be produced in a different blob, otherwise its relatively constant peak frequency cannot be explained. \paragraph{Jet energetics} The total power of the jet, $L_{\rm jet}=L_{\rm B}+L_{\rm e}$, where $L_{B}=\pi c R_b^2 \Gamma^2 U_{B}$ and $L_{e}=\pi c R_b^2 \Gamma^2 U_{e}$ \citep[e.g.,][]{cellot}, is of the order of $L_{\rm jet}\simeq2\times10^{46}\:{\rm erg \:s^{-1}}$ in the low state and can be as large as $\simeq3\times10^{47}\:{\rm erg \:s^{-1}}$ during the flares.\\ When the low- and high-energy components increase contemporaneously, the required maximum energy of the electrons ($E_{\rm c}$) reaches only a few GeV, constrained by the low-energy data (the energy of synchrotron photons is proportional to $\sim \delta\:B\:E_{\rm e}^2$). Therefore, during these intense $\gamma$-ray{} flares, either the acceleration mechanisms are not effective enough or the electrons cool faster and do not reach HEs. 
On the other hand, when the $\gamma$-ray{} and UV/optical/X-ray fluxes are uncorrelated, the $\gamma$-rays{} are perhaps produced in a different part of the jet that contains fresh electrons which can emit up to the HE and VHE bands. \section{Conclusions}\label{sec:6} We report the results of observations of CTA 102 in the UV/optical, X-ray and $\gamma$-ray{} bands from January 2016 to January 2018, when the source was in bright and active states. Generally, the flares are roughly correlated in all these bands, but the variability is more prominent in the $\gamma$-ray{} band, with several bright flares during which the $\gamma$-ray{} flux is substantially increased and the photon index is hardened, showing a harder-when-brighter trend. The hardest measured photon index, $\Gamma=1.61\pm 0.10$, differs significantly from the average $\gamma$-ray{} photon index of CTA 102 and is unusual for FSRQs. The highest $\gamma$-ray{} flux measured by {\it Fermi}-LAT{} is $(3.55\pm0.55)\times10^{-5}\:{\rm photon\:cm^{-2}\:s^{-1}}$ (above 100 MeV), observed on MJD 57738.47, corresponding to an extremely high isotropic $\gamma$-ray{} luminosity of $L_{\gamma}=3.25\times10^{50}\:{\rm erg\:s^{-1}}$. \\ We discussed the origin of the multiwavelength emission from CTA 102 in the framework of one-zone and multi-zone synchrotron, SSC and EIC scenarios. We assumed a compact blob ($R\leq \delta\times2.16\times10^{14}$ cm, inferred from the $4.08$-hour $\gamma$-ray{} flux variation) inside and outside the BLR. In a single emitting region, the inverse-Compton up-scattering of both synchrotron and BLR photons can explain the data observed in the low state, whereas the contribution of torus photons is essential in the flaring periods. 
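The emitting-region size quoted above is the standard light-crossing-time argument, $R \leq \delta\, c\, \Delta t/(1+z)$, with $\Delta t = 4.08$ hours and the redshift of CTA 102, $z \simeq 1.037$. A quick numerical check reproduces the coefficient $2.16\times10^{14}$ cm:

```python
C_CGS = 2.998e10    # speed of light [cm/s]
dt = 4.08 * 3600.0  # observed variability time scale [s]
z = 1.037           # redshift of CTA 102

# Size constraint per unit Doppler factor: R / delta <= c * dt / (1 + z)
R_over_delta = C_CGS * dt / (1.0 + z)
print(f"R <= delta * {R_over_delta:.2e} cm")  # ~2.16e14 cm
```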
When, in the flaring periods, the fluxes in the UV/optical, X-ray and $\gamma$-ray{} bands are unrelated, two-zone models (with an extended blob inside and a compact fast-moving one outside the BLR) can explain the observed data well under reasonable assumptions on the required parameters. These periods appear to be more favorable for the HE emission from CTA 102, as the emitting electrons have higher cut-off energies and harder power-law indices. Most likely, the emission in these periods is produced in regions outside the BLR that contain fresh electrons which cool predominantly through inverse-Compton scattering, making the variability more evident in the $\gamma$-ray{} band. \section*{acknowledgements} We thank the anonymous referee for constructive comments that improved the paper.
\section{\label{sec1}Introduction} Inflation, a quasi-exponential expansion of the very early Universe, is a required supplement to the Big Bang Universe, as it can successfully solve the problems of horizon, flatness and monopole \cite{Guth1981,Linde1982,Albrecht1982}. Inflation also predicts the generation of a nearly flat spectrum of primordial perturbations, fitting the observations of the cosmic microwave background (CMB) well. To date, there are two kinds of inflationary theories: standard inflation and warm inflation. Warm inflation, first proposed by A. Berera \cite{BereraFang,Berera1995}, has been developed considerably over the past two decades. Warm inflation inherits the advantages of successfully solving the horizon and flatness problems and naturally producing the seeds that give rise to the large scale structure of the Universe \cite{BereraFang,Lisa2004,Berera2000}. But the origins of the perturbations in the two kinds of inflation are different. Cosmological perturbations naturally arise from vacuum fluctuations of quantum fields in standard inflation \cite{LiddleLyth,Bassett2006}, but from thermal fluctuations in warm inflation \cite{BereraFang,Lisa2004,Berera2000}. In addition, warm inflation does not need a separate reheating regime, and it can go smoothly to the radiation-dominated phase. Warm inflation can cure the ``$\eta$-problem" \cite{etaproblem,etaproblem1} and the overlarge amplitude of the inflaton suffered in standard inflation \cite{Berera2006,BereraIanRamos}. The strict slow roll conditions of standard inflation can be relaxed considerably in warm inflation \cite{Ian2008,Campo2010,ZhangZhu,Zhang2014}. Moreover, warm inflation broadens the scope of inflation, and thus some models that were ruled out in standard inflation, such as the quartic chaotic potential model, can again be in very good agreement with the Planck results in the warm inflationary scenario \cite{Sam2014}. 
Warm inflation is usually taken to be driven by a canonical inflaton field whose energy is dominated by the potential, with the inflaton slowly rolling down that potential. An additional thermal damping term makes the slow roll in warm inflation easier to satisfy \cite{Ian2008,Campo2010,ZhangZhu,Zhang2014}. The original picture of warm inflation was phenomenological \cite{BereraFang,Berera1995,Linde1999}, and A. Berera and collaborators sought to give the microphysical mechanism of warm inflation from first principles \cite{Berera1999,Berera1998}. Many subsequent works \cite{Berera1999,Berera1998,MossXiong,BereraRamos,Hall2005} concentrated on the thermal damping effect and gave the mechanism that makes warm inflation realizable from the point of view of field theory and particle physics \cite{Berera1999,Berera1998,MossXiong,BereraRamos,Hall2005}. Based on this research, the dissipative coefficients in different cases were obtained \cite{Zhangyi2009,ZhangZhu,BereraIanRamos,MossXiong,Gil2013,Linde1999,BereraRamos,Hall2004,Berera1998,Kephart,Ian2008, Campo2010}. The study of perturbations in warm inflation has also developed considerably over the past twenty years \cite{BereraFang,Berera1995,Berera2000,BereraIanRamos,Lisa2004,Taylor2000}. The perturbation equation of the inflaton in warm inflation is a second-order Langevin equation including a thermal stochastic force. Early papers on perturbations in warm inflation obtained the analytic form of the power spectrum \cite{Lisa2004,Berera2000,Taylor2000,Chris2009}, which is enhanced by the thermal effect compared to standard inflation. Since the scalar power spectrum is fixed by observations, the amplitude of primordial gravitational waves is thus decreased. With more in-depth study, it was found that an analytic power spectrum can be obtained only in the temperature-independent case (when the dissipative coefficient does not depend on temperature) \cite{Lisa2004,Chris2009}. 
In the temperature-dependent case, it is hard to obtain an analytic power spectrum, though an approximate analytic power spectrum based on numerical methods can be obtained \cite{Chris2009}. When the dissipative coefficient has a positive power-law dependence on temperature, the power spectrum presents a ``growing mode" \cite{Chris2009}. The issue of non-Gaussianity has also been analysed in different cases, such as strong or weak dissipative warm inflation, and temperature-independent or temperature-dependent warm inflation \cite{Gupta2006,Gupta2002}. Previous research on warm inflation almost always used a canonical scalar field as the inflaton. Tachyon and Dirac-Born-Infeld (DBI) warm inflation were specially researched in \cite{Herrera2006,Cai2011}, respectively. We previously proposed noncanonical warm inflation in which the inflaton has an uncoupled Lagrangian density \cite{Zhang2014}. We found that noncanonical warm inflation has some novel features, such as more relaxed slow roll conditions and an enhanced scalar power spectrum compared to canonical warm inflation. It is necessary and interesting to research noncanonical warm inflation, and noncanonical warm inflation broadens the scope of inflationary theory. The noncanonical warm inflation we researched previously in \cite{Zhang2014} is, to some degree, a relatively simple case. Building on that opening work, we here extend warm inflation to the more general case. In this paper, we concentrate on noncanonical warm inflation in which the Lagrangian density of the inflaton has a coupled form of kinetic and potential terms. The coupled case is more general but complicated. The inflaton in the coupled noncanonical warm inflationary case may not have the ``right" or normal mass dimension that it has in canonical inflation. The equation of motion of the inflaton in coupled noncanonical warm inflation is quite different from that in uncoupled noncanonical warm inflation, let alone canonical warm inflation. 
In this paper, we establish the framework of coupled noncanonical warm inflation, seek a special field redefinition that decouples the equation of motion of the inflaton, and give the field redefinition relation between the special uncoupled field and the general noncanonical field. We develop the equations of motion and slow roll approximations of coupled noncanonical warm inflation. Finally, we calculate the cosmological perturbations generated by the new kind of warm inflation. The paper is organized as follows: In Sec. \ref{sec2}, we introduce the new noncanonical warm inflationary scenario, give the basic equations of the new picture, and obtain the relation between the two different field representations. Then in Sec. \ref{sec3}, two concrete examples are given to show how to transform an inflaton whose Lagrangian density has a coupled form of kinetic and potential terms into an inflaton whose Lagrangian density has an uncoupled form. The perturbation calculations in the new scenario are performed in Sec. \ref{sec4}. Finally, we draw the conclusions and discussions in Sec. \ref{sec5}. \section{\label{sec2}noncanonical warm inflation and field redefinition} Warm inflation occurs when there is a significant amount of radiation production during the inflationary epoch; thus the Universe is hot with a non-zero temperature $T$. In warm inflation, the Universe is a multi-component system whose total matter action is given by \begin{equation}\label{action} S=\int d^4x \sqrt{-g} \left[\mathcal{L}(X',\varphi)+\mathcal{L}_R+\mathcal{L}_{int}\right], \end{equation} where $X'=\frac12g^{\mu\nu}\partial_{\mu}\varphi\partial_{\nu}\varphi$. The Lagrangian density of the noncanonical field is $\mathcal{L}_{non-can}= \mathcal{L}(X',\varphi)$, which is potential dominated as usual, $\mathcal{L}_R$ is the Lagrangian density of the radiation fields, and $\mathcal{L}_{int}$ denotes the interactions between the scalar fields in warm inflation. 
Usually a proper noncanonical Lagrangian density should satisfy the conditions $\mathcal{L}_{X'}\geq0$ and $\mathcal{L}_{X'X'}\geq0$ \cite{Franche2010,Bean2008}, where a subscript $X'$ denotes a derivative. The equation of motion of the inflaton can be obtained by varying the action: \begin{equation}\label{vary} \frac{\partial(\mathcal{L}(X',\varphi)+\mathcal{L}_{int})}{\partial\varphi}-\left(\frac{1}{\sqrt{-g}}\right) \partial_{\mu}\left(\sqrt{-g}\frac{\partial\mathcal{L}(X',\varphi)}{\partial(\partial_{\mu}\varphi)}\right)=0. \end{equation} In the flat Friedmann-Robertson-Walker (FRW) Universe, the field is homogeneous, i.e. $\varphi=\varphi(t)$, and we can get the equation of motion of the scalar field: \begin{eqnarray}\label{EOM1} \left[\left(\frac{\partial\mathcal{L}(X',\varphi)}{\partial X'}\right)+2X'\left(\frac{\partial^2\mathcal{L}(X',\varphi)}{\partial X'^2}\right)\right]\ddot\varphi\nonumber\\+\left[3H\left(\frac{\partial\mathcal{L}(X',\varphi)}{\partial X'}\right)+\dot\varphi\left(\frac{\partial^2\mathcal{L}(X',\varphi)}{\partial X'\partial\varphi}\right)\right]\dot\varphi\nonumber\\-\frac{\partial(\mathcal{L}(X',\varphi) +\mathcal{L}_{int})}{\partial\varphi}=0, \end{eqnarray} where $X'=\frac12\dot\varphi^2$, and $H$ is the Hubble parameter. The thermal damping effect in warm inflation, which can be described by a dissipative coefficient $\Gamma$, comes from the interaction term $\mathcal{L}_{int}$ between the inflaton and other sub-dominated scalar fields. The energy-momentum tensor of the inflaton is $T^{\mu\nu}=\frac{\partial\mathcal{L}}{\partial X'}\partial^{\mu}\varphi\partial^{\nu}\varphi-g^{\mu\nu}\mathcal{L}$, and from it we can get the energy density and pressure of the inflaton: $\rho(\varphi,X')=2X'\frac{\partial\mathcal{L}}{\partial X'}-\mathcal{L}$, $p(\varphi,X')=\mathcal{L}$. 
An important characteristic parameter of a noncanonical field, describing the traveling speed of scalar perturbations, is the sound speed: \begin{equation}\label{soundspeed} c_s^2=\frac{\partial p/\partial X'}{\partial\rho/\partial X'}=\left(1+2X'\frac{\mathcal{L}_{X'X'}}{\mathcal{L}_{X'}}\right)^{-1}, \end{equation} where the subscript $X'$ again denotes a derivative. The Lagrangian density term $\mathcal{L}_{int}$ in Eq. (\ref{EOM1}) is a function of the inflaton and the other subdominated fields themselves, and does not contain derivatives of the fields. The most successful explanation of the interaction between the inflaton and other fields is the supersymmetric two-stage mechanism \cite{MossXiong,Kephart}. After detailed calculations \cite{Berera1999,MossXiong}, the last term in Eq. (\ref{EOM1}), i.e. $\frac{\partial(\mathcal{L}(X',\varphi)+\mathcal{L}_{int})}{\partial\varphi}$, can be divided into two parts. One part, expressed as $\tilde{\Gamma}\dot\varphi$, describes the dissipation of the inflaton into all other fields \cite{BereraFang,Berera1995,Berera2000,Berera2006,BereraIanRamos}; the other part is the sum of those terms in $\frac{\partial\mathcal{L}_{int}}{\partial\varphi}$ that do not contain $\dot\varphi$, together with the term $\frac{\partial\mathcal{L}(X',\varphi)}{\partial\varphi}$ in Eq. (\ref{EOM1}). This second part is subsumed into the effective potential $V_{eff}$ of warm inflation, which is the potential including thermal corrections and is a function of both the inflaton and the temperature \cite{BereraFang,Berera1999,MossXiong}. The temperature $T$ appearing in the effective potential is the temperature of the radiation bath, which does not fall to zero thanks to the dissipation of the inflaton into the bath \cite{BereraFang,Ian2008,MossXiong}. 
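The sound-speed formula (\ref{soundspeed}) can be checked against its defining ratio $c_s^2=(\partial p/\partial X')/(\partial\rho/\partial X')$ for any concrete Lagrangian density. The sympy sketch below does so for an illustrative choice $\mathcal{L}=X'+\alpha X'^2-V(\varphi)$ (a sample model chosen here for the check, not one used in this paper):

```python
import sympy as sp

X, alpha, V = sp.symbols("X alpha V", positive=True)

# Illustrative noncanonical Lagrangian density L(X', phi) = X' + alpha*X'^2 - V
L = X + alpha * X**2 - V

# p = L, rho = 2 X L_X - L (as in the text)
p = L
rho = 2 * X * sp.diff(L, X) - L

cs2_from_def = sp.diff(p, X) / sp.diff(rho, X)
cs2_formula = 1 / (1 + 2 * X * sp.diff(L, X, 2) / sp.diff(L, X))

# Both expressions agree: (1 + 2 alpha X) / (1 + 6 alpha X)
assert sp.simplify(cs2_from_def - cs2_formula) == 0
print(sp.simplify(cs2_from_def))
```

For $\alpha\to0$ the canonical result $c_s^2=1$ is recovered.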
With these conventional warm-inflation calculations and using the definition of the sound speed, the equation of motion of the noncanonical inflaton can be expressed as: \begin{equation}\label{EOM2} \mathcal{L}_{X'}c_{s}^{-2}\ddot{\varphi}+(3H\mathcal{L}_{X'}+\tilde{\Gamma})\dot{\varphi}+ \mathcal{L}_{X'\varphi}\dot\varphi^2+V_{eff,\varphi}(\varphi,T)=0. \end{equation} The subscripts $\varphi$ and $X'$ both denote derivatives in this paper, and $\tilde{\Gamma}$ denotes the dissipative coefficient in warm inflation. From the equation above we can see that the damping terms, which contain an enhanced Hubble damping term and a thermal damping term, are much larger than in canonical warm inflation, let alone standard inflation. However, there is an awkward quadratic term in $\dot\varphi$ due to the kinetic-potential coupling term of the Lagrangian density, $\mathcal{L}_{X'\varphi}$. The coupling term $\mathcal{L}_{X'\varphi}\dot\varphi^2$ makes the inflationary analysis harder, e.g. in formulating the slow roll approximation and in calculating the perturbations. Fortunately, we can eliminate this term by making a field redefinition $\phi=f(\varphi)$. Using the new field representation, the Lagrangian density is $\mathcal{L}(X,\phi)$, where $X=\frac12\dot\phi^2$ in the FRW Universe. Then, by varying the new action, we get the equation of motion of $\phi$, which of course has the same general form as Eq. (\ref{EOM2}): \begin{equation}\label{EOM3} \mathcal{L}_{X}c_{s}^{-2}\ddot{\phi}+(3H\mathcal{L}_{X}+\Gamma)\dot{\phi}+ \mathcal{L}_{X\phi}\dot\phi^2+V_{eff,\phi}(\phi,T)=0. 
\end{equation} This implies that we can often choose a field redefinition $\phi=f(\varphi)$ that makes the coupling term vanish in the new representation via \begin{equation}\label{coupling} \mathcal{L}_{X\phi}=\frac{1}{f^4_{\varphi}}[f_{\varphi}\mathcal{L}_{X'\varphi}- 2f_{\varphi\varphi}\mathcal{L}_{X'}-2f_{\varphi\varphi}\mathcal{L}_{X'X'}X']=0, \end{equation} where $f_{\varphi}$ is the first derivative of the function $f(\varphi)$, and $f_{\varphi\varphi}$ is the second derivative of $f(\varphi)$. The dissipative coefficients in the two field representations are related by $\tilde{\Gamma}=f_{\varphi}^2\Gamma$. In uncoupled noncanonical warm inflation, the slow roll equation (\ref{EOM5}) is easily satisfied, as we discuss below. So we can express the time derivative of $\phi$ in terms of $\phi$ as, say, $\dot\phi=g(\phi)$, and we also have $X=\frac12g^2(\phi)$, $X'=\frac{g^2(f(\varphi))}{2f_{\varphi}^2}$. Then, theoretically, given a coupled-form Lagrangian density $\mathcal{L}(X',\varphi)$ and combining Eqs. (\ref{coupling}) and (\ref{EOM5}), we can work out the function $f_{\varphi}$, and then $f(\varphi)$, either analytically or with the help of numerical methods. Thus, in the new redefined $\phi$ representation, the equation of motion of the inflaton becomes: \begin{equation}\label{EOM4} \mathcal{L}_{X}c_{s}^{-2}\ddot{\phi}+(3H\mathcal{L}_{X}+\Gamma)\dot{\phi}+V_{eff,\phi}(\phi,T)=0. \end{equation} For simplicity we will write the thermal effective potential $V_{eff}$ as $V$ hereinafter. Now the equation of motion of the inflaton has a clear form: a second-order inertia term, a first-order damping term and the effective potential term. The slow roll approximations, the warm inflationary dissipative strength, the perturbation quantities, etc., are then more easily worked out, as we analyse below. We carry out the calculations of noncanonical warm inflation in this easy-to-use $\phi$ representation hereafter. 
Given an arbitrary noncanonical Lagrangian density $\mathcal{L}(X',\varphi)$, or its transformed Lagrangian density $\mathcal{L}(X,\phi)$, the noncanonical inflaton and the dissipative coefficient may not have the normal mass dimensions they have in canonical warm inflation, so the dimensionless dissipative strength parameter in our noncanonical warm inflation is defined as \begin{equation}\label{r} r=\frac{\Gamma}{3H\mathcal{L}_X}, \end{equation} which differs from canonical warm inflation. Warm inflation is in the strong regime when $r\gg1$, and in the weak regime when $r\ll1$. The thermal damping of the inflaton and the energy transfer to radiation in warm inflation \cite{BereraIanRamos,Zhang2014} can be characterized by the entropy production equation: \begin{equation}\label{entropy1} T\dot{s}+3HTs=\Gamma\dot{\phi}^{2}, \end{equation} where $s$ is the entropy density. Inflation is often associated with slow roll approximations that drop the highest derivative terms in the equations of motion, and the same holds in the new inflationary picture. The slow roll equations are: \begin{equation}\label{EOM5} (3H\mathcal{L}_{X}+\Gamma)\dot\phi+V_{\phi}(\phi,T)=0, \end{equation} \begin{equation}\label{entropy} 3HTs-\Gamma \dot\phi^2=0. \end{equation} The validity of the slow roll approximations depends on the slow roll conditions \cite{Zhang2014}: \begin{eqnarray} \epsilon&\ll&\frac{\mathcal{L}_{X}(1+r)}{c^2_s},~~\beta\ll\frac{\mathcal{L}_{X}(1+r)}{c^2_s},~~\eta\ll\frac{\mathcal{L}_{X}}{c^2_s}, \nonumber\\ &b&\ll\frac{\min\{1,r\}}{(1+r)c^2_s},~~~~~~~|c|<4. 
\end{eqnarray} The slow roll parameters in the conditions above are defined as: \begin{equation} \epsilon =\frac{M_p^2}{2}\left(\frac{V_{\phi}}{V}\right) ^2, ~~~\eta =M_p^2\frac {V_{\phi \phi}}{V}, ~~~\beta =M_p^2\frac{V_{\phi}\Gamma_{\phi}}{V\Gamma}, \end{equation} and two additional parameters describing the temperature dependence are: \begin{equation} b=\frac {TV_{\phi T}}{V_{\phi}},~~~~c=\frac{T\Gamma_T}{\Gamma}, \end{equation} where $M_p^2=1/8\pi G$ is the reduced Planck mass squared. The slow roll approximations are more easily satisfied in the noncanonical warm inflationary scenario than in canonical warm inflation, let alone standard inflation, so we can safely use them in the calculations of Sec. \ref{sec4}. The number of e-folds in noncanonical warm inflation is given by \begin{equation} N=\int H dt=\int\frac{H}{\dot{\phi}}d\phi\simeq-\frac{1}{M_p^2}\int_{\phi_{\ast}} ^{\phi_{end}}\frac{V\mathcal{L}_X(1+r)}{V_{\phi}}d\phi, \end{equation} where $\phi_{\ast}$ is the inflaton value at horizon crossing. \section{\label{sec3}two examples} In this section, we give two concrete examples to show how to translate a general noncanonical Lagrangian density with $\mathcal{L}_{X'\varphi}\neq0$ from the coupled $\varphi$ representation to the easy-to-use uncoupled representation. \subsection{\label{sec31} the model with $\mathcal{L}(X',\varphi)_{X'X'}=0$} Given an original noncanonical Lagrangian density $\mathcal{L}(X',\varphi)=e^{-\varphi}X'-V_0 e^{-2\varphi}$, we can calculate $\mathcal{L}_{X'}=e^{-\varphi}$ and $\mathcal{L}_{X'\varphi}=-e^{-\varphi}\neq0$. Based on Eq. (\ref{coupling}), we should have: \begin{equation}\label{coupling1} f_{\varphi}\mathcal{L}_{X'\varphi}-2f_{\varphi\varphi}\mathcal{L}_{X'}=0 \end{equation} to make $\mathcal{L}_{X\phi}=0$. 
Then we can work out that if \begin{equation}\label{f1} \phi=f(\varphi)=-2e^{-\frac12\varphi}, \end{equation} the Lagrangian density can be expressed as \begin{equation}\label{Lagrangian1} \mathcal{L}(X,\phi)=X-\frac1{16}V_0\phi^4, \end{equation} where we can see that the coupling term $\mathcal{L}_{X\phi}=0$. \subsection{\label{sec32} the model with $\mathcal{L}(X',\varphi)_{X'X'}\neq0$} We begin with a coupled-form Lagrangian density $\mathcal{L}(X',\varphi)=e^{-\varphi}\ln X'-V(\varphi)$; then $\mathcal{L}_{X'}=\frac{e^{-\varphi}}{X'}$, $\mathcal{L}_{X'X'}=-\frac{e^{-\varphi}}{X'^2}$, and $\mathcal{L}_{X'\varphi}=-\frac{e^{-\varphi}}{X'}$. The coupling term of the equation of motion (\ref{EOM2}) in the $\varphi$ representation is \begin{equation}\label{coupling2} \mathcal{L}_{X'\varphi}\dot\varphi^2=2\mathcal{L}_{X'\varphi}X'=-2e^{-\varphi}. \end{equation} From the equation above, we can see that the coupling term is purely a function of $\varphi$, not of $\dot\varphi$, so it can be absorbed into the dominant effective potential term $V_{eff}$. The equation of motion of $\varphi$ can then still be expressed in an uncoupled form. \section{\label{sec4}perturbations in the noncanonical warm inflation} We now develop the theory of cosmological perturbations in the noncanonical warm inflation. Considering small perturbations, we can expand the total inflaton field as $\Phi(\mathbf{x},t)=\phi(t)+\delta\phi(\mathbf{x},t)$, where $\delta\phi(\mathbf{x},t)$ is the linear response to the thermal stochastic noise $\xi$ in the thermal system. The thermal noise source introduced in warm inflation is Markovian: $\langle\xi(\mathbf{k},t)\xi(-\mathbf{k}',t)\rangle=2\Gamma T(2\pi)^3\delta^3(\mathbf{k}-\mathbf{k}')\delta(t-t')$ \cite{Lisa2004,Gleiser1994}. 
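Before turning to the perturbations, the redefinition of Sec. \ref{sec31} can be verified symbolically: with $\phi=-2e^{-\varphi/2}$ the decoupling condition (\ref{coupling1}) holds, the kinetic term transforms as $e^{-\varphi}X'=X$, and the potential $V_0 e^{-2\varphi}$ becomes $V_0\phi^4/16$. A sympy sketch:

```python
import sympy as sp

varphi, V0 = sp.symbols("varphi V_0", positive=True)

f = -2 * sp.exp(-varphi / 2)   # field redefinition phi = f(varphi), Eq. (f1)
L_Xp = sp.exp(-varphi)         # L_{X'} for L = e^{-varphi} X' - V0 e^{-2 varphi}
L_Xp_phi = -sp.exp(-varphi)    # L_{X' varphi}

# Decoupling condition (coupling1) (L_{X'X'} = 0 for this model):
cond = sp.diff(f, varphi) * L_Xp_phi - 2 * sp.diff(f, varphi, 2) * L_Xp
assert sp.simplify(cond) == 0

# Kinetic term: X = (f')^2 X' = e^{-varphi} X'
assert sp.simplify(sp.diff(f, varphi) ** 2 - sp.exp(-varphi)) == 0

# Potential: V0 e^{-2 varphi} = V0 f^4 / 16
assert sp.simplify(V0 * sp.exp(-2 * varphi) - V0 * f**4 / 16) == 0
print("field redefinition of Sec. 3.1 verified")
```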
Introducing the noise and dissipative terms, and substituting the expansion of the inflaton into the equation of motion obtained by varying the action, we get a second-order Langevin equation: \begin{eqnarray}\label{perturbation1} \mathcal{L}_{X}c_s^{-2}(\ddot\phi(t)+\delta\ddot\phi(\mathbf{x},t))&+&(3H\mathcal{L}_X+\Gamma)(\dot\phi(t)+ \delta\dot\phi(\mathbf{x},t))\nonumber\\+V_{\phi}+V_{\phi\phi}\delta\phi(\mathbf{x},t)&-& \mathcal{L}_X\frac{\nabla^2}{a^2}\delta\phi(\mathbf{x},t)=\xi(\mathbf{x},t). \end{eqnarray} After taking the Fourier transform, the evolution equation for the fluctuations is obtained: \begin{equation}\label{perturbation2} \mathcal{L}_{X}c_s^{-2}\delta\ddot\phi_{\mathbf{k}}+(3H\mathcal{L}_X+\Gamma)\delta\dot\phi_{\mathbf{k}}+ (\mathcal{L}_X\frac{k_c^2}{a^2}+m^2)\delta\phi_{\mathbf{k}}=\xi_{\mathbf{k}}, \end{equation} where $m^2=V_{\phi\phi}$ is a tiny term, much smaller than $\mathcal{L}_Xk_p^2$ in slow roll inflation (the relation between the comoving wave number $k_c$ and the physical wave number $k_p$ is $k_p=k_c/a$). The second-order Langevin equation above is hard to solve, and we only need the perturbation observables at horizon crossing. Horizon crossing takes place well inside the slow roll inflationary regime \cite{LiddleLyth}. The warm slow roll regime is overdamped, so the inertia term can be neglected \cite{Berera2000,Taylor2000}. Then the second-order Langevin equation (\ref{perturbation2}) reduces to first order: \begin{equation}\label{perturbation3} 3H\mathcal{L}_X(1+r)\delta\dot\phi_{\mathbf{k}}+ (\mathcal{L}_X\frac{k_c^2}{a^2}+m^2)\delta\phi_{\mathbf{k}}=\xi_{\mathbf{k}}. 
\end{equation} The approximate solution is given by \begin{eqnarray}\label{solution} \delta\phi_{\mathbf{k}}(\tau)\approx \frac1{3H\mathcal{L}_{X}(1+r)}\exp\left[-\frac{\mathcal{L}_Xk_p^2+m^2} {3H\mathcal{L}_X(1+r)}(\tau-\tau_0)\right]\nonumber \\ \int_{\tau_0}^{\tau}\exp\frac{\mathcal{L}_Xk_p^2+m^2} {3H\mathcal{L}_X(1+r)}(\tau'-\tau_0)\xi(\mathbf{k},\tau')d\tau' \nonumber \\ + \delta\phi_{\mathbf{k}}(\tau_0)\exp\left[-\frac{\mathcal{L}_Xk_p^2+m^2} {3H\mathcal{L}_X(1+r)}(\tau-\tau_0)\right]. \end{eqnarray} With this solution, we can get the corresponding correlation function \begin{eqnarray}\label{correlation} \langle\delta\phi_{\mathbf{k}_p}(\tau)\delta\phi_{\mathbf{k}_p'}(\tau)\rangle\approx (2\pi)^3 \frac{\Gamma T}{3H\mathcal{L}_X(1+r)(\mathcal{L}_Xk_p^2+m^2)} \nonumber\\ \delta^3(\mathbf{k}_p-\mathbf{k}_p')\left[1-\exp\left(-\frac{2(\mathcal{L}_Xk_p^2+m^2)}{3H\mathcal{L}_X(1+r)}(\tau-\tau_0)\right)\right] \nonumber \\ +\langle\delta\phi_{\mathbf{k}_p}(\tau_0)\delta\phi_{\mathbf{k}_p'}(\tau_0)\rangle \exp\left[-\frac {2(\mathcal{L}_Xk_p^2+m^2)}{3H\mathcal{L}_X(1+r)}(\tau-\tau_0)\right]. \end{eqnarray} On the right-hand side of Eq. (\ref{solution}), the first term is the noise contribution driving the mode to thermal equilibrium, and the second term contains the memory of the initial conditions at $\tau=\tau_0$, which is exponentially damped. In the expanding Universe, from Eq. (\ref{solution}) we find that the larger $k_p^2$ is, the faster the relaxation rate. If $k^2_p$ is sufficiently large for the mode to relax within a Hubble time, then that mode thermalizes. As soon as the physical wave number of a $\delta\phi(\mathbf{x},t)$ field mode becomes less than the freezing wave number $k_F$, it essentially feels no effect of the thermal noise $\xi(\mathbf{k},t)$ during a Hubble time \cite{Berera2000}. Based on this criterion, the freeze-out physical wave number $k_F$ is defined by $\frac{\mathcal{L}_Xk_F^2+m^2}{3\mathcal{L}_X(1+r)H^2}=1$. 
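Solving the freeze-out criterion for $k_F$ (with the mass term retained, then dropped) reproduces the result quoted next; a sympy sketch:

```python
import sympy as sp

H, r, LX, m = sp.symbols("H r L_X m", positive=True)

# Full solution of the criterion (L_X k_F^2 + m^2) / (3 L_X (1+r) H^2) = 1
kF_full = sp.sqrt(3 * (1 + r) * H**2 - m**2 / LX)
assert sp.simplify((LX * kF_full**2 + m**2) / (3 * LX * (1 + r) * H**2) - 1) == 0

# Slow roll: the mass term is negligible -> k_F = sqrt(3(1+r)) H
kF_slowroll = kF_full.subs(m, 0)
assert sp.simplify(kF_slowroll - sp.sqrt(3 * (1 + r)) * H) == 0
print(kF_slowroll)
```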
In slow roll inflation, the mass term is negligible, so we can work out the freeze-out wave number \begin{equation}\label{freezeout} k_F=\sqrt{3(1+r)}H. \end{equation} The freeze-out time $t=t_F$ always precedes the horizon crossing time, so the scalar power spectrum of warm inflation is already fixed at the early time $t_F$, when $k=k_F>H$. Through the relation $\langle\delta\phi_{\mathbf{k}}\delta\phi_{\mathbf{k}'}\rangle=\delta^3(\mathbf{k}-\mathbf{k}')\frac{2\pi^2}{k^3} \mathcal{P}_{\phi}(k)$ and Eq. (\ref{correlation}), we can get the power spectrum of the inflaton: \begin{equation}\label{powerphi} \mathcal{P}_{\phi}(\mathbf{k})=\frac{rk_FT}{2\pi^2(1+r)\mathcal{L}_X}=\frac{\sqrt{3}rHT}{2\pi^2\sqrt{1+r}\mathcal{L}_X}. \end{equation} Then the scalar power spectrum of the curvature perturbation is \begin{equation}\label{power} \mathcal{P}_R=\left(\frac{H}{\dot\phi}\right)^2\mathcal{P}_{\phi}=\frac{9\sqrt{3}H^5\mathcal{L}_X(1+r)^{\frac32}rT}{2\pi^2V_{\phi}^2}. \end{equation} In the strong regime of warm inflation ($r\gg1$), the power spectrum becomes \begin{equation}\label{strong} \mathcal{P}_R=\left(\frac{H}{\dot\phi}\right)^2\mathcal{P}_{\phi}=\frac{9\sqrt{3}H^5\mathcal{L}_Xr^{\frac52}T}{2\pi^2V_{\phi}^2}, \end{equation} and in the weak regime of warm inflation ($r\ll1$), it becomes \begin{equation}\label{weak} \mathcal{P}_R=\left(\frac{H}{\dot\phi}\right)^2\mathcal{P}_{\phi}=\frac{9\sqrt{3}H^5\mathcal{L}_XrT}{2\pi^2V_{\phi}^2}. \end{equation} Cosmic microwave background (CMB) observations provide a good normalization of the scalar power spectrum, $P_R\approx10^{-9}$, on large scales. We can see from our new result, Eq. (\ref{power}), that, compared to standard inflation or canonical warm inflation, the energy scale at horizon crossing can be depressed by both the noncanonical effect and the thermal effect. This is good news for the assumption that inflation of the Universe can be well described by effective field theory. 
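The strong- and weak-regime spectra (\ref{strong}) and (\ref{weak}) are simply the two limits of the $r$-dependent factor $(1+r)^{3/2}r$ in Eq. (\ref{power}); a quick symbolic check:

```python
import sympy as sp

r = sp.symbols("r", positive=True)

factor = (1 + r) ** sp.Rational(3, 2) * r  # r-dependence of P_R in Eq. (power)

# Strong dissipation, r >> 1: factor -> r^(5/2), giving Eq. (strong)
assert sp.limit(factor / r ** sp.Rational(5, 2), r, sp.oo) == 1

# Weak dissipation, r << 1: factor -> r, giving Eq. (weak)
assert sp.limit(factor / r, r, 0) == 1
print("limits consistent with Eqs. (strong) and (weak)")
```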
The spectral index is \begin{equation}\label{index} n_s-1=\frac {d\ln\mathcal{P}_R} {d\ln k} \simeq \frac{ \mathcal{\dot P}_R} {H\mathcal{P}_R}, \end{equation} which can be expressed as \begin{eqnarray}\label{index1} n_s-1&=&\left[\frac{5c-16}{4-c}+\frac{6r}{(4-c)(1+r)}\right]\frac{\epsilon}{\mathcal{L}_X(1+r)}\nonumber\\ &+&\frac{2\eta}{\mathcal{L}_X(1+r)}-\frac{10r+4}{(4-c)(1+r)}\frac{\beta}{\mathcal{L}_X(1+r)}\nonumber\\&-& \frac{3r}{2(1+r)}\left(\frac{1}{c_s^2}-1\right)\delta +\frac{5cr+2r+2c+2}{2(4-c)r}b\nonumber\\&+& \frac1{4-c}\frac{\epsilon\beta}{\mathcal{L}_X(1+r)} \end{eqnarray} in our noncanonical warm inflation, where $\delta=\frac{\ddot\phi}{H\dot\phi}$ is also a slow roll parameter, much less than one. Through the equation above and the slow roll conditions of noncanonical warm inflation stated in Sec. \ref{sec2}, we can see that $n_s-1$, which is of order $\frac{\epsilon}{\mathcal{L}_X(1+r)}$, is much less than one. So we obtain a nearly scale-invariant power spectrum, consistent with observations, in general noncanonical warm inflation. Since the tensor perturbations do not couple to the thermal background, the gravitational waves are generated only by the quantum fluctuations, as in standard inflation: \begin{equation}\label{tensor} \mathcal{P}_T=\frac2{M_p^2}\left(\frac H{2\pi}\right)^2. \end{equation} The tensor-to-scalar ratio is thus \begin{equation}\label{ratio} R=\frac{\mathcal{P}_T}{\mathcal{P}_R}=\frac HT\frac{2\epsilon}{\sqrt{3}\mathcal{L}_X(1+r)^{\frac32}r}. \end{equation} From the equation above, we find that the tensor perturbations can be weaker than in canonical warm inflation, let alone standard inflation. This characteristic is due to both the noncanonical effect and the thermal effect when both effects are strong, i.e. it is a kind of synergy of the two effects. \section{\label{sec5}conclusions and discussions} We summarize with a few remarks. 
We extend warm inflation to the more general noncanonical case and thus generalize the scope of inflationary theory. General warm inflation is often dominated by a noncanonical inflaton with a complicated Lagrangian density $\mathcal{L}(X',\varphi)$ having $\mathcal{L}_{X'\varphi}\neq0$. This kind of noncanonical warm inflation is not easy to solve, because a coupling term $\mathcal{L}_{X'\varphi}\dot\varphi^2$ in the equation of motion makes the slow roll inflation hard to define and work out. Fortunately, in many cases we can transform the Lagrangian density $\mathcal{L}(X',\varphi)$, which has an awkward coupled form, into the clear uncoupled form $\mathcal{L}(X,\phi)$ with the help of a field redefinition $\phi=f(\varphi)$. In the new field representation, the noncanonical Lagrangian density $\mathcal{L}(X,\phi)$ has $\mathcal{L}_{X\phi}=0$. The warm inflation can then be dealt with more easily, since the equation of motion has a conventional uncoupled form and the slow roll approximations are straightforward to define and use. In noncanonical warm inflation, the slow roll conditions are more easily satisfied, because the Hubble damping term is enhanced by the noncanonical effect and there is an additional thermal damping term. We give two concrete examples to show how to make field redefinitions that yield uncoupled noncanonical warm inflation in different cases. In this new kind of warm inflationary model, we calculate and obtain a new-form, but still nearly scale-invariant, scalar power spectrum, and we find that the energy scale at horizon crossing can be depressed by the synergy of the noncanonical effect and the thermal effect. The tensor-to-scalar ratio is also analysed, and we find that the gravitational-wave amplitude is weaker than in canonical warm inflation and standard inflation. This characteristic can result from both the noncanonical effect and the thermal effect when both effects are strong, which is another synergy of the two effects. 
The detailed issue of non-Gaussianity in the new scenario also deserves further study, which we leave to future work; we will also concentrate on a new kind of warm inflation that is no longer dominated by the potential. \acknowledgments This work was supported by the National Natural Science Foundation of China (Grants No. 11605100 and No. 11547035).
\section{Conclusion} In this study we investigated the prevalence of code smells in ML projects. We gathered a dataset of 74 ML projects, ran the static analysis tool Pylint on them and collected the distribution of Pylint messages per category per project (\autoref{tab:results:msgs-per-category}), the top 10 code smells in these projects overall (\autoref{tab:results:top10-all}), and the top 20 code smells per category (\autoref{tab:results:top20-per-category}). Moreover, by performing a manual analysis of a subset of the detected smells, we have found that code duplication is common in ML, but requires further research to understand to what extent this occurs and how it can be avoided. We also found that the PEP8 convention for identifier naming style may not always be applicable in ML code due to its resemblance to mathematical notation. This calls for additional research on how it affects the readability of ML code. Most importantly, however, we have found serious issues with the specification of dependencies that present a major threat to the reproducibility and maintainability of Python ML projects. Furthermore, we found that Pylint produces a high rate of false positives on import statements and thus cannot reliably check for correct usage of imported dependencies, including prominent ML libraries such as PyTorch. Both of these problems also provide a major obstacle to the adoption of CI in ML systems. Further research needs to be undertaken to help ML practitioners avoid issues in the dependency management of their projects. \section{Implications} \label{sec:discussion} In this section, we discuss the implications of our results for ML developers. We start by elaborating upon the code smells that we have found to be most prevalent and continue with a discussion of problems regarding dependency management that we encountered while performing this research. 
We also argue how these problems affect the maintainability and reproducibility of the analysed ML projects. \subsection{Explaining the most prevalent code smells} \label{sec:discussion:code-smells} This section aims at providing an explanation for the prevalence of the most common code smells in our dataset of ML projects by investigating their occurrences. \textbf{Error} -- Most interestingly, we found that there were \textit{zero} projects that had \textit{zero} messages in the Error category. Only the \codeinline{geomancer} project in the DeepMind research repository\footnote{\url{https://github.com/deepmind/deepmind-research}} had one error, namely a true positive \codeinline{no-name-in-module} error message in the project's test file. We also found that \codeinline{no-member} and \codeinline{import-error} are the most reported code smells in this category. Upon manual inspection of these messages in several projects, we noticed that import errors have two primary causes, namely: \begin{itemize}[leftmargin=*] \item \textbf{Bad specification of requirements} -- Using the \codeinline{kaggle-kuzushiji-recognition-2019} project as an example, we noticed that it was missing at least four dependencies in its (otherwise well-defined) requirements file. Imports of these missing dependencies were primarily found in the code of a dependency that the project's authors had copied into their repository for some slight customisations, but were also found in other scripts in the repository. \item \textbf{Pylint producing false positives on local imports} -- Taking the \codeinline{navigan}\footnote{\url{https://github.com/yandex-research/navigan}} project as an example, even though we manually fixed the import errors relating to badly specified requirements, there were 28 errors remaining. These errors come from unresolved imports from local modules, i.e., Python files in the repository. 
In Python, a local module \codeinline{utils.py} can be imported from other modules in the same directory using \pyinline{import utils} and it is recommended (but not necessary) to add an \codeinline{__init__.py} file to that directory to indicate that it is a Python package \cite{python-docs-initpy}. However, as a GitHub issue reports\footnote{\url{https://github.com/PyCQA/pylint/issues/3984}}, Pylint produces false positive import errors on local imports, but strangely \textit{not} when the \codeinline{__init__.py} file is \textit{not} present. \end{itemize} As for \codeinline{no-member} errors, in the \codeinline{kaggle-kuzushiji-recognition-2019} project -- which has 327 of them -- these were primarily caused by false positives from Pylint on the majority of -- if not all -- usages of the \codeinline{torch} library (i.e. PyTorch), including those of basic PyTorch functions like \pyinline{torch.as_tensor}, \pyinline{torch.tensor} and \pyinline{torch.max}. The project with the most \codeinline{no-member} errors, \codeinline{RSNA-STR-Pulmonary-Embolism-Detection} (1549), showed the same trend, as did \codeinline{DL-unet} and several other projects that we investigated. This is a known issue that has been reported to Pylint's Github repository\footnote{https://github.com/PyCQA/pylint/issues/\{\href{https://github.com/PyCQA/pylint/issues/3510}{3510}, \href{https://github.com/PyCQA/pylint/issues/2708}{2708}, \href{https://github.com/PyCQA/pylint/issues/2067}{2067}\}.} of which the essence goes back as far as 2013 with a similar problem in the use of NumPy\footnote{\url{https://github.com/PyCQA/pylint/issues/58}.}. The reason stated in these issues is that Pylint has trouble extracting the members and type information of libraries that are backed by bindings with the C programming language. 
\highlight{\textbf{Pylint cannot reliably check} for correct uses of \textbf{import statements}; both local imports, as well as imports from C-backed libraries such as PyTorch, suffer from a \textbf{high rate of false positives}.} This is especially concerning in the context of ML, as the majority of ML libraries are backed by C (to make them performant). The fixes that the Pylint developers propose in the relevant GitHub issues all entail (partially) disabling the \codeinline{no-member} rule, implying that Pylint cannot reliably check for correct uses of C-backed libraries. This is additionally concerning in the context of ML, as it has a high degree of glue code, i.e. code that is written to coerce data in and out of general-purpose libraries \cite{sculley2015hidden}. Additionally, the fact that Pylint fails to reliably analyse the usage of prominent ML libraries provides a major obstacle to the adoption of Continuous Integration (CI) in the development environment of ML systems. If a static code analysis tool produces too many false positives, it will be noisy and counterproductive \cite{vassallo2020asats}. Thus, other important true positives may be overlooked. \highlight{Additionally, the fact that Pylint fails to reliably analyse whether prominent ML libraries are used correctly provides a \textbf{major obstacle to the adoption of Continuous Integration (CI)} in the development environment of ML systems.} \textbf{Warning} -- In this category, we found that one project (\codeinline{kaggle_rsna2019_3rd_solution}) was responsible for 13917 of all 26307 \codeinline{unused-wildcard-import} messages, with 53 \codeinline{wildcard-import} messages. Since \codeinline{unused-wildcard-import} messages are emitted per unused function imported with a wildcard import, this means that there were on average 263 unused imports per wildcard import in this project. Notably, most of these messages were also (contained in) instances of duplicate code. 
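The hazard of wildcard imports can be demonstrated with the standard library alone. In this minimal sketch, a later \pyinline{from math import *} silently rebinds the built-in \pyinline{pow}, changing the type of an existing expression's result:

```python
# Minimal demonstration: a wildcard import silently rebinds names that
# are already in scope. Here the built-in pow (returns an int for int
# arguments) is replaced by math.pow (always returns a float).
result_before = pow(2, 3)   # built-in pow -> 8 (int)

from math import *          # wildcard import: also rebinds e, pi, ...

result_after = pow(2, 3)    # now math.pow -> 8.0 (float)
```

The values are numerically equal, but the changed type can ripple through code that expects integers, which is exactly the kind of hard-to-debug side-effect wildcard imports invite.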
Such unused wildcard imports pollute a module's namespace with the names of all imported functions, meaning there is a greater chance of (accidentally) redefining an outer name. Additionally, wildcard imports may also have unintended side-effects that can be very difficult to debug. The tendency towards using wildcard imports may stem from the prototypical and experimental nature of ML projects, combined with the fact that it is simply easier for the developer to import everything from a library and use whatever they need, rather than import functions individually. Dead experimental codepaths, as found by \citet{sculley2015hidden}, whose imports still remain can also be a cause of bad import management. As for the many \codeinline{bad-indentation} messages, these were dominated primarily by DeepMind projects that were using a different convention for indentation width, namely two spaces instead of four. This is not surprising since indentation width is a preference, where the PEP8 style guide\footnote{\url{https://www.python.org/dev/peps/pep-0008/}} prescribes four spaces, but others such as Google's TensorFlow style guide\footnote{\url{https://www.tensorflow.org/community/contribute/code_style}} prescribe two spaces. \textbf{Refactor} -- We found that \codeinline{duplicate-code} is the most commonly reported refactoring opportunity. Having manually inspected a random subset of these messages and where they occur, we have noticed that these are primarily caused by ML developers having multiple permutations of similar ML models to perform the same task. Each model (experimental codepath) then uses a slightly different underlying algorithm or slightly different parameters and is defined in its own file, likely in an attempt to find the best performing one. 
Yet instead of identifying the commonalities between these different models and abstracting them into modules that can be reused across their codebase, ML developers seem to prefer simply copy-pasting the files. However, more research into code reuse and duplication in ML code is required to truly understand this phenomenon and how it can be prevented. \highlight{\textbf{Code duplication is common in ML}, but calls for more extensive research to truly understand to what extent and for what reasons this occurs, and how it can be avoided.} Regarding the high prevalence of the \codeinline{too-many-arguments} and \codeinline{too-many-locals} messages, it is congruent with previous work which shows that data science projects contain significantly more instances of these than traditional software projects~\cite{simmons2020dsstandards}. \citet{simmons2020dsstandards} also notes that these messages are related, since function arguments also count as local function variables. By default, a \codeinline{too-many-arguments} message is emitted when a function or method takes more than five arguments, while \codeinline{too-many-locals} is emitted when a function or method contains more than 15 local variables. A possible cause for their prevalence, as \citet{simmons2020dsstandards} notes, is \textit{``models with multiple hyperparameters that are either hard-coded as variables in the function definition or passed to the function as individual parameters rather than being stored in a configuration object.''} \textbf{Convention} -- While line length violations stem primarily from developer preference, invalid and improper naming is a problem not exclusive to Python \cite{lacerda2020smellsSLR}. Pylint by default emits an \codeinline{invalid-name} message when it finds names that do not comply with PEP8, i.e. are improperly capitalised or less than three characters long (except in the case of inline variables). 
Indeed, shorter identifier names do take longer to comprehend than whole words \cite{hofmeister2019shortids}, but \citet{simmons2020dsstandards} aptly notes that this may not necessarily be the case in DS and ML code due to its heavily mathematical background. Thus, ML practitioners may find it easier to comprehend the details of a piece of ML code written so that it resembles the notation of the underlying mathematical model, including the names of the identifiers. Future research on this subject will have to show how this affects the readability of ML code. \highlight{\textbf{The PEP8 convention} for identifier naming style \textbf{may not always be applicable in ML code} due to its resemblance to mathematical notation. Future research is required to investigate how this affects the readability of ML code.} \subsection{Problems installing project dependencies} \label{sec:discussion:dependencies} Setting up the projects' codebases as detailed in Section \ref{sec:methodology:setting-up} was not a trivial task. While 42 out of 74 projects did have a requirements file in their repository that installed without a hitch out of the box, there were 32 projects where a \codeinline{requirements.txt} had to be generated, manually created from inspecting the repository, or manually modified from what was already in the repository. Furthermore, there were 13 projects that required installing extra dependencies after installing those in the requirements file. One of these projects had a valid requirements file -- and specified the extra dependencies in their ReadMe -- but the 12 others did not. As for the projects for which a requirements file had to be manually created or modified, we made a few observations as to why this was needed. First, some projects did not contain a requirements file at all, but did specify instructions in the ReadMe, e.g., the \codeinline{yolact_edge} project. 
Secondly, some projects had simply made a small mistake in their manual maintenance of the requirements file, as was the case with the \codeinline{navigan} project. The project authors fixed the mistake less than a day after we filed an issue on their GitHub\footnote{\url{https://github.com/yandex-research/navigan/issues/1}} about it. Thirdly, some projects were relying on custom Docker containers for their runtime environment. These projects, e.g. \codeinline{kaggle-imaterialist}, maintain a Dockerfile in their repository (sometimes with an additional requirements file) in which the project's dependencies are installed, often without specifying exact dependency versions. Finally, and most commonly (especially with the Kaggle projects), projects would contain a \codeinline{requirements.txt} file that was likely the result of running \codeinline{pip freeze} -- a shell command that lists all the packages installed in the current Python environment, including their respective dependencies, along with their exact versions. However, there are three problems with this approach: \begin{itemize}[leftmargin=*] \item \textbf{Difficult to maintain} -- Since \codeinline{pip freeze} lists \textit{all} direct, indirect, runtime and development dependencies, without distinction, in alphabetical order, we conjecture that it is difficult for maintainers to assess whether a certain dependency can safely be upgraded without breaking their code or breaking any of their dependencies. \item \textbf{May result in unresolvable dependencies} -- The resulting requirements file may contain dependencies sourced from different dependency management tools and package indexes. These dependencies may have slightly different package names across package indexes or may only have certain versions published to, e.g., Conda's package index, but not to PyPI. 
There were also projects that depended on pre-release versions of certain libraries that are no longer available on PyPI (e.g., older nightly versions of Tensorflow packages). \item \textbf{May include unrelated dependencies} -- Especially if the user is not installing their dependencies into a virtual environment, the resulting requirements file may also include unnecessary, unrelated (and potentially unresolvable) Python dependencies, such as those used by their operating system or those used in other projects. For example, the \codeinline{side_effects_penalties} project in the DeepMind research repository depends on \codeinline{youtube-dl} (even though the project has nothing to do with videos), as well as some dependencies from the operating system level such as \codeinline{python-apt}, \codeinline{python-debian} and \codeinline{ufw}. The inclusion of the latter dependencies directly indicates that the project author was not using a virtual environment, but was instead using \codeinline{sudo pip install} to install all of their Python dependencies. \end{itemize} \highlight{We have found serious issues with the specification of dependencies that present a \textbf{major threat to the reproducibility and maintainability of Python ML projects}. Further research needs to be undertaken to help ML practitioners avoid issues in the dependency management of their projects.} \section{Introduction} \label{sec:introduction} Artificial Intelligence (AI) and Machine Learning (ML) are pervasive in the current landscape of computer science. Companies such as Facebook, Google, Nvidia and ING are making use of AI and ML for a plethora of tasks that are difficult (if not impossible) to describe using traditional Software Engineering (SE)~\cite{haakman2020ai,sculley2015hidden,amershi2019software,gonzalez2020mluniverse}. 
Examples include facial recognition \& recomposition, natural language processing, real-time video transformation, detection of medical anomalies and intercepting fraudulent financial transactions. Yet, as \citet{sculley2015hidden} wrote in their 2015 paper on the hidden technical debt in ML systems at Google, \textit{``only a small fraction of real-world ML systems is composed of the ML code. (...) The required surrounding infrastructure is vast and complex.''} This is also in part what leads \citet{menzies2020lawsofSEforAI} to predict that the future of software will be a rich and powerful mix of ideas from both SE and AI. \citeauthor{menzies2020lawsofSEforAI} also advocates for more SE experience in the field of AI and ML, stating that poor SE leads to poor AI while better SE leads to better AI \cite{menzies2020lawsofSEforAI}. The data scientists that write AI / ML code often come from non-SE backgrounds where SE best practices are unknown \cite{simmons2020dsstandards}. One such SE best practice is the practice of static code analysis to find (potential) defects in the source code, refactoring opportunities and violations of common coding standards, which we amalgamate into `code smells' for the rest of this paper. Research has shown that the attributes of quality most affected by code smells are maintainability, understandability and complexity, and that early detection of code smells reduces the cost of maintenance \cite{lacerda2020smellsSLR}. With a focus on the maintainability and reproducibility of ML projects, the goal of our research is therefore to apply static code analysis to applications of ML, in an attempt to uncover the frequency of code smells in these projects and list the most prevalent code smells. 
Thus, we formulate the following research question: \textit{What are the most prevalent code smells in Machine Learning code?} The main contributions of this paper are: \begin{itemize} \item An empirical study on the prevalence of code smells in 74 Python ML projects. \item A dataset of 74 ML projects and an open-source tool to perform simultaneous static code analysis on all of these projects. \end{itemize} \section{Methodology} For this paper, we performed an empirical study on the prevalence of code smells in ML code. We collected a dataset of 74 ML projects and implemented a tool to set these projects up with their dependencies in order to replicate their execution environment. It then runs Pylint with its default configuration on all projects in the dataset, collecting and counting the detected code smells. The tool and dataset are both open-source and can be found on GitLab\footnote{\url{https://gitlab.com/bvobart/python-ml-analysis}}. Our empirical study follows the methodology illustrated in Figure~\ref{fig:methodology}. It comprises three main steps, namely: A) project selection, B) setting up the codebases, and C) static analysis. \begin{figure*}[htbp] \centering \includegraphics[width=0.9\textwidth]{figs/methodology-diagram.pdf} \caption{Methodology Diagram.} \label{fig:methodology} \vspace{1mm} \end{figure*} \subsection{Project Selection} In total, our collected dataset comprises 74 ML projects; 32 projects come from finished Kaggle competitions, 38 from \url{paperswithcode.com} (of which 25 projects were from the Google-affiliated DeepMind), and 4 from \url{reproducedpapers.org}. It includes projects from academic papers, (student) reproductions, prize-money-awarding Kaggle competitions, as well as industry players such as Facebook, Nvidia and DeepMind. The dataset defines a list of Git repository URLs and allows for customising the dependencies of particular projects, when they have not been properly defined in their respective repository. 
We elaborate on a number of characteristics of our dataset in Section \ref{sec:methodology:selection:characteristics}, but first, we explain how we collected the projects in the dataset and what guidelines were used for doing so. We aim for this dataset to be a systematically gathered set of projects, representative of the current, real-world state of ML and AI projects. To this end, we have created a set of guidelines for the inclusion of projects in the dataset, which can be found below. Each project included in the dataset\dots \begin{enumerate} \item \dots must be hosted in an open-source Git repository. \item \dots must be written in Python 3. \item \dots must contain pure Python code and must not consist purely of Jupyter Notebooks. More specifically, a project should contain \textit{either} a) at least 200 lines of pure Python code, even if the rest of the code is embedded in Jupyter Notebooks, \textit{or} b) more lines of pure Python code than there are lines of Python code in all Jupyter notebooks of that project. \item \dots must implement an ML or AI model and may not be a library or tool for use in ML projects. \item \dots must be considered `deliverable', i.e., \textit{either} a) the project is part of or accompanies a published academic paper, \textit{or} b) the project has been submitted to \url{paperswithcode.com}, \url{reproducedpapers.org} or a Kaggle competition (which has finished and declared the winners at the time of considering the project). \end{enumerate} The first guideline limits our scope to open-source projects, as these are openly available to download and analyse. The second and third guidelines stem from technical limitations, as Pylint only supports Python 3 and is only able to analyse pure Python files. Jupyter Notebooks are essentially JSON files, containing `cells' with code in Markdown, Python, Julia, or a small selection of other languages. 
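Since notebooks are JSON, counting the embedded Python lines (as guideline 3 requires) reduces to a small parsing routine. A minimal sketch, assuming the standard nbformat v4 layout (\codeinline{cells}, \codeinline{cell_type}, \codeinline{source}); the helper name and demo notebook are illustrative, not part of our tool:

```python
import json

def count_notebook_python_lines(notebook_json: str) -> int:
    """Count the lines of code in all code cells of a Jupyter notebook.

    Assumes the nbformat v4 layout: a top-level "cells" list whose
    entries carry "cell_type" and "source" (a list of lines or a string).
    """
    notebook = json.loads(notebook_json)
    total = 0
    for cell in notebook.get("cells", []):
        if cell.get("cell_type") != "code":
            continue
        source = cell.get("source", [])
        if isinstance(source, str):
            source = source.splitlines()
        total += len(source)
    return total

# Tiny hypothetical notebook: one markdown cell and one two-line code cell.
demo_notebook = json.dumps({
    "nbformat": 4,
    "cells": [
        {"cell_type": "markdown", "source": ["# Experiment notes\n"]},
        {"cell_type": "code", "source": ["import math\n", "print(math.pi)\n"]},
    ],
})
print(count_notebook_python_lines(demo_notebook))  # -> 2
```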
While it is technically possible to convert the Python code embedded in these notebooks to pure Python files using a tool such as \codeinline{nbconvert}, the produced code has a slightly different style than general Python modules, which invalidates certain Pylint rules. For example, the Pylint messages \codeinline{pointless-statement}, \codeinline{expression-not-assigned} and \codeinline{wrong-import-position} produce false positives in notebook-style code. Due to the lack of direct Pylint support for Jupyter Notebooks and since we do not want to selectively disable Pylint rules for notebook-extracted code as opposed to pure Python code, we decided to exclude projects that purely contain Jupyter Notebooks from our dataset. The minimum of 200 lines of pure Python code in the presence of larger Jupyter Notebooks was chosen such that this code is likely not to be purely utility code, but also to contain part of the ML code. The fourth guideline reflects that we are interested in analysing applications of ML rather than libraries used in their development, such as \codeinline{tensorflow}, \codeinline{pandas}, or \codeinline{sklearn}. The fifth and final guideline focuses on avoiding toy projects, unfinished projects, or projects still under development. \subsubsection{Dataset characteristics} \label{sec:methodology:selection:characteristics} We measured several general characteristics of every project, which can be found in \autoref{tab:methodology:characteristics}. The 74 projects in the dataset contain a total of 3156 pure Python files, amounting to 511,018 lines of Python code, including empty lines. The median project has 17 pure Python files with 2848.5 lines of code, resulting in an average of 157.3 lines of Python code per file. The smallest project contained a single file with 58 lines of Python, while the largest project had 78572 lines of Python code across 229 files. 
This project was found to be embedding the code of several dependencies in its repository, multiple times. \begin{table*} \caption{Characteristics of our dataset of 74 ML projects} \label{tab:methodology:characteristics} \centering \begin{tabular}{lrrrrrrr} \toprule \textit{Characteristic} & \textit{Min} & \textit{Q1} & \textit{Median} & \textit{Q3} & \textit{Max} & \textit{Mean} & \textit{Std Dev.} \\ \midrule Number of pure Python files & 1 & 8 & 17 & 36 & 730 & 43 & 95 \\ Number of Jupyter Notebook files & 0 & 0 & 0 & 1 & 52 & 3 & 8 \\ Lines of Python code & 58 & 1449 & 2849 & 5243 & 78572 & 6906 & 13568 \\ Lines of Jupyter Notebook Python & 0 & 0 & 0 & 197 & 43387 & 1008 & 5115 \\ Avg. lines of Python code per Python file & 58 & 108 & 157 & 214 & 1151 & 193 & 155 \\ Avg. lines of Jupyter Notebook Python per Jupyter Notebook file & 12 & 73 & 129 & 223 & 1610 & 256 & 335 \\ \bottomrule \end{tabular} \vspace{1mm} \end{table*} \subsection{Setting up the codebases} \label{sec:methodology:setting-up} In this step, performed by our analysis tool, for each project, we clone the latest version of the project's Git repository, create a virtual environment in it, ensure that there exists a file that specifies any necessary dependencies and then install those dependencies into the virtual environment. These need to be installed, such that our static analysis tool of choice is able to check whether imports resolve correctly and whether the imported libraries are used correctly. This is particularly interesting in the case of ML, as \citet{sculley2015hidden} noted that ML projects have a high degree of glue code and so make extensive use of libraries. In total, the folder containing all of the 74 cloned projects from our dataset along with the accompanying virtual environments with their installed dependencies, amounts to 131 GiB. Python projects can specify and install their dependencies in a variety of ways. 
The most common way to install a Python dependency is to use \codeinline{pip}, the package manager that is installed alongside Python. It is common convention to specify a Python project's list of dependencies in a requirements file called \codeinline{requirements.txt}, which is conventionally placed at the root of the project's source code repository. It is also possible to specify a \codeinline{setup.py} file which allows the project to be built into a Python package, ready to be published to PyPI, \codeinline{pip}'s default package index. \codeinline{pip} can be configured to use other package indexes, but by default it can only install packages from PyPI, or directly from source through a Git repository URL or a local folder with a \codeinline{setup.py}. However, there are also other package managers / dependency management solutions such as Conda, Poetry, \codeinline{pip-tools} and Pipenv, with the latter being directly endorsed in Python's Packaging User Guide \cite{python-packaging-guide}. These tools each have their own way of specifying dependencies and -- especially in Conda's case -- may use package indexes in addition to PyPI, which makes resolving these dependencies difficult. It is possible to use \codeinline{pip freeze > requirements.txt}, which collects all Python packages and their exact versions installed in the current Python environment (disregarding by which means these packages were installed) and outputs them to a \codeinline{requirements.txt} file. This approach is flawed though, as we explain in Section \ref{sec:discussion:dependencies}. Our analysis tool currently only supports installing dependencies with \codeinline{pip} and expects a \codeinline{requirements.txt} or \codeinline{setup.py} file in their conventional location. The dataset also supports specifying a custom path to a \codeinline{requirements.txt} file, or alternatively, the contents of a custom \codeinline{requirements.txt} file for a project in the dataset. 
It is also possible to specify extra requirements that need to be installed \textit{after} installing the dependencies from the requirements file. This is necessary for, e.g., Nvidia's Apex library, which depends on PyTorch; when trying to run \codeinline{pip install} on a requirements file containing both PyTorch and Apex's Git repository URL (no matter the order), the installation of Apex fails because PyTorch is not yet installed. Only for projects that have neither a \codeinline{requirements.txt} file nor a manually defined one does our analysis tool use \codeinline{pipreqs}\footnote{\url{https://github.com/bndr/pipreqs}} to generate a \codeinline{requirements.txt} file based on the libraries imported in the code. Our analysis tool currently does not support using Conda, Poetry or Pipenv for resolving and installing dependencies. We therefore had to exclude one project that used Poetry and two projects that used Conda and solely specified a Conda \codeinline{environment.yml} file, but no \codeinline{requirements.txt} or \codeinline{setup.py}. No projects that we came across were using Pipenv. \subsection{Static Analysis} This step is also performed by our analysis tool and concerns running the static code analysis tool Pylint (version 2.6.0) in its default configuration on all pure Python files in each project (but not on any of the dependencies). We choose Pylint for static code analysis as it is widely used and widely accepted in the Python community, as well as being highly configurable \cite{simmons2020dsstandards,bafatakis2019usesPylint}. It is also well integrated into IDEs such as PyCharm and VS Code. Furthermore, \citet{bafatakis2019usesPylint} used it to measure coding style compliance in StackOverflow answers, \citet{omari2019usesPylint} used it as a metric for the code quality of open-source Python projects, and \citet{simmons2020dsstandards} used it in their code quality comparison between DS and non-DS Python projects. 
Pylint provides an extensive set of messages, not only for stylistic issues, but also for issues regarding programming conventions, possible refactorings and other logical code smells. While Pylint is very configurable, we chose to use Pylint's default configuration as it reflects the community standards, similar to \citet{simmons2020dsstandards}. The code smells that Pylint reports are each identified by a symbol, such as \codeinline{bad-indentation} or \codeinline{import-error}, which is also how we refer to specific Pylint messages in this paper. Furthermore, these messages are divided into five categories (message types), which we describe below. The italic text is how Pylint describes the category. \begin{itemize} \item \textbf{Convention} -- \textit{for programming standard violation} -- Messages in this category show violations of primarily code style conventions, as well as documentation conventions and Pythonic programming conventions. \item \textbf{Refactor} -- \textit{for bad code smell} -- Messages in this category indicate that the smelly code should be refactored. \item \textbf{Warning} -- \textit{for Python specific problems} -- This category includes many generic and Python-specific linting messages. \item \textbf{Error} -- \textit{for probable bugs in the code} -- Messages in this category indicate problems in the code that are very likely to cause run-time problems. \item \textbf{Fatal} -- \textit{if an error occurred which prevented Pylint from doing further processing}. \end{itemize} Cloning and installing all projects, even though this is performed automatically by the tool, is the most time-consuming part of the analysis -- it takes roughly three hours. With all projects already cloned and their dependencies already installed, the analysis of all 74 projects took 8m 53s using 12 threads on an Intel\textsuperscript{\textregistered{}} Core\texttrademark~i7-8750H processor. 
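A per-category tally like the one our tool collects can be sketched from Pylint's JSON reporter (\codeinline{pylint --output-format=json}), whose records include a \codeinline{type} field naming the message category. The sample records below are hypothetical; real records also carry fields such as \codeinline{message} and \codeinline{message-id}:

```python
import json
from collections import Counter

# Hypothetical excerpt of `pylint --output-format=json` output.
pylint_json = """[
  {"type": "convention", "symbol": "invalid-name",  "path": "train.py", "line": 3},
  {"type": "convention", "symbol": "line-too-long", "path": "train.py", "line": 7},
  {"type": "error",      "symbol": "no-member",     "path": "model.py", "line": 42}
]"""

def messages_per_category(raw_json: str) -> Counter:
    """Tally Pylint messages per category (convention, refactor, ...)."""
    return Counter(message["type"] for message in json.loads(raw_json))

counts = messages_per_category(pylint_json)
print(counts["convention"], counts["error"])  # -> 2 1
```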
Eight projects contained code that caused Pylint to crash during analysis, so we excluded these from the 82 projects we originally had in the dataset, bringing the total to 74. Several issues have been filed about this, including one by this paper's first author\footnote{\url{https://github.com/PyCQA/pylint/issues/3986}}. This bug has since been fixed. \section{Related Work} Several studies have investigated linting and static code analysis of non-ML projects \cite{tomasdottir2020eslint,omari2019usesPylint,bafatakis2019usesPylint,chen2018pysmells}. \citet{tomasdottir2020eslint} researched why JavaScript (JS) developers use linters and how they tend to configure them. They found that maintaining code consistency, preventing errors, saving discussion time and avoiding complex code were among the top reasons why JS developers use linters. They also found that JS developers commonly stick with already existing preset linting configurations. \citet{vassallo2020asats} found a similar result; among other results, they found that developers are often unwilling to configure automatic static analysis tools (ASATs) and emphasise \textit{``the necessity to improve existing strategies for the selection and prioritisation of ASATs warnings that are shown to developers.''} Within the Python ecosystem, \citet{chen2018pysmells} investigated the detection and prevalence of code smells in 106 Python projects with the most stars on GitHub. They found that long parameter lists and long methods were more prevalent than other code smells. \citet{omari2019usesPylint} used Pylint to analyse the code quality of a dataset of large Python projects. Furthermore, \citet{bafatakis2019usesPylint} used Pylint to investigate the Python coding style compliance of StackOverflow answers. Within the Machine Learning ecosystem, we only found one paper by \citet{simmons2020dsstandards} that performed static code analysis on a large dataset of Data Science (DS) projects. 
They also analysed non-DS projects with the goal of comparing the code quality and coding standard conformance of (open-source) DS projects versus non-DS projects, using Pylint in its default configuration as a metric. They sourced their DS projects from \citet{biswas2019dsdataset}, who in 2019 published a dataset of 1558 \textit{``mature Github projects that develop Python software for Data Science tasks''}. Aside from applications of ML, it also includes ML libraries and tools. Our study differs from \cite{simmons2020dsstandards} in that we do not compare against non-DS projects and in that we do not solely focus on the adherence to coding standards as \cite{simmons2020dsstandards} does. Our primary focus is on investigating obstructions to the maintainability and reproducibility of ML projects, which includes coding standards violations, but also entails recognising refactoring opportunities and other code smells \cite{lacerda2020smellsSLR}. Moreover, we solely focus on applications of ML, and leave ML libraries and tools out of scope. We argue that the underlying nature of ML libraries and tools is very different from that of ML applications, and thus different results are expected when they are studied separately. Furthermore, \citet{simmons2020dsstandards} simplified the installation of the projects' dependencies by using \codeinline{findimports}\footnote{\url{https://pypi.org/project/findimports/}} to resolve all imports used in the projects, instead of relying on what projects' authors defined in their repositories, noting that \textit{``it was impractical to reliably determine and install dependencies for the projects analysed.''} However, if there is an inherent difficulty in resolving these dependencies within Python projects, then that is in itself an obstruction to the reproducibility and maintainability of these projects. Hence, we investigate this in our study.
\section{Results} \label{sec:results} Applying our methodology, we collected, installed, and analysed 74 ML projects. In this section, we present our results and answer the research question posed in the introduction: \begin{itemize} \item \textbf{RQ} -- \textit{What are the most prevalent code smells in Machine Learning code?} \end{itemize} To answer this, we first analysed the distribution of the number of code smells per Pylint category per project, the characteristics of which can be found in \autoref{tab:results:msgs-per-category}. The table shows the minimum, maximum, mean and median number of messages reported by Pylint for each category, as well as the 25th percentile (Q1), 75th percentile (Q3), and standard deviation (Std. Dev.). We use the median as the main measure of central tendency. Our results show that Pylint messages in the Warning category are the most prevalent -- the median project has 356 warnings -- closely followed by messages in the Convention category with 226 messages for the median project. Messages in the Refactor and Error categories are less prevalent, with respectively 49 and 56 such messages for the median project. However, especially given that the Error category is meant for messages that show ``probable bugs'', this is an interesting observation. Even more interestingly, there was \textit{no} project for which Pylint reported \textit{no} error messages. \input{results/msgs-per-category} As a more direct answer to this research question, we measured across all projects in our dataset what the top 20 code smells per category are that Pylint reported, see \autoref{tab:results:top20-per-category}. The top 10 messages that Pylint reported, disregarding category, are in \autoref{tab:results:top10-all}.
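Both aggregations used here, the per-category summary statistics and the per-category top-$n$ rankings, can be reproduced with the Python standard library. The per-project counts and (category, symbol) pairs below are made-up illustrative inputs, not our measured data:

```python
# Sketch of the two aggregations used in this section: descriptive
# statistics per category (min/max/mean/median/Q1/Q3/std) and a
# per-category "top n" ranking of message symbols. All input data
# below is made up for illustration, not our measured results.
import statistics
from collections import Counter, defaultdict


def summarise(counts):
    """Summary statistics as reported in the per-category table."""
    q1, _, q3 = statistics.quantiles(counts, n=4)
    return {
        "min": min(counts),
        "max": max(counts),
        "mean": statistics.mean(counts),
        "median": statistics.median(counts),
        "q1": q1,
        "q3": q3,
        "std": statistics.stdev(counts),
    }


def top_smells(messages, n=20):
    """Per category, the n most frequent message symbols."""
    per_category = defaultdict(Counter)
    for category, symbol in messages:
        per_category[category][symbol] += 1
    return {cat: c.most_common(n) for cat, c in per_category.items()}


warnings_per_project = [12, 85, 356, 410, 1290]  # hypothetical counts
toy_messages = ([("warning", "bad-indentation")] * 3
                + [("warning", "unused-import")] * 2
                + [("convention", "invalid-name")])
```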
\input{results/top10-all} \input{results/top20-per-category} \textbf{Convention} -- In this category we found that invalid naming, missing documentation (\codeinline{missing-function-docstring}, \codeinline{missing-module-docstring}, \codeinline{missing-class-docstring} and \codeinline{missing-docstring}) and improper organisation of imports (\codeinline{wrong-import-position}, \codeinline{wrong-import-order}, \codeinline{ungrouped-imports}, \codeinline{import-outside-toplevel} and \codeinline{multiple-imports}) were the most commonly recognised code smells in the Convention category. \textbf{Refactor} -- The most commonly recognised opportunities for refactoring pertained to duplicate code (4649), using too many arguments when defining a function or method (2158, \codeinline{too-many-arguments}), and using an old style for calling \pyinline{super} in the constructor of an inheriting class (1802, \codeinline{super-with-arguments}), instead of using the Python 3 style where no arguments to \pyinline{super} are necessary. It also shows that functions and classes are often too complex; Pylint reports 1456 functions that use too many local variables (\codeinline{too-many-locals}), 265 that are too long (\codeinline{too-many-statements}) and 218 that have too many branches, as well as 658 classes that have too many attributes on them (\codeinline{too-many-instance-attributes}). \textbf{Warning} -- The most reported Warning messages, by far, are \codeinline{unused-wildcard-import} (26307) and \codeinline{bad-indentation} (19921). Code smells relating to import management, as already indicated in the Convention category, are also reflected in the Warning category with 26307 counts of unused wildcard imports, 2321 counts of unused imports, 322 counts of libraries that were imported multiple times in the same file (\codeinline{reimported}) and 297 counts of wildcard imports. Aside from unused imports, unused variables (986) and unused arguments (902) are also common. 
Having variables that redefine (shadow) function or variable names from an outer scope (\codeinline{redefined-outer-name}) is also common with 2548 recognised cases, as is redefining Python's built-in global names (536, \codeinline{redefined-builtin}). \textbf{Error} -- Finally, in the Error category, with 5860 counts, the \codeinline{no-member} message is the most prevalent, warning about the usage of non-existent attributes and methods on class instances and non-existent functions in Python modules. Import errors are the second most common with 1750 counts (i.e. on average 23.6 import errors per project), which are reported when a module (whether an external library or a module from a local file) contains imports that Pylint cannot resolve. The 326 \codeinline{no-name-in-module} messages are also related to these import problems, as they are emitted upon using a \pyinline{from X import Y} style import, where \codeinline{X} is resolved (so no import error is emitted), but \codeinline{Y} is not found. Furthermore, the use of undefined variables and attempting to call uncallable objects are also prevalent. \section{Threats to Validity} \subsection{Validity of the dataset} Our dataset may not yet be fully representative of the real-world state of ML code, as it currently only contains open-source ML projects. Therefore, in future research, we want to collect a dataset of closed-source ML projects from the industry, such as ING's AI-driven FinTech industry. We will use this both to compare the prevalence of code smells in these closed-source industry projects with that of open-source projects as presented in this paper, and to make our dataset more representative of the real-world state of ML and AI projects. We will also explore adding projects from the dataset published by \citet{biswas2019dsdataset}.
Furthermore, we currently do not perform any analysis on the code quality of Jupyter Notebooks, even though they are very popular and have emerged as a de facto standard for data scientists \cite{perkel2018jupyter}. This was deliberate, as Pylint currently does not support directly analysing the Python code in Jupyter Notebook files and we wanted to avoid applying double standards to pure Python code and notebook Python code by extracting the notebook code into pure Python files. However, given their popularity, we do intend to perform future research on the code quality and linting of Jupyter Notebook code. \subsection{Validity of Pylint} Due to its dynamically typed nature, linting Python code is notoriously difficult \cite{simmons2020dsstandards,chen2018pysmells}. It is therefore no surprise that Pylint contains bugs and limitations that cause false positives and false negatives. Pylint's issue tracker on GitHub also reports 165 open and 501 closed issues regarding false positives\footnote{See \url{https://github.com/PyCQA/pylint/issues?q=is\%3Aissue+false+positive}} as of January 19th 2021. We have also noticed some of these shortcomings for ourselves during this research, as we have discussed in Section \ref{sec:discussion:code-smells}. We mitigate this threat by manually checking a subset of projects to analyse potential false positives.
\section{Introduction} The SM of particle physics has been tested and confirmed by many indirect and direct measurements in the last decades~\cite{ParticleDataGroup:2020ssz} and it was completed in 2012 by the discovery of the Brout-Englert-Higgs boson~\cite{ATLAS:2012yve,CMS:2012qbp}. Therefore, the focus of particle physics has now shifted towards discovering physics beyond the SM, i.e. new particles and new interactions. However, despite the observation of Dark Matter (at astrophysical scales) and neutrino masses (via oscillations), as well as compelling theoretical arguments for the existence of beyond the SM physics, no new particles have (so far) been observed directly at the Large Hadron Collider (LHC) at CERN (see e.g. Refs.~\cite{Butler:2017afk,Masetti:2018btj} for an overview). Fortunately, intriguing indirect hints for new physics (NP) have been accumulated in recent years in: \begin{itemize} \item Semi-leptonic bottom quark decays ($b\to s\ell^+\ell^-$) \item Tauonic $B$ meson decays ($b\to c\tau\nu$) \item The anomalous magnetic moment of the muon ($a_\mu$) \item The Cabibbo angle anomaly (CAA) \item Non-resonant di-electrons ($q\bar q \to e^+e^-$) \item The difference of the forward-backward asymmetry in $B\to D^*\mu\nu$ vs $B\to D^*e\nu$ ($\Delta A_{\rm FB}$) \item Leptonic tau decays ($\tau\to\mu\nu\nu$) \end{itemize} Interestingly, as we want to {propose} in this letter, all these observables admit an interpretation in terms of lepton flavour universality violation (LFUV), i.e. NP that distinguishes between muons, electrons and tau leptons. While some of the anomalies are by construction measures of LFUV, also the other observables can be interpreted in this context. This unified view suggests a common origin of the anomalies in terms of beyond the SM (BSM) physics, which reinforces the case for LFUV.
{In particular, it opens up novel avenues for the construction of NP models and allows for building a compelling physics case for colliders.} \section{Anomalies and LFUV} In the SM, the gauge interactions respect lepton flavour universality (LFU), which is in fact only broken by the Higgs Yukawa couplings. As these couplings are very small, at most of the order of one percent for the tau lepton, LFU is an approximate accidental symmetry of the SM (at the Lagrangian level). However, the impact of the lepton masses, originating from the Higgs Yukawa couplings after electroweak (EW) symmetry breaking, on the lifetimes of charged leptons is enormous due to kinematic effects. Therefore, when we refer to LFUV we mean interactions with different couplings to electrons, muons and tau leptons (disregarding phase space effects) that directly distinguish among the charged leptons at the Lagrangian level. \vspace{2mm} \begin{boldmath}$b\to s\ell^+\ell^-$:\end{boldmath}~Like all flavour changing neutral current processes, $b\to s\ell^+\ell^-$ transitions are loop and CKM suppressed in the SM, resulting in branching ratios which are at most of the order of $10^{-6}$. Among the different decays involving $b\to s\ell^+\ell^-$ transitions, the ratios $R({K^{\left(*\right)}}) = {\rm Br}({B \to K^{\left( * \right)}\mu ^+\mu ^-}) /{\rm Br}({B \to K^{\left( * \right)}e^+e^-})$ are particularly prominent. They are measured by LHCb~\cite{LHCb:2021trn,LHCb:2017avl} (and Belle~\cite{BELLE:2019xld,Belle:2019oag}) and their theory predictions are very clean (within the SM) since the dependence on the form factors drops out to an excellent approximation.
Moreover, there is a long list of semi-leptonic $b \to s \ell^+\ell^-$ observables that significantly deviate from the SM predictions, like the optimized observable $P_5^{\prime\mu}$~\cite{Descotes-Genon:2012isb,Descotes-Genon:2013vna}\footnote{$P_5^{\prime\mu}$ leads to the important LFUV observable $Q_5=P_5^{\prime\mu}-P_5^{\prime e}$~\cite{Capdevila:2016ivx} measured by Belle\cite{Belle:2016fev} which agrees with the expectations from $R(K^{(*)})$.}, but also total branching ratios (sensitive to form factors) like ${\rm Br}({B\to K^{*}\mu^+\mu^-})$~\cite{LHCb:2016ykl}, ${\rm Br}({B \to K\mu^+\mu^-})$\cite{LHCb:2014cxe} and ${\rm Br}({B_s \to \phi\mu^+\mu^-})$~\cite{LHCb:2021zwz}. Furthermore, also the decay $B_s\to\mu^+\mu^-$~\cite{LHCb:2020zud}, being a purely leptonic decay, which can be predicted accurately including 3-loop QCD corrections~\cite{Hermann:2013kca} and enhanced electromagnetic corrections~\cite{Beneke:2017vpq}, displays a tension. Even though none of these observables measures LFUV, it is intriguing that if one assumes that the bulk of the NP effect is related to muons, a picture emerges which is in excellent agreement with data, further reinforcing the case for BSM physics in $b\to s\ell^+\ell^-$ transitions. In fact, including all observables into a global fit~\cite{Descotes-Genon:2015uva,Capdevila:2017bsm,Alguero:2019ptt}, one finds that several NP scenarios, possessing destructive NP w.r.t. the SM at the $20\%$ level, are preferred over the SM hypothesis by more than $7\sigma$~\cite{Alguero:2021anc} (see also Refs.~\cite{Altmannshofer:2021qrr,Alok:2020bia,Hurth:2020ehu, Ciuchini:2020gvn}). \vspace{2mm} \begin{boldmath}$b\!\!\to\!\! c\tau\nu$:\end{boldmath} This charged current transition is already mediated at tree-level in the SM and the corresponding decays have significant branching ratios (${\mathcal O}(10^{-3})$). 
Here the ratios $R( D^{(*)} ) = {\rm Br}( B \to D^{(*)}\tau \nu )/{\rm Br}( B \to D^{(*)}\ell \nu )$, measured by BaBar~\cite{BaBar:2013mob}, Belle~\cite{Belle:2019gij} and LHCb~\cite{LHCb:2017smo}, possess imperfect cancellations of the form factor dependence since the tau mass is sizeable. However, the error is experimentally dominated and the resulting significance for constructive NP, at the $10\%$ level w.r.t. the SM, is $3\sigma$~\cite{HFLAV:2019otj}. Interestingly, these measurements are supported by $R(J/\Psi)$~\cite{LHCb:2017vlu} which is also observed to be larger than expected in the SM. \vspace{2mm} \begin{boldmath}$a_\mu$:\end{boldmath}~The result of the E821 experiment at Brookhaven~\cite{Bennett:2006fi} was recently confirmed by the $g-2$ experiment at Fermilab~\cite{Abi:2021gix} and the combined result displays a $4.2\,\sigma$ tension with the SM prediction~\cite{Aoyama:2012wk,Aoyama:2019ryr,Czarnecki:2002nt,Gnendiger:2013pva,Davier:2017zfy,Keshavarzi:2018mgv,Colangelo:2018mtw,Hoferichter:2019gzf,Davier:2019can,Keshavarzi:2019abf,Kurz:2014wya,Melnikov:2003xd,Masjuan:2017tvw,Colangelo:2017fiz,Hoferichter:2018kwz,Gerardin:2019vio,Bijnens:2019ghy,Colangelo:2019uex,Blum:2019ugy,Colangelo:2014qya}, which points towards NP of the order of the SM EW contribution. The main theory uncertainty comes from hadronic vacuum polarization (HVP). Here, the Budapest Marseilles Wuppertal collaboration (BMWc) released the first lattice results that indicate a value in tension with the one derived from $e^+e^-$ data~\cite{Davier:2017zfy,Keshavarzi:2018mgv,Colangelo:2018mtw,Hoferichter:2019gzf,Davier:2019can,Keshavarzi:2019abf}.
Therefore, future detailed comparisons with other lattice calculations will be necessary to achieve the same level of scrutiny that has become standard for the data-driven approach\footnote{Note that HVP also enters the global EW fit~\cite{Passera:2008jk}, and its {indirect determination via this fit~\cite{Haller:2018nnx}, slightly} disfavours the BMWc result~\cite{Crivellin:2020zul,Keshavarzi:2020bfy}.}. Note that {the UV contribution to} $a_\ell$ is necessarily proportional to one power of the lepton mass due to kinematics. Factoring out this dependence, one can see that the limit on the NP effect in $a_e$ is so stringent~\cite{Hanneke:2008tm,Aoyama:2017uqe,Laporta:2017okg} that the NP effect in $a_\ell$ cannot be flavour blind, and must therefore break LFU, if one aims at explaining $a_\mu$\footnote{However, in this case, LFUV could originate from the SM Yukawa couplings like in the MSSM.}. \vspace{2mm} {\bf CAA}:~The Cabibbo angle parametrizes the mixing among the first two generations of quarks and dominates the first row and first column CKM unitarity relations. Therefore, it can be used to check the consistency of different determinations of CKM elements within the SM and thus also to search for physics beyond the SM. Interestingly, a deficit in the first row and first column CKM unitarity is observed~\cite{Zyla:2020zbs}. This can be traced back to the fact that $V_{ud}$ extracted from super-allowed beta decays does not agree with $V_{us}$ determined from kaon and tau decays, when comparing (both) via CKM unitarity. The significance of these deviations crucially depends on the radiative corrections applied to beta decays~\cite{Marciano:2005ec,Seng:2018yzq,Seng:2018qru,Gorchtein:2018fxl,Czarnecki:2019mwq,Seng:2020wjq,Hayen:2020cxh,Hardy:2020qwl}, but also on the treatment of tensions between $K_{\ell 2}$ and $K_{\ell 3}$~\cite{Moulson:2017ive,Seng:2021nar} and tau decays~\cite{Amhis:2019ckw}.
In the end $\sum_i\big|V_{ui}\big|^2= 0.9985(5)$ and $\sum_i\big|V_{id}\big|^2= 0.9970(18)$ should give a realistic representation of the current situation~\cite{Zyla:2020zbs}. Note that the fact that there is a deficit both in first row and first column CKM unitarity suggests that, if these tensions are due to NP, they should be related to $V_{ud}$ and thus beta decays. Interestingly, these deviations can also be interpreted as a sign of LFUV~\cite{Coutinho:2019aiy,Crivellin:2020lzu} since beta decays involve electrons while the most precise determination of $V_{us}$ comes from decays with final state muons. \vspace{2mm} \begin{boldmath}$q\bar q\to e^+e^-$:\end{boldmath}~In the search for high-energetic oppositely charged lepton pairs, the CMS experiment observed 44 electron events with an invariant mass of more than $1.8\,$TeV, while only $29.2\pm3.6$ events were expected~\cite{CMS:2021ctt}, resulting in a $\approx4\sigma$ tension. As the number of observed muons is compatible with the SM prediction, this can be interpreted as a sign of LFUV and CMS provided the ratio of muons over electrons which reduces the theoretical uncertainties~\cite{Greljo:2017vvb}. Importantly, this CMS excess is compatible with the ATLAS limit~\cite{ATLAS:2020yat}, as also ATLAS observed slightly more electrons than expected. \vspace{2mm} \begin{boldmath}$\Delta A_{FB}$:\end{boldmath} This observable encodes the difference of the forward-backward asymmetry in $B\to D^*\mu\nu$ vs $B\to D^*e\nu$. Like for $R(K^{(*)})$ the muon and electron mass can both be neglected such that the form-factor dependence cancels and the SM prediction is, to the currently relevant precision, zero. 
Even though the corresponding measurements of the total branching ratios are consistent with the SM expectations~\cite{Glattauer:2015teq,Abdesselam:2017kjf}, recently Ref.~\cite{Bobeth:2021lya} unveiled a $\approx\!4\sigma$ tension in $\Delta A_{\mathrm{FB}}$, extracted from $B\to D^*\ell\bar \nu$ data of BELLE~\cite{Waheed:2018djm}. \vspace{2mm} \begin{boldmath}$\tau\to\mu\nu\nu$:\end{boldmath} Combining the ratios $\tau \to \mu,e \nu \bar \nu/\mu \to e \nu \bar \nu$ and $\tau \to \mu \nu \bar \nu/\tau \to e \nu \bar \nu$, including the latest BELLE result~\cite{Belle:2013teo} and correlations~\cite{Amhis:2019ckw}, leads to a $\approx 2\sigma$ preference for constructive NP at the per-mille level in $\tau \to \mu \nu \bar \nu$. On the theory side, QED corrections~\cite{Marciano:1988vm,Decker:1994ea} are relevant due to the high precision of the measurement. \section{Explanations} \label{explanations} Since the anomalies fit into a coherent pattern of LFUV beyond the SM, it is natural to ask how they could be explained in terms of new particles and new interactions. For a consistent renormalizable extension of the SM, only scalar bosons (spin 0), fermions (spin 1/2) and vector bosons\footnote{Here a consistent extension requires some spontaneous symmetry breaking via a Higgs mechanism or some composite or extra-dimensional dynamics.} (spin 1) are at our disposal. Here, we will consider four classes of SM extensions (focusing on heavy NP realized above the EW breaking scale): \vspace{-2mm} \begin{itemize} \item Leptoquarks (LQs): Scalar or vector particles that carry color and couple directly to a quark and a lepton~\cite{Buchmuller:1986zs,Dorsner:2016wpm}. They were first proposed in the context of the Pati-Salam model~\cite{Pati:1974yy}, Grand Unified Theories (GUTs)~\cite{Georgi:1974sy,Dimopoulos:1980hn,Senjanovic:1982ex,Frampton:1989fu,Witten:1985xc} and in the R-parity violating MSSM (see e.g. Ref.~\cite{Barbier:2004ez} for a review).
\vspace{-2mm} \item $W^\prime$ bosons: Singly charged, QCD neutral vector particles. They appear as Kaluza Klein excitations of the SM $W$ in composite~\cite{Weinberg:1962hj,Susskind:1978ms} or extra-dimensional models~\cite{Randall:1999ee} as well as in models with additional $SU(2)$ factors, including left-right-symmetric models~\cite{Mohapatra:1974gc}. \vspace{-2mm} \item $Z^\prime$ bosons: Neutral (color and electric charge) vector bosons. They can be singlets under $SU(2)_L$ but also neutral components of an $SU(2)_L$ multiplet. Again, they can be resonances of the SM $Z$ or originate from an abelian symmetry like $B-L$~\cite{Pati:1974yy} or gauged flavour symmetries~\cite{Froggatt:1978nt,He:1991qd}. \vspace{-2mm} \item New scalars and fermions (S/F): In this category we pigeonhole all vector-like fermions as well as all scalar particles that are not LQs. Vector-like fermions appear in GUTs~\cite{Hewett:1988xc,Langacker:1980js,delAguila:1982fs}, composite models or models with extra dimensions~\cite{Antoniadis:1990ew,ArkaniHamed:1998kx} and vector-like leptons are involved in the type I~\cite{Minkowski:1977sc,Lee:1977tib} and type III~\cite{Foot:1988aq} seesaw mechanisms. New fermions and scalars could be supersymmetric partners of SM particles~\cite{Haber:1984rc} and the MSSM also includes additional Higgses like in the 2HDMs~\cite{Chanowitz:1985ug,Branco:2011iw}. \end{itemize} \begin{boldmath}{$b\to s\ell^+\ell^-$:}\end{boldmath}~As these processes are suppressed in the SM, the required $O(20\%)$ NP effect (w.r.t.~the SM) is small and we have three different classes of solutions: 1)~A $Z'$ boson with flavour violating couplings to bottom and strange quarks can account for the anomaly at tree-level~\cite{Buras:2013qja,Gauld:2013qba,Altmannshofer:2014cfa,Crivellin:2015mga,Crivellin:2015lwa,Niehoff:2015bfa}.
Even though one in general expects an effect in $B_s-\bar B_s$ mixing~\cite{DiLuzio:2017fdq}, and the $Z^\prime$ can be produced resonantly at the LHC (see e.g.~\cite{Allanach:2015gkd}), such a solution is viable if the couplings to first generation quarks are suppressed and the model possesses an approximate global $U(2)$ flavour symmetry to protect it from $K^0-\bar K^0$ and $D^0-\bar D^0$ mixing~\cite{Calibbi:2019lvs}. 2)~Three LQ representations, the scalar triplet ($S_3$), vector singlet ($U_1$) and vector triplet ($U_3$), can already at tree-level give a good fit to $b\to s\ell^+\ell^-$ data~\cite{Hiller:2014yaa,Alonso:2015sja}. The couplings to electrons should be small because of $\mu\to e\gamma$~\cite{Crivellin:2017dsk} and $\mu\to e$ conversion. As effects in $b\to s\gamma$ and $B_s-\bar B_s$ mixing are only generated at the loop-level, these processes are not constraining and also the LHC bounds are weak~\cite{Diaz:2017lit} as no couplings to first generation quarks are needed. 3)~Loop effects involving box diagrams with new heavy scalars and fermions~\cite{Gripaios:2015gra,Arnan:2016cpy,Grinstein:2018fgb,Arnan:2019uhr} or top quarks~\cite{Aebischer:2015fzz} (either in combination with LQs~\cite{Becirevic:2017jtw} or a $Z^\prime$~\cite{Kamenik:2017tnu}) can account for the anomaly if NP is at or below the TeV scale. \begin{boldmath}{$b\to c\tau\nu$:}\end{boldmath}~As this transition is tree-level mediated in the SM, a tree-level NP contribution is also necessary to obtain the desired effect of 10\% w.r.t. the SM (for heavy NP with perturbative couplings). As it is a charged current process, the only possibilities are charged Higgses~\cite{Crivellin:2012ye,Fajfer:2012jt,Celis:2012dk}, $W'$~bosons~\cite{Bhattacharya:2014wla,Greljo:2015mma,Boucenna:2016qad,Greljo:2018ogz,Robinson:2018gza,Asadi:2018wea,Carena:2018cow} (with or without right-handed neutrinos) or LQs~\cite{Sakaki:2013bfa,Bauer:2015knc,Freytsis:2015qca,Fajfer:2015ycq}.
The first two options are disfavoured by the $B_c$ lifetime~\cite{Celis:2016azn,Alonso:2016oyd} and/or LHC searches~\cite{Bhattacharya:2014wla,Greljo:2015mma}, leaving LQs as the best option for a full explanation. However, also in this case, a solution is not trivial, as constraints from $B_s-\bar B_s$ mixing, $B\to K^*\nu\nu$ and LHC searches must be avoided. Therefore, either the $SU(2)_L$ singlet vector LQ~\cite{Calibbi:2015kma,Barbieri:2016las,DiLuzio:2017vat,Calibbi:2017qbu,Bordone:2017bld,Blanke:2018sro,Crivellin:2018yvo} or the singlet-triplet model~\cite{Crivellin:2017zlb,Crivellin:2019dwb,Gherardi:2020qhc}, which can avoid these constraints, are particularly interesting. \vspace{2mm} \begin{boldmath}$a_\mu$:\end{boldmath}~As the deviation from the SM prediction is of the order of its EW contribution, heavy NP around the TeV scale must possess an enhancement factor\footnote{See Ref.~\cite{Athron:2021iuf} for a recent overview on NP explanations of $a_\mu$.}. This can be provided via chiral enhancement, meaning that the chirality flip does not originate from the muon Yukawa coupling but from a larger coupling of NP to the SM Higgs doublet. In the MSSM, this factor is $\tan\beta$~\cite{Everett:2001tq,Feng:2001tr} and $a_\mu$ has been considered for many years as the smoking gun of the MSSM~\cite{Stockinger:2006zn}; however, due to the stringent LHC bounds, minimal versions with universal scalar masses~\cite{Nilles:1983ge} cannot account for it anymore~\cite{Costa:2017gup} while the general MSSM still can. Alternatively, models with generic new scalars and fermions can explain $a_\mu$~\cite{Czarnecki:2001pv,Kannike:2011ng,Kowalska:2017iqv,Crivellin:2018qmi,Crivellin:2021rbq} and there are two scalar LQ representations that address $a_\mu$ via a $m_t/m_\mu$ enhancement~\cite{Djouadi:1989md,Davidson:1993qk,Couture:1995he,Chakraverty:2001yg,ColuccioLeskow:2016dox}. \vspace{2mm} {\bf CAA}:~A sub per-mille effect in the determination of $V_{ud}$ suffices to explain the CAA.
In order to extract $V_{ud}$ from beta decays, knowledge of the Fermi constant, most precisely measured in muon decay~\cite{Tishchenko:2012ie}, is needed. However, as the Fermi constant could subsume NP, we have the following possibilities~\cite{Crivellin:2021njn}: 1) a direct (tree-level) modification of beta decays, 2) a direct (tree-level) modification of muon decay, 3) a modified $W$-$\mu$-$\nu$ coupling, 4) a modified $W$-$u$-$d$ coupling. Note that the effect of a modified $W$-$e$-$\nu$ coupling in $V_{ud}$ cancels~\cite{Coutinho:2019aiy,Crivellin:2020lzu}. Option 1) could be realized by a $W'$~\cite{Capdevila:2020rrl} or a LQ~\cite{Crivellin:2021egp}; however, in the latter case stringent bounds from other flavour observables arise. Possibility 2) can be achieved by adding a singly charged $SU(2)_L$ singlet scalar~\cite{Crivellin:2020klg}, a $W^\prime$~\cite{Capdevila:2020rrl} or a $Z^\prime$ vector boson with flavour violating couplings~\cite{Buras:2021btx}, while options 3) and 4) can be realized by vector-like leptons~\cite{Crivellin:2020ebi,Kirk:2020wdk,Alok:2020jod} and vector-like quarks~\cite{Belfatto:2019swo,Branco:2021vhs,Belfatto:2021jhf}, respectively. Note that vector-like quarks could also solve the tension between the different determinations of $V_{us}$ while vector-like leptons have the potential to improve the global EW fit. \vspace{2mm} \begin{boldmath}{$q\bar q\!\to\! e^+e^-$:}\end{boldmath}~As CMS analyzes ``non-resonant'' electrons, meaning that they do not originate from the on-shell production of a new particle, NP must be heavier than the LHC energy scale. Furthermore, since the muon channel agrees with the SM expectations, constructive interference in the electron channel is required. This is possible with NP coupling to first generation quarks and electrons with $O(1)$ couplings and masses around 10~TeV.
Therefore, a $Z^\prime$~boson or a LQ coupling to electrons and first generation quarks~\cite{Crivellin:2021egp} has the potential to explain this CMS measurement. However, taking into account the information on the various $q^2$ bins of the muon to electron ratio, one can see that NP is only preferred by $\approx 3\,\sigma$~\cite{Crivellin:2021rbf}, meaning that the excess of $4\,\sigma$ cannot be fully explained.\footnote{{Note that even though here an effect in electrons is required, while $b\to s\ell^+\ell^-$ points towards NP mainly related to muons, this is not a priori a contradiction. It is well possible that NP couples electrons to first generation quarks (explaining $q\bar q\!\to\! e^+e^-$) and muons to second and third generation quarks (explaining $b\to s\ell^+\ell^-$).}} \vspace{2mm} \begin{boldmath}$\Delta A_{FB}$:\end{boldmath}~A good fit to data requires a non-zero Wilson coefficient of the tensor operators. Importantly, among the set of renormalizable models, only two scalar LQs can generate this operator at tree-level and only the $SU(2)_L$ singlet gives a good fit to data~\cite{Carvunis:2021dss}. However, even in this case, due to the constraints from other asymmetries, $\Delta A_{FB}$ cannot be fully explained, but the global fit to $b\to c\mu\nu$ and $b\to c e\nu$ data can be improved by more than $3\sigma$~\cite{Carvunis:2021dss}.
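As a back-of-envelope check of the significance quoted above for the non-resonant di-electron excess (44 observed events versus an expectation of $29.2\pm3.6$), one can compute a naive Gaussian pull; this simple estimate ignores the Poisson fluctuation of the observed count and is only meant to reproduce the order of magnitude of the quoted $\approx4\sigma$:

```python
# Naive Gaussian pull for the CMS di-electron excess quoted above:
# 44 observed events vs 29.2 +/- 3.6 expected. This back-of-envelope
# estimate ignores Poisson fluctuations of the observed count and is
# only meant to reproduce the order of magnitude of the ~4 sigma figure.
observed = 44.0
expected = 29.2
uncertainty = 3.6

pull = (observed - expected) / uncertainty
print(f"naive significance: {pull:.1f} sigma")
```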
\begin{table}[t] \begin{center} \begin{tabular}{ l | c c c c} &statistics & experiment & theory & NP explanation \\\hline $b \to s\ell \ell$ & $\star\star\star\star\star$ & $\star\star\star\star\star$ & $\star\star\star\star$ & $\star\star\star\star$ \\ $b \to c\tau \nu $ & $\star\star\star$ & $\star\star\star$ & $\star\star\star\star$ & $\star\star\star$\\ $a_\mu$ & $\star\star\star\star$ & $\star\star\star\star$ & $\star\star\star$ & $\star\star\star$\\ CAA & $\star\star\star$ & $\star\star\star\star$ & $\star\star$ & $\star\star\star\star\star$\\ $q\bar q\to e^+e^-$ & $\star\star\star$ & $\star\star\star$ & $\star\star\star\star\star$ & $\star\star\star$\\ $\Delta A_{FB}$ & $\star\star\star\star$ & $\star\star$ & $\star\star\star\star\star$ & $\star\star$\\ $\tau\to\mu\nu\nu$ & $\star\star$ & $\star\star\star$ & $\star\star\star\star$ & $\star\star\star\star$ \end{tabular} \end{center} \caption{Quantitative comparison, using a one-to-five-star rating, of the different anomalies pointing towards LFUV.} \label{table:Anomalies} \end{table} \vspace{2mm} \begin{boldmath}$\tau\to\mu\nu\nu$:\end{boldmath}~Explanations of $\tau\to\mu\nu\nu$ are very similar to NP explanations of the CAA via a modified Fermi constant (with $\tau\to\mu\nu\nu$ taking the role of $\mu\to e\nu\nu$). It can be achieved by a tree-level effect via a singly charged $SU(2)_L$ singlet scalar~\cite{Crivellin:2020klg}, a $W^\prime$ or a flavour violating $Z^\prime$~\cite{Buras:2021btx}. Alternatively, a modification of the $W$-$\tau$-$\nu$ coupling through the mixing of vector-like leptons or a $W^\prime$ boson is possible. Furthermore, a $Z^\prime$ coupling to muons and tau leptons can generate the right effect via box diagrams~\cite{Altmannshofer:2014cfa,Crivellin:2020oup}. \begin{figure}[t] \begin{centering} \includegraphics[width=0.7\textwidth]{NPimplications} \caption{Synthesis of possible explanations (red boxes) of the anomalies (blue boxes). 
The arrows indicate to which extensions of the SM the anomalies point. Thick arrows stand for probable explanations without significant experimental or theoretical shortcomings, while the thin ones indicate that the new particles can only partially explain the measurement or generate problems in other observables. The red arrow indicates that the $Z^\prime$ and $W^\prime$ could be components of a single $SU(2)_L$ triplet.} \label{fig:Explanations} \end{centering} \end{figure} \section{Conclusions and Outlook} \label{conclusions} In this letter we proposed that not only the ratios $R(K^{(*)})$, $R(D^{(*)})$ and $Q_5$ could be manifestations of LFUV physics beyond the SM but that also several other anomalies admit an interpretation in this context. We display these observables in Table~\ref{table:Anomalies}, where we compare them, using a star rating from one to five, regarding four categories: \vspace{-1mm} \begin{itemize} \item Statistics: Statistical significance of the deviations from the SM prediction. \vspace{-1mm} \item Experiment: Variety and independence of the available measurements. Robustness against systematic uncertainties. \vspace{-1mm} \item Theory: Solidity and coherence of the SM prediction. \vspace{-1mm} \item NP explanations: Explainability in terms of new particles and interactions, taking into account possible conflicts with other observables. \end{itemize} \vspace{-2mm} From this table we can see that $b\to s\ell^+\ell^-$ has top ratings for statistics and experiment, and also its theory prediction is very solid due to recent progress in the computation of long-distance charm contributions~\cite{Gubernari:2020eft}. For $a_\mu$ the two weaker points are the SM prediction regarding HVP and the fact that the required NP effect is large, such that it is not easy to obtain in a UV-complete model. $\Delta A_{FB}$ has only two stars for the NP explanation, as the tension can only be eased but not fully explained. 
Both the CAA and $\tau\to\mu\nu\nu$ are very easy to explain via NP, as only a per-mille effect is required, but have weaknesses in their SM predictions and statistics, respectively. As we have shown, all the anomalies fit into the coherent pattern of LFUV, so common explanations are not only possible but even probable. We synthesize the viable SM extensions (LQs, S/F, $W^\prime$ and $Z^\prime$) which can account for them in Fig.~\ref{fig:Explanations}. From there we can see that the various anomalies point towards different possible extensions of the SM and that an extension by a single new field cannot explain all anomalies; however, LQs in particular are a very promising solution. On the one hand, this could indicate that not all anomalies might be confirmed in the future. On the other hand, it is of course possible (and maybe even likely) that the SM is superseded by a more complicated theory with a sizeable number of new degrees of freedom. Importantly, even if only one of the anomalies were confirmed, this would prove the existence of physics beyond the SM at the (multi-)TeV scale, which would not only constitute the biggest breakthrough in particle physics within the last decades but also provide a convincing physics case for future colliders, guiding the field into a new era. \section*{Acknowledgments} A.C. gratefully acknowledges the support by the Swiss National Science Foundation under Project No.\ PP00P21\_76884. JM acknowledges financial support from the Spanish Ministry of Science, Innovation and Universities (PID2020-112965GB-I00/AEI/ 10.13039/501100011033) and by ICREA under the ICREA Academia programme. \section*{Bibliography}
\section{#1}} \renewcommand{\theequation}{\arabic{equation}} \newcommand{\app}[1]{\setcounter{section}{0} \setcounter{equation}{0} \renewcommand{\thesection}{\Alph{section}} \section{#1}} \newcommand{\begin{equation}}{\begin{equation}} \newcommand{\begin{eqnarray}}{\begin{eqnarray}} \newcommand{\end{equation}}{\end{equation}} \newcommand{\end{eqnarray}}{\end{eqnarray}} \newcommand{\nonumber \end{equation}}{\nonumber \end{equation}} \newcommand{\hspace{0.7cm}}{\hspace{0.7cm}} \def\spinst#1#2{{#1\brack#2}} \def\vskip .4cm{\vskip .4cm} \def\noindent{\noindent} \def\omega{\omega} \def\alpha{\alpha} \def\beta{\beta} \def\gamma{\gamma} \def\Gamma{\Gamma} \def\delta{\delta} \defq-q^{-1}{q-q^{-1}} \def{1 \over \la}{{1 \over q-q^{-1}}} \def{1 \over \la}{{1 \over q-q^{-1}}} \def\bar{\alpha}{\bar{\alpha}} \def\bar{\beta}{\bar{\beta}} \def\bar{\gamma}{\bar{\gamma}} \def\bar{\delta}{\bar{\delta}} \def\bar{a}{\bar{a}} \def\bar{A}{\bar{A}} \def\bar{B}{\bar{B}} \def{\bf C}{\bar{C}} \def\bar{D}{\bar{D}} \def\bar{a}{\bar{a}} \def\bar{c}{\bar{c}} \def\bar{d}{\bar{d}} \def\bar{b}{\bar{b}} \def\bar{e}{\bar{e}} \def\bar{f}{\bar{f}} \def\hat\xi{\hat\xi} \def\hat\Xi{\hat\Xi} \def\hat u{\hat u} \def\hat v{\hat v} \def\bar u{\bar u} \def\bar v{\bar v} \def\bar \xi{\bar \xi} \def{\alpha}^{\prime}{{\alpha}^{\prime}} \def{\beta}^{\prime}{{\beta}^{\prime}} \def{\gamma}^{\prime}{{\gamma}^{\prime}} \def{\delta}^{\prime}{{\delta}^{\prime}} \def{\rho}^{\prime}{{\rho}^{\prime}} \def{\tau}^{\prime}{{\tau}^{\prime}} \def\rho ''{\rho ''} \def{\theta}^{\prime}{{\theta}^{\prime}} \def{i\over 2}{{i\over 2}} \def{1 \over 4}{{1 \over 4}} \def{1 \over 2}{{1 \over 2}} \def\varepsilon{\varepsilon} \def\wedge{\wedge} \def\otimes{\otimes} \def\theta{\theta} \def\delta{\delta} \defi_{\de {\vec y}}{i_{\delta {\vec y}}} \defl_{\de {\vec y}}{l_{\delta {\vec y}}} \def{\vec t}{{\vec t}} \def{\tilde G}{{\tilde G}} \def\vec {\de y}{\vec {\delta y}} \def\partial{\partial} \def{\partial \over {\partial x^+}}{{\partial 
\over {\partial x^+}}} \def{\partial \over {\partial x^-}}{{\partial \over {\partial x^-}}} \def{\partial \over {\partial x^i}}{{\partial \over {\partial x^i}}} \def\pdy#1{{\partial \over {\partial y^{#1}}}} \def\pdx#1{{\partial \over {\partial x^{#1}}}} \def\pdyx#1{{\partial \over {\partial (yx)^{#1}}}} \defq-Poincar\'e~{q-Poincar\'e~} \def\A#1#2{ A^{#1}_{~~~#2} } \def\R#1#2{ R^{#1}_{~~~#2} } \def\PI#1#2{(P_I)^{#1}_{~~~#2} } \def\PJ#1#2{ (P_J)^{#1}_{~~~#2} } \def(P_+,P_+){(P_+,P_+)} \def(P_+,P_-){(P_+,P_-)} \def(P_-,P_+){(P_-,P_+)} \def(P_-,P_-){(P_-,P_-)} \def(P_+,P_0){(P_+,P_0)} \def(P_0,P_-){(P_0,P_-)} \def(P_0,P_+){(P_0,P_+)} \def(P_-,P_0){(P_-,P_0)} \def(P_0,P_0){(P_0,P_0)} \def(P_{\sigma},P_0){(P_{\sigma},P_0)} \def(P_0,P_{\sigma}){(P_0,P_{\sigma})} \def\Rp#1#2{ (R^+)^{#1}_{~~~#2} } \def\Rpinv#1#2{ [(R^+)^{-1}]^{#1}_{~~~#2} } \def\Rm#1#2{ (R^-)^{#1}_{~~~#2} } \def\Rinv#1#2{ (R^{-1})^{#1}_{~~~#2} } \def\Rpm#1#2{(R^{\pm})^{#1}_{~~~#2} } \def\Rpminv#1#2{((R^{\pm})^{-1})^{#1}_{~~~#2} } \def{\cal R}{{\cal R}} \def\Rb#1#2{{ {\cal R}^{#1}_{~~~#2} }} \def\Rbp#1#2{{ ({\cal R}^+)^{#1}_{~~~#2} }} \def\Rbm#1#2{ ({\cal R}^-)^{#1}_{~~~#2} } \def\Rbinv#1#2{ ({\cal R}^{-1})^{#1}_{~~~#2} } \def\Rbpm#1#2{({\cal R}^{\pm})^{#1}_{~~~#2} } \def\Rbpminv#1#2{(({\cal R}^{\pm})^{-1})^{#1}_{~~~#2} } \defR^{\pm}{R^{\pm}} \defR^{+}{R^{+}} \defR^{-}{R^{-}} \def{\hat R}{{\hat R}} \def{\hat {\Rbo}}{{\hat {{\cal R}}}} \def\Rhat#1#2{ {\hat R}^{#1}_{~~~#2} } \def\L#1#2{ \Lambda^{#1}_{~~~#2} } \def\Linv#1#2{ (\Lambda^{-1})^{#1}_{~~~#2} } \def\Rbhat#1#2{ {\hat {\Rbo}}^{#1}_{~~~#2} } \def\Rhatinv#1#2{ ({\hat R}^{-1})^{#1}_{~~~#2} } \def\Rbhatinv#1#2{ ({\hat {\Rbo}}^{-1})^{#1}_{~~~#2} } \def\Z#1#2{ Z^{#1}_{~~~#2} } \def\Rt#1{ {\hat R}_{#1} } \def\Lambda{\Lambda} \def{\hat R}{{\hat R}} \def\ff#1#2#3{f_{#1~~~#3}^{~#2}} \def\MM#1#2#3{M^{#1~~~#3}_{~#2}} \def\cchi#1#2{\chi^{#1}_{~#2}} \def\ome#1#2{\omega_{#1}^{~#2}} \def\RRhat#1#2#3#4#5#6#7#8{\Lambda^{~#2~#4}_{#1~#3}|^{#5~#7}_{~#6~#8}} 
\def\RRhatinv#1#2#3#4#5#6#7#8{(\Lambda^{-1})^ {~#2~#4}_{#1~#3}|^{#5~#7}_{~#6~#8}} \def\LL#1#2#3#4#5#6#7#8{\Lambda^{~#2~#4}_{#1~#3}|^{#5~#7}_{~#6~#8}} \def\ZZ#1#2#3#4#5#6#7#8{Z^{~#2~#4}_{#1~#3}|^{#5~#7}_{~#6~#8}} \def\WW#1#2#3#4#5#6#7#8{W^{~#2~#4}_{#1~#3}|^{#5~#7}_{~#6~#8}} \def\PIJ#1#2#3#4#5#6#7#8{(P_I,P_J)^{~#2~#4}_{#1~#3}|^{#5~#7}_{~#6~#8}} \def\LLinv#1#2#3#4#5#6#7#8{(\Lambda^{-1})^ {~#2~#4}_{#1~#3}|^{#5~#7}_{~#6~#8}} \def\U#1#2#3#4#5#6#7#8{U^{~#2~#4}_{#1~#3}|^{#5~#7}_{~#6~#8}} \def{\bf C}{{\bf C}} \def\CC#1#2#3#4#5#6{{\bf C}_{~#2~#4}^{#1~#3}|_{#5}^{~#6}} \def\cc#1#2#3#4#5#6{C_{~#2~#4}^{#1~#3}|_{#5}^{~#6}} \def\C#1#2{ {\bf C}_{#1}^{~~~#2} } \def\c#1#2{ C_{#1}^{~~~#2} } \def\q#1{ {{q^{#1} - q^{-#1}} \over {q^{{1 \over 2}}-q^{-{1 \over 2}}}} } \def\Dmat#1#2{D^{#1}_{~#2}} \def\Dmatinv#1#2{(D^{-1})^{#1}_{~#2}} \def\Delta_R{\Delta_R} \def\Delta_L{\Delta_L} \def\f#1#2{ f^{#1}_{~~#2} } \def\F#1#2{ F^{#1}_{~~#2} } \def\T#1#2{ T^{#1}_{~~#2} } \def\Ti#1#2{ (T^{-1})^{#1}_{~~#2} } \def\Tp#1#2{ (T^{\prime})^{#1}_{~~#2} } \def\Th#1#2{ {\hat T}^{#1}_{~~#2} } \def T^{\prime} { T^{\prime} } \def\M#1#2{ M_{#1}^{~#2} } \defq^{-1}{q^{-1}} \defu^{-1}{u^{-1}} \defv^{-1}{v^{-1}} \defx^{-}{x^{-}} \defx^{+}{x^{+}} \deff_-{f_-} \deff_+{f_+} \deff_0{f_0} \def\Delta{\Delta} \def\Mat#1#2#3#4#5#6#7#8#9{\left( \matrix{ #1 & #2 & #3 \cr #4 & #5 & #6 \cr #7 & #8 & #9 \cr }\right) } \defA^{\prime}{A^{\prime}} \def\Delta^{\prime}{\Delta^{\prime}} \defI^{\prime}{I^{\prime}} \def\epsi^{\prime}{\varepsilon^{\prime}} \def\kappa^{\prime}{\kappa^{\prime}} \def\kappa^{\prime -1}{\kappa^{\prime -1}} \def\kappa^{-1}{\kappa^{-1}} \defg^{\prime}{g^{\prime}} \defq \rightarrow 1{q \rightarrow 1} \defF_{\mu\nu}{F_{\mu\nu}} \defA_{\mu}{A_{\mu}} \defA_{\nu}{A_{\nu}} \def\part_{\mu}{\partial_{\mu}} \def\part_{\nu}{\partial_{\nu}} \defA_{\nu]}{A_{\nu]}} \defB_{\nu]}{B_{\nu]}} \defZ_{\nu]}{Z_{\nu]}} \def\part_{[\mu}{\partial_{[\mu}} \def$[SU(2) \times U(1)]_q~${$[SU(2) \times U(1)]_q~$} \def$SU_q(2)~${$SU_q(2)~$} 
\def$SU(2) \times U(1)~${$SU(2) \times U(1)~$} \defg_{ij}{g_{ij}} \defSL_q(2,{\bf C}){SL_q(2,{\bf C})} \defR^*{R^*} \def\rr#1{R^*_{#1}} \def\Lpm#1#2{(L^{\pm})^{#1}_{~~#2}} \def\Lmp#1#2{(L^{\mp})^{#1}_{~~#2}} \defL^{\pm}{L^{\pm}} \defL^{+}{L^{+}} \defL^{-}{L^{-}} \def\Lp#1#2{(L^{+})^{#1}_{~~#2}} \def\Lm#1#2{(L^{-})^{#1}_{~~#2}} \defg_{U(1)}{g_{U(1)}} \defg_{SU(2)}{g_{SU(2)}} \def {\rm tg} { {\rm tg} } \def$Fun(G)~${$Fun(G)~$} \def{}_{{\rm inv}}\Ga{{}_{{\rm inv}}\Gamma} \def\Ga_{{\rm inv}}{\Gamma_{{\rm inv}}} \def\stackrel{q \rightarrow 1}{\longrightarrow}{\stackrel{q \rightarrow 1}{\longrightarrow}} \def\viel#1#2{e^{#1}_{~~{#2}}} \def\rightarrow{\rightarrow} \def{\det}_q{{\det}_q} \def{1 \over Q}{{1 \over Q}} \defB_n, C_n, D_n{B_n, C_n, D_n} \newcommand{\NP}[1]{Nucl.\ Phys.\ {\bf #1}} \newcommand{\PL}[1]{Phys.\ Lett.\ {\bf #1}} \newcommand{\NC}[1]{Nuovo Cim.\ {\bf #1}} \newcommand{\CMP}[1]{Comm.\ Math.\ Phys.\ {\bf #1}} \newcommand{\PR}[1]{Phys.\ Rev.\ {\bf #1}} \newcommand{\PRL}[1]{Phys.\ Rev.\ Lett.\ {\bf #1}} \newcommand{\MPL}[1]{Mod.\ Phys.\ Lett.\ {\bf #1}} \newcommand{\IJMP}[1]{Int.\ J.\ Mod.\ Phys.\ {\bf #1}} \newcommand{\JETP}[1]{Sov.\ Phys.\ JETP {\bf #1}} \newcommand{\TMP}[1]{Teor.\ Mat.\ Fiz.\ {\bf #1}} \begin{document} \begin{titlepage} \rightline{DFTT-18/93} \vskip 2em \begin{center}{\bf A NOTE ON QUANTUM STRUCTURE CONSTANTS} \\[6em] Leonardo Castellani ${}^{*}$ and Marco A. R-Monteiro${}^{\diamond *}$ \\[2em] {\sl${}^{*}$Istituto Nazionale di Fisica Nucleare, Sezione di Torino \\and\\Dipartimento di Fisica Teorica\\ Via P. Giuria 1, 10125 Torino, Italy.} \\ \vskip .4cm {\sl {}$^{\diamond}$Centro Brasileiro de Pesquisas Fisicas (CBPF)\\Rio de Janeiro, Brasil}\\ \vskip 2cm \end{center} \begin{abstract} The Cartan-Maurer equations for any $q$-group of the $A_{n-1}, B_n, C_n, D_n$ series are given in a convenient form, which allows their direct computation and clarifies their connection with the $q=1$ case. 
These equations, defining the field strengths, are essential in the construction of $q$-deformed gauge theories. An explicit expression $\omega ^i\wedge \omega^j= -\Z {ij}{kl}\omega ^k\wedge \omega^l$ for the $q$-commutations of left-invariant one-forms is found, with $\Z{ij}{kl} \omega^k \wedge \omega^l \stackrel{q \rightarrow 1}{\longrightarrow} \omega^j\wedge\omega^i$. \end{abstract} \vskip 5cm \noindent DFTT-18/93 \noindent April 1993 \vskip .2cm \noindent \hrule \vskip.2cm \hbox{\vbox{\hbox{{\small{\it email addresses:}}}\hbox{}} \vbox{\hbox{{\small Decnet=31890::castellani;}} \hbox{{\small Bitnet= [email protected] }}}} \end{titlepage} \newpage \setcounter{page}{1} Quantum groups \cite{Drinfeld}-\cite{Majid1} appear as a natural and consistent algebraic structure behind continuously deformed physical theories. Thus, in recent times, there have been various proposals for deformed gauge theories and gravity-like theories \cite{qgauge} based on $q$-groups. Such deformations are interesting from different points of view, depending also on which theory we are deforming. For example, in quantized $q$-gravity theories space-time becomes noncommutative, a fact that does not contradict (Gedanken) experiments below the Planck length, and that could possibly provide a regularization mechanism \cite{Connes,Majid2}. On the other hand, for the $q$-gauge theories constructed in \cite{Cas} spacetime can be taken to be the ordinary Minkowski spacetime, the $q$-commutativity residing on the fiber itself. As shown in \cite{Cas}, one can construct a $q$-lagrangian invariant under $q$-gauge variations. This could suggest a way to break the classical symmetry via a $q$-deformation, rather than by introducing ad hoc scalar fields. Note also that, unlike the $q=1$ case, the $q$-group $U_q(N)$ is simple, thus providing a ``quantum unification'' of $SU(N) \otimes U(1)$. 
In order to proceed from the algebraic $q$-structure to a dynamical $q$-field theory, it is essential to investigate the differential calculus on $q$-groups. Indeed this provides the $q$-analogues of the ``classical'' definitions of curvatures, field strengths, exterior products of forms, Bianchi identities, covariant and Lie derivatives and so on; see e.g. \cite{Aschieri} for a review. \vskip .4cm In this Letter we address and solve a specific problem: to find the Cartan-Maurer equations for any $q$-group of the $A,B,C,D$ series in explicit form. These equations define the field strengths of the corresponding $q$-gauge theories \cite{Cas}. The $A_{n-1}$ case was already treated in \cite{Aschieri}, where the structure constants were given explicitly, and shown to have the correct classical limit. To our knowledge, this problem has been tackled previously only in ref. \cite{Watamura}. There, however, the authors use (for the $B,C,D$ $q$-groups) a definition for the exterior product different from the one introduced in refs. \cite{Wor}, adopted in \cite{Jurco,Zumino,Aschieri} and in the present Letter. As we will comment later, their choice leads to a more complicated scenario. \vskip .4cm Quantum groups are characterized by their $R$-matrix, which controls the noncommutativity of the quantum group basic elements $\T{a}{b}$ (fundamental representation): \begin{equation} \R{ab}{ef} \T{e}{c} \T{f}{d} = \T{b}{f} \T{a}{e} \R{ef}{cd} \label{RTT} \end{equation} and satisfies the quantum Yang-Baxter equation \begin{equation} \R{a_1b_1}{a_2b_2} \R{a_2c_1}{a_3c_2} \R{b_2c_2}{b_3c_3}= \R{b_1c_1}{b_2c_2} \R{a_1c_2}{a_2c_3} \R{a_2b_2}{a_3b_3}, \label{QYB} \end{equation} a sufficient condition for the consistency of the ``RTT'' relations (\ref{RTT}). Its elements depend continuously on a (in general complex) parameter $q$, or even on a set of parameters. For $q \rightarrow 1$ we have $\R{ab}{cd} \stackrel{q \rightarrow 1}{\longrightarrow} \delta^a_c \delta^b_d$, i.e. 
the matrix entries $\T{a}{b}$ commute and become the usual entries of the fundamental representation. The $q$-analogues of the $\det T=1$, unitarity and orthogonality conditions can be imposed on the elements $\T{a}{b}$, consistently with the $RTT$ relations (\ref{RTT}), see \cite{FRT}. \vskip .4cm The (uniparametric) $R$-matrices for the $q$-groups of the $A_{n-1}, B_n, C_n, D_n$ series can be found in ref. \cite{FRT}. We recall the projector decomposition of the ${\hat R}$ matrix defined by $\Rhat{ab}{cd} \equiv \R{ba}{cd}$, whose $q \rightarrow 1$ limit is the permutation operator $\delta^a_d \delta^b_c$: \vskip .4cm $A_{n-1}$ series: \begin{equation} {\hat R}=qP_+-q^{-1} P_- \label{RprojA} \end{equation} with \begin{equation} \begin{array}{ll} &P_+={1 \over {q+q^{-1}}} ({\hat R}+q^{-1} I)\\ &P_-={1 \over {q+q^{-1}}} (-{\hat R}+qI)\\ &I=P_++P_- \end{array} \label{projA} \end{equation} \vskip .4cm $B_n,C_n,D_n$ series: \begin{equation} {\hat R}=qP_+-q^{-1} P_-+\varepsilon q^{\varepsilon-N}P_0 \label{RprojBCD} \end{equation} with \begin{equation} \begin{array}{ll} &P_+={1 \over {q+q^{-1}}} [{\hat R}+q^{-1} I-(q^{-1}+\varepsilon q^{\varepsilon-N})P_0]\\ &P_-={1 \over {q+q^{-1}}} [-{\hat R}+qI-(q-\varepsilon q^{\varepsilon-N})P_0]\\ &P_0={{1-q^2} \over {(1-\varepsilon q^{N+1-\varepsilon})(1+\varepsilon q^{\varepsilon-N+1})}} K\\ &K^{ab}_{~~cd}=C^{ab} C_{cd}\\ &I=P_++P_-+P_0 \end{array} \label{projBCD} \end{equation} where $\varepsilon=1$ for $B_n,D_n$, $\varepsilon=-1$ for $C_n$, and $N$ is the dimension of the fundamental representation $\T{a}{b}$ ($N=2n+1$ for $B_n$ and $N=2n$ for $C_n,D_n$); $C_{ab}$ is the $q$-metric, and $C^{ab}$ its inverse (cf. ref. \cite{FRT}). 
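As a concrete illustration (an addition to the original text, not part of the paper), the statements above can be checked numerically in the simplest case, the well-known $4\times 4$ $R$-matrix of the $A_1$ series. The sketch below, assuming only NumPy, verifies the quantum Yang-Baxter equation (\ref{QYB}), the Hecke condition (\ref{Hecke}) for ${\hat R}$, and the projector decomposition (\ref{RprojA})--(\ref{projA}):

```python
import numpy as np

q = 1.7                  # generic deformation parameter (arbitrary choice)
lam = q - 1.0 / q        # lambda = q - q^{-1}

# Standard 4x4 R-matrix of the A_1 (SL_q(2)) case, basis |11>, |12>, |21>, |22>
R = np.array([[q,   0.0, 0.0, 0.0],
              [0.0, 1.0, lam, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, q]])

I2, I4 = np.eye(2), np.eye(4)
# permutation operator P^{ab}_{cd} = delta^a_d delta^b_c
SWAP = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.0, 0.0, 1.0, 0.0],
                 [0.0, 1.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0, 1.0]])

# Quantum Yang-Baxter equation: R12 R13 R23 = R23 R13 R12 on V x V x V
R12 = np.kron(R, I2)
R23 = np.kron(I2, R)
P23 = np.kron(I2, SWAP)          # swap of the 2nd and 3rd factors
R13 = P23 @ R12 @ P23
print(np.allclose(R12 @ R13 @ R23, R23 @ R13 @ R12))

# Rhat^{ab}_{cd} = R^{ba}_{cd}, i.e. Rhat = SWAP R; Hecke: (Rhat - q)(Rhat + 1/q) = 0
Rhat = SWAP @ R
print(np.allclose((Rhat - q * I4) @ (Rhat + I4 / q), 0.0))

# Projectors: P+- are idempotent, sum to I, and reproduce Rhat = q P+ - q^{-1} P-
Pp = (Rhat + I4 / q) / (q + 1.0 / q)
Pm = (-Rhat + q * I4) / (q + 1.0 / q)
print(np.allclose(Pp @ Pp, Pp), np.allclose(Pm @ Pm, Pm))
print(np.allclose(Pp + Pm, I4), np.allclose(q * Pp - Pm / q, Rhat))
```

The value $q=1.7$ is an arbitrary generic choice; any nonzero $q$ works, and letting $q\to 1$ shows ${\hat R}$ approaching the permutation operator.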
{}From (\ref{RprojA}) and (\ref{RprojBCD}) we read off the eigenvalues of the ${\hat R}$ matrix, and deduce the characteristic equations: \begin{equation} ({\hat R}-qI)({\hat R}+q^{-1} I)=0~~{\rm for}~A_{n-1}~~({\rm Hecke~condition}) \label{Hecke} \end{equation} \begin{equation} ({\hat R}-qI)({\hat R}+q^{-1} I)({\hat R}-\varepsilon q^{\varepsilon-N} I)=0, {}~~{\rm for}~B_n,C_n,D_n \label{cubic} \end{equation} \vskip .4cm The differential calculus on $q$-groups, initiated in refs. \cite{Wor}, can be entirely formulated in terms of the $R$ matrix. The general constructive procedure can be found in ref. \cite{Jurco}, or, in the notations we adopt here, in ref. \cite{Aschieri}. \vskip .4cm As discussed in \cite{Wor} and \cite{Jurco}, we can start by introducing the (quantum) left-invariant one-forms $\ome{a}{b}$, whose exterior product \begin{equation} \ome{a_1}{a_2} \wedge \ome{d_1}{d_2} \equiv \ome{a_1}{a_2} \otimes \ome{d_1}{d_2} - \RRhat{a_1}{a_2}{d_1}{d_2}{c_1}{c_2}{b_1}{b_2} \ome{c_1}{c_2} \otimes \ome{b_1}{b_2} \label{exteriorproduct} \end{equation} \noindent is defined by the braiding matrix $\Lambda$: \begin{equation} \LL{a_1}{a_2}{d_1}{d_2}{c_1}{c_2}{b_1}{b_2} \equiv d^{f_2} d^{-1}_{c_2} \Rhat{b_1f_2}{c_2g_1} \Rhatinv{c_1g_1}{a_1e_1} \Rhatinv{a_2e_1}{d_1g_2} \Rhat{d_2g_2}{b_2f_2} \label{Lambda} \end{equation} \noindent For $q\rightarrow 1$ the braiding matrix $\Lambda$ becomes the usual permutation operator and one recovers the classical exterior product. Note that the ``quantum cotangent space'' $\Gamma$, i.e. the space spanned by the quantum one-forms $\ome{a}{b}$, has dimension $N^2$, in general bigger than its classical counterpart ($\dim\Gamma=N^2$ only for the $U_q(N)$ groups). This is necessary in order to have a bicovariant bimodule structure for $\Gamma$ (cf. ref. \cite{Watamura}). The same phenomenon occurs for the $q$-Lie generators defined below. 
For these, however, one finds restrictions (induced by the conditions imposed on the $\T{a}{b}$ elements) that in general reduce the number of independent generators. Working with $N^2$ generators is more convenient, since the nice quadratic relations (\ref{qLiealgebra}) of the $q$-Lie algebra become of higher order if one expresses them in terms of a reduced set of independent generators. For a discussion see \cite{Zumino}. \vskip .4cm The relations (\ref{Hecke}) and (\ref{cubic}) satisfied by the ${\hat R}$ matrices of the $A$ and $B,C,D$ series respectively reflect themselves in the relations for the matrix $\Lambda$: \begin{equation} (\Lambda +q^2 I)(\Lambda+q^{-2}I)(\Lambda -I)=0 \label{Aspectral} \end{equation} for the $A$ $q$-groups, and \begin{equation} \begin{array}{l} (\Lambda +q^2 I)(\Lambda+q^{-2}I)(\Lambda+ \varepsilon q^{\varepsilon+1-N}I)(\Lambda+\varepsilon q^{N-\varepsilon-1}I) {}~~~~~~~~~~~\\ {}~~~~~~~~~~~~~~~~~~~~~~\times~(\Lambda-\varepsilon q^{N+1-\varepsilon}I) (\Lambda-\varepsilon q^{-N-1+\varepsilon}I)(\Lambda -I) =0 \label{BCDspectral} \end{array} \end{equation} for the $B,C,D$ $q$-groups, with the same $\varepsilon$ as in (\ref{cubic}). We give later an easy proof of these two relations. \vskip .4cm Besides defining the exterior product of forms, the matrix $\Lambda$ contains all the information about the quantum Lie algebra corresponding to the $q$-group. \vskip .4cm The exterior differential of a quantum $k$-form $\theta$ is defined by means of the bi-invariant (i.e. left- and right-invariant) element $\tau=\sum_a \ome{a}{a}$ as follows: \begin{equation} d\theta \equiv {1 \over \la} [\tau \wedge \theta - (-1)^k \theta \wedge \tau]. \label{exteriordifferential} \end{equation} The normalization ${1 \over \la}$ is necessary in order to obtain the correct classical limit (see e.g. \cite{Aschieri}). This linear map satisfies $d^2=0$, the Leibniz rule and commutes with the left and right action of the $q$-group \cite{Jurco}. 
\vskip .4cm The exterior differentiation allows the definition of the ``quantum Lie algebra generators" $\cchi{a_1}{a_2}$, via the formula \cite{Wor} \begin{equation} da={1 \over \la}[\tau a - a\tau] =(\cchi{a_1}{a_2} * a) \ome{a_1}{a_2}. \label{qgenerators} \end{equation} \noindent where \begin{equation} {}~~\chi * a \equiv (id \otimes \chi) \Delta (a),~~~~\forall a \in G_q,~ \chi \in G_q' \end{equation} and $\Delta$ is the usual coproduct on the quantum group $G_q$, defined by $\Delta(\T{a}{b})\equiv \T{a}{c}\otimes \T{c}{b}$. The $q$-generators $\chi$ are linear functionals on $G_q$. By taking the exterior derivative of (\ref{qgenerators}), using $d^2=0$ and the bi-invariance of $\tau=\ome{b}{b}$, we arrive at the $q$-Lie algebra relations \cite{Jurco}, \cite{Aschieri}: \begin{equation} \cchi{d_1}{d_2} \cchi{c_1}{c_2} - \LL{e_1}{e_2}{f_1}{f_2} {d_1}{d_2}{c_1}{c_2} ~\cchi{e_1}{e_2} \cchi{f_1}{f_2} = \CC{d_1}{d_2}{c_1}{c_2}{a_1}{a_2} \cchi{a_1}{a_2} \label{qLiealgebra} \end{equation} \noindent where the structure constants are explicitly given by: \begin{equation} \CC{a_1}{a_2}{b_1}{b_2}{c_1}{c_2} ={1 \over \la} [- \delta^{b_1}_{b_2} \delta^{a_1}_{c_1} \delta^{c_2}_{a_2} + \LL{b}{b}{c_1}{c_2}{a_1}{a_2}{b_1}{b_2}]. \label{CC} \end{equation} \noindent and $\cchi{d_1}{d_2} \cchi{c_1}{c_2} \equiv (\cchi{d_1}{d_2} \otimes \cchi{c_1}{c_2}) \Delta$. Notice that \begin{equation} \LL{a_1}{a_2}{d_1}{d_2}{c_1}{c_2}{b_1}{b_2}= \delta^{b_1}_{a_1} \delta^{a_2}_{b_2} \delta^{c_1}_{d_1} \delta^{d_2}_{c_2} + O(q-q^{-1}) \label{Lexpansion} \end{equation} because the $R$ matrix itself has the form $R=I+(q-q^{-1})U$, with $U$ finite in the $q \rightarrow 1$ limit, see ref. \cite{FRT}. Then it is easy to see that (\ref{CC}) has a finite $q \rightarrow 1$ limit, since the ${1 \over \la}$ terms cancel. 
\vskip .4cm The Cartan-Maurer equations are found by applying to $\ome{c_1}{c_2}$ the exterior differential as defined in (\ref{exteriordifferential}): \begin{equation} d\ome{c_1}{c_2}={1 \over \la} (\ome{b}{b} \wedge \ome{c_1}{c_2} + \ome{c_1}{c_2} \wedge \ome{b}{b}) . \label{CartanMaurer} \end{equation} \noindent Written as above, the Cartan-Maurer equations are not of much use for computations. The right-hand side has an undefined ${0\over 0}$ classical limit. We need a formula of the type $\ome{c_1}{c_2} \wedge \ome{b}{b}= -\ome{b}{b} \wedge\ome{c_1}{c_2}+ O(q-q^{-1})$ that allows us to eliminate in (\ref{CartanMaurer}) the terms with the trace $\ome{b}{b}$ (which has no classical counterpart) and to obtain an explicitly $q \rightarrow 1$ finite expression. \vskip .4cm The desired ``$\omega$-permutator'' can be found as follows. We first treat the case of the $A_{n-1}$ series. We apply relation (\ref{Aspectral}) to the tensor product $\omega \otimes \omega$, i.e.: \begin{equation} (\L{ij}{kl}+q^2 \delta^i_k \delta^j_l) (\L{kl}{mn}+q^{-2} \delta^k_m \delta^l_n) (\L{mn}{rs}- \delta^m_r \delta^n_s)~\omega^r \otimes \omega^s=0 \end{equation} where we have used the adjoint indices ${~}^i \leftrightarrow {}_a^{~b}$, ${~}_i \leftrightarrow {}^a_{~b}$. Inserting the definition of the exterior product $\omega^m \wedge \omega^n=\omega^m \otimes \omega^n -\L{mn}{rs} \omega^r \otimes \omega^s$ yields \begin{equation} (\L{ij}{kl}+q^2 \delta^i_k \delta^j_l)(\L{kl}{mn}+ q^{-2} \delta^k_m \delta^l_n)~\omega^m \wedge \omega^n = 0 \end{equation} Multiplying by $\Lambda^{-1}$ gives $(\Lambda +(q^2+q^{-2})I+\Lambda^{-1})~ \omega\wedge\omega=0$, or equivalently \begin{equation} \omega^i \wedge \omega^j = - \Z{ij}{kl} \omega^k \wedge \omega^l \label{commom} \end{equation} \begin{equation} \Z{ij}{kl} \equiv {1\over {q^2 + q^{-2}}} [\L{ij}{kl} + \Linv {ij}{kl}], \label{defZ} \end{equation} cf. ref. \cite{Aschieri}. 
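The last step can be made explicit (an added side remark, not in the original): multiplying the two surviving factors of (\ref{Aspectral}) by $\Lambda^{-1}$ produces the combination $\Lambda+(q^2+q^{-2})I+\Lambda^{-1}$, which vanishes precisely on the eigenvalues $-q^{\pm 2}$ of $\Lambda$, the only ones surviving in $\omega\wedge\omega$. A minimal symbolic check of the underlying scalar identity with SymPy:

```python
import sympy as sp

q = sp.symbols('q', positive=True)
lam = sp.symbols('lam')   # stands for an eigenvalue of the braiding matrix Lambda

# Lam^{-1}(Lam + q^2)(Lam + q^{-2}) = Lam + (q^2 + q^{-2}) + Lam^{-1}
factored = (lam + q**2) * (lam + q**-2) / lam
expanded = lam + q**2 + q**-2 + 1 / lam
print(sp.simplify(factored - expanded))          # identity: difference is 0

# the combination annihilates the eigenvalues -q^{+2} and -q^{-2}
print(sp.simplify(expanded.subs(lam, -q**2)))
print(sp.simplify(expanded.subs(lam, -q**-2)))
```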
The $\omega$-permutator $\Z{ij}{kl}$ has the expected $q \rightarrow 1$ limit, that is $\delta^i_l \delta^j_k$. \vskip .4cm There is another way to deduce the permutator $Z$, based on projector methods, that we will use for the $B,C,D$ series. We first illustrate it in the easier $A$-case. Define \begin{equation} \PIJ{a_1}{a_2}{d_1}{d_2}{c_1}{c_2}{b_1}{b_2} \equiv d^{f_2} d^{-1}_{c_2} \Rhat{b_1f_2}{c_2g_1} \PI{c_1g_1}{a_1e_1} \Rhatinv{a_2e_1}{d_1g_2} \PJ{d_2g_2}{b_2f_2} \label{PIJ} \end{equation} with {\small I,J=}$+,-$, the projectors $P_+,P_-$ being given in (\ref{projA}). The $(P_I,P_J)$ are themselves projectors, i.e.: \begin{equation} (P_I,P_J) (P_K,P_L) =\delta_{IK} \delta_{JL} (P_I,P_J) \label{projIJ} \end{equation} Moreover \begin{equation} (I,I)=I \label{IIeqI} \end{equation} so that \begin{equation} (I,I)=(P_++P_-,P_++P_-)=(P_+,P_+) + (P_-,P_-) +(P_+,P_-) +(P_-,P_+)=I \label{sumprojIJ} \end{equation} Eq.s (\ref{projIJ}) and (\ref{IIeqI}) are easy to prove by using (\ref{PIJ}) and the relation, valid for all $A,B,C,D$ $q$-groups: \begin{equation} d^f d^{-1}_c \Rhat{bf}{cg} \Rhatinv{ce}{ba}=\delta^f_a \delta^e_g \end{equation} Projectors similar to (\ref{PIJ}) were already introduced in ref. \cite{Watamura}. {}From the definition (\ref{Lambda}) of $\Lambda$ , using (\ref{RprojA}) and (\ref{PIJ}) we can write \begin{equation} \Lambda=(P_+,P_+)+(P_-,P_-)-q^{-2} (P_+,P_-)-q^2 (P_-,P_+) \label{Lproj} \end{equation} This decomposition shows that $\Lambda$ has eigenvalues $1,q^{\pm 2}$, and proves therefore eq. (\ref{Aspectral}). 
{}From the definition of the exterior product $\omega \wedge \omega=\omega \otimes \omega-\Lambda \omega \otimes \omega$ we find the action of the projectors $(P_I,P_J)$ on $\omega\wedge\omega$: \begin{equation} (P_+,P_+)\omega\wedge\omega=(P_-,P_-)\omega\wedge\omega=0 \label{Pom} \end{equation} \begin{equation} (P_+,P_-)\omega\wedge\omega=(1+q^{-2}) (P_+,P_-)\omega\otimes\omega, {}~~(P_-,P_+)\omega\wedge\omega=(1+q^2) (P_-,P_+)\omega\otimes\omega \end{equation} Using (\ref{sumprojIJ}) and (\ref{Pom}) we find : \begin{equation} \omega \wedge \omega=[(P_+,P_+) + (P_-,P_-) +(P_+,P_-) + (P_-,P_+)] \omega \wedge \omega=[(P_+,P_-) +(P_-,P_+)] \omega \wedge\omega \end{equation} The $\omega$-permutator is therefore $Z=-(P_+,P_-)-(P_-,P_+)$. We can express it in terms of the $\Lambda$ matrix by observing that \begin{equation} (P_+,P_-)+(P_-,P_+)=-{1 \over q^2+q^{-2}} (\Lambda+\Lambda^{-1})+{2 \over q^2+q^{-2}} ((P_+,P_+)+(P_-,P_-)) \label{ZprojA} \end{equation} as one deduces from (\ref{Lproj}). Note that $\Lambda^{-1}$ is given in terms of projectors by the same expression as in (\ref{Lproj}), with $q \rightarrow q^{-1}$. When acting on $\omega\wedge\omega$ the $(P_+,P_+), (P_-,P_-)$ terms in (\ref{ZprojA}) can be dropped because of (\ref{Pom}), so that finally we arrive at eq. (\ref{defZ}). \vskip .4cm Because of the expansion (\ref{Lexpansion}) and a similar one for $\Lambda^{-1}$ we easily see that the $\omega$-permutator (\ref{defZ}) can be expanded as \begin{equation} \ZZ{c_1}{c_2}{d_1}{d_2}{a_1}{a_2}{b_1}{b_2} \ome{a_1}{a_2} \wedge \ome{b_1}{b_2}=\ome{d_1}{d_2} \wedge \ome{c_1}{c_2}+(q-q^{-1}) \WW{c_1}{c_2}{d_1}{d_2}{a_1}{a_2}{b_1}{b_2} \ome{a_1}{a_2} \wedge \ome{b_1}{b_2} \label{Zlim} \end{equation} where $W$ is a finite matrix in the limit $q \rightarrow 1$. \vskip .4cm Let us return to the Cartan-Maurer eqs. (\ref{CartanMaurer}). 
Using (\ref{commom}) we can write: \begin{equation} d\ome{c_1}{c_2}={1 \over \la} (\ome{b}{b} \wedge \ome{c_1}{c_2} - \ZZ{c_1}{c_2}{b}{b}{a_1}{a_2}{b_1}{b_2} \ome{a_1}{a_2} \wedge \ome{b_1}{b_2}) \label{CartanMaurerZ} \end{equation} where $Z$ is given by $(\Lambda+\Lambda^{-1})/(q^2+q^{-2})$, cf. (\ref{defZ}). Because of (\ref{Zlim}) we see that the $\ome{b}{b}$ terms disappear, and (\ref{CartanMaurerZ}) has a finite $q \rightarrow 1$ limit. \vskip .4cm We now repeat the above construction for the case of $q$-groups belonging to the $B,C,D$ series. \vskip .4cm Using (\ref{RprojBCD}) and (\ref{PIJ}) we find the following projector decomposition for the $\Lambda$ matrix : \begin{equation} \begin{array}{ll} \Lambda=&(P_+,P_+)+(P_-,P_-)+(P_0,P_0)+ \varepsilon q^{\varepsilon-1-N}(P_+,P_0)+\varepsilon q^{-(\varepsilon-1-N)}(P_0,P_+)\\ &-q^{-2} (P_+,P_-)-q^2 (P_-,P_+) -\varepsilon q^{N-\varepsilon-1} (P_0,P_-)-\varepsilon q^{-(N-\varepsilon-1)}(P_-,P_0) \end{array} \label{LprojBCD} \end{equation} from which we read off the eigenvalues of $\Lambda$, and prove eq. (\ref{BCDspectral}). 
Proceeding as in the $A$ case, we find the action of the projectors on $\omega\wedge\omega$ : \begin{equation} (P_+,P_+)\omega\wedge\omega=(P_-,P_-)\omega\wedge\omega=(P_0,P_0) \omega\wedge\omega=0 \label{PomBCD} \end{equation} \begin{eqnarray} &(P_+,P_-)\omega\wedge\omega=(1+q^{-2})(P_+,P_-)\omega\otimes\omega, \nonumber\\&(P_-,P_+)\omega\wedge\omega= (1+q^2)(P_-,P_+)\omega\otimes\omega \end{eqnarray} \begin{eqnarray} &(P_-,P_0)\omega\wedge\omega=(1+\varepsilon q^{-(N-\varepsilon-1)})(P_-,P_0)\omega\otimes\omega, \nonumber\\ &(P_0,P_-)\omega\wedge\omega=(1+\varepsilon q^{N-\varepsilon-1})(P_0,P_-)\omega\otimes\omega \label{Pom1BCD} \end{eqnarray} \begin{eqnarray} &(P_+,P_0)\omega\wedge\omega=(1-\varepsilon q^{\varepsilon-1-N})(P_+,P_0)\omega\otimes\omega, \nonumber\\ &(P_0,P_+)\omega\wedge\omega=(1-\varepsilon q^{-(\varepsilon-1-N)})(P_0,P_+)\omega\otimes\omega \label{Pom2BCD} \end{eqnarray} Again the sum of the projectors $(P_I,P_J)$ yields the identity, so that we can write: \begin{equation} \omega \wedge \omega=[(P_+,P_-) +(P_-,P_+)+(P_-,P_0)+(P_0,P_-)+(P_+,P_0)+(P_0,P_+)] \omega \wedge \omega \end{equation} where we have taken (\ref{PomBCD}) into account. The $\omega$-permutator $Z$ is therefore given by \begin{equation} Z=-[(P_+,P_-) +(P_-,P_+)+(P_-,P_0)+(P_0,P_-)+(P_+,P_0)+(P_0,P_+)] \label{ZBCDproj} \end{equation} Can we express it in terms of odd powers of the $\Lambda$ matrix, as in the case of the $A$ groups ? The answer is: only partially. 
In fact, by elementary algebra we find that \begin{equation} Z=-\alpha(\Lambda+\Lambda^{-1})-\beta(\Lambda^3+\Lambda^{-3}) -(1-\alpha q_{-\varepsilon N}-\beta q_{-3\varepsilon N}) [(P_{\sigma},P_0)+(P_0,P_{\sigma})] \label{ZBCD} \end{equation} with $\sigma \equiv sgn(\varepsilon)$ and \begin{equation} \alpha=-{{1+\beta q_6} \over q_2} \end{equation} \begin{equation} \beta={{q_2-q_{\varepsilon N-2}}\over{q_6 q_{\varepsilon N-2}-q_2 q_{3(\varepsilon N-2)}}} \end{equation} \begin{equation} q_n \equiv q^n+q^{-n} \end{equation} Note: $\Lambda^r$ is given by \begin{eqnarray} &\Lambda^r=(P_+,P_+)+(P_-,P_-)+(P_0,P_0) +\varepsilon^r [q^{r(\varepsilon-1-N)}(P_+,P_0)+ q^{-r(\varepsilon-1-N)}(P_0,P_+)] \nonumber\\ &+(-1)^r[q^{-2r} (P_+,P_-) +q^{2r} (P_-,P_+) ] \nonumber\\ &+(-\varepsilon)^r [q^{r(N-\varepsilon-1)}(P_0,P_-)+ q^{-r(N-\varepsilon-1)}(P_-,P_0)] \end{eqnarray} Let us check that $Z$ in (\ref{ZBCD}) has a correct classical limit. We have $\alpha \stackrel{q \rightarrow 1}{\longrightarrow} - {9\over 16}$ and $\beta \stackrel{q \rightarrow 1}{\longrightarrow} {1\over 16}$; taking into account that the $(P_{\sigma},P_0), (P_0,P_{\sigma})$ terms disappear in the classical limit (cf. eq. (\ref{Pom1BCD}), (\ref{Pom2BCD})) when applied to $\omega\wedge\omega$, we find the expected limit $\Z{ij}{kl} \stackrel{q \rightarrow 1}{\longrightarrow}\delta^i_l \delta^j_k$. \vskip .4cm The Cartan-Maurer equations are deduced as before, and are given by (\ref{CartanMaurerZ}) where now $Z$ is the $\omega$ permutator of eq. (\ref{ZBCD}) (Note: for explicit calculations the expression (\ref{ZBCDproj}) or equivalently $Z=(P_+,P_+)+(P_-,P_-)+(P_0,P_0)-I$ is more convenient). Again the $\ome{b}{b}$ terms drop out since $Z$ admits the expansion $\ZZ{c_1}{c_2}{d_1}{d_2}{a_1}{a_2}{b_1}{b_2} \ome{a_1}{a_2} \wedge \ome{b_1}{b_2}=-\ome{d_1}{d_2}\wedge\ome{c_1}{c_2}+O(q-q^{-1})$. 
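The classical limits of $\alpha$ and $\beta$ quoted above, and the sector identities behind (\ref{ZBCD}), can be checked symbolically. The sketch below is a verification aid only; the choice $\varepsilon=+1$, $N=5$ (so $\varepsilon N-2=3$) is an illustrative assumption, not fixed by the text.

```python
import sympy as sp

q = sp.symbols('q', positive=True)

def qn(n):
    # the shorthand q_n = q**n + q**(-n) used in the text
    return q**n + q**(-n)

# illustrative choice (assumption): epsilon = +1, N = 5, hence eps*N - 2 = 3
eN2 = 3
beta = (qn(2) - qn(eN2)) / (qn(6)*qn(eN2) - qn(2)*qn(3*eN2))
alpha = -(1 + beta*qn(6)) / qn(2)

# classical limits quoted in the text: alpha -> -9/16, beta -> 1/16
assert sp.limit(alpha, q, 1) == sp.Rational(-9, 16)
assert sp.limit(beta, q, 1) == sp.Rational(1, 16)

# on the (P_+, P_-) sector Lambda has eigenvalue -q**(-2); there the
# combination -alpha*(L + 1/L) - beta*(L**3 + 1/L**3) must reduce to -1
L = -q**(-2)
assert sp.simplify(-alpha*(L + 1/L) - beta*(L**3 + L**(-3)) + 1) == 0

# on the (P_0, P_-) sector (for eps = 1, N = 5) the eigenvalue is -q**(N-2) = -q**3
L = -q**3
assert sp.simplify(-alpha*(L + 1/L) - beta*(L**3 + L**(-3)) + 1) == 0
```

The two sector checks confirm that, away from the $(P_{\sigma},P_0)$, $(P_0,P_{\sigma})$ pairs, the odd-power expansion alone already reproduces the coefficient $-1$ required by (\ref{ZBCDproj}).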
\vskip .4cm In conclusion: we have found an explicit (and computable) expression for the Cartan-Maurer equations of the $B_n, C_n, D_n$ $q$-groups. This opens the possibility of constructing gauge theories of these $q$-groups, following the procedure used in \cite{Cas} for the $A_{n-1}$ $q$-groups. \vskip .4cm Finally, let us comment on the differential calculus presented by the authors of ref. \cite{Watamura}. Their definition of the exterior product in the $B,C,D$ case differs from ours (and from the one adopted in \cite{Wor,Jurco,Zumino}), and essentially amounts to requiring that $(P_{\sigma},P_0) \omega \wedge \omega=0$, $(P_0,P_{\sigma}) \omega\wedge\omega=0$, besides (\ref{PomBCD}). This has one advantage: the term $(P_{\sigma},P_0)+(P_0,P_{\sigma})$ disappears in the expression (\ref{ZBCD}). The disadvantage is that the defining formula $\omega\wedge\omega=(I-\Lambda)\omega\otimes\omega$ no longer holds for the $B,C,D$ series, so that the general treatment of ref. \cite{Wor} and the constructive procedure of refs. \cite{Jurco} do not apply. \vfill\eject
\section{Introduction} Problems involving singularities have lately become a very popular topic of research in the mathematical community. A good amount of research has been done to prove the existence of a solution to the problem \begin{eqnarray} -\Delta u&=& f(x)h(u)~\text{in}~\Omega,\nonumber\\ u&=&0~\text{on}~\partial\Omega.\label{ieq0} \end{eqnarray} A few noteworthy results on such a problem with $\Omega\subset\mathbb{R}^N$ being a bounded domain can be found in \cite{lazer}, \cite{giach1, giach2}, \cite{bocca}, \cite{tali}, \cite{gati}, \cite{arcoya}, \cite{Oliva h} and the references therein. An existence and uniqueness result has been proved by Lazer and McKenna \cite{lazer}, pertaining to the case $h(s)=\frac{1}{s^{\gamma}}$, with $f$ being a H\"{o}lder continuous function. The authors in \cite{lazer} arrived at the uniqueness of the solution by applying the sub-super solution method. Furthermore, the authors in \cite{lazer} have proved that the problem possesses a solution if and only if $\gamma<3$. They have also shown that for $\gamma>1$ solutions to the problem with infinite energy exist. A weaker condition on the function $f$ can be considered by picking $f$ from $L^p(\Omega)$, for $p \geq 1$, or from the space of Radon measures. Boccardo and Orsina in \cite{bocca} have proved the existence and uniqueness of a solution to a problem similar to the one in \cite{lazer}, but with weaker regularity assumptions on $f$. They have considered $f\geq0$ in $\Omega$, $\gamma>0$. The existence result depends on the $L^p$ space from which $f$ is chosen. The value of $\gamma$ also decides the space to which the solution belongs, i.e. if $\gamma < 1$ then $u\in W_0^{1,1}(\Omega)$, if $\gamma=1$ then $u\in H_0^1(\Omega)$ and if $\gamma>1$ then $u\in H_{loc}^1(\Omega)$, where the zero Dirichlet boundary condition is assumed in a sense weaker than the usual sense of trace.
When $f$ is a bounded Radon measure the problem may not possess a solution in general, and in this case the question of nonexistence is of great importance, as seen in \cite{bocca}. In \cite{cave} the authors have considered a nonlinear elliptic boundary value problem with a general singular lower order term, and have proved the existence of a distributional solution. A slight improvement of the result in \cite{lazer} can be found in \cite{yijing}. In \cite{canino1}, a minimax method has been used to address the `jumping problem' for a singular semilinear elliptic equation. Symmetry of solutions has been shown in \cite{canino2} for a class of semilinear equations with singular nonlinearities. In \cite{canino3}, the authors have considered quasilinear elliptic equations involving the $p$-Laplacian and singular nonlinearities. They have deduced a few comparison principles and have proved some uniqueness results. The reader may also refer to a series of noteworthy contributions made by Canino et al. in \cite{tali}, \cite{gati}, \cite{arcoya} to the semilinear elliptic problem with a singularity. It is also worth mentioning the work due to Giachetti et al. \cite{giach1, giach2} and the references therein. In this paper we will prove the existence of a weak solution to the following PDE. \begin{eqnarray}\label{ieq1} -\Delta u&=& f(x)h(u)+\mu~\text{in}~\Omega,\nonumber\\ u&=&0~\text{on}~\partial\Omega,\nonumber\\ u&>& 0~\text{on}~\Omega, \end{eqnarray} where $\Omega$ is a bounded domain in $\mathbb{R}^N$ for $N \geq 2$, $f$ is a nonnegative function and $\mu$ is a nonnegative, bounded Radon measure. We will further assume greater regularity on $f$ in $\eqref{ieq1}$ to guarantee the existence of a very weak solution. \subsection{Notations} This subsection is about the notations and definitions which will be used throughout this article. Henceforth, we will denote by $\Omega$ a bounded domain in $\mathbb{R}^N$.
The Sobolev space denoted by $W^{k,p}(\Omega)$ $\cite{Evans}$ consists of all locally summable functions $u: \Omega\rightarrow \mathbb{R}$ such that for each multiindex $\alpha$ with $|\alpha|\leq k$, $D^{\alpha}u$ exists in the weak sense and belongs to $L^p(\Omega)$. If $u\in W^{k,p}(\Omega)$, we define its norm as \begin{center} $\|u\|_{W^{k,p}(\Omega)}= { \left\{ \begin{array}{ll} \left(\sum_{|\alpha|\leq k}\int_\Omega |{D^{\alpha}u}|^p dx\right)^{\frac{1}{p}} &(1\leq p<\infty),\\ \sum_{|\alpha|\leq k}\|D^{\alpha}u\|_{L^{\infty}(\Omega)} & (p=\infty). \end{array} \right. }$ \end{center} The closure of $C_c^{\infty}(\Omega)$ in $W^{k,p}(\Omega)$ is denoted by $W_0^{k,p}(\Omega)$. The local Sobolev space, $W_{loc}^{k,p}(\Omega)$, is defined to be the set of functions $u$ such that $u\in W^{k,p}(K)$ for every compact subset $K$ of $\Omega$. The H\"{o}lder space $C^{k,\beta}(\bar{\Omega})$ with $0<\beta\leq1$ $\cite{Evans}$ consists of all those functions $u\in C^k(\bar{\Omega})$ such that $\sum\limits_{|\alpha|\leq k}\sup|D^{\alpha}u|+\sup\limits_{x\neq y} \left\{\frac{|D^ku(x)-D^ku(y)|}{|x-y|^{\beta}}\right\}$ is finite. We will use the following truncation functions. For fixed $k > 0$ $$T_k(s)=\max\{-k,\min\{k,s\}\}$$ and $$G_k(s)=(|s|-k)^+sign(s)$$ with $s\in \mathbb{R}$. It is easy to observe that $T_k(s)+G_k(s)=s$, for any $s\in \mathbb{R}$ and $k>0$.\\ We will denote the space of all finite Radon measures on $\Omega$ as $\mathcal{M}(\Omega)$, endowed with the `total variation norm' defined as $$\|\mu\|_{\mathcal{M}(\Omega)}=\int_{\Omega}d|\mu|.$$ The Marcinkiewicz space ${M}^q(\Omega)$ $\cite{Benilan}$ (or the weak $L^q(\Omega)$ space), $0 < q <\infty$, is the set of all measurable functions $f:\Omega\rightarrow \mathbb{R}$ whose distribution function satisfies an estimate of the form $$m(\{x\in \Omega:|f(x)|>t\})\leq \frac{C}{t^q},~t>0,~C<\infty,$$ where $m$ is the Lebesgue measure.
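The identity $T_k(s)+G_k(s)=s$ noted above is elementary; a minimal numerical sketch (sample values chosen for illustration only):

```python
import math

def T(k, s):
    # truncation at level k: clamp s to the interval [-k, k]
    return max(-k, min(k, s))

def G(k, s):
    # remainder beyond level k, carrying the sign of s: (|s| - k)^+ sign(s)
    return max(abs(s) - k, 0.0) * math.copysign(1.0, s)

# T_k(s) + G_k(s) = s for a few sample values
for s in (-7.5, -2.0, -0.3, 0.0, 0.3, 2.0, 7.5):
    assert math.isclose(T(3.0, s) + G(3.0, s), s)
```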
We also have ${M}^q(\Omega)\subset {M}^{\bar{q}}(\Omega)$ if $q\geq \bar{q}$, for some fixed positive $\bar{q}$. We recall here the following useful continuous embeddings \begin{equation}\label{mar} L^q(\Omega)\hookrightarrow {M}^q(\Omega)\hookrightarrow L^{q-\epsilon}(\Omega), \end{equation} for every $1<q<\infty$ and $0<\epsilon<q-1$. The article is organized as follows. In Section $2$ we state and prove the main results pertaining to the cases $0<\gamma< 1$ and $\gamma\geq 1$. In Section $3$ we will prove the existence of a solution to $\eqref{ieq1}$ in the very weak sense for $0<\gamma<1$, under two different regularity assumptions on $f$. Towards the end of the article, in the Appendix, we will derive a Kato type inequality corresponding to the problem defined in Section $3$. \section{Assumptions, Definitions and the main results} We consider the following boundary value problem. \begin{eqnarray}\label{eq0} -\Delta u&=& h(u)f+\mu ~\text{in}~\Omega,\nonumber\\ u &=& 0 ~\text{on}~ \partial\Omega,\\ u &>& 0 ~\text{in}~ \Omega,\nonumber \end{eqnarray} where $N>2$, $\mu$ is a nonnegative, bounded Radon measure on $\Omega$, and $f$ is a nonnegative function in $ L^m (\Omega)$ for $m \geq 1$, which could be a measure.\\ The function $h : \mathbb{R}^+ \rightarrow \mathbb{R}^+$ is a nonlinear, nonincreasing, continuous function such that \begin{equation}\label{eqn3} \underset{s\rightarrow 0^+}{\lim}~ h(s)\in (0,\infty]~ \text{and}~\underset{s\rightarrow \infty}{\lim} h(s)=h(\infty)<\infty \end{equation} with the following growth conditions near zero and infinity \begin{equation}\label{eqq1} \exists\, C_1, \underline{K}>0~\text{such that}~ h(s)\leq\frac{C_1}{s^\gamma}~\text{if}~s<\underline{K}, \gamma> 0, \end{equation} \begin{equation}\label{eqqq} \exists\, C_2, \overline{K}>0~\text{such that}~ h(s)\leq\frac{C_2}{s^\theta}~\text{if}~s>\overline{K}, \theta>0 \end{equation} respectively.
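A model nonlinearity satisfying all of the above hypotheses is $h(s)=s^{-\gamma}$, with $\theta=\gamma$ and $C_1=C_2=1$; this concrete choice (and the sample exponent $\gamma=1/2$) is an illustration, not an assumption of the paper. A quick numerical check:

```python
gamma = 0.5                      # illustrative exponent
h = lambda s: s**(-gamma)        # model nonlinearity h(s) = s^(-gamma)

xs = [0.01 * i for i in range(1, 2001)]   # sample grid in (0, 20]

# h is nonincreasing on (0, infinity)
assert all(h(a) >= h(b) for a, b in zip(xs, xs[1:]))

# blow-up at 0+ and finite limit h(infinity) = 0
assert h(1e-12) > 1e5 and h(1e12) < 1e-5

# growth conditions: h(s) <= C1/s^gamma near 0 and h(s) <= C2/s^theta near
# infinity, here with C1 = C2 = 1 and theta = gamma (equality, in fact)
assert all(h(s) <= 1.0 / s**gamma + 1e-12 for s in xs)
```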
We will later observe that the behavior of $h$ at infinity influences the regularity of the solution $u$. Now we give two important definitions which are essential to our study of the problem in ($\ref{eq0}$). \begin{definition}\label{defn-1} Let $(\mu_n)$ be a sequence of measurable functions in $\mathcal{M}(\Omega)$. We say that $(\mu_n)$ converges to $\mu\in \mathcal{M}(\Omega)$ in the sense of measures, i.e. $\mu_n\rightharpoonup \mu$ in $\mathcal{M}(\Omega)$, if $$\int_\Omega f d\mu_n\rightarrow \int_\Omega f d\mu,~\forall f\in C_0(\Omega).$$ \end{definition} \begin{definition} If $0<\gamma<1$, then a weak solution to the problem in ($\ref{eq0}$) is a function in $W_0^{1,1}(\Omega)$ such that \begin{eqnarray}\label{cond1} \int_{\Omega}\nabla u.\nabla\varphi&=&\int_{\Omega}fh(u)\varphi +\int_{\Omega}\varphi d\mu,~ \forall\varphi\in C_c^1(\bar\Omega) \end{eqnarray} and \begin{eqnarray}\label{cond2}\forall K\subset\subset\Omega, ~\exists~C_{K}~\text{such that}~u\geq C_{K}>0.\end{eqnarray} If $\gamma \geq 1$, then a weak solution to the problem is a function $u\in W_{loc}^{1,1}(\Omega)$ satisfying ($\ref{cond1}$) and ($\ref{cond2}$) such that $T_{k}^{\frac{\gamma+1}{2}}(u)\in W_0^{1,2}(\Omega)$ for each fixed $k>0$. \end{definition} \noindent In the subsections $\ref{sub1}$ and $\ref{sub2}$, we will prove the existence of a solution to the problem ($\ref{eq0}$) in both cases, i.e. $0<\gamma<1$ and $\gamma\geq 1$. We begin with the following sequence of problems. \begin{eqnarray}\label{eq1} -\Delta u_n&=& h_n\left(u_n+\frac{1}{n}\right)f_n+\mu_n ~\text{in}~\Omega,\nonumber\\ u_n &=& 0 ~\text{on}~ \partial\Omega, \end{eqnarray} where ($\mu_n$) is a sequence of smooth nonnegative functions bounded in $L^1(\Omega)$ that converges to $\mu$ in the sense of Definition $\ref{defn-1}$. Further, $h_n=T_n(h)$ and $f_n=T_n(f)$ are the truncations at level $n$.
The weak formulation of $\eqref{eq1}$ is \begin{equation}\label{weak} \int_{\Omega} \nabla u_n \nabla\varphi = \int_{\Omega} h_n\left(u_n+\frac{1}{n}\right)f_n\varphi + \int_\Omega \mu_n\varphi, ~\forall\varphi\in C_c^1(\bar\Omega). \end{equation} We now prove the existence of a solution to the problem ($\ref{eq1}$) in the following lemma. \begin{lemma}\label{13'} The problem $\eqref{eq1}$ admits a nonnegative weak solution $u_n\in W_0^{1,2} (\Omega)\cap L^{\infty}(\Omega)$. \begin{proof} We will apply Schauder's fixed point argument to prove the lemma. For a fixed $n\in \mathbb{N}$ let us define a map $$G:L^2(\Omega)\rightarrow L^2(\Omega)$$ such that, for any $v\in L^2(\Omega)$, we get a unique weak solution $w$ to the following problem \begin{eqnarray} -\Delta w&=& h_n\left(|v|+\frac{1}{n}\right)f_n+\mu_n ~\text{in}~\Omega,\nonumber\\ w &=& 0 ~\text{on}~ \partial\Omega.\label{schauder} \end{eqnarray} The existence of a unique $w\in W_0^{1,2}(\Omega)$ corresponding to a $v\in L^2(\Omega)$ is guaranteed by the Lax-Milgram theorem. Thus we can choose $w$ as a test function in the weak formulation of ($\ref{schauder}$), with the test function space $W_0^{1,2}(\Omega)$. Let $\lambda_1$ be the first eigenvalue of ($-\Delta$).
On using the Poincar\'{e} inequality we get \begin{align}\label{equation} \lambda_1\int _\Omega |w|^2 &\leq\int _\Omega |\nabla w|^2 \nonumber\\& = {\int _\Omega h_n\left(|v|+\frac{1}{n}\right)f_n w} +\int _\Omega w{\mu}_n~~\text{(by the weak formulation of (\ref{schauder}))}\nonumber \\& \leq{ C_1 \int _{(|v|+\frac{1}{n}< \underline{K})}\frac{f_n w} {(|v|+\frac{1}{n})^\gamma}} + \max_{[\underline{K},\overline{K}]} h(s) \int_{(\underline{K}\leq (|v|+\frac{1}{n})\leq \overline{K})} f_n w \nonumber\\& ~~~~~~+ C_2 \int _{(|v|+\frac{1}{n} > \overline{K})}\frac{f_n w} {(|v|+\frac{1}{n})^\theta} + C(n) \int_\Omega|w|\nonumber\\ & \leq { C_1 n^{1+\gamma}\int _{(|v|+\frac{1}{n}< \underline{K})}|w| + n \max_{[\underline{K},\overline{K}]} h(s) \int_{(\underline{K}\leq (|v|+\frac{1}{n})\leq \overline{K})} |w| + C_2n^{1+\theta} \int _{(|v|+\frac{1}{n} > \overline{K})} |w| }\nonumber \\& ~~~~~~+ C(n) \int_\Omega|w|\nonumber \\& \leq C(n,\gamma) \int _\Omega |w|\nonumber\\ &\leq C'.C(n,\gamma)\|w\|_2~~\text{(by using the H\"{o}lder's inequality)}. \end{align} This shows that \begin{equation}\label{eq} \|w\|_{L^2(\Omega)}\leq C'.C(n,\gamma), \end{equation} where $C'$ and $C(n,\gamma)$ are independent of $v$. We will prove that the map $G$ is continuous over $L^2(\Omega)$. For this let us consider a sequence ($v_k$) that converges to $v$ with respect to the $L^2$-norm. By the dominated convergence theorem we obtain \begin{center} $\|\big(h_n\left(v_k +\frac{1}{n}\right)f_n+{\mu}_n\big)-\big(h_n\left(v +\frac{1}{n}\right)f_n+{\mu}_n\big)\|_{L^2(\Omega)}\longrightarrow 0.$ \end{center} Hence, by the uniqueness of the weak solution, we can say that $w_k=G(v_k)$ converges to $w=G(v)$ in $L^2(\Omega)$. 
Thus $G$ is continuous over $L^2(\Omega)$.\\ {\it Claim:} $G(L^2(\Omega))$ is relatively compact in $L^2(\Omega)$.\\ We have proved in $\eqref{eq}$ that \begin{align} \int_\Omega |\nabla w|^2&=\int _\Omega |\nabla G(v)|^2\nonumber\\ & \leq C'.C(n,\gamma),\nonumber \end{align} for any $v\in L^2(\Omega)$, so that $G(L^2(\Omega))$ is relatively compact in $L^2(\Omega)$ by the Rellich-Kondrachov theorem. Hence the claim.\\ Therefore, on applying the Schauder fixed point theorem to $G$, we obtain a fixed point $u_n\in L^2(\Omega)$ that is a weak solution to $\eqref{eq1}$ in $W_0^{1,2}(\Omega)$. \\ Since $\left(h_n\left(u_n+\frac{1}{n}\right)f_n+{\mu}_n\right) \geq 0$, by the maximum principle we have $u_n\geq 0$. Furthermore, for a fixed $n$, the function $u_n$ belongs to $L^{\infty}(\Omega)$ (by Th\'{e}or\`{e}me 4.2, page 215 in $\cite{Stampacchia}$), because the right-hand side of ($\ref{eq1}$) is in $L^{\infty}(\Omega)$; this concludes the proof. \end{proof} \end{lemma} \noindent The next step is to prove that the sequence ($u_n$) is uniformly bounded from below on every compact subset of $\Omega$. \begin{lemma}\label{l4} For every $K\subset\subset \Omega$ there exists $C_K$ such that $u_n(x)\geq C_K >0$, a.e. in $K$, for every $n\in \mathbb{N}$. \begin{proof} Let us consider the sequence of problems \begin{eqnarray}\label{eq2} -\Delta v_n&=& h_n\left(v_n+\frac{1}{n}\right)f_n ~\text{in}~\Omega,\nonumber\\ v_n &=& 0 ~\text{on}~ \partial\Omega. \end{eqnarray} We first show the existence of a weak solution $v_n$ to the problem in ($\ref{eq2}$) such that for every $ K \subset \subset \Omega,~\text{there exists}~ C_K~ \text{such that}~ v_n \geq C_K >0,$ for almost every $x$ in $K$, $C_K$ being independent of $n$. The existence of a weak solution to ($\ref{eq2}$) follows from the same proof as in Lemma $\ref{13'}$.
Since $0\leq f_n\leq f_{n+1}$ and $h$ is nonincreasing, we have that $h_n$ is nonincreasing. Thus \begin{align}\label{monotonicity1} -\Delta v_n&=f_n h_n\left(v_n+\frac{1}{n}\right)\nonumber\\ &\leq f_{n+1}h_n\left(v_n+\frac{1}{n+1}\right)~ \text{in}~\Omega,\nonumber\\ v_n &= 0~ \text{on}~\partial\Omega. \end{align} We also know that $v_{n+1}$ is a weak solution to \begin{eqnarray}\label{monotonicity2} -\Delta v_{n+1}&=& f_{n+1}h_{n+1}\left(v_{n+1}+\frac{1}{n+1}\right) ~\text{in}~\Omega,\nonumber\\ v_{n+1} &=& 0 ~\text{on}~ \partial\Omega. \end{eqnarray} Taking the difference between the weak formulations of the problems in ($\ref{monotonicity1}$) and ($\ref{monotonicity2}$), with $(v_n-v_{n+1})^{+}$ as a test function, and using $h_n\leq h_{n+1}$, we obtain \begin{eqnarray}\label{monotonicity3} \int_{\Omega}\nabla(v_n-v_{n+1})\cdot\nabla(v_n-v_{n+1})^{+}&=&\int_{\Omega}|\nabla(v_n-v_{n+1})^{+}|^2\nonumber\\&\leq &\int_{\Omega}f_{n+1}\left[h_n\left(v_n+\frac{1}{n+1}\right)\right.\nonumber\\ & &\left.-h_{n+1}\left(v_{n+1}+\frac{1}{n+1}\right)\right](v_n-v_{n+1})^{+}\nonumber\\ &\leq & \int_{\Omega}f_{n+1}\left[\left\{h_n\left(v_n+\frac{1}{n+1}\right)\right.\right.\nonumber\\ & &\left.\left.-h_{n}\left(v_{n+1}+\frac{1}{n+1}\right)\right.\right\}\chi_{[v_n\leq v_{n+1}]}(v_n-v_{n+1})^{+}\nonumber\\ & &+\left\{h_n\left(v_n+\frac{1}{n+1}\right)\right.\nonumber\\ & &\left.\left.\left.-h_{n}\left(v_{n+1}+\frac{1}{n+1}\right)\right.\right\}\chi_{[v_n> v_{n+1}]}(v_n-v_{n+1})^{+}\right]\nonumber\\ & = & \int_{\Omega}f_{n+1}\left\{h_n\left(v_n+\frac{1}{n+1}\right)\right.\nonumber\\ & &\left.\left.-h_{n}\left(v_{n+1}+\frac{1}{n+1}\right)\right.\right\}\chi_{[v_n>v_{n+1}]}(v_n-v_{n+1})^{+}\nonumber\\ &\leq & 0. \end{eqnarray} Therefore, $(v_n-v_{n+1})^{+}=0$ almost everywhere in $\Omega$, thereby implying that $v_n \leq v_{n+1}$.
We again use Th\'{e}or\`{e}me 4.2, page 215 in \cite{Stampacchia} to obtain \begin{eqnarray}\label{boundedness} \|v_1\|_{\infty} &\leq& K_1\|f_1h_1(v_1+1)\|_{\infty}+K_2\|v_1\|_{2}\nonumber\\ & \leq & K_1+K_2=C~\text{(say)}. \end{eqnarray} Thus we have \begin{eqnarray} -\Delta v_1&=&f_1h_1(v_1+1)\nonumber\\& \geq &f_1h_1(\|v_1\|_{\infty}+1)\nonumber\\ &\geq &f_1h_1(C+1)\nonumber\\ &\geq&0. \end{eqnarray} Since $f_1h_1(C+1)$ is not identically equal to zero, by the strong maximum principle for $(-\Delta)$ we have $v_1>0$. Thus, by our choice of $K\subset\subset\Omega$, there exists a constant $C_K$ such that $v_1(x)\geq C_K>0$ for almost every $x\in K$.\\ Coming back to the proof of the lemma, we take the difference between the weak formulations of $\eqref{eq1}$ and $\eqref{eq2}$ respectively, with the choice of test function being $(u_n-v_n)^-$. It is easy to show that $u_n\geq v_n$ almost everywhere in $\Omega$. Indeed, we have \begin{align*} -\int_{\Omega} |\nabla(u_n-v_n)^-|^2 & =\int _{\Omega} \nabla (u_n-v_n).\nabla (u_n-v_n)^- \\& =\int _{\Omega} \bigg(h_n\left(u_n +\frac{1}{n}\right)-h_n\left(v_n +\frac{1}{n}\right) \bigg)f_n\cdot (u_n-v_n)^-\\ & + \int_{\Omega} \mu_n\cdot(u_n-v_n)^- \\& \geq 0. \end{align*} This implies $u_n\geq v_n$ almost everywhere in $\Omega$ and hence in $K$.\\ Thus we have shown that for every $K\subset\subset\Omega$ there exists $C_K$ such that $u_n\geq v_n\geq C_K>0$ almost everywhere in $K$. \end{proof} \end{lemma} \noindent Using the results proved so far we will now prove the existence of a solution to the problem in $\eqref{eq0}$. In order to do this we divide the problem into the following two cases. \subsection{The case of $0<\gamma < 1$}\label{sub1} In this subsection, we consider the problem in $\eqref{eq1}$ for the case of $0<\gamma<1$.
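Before deriving a priori estimates, the approximation scheme $\eqref{eq1}$ and the fixed-point map $G$ of Lemma $\ref{13'}$ can be illustrated numerically. The sketch below is a one-dimensional finite-difference analogue on $\Omega=(0,1)$; the data $f\equiv1$, $\mu_n\equiv1$, the model $h(s)=s^{-\gamma}$ with $\gamma=1/2$, and the truncation level $n=2$ are all illustrative assumptions, not taken from the text.

```python
import numpy as np

# 1-D finite-difference sketch of the Picard iteration behind Lemma 13':
# solve  -u'' = f * h_n(u + 1/n) + mu_n  on (0, 1),  u(0) = u(1) = 0.
M = 200                                      # number of interior grid points
dx = 1.0 / (M + 1)
gamma, n = 0.5, 2                            # illustrative exponent and level
h_n = lambda s: np.minimum(s**(-gamma), n)   # truncation h_n = T_n(h)
f = np.ones(M)                               # illustrative datum f = 1
mu = np.ones(M)                              # illustrative smooth approximation of mu

# stiffness matrix of -d^2/dx^2 with homogeneous Dirichlet conditions
A = (2*np.eye(M) - np.eye(M, k=1) - np.eye(M, k=-1)) / dx**2

u = np.zeros(M)
for _ in range(200):
    u_new = np.linalg.solve(A, f * h_n(u + 1.0/n) + mu)
    if np.max(np.abs(u_new - u)) < 1e-10:
        u = u_new
        break
    u = u_new

assert np.all(u > 0)   # positivity, as the maximum principle predicts
```

Since $u+1/n\geq1/2$ here, the right-hand side is Lipschitz in $u$ with a small constant, so plain Picard iteration already converges; the lemma's compactness argument handles the general case where no such contraction is available.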
\begin{lemma}\label{l1} Let $u_n$ be a solution to the problem ($\ref{eq1}$), where $h$ satisfies ($\ref{eqq1}$) and ($\ref{eqqq}$), with $0<\gamma<1$. Then $(u_n)$ is bounded in $W_0^{1,q}(\Omega)$ for every $q<\frac{N}{N-1}$. \begin{proof} We follow the arguments used in $\cite{Benilan}$ to prove this lemma. We will first prove that ($\nabla u_n$) is bounded in $M^\frac{N}{N-1}(\Omega)$. For this, we take $\varphi =T_k(u_n)$ as a test function in the weak formulation of $\eqref{eq1}$ and get \begin{equation}\label{eq3} \int_\Omega |\nabla T_k(u_n)|^2 \leq \int_\Omega h_n\left(u_n +\frac{1}{n}\right)T_k(u_n)f_n + \int_\Omega T_k(u_n) \mu_n. \end{equation} Now, $\frac{T_k(u_n)}{(u_n+\frac{1}{n})^{\gamma}}\leq\frac{u_n}{(u_n+\frac{1}{n})^{\gamma}}=\frac{u_n^{\gamma}}{(u_n+\frac{1}{n})^{\gamma}u_n^{\gamma-1}}\leq u_n^{1-\gamma}$.\\ Using $\eqref{eqq1}$ and $\eqref{eqqq}$ in the right hand side of $\eqref{eq3}$ we have, \begin{align}\label{m} {\int_\Omega h_n\left(u_n +\frac{1}{n}\right)f_n T_k(u_n)} & \leq{ C_1 \int _{(u_n +\frac{1}{n}< \underline{K})}\frac{f_n T_k(u_n)} {(u_n +\frac{1}{n})^\gamma}} + \max_{[\underline{K},\overline{K}]} h(s) \int_{(\underline{K}\leq (u_n+\frac{1}{n})\leq \overline{K})} f_n T_k(u_n) \nonumber \\& + C_2 \int _{(u_n+\frac{1}{n} > \overline{K})}\frac{f_n T_k(u_n)} {(u_n+\frac{1}{n})^\theta}\nonumber\\&\leq C_1\underline{K}^{1-\gamma} \int _{(u_n+\frac{1}{n}< \underline{K})}f + k \max_{[\underline{K},\overline{K}]} h(s) \int_{(\underline{K}\leq (u_n+\frac{1}{n})\leq \overline{K})}f\nonumber\\&+ \frac{C_2 k}{\overline{K}^{\theta}} \int _{(u_n+\frac{1}{n} > \overline{K})} f\nonumber \\&\leq Ck \end{align} and \begin{align}\label{n} \int_\Omega T_k(u_n) \mu_n& \leq k\|\mu_n\|_{L^1(\Omega)} \nonumber\\ &\leq Ck. \end{align} Using the inequalities $\eqref{m}$ and $\eqref{n}$ in $\eqref{eq3}$, we obtain \begin{equation}\label{e} \int_\Omega |\nabla T_k(u_n)|^2 \leq Ck. 
\end{equation} Consider \begin{align*} \{|\nabla u_n|\geq t\} & = \{|\nabla u_n|\geq t,u_n< k\} \cup \{|\nabla u_n| \geq t,u_n \geq k\} \\& \subset \{|\nabla u_n|\geq t,u_n <k\} \cup \{u_n \geq k\}\subset \Omega. \end{align*} Then, using the subadditivity of the Lebesgue measure $m$, we have \begin{equation}\label{ee1} m( \{|\nabla u_n|\geq t\}) \leq m(\{|\nabla u_n|\geq t,u_n< k\}) + m(\{u_n \geq k\}). \end{equation} Therefore, from the Sobolev inequality \begin{align}\label{f} \Bigg(\int_\Omega |T_k(u_n)|^{2^*}\Bigg)^{\frac{2}{2^*}}&\leq\frac{1}{\lambda_1} \int_{\Omega}|\nabla T_k(u_n)|^2 \nonumber\\ &\leq Ck, \end{align} where $\lambda_1$ is the first eigenvalue of the Laplacian operator. By restricting the left-hand side of $\eqref{e}$ to $I_1=\{|\nabla u_n|\geq t,u_n< k\}$, we get \begin{align} m(\{|\nabla u_n|\geq t,u_n< k\})&\leq \frac{1}{t^2}\int_\Omega |\nabla T_k(u_n)|^2\nonumber\\ &\leq \frac{Ck}{t^2}, ~\forall k>1. \nonumber \end{align} Again, by restricting the integral on the left-hand side of $\eqref{f}$ to $I_2=\left\lbrace x\in\Omega:u_n\geq k \right\rbrace$, in which $T_k(u_n)=k$, we obtain $$k^2m(\{u_n\geq k\})^{\frac{2}{2^*}}\leq Ck.$$ This implies $$m(\{u_n\geq k\})\leq \frac{C}{k^\frac{N}{N-2}},~ \forall k\geq1.$$ Hence, $(u_n)$ is bounded in $M^{\frac{N}{N-2}}(\Omega)$. Now $\eqref{ee1}$ becomes \begin{align} m( \{|\nabla u_n|\geq t\})& \leq m(\{|\nabla u_n|\geq t,u_n< k\}) + m(\{u_n \geq k\})\nonumber\\ &\leq \frac{Ck}{t^2} + \frac{C}{k^\frac{N}{N-2}},~ \forall k>1.\nonumber \end{align} On choosing $k=t^{\frac{N-2}{N-1}}$, we get $$ m(\{|\nabla u_n|\geq t\})\leq \frac{C}{t^\frac{N}{N-1}},~ \forall t\geq 1.$$ We have thus proved that $(\nabla u_n)$ is bounded in $M^{\frac{N}{N-1}}(\Omega)$. Therefore, the property in $\eqref{mar}$ implies that $(u_n)$ is bounded in $W_0^{1,q}(\Omega)$ for every $q<\frac{N}{N-1}$.
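The choice $k=t^{\frac{N-2}{N-1}}$ above is precisely the value balancing the two terms $\frac{Ck}{t^2}$ and $\frac{C}{k^{N/(N-2)}}$; a symbolic check of the resulting common exponent (a verification aid, not part of the proof):

```python
import sympy as sp

N = sp.symbols('N', positive=True)

# exponents (in t) of the two terms k/t**2 and k**(-N/(N-2)) after the
# substitution k = t**((N-2)/(N-1))
e1 = (N - 2)/(N - 1) - 2                 # from  k / t**2
e2 = -((N - 2)/(N - 1)) * (N/(N - 2))    # from  k**(-N/(N-2))
target = -N/(N - 1)                      # the exponent claimed in the text

assert sp.simplify(e1 - target) == 0
assert sp.simplify(e2 - target) == 0
```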
\end{proof} \end{lemma} \begin{theorem}\label{t1} There exists a weak solution $u$ of $\eqref{eq0}$ in $W_0^{1,q}(\Omega)$ for every $q<\frac{N}{N-1}$. \end{theorem} \begin{proof} With the assumptions made in Lemma $\ref{l1}$, there exists $u$ such that the sequence ($u_n$) converges weakly to $u$ in $W_0^{1,q}(\Omega)$ for every $q<\frac{N}{N-1}$. This implies that for $\varphi$ in $C_c^1(\bar{\Omega})$ $$\lim_{n\rightarrow \infty} \int_{\Omega} \nabla u_n . \nabla\varphi = \int_{\Omega}\nabla u .\nabla\varphi.$$ In addition to this, by the compact embedding we conclude that $u_n$ converges to $u$ strongly in $L^1(\Omega)$ and hence, up to a subsequence, pointwise almost everywhere in $\Omega$. Thus, for $\varphi$ belonging to $C_c^1(\bar{\Omega})$, we have \begin{align*} 0 & \leq |h_n\left(u_n+\frac{1}{n}\right) f_n\varphi|\\& \leq {\left\{\begin{array}{ll} \frac{C_1\parallel \varphi \parallel_{L^\infty(\Omega)}f}{C_K^\gamma}, & \text{if \space} u_n+\frac{1}{n}< \underline{K} \\ {M\parallel \varphi \parallel_{L^\infty(\Omega)}f}, & \text{if \space} \underline{K}\leq u_n+\frac{1}{n}\leq \overline{K}~\\ \frac{C_2\parallel \varphi \parallel_{L^\infty(\Omega)}f}{C_K^\theta}, & \text{if \space} u_n+\frac{1}{n}> \overline{K} \end{array} \right. } \end{align*} where $M>0$ and $K$ is the set $\{x\in\Omega:\varphi(x)\neq0\}$. On applying the dominated convergence theorem we get $$\lim_{n\rightarrow \infty} \int_{\Omega} h_n\left(u_n+\frac{1}{n}\right) f_n\varphi = \int_{\Omega}h( u) f\varphi.$$ Hence, on passing to the limit $n\rightarrow\infty$ in the last term of $\eqref{weak}$ involving $\mu_n$, we obtain a weak solution $u$ of $\eqref{eq0}$ in $W_0^{1,q}(\Omega)$ for every $q<\frac{N}{N-1}$. This completes the proof. \end{proof} \subsection{The case of $\gamma \geq 1$}\label{sub2} Since this is a strongly singular case, we can only obtain local estimates on $u_n$ in the Sobolev space.
We will globally estimate $ \left(T_k^{\frac{\gamma+1}{2}}(u_n)\right)$ in $W_0^{1,2}(\Omega)$, with the aim of giving a sense to the boundary values of $u$, at least in a sense weaker than that of the trace. \begin{lemma}\label{l2} Let $u_n$ be a solution of $\eqref{eq1}$ with $\gamma \geq 1$. Then $\left(T_k^{\frac{\gamma+1}{2}}(u_n)\right)$ is bounded in $W_0^{1,2}(\Omega)$ for every fixed $k>0$. \begin{proof} Consider $\varphi=T_k^\gamma(u_n)$ as a test function in $\eqref{eq1}$. We have \begin{equation}\label{eq4} \gamma\int_\Omega \nabla u_n.\nabla T_k(u_n)T_k^{\gamma-1}(u_n) =\int_{\Omega} h_n\left(u_n+\frac{1}{n}\right)f_nT_k^\gamma(u_n) + \int_{\Omega}T_k^\gamma(u_n)\mu_n. \end{equation} Since $\gamma\geq 1$, by the definition of $T_k(u_n)$ we estimate the term on the left-hand side of $\eqref{eq4}$ as \begin{equation}\label{g} \gamma\int_\Omega \nabla u_n.\nabla T_k(u_n)T_k^{\gamma-1}(u_n)\geq\frac{4\gamma}{(\gamma+1)^2}\int_\Omega |\nabla T_k^{\frac{\gamma+1}{2}}(u_n)|^2. \end{equation} Recall that $\frac{T_k^\gamma(u_n)}{(u_n+\frac{1}{n})^\gamma}\leq \frac{u_n^\gamma}{(u_n+\frac{1}{n})^\gamma}\leq 1$; then the term on the right-hand side of $\eqref{eq4}$ can be estimated as \begin{align}\label{h} \int_{\Omega} h_n\left(u_n+\frac{1}{n}\right)f_nT_k^\gamma(u_n) + \int_{\Omega}T_k^\gamma(u_n)\mu_n & \leq{ C_1 \int _{(u_n +\frac{1}{n}< \underline{K})}\frac{f_n T_k^\gamma(u_n)} {(u_n +\frac{1}{n})^\gamma}} + C_2 \int _{(u_n+\frac{1}{n} > \overline{K})}\frac{f_n T_k^\gamma(u_n)} {(u_n+\frac{1}{n})^\theta} \nonumber \\& + \max_{[\underline{K},\overline{K}]} h(s) \int_{(\underline{K}\leq (u_n+\frac{1}{n})\leq \overline{K})} f_n T_k^\gamma(u_n) + k^\gamma \int_{\Omega} \mu_n\nonumber \\& \leq C_1 \int _{(u_n+\frac{1}{n}< \underline{K})}f + \frac{C_2 k^\gamma}{\overline{K}^{\theta}} \int _{(u_n+\frac{1}{n} > \overline{K})} f \nonumber \\& + k^\gamma \max_{[\underline{K},\overline{K}]} h(s) \int_{(\underline{K}\leq (u_n+\frac{1}{n})\leq \overline{K})}f + k^\gamma
\int_\Omega \mu_n\nonumber \\& \leq C(k,\gamma)k^\gamma. \end{align} On combining the inequalities in $\eqref{g}$ and $\eqref{h}$, we get \begin{equation}\label{k} \int_{\Omega} |\nabla T_k^{\frac{\gamma+1}{2}}(u_n)|^2 \leq Ck^\gamma. \end{equation} Therefore, $\left(T_k^{\frac{\gamma+1}{2}}(u_n)\right)$ is bounded in $W_0^{1,2}(\Omega)$ for every fixed $k>0$. \end{proof} \end{lemma} \noindent In order to pass the limit $n\rightarrow\infty$ in the weak formulation $\eqref{weak}$, we require local estimates on $(u_n)$. We prove the following lemma. \begin{lemma}\label{l3} Let $u_n$ be a solution of $\eqref{eq1}$ with $\gamma\geq1$. Then ($u_n$) is bounded in $W_{loc}^{1,q}(\Omega)$ for every $q<\frac{N}{N-1}$. \begin{proof} We prove this theorem in two steps.\\ $\boldmath{\text{Step 1.}}$ We claim that $\left(G_1(u_n)\right)$ is bounded in $W_0^{1,q}(\Omega)$ for every $q<\frac{N}{N-1}$.\\ It is apparent that $G_1(u_n)=0$, when $0\leq u_n\leq 1$ and $G_1(u_n)=u_n-1$, when $u_n>1$. So $\nabla G_1(u_n)=\nabla u_n$ for $u_n>1$.\\ Now, we need to show that $\left(\nabla G_1(u_n)\right)$ is bounded in the Marcinkiewicz space $M^{\frac{N}{N-1}}(\Omega)$. We observe \begin{align*} \{|\nabla u_n|> t, u_n>1\} & = \{|\nabla u_n|> t,1<u_n\leq k+1\} \cup \{|\nabla u_n| > t,u_n > k+1\} \\& \subset \{|\nabla u_n|> t,1<u_n\leq k+1\} \cup \{u_n > k+1\}\subset \Omega. \end{align*} Hence, by the subadditivity of Lebesgue measure $m$, we have \begin{equation}\label{eq5} m( \{|\nabla u_n|> t,u_n>1\}) \leq m(\{|\nabla u_n|> t,1<u_n\leq k+1\}) + m(\{u_n > k+1\}). \end{equation} In order to estimate $\eqref{eq5}$ we take $\varphi=T_k(G_1(u_n))$, for $k>1$, as a test function in $\eqref{eq1}$. We observe that $\nabla T_k(G_1(u_n))= \nabla u_n$ only when $1<u_n \leq k+1$, otherwise it is equal to zero and $T_k(G_1(u_n))=0$ when $ u_n\leq 1$. 
Thus we have \begin{align}\label{i} \int_\Omega |\nabla T_k(G_1(u_n))|^2 & \leq \int_\Omega h_n\left(u_n+\frac{1}{n}\right)f_nT_k(G_1(u_n)) + \int_{\Omega}T_k(G_1(u_n))\mu_n\nonumber \\& \leq{ C_1 \int _{(u_n +\frac{1}{n}< \underline{K})}\frac{f_n T_k(G_1(u_n))} {(u_n +\frac{1}{n})^\gamma}} + \max_{[\underline{K},\overline{K}]} h(s) \int_{(\underline{K}\leq (u_n+\frac{1}{n})\leq \overline{K})} f_n T_k(G_1(u_n)) \nonumber \\& + C_2 \int _{(u_n+\frac{1}{n} > \overline{K})}\frac{f_n T_k(G_1(u_n))} {(u_n+\frac{1}{n})^\theta}+k\int_\Omega\mu_n\nonumber \\& \leq { C_1k \int _{(u_n+\frac{1}{n}< \underline{K})}\frac{f_n}{(1+\frac{1}{n})^\gamma} + k \max_{[\underline{K},\overline{K}]} h(s) \int_{(\underline{K}\leq (u_n+\frac{1}{n})\leq \overline{K})}f_n}\nonumber \\& + \frac{C_2 k}{\overline{K}^{\theta}} \int _{(u_n+\frac{1}{n} > \overline{K})} f_n +k\int_\Omega\mu_n\nonumber \\& \leq Ck. \end{align} By restricting the integral in $\eqref{i}$ over $J_1={\left\lbrace 1<u_n\leq k+1 \right\rbrace}$, we get \begin{align} \int_{\left\lbrace 1<u_n\leq k+1 \right\rbrace} |\nabla T_k(G_1(u_n))|^2 \nonumber& = \int_{\left\lbrace 1<u_n\leq k+1 \right\rbrace} |\nabla u_n|^2\nonumber \\& \geq \int_{\left\lbrace |\nabla u_n|>t, 1<u_n\leq k+1 \right\rbrace} |\nabla u_n|^2\nonumber \\&\geq t^2m(\{|\nabla u_n|> t,1<u_n\leq k+1\}).\nonumber \end{align} Thus, $$m(\{|\nabla u_n|> t,1<u_n\leq k+1\})\leq \frac{Ck}{t^2}, ~\forall k \geq 1.$$ According to $\eqref{k}$ in the proof of Lemma $\ref{l2}$, one can see that $$\int_{\Omega} |\nabla T_k^{\frac{\gamma+1}{2}}(u_n)|^2 \leq Ck^\gamma,~ \forall k>1.$$ Therefore, from the Sobolev inequality \begin{align}\label{j} \Bigg(\int_\Omega |T_k^{\frac{\gamma+1}{2}}(u_n)|^{2^*}\Bigg)^{\frac{2}{2^*}}&\leq \frac{1}{\lambda_1}\int_{\Omega}|\nabla T_k^{\frac{\gamma+1}{2}}(u_n)|^2\nonumber\\ &\leq Ck^{\gamma}, \end{align} where $\lambda_1$ is the first eigenvalue of the Laplacian operator. 
By restricting the integral on the left hand side of $\eqref{j}$ over $J_2=\{x: u_n(x)> k+1\}$, we obtain $$k^{\gamma+1}m(\{u_n>k+1\})^{\frac{2}{2^*}}\leq Ck^{\gamma}$$ so that $$m(\{u_n>k+1\})\leq \frac{C}{k^\frac{N}{N-2}},~ \forall k\geq1.$$ So, $(u_n)$ is bounded in $M^{\frac{N}{N-2}}(\Omega)$, i.e. $(G_1(u_n))$ is also bounded in $M^{\frac{N}{N-2}}(\Omega)$.\\ Now from $\eqref{eq5}$, we have \begin{eqnarray}m(\{|\nabla u_n|> t,u_n>1\}) &\leq & m(\{|\nabla u_n|> t,1<u_n\leq k+1\}) + m(\{u_n > k+1\})\nonumber\\&\leq& \frac{Ck}{t^2} + \frac{C}{k^\frac{N}{N-2}}, \forall k>1.\nonumber \end{eqnarray} On choosing $k=t^{\frac{N-2}{N-1}}$ we get $$ m(\{|\nabla u_n|> t,u_n>1\})\leq \frac{C}{t^\frac{N}{N-1}},~ \forall t\geq 1.$$ We have thus proved that $(\nabla u_n)=(\nabla G_1(u_n))$ is bounded in $M^{\frac{N}{N-1}}(\Omega)$. Hence, by the property in ($\ref{mar}$), we conclude that $(G_1(u_n))$ is bounded in $W_0^{1,q}(\Omega)$ for every $q<\frac{N}{N-1}$.\\ \textbf{Step 2.} We claim that $(T_1(u_n))$ is bounded in $W_{loc}^{1,2}(\Omega)$.\\ To prove this claim we need to examine the behaviour of $u_n$, for each $n$, where it takes small values. For this we first prove that for every $K\subset\subset \Omega$, \begin{equation}\label{eq7} \int_K |\nabla T_1(u_n)|^2 \leq C. \end{equation} We have already proved in Lemma $\ref{l4}$ that $u_n\geq C_K>0$ on $K\subset\subset\Omega$. On using $\varphi=T_1^\gamma(u_n)$ as a test function in $\eqref{weak}$, we get \begin{align}\label{eq8} \int_\Omega \nabla u_n . \nabla T_1(u_n) T_1^{\gamma-1}(u_n)&= \int_\Omega h_n\left(u_n+\frac{1}{n}\right)f_nT_1^{\gamma}(u_n) +\int_\Omega T_1^{\gamma}(u_n)\mu_n\nonumber\\ & \leq C. \end{align} We observe that \begin{align}\label{eq9} \int_\Omega \nabla u_n . \nabla T_1(u_n) T_1^{\gamma-1}(u_n) & \geq\int_K |\nabla T_1(u_n)|^2 T_1^{\gamma-1}(u_n)\nonumber\\ & \geq C_K^{\gamma-1}\int_{K}|\nabla T_1(u_n)|^2. \end{align} Inequalities $\eqref{eq8}$ and $\eqref{eq9}$ together yield $\eqref{eq7}$.
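For the reader's convenience, we record why the choice $k=t^{\frac{N-2}{N-1}}$ made in Step 1 is the natural one: it balances the two terms $\frac{Ck}{t^2}$ and $\frac{C}{k^{N/(N-2)}}$ in the bound above, since

```latex
\frac{Ck}{t^{2}} = \frac{C}{k^{\frac{N}{N-2}}}
\;\Longleftrightarrow\;
k^{1+\frac{N}{N-2}} = k^{\frac{2(N-1)}{N-2}} = t^{2}
\;\Longleftrightarrow\;
k = t^{\frac{N-2}{N-1}},
```

and with this choice both terms equal $C\,t^{-\frac{N}{N-1}}$, because $\frac{N-2}{N-1}-2=-\frac{N}{N-1}$.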
\\ Since $u_n=T_1(u_n)+G_1(u_n)$, we conclude that $(u_n)$ is bounded in $W_{loc}^{1,q}(\Omega)$ for every $q<\frac{N}{N-1}$. \end{proof} \end{lemma} \noindent We now state and prove the existence result. \begin{theorem} Let $\gamma\geq1$. Then there exists a weak solution $u$ of $\eqref{eq0}$ in $W_{loc}^{1,q}(\Omega)$ for every $q<\frac{N}{N-1}$. \begin{proof} The proof of this theorem is a straightforward application of the results in Theorem $\ref{t1}$, Lemma $\ref{l2}$ and Lemma $\ref{l3}$. \end{proof} \end{theorem} \section{Further discussion of the case $0<\gamma<1$} In this section we consider a domain $\Omega$ whose boundary $\partial\Omega$ is of class $C^{2,\beta}$ for some $0<\beta<1$. We consider the following semilinear elliptic problem \begin{eqnarray}\label{e1} -\Delta u&=& h(u)f+\mu ~\text{in}~\Omega,\nonumber\\ u &=& 0 ~\text{on}~ \partial\Omega, \end{eqnarray} where $0<\gamma<1$, $f\in C^{\beta}(\bar{\Omega})$ is such that $f>0$ in $\bar{\Omega}$, and $\mu$ is a nonnegative, bounded, Radon measure on $\Omega$. We will show the existence of a nonnegative very weak solution to the problem $\eqref{e1}$. Before proving this we give a few definitions. \begin{definition}\label{d1} A very weak solution to problem $\eqref{e1}$ is a function $u\in L^1(\Omega)$ such that $u>0$ a.e. in $\Omega$, $fh(u) \in L^1(\Omega) $ and \begin{equation}\label{e2} -\int_{\Omega}u\Delta \varphi=\int_{\Omega}h(u)f\varphi +\int_{\Omega}\varphi d\mu,~ \forall\varphi\in C_0^2({\bar{\Omega}}). \end{equation} \end{definition} \begin{definition}\label{d2} A function $\underline{u}$ is a subsolution for $\eqref{e1}$ if $\underline{u}\in L^1(\Omega)$, $\underline{u}>0$ in $\Omega$, $fh(\underline{u})\in L^1(\Omega)$ and \begin{equation}\label{e3} -\int_{\Omega}\underline{u}\Delta \varphi\leq\int_{\Omega}h(\underline{u})f\varphi +\int_{\Omega}\varphi d\mu,~ \forall\varphi\in C_0^2({\bar{\Omega}}),~ \varphi\geq0.
\end{equation} Similarly, $\bar{u}$ is said to be a supersolution for the problem $\eqref{e1}$ if $\bar{u}\in L^1(\Omega)$, $\bar{u}>0$ in $\Omega$, $fh(\bar{u})\in L^1(\Omega)$ and \begin{equation}\label{e6} -\int_{\Omega}\bar{u}\Delta \varphi\geq\int_{\Omega}h(\bar{u})f\varphi +\int_{\Omega}\varphi d\mu,~ \forall\varphi\in C_0^2({\bar{\Omega}}),~\varphi\geq0. \end{equation} \end{definition} \noindent We now prove the following two theorems in order to guarantee the existence of a nonnegative solution to $\eqref{e1}$ in the sense of Definition $\ref{d1}$. \begin{theorem}\label{t2} Let $\underline{u}$ be a subsolution and $\bar{u}$ be a supersolution to the problem in $\eqref{e1}$ with $\underline{u}\leq\bar{u}$ in $\Omega$. Then there exists a solution $u$ to $\eqref{e1}$ in the sense of Definition $\ref{d1}$ such that $\underline{u}\leq u\leq \bar{u}$. \begin{proof} We will follow the arguments due to Montenegro \& Ponce $\cite{Ponce}$. We define $\bar{g}:\Omega \times \mathbb{R}\rightarrow \mathbb{R}$ as $$\bar{g}(x,t)= \left\{ \begin{array}{ll} f(x)h(\underline{u}(x)) & \text{if}~ t<\underline{u}(x),\\ f(x)h(t) & \text{if}~ \underline{u}(x)\leq t\leq \bar{u}(x),\\ f(x)h(\bar{u}(x)) & \text{if}~ t>\bar{u}(x). \end{array} \right. $$ Since $\underline{u}>0$, the function $\bar{g}$ is well defined a.e. in $\Omega$, and for each fixed $v\in L^1(\Omega)$ we have that $\bar{g}(x,v(x))\in L^1(\Omega)$. We divide the proof into two steps.\\ \textbf{Step 1.} We claim that if $u$ satisfies \begin{eqnarray}\label{e4} -\Delta u&=& \bar{g}(x,u)+\mu ~\text{in}~\Omega,\nonumber\\ u &=& 0 ~\text{on}~ \partial\Omega, \end{eqnarray} then $\underline{u}\leq u\leq \bar{u}$. Thus $\bar{g}(\cdot,u)=fh(u)\in L^1(\Omega)$ and $u$ is a solution to $\eqref{e1}$.\\ The very weak formulation of $\eqref{e4}$ is given by \begin{equation}\label{e5} -\int_{\Omega}u\Delta \varphi= \int_{\Omega}\bar{g}(x,u)\varphi +\int_\Omega\varphi d\mu,~\forall\varphi\in C_0^2(\bar\Omega).
\end{equation} We need to prove that $u\leq\bar{u}$ in $\Omega$; the proof of the other inequality, $\underline{u}\leq u$, follows similarly.\\ Recall that $u$ is a solution to $\eqref{e4}$ and $\bar{u}$ is a supersolution to $\eqref{e1}$. Subtracting equation $\eqref{e5}$ from $\eqref{e6}$ we have, for every $\varphi \in C_0^2(\bar{\Omega})$ such that $\varphi \geq0$, \begin{align} -\int_{\Omega}(u-\bar{u})\Delta \varphi&\leq \int_{\Omega}\left(\bar{g}(x,u)-fh(\bar{u})\right)\varphi \nonumber\\ &=\int_{\Omega}\chi_{\{u\leq \bar{u}\}}\left(\bar{g}(x,u)-fh(\bar{u})\right)\varphi.\nonumber \end{align} On applying the Kato type inequality from the Appendix, with $\varphi_0$ satisfying $-\Delta\varphi_0=1$ in $\Omega$ and $\varphi_0=0$ on $\partial\Omega$, we get \begin{align} \int_{\Omega}(u-\bar{u})^+& \leq \int_{\Omega}\chi_{\{u\leq \bar{u}\}} \left(\bar{g}(x,u)-fh(\bar{u})\right) (sign_+(u-\bar{u}))\varphi_0\nonumber\\ &=0.\nonumber \end{align} Thus $u\leq\bar{u}$ a.e. in $\Omega$. Similarly, one can show that $\underline{u}\leq u$ a.e. in $\Omega$. We have therefore proved that $\underline{u}\leq u\leq\bar{u}$ a.e. in $\Omega$.\\ \textbf{Step 2.} We now prove that a solution to the problem in $\eqref{e4}$ exists. Let us define a map $$G:L^1(\Omega)\rightarrow L^1(\Omega)$$ that assigns to every $v\in L^1(\Omega)$ the solution $u$ of the following linear problem \begin{eqnarray} -\Delta u&=& \bar{g}(x,v)+\mu ~\text{in}~\Omega,\nonumber\\ u &=& 0 ~\text{on}~ \partial\Omega.\label{linprob} \end{eqnarray} The problem in ($\ref{linprob}$) admits a unique solution for a given Radon measure due to \cite{Stampacchia}. We need to show that this map is continuous on $L^1(\Omega)$. Let us choose a sequence $(v_n)$ converging to some function $v$ in $L^1(\Omega)$.
Then, since $h$ is nonincreasing and continuous, the definition of $\bar{g}$ gives $$|\bar{g}(x,v_n(x))|\leq fh(\underline{u}).$$ Hence, using the dominated convergence theorem, we conclude that $$\|\bar{g}(x, v_n)-\bar{g}(x, v)\|_{L^1(\Omega)}\rightarrow 0.$$ By $\cite{bhakta}$, the linear problem $(\ref{linprob})$ has a unique very weak solution corresponding to this $v$. Thus \begin{eqnarray} \lim_{n\rightarrow\infty}\left(-\int_{\Omega}u_n\Delta\varphi\right) &=& \lim_{n\rightarrow\infty}\int_{\Omega}fh(v_n)\varphi+\int_{\Omega}\varphi d\mu\nonumber\\ &=&\int_{\Omega}fh(v)\varphi+\int_{\Omega}\varphi d\mu\nonumber\\ &=& -\int_{\Omega}u\Delta\varphi.\nonumber \end{eqnarray} Hence, $u = G(v)$. It can be seen from Th\'{e}or\`{e}me 9.1 in \cite{Stampacchia} that \begin{align} \|u_n-u\|_{L^1(\Omega)}&\leq\|u_n-u\|_{W_0^{1,q}(\Omega)}\nonumber\\ &\leq \|(\bar{g}(x,v_n)+\mu)-(\bar{g}(x,v)+\mu)\|_{\mathcal{M}(\Omega)}\nonumber\\ &=\|\bar{g}(x,v_n)-\bar{g}(x,v)\|_{L^1(\Omega)}\rightarrow 0 ~\text{ as }~ n\rightarrow\infty. \nonumber \end{align} Hence, $\|u_n-u\|_{L^1(\Omega)}=\|G(v_n)-G(v)\|_{L^1(\Omega)}\rightarrow 0$ as $n\rightarrow\infty.$ Therefore, $G$ is continuous.\\ It remains to prove that the set $G(L^1(\Omega))$ is bounded and relatively compact in $L^1(\Omega)$. For every $v\in L^1(\Omega)$ we have \begin{align} \parallel\bar{g}(x,v)+\mu\parallel_{\mathcal{M}(\Omega)}& \leq \parallel\bar{g}(x,v)\parallel_{\mathcal{M}(\Omega)}+\parallel \mu\parallel_{\mathcal{M}(\Omega)}\nonumber\\ &\leq ~\parallel f h(\underline{u})\parallel_{L^1(\Omega)}+\parallel\mu\parallel_{\mathcal{M}(\Omega)}.\nonumber \end{align} Again, by Th\'{e}or\`{e}me 9.1 in $\cite{Stampacchia}$, we see that $G(v)$ is bounded in $W_0^{1,q}(\Omega)$ for every $q<\frac{N}{N-1}$.
Therefore, by the Rellich-Kondrachov theorem, $G(L^1(\Omega))$ is bounded and hence relatively compact in $L^1(\Omega)$.\\ We now apply the Schauder fixed point theorem to see that $G$ has a fixed point $u\in L^1(\Omega)$. This fixed point of $G$ is a very weak solution to the problem $\eqref{e1}$. Also, by Step 1, this solution $u$ satisfies $\underline{u}\leq u\leq \bar{u}$ a.e. in $\Omega$. \end{proof} \end{theorem} \begin{theorem}\label{t3} There exists a nonnegative solution to the problem $\eqref{e1}$ in the sense of Definition $\ref{d1}$. \begin{proof} We will apply Theorem $\ref{t2}$, for which we need a subsolution and a supersolution to the problem $\eqref{e1}$ in the sense of Definition $\ref{d2}$. We first find a subsolution. Let us consider the problem \begin{eqnarray}\label{e7} -\Delta v&=& h(v)f ~\text{in}~\Omega,\nonumber\\ v &=& 0 ~\text{on}~ \partial\Omega. \end{eqnarray} The existence of a very weak solution in $L^1(\Omega)$ to the problem in ($\ref{e7}$) can be proved by the arguments used in the proof of Theorem $\ref{t2}$, i.e. by using the Schauder fixed point theorem. Consider the eigenfunction $\phi_1>0$ of ($-\Delta$) corresponding to the smallest eigenvalue $\lambda_1$ with $\phi_1|_{\partial\Omega}=0$ \cite{Evans}. Observe that \begin{eqnarray}\label{positivity} -\Delta(\epsilon\phi_1)-h(\epsilon\phi_1)f&<&0 \nonumber\\ &=&-\Delta v-h(v)f \nonumber \end{eqnarray} by (i) the positivity of $\phi_1$ and a choice of sufficiently small $\epsilon>0$, (ii) the nonincreasing nature of $h$, and (iii) the fact that $v$ is a solution to ($\ref{e7}$). Hence, we have $v > 0$ in $\Omega$. Since $\mu$ is a nonnegative, bounded, Radon measure, we get the following inequality: $$-\int_\Omega v\Delta \varphi\leq\int_{\Omega}h(v)f\varphi +\int_{\Omega}\varphi d\mu,~ \forall\varphi\in C_0^2(\bar{\Omega}), ~\varphi\geq0,$$ and hence $v$ is a subsolution to the problem $\eqref{e1}$. We now look for a supersolution to the problem in $\eqref{e1}$.
Let $w$ be the solution of \begin{eqnarray}\label{e8} -\Delta w&=& \mu ~\text{in}~\Omega,\nonumber\\ w &=& 0 ~\text{on}~ \partial\Omega. \end{eqnarray} Since $\mu\geq 0$, by the maximum principle for the Laplacian we have $w\geq 0$. Let us denote $z=w + v$, where $v$ is a solution to ($\ref{e7}$). Thus \begin{align} -\int_\Omega z\Delta \varphi&=-\int_\Omega w\Delta \varphi -\int_\Omega v\Delta \varphi\nonumber\\ &=\int_{\Omega}h(v)f\varphi +\int_{\Omega}\varphi d\mu,~ \forall\varphi\in C_0^2(\bar{\Omega}).\nonumber \end{align} We know that $w$ is nonnegative and hence we have $0<h(z)\leq h(v)$. Therefore, $$\int_{\Omega}h(z)f\varphi +\int_{\Omega}\varphi d\mu\leq \int_{\Omega}h(v)f\varphi +\int_{\Omega}\varphi d\mu,~ \forall\varphi\in C_0^2(\bar{\Omega}),~ \varphi\geq0,$$ i.e., $z$ is a positive function in $L^1(\Omega)$ such that $h(z)\leq h(v)\in L^1(\Omega)$ and $$-\int_\Omega z\Delta \varphi\geq\int_{\Omega}h(z)f\varphi+\int_{\Omega}\varphi d\mu,~ \forall\varphi\in C_0^2(\bar{\Omega}),~ \varphi\geq0.$$ Therefore, $z$ is a supersolution to $\eqref{e1}$. We can now apply Theorem $\ref{t2}$ to conclude that there exists a solution $u$ to problem $\eqref{e1}$ in the sense of Definition $\ref{d1}$. \end{proof} \end{theorem} \subsection{Relaxation of the assumption on $f$} Theorem $\ref{t3}$ was proved under a strong regularity assumption on $f$, namely that $f$ belongs to $C^{\beta}(\bar{\Omega})$ for some $0 <\beta< 1$. In this subsection we relax our assumption on $f$ in order to prove the existence of a solution in the sense of Definition $\ref{d1}$.\\ For a fixed $\delta>0$, let us define $\Omega_\delta=\{x\in\Omega :\text{dist}(x,\partial\Omega)<\delta \}$ and let $f$ be an almost everywhere positive function in $L^1(\Omega)\cap L^\infty(\Omega_\delta)$. \begin{theorem} Let $f\in L^1(\Omega)\cap L^{\infty}(\Omega_\delta)$ be such that $f>0$ a.e. in $\Omega$ for some fixed $\delta>0$. Then there exists a solution to the problem ($\ref{e1}$) in the sense of Definition $\ref{d1}$.
\begin{proof} We consider the following sequence of problems \begin{eqnarray} -\Delta v_n&=& h_n\left(v_n+\frac{1}{n}\right)f_n ~\text{in}~\Omega,\nonumber\\ v_n &=& 0 ~\text{on}~ \partial\Omega. \end{eqnarray} In Lemma $\ref{l4}$ we have proved that the nondecreasing sequence $(v_n)$ converges to a solution of the problem in ($\ref{e7}$) and, for each fixed $n$, the function $v_n$ belongs to $L^{\infty}(\Omega)$. So the function $h_1(v_1+1)f_1$ also belongs to $L^{\infty}(\Omega)$. We can now apply Lemma 3.2 of $\cite{Brezis}$ to obtain \begin{align} \frac{v_1(x)}{d(x)}&\geq C\int_\Omega d(y)f_1(y) h_1\left(\parallel v_1\parallel_{L^{\infty}(\Omega)}+1\right)dy \nonumber\\ &\geq C>0\nonumber \end{align} for every $x$ in $\Omega$, where $d(x) = d(x, \partial\Omega)$. Thus, we have $v(x)\geq v_1(x)\geq Cd(x)$ a.e. on $\Omega$. Therefore, as $f\in L^{\infty}(\Omega_\delta)$, we have $h(v)f\in L^1(\Omega)$ because (i) $h(v)f\leq h(Cd(x))f$ and (ii) $h(Cd(x))f$ is integrable for every $\gamma<1$. Hence, the subsolution is bounded from below. Proceeding as in the proof of Theorem $\ref{t3}$, one can produce a supersolution to $\eqref{e1}$. Now using the result proved in Theorem $\ref{t2}$, we conclude the existence of a solution to the problem in $\eqref{e1}$ in the sense of Definition $\ref{d1}$. \end{proof} \end{theorem} \section{Appendix} We prove the Kato type inequality for the problem \begin{eqnarray}\label{eqq0} -\Delta u&=& h(u)f+\mu ~\text{in}~\Omega,\nonumber\\ u &=& 0 ~\text{on}~ \partial\Omega,\\ u &>& 0 ~\text{in}~ \Omega,\nonumber \end{eqnarray} where $f>0$ and $u\in L^1(\Omega)$ is a very weak solution with $u>0$ a.e. in $\Omega$ and $fh(u)\in L^1(\Omega)$.\\ Let $u_1$ and $u_2$ be two very weak solutions to the problem $\eqref{eqq0}$ with measure sources $\mu_1$ and $\mu_2$, respectively. Hence, $u_1,u_2\in L^1(\Omega)$ and $h(u_1)f,h(u_2)f\in L^1(\Omega)$.
Then for every $\varphi\in C_0^2(\bar{\Omega})$, the very weak formulations corresponding to the problem $\eqref{eqq0}$ are $$-\int_{\Omega}u_1\Delta\varphi = \int_{\Omega}h(u_1)f\varphi +\int_{\Omega}\varphi d\mu_1$$ and $$-\int_{\Omega}u_2\Delta\varphi = \int_{\Omega}h(u_2)f\varphi +\int_{\Omega}\varphi d\mu_2.$$ On taking the difference between the two formulations, we get $$-\int_{\Omega}(u_1-u_2)\Delta\varphi = \int_{\Omega}f(h(u_1)-h(u_2))\varphi +\int_{\Omega}\varphi (d\mu_1-d\mu_2).$$ We see that $(u_1-u_2)$ is a very weak solution to the problem \begin{eqnarray} -\Delta(u_1-u_2) &=& f(h(u_1)-h(u_2)) + (\mu_1-\mu_2) ~\text{in}~\Omega,\nonumber\\ u_1-u_2 &=& 0 ~\text{on}~ \partial\Omega.\nonumber \end{eqnarray} Using Proposition 1.5.4 (Kato's inequality) of $\cite{Marcus}$, we observe that \begin{equation}\label{l} -\int_{\Omega}(u_1-u_2)^+\Delta\varphi \leq \int_{\Omega}f(h(u_1)-h(u_2)) (sign_+ (u_1-u_2))\varphi +\int_{\Omega}\varphi (d\mu_1-d\mu_2), \end{equation} where $sign_+ (u_1-u_2)=\chi_{\{x\in\Omega:u_1(x)\geq u_2(x)\}}$. Let us consider $\varphi_0$ such that $-\Delta\varphi_0=1$ in $\Omega$ and $\varphi_0=0$ on $\partial\Omega$. Now the inequality in $\eqref{l}$ becomes \begin{align}\label{kato1} & \int_{\Omega}(u_1-u_2)^+ \leq \int_{\Omega}f(h(u_1)-h(u_2))(sign_+(u_1-u_2))\varphi_0 +\int_{\Omega}\varphi_0 d(\mu_1-\mu_2). \end{align} Similarly, it is easy to obtain \begin{align}\label{kato2} & \int_{\Omega}(u_2-u_1)^+ \leq \int_{\Omega}f(h(u_2)-h(u_1))(sign_+(u_2-u_1))\varphi_0 +\int_{\Omega}\varphi_0 d(\mu_2-\mu_1). \end{align} Inequalities $\eqref{kato1}$ and $\eqref{kato2}$ are the required Kato type inequalities.\\ We will now prove that if $h$ is strictly decreasing then $\eqref{eqq0}$ has a unique very weak solution.
Indeed, if $u_1,u_2$ are two very weak solutions corresponding to the same measure data, then from $\eqref{kato1}$ considered over $A=\{x\in\Omega :u_1(x)\geq u_2(x)\}$ we have \begin{align}\label{appkato1} & \int_{A}(u_1-u_2)^+ +\int_{A}f(h(u_2)-h(u_1))\varphi_0 \leq 0. \end{align} Similarly, from $\eqref{kato2}$ considered over $B=\{x\in\Omega :u_2(x)\geq u_1(x)\}$ we have \begin{align}\label{appkato2} & \int_{B}(u_2-u_1)^+ +\int_{B}f(h(u_1)-h(u_2))\varphi_0 \leq 0. \end{align} Since the first terms in $\eqref{appkato1}$ and $\eqref{appkato2}$ are nonnegative, we get $0\leq \int_{A}f(h(u_2)-h(u_1))\varphi_0\leq 0$ and $0\leq \int_{B}f(h(u_1)-h(u_2))\varphi_0 \leq 0$. This implies that $h(u_1)=h(u_2)$ a.e. in $\Omega$. Due to the strictly decreasing nature of $h$, we conclude that $u_1=u_2$ a.e. in $\Omega$. \section*{Acknowledgement} Two of the authors, A. Panda and S. Ghosh, thank the Ministry of Human Resource Development (M.H.R.D.), Govt. of India, and the Council of Scientific and Industrial Research (C.S.I.R.), Govt. of India, respectively, for the financial assistantship received to carry out this research work. The authors also declare that there are no financial conflicts of interest whatsoever. Finally, the authors thank the anonymous referee for the constructive comments and suggestions.
\section{Introduction} The condition number of a problem measures the sensitivity of a solution to small perturbations in its input data. For many problems that arise in numerical analysis, there is often a simple relationship between the condition number of a problem instance and the distance to the set of ill-posed problems---those problem instances whose condition numbers are infinite \cite{Demmel2}. For example, with respect to the problem of inverting a matrix $A$, it is known (see \cite{Horn}, for example) that if $A$ is perturbed to $A+E$ for sufficiently small $E$, then \[ \frac{\|(A+E)^{-1} - A^{-1}\|}{\|A^{-1}\|} \leq \|A^{-1}\|\|E\|+O(\|E\|^2). \] Thus, a condition measure for this problem may be taken as $\|A^{-1}\|$. Associated with this is the classical Eckart-Young theorem found in \cite{Eckart}, relating the above condition measure to the distance to ill-posedness. \begin{thm}[Eckart-Young]\label{EY} For any non-singular matrix, $A$, \[ \min_{E}\{ \|E\| : A + E \mbox{ is singular}\} ~= ~\frac{1}{\|A^{-1}\|}. \] \end{thm} We are typically concerned with relative condition numbers as introduced by Demmel in \cite{Demmel2}. For example, with respect to the problem of matrix inversion, the relative condition number is $k(A) := \|A\| \|A^{-1}\|$, the commonly used condition measure. Condition numbers are also important from an algorithmic perspective. In the above example of matrix inversion, for instance, sensitivity to perturbations matters both for errors in the initial problem data and for computational error accumulated through rounding. Hence, it seems natural that condition numbers should affect algorithm speed. For example, in the context of linear programming, Renegar defined a condition measure based on the distance to ill-posedness in \cite{Renegar1}--in a sense similar to the Eckart-Young result--and showed its effect on the convergence rate of interior point methods in \cite{Renegar2}.
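As a quick numerical sanity check of Theorem \ref{EY}, the following sketch verifies the Eckart-Young identity on a small hypothetical $2\times 2$ matrix (the matrix is an illustrative assumption, not data from the text): the rank-one perturbation $E=-\sigma_{\min}uv^T$ built from the smallest singular pair makes $A+E$ singular, and $\|E\|=\sigma_{\min}(A)=1/\|A^{-1}\|$.

```python
import math

# Numerical check of the Eckart-Young theorem on a hypothetical 2x2 matrix:
# the nearest singular matrix to A lies at distance 1/||A^{-1}||_2 = sigma_min(A).
A = [[2.0, 1.0],
     [0.0, 1.0]]

# Singular values of A are square roots of the eigenvalues of A^T A.
ata = [[sum(A[i][r] * A[i][c] for i in range(2)) for c in range(2)]
       for r in range(2)]
tr = ata[0][0] + ata[1][1]
det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
lam_min = (tr - math.sqrt(tr * tr - 4.0 * det)) / 2.0
sigma_min = math.sqrt(lam_min)

# Right singular vector v: unit null vector of A^T A - lam_min I.
v = [ata[0][1], lam_min - ata[0][0]]
nv = math.hypot(v[0], v[1])
v = [v[0] / nv, v[1] / nv]
# Left singular vector u = A v / sigma_min (unit, since ||Av|| = sigma_min).
u = [(A[i][0] * v[0] + A[i][1] * v[1]) / sigma_min for i in range(2)]

# E = -sigma_min * u v^T has norm sigma_min and makes A + E singular.
E = [[-sigma_min * u[i] * v[j] for j in range(2)] for i in range(2)]
B = [[A[i][j] + E[i][j] for j in range(2)] for i in range(2)]
det_B = B[0][0] * B[1][1] - B[0][1] * B[1][0]  # should vanish (up to rounding)
```

Since $E$ is rank one with unit vectors $u$ and $v$, its spectral and Frobenius norms coincide with $\sigma_{\min}$, so the computed distance matches the theorem.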
For another example, consider the problem of finding a solution to the system $Ax = b$ where $A$ is a positive-definite matrix. It was shown in \cite{Akaike} that the steepest descent method is linearly convergent with rate $(\frac{k(A)-1}{k(A)+1})^2$ and that this bound is asymptotically tight for almost all choices of initial iterates. Similarly, it is well known (see \cite{Golub}) that the conjugate gradient method applied to the same problem is also linearly convergent with rate $\frac{\sqrt{k(A)} - 1}{\sqrt{k(A)}+1}$. From a computational perspective, a related and important area of study is that of error bounds. Given a subset of a Euclidean space, an error bound is an inequality that bounds the distance from a test vector to the specified subset in terms of some residual function that is typically easy to compute. In that sense, an error bound can be used both as part of a stopping rule during implementation of an algorithm as well as an aide in proving algorithmic convergence. A comprehensive survey of error bounds for a variety of problems arising in optimization can be found in \cite{Pang}. With regards to the problem of solving a nonsingular linear system $Ax = b$, one connection between condition measures and error bounds is immediate. Let $x^*$ be a solution to the system and $x$ be any other vector. Then \[ \|x-x^*\| = \|A^{-1}A(x-x^*)\| = \|A^{-1}(Ax-b)\|\leq \|A^{-1}\|\|Ax-b\|, \] so the distance to the solution set is bounded by a constant multiple of the residual vector, $\|Ax-b\|$, and this constant is the same one that appears in the context of conditioning and distance to infeasibility. As we discuss later, this result is not confined to systems of linear equations. As a result, error bounds and the related condition numbers often make a prominent appearance in convergence proofs for a variety of algorithms. 
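To make the error bound concrete, here is a small numerical check on a hypothetical diagonal system (all data below are illustrative assumptions): since $\|x-x^*\| = \|A^{-1}(Ax-b)\| \le \|A^{-1}\|\,\|Ax-b\|$, the computable residual norm always dominates the unknown distance to the solution.

```python
import math

# Check of ||x - x*|| <= ||A^{-1}||_2 ||Ax - b|| on a hypothetical
# diagonal system A = diag(3, 1), for which ||A^{-1}||_2 = 1.
A = [[3.0, 0.0],
     [0.0, 1.0]]
b = [3.0, 2.0]
x_star = [1.0, 2.0]   # exact solution: A x* = b
inv_norm = 1.0        # ||A^{-1}||_2 = 1 / sigma_min(A)

def norm(v):
    return math.hypot(v[0], v[1])

def residual(x):
    return [A[0][0] * x[0] + A[0][1] * x[1] - b[0],
            A[1][0] * x[0] + A[1][1] * x[1] - b[1]]

# For each trial point, pair the true error with the residual-based bound.
trials = [[0.0, 0.0], [2.0, -1.0], [1.5, 2.5], [-3.0, 4.0]]
checks = [(norm([x[0] - x_star[0], x[1] - x_star[1]]),
           inv_norm * norm(residual(x))) for x in trials]
```

Every pair in `checks` satisfies the bound, as guaranteed by the identity above; the bound is tight exactly when $x-x^*$ aligns with the singular direction achieving $\|A^{-1}\|$.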
In this paper, motivated by a recent randomized iterated projection scheme for systems of linear equations due to Strohmer and Vershynin in \cite{Strohmer}, we revisit some classical algorithms and show that, with an appropriate randomization scheme, we can demonstrate convergence rates directly in terms of these natural condition measures. The rest of the paper is organized as follows. In the remainder of this section, we define some notation used throughout the rest of this paper. In Section \ref{coordinate}, we consider the problem of solving a linear system $Ax = b$ and show that a randomized coordinate descent scheme, implemented according to a specific probability distribution, is linearly convergent with a rate expressible in terms of traditional conditioning measures. In Section \ref{alternating}, we build upon the work of Strohmer and Vershynin in \cite{Strohmer} by considering randomized iterated projection algorithms for linear inequality systems. In particular, we show how randomization can provide convergence rates in terms of the traditional Hoffman error bound in \cite{Hoffman} as well as in terms of Renegar's distance to infeasibility from \cite{Renegar3}. In Section \ref{metric}, we consider randomized iterated projection algorithms for general convex sets and, under appropriate metric regularity assumptions, obtain {\em local} convergence rates in terms of the modulus of regularity. The classical, deterministic versions of the simple algorithms we consider here have been widely studied, in part due to the extreme simplicity of each iteration: their linear convergence is well-known. However, as remarked for linear systems of equations in \cite{Strohmer}, randomized versions are interesting for several reasons. The randomized iterated projection method for linear equations from which this work originated may have some practical promise, even compared with conjugate gradients, for example \cite{Strohmer}. 
Our emphasis here, however, is theoretical: randomization provides a framework for simplifying the analysis of algorithms, allowing easy bounds on the rates of linear convergence in terms of natural linear-algebraic condition measures, such as relative condition numbers, Hoffman constants, and the modulus of metric regularity. \section{Notation} On the Euclidean space $\Rn$, we denote the Euclidean norm by $\|\cdot\|$. Let $e_i$ denote the column vector with a 1 in the $i^{th}$ position and zeros elsewhere. We consider $m$-by-$n$ real matrices $A$. We denote the set of rows of $A$ by $\{a_1^T,\ldots,a_m^T\}$ and the set of columns by $\{A_1,\ldots,A_n\}$. The {\em spectral norm} of $A$ is the quantity $\|A\|_2 := \max_{\|x\|=1}\|Ax\|$ and the {\em Frobenius norm} is $\|A\|_F := \big(\sum_{i,j} a_{ij}^2\big)^{1/2}$. These norms satisfy the following inequality \cite{Horn}: \bmye \label{upper} \|A\|_F \le \sqrt{n} \|A\|_2. \emye For an arbitrary matrix $A$, let $\|A^{-1}\|_2$ be the smallest constant $M$ such that $\|Ax\|_2 \geq \frac{1}{M}\|x\|_2$ for all vectors $x$. In the case $m \ge n$, if $A$ has singular values $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n$, then $M$ can also be expressed as the reciprocal of the minimum singular value $\sigma_n$, and, if $A$ is invertible, this quantity equals the spectral norm of $A^{-1}$. The {\em relative condition number} of $A$ is the quantity $k(A) := \|A\|_2\|A^{-1}\|_2$; related to this is the {\em scaled condition number} introduced by Demmel in \cite{Demmel}, defined by $\kappa(A) := \|A\|_F\|A^{-1}\|_2$. From this, it is easy to verify (using the singular value decomposition, for example) the following relationship between condition numbers: \[ 1 \leq \frac{\kappa(A)}{\sqrt{n}} \leq k(A). \] Now suppose the matrix $A$ is $n$-by-$n$ symmetric and positive definite. The {\em energy norm} (or $A$-norm), denoted $\|\cdot\|_A$, is defined by $\|x\|_A := \sqrt{x^TAx}$.
The inequality \bmye \label{energy} \|x\|_A^2 \le \|A^{-1}\|_2 \cdot \|Ax\|^2 ~~\mbox{for all}~ x \in \Rn \emye is useful later. Further, if $A$ is simply positive semi-definite, we can generalize Inequality \ref{energy}: \bmye \label{energy2} x^TAx \leq \frac{1}{\underline{\lambda}(A)} \|Ax\|^2 \emye where $\underline{\lambda}(A)$ is the smallest non-zero eigenvalue of $A$. We denote the trace of $A$ by $\tra A$: it satisfies the inequality \bmye \label{lower} \|A\|_F \ge \frac{\tra A}{\sqrt{n}}. \emye Given a nonempty closed convex set $S$, let $P_S(x)$ be the projection of $x$ onto $S$: that is, $P_S(x)$ is the vector $y$ that is the optimal solution to $\min_{z\in S} \|x-z\|_2$. Additionally, define the distance from $x$ to a set $S$ by \[ d(x,S) = \min_{z\in S} \|x-z\|_2 = \|x-P_S(x)\|. \] The following useful inequality is standard: \bmye \label{pythagoras} \|y - x\|^2 - \|P_S(y) - x\|^2 \ge \|y - P_S(y)\|^2 ~~ \mbox{for all}~ x \in S,~ y \in \Rn. \emye \section{Randomized Coordinate Descent} \label{coordinate} Let $A$ be an $n$-by-$n$ positive-definite matrix. We consider a linear system of the form $Ax=b$, with solution $x^* = A^{-1}b$. We consider the equivalent problem of minimizing the strictly convex quadratic function \[ f(x) = \frac{1}{2}x^TAx - b^Tx, \] and note the standard relationship \bmye \label{standard} f(x) - f(x^*) = \frac{1}{2} \|x-x^*\|_A^2. \emye Suppose our current iterate is $x$ and we obtain a new iterate $x_+$ by performing an exact line search in the nonzero direction $d$: that is, $x_+$ is the solution to $\min_{t\in\R} f(x+td)$. This gives us \[ x_+ = x + \frac{(b-Ax)^Td}{d^TAd}d \] and \bmye \label{FUpd1} f(x_+) - f(x^*) = \frac{1}{2}\|x_+-x^*\|^2_A = \frac{1}{2}\|x-x^*\|^2_A - \frac{((Ax-b)^Td)^2}{2d^TAd}. \emye One natural choice of a set of easily-computable search directions is to choose $d$ from the set of coordinate directions, $\{e_1,\ldots,e_n\}$. 
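For completeness, the exact line-search formula above follows from one-variable calculus applied to $g(t):=f(x+td)$ (note $d^TAd>0$ for $d\neq0$ when $A$ is positive definite):

```latex
g(t) = f(x+td) = f(x) + t\,(Ax-b)^T d + \tfrac{1}{2}\,t^2\, d^T A d,
\qquad
g'(t) = (Ax-b)^T d + t\, d^T A d .
```

Setting $g'(t)=0$ gives the step $t^* = \frac{(b-Ax)^Td}{d^TAd}$ quoted above, and substituting $t^*$ back into $g$ yields $f(x_+) = f(x) - \frac{((Ax-b)^Td)^2}{2\,d^TAd}$, which together with Equation \ref{standard} is exactly Equation \ref{FUpd1}.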
Note that, when using search direction $e_i$, we can compute the new point \[ x_+ = x + \frac{b_i-a_i^Tx}{a_{ii}}e_i \] using only $2n + 2$ arithmetic operations. If the search direction is chosen at each iteration by successively cycling through the set of coordinate directions, then the algorithm is known to be linearly convergent, but with a rate not easily expressible in terms of typical matrix quantities (see \cite{Golub} or \cite{Quarteroni}). However, by choosing a coordinate direction as a search direction randomly according to an appropriate probability distribution, we can obtain a convergence rate in terms of the relative condition number. This is made precise in the following algorithm and theorem. \begin{alg}\label{CDPD} Consider an $n$-by-$n$ positive semidefinite system $Ax=b$ and let $x_0 \in \Rn$ be an arbitrary starting point. For $j=0,1,\ldots,$ compute \[ x_{j+1} = x_j + \frac{b_i - a_i^Tx_j}{a_{ii}}e_i \] where, at each iteration $j$, the index $i$ is chosen independently at random from the set $\{1,\ldots,n\}$, with distribution \[ P\{ i = k \} ~=~ \frac{a_{kk}}{\tra A}. \] \end{alg} Notice in the algorithm that the matrix $A$ may be singular, but that nonetheless $a_{ii} > 0$ almost surely. If $A$ is merely positive semidefinite, solutions of the system $Ax=b$ coincide with minimizers of the function $f$, and consistency of the system is equivalent to $f$ being bounded below. We now have the following result. \begin{thm}\label{LC1} Consider a consistent positive-semidefinite system $Ax=b$, and define the corresponding objective and error by \begin{eqnarray*} f(x) & = & \frac{1}{2}x^TAx - b^Tx \\ \delta(x) & = & f(x) - \min f. \end{eqnarray*} Then Algorithm \ref{CDPD} is linearly convergent in expectation: indeed, for each iteration $j=0,1,2,\ldots$, \[ E[\delta(x_{j+1}) \:|\: x_j] ~\leq~ \Big(1-\frac{\underline{\lambda}(A)}{\tra A} \Big) \delta(x_j).
\] In particular, if $A$ is positive-definite and $x^* = A^{-1}b$, we have the equivalent property \[ E[\|x_{j+1}-x^*\|^2_{A} \:|\: x_j] ~\leq~ \Big( 1-\frac{1}{\|A^{-1}\|_2 \tra A} \Big) \|x_j-x^*\|^2_{A}. \] Hence the expected reduction in the squared error $\|x_j - x^*\|_A^2$ is at least a factor \[ 1-\frac{1}{\sqrt{n}\kappa(A)} ~\le~ 1-\frac{1}{n k(A)} \] at each iteration. \end{thm} \pf Note that if coordinate direction $e_i$ is chosen during iteration $j$, then Equation \ref{FUpd1} shows \[ f(x_{j+1}) = f(x_j) - \frac{(b_i - a_i^Tx_j)^2}{2a_{ii}}. \] Hence, it follows that \[ E[f(x_{j+1}) ~|~ x_j] = f(x_j) - \sum_{i=1}^n \frac{a_{ii}}{\tra(A)}\frac{(b_i - a_i^Tx_j)^2}{2a_{ii}} = f(x_j) - \frac{1}{2\tra A}\|Ax_j-b\|^2. \] Using Inequality \ref{energy2} and Equation \ref{standard}, we easily verify \[ \frac{1}{2}\|Ax_j-b\|^2 \ge \underline{\lambda}(A) \delta(x_j), \] and the first result follows. Applying Equation \ref{standard} provides the second result. The final result comes from applying Inequalities \ref{upper} and \ref{lower}. \finpf The simple idea behind the proof of Theorem \ref{LC1} is the main engine driving the remaining results in this paper. Fundamentally, the idea is to choose a probability distribution so that the expected distance to the solution from the new iterate is the distance to the solution from the old iterate minus some multiple of a residual. Then, using some type of error bound to bound the distance to a solution in terms of the residual, we obtain expected linear convergence of the algorithm. Now let us consider the more general problem of finding a solution to a linear system $Ax = b$ where $A$ is an $m$-by-$n$ matrix. More generally, since the system might be inconsistent, we seek a ``least squares solution'' by minimizing the function $\|Ax-b\|^2$.
The minimizers are exactly the solutions of the positive-semidefinite system $A^TAx = A^Tb$, to which we could easily apply the previous algorithm; however, as usual, we wish to avoid computing the new matrix $A^TA$ explicitly. Instead, we can proceed as follows. \begin{alg}\label{CDGen} Consider a linear system $Ax = b$ for a nonzero $m$-by-$n$ matrix $A$. Let $x_0 \in \Rn$ be an arbitrary initial point and let $r_0 = b - Ax_0$ be the initial residual. For each $j=0,1,\ldots,$ compute \begin{eqnarray*} \alpha_j & = & \frac{A_i^Tr_j}{\|A_i\|^2} \\ x_{j+1} & = & x_j + \alpha_j e_i \\ r_{j+1} & = & r_j - \alpha_j A_i, \end{eqnarray*} where, at each iteration $j$, the index $i$ is chosen independently at random from the set $\{1,\ldots,n\}$, with distribution \[ P\{ i = k \} ~=~ \frac{\|A_k\|^2}{\|A\|_F^2} ~~~ (k = 1,2,\ldots,n). \] \end{alg} \noindent (In the formula for $\alpha_j$, notice by assumption that $A_i \ne 0$ almost surely.) Note that the step size at each iteration can be obtained by directly minimizing the residual in the respective coordinate direction. However, the algorithm can also be viewed as the application of the algorithm for positive definite systems on the system of normal equations, $A^TAx = A^Tb$, without actually having to compute the matrix $A^TA$. Given the motivation of directly minimizing the residual, we would expect that Algorithm \ref{CDGen} would converge to a least squares solution, even in the case where the underlying system is inconsistent. The next result shows that this is, in fact, the case. \begin{thm}\label{CDMN2} Consider any linear system $Ax=b$, where the matrix $A$ is nonzero. Define the least-squares residual and the error by \begin{eqnarray*} f(x) & = & \frac{1}{2} \|Ax-b\|^2 \\ \delta(x) & = & f(x) - \min f. 
\end{eqnarray*} Then Algorithm \ref{CDGen}~is linearly convergent in expectation to a least squares solution for the system: for each iteration $j=0,1,2,\ldots$, \[ E[\delta(x_{j+1}) \:|\: x_j] ~\leq~ \Big(1-\frac{\underline{\lambda}(A^TA)}{\|A\|^2_F}\Big) \delta(x_j). \] In particular, if $A$ has full column rank, we have the equivalent property \[ E[\|x_{j+1}-\hat{x}\|^2_{A^TA} \:|\: x_j] ~\leq~ \Big(1-\frac{1}{\kappa(A)^2}\Big)\|x_j-\hat{x}\|^2_{A^TA} \] where $\hat{x} = (A^TA)^{-1}A^Tb$ is the unique least-squares solution. \end{thm} \pf It is easy to verify, by induction on $j$, that the iterates $x_j$ are exactly the same as the iterates generated by Algorithm \ref{CDPD}, when applied to the positive-semidefinite system $A^TAx = A^Tb$, and furthermore that the residuals satisfy $r_j = b-Ax_j$ for all $j=0,1,2,\ldots$. Hence, the results follow directly from Theorem \ref{LC1}. \finpf By the coordinate descent nature of this algorithm, once we have computed the initial residual $r_0$ and column norms $\{\|A_i\|^2\}_{i=1}^n$, we can perform each iteration in $O(n)$ time, just as in the positive-definite case. Specifically, this new iteration takes $4n+1$ arithmetic operations, compared with $2n+2$ for the positive-definite case. For a computational example, we apply Algorithm \ref{CDGen} to random $500\times n$ matrices where each element of $A$ and $b$ is an independent Gaussian random variable and we let $n$ take values 50, 100, 150 and 200. \begin{center} \includegraphics[width=15.5cm]{RandomCoordinateDescentExample3.png} \end{center} \noindent Note that in the above examples, the theoretical bound provided by Theorem \ref{CDMN2} predicts the actual behavior of the algorithm reasonably well. \section{Randomized Iterated Projections} \label{alternating} Iterated projection algorithms share some important characteristics with coordinate descent algorithms.
Both are well studied and much convergence theory exists; a comprehensive overview on iterated projections can be found in \cite{Deutsch}. However, even for linear systems of equations, standard developments do not provide bounds on convergence rates in terms of usual condition numbers. By contrast, in the recent paper \cite{Strohmer}, Strohmer and Vershynin obtained such bounds via the following randomized iterated projection algorithm, which also provided the motivation for our work in the previous section. \begin{alg} \label{SV1} Consider a linear system $Ax = b$ for a nonzero $m$-by-$n$ matrix $A$. Let $x_0 \in \Rn$ be an arbitrary initial point. For each $j=0,1,\ldots,$ compute \[ x_{j+1} = x_j - \frac{a_i^Tx_j - b_i}{\|a_i\|^2}a_i \] where, at each iteration $j$, the index $i$ is chosen independently at random from the set $\{1,\ldots,m\}$, with distribution \[ P\{ i = k \} ~=~ \frac{\|a_k\|^2}{\|A\|_F^2} ~~~ (k = 1,2,\ldots,m). \] \end{alg} Notice that the new iterate $x_{j+1}$ is simply the orthogonal projection of the old iterate $x_j$ onto the hyperplane $\{ x : a_i^T x = b_i \}$. At first sight, the choice of probability distribution may seem curious, since we could rescale the equations arbitrarily without having any impact on the projection operations. However, following \cite{Strohmer}, we emphasize that the aim is to understand linear convergence rates in terms of {\em linear-algebraic} condition measures associated with the original system, rather than in terms of {\em geometric} notions associated with the hyperplanes. This randomized algorithm has the following behavior. \begin{thm}[Strohmer-Vershynin, \cite{Strohmer}] \label{AP} Given any matrix $A$ with full column rank, suppose the linear system $Ax=b$ has solution $x^*$. 
Then Algorithm \ref{SV1}~converges linearly in expectation: for each iteration $j=0,1,2,\ldots$, \[ E[\|x_{j+1}-x^*\|_2^2 ~|~x_j] ~\leq~ \Big(1-\frac{1}{\kappa(A)^2}\Big)\|x_j-x^*\|^2_2 \] \end{thm} We seek a way of generalizing the above algorithm and convergence result to more general systems of linear inequalities, of the form \bmye \label{system} \left\{ \begin{array}{rcll} a_i^T x & \le & b_i & (i \in I_{\le}) \\ a_i^T x & = & b_i & (i \in I_{=}), \end{array} \right. \emye where the disjoint index sets $I_{\le}$ and $I_=$ partition the set $\{1,2,\ldots,m\}$. To do so, staying with the techniques of the previous section, we need a corresponding error bound for a system of linear inequalities. First, given a vector $x\in\Rn$, define the vector $x^+$ by $(x^+)_i = \max\{x_i,0\}$. Then a starting point for this subject is a result by Hoffman in \cite{Hoffman}. \begin{thm}[Hoffman]\label{HoffC} For any right-hand side vector $b \in \R^m$, let $S_b$ be the set of feasible solutions of the linear system (\ref{system}). Then there exists a constant $L$, independent of $b$, with the following property: \bmye \label{hoffman} x \in \Rn ~\mbox{and}~ S_b \ne \emptyset ~~\Rightarrow~~ d(x,S_b) \le L \|e(Ax-b)\|, \emye where the function $e \colon \R^m \to \R^m$ is defined by \[ e(y)_i = \left\{ \begin{array}{cl} y_i^+ & (i \in I_{\le}) \\ y_i & (i \in I_{=}), \end{array} \right. \] \end{thm} In the above result, each component of the vector $e(Ax-b)$ indicates the error in the corresponding inequality or equation. In particular $e(Ax-b)=0$ if and only if $x \in S_b$. Thus Hoffman's result provides a linear bound for the distance from a trial point $x$ to the feasible region in terms of the size of the ``a posteriori error'' associated with $x$. We call the minimum constant $L$ such that property (\ref{hoffman}) holds the {\em Hoffman constant} for the system (\ref{system}). 
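As a quick numerical sanity check (our illustration, not part of the original text), consider the system $x_1 \le 0$, $x_2 \le 0$: the feasible set $S$ is the negative orthant, projection onto $S$ is coordinatewise clamping, and the Hoffman property (\ref{hoffman}) holds with $L = 1$, indeed with equality.

```python
import math
import random

def e_residual(A, b, x):
    """Componentwise a-posteriori error e(Ax - b) for the system A x <= b."""
    return [max(sum(a * xi for a, xi in zip(row, x)) - bi, 0.0)
            for row, bi in zip(A, b)]

def dist_to_orthant(x):
    """Distance from x to the feasible set of x1 <= 0, x2 <= 0 (the
    negative orthant), whose projection is coordinatewise clamping."""
    return math.sqrt(sum(max(xi, 0.0) ** 2 for xi in x))

# For A = I, b = 0 the Hoffman bound d(x, S) <= L ||e(Ax - b)|| holds
# with L = 1, so we can verify it at random trial points.
A = [[1.0, 0.0], [0.0, 1.0]]
b = [0.0, 0.0]
rng = random.Random(0)
for _ in range(100):
    x = [rng.uniform(-2.0, 2.0), rng.uniform(-2.0, 2.0)]
    err = math.sqrt(sum(v * v for v in e_residual(A, b, x)))
    assert dist_to_orthant(x) <= err + 1e-12
```

The same check goes through for any right-hand side $b$, since the Hoffman constant does not depend on $b$.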
Several authors give geometric or algebraic meaning to this constant, or exact expressions for it, including \cite{Guler}, \cite{Ng}, \cite{Li}; for a more thorough treatment of the subject, see \cite{Pang}. In the case of linear equations (that is, $I_{\le} = \emptyset$), an easy calculation using the singular value decomposition shows that the Hoffman constant is just the reciprocal of the smallest nonzero singular value of the matrix $A$, and hence equals $\|A^{-1}\|_2$ when $A$ has full column rank. For the problem of finding a solution to a system of linear inequalities, we consider a randomized algorithm generalizing Algorithm \ref{SV1}. \begin{alg} \label{SV2} Consider the system of inequalities (\ref{system}). Let $x_0$ be an arbitrary initial point. For each $j=0,1,\ldots,$ compute \begin{eqnarray*} \beta_j & = & \left\{ \begin{array}{cl} (a_i^Tx_j - b_i)^+ & (i \in I_{\le}) \\ a_i^Tx_j - b_i & (i \in I_=) \end{array} \right. \\ x_{j+1} & = & x_j - \frac{\beta_j}{\|a_i\|^2}a_i \end{eqnarray*} where, at each iteration $j$, the index $i$ is chosen independently at random from the set $\{1,\ldots,m\}$, with distribution \[ P\{ i = k \} ~=~ \frac{\|a_k\|^2}{\|A\|_F^2} ~~~ (k = 1,2,\ldots,m). \] \end{alg} \noindent In the above algorithm, notice $\beta_j = e(Ax_j-b)_i$. We can now generalize Theorem \ref{AP} as follows. \begin{thm}\label{APHoff} Suppose the system (\ref{system}) has nonempty feasible region $S$. Then Algorithm \ref{SV2} converges linearly in expectation: for each iteration $j=0,1,2,\ldots$, \[ E[d(x_{j+1},S)^2 ~|~ x_j] ~\leq~ \Big(1 - \frac{1}{L^2\|A\|^2_F} \Big) d(x_j,S)^2. \] where $L$ is the Hoffman constant. 
\end{thm} \pf Note that if the index $i$ is chosen during iteration $j$, then it follows that \begin{eqnarray*} \lefteqn{ \|x_{j+1}-P_S(x_{j+1})\|^2_2 ~ \leq ~ \|x_{j+1}-P_S(x_j)\|^2_2 } \\ & & = ~ \Big\| x_j - \frac{e(Ax_j-b)_i}{\|a_i\|^2}a_i -P_S(x_j) \Big\|^2_2 \\ & & = ~ \| x_j - P_S(x_j) \|^2_2 + \frac{e(Ax_j-b)_i^2}{\|a_i\|^2} - 2 \frac{e(Ax_j-b)_i}{\|a_i\|^2}a_i^T(x_j - P_S(x_j)). \end{eqnarray*} Note $P_S(x_j)\in S$. Hence if $i \in I_{\le}$, then $a_i^T P_S(x_j)\leq b_i$, and $e(Ax_j-b)_i \ge 0$, so \[ e(Ax_j-b)_i a_i^T(x_j - P_S(x_j)) \ge e(Ax_j-b)_i (a_i^T x_j - b_i) = e(Ax_j-b)_i^2. \] On the other hand, if $i \in I_=$, then $a_i^T P_S(x_j) = b_i$, so \[ e(Ax_j-b)_i a_i^T(x_j - P_S(x_j)) = e(Ax_j-b)_i (a_i^T x_j - b_i) = e(Ax_j-b)_i^2. \] Putting these two cases together with the previous inequality shows \[ d(x_{j+1},S)^2 ~\le~ d(x_j,S)^2 - \frac{e(Ax_j-b)_i^2}{\|a_i\|^2}. \] Taking the expectation with respect to the specified probability distribution, it follows that \[ E[d(x_{j+1},S)^2 ~|~ x_j] \le d(x_j,S)^2 - \frac{\|e(Ax_j-b)\|^2}{\|A\|_F^2} \] and the result now follows by the Hoffman bound. \finpf \noindent Since Hoffman's bound is not invariant under scaling of the matrix $A$, it is not surprising that a normalizing term like $\|A\|^2_F$ appears in the result. For a computational example, we consider linear inequality systems $Ax \leq b$ where the elements of $A$ are independent standard Gaussian random variables and $b$ is chosen so that the resulting system has a non-empty interior. We consider matrices $A$ which are $500\times n$, letting $n$ take values 50, 100, 150 and 200. We then apply Algorithm \ref{SV2} to these problems and observe the following computational results.
\begin{center} \includegraphics[width=13.5cm]{RandomAltProjectIneqExample2.png} \end{center} Another natural conditioning measure for linear inequality systems is the distance to infeasibility, defined by Renegar in \cite{Renegar3}, and shown in \cite{Renegar2} to govern the convergence rate of interior point methods for linear programming. It is interesting, therefore, from a theoretical perspective, to obtain a linear convergence rate for iterated projection algorithms in terms of this condition measure as well. For simplicity, we concentrate on the inequality case, $Ax \le b$. To begin, let us recall the following results. The {\em distance to infeasibility} \cite{Renegar3} for the system $Ax \le b$ is the number \[ \mu ~=~ \inf \Big\{ \max \{ \| \Delta A \| , \| \Delta b \| \} : (A + \Delta A) x \le b + \Delta b ~ \mbox{is infeasible} \Big\}. \] \begin{thm}[Renegar, \cite{Renegar3}, Thm 1.1] \label{renegar} Suppose $\mu > 0$. Then there exists a point $\hat x \in S$ satisfying $\|\hat x\| \le \|b\|/\mu$. Furthermore, any point $x \in \R^n$ satisfies the inequality \[ d(x,S) ~\le~ \frac{\max \{ 1 , \|x\| \}}{\mu} \|(Ax-b)^+\|. \] \end{thm} Using this, we can bound the linear convergence rate for Algorithm \ref{SV2} in terms of the distance to infeasibility, as follows. Notice first that $\|x_j - \hat x\|$ is nonincreasing in $j$, by Inequality \ref{pythagoras}. Suppose we start Algorithm \ref{SV2} at the initial point $x_0 = 0$. Applying Theorem \ref{renegar}, we see that for all $j=1,2,\ldots,$ \[ \|x_j\| \le \|\hat x\| + \|x_j - \hat x\| \le \|\hat x\| + \|x_0 - \hat x\| \le \frac{2\|b\|}{\mu}, \] so \[ d(x_j , S) \le \max \Big\{ \frac{1}{\mu} , \frac{2\|b\|}{\mu^2} \Big\} \|(Ax_j-b)^+\|. \] Using this inequality in place of Hoffman's bound in the proof of Theorem \ref{APHoff} gives \[ E[d(x_{j+1},S)^2 ~|~ x_j] \leq \left[1 - \frac{1}{\|A\|_F^2(\max\{\frac{1}{\mu}, \frac{2\|b\|}{\mu^2}\})^2}\right]d(x_j,S)^2.
\] Although this bound may not be the best possible (and, in fact, it may not be as good as the bound provided in Theorem \ref{APHoff}), this result simply emphasizes a relationship between algorithm speed and conditioning measures that appears naturally in other contexts. In the next section, we proceed with these ideas in a more general framework. \section{Metric Regularity and Local Convergence} \label{metric} The previous section concerned global rates of linear convergence. If instead we are interested in {\em local} rates, we can re-examine a generalization of our problem through an alternative perspective of set-valued mappings. Consider a set-valued mapping $\Phi: \Rn\tto \Rm$ and the problem of solving the associated constraint system of the form $b\in \Phi(x)$ for the unknown vector $x$. For example, finding a feasible solution to $Ax\leq b$ is equivalent to finding an $x$ such that \bmye \label{inclusion} b \in Ax + \Rm_+. \emye Related to this is the idea of \textit{metric regularity} of set-valued mappings. We say the set-valued mapping $\Phi$ is metrically regular at $\bar x$ for $\bar b\in \Phi(\bar x)$ if there exists $\gamma > 0$ such that \bmye \label{MetReg} d(x, \Phi^{-1}(b)) \leq \gamma d(b, \Phi(x)) \mbox{ for all } (x,b) \mbox{ near } (\bar x, \bar b), \emye where $\Phi^{-1}(b) = \{x: b\in \Phi(x)\}$. Further, the \textit{modulus of regularity} is the infimum of all constants $\gamma$ such that Equation \ref{MetReg} holds. Metric regularity is strongly connected with a variety of ideas from variational analysis: a good background reference is \cite{Roc98}. Metric regularity generalizes the error bounds discussed in previous sections at the expense of only guaranteeing a bound in local terms. 
For example, if $\Phi$ is a single-valued linear map, then the modulus of regularity (at any $\bar{x}$ for any $\bar{b}$) corresponds to the typical conditioning measure $\|\Phi^{-1}\|$ (with $\|\Phi^{-1}\| = \infty$ implying the map is not metrically regular), and if $\Phi$ is a smooth single-valued mapping, then the modulus of regularity is the reciprocal of the minimum singular value of the Jacobian, $\nabla \Phi(x)$. From an alternative perspective, metric regularity provides a framework for generalizing the Eckart-Young result on the distance to ill-posedness of linear mappings cited in Theorem \ref{EY}. Specifically, if we define the \textit{radius of metric regularity} at $\bar{x}$ for $\bar{b}$ for a set-valued mapping $\Phi$ between finite-dimensional spaces by \[ \mbox{rad}\Phi(\bar{x}|\bar{b}) ~=~ \inf \{\|E\| : \Phi + E \mbox{ not metrically regular at } \bar{x} \mbox{ for } \bar{b} + E(\bar{x}) \}, \] where the infimum is over all linear functions $E$, then one obtains the strikingly simple relationship (see \cite{Dontchev}) \[ \mbox{modulus of regularity of $\Phi$ at $\bar{x}$ for $\bar{b}$} ~=~ \frac{1}{\mbox{rad}\Phi(\bar{x}|\bar{b})}. \] We will not be directly using the above result. Here, we simply use the fundamental idea of metric regularity, which says that the distance from a point to the solution set, $d(x,\Phi^{-1}(b))$, is locally bounded by some constant times a ``residual''. For example, when $\Phi$ corresponds to the linear inequality system (\ref{inclusion}), we have $d(b,\Phi(x)) = \|(Ax-b)^+\|$, so the modulus of regularity is in fact a global bound and equals the Hoffman constant. More generally, we wish to emphasize that metric regularity ties together several of the ideas from previous sections at the expense of those results now only holding locally instead of globally. In what follows, assume all distances are Euclidean distances.
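To fix ideas before stating the convergence result, here is a toy example of ours (not from the original text): two lines through the origin in the plane meet transversally at $S = \{0\}$, the corresponding set-valued mapping is metrically regular there, and uniformly randomized projections onto the two lines converge linearly to the intersection. The starting point, angle, and iteration budget below are illustrative choices.

```python
import math
import random

def project_line(x, d):
    """Orthogonal projection of x onto the line {t d : t real}, for a
    unit direction vector d in the plane."""
    t = d[0] * x[0] + d[1] * x[1]
    return [t * d[0], t * d[1]]

def random_projections(x0, projections, iters, seed=0):
    """Uniformly randomized iterated projections: at each step one of the
    m projection maps is chosen with probability 1/m and applied."""
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(iters):
        x = rng.choice(projections)(x)
    return x

# Two lines through the origin meeting at 45 degrees; S = S1 cap S2 = {0}.
r = math.sqrt(0.5)
projections = [lambda x: project_line(x, (1.0, 0.0)),
               lambda x: project_line(x, (r, r))]
x = random_projections([3.0, 5.0], projections, iters=400)
```

Each switch between the two lines contracts the distance to $S$ by $\cos(\pi/4)$, a linear rate of exactly the kind the regularity modulus quantifies.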
We wish to consider how the modulus of regularity of $\Phi$ affects the convergence rate of iterated projection algorithms. We remark that linear convergence for iterated projection methods on convex sets has been very widely studied: see \cite{Deutsch}, for example. Our aim here is to observe, by analogy with previous sections, how randomization makes the linear convergence rate easy to interpret in terms of metric regularity. Let $S_1,S_2,\ldots,S_m$ be closed convex sets in a Euclidean space $\E$ such that $\cap_i S_i\neq \emptyset$. Then, in a manner similar to \cite{Lewis}, we can endow the product space $\E^m$ with the inner product \[ \langle (u_1,u_2,\ldots,u_m), (v_1,v_2,\ldots, v_m)\rangle = \sum_{i=1}^m \langle u_i,v_i\rangle \] and consider the set-valued mapping $\Phi: \E\rightarrow\E^m$ given by \bmye \label{PhiMap} \Phi(x) = (S_1 - x, S_2 - x, \ldots, S_m -x). \emye Then it clearly follows that $\bar{x}\in\cap_i S_i \Leftrightarrow 0\in\Phi(\bar{x})$. Under appropriate regularity assumptions, we obtain the following local convergence result. \begin{thm}\label{MRAP} Suppose the set-valued mapping $\Phi$ given by Equation \ref{PhiMap} is metrically regular at $\bar{x}$ for 0 with regularity modulus $\gamma$. Let $\bar{\gamma}$ be any constant strictly larger than $\gamma$ and let $x_0$ be any initial point sufficiently close to $\bar{x}$. Further, suppose that $x_{j+1} = P_{S_i}(x_j)$ with probability $\frac{1}{m}$ for $i=1,\ldots,m$. Then \[ E[d(x_{j+1}, S)^2 ~|~ x_j] ~\leq~ \Big(1-\frac{1}{m\bar{\gamma}^2}\Big)d(x_j,S)^2. \] \end{thm} \pf First, note that by Inequality \ref{pythagoras}, the distance $\|x_j - \bar x\|$ is nonincreasing in $j$. Hence if $x_0$ is sufficiently close to $\bar{x}$, then $x_j$ is as well for all $j\geq 0$. Then, again using Inequality \ref{pythagoras} (applied to the set $S_i$), we have, for all points $x \in S \subset S_i$, \[ \|x_j - x\|^2 - \|x_j - P_{S_i}(x_j)\|^2 \ge \|P_{S_i}(x_j) - x\|^2. 
\] Taking the minimum over $x \in S$, we deduce \[ d(x_j,S)^2 - \|x_j - P_{S_i}(x_j)\|^2 \ge d(P_{S_i}(x_j),S)^2. \] Hence \begin{eqnarray*} E[d(x_{j+1}, S)^2 ~|~ x_j] & = & \frac{1}{m}\sum_{i=1}^m d(P_{S_i}(x_j),S)^2 \\ & \leq & \frac{1}{m}\sum_{i=1}^m [d(x_j,S)^2 - d(x_j, S_i)^2] \\ & = & d(x_j,S)^2 - \frac{1}{m}\sum_{i=1}^m d(x_j,S_i)^2 \\ & = & d(x_j,S)^2 - \frac{1}{m}d(0, \Phi(x_j))^2 \\ & \leq & \Big(1-\frac{1}{m\bar{\gamma}^2}\Big)d(x_j,S)^2, \end{eqnarray*} using the definition of metric regularity. \finpf Note that metric regularity at $\bar{x}$ for 0 is a slightly stronger assumption than actually necessary for this result. Specifically, the above result holds as long as Equation \ref{MetReg} holds for all $x$ near $\bar{x}$ with $\bar{b}=0$ fixed, as opposed to the above definition requiring it to hold for all $b$ near $\bar{b}$ as well. For a moment, let $m=2$ and consider the sequence of iterates $\{x_j\}_{j\geq 0}$ generated by the randomized iterated projection algorithm. By idempotency of the projection operator, there is no benefit to projecting onto the same set in two consecutive iterations, so the subsequence consisting of different iterates corresponds exactly to that of the non-randomized iterated projection algorithm. In particular, if $x_j\in S_1$, then \[ d(P_{S_2}(x_j), S)^2 \leq d(x_j,S)^2 - d(x_j,S_2)^2 = d(x_j,S)^2 - [d(x_j,S_2)^2 + d(x_j,S_1)^2] \] since $d(x_j,S_1) = 0$. This gives us the following corollary, which also follows through more standard deterministic arguments. \begin{cor}\label{MRAPCor} If $\Phi$ is metrically regular at $\bar{x}$ for 0 with regularity modulus $\gamma$ and $\bar{\gamma}$ is strictly larger than $\gamma$, then for $x_0$ sufficiently close to $\bar{x}$, the 2-set iterated projection algorithm is linearly convergent and \[ d(x_{j+1},S)^2 \leq \Big(1-\frac{1}{\bar{\gamma}^2}\Big)d(x_j,S)^2. \] \end{cor} Further, consider the following refined version of the $m$-set randomized algorithm.
Suppose $x_0\in S_1$ and $i_0 = 1$. Then for $j=1,2,\ldots,$ let $i_j$ be chosen uniformly at random from $\{1,\ldots,m\}\backslash \{i_{j-1}\}$ and $x_{j+1} = P_{S_{i_j}}(x_j)$. Then we obtain the following similar result. \begin{cor}\label{MRAPCor2} If $\Phi$ is metrically regular at $\bar{x}$ for 0 with regularity modulus $\gamma$ and $\bar{\gamma}$ is strictly larger than $\gamma$, then for $x_0$ sufficiently close to $\bar{x}$, the refined $m$-set randomized iterated projection algorithm is linearly convergent in expectation and \[ E[d(x_{j+1},S)^2 ~|~ x_j,~ i_{j-1}] \leq \Big(1-\frac{1}{(m-1)\bar{\gamma}^2}\Big) d(x_j,S)^2. \] \end{cor} A simple but effective product space formulation by Pierra in \cite{Pierra} has the benefit of reducing the problem of finding a point in the intersection of finitely many sets to the problem of finding a point in the intersection of two sets. Using the notation above, we consider the closed set in the product space given by \[ T = S_1 \times S_2 \times \ldots \times S_m \] and the subspace \[ L = \{Ax: x\in \E\} \] where the linear mapping $A:\E\rightarrow\E^m$ is defined by $Ax = (x,x,\ldots,x)$. Again, notice that $\bar{x}\in \cap_i S_i \Leftrightarrow (\bar{x},\ldots,\bar{x}) \in T \cap L$. One interesting aspect of this formulation is that projections in the product space $\E^m$ relate back to projections in the original space $\E$ by \begin{eqnarray*} (z_1,\ldots,z_m) \in P_T(Ax) & \Leftrightarrow & z_i\in P_{S_i}(x) ~~(i=1,2,\ldots,m) \\ (P_L(z_1,\ldots,z_m))_i & = & \frac{1}{m}(z_1+z_2+\ldots+z_m) ~~(i=1,\ldots,m). \end{eqnarray*} This formulation provides a nice analytical framework: we can use the above equivalence of projections to consider the \textit{method of averaged projections} directly, defined as follows. \begin{alg}\label{AvgP} Let $S_1,\ldots,S_m\subseteq \E$ be nonempty closed convex sets. Let $x_0$ be an initial point. For $j=1,2,\ldots$, let \[ x_{j+1} = \frac{1}{m}\sum_{i=1}^m P_{S_i}(x_j).
\] \end{alg} Simply put, at each iteration, the algorithm projects the current iterate onto each set individually and takes the average of those projections as the next iterate. In the product space formulation, this is equivalent to $x_{j+1} = P_L(P_T(x_j))$. Expanding on the work of Pierra in \cite{Pierra}, additional convergence theory for this algorithm has been examined by Bauschke and Borwein in \cite{Bauschke}. Under appropriate regularity conditions, the general idea is that convergence of the iterated projection algorithm for two sets implies convergence of the averaged projection algorithm for $m$ sets. In a similar sense, we prove the following result in terms of randomized projections. \begin{thm}\label{AvgPRandom} Suppose $S = \cap_{i=1}^m S_i$ is non-empty. If the randomized projection algorithm of Theorem \ref{MRAP} is linearly convergent in expectation with rate $\alpha$, then so is Algorithm \ref{AvgP}. \end{thm} \pf Let $x_j$ be the current iterate, $x_{j+1}^{AP}$ be the new iterate in the method of averaged projections and $x_{j+1}^{RP}$ be the new iterate in the method of uniformly randomized projections. Then note that: \[ x_{j+1}^{AP} = \frac{1}{m}\sum_{i=1}^m P_{S_i}(x_j) = E[x_{j+1}^{RP}]. \] By convexity of the $S_i$'s, it follows that \[ d(x_{j+1}^{AP}, S) = d(E[x_{j+1}^{RP}|x_j], S) \leq E[d(x_{j+1}^{RP}, S) | x_j] \leq \alpha d(x_j, S), \] by Jensen's Inequality. \finpf Hence, the method of averaged projections converges no more slowly than the method of uniformly random projections. In particular, under the assumptions of Theorem \ref{MRAP}, the method of averaged projections converges with rate no larger than $1-\frac{1}{m\bar{\gamma}^2}$. \bibliographystyle{plain}
\section{Introduction} Mastering networking, operating systems, and cybersecurity is inconceivable without practical experience with real computer systems and tools. Practicing these skills in a physical or virtual laboratory or at students' own hosts is a common instructional practice. In general, students receive task assignments that are prepared to be solved in the provided lab environment (single or multiple connected computers). These assignments could be a part of on-premise or remote sessions with an instructor, individual or team homework, or extracurricular competitions. Regardless of the instructional method used, all students usually receive the same assignment and an instance of the same environment for solving it. They are expected to achieve the same goal and submit the same answer. Having the same learning environment is convenient for learning new skills and technologies, but it is not always suitable for evaluating student learning or competitions. Students may share the correct answers with their peers via online communication. This answer sharing is easier than in other disciplines because the assignments themselves require interactions with computers. Monitoring students during the evaluation is laborious. Disconnecting students' computers from the Internet is impractical or infeasible because searching online sources such as documentation or data might be an inherent part of the assignment. Therefore, cheating is an issue that many computing educators face. \textit{Automatic Problem Generation} (APG, also \textit{Automated Exercise Generation}) enables instructors to create modified versions of the problems (tasks), called problem instances. Each student is provided with one instance of the same problem. APG can thus mitigate the threat of copied or leaked answers~\cite{burket2015}.
Although APG has already been applied in computing disciplines, it is not commonly used in hands-on assignments involving a lab environment (see \Cref{sec:related}). This paper contributes to the broader adoption of APG by i)~providing an open-source toolset for an automated setup of a lab environment with unique configuration for each student (\Cref{sec:toolset},~\cite{apg-toolset}), and ii)~reporting experience from a case study of using the toolset in an introductory security course enrolled by 207 students (\Cref{sec:study}). The toolset enabled instructors to assign personalized hands-on homework to all students. The additional effort was minimal compared to the standard practice of assigning the same tasks to all students. In return, the toolset revealed seven groups of submissions indicating forbidden cooperation, such as sharing answers between students (\Cref{sec:results_discussion}). \section{Related Work} \label{sec:related} Cheating and its mitigation are summarized in a recent paper~\cite[Section~2.3]{Fowler2021}. APG is one of these methods, and the one we focus on. \subsection{APG in Lab Environment} \label{sec:apg_lab} This subsection reviews existing works on APG in hands-on assignments featuring a \emph{lab environment} (or \emph{sandbox}), which we define as one or more physical or virtual computers and/or networks the students interact with to solve the assignment. Security Scenario Generator (SecGen)~\cite{Schreuders2017} is a robust framework for building networks of virtual machines with randomized services, vulnerabilities, and themed content. SecGen can randomly choose elements of generated machines, such as operating systems, network services, user credentials, or vulnerabilities. Configuration of each element of the machines can be varied, such as network ports or the strength of passwords.
SecGen uses its own complex scenario specification language in the XML format to describe constraints and properties of the generated machines, such as a system with a remotely exploitable vulnerability that would grant user-level access. SecGen has been used for teaching at universities and hosting a country-wide security competition in the United Kingdom. Chothia and Novakovic~\cite{Chothia2015} developed a virtual machine framework for cybersecurity education at the University of Birmingham, United Kingdom. The virtual machine runs a Linux operating system with many user accounts and intentionally flawed access control, web server, database server, and purpose-built insecure protocols. Once the machine is booted for the first time, its unique content is generated for each student. Students are tasked to find particular text strings (flags) and submit them to the server, which checks their correctness. The authors used the framework for exercises in introductory cybersecurity courses for master's degree and undergraduate students. The exercises covered encryption, access control, key-agreement protocols, web security, and reverse engineering. The authors encountered three cases where groups of students copied the flags or shared the virtual machine. Tele-Lab~\cite{Willems2012} is an online lab environment with the automatic assessment of practical exercises in cybersecurity. The assignments can include variables, which are instantiated and used for creating personalized content of virtual machines in the lab. The variables are used in multiple-choice or free-text tests in a web interface of the lab environment. Students work with personalized machines and answer the tests with personalized content. The exercises cover attacks on accounts and passwords, network reconnaissance, eavesdropping of network traffic, wireless and web security.
\subsection{APG Not Involving the Lab Environment} \label{sec:apg_nonlab} This subsection reports applications of APG where students do not interact with full-fledged computer hosts or networks. Burket et al.~\cite{burket2015} deployed APG in 2014 in PicoCTF, a large-scale cybersecurity competition for middle- and high-school students. They used \emph{problem templates} to generate a pool of problem instances with unique answers per instance. The competition included ten automatically generated problems: five on cryptography, three featuring web pages, one on reverse engineering, and one on converting a number to a different base. Agudo et al.~\cite{Agudo2019} designed SERA, an extensible framework for personalized exercises for introductory computer security courses. The exercises are defined using their own specification language, which works with modules for generating assignments and checking students' submissions. Each exercise is defined by i) the assignment template, ii) template parameters and their ranges used for the generation, and iii) functions for checking the students' submissions and providing feedback. SERA has been piloted at the University of Malaga, Spain, using exercises on X.509 and TLS certificates, vulnerabilities of web servers, and secure e-mail. MetaCTF~\cite{Feng2015} is a set of 17 homework assignments on reverse engineering and malware analysis for Linux operating systems. The assignments are organized into levels with increasing difficulty. Each level is completed when a student runs a provided binary, enters a correct password, and causes the binary to print the string \enquote{Good Job}. The binaries are unique for each student. MetaCTF includes a web interface for distributing individual binaries to students and checking the submitted passwords. MetaCTF was used in a course at Portland State University, USA, in 2015.
Sadigh et al.~\cite{sadigh2012} reported their work-in-progress on applying APG for an undergraduate embedded systems course at the University of California, Berkeley, USA, in 2012. They generated state-machine and real-time scheduling problems using problem templates and techniques for formal verification and synthesis. Fowler and Zilles~\cite{Fowler2021} focused on assessing basic programming skills. They created question variants of a similar difficulty using surface feature permutations. The variants were derived from base questions by changing specific elements such as variable or function names and the order of the parameters. The variant questions were used in homework assignments and the exam in an introductory Python course at the University of Illinois, USA, in 2020. Qi and Fossati~\cite{Qi2020} introduced Unlimited Trace Tutor, which automatically generates original blocks of Java code for practicing code tracing of \texttt{for} and \texttt{while} loops and \texttt{if} statements. The system generates a parse tree from a provided code snippet, modifies variables' values and relational operators, and produces a new snippet. The authors ran a pilot experiment with 11 volunteer students at Emory University, USA. \subsection{Our Contribution} \label{sec:apg_our} The contribution of this paper is in three areas. We enable fair summative assessment in the lab environment, report the results of a study in a large class, and provide our toolset to other educators. \subsubsection{Approach} Our approach and technical solution appear similar to Tele-Lab~\cite{Willems2012}, which involves the lab environment and automatically assesses personalized assignments. However, no implementation or evaluation of Tele-Lab personalized assignments has been published since the initial paper from 2012. Our approach is close to~\cite{Fowler2021}, which uses base questions and permutes their specific elements. 
However, \cite{Fowler2021} does not target networking, operating systems, or cybersecurity and does not involve any lab environment. Also, picoCTF~\cite{burket2015}, MetaCTF~\cite{Feng2015}, and SERA~\cite{Agudo2019} use templates for generating problems or files for cybersecurity competitions or classes. Still, they do not create the whole lab environment (virtual machines or networks), only its parts (web pages, binaries, certificates). PicoCTF and SERA provide a programming interface for generating arbitrary tasks and values. While this approach is more flexible than ours, instructors have to provide code for the generation instead of declaring types and constraints in the YAML markup language as in our approach. Our technical solution is the most similar to~\cite{Chothia2015}. While~\cite{Chothia2015} provides a single virtual machine that is difficult to modify, we enable educators to generate arbitrary networks with multiple machines. Next, unlike~\cite{Chothia2015}, which developed a custom submission server that is not publicly available, we extended CTFd~\cite{chung2017}, a popular open-source platform for hosting competitions and exercises. SecGen~\cite{Schreuders2017} applies APG in the lab environment with a different goal. Our goal is to provide students with a lab environment with the same structure but different content and values that students are required to search for. SecGen, on the other hand, creates various environments of the same complexity based on instructor-defined constraints. As a result, two environments generated by SecGen can feature different network services and vulnerabilities. Works~\cite{sadigh2012} and~\cite{Qi2020} do not involve the lab environment and use approaches specific to the problems they generate. \subsubsection{Evaluation} This paper evaluates our method and toolset in the authentic teaching context of a large class. The same applies to~\cite{Fowler2021}, but they do not involve the lab environment. 
In contrast, APG by~\cite{burket2015} was evaluated in a different context (a team security competition), though one attended by 10,000 students in 3,000 teams. Similarly, SecGen~\cite{Schreuders2017} was used in another competition, but only with 59 students. Other works included evaluation with only a few participants (MetaCTF~\cite{Feng2015}, Unlimited Trace Tutor~\cite{Qi2020}), did not report the number of participants in their studies (\cite{Chothia2015}), or did not include any evaluation at all (Tele-Lab~\cite{Willems2012}, SERA~\cite{Agudo2019}, \cite{sadigh2012}). \subsubsection{Reusability} We have released our toolset as an open-source software project. Only the authors of SecGen~\cite{Schreuders2017} and picoCTF~\cite{burket2015} did the same; \cite{Chothia2015} did so only partially. Unlimited Trace Tutor~\cite{Qi2020} is available free upon request. \section{Toolset for APG for Hands-on Labs} \label{sec:toolset} Since no toolset for generating a personalized lab environment for summative assessment was available, we implemented one. The toolset consists of two core components, the \textit{environment generator} and the \textit{submission server}, see \Cref{fig:toolset}. \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{figures/2022-SIGCSE-APG-toolset.pdf} \caption{The APG toolset design. \circlednumber{1} The student starts the environment generator with a unique seed. \circlednumber{2} The generator creates a personalized lab environment. \circlednumber{3} Answers specific to that environment are stored in the submission database. \circlednumber{4} The student solves the tasks. \circlednumber{5} The submitted answers are checked against the generated personalized answers. 
\circlednumber{6} The instructor examines the submission logs for signs of cheating.} \label{fig:toolset} \vspace{-0.4cm} \end{figure} \subsection{Environment Generator} Before developing the APG toolset, the lab environment where the students solved hands-on tasks was static, i.e., the same for everyone. The static lab environment was defined by text files specifying the parameters and configuration of virtual machines that students used for solving the hands-on assignments. Students were given these files and instantiated the lab environment locally on their computers using Cyber Sandbox Creator~\cite{my-2021-FIE-KYPO-CSC}, which is based on the free and open-source tools Vagrant~\cite{vagrant}, VirtualBox~\cite{virtualbox}, and Ansible~\cite{ansible}. The new environment generator is a set of Python scripts that work on top of these lab definition files. A student instantiates their lab environment with a unique identifier, such as an e-mail address. This identifier is hashed and serves as a seed for generating the values used to configure and deploy the personalized lab environment. For example, in a networking lab, a network service will run on a different unique port for each student. The environment generation is driven by a configuration file that specifies the types of the generated values and their constraints, such as a range or prohibited values. Currently, it can generate user names and passwords of a predefined length, sentences, IP addresses, and port numbers. The generated random values are passed to Ansible, which is responsible for configuration management of the virtual machines in the environment. \subsection{Submission Server} Students who solve hands-on tasks enter their answers into a central submission server. However, in traditional static assignments, every student searches for the same answer, which is prone to cheating. Along with the environment generator, we developed an open-source plugin that extends the popular platform CTFd~\cite{chung2017}. 
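The hashing-and-seeding step of the environment generator described above can be sketched as follows. This is a minimal illustration only: the choice of SHA-256 and the function names are our assumptions, not the toolset's actual API.

```python
import hashlib
import random

def make_seed(identifier: str) -> int:
    """Hash a student identifier (e.g., an e-mail address) into an integer seed."""
    return int(hashlib.sha256(identifier.encode("utf-8")).hexdigest(), 16)

def personalized_rng(identifier: str) -> random.Random:
    """Deterministic per-student random generator: the same identifier
    always reproduces the same lab environment values."""
    return random.Random(make_seed(identifier))

# A unique port for a networking lab, reproducible across re-deployments.
port = personalized_rng("[email protected]").randint(1500, 65000)
assert port == personalized_rng("[email protected]").randint(1500, 65000)
```

Because the seed is derived deterministically from the identifier, a student can destroy and re-create their environment and still obtain the same personalized values, while two different students almost surely obtain different ones.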
CTFd is a web portal that allows students to submit answers to instructor-defined tasks. When the students instantiate their personalized lab environment, the environment generator sends the generated personalized answers to CTFd. Our plugin then associates individual answers with the corresponding students in the database. This submission server brings two key improvements over the traditional assessment process. First, for each student, only their personalized answer is accepted as correct. Second, if a student submits someone else's answer, this activity is logged. \section{Case Study Methods} \label{sec:study} We describe the methods of the case study that uses the proposed toolset for APG. The goal is to evaluate APG for hands-on lab assignments in an authentic teaching context. \subsection{Teaching Context and Participants} The case study involves one homework assignment in an introductory computer security course with 207 enrolled undergraduate students. The course was taught at a public university in the Czech Republic in the Spring 2021 semester remotely (via video conferences) in English. The homework constituted 4\% of the final grade and was due in 14 days (May 3--17, 2021). \subsection{Exercise Content and Format} The assignment enhanced the skills students learned in the lab session before the homework. In particular, it covered network attacks on authentication of Telnet and SSH servers, securing an SSH server, and capturing and analyzing SSH traffic. The assignment had to be completed individually using the personalized lab environment and submission server introduced in \Cref{sec:toolset}. \subsubsection{Task personalization} The homework was structured into eight tasks. Each task had to be completed by entering a text string (answer) into the submission server. Answers to three tasks were uniform, and five tasks were personalized for each student. These answers were generated using our APG toolset. 
In the end, each student had a personalized lab environment, which contained i) a host running the Telnet server at a random network port, ii) one user account with a random username, iii) another user account with a random password, and iv) a file containing a random sentence. The generation of the random port number was limited to the values from 1500 to 65000, excluding the number 2323, a well-known alternative port number for the Telnet service. The password was a four-digit number from 1300 to 2000, excluding the numbers 1234 and 1337 because some students may guess these numbers. Finally, the username was one of the common usernames from a well-known dictionary used for authentication attacks. \subsubsection{Task dependencies} Before a student started to solve the homework tasks, they were presented with one simple question, intended only to familiarize them with the submission server and its conventions. The familiarization explained submitting the answers, displaying hints, and unlocking tasks in a chain. Once they solved it, they were allowed to start solving the homework tasks. In the beginning, they could choose the first task from a chain of six tasks (A1--A4, S1, and S2), or either of the two other tasks (T1 or T2), as depicted in~\Cref{fig:chal_dep}. The chain defined the order of the six tasks, each unlocked once a student had solved the previous task in the chain. Each task was worth 1 point. 
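The value constraints described above (a port range excluding 2323, a password range excluding easily guessed numbers, and a dictionary username) can be sketched as follows. The seed would come from the hashed student identifier; the username list here is a tiny illustrative stand-in for a real wordlist, not the dictionary actually used.

```python
import random

# Illustrative stand-in for a real authentication-attack wordlist.
COMMON_USERNAMES = ["admin", "operator", "service", "support", "test"]

def personalize(seed: int) -> dict:
    """Generate the personalized values for one student's environment."""
    rng = random.Random(seed)
    # Telnet port: 1500-65000, excluding the well-known alternative 2323.
    port = rng.choice([p for p in range(1500, 65001) if p != 2323])
    # Password: four-digit number 1300-2000, excluding guessable values.
    password = rng.choice([n for n in range(1300, 2001) if n not in (1234, 1337)])
    username = rng.choice(COMMON_USERNAMES)
    return {"port": port, "password": password, "username": username}

env = personalize(seed=42)
assert env["port"] != 2323 and 1500 <= env["port"] <= 65000
assert env["password"] not in (1234, 1337) and 1300 <= env["password"] <= 2000
```

Building the candidate list up front and sampling from it guarantees the exclusions hold by construction, which is simpler to audit than rejection sampling.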
\begin{figure}[t] \centering \begin{tikzpicture}[ ->, >=stealth', shorten >=1pt, auto, node distance=1.3cm, thick, scale=0.8, every node/.style={scale=0.7} ] \node[state,color=black,initial] (F) {F}; \node[state,color=ACMBlue] (A1) [above right of=F] {A1}; \node[state,color=gray,style=dashed] (A2) [right of=A1] {A2}; \node[state,color=gray,style=dashed] (A3) [right of=A2] {A3}; \node[state,color=gray,style=dashed] (A4) [right of=A3] {A4}; \node[state,color=gray,style=dashed] (A5) [right of=A4] {S1}; \node[state,color=gray,style=dashed] (A6) [right of=A5] {S2}; \node[state,color=ACMBlue] (T1) [right of=F] {T1}; \node[state,color=ACMBlue] (T2) [below right of=F] {T2}; \path (F) edge node {} (A1) (A1) edge node {} (A2) (A2) edge node {} (A3) (A3) edge node {} (A4) (A4) edge node {} (A5) (A5) edge node {} (A6) (F) edge node {} (T1) (F) edge node {} (T2) ; \end{tikzpicture} \caption{Blue tasks are displayed after a student answers the familiarization question (F). Locked (hidden) tasks are gray.} \label{fig:chal_dep} \end{figure} \subsubsection{Homework rules} \label{sec:rules} Students were instructed that sharing correct answers is strictly prohibited and useless because each of them had received the personalized lab environment. Next, students had five attempts to submit a correct answer to each task. After that, they were not allowed to submit another answer or proceed to the next task in a chain. Finally, when the homework was over, we randomly selected four students who were required to demonstrate their approach to the instructor at a dedicated one-to-one video call. All these rules were announced when the homework was assigned. \subsection{Cheating Detection} To reach the goal of this study, we also detect suspicious students' submissions, which may indicate cheating. Some of the methods were piloted in our previous research~\cite{vykopal2020-ctf}. 
\subsubsection{Someone else's answers} The most reliable detection method is tracking incorrect submissions of correct answers belonging to other students. This method assumes that some students shared their correct answers with other students who unthinkingly submitted someone else's answers. \subsubsection{Task chains} Another method benefits from locked tasks in a chain. Since a task in the chain is unlocked only after the previous task is successfully solved, this method computes the solve time for consecutive tasks. Then, the student's solve time is compared to the \emph{minimal possible solve time} of a human who immediately performed all actions required to solve the tasks, without any mistakes or time spent thinking about the steps. Any student's solve time close to or lower than the minimal possible solve time may indicate that the student obtained step-by-step instructions from another student. \subsubsection{Submission proximity} The least conclusive method lies in searching for \emph{time proximity} or \emph{location proximity} of two or more submissions. Any of these proximities may indicate that students were working together and submitted the answers (correct or incorrect) at the same time or place. We consider submissions to be in location proximity if they originated from the same IP address, which multiple hosts might share in some networks~\cite{Maier2011}. \subsection{Data Collection and Survey} To use the methods for cheating detection, we logged students' correct and incorrect answers submitted to the submission server, together with timestamps and IP addresses. In addition, we surveyed students about their opinions on the format of the homework assignment. While the assignment was an inherent part of the course, the survey after the assignment was optional. All students who participated in the survey provided their informed consent to use the collected data for research purposes. 
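Two of the detection methods described above (someone else's answers, time proximity) can be sketched over the logged submission data. The record layout and function names are our own illustration, not the toolset's API; the example values echo Case 2 reported later.

```python
from datetime import datetime, timedelta

# Each submission record: (student, task, answer, correct?, timestamp, ip).

def others_answers(subs, correct_answers):
    """Flag incorrect submissions that equal another student's correct answer."""
    flags = []
    for student, task, answer, ok, ts, ip in subs:
        if not ok:
            owners = [s for s, a in correct_answers[task].items()
                      if a == answer and s != student]
            if owners:
                flags.append((student, task, owners))
    return flags

def time_proximity(subs, task, window=timedelta(minutes=2)):
    """Flag pairs of students who submitted the same task within a short window."""
    per_task = sorted((ts, s) for s, t, _, _, ts, _ in subs if t == task)
    return [(a, b) for (t1, a), (t2, b) in zip(per_task, per_task[1:])
            if t2 - t1 <= window and a != b]

t0 = datetime(2021, 5, 11, 16, 16, 55)
subs = [
    ("C", "A1", "16278", False, t0, "1.2.3.4"),
    ("D", "A1", "16278", True, t0 + timedelta(seconds=90), "5.6.7.8"),
]
assert others_answers(subs, {"A1": {"C": "26569", "D": "16278"}}) == [("C", "A1", ["D"])]
assert time_proximity(subs, "A1") == [("C", "D")]
```

Both checks run offline on the submission log, matching the privacy-preserving design: no data beyond answers, timestamps, and IP addresses is needed.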
\Cref{tab:postGameAssessmentQuestions} lists the questions asked in the survey. \begin{table}[h] \centering \caption{Wording of the survey questions.} \label{tab:postGameAssessmentQuestions} \small \begin{tabular}{p{0.3cm}|p{7.3cm}} \toprule \textbf{No.} & \textbf{Question} \\ \midrule Q1 & Would you rather complete an assignment of this format (lab environment~+~submission server) or traditional homework assignments (written report) in your future security courses? \\ \midrule Q2 & How useful was the instant feedback on your submissions (the server's response whether your answer is correct or not)? \\ \midrule Q3 & How stressful was the limited number of five attempts per task? \\ \midrule Q4 & How stressful was the possibility that you could be randomly selected to demonstrate your solution approach to the instructor? \\ \midrule Q5 & Have you experienced any technical issues with your personalized lab environment? \\ \midrule Q6 & Do you have any comments or thoughts related to the previous questions or any other feedback? \\ \bottomrule \end{tabular} \end{table} \section{Results and Discussion} \label{sec:results_discussion} Here we report the study results and summarize our experience with APG and preventing cheating in lab assignments. Out of 207 students enrolled, 195 students logged into the submission server. In total, 178 students solved at least one task. All eight tasks were solved by 160 students. \subsection{Suspicious Submissions} We analyzed the students' answers to identify suspicious submissions, which may indicate cheating. \subsubsection{Someone else's answers} We discovered three cases. \paragraph{Case 1} The most conclusive was the following case. Student A submitted the correct answer \texttt{41247} for A1 on May 9th, 22:39. Student B submitted the incorrect flag \texttt{41247} twice, several days later: first on May 13, 23:23, and then on May 14, 11:21. 
However, Student B generated his personalized lab environment for the first time on May 14, 11:00. That means the first incorrect submission of Student B occurred before his first interaction with the lab environment. When questioned, the students replied that Student B had used the laptop of Student A, with the already running personalized lab environment of Student A, due to technical issues with Student B's laptop. To avoid similar situations, instructors should communicate to students what constitutes cheating, together with concrete examples of violations of the homework rules. \paragraph{Case 2} A1 involved using the \texttt{nmap} tool to discover a network service running on a personalized network port. Student C submitted an incorrect answer \texttt{16278} on May 11, 16:16:55. Student D submitted his correct answer \texttt{16278} only 90 seconds later. Then, Student C submitted his correct answer \texttt{26569} 4 minutes after Student D. We asked Student C why he had first submitted the incorrect answer \texttt{16278}. He replied he \enquote{typed a random number}. A more likely explanation is that Student D solved the task first and shared his correct answer with Student C, who submitted it even before Student D did. The submission server replied that the answer was incorrect for Student C. He then asked Student D for the command to run in his environment and later submitted the correct answer. The network service discovery lasted only a few seconds, so Student C could have completed the task in 4 minutes. \paragraph{Case 3} Student E submitted the correct answer \texttt{asd} for A4 on May 8, 11:39. Student F submitted an incorrect answer \texttt{asd} for the familiarization question six days later. Since one of the tasks involved was the familiarization question and \texttt{asd} is a common testing string composed of three neighboring letters on a keyboard, we do not consider this case cheating. 
\subsubsection{Task chains} We discovered two cases where students submitted their answers incredibly quickly. \paragraph{Case 4} The minimal possible solve time of A3 was 45 seconds. The assignment text consists of 102 words. The task requires capturing the network traffic using the \texttt{tshark} tool to intercept the secret content of the file. The tool has to be configured to analyze packets at a non-standard network port discovered in A1. Three students G, H, and I completed the task in 58 seconds. These solve times contrast with their other tasks, which they completed in near-median time compared to the other students. We consider their submissions suspicious since reading the assignment text, thinking, and performing all the actions take considerably longer than the minimal possible solve time. However, the students may not have followed the assignment text (i.e., may not have intercepted the network traffic) but simply printed out the file content, as another student confessed to doing during one demonstration session (see \Cref{sec:results_demonstration}). \paragraph{Case 5} The minimal possible solve time of S2 was 33 seconds. The assignment text of S2 consists of 69 words. The task requires disabling public key authentication at an SSH server. This involves setting the right configuration option and restarting the server. Student J completed the task in 46 seconds. Since the task involved multiple actions in the lab environment, it is unlikely that he accomplished them in such a short time. Since the answer to this task was a short text string that was the same for all students, the student may have obtained it from another student or found the answer in the documentation without applying it in the environment. One option for personalizing this task would be to insert a commented string into the configuration file on a random line and ask for the line number that must be uncommented. 
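The task-chain check illustrated by Cases 4 and 5 amounts to comparing each solve time against the minimal possible solve time. A minimal sketch, where the slack factor of 1.3 is an illustrative threshold we chose, not a value from the study:

```python
def flag_fast_solves(solve_times, minimal_time, slack=1.3):
    """Flag students whose solve time for a chained task is at or below
    `slack` times the minimal possible solve time (all times in seconds)."""
    return [s for s, t in solve_times.items() if t <= slack * minimal_time]

# Case 4 numbers from above: the minimal possible solve time of A3 was 45 s,
# and students G, H, and I completed it in 58 s.
times = {"G": 58, "H": 58, "I": 58, "typical": 240}
assert flag_fast_solves(times, minimal_time=45) == ["G", "H", "I"]
```

In practice the threshold should be calibrated per task, e.g., from the distribution of all students' solve times, since tasks differ in reading and typing effort.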
\subsubsection{Submission proximity} We found one confirmed case of student collaboration using location proximity and one less conclusive case using time proximity. \paragraph{Case 6} Four groups of students used the same IP address. Students from three groups submitted their correct answers at different times. However, two students K and L in one group submitted their answers to T2 within 68 seconds. Student K confessed he had cooperated with Student L. He told us they share the same dormitory room and shared only the steps for T2, not the answer. \paragraph{Case 7} Given that the homework was open for 14 days and the students worked individually, it is improbable that two students submitted the correct answer at almost the same time. We found that students M and N submitted their answers for A3 within 13 minutes, for A4 within 2 minutes, for S1 within 13 minutes, and for S2 within 4 minutes (all at midnight of May 16). They submitted the other tasks within one hour of each other. Still, it might be a coincidence since they worked on the last day before the deadline, so there was a higher chance of submissions in close time proximity. \subsection{Findings from Demonstration Sessions} \label{sec:results_demonstration} One of the demonstration sessions after the homework deadline revealed that one student did not complete A3 as required. He simply printed out the content of a file at the server instead of capturing the network traffic using the \texttt{tshark} tool. The point he had earned for submitting the correct answer for this particular task was deducted. The other three sessions did not reveal any issues. \subsection{Results from the Post-Homework Survey} The optional survey after the assignment was answered by 45 students. Forty students (89\%) reported they would prefer the provided format of completing assignments in security courses. Only one student would prefer the traditional homework assignment, and the remaining four were not sure. \begin{figure}[h!] 
\centering \small \scalebox{0.95}{ \begin{tikzpicture} \begin{axis} [ xtick={1,2,3,4}, xticklabel style={align=center}, xticklabels={Q2\\Useful\\feedback, Q3\\Stressful\\limit, Q4\\Stressful\\demonstration, Q5\\Technical\\issues}, ytick={1,2,3,4,5}, yticklabels={Not at all, Slightly, Moderately, Much, Very much}, width=1\linewidth, height=0.5\linewidth ] \addplot [box plot median, opacity=0.5, color=orange] table {boxplotdatasurvey.dat}; \addplot [box plot box] table {boxplotdatasurvey.dat}; \addplot [box plot top whisker] table {boxplotdatasurvey.dat}; \addplot [box plot bottom whisker] table {boxplotdatasurvey.dat}; \end{axis} \end{tikzpicture} } \caption{Answers to Q2--Q5 in the survey from 45 students.} \label{fig:answers-Q2-Q5} \end{figure} The answers to questions Q2--Q5 are summarized in \Cref{fig:answers-Q2-Q5}. The students highlighted that the immediate feedback on the correctness of their answers was very useful (Q2). Next, most students considered the limited number of attempts to answer the tasks to be only slightly stressful or not stressful at all (Q3). Answers to Q4 show the students were somewhat more stressed by the possibility of being chosen for the solution demonstration. This may indicate that (i) students attempted cheating, or (ii) students were not sure whether the instructor would approve of their approach (not the answer). Answers to Q5 show that the APG toolset worked well. Open-ended Q6 yielded diverse answers. Nine students elaborated on their answer to Q1, stating that they enjoyed the homework format. Three students rated this homework as one of the best assignments in the course. Four students commented on the difficulty of particular tasks and sometimes suggested improvements. Two students mentioned not having sufficient system resources for a smooth run of the lab environment. Two other students reported other technical issues with running the environment on their own hosts. Two students elaborated on their answers to Q3 about the submission limit. 
One student \enquote{wasn't really worried about limited attempts because it was clear what we are supposed to submit}. Another student reported: \enquote{At first, I thought those 5 attempts will be very limiting, but after solving it, I realized, it wasn't limiting at all}. \subsection{Limitations} Our study involved a single exercise in one course. Still, the number of participants who interacted with our tool is considerably larger than in the vast majority of works reported in \Cref{sec:related}. The cheating detection methods analyze only students' actions at the submission server. We do not capture any other data, such as commands typed in the lab environment. We can confront students with our findings, but we cannot be entirely sure whether students actually cheated or not (see Case 1). Estimating the location proximity using the same IP address of the submission is a double-edged sword. While our study shows that one IP address may be shared by roommates, in other cases it might be shared by students who work independently. Since the personalized answers were generated at students' computers, advanced students may reverse-engineer the environment generator and obtain the answers even without interaction with the personalized lab environment. However, we agree with~\cite{Chothia2015} that doing so would be more difficult than completing the assignment. The optional survey was answered by 45 students out of 195 who started solving the homework assignment. The answers may not entirely represent the opinions of all students solving the homework, particularly the critical voices. Nevertheless, we have not received any negative feedback from other formal or informal channels. \section{Conclusions and Future Work} \label{sec:conclusion} We presented an open-source toolset for creating and marking personalized hands-on assignments involving virtual machines and networks. 
The toolset was used for preventing and detecting cheating in an individual homework assignment in an introductory security course enrolled by 207 students. Each student received tasks with the same assignment text but different answers to be found by interacting with the lab environment. We discovered seven suspicious cases using three different cheating detection methods. At the same time, students enjoyed the assignment and its format and did not perceive the cheating prevention as disruptive. Our approach is lightweight and privacy-preserving. Students were not under surveillance when solving their homework. Still, we discovered suspicious submissions from only minimal collected data (submitted answers, timestamps, and IP addresses). Logging additional student actions would increase the precision of cheating detection. We plan to integrate a command-line logging toolset~\cite{my-2021-FIE-logging} into the personalized lab environment. The ability to analyze the students' commands would allow us to better reconstruct the problem-solving process and timeline. To conclude, we showed that prevention and detection of cheating in hands-on assignments involving the lab environment are possible in large and remote classes. The key components are: automated provisioning of the lab environment with personalized values generated locally on students' computers, task chains, submission limits, and a demonstration session. We provide the toolset and an exemplary assignment in a public repository~\cite{apg-toolset}. Since all used components are free and open-source, other instructors can immediately use them in their classes as we did, or adapt only the components that fit their needs. Our future work will focus on automatic problem generation preserving tasks' difficulty and the fidelity of the lab environment for teaching networking, operating systems, and cybersecurity skills. 
An example of a challenging project is the automatic generation of a network with several hosts, each with a random yet valid IP address, able to communicate with other hosts in the network. \begin{acks} This research was supported by \grantsponsor{ERDF}{ERDF}{} project \textit{CyberSecurity, CyberCrime and Critical Information Infrastructures Center of Excellence} (No. \grantnum{ERDF}{CZ.02.1.01/0.0/0.0/16\_019/0000822}). We also thank Daniel Košč for developing the toolset. \end{acks} \balance \bibliographystyle{ACM-Reference-Format}
\section{Sample-Wise Jacobian Regularization}\label{sec:method1} Consider a supervised $K$-class classification problem in the multimodal context. Suppose that features from two distinct modalities $A$ (e.g., audio) and $B$ (e.g., video) are provided in the form of $\mathcal{D}_A=\{(\mathbf{x}^i_A,{y}^{i})\}_{i=1}^N$ and $\mathcal{D}_B=\{(\mathbf{x}^i_B,{y}^{i})\}_{i=1}^N$, where $\{\mathbf{x}_A,\mathbf{x}_B\}$ represents input features and ${y}\in\{0,1,\cdots,K-1\}$ represents the true label. We train two unimodal networks separately, each corresponding to one modality. Given a specific input feature $\mathbf{x}_A$, the first unimodal network calculates the class prediction $\mathbf{p}_A\in\mathbb{R}^K$ by: \begin{equation}\label{eq:expression_final_layer} \mathbf{p}_A=\bm\sigma_1(\mathbf{z}_A)=\bm\sigma_1(\mathbf{W}_A\mathbf{h}_A+\mathbf{b}_A)=\bm\sigma_1(\mathbf{W}_A\mathbf{f}_A(\mathbf{x}_A)+\mathbf{b}_A) \end{equation} where $\bm\sigma_1(\cdot)$ represents the Softmax function, $\mathbf{z}_A\in\mathbb{R}^K$ represents the raw logit, and $\mathbf{h}_A=\mathbf{f}_A(\mathbf{x}_A)\in\mathbb{R}^{H}$ represents the feature being fed into the last layer \footnote{(\ref{eq:expression_final_layer}) holds because the last layer is usually implemented as a fully connected (FC) layer with Softmax activation in classification tasks.}. Here $\mathbf{W}_A\in\mathbb{R}^{K\times H}$ and $\mathbf{b}_A\in\mathbb{R}^K$ are the learnable weight and bias of the last linear layer, respectively. Similarly, the second unimodal network provides the class prediction $\mathbf{p}_B=\bm\sigma_1(\mathbf{z}_B)=\bm\sigma_1(\mathbf{W}_B\mathbf{h}_B+\mathbf{b}_B)=\bm\sigma_1(\mathbf{W}_B\mathbf{f}_B(\mathbf{x}_B)+\mathbf{b}_B)$. 
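As a numerical illustration of (\ref{eq:expression_final_layer}), the following minimal numpy sketch computes a unimodal prediction; the dimensions and random weights are made up for the example.

```python
import numpy as np

def softmax(z):
    """Softmax activation; the max is subtracted for numerical stability."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
K, H = 4, 8                        # illustrative class count and feature width
W_A = rng.standard_normal((K, H))  # last-layer weight
b_A = rng.standard_normal(K)       # last-layer bias
h_A = rng.standard_normal(H)       # h_A = f_A(x_A), the feature fed to the FC layer

p_A = softmax(W_A @ h_A + b_A)     # Eq. (1): p_A = sigma_1(W_A h_A + b_A)
assert np.isclose(p_A.sum(), 1.0) and (p_A > 0).all()
```

The second unimodal network is identical in structure, producing `p_B` from `W_B`, `b_B`, and `h_B`.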
Based on the conditional independence assumption \cite{stat_fusion1}, the basic statistical late-fusion method generates the final class prediction as \cite{stat_fusion2}: \begin{equation}\label{eq:expression_stat_fusion} \mathbf{p}=\bm\sigma_2(\frac{\mathbf{p}_A\odot\mathbf{p}_B}{\mathbf{freq}})=\bm\sigma_2(\frac{\bm\sigma_1(\mathbf{z}_A)\odot\bm\sigma_1(\mathbf{z}_B)}{\mathbf{freq}}) \end{equation} where the division is performed in an element-wise manner, $\odot$ represents the element-wise product, and $\bm\sigma_2(\cdot)$ represents a linear normalization enforcing that the elements sum to one. Here $\mathbf{freq}\in\mathbb{R}^K$ contains the occurring frequency of each class, calculated from the training dataset. Our proposed approach builds upon (\ref{eq:expression_stat_fusion}). Specifically, we consider adding two $K\times K$ weight matrices $\{\mathbf{W}_a,\,\mathbf{W}_b\}$ ahead of $\{\mathbf{z}_A,\,\mathbf{z}_B\}$, respectively, right before they are activated. Consequently, the final multimodal prediction is re-calibrated as: \begin{equation}\label{eq:expression_p_prime} \mathbf{p}^\prime=\bm\sigma_2(\frac{\bm\sigma_1(\mathbf{z}_A^\prime)\odot\bm\sigma_1(\mathbf{z}_B^\prime)}{\mathbf{freq}})=\bm\sigma_2(\frac{\bm\sigma_1(\mathbf{W}_a\mathbf{z}_A)\odot\bm\sigma_1(\mathbf{W}_b\mathbf{z}_B)}{\mathbf{freq}}) \end{equation} Suppose that the data provided from modality $A$ are perturbed at the input or feature level, while the data from modality $B$ are clean. In this case, we simply set $\mathbf{W}_b$ to the identity matrix, meaning that we do not invoke the robust add-on for the second modality. 
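The statistical late fusion of (\ref{eq:expression_stat_fusion}) is straightforward to compute; a minimal numpy sketch with made-up logits and class frequencies:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def late_fuse(z_A, z_B, freq):
    """Eq. (2): element-wise product of the unimodal posteriors divided by
    the class frequencies, then linearly renormalized to sum to one."""
    p = softmax(z_A) * softmax(z_B) / freq
    return p / p.sum()

z_A = np.array([2.0, 0.5, -1.0])   # unimodal logits (illustrative)
z_B = np.array([1.5, 1.0, 0.0])
freq = np.array([0.5, 0.3, 0.2])   # class frequencies from the training set
p = late_fuse(z_A, z_B, freq)
assert np.isclose(p.sum(), 1.0)
```

The division by `freq` compensates for the class prior appearing twice in the product of the two posteriors, per the conditional independence assumption.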
To determine the value of $\mathbf{W}_a$, we first calculate the derivative of $\mathbf{p}^\prime$ with respect to $\mathbf{h}_A$ (See Supplementary): \begin{equation}\label{eq:derivative} \frac{\partial \mathbf{p}^\prime}{\partial \mathbf{h}_A}= \mathbf{J}^\prime\mathbf{W}_a\mathbf{W}_A=[\mathbf{p}^\prime\mathbf{p}^{\prime,T}-\text{Diag}(\mathbf{p}^\prime)]\mathbf{W}_a\mathbf{W}_A \end{equation} Then we minimize the following regularized Jacobian loss with respect to $\mathbf{W}_a$: \begin{equation}\label{eq:expression_loss} \min_{\mathbf{W}_a}\ L =\min_{\mathbf{W}_a}\ (1-\gamma) ||\mathbf{J}^\prime\mathbf{W}_a\mathbf{W}_A||^2_F + \gamma ||\mathbf{W}_a-\mathbf{I}||_F^2 \end{equation} where $0<\gamma<1$ is a tunable hyper-parameter. Minimizing the first term limits the change of $\mathbf{p}^\prime$ when $\mathbf{h}_A$ is perturbed, while the second term guarantees numerical stability and ensures that the prediction in the perturbed case does not drift too far from that of the clean case. For a specific multimodal input $\{\mathbf{x}_A,\mathbf{x}_B\}$, once $\mathbf{W}_a$ is determined, so are $\mathbf{p}^\prime$ and $\mathbf{J}^\prime$ via (\ref{eq:expression_p_prime}) and (\ref{eq:derivative}), respectively. Thus, (\ref{eq:expression_loss}) is well-determined and non-linear with respect to $\mathbf{W}_a$. \begin{algorithm} \caption{Iteratively solving the regularized Jacobian loss} \label{alg:solve_loss} \begin{algorithmic}[1] \STATE {Given one specific input $\{\mathbf{x}_A,\mathbf{x}_B\}$. 
Initialize iteration index $t=0$.} \STATE {Perform one forward pass, yielding the initial class prediction $\mathbf{p}^{(0)}$.} \WHILE{$t<t_{\text{max}}$} \STATE {Calculate $\mathbf{J}^{(t)}=\mathbf{p}^{(t)}\mathbf{p}^{(t),T}-\text{Diag}(\mathbf{p}^{(t)})$.} \STATE {Minimize $L^{(t)} = (1-\gamma) ||\mathbf{J}^{(t)}\mathbf{W}_a\mathbf{W}_A||^2_F + \gamma ||\mathbf{W}_a-\mathbf{I}||_F^2$ with respect to $\mathbf{W}_a$.} \STATE {Calculate $\mathbf{p}^{(t+1)}$ based on Eq (\ref{eq:expression_p_prime}) with the optimal $\mathbf{W}_a$.} \STATE {Update $t=t+1$.} \ENDWHILE \STATE {Return $\mathbf{p}^\prime=\mathbf{p}^{(t)}$.} \end{algorithmic} \end{algorithm} We propose a heuristic iterative method, summarized in Algorithm \ref{alg:solve_loss}, that makes the above optimization problem tractable. The key is to decouple $\mathbf{J}^\prime$ from $\mathbf{W}_a$. Namely, in step 5 of Algorithm \ref{alg:solve_loss}, all terms are known except $\mathbf{W}_a$, and thus the relaxed loss is convex. Writing out all terms of $\partial L^{(t)}/\partial \mathbf{W}_a=\mathbf{0}$, we observe that the stationarity condition is a Sylvester equation \cite{solve_sylvester1}. It has a unique solution, and the run time is comparable to inverting a $K\times K$ matrix and hence affordable. See Supplementary for details. \vspace{6pt} \noindent\textbf{Remarks}\quad First, in our implementation we find that one iteration already yields a sufficiently accurate result, thus all our numerical results are reported with $t_{\text{max}}=1$ (i.e., $\mathbf{p}^\prime=\mathbf{p}^{(1)}$). Second, if we know data from modality $B$ are also perturbed, we could solve $\mathbf{W}_b$ in a similar manner. Third, notice that our approach is invoked merely during inference (i.e., test time), but not during training. Furthermore, we demonstrate our approach in the context of two modalities, though it can equally be applied to more than two. 
Finally, under a mild assumption, we can further prove that when the input is perturbed, the change of the final prediction produced by our method is bounded. This is summarized in Theorem \ref{theorem:change_output} (see Supplementary for details). Some immediate corollaries include: (i) when $\bm{\epsilon}\sim N(\mathbf{0},\mathbf{\Sigma})$, the bound is simplified to ${E}[||\mathbf{p}^{\prime,noise}-\mathbf{p}^\prime||]\leq l\,({\frac{\gamma K}{2(1-\gamma)}})^{1/2}\,\text{Tr}[\mathbf{\Sigma}]$, (ii) when the $L_2$ norm of $\bm{\epsilon}$ is constrained to be smaller than $\delta$ (as usually assumed in adversarial attacks), the bound is simplified to $||\mathbf{p}^{\prime,noise}-\mathbf{p}^\prime||\leq l\,\delta\,({\frac{\gamma K}{2(1-\gamma)}})^{1/2}$. \begin{theorem}\label{theorem:change_output} If $\mathbf{f}_A$ is $l$-Lipschitz continuous and $\mathbf{x}_A$ is perturbed by $\bm{\epsilon}$: $\mathbf{x}_A^{\text{noise}}=\mathbf{x}_A+\bm{\epsilon}$, then the change of our final prediction in Euclidean norm (i.e., $||\mathbf{p}^{\prime,noise}-\mathbf{p}^\prime||$) is at most $l\sqrt{\frac{\gamma K}{2(1-\gamma)}}||\bm\epsilon||$. \end{theorem} \section{Necessity of the Extra Modality: Biasing Effect}\label{sec:method2} In this section, we explain the principles behind our approach. Let us move one step back and consider the unimodal case. In what follows, we will demonstrate that our approach might not work well in the unimodal context, which in turn justifies the necessity of the extra modality. At first glance, our approach seems to be equally applicable in the unimodal context. 
Namely, suppose that only one unimodal network $\mathbf{p}=\bm\sigma_1(\mathbf{z})=\bm\sigma_1(\mathbf{W}\mathbf{h}+\mathbf{b})$ is available; at test time, for each specific input $\mathbf{x}$, we add a weight matrix $\mathbf{W}_x$ before the raw logit $\mathbf{z}$ is fed into the Softmax activation, re-calibrating the final unimodal prediction as: $\mathbf{p}^\prime=\bm\sigma_1(\mathbf{z^\prime})=\bm\sigma_1(\mathbf{W}_x\mathbf{z})$. Here $\mathbf{W}_x$ can be solved by analogy to (\ref{eq:expression_loss}). However, we observe that the introduction of $\mathbf{W}_x$ usually does not change the final prediction. For an intuitive understanding, we consider the TwoMoon example used by \cite{xue2021multimodal}. In this example, all data points are scattered in a 2D space. The data located at the upper and lower leaf have true labels $0$ and $1$, and are colored red and blue, respectively. \begin{figure}[!htp] \centering \includegraphics[width=1.0\linewidth]{figures/twomoon_um_final_version.pdf} \vspace{-18pt} \caption{({Unimodal case}) The leftmost figure reveals the results of training and testing on clean data. The heatmap in the background represents the value of $||\mathbf{J}\mathbf{W}||_F$. The remaining three figures show the results of testing on data with Gaussian noise, and similarly, the heatmap in the background represents the value of $||\mathbf{J}\mathbf{W}_x\mathbf{W}||_F$. For each figure, the test accuracies on clean and noisy data are reported at the top left and bottom right.} \label{fig:twoomoon_um} \end{figure} In the unimodal case, we take both horizontal and vertical coordinates as input and train a neural network with three FC layers. As shown in Figure \ref{fig:twoomoon_um} (a), the network fits the clean data perfectly and achieves 97.75\% accuracy on the clean test data. In the remaining three figures, we evaluate the trained network on noisy test data. 
Specifically, we deliberately choose $\gamma=1.0$ in Figure \ref{fig:twoomoon_um} (b), so that our approach is actually not invoked (since the solved $\mathbf{W}_x$ equals $\mathbf{I}$). In this case, the accuracy drops to 97.00\% on noisy data, while the heatmap in the background does not change at all compared to Figure \ref{fig:twoomoon_um} (a). In Figure \ref{fig:twoomoon_um} (c) and (d), we choose $\gamma=0.5$ and $\gamma=0.1$, respectively. We observe that even though the color of the heatmap indeed becomes lighter as expected, the test accuracies of both cases are still 97.00\%. More importantly, this phenomenon is no coincidence: in unimodal binary classification, the prediction does not change for any input after adding $\mathbf{W}_x$. See Supplementary for a rigorous mathematical proof. Moreover, in a $K$-class ($K>2$) classification problem, the final prediction might change if $\gamma$ is sufficiently small. For instance, given $\mathbf{z}=[1,0,2]^T$ and $\mathbf{W}=\mathbf{I}$, if we choose $\gamma=0.5$, then the final prediction will change from $\mathbf{p}=[0.245, 0.090, 0.665]^T$ to $\mathbf{p}^\prime=[0.270,0.096,0.635]^T$, where the last entry remains the largest. However, if we choose $\gamma=0.01$, then the final prediction will become $\mathbf{p}^\prime=[0.391, 0.219, 0.390]^T$, and now the first entry is the largest. See Supplementary for a theoretical bound of $\gamma$ in this high-dimensional case. Now we turn to the multimodal case. We treat the horizontal coordinates as one modality, and the vertical coordinates as the second modality. Two neural networks are trained, each corresponding to one modality. Then we fuse them based on the aforementioned statistical fusion method. 
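The three-class unimodal example above can be reproduced directly: with $\mathbf{W}=\mathbf{I}$ and $t_{\text{max}}=1$, the stationarity condition of the unimodal analogue of (\ref{eq:expression_loss}) gives the closed form $\mathbf{W}_x=\gamma((1-\gamma)\mathbf{J}^T\mathbf{J}+\gamma\mathbf{I})^{-1}$, where $\mathbf{J}$ is evaluated at the clean prediction. A minimal sketch (values rounded):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def unimodal_Wx(z, gamma):
    # With W = I, minimizing (1-g)||J W_x||_F^2 + g||W_x - I||_F^2 yields
    # the closed form W_x = g * ((1-g) J^T J + g I)^(-1).
    p = softmax(z)
    J = np.outer(p, p) - np.diag(p)   # Softmax Jacobian at p
    A = J.T @ J
    return gamma * np.linalg.inv((1 - gamma) * A + gamma * np.eye(len(z)))

z = np.array([1.0, 0.0, 2.0])
print(softmax(unimodal_Wx(z, 0.5) @ z))    # ~ [0.270, 0.096, 0.635]
print(softmax(unimodal_Wx(z, 0.01) @ z))   # ~ [0.391, 0.219, 0.390]
```

For $\gamma=0.5$ the last entry stays the largest, while for $\gamma=0.01$ the argmax flips to the first entry, matching the numbers quoted above.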
As shown in Figure \ref{fig:twoomoon_mm}, in the multimodal context, when our method is invoked ($\gamma=0.5$ or $0.1$), the color of the heatmap becomes lighter and the test accuracies on noisy data all increase compared to trivial statistical fusion (i.e., $\gamma=1.0$). On the other hand, the test accuracies on clean data remain almost unchanged (or slightly increase as $\gamma$ decreases). \begin{figure}[!htp] \centering \includegraphics[width=1.0\linewidth]{figures/twomoon_mm_final_version.pdf} \vspace{-18pt} \caption{({Multimodal case}) The leftmost column reveals the results of training and testing on clean data. For illustration purposes, we perturb only the Y-coordinates with Gaussian noise. The first and second rows correspond to small and large noise, respectively. Heatmaps reveal the values of $||\mathbf{J}\mathbf{W}||_F$ in (a) and $||\mathbf{J}\mathbf{W}_A\mathbf{W}||_F$ in (b), (c), and (d). For each figure, the test accuracies on clean and noisy data are reported at the top left and bottom right.} \label{fig:twoomoon_mm} \end{figure} \begin{lemma}\label{lemma:equi_stat_conc} Statistical fusion is equivalent to concatenating the raw logits: \begin{equation*} \bm\sigma_2(\frac{\mathbf{p}_A\odot\mathbf{p}_B}{\mathbf{freq}})=\bm\sigma_1(\mathbf{z}_A+\mathbf{z}_B-\ln\mathbf{freq})=\bm\sigma_1( \left[ \begin{array}{cc} \mathbf{I} & \mathbf{I}\\ \end{array}\right] \left[ \begin{array}{c} \mathbf{z}_A\\ \mathbf{z}_B \end{array}\right] -\ln\mathbf{freq}) \end{equation*} \end{lemma} \noindent\textbf{Biasing effect of the extra modality}\quad As we have seen in the TwoMoon example, our proposed method behaves differently in the unimodal and multimodal contexts. To better understand this subtle difference, we first summarize an equivalent form of statistical fusion in Lemma \ref{lemma:equi_stat_conc} (see Supplementary). 
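Lemma \ref{lemma:equi_stat_conc} holds because both sides are proportional to $\exp(\mathbf{z}_A+\mathbf{z}_B-\ln\mathbf{freq})$; a small self-contained numerical check (with arbitrary random inputs):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
K = 5
z_A, z_B = rng.normal(size=K), rng.normal(size=K)
freq = rng.dirichlet(np.ones(K))            # class frequencies from training data

lhs = softmax(z_A) * softmax(z_B) / freq    # statistical fusion ...
lhs /= lhs.sum()                            # ... followed by sigma_2
rhs = softmax(z_A + z_B - np.log(freq))     # concatenated-logit form

print(np.allclose(lhs, rhs))                # True
```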
Now assume that the sample-wise $\mathbf{W}_a$ is added; then the final multimodal prediction becomes $\bm\sigma_1(\mathbf{W}_a\mathbf{z}_A+\mathbf{z}_B-\ln\mathbf{freq})$. Alternatively, in the unimodal context, our method adds a sample-wise $\mathbf{W}_x$ and the final prediction becomes $\bm\sigma_1(\mathbf{W}_x\mathbf{z})$. Comparing these two expressions, we observe that the introduced $\mathbf{W}_a$ and $\mathbf{W}_x$ both occur ahead of the perturbed modality. In the unimodal case, the Softmax function $\bm\sigma_1$ applies directly to $\mathbf{W}_x\mathbf{z}$, while in the multimodal case, the extra modality information $\mathbf{z}_B$ acts as a bias term inside the Softmax. Since the entries of $\mathbf{z}_B$ generally take different values, this bias can affect the final prediction. \section{Introduction} Deep fusion models have recently drawn great attention from researchers in the context of multimodal learning \cite{vielzeuf2018centralnet,mm_survey,perez2019mfas,deep_mm_channel_exchange,xue2021multimodal}, as they provide an easy way to boost model accuracy and robustness. For instance, RGB cameras and LiDARs are usually deployed simultaneously on an autonomous vehicle, and the resulting RGB images and point clouds are referred to as two modalities. When RGB images are blurry at night, point clouds could provide complementary information and help to make decisions in vision tasks \cite{kim2019single}. Over the past few years, numerous multimodal fusion methods have been proposed at different levels: early-, middle-, and late-fusion \cite{stat_fusion2}. In early-fusion, input feature vectors from different modalities are concatenated and fed into one single deep neural network (DNN), while in middle-fusion, they are fed into separate DNNs and exchange information in feature space. Unlike the previous two cases, late-fusion is realized by merging distinct DNNs at their output layers via concatenation, element-wise summation, etc. 
These three levels of fusion possess different pros and cons. For instance, late-fusion, the primary concern of our paper, is (i) privacy-friendly and (ii) convenient to deploy. Specifically, assume that a hospital wants an AI agent to judge whether a patient has a certain disease \cite{sun2020tcgm}. It has to divide the complete training features (e.g., medical records, X-ray images) of every patient and deliver them to different AI companies; otherwise, the patients' identities would be exposed and their privacy unprotected. This, in turn, directly rules out the possibility of applying early- or middle-fusion methods. On the other hand, the hospital could still exploit the late-fusion technique to generate the ultimate AI agent after several unimodal DNNs are trained by the AI companies. Moreover, unlike early- or middle-fusion, many late-fusion methods can tolerate missing modality information (i.e., they have no need for paired training data) and are thus convenient to deploy. Although late-fusion is a mature topic in the literature, its performance under adversarial attacks \cite{madry2017towards_resist_attack,robust_at_odds} and random corruptions \cite{stab_train,kim2019single} is rather under-explored. In this paper, we address the problem of robust late-fusion by utilizing Jacobian regularization \cite{jacobian4,jacobian2,jacobian3,jacobian1} and the conditional independence assumption \cite{sun2020tcgm}. The key is to minimize the Frobenius norm of a Jacobian matrix so that the multimodal prediction is stabilized (see Figure \ref{fig:illsutration_of_method}). Our main contributions are as follows: \begin{itemize} \item To the best of our knowledge, we are the first to propose a training-free robust late-fusion method. The involved optimization problem is relaxed to a Sylvester equation \cite{solve_sylvester1}, and the solution is obtained with only a little computational overhead. 
\item We provide a theoretical error bound for our proposed robust late-fusion method and an illustrative explanation of the function of the extra modality via the TwoMoon example. \item Thorough numerical experiments demonstrate that our method outperforms other late-fusion methods and is capable of handling both adversarial attacks and random corruptions. \end{itemize} \begin{figure}[!htb] \centering \includegraphics[width=0.95\linewidth]{figures/Picture1.pdf} \caption{Illustration of the proposed robust late-fusion method. Before the unimodal raw logits $\mathbf{z}_A$ and $\mathbf{z}_B$ are fused, the Jacobian regularization technique is applied. Roughly speaking, it enforces the derivative of $\mathbf{p}$ with respect to $\mathbf{z}_A$ to become smaller. Thus, when $\mathbf{z}_A$ is perturbed to $\mathbf{z}_A^{\prime}$ (due to random corruption or adversarial attack), the change of the multimodal prediction $||\mathbf{p}^\prime-\mathbf{p}||$ is limited to some extent. For illustration purposes, all variables here (e.g., $\mathbf{p},\mathbf{z}_A$) are drawn in one-dimensional space.} \label{fig:illsutration_of_method} \end{figure} \section{Preliminary}\label{sec:background} \textbf{Network Robustness}\quad To verify network robustness, two major kinds of perturbations are used, which in our paper we refer to as (i) adversarial attacks such as the FGSM, PGD, or CW attacks \cite{goodfellow2014fsgm,madry2017towards_resist_attack,carlini2017cw} and (ii) random corruptions such as Gaussian noise, missing entries, or illumination change \cite{stab_train,kim2019single}. Correspondingly, many methods have been proposed to offset the negative effects of such perturbations. Adversarial training based on projected gradient descent \cite{madry2017towards_resist_attack} is one strong mechanism to defend against adversarial attacks, and the recent Free-m method \cite{shafahi2019freem} has been proposed as its fast variant. 
Besides adversarial training, several regularization techniques are also proven to have such capability, such as Mixup \cite{zhang2020mixup1} and Jacobian regularization \cite{jacobian2}. With regard to random corruptions, Mixup and Jacobian regularization are also effective \cite{zhang2017mixup2,jacobian3}. Another powerful approach is stability training \cite{stab_train}, which introduces an additional KL divergence term into the conventional classification loss so that the trained DNNs are stabilized. \vspace{6pt} \noindent\textbf{Multimodal Learning}\quad DNNs trained by fusing data from different modalities have outperformed their unimodal counterparts in various applications, such as object detection \cite{stat_fusion2,kim2019single}, semantic segmentation \cite{chen2020bi_semantic_seg2,feng2020deep_sementic_seg1}, and audio recognition \cite{gemmeke2017audio1,chen2020vggsound}. Based on where the information is exchanged between different modalities, multimodal fusion methods can be classified into three kinds: (i) early-fusion \cite{wagner2016multispectral,stat_fusion2}, (ii) middle-fusion \cite{kim2019single,deep_mm_channel_exchange}, and (iii) late-fusion \cite{stat_fusion2,liu2021one}. For instance, if the information is fused at the end of the DNNs (e.g., Figure \ref{fig:illsutration_of_method}), such a method belongs to the late-fusion category. Although vast efforts have been put into exploiting multimodal fusion methods to improve DNNs' accuracy on specific learning tasks, few works have explored network robustness in the multimodal context. Specifically, \cite{mees2016adaptive1}, \cite{valada2017adaptive2}, and \cite{kim2018robust} exploited gating networks to deal with random corruptions and adverse or changing environments. Afterwards, \cite{kim2019single} proposed a surrogate minimization scheme and a latent ensemble layer to handle single-source corruptions. 
However, all the aforementioned methods belong to middle-fusion, and only random corruptions are considered. In contrast, we focus on another important scenario, late-fusion, and besides corruptions, we further take adversarial attacks into account. To the best of our knowledge, robust late-fusion is unexplored in the previous literature. \section{Discussion and Future Work}\label{sec:conclusion} In this paper, we propose a training-free robust late-fusion method. Intuitively, our method designs a filter for each sample during inference (i.e., at test time). Such a filter is implemented by Jacobian regularization; after a sample goes through it, the change of the final multimodal prediction under an input perturbation is bounded. The error bound analysis and a series of experiments justify the efficacy of our method both theoretically and numerically. The TwoMoon example explains the biasing effect of the extra modality, which is rooted in the difference between the unimodal and multimodal settings. Our method opens up other directions for further exploration. In the unimodal context, we understand that directly applying our method usually does not change the prediction. Could it, however, be used to adjust the confidence of a DNN? Another possibility is that besides $\mathbf{W}_x$, we deliberately add a bias term $\mathbf{b}_x$ and optimize both for unimodal robustness. Alternatively, in the multimodal context, we might consider minimizing the derivative of the final prediction with respect to the input feature directly. Moreover, it would be instructive to compare this with consistency regularization. Last but not least, statistical fusion implicitly assumes that training and test data come from the same distribution, so that we can calculate $\mathbf{freq}$ on the training dataset and reuse it at inference time. When domain shift is present, this no longer holds, and a straightforward remedy might be to make $\mathbf{freq}$ a learnable parameter. 
\section{Experimental Results}\label{sec:experiment} \subsection{AV-MNIST} AV-MNIST \cite{perez2019mfas,vielzeuf2018centralnet} is a novel dataset created by pairing audio features with the original MNIST images. The first modality corresponds to MNIST images of size $28\times 28$, with $75\%$ of the energy removed by principal component analysis \cite{perez2019mfas}. The second modality is made up of spectrograms of size $112\times 112$. These spectrograms are extracted from audio samples obtained by merging pronounced digits with random natural noise. Following \cite{vielzeuf2018centralnet}, we use LeNet5 \cite{LeCun_handwritten} and a 6-layer CNN to process the image and audio modalities, respectively. To thoroughly verify the effectiveness of our method, we design various types of perturbations: (i) random corruptions including Gaussian noise ($\omega_0$), missing entries ($\omega_1$), and biased noise ($\omega_2,\omega_3$), and (ii) adversarial attacks including the FGSM ($\omega_4$) and PGD attacks ($\omega_5,\omega_6$). See Supplementary for detailed definitions. \vspace{6pt} \noindent\textbf{Baselines}\quad We limit our comparison to late-fusion methods. To verify our method, we consider several methods which could improve network robustness, including (i) regular training, (ii) adversarial training (advT) by \cite{madry2017towards_resist_attack}, (iii) free-m training (freT) by \cite{shafahi2019freem}, (iv) stability training (stabT) by \cite{stab_train}, and (v) Mixup (mixT) by \cite{zhang2017mixup2}. To the best of our knowledge, no previous robust late-fusion methods exist. Thus, for comparison purposes, we slightly modify the confidence-based weighted summation in \cite{xue_find_that_paper} and adapt it here, referred to as mean fusion with confidence-based weighted summation. Combinations of different fusion methods and robust add-ons provide us with a few baselines. We emphasize that in our experiments, the training data are always free of noise. 
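The exact perturbation definitions are given in the Supplementary; purely as an illustration (the functional forms and parameter interpretations below are our assumptions, not the paper's definitions), Gaussian noise and missing entries are commonly instantiated as:

```python
import numpy as np

def gaussian_noise(x, omega0, rng):
    # Additive zero-mean Gaussian noise with standard deviation omega0.
    return x + omega0 * rng.standard_normal(x.shape)

def missing_entries(x, omega1, rng):
    # Zero out a random subset of entries; omega1 controls severity
    # (interpreted here, as an assumption, as a drop percentage).
    mask = rng.random(x.shape) >= omega1 / 100.0
    return x * mask
```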
Following \cite{kim2019single}, our experiments are mainly conducted on the case where one modality is perturbed. Results on various other settings are reported in Supplementary. \begin{table}[!ht] \small \centering \caption{Accuracies (\%) of different models evaluated on AV-MNIST when audio features are perturbed. Mean accuracies are reported after repeatedly running 20 times. The value of $\gamma$ is selected from $\{0.1,0.5,0.9\}$. A green ({\color{green}{$\uparrow$}}) or red ({\color{red}{$\downarrow$}}) arrow indicates that, after applying our Jacobian regularization, the model accuracy increases or decreases compared to other models with the same unimodal backbones. The best accuracy in each column is bolded. `UM' and `MM' represent `unimodal' and `multimodal', respectively. `MM($0,\,i$)' represents a multimodal network obtained by fusing the unimodal networks indexed by `$0$' and `$i$'.} \vspace{2pt} \begin{tabular}{clcccccccc} \toprule UM / MM & ~~~~~Model & Clean & $\omega_0 = 1.0$ & $\omega_1 = 10$ & $\omega_4 = 0.03$ & $\omega_5 = 0.001$\\ \midrule & 0: Img-regT & $73.4$ & $73.4$ & $73.4$ & $73.4$ & $73.4$ \\ & 1: Aud-regT & $83.9$ & $55.1$ & $73.3$ & $69.9$ & $77.8$ \\ UM & 2: Aud-advT & $84.4$ & $59.2$ & $72.0$ & $81.9$ & $83.3$ \\ Nets & 3: Aud-freT & $82.1$ & $55.6$ & $71.9$ & $80.8$ & $81.6$ \\ & 4: Aud-staT & $86.2$ & $67.6$ & $74.4$ & $66.5$ & $74.5$ \\ & 5: Aud-mixT & $87.6$ & $61.3$ & $74.9$ & $74.9$ & $78.3$ \\ \midrule & Mean-w/o & $93.6$ & $80.3$ & $86.4$ & $89.8$ & $92.7$ \\ MM & Mean-w/ & $90.5$ & $73.7$ & $83.6$ & $82.0$ & $88.2$ \\ (0, 1) & Stat-w/o & $93.6$ & $80.4$ & $86.6$ & $89.9$ & $92.7$ \\ & Stat-w/ (ours) & $94.7$ ({\color{green}{$\uparrow$}}) & $84.5$ ({\color{green}{$\uparrow$}}) & $89.1$ ({\color{green}{$\uparrow$}}) & $92.2$ ({\color{green}{$\uparrow$}}) & $\bm{94.1}$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $91.8$ & $78.3$ & $82.7$ & $91.9$ & $92.5$ \\ MM & Mean-w/ & $86.0$ & $72.7$ & $77.5$ & $85.7$ & $86.9$ \\ (0, 2) & 
Stat-w/o & $91.7$ & $78.3$ & $83.5$ & $91.9$ & $92.4$ \\ & Stat-w/ (ours) & $93.4$ ({\color{green}{$\uparrow$}}) & $83.1$ ({\color{green}{$\uparrow$}}) & $86.4$ ({\color{green}{$\uparrow$}}) & $\bm{93.5}$ ({\color{green}{$\uparrow$}}) & $93.7$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $93.0$ & $81.1$ & $87.7$ & $93.2$ & $93.2$ \\ MM & Mean-w/ & $82.4$ & $74.4$ & $78.6$ & $83.7$ & $83.5$ \\ (0, 3) & Stat-w/o & $93.0$ & $80.9$ & $87.7$ & $93.2$ & $93.2$ \\ & Stat-w/ & $93.0$ ({\color{green}{$\uparrow$}}) & $82.3$ ({\color{green}{$\uparrow$}}) & $88.1$ ({\color{green}{$\uparrow$}}) & $92.7$ ({\color{red}{$\downarrow$}}) & $92.9$ ({\color{red}{$\downarrow$}}) \\ \hline & Mean-w/o & $93.0$ & $83.8$ & $84.9$ & $83.5$ & $88.8$ \\ MM & Mean-w/ & $90.9$ & $78.1$ & $81.8$ & $74.6$ & $81.3$ \\ (0, 4) & Stat-w/o & $93.1$ & $83.7$ & $85.3$ & $83.4$ & $88.8$ \\ & Stat-w/ (ours) & $94.7$ ({\color{green}{$\uparrow$}}) & $\bm{87.5}$ ({\color{green}{$\uparrow$}}) & $87.8$ ({\color{green}{$\uparrow$}}) & $87.9$ ({\color{green}{$\uparrow$}}) & $91.9$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $\bm{95.2}$ & $85.9$ & $89.7$ & $91.1$ & $92.5$ \\ MM & Mean-w/ & $94.0$ & $82.1$ & $88.6$ & $87.1$ & $88.9$ \\ (0, 5) & Stat-w/o & $95.1$ & $85.7$ & $\bm{90.1}$ & $91.1$ & $92.4$ \\ & Stat-w/ (ours) & $95.0$ ({\color{red}{$\downarrow$}}) & $86.1$ ({\color{green}{$\uparrow$}}) & $90.1$ ({\color{red}{$\downarrow$}}) & $91.0$ ({\color{red}{$\downarrow$}}) & $92.2$ ({\color{red}{$\downarrow$}}) \\ \bottomrule \end{tabular} \label{table:exp_avmnist_gauss_moda1} \end{table} \vspace{6pt} \noindent \textbf{Results}\quad Table \ref{table:exp_avmnist_gauss_moda1} reports the test accuracies of several models on AV-MNIST. Six unimodal networks are trained on clean data. Then we fuse them with different fusion methods and different robust add-ons. We only invoke our Jacobian regularization for the perturbed audio modality, while keeping the image modality unchanged. 
First, in the case of mean fusion, confidence-based weighted summation does not improve robustness. In the case of statistical fusion, we observe that our proposed Jacobian regularization method not only boosts the accuracy on noisy data but also on clean data. For instance, when our method is invoked on the combination of model-0 and model-1, the test accuracy on clean data increases from $93.6\%$ to $94.7\%$, and the test accuracy on data with Gaussian noise (e.g., $\omega_0=1.0$) increases from $80.4\%$ to $84.5\%$. Similar phenomena can be observed when our method is invoked on other combinations. Moreover, the number of green arrows is much larger than that of red arrows, implying that our method is compatible with different robust techniques and applicable under different noises/attacks. Comparing the last row of the second sub-section and the third row of the third sub-section, we find that even though the unimodal backbone model-1 is worse than model-2, the multimodal accuracies of $(0, 1)$ with our method invoked exceed those of $(0, 2)$ without it. Similar phenomena can usually be observed in other sub-sections. We argue that in this example, statistically fusing two regular unimodal baselines with our Jacobian regularization invoked surpasses robust unimodal baselines and their trivial fusion. Moreover, the highest accuracy in each column is bolded. It is clear that the multimodal network with our method invoked usually achieves the best performance among all models. The first row of Figure \ref{fig:ablation_diff_gamma} further plots model accuracy versus the magnitude of the noise/attack. It displays a consistent trend: our method works well regardless of the magnitude of the noise/attack. Furthermore, the larger the noise/attack is, the better our method performs. 
\begin{figure}[!thb] \centering \includegraphics[width=1.0\linewidth]{figures/Picture3.pdf} \caption{Accuracies of multimodal networks obtained by statistically fusing a vanilla image network and audio network with $\gamma=0.1$ (first row), $\gamma=0.5$ (second row), and $\gamma=0.9$ (third row). The mean and standard deviation of 20 repeated experiments are shown by the solid lines and the shaded regions, respectively. Note that the FGSM attack is deterministic; thus there is almost no variance when repeating experiments.} \label{fig:ablation_diff_gamma} \end{figure} We also plot model accuracy versus the magnitude of the noise/attack with different $\gamma$ values in the second and third rows of Figure \ref{fig:ablation_diff_gamma}. As $\gamma$ increases, the gap between the orange and blue lines becomes smaller. However, the orange lines consistently appear above the blue lines, indicating that our method works over a wide range of $\gamma$ values and that hyper-parameter tuning is relatively easy in our method. We emphasize that $\gamma=1.0$ is equivalent to trivial fusion. We next consider perturbations on the image modality, while the audio modality is assumed clean. Consequently, we invoke our Jacobian regularization method for the image modality, while keeping the audio modality unchanged. We have deliberately enlarged the magnitude of the noise/attack, and results are reported in Table \ref{table:exp_avmnist_gauss_moda2}. As shown in Table \ref{table:exp_avmnist_gauss_moda2}, confidence-weighted summation still does not outperform pure mean fusion in all cases. With regard to statistical fusion, the second column demonstrates that this time our Jacobian regularization might lead to an accuracy drop on clean data. The phenomenon that a robust network is less accurate on clean data can occur, as suggested by \cite{robust_at_odds}. However, we notice that such a phenomenon does not occur in Table \ref{table:exp_avmnist_gauss_moda1}. 
We believe this difference may be intrinsic to the modalities themselves, and we will explore it in future work. \begin{table}[!ht] \small \centering \caption{Accuracies (\%) of different models evaluated on AV-MNIST when image features are perturbed. Mean accuracies are reported after repeatedly running 20 times.} \vspace{2pt} \begin{tabular}{clcccccccc} \toprule UM / MM & ~~~~Model & Clean & $\omega_0 = 2.5$ & $\omega_{2,3} = 10,2$ & $\omega_4 = 0.07$ & $\omega_5 = 0.008$\\ \midrule & 0: Aud-regT & $83.9$ & $83.9$ & $83.9$ & $83.9$ & $83.9$ \\ & 1: Img-regT & $73.4$ & $24.4$ & $37.2$ & $15.6$ & $12.3$ \\ UM & 2: Img-advT & $73.1$ & $33.4$ & $40.5$ & $43.3$ & $34.1$ \\ Nets & 3: Img-freT & $65.1$ & $36.2$ & $40.7$ & $46.7$ & $42.6$ \\ & 4: Img-staT & $74.2$ & $29.5$ & $49.0$ & $19.4$ & $14.1$ \\ & 5: Img-mixT & $74.1$ & $30.4$ & $37.6$ & $37.3$ & $23.9$ \\ \midrule & Mean-w/o & $93.6$ & $71.9$ & $84.6$ & $83.2$ & $80.0$ \\ MM & Mean-w/ & $90.5$ & $78.1$ & $83.5$ & $81.6$ & $77.7$ \\ (0, 1) & Stat-w/o & $93.6$ & $71.7$ & $84.2$ & $83.1$ & $79.9$ \\ & Stat-w/ & $91.3$ ({\color{red}{$\downarrow$}}) & $75.2$ ({\color{red}{$\downarrow$}}) & $86.4$ ({\color{green}{$\uparrow$}}) & $85.4$ ({\color{green}{$\uparrow$}}) & $83.3$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $\bm{93.9}$ & $83.6$ & $85.4$ & $89.9$ & $88.4$ \\ MM & Mean-w/ & $91.5$ & $82.7$ & $82.0$ & $86.1$ & $83.2$ \\ (0, 2) & Stat-w/o & $\bm{93.9}$ & $83.5$ & $85.4$ & $\bm{89.9}$ & $88.4$ \\ & Stat-w/ (ours) & $90.8$ ({\color{red}{$\downarrow$}}) & $85.0$ ({\color{green}{$\uparrow$}}) & $86.9$ ({\color{green}{$\uparrow$}}) & $88.1$ ({\color{red}{$\downarrow$}}) & $87.6$ ({\color{red}{$\downarrow$}}) \\ \hline & Mean-w/o & $91.4$ & $87.1$ & $88.1$ & $88.8$ & $\bm{88.6}$ \\ MM & Mean-w/ & $88.3$ & $85.6$ & $85.7$ & $85.9$ & $85.5$ \\ (0, 3) & Stat-w/o & $91.4$ & $87.1$ & $88.2$ & $88.8$ & $\bm{88.6}$ \\ & Stat-w/ (ours) & $90.9$ ({\color{red}{$\downarrow$}}) & $\bm{87.0}$ 
({\color{red}{$\downarrow$}}) & $88.2$ ({\color{green}{$\uparrow$}}) & $88.6$ ({\color{red}{$\downarrow$}}) & $88.3$ ({\color{red}{$\downarrow$}}) \\ \hline & Mean-w/o & $93.7$ & $77.8$ & $89.0$ & $85.1$ & $82.0$ \\ MM & Mean-w/ & $91.1$ & $83.3$ & $86.8$ & $82.3$ & $78.0$ \\ (0, 4) & Stat-w/o & $93.7$ & $77.8$ & $89.1$ & $85.0$ & $81.8$ \\ & Stat-w/ (ours) & $91.3$ ({\color{red}{$\downarrow$}}) & $79.9$ ({\color{red}{$\downarrow$}}) & $\bm{89.3}$ ({\color{green}{$\uparrow$}}) & $86.4$ ({\color{green}{$\uparrow$}}) & $84.6$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $92.6$ & $85.7$ & $86.5$ & $87.2$ & $84.2$ \\ MM & Mean-w/ & $90.1$ & $84.5$ & $84.4$ & $84.3$ & $80.0$ \\ (0, 5) & Stat-w/o & $92.6$ & $85.7$ & $86.7$ & $87.2$ & $84.1$ \\ & Stat-w/ (ours) & $92.2$ ({\color{red}{$\downarrow$}}) & $85.7$ ({\color{green}{$\uparrow$}}) & $86.9$ ({\color{green}{$\uparrow$}}) & $87.1$ ({\color{red}{$\downarrow$}}) & $84.6$ ({\color{green}{$\uparrow$}}) \\ \bottomrule \end{tabular} \label{table:exp_avmnist_gauss_moda2} \end{table} \subsection{Emotion Recognition on RAVDESS} We consider emotion recognition on RAVDESS \cite{livingstone2018ravdess,xue2021multimodal}. We use a similar network structure for both the image and audio modalities: three convolution layers, two max-pooling layers, and three FC layers connected in sequence. We adopt baselines and experimental settings similar to those in AV-MNIST. Results are reported in Table \ref{table:exp_ravdess_moda1}. \vspace{6pt} \noindent\textbf{Results}\quad Again, in the case of mean fusion, confidence-based weighted mean fusion usually does not help except in a few cases, while our proposed Jacobian regularization exhibits good compatibility with differently trained models under various perturbations. The bold results imply that the multimodal network with our Jacobian regularization invoked outperforms all other models except under the FGSM attack ($\omega_4=0.03$). 
Due to page limits, extra results (e.g., perturbation on the image modality) are presented in the Supplementary. \begin{table}[!ht] \small \centering \caption{Accuracies of different models (\%) are evaluated on RAVDESS when audio features are perturbed. Mean accuracies are reported after repeatedly running 20 times.} \vspace{2pt} \begin{tabular}{clcccccccc} \toprule UM / MM & ~~~~~Model & Clean & $\omega_0 = 1.0$ & $\omega_1 = 6$ & $\omega_4 = 0.03$ & $\omega_5 = 0.001$\\ \midrule & 0: Img-regT & $82.5$ & $82.5$ & $82.5$ & $82.5$ & $82.5$ \\ & 1: Aud-regT & $71.9$ & $54.2$ & $59.3$ & $31.5$ & $29.9$ \\ UM & 2: Aud-advT & $78.6$ & $43.1$ & $71.9$ & $53.3$ & $62.3$\\ Nets & 3: Aud-freT & $66.4$ & $19.5$ & $62.4$ & $56.0$ & $60.5$\\ & 4: Aud-staT & $71.8$ & $58.3$ & $59.8$ & $25.9$ & $29.6$ \\ & 5: Aud-mixT & $74.6$ & $54.5$ & $66.9$ & $15.7$ & $19.2$\\ \midrule & Mean-w/o & $89.8$ & $82.8$ & $86.7$ & $57.0$ & $63.6$ \\ MM & Mean-w/ & $88.5$ & $80.4$ & $84.1$ & $60.7$ & $57.3$\\ (0, 1) & Stat-w/o & $89.9$ & $82.9$ & $86.3$ & $56.9$ & $63.3$ \\ & Stat-w/ (ours) & $89.9$ ({\color{green}{$\uparrow$}}) & $85.0$ ({\color{green}{$\uparrow$}}) & $87.7$ ({\color{green}{$\uparrow$}}) & $59.4$ ({\color{red}{$\downarrow$}}) & $68.1$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $90.8$ & $76.9$ & $89.5$ & $87.3$ & $90.6$\\ MM & Mean-w/ & $89.3$ & $76.9$ & $87.6$ & $79.7$ & $85.9$\\ (0, 2) & Stat-w/o & $90.8$ & $77.1$ & $89.6$ & $87.2$ & $90.5$ \\ & Stat-w/ (ours) & $\bm{91.4}$ ({\color{green}{$\uparrow$}}) & $79.9$ ({\color{green}{$\uparrow$}}) & $\bm{90.2}$ ({\color{green}{$\uparrow$}}) & $88.6$ ({\color{green}{$\uparrow$}}) & $90.6$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $90.5$ & $60.8$ & $89.6$ & $\bm{90.1}$ & $90.8$\\ MM & Mean-w/ & $86.9$ & $64.0$ & $86.6$ & $82.2$ & $85.7$\\ (0, 3) & Stat-w/o & $90.5$ & $61.5$ & $89.2$ & $90.0$ & $90.7$ \\ & Stat-w/ (ours) & $90.8$ ({\color{green}{$\uparrow$}}) & $65.6$ ({\color{green}{$\uparrow$}}) & $90.0$ 
({\color{green}{$\uparrow$}}) & $89.5$ ({\color{red}{$\downarrow$}}) & $\bm{90.7}$ ({\color{red}{$\downarrow$}}) \\ \hline & Mean-w/o & $90.7$ & $88.4$ & $88.5$ & $69.3$ & $78.8$\\ MM & Mean-w/ & $89.2$ & $85.4$ & $85.5$ & $68.9$ & $69.3$\\ (0, 4) & Stat-w/o & $90.4$ & $88.3$ & $88.5$ & $69.7$ & $78.9$ \\ & Stat-w/ (ours) & $90.8$ ({\color{green}{$\uparrow$}}) & $\bm{88.8}$ ({\color{green}{$\uparrow$}}) & $88.7$ ({\color{green}{$\uparrow$}}) & $75.3$ ({\color{green}{$\uparrow$}}) & $82.3$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $89.5$ & $87.8$ & $87.9$ & $81.7$ & $82.7$\\ MM & Mean-w/ & $87.9$ & $85.8$ & $86.5$ & $82.0$ & $79.7$\\ (0, 5) & Stat-w/o & $89.4$ & $87.7$ & $87.9$ & $81.6$ & $82.5$ \\ & Stat-w/ (ours) & $86.6$ ({\color{red}{$\downarrow$}}) & $86.7$ ({\color{red}{$\downarrow$}}) & $86.0$ ({\color{red}{$\downarrow$}}) & $82.9$ ({\color{green}{$\uparrow$}}) & $83.7$ ({\color{green}{$\uparrow$}}) \\ \bottomrule \end{tabular} \label{table:exp_ravdess_moda1} \end{table} \subsection{VGGSound} \begin{table}[!ht] \small \centering \caption{Accuracies of different models (\%) are evaluated on VGGSound when video features are perturbed. 
Mean accuracies are reported after repeatedly running 5 times.} \vspace{2pt} \begin{tabular}{clcccccccc} \toprule UM / MM & ~~~~~Model & Clean & $\omega_0 = 1.5$ & $\omega_1 = 6$ & $\omega_4 = 0.03$ & $\omega_5 = 0.001$\\ \midrule & 0: Aud-regT & $54.4$ & $15.0$ & $49.8$ & $23.0$ & $19.9$ \\ & 1: Img-regT & $27.4$ & $5.8$ & $27.4$ & $9.5$ & $9.0$ \\ UM & 2: Img-advT & $27.5$ & $5.3$ & $27.4$ & $10.7$ & $10.3$\\ Nets & 3: Img-freT & $25.2$ & $4.0$ & $24.2$ & $20.4$ & $22.9$\\ & 4: Img-staT & $27.0$ & $6.9$ & $26.9$ & $10.5$ & $9.6$ \\ & 5: Img-mixT & $27.2$ & $8.4$ & $27.1$ & $7.3$ & $7.2$\\ \midrule & Mean-w/o & $57.7$ & $45.8$ & $57.7$ & $35.0$ & $25.7$ \\ MM & Mean-w/ & $53.9$ & $48.6$ & $53.9$ & $37.9$ & $20.5$\\ (0, 1) & Stat-w/o & $58.5$ & $46.0$ & $58.4$ & $35.3$ & $26.0$ \\ & Stat-w/ (ours) & $60.1$ ({\color{green}{$\uparrow$}}) & $51.2$ ({\color{green}{$\uparrow$}}) & $60.0$ ({\color{green}{$\uparrow$}}) & $39.7$ ({\color{green}{$\uparrow$}}) & $28.5$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $58.8$ & $45.0$ & $58.8$ & $37.2$ & $26.9$\\ MM & Mean-w/ & $54.1$ & $49.0$ & $54.1$ & $38.3$ & $21.3$\\ (0, 2) & Stat-w/o & $59.4$ & $45.4$ & $59.4$ & $37.3$ & $27.2$ \\ & Stat-w/ (ours) & $\bm{61.1}$ ({\color{green}{$\uparrow$}}) & $50.1$ ({\color{green}{$\uparrow$}}) & $\bm{61.1}$ ({\color{green}{$\uparrow$}}) & $41.2$ ({\color{green}{$\uparrow$}}) & $29.7$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $57.9$ & $51.5$ & $57.9$ & $57.8$ & $57.7$\\ MM & Mean-w/ & $55.2$ & $53.6$ & $55.2$ & $55.0$ & $55.2$\\ (0, 3) & Stat-w/o & $58.8$ & $52.6$ & $58.8$ & $55.6$ & $57.7$ \\ & Stat-w/ (ours) & $56.7$ ({\color{red}{$\downarrow$}}) & $53.2$ ({\color{red}{$\downarrow$}}) & $56.7$ ({\color{red}{$\downarrow$}}) & $\bm{58.5}$ ({\color{green}{$\uparrow$}}) & $\bm{57.9}$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $57.5$ & $47.2$ & $57.3$ & $36.9$ & $30.9$\\ MM & Mean-w/ & $53.5$ & $49.2$ & $53.4$ & $37.2$ & $23.4$\\ (0, 4) & Stat-w/o & $58.2$ & $47.5$ 
& $58.1$ & $37.4$ & $31.0$ \\ & Stat-w/ (ours) & $59.8$ ({\color{green}{$\uparrow$}}) & $51.9$ ({\color{green}{$\uparrow$}}) & $59.7$ ({\color{green}{$\uparrow$}}) & $41.0$ ({\color{green}{$\uparrow$}}) & $34.0$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $57.8$ & $55.3$ & $57.8$ & $53.9$ & $53.5$\\ MM & Mean-w/ & $55.6$ & $54.7$ & $55.6$ & $54.3$ & $49.5$\\ (0, 5) & Stat-w/o & $59.1$ & $\bm{56.4}$ & $59.0$ & $54.5$ & $54.3$ \\ & Stat-w/ (ours) & $57.1$ ({\color{red}{$\downarrow$}}) & $55.9$ ({\color{red}{$\downarrow$}}) & $57.0$ ({\color{red}{$\downarrow$}}) & $55.2$ ({\color{green}{$\uparrow$}}) & $55.4$ ({\color{green}{$\uparrow$}}) \\ \bottomrule \end{tabular} \label{table:exp_vggsound_moda1} \end{table} We consider a classification task on the real-world dataset VGGSound~\cite{chen2020vggsound}, where two modalities, audio and video, are available. To construct a tractable problem, our classification task focuses on a subset of all classes: we randomly choose 100 classes and restrict the classification problem to them. Consequently, there are 53622 audio-video pairs for training and 4706 audio-video pairs for testing. For the audio modality, we apply a short-time Fourier transform to each raw waveform, generating a 313 $\times$ 513 spectrogram. We take a 2D ResNet-18~\cite{resnet} to process these spectrograms. For the video modality, we evenly sample 32 frames from each 10-second video, resulting in an input feature size of 32 $\times$ 256 $\times$ 256. We take a ResNet-18 network to process the video input, in which 3D convolutions replace the 2D convolutions. \vspace{6pt} \noindent\textbf{Results}\quad We present the results of mean fusion and statistical fusion with or without the robust add-on in Table \ref{table:exp_vggsound_moda1}. Unlike the previous experiments, here we keep the audio modality clean and assume corruptions/attacks on the video modality. 
The results demonstrate that our method outperforms its counterpart, statistical fusion without Jacobian regularization, in most cases. Furthermore, the bold results imply that our Jacobian regularization could enhance model robustness under various corruptions or attacks. {In the Supplementary, we consider another important setting: the video modality (or audio modality) is corrupted, but we do not know which one.} Thus, both modalities are trained with robust add-ons. Please refer to the Supplementary for more details. \section{Missing Deduction} \subsection{Equivalent Form and Derivative of Statistical Fusion}\label{appendix:proof_of_jacobian_stat_fusion} In this subsection, we first prove Lemma 1 shown in the main text. For conciseness, we denote the $k$-th element of $\mathbf{p}_A$ as ${p}_{A,k}$. Similar notations are adopted for $\mathbf{p}_B$, $\mathbf{z}_A$, $\mathbf{z}_B$, and $\ln\mathbf{freq}$. The $k$-th element of the left-hand side (LHS) in Lemma 1 can be simplified as: \begin{equation}\label{eq:lhs_rhs}\tag{S1} \begin{aligned} \frac{{p}_{A,k}{p}_{B,k}/{freq}_k}{\sum_{j=1}^K {p}_{A,j}{p}_{B,j}/{freq}_j} &=\frac{\frac{\exp({z}_{A,k})}{T_A}\frac{\exp({z}_{B,k})}{T_B}/{freq}_k}{\sum_{j=1}^K \frac{\exp({z}_{A,j})}{T_A}\frac{\exp({z}_{B,j})}{T_B}/{freq}_j}\\ &=\frac{{\exp({z}_{A,k})\exp({z}_{B,k})}/{freq}_k}{\sum_{j=1}^K \exp({z}_{A,j})\exp({z}_{B,j})/{freq}_j}\\ &=\frac{\exp({z}_{A,k}+{z}_{B,k}-\ln{freq}_k)}{\sum_{j=1}^K \exp({z}_{A,j}+{z}_{B,j}-\ln{freq}_j)}\\ \end{aligned} \end{equation} which is exactly the $k$-th element of the right-hand side (RHS) in Lemma 1 by definition. Note that in the first line of (\ref{eq:lhs_rhs}), we use the definitions $\mathbf{p}_A=\bm\sigma_1(\mathbf{z}_A)$ and $\mathbf{p}_B=\bm\sigma_1(\mathbf{z}_B)$, and have introduced two normalization constants $T_A=||\exp(\mathbf{z}_A)||_1$ and $T_B=||\exp(\mathbf{z}_B)||_1$. 
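As a quick numerical sanity check of Lemma 1 (an illustration of ours, not part of the derivation), one can draw random logits and class frequencies and compare both sides of the identity directly:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10                               # number of classes
z_A = rng.normal(size=K)             # raw logits of modality A
z_B = rng.normal(size=K)             # raw logits of modality B
freq = rng.dirichlet(np.ones(K))     # class frequencies (positive, sum to 1)

def softmax(z):
    e = np.exp(z - z.max())          # shift by max for numerical stability
    return e / e.sum()

p_A, p_B = softmax(z_A), softmax(z_B)

# LHS of Lemma 1: product of probabilities divided by frequencies, renormalized
lhs = p_A * p_B / freq
lhs = lhs / lhs.sum()

# RHS of Lemma 1: softmax of the summed logits shifted by the log-frequencies
rhs = softmax(z_A + z_B - np.log(freq))

assert np.allclose(lhs, rhs)
```

The normalization constants $T_A$ and $T_B$ cancel inside the ratio, which is why they never need to be computed explicitly.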
As a result, an equivalent form of (3) is $\mathbf{p}^\prime=\bm\sigma_1(\mathbf{W}_a\mathbf{z}_A+\mathbf{W}_b\mathbf{z}_B-\ln\mathbf{freq})$. Based on the chain rule and $\mathbf{z}_A=\mathbf{W}_A\mathbf{h}_A+\mathbf{b}_A$, we have: \begin{equation}\tag{S2} \frac{\partial \mathbf{p}^\prime}{\partial \mathbf{h}_A}=\frac{\partial \mathbf{p}^\prime}{\partial \mathbf{z}_A}\frac{\partial \mathbf{z}_A}{\partial \mathbf{h}_A} = \mathbf{J}^\prime\mathbf{W}_a\mathbf{W}_A=[\mathbf{p}^\prime\mathbf{p}^{\prime,T}-\text{Diag}(\mathbf{p}^\prime)]\mathbf{W}_a\mathbf{W}_A \end{equation} where we have used the Jacobian matrix of the softmax function to calculate $\mathbf{J}^\prime$. We emphasize that $\mathbf{J}^\prime$ is symmetric. \subsection{Solution of Regularized Jacobian Loss}\label{appendix:solution_of_jaco_loss} Because the Frobenius norm is convex and a weighted sum with positive coefficients preserves convexity, the relaxed optimization problem shown in Step 5 of Algorithm 1 is convex with respect to $\mathbf{W}_a$. This implies that any local minimum is also the global minimum. Setting the derivative of the relaxed optimization problem with respect to $\mathbf{W}_a$ to zero yields: \begin{equation}\label{eq:slv}\tag{S3} \begin{aligned} &(\frac{1}{\gamma}-1)\mathbf{J}^{(t),2}\mathbf{W}_a(\mathbf{W}_A\mathbf{W}_A^T) + \mathbf{W}_a = \mathbf{I}\\ &\quad \Rightarrow \mathbf{A}\mathbf{W}_a+\mathbf{W}_a\mathbf{B}=\mathbf{B} \end{aligned} \end{equation} where for simplicity we have defined $\mathbf{A}=(1/\gamma-1)\mathbf{J}^{(t),2}\in\mathbb{R}^{K\times K}$ and $\mathbf{B}=(\mathbf{W}_A\mathbf{W}_A^T)^{-1}\in\mathbb{R}^{K\times K}$. 
(\ref{eq:slv}) is known as the Sylvester equation\footnote{A general Sylvester equation has the form $\mathbf{A}\mathbf{X}+\mathbf{X}\mathbf{B}=\mathbf{C}$, where $\{\mathbf{A},\mathbf{B},\mathbf{C}\}$ are all known and $\mathbf{X}$ is the unknown to be solved for.}, and it has been proven that if $\mathbf{A}$ and $\mathbf{-B}$ do not share any eigenvalues, then the Sylvester equation has a unique solution. In our case, since $\mathbf{A}$ is positive semi-definite (PSD) and $-\mathbf{B}$ is negative semi-definite, the only possible overlapping eigenvalue is $0$. On the one hand, $0$ is surely an eigenvalue of $\mathbf{A}$, since $\mathbf{J}^{(t)}\mathbf{u}=\mathbf{0}$, where $\mathbf{u}$ is a column vector with all elements equal to $1$. On the other hand, it is argued that a well-generalized neural network normally does not have singular weight matrices \cite{martin_weight_nonsingular}. Thus, $0$ is almost surely not an overlapping eigenvalue, and a unique solution of (\ref{eq:slv}) is guaranteed. As demonstrated in \cite{solve_sylvester1}, the matrix equation (\ref{eq:slv}) can be reduced to inverting a $K\times K$ matrix. Since $K$ is not too large in a classification task, the run time of solving the Sylvester equation is affordable in this context. Furthermore, as shown in the above equation, when $\gamma\to 0$ the matrix $\mathbf{A}$ tends to infinity, leading to an ill-conditioned matrix equation. This, in turn, implies that the second term in the loss shown in (5) guarantees numerical stability. 
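For concreteness, the solve in Step 5 can be carried out with an off-the-shelf Sylvester solver. The sketch below is our own illustration under assumed toy shapes, using the paper's sign convention $\mathbf{J}=\mathbf{p}\mathbf{p}^T-\text{Diag}(\mathbf{p})$ and `scipy.linalg.solve_sylvester`:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def solve_wa(J, W_A, gamma):
    """Solve (1/gamma - 1) * J^2 @ W_a + W_a @ B = B for W_a,
    where B = inv(W_A @ W_A.T) -- the Sylvester form of (S3)."""
    A = (1.0 / gamma - 1.0) * (J @ J)
    B = np.linalg.inv(W_A @ W_A.T)   # assumes W_A @ W_A.T is non-singular
    return solve_sylvester(A, B, B)  # solves A @ X + X @ B = B

# Toy setup: J is the symmetric softmax Jacobian (paper's convention)
rng = np.random.default_rng(0)
K, d = 4, 8
p = rng.dirichlet(np.ones(K))
J = np.outer(p, p) - np.diag(p)
W_A = rng.normal(size=(K, d))

W_a = solve_wa(J, W_A, gamma=0.5)

# Residual of the matrix equation (should vanish up to round-off)
A = (1.0 / 0.5 - 1.0) * (J @ J)
B = np.linalg.inv(W_A @ W_A.T)
residual = A @ W_a + W_a @ B - B
```

Note that $\gamma=1$ gives $\mathbf{A}=\mathbf{0}$, so the solver returns $\mathbf{W}_a=\mathbf{I}$, i.e., trivial fusion, consistent with the discussion of the $\gamma$ hyper-parameter above.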
\subsection{Proof of Theorem 1}\label{appendix:proof_of_them_1} In the last iteration $t=t_{\text{max}}\to\infty$, noticing $\mathbf{W}_a$ satisfies (\ref{eq:slv}), we have: \begin{equation}\tag{S4} \begin{aligned} ||\mathbf{J}^{(t_{\text{max}})}\mathbf{W}_a\mathbf{W}_A||^2_F &=\text{Tr}[\mathbf{J}^{(t_{\text{max}})}\mathbf{W}_a\mathbf{W}_A\mathbf{W}_A^T\mathbf{W}_a^T\mathbf{J}^{(t_{\text{max}})}]\\ &=\text{Tr}[\mathbf{J}^{(t_{\text{max}}),2}\mathbf{W}_a(\mathbf{W}_A\mathbf{W}_A^T)\mathbf{W}_a^T]\\ &=(\frac{1}{\gamma}-1)^{-1}\text{Tr}[\mathbf{A}\mathbf{W}_a\mathbf{B}^{-1}\mathbf{W}_a^T]\\ &=(\frac{1}{\gamma}-1)^{-1}\text{Tr}[(\mathbf{B}-\mathbf{W}_a\mathbf{B})\mathbf{B}^{-1}\mathbf{W}_a^T]\\ &=(\frac{1}{\gamma}-1)^{-1}\text{Tr}[\mathbf{W}_a-\mathbf{W}_a\mathbf{W}_a^T]\\ \end{aligned} \end{equation} and: \begin{equation}\tag{S5} \begin{aligned} ||\mathbf{W}_a-\mathbf{I}||^2_F &=\text{Tr}[\mathbf{W}_a\mathbf{W}_a^T-2\mathbf{W}_a+\mathbf{I}]\\ \end{aligned} \end{equation} Thus we have: \begin{equation}\tag{S6} \begin{aligned} ||\mathbf{J}^{(t_{\text{max}})}\mathbf{W}_a\mathbf{W}_A||^2_F & \leq \frac{1}{1-\gamma}[(1-\gamma)||\mathbf{J}^{(t_{\text{max}})}\mathbf{W}_a\mathbf{W}_A||^2_F+\frac{\gamma}{2}||\mathbf{W}_a-\mathbf{I}||^2_F]\\ & = \frac{1}{1-\gamma}\left\{\gamma\text{Tr}[\mathbf{W}_a-\mathbf{W}_a\mathbf{W}_a^T]+\frac{\gamma}{2}\text{Tr}[\mathbf{W}_a\mathbf{W}_a^T-2\mathbf{W}_a^T+\mathbf{I}]\right\}\\ &=\frac{\gamma}{2(1-\gamma)}\text{Tr}[\mathbf{I}-\mathbf{W}_a\mathbf{W}_a^T]\\ &\leq\frac{\gamma}{2(1-\gamma)}\text{Tr}[\mathbf{I}]=\frac{\gamma K}{2(1-\gamma)}\\ \end{aligned} \end{equation} where we have taken advantage of the fact that $\mathbf{W}_a\mathbf{W}_a^T$ is a positive semi-definite matrix, and thus its trace is no less than zero. 
Based on the first-order Taylor expansion, we have: \begin{equation}\tag{S7} \begin{aligned} \mathbf{p}^{\prime,noise}-\mathbf{p}^\prime &\approx\frac{\partial \mathbf{p}^\prime}{\partial \mathbf{h}_A}(\mathbf{h}_A^{\text{noise}}-\mathbf{h}_A)\\ &=\frac{\partial \mathbf{p}^{(t_{\text{max}}+1)}}{\partial \mathbf{h}_A}(\mathbf{h}_A^{\text{noise}}-\mathbf{h}_A)\\ &\approx\mathbf{J}^{(t_\text{max})}\mathbf{W}_a\mathbf{W}_A(\mathbf{h}_A^{\text{noise}}-\mathbf{h}_A)\\ \end{aligned} \end{equation} Thus we have: \begin{equation}\tag{S8} \begin{aligned} ||\mathbf{p}^{\prime,noise}-\mathbf{p}^\prime|| &\leq ||\mathbf{J}^{(t_\text{max})}\mathbf{W}_a\mathbf{W}_A||_F\cdot||\mathbf{h}_A^{\text{noise}}-\mathbf{h}_A||\\ &\leq \sqrt{\frac{\gamma K}{2(1-\gamma)}} \cdot l ||\mathbf{x}_A^{\text{noise}}-\mathbf{x}_A||\\ &= l\sqrt{\frac{\gamma K}{2(1-\gamma)}}||\bm\epsilon||\\ \end{aligned} \end{equation} where we have utilized the $l$-Lipschitz continuity of $\mathbf{f}_A$. \subsection{Unimodal 2D Case}\label{appendix:proof_of_2d_unimodal} Consider an illustrative example with $K=2$ (i.e., unimodal binary classification). In this case, the final prediction is two-dimensional: $\mathbf{p}=[p_1\,p_2]^T$. We denote the raw logits after adding the $\mathbf{W}_x$ matrix as $\mathbf{z}^\prime=\mathbf{W}_x\mathbf{z}$, and its entries are denoted as ${z}_{1}^\prime$ and ${z}_{2}^\prime$. Let us begin by explicitly writing out the first line of (\ref{eq:slv}). After some simplification, we obtain: \begin{equation}\tag{S9} \left[\begin{array}{cc} \kappa & -\kappa \\ -\kappa & \kappa \end{array}\right]\left[\begin{array}{cc} \rho_1 & \rho_2\\ \rho_3 & \rho_4 \\ \end{array}\right] \left[\begin{array}{cc} a & b \\ b & c \end{array}\right] +\left[\begin{array}{cc} \rho_1 & \rho_2\\ \rho_3 & \rho_4 \\ \end{array}\right]=\mathbf{I} \end{equation} where $\kappa=1/\gamma-1$. 
Here we have explicitly written out the elements of $\mathbf{W}\mathbf{W}^T$ and merged some terms of $\mathbf{J}^{\prime,2}$ into $a=p_1p_2(w_{11}^2+w_{12}^2)$, $b=p_1p_2(w_{21}w_{11}+w_{12}w_{22})$, and $c=p_1p_2(w_{21}^2+w_{22}^2)$. The unknowns occur in $\mathbf{W}_x$: \begin{equation}\tag{S10} \mathbf{W}_x = \left[\begin{array}{cc} \rho_1 & \rho_2\\ \rho_3 & \rho_4 \\ \end{array}\right] \end{equation} After some algebra, we can solve for all unknowns via the following block matrix inversion: \begin{equation}\tag{S11} \left[\begin{array}{c} \rho_1\\ \rho_2\\ \rho_3\\ \rho_4\\ \end{array}\right]=\left[\begin{array}{cc} \mathbf{G} & \mathbf{H} \\ \mathbf{H} & \mathbf{G} \\ \end{array}\right]^{-1}\cdot\left[\begin{array}{c} 1\\ 0 \\ 0\\ 1\\ \end{array}\right] \end{equation} where for simplicity, we have defined: \begin{equation}\tag{S12} \mathbf{G}=\left[\begin{array}{cc} 1+ a\kappa & b\kappa \\ b\kappa & 1+ c\kappa \\ \end{array}\right],\quad \mathbf{H}=\left[\begin{array}{cc} -a\kappa & -b\kappa \\ -b\kappa & -c\kappa \\ \end{array}\right] \end{equation} Noticing that $\mathbf{z}^\prime=\mathbf{W}_x\mathbf{z}$, we can prove that: \begin{equation}\tag{S13} ({z}_{1}^\prime-{z}_{2}^\prime)\cdot({z}_{1}-{z}_{2})=\mathbf{z}^T \left[\begin{array}{cc} \rho_1 - \rho_3 & -(\rho_1 - \rho_3)\\ \rho_2 - \rho_4 & -(\rho_2 - \rho_4) \end{array}\right]\mathbf{z} \end{equation} It is easy to calculate that the eigenvalues of the matrix in the middle are $0$ and $\rho_1+\rho_4-\rho_2-\rho_3$. In other words, if we can prove that $\rho_1+\rho_4-\rho_2-\rho_3$ is no less than zero, then the matrix in the middle is positive semi-definite, which implies that the order-preserving property holds: if ${z}_{1}>{z}_{2}$, then ${z}_{1}^\prime>{z}_{2}^\prime$, and vice versa. This order-preserving property is equivalent to saying that adding the $\mathbf{W}_x$ matrix has no effect in the unimodal binary classification problem, since the prediction will not change for any input. 
Now let us prove that $\rho_1+\rho_4-\rho_2-\rho_3\geq 0$. After considerable algebra, we obtain: \begin{equation}\label{eq:sum_of_rho}\tag{S14} \begin{aligned} \rho_1+\rho_4-\rho_2-\rho_3 &=\left[\begin{array}{cccc} 1 & -1 & -1 & 1 \end{array}\right] \left[\begin{array}{c} \rho_1\\ \rho_2\\ \rho_3\\ \rho_4\\ \end{array}\right]\\ &=\left[\begin{array}{cccc} 1 & -1 & -1 & 1 \end{array}\right] \left[\begin{array}{cc} \mathbf{G} & \mathbf{H} \\ \mathbf{H} & \mathbf{G} \\ \end{array}\right]^{-1}\cdot\left[\begin{array}{c} 1\\ 0 \\ 0\\ 1\\ \end{array}\right]\\ &=\frac{2(a+2b+c)\kappa+2}{4(ac-b^2)\kappa^2+2(a+c)\kappa+1} \end{aligned} \end{equation} Based on the definitions of $\{a,b,c\}$, by using the Cauchy inequality and completing the square, we can prove: $ac-b^2\geq 0$, $a+c\geq 0$, $a+2b+c\geq 0$, and $a-2b+c\geq 0$. Treat $\kappa\in(0,\infty)$ as a variable. The derivative of (\ref{eq:sum_of_rho}) with respect to $\kappa$ is a fraction. Its denominator is always larger than zero, while the numerator is a quadratic polynomial: \begin{equation}\tag{S15} \underbrace{8(b^2-ac)(a+2b+c)}_{\leq 0}\kappa^2+\underbrace{16(b^2-ac)}_{\leq 0}\kappa-2\underbrace{(a-2b+c)}_{\geq 0} \end{equation} Thus $\rho_1+\rho_4-\rho_2-\rho_3$ decreases monotonically as $\kappa$ increases on $(0,\infty)$. Its infimum is $0$, attained as $\kappa\to\infty$. This completes the proof. \subsection{Unimodal High-Dimensional Case}\label{appendix:proof_unimodal_high_d} Without loss of generality, we consider the case where $\mathbf{W}\mathbf{W}^T=\mathbf{I}$. We emphasize that this is a mild assumption. 
This is because, when $\mathbf{W}$ does not satisfy the condition, we can perform the SVD $\mathbf{W}=\mathbf{U}\bm{\Sigma}\mathbf{V}$ and convert the FC layer $\mathbf{Wh}+\mathbf{b}$ into three sequential linear layers: $\mathbf{h}\to\mathbf{Vh}\to\bm{\Sigma}\mathbf{V}\mathbf{h}\to\mathbf{U}\bm{\Sigma}\mathbf{V}\mathbf{h}+\mathbf{b}$, where the weight matrix of the last layer satisfies $\mathbf{U}\mathbf{U}^T=\mathbf{I}$. Under this assumption, $\mathbf{B}=(\mathbf{W}\mathbf{W}^T)^{-1}=\mathbf{I}$, and (\ref{eq:slv}) simplifies to: \begin{equation}\label{eq:Wx}\tag{S16} \mathbf{W}_x=[(\frac{1}{\gamma}-1)\mathbf{J}^{\prime,2}+\mathbf{I}]^{-1}=(\kappa\mathbf{J}^{\prime,2}+\mathbf{I})^{-1} \end{equation} where $\mathbf{J}^{\prime}=\mathbf{J}^{(0)}$ is known, and for simplicity, we have denoted $\kappa=1/\gamma-1$. From this expression, it is clear that the second term of (5) guarantees numerical stability, since $\mathbf{J}^{\prime,2}$ is not invertible and, without the identity matrix, (\ref{eq:Wx}) would not exist. Before moving on, we define a helper vector $\mathbf{e}_{ij}\in\mathbb{R}^K$, whose $i$-th and $j$-th entries equal $1$ and $-1$, respectively, and all other entries equal $0$. Then, using the relation $\mathbf{z}^\prime=\mathbf{W}_x\mathbf{z}$, we have: \begin{equation}\tag{S17} \begin{aligned} (z_i^\prime-z_j^\prime)(z_i-z_j) &= (\mathbf{z}^{\prime,T}\mathbf{e}_{ij})(\mathbf{e}_{ij}^T\mathbf{z})\\ &= \mathbf{z}^{\prime,T}\mathbf{e}_{ij}\mathbf{e}_{ij}^T\mathbf{W}_x^{-1}\mathbf{z}^\prime\\ \end{aligned} \end{equation} This implies that if $\mathbf{e}_{ij}\mathbf{e}_{ij}^T\mathbf{W}_x^{-1}$ is a positive semi-definite (PSD) matrix for every $i\neq j$, then the order-preserving property holds (i.e., if ${z}_i>{z}_j$, then ${z}_i^\prime>{z}_j^\prime$). Since $\mathbf{e}_{ij}\mathbf{e}_{ij}^T\mathbf{W}_x^{-1}$ is not symmetric, examining whether it is PSD requires us to consider the sum of the matrix and its transpose. 
Namely, $\mathbf{e}_{ij}\mathbf{e}_{ij}^T\mathbf{W}_x^{-1}$ is PSD if and only if the eigenvalues of $[\mathbf{e}_{ij}\mathbf{e}_{ij}^T\mathbf{W}_x^{-1}+(\mathbf{e}_{ij}\mathbf{e}_{ij}^T\mathbf{W}_x^{-1})^T]$ are all no less than zero. Noticing that $\mathbf{W}_x$ and $\mathbf{W}_x^{-1}$ are symmetric, as shown in (\ref{eq:Wx}), we have \begin{equation}\tag{S18} \begin{aligned} \mathbf{e}_{ij}\mathbf{e}_{ij}^T\mathbf{W}_x^{-1}+(\mathbf{e}_{ij}\mathbf{e}_{ij}^T\mathbf{W}_x^{-1})^T &=\mathbf{e}_{ij}\mathbf{e}_{ij}^T\mathbf{W}_x^{-1}+\mathbf{W}_x^{-1}\mathbf{e}_{ij}\mathbf{e}_{ij}^T\\ &=\mathbf{e}_{ij}\mathbf{e}_{ij}^T(\kappa\mathbf{J}^{\prime,2}+\mathbf{I})+(\kappa\mathbf{J}^{\prime,2}+\mathbf{I})\mathbf{e}_{ij}\mathbf{e}_{ij}^T\\ &=2\mathbf{e}_{ij}\mathbf{e}_{ij}^T+\kappa(\mathbf{e}_{ij}\mathbf{e}_{ij}^T\mathbf{J}^{\prime,2}+\mathbf{J}^{\prime,2}\mathbf{e}_{ij}\mathbf{e}_{ij}^T) \end{aligned} \end{equation} For simplicity, we denote the set of $\kappa$ that makes all eigenvalues of the above matrix non-negative as: $\Gamma_{ij}=\{\kappa>0|\lambda_{min}(2\mathbf{e}_{ij}\mathbf{e}_{ij}^T+\kappa(\mathbf{e}_{ij}\mathbf{e}_{ij}^T\mathbf{J}^{\prime,2}+\mathbf{J}^{\prime,2}\mathbf{e}_{ij}\mathbf{e}_{ij}^T))\geq 0\}$, where $\lambda_{min}(\cdot)$ represents the minimum eigenvalue of a matrix. Then a necessary and sufficient condition for the order-preserving property is $\kappa\in\cap_{i\neq j}\Gamma_{ij}$. An immediate corollary is: if we want the final prediction to change after adding $\mathbf{W}_x$, then we must have $\frac{1}{\gamma}-1\notin\cap_{i\neq j}\Gamma_{ij}$. \section{Experiment Details and Ablation Study}\label{appendix:exp_details_ablation_study} \subsection{Definition of Omega}\label{appendix:def_of_omega} \begin{figure}[!htp] \centering \includegraphics[width=0.5\linewidth]{figures/avmnist.pdf} \caption{Image samples in AV-MNIST. 
Original images are shown in the first column, and their counterparts perturbed by Gaussian noise ($\omega_0=0.2$) are shown in the second column. The third ($\omega_3=-1$) and last ($\omega_3=2$) columns show the images perturbed by bias noise.} \label{fig:avmnist_noise} \end{figure} Take AV-MNIST as an example. We have defined the following types of perturbations. \begin{itemize} \item \textbf{Gaussian noise}: Perturb every element $x$ in the image (or audio) feature by Gaussian noise of the form $x^{noise}=(1+\epsilon)x$, where $\epsilon\sim N(0,\omega_0^2)$. \item \textbf{Missing entries}: For the audio modality, we randomly select $\omega_1$ consecutive columns (or rows) and set all elements therein to $0$. This corresponds to missing time frames (or frequency components). \item \textbf{Bias noise}: For the image modality, we randomly select an image patch of size $\omega_2\times\omega_2$, and every element $x$ therein is perturbed by a given amount: $x^{noise}=(1+\omega_3)x$. This corresponds to a change of illumination (see Figure \ref{fig:avmnist_noise}). \item \textbf{FGSM attack}: We use the Cleverhans library \cite{papernot2018cleverhans} to implement an FGSM attack on the input image/audio feature with the $l_{\infty}$ norm. The maximum overall norm variation is $\omega_4$ (e.g., $\omega_4=0.3$). \item \textbf{PGD attack}: Cleverhans is adopted to implement a PGD attack on the input image/audio feature with the $l_{\infty}$ norm. The maximum overall norm variation also equals $\omega_4$, that for each inner attack is $\omega_5$ (e.g., $\omega_5=0.01$), and the number of inner attacks is $\omega_6$. \end{itemize} Unless explicitly mentioned, $\omega_6$ is set to $20$. 
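The three noise-type perturbations above can be sketched in a few lines. The snippet below is a minimal illustration with toy shapes and our own function names (the FGSM/PGD attacks are omitted, as they follow the standard Cleverhans API):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_noise(x, omega0):
    """x_noise = (1 + eps) * x with eps ~ N(0, omega0^2), element-wise."""
    return x * (1.0 + rng.normal(scale=omega0, size=x.shape))

def missing_entries(audio, omega1):
    """Zero out omega1 consecutive columns (missing time frames)."""
    out = audio.copy()
    start = rng.integers(0, audio.shape[1] - omega1 + 1)
    out[:, start:start + omega1] = 0.0
    return out

def bias_noise(image, omega2, omega3):
    """Scale every pixel of a random omega2 x omega2 patch by (1 + omega3)."""
    out = image.copy()
    r = rng.integers(0, image.shape[0] - omega2 + 1)
    c = rng.integers(0, image.shape[1] - omega2 + 1)
    out[r:r + omega2, c:c + omega2] *= (1.0 + omega3)
    return out

img = rng.random((28, 28))   # toy image feature in [0, 1)
aud = rng.random((20, 20))   # toy audio spectrogram

noisy = gaussian_noise(img, omega0=0.2)
masked = missing_entries(aud, omega1=6)
biased = bias_noise(img, omega2=10, omega3=2.0)
```

Note that both Gaussian noise and bias noise are multiplicative, so zero-valued entries are left untouched in either case.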
\subsection{Extra Results on RAVDESS}\label{appendix:extra_on_ravdess} \textbf{Impact of hyper-parameters}\quad Following Figure 4 of the main text\footnote{Unless explicitly mentioned, as in this sentence, all cross-referenced tables and figures refer to those in this Supplementary.}, we also plot model accuracy versus the magnitude of noise/attack for different $\gamma$ values in the emotion recognition experiment. The results are shown in Figure \ref{fig:ablation_diff_gamma_ravdess}. The multimodal network with our Jacobian regularization invoked (i.e., the orange line) surpasses the network without it (i.e., the blue line) in almost all cases. \begin{figure}[!htb] \centering \includegraphics[width=1.0\linewidth]{figures/Picture4.pdf} \caption{Accuracies of multimodal networks obtained by statistically fusing a vanilla image network and audio network with $\gamma=0.1$ (first row), $\gamma=0.5$ (second row), and $\gamma=0.9$ (third row). Mean and Std of 20 repeated experiments are shown by the solid lines and the shaded regions, respectively. Note that the FGSM attack is deterministic; thus there is almost no variance across repeated experiments.} \label{fig:ablation_diff_gamma_ravdess} \end{figure} \vspace{6pt} \noindent\textbf{Perturbation on the image modality}\quad Here we consider perturbations on the image modality. Similarly, we have deliberately enlarged the magnitude of the noise/attack, and results are presented in Table \ref{table:exp_ravdess_moda2}. As demonstrated in the table, the multimodal network enhanced by our method usually achieves the best accuracy among all models, except in a few cases. Also, our Jacobian regularization usually improves model accuracy, except in the clean case. We notice that in some extreme cases, when the image modality is severely perturbed, the multimodal network can be even worse than the regular unimodal audio network. 
This, in turn, implies that multimodal fusion is not always the winner, especially when a large noise/attack is present. \begin{table}[!ht] \small \centering \caption{Accuracies of different models (\%) are evaluated on RAVDESS when image features are perturbed. Mean accuracies are reported after repeatedly running 20 times.} \vspace{2pt} \begin{tabular}{clcccccccc} \toprule UM / MM & ~~~~Model & Clean & $\omega_0 = 4.0$ & $\omega_{2,3} = 200,-4$ & $\omega_4 = 0.07$ & $\omega_5 = 0.008$\\ \midrule & 0: Aud-regT & $71.9$ & $71.9$ & $71.9$ & $71.9$ & $71.9$ \\ & 1: Img-regT & $82.5$ & $25.9$ & $21.3$ & $21.1$ & $10.7$ \\ UM & 2: Img-advT & $83.3$ & $21.1$ & $29.4$ & $40.0$ & $25.1$ \\ Nets & 3: Img-freT & $76.0$ & $15.3$ & $24.7$ & $50.1$ & $38.7$ \\ & 4: Img-staT & $83.7$ & $35.1$ & $23.7$ & $17.0$ & $10.9$ \\ & 5: Img-mixT & $85.1$ & $15.7$ & $27.3$ & $8.4$ & $8.5$ \\ \midrule & Mean-w/o & $89.8$ & $63.3$ & $40.8$ & $49.2$ & $21.8$ \\ MM & Mean-w/ & $88.5$ & $64.6$ & $40.4$ & $43.3$ & $16.3$ \\ (0, 1) & Stat-w/o & $89.9$ & $63.6$ & $41.1$ & $49.4$ & $21.8$ \\ & Stat-w/ (ours) & $88.7$ ({\color{red}{$\downarrow$}}) & $66.1$ ({\color{green}{$\uparrow$}}) & $43.4$ ({\color{green}{$\uparrow$}}) & $54.5$ ({\color{green}{$\uparrow$}}) & $23.5$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $89.2$ & $51.8$ & $62.0$ & $78.2$ & $74.0$ \\ MM & Mean-w/ & $87.4$ & $61.9$ & $63.1$ & $69.3$ & $56.4$ \\ (0, 2) & Stat-w/o & $89.2$ & $51.7$ & $62.3$ & $78.1$ & $74.2$ \\ & Stat-w/ (ours) & $87.7$ ({\color{red}{$\downarrow$}}) & $54.3$ ({\color{green}{$\uparrow$}}) & $64.6$ ({\color{green}{$\uparrow$}}) & $78.9$ ({\color{green}{$\uparrow$}}) & $75.1$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $86.9$ & $37.5$ & $63.5$ & $81.3$ & $77.6$ \\ MM & Mean-w/ & $84.1$ & $40.1$ & $62.0$ & $74.6$ & $68.1$ \\ (0, 3) & Stat-w/o & $86.9$ & $37.6$ & $63.7$ & $\bm{81.1}$ & $77.7$ \\ & Stat-w/ (ours) & $84.8$ ({\color{red}{$\downarrow$}}) & $40.9$ ({\color{green}{$\uparrow$}}) & 
$\bm{66.5}$ ({\color{green}{$\uparrow$}}) & $80.6$ ({\color{red}{$\downarrow$}}) & $\bm{78.6}$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $89.6$ & $75.7$ & $52.8$ & $55.8$ & $32.0$ \\ MM & Mean-w/ & $86.7$ & $71.8$ & $53.7$ & $51.4$ & $19.8$ \\ (0, 4) & Stat-w/o & $\bm{89.5}$ & $76.3$ & $52.9$ & $55.7$ & $31.7$ \\ & Stat-w/ (ours) & $87.8$ ({\color{red}{$\downarrow$}}) & $\bm{76.4}$ ({\color{green}{$\uparrow$}}) & $56.2$ ({\color{green}{$\uparrow$}}) & $60.5$ ({\color{green}{$\uparrow$}}) & $35.3$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $83.0$ & $73.1$ & $72.0$ & $66.5$ & $53.8$\\ MM & Mean-w/ & $82.4$ & $69.5$ & $69.9$ & $66.3$ & $33.2$\\ (0, 5) & Stat-w/o & $82.9$ & $73.1$ & $72.1$ & $66.5$ & $54.0$\\ & Stat-w/ (ours) & $80.3$ ({\color{red}{$\downarrow$}}) & $73.7$ ({\color{green}{$\uparrow$}}) & $73.3$ ({\color{green}{$\uparrow$}}) & $69.4$ ({\color{green}{$\uparrow$}}) & $58.8$ ({\color{green}{$\uparrow$}}) \\ \bottomrule \end{tabular} \label{table:exp_ravdess_moda2} \end{table} \subsection{Extra Results on VGGSound} \textbf{Impact of hyper-parameters}\quad We also plot model accuracy versus the magnitude of noise on the video modality for different $\gamma$ values in the VGGSound experiment in Figure \ref{fig:ablation_diff_gamma_vgg}. \vspace{6pt} \noindent\textbf{Unknown Noise on Single Modality}\quad To capture another setting, here we consider that the video modality (or audio modality) is corrupted but we do not know which one; hence, both modalities are trained with robust add-ons, with results reported in Tables \ref{table:exp_vggsound_moda14} and \ref{table:exp_vggsound_moda13}, respectively. We emphasize that this setting is valid because, in real-world applications, we sometimes do not know which modality is corrupted. 
As shown in these tables, the multimodal networks fused by our method could usually achieve higher accuracies in most cases, except when both unimodal networks are learned with free-m training \footnote{We neglect the cases when networks are tested on clean data, since it is possible that a robust network is less accurate on clean data as suggested by \cite{robust_at_odds}.}. Moreover, the fact that our Jacobian regularized multimodal network with a corrupted modality still outperforms the unimodal network with clean data in all cases demonstrates the positive effect of an extra modality. \label{appendix:extra_on_vgg} \begin{table}[!ht] \small \centering \caption{Accuracies of different models (\%) are evaluated on VGGSound when video features are perturbed. Mean accuracies are reported after repeatedly running 5 times.} \vspace{2pt} \begin{tabular}{clcccccccc} \toprule UM / MM & ~~~~Model & Clean & $\omega_0 = 1.5$ & $\omega_1 = 6$ & $\omega_4 = 0.03$ & $\omega_5 = 0.001$\\ \midrule & 0: Img-regT & $27.4$ & $5.78$ & $27.4$ & $9.5$ & $9.0$ \\ & 1: Aud-regT & $54.4$ & $15.0$ & $49.8$ & $23.0$ & $19.9$ \\ & 2: Img-advT & $27.5$ & $5.3$ & $27.4$ & $10.7$ & $10.3$\\ & 3: Aud-advT & $54.8$ & $19.4$ & $50.6$ & $41.8$ & $48.5$\\ UM & 4: Img-freT & $22.2$ & $4.0$ & $22.2$ & $19.4$ & $20.9$\\ Nets & 5: Aud-freT & $49.2$ & $32.4$ & $46.6$ & $48.2$ & $49.3$\\ & 6: Img-staT & $27.0$ & $6.9$ & $26.9$ & $10.5$ & $9.6$ \\ & 7: Aud-staT & $55.2$ & $18.8$ & $50.0$ & $22.4$ & $20.1$ \\ & 8: Img-mixT & $27.2$ & $8.4$ & $27.1$ & $7.3$ & $7.2$\\ & 9: Aud-mixT & $56.3$ & $1.57$ & $50.1$ & $16.9$ & $11.7$ \\ \midrule & Mean-w/o & $57.7$ & $45.8$ & $57.6$ & $35.0$ & $25.7$ \\ MM & Mean-w/ & $53.9$ & $48.6$ & $53.8$ & $37.9$ & $20.5$\\ (0, 1) & Stat-w/o & $58.5$ & $46.0$ & $58.4$ & $35.3$ & $26.0$ \\ & Stat-w/ (ours) & $60.1$ ({\color{green}{$\uparrow$}}) & $51.2$ ({\color{green}{$\uparrow$}}) & $60.0$ ({\color{green}{$\uparrow$}}) & $39.7$ ({\color{green}{$\uparrow$}}) & $28.5$ 
({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $59.1$ & $45.2$ & $59.9$ & $36.6$ & $26.9$\\ MM & Mean-w/ & $54.6$ & $48.6$ & $54.6$ & $37.6$ & $21.3$\\ (2, 3) & Stat-w/o & $59.6$ & $45.4$ & $59.6$ & $37.1$ & $27.1$ \\ & Stat-w/ (ours) & $\bm{61.0}$ ({\color{green}{$\uparrow$}}) & $50.0$ ({\color{green}{$\uparrow$}}) & $\bm{61.0}$ ({\color{green}{$\uparrow$}}) & $40.9$ ({\color{green}{$\uparrow$}}) & $29.7$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $54.6$ & $42.2$ & $52.5$ & $54.5$ & $54.6$\\ MM & Mean-w/ & $51.4$ & $37.6$ & $49.1$ & $51.0$ & $51.4$\\ (4, 5) & Stat-w/o & $56.4$ & $43.7$& $54.4$ & $\bm{56.3}$& $\bm{56.4}$ \\ & Stat-w/ (ours) & $45.1$ ({\color{red}{$\downarrow$}}) & $38.6$ ({\color{red}{$\downarrow$}}) & $43.9$ ({\color{red}{$\downarrow$}}) & $44.3$ ({\color{red}{$\downarrow$}}) & $44.7$ ({\color{red}{$\downarrow$}}) \\ \hline & Mean-w/o & $57.5$ & $47.6$ & $57.4$ & $37.3$ & $30.8$\\ MM & Mean-w/ & $53.5$ & $49.4$ & $53.4$ & $37.1$ & $23.0$\\ (6, 7) & Stat-w/o & $58.2$ & $48.0$ & $58.1$ & $37.7$ & $31.1$ \\ & Stat-w/ (ours) & $60.0$({\color{green}{$\uparrow$}}) & $52.2$({\color{green}{$\uparrow$}}) & $59.9$ ({\color{green}{$\uparrow$}}) & $41.3$ ({\color{green}{$\uparrow$}}) & $34.4$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $57.6$ & $48.1$ & $57.5$ & $42.8$ & $33.6$\\ MM & Mean-w/ & $57.8$ & $52.2$ & $57.8$ & $50.9$ & $33.6$\\ (8, 9) & Stat-w/o & $59.7$ & $50.1$ & $59.5$ & $44.2$ & $34.6$ \\ & Stat-w/ (ours) & $60.4$ ({\color{green}{$\uparrow$}}) & $57.4$ ({\color{green}{$\uparrow$}}) & $60.3$ ({\color{green}{$\uparrow$}}) & $54.3$ ({\color{green}{$\uparrow$}}) & $49.9$ ({\color{green}{$\uparrow$}}) \\ \bottomrule \end{tabular} \label{table:exp_vggsound_moda14} \end{table} \begin{figure}[!ht] \center \includegraphics[width=0.345\linewidth]{figures/VGG01G.pdf} \hspace{-5mm} \includegraphics[width=0.345\linewidth]{figures/VGG05G.pdf} \hspace{-5mm} \includegraphics[width=0.345\linewidth]{figures/VGG10G.pdf} 
\caption{Accuracies of multimodal networks obtained by statistically fusing a vanilla image network and audio network with $\gamma=0.1$ (the 1-st column), $\gamma=0.5$ (the 2-nd column), and $\gamma=0.99$ (the 3-rd column). Mean and Std of 5 repeated experiments are shown by the solid lines and the shaded regions, respectively.} \label{fig:ablation_diff_gamma_vgg} \end{figure} \begin{table}[!ht] \small \centering \caption{Accuracies of different models (\%) are evaluated on VGGSound when audio features are perturbed. Mean accuracies are reported after repeatedly running 5 times.} \vspace{2pt} \begin{tabular}{clcccccccc} \toprule UM / MM & ~~~~Model & Clean & $\omega_0 = 1.5$ & $\omega_1 = 6$ & $\omega_4 = 0.03$ & $\omega_5 = 0.001$\\ \midrule & 0: Img-regT & $27.4$ & $5.78$ & $27.4$ & $9.5$ & $9.0$ \\ & 1: Aud-regT & $54.4$ & $15.0$ & $49.8$ & $23.0$ & $19.9$ \\ & 2: Img-advT & $27.5$ & $5.3$ & $27.4$ & $10.7$ & $10.3$\\ & 3: Aud-advT & $54.8$ & $19.4$ & $50.6$ & $41.8$ & $48.5$\\ UM & 4: Img-freT & $22.2$ & $4.0$ & $22.2$ & $19.4$ & $20.9$\\ Nets & 5: Aud-freT & $49.2$ & $32.4$ & $46.6$ & $48.2$ & $49.3$\\ & 6: Img-staT & $27.0$ & $6.9$ & $26.9$ & $10.5$ & $9.6$ \\ & 7: Aud-staT & $55.2$ & $18.8$ & $50.0$ & $22.4$ & $20.1$ \\ & 8: Img-mixT & $27.2$ & $8.4$ & $27.1$ & $7.3$ & $7.2$\\ & 9: Aud-mixT & $56.3$ & $1.57$ & $50.1$ & $16.9$ & $11.7$ \\ \midrule & Mean-w/o & $57.7$ & $29.5$ & $55.5$ & $36.2$ & $32.7$ \\ MM & Mean-w/ & $53.9$ & $23.4$ & $50.8$ & $29.8$ & $25.2$ \\ (0, 1) & Stat-w/o & $58.5$ & $30.5$ & $56.1$ & $36.6$ & $32.9$ \\ & Stat-w/ (ours) & $\bm{60.0}$({\color{green}{$\uparrow$}}) & $31.3$ ({\color{green}{$\uparrow$}}) & $52.6$ ({\color{red}{$\downarrow$}}) & $38.9$ ({\color{green}{$\uparrow$}}) & $35.5$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $59.1$ & $37.5$ & $56.7$ & $53.4$ & $55.9$\\ MM & Mean-w/ & $54.8$ & $29.7$ & $52.0$ & $43.7$ & $48.5$\\ (2, 3) & Stat-w/o & $59.6$ & $38.0$ & $57.1$ & $54.2$ & $56.5$ \\ & Stat-w/ (ours) & $55.4$ 
({\color{red}{$\downarrow$}}) & $38.5$ ({\color{green}{$\uparrow$}}) & $53.2$ ({\color{red}{$\downarrow$}}) & $\bm{56.3}$ ({\color{green}{$\uparrow$}}) & $\bm{57.9}$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $54.6$ & $39.2$ & $54.6$ & $54.6$ & $54.6$\\ MM & Mean-w/ & $51.4$ & $\bm{45.4}$ & $51.3$ & $51.0$ & $51.5$\\ (4, 5) & Stat-w/o & $56.5$ & $39.2$ & $\bm{58.8}$ & $55.8$& $55.4$ \\ & Stat-w/ (ours) & $46.2$ ({\color{red}{$\downarrow$}}) & $35.5$ ({\color{red}{$\downarrow$}}) & $57.3$ ({\color{red}{$\downarrow$}}) & $55.5$ ({\color{red}{$\downarrow$}}) & $47.3$ ({\color{red}{$\downarrow$}}) \\ \hline & Mean-w/o & $57.5$ & $36.2$ & $55.7$ & $36.7$ & $35.2$\\ MM & Mean-w/ & $53.3$ & $36.3$ & $52.0$ & $28.5$ & $37.1$\\ (6, 7) & Stat-w/o & $58.2$ & $35.7$ & $55.0$ & $36.9$ & $31.0$ \\ & Stat-w/ (ours) & $54.4$ ({\color{green}{$\uparrow$}}) & $27.7$ ({\color{green}{$\uparrow$}}) & $50.2$ ({\color{green}{$\uparrow$}}) & $38.8$ ({\color{green}{$\uparrow$}}) & $25.4$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $57.5$ & $4.1$ & $54.5$ & $21.1$ & $31.2$\\ MM & Mean-w/ & $57.8$ & $3.5$ & $53.2$ & $17.8$ & $26.9$\\ (8, 9) & Stat-w/o & $53.8$ & $3.6$ & $56.6$ & $21.2$ & $32.6$ \\ & Stat-w/ (ours) & $59.6$ ({\color{green}{$\uparrow$}}) & $5.9$ ({\color{green}{$\uparrow$}}) & $52.7$ ({\color{red}{$\downarrow$}}) & $26.4$ ({\color{green}{$\uparrow$}}) & $33.3$ ({\color{green}{$\uparrow$}}) \\ \bottomrule \end{tabular} \label{table:exp_vggsound_moda13} \end{table} \section{Missing Deduction} \subsection{Equivalent Form and Derivative of Statistical Fusion}\label{appendix:proof_of_jacobian_stat_fusion} In this subsection, we first prove Lemma 1 shown in the main text. For conciseness, we denote the $k$-th element of $\mathbf{p}_A$ as ${p}_{A,k}$. Similar notations are adopted for $\mathbf{p}_B$, $\mathbf{z}_A$, $\mathbf{z}_B$, and $\ln\mathbf{freq}$. 
The $k$-th element of the left-hand side (LHS) in Lemma 1 can be simplified as: \begin{equation}\label{eq:lhs_rhs}\tag{S1} \begin{aligned} \frac{{p}_{A,k}{p}_{B,k}/{freq}_k}{\sum_{j=1}^K {p}_{A,j}{p}_{B,j}/{freq}_j} &=\frac{\frac{\exp({z}_{A,k})}{T_A}\frac{\exp({z}_{B,k})}{T_B}/{freq}_k}{\sum_{j=1}^K \frac{\exp({z}_{A,j})}{T_A}\frac{\exp({z}_{B,j})}{T_B}/{freq}_j}\\ &=\frac{{\exp({z}_{A,k})\exp({z}_{B,k})}/{freq}_k}{\sum_{j=1}^K \exp({z}_{A,j})\exp({z}_{B,j})/{freq}_j}\\ &=\frac{\exp({z}_{A,k}+{z}_{B,k}-\ln{freq}_k)}{\sum_{j=1}^K \exp({z}_{A,j}+{z}_{B,j}-\ln{freq}_j)}\\ \end{aligned} \end{equation} which is exactly the $k$-th element of the right-hand side (RHS) in Lemma 1 by definition. Note that in the first line of (\ref{eq:lhs_rhs}), we use the definition $\mathbf{p}_A=\bm\sigma_1(\mathbf{z}_A)$, $\mathbf{p}_B=\bm\sigma_1(\mathbf{z}_B)$, and have introduced two normalization constants $T_A=||\exp(\mathbf{z}_A)||_1$ and $T_B=||\exp(\mathbf{z}_B)||_1$. As a result, an equivalent form of (3) is $\mathbf{p}^\prime=\bm\sigma_1(\mathbf{W}_a\mathbf{z}_A+\mathbf{W}_b\mathbf{z}_B-\ln\mathbf{freq})$. Based on the chain rule and $\mathbf{z}_A=\mathbf{W}_A\mathbf{h}_A+\mathbf{b}_A$, we have: \begin{equation}\tag{S2} \frac{\partial \mathbf{p}^\prime}{\partial \mathbf{h}_A}=\frac{\partial \mathbf{p}^\prime}{\partial \mathbf{z}_A}\frac{\partial \mathbf{z}_A}{\partial \mathbf{h}_A} = \mathbf{J}^\prime\mathbf{W}_a\mathbf{W}_A=[\mathbf{p}^\prime\mathbf{p}^{\prime,T}-\text{Diag}(\mathbf{p}^\prime)]\mathbf{W}_a\mathbf{W}_A \end{equation} where we have used the Jacobian matrix of the Softmax function to calculate $\mathbf{J}^\prime$. We emphasize that $\mathbf{J}^\prime$ is symmetric.
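As a sanity check, Lemma 1 can also be verified numerically. The snippet below is an illustrative sketch (not part of the original implementation; the logits and class-frequency vector are randomly generated) confirming that the renormalized product of unimodal posteriors coincides with the softmax of the frequency-shifted logit sum:

```python
import numpy as np

def softmax(z):
    # sigma_1 in the text; subtracting max(z) does not change the output
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
K = 5
z_A = rng.normal(size=K)               # unimodal logits z_A
z_B = rng.normal(size=K)               # unimodal logits z_B
freq = rng.uniform(0.05, 0.4, size=K)
freq /= freq.sum()                     # class-frequency vector

p_A, p_B = softmax(z_A), softmax(z_B)

# LHS of Lemma 1: renormalized product of posteriors divided by frequency
lhs = p_A * p_B / freq
lhs /= lhs.sum()

# RHS of Lemma 1: softmax of the summed logits shifted by ln(freq)
rhs = softmax(z_A + z_B - np.log(freq))

assert np.allclose(lhs, rhs)
```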
\subsection{Solution of Regularized Jacobian Loss}\label{appendix:solution_of_jaco_loss} Because the Frobenius norm is convex and a weighted summation with positive coefficients preserves convexity, the relaxed optimization problem shown in Step 5 of Algorithm 1 is convex with respect to $\mathbf{W}_a$. This implies that any local minimum is also the global minimum. We calculate the derivative of the relaxed optimization problem with respect to $\mathbf{W}_a$ and set it to zero, yielding: \begin{equation}\label{eq:slv}\tag{S3} \begin{aligned} &(\frac{1}{\gamma}-1)\mathbf{J}^{(t),2}\mathbf{W}_a(\mathbf{W}_A\mathbf{W}_A^T) + \mathbf{W}_a = \mathbf{I}\\ &\quad \Rightarrow \mathbf{A}\mathbf{W}_a+\mathbf{W}_a\mathbf{B}=\mathbf{B} \end{aligned} \end{equation} where for simplicity we have defined $\mathbf{A}=(1/\gamma-1)\mathbf{J}^{(t),2}\in\mathbb{R}^{K\times K}$ and $\mathbf{B}=(\mathbf{W}_A\mathbf{W}_A^T)^{-1}\in\mathbb{R}^{K\times K}$. (\ref{eq:slv}) is known as a Sylvester equation\footnote{A general Sylvester equation has the form $\mathbf{A}\mathbf{X}+\mathbf{X}\mathbf{B}=\mathbf{C}$, where $\{\mathbf{A},\mathbf{B},\mathbf{C}\}$ are all known and $\mathbf{X}$ is the unknown to be solved for.}, and it has been proven that if $\mathbf{A}$ and $-\mathbf{B}$ do not share any eigenvalues, then the Sylvester equation has a unique solution. In our case, since $\mathbf{A}$ is positive semi-definite (PSD) and $-\mathbf{B}$ is negative semi-definite, the only possible overlapping eigenvalue is $0$. On one hand, $0$ is surely an eigenvalue of $\mathbf{A}$ since $\mathbf{J}\mathbf{u}=0\mathbf{u}$, where $\mathbf{u}$ is a column vector with all elements equal to $1$. On the other hand, it has been argued that a well-generalized neural network normally does not have singular weight matrices \cite{martin_weight_nonsingular}. Thus, $0$ is almost surely not an overlapping eigenvalue, and a unique solution of (\ref{eq:slv}) is guaranteed.
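In practice, (\ref{eq:slv}) can be handed to an off-the-shelf Sylvester solver. The sketch below is illustrative code (not the authors' implementation; the dimensions, $\gamma$, and the random prediction $\mathbf{p}$ are arbitrary choices) that builds $\mathbf{A}$ and $\mathbf{B}$ from a softmax Jacobian and a random last-layer weight, solves for $\mathbf{W}_a$ with \texttt{scipy.linalg.solve\_sylvester}, and checks the residual together with the Frobenius-norm bound proved in Theorem 1 below:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(1)
K, d, gamma = 4, 16, 0.5            # classes, feature width, trade-off (arbitrary)

p = np.exp(rng.normal(size=K)); p /= p.sum()
J = np.outer(p, p) - np.diag(p)     # softmax Jacobian J' as written in (S2); symmetric
W_A = rng.normal(size=(K, d))       # last-layer weight of the unimodal branch

A = (1.0 / gamma - 1.0) * (J @ J)   # A = (1/gamma - 1) J^2, PSD
B = np.linalg.inv(W_A @ W_A.T)      # B = (W_A W_A^T)^{-1}, SPD for generic W_A

# solve_sylvester solves A X + X B = Q; with Q = B this is exactly (S3)
W_a = solve_sylvester(A, B, B)

residual = np.linalg.norm(A @ W_a + W_a @ B - B)
assert residual < 1e-8

# sanity check against the bound of Theorem 1: ||J W_a W_A||_F^2 <= gamma*K / (2(1-gamma))
fro2 = np.linalg.norm(J @ W_a @ W_A) ** 2
assert fro2 <= gamma * K / (2 * (1 - gamma)) + 1e-8
```

Note that, consistent with the ill-conditioning remark below, taking `gamma` toward $0$ blows up `A` and degrades the solve.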
As demonstrated by \cite{solve_sylvester1}, the matrix equation (\ref{eq:slv}) can be converted into the inversion of a $K\times K$ matrix. Since $K$ is not large in a typical classification task, the runtime of solving the Sylvester equation is affordable in this context. Furthermore, as shown in the above equation, when $\gamma\to 0$ the matrix $\mathbf{A}$ tends to infinity, leading to an ill-conditioned matrix equation. This, in turn, implies that the second term in the loss shown in (5) guarantees numerical stability. \subsection{Proof of Theorem 1}\label{appendix:proof_of_them_1} In the last iteration $t=t_{\text{max}}\to\infty$, noticing that $\mathbf{W}_a$ satisfies (\ref{eq:slv}), we have: \begin{equation}\tag{S4} \begin{aligned} ||\mathbf{J}^{(t_{\text{max}})}\mathbf{W}_a\mathbf{W}_A||^2_F &=\text{Tr}[\mathbf{J}^{(t_{\text{max}})}\mathbf{W}_a\mathbf{W}_A\mathbf{W}_A^T\mathbf{W}_a^T\mathbf{J}^{(t_{\text{max}})}]\\ &=\text{Tr}[\mathbf{J}^{(t_{\text{max}}),2}\mathbf{W}_a(\mathbf{W}_A\mathbf{W}_A^T)\mathbf{W}_a^T]\\ &=(\frac{1}{\gamma}-1)^{-1}\text{Tr}[\mathbf{A}\mathbf{W}_a\mathbf{B}^{-1}\mathbf{W}_a^T]\\ &=(\frac{1}{\gamma}-1)^{-1}\text{Tr}[(\mathbf{B}-\mathbf{W}_a\mathbf{B})\mathbf{B}^{-1}\mathbf{W}_a^T]\\ &=(\frac{1}{\gamma}-1)^{-1}\text{Tr}[\mathbf{W}_a-\mathbf{W}_a\mathbf{W}_a^T]\\ \end{aligned} \end{equation} and: \begin{equation}\tag{S5} \begin{aligned} ||\mathbf{W}_a-\mathbf{I}||^2_F &=\text{Tr}[\mathbf{W}_a\mathbf{W}_a^T-2\mathbf{W}_a+\mathbf{I}]\\ \end{aligned} \end{equation} Thus we have: \begin{equation}\tag{S6} \begin{aligned} ||\mathbf{J}^{(t_{\text{max}})}\mathbf{W}_a\mathbf{W}_A||^2_F & \leq \frac{1}{1-\gamma}[(1-\gamma)||\mathbf{J}^{(t_{\text{max}})}\mathbf{W}_a\mathbf{W}_A||^2_F+\frac{\gamma}{2}||\mathbf{W}_a-\mathbf{I}||^2_F]\\ & = \frac{1}{1-\gamma}\left\{\gamma\text{Tr}[\mathbf{W}_a-\mathbf{W}_a\mathbf{W}_a^T]+\frac{\gamma}{2}\text{Tr}[\mathbf{W}_a\mathbf{W}_a^T-2\mathbf{W}_a^T+\mathbf{I}]\right\}\\
&=\frac{\gamma}{2(1-\gamma)}\text{Tr}[\mathbf{I}-\mathbf{W}_a\mathbf{W}_a^T]\\ &\leq\frac{\gamma}{2(1-\gamma)}\text{Tr}[\mathbf{I}]=\frac{\gamma K}{2(1-\gamma)}\\ \end{aligned} \end{equation} where we have taken advantage of the fact that $\mathbf{W}_a\mathbf{W}_a^T$ is a positive semi-definite matrix, and thus its trace is no less than zero. Based on the first-order Taylor expansion, we have: \begin{equation}\tag{S7} \begin{aligned} \mathbf{p}^{\prime,\text{noise}}-\mathbf{p}^\prime &\approx\frac{\partial \mathbf{p}^\prime}{\partial \mathbf{h}_A}(\mathbf{h}_A^{\text{noise}}-\mathbf{h}_A)\\ &=\frac{\partial \mathbf{p}^{(t_{\text{max}}+1)}}{\partial \mathbf{h}_A}(\mathbf{h}_A^{\text{noise}}-\mathbf{h}_A)\\ &\approx\mathbf{J}^{(t_\text{max})}\mathbf{W}_a\mathbf{W}_A(\mathbf{h}_A^{\text{noise}}-\mathbf{h}_A)\\ \end{aligned} \end{equation} Thus we have: \begin{equation}\tag{S8} \begin{aligned} ||\mathbf{p}^{\prime,\text{noise}}-\mathbf{p}^\prime|| &\leq ||\mathbf{J}^{(t_\text{max})}\mathbf{W}_a\mathbf{W}_A||_F\cdot||\mathbf{h}_A^{\text{noise}}-\mathbf{h}_A||\\ &\leq \sqrt{\frac{\gamma K}{2(1-\gamma)}} \cdot l ||\mathbf{x}_A^{\text{noise}}-\mathbf{x}_A||\\ &= l\sqrt{\frac{\gamma K}{2(1-\gamma)}}||\bm\epsilon||\\ \end{aligned} \end{equation} where we have utilized the $l$-Lipschitz continuity of $\mathbf{f}_A$. \subsection{Unimodal 2D Case}\label{appendix:proof_of_2d_unimodal} Consider an illustrative example when $K=2$ (i.e., unimodal binary classification). In this case, the final prediction is two-dimensional: $\mathbf{p}=[p_1\,p_2]^T$. We denote the raw logit after adding the $\mathbf{W}_x$ matrix by $\mathbf{z}^\prime=\mathbf{W}_x\mathbf{z}$, with entries ${z}_{1}^\prime$ and ${z}_{2}^\prime$. Let us begin by explicitly writing out the first line of (\ref{eq:slv}).
After some simplification, we obtain: \begin{equation}\tag{S9} \left[\begin{array}{cc} \kappa & -\kappa \\ -\kappa & \kappa \end{array}\right]\left[\begin{array}{cc} \rho_1 & \rho_2\\ \rho_3 & \rho_4 \\ \end{array}\right] \left[\begin{array}{cc} a & b \\ b & c \end{array}\right] +\left[\begin{array}{cc} \rho_1 & \rho_2\\ \rho_3 & \rho_4 \\ \end{array}\right]=\mathbf{I} \end{equation} where $\kappa=1/\gamma-1$. Here we have explicitly written out the elements of $\mathbf{W}\mathbf{W}^T$ and merged some terms of $\mathbf{J}^{\prime,2}$ into $a=p_1p_2(w_{11}^2+w_{12}^2)$, $b=p_1p_2(w_{21}w_{11}+w_{12}w_{22})$, and $c=p_1p_2(w_{21}^2+w_{22}^2)$. The unknowns occur in $\mathbf{W}_x$: \begin{equation}\tag{S10} \mathbf{W}_x = \left[\begin{array}{cc} \rho_1 & \rho_2\\ \rho_3 & \rho_4 \\ \end{array}\right] \end{equation} After some algebra, we can solve for all unknowns via the following block matrix inversion: \begin{equation}\tag{S11} \left[\begin{array}{c} \rho_1\\ \rho_2\\ \rho_3\\ \rho_4\\ \end{array}\right]=\left[\begin{array}{cc} \mathbf{G} & \mathbf{H} \\ \mathbf{H} & \mathbf{G} \\ \end{array}\right]^{-1}\cdot\left[\begin{array}{c} 1\\ 0 \\ 0\\ 1\\ \end{array}\right] \end{equation} where for simplicity, we have defined: \begin{equation}\tag{S12} \mathbf{G}=\left[\begin{array}{cc} 1+ a\kappa & b\kappa \\ b\kappa & 1+ c\kappa \\ \end{array}\right],\quad \mathbf{H}=\left[\begin{array}{cc} -a\kappa & -b\kappa \\ -b\kappa & -c\kappa \\ \end{array}\right] \end{equation} Noticing that $\mathbf{z}^\prime=\mathbf{W}_x\mathbf{z}$, we can prove that: \begin{equation}\tag{S13} ({z}_{1}^\prime-{z}_{2}^\prime)\cdot({z}_{1}-{z}_{2})=\mathbf{z}^T \left[\begin{array}{cc} \rho_1 - \rho_3 & -(\rho_1 - \rho_3)\\ \rho_2 - \rho_4 & -(\rho_2 - \rho_4) \end{array}\right]\mathbf{z} \end{equation} It is easy to verify that the eigenvalues of the matrix in the middle are $0$ and $\rho_1+\rho_4-\rho_2-\rho_3$.
In other words, if we can prove that $\rho_1+\rho_4-\rho_2-\rho_3$ is no less than zero, then the matrix in the middle is positive semi-definite, which implies that the order-preserving property holds: if ${z}_{1}>{z}_{2}$, then ${z}_{1}^\prime>{z}_{2}^\prime$, and vice versa. This order-preserving property is equivalent to saying that adding the $\mathbf{W}_x$ matrix has no effect in the unimodal binary classification problem, since the prediction does not change for any input. Now let us prove $\rho_1+\rho_4-\rho_2-\rho_3\geq 0$. After considerable algebra, we obtain: \begin{equation}\label{eq:sum_of_rho}\tag{S14} \begin{aligned} \rho_1+\rho_4-\rho_2-\rho_3 &=\left[\begin{array}{cccc} 1 & -1 & -1 & 1 \end{array}\right] \left[\begin{array}{c} \rho_1\\ \rho_2\\ \rho_3\\ \rho_4\\ \end{array}\right]\\ &=\left[\begin{array}{cccc} 1 & -1 & -1 & 1 \end{array}\right] \left[\begin{array}{cc} \mathbf{G} & \mathbf{H} \\ \mathbf{H} & \mathbf{G} \\ \end{array}\right]^{-1}\cdot\left[\begin{array}{c} 1\\ 0 \\ 0\\ 1\\ \end{array}\right]\\ &=\frac{2(a+2b+c)\kappa+2}{4(ac-b^2)\kappa^2+2(a+c)\kappa+1} \end{aligned} \end{equation} Based on the definitions of $\{a,b,c\}$, by using the Cauchy--Schwarz inequality and completing the square, we can prove: $ac-b^2\geq 0$, $a+c\geq 0$, $a+2b+c\geq 0$, and $a-2b+c\geq 0$. Treat $\kappa\in(0,\infty)$ as a variable. The derivative of (\ref{eq:sum_of_rho}) with respect to $\kappa$ is a fraction. Its denominator is always larger than zero, while the numerator is a quadratic polynomial: \begin{equation}\tag{S15} \underbrace{8(b^2-ac)(a+2b+c)}_{\leq 0}\kappa^2+\underbrace{16(b^2-ac)}_{\leq 0}\kappa-2\underbrace{(a-2b+c)}_{\geq 0} \end{equation} Thus $\rho_1+\rho_4-\rho_2-\rho_3$ decreases monotonically as $\kappa$ increases on $(0,\infty)$, and it approaches its infimum $0$ as $\kappa\to\infty$. This completes the proof.
\subsection{Unimodal High-Dimensional Case}\label{appendix:proof_unimodal_high_d} Without loss of generality, we consider the case when $\mathbf{W}\mathbf{W}^T=\mathbf{I}$. We emphasize that this is a mild assumption: when $\mathbf{W}$ does not satisfy this condition, we can perform the SVD $\mathbf{W}=\mathbf{U}\bm{\Sigma}\mathbf{V}$ and convert the FC layer $\mathbf{Wh}+\mathbf{b}$ into three sequential linear layers: $\mathbf{h}\to\mathbf{Vh}\to\bm{\Sigma}\mathbf{V}\mathbf{h}\to\mathbf{U}\bm{\Sigma}\mathbf{V}\mathbf{h}+\mathbf{b}$, where the weight matrix in the last layer now satisfies $\mathbf{U}\mathbf{U}^T=\mathbf{I}$. Under this assumption, $\mathbf{B}=(\mathbf{W}\mathbf{W}^T)^{-1}=\mathbf{I}$, and (\ref{eq:slv}) can be further simplified as: \begin{equation}\label{eq:Wx}\tag{S16} \mathbf{W}_x=[(\frac{1}{\gamma}-1)\mathbf{J}^{\prime,2}+\mathbf{I}]^{-1}=(\kappa\mathbf{J}^{\prime,2}+\mathbf{I})^{-1} \end{equation} where $\mathbf{J}^{\prime}=\mathbf{J}^{(0)}$ is known, and for simplicity, we have denoted $\kappa=1/\gamma-1$. From this expression, it is clear that the second term of (5) guarantees numerical stability: $\mathbf{J}^{\prime,2}$ is not invertible, so without the identity matrix the inverse in (\ref{eq:Wx}) would not exist. Before moving on, we define a helper vector $\mathbf{e}_{ij}\in\mathbb{R}^K$, whose $i$-th and $j$-th entries equal $1$ and $-1$, respectively, and all other entries equal $0$.
Then, using the relation $\mathbf{z}^\prime=\mathbf{W}_x\mathbf{z}$, we have: \begin{equation}\tag{S17} \begin{aligned} (z_i^\prime-z_j^\prime)(z_i-z_j) &= (\mathbf{z}^{\prime,T}\mathbf{e}_{ij})(\mathbf{e}_{ij}^T\mathbf{z})\\ &= \mathbf{z}^{\prime,T}\mathbf{e}_{ij}\mathbf{e}_{ij}^T\mathbf{W}_x^{-1}\mathbf{z}^\prime\\ \end{aligned} \end{equation} This implies that if $\mathbf{e}_{ij}\mathbf{e}_{ij}^T\mathbf{W}_x^{-1}$ is a positive semi-definite (PSD) matrix for arbitrary $i\neq j$, then the order-preserving property holds (i.e., if ${z}_i>{z}_j$, then ${z}_i^\prime>{z}_j^\prime$). Since $\mathbf{e}_{ij}\mathbf{e}_{ij}^T\mathbf{W}_x^{-1}$ is not symmetric, examining whether it is PSD requires considering the sum of the matrix and its transpose. Namely, $\mathbf{e}_{ij}\mathbf{e}_{ij}^T\mathbf{W}_x^{-1}$ is PSD if and only if the eigenvalues of $[\mathbf{e}_{ij}\mathbf{e}_{ij}^T\mathbf{W}_x^{-1}+(\mathbf{e}_{ij}\mathbf{e}_{ij}^T\mathbf{W}_x^{-1})^T]$ are all non-negative.
Noticing that $\mathbf{W}_x$ and $\mathbf{W}_x^{-1}$ are symmetric, as shown in (\ref{eq:Wx}), we have \begin{equation}\tag{S18} \begin{aligned} \mathbf{e}_{ij}\mathbf{e}_{ij}^T\mathbf{W}_x^{-1}+(\mathbf{e}_{ij}\mathbf{e}_{ij}^T\mathbf{W}_x^{-1})^T &=\mathbf{e}_{ij}\mathbf{e}_{ij}^T\mathbf{W}_x^{-1}+\mathbf{W}_x^{-1}\mathbf{e}_{ij}\mathbf{e}_{ij}^T\\ &=\mathbf{e}_{ij}\mathbf{e}_{ij}^T(\kappa\mathbf{J}^{\prime,2}+\mathbf{I})+(\kappa\mathbf{J}^{\prime,2}+\mathbf{I})\mathbf{e}_{ij}\mathbf{e}_{ij}^T\\ &=2\mathbf{e}_{ij}\mathbf{e}_{ij}^T+\kappa(\mathbf{e}_{ij}\mathbf{e}_{ij}^T\mathbf{J}^{\prime,2}+\mathbf{J}^{\prime,2}\mathbf{e}_{ij}\mathbf{e}_{ij}^T) \end{aligned} \end{equation} For simplicity, we denote the set of $\kappa$ values that make all eigenvalues of the above matrix non-negative as $\Gamma_{ij}=\{\kappa>0|\lambda_{min}(2\mathbf{e}_{ij}\mathbf{e}_{ij}^T+\kappa(\mathbf{e}_{ij}\mathbf{e}_{ij}^T\mathbf{J}^{\prime,2}+\mathbf{J}^{\prime,2}\mathbf{e}_{ij}\mathbf{e}_{ij}^T))\geq 0\}$, where $\lambda_{min}(\cdot)$ represents the minimum eigenvalue of a matrix. Then a necessary and sufficient condition for the order-preserving property is $\kappa\in\cap_{i\neq j}\Gamma_{ij}$. An immediate corollary is: if we want the final prediction to change after adding $\mathbf{W}_x$, then we must have $\frac{1}{\gamma}-1\notin\cap_{i\neq j}\Gamma_{ij}$. \section{Experiment Details and Ablation Study}\label{appendix:exp_details_ablation_study} \subsection{Definition of Omega}\label{appendix:def_of_omega} \begin{figure}[!htp] \centering \includegraphics[width=0.5\linewidth]{figures/avmnist.pdf} \caption{Image samples in AV-MNIST. Original images are shown in the first column, and their counterparts perturbed by Gaussian noise ($\omega_0=0.2$) are shown in the second column. The third ($\omega_3=-1$) and last ($\omega_3=2$) columns show the images perturbed by bias noise.} \label{fig:avmnist_noise} \end{figure} Take AV-MNIST as an example.
We have defined the following types of perturbations. \begin{itemize} \item \textbf{Gaussian noise}: Perturb every element $x$ in the image (or audio) feature by Gaussian noise in the form of $x^{noise}=(1+\epsilon)x$, where $\epsilon\sim N(0,\omega_0^2)$. \item \textbf{Missing entries}: For the audio modality, we randomly select $\omega_1$ consecutive columns (or rows) and set all elements therein to $0$. This corresponds to missing time frames (or frequency components). \item \textbf{Bias noise}: For the image modality, we randomly select an image patch of size $\omega_2\times\omega_2$, and every element $x$ therein is perturbed by a given amount: $x^{noise}=(1+\omega_3)x$. This corresponds to a change of illumination (see Figure \ref{fig:avmnist_noise}). \item \textbf{FGSM attack}: We use the Cleverhans library \cite{papernot2018cleverhans} to implement an FGSM attack on the input image/audio feature with the $l_{\infty}$ norm. The maximum overall norm variation is $\omega_4$ (e.g., $\omega_4=0.3$). \item \textbf{PGD attack}: Cleverhans is adopted to implement a PGD attack on the input image/audio feature with the $l_{\infty}$ norm. The maximum overall norm variation also equals $\omega_4$, that of each inner attack is $\omega_5$ (e.g., $\omega_5=0.01$), and the number of inner attacks is $\omega_6$. \end{itemize} Unless explicitly mentioned, $\omega_6$ is set to $20$. \subsection{Extra Results on RAVDESS}\label{appendix:extra_on_ravdess} \textbf{Impact of hyper-parameters}\quad Following Figure 4 of the main text\footnote{Unless explicitly stated otherwise (as in this sentence), all cross-referenced Tables and Figures refer to those in this Supplementary.}, we also plot model accuracy versus the magnitude of noise/attack with different $\gamma$ values in the emotion recognition experiment. The results are shown in Figure \ref{fig:ablation_diff_gamma_ravdess}.
The multimodal network with our Jacobian regularization invoked (i.e., the orange line) surpasses the network without it (i.e., the blue line) in almost all cases. \begin{figure}[!htb] \centering \includegraphics[width=1.0\linewidth]{figures/Picture4.pdf} \caption{Accuracies of multimodal networks obtained by statistically fusing a vanilla image network and audio network with $\gamma=0.1$ (the 1-st row), $\gamma=0.5$ (the 2-nd row), and $\gamma=0.9$ (the 3-rd row). Mean and Std of 20 repeated experiments are shown by the solid lines and the shaded regions, respectively. Note that the FGSM attack is deterministic, so there is almost no variance when repeating experiments.} \label{fig:ablation_diff_gamma_ravdess} \end{figure} \vspace{6pt} \noindent\textbf{Perturbation on the image modality}\quad Here we consider perturbations on the image modality. Similarly, we have deliberately enlarged the magnitude of the noise/attack, and the results are presented in Table \ref{table:exp_ravdess_moda2}. As demonstrated in the table, the multimodal network enhanced by our method usually achieves the best accuracy among all models, except in a few cases. Also, our Jacobian regularization could usually improve model accuracy, except in the clean case. We notice that in some extreme cases, when the image modality is severely perturbed, the multimodal network might be even worse than the regular unimodal audio network. This, in turn, implies that multimodal fusion is not always the winner, especially when a large noise/attack is present. \begin{table}[!ht] \small \centering \caption{Accuracies of different models (\%) are evaluated on RAVDESS when image features are perturbed.
Mean accuracies are reported after repeatedly running 20 times.} \vspace{2pt} \begin{tabular}{clcccccccc} \toprule UM / MM & ~~~~Model & Clean & $\omega_0 = 4.0$ & $\omega_{2,3} = 200,-4$ & $\omega_4 = 0.07$ & $\omega_5 = 0.008$\\ \midrule & 0: Aud-regT & $71.9$ & $71.9$ & $71.9$ & $71.9$ & $71.9$ \\ & 1: Img-regT & $82.5$ & $25.9$ & $21.3$ & $21.1$ & $10.7$ \\ UM & 2: Img-advT & $83.3$ & $21.1$ & $29.4$ & $40.0$ & $25.1$ \\ Nets & 3: Img-freT & $76.0$ & $15.3$ & $24.7$ & $50.1$ & $38.7$ \\ & 4: Img-staT & $83.7$ & $35.1$ & $23.7$ & $17.0$ & $10.9$ \\ & 5: Img-mixT & $85.1$ & $15.7$ & $27.3$ & $8.4$ & $8.5$ \\ \midrule & Mean-w/o & $89.8$ & $63.3$ & $40.8$ & $49.2$ & $21.8$ \\ MM & Mean-w/ & $88.5$ & $64.6$ & $40.4$ & $43.3$ & $16.3$ \\ (0, 1) & Stat-w/o & $89.9$ & $63.6$ & $41.1$ & $49.4$ & $21.8$ \\ & Stat-w/ (ours) & $88.7$ ({\color{red}{$\downarrow$}}) & $66.1$ ({\color{green}{$\uparrow$}}) & $43.4$ ({\color{green}{$\uparrow$}}) & $54.5$ ({\color{green}{$\uparrow$}}) & $23.5$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $89.2$ & $51.8$ & $62.0$ & $78.2$ & $74.0$ \\ MM & Mean-w/ & $87.4$ & $61.9$ & $63.1$ & $69.3$ & $56.4$ \\ (0, 2) & Stat-w/o & $89.2$ & $51.7$ & $62.3$ & $78.1$ & $74.2$ \\ & Stat-w/ (ours) & $87.7$ ({\color{red}{$\downarrow$}}) & $54.3$ ({\color{green}{$\uparrow$}}) & $64.6$ ({\color{green}{$\uparrow$}}) & $78.9$ ({\color{green}{$\uparrow$}}) & $75.1$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $86.9$ & $37.5$ & $63.5$ & $81.3$ & $77.6$ \\ MM & Mean-w/ & $84.1$ & $40.1$ & $62.0$ & $74.6$ & $68.1$ \\ (0, 3) & Stat-w/o & $86.9$ & $37.6$ & $63.7$ & $\bm{81.1}$ & $77.7$ \\ & Stat-w/ (ours) & $84.8$ ({\color{red}{$\downarrow$}}) & $40.9$ ({\color{green}{$\uparrow$}}) & $\bm{66.5}$ ({\color{green}{$\uparrow$}}) & $80.6$ ({\color{red}{$\downarrow$}}) & $\bm{78.6}$({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $89.6$ & $75.7$ & $52.8$ & $55.8$ & $32.0$ \\ MM & Mean-w/ & $86.7$ & $71.8$ & $53.7$ & $51.4$ & $19.8$ \\ (0, 4) & 
Stat-w/o & $\bm{89.5}$ & $76.3$ & $52.9$ & $55.7$ & $31.7$ \\ & Stat-w/ (ours) & $87.8$ ({\color{red}{$\downarrow$}}) & $\bm{76.4}$ ({\color{green}{$\uparrow$}}) & $56.2$ ({\color{green}{$\uparrow$}}) & $60.5$ ({\color{green}{$\uparrow$}}) & $35.3$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $83.0$ & $73.1$ & $72.0$ & $66.5$ & $53.8$\\ MM & Mean-w/ & $82.4$ & $69.5$ & $69.9$ & $66.3$ & $33.2$\\ (0, 5) & Stat-w/o & $82.9$ & $73.1$ & $72.1$ & $66.5$ & $54.0$\\ & Stat-w/ (ours) & $80.3$ ({\color{red}{$\downarrow$}}) & $73.7$ ({\color{green}{$\uparrow$}}) & $73.3$ ({\color{green}{$\uparrow$}}) & $69.4$ ({\color{green}{$\uparrow$}}) & $58.8$ ({\color{green}{$\uparrow$}}) \\ \bottomrule \end{tabular} \label{table:exp_ravdess_moda2} \end{table} \subsection{Extra Results on VGGSound} \textbf{Impact of hyper-parameters}\quad We also plot model accuracy versus the magnitude of noise on the video modality with different $\gamma$ values in the VGGSound experiment in Figure \ref{fig:ablation_diff_gamma_vgg}. \vspace{6pt} \noindent\textbf{Unknown Noise on Single Modality}\quad To capture another setting, here we consider that the video modality (or audio modality) is corrupted, and both modalities are trained with robust add-ons in Tables \ref{table:exp_vggsound_moda14} and \ref{table:exp_vggsound_moda13}, respectively. We emphasize that this setting is valid because in real-world applications, we sometimes do not know which modality is corrupted. As shown in these tables, the multimodal networks fused by our method achieve higher accuracies in most cases, except when both unimodal networks are learned with free-m training\footnote{We neglect the cases when networks are tested on clean data, since it is possible that a robust network is less accurate on clean data, as suggested by \cite{robust_at_odds}.}.
Moreover, the fact that our Jacobian regularized multimodal network with a corrupted modality still outperforms the unimodal network with clean data in all cases demonstrates the positive effect of an extra modality. \label{appendix:extra_on_vgg} \begin{table}[!ht] \small \centering \caption{Accuracies of different models (\%) are evaluated on VGGSound when video features are perturbed. Mean accuracies are reported after repeatedly running 5 times.} \vspace{2pt} \begin{tabular}{clcccccccc} \toprule UM / MM & ~~~~Model & Clean & $\omega_0 = 1.5$ & $\omega_1 = 6$ & $\omega_4 = 0.03$ & $\omega_5 = 0.001$\\ \midrule & 0: Img-regT & $27.4$ & $5.78$ & $27.4$ & $9.5$ & $9.0$ \\ & 1: Aud-regT & $54.4$ & $15.0$ & $49.8$ & $23.0$ & $19.9$ \\ & 2: Img-advT & $27.5$ & $5.3$ & $27.4$ & $10.7$ & $10.3$\\ & 3: Aud-advT & $54.8$ & $19.4$ & $50.6$ & $41.8$ & $48.5$\\ UM & 4: Img-freT & $22.2$ & $4.0$ & $22.2$ & $19.4$ & $20.9$\\ Nets & 5: Aud-freT & $49.2$ & $32.4$ & $46.6$ & $48.2$ & $49.3$\\ & 6: Img-staT & $27.0$ & $6.9$ & $26.9$ & $10.5$ & $9.6$ \\ & 7: Aud-staT & $55.2$ & $18.8$ & $50.0$ & $22.4$ & $20.1$ \\ & 8: Img-mixT & $27.2$ & $8.4$ & $27.1$ & $7.3$ & $7.2$\\ & 9: Aud-mixT & $56.3$ & $1.57$ & $50.1$ & $16.9$ & $11.7$ \\ \midrule & Mean-w/o & $57.7$ & $45.8$ & $57.6$ & $35.0$ & $25.7$ \\ MM & Mean-w/ & $53.9$ & $48.6$ & $53.8$ & $37.9$ & $20.5$\\ (0, 1) & Stat-w/o & $58.5$ & $46.0$ & $58.4$ & $35.3$ & $26.0$ \\ & Stat-w/ (ours) & $60.1$ ({\color{green}{$\uparrow$}}) & $51.2$ ({\color{green}{$\uparrow$}}) & $60.0$ ({\color{green}{$\uparrow$}}) & $39.7$ ({\color{green}{$\uparrow$}}) & $28.5$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $59.1$ & $45.2$ & $59.9$ & $36.6$ & $26.9$\\ MM & Mean-w/ & $54.6$ & $48.6$ & $54.6$ & $37.6$ & $21.3$\\ (2, 3) & Stat-w/o & $59.6$ & $45.4$ & $59.6$ & $37.1$ & $27.1$ \\ & Stat-w/ (ours) & $\bm{61.0}$ ({\color{green}{$\uparrow$}}) & $50.0$ ({\color{green}{$\uparrow$}}) & $\bm{61.0}$ ({\color{green}{$\uparrow$}}) & $40.9$ 
({\color{green}{$\uparrow$}}) & $29.7$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $54.6$ & $42.2$ & $52.5$ & $54.5$ & $54.6$\\ MM & Mean-w/ & $51.4$ & $37.6$ & $49.1$ & $51.0$ & $51.4$\\ (4, 5) & Stat-w/o & $56.4$ & $43.7$& $54.4$ & $\bm{56.3}$& $\bm{56.4}$ \\ & Stat-w/ (ours) & $45.1$ ({\color{red}{$\downarrow$}}) & $38.6$ ({\color{red}{$\downarrow$}}) & $43.9$ ({\color{red}{$\downarrow$}}) & $44.3$ ({\color{red}{$\downarrow$}}) & $44.7$ ({\color{red}{$\downarrow$}}) \\ \hline & Mean-w/o & $57.5$ & $47.6$ & $57.4$ & $37.3$ & $30.8$\\ MM & Mean-w/ & $53.5$ & $49.4$ & $53.4$ & $37.1$ & $23.0$\\ (6, 7) & Stat-w/o & $58.2$ & $48.0$ & $58.1$ & $37.7$ & $31.1$ \\ & Stat-w/ (ours) & $60.0$({\color{green}{$\uparrow$}}) & $52.2$({\color{green}{$\uparrow$}}) & $59.9$ ({\color{green}{$\uparrow$}}) & $41.3$ ({\color{green}{$\uparrow$}}) & $34.4$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $57.6$ & $48.1$ & $57.5$ & $42.8$ & $33.6$\\ MM & Mean-w/ & $57.8$ & $52.2$ & $57.8$ & $50.9$ & $33.6$\\ (8, 9) & Stat-w/o & $59.7$ & $50.1$ & $59.5$ & $44.2$ & $34.6$ \\ & Stat-w/ (ours) & $60.4$ ({\color{green}{$\uparrow$}}) & $57.4$ ({\color{green}{$\uparrow$}}) & $60.3$ ({\color{green}{$\uparrow$}}) & $54.3$ ({\color{green}{$\uparrow$}}) & $49.9$ ({\color{green}{$\uparrow$}}) \\ \bottomrule \end{tabular} \label{table:exp_vggsound_moda14} \end{table} \begin{figure}[!ht] \center \includegraphics[width=0.345\linewidth]{figures/VGG01G.pdf} \hspace{-5mm} \includegraphics[width=0.345\linewidth]{figures/VGG05G.pdf} \hspace{-5mm} \includegraphics[width=0.345\linewidth]{figures/VGG10G.pdf} \caption{Accuracies of multimodal networks obtained by statistically fusing a vanilla image network and audio network with $\gamma=0.1$ (the 1-st column), $\gamma=0.5$ (the 2-nd column), and $\gamma=0.99$ (the 3-rd column). 
Mean and Std of 5 repeated experiments are shown by the solid lines and the shaded regions, respectively.} \label{fig:ablation_diff_gamma_vgg} \end{figure} \begin{table}[!ht] \small \centering \caption{Accuracies of different models (\%) are evaluated on VGGSound when audio features are perturbed. Mean accuracies are reported after repeatedly running 5 times.} \vspace{2pt} \begin{tabular}{clcccccccc} \toprule UM / MM & ~~~~Model & Clean & $\omega_0 = 1.5$ & $\omega_1 = 6$ & $\omega_4 = 0.03$ & $\omega_5 = 0.001$\\ \midrule & 0: Img-regT & $27.4$ & $5.78$ & $27.4$ & $9.5$ & $9.0$ \\ & 1: Aud-regT & $54.4$ & $15.0$ & $49.8$ & $23.0$ & $19.9$ \\ & 2: Img-advT & $27.5$ & $5.3$ & $27.4$ & $10.7$ & $10.3$\\ & 3: Aud-advT & $54.8$ & $19.4$ & $50.6$ & $41.8$ & $48.5$\\ UM & 4: Img-freT & $22.2$ & $4.0$ & $22.2$ & $19.4$ & $20.9$\\ Nets & 5: Aud-freT & $49.2$ & $32.4$ & $46.6$ & $48.2$ & $49.3$\\ & 6: Img-staT & $27.0$ & $6.9$ & $26.9$ & $10.5$ & $9.6$ \\ & 7: Aud-staT & $55.2$ & $18.8$ & $50.0$ & $22.4$ & $20.1$ \\ & 8: Img-mixT & $27.2$ & $8.4$ & $27.1$ & $7.3$ & $7.2$\\ & 9: Aud-mixT & $56.3$ & $1.57$ & $50.1$ & $16.9$ & $11.7$ \\ \midrule & Mean-w/o & $57.7$ & $29.5$ & $55.5$ & $36.2$ & $32.7$ \\ MM & Mean-w/ & $53.9$ & $23.4$ & $50.8$ & $29.8$ & $25.2$ \\ (0, 1) & Stat-w/o & $58.5$ & $30.5$ & $56.1$ & $36.6$ & $32.9$ \\ & Stat-w/ (ours) & $\bm{60.0}$({\color{green}{$\uparrow$}}) & $31.3$ ({\color{green}{$\uparrow$}}) & $52.6$ ({\color{red}{$\downarrow$}}) & $38.9$ ({\color{green}{$\uparrow$}}) & $35.5$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $59.1$ & $37.5$ & $56.7$ & $53.4$ & $55.9$\\ MM & Mean-w/ & $54.8$ & $29.7$ & $52.0$ & $43.7$ & $48.5$\\ (2, 3) & Stat-w/o & $59.6$ & $38.0$ & $57.1$ & $54.2$ & $56.5$ \\ & Stat-w/ (ours) & $55.4$ ({\color{red}{$\downarrow$}}) & $38.5$ ({\color{green}{$\uparrow$}}) & $53.2$ ({\color{red}{$\downarrow$}}) & $\bm{56.3}$ ({\color{green}{$\uparrow$}}) & $\bm{57.9}$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $54.6$ 
& $39.2$ & $54.6$ & $54.6$ & $54.6$\\ MM & Mean-w/ & $51.4$ & $\bm{45.4}$ & $51.3$ & $51.0$ & $51.5$\\ (4, 5) & Stat-w/o & $56.5$ & $39.2$ & $\bm{58.8}$ & $55.8$& $55.4$ \\ & Stat-w/ (ours) & $46.2$ ({\color{red}{$\downarrow$}}) & $35.5$ ({\color{red}{$\downarrow$}}) & $57.3$ ({\color{red}{$\downarrow$}}) & $55.5$ ({\color{red}{$\downarrow$}}) & $47.3$ ({\color{red}{$\downarrow$}}) \\ \hline & Mean-w/o & $57.5$ & $36.2$ & $55.7$ & $36.7$ & $35.2$\\ MM & Mean-w/ & $53.3$ & $36.3$ & $52.0$ & $28.5$ & $37.1$\\ (6, 7) & Stat-w/o & $58.2$ & $35.7$ & $55.0$ & $36.9$ & $31.0$ \\ & Stat-w/ (ours) & $54.4$ ({\color{green}{$\uparrow$}}) & $27.7$ ({\color{green}{$\uparrow$}}) & $50.2$ ({\color{green}{$\uparrow$}}) & $38.8$ ({\color{green}{$\uparrow$}}) & $25.4$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $57.5$ & $4.1$ & $54.5$ & $21.1$ & $31.2$\\ MM & Mean-w/ & $57.8$ & $3.5$ & $53.2$ & $17.8$ & $26.9$\\ (8, 9) & Stat-w/o & $53.8$ & $3.6$ & $56.6$ & $21.2$ & $32.6$ \\ & Stat-w/ (ours) & $59.6$ ({\color{green}{$\uparrow$}}) & $5.9$ ({\color{green}{$\uparrow$}}) & $52.7$ ({\color{red}{$\downarrow$}}) & $26.4$ ({\color{green}{$\uparrow$}}) & $33.3$ ({\color{green}{$\uparrow$}}) \\ \bottomrule \end{tabular} \label{table:exp_vggsound_moda13} \end{table} \section{Preliminary}\label{sec:background} \textbf{Network Robustness}\quad To verify network robustness, two major kinds of perturbations are used, which in this paper we refer to as (i) adversarial attacks such as the FGSM, PGD, or CW attack \cite{goodfellow2014fsgm,madry2017towards_resist_attack,carlini2017cw}, and (ii) random corruptions such as Gaussian noise, missing entries, or illumination changes \cite{stab_train,kim2019single}. Correspondingly, many methods have been proposed to offset the negative effects of such perturbations. 
Adversarial training based on projected gradient descent \cite{madry2017towards_resist_attack} is one strong mechanism to defend against adversarial attacks, and the recent Free-m method \cite{shafahi2019freem} was proposed as its fast variant. Besides adversarial training, several regularization techniques have also been proven to provide such a capability, such as Mixup \cite{zhang2020mixup1} and Jacobian regularization \cite{jacobian2}. With regard to random corruptions, Mixup and Jacobian regularization are effective as well \cite{zhang2017mixup2,jacobian3}. Another powerful approach is stability training \cite{stab_train}, which introduces an additional KL-divergence term into the conventional classification loss so that the trained DNNs are stabilized. \vspace{6pt} \noindent\textbf{Multimodal Learning}\quad DNNs trained by fusing data from different modalities have outperformed their unimodal counterparts in various applications, such as object detection \cite{stat_fusion2,kim2019single}, semantic segmentation \cite{chen2020bi_semantic_seg2,feng2020deep_sementic_seg1}, and audio recognition \cite{gemmeke2017audio1,chen2020vggsound}. Based on where the information is exchanged between different modalities, multimodal fusion methods can be classified into three kinds: (i) early-fusion \cite{wagner2016multispectral,stat_fusion2}, (ii) middle-fusion \cite{kim2019single,deep_mm_channel_exchange}, and (iii) late-fusion \cite{stat_fusion2,liu2021one}. For instance, if the information is fused at the end of the DNNs (e.g., Figure \ref{fig:illsutration_of_method}), the method belongs to the late-fusion category. Although vast efforts have been put into exploiting multimodal fusion methods to improve DNNs' accuracy on specific learning tasks, few works have explored network robustness in the multimodal context. 
Specifically, \cite{mees2016adaptive1}, \cite{valada2017adaptive2}, and \cite{kim2018robust} exploited gating networks to deal with random corruptions and adverse or changing environments. Afterwards, \cite{kim2019single} proposed a surrogate minimization scheme and a latent ensemble layer to handle single-source corruptions. However, all the aforementioned methods belong to middle-fusion, and only random corruptions are considered. In contrast, we focus on another important scenario, late-fusion, and besides corruptions, we further take adversarial attacks into account. To the best of our knowledge, robust late-fusion is unexplored in the previous literature. \section{Discussion and Future Work}\label{sec:conclusion} In this paper, we propose a training-free robust late-fusion method. Intuitively, our method designs a filter for each sample during inference (i.e., at test time). Such a filter is implemented by Jacobian regularization, and after a sample goes through it, the change of the final multimodal prediction under an input perturbation is limited to some amount. The error bound analysis and a series of experiments justify the efficacy of our method both theoretically and numerically. The TwoMoon example explains the biasing effect of the extra modality, which is rooted in the difference between the unimodal and multimodal settings. Our method opens up other directions for further exploration. In the unimodal context, we understand that directly applying our method usually doesn't change the prediction. Thus, could it be used to adjust the confidence of a DNN? Another possibility is that besides $\mathbf{W}_x$, we deliberately add another bias term $\mathbf{b}_x$ and optimize both for unimodal robustness. Alternatively, in the multimodal context, we might consider minimizing the derivative of the final prediction with respect to the input feature directly. Moreover, it would be instructive to compare our method with consistency regularization. 
Last but not least, statistical fusion implicitly relies on the assumption that train and test data come from the same distribution, so that we can calculate $\mathbf{freq}$ on the train dataset and reuse it at inference time. When domain shift is present, this no longer holds, and a straightforward remedy might be to make $\mathbf{freq}$ a learnable parameter. \section{Introduction} Deep fusion models have recently drawn great attention from researchers in the context of multimodal learning \cite{vielzeuf2018centralnet,mm_survey,perez2019mfas,deep_mm_channel_exchange,xue2021multimodal}, as they provide an easy way to boost model accuracy and robustness. For instance, RGB cameras and LiDARs are usually deployed simultaneously on an autonomous vehicle, and the resulting RGB images and point clouds are referred to as two modalities, respectively. When RGB images are blurry at night, point clouds can provide complementary information and help to make decisions in vision tasks \cite{kim2019single}. Over the past few years, numerous multimodal fusion methods have been proposed at different levels: early-, middle-, and late-fusion \cite{stat_fusion2}. In early-fusion, input feature vectors from different modalities are concatenated and fed into one single deep neural network (DNN), while in middle-fusion, they go into DNNs independently and exchange information in feature space. Unlike the previous two cases, late-fusion is realized by merging distinct DNNs at their output layers via concatenation, element-wise summation, etc. These three levels of fusion possess different pros and cons. For instance, late-fusion, the primary concern of our paper, is (i) privacy-friendly and (ii) convenient to deploy. Specifically, assume that a hospital wants to have an AI agent to judge whether a patient has a certain disease or not \cite{sun2020tcgm}. 
It has to divide the complete training features (e.g., medical records, X-ray images) of every patient and deliver them to different AI companies; otherwise, the patients' identities will be exposed and their privacy unprotected. This, in turn, directly rules out the possibility of applying early- or middle-fusion methods. On the other hand, the hospital can still exploit late-fusion techniques to generate the ultimate AI agent after several unimodal DNNs are trained by the AI companies. Moreover, unlike early- or middle-fusion, many late-fusion methods can tolerate missing modality information (i.e., they do not need paired training data) and thus are convenient to deploy. Although late-fusion is a mature topic in the literature, its performance under adversarial attacks \cite{madry2017towards_resist_attack,robust_at_odds} and random corruptions \cite{stab_train,kim2019single} is rather underexplored. In this paper, we address the problem of robust late-fusion by utilizing Jacobian regularization \cite{jacobian4,jacobian2,jacobian3,jacobian1} and the conditional independence assumption \cite{sun2020tcgm}. The key is to minimize the Frobenius norm of a Jacobian matrix so that the multimodal prediction is stabilized (see Figure \ref{fig:illsutration_of_method}). Our main contributions are as follows: \begin{itemize} \item To the best of our knowledge, we are the first to propose a training-free robust late-fusion method. The involved optimization problem is relaxed to a Sylvester equation \cite{solve_sylvester1}, and the solution is obtained with only a little computational overhead. \item We provide a theoretical error bound for our proposed robust late-fusion method and an illustrative explanation of the function of the extra modality via the TwoMoon example. \item Thorough numerical experiments demonstrate that our method outperforms other late-fusion methods and is capable of handling both adversarial attacks and random corruptions. 
\end{itemize} \begin{figure}[!htb] \centering \includegraphics[width=0.95\linewidth]{figures/Picture1.pdf} \caption{Illustration of the proposed robust late-fusion method. Before the unimodal raw logits $\mathbf{z}_A$ and $\mathbf{z}_B$ are fused, the Jacobian regularization technique is applied. Roughly speaking, it enforces the derivative of $\mathbf{p}$ with respect to $\mathbf{z}_A$ to become smaller. Thus, when $\mathbf{z}_A$ is perturbed to $\mathbf{z}_A^{\prime}$ (due to random corruption or adversarial attack), the change of the multimodal prediction $||\mathbf{p}^\prime-\mathbf{p}||$ is limited to some extent. For illustration purposes, all variables here (e.g., $\mathbf{p},\mathbf{z}_A$) are drawn in one-dimensional space.} \label{fig:illsutration_of_method} \end{figure} \section{Sample-Wise Jacobian Regularization}\label{sec:method1} Consider a supervised $K$-class classification problem in the multimodal context. Suppose that features from two distinct modalities $A$ (e.g., audio) and $B$ (e.g., video) are provided in the form of $\mathcal{D}_A=\{(\mathbf{x}^i_A,{y}^{i})\}_{i=1}^N$ and $\mathcal{D}_B=\{(\mathbf{x}^i_B,{y}^{i})\}_{i=1}^N$, where $\{\mathbf{x}_A,\mathbf{x}_B\}$ represents the input features and ${y}\in\{0,1,\cdots,K-1\}$ represents the true label. We train two unimodal networks separately, each corresponding to one modality. 
Given a specific input feature $\mathbf{x}_A$, the first unimodal network calculates the class prediction $\mathbf{p}_A\in\mathbb{R}^K$ by: \begin{equation}\label{eq:expression_final_layer} \mathbf{p}_A=\bm\sigma_1(\mathbf{z}_A)=\bm\sigma_1(\mathbf{W}_A\mathbf{h}_A+\mathbf{b}_A)=\bm\sigma_1(\mathbf{W}_A\mathbf{f}_A(\mathbf{x}_A)+\mathbf{b}_A) \end{equation} where $\bm\sigma_1(\cdot)$ represents the Softmax function, $\mathbf{z}_A\in\mathbb{R}^K$ represents the raw logit, and $\mathbf{h}_A=\mathbf{f}_A(\mathbf{x}_A)\in\mathbb{R}^{H}$ represents the feature fed into the last layer \footnote{(\ref{eq:expression_final_layer}) holds because the last layer is usually implemented as a fully connected (FC) layer with Softmax activation in classification tasks.}. Here $\mathbf{W}_A\in\mathbb{R}^{K\times H}$ and $\mathbf{b}_A\in\mathbb{R}^K$ are the learnable weight and bias of the last linear layer, respectively. Similarly, the second unimodal network provides the class prediction $\mathbf{p}_B=\bm\sigma_1(\mathbf{z}_B)=\bm\sigma_1(\mathbf{W}_B\mathbf{h}_B+\mathbf{b}_B)=\bm\sigma_1(\mathbf{W}_B\mathbf{f}_B(\mathbf{x}_B)+\mathbf{b}_B)$. Based on the conditional independence assumption \cite{stat_fusion1}, the basic statistical late-fusion method generates the final class prediction as \cite{stat_fusion2}: \begin{equation}\label{eq:expression_stat_fusion} \mathbf{p}=\bm\sigma_2(\frac{\mathbf{p}_A\odot\mathbf{p}_B}{\mathbf{freq}})=\bm\sigma_2(\frac{\bm\sigma_1(\mathbf{z}_A)\odot\bm\sigma_1(\mathbf{z}_B)}{\mathbf{freq}}) \end{equation} where the division is performed in an element-wise manner, $\odot$ represents the element-wise product, and $\bm\sigma_2(\cdot)$ represents a linear normalization enforcing the summation of elements equal to one. Here $\mathbf{freq}\in\mathbb{R}^K$ contains the occurrence frequency of each class, calculated from the training dataset. Our proposed approach builds upon (\ref{eq:expression_stat_fusion}). 
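For concreteness, the unimodal prediction (\ref{eq:expression_final_layer}) and the fusion rule (\ref{eq:expression_stat_fusion}) can be sketched in a few lines of NumPy (an illustrative sketch of ours with random weights and features; all variable names and shapes are assumptions, not from any released code):

```python
import numpy as np

def softmax(z):                                # sigma_1
    e = np.exp(z - z.max())                    # shifted for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
K, H = 10, 32                                  # classes, last-layer width

# Eq. (1): unimodal predictions from the last FC layer of each network
W_A, b_A = rng.normal(size=(K, H)), rng.normal(size=K)
W_B, b_B = rng.normal(size=(K, H)), rng.normal(size=K)
h_A, h_B = rng.normal(size=H), rng.normal(size=H)   # stand-ins for f_A(x_A), f_B(x_B)
p_A, p_B = softmax(W_A @ h_A + b_A), softmax(W_B @ h_B + b_B)

# Eq. (2): statistical late-fusion under conditional independence
freq = np.full(K, 1.0 / K)                     # balanced classes assumed here
p = p_A * p_B / freq
p /= p.sum()                                   # sigma_2: linear normalization
```

With the balanced `freq` assumed above, the fused `p` is a valid probability distribution over the $K$ classes.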
Specifically, we consider adding two $K\times K$ weight matrices $\{\mathbf{W}_a,\,\mathbf{W}_b\}$ ahead of $\{\mathbf{z}_A,\,\mathbf{z}_B\}$, respectively, right before they are activated. Consequently, the final multimodal prediction is re-calibrated as: \begin{equation}\label{eq:expression_p_prime} \mathbf{p}^\prime=\bm\sigma_2(\frac{\bm\sigma_1(\mathbf{z}_A^\prime)\odot\bm\sigma_1(\mathbf{z}_B^\prime)}{\mathbf{freq}})=\bm\sigma_2(\frac{\bm\sigma_1(\mathbf{W}_a\mathbf{z}_A)\odot\bm\sigma_1(\mathbf{W}_b\mathbf{z}_B)}{\mathbf{freq}}) \end{equation} Suppose that the data provided from modality $A$ are perturbed at the input or feature level, while the data from modality $B$ are clean. For the matrix $\mathbf{W}_b$, we simply set it to the identity matrix, implying that we do not invoke the robust add-on for the second modality. To determine the value of $\mathbf{W}_a$, we first calculate the derivative of $\mathbf{p}^\prime$ with respect to $\mathbf{h}_A$ (see Supplementary): \begin{equation}\label{eq:derivative} \frac{\partial \mathbf{p}^\prime}{\partial \mathbf{h}_A}= \mathbf{J}^\prime\mathbf{W}_a\mathbf{W}_A=[\mathbf{p}^\prime\mathbf{p}^{\prime,T}-\text{Diag}(\mathbf{p}^\prime)]\mathbf{W}_a\mathbf{W}_A \end{equation} Then we minimize the following regularized Jacobian loss with respect to $\mathbf{W}_a$: \begin{equation}\label{eq:expression_loss} \min_{\mathbf{W}_a}\ L =\min_{\mathbf{W}_a}\ (1-\gamma) ||\mathbf{J}^\prime\mathbf{W}_a\mathbf{W}_A||^2_F + \gamma ||\mathbf{W}_a-\mathbf{I}||_F^2 \end{equation} where $0<\gamma<1$ is a tunable hyper-parameter. Minimizing the first term of the loss limits the change of $\mathbf{p}^\prime$ when $\mathbf{h}_A$ is perturbed, while the second term guarantees numerical stability and ensures that the prediction in the perturbed case does not drift too far from that of the clean case. 
For a specific multimodal input $\{\mathbf{x}_A,\mathbf{x}_B\}$, once $\mathbf{W}_a$ is determined, so are $\mathbf{p}^\prime$ and $\mathbf{J}^\prime$ via (\ref{eq:expression_p_prime}) and (\ref{eq:derivative}), respectively. Thus, (\ref{eq:expression_loss}) is well-determined and non-linear with respect to $\mathbf{W}_a$. \begin{algorithm} \caption{Iteratively solving regularized Jacobian loss} \label{alg:solve_loss} \begin{algorithmic}[1] \STATE {Given one specific input $\{\mathbf{x}_A,\mathbf{x}_B\}$. Initialize iteration index $t=0$.} \STATE {Perform one forward pass, yielding the initial class prediction $\mathbf{p}^{(0)}$.} \WHILE{$t<t_{\text{max}}$} \STATE {Calculate $\mathbf{J}^{(t)}=\mathbf{p}^{(t)}\mathbf{p}^{(t),T}-\text{Diag}(\mathbf{p}^{(t)})$.} \STATE {Minimize $L^{(t)} = (1-\gamma) ||\mathbf{J}^{(t)}\mathbf{W}_a\mathbf{W}_A||^2_F + \gamma ||\mathbf{W}_a-\mathbf{I}||_F^2$ with respect to $\mathbf{W}_a$.} \STATE {Calculate $\mathbf{p}^{(t+1)}$ based on Eq (\ref{eq:expression_p_prime}) with the optimal $\mathbf{W}_a$.} \STATE {Update $t=t+1$.} \ENDWHILE \STATE {Return $\mathbf{p}^\prime=\mathbf{p}^{(t)}$.} \end{algorithmic} \end{algorithm} We propose a heuristic iterative method in Algorithm \ref{alg:solve_loss} that makes the above optimization problem tractable. The key is to decouple $\mathbf{J}^\prime$ from $\mathbf{W}_a$. Namely, in step 5 of Algorithm \ref{alg:solve_loss}, all terms are known except $\mathbf{W}_a$, and thus the relaxed loss is convex. After writing out all terms of $\partial L^{(t)}/\partial \mathbf{W}_a$, we observe that the resulting optimality condition is a Sylvester equation \cite{solve_sylvester1}. It has a unique solution, and the runtime is comparable to that of inverting a $K\times K$ matrix and hence affordable. See Supplementary for details. 
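To make step 5 concrete: setting the gradient of the relaxed loss to zero gives the linear condition $(1-\gamma)\,\mathbf{J}^{(t),T}\mathbf{J}^{(t)}\,\mathbf{W}_a\,\mathbf{W}_A\mathbf{W}_A^T+\gamma(\mathbf{W}_a-\mathbf{I})=\mathbf{0}$. The sketch below (ours, not the authors' released code) solves it by vectorization instead of a Sylvester solver; for small $K$ the resulting $K^2\times K^2$ solve is equally cheap:

```python
import numpy as np

def solve_Wa(J, W_A, gamma):
    """Closed-form minimizer of (1-g)||J Wa W_A||_F^2 + g||Wa - I||_F^2.

    Setting the gradient to zero yields
        (1-g) J^T J  Wa  (W_A W_A^T) + g (Wa - I) = 0,
    which is linear in Wa. With row-major vec(.), vec(M X N) =
    (M kron N^T) vec(X); both factors below are symmetric.
    """
    K = J.shape[0]
    M = (1.0 - gamma) * J.T @ J            # left factor (symmetric PSD)
    N = W_A @ W_A.T                        # right factor (symmetric PSD)
    A = np.kron(M, N) + gamma * np.eye(K * K)
    w = np.linalg.solve(A, gamma * np.eye(K).ravel())
    return w.reshape(K, K)

# sanity check: the optimality condition holds for the returned solution
rng = np.random.default_rng(0)
K, H, gamma = 4, 8, 0.5
p = rng.dirichlet(np.ones(K))              # a probability vector
J = np.outer(p, p) - np.diag(p)            # J = p p^T - Diag(p)
W_A = rng.normal(size=(K, H))
Wa = solve_Wa(J, W_A, gamma)
M, N = (1 - gamma) * J.T @ J, W_A @ W_A.T
assert np.allclose(M @ Wa @ N + gamma * Wa, gamma * np.eye(K))
```

Since the coefficient matrix is positive definite for $0<\gamma<1$, the solution is unique, consistent with the uniqueness claim above.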
\vspace{6pt} \noindent\textbf{Remarks}\quad First, in our implementation we find that one iteration already yields a sufficiently accurate result; thus, all our numerical results are reported with $t_{\text{max}}=1$ (i.e., $\mathbf{p}^\prime=\mathbf{p}^{(1)}$). Second, if we know that data from modality $B$ are also perturbed, we can solve $\mathbf{W}_b$ in a similar manner. Third, notice that our approach is invoked merely during inference (i.e., at test time), not during training. Furthermore, we demonstrate our approach in the context of two modalities, while it can obviously be applied to more modalities as well. Finally, with a moderate assumption, we can further prove that when the input is perturbed, the change of the final prediction enhanced by our method is limited to a certain amount. This is summarized in Theorem \ref{theorem:change_output} (see Supplementary for details). Some immediate corollaries include: (i) when $\bm{\epsilon}\sim N(\mathbf{0},\mathbf{\Sigma})$, the bound simplifies to ${E}[||\mathbf{p}^{\prime,noise}-\mathbf{p}^\prime||]\leq l\,({\frac{\gamma K}{2(1-\gamma)}})^{1/2}\,\text{Tr}[\mathbf{\Sigma}]$, and (ii) when the $L_2$ norm of $\bm{\epsilon}$ is constrained to be smaller than $\delta$ (as usually assumed in adversarial attacks), the bound simplifies to $||\mathbf{p}^{\prime,noise}-\mathbf{p}^\prime||\leq l\,\delta\,({\frac{\gamma K}{2(1-\gamma)}})^{1/2}$. \begin{theorem}\label{theorem:change_output} If $\mathbf{f}_A$ is $l$-Lipschitz continuous and $\mathbf{x}_A$ is perturbed by $\bm{\epsilon}$: $\mathbf{x}_A^{\text{noise}}=\mathbf{x}_A+\bm{\epsilon}$, then the Euclidean norm of the change of our final prediction (i.e., $||\mathbf{p}^{\prime,noise}-\mathbf{p}^\prime||$) is at most $l\sqrt{\frac{\gamma K}{2(1-\gamma)}}||\bm\epsilon||$. \end{theorem} \section{Necessity of the Extra Modality: Biasing Effect}\label{sec:method2} In this section, we explain the principles behind our approach. 
Let us move one step back and consider the unimodal case. In what follows, we will demonstrate that our approach might not work well in the unimodal context, which in turn justifies the necessity of the extra modality. At first glance, our approach seems equally applicable in the unimodal context. Namely, suppose that only one unimodal network $\mathbf{p}=\bm\sigma_1(\mathbf{z})=\bm\sigma_1(\mathbf{W}\mathbf{h}+\mathbf{b})$ is available; at test time and for each specific input $\mathbf{x}$, we add a weight matrix $\mathbf{W}_x$ before the raw logit $\mathbf{z}$ is fed into the Softmax activation, re-calibrating the final unimodal prediction as: $\mathbf{p}^\prime=\bm\sigma_1(\mathbf{z^\prime})=\bm\sigma_1(\mathbf{W}_x\mathbf{z})$. Here $\mathbf{W}_x$ can be solved by analogy to (\ref{eq:expression_loss}). However, we observe that the introduction of $\mathbf{W}_x$ usually won't change the final prediction. For an intuitive understanding, we consider the TwoMoon example used by \cite{xue2021multimodal}. In this example, all data points are scattered in a 2D space. The data located at the upper and lower leaves have true labels $0$ and $1$, and are colored red and blue, respectively. \begin{figure}[!htp] \centering \includegraphics[width=1.0\linewidth]{figures/twomoon_um_final_version.pdf} \vspace{-18pt} \caption{({Unimodal case}) The leftmost figure reveals the results of training and testing on clean data. The heatmap in the background represents the value of $||\mathbf{J}\mathbf{W}||_F$. The remaining three figures show the results of testing on data with Gaussian noise, and similarly, the heatmap in the background represents the value of $||\mathbf{J}\mathbf{W}_x\mathbf{W}||_F$. For each figure, the test accuracies on clean and noisy data are reported at the top left and bottom right, respectively.} \label{fig:twoomoon_um} \end{figure} In the unimodal case, we take both the horizontal and vertical coordinates as input and train a neural network with three FC layers. 
As shown in Figure \ref{fig:twoomoon_um} (a), the network perfectly fits the clean data and achieves 97.75\% accuracy on the clean test data. In the remaining three figures, we evaluate the trained network on noisy test data. Specifically, we deliberately choose $\gamma=1.0$ in Figure \ref{fig:twoomoon_um} (b), so that our approach is actually not invoked (since the solved $\mathbf{W}_x$ equals $\mathbf{I}$). In this case, the accuracy drops to 97.00\% on noisy data, while the heatmap in the background doesn't change at all compared to Figure \ref{fig:twoomoon_um} (a). In Figure \ref{fig:twoomoon_um} (c) and (d), we choose $\gamma=0.5$ and $\gamma=0.1$, respectively. We observe that even though the color of the heatmap indeed becomes lighter as expected, the test accuracies of both cases are still 97.00\%. More importantly, we find that this phenomenon is not coincidental: in unimodal binary classification, the prediction does not change for any input after adding $\mathbf{W}_x$. See Supplementary for a rigorous mathematical proof. Moreover, in a $K$-class ($K>2$) classification problem, the final prediction might change if $\gamma$ is sufficiently small. For instance, given $\mathbf{z}=[1,0,2]^T$ and $\mathbf{W}=\mathbf{I}$, if we choose $\gamma=0.5$, then the final prediction will change from $\mathbf{p}=[0.245, 0.090, 0.665]^T$ to $\mathbf{p}^\prime=[0.270,0.096,0.635]^T$, where the last entry remains the largest. However, if we choose $\gamma=0.01$, then the final prediction will become $\mathbf{p}^\prime=[0.391, 0.219, 0.390]^T$, and now the first entry is the largest. See Supplementary for a theoretical bound on $\gamma$ in this high-dimensional case. Now we turn to the multimodal case. We treat the horizontal coordinates as one modality, and the vertical coordinates as the second modality. Two neural networks are trained, each corresponding to one modality. Then we fuse them with the aforementioned statistical fusion method. 
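The three-class unimodal example above can be reproduced in a few lines of NumPy. The sketch below (ours) uses $\mathbf{W}=\mathbf{I}$ and a single iteration, where setting the gradient of the relaxed loss (\ref{eq:expression_loss}) to zero yields the closed form $\mathbf{W}_x=\gamma[(1-\gamma)\mathbf{J}^T\mathbf{J}+\gamma\mathbf{I}]^{-1}$:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())   # shifted for numerical stability
    return e / e.sum()

def recalibrate(z, gamma):
    """Unimodal p' = softmax(W_x z) with W = I and a single iteration."""
    p = softmax(z)
    J = np.outer(p, p) - np.diag(p)            # J = p p^T - Diag(p)
    # optimality condition with W = I:
    # (1 - gamma) J^T J W_x + gamma (W_x - I) = 0
    W_x = gamma * np.linalg.inv((1 - gamma) * J.T @ J + gamma * np.eye(len(z)))
    return softmax(W_x @ z)

z = np.array([1.0, 0.0, 2.0])
print(np.round(recalibrate(z, 0.5), 3))    # approx. [0.270, 0.096, 0.635]: ranking kept
print(np.round(recalibrate(z, 0.01), 3))   # approx. [0.391, 0.219, 0.390]: argmax flips
```

The two printed vectors match the values quoted above, confirming that the argmax flips only once $\gamma$ becomes sufficiently small.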
As shown in Figure \ref{fig:twoomoon_mm}, in the multimodal context, when our method is invoked ($\gamma=0.5$ or $0.1$), the color of the heatmap becomes lighter and the test accuracies on noisy data all increase compared to the trivial statistical fusion (i.e., when $\gamma=1.0$). On the other hand, the test accuracies on clean data remain almost unchanged (or even slightly increase as $\gamma$ decreases). \begin{figure}[!htp] \centering \includegraphics[width=1.0\linewidth]{figures/twomoon_mm_final_version.pdf} \vspace{-18pt} \caption{({Multimodal case}) The leftmost column reveals the results of training and testing on clean data. For illustration purposes, we only perturb the Y-coordinates with Gaussian noise. The first and second rows correspond to small and large noise, respectively. The heatmaps reveal the values of $||\mathbf{J}\mathbf{W}||_F$ in (a) and $||\mathbf{J}\mathbf{W}_A\mathbf{W}||_F$ in (b), (c), and (d). For each figure, the test accuracies on clean and noisy data are reported at the top left and bottom right, respectively.} \label{fig:twoomoon_mm} \end{figure} \begin{lemma}\label{lemma:equi_stat_conc} Statistical fusion is equivalent to concatenating the raw logits: \begin{equation*} \bm\sigma_2(\frac{\mathbf{p}_A\odot\mathbf{p}_B}{\mathbf{freq}})=\bm\sigma_1(\mathbf{z}_A+\mathbf{z}_B-\ln\mathbf{freq})=\bm\sigma_1( \left[ \begin{array}{cc} \mathbf{I} & \mathbf{I}\\ \end{array}\right] \left[ \begin{array}{c} \mathbf{z}_A\\ \mathbf{z}_B \end{array}\right] -\ln\mathbf{freq}) \end{equation*} \end{lemma} \noindent\textbf{Biasing effect of the extra modality}\quad As we have seen in the TwoMoon example, our proposed method behaves differently in the unimodal and multimodal contexts. To get a better understanding of this subtle difference, we first summarize an equivalent form of statistical fusion in Lemma \ref{lemma:equi_stat_conc} (see Supplementary). 
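Lemma \ref{lemma:equi_stat_conc} is also easy to verify numerically (a minimal sketch of ours with random logits and class frequencies; all variable names are assumptions):

```python
import numpy as np

def softmax(z):                       # sigma_1
    e = np.exp(z - z.max())           # shifted for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
K = 10
z_A, z_B = rng.normal(size=K), rng.normal(size=K)
freq = rng.dirichlet(np.ones(K))      # class frequencies from the train set

# left-hand side: statistical fusion, sigma_2(p_A * p_B / freq)
p = softmax(z_A) * softmax(z_B) / freq
lhs = p / p.sum()                     # sigma_2: linear normalization

# right-hand side: softmax over summed logits with a -ln(freq) bias
rhs = softmax(z_A + z_B - np.log(freq))

assert np.allclose(lhs, rhs)          # the two forms coincide
```

The equality holds because each entry of the fused prediction is proportional to $e^{z_{A,k}+z_{B,k}-\ln\mathbf{freq}_k}$ before normalization.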
Now assume that the sample-wise $\mathbf{W}_a$ is added; then the final multimodal prediction becomes $\bm\sigma_1(\mathbf{W}_a\mathbf{z}_A+\mathbf{z}_B-\ln\mathbf{freq})$. Alternatively, in the unimodal context, our method adds a sample-wise $\mathbf{W}_x$ and the final prediction becomes $\bm\sigma_1(\mathbf{W}_x\mathbf{z})$. Comparing these two expressions, we observe that the introduced $\mathbf{W}_a$ and $\mathbf{W}_x$ both occur ahead of the perturbed modality. In the unimodal case, the Softmax function $\bm\sigma_1$ applies directly onto $\mathbf{W}_x\mathbf{z}$, while in the multimodal case, the extra modality information $\mathbf{z}_B$ acts as a bias term inside the Softmax. Since the entries of $\mathbf{z}_B$ take different values, this bias can impact the final prediction. \section{Experimental Results}\label{sec:experiment} \subsection{AV-MNIST} AV-MNIST \cite{perez2019mfas,vielzeuf2018centralnet} is a novel dataset created by pairing audio features with the original MNIST images. The first modality corresponds to MNIST images of size $28\times 28$, with $75\%$ of their energy removed by principal component analysis \cite{perez2019mfas}. The second modality is made up of spectrograms of size $112\times 112$. These spectrograms are extracted from audio samples obtained by merging pronounced digits and random natural noise. Following \cite{vielzeuf2018centralnet}, we use LeNet5 \cite{LeCun_handwritten} and a 6-layer CNN to process the image and audio modalities, respectively. To thoroughly verify the efficacy of our method, we design various types of perturbations: (i) random corruptions including Gaussian noise ($\omega_0$), missing entries ($\omega_1$), and biased noise ($\omega_2,\omega_3$), and (ii) adversarial attacks including the FGSM ($\omega_4$) and PGD attacks ($\omega_5,\omega_6$). See Supplementary for detailed definitions. \vspace{6pt} \noindent\textbf{Baselines}\quad We limit our comparison to the range of late-fusion methods. 
To verify our method, we consider several methods that improve network robustness, including (i) regular training, (ii) adversarial training (advT) by \cite{madry2017towards_resist_attack}, (iii) free-m training (freT) by \cite{shafahi2019freem}, (iv) stability training (stabT) by \cite{stab_train}, and (v) Mixup (mixT) by \cite{zhang2017mixup2}. To the best of our knowledge, no previous robust late-fusion methods exist. Thus, for comparison purposes, we slightly modify the confidence-based weighted summation in \cite{xue_find_that_paper} and adapt it here, referred to as mean fusion with confidence-based weighted summation. Combinations of different fusion methods and robust add-ons provide us with a few baselines. We emphasize that in our experiments, the train data are always free of noise. Following \cite{kim2019single}, our experiments are mainly conducted on the case when one modality is perturbed. Results on various other settings are reported in the Supplementary. \begin{table}[!ht] \small \centering \caption{Accuracies of different models (\%) are evaluated on AV-MNIST when audio features are perturbed. Mean accuracies are reported after repeatedly running 20 times. The value of $\gamma$ is selected from $\{0.1,0.5,0.9\}$. A green ({\color{green}{$\uparrow$}}) or red ({\color{red}{$\downarrow$}}) arrow indicates that after applying our Jacobian regularization, the model accuracy increases or decreases compared to others with the same unimodal backbones. The best accuracy in each column is bolded. `UM' and `MM' represent `unimodal' and `multimodal', respectively. 
`MM($0,\,i$)' represents a multimodal network obtained by fusing the unimodal network indexed by `$0$' and `$i$'.} \vspace{2pt} \begin{tabular}{clcccccccc} \toprule UM / MM & ~~~~~Model & Clean & $\omega_0 = 1.0$ & $\omega_1 = 10$ & $\omega_4 = 0.03$ & $\omega_5 = 0.001$\\ \midrule & 0: Img-regT & $73.4$ & $73.4$ & $73.4$ & $73.4$ & $73.4$ \\ & 1: Aud-regT & $83.9$ & $55.1$ & $73.3$ & $69.9$ & $77.8$ \\ UM & 2: Aud-advT & $84.4$ & $59.2$ & $72.0$ & $81.9$ & $83.3$ \\ Nets & 3: Aud-freT & $82.1$ & $55.6$ & $71.9$ & $80.8$ & $81.6$ \\ & 4: Aud-staT & $86.2$ & $67.6$ & $74.4$ & $66.5$ & $74.5$ \\ & 5: Aud-mixT & $87.6$ & $61.3$ & $74.9$ & $74.9$ & $78.3$ \\ \midrule & Mean-w/o & $93.6$ & $80.3$ & $86.4$ & $89.8$ & $92.7$ \\ MM & Mean-w/ & $90.5$ & $73.7$ & $83.6$ & $82.0$ & $88.2$ \\ (0, 1) & Stat-w/o & $93.6$ & $80.4$ & $86.6$ & $89.9$ & $92.7$ \\ & Stat-w/ (ours) & $94.7$ ({\color{green}{$\uparrow$}}) & $84.5$ ({\color{green}{$\uparrow$}}) & $89.1$ ({\color{green}{$\uparrow$}}) & $92.2$ ({\color{green}{$\uparrow$}}) & $\bm{94.1}$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $91.8$ & $78.3$ & $82.7$ & $91.9$ & $92.5$ \\ MM & Mean-w/ & $86.0$ & $72.7$ & $77.5$ & $85.7$ & $86.9$ \\ (0, 2) & Stat-w/o & $91.7$ & $78.3$ & $83.5$ & $91.9$ & $92.4$ \\ & Stat-w/ (ours) & $93.4$ ({\color{green}{$\uparrow$}}) & $83.1$ ({\color{green}{$\uparrow$}}) & $86.4$ ({\color{green}{$\uparrow$}}) & $\bm{93.5}$ ({\color{green}{$\uparrow$}}) & $93.7$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $93.0$ & $81.1$ & $87.7$ & $93.2$ & $93.2$ \\ MM & Mean-w/ & $82.4$ & $74.4$ & $78.6$ & $83.7$ & $83.5$ \\ (0, 3) & Stat-w/o & $93.0$ & $80.9$ & $87.7$ & $93.2$ & $93.2$ \\ & Stat-w/ & $93.0$ ({\color{green}{$\uparrow$}}) & $82.3$ ({\color{green}{$\uparrow$}}) & $88.1$ ({\color{green}{$\uparrow$}}) & $92.7$ ({\color{red}{$\downarrow$}}) & $92.9$ ({\color{red}{$\downarrow$}}) \\ \hline & Mean-w/o & $93.0$ & $83.8$ & $84.9$ & $83.5$ & $88.8$ \\ MM & Mean-w/ & $90.9$ & $78.1$ & $81.8$ & 
$74.6$ & $81.3$ \\ (0, 4) & Stat-w/o & $93.1$ & $83.7$ & $85.3$ & $83.4$ & $88.8$ \\ & Stat-w/ (ours) & $94.7$ ({\color{green}{$\uparrow$}}) & $\bm{87.5}$ ({\color{green}{$\uparrow$}}) & $87.8$ ({\color{green}{$\uparrow$}}) & $87.9$ ({\color{green}{$\uparrow$}}) & $91.9$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $\bm{95.2}$ & $85.9$ & $89.7$ & $91.1$ & $92.5$ \\ MM & Mean-w/ & $94.0$ & $82.1$ & $88.6$ & $87.1$ & $88.9$ \\ (0, 5) & Stat-w/o & $95.1$ & $85.7$ & $\bm{90.1}$ & $91.1$ & $92.4$ \\ & Stat-w/ (ours) & $95.0$ ({\color{red}{$\downarrow$}}) & $86.1$ ({\color{green}{$\uparrow$}}) & $90.1$ ({\color{red}{$\downarrow$}}) & $91.0$ ({\color{red}{$\downarrow$}}) & $92.2$ ({\color{red}{$\downarrow$}}) \\ \bottomrule \end{tabular} \label{table:exp_avmnist_gauss_moda1} \end{table} \vspace{6pt} \noindent \textbf{Results}\quad Table \ref{table:exp_avmnist_gauss_moda1} reports the test accuracies of several models on AV-MNIST. Six unimodal networks are trained on clean data. Then we fuse them with different fusion methods and different robust add-ons. We only invoke our Jacobian regularization for the perturbed audio modality, while keeping the image modality unchanged. First, in the case of mean fusion, confidence-based weighted summation doesn't improve robustness. In the case of statistical fusion, we observe that our proposed Jacobian regularization method not only boosts the accuracy on noisy data but also on clean data. For instance, when our method is invoked on the combination of model-0 and model-1, the test accuracy on clean data increases from $93.6\%$ to $94.7\%$, and the test accuracy on data with Gaussian noise (e.g., $\omega_0=1.0$) increases from $80.4\%$ to $84.5\%$. Similar phenomena can be observed when our method is invoked on other combinations. Moreover, the number of green arrows is much larger than that of red arrows implying that our method is compatible with different robust techniques and applicable under different noises/attacks. 
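The two fusion rules compared in the table can be sketched as follows; this is a minimal illustration rather than our exact implementation, with the statistical rule written in the generic product-of-experts form implied by Bayes' rule under conditional independence, and a uniform class prior assumed:

```python
import numpy as np

def mean_fusion(p_a, p_b):
    """Mean fusion: average the two unimodal class-probability vectors."""
    return 0.5 * (p_a + p_b)

def statistical_fusion(p_a, p_b, prior):
    """Generic product-of-experts form: assuming the two modalities are
    conditionally independent given the class, Bayes' rule gives
    p(y | x_a, x_b) proportional to p(y | x_a) p(y | x_b) / p(y)."""
    q = p_a * p_b / prior
    return q / q.sum()

p_img = np.array([0.6, 0.3, 0.1])  # hypothetical image posterior
p_aud = np.array([0.5, 0.4, 0.1])  # hypothetical audio posterior
prior = np.full(3, 1.0 / 3.0)      # uniform class prior (assumed)

p_mean = mean_fusion(p_img, p_aud)
p_stat = statistical_fusion(p_img, p_aud, prior)
```

When the two unimodal posteriors agree, the product form sharpens the fused posterior relative to simple averaging.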
Comparing the last row of the second sub-section with the third row of the third sub-section, we find that even though the unimodal backbone model-1 is weaker than model-2, the multimodal accuracies of $(0, 1)$ with our method invoked exceed those of $(0, 2)$ without it. Similar behavior is observed in the other sub-sections. We argue that in this example, statistically fusing two regularly trained unimodal baselines with our Jacobian regularization surpasses robust unimodal baselines and their trivial fusion. Moreover, the highest accuracy in each column is shown in bold; the multimodal network with our method invoked usually achieves the best performance of all. The first row of Figure \ref{fig:ablation_diff_gamma} further plots model accuracy versus the magnitude of noise/attack. It displays a consistent trend: our method works well regardless of the magnitude, and the larger the noise/attack, the larger the benefit of our method. \begin{figure}[!thb] \centering \includegraphics[width=1.0\linewidth]{figures/Picture3.pdf} \caption{Accuracies of multimodal networks obtained by statistically fusing a vanilla image network and audio network with $\gamma=0.1$ (the 1st row), $\gamma=0.5$ (the 2nd row), and $\gamma=0.9$ (the 3rd row). Mean and std of 20 repeated experiments are shown by the solid lines and the shaded regions, respectively. Note that the FGSM attack is deterministic, thus there is almost no variance when repeating experiments.} \label{fig:ablation_diff_gamma} \end{figure} We also plot model accuracy versus the magnitude of noise/attack for different $\gamma$ values in the second and third rows of Figure \ref{fig:ablation_diff_gamma}. As $\gamma$ increases, the gap between the orange and blue lines shrinks.
However, the orange lines consistently lie above the blue lines, indicating that our method works over a wide range of $\gamma$ values and that hyper-parameter tuning is relatively easy in our method. We emphasize that $\gamma=1.0$ is equivalent to trivial fusion. On the other hand, we also consider perturbations on the image modality, while the audio modality is assumed clean. Consequently, we invoke our Jacobian regularization method for the image modality, while keeping the audio modality unchanged. We have deliberately enlarged the magnitude of the noise/attack, and results are reported in Table \ref{table:exp_avmnist_gauss_moda2}. As shown in Table \ref{table:exp_avmnist_gauss_moda2}, confidence-weighted summation still doesn't outperform plain mean fusion in any case. Regarding statistical fusion, the second column demonstrates that this time our Jacobian regularization can lead to an accuracy drop on clean data. Such a trade-off, in which a robust network is less accurate on clean data, is consistent with \cite{robust_at_odds}. However, we notice that this phenomenon doesn't occur in Table \ref{table:exp_avmnist_gauss_moda1}. We believe the difference may be intrinsic to the modalities themselves, and we will explore it in future work. \begin{table}[!ht] \small \centering \caption{Accuracies of different models (\%) are evaluated on AV-MNIST when image features are perturbed.
Mean accuracies are reported after repeatedly running 20 times.} \vspace{2pt} \begin{tabular}{clcccccccc} \toprule UM / MM & ~~~~Model & Clean & $\omega_0 = 2.5$ & $\omega_{2,3} = 10,2$ & $\omega_4 = 0.07$ & $\omega_5 = 0.008$\\ \midrule & 0: Aud-regT & $83.9$ & $83.9$ & $83.9$ & $83.9$ & $83.9$ \\ & 1: Img-regT & $73.4$ & $24.4$ & $37.2$ & $15.6$ & $12.3$ \\ UM & 2: Img-advT & $73.1$ & $33.4$ & $40.5$ & $43.3$ & $34.1$ \\ Nets & 3: Img-freT & $65.1$ & $36.2$ & $40.7$ & $46.7$ & $42.6$ \\ & 4: Img-staT & $74.2$ & $29.5$ & $49.0$ & $19.4$ & $14.1$ \\ & 5: Img-mixT & $74.1$ & $30.4$ & $37.6$ & $37.3$ & $23.9$ \\ \midrule & Mean-w/o & $93.6$ & $71.9$ & $84.6$ & $83.2$ & $80.0$ \\ MM & Mean-w/ & $90.5$ & $78.1$ & $83.5$ & $81.6$ & $77.7$ \\ (0, 1) & Stat-w/o & $93.6$ & $71.7$ & $84.2$ & $83.1$ & $79.9$ \\ & Stat-w/ & $91.3$ ({\color{red}{$\downarrow$}}) & $75.2$ ({\color{red}{$\downarrow$}}) & $86.4$ ({\color{green}{$\uparrow$}}) & $85.4$ ({\color{green}{$\uparrow$}}) & $83.3$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $\bm{93.9}$ & $83.6$ & $85.4$ & $89.9$ & $88.4$ \\ MM & Mean-w/ & $91.5$ & $82.7$ & $82.0$ & $86.1$ & $83.2$ \\ (0, 2) & Stat-w/o & $\bm{93.9}$ & $83.5$ & $85.4$ & $\bm{89.9}$ & $88.4$ \\ & Stat-w/ (ours) & $90.8$ ({\color{red}{$\downarrow$}}) & $85.0$ ({\color{green}{$\uparrow$}}) & $86.9$ ({\color{green}{$\uparrow$}}) & $88.1$ ({\color{red}{$\downarrow$}}) & $87.6$ ({\color{red}{$\downarrow$}}) \\ \hline & Mean-w/o & $91.4$ & $87.1$ & $88.1$ & $88.8$ & $\bm{88.6}$ \\ MM & Mean-w/ & $88.3$ & $85.6$ & $85.7$ & $85.9$ & $85.5$ \\ (0, 3) & Stat-w/o & $91.4$ & $87.1$ & $88.2$ & $88.8$ & $\bm{88.6}$ \\ & Stat-w/ (ours) & $90.9$ ({\color{red}{$\downarrow$}}) & $\bm{87.0}$ ({\color{red}{$\downarrow$}}) & $88.2$ ({\color{green}{$\uparrow$}}) & $88.6$ ({\color{red}{$\downarrow$}}) & $88.3$ ({\color{red}{$\downarrow$}}) \\ \hline & Mean-w/o & $93.7$ & $77.8$ & $89.0$ & $85.1$ & $82.0$ \\ MM & Mean-w/ & $91.1$ & $83.3$ & $86.8$ & $82.3$ & $78.0$ \\ (0, 
4) & Stat-w/o & $93.7$ & $77.8$ & $89.1$ & $85.0$ & $81.8$ \\ & Stat-w/ (ours) & $91.3$ ({\color{red}{$\downarrow$}}) & $79.9$ ({\color{red}{$\downarrow$}}) & $\bm{89.3}$ ({\color{green}{$\uparrow$}}) & $86.4$ ({\color{green}{$\uparrow$}}) & $84.6$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $92.6$ & $85.7$ & $86.5$ & $87.2$ & $84.2$ \\ MM & Mean-w/ & $90.1$ & $84.5$ & $84.4$ & $84.3$ & $80.0$ \\ (0, 5) & Stat-w/o & $92.6$ & $85.7$ & $86.7$ & $87.2$ & $84.1$ \\ & Stat-w/ (ours) & $92.2$ ({\color{red}{$\downarrow$}}) & $85.7$ ({\color{green}{$\uparrow$}}) & $86.9$ ({\color{green}{$\uparrow$}}) & $87.1$ ({\color{red}{$\downarrow$}}) & $84.6$ ({\color{green}{$\uparrow$}}) \\ \bottomrule \end{tabular} \label{table:exp_avmnist_gauss_moda2} \end{table} \subsection{Emotion Recognition on RAVDESS} We consider emotion recognition on RAVDESS \cite{livingstone2018ravdess,xue2021multimodal}. We use a similar network structure for both the image and audio modalities: three convolution layers, two max-pooling layers, and three FC layers connected sequentially. We adopt baselines and experimental settings similar to those used for AV-MNIST. Results are reported in Table \ref{table:exp_ravdess_moda1}. \vspace{6pt} \noindent\textbf{Results}\quad Still, in the case of mean fusion, confidence-based weighted mean fusion usually doesn't help except in a few cases, while our proposed Jacobian regularization exhibits good compatibility with different trained models under various perturbations. The bold results imply that the multimodal network with our Jacobian regularization invoked outperforms all other models except under the FGSM attack ($\omega_4=0.03$). Due to page limits, extra results (e.g., perturbation on the image modality) are presented in the Supplementary. \begin{table}[!ht] \small \centering \caption{Accuracies of different models (\%) are evaluated on RAVDESS when audio features are perturbed.
Mean accuracies are reported after repeatedly running 20 times.} \vspace{2pt} \begin{tabular}{clcccccccc} \toprule UM / MM & ~~~~~Model & Clean & $\omega_0 = 1.0$ & $\omega_1 = 6$ & $\omega_4 = 0.03$ & $\omega_5 = 0.001$\\ \midrule & 0: Img-regT & $82.5$ & $82.5$ & $82.5$ & $82.5$ & $82.5$ \\ & 1: Aud-regT & $71.9$ & $54.2$ & $59.3$ & $31.5$ & $29.9$ \\ UM & 2: Aud-advT & $78.6$ & $43.1$ & $71.9$ & $53.3$ & $62.3$\\ Nets & 3: Aud-freT & $66.4$ & $19.5$ & $62.4$ & $56.0$ & $60.5$\\ & 4: Aud-staT & $71.8$ & $58.3$ & $59.8$ & $25.9$ & $29.6$ \\ & 5: Aud-mixT & $74.6$ & $54.5$ & $66.9$ & $15.7$ & $19.2$\\ \midrule & Mean-w/o & $89.8$ & $82.8$ & $86.7$ & $57.0$ & $63.6$ \\ MM & Mean-w/ & $88.5$ & $80.4$ & $84.1$ & $60.7$ & $57.3$\\ (0, 1) & Stat-w/o & $89.9$ & $82.9$ & $86.3$ & $56.9$ & $63.3$ \\ & Stat-w/ (ours) & $89.9$ ({\color{green}{$\uparrow$}}) & $85.0$ ({\color{green}{$\uparrow$}}) & $87.7$ ({\color{green}{$\uparrow$}}) & $59.4$ ({\color{red}{$\downarrow$}}) & $68.1$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $90.8$ & $76.9$ & $89.5$ & $87.3$ & $90.6$\\ MM & Mean-w/ & $89.3$ & $76.9$ & $87.6$ & $79.7$ & $85.9$\\ (0, 2) & Stat-w/o & $90.8$ & $77.1$ & $89.6$ & $87.2$ & $90.5$ \\ & Stat-w/ (ours) & $\bm{91.4}$ ({\color{green}{$\uparrow$}}) & $79.9$ ({\color{green}{$\uparrow$}}) & $\bm{90.2}$ ({\color{green}{$\uparrow$}}) & $88.6$ ({\color{green}{$\uparrow$}}) & $90.6$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $90.5$ & $60.8$ & $89.6$ & $\bm{90.1}$ & $90.8$\\ MM & Mean-w/ & $86.9$ & $64.0$ & $86.6$ & $82.2$ & $85.7$\\ (0, 3) & Stat-w/o & $90.5$ & $61.5$ & $89.2$ & $90.0$ & $90.7$ \\ & Stat-w/ (ours) & $90.8$ ({\color{green}{$\uparrow$}}) & $65.6$ ({\color{green}{$\uparrow$}}) & $90.0$ ({\color{green}{$\uparrow$}}) & $89.5$ ({\color{red}{$\downarrow$}}) & $\bm{90.7}$ ({\color{red}{$\downarrow$}}) \\ \hline & Mean-w/o & $90.7$ & $88.4$ & $88.5$ & $69.3$ & $78.8$\\ MM & Mean-w/ & $89.2$ & $85.4$ & $85.5$ & $68.9$ & $69.3$\\ (0, 4) & Stat-w/o & 
$90.4$ & $88.3$ & $88.5$ & $69.7$ & $78.9$ \\ & Stat-w/ (ours) & $90.8$ ({\color{green}{$\uparrow$}}) & $\bm{88.8}$ ({\color{green}{$\uparrow$}}) & $88.7$ ({\color{green}{$\uparrow$}}) & $75.3$ ({\color{green}{$\uparrow$}}) & $82.3$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $89.5$ & $87.8$ & $87.9$ & $81.7$ & $82.7$\\ MM & Mean-w/ & $87.9$ & $85.8$ & $86.5$ & $82.0$ & $79.7$\\ (0, 5) & Stat-w/o & $89.4$ & $87.7$ & $87.9$ & $81.6$ & $82.5$ \\ & Stat-w/ (ours) & $86.6$ ({\color{red}{$\downarrow$}}) & $86.7$ ({\color{red}{$\downarrow$}}) & $86.0$ ({\color{red}{$\downarrow$}}) & $82.9$ ({\color{green}{$\uparrow$}}) & $83.7$ ({\color{green}{$\uparrow$}}) \\ \bottomrule \end{tabular} \label{table:exp_ravdess_moda1} \end{table} \subsection{VGGSound} \begin{table}[!ht] \small \centering \caption{Accuracies of different models (\%) are evaluated on VGGSound when video features are perturbed. Mean accuracies are reported after repeatedly running 5 times.} \vspace{2pt} \begin{tabular}{clcccccccc} \toprule UM / MM & ~~~~~Model & Clean & $\omega_0 = 1.5$ & $\omega_1 = 6$ & $\omega_4 = 0.03$ & $\omega_5 = 0.001$\\ \midrule & 0: Aud-regT & $54.4$ & $15.0$ & $49.8$ & $23.0$ & $19.9$ \\ & 1: Img-regT & $27.4$ & $5.8$ & $27.4$ & $9.5$ & $9.0$ \\ UM & 2: Img-advT & $27.5$ & $5.3$ & $27.4$ & $10.7$ & $10.3$\\ Nets & 3: Img-freT & $25.2$ & $4.0$ & $24.2$ & $20.4$ & $22.9$\\ & 4: Img-staT & $27.0$ & $6.9$ & $26.9$ & $10.5$ & $9.6$ \\ & 5: Img-mixT & $27.2$ & $8.4$ & $27.1$ & $7.3$ & $7.2$\\ \midrule & Mean-w/o & $57.7$ & $45.8$ & $57.7$ & $35.0$ & $25.7$ \\ MM & Mean-w/ & $53.9$ & $48.6$ & $53.9$ & $37.9$ & $20.5$\\ (0, 1) & Stat-w/o & $58.5$ & $46.0$ & $58.4$ & $35.3$ & $26.0$ \\ & Stat-w/ (ours) & $60.1$ ({\color{green}{$\uparrow$}}) & $51.2$ ({\color{green}{$\uparrow$}}) & $60.0$ ({\color{green}{$\uparrow$}}) & $39.7$ ({\color{green}{$\uparrow$}}) & $28.5$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $58.8$ & $45.0$ & $58.8$ & $37.2$ & $26.9$\\ MM & Mean-w/ & 
$54.1$ & $49.0$ & $54.1$ & $38.3$ & $21.3$\\ (0, 2) & Stat-w/o & $59.4$ & $45.4$ & $59.4$ & $37.3$ & $27.2$ \\ & Stat-w/ (ours) & $\bm{61.1}$ ({\color{green}{$\uparrow$}}) & $50.1$ ({\color{green}{$\uparrow$}}) & $\bm{61.1}$ ({\color{green}{$\uparrow$}}) & $41.2$ ({\color{green}{$\uparrow$}}) & $29.7$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $57.9$ & $51.5$ & $57.9$ & $57.8$ & $57.7$\\ MM & Mean-w/ & $55.2$ & $53.6$ & $55.2$ & $55.0$ & $55.2$\\ (0, 3) & Stat-w/o & $58.8$ & $52.6$ & $58.8$ & $55.6$ & $57.7$ \\ & Stat-w/ (ours) & $56.7$ ({\color{red}{$\downarrow$}}) & $53.2$ ({\color{red}{$\downarrow$}}) & $56.7$ ({\color{red}{$\downarrow$}}) & $\bm{58.5}$ ({\color{green}{$\uparrow$}}) & $\bm{57.9}$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $57.5$ & $47.2$ & $57.3$ & $36.9$ & $30.9$\\ MM & Mean-w/ & $53.5$ & $49.2$ & $53.4$ & $37.2$ & $23.4$\\ (0, 4) & Stat-w/o & $58.2$ & $47.5$ & $58.1$ & $37.4$ & $31.0$ \\ & Stat-w/ (ours) & $59.8$ ({\color{green}{$\uparrow$}}) & $51.9$ ({\color{green}{$\uparrow$}}) & $59.7$ ({\color{green}{$\uparrow$}}) & $41.0$ ({\color{green}{$\uparrow$}}) & $34.0$ ({\color{green}{$\uparrow$}}) \\ \hline & Mean-w/o & $57.8$ & $55.3$ & $57.8$ & $53.9$ & $53.5$\\ MM & Mean-w/ & $55.6$ & $54.7$ & $55.6$ & $54.3$ & $49.5$\\ (0, 5) & Stat-w/o & $59.1$ & $\bm{56.4}$ & $59.0$ & $54.5$ & $54.3$ \\ & Stat-w/ (ours) & $57.1$ ({\color{red}{$\downarrow$}}) & $55.9$ ({\color{red}{$\downarrow$}}) & $57.0$ ({\color{red}{$\downarrow$}}) & $55.2$ ({\color{green}{$\uparrow$}}) & $55.4$ ({\color{green}{$\uparrow$}}) \\ \bottomrule \end{tabular} \label{table:exp_vggsound_moda1} \end{table} We consider a classification task on the real-world dataset VGGSound~\cite{chen2020vggsound}, where two modalities, audio and video, are available. To construct an affordable problem, we restrict classification to a subset of all classes: we randomly choose 100 classes and work only with them.
Consequently, there are 53622 audio-video pairs for training and 4706 audio-video pairs for testing. For the audio modality, we apply a short-time Fourier transform to each raw waveform, generating a 313 $\times$ 513 spectrogram. We use a 2D ResNet-18~\cite{resnet} to process these spectrograms. For the video modality, we evenly sample 32 frames from each 10-second video, resulting in an input feature size of 32 $\times$ 256 $\times$ 256. We use a ResNet-18 network in which 2D convolutions have been replaced by 3D convolutions to process the video input. \vspace{6pt} \noindent\textbf{Results}\quad We present the results of mean fusion and statistical fusion with or without robust add-ons in Table \ref{table:exp_vggsound_moda1}. Unlike the previous experiments, here we keep the audio modality clean and assume corruptions/attacks on the video modality. The results demonstrate that our method outperforms its counterpart, statistical fusion without Jacobian regularization, in most cases. Furthermore, the bold results imply that our Jacobian regularization can enhance model robustness under various corruptions and attacks. {In the Supplementary, we consider another important setting: the video modality (or audio modality) is corrupted, but we don't know which one.} Thus, both modalities are trained with robust add-ons. Please refer to the Supplementary for more details.
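As a minimal illustration of the quantity our Jacobian regularizer controls, the sketch below estimates the squared Frobenius norm of the input-output Jacobian by finite differences; a training implementation would use automatic differentiation instead, and this is not our exact training objective:

```python
import numpy as np

def num_jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian of f at x, where f maps R^n -> R^m."""
    fx = np.asarray(f(x), dtype=float)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x, dtype=float)
        dx[i] = eps
        J[:, i] = (np.asarray(f(x + dx), dtype=float) - fx) / eps
    return J

def jacobian_penalty(f, x):
    """Squared Frobenius norm of the input-output Jacobian -- the quantity a
    Jacobian regularizer adds (times a coefficient) to the task loss."""
    J = num_jacobian(f, x)
    return float(np.sum(J ** 2))

# Sanity check: for a linear map f(x) = A @ x the Jacobian is A itself,
# so the penalty equals ||A||_F^2 = 1 + 4 + 9 + 16 = 30.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
penalty = jacobian_penalty(lambda x: A @ x, np.zeros(2))
```

Penalizing this norm discourages small input perturbations from producing large output changes, which is the intuition behind its robustness benefit.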
\section{Introduction} The large-scale structure (LSS) in the late Universe is a fundamental probe of the cosmological model, sensitive to both universal expansion and structure growth, and complementary to early Universe observations from the cosmic microwave background. The LSS can be mapped by large redshift surveys through systematic measurements of the three-dimensional positions of matter tracers such as galaxies or quasars. Because the observed LSS is the result of the growth of initial matter perturbations through gravity in an expanding universe, it gives the possibility of testing both the expansion and structure-growth histories, which in turn puts us in a unique position to address the question of the origin of the late acceleration of the expansion and dark energy \citep{clifton_modified_2012, weinberg_observational_2013, zhai_evaluation_2017, ferreira_cosmological_2019}. Over the last two decades, redshift surveys have explored increasingly larger volumes of the Universe at different cosmic times. The methodology to extract the cosmological information from those redshift surveys has evolved and has now reached maturity. Particularly, the baryon acoustic oscillations (BAO) and the redshift-space distortions (RSD) in the two-point and three-point statistics of the galaxy spatial distribution are now key observables to constrain cosmological models. The BAO horizon scale imprinted in the matter distribution was frozen in the LSS at the drag epoch, slightly after matter-radiation decoupling. This characteristic scale can still be seen in the large-scale distribution of galaxies at late times and be used as a standard ruler to measure the expansion history. At the same time, the galaxy peculiar velocities, which distort line-of-sight cosmological distances inferred from observed redshifts, are sensitive on large scales to the coherent motions induced by the growth rate of structure, which in turn depends on the strength of gravity.
BAO and RSD are highly complementary, as they allow both geometrical and dynamical cosmological constraints from the same observations. The signature of baryons in the clustering of galaxies was first detected in the Sloan Digital Sky Survey (SDSS; \citealt{eisenstein_detection_2005}) and 2dF Galaxy Redshift Survey (2dFGRS; \citealt{percival_2df_2001,cole_2df_2005}). Since then, further measurements using the 2dFGRS, SDSS and additional surveys have improved the accuracy of BAO measurements and extended the range of redshifts covered from $z=0$ to $z=1$. Examples of analyses include those of the SDSS-II \citep{percival_baryon_2010}, 6dFGS \citep{beutler_6df_2011}, WiggleZ \citep{kazin_wigglez_2014}, and SDSS-MGS \citep{ross_clustering_2015} galaxy surveys. An important milestone was achieved with the Baryon Oscillation Spectroscopic Survey (BOSS; \citealt{dawson_baryon_2013}), part of the third generation of the Sloan Digital Sky Survey \citep{eisenstein_sdss-iii:_2011}. This allowed the most precise BAO measurements achieved to date, both using galaxies as direct tracers \citep{alam_clustering_2017} and using Lyman-$\alpha$ forest measurements \citep{bautista_measurement_2017, du_mas_des_bourboux_baryon_2017}, reaching a relative precision of 1 per cent on the distance relative to the sound horizon at the drag epoch. Although RSD have been understood and measured since the late 1980s \citep{kaiser_clustering_1987}, it is only in the last decade, with significant interest in deviations from standard gravity that could explain the apparent late-time acceleration of the expansion of the Universe, that the ability of RSD measurements to provide such tests has been explored \citep{guzzo_test_2008,song_reconstructing_2009}.
This has resulted in renewed interest in RSD with examples of RSD measurement from the WiggleZ \citep{blake_wigglez_2011}, 6dFGRS \citep{beutler_6df_2012}, SDSS-II \citep{samushia_interpreting_2012}, SDSS-MGS \citep{Howlett_clustering_2015}, FastSound \citep{okumura_subaru_2016}, and VIPERS \citep{pezzotta_vimos_2017} galaxy surveys, with BOSS achieving the best precision of $\sim6$\% on the parameter combination $f\sigma_8$ \citep{beutler_clustering_2017, grieb_clustering_2017, sanchez_clustering_2017, satpathy_clustering_2017}, which is commonly used to quantify the amplitude of the velocity power spectrum. The extended Baryon Oscillation Spectroscopic Survey (eBOSS; \citealt{dawson_sdss-iv_2016}) program is the successor of BOSS in the fourth generation of the SDSS \citep{blanton_sloan_2017}. It maps the LSS using four main tracers: Luminous Red Galaxies (LRGs), Emission Line Galaxies (ELGs), quasars used as direct tracers of the density field, and quasars from whose spectra we can measure the Ly$\alpha$ forest. With respect to BOSS, it explores galaxies at higher redshifts, covering the range $0.6<z<2.2$. Using the first two years of data from Data Release 14 (DR14), BAO and RSD measurements have been performed using different tracers and methods: LRG BAO \citep{bautista_sdss-iv_2018}, LRG RSD \citep{icaza-lizaola_clustering_2020}, quasar BAO \citep{ata_clustering_2018}, quasar BAO with redshift weights \citep{zhu_clustering_2018}, quasar BAO Fourier-space \citep{wang_clustering_2018}, quasar RSD Fourier-space \citep{gil-marin_clustering_2018}, quasar RSD Fourier-space with redshift weights \citep{ruggeri_optimal_2017, ruggeri_clustering_2019}, quasar RSD in configuration space \citep{hou_clustering_2018, zarrouk_clustering_2018}, and quasar tomographic RSD in Fourier space with redshift weights \citep{zhao_clustering_2019}. In this paper we perform the BAO and RSD analyses in configuration space of the completed eBOSS LRG sample, part of Data Release 16. 
This work is part of a series of papers using different tracers and methods\footnote{A summary of all SDSS BAO and RSD measurements with accompanying legacy figures can be found at \\ \url{sdss.org/science/final-bao-and-rsd-measurements/} \\ and the cosmological interpretation of these measurements can be found at \\ \url{sdss.org/science/cosmology-results-from-eboss/}}. The official SDSS-IV DR16 quasar catalog is described in \citet{lyke_2020}. The production of the catalogs specific for large-scale clustering measurements of the quasar and LRG sample (input for this work) is described in \citet{ross_2020}, while the analogous work for the ELG sample is described in \cite{raichoor_2020}. From the same LRG catalog, \citet{gil-marin_2020} report the BAO and RSD analyses in Fourier space. The BAO and RSD constraints from the quasar sample are presented by \citet{hou_2020} in configuration space and by \citet{neveux_2020} in Fourier space. The clustering from the ELG sample is described by \citet{de_mattia_2020} in Fourier space and by \citet{amelie_2020} in configuration space. Finally, a series of articles describes the simulations used to test the different methodologies for each tracer. The approximate mocks used to estimate covariance matrices and assess observational systematics for the LRG, ELG, and quasar samples are described in \citet{zhao_2020} (see also \citet{lin_2020} for an alternative method for ELGs), while realistic N-body simulations were produced by \citet{rossi_2020} for the LRG sample, by \citet{smith_2020} for the quasar sample, and by \citet{alam_2020} for the ELG sample. In \citet{avila_2020}, halo occupation models for ELGs are studied. A machine-learning method to remove systematics caused by photometry was applied to the ELG sample \citep{kong_2020} and a new method to account for fiber collisions in the eBOSS sample is described in \citet{mohammad_2020}. 
The BAO analysis of the Lyman-$\alpha$ forest sample is presented by \citet{du_mas_des_bourboux_2020}. The final cosmological implications from all these clustering analyses are presented in \citet{mueller_2020}. The paper is organized as follows. Section \ref{sec:dataset} describes the LRG dataset and simulations used in this analysis. Section \ref{sec:method} presents the adopted methodology and particularly the BAO and RSD theoretical models. We estimate biases and systematic errors from different sources in Section \ref{sec:robustness}. We present BAO and RSD results in Section \ref{sec:results} and finally conclude in Section \ref{sec:conclusion}. \section{Dataset} \label{sec:dataset} In this section, we summarize the observations, catalogs, and mock datasets that are used to test our methodology, as well as the clustering statistics used in this work. \subsection{Spectroscopic observations and reductions} The fourth generation of the Sloan Digital Sky Survey \citep[SDSS-IV;][]{blanton_sloan_2017} employed the two multi-object BOSS spectrographs \citep{smee_multi-object_2013} installed on the 2.5-meter telescope \citep{gunn_2.5_2006} at the Apache Point Observatory in New Mexico, USA, to carry out spectroscopic observations for eBOSS. The target sample of LRGs, the analysis of which is our focus, was selected from the optical SDSS photometry from DR13 \citep{albareti_13th_2017}, with additional infrared information from the WISE satellite \citep{lang_wise_2014}. The final targeting algorithm is described in detail in \citet{prakash_sdss-iv_2016} and produced about 60 deg$^{-2}$ LRG targets over the 7500 deg$^{2}$ of the eBOSS footprint, of which 50 deg$^{-2}$ were observed spectroscopically. The selection was tested over 466 deg$^2$ covered during the Sloan Extended Quasar, ELG, and LRG Survey (SEQUELS), confirming that more than 41 deg$^{-2}$ LRGs have $0.6 < z < 1.0$ \citep{dawson_sdss-iv_2016}.
The raw CCD images were converted to one-dimensional, wavelength and flux calibrated spectra using version \textsc{v5\_13\_0} of the SDSS spectroscopic pipeline \textsc{idlspec2d}\footnote{Publicly available at \url{sdss.org/dr16/software/products}}. Two main improvements of this pipeline since its previous release \citep[DR14;][]{abolfathi_fourteenth_2018} include a new library of stellar templates for flux calibration and a more stable extraction procedure. \citet{ahumada_sixteenth_2019} provide a summary of all improvements of the spectroscopic pipeline since SDSS-III. The redshift of each LRG was estimated with the \textsc{redrock} algorithm\footnote{Publicly available at \url{github.com/desihub/redrock}}. This algorithm improves classification rates with respect to its predecessor \textsc{redmonster} \citep{hutchinson_redshift_2016}. \textsc{redrock} uses templates derived from principal component analysis of SDSS data to classify spectra, which is followed by a redshift refinement procedure that uses stellar population models for galaxies. On average, 96.5 per cent of spectra yield a confident redshift estimate with \textsc{redrock} compared to 90 per cent with \textsc{redmonster}, with less than 1 per cent of catastrophic redshift errors (details can be found in Ross et al., 2020). \subsection{Survey geometry and observational features} The full procedure to model the survey geometry and correct for observational features is described in detail in the companion paper \citet{ross_2020}. We summarize it in the following. The random catalog allows estimating the survey geometry and number density of galaxies in the observed sample. It contains a random population of objects with the same radial and angular selection functions as the data. A random uniform sample of points is drawn over the angular footprint of eBOSS targets to model its geometry. 
We use random samples with 50 times more objects than in the data to minimize the shot noise contribution in the estimated correlation function, and redshifts are randomly taken from galaxy redshifts in the data. A series of masks are then applied to both data and random samples in order to eliminate regions with bad photometric properties, targets that collide with quasar spectra (which had priority in fiber assignment), and the centerpost region of the plates where it is physically impossible to put a fiber. All masks combined cover 17 per cent of the initial footprint, with the quasar collision mask accounting for 11 per cent. The spectroscopic information is finally matched to the remaining targets. About 4 per cent of the LRG targets were not observed due to \textit{fiber collisions}, i.e., when a group of two or more galaxies are closer than 62$^{\prime\prime}$ they cannot all receive a fiber. On regions of the sky observed more than once, some collisions could be resolved. These collisions can bias the clustering measurements, so we applied the following correction: for a collision group of $N_{\rm targ}$ objects of which $N_{\rm spec}$ have a spectrum, all objects are up-weighted by $w_{\rm cp} = N_{\rm targ}/N_{\rm spec}$. This differs from \citet{bautista_sdss-iv_2018}, where the weight of the collided object without spectrum was transferred to its nearest neighbor with a valid spectrum. Both corrections are only approximations valid on scales larger than 62$^{\prime\prime}$. An unbiased correction method is described in \cite{bianchi_unbiased_2017} and applied to eBOSS samples in \citet{mohammad_2020}. We show in Appendix \ref{app:pip_weights} that our results are insensitive to the correction method since it affects mostly the smallest scales. A procedure similar to that of \citet{bautista_sdss-iv_2018} was used to account for the $3.5$ per cent of LRG targets without a reliable redshift estimate.
The \textit{redshift-failure} weight $w_{\rm noz}$ acts as an inverse probability weight, boosting galaxies with good redshifts such that this weighted sample is an unbiased sampling of the full population. This assumes that the probability of a given galaxy being selected is a function of both its trace position on the CCD and the overall signal-to-noise ratio of the spectrograph in which this target was observed, and that the galaxies not observed are statistically equivalent to the observed galaxies. Spurious fluctuations in the target selection caused by the photometry are corrected by weighting each galaxy by $w_{\rm sys}$. These weights are computed with a multi-linear regression on the observed relations between the angular over-densities of galaxies versus stellar density, seeing and galactic extinction. Fitting all quantities simultaneously automatically accounts for their correlations. The weights $w_{\rm noz}$ and $w_{\rm sys}$ are computed independently. The observational completeness creates artificial angular variations of the density that are accounted for using the random catalog. The completeness is defined as the ratio of the number of weighted spectra (including those classified as stars or quasars) to the number of targets (Eq.~11 in Ross et al., 2020). This quantity is computed per sky \textit{sector}, i.e., a connected region observed by a unique set of plates. We downweight each point in the random catalog by the completeness of its corresponding sky sector. Optimal weights for large-scale correlations, known as FKP weights \citep{feldman_power-spectrum_1994}, are computed with the estimated comoving density of tracers $\bar{n}(z)$ as a function of redshift using our fiducial cosmology in Table~\ref{tab:cosmologies}. 
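The FKP weight has the standard closed form $w_{\rm FKP} = 1/(1+\bar{n}(z)P_0)$, which can be sketched as follows; the amplitude $P_0 = 10^4\,h^{-3}{\rm Mpc}^3$ is a typical choice for LRG samples and is an assumption here, not a value restated from the text:

```python
def fkp_weight(n_z, P0=1.0e4):
    """FKP weight w_FKP = 1 / (1 + n(z) * P0).

    n_z : tracer comoving density in (h/Mpc)^3.
    P0  : power-spectrum amplitude in (Mpc/h)^3 near the scales of
          interest; the value 1e4 is a typical LRG choice assumed here.
    """
    return 1.0 / (1.0 + n_z * P0)

# Dense regions are down-weighted; where n(z) * P0 = 1 the weight is 0.5,
# and the weight tends to 1 as the density goes to zero.
w_half = fkp_weight(1.0e-4)
```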
The final weight for each galaxy is defined\footnote{Note that this definition differs from the one used in BOSS, where $w = (w_{\rm noz}+ w_{\rm cp}-1)w_{\rm sys} w_{\rm FKP}$.} as $w = w_{\rm noz}w_{\rm cp}w_{\rm sys} w_{\rm FKP}$. The weight for each galaxy from the random catalogue is the same, with the completeness information already included in $w_{\rm sys}$. The eBOSS sample of LRGs overlaps in area and redshift range with the highest-redshift bin of the CMASS sample ($0.5<z<0.75$). We combine the eBOSS LRG sample with all the $z > 0.6$ BOSS CMASS galaxies and their corresponding random catalog (including the regions not overlapping with eBOSS), making sure that the data-to-random number ratio is the same for both samples. This combination is beneficial for two reasons. First, the combined sample supersedes the last redshift bin of BOSS measurements while being completely independent of the first two lower redshift bins. Second, the reconstruction technique applied to this sample (see next section) benefits from a higher density of tracers, reducing potential noise introduced by the procedure. The new eBOSS LRG sample covers 4,242 deg$^{2}$ of the total BOSS CMASS footprint of 9,494 deg$^2$ (NGC and SGC combined). Considering their spectroscopic weights, the new eBOSS sample has 185,295 new redshifts over $0.6 < z < 1.0$, while CMASS contributes 104,865 redshifts in the overlapping area and 111,892 in the non-overlapping area. A total of 402,052 LRGs over $0.6 < z < 1.0$ contribute to this measurement, with a total effective comoving volume of 2.72 Gpc$^3$ (1.43 Gpc$^3$ from the CMASS sample and 1.28 Gpc$^3$ from the new eBOSS sample). A detailed description of these numbers is given in \citet{ross_2020}. In the following, we simply refer to the combined CMASS+LRG sample as the eBOSS LRG sample. The number densities of CMASS galaxies, LRGs, and the combined CMASS+LRG sample are presented in Fig. \ref{fig:nz}. 
\begin{figure} \centering \includegraphics[width=\columnwidth]{figures/nz.png} \caption{The observed number density of eBOSS LRGs (dashed curve), BOSS CMASS galaxies (dotted curve), and combined CMASS+LRG sample galaxies (solid curve) at $0.6<z<1$. This combines NGC and SGC fields.} \label{fig:nz} \end{figure} \subsection{Reconstruction} \label{sec:reconstruction} While constraints on the growth rate of structure are obtained using the information from the full shape of the correlation function, BAO analyses extract the cosmological information only from the position of the BAO peak. In our BAO analysis, we applied the reconstruction technique of \citet{burden_efficient_2014, burden_reconstruction_2015} to the observed galaxy density field in order to remove a fraction of the redshift-space distortions, as well as the non-linear motions of galaxies that smear out the BAO peak. This technique sharpens the BAO feature in the two-point statistics in Fourier and configuration space, increasing the precision of the measurement of the acoustic scale. Reconstruction is applied to the actual data and to the mock catalogs using a publicly available\footnote{\url{https://github.com/julianbautista/eboss_clustering}} code \citep{bautista_sdss-iv_2018}. Our final BAO results are solely based on reconstructed catalogs, while full-shape results use the pre-reconstruction sample. We apply reconstruction to the full eBOSS+CMASS final LRG catalog. We use our fiducial cosmology from Table~\ref{tab:cosmologies} to convert redshifts to comoving distances. For the reconstruction, we fix the bias value to $b = 2.3$ and assume the standard gravity relation between the growth rate of structure and $\Omega_m$, i.e. $f = \Omega_m(z=0.7)^{6/11} = 0.815$. We use a smoothing scale of $15$ \hmpc. The BAO results are not sensitive to small variations of those parameter choices, as studied in \cite{carter_impact_2019}. 
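The growth-rate value quoted above follows from the flat-$\Lambda$CDM approximation $f(z) \simeq \Omega_m(z)^{6/11}$; a quick numerical check with the fiducial $\Omega_m = 0.31$ (radiation and neutrinos neglected in this sketch):

```python
def growth_rate(z, Om0=0.310):
    """Growth rate f(z) ~ Omega_m(z)^(6/11) in flat LCDM, the
    standard-gravity approximation used for reconstruction."""
    Ez2 = Om0 * (1.0 + z) ** 3 + (1.0 - Om0)  # (H/H0)^2
    return (Om0 * (1.0 + z) ** 3 / Ez2) ** (6.0 / 11.0)
```

Evaluating `growth_rate(0.7)` reproduces the value $f \simeq 0.815$ used above to per-mille accuracy; the small residual comes from rounding and the neglected neutrino contribution.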
\subsection{Mocks} \label{sec:mocks_description} In order to test the overall methodology and study the impact of systematic effects, we have constructed several sets of mock samples. Approximate methods are considered to be sufficient for covariance matrix estimates and to derive systematic biases in BAO measurements. However, the full-shape analysis of the correlation function requires more realistic N-body simulations, particularly in order to test the modeling. In this study, our synthetic datasets are the following: \begin{itemize} \item 1000 realisations of the LRG eBOSS+CMASS survey geometry using the \textsc{EZmock} method \citep{chuang_ezmocks:_2015}, which employs the Zel'dovich approximation to compute the density field at a given redshift and populate it with galaxies. This method is fast and has been calibrated to reproduce the two- and three-point statistics of the given galaxy sample, to a good approximation and up to mildly non-linear scales. The angular and redshift distributions of the eBOSS LRG sample in combination with the $z>0.6$ CMASS sample were reproduced in these mock catalogs. The full description of the \textsc{EZmock} LRG samples can be found in the companion paper Zhao et al. 2020. We use these mocks in several steps of our analysis: to infer the error covariance matrix of our clustering measurements in the data, to study the impact of observational systematic effects on cosmology, and to estimate the correlations between different methods for the calculation of the consensus results. \item 84 realisations of the \textsc{Nseries} mocks, which are N-body simulation snapshots populated with a single Halo Occupation Distribution (HOD) model. These mock catalogs reproduce the angular and redshift distributions of the North Galactic Cap of the BOSS CMASS sample within the redshift range $0.43<z<0.70$ \citep{alam_clustering_2017}. 
While this dataset is not fully representative of the eBOSS LRG sample, we use these N-body mocks to test the RSD models down to the non-linear regime. The number of available realisations and their large volume are ideal to test model accuracy in the high-precision regime. The covariance matrix for these mocks was computed from 2048 realisations of the same volume with the \textsc{MD-Patchy} approximated method \citep{kitaura_modelling_2014}. The redshift of those mocks is $z = 0.55$. \item 27 realisations extracted from the {\sc OuterRim} N-body simulation \citep{heitmann_outer_2019}, corresponding to cubical mocks of $1~h^{-3}~{\rm Gpc}^3$ each. The dark matter haloes have been populated with galaxies using four different HOD models \citep{zheng_galaxy_2007, leauthaud_theoretical_2011, tinker_evolution_2013, hearin_beyond_2015} at three different luminosity thresholds to cover a large range of galaxy populations. These mocks are part of our internal \textsc{MockChallenge} and are aimed at quantifying potential systematic errors originating from the HODs. A detailed description of these simulations and the \textsc{MockChallenge} can be found in the companion paper \citet{rossi_2020}. The redshift of those mocks is $z=0.695$. \end{itemize} \subsection{Fiducial cosmologies} \label{sec:fiducial_cosmology} The redshift of each galaxy is converted into a radial comoving distance for clustering measurements by means of a fiducial cosmology. The fiducial cosmologies employed in this work are shown in Table~\ref{tab:cosmologies}. Our baseline choice, named ``Base'', is a flat $\Lambda$CDM model matching the cosmology used in previous BOSS analyses \citep{alam_clustering_2017}, with parameters within 1$\sigma$ of the Planck best-fit parameters \citep{planck_collaboration_planck_2018-1}. Some of these cosmologies were used to produce the mock datasets described in Section \ref{sec:mocks_description}. 
A choice of fiducial cosmology is also needed when computing the linear power spectrum $P_{\rm lin}(k)$, the input to all our correlation function models in this work (see Sections~\ref{sec:bao_modelling} and \ref{sec:rsd_modelling}). In Sections~\ref{sec:systematics_bao} and \ref{sec:systematics_rsd} we study the dependence of our results on the choice of fiducial cosmology. \begin{table} \centering \caption{Sets of cosmological models used in this work. All models are parameterised by their fraction of the total energy density in the form of total matter $\Omega_m$, cold dark matter $\Omega_c$, baryons $\Omega_b$, and neutrinos $\Omega_\nu$, the Hubble constant $h = H_0/(100 {\rm km/s/Mpc})$, the primordial spectral index $n_s$ and the primordial amplitude of the power spectrum $A_s$. With these parameters we compute the normalisation of the linear power spectrum $\sigma_8$ at $z=0$ and the comoving sound horizon scale at the drag epoch $r_{\rm drag}$. The different labels refer to our baseline choice (Base), the {\sc EZmocks} (EZ), the {\sc Nseries} (NS), the {\sc OuterRim} (OR) cosmologies, and an additional model (X) with a larger value of $\Omega_m$. 
} \label{tab:cosmologies} \begin{tabular}{cccccc} \hline \hline & Base & \sc EZ & \sc NS & \sc OR & $X$ \\ \hline $\Omega_m$& 0.310 & 0.307 & 0.286 & 0.265 & 0.350 \\ $\Omega_c$& 0.260 & 0.259 & 0.239 & 0.220 & 0.300 \\ $\Omega_b$& 0.048 & 0.048 & 0.047 & 0.045 & 0.048 \\ $\Omega_\nu$& 0.0014 & 0 & 0 & 0 & 0.0014 \\ $h$& 0.676 & 0.678 & 0.700 & 0.710 & 0.676 \\ $n_s$& 0.970 & 0.961 & 0.960 & 0.963 & 0.970 \\ $A_s$ [$10^{-9}$]& 2.041& 2.116& 2.147& 2.160& 2.041 \\ $\sigma_8(z=0)$ & 0.800 & 0.823 & 0.820 & 0.800 & 0.874 \\ $r_{\rm drag}$ [Mpc] & 147.78 & 147.66 & 147.15 & 149.35 & 143.17 \\ \hline \hline \end{tabular} \end{table} \begin{table} \centering \caption{Values for the comoving angular diameter distance $D_M$ and the Hubble distance $D_H = c/H(z)$ in units of the sound horizon scale at drag epoch $r_d$, and the normalised growth rate of structures $f{\sigma_8}$. These values are predictions from the cosmological models in Table~\ref{tab:cosmologies} computed at typical redshifts used in this work. } \label{tab:cosmo_derived_parameters} \begin{tabular}{lcccc} \hline \hline Model & $z_{\rm eff}$ & $\frac{D_M}{r_{\rm drag}}$ & $\frac{D_H}{r_{\rm drag}}$ & $f{\sigma_8}$ \\ \hline Base & 0.698 & 17.436 & 20.194 & 0.456 \\ Base & 0.560 & 14.529 & 21.960 & 0.465 \\ {\sc EZ} & 0.698 & 17.429 & 20.211 & 0.467 \\ {\sc NS} & 0.560 & 14.221 & 21.692 & 0.469 \\ {\sc OR} & 0.695 & 16.717 & 19.866 & 0.447 \\ X & 0.698 & 17.685 & 20.146 & 0.504 \\ X & 0.560 & 14.778 & 22.019 & 0.518 \\ \hline \hline \end{tabular} \end{table} We define the effective redshift of our data and mock catalogs as the weighted mean redshift of galaxy pairs, \begin{equation} z_{\rm eff} =\frac{\sum_{i>j}w_i w_j(z_i+z_j)/2}{\sum_{i>j}w_i w_j}, \label{eq:zeff} \end{equation} where $w_i$ is the total weight of the galaxy $i$ and the indices $i$,$j$ run over the galaxies in the considered catalog. 
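A direct, if naive, $\mathcal{O}(N^2)$ evaluation of Eq.~\ref{eq:zeff} can be sketched as follows (the function name is illustrative, and the pair-separation selection applied to the data is omitted here):

```python
import numpy as np

def effective_redshift(z, w):
    """Pair-weighted mean redshift of Eq. (zeff): sum over pairs
    i != j of w_i w_j (z_i + z_j)/2, normalised by sum of w_i w_j."""
    z = np.asarray(z, dtype=float)
    w = np.asarray(w, dtype=float)
    W = np.outer(w, w)                    # pair weights w_i w_j
    Z = 0.5 * (z[:, None] + z[None, :])   # pair mean redshifts
    np.fill_diagonal(W, 0.0)              # exclude self-pairs
    return (W * Z).sum() / W.sum()
```

For a realistic catalog one would additionally restrict the double sum to pairs within the separation range used in the fits, which requires the comoving positions.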
We only include the pairs of galaxies with separations comprised between $25$ and $130$ $h^{-1}~{\rm Mpc}$, which correspond to those effectively used in our full-shape analysis (see Section~\ref{sec:rsd_modelling}). By doing so, we obtain $z_{\rm eff}=0.698$ for the combined sample. The {\sc EZmocks} were constructed to mimic our data sample and thus have the same $z_{\rm eff}$. The {\sc Nseries} mocks were constructed to match the BOSS CMASS NGC sample and we obtain $z_{\rm eff} = 0.56$. The {\sc MockChallenge} mocks were produced with a snapshot at $z=0.695$ and we use this value as their effective redshift. \subsection{Galaxy clustering estimation} We estimate the redshift-space galaxy clustering in configuration space by measuring the galaxy anisotropic two-point correlation function $\xi(r,\mu)$. This measurement is performed with the standard \citet{landy_bias_1993} estimator: \begin{equation} \xi(r,\mu)=\frac{GG(r,\mu)-2GR(r,\mu)+RR(r,\mu)}{RR(r,\mu)}, \label{eq:ls} \end{equation} where $GG(r,\mu)$, $GR(r,\mu)$, and $RR(r,\mu)$ are respectively the normalized galaxy-galaxy, galaxy-random, and random-random number of pairs with separation $(r,\mu)$. For the post-reconstruction, we employ the same estimator except that in the numerator, displaced galaxy and random catalogs are used instead. Since we are interested in quantifying RSD effects, we decompose the three-dimensional galaxy separation vector $\vec{r}$ into polar coordinates $(r,\mu)$ aligned with the line-of-sight direction, where $r$ is the norm of the separation vector and $\mu$ is the cosine of the angle between the line-of-sight and separation vector directions. The pair counts are binned in 5\hmpc\ bins in $r$ and 0.01 bins in $\mu$. The measured anisotropic correlation function, where the galaxy separation vector $\vec{r}$ has been decomposed into line-of-sight and transverse separations $(r_\perp,r_\parallel)$, is presented in the left panel of Fig.~\ref{fig:cfs}. 
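The estimator of Eq.~\ref{eq:ls} can be sketched from raw (unnormalised) pair counts as below; the normalisation factors shown assume equal-weight objects, whereas the actual analysis uses the weighted counts described above:

```python
import numpy as np

def landy_szalay(dd, dr, rr, n_gal, n_ran):
    """Landy & Szalay (1993): xi = (GG - 2 GR + RR) / RR, with pair
    counts normalised by the total number of pairs of each kind."""
    GG = np.asarray(dd, dtype=float) / (n_gal * (n_gal - 1) / 2.0)
    GR = np.asarray(dr, dtype=float) / (n_gal * n_ran)
    RR = np.asarray(rr, dtype=float) / (n_ran * (n_ran - 1) / 2.0)
    return (GG - 2.0 * GR + RR) / RR
```

When the galaxies are as randomly distributed as the random catalog, all normalised counts coincide and the estimator returns zero.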
A clear BAO feature is seen at $r \approx 100$\hmpc\ as well as the impact of RSD, which squash the contours along the line of sight on large scales. In the right panel of Fig.~\ref{fig:cfs} we show the post-reconstruction correlation function, where some of the isotropy is recovered and the BAO feature is sharpened. \begin{figure*} \centering \includegraphics[width=2\columnwidth]{figures/xirppi.png} \caption{ Anisotropic two-point correlation function of eBOSS LRG+CMASS galaxies at $0.6<z<1$. The left (right) panel shows the pre-reconstruction (post-reconstruction) two-point correlation function in bins of $r_\perp$ and $r_\parallel$. Bins of size $1.25~h^{-1}~$Mpc and a bi-cubic spline interpolation have been used to produce the contours. } \label{fig:cfs} \end{figure*} For the cosmological analysis, we compress the information contained in the full anisotropic correlation function. We define the multipole moments of the correlation function by decomposing $\xi(r, \mu)$ on the basis of Legendre polynomials. Since we are working with binned data, the discrete decomposition is written as: \begin{equation} \hat{\xi}_\ell(r) = (2\ell+1)\sum_{i} \xi(r, \mu_i) L_\ell(\mu_i) \, \Delta\mu, \end{equation} where only even multipoles do not vanish given the symmetry of galaxy pairs and our choice of line of sight. We note that in the previous equation there is a factor of 2 cancellation due to the imposed symmetry between negative and positive $\mu$. Throughout this work, we only consider the $\ell = 0$, 2 and 4 multipoles, referred to in the following as the monopole, quadrupole, and hexadecapole, respectively. The red points with error bars in Fig.~\ref{fig:multipoles_data_versus_mocks} show the even multipoles of the correlation function from the eBOSS LRG sample. The solid, dashed, and dotted black curves display the average multipoles in the different mock datasets used in this study: {\sc EZmocks}, {\sc Nseries}, and {\sc MockChallenge}. 
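The discrete Legendre projection can be sketched as follows (a minimal version of the estimator; uniform bin centres and the $\mu \in [0, 1]$ convention follow the text):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def xi_multipoles(xi_rmu, mu, ells=(0, 2, 4)):
    """Project xi(r, mu) onto Legendre multipoles with a discrete sum
    over mu bins in [0, 1]; only even ells survive by pair symmetry."""
    mu = np.asarray(mu, dtype=float)
    dmu = mu[1] - mu[0]  # assumes uniform mu binning
    return {ell: (2 * ell + 1) * dmu *
                 np.sum(np.asarray(xi_rmu) * Legendre.basis(ell)(mu), axis=-1)
            for ell in ells}
```

Feeding in a pure quadrupole signal $\xi(r,\mu) = L_2(\mu)$ recovers unit amplitude in $\ell = 2$ and zero in the other multipoles, up to the binning error.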
The error bars are obtained from the dispersion of the 1000 {\sc EZmocks} multipoles around their mean. By construction, the amplitude of the EZmock multipoles matches the data at separations $s<70$\hmpc. A slight mismatch in the BAO peak amplitudes between data and {\sc EZmocks} is visible. This mismatch does not impact cosmological results from the data since the covariance matrix dependency on the peak amplitude is small. However, the comparison of the precision of BAO peak measurements between mocks and data needs to account for this mismatch: the expected errors of our BAO measurement are smaller for data than for the ensemble of {\sc EZmocks}. For comparison, the average multipoles of the {\sc Nseries} mocks, also shown in Fig.~\ref{fig:multipoles_data_versus_mocks}, are a better match to the peak amplitude seen in the data. \begin{figure*} \centering \textbf{Pre-reconstruction}\par\medskip \includegraphics[width=.9\textwidth]{figures/multipoles_ezmock_LRGpCMASS.pdf} \textbf{Post-reconstruction}\par\medskip \includegraphics[width=.9\textwidth]{figures/multipoles_ezmock_LRGpCMASS_rec.pdf} \caption{Multipoles of the correlation function of data compared to the mock catalogs. The data is the combined eBOSS LRG + CMASS (NGC+SGC) samples and the mocks are the average multipoles of 1000 {\sc EZmocks} realisations (solid line), 84 {\sc Nseries} realisations (dashed line) and 27 {\sc MockChallenge} mocks populated with L11 HOD model (dotted lines). Top panels show the monopole, quadrupole and hexadecapole of the pre-reconstruction samples while bottom panels show the same for the post-reconstruction case.} \label{fig:multipoles_data_versus_mocks} \end{figure*} \section{Methodology} \label{sec:method} In this section we describe the BAO and RSD modelling, fitting procedure, and how errors on cosmological parameters are estimated. 
\subsection{BAO modelling} \label{sec:bao_modelling} We employ the standard approach used in previous SDSS publications for measuring the baryon acoustic oscillations scale in configuration space (e.g., \citealt{anderson_clustering_2014, ross_clustering_2017, alam_clustering_2017, bautista_sdss-iv_2018}). The code that produces the model and performs the fitting to the data is publicly available\footnote{\url{https://github.com/julianbautista/eboss_clustering}}. The aim is to model the correlation function multipoles $\xi_\ell(r)$ as a function of separations $r$ relevant for BAO ($30<r<180~h^{-1}$Mpc). The starting point is the model for the redshift-space anisotropic galaxy power-spectrum $P(k, \mu)$, \begin{multline} P(k, \mu) = \frac{b^2 \left[1+\beta(1-S(k))\mu^2\right]^2} {(1+ k^2\mu^2\Sigma_s^2/2)} \times \\ \left[ P_{\rm no \ peak}(k) + P_{\rm peak}(k) e^{-k^2\Sigma_{\rm nl}^2(\mu)/2} \right] \label{eq:pk2d} \end{multline} where $b$ is the linear bias, $\beta = f/b$ is the redshift-space distortions parameter, $k$ is the modulus of the wave-vector and $\mu$ is the cosine of the angle between the wave-vector and the line of sight. The non-linear broadening of the BAO peak is modelled by multiplying the ``peak-only'' power spectrum $P_{\rm peak}$ (see below) by a Gaussian distribution with $\Sigma_{\rm nl}^2(\mu) = \Sigma_\parallel^2 \mu^2 + \Sigma^2_\perp(1-\mu^2)$. The non-linear random motions on small scales are modelled by a Lorentzian distribution parametrized by $\Sigma_s$. When performing fits to the multipoles of a single realisation of the survey, the values of $(\Sigma_\parallel, \Sigma_\perp, \Sigma_s)$ are held fixed to improve convergence. The values chosen for these damping terms were obtained from fits to the average correlation function of the {\sc Nseries} mocks, which are full N-body simulations. We show in Section~\ref{sec:systematics_bao} that our results are insensitive to small changes to those values. 
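For illustration, the template of Eq.~\ref{eq:pk2d} can be evaluated as below (a sketch with illustrative damping values; the actual $\Sigma$ values are calibrated on the {\sc Nseries} mocks as described above):

```python
import numpy as np

def pk_bao_template(k, mu, p_nopeak, p_peak, b=2.3, beta=0.35,
                    sig_par=8.0, sig_perp=4.0, sig_s=3.0, sig_r=0.0):
    """Anisotropic BAO template: Kaiser factor times Lorentzian FoG
    damping, applied to the sum of the smooth and (Gaussian-damped)
    peak components.  sig_r > 0 only in the post-reconstruction case."""
    S = np.exp(-0.5 * k**2 * sig_r**2) if sig_r > 0 else 0.0
    kaiser = b**2 * (1.0 + beta * (1.0 - S) * mu**2) ** 2
    fog = 1.0 + 0.5 * k**2 * mu**2 * sig_s**2
    sig_nl2 = sig_par**2 * mu**2 + sig_perp**2 * (1.0 - mu**2)
    return kaiser / fog * (p_nopeak + p_peak * np.exp(-0.5 * k**2 * sig_nl2))
```

In the $k \to 0$ limit the damping terms drop out and the template reduces to the linear Kaiser result, as expected.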
Following the theoretical considerations of \citet{seo_modeling_2016}, we apply a term $S(k) = e^{-k^2\Sigma_r^2/2}$ to the post-reconstruction modeling of the correlation function ($S(k)=0$ for the pre-reconstruction BAO model). This term models the smoothing used in our reconstruction technique, where $\Sigma_r = 15$\hmpc\ (see Section~\ref{sec:reconstruction}). We follow the procedure from \cite{kirkby_fitting_2013} to decompose the BAO peak component $P_{\rm peak}$ from the linear power-spectrum $P_{\rm lin}$. We start by computing the correlation function by Fourier transforming $P_{\rm lin}$, then we replace the correlations over the peak region by a polynomial function fitted using information outside the peak region ($50 < r < 80$ and $160 < r < 190$\hmpc). The resulting correlation function is then Fourier transformed back to get $P_{\rm no \ peak}$. The linear power spectrum $P_{\rm lin}$ is computed using the code CAMB\footnote{\url{camb.info}} \citep{lewis_efficient_2000} with the cosmological parameters of our fiducial cosmology (Table~\ref{tab:cosmologies}). The analysis in Fourier space uses the same procedure \citep[see][]{gil-marin_2020}. Previous BOSS \& eBOSS analyses making BAO measurements from direct tracer galaxies used the approximate formulae from \citet{eisenstein_cosmic_1998} for decomposing the peak. We have checked that both methods yield only negligibly different results. The correlation function multipoles $\xi_\ell(r)$ are obtained from the multipoles of the power-spectrum $P_\ell(k)$, defined as: \begin{equation} \label{eq:pkmultipoles} P_\ell(k) = \frac{2\ell+1}{2} \int_{-1}^{1} P(k, \mu) L_\ell(\mu) ~{\rm d}\mu \end{equation} where $L_\ell$ are Legendre polynomials. The $P_\ell$ are then Hankel transformed to $\xi_\ell$ using: \begin{equation} \xi_\ell(r) = \frac{i^\ell}{2\pi^2}\int_0^\infty k^2 j_\ell(kr) P_\ell(k) ~ {\rm d}k \end{equation} where $j_\ell$ are the spherical Bessel functions. 
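The sideband construction of the smooth component can be sketched in configuration space as follows (a toy version; the window limits follow the text, while the polynomial degree is an assumption of this sketch):

```python
import numpy as np

def sideband_smooth(r, xi, peak=(80.0, 160.0),
                    side=((50.0, 80.0), (160.0, 190.0)), deg=3):
    """Replace the correlations over the BAO peak region by a
    polynomial fitted to the sidebands (Kirkby et al. 2013,
    schematically); the result is the 'no peak' correlation function."""
    r = np.asarray(r, dtype=float)
    xi = np.asarray(xi, dtype=float)
    in_side = ((r > side[0][0]) & (r < side[0][1])) | \
              ((r > side[1][0]) & (r < side[1][1]))
    coeffs = np.polyfit(r[in_side], xi[in_side], deg)
    xi_smooth = xi.copy()
    in_peak = (r >= peak[0]) & (r <= peak[1])
    xi_smooth[in_peak] = np.polyval(coeffs, r[in_peak])
    return xi_smooth
```

Applied to a smooth baseline plus a localized BAO-like bump, the procedure recovers the baseline inside the peak window and leaves the rest untouched.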
These transforms are computed using a Python implementation\footnote{\url{https://github.com/julianbautista/eboss_clustering}} of the FFTLog algorithm described in \citet{hamilton_uncorrelated_2000}. We parameterise the BAO peak position in our model via two dilation parameters that scale separations in the transverse, $\aperp$, and radial, $\alpha_\parallel$, directions. These quantities are related, respectively, to the comoving angular diameter distance, $D_M = (1+z)D_A(z)$, and to the Hubble distance, $D_H = c/H(z)$, by \begin{equation} \aperp = \frac{D_M(z_{\rm eff})/r_d}{D_M^{\rm fid}(z_{\rm eff})/ r_d^{\rm fid}} \label{eq:aperp} \end{equation} \begin{equation} \alpha_\parallel = \frac{D_H(z_{\rm eff})/r_d}{D_H^{\rm fid}(z_{\rm eff})/ r_d^{\rm fid}} \label{eq:apara} \end{equation} In our implementation, we apply the scaling factors exclusively to the peak component of the power spectrum. As shown by \citet{kirkby_fitting_2013}, the decoupling between the peak and the full shape of the correlation function makes the constraints on the dilation parameters depend only on the BAO peak position, with no information coming from the full shape, as is the case in RSD analyses. The final BAO model is a combination of the cosmological multipoles $\xi_\ell$ and a smooth function of separation. The smooth function is meant to account for unknown systematic effects in the survey that potentially create large-scale correlations that could contaminate our measurements. Furthermore, there are currently no accurate analytical models for the post-reconstruction multipoles (the $S(k)$ term in Eq.~\ref{eq:pk2d} is generally not sufficient). Our final template is written as: \begin{equation} \xi^t_\ell(r) = \xi_\ell(\aperp, \alpha_\parallel, r) + \sum_{i=i_{\rm min}}^{i_{\rm max}} a_{\ell, i}{r^i}. \label{eq:template} \end{equation} Our baseline analysis uses $i_{\rm min} = -2$ and $i_{\rm max}=0$, corresponding to three nuisance parameters per multipole. 
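As a numerical illustration of Eq.~\ref{eq:apara}, the $D_H/r_{\rm drag}$ entries of Table~\ref{tab:cosmo_derived_parameters} are easy to reproduce in flat $\Lambda$CDM (radiation and the neutrino contribution are neglected in this sketch, which causes per-mille-level residuals):

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def dh_over_rd(z, Om=0.310, h=0.676, rd=147.78):
    """D_H / r_d = c / (H(z) r_d) in flat LCDM; the defaults are the
    Base cosmology of Table tab:cosmologies."""
    Hz = 100.0 * h * np.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))
    return C_KMS / Hz / rd

# alpha_parallel of Eq. (apara) for the X cosmology relative to Base:
alpha_par = dh_over_rd(0.698, Om=0.350, rd=143.17) / dh_over_rd(0.698)
```

With these inputs `alpha_par` agrees with the ratio of the tabulated $D_H/r_{\rm drag}$ values for the X and Base models to better than $10^{-3}$.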
We find that increasing the number of nuisance terms does not significantly impact the results. Note that this smooth function cannot be used in the full-shape RSD analysis, since these terms would be completely degenerate with the growth rate of structure parameter. Our baseline BAO analysis uses the monopole $\xi_0$ and the quadrupole $\xi_2$ of the correlation function. We performed fits on mock multipoles including the hexadecapole $\xi_4$, finding that it does not add information (see Table~\ref{tab:bao_ezmock_stats_errors}). We fix $\beta = 0.35$ and fit $b$ with a flat prior between $b=1.0$ and $b=4.0$. For all fits, the broadband parameters are free, while both dilation parameters are allowed to vary between 0.5 and 1.5. A total of 9 parameters are fitted simultaneously. \subsection{RSD modelling} \label{sec:rsd_modelling} We describe the apparent distortions introduced by galaxy peculiar velocities in the redshift-space galaxy clustering pattern using two different analytical models: the combined Gaussian streaming and Convolutional Lagrangian Perturbation Theory (CLPT) formalism developed by \citet{reid_towards_2011, carlson_convolution_2013, wang_analytic_2014}, and the \citet{taruya_baryon_2010} model (TNS) supplemented with a non-linear galaxy bias prescription. These two models, frequently used in the literature, partially account for RSD non-linearities and describe the anisotropic clustering down to the quasi-linear regime. We use both models to fit the multipoles of the correlation function and later combine their results to provide more robust estimates of the growth rate of structure and geometrical parameters. This procedure should reduce the residual theoretical systematic errors. In this section, we briefly describe the two models and assess in Section~\ref{sec:systematics_rsd} their performance in the recovery of unbiased cosmological parameters using mock datasets. 
\subsubsection{Convolutional Lagrangian Perturbation Theory with Gaussian Streaming} CLPT provides a non-perturbative resummation of Lagrangian perturbation theory for the two-point statistics of biased tracers in configuration space. The Lagrangian coordinates $\vec q$ of a given tracer are related to their Eulerian coordinates $\vec x$ through the following equation: \begin{equation} \vec x (\vec q,t)=\vec q+\vec \Psi(\vec q,t), \end{equation} where $\vec \Psi(\vec q,t)$ refers to the displacement field evaluated at the Lagrangian position at each time $t$. The two-point correlation function is expanded in its Lagrangian coordinates considering the tracer $X$, in our case the LRGs, to be locally biased with respect to the matter overdensity $\delta (\vec q)$. The expansion is performed over different orders of the Lagrangian bias function $F[\delta (\vec q) ]$, defined as: \begin{equation} 1+\delta_X(\vec q,t)=F[\delta(\vec q)]. \end{equation} The Eulerian density contrast field is computed by convolving with the displacement: \begin{equation} \label{delta_model} 1+\delta_X (\vec x)=\int d^3q\, F\left[ \delta(\vec q) \right]\int \frac{d^3 k}{(2 \pi)^3} e^{i \vec k (\vec x - \vec q - \vec \Psi(\vec q))}. \end{equation} The local Lagrangian bias function $F$ is approximated by a non-local expansion using its first and second derivatives, where the $n^{th}$ derivative is given by: \begin{equation} \label{F_der} \langle F^n \rangle=\int \frac{d\delta}{\sqrt{2\pi} \sigma}e^{-\delta^2/2\sigma^2}\frac{d^n F}{d \delta^n}. \end{equation} The two-point correlation function is obtained by evaluating the expression $\xi_X(\vec r) = \left< \delta_X (\vec x) \delta_X (\vec x + \vec r)\right>$, corresponding to Eq.~19 of \cite{carlson_convolution_2013}, which can be simplified as in their Eq. 
46: \begin{equation} \label{xi_model} 1+\xi_X(\vec r)=\int d^3 q M(\vec r, \vec q), \end{equation} where $ M(\vec r, \vec q)$ is the convolution kernel taking into account the displacement and the bias expansion up to its second derivative term. The bias derivative terms are computed using the linear power spectrum derived from the code CAMB \citep{lewis_efficient_2000} using the fiducial cosmology described in Table~\ref{tab:cosmologies}. As we are interested in studying RSD, we need to model the impact of peculiar velocities. CLPT provides the pairwise mean velocity $v_{12}(r)$ and the pairwise velocity dispersion $\sigma_{12}(r)$ as functions of the real-space separation. They are computed following the formalism developed in \cite{wang_analytic_2014}, which is similar to the one described above but with the kernels modified to take into account the velocity rather than the density: \begin{equation} v_{12}(r)=(1+\xi(\vec{r}))^{-1}\int {M_1}(\vec{r}, \vec{q}) d^3 q, \end{equation} and \begin{equation} \sigma_{12}(r)=(1+\xi(\vec r))^{-1}\int M_2(\vec r, \vec q)d^3q. \end{equation} The kernels $M_{1,2}(\vec{r}, \vec{q})$ also depend on the first two non-local derivatives of the Lagrangian bias, $\langle F' \rangle$ and $\langle F'' \rangle$, which are free parameters in addition to the linear growth rate $f$ in our model. Hereafter, we drop the angle brackets around the Lagrangian bias terms to simplify the notation. Although CLPT is more accurate than the Lagrangian Resummation Theory of \cite{matsubara_resumming_2008} in real space, we still have to improve the small-scale modelling in order to study redshift-space distortions. This is particularly important considering that part of the peculiar velocities is generated by interactions occurring at the typical scales of clusters of galaxies ($\sim$1 Mpc). 
This is achieved by mapping the real-space CLPT model of the two-point statistics into redshift space with the Gaussian Streaming (GS) model proposed by \cite{reid_towards_2011}. The pairwise velocity distribution of tracers is assumed to be Gaussian, with parameters that depend on both the separation $r$ and the angle between the separation vector and the line of sight $\mu$. We use the implementation of \cite{wang_analytic_2014}, which takes the CLPT results as input to the GS model. The redshift-space correlation function is finally computed as: \begin{equation} \label{gsrd_integral} \begin{split} 1+\xi_{\rm X}(r_\perp,r_\parallel)= & \int \frac{1}{\sqrt{2\pi \left[\sigma_{12}^2(r)+\sigma^2_{\rm FoG}\right]}}[1+\xi_{\rm X}(r)]\\ & \times \exp\left\{-\frac{[r_\parallel-y-\mu v_{12}(r)]^2}{2\left[\sigma_{12}^2(r)+\sigma^2_{\rm FoG}\right]}\right\} dy, \end{split} \end{equation} where $r^2 = r_\perp^2 + y^2$, $\mu = y/r$, and $\xi(r)$, $v_{12}(r)$, and $\sigma_{12}(r)$ are obtained from CLPT. The last function in the integral takes into account the scale-dependent halo-halo pairwise velocity, and we have to introduce an extra parameter $\sigma_{\rm FoG}$ describing the galaxy random motions with respect to their parent halo, also known as the Fingers-of-God (FoG) effect. \cite{reid_towards_2011} demonstrated that the GS model can predict clustering with an accuracy of $\approx 2$ per cent when dark-matter halos are used as tracers. Using galaxies, the accuracy decreases as $\sigma_{\rm FoG}$ increases. Considering that about 85 per cent of the galaxies from the LRG sample are central galaxies \citep{zhai_clustering_2017}, the accuracy remains close to the one obtained using halos. In summary, given a fiducial cosmology, this RSD model has four free parameters: $[f, F',F'', \sigma_{\rm FoG}]$. \subsubsection{TNS model} The other RSD model that we consider is the \cite{taruya_baryon_2010} model extended to non-linearly biased tracers. We refer to it as TNS in this work. 
Its implementation closely follows the one presented in \citet{de_la_torre_vimos_2017}. This model is based on the conservation of the number density in real- and redshift-space \citep{kaiser_clustering_1987}. In this framework, the anisotropic power spectrum for unbiased matter tracers follows the general form \citep{scoccimarro_power_1999} \begin{eqnarray} P^s(k,\mu)&=&\int \frac{d^3 \vec{r}}{(2\pi)^3} e^{-i\vec{k} \cdot \vec{r}}\left<e^{-ikf\mu \Delta u_\parallel} \times \right. \nonumber \\ && \left. [\delta(\vec{x})+f \partial_{_\parallel} u_{_\parallel}(\vec{x})][\delta(\vec{x}^\prime)+f \partial_{_\parallel} u_{_\parallel}(\vec{x}^\prime)] \right> \label{eq:rspk} \end{eqnarray} where $\mu=k_\parallel/k$, $u_\parallel(\vec{r})=-v_\parallel(\vec{r})/(f aH(a))$, $v_\parallel(\vec{r})$ is the line-of-sight component of the peculiar velocity, $\delta$ is the matter density field, $\Delta u_\parallel=u_\parallel(\vec{x})-u_\parallel(\vec{x}^\prime)$ and $\vec{r}=\vec{x}-\vec{x}^\prime$. The model by \cite{taruya_baryon_2010} for Eq. \ref{eq:rspk} can be written \begin{multline} P^s(k,\mu)= D(k\mu\sigma_v)\big[ P_{\delta\delta}(k) +2\mu^2f P_{\delta\theta}(k) + \mu^4f^2P_{\theta\theta}(k)+ \\ C_A(k,\mu,f)+C_B(k,\mu,f) \big]\,, \end{multline} where $\theta$ is the divergence of the velocity field defined as $\theta = -\nabla {\bf \cdot v}/(aHf)$. $P_{\delta\delta}$, $P_{\theta\theta}$ and $P_{\delta\theta}$ are respectively the non-linear matter density, velocity divergence, and density-velocity divergence power-spectra. $C_A(k,\mu,f)$ and $C_B(k,\mu,f)$ are two correction terms that reduce to integrals of the matter power spectrum, given in \citet{taruya_baryon_2010}. The phenomenological damping function $D(k\mu\sigma_v)$ not only describes the FoG effect induced by random motions in virialized systems, but also has a damping effect on the power spectra. 
Several functional forms can be used; in particular, Gaussian and Lorentzian forms have been used extensively in previous analyses. We opt for a Lorentzian damping function, which provides a better agreement with the LRG data and mocks, \begin{equation} D(k,\mu,\sigma_v) = (1+k^2\mu^2\sigma_v^2)^{-1}, \end{equation} where $\sigma_v$ represents an effective pairwise velocity dispersion that is later treated as a nuisance parameter in the cosmological inference. This model can be generalized to the case of biased tracers by including a galaxy biasing model. In that case, the anisotropic galaxy power spectrum can be rewritten as \begin{multline} P^s_{\rm g}(k,\mu) = D(k\mu\sigma_v) \big[ P_{\rm gg}(k) + 2\mu^2fP_{\rm{g} \theta} + \mu^4f^2 P_{\theta\theta}(k) + \\ C_A(k,\mu,f,b_1) + C_B(k,\mu,f,b_1) \big] \label{eq:psg} \end{multline} where $b_1$ is the galaxy linear bias. The explicit expressions for $C_A(k,\mu,f,b_1)$ and $C_B(k,\mu,f,b_1)$ are given in, e.g., \citet{de_la_torre_modelling_2012}. We adopt here a non-linear, non-local prescription for galaxy biasing that follows the work of \citet{mcdonald_clustering_2009, chan_gravity_2012}. Specifically, we use the renormalized perturbative bias scheme presented in \citet{assassi_renormalized_2014} at 1-loop. In that case, the relation between the galaxy overdensity $\delta_{\mathrm{g}}$ and the matter overdensity $\delta$ is written as \begin{equation} \delta_{\mathrm{g}} = b_{1}\delta + \frac{b_{2}}{2}\delta^{2} + b_{ \mathcal{G}_{2}} \mathcal{G}_{2} + b_{\Gamma_{3}}\Gamma_{3} \end{equation} where the two operators $\mathcal{G}_{2}$ and $\Gamma_{3}$ are defined as \begin{align} \mathcal{G}_{2}(\phi) &\equiv(\partial_i \partial_j \phi)^{2} - (\partial^{2}\phi)^{2}, \\ \Gamma_{3}(\phi,\phi_v )&\equiv\mathcal{G}_{2}(\phi) - \mathcal{G}_{2}(\phi_v), \end{align} and $\phi$ and $\phi_v$ correspond to the gravitational and velocity potentials, respectively. 
In the local Lagrangian picture, the non-local bias parameters $b_{\mathcal{G}_{2}}$ and $b_{\Gamma_{3}}$ are related to the linear bias parameter $b_1$ as \begin{align} b_{\mathcal{G}_{2}} &= -\frac{2}{7}(b_1 - 1) \\ b_{\Gamma_{3}} &= \frac{11}{42}(b_1 - 1). \label{eq:nllbg3} \end{align} Bispectrum analyses in halo simulations show that those relations are reasonable approximations \citep{chan_gravity_2012,saito_understanding_2014}. However, as pointed out in \cite{sanchez_clustering_2017}, fixing $b_{\Gamma_{3}}$ to the local Lagrangian prediction is not necessarily optimal because $b_{\Gamma_{3}}$ partially absorbs the scale dependence in $b_1$, which should in principle be present in the bias expansion. Moreover, the local Lagrangian relation remains an approximation in the nonlinear regime \citep[e.g.][]{matsubara_nonlinear_2011}. We investigate in Section~\ref{sec:robustness} whether fixing $b_{\Gamma_{3}}$ or not is optimal for the specific case of LRGs using {\sc Nseries} mocks. With this biasing model, the galaxy-galaxy and galaxy-velocity divergence power spectra read \citep{assassi_efficient_2017, simonovic_cosmological_2018} \begin{eqnarray} P_{gg}(k)&=&b_1^2 P_{\delta\delta}(k)+ b_2b_1I_{\delta^{2}}(k) + 2b_1b_{ \mathcal{G}_{2}}I_{{ \mathcal{G}_{2}}}(k) \nonumber \\ &&+2\left(b_1b_{ \mathcal{G}_{2}} + \frac{2}{5}b_1 b_{\Gamma_{3}}\right)F_{\mathcal{G}_{2}}(k) + \frac{1}{4}b_2^{2}I_{\delta^{2}\delta^{2}}(k) \nonumber \\ &&+b_{ \mathcal{G}_{2}}^{2} I_{\mathcal{G}_{2}\mathcal{G}_{2}}(k) + \frac{1}{2}b_2b_{ \mathcal{G}_{2}}I_{\delta_2 \mathcal{G}_{2}}(k) \\ P_{g\theta}(k)&=&b_1P_{\delta\theta}(k)+\frac{b_2}{4} I_{\delta^{2}\theta}(k) + b_{ \mathcal{G}_{2}}I_{{ \mathcal{G}_{2}}\theta}(k) \nonumber \\ && + \left(b_{ \mathcal{G}_{2}} + \frac{2}{5} b_{\Gamma_{3}}\right)F_{\mathcal{G}_{2}\theta}(k). 
\end{eqnarray} In the above equations, $I_{\delta^{2}}(k)$, $I_{{ \mathcal{G}_{2}}}(k)$, ${F_{\mathcal{G}_{2}}(k)}$, $I_{\delta^{2}\delta^{2}}(k)$, $I_{\mathcal{G}_{2}\mathcal{G}_{2}}(k)$, and $I_{\delta_2 \mathcal{G}_{2}}(k)$ are 1-loop integrals whose expressions can be found in \citet{simonovic_cosmological_2018}. The expressions for the $I_{\delta^{2}\theta}(k)$, $I_{{ \mathcal{G}_{2}}\theta}(k)$, and $F_{\mathcal{G}_{2}\theta}(k)$ integrals are nearly identical to those for $I_{\delta^{2}}(k)$, $I_{{ \mathcal{G}_{2}}}(k)$, and ${F_{\mathcal{G}_{2}}(k)}$, except that the $G_2$ kernel replaces the $F_2$ kernel. Those 1-loop integrals are computed using the method described in \citet{simonovic_cosmological_2018}, which uses a power-law decomposition of the input linear power spectrum; this allows a fast and robust computation. The input linear power spectrum $P_{\rm lin}$ is obtained with \textsc{CAMB}, while the non-linear power spectrum $P_{\delta\delta}$ is calculated with the {\sc RESPRESSO} code \citep{nishimichi_moving_2017}. This non-linear power spectrum prediction agrees very well with successful perturbation theory-based predictions such as RegPT, but extends their validity to $k\simeq0.4\,h\,{\rm Mpc}^{-1}$ \citep{nishimichi_moving_2017}. This is very relevant for configuration-space analyses, where one needs both a correct BAO amplitude and a non-vanishing signal at high $k$ to avoid aliasing in the transformation from Fourier to configuration space. 
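As a minimal numerical check, the local Lagrangian relations of Eq.~\ref{eq:nllbg3} can be evaluated directly; the test value $b_1=2$ in the sketch below is arbitrary:

```python
def local_lagrangian_bias(b1):
    """Non-local bias parameters from the linear bias b1 in the
    local Lagrangian approximation:
      b_G2     = -(2/7)  (b1 - 1)
      b_Gamma3 = (11/42) (b1 - 1)
    Both vanish for an unbiased tracer (b1 = 1)."""
    b_g2 = -2.0 / 7.0 * (b1 - 1.0)
    b_gamma3 = 11.0 / 42.0 * (b1 - 1.0)
    return b_g2, b_gamma3

# example: a moderately biased tracer
print(local_lagrangian_bias(2.0))
```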
To obtain the $P_{\theta\theta}$ and $P_{\delta\theta}$ power spectra, we use the universal fitting functions obtained by \citet{bel_accurate_2019}, which depend on $\sigma_8(z)$, $P_{\delta\delta}$, and $P_{\rm lin}$ as \begin{equation} \begin{aligned} P_{\theta \theta}(k)&=P_{\rm lin}(k) e^{-k\left(a_{1}+a_{2} k+a_{3} k^{2}\right)}, \\ P_{\delta \theta}(k)&=\left(P_{\delta \delta}(k) P_{\rm lin}(k)\right)^{\frac{1}{2}} e^{-\frac{k}{k_{\delta}}-b k^{6}}. \label{eqn:pdv_fitting_function} \end{aligned} \end{equation} The overall degree of non-linear evolution is encoded in the amplitude of matter fluctuations at the considered effective redshift. The explicit dependence of the fitting function coefficients on $\sigma_8$ is given by \begin{equation} \begin{aligned} a_{1} &=-0.817+3.198 \sigma_{8} \\ a_{2} &=0.877-4.191 \sigma_{8} \\ a_{3} &=-1.199+4.629 \sigma_{8} \\ 1 / k_{\delta} &=-0.017+1.496 \sigma_{8}^{2} \\ b &=0.091+0.702 \sigma_{8}^{2}. \end{aligned} \end{equation} In total, this model has either four or five free parameters, $[f,b_1,b_2,\sigma_v]$ or $[f,b_1,b_2,b_{\Gamma_{3}},\sigma_v]$, depending on the number of bias parameters that are left free. Finally, the multipole moments of the anisotropic correlation function are obtained by performing the Hankel transform of the model $P_\ell^s(k)$. \subsubsection{Alcock-Paczynski effect} For both RSD models, the implementation of the \citet{alcock_evolution_1979} effect follows that of \citet{xu_measuring_2013}. The Alcock-Paczynski distortions are simplified if we define the $\alpha$ and $\epsilon$ parameters, which characterize respectively the isotropic and anisotropic distortion components. 
These are related to $\alpha_{\perp}$ and $\alpha_\parallel$ (Eqs.~\ref{eq:aperp} and \ref{eq:apara}) as \begin{eqnarray} \alpha &=& \alpha_\parallel^{1/3} \alpha^{2/3}_{\perp}, \\ \label{eq:alpha} \epsilon &=& \left( \alpha_\parallel/\alpha_{\perp} \right)^{1/3} - 1. \label{eq:epsilon} \end{eqnarray} For the model multipoles $\xi_0$, $\xi_2$, and $\xi_4$, the corresponding quantities in the fiducial cosmology are given by \citep{xu_measuring_2013}: \begin{eqnarray} \xi^{\rm fid}_0(r^{\rm fid}) &=& \xi_0(\alpha r) + \frac{2}{5}\epsilon \left[ 3\xi_2(\alpha r) + \frac{d\xi_2(\alpha r)}{d\ln(r)} \right] \label{eqn:mono} \\ \xi^{\rm fid}_2(r^{\rm fid}) &=& \bigg( 1 + \frac{6}{7}\epsilon \bigg)\xi_2(\alpha r) +2\epsilon \frac{d\xi_0(\alpha r)}{d\ln(r)} +\frac{4}{7}\epsilon \frac{d\xi_2(\alpha r)}{d\ln(r)} \nonumber \\ && + \frac{4}{7}\epsilon \bigg[ 5\xi_4(\alpha r) + \frac{d\xi_4(\alpha r)}{d\ln(r)} \bigg] \\ \xi^{\rm fid}_4(r^{\rm fid}) &=& \xi_4(\alpha r) + \frac{36}{35}\epsilon \bigg[-2\xi_2(\alpha r) + \frac{d\xi_2(\alpha r)}{d\ln(r)} \bigg] \nonumber \\ && + \frac{20}{77} \epsilon \bigg[3\xi_4(\alpha r) + 2\frac{d\xi_4(\alpha r)}{d\ln(r)} \bigg] \nonumber \\ && + \frac{90}{143} \epsilon \bigg[7\xi_6(\alpha r) + \frac{d\xi_6(\alpha r)}{d\ln(r)} \bigg]. \label{eqn:quad} \end{eqnarray} We note that this is an approximation valid for small variations around $\alpha=1$ and $\epsilon=0$ \citep{xu_measuring_2013}. Nonetheless, for the observed values of those parameters, and when comparing to the model prediction based on the exact transformation, the results are virtually the same. \subsubsection{The fiducial scale at which $\sigma_8$ is measured} \label{sec:fs8_scaling} We perform an additional step in order to reduce the dependency of our $f{\sigma_8}$ constraints on the choice of fiducial cosmology. 
When fitting the correlation function multipoles, $\sigma_8$ is kept fixed to its fiducial value defined as \begin{equation} \sigma_R^2 = \frac{1}{2\pi^2}\int_0^\infty {\rm d}k \ k^2 P_{\rm lin}(k) W_{\rm TH}^2(kR), \end{equation} where $P_{\rm lin}$ is the linear matter power spectrum predicted by the fiducial cosmology and $W_{\rm TH}$ is the Fourier transform of a top-hat function with characteristic radius $R=8$\hmpc. The resulting best-fit $f$ is then multiplied by this $\sigma_8$. However, in Section~\ref{sec:systematics_rsd} we show that the recovered $f\sigma_8$ has a strong dependence on the fiducial cosmology when the best-fit $\alpha$ is not close to unity. We can reduce this dependency by recomputing $\sigma_8$ using $R=8\alpha$\hmpc, where $\alpha$ is the isotropic dilation factor (Eq.~\ref{eq:alpha}) obtained in the fit. In effect, this keeps the scale at which $\sigma_8$ is fitted fixed relative to the data in units of $h^{-1}$Mpc, which only depends on $\Omega_m^{\rm fid}$. This is an alternative approach to the recently proposed $\sigma_{12}$ parametrisation \citep{sanchez_let_2020}, where the radius of the top-hat function is set to $R=12$~Mpc instead of $R=8$\hmpc. Unless otherwise stated, all values of $f{\sigma_8}$ reported in this work are computed with the scale fixed in this way. \subsection{Parameter inference} The cosmological parameter inference is performed by means of a likelihood analysis of the data. The likelihood $\mathcal{L}$ is defined such that \begin{equation} -2\ln\mathcal{L}(\theta) = \sum_{i,j}^{N_p}\Delta_i(\theta) \hat{\Psi}_{ij} \Delta_j(\theta), \end{equation} where $\theta$ is the vector of parameters, $\vec{\Delta}$ is the data-model difference vector, and $N_p$ is the total number of data points. 
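The quadratic form above translates directly into code. The Python sketch below evaluates $-2\ln\mathcal{L}$ for a data-model pair using a toy diagonal precision matrix; the numbers are illustrative only, while the actual analysis uses the mock-based estimate $\hat{\Psi}$:

```python
import numpy as np

def chi2(data, model, precision):
    """Compute -2 ln L = Delta^T Psi Delta for a data-model difference vector."""
    delta = np.asarray(data, dtype=float) - np.asarray(model, dtype=float)
    return float(delta @ precision @ delta)

# toy example: diagonal precision matrix (inverse variances), arbitrary values
data = np.array([1.0, 2.0, 3.0])
model = np.array([1.5, 2.0, 2.0])
psi = np.diag([4.0, 4.0, 1.0])  # i.e. sigma = 0.5, 0.5, 1.0 per bin
print(chi2(data, model, psi))  # 2.0
```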
An estimate of the precision matrix $\hat{\Psi} = (1-D) \hat{C}^{-1}$ is obtained from the covariance $\hat{C}$ estimated from 1000 realisations of the {\sc EZmocks}, where $D = (N_p+1)/(N_{\rm mocks} -1)$ is a factor that accounts for the skewed nature of the Wishart distribution \citep{hartlap_why_2007}. The data vector that enters in $\vec{\Delta}$ includes, in the baseline configuration, the monopole and quadrupole correlation functions for the BAO analysis, and the monopole, quadrupole, and hexadecapole correlation functions for the RSD analysis. In the BAO analysis, the best-fit parameters ($\aperp, \alpha_\parallel$) are found by minimizing $-2\ln\mathcal{L} = \chi^2$ with the quasi-Newton minimiser {\sc iMinuit}\footnote{\url{https://iminuit.readthedocs.io/}}. The errors in $\alpha_\parallel$ and $\aperp$ are found by computing the intervals where $\chi^2$ increases by unity. Gaussianity is not assumed in the error calculation, but we find that, on average, errors are symmetric and correctly described by a Gaussian. The 2D errors in $(\aperp, \alpha_\parallel)$, such as those presented in Figure~\ref{fig:consensus_bao}, are found by scanning $\chi^2$ values on a regular grid in $\aperp$ and $\alpha_\parallel$. In the case of the full-shape analysis, we explore the likelihood with the Markov chain Monte Carlo ensemble sampler {\sc emcee}\footnote{\url{https://emcee.readthedocs.io/}}. The input power spectrum shape parameters are fixed at the fiducial cosmology and any deviations are accounted for through the Alcock-Paczynski parameters $\aperp$ and $\alpha_\parallel$. We assume the uniform priors on model parameters given in Table~\ref{tab:prior_ref}. \begin{table} \caption{List of fitted parameters and their priors used in the full-shape analysis for the two models.} \label{tab:prior_ref} \centering \begin{tabular}{cccc} \hline \hline Par. TNS & Prior TNS & Par. 
CLPT-GS & Prior CLPT-GS\\ \hline $\alpha_{\perp}$ & $[0.5, 1.5]$& $\alpha_{\perp}$ & $[0.5, 1.5]$\\ $\alpha_{\parallel}$ & $[0.5, 1.5]$& $\alpha_{\parallel}$ & $[0.5, 1.5]$\\ \hline $f$ & $[0,2]$&$f$ & $[0,2]$ \\ $b_1$ & $[0.2,4]$& $\langle F' \rangle$& $[0,3]$\\ $b_2$ & $[-10, 10]$&$\langle F'' \rangle$& $[-10,10]$\\ $b_{\Gamma_3}$ & $[-2, 4]$&$\sigma_{\rm FoG}$ &$[0,40]$\\ $\sigma_v$ & $[0.1,8]$ \\ \hline \hline \end{tabular} \end{table} \begin{table} \caption{Characteristics of the baseline fits for all models in this work, where $N_{\rm mock}$ is the number of mocks used in the estimation of the covariance matrix, $N_{\rm par}$ is the total number of parameters fitted, $N_{\rm bins}$ is the total size of the data vector, $(1-D)$ is the correction factor to the precision matrix \citep{hartlap_why_2007}, $m_1$ is the factor to be applied to the estimated error matrix, and $m_2$ is the factor that scales the scatter of best-fit parameters of a set of mocks (if these were used in the calculation of the covariance matrix). The derivation of $m_1$ and $m_2$ can be found in \citet{percival_clustering_2014}. } \label{tab:percival_factors} \centering \begin{tabular}{cccc} \hline \hline & BAO & RSD TNS & RSD CLPT-GS \\ \hline $N_{\rm mock}$ & 1000 & 1000 & 1000 \\ $N_{\rm par}$ & 9 & 7 & 6 \\ $N_{\rm bins}$ & 40 & 65 & 63 \\ $(1-D)$ & 0.96 & 0.93 & 0.94 \\ $m_1$ & 1.022 & 1.053 & 1.053 \\ $m_2$ & 1.065 & 1.128 & 1.125 \\ \hline \hline \end{tabular} \end{table} The final parameter constraints are obtained by marginalizing the full posterior likelihood over the nuisance parameters. The marginal posterior is approximated by a multivariate Gaussian distribution with central values given by the best-fitting parameter values $\theta^* = (\aperp, \alpha_\parallel, f{\sigma_8})$ and parameter covariance matrix $C_{\theta}$. Since the covariance matrix is computed from a finite number of mock realisations, we need to apply correction factors to the obtained $C_{\theta}$. 
These factors, given by Eqs.~18 and 22 of \citet{percival_clustering_2014}, are applied to the uncertainties and to the scatter over best-fit values, respectively. These factors, which depend on the number of mocks, parameters, and bins in the data vectors, are presented in Table~\ref{tab:percival_factors}. The final parameter constraints from this work are publicly available\footnote{\url{sdss.org/}}. \subsection{Combining BAO and RSD constraints} \label{sec:combining_analysis} From the same input LRG catalog, we produced BAO-only and full-shape RSD constraints, both in configuration and Fourier space \citep{gil-marin_2020}. Each measurement yields a marginal posterior on $(\aperp, \alpha_\parallel)$ for BAO-only or $(\aperp, \alpha_\parallel, f{\sigma_8})$ for the full-shape RSD analyses. In the following, we describe the procedure to combine all these posteriors into a single consensus constraint, while correctly accounting for their covariances. This consensus result is the one used for the final cosmological constraints described in \citet{mueller_2020}. We closely follow the method presented in \citet{sanchez_clustering_2017} to derive the consensus result. The idea is to compress $M$ data vectors $x_m$ containing $p$ parameters and their $p\times p$ covariance matrices $C_{mm}$ from different methods into a single vector $x_c$ and covariance $C_c$, assuming that the $\chi^2$ between individual measurements is the same as the one from the compressed result. 
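The $(1-D)$ precision-matrix correction listed in Table~\ref{tab:percival_factors} follows directly from the number of bins and mocks; the short Python check below reproduces the quoted values:

```python
def hartlap_correction(n_bins, n_mocks):
    """(1 - D) factor of Hartlap et al. (2007), with
    D = (N_p + 1)/(N_mocks - 1), where N_p is the data-vector size."""
    return 1.0 - (n_bins + 1.0) / (n_mocks - 1.0)

# data-vector sizes quoted in the table: BAO (40), RSD TNS (65), RSD CLPT-GS (63)
for n_bins in (40, 65, 63):
    print(round(hartlap_correction(n_bins, 1000), 2))
```

This reproduces the table entries 0.96, 0.93, and 0.94 for the BAO, RSD TNS, and RSD CLPT-GS configurations, respectively.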
The expression for the combined covariance matrix is \begin{equation} C_c \equiv \left(\sum_{m=1}^M \sum_{n=1}^M C_{mn}^{-1} \right)^{-1} \end{equation} and the combined data vector is \begin{equation} x_c = C_c \sum_{m=1}^M \left( \sum_{n=1}^M C^{-1}_{nm} \right) x_m, \end{equation} where $C_{mn}$ is a $p \times p$ block from the full covariance matrix $C$ between all parameters and methods, defined as \begin{equation} C = \begin{pmatrix} C_{11} & C_{12} & \cdots & C_{1M} \\ C_{21} & C_{22} & \cdots & C_{2M} \\ \vdots & \vdots & \ddots & \vdots \\ C_{M1} & C_{M2} & \cdots & C_{MM} \end{pmatrix}. \label{eq:full_covariance} \end{equation} The diagonal blocks $C_{mm}$ are obtained from the Gaussian approximation of the marginal posterior from each method. The off-diagonal blocks $C_{mn}$ with $m\neq n$ cannot be estimated from our fits. We derive these off-diagonal blocks from the results of each method applied to the 1000 {\sc EZmocks} realisations. More precisely, we compute the correlation coefficients $\rho^{\rm mocks}_{p_1, p_2, m, n}$ between parameters $p_1$, $p_2$ and methods $m, n$ using the mocks and scale these coefficients by the diagonal errors from the data. It is worth emphasizing that the correlation coefficients between parameters depend on the given realisation of the data, while the ones derived from mock measurements are ensemble-averaged coefficients. Therefore, we scale the correlation coefficients from the mocks in order to match the maximum correlation coefficient that would be possible with the data \citep{ross_information_2015}. For the same parameter $p_1$ measured by two different methods $m$ and $n$, we assume that the maximum correlation between them is given by $\rho_{\rm max} = \sigma_{p1, m}/\sigma_{p1, n}$, where $\sigma_p$ is the error of parameter $p$. This number is computed for the data realisation, $\rho_{\rm max}^{\rm data}$, and for the ensemble of mocks, $\rho_{\rm max}^{\rm mocks}$. 
We can write the adjusted correlation coefficients as \begin{equation} \rho^{\rm data}_{p_1, p_1, m, n} = \rho^{\rm mocks}_{p_1, p_1, m, n} \frac{ \rho^{\rm data}_{\rm max} }{ \rho^{\rm mocks}_{\rm max} }. \end{equation} The equation above accounts for the diagonal terms of the off-diagonal block $C_{mn}$. For the off-diagonal terms, we use \begin{equation} \rho^{\rm data}_{p_1, p_2, m, n} = \frac{1}{4}\left(\rho^{\rm data}_{p_1, p_1, m, n} + \rho^{\rm data}_{p_2, p_2, m, n}\right) \left(\rho^{\rm data}_{p_1, p_2, m, m} + \rho^{\rm data}_{p_1, p_2, n, n}\right). \end{equation} We use the method described above to perform all the constraint combinations, except for the combination of results from the CLPT-GS and TNS RSD models, which use the same input data vector (the pre-reconstruction multipoles in configuration space). For this particular combination, we simply assume that $C_c^{-1} = 0.5(C_{mm}^{-1}+C_{nn}^{-1})$ and $x_c = \frac{1}{2} C_c\left(C_{mm}^{-1} x_{m} + C_{nn}^{-1} x_{n}\right)$. For all combinations, we chose to use the results from at most two methods at once ($M=2$) in order to reduce the potential noise introduced by the procedure. 
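The compression defined by the expressions for $C_c$ and $x_c$ can be sketched as follows in Python. Here $C_{mn}^{-1}$ is taken to mean the inverse of the block $C_{mn}$ (assumed invertible), and the scalar test blocks are purely illustrative:

```python
import numpy as np

def combine_constraints(x_list, cov_blocks):
    """Compress M measurements of the same p parameters into a consensus
    vector and covariance. cov_blocks[m][n] holds the p x p block C_mn;
    each block must be invertible for this sketch to apply."""
    M = len(x_list)
    inv = [[np.linalg.inv(np.atleast_2d(cov_blocks[m][n])) for n in range(M)]
           for m in range(M)]
    cov_c = np.linalg.inv(sum(inv[m][n] for m in range(M) for n in range(M)))
    x_c = cov_c @ sum(sum(inv[n][m] for n in range(M)) @ np.atleast_1d(x_list[m])
                      for m in range(M))
    return x_c, cov_c

# two correlated scalar measurements that agree: the consensus reproduces them
x_c, cov_c = combine_constraints([[1.0], [1.0]],
                                 [[[[4.0]], [[2.0]]], [[[2.0]], [[9.0]]]])
print(x_c)  # [1.]
```

By construction, when all input vectors coincide, the consensus vector equals the common value regardless of the assumed cross-covariance blocks.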
Denoting $\xi_\ell$ the results from the configuration-space analysis and $P_\ell$ those from the Fourier-space analysis, our recipe to obtain the consensus result for the LRG sample is as follows: \begin{itemize} \item Combine RSD $\xi_\ell$ TNS and RSD $\xi_\ell$ CLPT-GS results into RSD $\xi_\ell$, \item Combine BAO $\xi_\ell$ with BAO $P_\ell$ into BAO $(\xi_\ell+P_\ell)$, \item Combine RSD $\xi_\ell$ with RSD $P_\ell$ into RSD $(\xi_\ell+P_\ell)$, \item Combine BAO $(\xi_\ell+P_\ell)$ with RSD $(\xi_\ell+P_\ell)$ into BAO$+$RSD $(\xi_\ell+P_\ell)$. \end{itemize} Alternatively, we can proceed as follows: \begin{itemize} \item Combine BAO $\xi_\ell$ with RSD $\xi_\ell$ into (BAO$+$RSD) $\xi_\ell$, \item Combine BAO $P_\ell$ with RSD $P_\ell$ into (BAO$+$RSD) $P_\ell$, \item Combine BAO$+$RSD $\xi_\ell$ with BAO$+$RSD $P_\ell$ into (BAO$+$RSD) $\xi_\ell+P_\ell$. \end{itemize} In Section~\ref{sec:statistical_properties} we test this procedure on the mock catalogues. \section{Robustness of the analysis and systematic errors} \label{sec:robustness} In this section, we perform a comprehensive set of tests of the adopted methodology using all the simulated datasets available. We estimate the biases in the measurement of the cosmological parameters ($\aperp, \alpha_\parallel, f{\sigma_8}$) and derive the systematic errors for both the BAO-only and full-shape RSD analyses. For a given parameter, we define the systematic error $\sigma_{p, \rm syst}$ as follows. We compare the estimated value of the parameter $x_p$ to a reference value $x_p^{\rm ref}$ and set the systematic error to \begin{eqnarray} \sigma_{p, \rm syst} = 2\sigma_p, \ &{\rm if} \ | x_p - x_p^{\rm ref}| < 2\sigma_p, \label{eq:syst_error1} \\ \sigma_{p, \rm syst} = |x_p - x_p^{\rm ref}|, \ & {\rm if} \ | x_p - x_p^{\rm ref}| > 2\sigma_p, \label{eq:syst_error2} \end{eqnarray} where $\sigma_p$ is the estimated statistical error on $x_p$. 
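The assignment rule of Eqs.~\ref{eq:syst_error1}-\ref{eq:syst_error2} can be expressed compactly in Python; the numerical values in the example are arbitrary, chosen only to exercise both branches:

```python
def systematic_error(x, x_ref, sigma_stat):
    """Systematic error assignment: 2 sigma if the measured bias |x - x_ref|
    is below 2 sigma, and the bias itself otherwise."""
    bias = abs(x - x_ref)
    return bias if bias > 2.0 * sigma_stat else 2.0 * sigma_stat

# illustrative numbers: a small and a large bias for sigma_stat = 0.002
print(systematic_error(1.001, 1.0, 0.002))  # 0.004 (bias below 2 sigma)
print(systematic_error(1.010, 1.0, 0.002))  # the bias itself, ~0.010
```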
As a conservative approach, we use the maximum value of the bias amongst the several cases studied. \subsection{Systematics in the BAO analysis} \label{sec:systematics_bao} The methodology described in Section~\ref{sec:bao_modelling} was tested using the 1000 {\sc EZmocks} mock survey realisations and the 84 {\sc Nseries} realisations. For each realisation, we compute the correlation function and its multipoles, and fit for the BAO peak position to determine the dilation parameters $\alpha_\parallel$, $\aperp$ and their associated errors. We compare the best-fit $\aperp, \alpha_\parallel$ to their expected values, which are obtained from the cosmological models described in Table~\ref{tab:cosmologies}. The effective redshift is $z_{\rm eff} = 0.698$ for the \textsc{EZmocks} and $z_{\rm eff} = 0.56$ for the \textsc{Nseries} mocks. In Figure~\ref{fig:ezmock_bao_fiducial} we summarize the systematic biases from pre- and post-reconstruction mocks for a few choices of fiducial cosmology, parameterised by $\Omega_m^{\rm fid}$. In pre-reconstruction mocks, biases in the recovered $\alpha$ values reach up to 0.5 per cent in $\aperp$ and 1.0 per cent in $\alpha_\parallel$. These biases are expected due to the impact of non-linear effects on the position of the peak, which cannot be correctly accounted for by the Gaussian damping terms in Eq.~\ref{eq:pk2d} at this level of precision \citep{seo_modeling_2016}. We recall that we are fitting the average of all realisations. The reconstruction procedure partly removes the non-linear effects, which is seen as a reduction of the biases to less than 0.2 per cent. The bias reduction is also seen in the {\sc Nseries} mocks, particularly on $\aperp$, confirming that it is not related to a feature of the mocks induced by the approximate method used to build them. 
\begin{figure} \centering \textbf{Pre-reconstruction}\par\medskip \includegraphics[width=\columnwidth]{figures/bao_fiducial_prerec.pdf} \textbf{Post-reconstruction}\par\medskip \includegraphics[width=\columnwidth]{figures/bao_fiducial_postrec.pdf} \caption{Impact of the choice of fiducial cosmology on the recovered values of $\alpha_\parallel$ and $\aperp$ from the stacks of 1000 multipoles from the {\sc EZmocks} (blue) and 84 {\sc Nseries} mocks (orange), for pre- (top panels) and post- (bottom panels) reconstruction. Associated error bars correspond to the error on the mean of the mocks. The gray shaded areas correspond to one per cent errors. For comparison, the error on real data is near 1.9 per cent for $\aperp$ and 2.6 per cent for $\alpha_\parallel$ in the post-reconstruction case. } \label{fig:ezmock_bao_fiducial} \end{figure} Table~\ref{tab:bao_ezmock_stats_bias} shows the results from Figure~\ref{fig:ezmock_bao_fiducial} for the post-reconstruction case only, including the fits with the hexadecapole $\xi_{\ell=4}$. The impact of the hexadecapole is negligible even in this very low-noise regime, for both types of mocks. The reported dilation parameters for almost all cases are consistent with the expected values within 2$\sigma$. We see a 2.6$\sigma$ deviation on $\aperp$ for the {\sc Nseries} case analysed with $\Omega_m^{\rm fid} = 0.35$. However, this choice of $\Omega_m^{\rm fid}$ is the most distant from the true value of the simulation, and its observed bias is still less than half a per cent, which is small compared to the statistical power of our sample. For the {\sc EZmocks}, which have smaller errors, the biases are up to 0.13 per cent for $\aperp$ and 0.18 per cent for $\alpha_\parallel$. These biases are much smaller than the expected statistical errors in our data, i.e. $\sim$1.9 per cent for $\aperp$ and $\sim$2.6 per cent for $\alpha_\parallel$, showing that our methodology is robust at this statistical level. 
In these fits, all parameters except $\Sigma_{\rm rec} = 15$\hmpc\ were left free. The best-fit values of $\Sigma_\perp,\Sigma_\parallel$ and $\Sigma_s$ were used and held fixed in the fits of individual realisations. \begin{table} \centering \caption{Average biases from BAO fits on the stacked multipoles of 1000 {\sc EZmocks} and 84 {\sc Nseries} realisations. All results are based on post-reconstruction correlation functions. } \begin{tabular}{lcccc} \hline \hline Sample & $\Omega_m^{\rm fid}$ & $\ell_{\rm max}$ & $\aperp - \aperp^{\rm exp} \ [10^{-3}]$ & $\alpha_\parallel - \alpha_\parallel^{\rm exp} \ [10^{-3}]$ \\ \hline {\sc EZ} & 0.27 & 2 & $0.4 \pm 0.7$ & $1.1 \pm 1.0$ \\ {\sc EZ} & 0.27 & 4 & $0.5 \pm 0.7$ & $1.4 \pm 1.0$ \\ {\sc EZ} & 0.31 & 2 & $0.9 \pm 0.7$ & $0.3 \pm 1.1$ \\ {\sc EZ} & 0.31 & 4 & $1.0 \pm 0.7$ & $0.4 \pm 1.1$ \\ {\sc EZ} & 0.35 & 2 & $1.3 \pm 0.7$ & $1.8 \pm 1.0$ \\ {\sc EZ} & 0.35 & 4 & $1.2 \pm 0.7$ & $1.5 \pm 1.0$ \\ {\sc NS} & 0.286 & 2 & $2.3 \pm 1.5$ & $3.1 \pm 2.4$ \\ {\sc NS} & 0.286 & 4 & $2.2 \pm 1.5$ & $3.0 \pm 2.4$ \\ {\sc NS} & 0.31 & 2 & $3.0 \pm 1.5$ & $3.6 \pm 2.4$ \\ {\sc NS} & 0.31 & 4 & $3.0 \pm 1.5$ & $3.7 \pm 2.4$ \\ {\sc NS} & 0.35 & 2 & $3.9 \pm 1.5$ & $3.2 \pm 2.4$ \\ {\sc NS} & 0.35 & 4 & $3.9 \pm 1.5$ & $3.5 \pm 2.4$ \\ \hline \end{tabular} \label{tab:bao_ezmock_stats_bias} \end{table} Results from Table~\ref{tab:bao_ezmock_stats_bias} and Figure~\ref{fig:ezmock_bao_fiducial} show no statistically significant dependence of results with the choice of fiducial cosmology. We derived the systematic errors for the BAO analysis using the values from Table~\ref{tab:bao_ezmock_stats_bias} and Eqs.~\ref{eq:syst_error1} and \ref{eq:syst_error2}. We used only the fits to the {\sc EZmocks} which have the better precision. 
The systematic errors for $\aperp$ and $\alpha_\parallel$ are, respectively: \begin{equation} {\rm BAO}: \ \sigma_{\rm syst, model} = (0.0014, 0.0021), \end{equation} which are negligible compared to the statistical errors of one realisation of our data. Note that the fiducial cosmologies considered are all flat and assume general relativity. \citet{carter_impact_2019} and \citet{bernal_robustness_2020} find that BAO measurements are robust to a larger variety of fiducial cosmologies (but all close to the assumed one). Additional systematic errors should be anticipated when extrapolating to cosmologies that are significantly different from the truth, for instance yielding dilation parameters significantly different from unity. \begin{figure*} \centering \includegraphics[width=0.35\textwidth]{figures/EZmockv7_pre_postrecon_alpha_perp_value_v72.pdf} \includegraphics[width=0.35\textwidth]{figures/EZmockv7_pre_postrecon_alpha_para_value_v72.pdf} \includegraphics[width=0.35\textwidth]{figures/EZmockv7_pre_postrecon_alpha_perp_error_v72.pdf} \includegraphics[width=0.35\textwidth]{figures/EZmockv7_pre_postrecon_alpha_para_error_v72.pdf} \caption{Distribution of the dilation parameters $\aperp$ and $\alpha_\parallel$ and their estimated errors for pre- and post-reconstruction \textsc{EZmock} catalogs with systematic effects. The color scale indicates the difference in $\chi^2$ values between a model with and without a BAO peak. The red stars show the results for the real data. There is a known mismatch in the BAO peak amplitude between the data and the EZmocks, causing the error of the data point to be slightly smaller than expected from the error distribution of the EZmocks (see Section~\ref{sec:mocks_description}). } \label{fig:ezmock_bao_alphas} \end{figure*} Figure~\ref{fig:ezmock_bao_alphas} displays the distribution of the recovered $\aperp$, $\alpha_\parallel$ and their respective errors measured from each of the individual {\sc EZmocks}. 
The error distribution shows that reconstruction improves the constraints on $\aperp$ or $\alpha_\parallel$ in 94 per cent of the realisations (89 per cent have both errors improved). As expected, realisations with smaller errors generally exhibit larger values of $\Delta \chi^2 = \chi^2_{\rm no \ peak} - \chi^2_{\rm peak}$, meaning a more pronounced BAO peak and a higher detection significance. We see no particular trend of the best-fit $\alpha$ values with $\Delta \chi^2$ in the two top panels. The red stars in Figure~\ref{fig:ezmock_bao_alphas} indicate the values obtained for the real data. The error in $\aperp$ in the data is typical of what is found in the mocks, although for $\alpha_\parallel$ it lies at the extreme of the mock distribution. As discussed in Section~\ref{sec:mocks_description} and displayed in Figure~\ref{fig:multipoles_data_versus_mocks}, the BAO peak amplitude in the data multipoles is slightly larger than the one seen in this {\sc EZmock} sample. A similar behaviour is observed in the eBOSS QSO sample \citep{hou_2020, neveux_2020}, which also uses {\sc EZmocks} from \citet{zhao_2020}, and in the BOSS DR12 CMASS sample (see Figure 12 of \citealt{ross_clustering_2017}). Table~\ref{tab:bao_ezmock_stats_errors} presents a statistical summary of the fits performed on the {\sc EZmocks}. We tested several changes to our baseline analysis: including the hexadecapole, changing the separation range $[r_{\rm min}, r_{\rm max}]$, allowing the BAO damping parameters $\Sigma_\perp$ and $\Sigma_\parallel$ to vary within a Gaussian prior ($5.5 \pm 2$\hmpc), and fitting the pre-reconstruction multipoles. We remove realisations with fits that did not converge or with extreme error values (more than 5$\sigma$ from their distribution, where $\sigma$ is defined as half the range covered by 68 per cent of values). The total number of valid realisations is given by $N_{\rm good}$ in Table~\ref{tab:bao_ezmock_stats_errors}. 
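The outlier rejection described above can be sketched as follows in Python; centring the clipping window on the median is an assumption of this sketch, since the text does not specify the centre:

```python
import numpy as np

def keep_mask(errors, nsigma=5.0):
    """Keep realisations whose error lies within nsigma of the distribution.
    sigma is half the range covered by the central 68 per cent of values;
    centring on the median is an assumption (not specified in the text)."""
    errors = np.asarray(errors, dtype=float)
    lo, hi = np.percentile(errors, [16.0, 84.0])
    sigma = 0.5 * (hi - lo)
    return np.abs(errors - np.median(errors)) <= nsigma * sigma

# a single extreme error among 100 well-behaved values is rejected
errors = np.concatenate([np.arange(100.0), [1000.0]])
mask = keep_mask(errors)
print(mask.sum())  # 100
```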
In most of the cases studied, the observed standard deviation of the best-fit parameters $\sigma(\alpha)$ is consistent with the average per-mock error estimate $\langle \sigma_\alpha \rangle$, indicating that our errors are correctly estimated. We also see that the dispersion of the dilation parameters is not significantly reduced when adding the hexadecapole $\xi_4$ to the BAO fits, showing that most of the BAO information is contained in the monopole and quadrupole at this level of precision. The mean and dispersion of the pull parameter, defined as $Z_\alpha = (\alpha - \langle \alpha \rangle)/\sigma_\alpha$, are consistent with a unit Gaussian for almost all cases, which further validates our error estimates. \begin{table*} \centering \caption{Statistics on errors from BAO fits on 1000 {\sc EZmocks} realisations. All results are based on post-reconstruction correlation functions. $\sigma$ is the scatter of the best-fit values $x_i$ amongst the $N_{\rm good}$ realisations (out of the 1000) with a confident detection and non-extreme values or errors, $\langle \sigma_i \rangle$ is the mean estimated error per mock, and $Z = (x_i - \langle x_i \rangle)/\sigma_i$ is the pull quantity, for which we show the mean $\langle Z_i \rangle$ and standard deviation $\sigma(Z)$. The first row corresponds to our baseline analysis. 
} \begin{tabular}{lccccccccccccc} \hline \hline Analysis & $N_{\rm good}$ & & \multicolumn{4}{c}{$\aperp$} & & \multicolumn{4}{c}{$\alpha_\parallel$} \\ & & & $\sigma$ & $\langle \sigma_i \rangle$ & $\langle Z_i \rangle $ & $\sigma(Z_i)$ & & $\sigma$ & $\langle \sigma_i \rangle$ & $\langle Z_i \rangle $ & $\sigma(Z_i)$ \\ \hline baseline & 990 & & 0.022 & 0.023 & -0.02 & $0.99 $ & & 0.035 & 0.036 & -0.03 & $0.96 $ \\ $\ell_{\rm max}= 4$ & 995 & & 0.022 & 0.023 & -0.02 & $0.99 $ & & 0.035 & 0.035 & -0.03 & $0.97 $ \\ pre-recon & 968 & & 0.030 & 0.030 & -0.05 & $1.07 $ & & 0.055 & 0.056 & -0.06 & $0.97 $ \\ pre-recon $\ell_{\rm max}= 4$ & 968 & & 0.029 & 0.028 & -0.03 & $1.04 $ & & 0.054 & 0.054 & -0.07 & $1.02 $ \\ $r_{\rm min} = 20$\hmpc & 979 & & 0.023 & 0.026 & -0.01 & $0.93 $ & & 0.035 & 0.040 & 0.04 & $1.26 $ \\ $r_{\rm min} = 30$\hmpc & 987 & & 0.023 & 0.024 & -0.02 & $0.95 $ & & 0.036 & 0.038 & -0.02 & $0.92 $ \\ $r_{\rm min} = 40$\hmpc & 995 & & 0.022 & 0.023 & -0.02 & $0.98 $ & & 0.035 & 0.036 & -0.02 & $0.94 $ \\ $r_{\rm max} = 160$\hmpc & 989 & & 0.022 & 0.023 & -0.02 & $0.99 $ & & 0.036 & 0.036 & -0.03 & $0.96 $ \\ $r_{\rm max} = 170$\hmpc & 989 & & 0.022 & 0.023 & -0.02 & $0.99 $ & & 0.036 & 0.036 & -0.03 & $0.96 $ \\ $r_{\rm max} = 180$\hmpc & 990 & & 0.022 & 0.023 & -0.02 & $0.98 $ & & 0.035 & 0.036 & -0.03 & $0.95 $ \\ Prior $\Sigma_{\perp,\parallel}$ & 993 & & 0.022 & 0.023 & -0.02 & $1.00 $ & & 0.035 & 0.035 & -0.03 & $0.96 $ \\ \hline \end{tabular} \label{tab:bao_ezmock_stats_errors} \end{table*} All the tests performed in this section show that our BAO analysis is unbiased and provides correct error estimates. We apply our baseline analysis to the real data and report results in Section~\ref{sec:results_bao}. \subsection{Systematics in the RSD analysis} \label{sec:systematics_rsd} We present in this section the systematic error budget of the full-shape RSD analysis. 
Particularly, we discuss the impact of the choice of scales used in the fit, the bias introduced by each model, the bias introduced by varying the fiducial cosmology, the bias associated with the choice of the LRG halo occupation distribution model, and the impact of observational effects. These are quantified through the analysis of the various sets of mocks with both the TNS and CLPT-GS models, which are described in Section~\ref{sec:rsd_modelling}. \subsubsection{Optimal fitting range of scales} We first study the optimal range of scales in the fit for the two RSD models considered in this work (see Section~\ref{sec:method}). It is worth noting that the optimal range of scales is not necessarily the same for the two models. Generally, full-shape RSD analyses use scales going from tens of \hmpc\ to about $130-150\,$\hmpc. Including smaller scales potentially increases the precision of the constraints, but at the expense of stronger biases on the recovered parameters. This is related to the limitations of current RSD models in fully describing the non-linear regime. On the other hand, including scales larger than $\sim 130$ \hmpc\ does not significantly improve the precision, since the variations of the model on those scales are small. In order to determine the optimal range of scales for our RSD models, we performed fits to the mean correlation function of the \textsc{Nseries} mocks, which are those that most accurately predict the expected RSD in the data. Figure~\ref{fig:rmin_impact} shows the best-fit values of $f{\sigma_8}$, $\alpha_\parallel$, and $\aperp$ as a function of the minimum scale used in the fit, $r_{\rm min}$. In each panel, the grey bands show 1 per cent errors in $\aperp, \alpha_\parallel$ and 3 per cent errors in $f{\sigma_8}$ for reference. The top panels present the measurements from the TNS model when the parameter $b_{\Gamma 3}$ is fixed to the value given by Eq.~\ref{eq:nllbg3}, while in the mid panels this parameter is left free. 
Bottom panels show best-fit values for the CLPT-GS model as studied in \citet{icaza-lizaola_clustering_2020}. As noted in \citet{zarrouk_clustering_2018}, the hexadecapole is more sensitive to the difference between the true and fiducial cosmologies and is generally less well modelled on small scales compared to the monopole and quadrupole. We therefore consider the possibility of having a different minimum fitting scale for the hexadecapole with respect to the monopole and quadrupole, which share the same $r_{\rm min}$. For consistency with the other systematic tests, we performed this analysis using two choices of fiducial cosmologies, $\Omega^{\rm fid}_m = 0.286$ (blue) and $\Omega^{\rm fid}_m = 0.31$ (red). The maximum separation in all cases is $r_{\rm max} = 130$\hmpc, as we find that using larger $r_{\rm max}$ has a negligible impact on the recovered parameter values and associated errors. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/rsd_rmin_nseries.pdf} \caption{Biases in the measurement of $f{\sigma_8}, \alpha_\parallel, \aperp$ obtained from full-shape fits to the average of 84 multipoles from the \textsc{Nseries} mocks as a function of the separation range used. The y-axis displays the value of the minimal separation $r_{\rm min}$ used in fits of the monopole, quadrupole (MQ) and hexadecapole (H). Top and mid rows display results for the TNS model when fixing or freeing the parameter $b_{\Gamma 3}$, respectively. Bottom row presents results for the CLPT-GS model. The blue circles correspond to the analysis using $\Omega_m^{\rm fid} = 0.286$ (the true value of the simulations) while the red squares correspond to $\Omega_m^{\rm fid} = 0.31$. The gray shaded areas correspond to 1 per cent errors in $\aperp, \alpha_\parallel$ and to 3 per cent in $f{\sigma_8}$. The green shaded area shows our baseline choice for the TNS and CLPT-GS models. 
} \label{fig:rmin_impact} \end{figure} In the case of the TNS model, we consider two different cases that correspond to when $b_{\Gamma 3}$ is fixed to its Lagrangian prediction and when $b_{\Gamma 3}$ is allowed to vary. In the case of $\Omega^{\rm fid}_m = 0.286$ and when $b_{\Gamma 3}$ is fixed, in the top panels of Figure~\ref{fig:rmin_impact}, we can see that $f\sigma_8$ is overestimated by 1.5 per cent when using scales above 25\hmpc\ and by 2 per cent below. Using $r_{\rm min}>25$\hmpc\ reduces the bias to about 1 per cent on $f\sigma_8$. For the $\alpha_\parallel$ and $\alpha_\perp$ parameters, biases range from 0.3 to 0.5 per cent and are all statistically consistent with zero. When $b_{\Gamma 3}$ is left free, in the mid panels of Figure~\ref{fig:rmin_impact}, the model provides more robust measurements of $f\sigma_8$ at all tested ranges. The biases in $f\sigma_8$ over all ranges do not exceed 0.6$\sigma$, compared to approximately 2.5$\sigma$ for the fixed $b_{\Gamma 3}$ case. We also remark that letting $b_{\Gamma 3}$ free provides a better fit to the BAO amplitude and the hexadecapole on the scales of $20-25$\hmpc. We see a 1 per cent bias on $\alpha_\parallel$ when $r_{\rm min} = 20$\hmpc\ for all three multipoles. This bias is however reduced by increasing the hexadecapole minimum scale to $r_{\rm min}=25$\hmpc. The optimal configuration for the TNS model is to let $b_{\Gamma 3}$ free and fit the monopole and quadrupole in the range $20 \leq r \leq 130$\hmpc\ and the hexadecapole in the range $25 \leq r \leq 130$\hmpc, as marked by the green band in Figure~\ref{fig:rmin_impact}. If we use $\Omega^{\rm fid}_m=0.31$, the trends and quantitative results are similar to the case with $\Omega^{\rm fid}_m=0.286$. For the CLPT-GS model, an exploration of the optimal fitting range was done in \citet{icaza-lizaola_clustering_2020}. Two sets of tests have been performed. 
The first set consisted of fitting the mean of the mocks when varying $r_{\rm min}$ and the second, fitting the 84 individual mocks and measuring the bias and variance of the best fits when varying $r_{\rm min}$. We revisit the first set of tests, but this time performing a full MCMC analysis to determine best fits and errors. The bottom panels of Figure~\ref{fig:rmin_impact} summarise the results. In the case of $\Omega^{\rm fid}_m = 0.286$, we see that using $r_{\rm min} = 25$\hmpc\ for all multipoles yields biases of 0.1, 1.1 and 1.6 per cent in $\aperp$, $\alpha_\parallel$, and $f\sigma_8$. Increasing $r_{\rm min}$ for the hexadecapole while fixing $r_{\rm min} = 25$\hmpc\ for the monopole and quadrupole does not change the results significantly: the biases are 0.1 per cent for all ranges in $\aperp$, and 1 per cent for all ranges in $\alpha_\parallel$. For $f\sigma_8$, variations of 0.1-0.2 per cent arise when varying the range, but they are statistically consistent with zero. In the case of $\Omega^{\rm fid}_m = 0.31$, we find very similar trends. Using $r_{\rm min} = 25$\hmpc\ for all multipoles yields biases of 0.2, 0.9 and 1.6 per cent in $\aperp$, $\alpha_\parallel$ and $f\sigma_8$ respectively. When we decrease the range of the fits, the biases on $(\aperp,\alpha_\parallel,f{\sigma_8})$ vary by (0.1-0.2, 0.2-0.3, 0.3-0.4) per cent. These variations are not significant and we decide to keep the lowest considered minimum scales on the hexadecapole in the fits. Compared with previous BOSS full-shape RSD analyses in configuration space, we used for the CLPT-GS model the same minimum scale for the monopole and quadrupole \citep{satpathy_clustering_2017, alam_clustering_2017}. The hexadecapole was not included in BOSS analyses. The exploration of the optimal minimum scale to be used for the hexadecapole was done in \cite{icaza-lizaola_clustering_2020} and revisited in this work. 
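As an illustration of how such scale cuts enter the analysis, the baseline ranges can be encoded as per-multipole masks on the binned separation vector (a minimal sketch; the 5 \hmpc\ bin centres are hypothetical, while the cut values are those quoted for the TNS baseline):

```python
import numpy as np

# Hypothetical separation bin centres of the measured multipoles (5 Mpc/h bins)
r = np.arange(2.5, 200.0, 5.0)

# Scale cuts per multipole for the TNS-model baseline:
# monopole and quadrupole on 20 < r < 130 Mpc/h, hexadecapole on 25 < r < 130 Mpc/h
cuts = {0: (20.0, 130.0), 2: (20.0, 130.0), 4: (25.0, 130.0)}
masks = {ell: (r > rmin) & (r < rmax) for ell, (rmin, rmax) in cuts.items()}

# The data vector entering the fit concatenates the three masked multipoles
n_points = sum(int(m.sum()) for m in masks.values())
```

The CLPT-GS baseline is obtained by simply raising the two lower bounds to 25 \hmpc.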
The systematic error associated with the adopted fitting range is also consistent with previous results for the case where only the monopole and quadrupole are used, as reported in \cite{icaza-lizaola_clustering_2020}. The TNS model was not used in configuration space for analysing previous SDSS samples. However, as we describe in Section~\ref{section:syserr}, the bias associated with both models when using their optimal fitting ranges is consistent between them, as well as consistent with previous BOSS results. Overall, these tests performed on the {\sc Nseries} mocks allow us to define the optimal fitting ranges of scales for both RSD models. Minimising the bias of the models while keeping $r_{\rm min}$ as small as possible, we eventually adopt the following optimal ranges: \begin{itemize} \item TNS model: $20 < r < 130$\hmpc\ for $\xi_0$ and $\xi_2$, and $25 < r < 130$\hmpc\ for $\xi_4$, \item CLPT-GS model: $25 < r < 130$\hmpc\ for all multipoles, \end{itemize} which serve as the baseline in the following. We compare the performance of the two models using these ranges in the following sections. \subsubsection{Systematic errors from RSD modelling and adopted fiducial cosmology}\label{section:syserr} We quantify in this section the systematic error introduced by the RSD modelling and the choice of fiducial cosmology. For this, we used the {\sc Nseries} mocks\footnote{Given the mismatch between the clustering of the \textsc{MockChallenge} mocks and data, and their larger cosmic variance compared to {\sc Nseries} mocks, we decided to use {\sc MockChallenge} only for the quantification of systematic errors related to the halo occupation models.}. The measurements of $\aperp$, $\alpha_\parallel$ and $f\sigma_8$ from fits to the average multipoles are given in Table~\ref{tab:rsd_ns_stats_bias} and shown in Figure~\ref{fig:rsd_nseries_cosmology}. 
The shaded area in the figure corresponds to a 1 per cent deviation from the expected values of $\aperp$ and $\alpha_\parallel$, and a 3 per cent deviation for $f{\sigma_8}$. We use both TNS (red) and CLPT-GS (blue) models and consider three choices of fiducial cosmology, parameterised by their value of $\Omega_m^{\rm fid}$. Note that, as for the BAO analysis, we only test flat $\Lambda$CDM models close to the most probable one. We expect the full-shape analysis to be biased if the fiducial cosmology is too different from the truth (the parametrisation with $\aperp$ and $\alpha_\parallel$ would not fully account for the distortions and the template power spectrum would differ significantly). \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/rsd_nseries_fidcosmo_updated.pdf} \caption{Biases in best-fit parameters for both CLPT-GS (blue) and TNS (red) models from fits to the average multipoles of 84 {\sc Nseries} mocks. Shaded grey areas show the equivalent of 1 per cent error for $\aperp,\alpha_\parallel$ and 3 per cent for $f{\sigma_8}$. In the right panel, crosses indicate $f{\sigma_8}$ values when $\sigma_8$ is not recomputed as described in Section~\ref{sec:fs8_scaling}. The true cosmology of the mocks is $\Omega_m = 0.286$. For reference, the errors on our data sample are $\sim$ 2, 3 and 10 per cent for $\aperp, \alpha_\parallel, f{\sigma_8}$ respectively. } \label{fig:rsd_nseries_cosmology} \end{figure} \begin{table} \centering \caption{Performance of the two full-shape models on the {\sc Nseries} mocks. Fits were performed on the average of 84 multipoles. We report the shifts of best-fit parameters relative to their expected values. For $\Omega_m^{\rm fid} = 0.286$ we expect that both the $\alpha$ parameters are equal to 1. For $\Omega_m^{\rm fid} = 0.31$, $\aperp^{\rm exp} = 0.9788$, $\alpha_\parallel^{\rm exp}= 0.9878$ while for $\Omega_m^{\rm fid} = 0.35$ we expect $\aperp^{\rm exp} = 0.9623$, $\alpha_\parallel^{\rm exp}= 0.9851$. 
Since the growth rate of structures does not depend on the assumed cosmology, we expect to recover $f\sigma_8^{\rm exp} = 0.469$ in all cases. } \begin{tabular}{lccccc} \hline \hline Model & $\Omega_m^{\rm fid}$ & $\Delta \aperp \ [10^{-2}]$ & $\Delta \alpha_\parallel \ [10^{-2}]$ & $\Delta f\sigma_8 \ [10^{-2}]$ \\ \hline {\sc clpt-gs} & 0.286& $0.2 \pm 0.2$ & $-0.9 \pm 0.3$ & $-0.6 \pm 0.5$ \\ {\sc clpt-gs} & 0.31& $-0.2 \pm 0.2$ & $-0.8 \pm 0.3$ & $-0.0 \pm 0.5$ \\ {\sc clpt-gs} & 0.35& $-0.7 \pm 0.2$ & $-1.1 \pm 0.3$ & $0.0 \pm 0.5$ \\ {\sc tns} & 0.286& $-0.2 \pm 0.2$ & $-0.5 \pm 0.3$ & $0.6 \pm 0.4$ \\ {\sc tns} & 0.31& $-0.3 \pm 0.2$ & $-0.6 \pm 0.3$ & $0.3 \pm 0.4$ \\ {\sc tns} & 0.35& $-0.5 \pm 0.2$ & $-0.5 \pm 0.3$ & $-0.3 \pm 0.4$ \\ \hline \hline \end{tabular} \label{tab:rsd_ns_stats_bias} \end{table} We find that both RSD models are able to recover the true parameter values within these bounds. We estimate the systematic errors related to RSD modelling using Eq.~\ref{eq:syst_error1} and \ref{eq:syst_error2} by considering the shifts for the case where $\Omega_m^{\rm fid} = 0.286$, which is the true cosmology of the {\sc Nseries} mocks. We obtain, for $\aperp$, $\alpha_\parallel$ and $f{\sigma_8}$, respectively: \begin{eqnarray} {\rm CLPT-GS}: \sigma_{\rm syst, model} = (0.4, \ 0.9, \ 1.0) \times 10^{-2} \\ {\rm TNS}: \sigma_{\rm syst, model} = (0.4, \ 0.6, \ 0.9) \times 10^{-2}. \end{eqnarray} The biases on the recovered parameters shown in Figure~\ref{fig:rsd_nseries_cosmology} induced by the choice of fiducial cosmology remain within 1, 1, and 3 per cent for $\aperp, \alpha_\parallel$, and $f{\sigma_8}$ respectively. For $\aperp$, both CLPT-GS and TNS models produce biases lower than 2$\sigma$ for all cosmologies except $\Omega_m^{\rm fid}=0.35$, which is the most distant value from the true cosmology of the simulation, $\Omega_m = 0.286$. 
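The exact definitions of Eqs.~\ref{eq:syst_error1} and \ref{eq:syst_error2} are given earlier in the paper; a rule that reproduces the CLPT-GS values above (an assumption on our part, shown only as a sketch) keeps the measured shift when it is significant and falls back to twice its uncertainty otherwise:

```python
import numpy as np

def syst_error(shift, shift_err):
    """Assumed convention (sketch only): adopt |shift| when the shift is
    detected at more than 2 sigma, otherwise adopt 2 x its uncertainty."""
    shift = np.abs(np.asarray(shift, dtype=float))
    shift_err = np.asarray(shift_err, dtype=float)
    return np.where(shift > 2.0 * shift_err, shift, 2.0 * shift_err)

# CLPT-GS shifts at the true cosmology (Table values, units of 1e-2):
# 0.2 +/- 0.2, -0.9 +/- 0.3, -0.6 +/- 0.5 for (aperp, apar, fsigma8)
syst = syst_error([0.2, -0.9, -0.6], [0.2, 0.3, 0.5])  # -> (0.4, 0.9, 1.0)
```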
For $\alpha_\parallel$, all biases are consistent with zero at the 2$\sigma$ level for the TNS model, while CLPT-GS shows biases slightly larger than 2$\sigma$ for all $\Omega_m^{\rm fid}$. The right panel of Figure~\ref{fig:rsd_nseries_cosmology} shows the measured $f{\sigma_8}$ when using the original value of $\sigma_8$ from the template (crosses) and when recomputing it after rescaling $R=8$\hmpc\ by the isotropic dilation factor $\alpha = \aperp^{(2/3)}\alpha_\parallel^{(1/3)}$ (filled circles), as described in Section~\ref{sec:fs8_scaling}. Both TNS and CLPT-GS models show a consistent dependency on $\Omega_m^{\rm fid}$ when $\sigma_8$ is not re-evaluated: larger $\Omega_m^{\rm fid}$ yields smaller $f{\sigma_8}$. This is also found in the Fourier-space analysis of \citet{gil-marin_2020} and in Figure 14 of \citet{smith_2020}. When we recompute $\sigma_8$, this dependency is considerably reduced, which in turn reduces the contribution of the choice of fiducial cosmology to the systematic error budget. Using Eq.~\ref{eq:syst_error1} and \ref{eq:syst_error2}, with the entries of Table~\ref{tab:rsd_ns_stats_bias} (with $\sigma_8$ re-computed) where $\Omega_m^{\rm fid} \neq 0.286$ compared to the entries where $\Omega_m^{\rm fid} = 0.286$, we obtain the following systematic errors associated with the choice of fiducial cosmology for $\aperp$, $\alpha_\parallel$ and $f{\sigma_8}$, respectively: \begin{eqnarray} {\rm CLPT-GS}: \sigma_{\rm syst, fid} = (0.9, \ 1.0, \ 1.4 ) \times 10^{-2} \\ {\rm TNS}: \sigma_{\rm syst, fid} = (0.5, \ 0.8, \ 1.2) \times 10^{-2}. \end{eqnarray} These systematic errors would be twice as large if $\sigma_8$ were not recomputed as described in Section~\ref{sec:fs8_scaling}. \subsubsection{Systematic errors from HOD} We quantify in this section the potential systematic errors introduced by the models with respect to how LRGs occupy dark matter halos. 
This is done by analysing mock catalogs produced with different halo occupation distribution (HOD) models that mimic different underlying galaxy clustering properties. The same input dark matter field is used when varying the HOD model. We use the \textsc{OuterRim} mocks described in Section~\ref{sec:mocks_description} and in \citet{rossi_2020}. Specifically, we analysed the mocks constructed using the ``Threshold 2'' version of the HOD models from \citet{leauthaud_theoretical_2011, tinker_evolution_2013} and \citet{hearin_beyond_2015} and performed fits to the average multipoles over the 27 realisations available for each HOD model. Figure~\ref{fig:OR_stats} and Table~\ref{tab:rsd_or_stats_bias} show the results. In this figure, each best-fit parameter is compared to the average best fit over all HOD models in order to quantify the relative impact of each HOD (instead of comparing with their true value). The biases with respect to the true values were quantified in the previous section. The shaded regions represent 1 per cent error for $\aperp$ and $\alpha_\parallel$, and 3 per cent error for $f\sigma_8$. We find that the biases for both RSD models are all within 1$\sigma$ from the mean, although statistical errors are quite large (around one per cent for $\aperp, \alpha_\parallel$) compared to {\sc Nseries} mocks for instance. Also, the observed shifts are all smaller than the systematic errors estimated in the previous section. If we were to use the same definition for the systematic error introduced in Section~\ref{sec:robustness}, the relatively large errors from these measurements would produce a significant contribution to the error budget. Therefore we consider that the HOD choice has a negligible contribution to the total systematic error budget. \begin{table} \centering \caption{Performance of the full-shape analyses on the {\sc OuterRim} mocks produced using different HOD recipes. 
For each HOD \citep{leauthaud_theoretical_2011, tinker_evolution_2013, hearin_beyond_2015}, we display results obtained from our two RSD models (CLPT-GS and TNS). All results are from fits to the average multipoles of 27 realisations. Each row displays the shift of best-fit parameters with respect to the average parameters over the three HOD models: $\Delta x = x - \langle x \rangle_{\rm HOD}$. We found that these shifts are not significant and therefore do not contribute to systematic errors. } \begin{tabular}{llccc} \hline \hline HOD & Model & $\Delta \aperp$ [$10^{-2}$] & $\Delta \alpha_\parallel$ [$10^{-2}$]& $\Delta f{\sigma_8}$ [$10^{-2}$] \\ \hline L11 & {\sc clpt-gs} & $0.0 \pm 0.7$ & $0.0 \pm 1.1$ & $-0.1 \pm 1.7$ \\ T13 & {\sc clpt-gs} & $0.1 \pm 0.8$ & $-0.2 \pm 1.2$ & $-0.6 \pm 1.8$ \\ H15 & {\sc clpt-gs} & $0.0 \pm 0.7$ & $0.3 \pm 1.1$ & $0.6 \pm 1.8$ \\ L11 & {\sc tns} & $-0.4 \pm 0.5$ & $-0.7 \pm 1.1$ & $0.7 \pm 1.5$ \\ T13 & {\sc tns} & $0.2 \pm 0.6$ & $0.8 \pm 1.0$ & $-0.9 \pm 1.4$ \\ H15 & {\sc tns} & $0.2 \pm 0.6$ & $-0.1 \pm 1.0$ & $0.2 \pm 1.5$ \\ \hline \end{tabular} \label{tab:rsd_or_stats_bias} \end{table} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/rsd_outerrim_hod_vs_mean_29april.pdf} \caption{Best-fit values of $\aperp$, $\alpha_\parallel$ and $f{\sigma_8}$ from fitting the average multipoles of the \textsc{OuterRim} mocks compared to their average over all HOD models. Blue points show results for the CLPT-GS model and red points show results for the TNS model. The shaded area shows 1\% error for $\aperp, \alpha_\parallel$ and 3\% for $f{\sigma_8}$. } \label{fig:OR_stats} \end{figure} \subsubsection{Systematic errors from observational effects} We investigate in this section the impact of observational systematics. We used a set of 100 {\sc EZmocks} to quantify their impact on our measurements. Starting from the same underlying set, we added different observational effects. 
For simplicity, those samples were made from mocks reproducing only the eBOSS component of the survey, neglecting the CMASS component. We consider that the systematic errors estimated this way can be extrapolated to the full eBOSS+CMASS sample by assuming that their contribution is the same over the CMASS volume. We thus produced the following samples: \begin{itemize} \item[1.] no observational effects included, which we use as reference, \item[2.] including the effect of the radial integral constraint \citep[RIC,][]{de_mattia_integral_2019}, where the redshifts of the random catalog are randomly chosen from the redshifts of the data catalog, \item[3.] including RIC and all angular features: fiber collisions, redshift failures, and photometric corrections. \end{itemize} For each set, we computed the average multipoles and fitted them using our two RSD models. The covariance matrix is held fixed between cases. Table~\ref{tab:rsd_ez_stats_bias} summarises the biases in $\aperp, \alpha_\parallel, f{\sigma_8}$ caused by the different observational effects. The shifts are relative to results of mocks without observational effects. We find that the radial integral constraint produces the greatest effect, particularly for the CLPT-GS model, for which the deviation on $f{\sigma_8}$ is slightly larger than $2\sigma$. Indeed, the quadrupole for mocks with RIC has smaller absolute amplitude, which translates into smaller $f{\sigma_8}$ values. However, when adding angular observational effects the shifts are all broadly consistent with zero, which indicates that the two effects partially cancel each other. 
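The ``shuffled'' construction of the RIC sample can be sketched as follows (catalogue sizes and the redshift range below are hypothetical; the real case is the eBOSS LRG sample):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data-catalogue redshifts
data_z = rng.uniform(0.6, 1.0, size=5000)

# Shuffled randoms: each random redshift is drawn from the observed data
# redshifts, which imprints the radial integral constraint (RIC) on the mocks
random_z = rng.choice(data_z, size=20 * data_z.size, replace=True)
```

By construction the radial selection function of the randoms then follows the data, which is what suppresses purely radial modes.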
Using values from Table~\ref{tab:rsd_ez_stats_bias} and Eqs.~\ref{eq:syst_error1} and \ref{eq:syst_error2}, we derive the following systematic errors from observational effects for $\aperp$, $\alpha_\parallel$ and $f{\sigma_8}$, respectively: \begin{eqnarray} {\rm CLPT-GS}: \sigma_{\rm syst, obs} &=& (0.9, \ 1.2, \ 1.7 ) \times 10^{-2} \\ {\rm TNS}: \sigma_{\rm syst, obs} &=& (1.0, \ 1.3, \ 1.8 ) \times 10^{-2}. \end{eqnarray} These systematic errors are about 50 per cent of the statistical errors for each parameter, making them the most significant contribution to the systematic error budget. \begin{table} \centering \caption{Impact of observational effects on the full-shape analysis using {\sc EZmocks}. Each row displays the shifts of best-fit parameters with respect to the case without observational effects (``no syst''): $\Delta x = x - x^{\rm no \ syst}$. Fits are performed on the average multipoles of 100 realisations. We test the cases of mocks with the radial integral constraint (RIC) and mocks with the combination of RIC and all angular observational effects (fiber collisions, redshift failures and photometric fluctuations). The angular effects introduced in mocks are corrected using the same procedure used on the data. For simplicity, the mocks used here are only for the eBOSS part of the survey. } \begin{tabular}{ccccc} \hline \hline Type & Model & $\Delta \aperp$ [$10^{-2}$] & $\Delta \alpha_\parallel$ [$10^{-2}$]& $\Delta f\sigma_8$ [$10^{-2}$] \\ \hline RIC & {\sc clpt-gs} & $-0.3 \pm 0.5$ & $1.1 \pm 0.6$ & $-1.7 \pm 0.8$ \\ $+$Ang. Sys. & {\sc clpt-gs} & $0.0 \pm 0.4$ & $0.3 \pm 0.6$ & $0.0 \pm 0.9$ \\ RIC & {\sc tns} & $0.6 \pm 0.5$ & $-0.1 \pm 0.7$ & $-0.8 \pm 0.9$ \\ $+$Ang. Sys. 
& {\sc tns} & $0.8 \pm 0.5$ & $-0.2 \pm 0.7$ & $0.1 \pm 0.9$ \\ \hline \hline \end{tabular} \label{tab:rsd_ez_stats_bias} \end{table} \subsubsection{Total systematic error of the full-shape RSD analysis} Table~\ref{tab:rsd_total_systematic} summarises all systematic error contributions to the full-shape measurements discussed in the previous sections. We show the results for our two configuration-space RSD models, TNS and CLPT-GS, and for the Fourier-space analysis of \citet{gil-marin_2020}. We compute the total systematic error $\sigma_{\rm syst}$ by summing up all the contributions in quadrature, assuming that they are all independent. By comparing the systematic errors with the statistical error from the baseline fits to the data (see Section~\ref{sec:results_rsd}), we find that the systematic errors are far from being negligible: more than 50 per cent of the statistical errors for all parameters. The systematic errors are added in quadrature to the diagonal of the covariance matrix of each measurement. We do not attempt to compute the covariance between systematic errors, and this approach is more conservative (it does not underestimate errors). \begin{table} \centering \caption{Summary of systematic errors obtained from tests with mock catalogs. The total systematic error $\sigma_{\rm syst}$ is the quadratic sum of each contribution. We compare the systematic errors to the statistical errors from our baseline fits on real data. The last rows display the final error, which is a quadratic sum of statistical and systematic errors. } \begin{tabular}{llccc} \hline \hline Type & Model & $\sigma_{\aperp}$ & $\sigma_{\alpha_\parallel}$ & $\sigma_{f{\sigma_8}}$ \\ \hline \multirow{2}{*}{Modelling} & {\sc clpt-gs} & 0.004 & 0.009 & 0.010 \\ & {\sc tns} & 0.004 & 0.006 & 0.009 \\ \multirow{2}{*}{Fid. cosmology} & {\sc clpt-gs} & 0.009 & 0.010 & 0.014 \\ & {\sc tns} & 0.005 & 0.008 & 0.012 \\ \multirow{2}{*}{Obs. 
effects} & {\sc clpt-gs} & 0.009 & 0.012 & 0.017 \\ & {\sc tns} & 0.010 & 0.014 & 0.018 \\ \hline & {\sc clpt-gs} & 0.013 & 0.018 & 0.024 \\ $\sigma_{\rm syst}$ & {\sc tns} & 0.012 & 0.017 & 0.023 \\ & $P_\ell$ & 0.012 & 0.013 & 0.024 \\ \hline & {\sc clpt-gs} & 0.020 & 0.028 & 0.045 \\ $\sigma_{\rm stat}$ & {\sc tns} & 0.018 & 0.031 & 0.040 \\ & $P_\ell$ & 0.027 & 0.036 & 0.042 \\ \hline & {\sc clpt-gs} & 0.66 & 0.63 & 0.54 \\ $\sigma_{\rm syst}/\sigma_{\rm stat}$ & {\sc tns} & 0.65 & 0.55 & 0.58 \\ & $P_\ell$ & 0.43 & 0.37 & 0.58 \\ \hline & {\sc clpt-gs} & 0.024 & 0.033 & 0.051 \\ $\sigma_{\rm tot} = \sqrt{\sigma_{\rm syst}^2+\sigma_{\rm stat}^2}$ & {\sc tns} & 0.021 & 0.035 & 0.046 \\ & $P_\ell$ & 0.029 & 0.038 & 0.048 \\ \hline \hline \end{tabular} \label{tab:rsd_total_systematic} \end{table} \subsection{Statistical properties of the LRG sample} \label{sec:statistical_properties} We can also use the {\sc EZmocks} to evaluate the statistical properties of the LRG sample, in particular to quantify how typical our data is compared with the {\sc EZmocks}, but also to measure the correlations among the different methods and to globally validate our error estimation. \begin{figure*} \centering \includegraphics[width=0.49\textwidth]{figures/Plot_subspace_mocks_scatter_histo_EZmocks_CLPT-TNS_new.pdf} \includegraphics[width=0.49\textwidth]{figures/Plot_subspace_mocks_scatter_histo_EZmocks_CLPT-TNS_Errors_new.pdf} \caption{Comparison between best-fit values (left panels) and estimated errors (right panels) for $(\aperp, \alpha_\parallel, f{\sigma_8})$ using 1000 realisations of {\sc EZmocks} fitted with the TNS and CLPT-GS models. The values obtained with real data are indicated by stars in each panel or as coloured vertical lines in the histograms. The thin black dashed lines on the 2D plots refer to the true values of each parameter in the {\sc EZmocks}. 
} \label{fig:Triangle_params_errors_EZ} \end{figure*} The left panel of Figure~\ref{fig:Triangle_params_errors_EZ} presents a comparison between the best-fit $(\aperp, \alpha_\parallel, f{\sigma_8})$ and their estimated errors from fits of the TNS and the CLPT-GS models. The confidence contours contain approximately 68 per cent and 95 per cent of the results around the mean. The contours and histograms reveal a good agreement between the two models. Stars indicate the corresponding best-fit values obtained from the data. The correlations between best-fit parameters of both models are 86, 83 and 93 per cent for $\aperp$, $\alpha_\parallel$ and $f{\sigma_8}$ respectively. A similar comparison for the errors is presented in the right panel of Figure~\ref{fig:Triangle_params_errors_EZ}. The errors inferred from the data analysis, shown as stars, are in good agreement with the 2D distributions from the mocks, lying within the 68 per cent contours. The histograms comparing the distributions of errors for both methods also show a good agreement, in particular for $\alpha_\parallel$ and $f\sigma_8$. For $\aperp$, we observe that the distribution from CLPT-GS is slightly peaked towards smaller errors, while for TNS the error distribution has a larger dispersion for this parameter. The correlation coefficients between estimated errors from the two models are: 56, 38, and 39 per cent for $\aperp, \alpha_\parallel, f{\sigma_8}$, respectively. Table~\ref{tab:ezmock_stats_errors} summarises the statistical properties of the errors on $\aperp, \alpha_\parallel, f{\sigma_8}$ for both the BAO and full-shape RSD analyses in configuration space (denoted $\xi_\ell$). We also include for reference the results from the Fourier-space analysis of \citet{gil-marin_2020}, denoted $P_\ell$. 
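The correlations quoted above are Pearson coefficients computed across the mock realisations; a synthetic sketch (the scatter values are hypothetical, tuned so the correlation lands near the $f{\sigma_8}$ level):

```python
import numpy as np

rng = np.random.default_rng(3)

# Two methods measuring the same realisations share the cosmic-variance part
# of the scatter; method-specific noise decorrelates them only partially
common = rng.normal(0.46, 0.045, size=1000)
fs8_clpt = common + rng.normal(0.0, 0.015, size=1000)
fs8_tns = common + rng.normal(0.0, 0.015, size=1000)

# Pearson correlation across realisations, as entered in the consensus covariance
rho = np.corrcoef(fs8_clpt, fs8_tns)[0, 1]
```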
For each parameter, we show the standard deviation of the best-fit values, $\sigma$, the mean estimated error $\langle \sigma_i \rangle$, the mean of the pull, $Z_i = (x_i - \langle x \rangle)/\sigma_i$, where $x \in \{\aperp, \alpha_\parallel, f{\sigma_8}\}$ and $\sigma_i$ is the estimated error of realisation $i$, and its standard deviation $\sigma(Z_i)$. If errors are correctly estimated and follow a Gaussian distribution, we expect that $\sigma = \langle \sigma_i \rangle$, $\langle Z_i \rangle = 0$ and $\sigma(Z_i) = 1$. For each method, we remove results from non-converged chains and 5$\sigma$ outliers in both best-fit values and errors (with $\sigma$ defined as half of the range covered by the central 68 per cent values). Table~\ref{tab:ezmock_stats_errors} also shows the results from combining different methods employing the procedure described in Section~\ref{sec:combining_analysis}. For each combination, we create the covariance matrix $C$ (Eq.~\ref{eq:full_covariance}) from the correlation coefficients obtained from 1000 {\sc EZmocks} fits, with small adjustments to account for the observed errors of a given realisation. The correlation coefficients (before this adjustment) are shown in Figure~\ref{fig:correlation_matrix} for all five methods. The BAO measurements from configuration and Fourier spaces are 87 and 88 per cent correlated for $\aperp$ and $\alpha_\parallel$, respectively. In RSD analyses these correlations reduce to slightly less than 80 per cent between $\aperp, \alpha_\parallel$ of both spaces, while $f{\sigma_8}$ correlations reach 84 per cent. The fact that these correlations are not exactly 100 per cent indicates that there is potential gain in combining them. For the BAO results (top three rows of Table~\ref{tab:ezmock_stats_errors}), we see good agreement between $\sigma$ and $\langle \sigma_i \rangle$ for all the parameters in both spaces. 
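The pull-based validation described above can be sketched with synthetic measurements whose errors are, by construction, correctly estimated:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic best fits with true scatter 0.035 and correctly estimated errors
best_fit = rng.normal(1.0, 0.035, size=1000)
sigma_i = np.full(best_fit.size, 0.035)

# Pull of each realisation; for well-estimated Gaussian errors we expect
# mean(Z) ~ 0 and std(Z) ~ 1
pull = (best_fit - best_fit.mean()) / sigma_i
```

Overestimated errors would push std(Z) below one, underestimated errors above one, which is the diagnostic applied throughout the table.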
The mean of the pull $\langle Z_i \rangle$ is consistent with zero (their errors are roughly 0.02) and the standard deviation $\sigma(Z_i)$ is slightly smaller than unity for all variables, indicating that errors might be slightly overestimated. The combined BAO results ($\xi_\ell+P_\ell$) have errors slightly reduced to 2.2 per cent for $\aperp$ and 3.4 per cent for $\alpha_\parallel$ (based on the scatter $\sigma$ of the best-fit values). The $\sigma(Z_i)$ are both closer to 1.0, indicating a better estimate of the errors in the combined case. As a conservative approach, the BAO errors on data (Section~\ref{sec:results_bao}) are therefore not corrected for this overestimation. Full-shape RSD results (4th to 8th rows in Table~\ref{tab:ezmock_stats_errors}) also show good agreement between $\sigma$ and $\langle \sigma_i \rangle$ for all the parameters, for both models and both spaces. Figure~\ref{fig:Unitary_errors_EZmocks} shows the pull distributions for both CLPT-GS and TNS models. The mean of the pull for $\aperp$ and $f{\sigma_8}$ is consistent with zero in all cases, though the mean pull for $\alpha_\parallel$ is negative, indicating a slightly skewed distribution. The $\sigma(Z_i)$ values for the CLPT-GS and TNS models are consistent with one for $\aperp$ and slightly different from one for $\alpha_\parallel$ and $f{\sigma_8}$. Their combination (6th row) with inverse-variance weighting slightly compensates for these differences, yielding better estimated errors, with $\sigma(Z_i)$ closer to one for all three parameters. The full-shape measurements in Fourier space (7th row) show similar behaviour to the ones in configuration space, but with larger errors. 
This is due to the larger number of nuisance parameters in the Fourier-space analysis and to the choice of scales used in the Fourier-space fits ($0.02 \leq k \leq 0.15 \ h {\rm Mpc}^{-1}$), which do not exactly translate to the range in separation used in our fits $(25 < r< 130$~\hmpc), and may contain less information on average. The combined $\xi_\ell+P_\ell$ full-shape results in the 8th row present smaller dispersion on all parameters relative to each individual method. The pull values indicate slightly overestimated errors, which we do not attempt to correct. \begin{figure} \includegraphics[width=\columnwidth]{figures/Distrib_Unitary_means_all_new.pdf} \caption{Normalised distributions of the pull for $\alpha_\parallel$, $\aperp$ and $f{\sigma_8}$ from fits of the TNS and CLPT-GS models on {\sc EZmocks}. The blue dashed lines represent the centred, normalised Gaussian distribution. } \label{fig:Unitary_errors_EZmocks} \end{figure} \begin{table*} \centering \caption{Statistics on errors from consensus results on 1000 {\sc EZmocks} realisations. For each parameter, we show the standard deviation of best-fit values, $\sigma(x_i)$, the mean estimated error $\langle \sigma_i \rangle$, the mean of the pull, $Z_i = (x_i - \langle x_i \rangle)/\sigma_i$, and its standard deviation $\sigma(Z_i)$. $N_{\rm good}$ shows the number of valid realisations for each case after removing extreme values and errors at the 5$\sigma$ level. 
} \begin{tabular}{lccccccccccccccc} \hline \hline Observable & $N_{\rm good}$ & \multicolumn{4}{c}{$\aperp$} & & \multicolumn{4}{c}{$\alpha_\parallel$} & & \multicolumn{4}{c}{$f{\sigma_8}$} \\ & & $\sigma$ & $\langle \sigma_i \rangle$ & $\langle Z_i \rangle $ & $\sigma(Z_i)$ & & $\sigma$ & $\langle \sigma_i \rangle$ & $\langle Z_i \rangle $ & $\sigma(Z_i)$ & & $\sigma$ & $\langle \sigma_i \rangle$ & $\langle Z_i \rangle $ & $\sigma(Z_i)$ \\ \hline {\sc bao} $\xi_\ell$ & 987 & 0.023 & 0.023 & -0.02 & 0.98 & & 0.036 & 0.035 & -0.02 & 0.96 & & - & - & - & - \\ {\sc bao} $P_\ell$ & 978 & 0.024 & 0.024 & -0.02 & 0.95 & & 0.039 & 0.040 & 0.00 & 0.90 & & - & - & - & - \\ {\sc bao} $\xi_\ell+P_\ell$ & 970 & 0.022 & 0.022 & -0.02 & 1.01 & & 0.034 & 0.034 & -0.02 & 0.97 & & - & - & - & - \\ \hline {\sc rsd} $\xi_\ell$ {\sc clpt} & 819 & 0.023 & 0.021 & 0.01 & 1.03 & & 0.033 & 0.033 & -0.04 & 0.95 & & 0.046 & 0.045 & -0.01 & 0.97 \\ {\sc rsd} $\xi_\ell$ {\sc tns} & 951 & 0.024 & 0.023 & -0.05 & 1.03 & & 0.037 & 0.033 & -0.05 & 1.07 & & 0.046 & 0.045 & -0.01 & 0.95 \\ {\sc rsd} $\xi_\ell$ & 781 & 0.021 & 0.021 & -0.01 & 0.99 & & 0.031 & 0.032 & -0.03 & 0.96 & & 0.042 & 0.045 & -0.01 & 0.95 \\ {\sc rsd} $P_\ell$ & 977 & 0.025 & 0.026 & 0.02 & 0.94 & & 0.037 & 0.036 & -0.04 & 1.00 & & 0.046 & 0.046 & 0.01 & 0.96 \\ {\sc rsd} $\xi_\ell+P_\ell$ & 767 & 0.019 & 0.020 & 0.00 & 0.98 & & 0.030 & 0.031 & -0.03 & 0.97 & & 0.041 & 0.043 & -0.00 & 0.97 \\ \hline {\sc bao}$+${\sc rsd} $\xi_\ell$ & 772 & 0.018 & 0.019 & -0.01 & 1.00 & & 0.024 & 0.025 & -0.03 & 0.97 & & 0.043 & 0.040 & -0.02 & 1.06 \\ {\sc bao}$+${\sc rsd} $P_\ell$ & 955 & 0.019 & 0.020 & 0.00 & 0.96 & & 0.028 & 0.029 & -0.03 & 0.96 & & 0.044 & 0.042 & -0.01 & 1.05 \\ {\sc bao}$\times${\sc rsd} $P_\ell$ & 986 & 0.019 & 0.019 & 0.03 & 0.99 & & 0.029 & 0.028 & -0.06 & 1.02 & & 0.041 & 0.045 & -0.01 & 0.92 \\ \hline {\sc bao} ($\xi_\ell+P_\ell$) + & 747 & 0.017 & 0.018 & -0.01 & 1.00 & & 0.024 & 0.025 & -0.03 & 0.97 & & 0.042 & 
0.039 & -0.02 & 1.09 \\ {\sc rsd} ($\xi_\ell+P_\ell$) \\ ({\sc bao}$+${\sc rsd}) $\xi_\ell$ + & 747 & 0.017 & 0.018 & -0.01 & 1.01 & & 0.024 & 0.025 & -0.03 & 0.99 & & 0.042 & 0.039 & -0.02 & 1.09 \\ ({\sc bao}$+${\sc rsd}) $P_\ell$ \\ \hline \hline \end{tabular} \label{tab:ezmock_stats_errors} \end{table*} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/correlation_matrix.pdf} \caption{Correlation coefficients between $\aperp, \alpha_\parallel, f{\sigma_8}$ for all methods and models obtained from fits to 1000 {\sc EZmock} realisations of the eBOSS LRG+CMASS sample. The values of $f{\sigma_8}$ have been corrected with the procedure described in Section~\ref{sec:fs8_scaling}.} \label{fig:correlation_matrix} \end{figure} The 9th and 10th rows of Table \ref{tab:ezmock_stats_errors} show results of combining BAO and full-shape RSD results for a given space, $\xi_\ell$ or $P_\ell$, while fully accounting for their large covariance as described in Section~\ref{sec:combining_analysis}. We see that the scatter of $\aperp$ and $\alpha_\parallel$ is reduced by $\sim$ 20 and 30 per cent, respectively, relative to their BAO-only analyses. For $f{\sigma_8}$ the scatter of best-fit values is the same as in the full-shape-only analyses, as expected (BAO-only fits do not provide extra information on $f{\sigma_8}$). The values of $\sigma(Z_i)$ for the combined results are consistent with unity for $\aperp, \alpha_\parallel$, though for $f{\sigma_8}$ they are more than 5 per cent larger than unity for both configuration and Fourier space. This would indicate that our combination procedure from Section~\ref{sec:combining_analysis} produces slightly underestimated errors for $f{\sigma_8}$. In \citet{gil-marin_2020}, an alternative method was suggested to extract the consensus results from BAO and RSD analyses: a simultaneous fit. Both BAO and RSD models are fitted simultaneously to the concatenation of the pre- and post-reconstruction data vectors.
This fit requires the full covariance matrix between pre- and post-reconstruction multipoles, which is estimated from 1000 {\sc EZmocks}. Results of simultaneous fits on mocks are shown in the 11th row of Table~\ref{tab:ezmock_stats_errors} and are denoted ``BAO$\times$RSD $P_\ell$''. These are to be compared with our usual method of combining posteriors, denoted ``BAO$+$RSD'' and shown in the 10th row. First, the scatter of best-fit values of all three parameters agrees well between BAO$\times$RSD and BAO$+$RSD. However, the simultaneous fit overestimates the errors in $f{\sigma_8}$ by 8 per cent, based on its $\sigma(Z_i)$ value. While in theory the simultaneous fit is a better procedure, accounting for all correlations, in practice we only use 1000 mocks to estimate a larger covariance matrix with large off-diagonal terms. Therefore we cannot conclude from this test which method leads to better estimated errors. We use BAO$+$RSD entries for the consensus results. The last two rows of Table \ref{tab:ezmock_stats_errors} show statistics on the final consensus results from the LRG sample when combining BAO and full shape from both Fourier and configuration spaces. These results reflect the full statistical power of the LRG sample. The excellent agreement between the statistics of these two rows shows that the order of combination does not impact results. The dispersions $\sigma$ of $\aperp$ and $\alpha_\parallel$ are reduced to 1.8 and 2.6 per cent, respectively, compared with 2.2 and 3.4 per cent for BAO only, and 2.0 and 3.2 per cent for full-shape only. The pull distributions for $\aperp$ and $\alpha_\parallel$ are consistent with a Gaussian distribution. The scatter in $f{\sigma_8}$ is not reduced compared to individual methods, which is expected since BAO does not add information on this parameter, so the consensus error should be equal to the one obtained from the full-shape fits.
However, the $\sigma(Z_i)$ for $f{\sigma_8}$ indicates that our consensus errors on this parameter might be underestimated by 10 per cent. While this seems significant, it can be a consequence of the assumption of Gaussianity for the individual likelihoods not holding in all realisations, or the combination procedure itself might lead to underestimated errors (as seen with $f{\sigma_8}$ in the 9th and 10th rows); we would need more mocks to test these hypotheses carefully. For this work, we consider the underestimation of the $f{\sigma_8}$ consensus errors (last two rows of Table~\ref{tab:ezmock_stats_errors}) as another source of systematic error. The simplest correction to this underestimation is to scale the estimated errors of $f{\sigma_8}$ in each realisation by $\sigma(Z_i) = 1.09$. We proceed to apply this correction factor to the consensus $f{\sigma_8}$ errors with our data sample. This factor is to be applied only to statistical errors. In Section~\ref{sec:consensus_results} we describe how we apply this scaling in the presence of systematic errors. \section{Results} \label{sec:results} We provide in this section the results of the BAO analysis, the full-shape RSD analysis and the combination of the two for the eBOSS LRG sample. The analysis assumes an effective redshift for the sample of $z_{\rm eff}=0.698$. \subsection{Result from the BAO analysis} \label{sec:results_bao} We present in Figure~\ref{fig:bao_bestfit} our best-fit BAO model to the post-reconstruction eBOSS LRG multipoles. The associated reduced chi-squared is $\chi^2/{\rm dof} = 39/(40-9) = 1.26$.
By scaling the resulting $\aperp$ and $\alpha_\parallel$ by $(D_M/r_d)^{\rm fid}$ and $(D_H/r_d)^{\rm fid}$, respectively (Eqs.~\ref{eq:aperp} and \ref{eq:apara}), we obtain: \begin{equation} \mathbf{D}_{{\rm BAO},{\xi_\ell}} = \begin{pmatrix} D_M/r_d \\ D_H/r_d \end{pmatrix}= \begin{pmatrix} 17.86 \pm 0.33 \\ 19.34 \pm 0.54 \end{pmatrix} \end{equation} and the covariance matrix is \begin{equation} \mathbf{C}_{{\rm BAO},{\xi_\ell}} = \begin{blockarray}{cc} D_M/r_d & D_H/r_d \vspace{1mm} \\ \begin{block}{(cc)} 1.11 \times 10^{-1} & -5.86 \times 10^{-2} & \\ - & 2.92 \times 10^{-1} & \\ \end{block} \end{blockarray} \end{equation} The errors correspond to a BAO measurement at 1.9 per cent in the transverse direction and 2.8 per cent in the radial direction, the best constraints ever obtained from $z>0.6$ galaxies. The correlation coefficient between both parameters is $-0.33$. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{figures/DR16_LRG_BAO_bestfit_xi0.pdf} \includegraphics[width=0.8\columnwidth]{figures/DR16_LRG_BAO_bestfit_xi2.pdf} \caption{Best-fit BAO model to the monopole (top) and quadrupole (bottom) of the post-reconstruction correlation function of the eBOSS + CMASS LRG sample. The legend displays the $\chi^2$ value of the fit.} \label{fig:bao_bestfit} \end{figure} Figure~\ref{fig:consensus_bao} shows in blue the 68 and 95\% confidence contours in the $(D_M/r_d, D_H/r_d)$ space for the BAO measurement in configuration space. Our best-fit values are consistent within 1.26$\sigma$ with the prediction of a flat $\Lambda$CDM model given by Planck 2018 best-fit parameters \citep{planck_collaboration_planck_2018}, assuming a $\chi^2$ distribution with two degrees of freedom. This measurement is also in excellent agreement with the BAO analysis performed in Fourier space \citep{gil-marin_2020}, shown as red contours in Figure~\ref{fig:consensus_bao}.
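The rescaling from dilation parameters to distance ratios is a simple multiplication by the fiducial values. A minimal sketch of this step; the fiducial ratios below are inferred from the quoted best-fit numbers and are purely illustrative, not values taken from the paper:

```python
# D/r_d = alpha * (D/r_d)^fid (Eqs. eq:aperp and eq:apara).
# Fiducial ratios inferred from the quoted numbers -- illustrative only.
def alpha_to_distance(alpha, d_fid):
    """Rescale a BAO dilation parameter by the fiducial distance ratio."""
    return alpha * d_fid

alpha_perp, alpha_par = 1.024, 0.956  # baseline best fit (BAO tests table)
dm_fid, dh_fid = 17.44, 20.23         # assumed (D_M/r_d)^fid, (D_H/r_d)^fid

print(round(alpha_to_distance(alpha_perp, dm_fid), 2))  # -> 17.86
print(round(alpha_to_distance(alpha_par, dh_fid), 2))   # -> 19.34
```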
Since Fourier and configuration space analyses use the same data, final measurements are highly correlated. Based on measurements of the same 1000 realisations of {\sc EZmocks}, we obtain correlation coefficients of 0.86 for both $D_M/r_d$ and $D_H/r_d$. As these correlations are not unity, there is some gain in combining both measurements. Using the methods presented in Section~\ref{sec:combining_analysis}, we compute the combined BAO measurements between Fourier and configuration space. The result is displayed as grey contours in Figure~\ref{fig:consensus_bao} and in Table~\ref{tab:finalresults} as ``BAO $\xi_\ell+P_\ell$''. The error of the combined result is only 2\% smaller than the error of the configuration space analysis alone. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{figures/DR16_LRG_consensus_bao_planck.pdf} \caption{Constraints on $D_M/r_d$ and $D_H/r_d$ at $z_{\rm eff}=0.698$ from the BAO analysis of the eBOSS LRG sample post-reconstruction. Contours show 68 and 95 per cent confidence regions for the configuration space analysis in blue (this work), the Fourier space analysis from \citet{gil-marin_2020} in salmon, and the consensus BAO result in grey. The expected values in a flat $\Lambda$CDM model with Planck 2018 best-fit parameters, shown by the black star, lie at 1.26$\sigma$ from our best-fit parameters of the configuration space analysis. } \label{fig:consensus_bao} \end{figure} Table~\ref{tab:bao_results_tests} shows the impact on the BAO results in configuration space of different modifications in the methodology around the baseline configuration. The middle part of the table shows that our result is reasonably insensitive to some of these changes. Setting all systematic weights to unity causes only mild shifts to best-fit parameters while estimated errors are unchanged. Removing the corrections by weights significantly distorts the broad shape of the correlation function.
The fact that our BAO results are insensitive to these corrections proves that practically all information comes uniquely from the BAO peak and not from the full shape of the correlation function. This is a strong robustness validation of our BAO measurement. When leaving the BAO damping parameters $(\Sigma_\perp,\Sigma_\parallel)$ free or constrained within a Gaussian prior, the best-fit values barely change while their errors are smaller than in our baseline analysis. As observed on mocks, some realisations present sharper peaks due to noise, and a sharper model can be considered a better fit. However, we prefer to be conservative and not allow for this artificial increase in precision in our BAO analysis. Including the hexadecapole or changing the fiducial cosmology shifts the dilation parameters by less than one error bar, which is consistent with what is observed in mocks. We also performed the BAO fits using the methods of the BOSS DR12 analysis \citep{alam_clustering_2017}, which give results in excellent agreement with our baseline method, with a slightly better $\chi^2$. In the third part of Table~\ref{tab:bao_results_tests} we present the pre-reconstruction result with similar best-fit $\aperp$ and $\alpha_\parallel$ but with errors larger by factors of 1.3 and 1.5, which is typical as seen in mocks (Figure~\ref{fig:ezmock_bao_alphas}). Pre-reconstruction BAO-only fits using our methodology show biases of about 1 per cent in the mocks, therefore we do not recommend using pre-reconstruction results without accounting for these biases. The NGC and SGC results come from two independent samples, and their best-fit $\aperp$ and $\alpha_\parallel$ are 0.25 and 0.53$\sigma$ from each other, respectively, therefore not representing a significant difference between hemispheres. \begin{table} \caption{The BAO measurement with the DR16 eBOSS+CMASS LRG dataset using the standard pipeline described in Section~\ref{sec:bao_modelling} and other analysis choices.
Note that for cases with different $\Omega_m^{\rm fid}$, we scale the obtained $\aperp, \alpha_\parallel$ by the distance ratios in order to make them comparable with the case where $\Omega_m^{\rm fid} = 0.31$.} \begin{center} \begin{tabular}{cccc} case & $\alpha_\perp$ & $\alpha_\parallel$ & $\chi^2/{\rm d.o.f.}$ \\ \hline \hline Baseline & $1.024 \pm 0.019$ & $0.956 \pm 0.023$ & $39.0/(40-9)$ \\ \hline $w_{\rm sys}w_{\rm cp}w_{\rm noz} = 1$ & $1.022 \pm 0.018$ & $0.954 \pm 0.023$ & $30.1/(40-9)$ \\ $\Sigma_\perp, \Sigma_\parallel$ free & $1.027 \pm 0.016$ & $0.947 \pm 0.019$ & $31.9/(40-11)$ \\ $\Sigma_\perp, \Sigma_\parallel$ prior & $1.025 \pm 0.017$ & $0.952 \pm 0.021$ & $36.2/(40-11)$ \\ $+\xi_4$ & $1.031 \pm 0.019$ & $0.949 \pm 0.024$ & $53.5/(60-12)$ \\ $\Omega_m^{\rm fid} = 0.27$ & $1.026 \pm 0.020$ & $0.950 \pm 0.023$ & $33.3/(40-9)$ \\ $\Omega_m^{\rm fid} = 0.35$ & $1.026 \pm 0.019$ & $0.951 \pm 0.022$ & $39.6/(40-9)$ \\ DR12 method & $1.023 \pm 0.019$ & $0.955 \pm 0.024$ & $34.5/(40-10)$ \\ \hline Pre-recon & $1.035 \pm 0.025$ & $0.957 \pm 0.035$ & $42.0/(40-9)$ \\ NGC only & $1.038 \pm 0.024$ & $0.943 \pm 0.024$ & $42.3/(40-9)$ \\ SGC only & $0.993 \pm 0.032$ & $0.982 \pm 0.070$ & $44.5/(40-9)$ \\ \hline \hline \end{tabular} \end{center} \label{tab:bao_results_tests} \end{table} \subsection{Results from the full-shape RSD analysis} \label{sec:results_rsd} \begin{table*} \caption{The full-shape measurements with the DR16 eBOSS+CMASS LRG dataset from our baseline analysis described in Section~\ref{sec:rsd_modelling} followed by results from other analysis choices. The presented errors are purely statistical and do not include systematic errors. 
} \centering \begin{tabular}{lccccc} \hline \hline Model & Analysis & $\alpha_\perp$ & $\alpha_\parallel$ & $f\sigma_8$ & $\chi^2/{\rm d.o.f.}$ \\ \hline CLPT-GS & baseline & $0.997 \pm 0.020$ & $1.013 \pm 0.028$ & $0.471 \pm 0.045$ & $83.7/(63-6)=1.47$ \\ CLPT-GS & $r_{\rm min} = 35$\hmpc\ for $\xi_4$ & $1.017 \pm 0.022$ & $0.971 \pm 0.031$ & $0.499 \pm 0.046$ & $79.3/(61-6)=1.44$ \\ CLPT-GS & NGC only & $1.015 \pm 0.025$ & $1.009 \pm 0.031$ & $0.464 \pm 0.055$ & $81.1/(63-6)=1.40$ \\ CLPT-GS & SGC only & $0.985 \pm 0.036$ & $1.041 \pm 0.062$ & $0.439 \pm 0.078$ & $71.3/(63-6)=1.25$ \\ TNS & baseline & $1.001 \pm 0.018$ & $1.013 \pm 0.031$ & $0.451 \pm 0.040$ & $85.2/(65-7)=1.47$ \\ TNS & $r_{\rm min} = 35$\hmpc\ for $\xi_4$ & $1.013 \pm 0.016$ & $0.976 \pm 0.027$ & $0.458 \pm 0.036$ & $73.7/(63-7)=1.32$ \\ TNS & Without $\xi_4$ & $1.019 \pm 0.019$ & $0.963 \pm 0.035$ & $0.472 \pm 0.044$ & $50.1/(44-7)=1.35$ \\ TNS & NGC only & $1.024 \pm 0.029$ & $1.013 \pm 0.036$ & $0.436 \pm 0.053$ & $80.6/(65-7)=1.39$ \\ TNS & SGC only & $0.993 \pm 0.034$ & $1.076 \pm 0.070$ & $0.423 \pm 0.076$ & $69.1/(65-7)=1.19$ \\ \hline \hline \end{tabular} \label{tab:rsd_results_variations} \end{table*}%
We present in Figure~\ref{fig:rsd_bestfit} the best-fit TNS (red) and CLPT-GS (blue) RSD models to the pre-reconstruction eBOSS LRG multipoles. The associated reduced chi-squared values are $\chi^2/{\rm dof} = 85.2/(65-7) = 1.47$ for TNS and $\chi^2/{\rm dof} = 83.7/(63-6) = 1.47$ for CLPT-GS. While these values are unlikely to be explained by statistical fluctuations, we verified that the values reported for the $\chi^2$ for both models are within the {\sc EZmock} $\chi^2$ distributions. Both models perform similarly on data, but some differences are visible in Figure~\ref{fig:rsd_bestfit}. The TNS model produces a slightly sharper BAO peak than the CLPT-GS model, clearly visible in the monopole.
This is due to the fact that, intrinsically, the CLPT-GS model tends to predict a slightly higher BAO damping than Eulerian perturbation theory, as implemented here in the TNS model with the {\sc RESPRESSO} prescription. The CLPT-GS model has a slightly higher hexadecapole amplitude than the TNS model, but both models seem to underestimate the hexadecapole amplitude below 35 \hmpc\ by 1$\sigma$ of the statistical uncertainties of the data. This underestimation in the amplitude of the hexadecapole is also present in the mocks, for both \textsc{Nseries} and \textsc{EZmocks}, and was already reported in \citet{icaza-lizaola_clustering_2020}, explaining the relatively high $\chi^{2}$ of the data. Table~\ref{tab:rsd_results_variations} shows the impact of different modifications in the methodology around the baseline configuration. First, we change the range of scales used in the hexadecapole by increasing $r_{\rm min}$ from 25\hmpc\ to 35\hmpc. We see a decrease of the reduced chi-squared as we remove these scales from the hexadecapole, which are underestimated by the models. Removing those scales impacts the measured cosmological parameters, particularly $\alpha_\parallel$, which is shifted by about 1$\sigma$. We performed the same cuts on the analysis of {\sc EZmocks}, finding that such a shift lies at about 2.3$\sigma$ in the distribution of shifts observed in 1000 mocks (see details in Appendix~\ref{app:hexadec}). The NGC and SGC fields are two independent samples, and we find that their individual best-fit $\aperp$, $\alpha_\parallel$ and $f\sigma_8$ are 0.7$\sigma$, $0.5\sigma$ and 0.3$\sigma$ from each other, respectively, for CLPT-GS and 0.7$\sigma$, $0.8\sigma$ and 0.1$\sigma$ for TNS, which is not a significant difference. The marginal posteriors on $\aperp, \alpha_\parallel, f{\sigma_8}$ and associated 68\% and 95\% confidence contours are shown in Figure~\ref{fig:rsd_likelihood}. The posteriors obtained from both models are in good agreement.
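The quoted NGC/SGC differences can be checked against Table~\ref{tab:rsd_results_variations}: for independent samples, the tension in units of $\sigma$ is the difference of best-fit values divided by the quadrature sum of the individual errors. A quick check for the CLPT-GS entries:

```python
def tension_sigma(x1, e1, x2, e2):
    """Difference between two independent measurements, in units
    of their combined (quadrature-summed) uncertainty."""
    return abs(x1 - x2) / (e1**2 + e2**2) ** 0.5

# CLPT-GS, NGC vs SGC values from the table of full-shape variations
print(round(tension_sigma(1.015, 0.025, 0.985, 0.036), 1))  # alpha_perp -> 0.7
print(round(tension_sigma(1.009, 0.031, 1.041, 0.062), 1))  # alpha_par  -> 0.5
print(round(tension_sigma(0.464, 0.055, 0.439, 0.078), 1))  # f*sigma8   -> 0.3
```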
Entries denoted as ``RSD $\xi_\ell$ CLPT-GS'' and ``RSD $\xi_\ell$ TNS'' in Table~\ref{tab:finalresults} give the best-fit parameters and 1$\sigma$ errors (including systematic errors), translated into $D_M/r_d, D_H/r_d, f{\sigma_8}$. We find an excellent agreement in the best-fit parameters and errors between the two RSD models, as expected from the posteriors. The full posteriors including all nuisance parameters can be found in Appendix~\ref{app:posteriors}. We combine the results from our two RSD models using a weighted average based on the individual covariance matrices (see Section~\ref{sec:combining_analysis}). The combined measurement is indicated by ``RSD $\xi_\ell$'' in Table~\ref{tab:finalresults} and shown with dashed contours in Figure~\ref{fig:rsd_likelihood}. Central values and errors of the combined result fall approximately in between the values of each individual measurement. The combined best-fit parameters and covariance matrix of the full-shape RSD analysis in configuration space, including systematic errors, are \begin{equation} \mathbf{D}_{{\rm RSD},{\xi_\ell}} = \begin{pmatrix} D_M/r_d \\ D_H/r_d \\ f\sigma_8 \end{pmatrix}= \begin{pmatrix} 17.42 \pm 0.40 \\ 20.46 \pm 0.70 \\ 0.460 \pm 0.050 \end{pmatrix} \end{equation} \begin{equation} \mathbf{C}_{{\rm RSD},{\xi_\ell}} = \begin{blockarray}{ccc} D_M/r_d & D_H/r_d & f{\sigma_8} \vspace{1mm} \\ \begin{block}{(ccc)} 1.59 \times 10^{-1} & 6.28 \times 10^{-3} & 6.13 \times 10^{-3} & \\ - & 4.88 \times 10^{-1} & -4.83 \times 10^{-3} & \\ - & - & 2.46 \times 10^{-3} & \\ \end{block} \end{blockarray} \end{equation} This corresponds to 2.3 and 3.4 per cent measurements of the transverse and radial dilation parameters, and an 11 per cent measurement of the growth rate of structure times $\sigma_8$. The errors on $D_M/r_d$ and $D_H/r_d$ are slightly larger than the ones from the BAO-only analysis, as expected, but the correlation coefficient between them is reduced from $-0.33$ to $0.02$.
This happens because information on dilation parameters also comes from the full shape of the correlation function, rather than just the BAO peak. For instance, the correlation coefficient between $f{\sigma_8}$ and $D_M/r_d$ is 0.31 and between $f{\sigma_8}$ and $D_H/r_d$ is $-0.14$. \begin{figure*} \centering \includegraphics[width=0.32\textwidth]{figures/DR16_LRG_RSD_v7_2_bestfits_mono.pdf} \includegraphics[width=0.32\textwidth]{figures/DR16_LRG_RSD_v7_2_bestfits_quad_scaler1.pdf} \includegraphics[width=0.32\textwidth]{figures/DR16_LRG_RSD_v7_2_bestfits_hexa_scaler1.pdf} \caption{Best-fit full-shape models to the eBOSS + CMASS multipoles. Left, middle and right panels display the monopole, quadrupole and hexadecapole, respectively. The monopole is scaled by $r^2$ while the other two are scaled by $r$. The CLPT-GS model is shown by the blue dashed line while the TNS model is shown by the red solid line. Note the baseline ranges used for each model are slightly different (see Figure~\ref{fig:rmin_impact}). } \label{fig:rsd_bestfit} \end{figure*} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/Plot_Comparison_MCMC_MarianaV72_Romain_myscale_new.pdf} \caption{Comparison between the TNS and CLPT-GS final posterior distributions over the three main parameters using the DR16 data. The distributions are in good agreement for the two models. The vertical dashed lines on the 1D distributions refer to the mean. Dashed-line contours show the combined result from the two models, assuming Gaussian errors. The full posteriors including nuisance parameters can be found in Appendix~\ref{app:posteriors}. } \label{fig:rsd_likelihood} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/DR16_LRG_consensus_rsd_planck.pdf} \caption{Constraints on $D_M/r_d, D_H/r_d$ and $f{\sigma_8}$ at $z_{\rm eff} = 0.698$ from the full-shape RSD analysis of the completed eBOSS LRG sample pre-reconstruction.
Contours show 68 and 95 per cent confidence regions for the analyses in configuration space (blue), Fourier space (red) and the combined (grey). The expected values in a flat $\Lambda$CDM model with best-fit parameters from Planck 2018 results are indicated by a black star.} \label{fig:consensus_rsd} \end{figure} \subsection{Consensus Results} \label{sec:consensus_results} We present in Figure~\ref{fig:consensus_bao_rsd} the final results of this work, obtained by the combination of BAO and full-shape RSD analyses in both configuration and Fourier spaces. Accounting for all sources of systematic error discussed in Sections~\ref{sec:systematics_rsd} and \ref{sec:statistical_properties}, the best-fit parameters and associated covariance matrix are: \begin{equation} \mathbf{D}_{{\rm LRG}} = \begin{pmatrix} D_M/r_d \\ D_H/r_d \\ f{\sigma_8} \end{pmatrix} = \begin{pmatrix} 17.65 \pm 0.30 \\ 19.77 \pm 0.47 \\ 0.473 \pm 0.044 \end{pmatrix} \label{eq:consensus_vector} \end{equation} \begin{equation} \mathbf{C}_{{\rm LRG}} = \begin{blockarray}{ccc} D_M/r_d & D_H/r_d & f{\sigma_8} \vspace{1mm} \\ \begin{block}{(ccc)} 9.11 \times 10^{-2} & -3.38 \times 10^{-2} & 2.47 \times 10^{-3} & \\ - & 2.20 \times 10^{-1} & -3.61 \times 10^{-3} & \\ - & - & 1.96 \times 10^{-3} & \\ \end{block} \end{blockarray} \label{eq:consensus_covmatrix} \end{equation} which translates into 1.7 and 2.4 per cent measurements of $D_M/r_d$ and $D_H/r_d$, respectively. The correlation between these two is $-24$ per cent. The error on $f{\sigma_8}$ is 9.4 per cent, which is the most precise measurement to date in this redshift range. We note that this final measurement is not sensitive to the order of combinations, as seen in the second panel of Figure~\ref{fig:consensus_bao_rsd} and in the last row of Table~\ref{tab:finalresults}.
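The per cent precisions and the $-24$ per cent correlation quoted above follow directly from $\mathbf{C}_{\rm LRG}$; a quick numerical check:

```python
import numpy as np

# Consensus covariance matrix C_LRG for (D_M/r_d, D_H/r_d, f*sigma8)
cov = np.array([[9.11e-2, -3.38e-2, 2.47e-3],
                [-3.38e-2, 2.20e-1, -3.61e-3],
                [2.47e-3, -3.61e-3, 1.96e-3]])
mean = np.array([17.65, 19.77, 0.473])

errors = np.sqrt(np.diag(cov))                    # 1-sigma errors
percent = 100 * errors / mean                     # -> ~[1.7, 2.4, 9.4] per cent
corr_dm_dh = cov[0, 1] / (errors[0] * errors[1])  # -> ~ -0.24
```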
Those measurements agree well with the predictions from \citet{planck_collaboration_planck_2018-1}, which predicts, at this redshift, $D_M/r_d = 17.48$, $D_H/r_d = 20.23$ and $f{\sigma_8} = 0.462$ for a flat $\Lambda$CDM model assuming gravity is described by General Relativity. These values are shown as stars in Figure~\ref{fig:consensus_bao_rsd}. Systematic errors originating from observational effects, modelling and combination methods were carefully included in our measurements and are responsible for inflating the final errors by 6, 13 and 20 per cent, respectively, on $D_M/r_d, D_H/r_d$ and $f{\sigma_8}$. In Section~\ref{sec:statistical_properties}, we found that our statistical errors on the consensus $f{\sigma_8}$ were slightly underestimated. To apply this correction to the data consensus, we proceed as follows. First, we compute the consensus with and without accounting for systematic errors from Table~\ref{tab:rsd_total_systematic}. The difference between their error matrices gives us the additive systematic matrix. Then, we scale the statistical errors on $f{\sigma_8}$ by 1.09 and we add back the additive systematic matrix. This procedure yields the results reported in Eqs.~\ref{eq:consensus_vector} and \ref{eq:consensus_covmatrix}. \begin{table} \caption{ Summary table with results from this work, from \citet{gil-marin_2020}, and their combination. All reported errors include the systematic component. The effective redshift of all measurements is $z_{\rm eff} = 0.698$.
} \begin{center} \begin{tabular}{lccc} \hline \hline Method & $D_M/r_d$ & $D_H/r_d$ & $f\sigma_8$ \\ \hline {\sc bao} $\xi_\ell$ & $17.86 \pm 0.33$ & $19.34 \pm 0.54$ & - \\ {\sc bao} $P_\ell$ & $17.86 \pm 0.37$ & $19.30 \pm 0.56$ & - \\ {\sc bao} $\xi_\ell+P_\ell$ & $17.86 \pm 0.33$ & $19.33 \pm 0.53$ & - \\ \hline {\sc rsd} $\xi_\ell$ {\sc clpt} & $17.39 \pm 0.43$ & $20.46 \pm 0.68$ & $0.471 \pm 0.052$ \\ {\sc rsd} $\xi_\ell$ {\sc tns} & $17.45 \pm 0.38$ & $20.45 \pm 0.72$ & $0.451 \pm 0.047$ \\ {\sc rsd} $\xi_\ell$ & $17.42 \pm 0.40$ & $20.46 \pm 0.70$ & $0.460 \pm 0.050$ \\ {\sc rsd} $P_\ell$ & $17.49 \pm 0.52$ & $20.18 \pm 0.78$ & $0.454 \pm 0.046$ \\ {\sc rsd} $\xi_\ell+P_\ell$ & $17.40 \pm 0.39$ & $20.37 \pm 0.68$ & $0.449 \pm 0.044$ \\ \hline {\sc bao}$+${\sc rsd} $\xi_\ell$ & $17.65 \pm 0.31$ & $19.81 \pm 0.47$ & $0.483 \pm 0.047$ \\ {\sc bao}$+${\sc rsd} $P_\ell$ & $17.72 \pm 0.34$ & $19.58 \pm 0.50$ & $0.474 \pm 0.042$ \\ \hline {\sc bao} ($\xi_\ell+P_\ell$) $+$ & $17.65 \pm 0.30$ & $19.77 \pm 0.47$ & $0.473 \pm 0.044$ \\ {\sc rsd} ($\xi_\ell+P_\ell$) \\ ({\sc bao}$+${\sc rsd}) $\xi_\ell$ $+$ & $17.64 \pm 0.30$ & $19.78 \pm 0.46$ & $0.470 \pm 0.044$ \\ ({\sc bao}$+${\sc rsd}) $P_\ell$ \\ \hline \hline \end{tabular} \end{center} \label{tab:finalresults} \end{table} \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/DR16_LRG_consensus_bao_rsd_planck.pdf} \includegraphics[width=\columnwidth]{figures/DR16_LRG_consensus_bao_rsd_alternative_planck.pdf} \caption{Final measurements of $D_M/r_d, D_H/r_d, f{\sigma_8}$ from the completed eBOSS LRG sample at $z_{\rm eff}=0.698$. Top and bottom panels show two possible procedures for obtaining the final result. The grey contours show the final results, which are virtually the same in both panels (two bottom lines in Table~\ref{tab:finalresults}). The black star indicates the prediction in a flat $\Lambda$CDM model with parameters from Planck 2018 results.
} \label{fig:consensus_bao_rsd} \end{figure} \subsection{Comparison with previous results} \label{sec:comparison_previous} Our final consensus result for the DR16 LRG sample is shown in Eqs.~\ref{eq:consensus_vector} and \ref{eq:consensus_covmatrix}, and is based on a total of 402,052 (weighted) galaxies over 9,463~deg$^2$ (with 4,242~deg$^2$ observed by eBOSS). \citet{bautista_sdss-iv_2018} and \citet{icaza-lizaola_clustering_2020} describe, respectively, the BAO and full-shape RSD measurements using the DR14 LRG sample, which contains 126,557 galaxies over 1,844~deg$^2$. In the DR14 sample, CMASS galaxies outside of the eBOSS footprint were not used. Because of that, the effective redshift of the DR14 measurements is slightly higher, at $z_{\rm eff} = 0.72$. \citet{bautista_sdss-iv_2018} reported a 2.5 per cent measurement of the ratio of the spherically averaged distance to the sound horizon scale, $D_V(z=0.72)/r_d = 16.08^{+0.41}_{-0.40}$. This result was obtained with isotropic fits to the monopole of the post-reconstruction correlation function. The statistical power of the DR14 sample is relatively low for anisotropic BAO constraints, which would have large non-Gaussian errors. Converting our DR16 anisotropic measurement of Eq.~\ref{eq:consensus_vector} into a spherically averaged distance, we obtain $D_V(z=0.698)/r_d = 16.26 \pm 0.20$, which is well within 1$\sigma$ of the DR14 value. The error on $D_V$ is reduced by a factor of two, slightly more than the square root of the increase in effective volume, which gives a factor of $\sqrt{ V_{\rm eff, DR16}/V_{\rm eff, DR14}} = \sqrt{2.73/0.9} \sim 1.74$. Note that in DR16 we combine BAO and full-shape analyses in Fourier and configuration spaces, which maximizes the amount of extracted cosmological information.
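The spherically averaged distance quoted above follows from the standard relation $D_V = \left[ z\, D_M^2\, D_H \right]^{1/3}$ (with $D_H = c/H$); the quoted numbers can be checked directly:

```python
def dv_over_rd(z, dm_rd, dh_rd):
    """Spherically averaged BAO distance ratio:
    D_V/r_d = (z * (D_M/r_d)**2 * (D_H/r_d))**(1/3)."""
    return (z * dm_rd**2 * dh_rd) ** (1.0 / 3.0)

# DR16 consensus values at z_eff = 0.698
print(round(dv_over_rd(0.698, 17.65, 19.77), 2))  # -> 16.26

# Expected error reduction from the increase in effective volume
print(round((2.73 / 0.9) ** 0.5, 2))              # -> 1.74
```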
\citet{icaza-lizaola_clustering_2020} presented the full-shape RSD analysis of the DR14 LRG sample in configuration space, yielding $f \sigma_8 = 0.454 \pm 0.134$, $D_M/r_d = 17.07 \pm 1.55$, and $D_H/r_d = 19.17 \pm 2.84$. All values are consistent within 1$\sigma$ with the DR16 results, even though the DR14 errors are quite large given the even lower significance of the BAO peak in the pre-reconstruction multipoles. The error on the growth rate of structure $f{\sigma_8}$ is reduced by a factor of 3 in DR16 compared to DR14, clearly benefiting from the larger sample and the combination with post-reconstruction BAO results that help break model degeneracies. Our DR16 LRG results at $0.6 < z < 1.0$ supersede the highest-redshift results of the DR12 BOSS sample at $0.5 < z < 0.75$, which has an effective redshift of $z_{\rm eff} = 0.61$. \citet{alam_clustering_2017} report 1.4, 2.2 and 7.8 per cent measurements of $D_M/r_d, D_H/r_d$ and $f{\sigma_8}$, respectively. While the errors in that high-redshift bin are slightly smaller than in our DR16 result, it has a large correlation with the intermediate-redshift bin at $0.4 < z <0.6$. Our DR16 measurement is thus virtually independent of the first two DR12 BOSS redshift bins, and has effectively more weight in the final joint cosmological constraints. The cosmological implications of our DR16 LRG measurements are fully described in \citet{mueller_2020}. \section{Conclusion} \label{sec:conclusion} This work presented the cosmological analysis of the configuration-space anisotropic clustering in the DR16 eBOSS LRG sample, which is used for the final cosmological analysis of the completed eBOSS survey. We extracted and modelled the BAO and RSD features from the galaxy two-point correlation function monopole, quadrupole, and hexadecapole moments. We used the reconstruction technique to sharpen the BAO peak and mitigate associated non-linearities.
The pre- and post-reconstruction multipole moments were used to perform a full-shape RSD analysis and a BAO-only analysis, respectively. In the RSD analysis, we considered two different RSD models, whose results were later combined to increase the robustness and accuracy of the measurements. The combination of the BAO-only and full-shape RSD analyses allowed us to derive joint constraints on the three cosmological parameter combinations: $D_H(z)/r_d$, $D_M(z)/r_d$ and $f\sigma_8(z)$. This analysis is complementary to that performed in Fourier space and presented in \citet{gil-marin_2020}. We found an excellent agreement between the inferred parameters in both spaces, both for the BAO-only and full-shape RSD analyses. After combining our results with those from Fourier space, we obtain the following final constraints: $D_M/r_d = 17.65 \pm 0.30$, $D_H/r_d = 19.77 \pm 0.47$, $f{\sigma_8} = 0.473 \pm 0.044$, which are currently the most accurate at $z_{\rm eff}=0.698$. The adopted methodology has been extensively tested on a set of realistic simulations and shown to be very robust against systematics. In particular, we investigated different potential sources of systematic errors: inaccuracy in the modelling of both BAO/RSD and intrinsic galaxy clustering, the arbitrary choice of reference cosmology, and systematic errors from observational effects such as redshift failures, fiber collisions, incompleteness, or the radial integral constraint. We quantified the associated systematic error contributions and included them in the final cosmological parameter constraints. Overall, we found that the total systematic errors inflate the final errors by 6, 13 and 20 per cent for $\aperp$, $\alpha_\parallel$ and $f{\sigma_8}$, respectively. The cosmological parameters inferred from the DR16 eBOSS LRG sample are in good agreement with the predictions from General Relativity in a flat $\Lambda$CDM cosmological model with parameters set to Planck 2018 results.
These measurements complement those obtained from the other eBOSS tracers \citep{raichoor_2020, de_mattia_2020, hou_2020, neveux_2020, du_mas_des_bourboux_2020}. The full cosmological interpretation of all eBOSS tracer results, combined with previous BOSS results, is presented in \citet{mueller_2020}. Future large spectroscopic surveys such as DESI or Euclid will probe much larger volumes of the Universe. This will allow reducing the statistical errors on the cosmological parameters considerably, to the per cent level or below. For these surveys it will be crucial to control systematic errors at an extremely low level. This remains a challenge today; the work presented here describes the current state-of-the-art methodology, which will have to be further developed and improved for the optimal exploitation of next-generation surveys. \section*{Data Availability} The correlation functions, covariance matrices, and resulting likelihoods for cosmological parameters are (will be made) available (after acceptance) via the SDSS Science Archive Server (\href{https://sas.sdss.org/}{https://sas.sdss.org/}), with the exact address tbd. \section*{Acknowledgements} RP, SdlT, and SE acknowledge the support from the French National Research Agency (ANR) under contract ANR-16-CE31-0021, eBOSS. SdlT and SE acknowledge the support of the OCEVU Labex (ANR-11-LABX-0060) and the A*MIDEX project (ANR-11-IDEX-0001-02) funded by the "Investissements d'Avenir" French government program managed by the ANR. MVM and SF are partially supported by Programa de Apoyo a Proyectos de Investigaci\'on e Innovaci\'on Tecnol\'ogica (PAPIIT) no. IA101518, no. IA101619, Proyecto LANCAD-UNAM-DGTIC-319 and LANCAD-UNAM-DGTIC-136. HGM acknowledges support from la Caixa Foundation (ID 100010434), under code LCF/BQ/PI18/11630024. SA is supported by the European Research Council through the COSFORM Research Grant (\#670193).
GR, PDC and JM acknowledge support from the National Research Foundation of Korea (NRF) through Grants No. 2017R1E1A1A01077508 and No. 2020R1A2C1005655 funded by the Korean Ministry of Education, Science and Technology (MoEST), and from the faculty research fund of Sejong University. Numerical computations were done on the Sciama High Performance Compute (HPC) cluster which is supported by the ICG, SEPNet and the University of Portsmouth. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. This research also uses resources of the HPC cluster ATOCATL-IA-UNAM M\'exico. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No 693024). Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. 
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrof\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut f\"ur Astrophysik Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg), Max-Planck-Institut f\"ur Astrophysik (MPA Garching), Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observat\'ario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'onoma de M\'exico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. \bibliographystyle{mnras}
\section{Foundations} We adopt here the formulation presented by~\cite{Belbute-Peres2017}, and we extend it to include frictional forces between a pushed object and a support surface. The transition function is given as $x_{t+1} = x_t + v_{t} dt$ where $dt$ is the duration of a constant short time-step. Velocity $v_{t}$ is a function of pushing force $\mu_t$ and mechanical parameters $\theta$. To find $v_{t+dt}$, we solve the system of equations of motion given in Figure~\ref{equation_motion}, where $x_t$ and $v_{t}$ are given as inputs, $[a,\rho,\xi]$ are slack variables, $[v_{t+dt},\lambda_e,\lambda_c,\lambda_f,\eta]$ are unknowns, and $[\mathcal{M}, \mu_f] \stackrel{def}{=} \theta$ are the hypothesized mechanical properties of the manipulated object. $\mathcal{M}$ is a diagonal mass matrix, whose diagonal is given as $[\mathcal{I}_1,\mathcal{M}_1,\mathcal{M}_1,\mathcal{I}_2,\mathcal{M}_2,\mathcal{M}_2,\dots, \mathcal{I}_k,\mathcal{M}_k,\mathcal{M}_k]$, where $\mathcal{I}_i$ is the moment of inertia of the $i^{th}$ cell of the object, and $\mathcal{M}_i$ is its mass. $\mu_f$ is a diagonal matrix containing the magnitudes of the frictional forces exerted on each cell of the pushed object in the direction opposite to its motion. $\mathcal{J}_e (x_t)$ is a global Jacobian matrix listing all constraints related to the joints of the object. These constraints ensure that the different cells of the object move together with the same velocity, since the object is assumed to be rigid. $\mathcal{J}_c (x_t)$ is a matrix containing constraints related to contacts between the cells of the object. These constraints enforce that rigid cells of the object do not interpenetrate. $c$ is a vector that depends on the velocities at the contact points and on the combined restitution parameter of the cells. 
More detailed descriptions of $\mathcal{J}_e (x_t)$ and $\mathcal{J}_c (x_t)$ can be found in the supplementary material of~\cite{Belbute-Peres:2018:EDP:3327757.3327820}. $\mathcal{J}_f$ is a Jacobian matrix related to the frictional forces, and it is particularly important for the objectives of this work. We will return to $\mathcal{J}_f$ in Section~\ref{friction}. To present the solution more concisely, the following terms are introduced. \begin{eqnarray*} \alpha=-v_{t+dt}, \beta = \lambda_e, A = \mathcal{J}_e(x_t) , q = -\mathcal{M}v_t - dt \mu_t\\ \gamma =\begin{bmatrix} \lambda_c \\ \lambda_f \\ \eta \end{bmatrix}, s = \begin{bmatrix} a \\ \rho \\ \xi \end{bmatrix}, m = \begin{bmatrix} c \\ 0 \\ 0 \end{bmatrix}, G = \begin{bmatrix} \mathcal{J}_c(x_t) & 0 \\ \mathcal{J}_f & 0 \\ 0 & 0 \end{bmatrix}, \mathcal{F} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & E \\ \mu_f & -E^T & 0 \end{bmatrix} \end{eqnarray*} The linear complementarity problem (LCP) becomes \begin{eqnarray} \begin{bmatrix} 0 \\ s \\ 0 \end{bmatrix} + \begin{bmatrix} \mathcal{M} & G^T & A^T \\ G & \mathcal{F} & 0 \\ A & 0 & 0 \end{bmatrix} \begin{bmatrix} \alpha \\ \gamma \\ \beta \end{bmatrix} = \begin{bmatrix} -q \\ m \\ 0 \end{bmatrix} \label{lcp} \end{eqnarray} subject to $s\geq \mathbf{0}, \gamma \geq \mathbf{0}, s^T\gamma = 0$. 
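The paper solves this LCP with the primal-dual convex optimizer of~\cite{MattingleySB12}. Purely as an illustration of the complementarity conditions $z \geq 0$, $Mz + q \geq 0$, $z^T(Mz+q) = 0$, the following is a minimal projected Gauss-Seidel solver for a small dense LCP; this is a standard alternative method, not the solver used in the paper.

```python
import numpy as np

def solve_lcp_pgs(M, q, iters=200):
    """Projected Gauss-Seidel for the LCP:
    find z >= 0 such that w = M z + q >= 0 and z^T w = 0."""
    z = np.zeros(len(q))
    for _ in range(iters):
        for i in range(len(q)):
            # Residual of row i with the diagonal contribution removed
            r = q[i] + M[i] @ z - M[i, i] * z[i]
            # Unconstrained row solution, projected onto z_i >= 0
            z[i] = max(0.0, -r / M[i, i])
    return z

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([-1.0, 1.0])
z = solve_lcp_pgs(M, q)
w = M @ z + q  # complementary slack variable
```

For this toy problem the solver converges in a few sweeps, and the complementarity conditions can be checked directly on `z` and `w`.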
\begin{figure} {\scriptsize \begin{equation*} \boxed{ \begin{bmatrix} 0 \\ 0 \\ a \\ \rho \\ \xi \end{bmatrix} - \begin{bmatrix} \mathcal{M} & -\mathcal{J}_e(x_t) & -\mathcal{J}_c(x_t) & -\mathcal{J}_f & 0 \\ \mathcal{J}_e(x_t) & 0 & 0 & 0 & 0\\ \mathcal{J}_c(x_t) & 0 & 0& 0 & 0 \\ \mathcal{J}_f & 0 & 0 & 0 & E \\ 0 & 0 & \mu_{f} & -E^T & 0\end{bmatrix} \begin{bmatrix} v_{t+dt} \\ \lambda_e \\ \lambda_c \\ \lambda_f \\ \eta \end{bmatrix} = \begin{bmatrix} \mathcal{M}v_{t}+dt \mu_t \\ 0 \\ c \\ 0 \\ 0 \end{bmatrix} \textrm{s.t.} \begin{bmatrix} a \\ \rho \\ \xi \end{bmatrix} \geq 0, \begin{bmatrix} \lambda_c \\ \lambda_f \\ \eta \end{bmatrix} \geq 0, \begin{bmatrix} a \\ \rho \\ \xi \end{bmatrix}^T \begin{bmatrix} \lambda_c \\ \lambda_f \\ \eta \end{bmatrix} = 0 } \end{equation*} } \vspace{-0.5cm} \caption{Equations of Motion} \label{equation_motion} \end{figure} The solution is obtained, after an initialization step, by iteratively minimizing the residuals of the equation above over the variables $\alpha,\beta,\gamma$ and $s$, using the convex optimizer of~\cite{MattingleySB12}. \subsection{Baselines} The compared baselines are \emph{random search}, \emph{finite differences gradient}, \emph{automatic differentiation with Autograd}, and \emph{weighted sampling}. The weighted sampling search generates random values uniformly in the first iteration, and then iteratively generates normally distributed random values around the best parameter obtained in the previous iteration. The standard deviation of the random values is gradually reduced over time, to focus the search on the most promising region. 
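The weighted-sampling baseline just described can be sketched as follows. This is a minimal illustration: the function name, the shrink factor of $0.9$, and the clipping bounds are our own choices, not taken from the paper.

```python
import numpy as np

def weighted_sampling_search(loss, dim, iters=50, samples=20,
                             lo=0.0, hi=1.0, seed=0):
    """Sample uniformly at first, then sample normally around the best
    parameter found so far, shrinking the standard deviation over time."""
    rng = np.random.default_rng(seed)
    best = rng.uniform(lo, hi, size=dim)      # first, uniform iteration
    best_loss = loss(best)
    std = (hi - lo) / 2.0
    for _ in range(iters):
        for cand in best + rng.normal(0.0, std, size=(samples, dim)):
            cand = np.clip(cand, lo, hi)
            cand_loss = loss(cand)
            if cand_loss < best_loss:
                best, best_loss = cand, cand_loss
        std *= 0.9  # focus the search on the most promising region
    return best, best_loss

# Toy quadratic loss with its minimum at 0.3 in every dimension
theta, final_loss = weighted_sampling_search(
    lambda p: float(np.sum((p - 0.3) ** 2)), dim=5)
```

On this smooth toy loss the search converges quickly; as the experiments below show, such sampling baselines degrade sharply as the parameter dimension grows.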
\begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=0.49\textwidth]{./figures/res/quan/real_aver_sims_vs_cell.png} & \includegraphics[width=0.49\textwidth]{./figures/res/quan/real_aver_time_vs_cell.png}\\ \end{tabular} \vspace{-0.5cm} \caption{Average predicted cell position error (in meters) over all objects and cells, as a function of the number of simulations (left) and the computation time (right).} \label{fig:quan_aver} \vspace{-0.8cm} \end{figure} \vspace{-0.2cm} \subsection{Experimental Setup} The experiments are performed with both simulated and real robots and objects. A rigid object is set on a table top. The robot's end effector is moved randomly to collide with the object and push it forward. The initial and final poses of the object are recorded. The methods discussed above are used to estimate the object's mass and the frictional forces distributed over its cells. Since the ground-truth values of mass and friction are unknown, the identified models are evaluated in terms of the accuracy of the predicted pose of each cell, using a set of test data. Five different objects are used in our experiments: a hammer, a wrench, a crimp, a toolbox, and a book. The hammer has an unbalanced mass distribution because its iron head is much heavier than its wooden handle. The crimp has high frictional forces on its heavy iron head and stiff handle. The wrench is made of a single material; however, its handle in the middle (the main body) floats and does not touch the table, because of the elevated height of its side parts. Therefore, there are zero frictional forces on the handle. Finally, an open book that has a different number of pages on the left and right sides also has an unbalanced mass-friction distribution, resulting in rotations when pushed. The toolbox can also have various mass distributions depending on how the tools are arranged inside. 
Note that our method does not assume that the full shape of the object is known; it uses only the observed top part and projects it down onto the table to build a complete 3D geometric model. The geometric model is generally wrong because the object is seldom flat, but the parts that do not actually touch the table end up having nearly zero friction in the identified model. Thus, the identified low friction forces compensate for wrongly presumed surfaces in the occluded bottom part of the object, and the predicted motion of the object is accurate despite using inaccurate geometries. In the simulation experiments, we simulated four random actions on each of the following objects for collecting training data: hammer, crimp, and wrench. We use the physics engine \textit{Bullet} for that purpose. The results are averaged over ten independent experiments, with a different ground-truth model of the object used in each experiment to generate data. The identified mass and friction models are evaluated by measuring the accuracy of the predicted motions on a test set of $12$ different random pushing actions. In the real robot setup, we used a Kuka robotic arm and a Robotiq 3-finger hand to apply the pushing actions, and recorded the initial and final object poses using a depth-sensing camera, as shown in Figure \ref{fig:robot}. The training data set contains only two random pushing actions, while the test data set contains five random pushing actions. The goal is to show how the robot can identify models of objects with a very small number of manipulation actions. The number of cells per object varies from $70$ to $100$ depending on the size of the object. 
\iffalse \begin{wrapfigure}{r}{*} \centering \begin{tabular}{cc} \includegraphics[width=0.2\textwidth]{./figures/res/qual/obj_box.png} & \includegraphics[width=0.18\textwidth,trim={4cm 7cm 4cm 7cm},clip]{./figures/res/qual/massmap_box_3.png} \\ \includegraphics[width=0.2\textwidth]{./figures/res/qual/obj_hammer.png} & \includegraphics[width=0.18\textwidth,trim={4cm 7cm 1cm 6cm},clip]{./figures/res/qual/massmap_hammer_4.png} \\ \includegraphics[width=0.2\textwidth]{./figures/res/qual/obj_book.png} & \includegraphics[width=0.18\textwidth,trim={4cm 6cm 4cm 6cm},clip]{./figures/res/qual/massmap_book_3.png} \\ \includegraphics[width=0.2\textwidth,trim={0 5cm 0 5cm},clip]{./figures/res/qual/crimp_color.png} & \includegraphics[width=0.18\textwidth,trim={3cm 6cm 3cm 6cm},clip]{./figures/res/qual/massmap_crimp_3.png} \\ \includegraphics[width=0.2\textwidth,trim={0 5cm 0 5cm},clip]{./figures/res/qual/ranch_color.png} & \includegraphics[width=0.18\textwidth,trim={3cm 6cm 3cm 6cm},clip]{./figures/res/qual/massmap_ranch_4.png} \end{tabular} \caption{Learned friction$\times$mass distributions. Red color means higher mass$\times$friction value while blue color means lower mass$\times$friction value.} \label{fig:qual_heatmap} \end{wrapfigure} \begin{wrapfigure}{r}{*} \centering \begin{tabular}{cc} \begin{tabular}{c} \includegraphics[width=0.4\textwidth]{./figures/res/quan/sim_ntrain_sims_vs_cell.png}\\ \end{tabular} \\ \end{tabular} \caption{Predicted cell position error with different numbers of training examples.} \vspace{-1cm} \label{fig:quan_ntrain} \end{wrapfigure} \fi \begin{figure}[h] \centering \begin{tabular}{cc} \includegraphics[height=5.5cm]{./figures/res/qual.png} & \includegraphics[width=0.5\textwidth]{./figures/res/quan/sim_ntrain_sims_vs_cell.png} \\ (a) & (b) \end{tabular} \vspace{-0.3cm} \caption{(a) Learned friction$\times$mass distributions. Red color means higher mass$\times$friction value while blue color means lower mass$\times$friction value. 
(b) Predicted cell position error with different numbers of training examples and simulations (gradient-descent steps).} \vspace{-0.8cm} \label{fig:quan_ntrain} \end{figure} \vspace{-0.2cm} \subsection{Results} Figure \ref{fig:quan_ntrain} (a) shows the identified mass-friction distributions, given as the product of the mass and the friction coefficient of each cell. The mass-friction distributions of the first three objects are estimated in the real robot setup, and the last two are estimated in the simulation setup. The toolbox contains heavy tools on the left side (wrenches, bolts and nuts), while relatively light tools like plastic screwdrivers and clamps are placed on the right side. The proposed method was able to predict the unbalanced mass distribution of the box while it was covered. Likewise, the heavier iron head of the hammer and its light wooden handle are successfully estimated, as well as the thicker and thinner sides of the book. The proposed method successfully estimated the heavier part of the crimp, and the floating part of the wrench was captured by much lower friction values in the middle. \iffalse Figure \ref{fig:qual_move} shows how the estimated mass distribution matches the objects' true behaviors in reaction to various actions. For example, although we push the objects while contacting a certain side, the objects are supposed to move straight if the push is aligned with the center of mass. Using the proposed method, we can predict the movement of the objects under other pushing actions. 
\begin{figure}[h] \centering \begin{tabular}{c} \includegraphics[width=0.18\textwidth,trim={3cm 3cm 3cm 4cm},clip]{./figures/res/qual/move_box_1.png} \includegraphics[width=0.18\textwidth,trim={2cm 1cm 2cm 4cm},clip]{./figures/res/qual/move_hammer_0.png} \includegraphics[width=0.18\textwidth,trim={4cm 4.5cm 3cm 3.5cm},clip]{./figures/res/qual/move_book_1.png} \includegraphics[width=0.18\textwidth,trim={2cm 3cm 2cm 3cm},clip]{./figures/res/qual/move_crimp_0.png} \includegraphics[width=0.18\textwidth,trim={2cm 3cm 2cm 4cm},clip]{./figures/res/qual/move_ranch_1.png} \\ \includegraphics[width=0.18\textwidth,trim={3cm 3cm 3cm 4cm},clip]{./figures/res/qual/move_box_2.png} \includegraphics[width=0.18\textwidth,trim={2cm 1cm 2cm 4cm},clip]{./figures/res/qual/move_hammer_1.png} \includegraphics[width=0.18\textwidth,trim={4cm 4cm 3cm 3.8cm},clip]{./figures/res/qual/move_book_2.png} \includegraphics[width=0.18\textwidth,trim={2cm 3cm 2cm 3cm},clip]{./figures/res/qual/move_crimp_1.png} \includegraphics[width=0.18\textwidth,trim={2cm 3cm 2cm 4cm},clip]{./figures/res/qual/move_ranch_2.png} \\ \includegraphics[width=0.18\textwidth,trim={3cm 3cm 3cm 4cm},clip]{./figures/res/qual/move_box_3.png} \includegraphics[width=0.18\textwidth,trim={2cm 1cm 2cm 4cm},clip]{./figures/res/qual/move_hammer_2.png} \includegraphics[width=0.18\textwidth,trim={4cm 2.8cm 3cm 4.8cm},clip]{./figures/res/qual/move_book_3.png} \includegraphics[width=0.18\textwidth,trim={2cm 3cm 2cm 3cm},clip]{./figures/res/qual/move_crimp_2.png} \includegraphics[width=0.18\textwidth,trim={2cm 3cm 2cm 4cm},clip]{./figures/res/qual/move_ranch_3.png} \\ \includegraphics[width=0.18\textwidth,trim={3cm 3cm 3cm 4cm},clip]{./figures/res/qual/move_box_4.png} \includegraphics[width=0.18\textwidth,trim={2cm 1cm 2cm 4cm},clip]{./figures/res/qual/move_hammer_3.png} \includegraphics[width=0.18\textwidth,trim={4cm 3cm 3cm 4.8cm},clip]{./figures/res/qual/move_book_4.png} \includegraphics[width=0.18\textwidth,trim={2cm 3cm 2cm 
3cm},clip]{./figures/res/qual/move_crimp_3.png} \includegraphics[width=0.18\textwidth,trim={2cm 3cm 2cm 4cm},clip]{./figures/res/qual/move_ranch_4.png} \end{tabular} \caption{Qualitative results of simulated actions. The yellow cubes indicate the movement of the robot finger, the gray is the initial pose, the green is the true object pose, and the heat-map is the estimated mass distribution with the simulated object pose.} \label{fig:qual_move} \end{figure} \fi Figure \ref{fig:quan_aver} shows the difference between the predicted cell positions and the ground truth as a function of the number of simulations used in the parameter estimation. The results first demonstrate that global optimization methods (random and weighted sampling) suffer from the curse of dimensionality due to the combinatorial explosion in the number of possible parameters for all cells. The results also demonstrate that the proposed method can estimate the parameters within a small number of simulations and a short computation time. The proposed algorithm was able to estimate the parameters with under $1.5$cm average cell position error within $30$ seconds. Surprisingly, the differentiable physics engine with \textit{Autograd} requires a significantly longer computation time as the number of cells increases, which is critical in practice. The finite differences approach also failed to converge to an accurate model, due to the high computational cost of the gradient computation as well as the sensitivity of the computed gradients to the choice of the grid size. Finally, Figure \ref{fig:quan_ntrain} (b) shows how the number of training actions improves the accuracy of the learned model. Increasing the number of training actions allows the robot to uncover the properties of different parts of the object more accurately. Using a larger number of training actions slows down the convergence of the gradient-descent algorithm, but improves the accuracy of the learned model. 
\vspace{-0.7cm} \section{Introduction} \input{introduction.tex} \section{Related Work} \input{related_work.tex} \section{Problem Setup and Notation} \input{problem_setup.tex} \input{differentiable_physics.tex} \vspace{-1cm} \input{proposed_algorithm.tex} \section{Evaluation} \input{experiments.tex} \vspace{-0.25cm} \section{Conclusion} \vspace{-0.1cm} To identify the friction and mass distributions of unknown objects pushed by a robot, we proposed a new method that consists of dividing an object into a large number of connected cells, with each cell having different mechanical properties. We adopted a differentiable physics engine that was recently proposed to simulate contact interactions between 2-dimensional objects, and we extended it to deal with frictional forces occurring on table-top 3D objects. In addition to the automatic differentiation of the engine, based on {\it Autograd}, we presented a simple gradient-descent algorithm that exploits weak assumptions about the object and the collision to simplify the form of the gradient of the reality-gap loss function with respect to the object's parameters. The proposed algorithm was tested in simulation and with real objects, and shown to be efficient in identifying models of objects with non-uniform mass-friction distributions. \noindent\textbf{Acknowledgments}{~This work was supported by NSF awards 1734492, 1723869 and 1846043.} \bibliographystyle{abbrv} \section{Friction Model} \label{friction} We describe the Jacobian matrix $\mathcal{J}_f$ related to the Coulomb frictional forces between the object's cells and the support surface, and the corresponding constraints. This model is based on the one derived in~\cite{cline}. 
\vspace{-0.15cm} \begin{eqnarray*} \small \mathcal{D} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & -1 \end{bmatrix} , \mathcal{J}_f = \begin{bmatrix} \mathcal{D} & 0 & \dots & 0\\ 0 & \mathcal{D} & \dots & 0\\ \vdots & \vdots & \dots & 0 \\ 0 & 0 & \dots & \mathcal{D} \end{bmatrix} , \mu_f = \begin{bmatrix} \mu_{f_1} & 0 & \dots & 0\\ 0 & \mu_{f_2} & \dots & 0\\ \vdots & \vdots & \dots & \vdots \\ 0 & 0 & \dots & \mu_{f_k} \end{bmatrix} , E = \begin{bmatrix} \zeta & 0 & \dots & 0\\ 0 & \zeta & \dots & 0\\ \vdots & \vdots & \dots & \vdots \\ 0 & 0 & \dots & \zeta \end{bmatrix} \end{eqnarray*} \vspace{-0.15cm} $\zeta = [1,1,1,1]^T$. $\mathcal{J}_f$ is a $(4k)\times(3k)$ matrix and $E$ is a $(4k)\times k$ matrix, where $k$ is the number of cells in the object. $\mu_{f_i}$ is the unknown friction coefficient of cell $i$ with the support surface. In the original model of~\cite{cline}, matrix $\mathcal D$ was defined as $ \mathcal{D} = \begin{bmatrix} (p_{contact} \times d) & d \\ (p_{contact} \times -d) & -d \end{bmatrix}, $ where $d$ is a vector tangential to a contact, and $-d$ is a vector pointing in the opposite direction. $p_{contact}$ is the contact point on the object. In our model, we consider only pushing actions that slide objects forward without rolling them over. We thus eliminate the first column of $\mathcal{D}$, which is multiplied by the angular velocities, and replace it with a zero column. The friction terms have complementarity constraints that are related to the contact points. 
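As an illustration, the block-diagonal matrices above can be assembled with Kronecker products. This is a NumPy sketch under our reading of the stated dimensions ($\mathcal{J}_f$ of size $4k\times 3k$, $E$ of size $4k\times k$); the function name is ours.

```python
import numpy as np

def friction_matrices(mu, k):
    """Assemble the friction block matrices for k cells.
    mu is a length-k array of per-cell friction coefficients."""
    # Tangential direction matrix with the angular column zeroed out,
    # since pushes slide the object without rolling it over.
    D = np.array([[0.0,  1.0,  0.0],
                  [0.0, -1.0,  0.0],
                  [0.0,  0.0,  1.0],
                  [0.0,  0.0, -1.0]])
    J_f  = np.kron(np.eye(k), D)                 # block-diagonal, 4k x 3k
    E    = np.kron(np.eye(k), np.ones((4, 1)))   # zeta blocks, 4k x k
    mu_f = np.diag(mu)                           # diagonal, k x k
    return J_f, E, mu_f

J_f, E, mu_f = friction_matrices(np.array([0.5, 0.2]), k=2)
```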
We will omit the derivations here (see~\cite{cline}), but interactions between the objects and the support surface give rise to the following constraints: $\mu_f \lambda_c -E^T \lambda_f \geq 0, \mathcal{J}_f v_{t+dt} + E \eta \geq 0, \rho^T \lambda_f = 0, \xi^T \eta = 0.$ \section{Proposed Algorithm} To obtain the parameters $\theta^* = [\mathcal{M}^*,\mu_{f}^*]$, a gradient descent on the loss function in Equation~\ref{simulationError} is performed, wherein the gradients are computed analytically. A first approach to compute the gradient is to use the {\it Autograd} library for automatic differentiation in Python. We propose here a second, simpler and faster approach that exploits three mild assumptions: 1) the manipulated object is rigid, 2) the contact point between the robot's end-effector and the object remains constant during the pushing action, and 3) the collision between the robot's end-effector and the object is perfectly inelastic, with a zero restitution coefficient. That is, the end-effector remains in contact with the object during the pushing action. The relatively low velocity of the end-effector (around 0.2 m/s) ensures the inelastic nature of the collision. The equation of motion can be written as: \vspace{-0.1cm} \begin{eqnarray} \mathcal{M} v_{t+dt} -\mathcal{J}_e(x_t) \lambda_{e} -\mathcal{J}_c(x_t) \lambda_{c} - \mu_{f} \lambda_c= \mathcal{M}v_{t}+dt \mu_t. \end{eqnarray} Thus, $V(\hat{x}_{t},\hat{v}_{t}, u_t, \theta) = v_{t+dt} = \mathcal{M}^{-1} \big( \mathcal{J}_e(x_t) \lambda_{e} + \mathcal{J}_c(x_t) \lambda_{c} +\mu_{f} \lambda_c + \mathcal{M}v_{t}+dt \mu_t \big). $ The inverse mass matrix $ \mathcal{M}^{-1}$ exists because $\mathcal{M}$ is a diagonal full-rank matrix. The gradient of the predicted velocity with respect to $\mu_f$ is given as: $ \nabla_{\mu_f} V(\hat{x}_{t},\hat{v}_{t}, u_t, \theta) = \mathcal{M}^{-1} \lambda_c, $ because $\nabla_{\mu_f} \mathcal{J}_e(x_t) \lambda_{e} = \nabla_{\mu_f} \mathcal{J}_c(x_t) \lambda_{c} = 0$ from assumptions 1-3. 
The gradient of the loss in Equation~\ref{simulationError} is given by $\nabla_{\mu_f} loss(\theta) = \sum_{t=0}^{T-2} \nabla_{\mu_f} \|x_{t+2} - \big(\hat{x}_{t+1} + V(\hat{x}_{t},\hat{v}_{t}, u_t, \theta)dt \big) \|_2 = \sum_{t=0}^{T-2} \Big( x_{t+2} - \big(\hat{x}_{t+1} + V(\hat{x}_{t},\hat{v}_{t}, u_t, \theta)dt \big)\Big) \mathcal{M}^{-1} \lambda_c dt = C_f \sum_{t=0}^{T-2} \Big( x_{t+2} - \big(\hat{x}_{t+1} + \underbrace{V(\hat{x}_{t},\hat{v}_{t}, u_t, \theta)dt}_{\textrm{forward simulation}}\big)\Big),$ where $C_f$ is a constant diagonal matrix. The real value of $C_f$ is of little importance, as it will be absorbed by the learning rate of the gradient descent algorithm. A similar derivation for $\mathcal M$ gives $\nabla_{\mathcal M} loss(\theta) = C_m \sum_{t=0}^{T-2} \Big( x_{t+2} - \big(\hat{x}_{t+1} +V(\hat{x}_{t},\hat{v}_{t}, u_t, \theta)dt\big)\Big)$, \RestyleAlgo{boxruled} with $C_m = \alpha C_f^{-\frac{1}{2}}$, where $\alpha$ is a constant factor. Thus, we use update rates $\alpha_{\textrm{rate}}$ and $\sqrt{\alpha_{\textrm{rate}}}$ for the frictional force magnitudes and the mass matrix respectively, as shown in Algorithm~\ref{algo}. \begin{algorithm}[h] \KwIn{Real-world trajectory data $\tau^* = \{(x_{t},\mu_{t}, x_{t+1})\}$ for $t=0, \dots,T-1$, wherein $x_t$ is a vector in $[SE(2)]^k$ corresponding to translation and rotation in the plane for each of the $k$ cells of a pushed unknown object, and $\mu_{t}$ is a force described by the contact point between the end-effector and one of the object's cells in addition to the pushing direction. 
Predefined learning rate $\alpha_{\textrm{rate}}$, and loss threshold $\epsilon$.} \KwOut{Parameters vector $\theta = [\mathcal{M}, \mu_f]$} Initialize $\mu_{f}$ and $\mathcal{M}$ randomly; \Repeat{$loss \leq \epsilon$}{ $loss \leftarrow 0$; \For{$t\in\{0,\dots,T-2\}$}{ Simulate $\{(\hat{x}_{t+1},\mu_{t+1})\}$ by solving the LCP in Equation~\ref{lcp} with parameters $\theta$ (or by using an off-the-shelf physics engine) and get the predicted next state $\hat{x}_{t+2} = \hat{x}_{t+1} + V(\hat{x}_{t},\hat{v}_{t}, u_t, \theta)$; $loss \leftarrow loss + \|\hat{x}_{t+2} - x_{t+2}\|_2$; $\mu_{f} \leftarrow \mu_{f} + \alpha_{\textrm{rate}} (\hat{x}_{t+2} - x_{t+2});$ $\mathcal{M} \leftarrow \mathcal{M} + \sqrt{\alpha_{\textrm{rate}}} (\hat{x}_{t+2} - x_{t+2});$ } } \caption{Learning Mechanical Models with Differentiable Physics Simulations} \label{algo} \end{algorithm} \vspace{-0.5cm}
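The update loop of Algorithm~\ref{algo} can be sketched in a few lines. In this illustration the function and variable names are ours, and a toy one-step "simulator" stands in for the LCP physics engine so that the sketch is self-contained; with the toy dynamics the reality-gap update recovers the true per-cell friction magnitudes.

```python
import numpy as np

def identify_parameters(trajectory, simulate, mu_f, mass,
                        lr=0.1, eps=1e-6, max_iters=1000):
    """Sketch of Algorithm 1: nudge per-cell friction and mass by the
    reality gap between simulated and observed next states."""
    for _ in range(max_iters):
        loss = 0.0
        for x_t, u_t, x_next in trajectory:
            x_pred = simulate(x_t, u_t, mu_f, mass)  # forward simulation
            gap = x_pred - x_next                    # reality gap
            loss += np.linalg.norm(gap)
            # Constants C_f and C_m are absorbed into the learning rates.
            mu_f = mu_f + lr * gap
            mass = mass + np.sqrt(lr) * gap
        if loss <= eps:
            break
    return mu_f, mass

# Toy one-step dynamics standing in for the LCP engine:
# each cell moves by the push minus its friction magnitude.
toy_sim = lambda x, u, mu, m: x + u - mu
true_mu = np.array([0.3, 0.6])
data = [(np.zeros(2), np.ones(2), np.ones(2) - true_mu)]
mu_est, _ = identify_parameters(data, toy_sim, np.zeros(2), np.ones(2))
```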